{"id":5329,"date":"2022-06-22T11:53:07","date_gmt":"2022-06-22T10:53:07","guid":{"rendered":"https:\/\/inframethodology.cbs.dk\/?p=5329"},"modified":"2023-07-27T18:26:52","modified_gmt":"2023-07-27T17:26:52","slug":"sentience-on-stilts","status":"publish","type":"post","link":"https:\/\/inframethodology.cbs.dk\/?p=5329","title":{"rendered":"Sentience on Stilts"},"content":{"rendered":"\n<p>On Substack, Gary Marcus recently called the claim that LaMDA, or any other language model (like GPT-3), is sentient <a href=\"https:\/\/garymarcus.substack.com\/p\/nonsense-on-stilts?utm_source=twitter&amp;sd=pf\">&#8220;nonsense on stilts.&#8221;<\/a> Mark Coeckelbergh agreed, but with a twist. It is nonsense, he argued, not because of what we <em>know<\/em> about <em>artificial intelligence<\/em>, but because of what we <em>don&#8217;t know<\/em> about <em>sentience<\/em>. &#8220;The inconvenient truth,&#8221; <a href=\"https:\/\/coeckelbergh.medium.com\/the-heart-is-not-enough-how-the-controversy-about-a-chat-bot-reveals-the-shaky-foundations-of-576bf6c8e155\">he tells us at Medium,<\/a> &#8220;is that we do not really know [whether LaMDA is sentient]. 
We do not really know because we do not know what sentience or consciousness is.&#8221; <a href=\"https:\/\/twitter.com\/MCoeckelbergh\/status\/1537739409189179392\">As he put it on Twitter in response to me,<\/a> &#8220;we know how the language model works but we still don\u2019t have a satisfactory definition of consciousness.&#8221; This strikes me as a rather strange philosophy.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-thumbnail\"><a href=\"https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic8ball.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" src=\"https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic8ball-150x150.jpg\" alt=\"\" class=\"wp-image-5331\" srcset=\"https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic8ball-150x150.jpg 150w, https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic8ball-300x300.jpg 300w, https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic8ball.jpg 475w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><figcaption class=\"wp-element-caption\">Image Credit: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Magic_8_Ball#\/media\/File:Magic8ball.jpg\">Wikipedia.<\/a><\/figcaption><\/figure>\n<\/div>\n\n\n<p>Consider the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Magic_8_Ball\">Magic 8 Ball<\/a>. Ask it a yes\/no question and it will randomly give you one of twenty answers: 10 affirmative, 5 negative, 5 undecided. These answers are presented using familiar phrases like, &#8220;Without a doubt,&#8221; &#8220;Don&#8217;t count on it,&#8221; or &#8220;Cannot predict now.&#8221; Suppose someone asked us whether this device is sentient. Would we say, &#8220;The inconvenient truth is that we don&#8217;t know. We still don&#8217;t have a satisfactory definition of sentience&#8221;? 
(Presumably, we could run the same argument for the Magic 8 Ball&#8217;s alleged &#8220;clairvoyance&#8221;, which is surely not better defined than &#8220;sentience&#8221;.) Obviously not. Knowing how the device works is a sufficient basis for rejecting the claim that the device has an inner life to speak of, regardless of the fact that its output consists of recognizable linguistic tokens.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-thumbnail\"><a href=\"https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic_8_Ball_-_Instrument_Of_Evil__2426454804.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" src=\"https:\/\/inframethodology.cbs.dk\/wp-content\/uploads\/2022\/06\/Magic_8_Ball_-_Instrument_Of_Evil__2426454804-150x150.jpg\" alt=\"\" class=\"wp-image-5332\"\/><\/a><figcaption class=\"wp-element-caption\"><strong>Are you sentient?<\/strong> <br>(Image credit: <a href=\"https:\/\/en.wikipedia.org\/wiki\/Magic_8_Ball#\/media\/File:Magic_8_Ball_-_Instrument_Of_Evil?_(2426454804).jpg\">Wikipedia<\/a>)<\/figcaption><\/figure>\n<\/div>\n\n\n<p>In his contribution to the debate in the <em>Atlantic<\/em>, Stephen Marche points out that the trouble begins with the language we use to describe our devices. To explain how the Magic 8 Ball &#8220;works&#8221;, I said that we &#8220;ask it&#8221; a question and that &#8220;it gives&#8221; us an answer. Likewise, Marche notes, the developers of language models tell us that they exhibit &#8220;impressive natural language understanding.&#8221; He warns against this kind of talk, citing a Google exec.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cI find our language is not good at expressing these things,\u201d Zoubin Ghahramani, the vice president of research at Google, told me. 
\u201cWe have words for mapping meaning between sentences and objects, and the words that we use are words like&nbsp;<em>understanding<\/em>. The problem is that, in a narrow sense, you could say these systems understand just like a calculator understands addition, and in a deeper sense they don\u2019t understand. We have to take these words with a grain of salt.\u201d<\/p>\n<cite>STEPHEN MARCHE, <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2022\/06\/google-palm-ai-artificial-consciousness\/661329\/\">&#8220;Google\u2019s AI Is Something Even Stranger Than Conscious,&#8221;<\/a> The ATLANTIC, June 19, 2022<\/cite><\/blockquote>\n\n\n\n<p>If you read that just a little too quickly you might miss another example of the way language misleads us about technology. &#8220;You could say that these systems understand just like a calculator understands addition,&#8221; Ghahramani says. But calculators don&#8217;t understand addition at all! Consider a series of examples I offered on Twitter:<\/p>\n\n\n\n<p>Would we say that an abacus &#8220;understands&#8221; addition? What about a paper bag? You put two apples in it. Then you put another two apples in it. Then you have a look and there are four apples in the bag. The paper bag knows how to add? I don&#8217;t think so. If you want something that uses symbols, consider a spring scale. You calibrate it with standard weights such that 1 unit on the scale is one unit of weight. You have increasing weights labeled 1, 2, 3, 4 etc. On the tray there&#8217;s even a plus sign; you put two weights on it labeled &#8220;2&#8221; and the dial says &#8220;4&#8221;. Can the scale add? Of course not. A computer, likewise, is just a physical system that turns meaningless inputs into meaningless outputs. <em>We<\/em> understand the inputs and outputs. <em>We<\/em> imbue the output with meaning as the answer to a question.<\/p>\n\n\n\n<p>Justin E.H. 
Smith wrote a thoughtful (as ever) <a href=\"https:\/\/justinehsmith.substack.com\/p\/no-minds-without-other-minds\">piece about the incident on Substack<\/a>. &#8220;Much of this speculation,&#8221; he suggests, &#8220;could be mercifully suspended if those involved in it just thought a little bit harder about what our own consciousness is actually like, and in particular how much it is conditioned by our embodiment and our emotion.&#8221; Note that this is basically the opposite of Coeckelbergh&#8217;s suggestion. Smith is telling us to remember what we <em>know<\/em> about sentience and consciousness <em>from our own experience<\/em> rather than get lost in the philosophy of consciousness and its lack of a &#8220;satisfactory definition&#8221; of its object. We know LaMDA is not conscious because we know it&#8217;s not sentient, and we know it&#8217;s not sentient because we know what sentience is and that it requires a body. And we know LaMDA doesn&#8217;t have one.<\/p>\n\n\n\n<p>I note that <a href=\"https:\/\/en.wikipedia.org\/wiki\/Her_(film)\">Spike Jonze&#8217;s <em>Her<\/em><\/a> is now streaming on Netflix. When I first saw it, <a href=\"https:\/\/pangrammaticon.blogspot.com\/2014\/05\/artificial-imagination.html\">it occurred to me<\/a> that it was actually just a story about love and loss told from inside a very clever satire of the absurdity of artificial intelligence. Descartes once said that he could imagine that he had no body. <a href=\"https:\/\/pangrammaticon.blogspot.com\/2010\/09\/descartes-feint.html\">I&#8217;ve never believed him; I think he was pretending.<\/a> His &#8220;I&#8221; was literally no one ever &#8230; on philosophical stilts.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On Substack, Gary Marcus recently called the claim that LaMDA, or any other language model (like GPT-3), is sentient &#8220;nonsense on stilts.&#8221; Mark Coeckelbergh agreed, but with a twist. 
It is nonsense, he argued, not because of what we know about artificial intelligence, but because of what we don&#8217;t know about sentience. &#8220;The inconvenient truth,&#8221; &hellip; <a href=\"https:\/\/inframethodology.cbs.dk\/?p=5329\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Sentience on Stilts<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-5329","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=\/wp\/v2\/posts\/5329","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5329"}],"version-history":[{"count":6,"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=\/wp\/v2\/posts\/5329\/revisions"}],"predecessor-version":[{"id":6444,"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=\/wp\/v2\/posts\/5329\/revisions\/6444"}],"wp:attachment":[{"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5329"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5329"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/inframethodology.cbs.dk\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5329"}],"curies":[{"name":"wp","href":"https
:\/\/api.w.org\/{rel}","templated":true}]}}