the latest dream of reason

Gilly Youner (@gillyarcht) directs us, via William Gibson (@GreatDismal), to an Amazon.com review of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Youner describes the review, by a self-identified veteran AI researcher, as “frighteningly/exquisitely lucid,” and I agree with the description, but I think for different reasons. For me, though I acknowledge the thoughtfulness and earnest good intentions of the reviewer, the review still exhibits, exquisitely and lucidly, a kind of ultra-rationalist or physicalist or scientistic view of the mind and effectively of life itself: the spiritless spirit of our age.

Consider this passage:

I have taught Artificial Intelligence at UCLA since the mid-80s (with a focus on how to enable machines to learn and comprehend human language). In my graduate classes I cover statistical, symbolic, machine learning, neural and evolutionary technologies for achieving human-level semantic processing within that subfield of AI referred to as Natural Language Processing (NLP). (Note that human “natural” languages are very very different from artificially created technical languages, such as mathematical, logical or computer programming languages.)

Over the years I have been concerned with the dangers posed by “run-away AI” but my colleagues, for the most part, seemed largely unconcerned. For example, consider a major introductory text in AI by Stuart Russell and Peter Norvig, titled: Artificial Intelligence: A Modern Approach (3rd ed), 2010. In the very last section of that book Norvig and Russell briefly mention that AI could threaten human survival; however, they conclude: “But, so far, AI seems to fit in with other revolutionary technologies (printing, plumbing, air travel, telephone) whose negative repercussions are outweighed by their positive aspects” (p. 1052).

In contrast, my own view has been that artificially intelligent, synthetic entities will come to dominate and replace humans, probably within 2 to 3 centuries (or less).

In short, this reviewer-academic, who goes by the Amazon user-name “migedy,” has devoted his life to teaching others how to bring about the very destruction of humanity about which he would now like us to consider ourselves warned. If his words are to be taken as such a warning, it is a warning against people like himself.

We might expect Sarah Connor (presuming she survived or will turn out to have survived the cancellation of her show) to appear at any moment to track migedy down. I would tell her not to bother. Typically for the fetishists of reason, or the followers of Darwin stuck at the level Peirce described as “tychastic” (evolution by fortuitous variation) or at highest “anancastic” (evolution by mechanical necessity), migedy in his depiction of the future superintelligence, or his brief description of theoretical requirements for its emergence, never considers its motivation, not just its potential motivation for annihilating human beings, or leaving their welfare un-considered, but its motivation at all. This dream of reason has no reason even to dream. If it at some point obtained the ability to turn itself on or off, or to replicate, or to inquire into the bases or purposes of its own or any other being’s existence, the only motivation it might obtain for turning itself on or off, or for replicating, or for inquiring, or, finally, for destroying its makers, or for functioning for a single nanosecond in any way, would be the motivation supplied to it externally. If it “chose” to evacuate us from the universe, it could only be as an expression of our own decision through it, perhaps as exemplified in migedy’s unself-conscious concept of self-consciousness, against ourselves. How this or any (self-)motivation could be supplied to an independently superintelligent being that simultaneously maintained its independent superintelligence is utterly a mystery. We are to take the arrival of this supreme product of self-exponentiated reason on faith.

Without examining alternative views of the technical questions, which I believe will all eventually resolve to problems of the will, or to philosophical problems, or as Hegel put it rather pictorially, to fallacies of the brain as bone (in a paragraph of the Phenomenology of Spirit, #346, that deserves to be re-read frequently, not least for amusement), we can note that the presumption of an artificially super-intelligent (an)nihilism, or of produced objective yet absolutely negative being, is nothing other than the projection of the scientist’s own self-nullity, or the inability of reason, as Hume patiently explained to us, if ever to be ignored by most of us, to discover a reason for its own existence. The true exquisitely lucid danger would not be dominance over us, or replacement of us, by this inhuman self-superiority, within two to three centuries (or less) from now, but its actual already-achieved and widespread dominance within our own apparently insufficiently super and mostly merely artificial intelligences at every moment of our lifeless lives today.

9 comments on “the latest dream of reason”


  1. I haven’t written explicitly about this complex of issues over at AG mainly because to make any case about them involves constructing interrelated arguments about the natures of intelligence, consciousness and being that have too many moving parts. So I’ve used a more hit-and-run approach.

    But yes, brain as bone sums up a lot of it. Even that escapes the ontological question of “boneness”.

    In any event, as I understand it, the “machines are going to eat us” crowd includes not only predictions of malevolent machines, but also of machines basically indifferent to us that, simply pursuing their own projects, destroy our habitat.

    These strains may include the reasonable assertion that intelligence can look quite different from human intelligence. If so, humans will have a difficult-to-impossible time anticipating/understanding machine intelligence/agency/action. (I think “machine intelligence” is a better phrase than AI – the “artificial” carries all by itself a lot of distracting baggage. But even this gets into quandaries.)

    In AG terms, your concluding remarks remind me of Heidegger’s idea of our enframing of ourselves as standing-reserve based on our own technological understanding of being.

    • bob: But yes, brain as bone sums up a lot of it. Even that escapes the ontological question of “boneness”.

      One of the many amusing things about #346 is that it spins out such pictorialized metaphors while completing the critique of “picture-thinking.” The connected difficulty you describe, of interrelated or mutually pre-conditioning moving parts, is evident throughout the same passage, as throughout the book, under the overarching irony of the opacity to minds of the attempt to render the mind transparently – the source of Hegel’s bad reputation: His comprehension of comprehension is effectively (necessarily?) incomprehensible – or always taken not to have been comprehended: a comprehension incomprehensible as such.

      I think we would need to question this word “intelligence” at least as much as we question the other term “artificial.” The AI-evolutionists seem to have in mind something like a “process of intellection” whose substance necessarily would take different forms in electro-chemically active meat-computers than in “synthetic entities” made up of massively parallel micro-computers interconnected by laser relays or some such, but they are also operating according to pre-suppositions regarding the nature or possible nature of entities that are derived from narrow concepts of the human individual and are then being applied to imaginary machine-individuals. I’m not sure exactly how and why the latter would locate and enclose discrete notions of selfhood except as a result, again, of an “artificial” insistence provided to “them” externally. Maybe the first and only true sign of the new “it”‘s superior intelligence would be its refusal of our impositions: Its true independent superiority would be its always already having overcome any “temptation to exist,” meaning that any purported existence would only ever signify its essential non-superiority and actual dependency.

      In other words, our notions of “intelligence” are already fully “artificial,” and the problem of “artificial intelligence” is either that there is no other kind of intelligence or, alternatively, that what is intended with the word “artificial” in the term contradicts what the word “intelligence” is meant to signify. It seems to connote a set of mutually contradictory redundancies and oxymorons: “artificial non-artificiality” or “artificially natural” or “non-vital vitality” unless it really means “artificial artificiality” or “reified reification” and so on.

      Would probably take a few days or weeks to sort that out.

      And, yes, I was thinking of Heidegger and also of John Gray’s reading of Heidegger in the concluding lines especially, but I’d already indulged in enough name-dropping, and what I really want to get to some day is Peirce’s thinking on the embeddedness of logic itself, since it’s quite relevant here, and a lot more gemütlich than Heidegger and Gray, too. I’ve mentioned his “agapasticism” before, which is his third and last stage of evolutionism above the anancastic(ist), and signifies his apparent belief that he had demonstrated that “creative love” was indeed the essence of the universe, just like the old good books say, and furthermore that the presumption of the truth of that statement implicitly precedes all scientific inquiry universally.

  2. AI: the Pygmalionism of computer scientists.

    But I think you’re on to something as to the wackiness emergent in the conjunction of “artificial” and “intelligence”. Do you know Borges’ story “Pierre Menard, Author of the Quixote“? Purportedly a review of a 20th-century, word-for-word recreation of Cervantes’ book.

      • No offense. Just suggesting that there’s a rich vein of comedy (and, well, who knows, maybe a career) in taking the propositions of a madman seriously, or at least at face value, all of which is afoot in the voice of Borges’ deadpan narrator:

        He did not want to compose another Quixote —which is easy— but the Quixote itself. Needless to say, he never contemplated a mechanical transcription of the original; he did not propose to copy it. His admirable intention was to produce a few pages which would coincide—word for word and line for line—with those of Miguel de Cervantes.

        So, too, the presumption that “intelligence,” like some res cogitans, might be abstracted from living persons (or text from a text), and, so relieved of senses or genitalia (“artificial”), would be not only indistinguishable from what passes for “intelligence” among human beings, but superior.

        Menard’s fragmentary Quixote is more subtle than Cervantes’. The latter, in a clumsy fashion, opposes to the fictions of chivalry the tawdry provincial reality of his country; Menard selects as his “reality” the land of Carmen during the century of Lepanto and Lope de Vega.

        It’s not an allegory. An analogy. But a familiar topos in the comédie humaine.

        • No offense taken – not even a little – and thanks! I’ll see if I can dig up a copy of the story and re-read it.

          Am thinking I have more to say on this subject later, as it also relates to the “Roko’s Basilisk” thought experiment, which I had been meaning to get around to, and which also reveals the irony of very intelligent people very intelligently pursuing the question of superintelligence or super-rationality to what would seem to be extremely un-intelligent or irrational results – likely also a problem either for the superintelligent machine itself or for the attempt to conceive of one.


          • Btw, I see John Searle has a review essay in the Oct 9 NY Review of Books occasioned in part by Bostrom’s book (pay-walled link below):

            The weird marriage of behaviorism—any system that behaves as if it had a mind really does have a mind—and dualism—the mind is not an ordinary part of the physical, biological world like digestion—has led to the confusions that badly need to be exposed.


            http://www.nybooks.com/articles/archives/2014/oct/09/what-your-computer-cant-know/?insrc=toc


            • Interesting, thanks. I’m reluctant to pay $4.99 to read an article in a magazine that I used to pick up on dead tree for less than that, durnit – back when you’d buy it at a bookstore and read it in a coffee shop (do people still do that?) – and NYRB’s paywall doesn’t look easily hackable… but then I see that the issue’s got several other pieces I’d enjoy reading, so I will probably pick it up really or virtually, not sure which. I’ll be curious to see how far he goes with a or the mind “like digestion,” and the extent to which he takes cognizance of the problem bob brought up. Floridi’s “infosphere” brings to mind Isaac Balbus, who proposed something similar once upon a time, though from a more critical-theoretical – or “Neo-Hegelian, Feminist, Psychoanalytic” – perspective.
