Gilly Youner (@gillyarcht) directs us, via William Gibson (@GreatDismal), to an Amazon.com review of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. Youner describes the review, by a self-identified veteran AI researcher, as “frighteningly/exquisitely lucid,” and I agree with the description, but, I think, for different reasons. For me, though I acknowledge the thoughtfulness and earnest good intentions of the reviewer, the review still exhibits, exquisitely and lucidly, a kind of ultra-rationalist or physicalist or scientistic view of the mind and effectively of life itself: the spiritless spirit of our age.
Consider this passage:
I have taught Artificial Intelligence at UCLA since the mid-80s (with a focus on how to enable machines to learn and comprehend human language). In my graduate classes I cover statistical, symbolic, machine learning, neural and evolutionary technologies for achieving human-level semantic processing within that subfield of AI referred to as Natural Language Processing (NLP). (Note that human “natural” languages are very very different from artificially created technical languages, such as mathematical, logical or computer programming languages.)
Over the years I have been concerned with the dangers posed by “run-away AI” but my colleagues, for the most part, seemed largely unconcerned. For example, consider a major introductory text in AI by Stuart Russell and Peter Norvig, titled: Artificial Intelligence: A Modern Approach (3rd ed), 2010. In the very last section of that book Norvig and Russell briefly mention that AI could threaten human survival; however, they conclude: “But, so far, AI seems to fit in with other revolutionary technologies (printing, plumbing, air travel, telephone) whose negative repercussions are outweighed by their positive aspects” (p. 1052).
In contrast, my own view has been that artificially intelligent, synthetic entities will come to dominate and replace humans, probably within 2 to 3 centuries (or less).
In short, this reviewer-academic, who goes by the Amazon user-name “migedy,” has devoted his life to teaching others how to cause that destruction of humanity about which he would now like us to consider ourselves warned. If his words are to be taken as such a warning, it would be a warning against people like himself.
We might expect Sarah Connor (presuming she survived, or will turn out to have survived, the cancellation of her show) to appear at any moment to track migedy down. I would tell her not to bother. Typically for the fetishists of reason, or for the followers of Darwin stuck at the level Peirce described as “tychastic” (evolution by fortuitous variation) or, at highest, “anancastic” (evolution by mechanical necessity), migedy, in his depiction of the future superintelligence and his brief description of the theoretical requirements for its emergence, never considers its motivation: not just its potential motivation for annihilating human beings or for leaving their welfare unconsidered, but its motivation at all. This dream of reason has no reason even to dream. If it at some point obtained the ability to turn itself on or off, or to replicate, or to inquire into the bases or purposes of its own or any other being’s existence, the only motivation it might obtain for turning itself on or off, or for replicating, or for inquiring, or, finally, for destroying its makers, or for functioning for a single nanosecond in any way, would be the motivation supplied to it externally. If it “chose” to evacuate us from the universe, it could only be as an expression of our own decision, through it, against ourselves, perhaps as exemplified in migedy’s unself-conscious concept of self-consciousness. How this or any (self-)motivation could be supplied to an independently superintelligent being that simultaneously maintained its independent superintelligence remains an utter mystery. We are asked to take the arrival of this supreme product of self-exponentiated reason on faith.
Without examining alternative views of the technical questions, which I believe will all eventually resolve into problems of the will, or philosophical problems, or, as Hegel put it rather pictorially, fallacies of the brain as bone (in a paragraph of the Phenomenology of Spirit, #346, that deserves to be re-read frequently, not least for amusement), we can note that the presumption of an artificially super-intelligent (an)nihilism, or of a produced objective yet absolutely negative being, is nothing other than the projection of the scientist’s own self-nullity, or of the inability of reason, as Hume patiently explained to us, even if ignored by most of us, to discover a reason for its own existence. The true, exquisitely lucid danger would not be dominance over us, or replacement of us, by this inhuman self-superiority two to three centuries (or less) from now, but its actual, already-achieved and widespread dominance within our own apparently insufficiently super and mostly merely artificial intelligences at every moment of our lifeless lives today.
I haven’t written explicitly about this complex of issues over at AG, mainly because making any case about them involves constructing interrelated arguments about the natures of intelligence, consciousness, and being that have too many moving parts. So I’ve used a more hit-and-run approach.
But yes, brain as bone sums up a lot of it. Even that escapes the ontological question of “boneness.”
In any event, as I understand it, the “machines are going to eat us” crowd includes not only predictions of malevolent machines but also of machines basically indifferent to us, which, simply pursuing their own projects, destroy our habitat.
These strains may include the reasonable assertion that machine intelligence can look quite different from human intelligence. If so, humans will have a difficult-to-impossible time anticipating/understanding machine intelligence/agency/action. (I think “machine intelligence” is a better phrase than AI – the “artificial” all by itself carries a lot of distracting baggage. But even this gets into quandaries.)
In AG terms, your concluding remarks remind me of Heidegger’s idea of our enframing of ourselves as standing-reserve, based on our own technological understanding of being.