Warning: amateur, off-topic blogging coming. Offered in the spirit of pre-Christmas cheer.
If you haven’t watched or read Nick Bostrom on the ‘Superintelligence’, you are not a self-respecting cultural omnivore.
The ‘superintelligence’ is a hypothetical extreme risk to humanity posed by artificial intelligence [AI]. The scenario is that computer capabilities increase to the point where machines become as good as, or slightly better than, humans at general-purpose thinking, including applying themselves to the task of designing improvements to themselves.
At that point capabilities head rapidly towards an intelligence ‘explosion’, as each new modification designs another one. The superintelligent entity has capabilities far exceeding any individual human, or even the whole of humanity, and, unless it can be harnessed to our needs, may, either deliberately or inadvertently, annihilate us. This is a formalisation of a pretty familiar anxiety that has permeated science fiction for ages, through films like the Terminator franchise, Transcendence, Wall-E, or 2001: A Space Odyssey [“I’m sorry Dave, I’m afraid I can’t do that.”]
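The ‘explosion’ dynamic is just compound growth. As a toy illustration (my own sketch, with entirely made-up numbers, not anyone’s model of real AI), suppose each generation designs a successor 10% more capable than itself:

```python
# Toy model of an 'intelligence explosion': each generation designs a
# successor slightly better than itself. All numbers here are
# illustrative assumptions, not claims about real AI systems.
HUMAN_LEVEL = 1.0
IMPROVEMENT = 0.10  # assumed: each generation betters its designer by 10%

capability, generations = HUMAN_LEVEL, 0
while capability < 1000 * HUMAN_LEVEL:  # stop at a 1000x capability gap
    capability *= 1 + IMPROVEMENT       # the successor designs the next one
    generations += 1

print(generations)  # -> 73: compound growth closes a 1000x gap quickly
```

Even a modest per-generation gain compounds into a vast gap once the loop runs unattended; the worry is about the loop, not the step size.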
Benedict Evans’ newsletter included a link to a blog post by Francois Chollet on the ‘Impossibility of the Superintelligence’. I think it goes wrong for a few reasons.
“there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.”
The no free lunch theorem is a red herring. The Superintelligence worriers are concerned about the emergence of a capability that is designed to do as well as it needs to across the range of possible challenges facing it. Confronted with this objection from Chollet they would probably argue that the superintelligence would design itself to be in charge of multiple specialised units optimized for each individual problem it faces. Or that it would hone simple algorithms to work on multiple problems.
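The theorem itself is easy to see on a toy scale. The sketch below is my own illustration, not from either essay: it enumerates every Boolean ‘problem’ on a three-point domain and shows that two opposite search strategies pay exactly the same total query cost across all of them — which is what the theorem says, and also why it says nothing about performance on the problems that actually matter.

```python
from itertools import product

def queries_to_find_one(f, order):
    """Count queries until the search algorithm sees its first 1."""
    for n, x in enumerate(order, start=1):
        if f[x]:
            return n
    return len(order)  # unreachable here: functions below all contain a 1

# Every Boolean 'problem' on a 3-point domain that has a solution.
functions = [f for f in product([0, 1], repeat=3) if any(f)]

# Two opposite strategies: search left-to-right vs right-to-left.
forward  = sum(queries_to_find_one(f, (0, 1, 2)) for f in functions)
backward = sum(queries_to_find_one(f, (2, 1, 0)) for f in functions)

print(forward, backward)  # -> 11 11: identical cost averaged over ALL problems
```

Averaged over every possible problem the strategies tie; on any structured subset of problems, one can dominate — which is why the theorem doesn’t rule out a system that is very good at the problems it actually faces.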
The final sentence [‘the intelligence of a human is specialized in the problem of being a human’] should not be any comfort. There are bad humans, thwarted only by their own slowness, forgetfulness, and lack of access to resources. The malign superintelligence under consideration is just like one of those, only without those constraints.
Chollet next argues that a superintelligence would not be possible because our own intelligence arises out of a slow process of learning. He writes, for example:
“Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.”
So what, the Superintelligence worriers retort. The first general-intelligence unit has the internet, and subsequent units can get to work training themselves at super-fast speed. Next.
Chollet then argues by analogy from the evidence that super-high-IQ humans are usually not especially capable.
“In Terman’s landmark “Genetic Studies of Genius”, he notes that most of his exceptionally gifted subjects would pursue occupations “as humble as those of policeman, seaman, typist and filing clerk”. There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news.”
Then he explains the reverse; that many of the most capable humans have had only moderate IQs:
“Hitler was a high-school dropout, who failed to get into the Vienna Academy of Art — twice… many of the most impactful scientists tend to have had IQs in the 120s or 130s — Feynman reported 126, James Watson, co-discoverer of DNA, 124 — which is exactly the same range as legions of mediocre scientists.”
I don’t find it comforting – with respect to the likelihood of a super AI taking over – that great achievements required only medium IQs. It may be that the non-IQ facets of high-achieving humans are not reproducible in machines, but merely stating that those facets exist does not tell us whether they can be reproduced. Maybe the AI would get one of its copies to track down the life stories of failed geniuses and successful dullards to maximise its own chance of success.
The next argument is that our capabilities are not limited by our IQ but by the environment:
“All evidence points to the fact that our current environment, much like past environments over the previous 200,000 years of human history and prehistory, does not allow high-intelligence individuals to fully develop and utilize their cognitive potential.”
The idea that the environment inhibits the optimization of intelligence sounds right. For example, today’s machine-learning algorithms can be degraded by depriving them of data.
But: 1) the intermediate machines that precede a superintelligence are going to have a *lot* of data, including the data generated by their own existence and, eventually, the entirety of human knowledge plus AI-generated knowledge; 2) we can see how actual individual lifetimes have limited individual human brains, but not how the sum total of all knowledge would limit successively improved AIs. We don’t know enough to jump from such limits in the past to declaring a Superintelligence an ‘impossibility’.
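The data-deprivation point itself is easy to demonstrate. The sketch below is my own toy illustration, not from either essay: a 1-nearest-neighbour classifier learns a simple concept far better from a full training set than from a starved one.

```python
def nn_predict(train, x):
    """1-nearest-neighbour: return the label of the closest training point."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Toy concept to learn: label 1 iff 30 <= x <= 70.
# An integer grid keeps all the distance arithmetic exact.
def true_label(x):
    return 1 if 30 <= x <= 70 else 0

full    = [(x, true_label(x)) for x in range(0, 101, 10)]  # 11 examples
starved = [(x, true_label(x)) for x in (0, 50)]            # 2 examples

def accuracy(train):
    return sum(nn_predict(train, x) == true_label(x) for x in range(101)) / 101

print(accuracy(full), accuracy(starved))  # full data wins: ~0.91 vs ~0.66
```

Same algorithm, same concept: starving it of data wrecks it — which is exactly why point 1) above matters, since the machines in question would be anything but data-starved.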
Chollet next argues:
“our biological brains are just a small part of our whole intelligence. Cognitive prosthetics surround us, plugging into our brain and extending its problem-solving capabilities. Your smartphone. Your laptop. Google search. The cognitive tools you were gifted in school. Books. Other people. Mathematical notation. Programming.”
This is not an argument against a Superintelligence: AIs will have access to all these things too. They will be able to program. They will have computing power. They will be able to connect to the internet and search on Google. They will have access to all books written, the outputs of past people. And they will have access to other people, and other people’s online outputs.
Chollet tries to allay our fears about a superintelligence with this:
“It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence.”
This is true of the first AI that equals or surpasses an individual human. It will have been the output of a huge amount of prior human history and knowledge, and will stand on the shoulders of many giants. But this doesn’t make a sound prediction about what happens in the future. Once the AI gets to work, unless something restricts it, its new thinking, or the thinking of its many copies and simulations, will constitute a new artificial, and highly purposed, civilization or ‘cognitive prosthetic’.
“Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself. What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”
This is not comforting, for two reasons. First, the new just-better-than-us AIs can be reproduced, and can work together to improve themselves: it would be a mistake to presume that they will be as limited as past individual humans. Second, the final sentence [“What we do is, gradually, collectively, build external problem-solving systems that are greater than ourselves.”] takes us no further than ‘it hasn’t happened before, so it won’t happen in the future’. The former is true, but the latter does not follow from it. Nor, I think, are the Superintelligence worriers all ‘answering yes’. They are stating a hypothetical risk, and urging that we think carefully now, while we have the time and opportunity, about how collective action could make a Superintelligence next to impossible.
The same comforting extrapolation from the past is deployed again by Chollet:
“Science is, of course, a recursively self-improving system, because scientific progress results in the development of tools that empower science … Yet, modern scientific progress is measurably linear. …. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades. And this is despite us investing exponential efforts into science — the headcount of researchers doubles roughly once every 15 to 20 years, and these researchers are using exponentially faster computers to improve their productivity.”
Maybe this would also characterise the recursive self-improvement of computers using copies of themselves to develop improved versions of themselves; but maybe it would not. Either way, we should still devote thinking time to planning in advance for the case where it does not.
Chollet cites two bottlenecks in human-conducted science today that are supposed to dog AI self-improvement in the future:
“Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications… As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.”
Yet if we are prepared to contemplate that human science bottlenecks would not prevent the construction of an AI equivalent to or better than a human, these subsequent problems are much less relevant. The AI copies itself and devises its own strategies for cooperating with its sub-units.
For me, Chollet fails to substantiate his claim that a Superintelligence is an ‘impossibility’. What probability it has I have no idea. Nick Bostrom seems convinced that it is a certainty: a matter of when, not if. Perhaps the truth lies between these two authors. It would be nice if there were relatively cheap and reliable ways of heading off the risk, so that even if the probability were low, we could justify putting resources aside for them. But reading Bostrom I was convinced that this wasn’t likely. The most compelling scenario for the emergence of an uncontrolled, self-improving Superintelligence is via state actors competing for military advantage, or companies competing in secret for overwhelming commercial advantage. Policies to head off a Superintelligence would have to be agreed cooperatively, something that seems beyond a hostile multi-polar world.