The most telling aspect of Superintelligence is the praise blurbs on the cover and back.
“Human civilisation is at stake” - Financial Times
“I highly recommend this book” - Bill Gates
I’m not sure what I’m supposed to feel, and that uncertainty reflects the general problem with the arguments in Superintelligence. Reading the book, you can move from terrified by an idea to saying “huh, maybe” within the span of minutes.
Superintelligence’s basic premise is that artificial intelligence may someday surpass human intelligence and, most importantly, move beyond human control. What if this AI decides that humans are unnecessary, a threat, or merely composed of reusable atoms it needs for its goals?
The author, Nick Bostrom of Oxford University’s Future of Humanity Institute, leads the reader toward the conclusion that this is indeed a very likely situation, whether through malice or through ignorance of human values on the part of this AI.
Bostrom’s chief concern is how to constrain a superintelligent AI, at least until we can properly trust that its activities would benefit mankind. Of the many problems he raises, it is the vaguest: the others include a superintelligence’s motivation toward self-preservation, its potential ability to control the world, and its ability to choose and refine its own goals. While Bostrom argues that all of these issues are inevitable given enough time, it is the “control problem” that determines how destructive the other issues become.
It is at this point that a further blurb about the book is necessary: “[Superintelligence] has, in places, the air of theology: great edifices of theory built on a tiny foundation of data.”
That review, from The Telegraph, also argues that the book is a philosophical treatise rather than a popular science book, with which I agree. When I described the book to friends, they tended to respond philosophically rather than from a technical perspective.
It is from this perspective that Superintelligence applies a similar approach to the one Daniel Dennett took in Darwin’s Dangerous Idea: given enough time, anything is possible regardless of the mechanics.
The simple response is “Well, what if there isn’t enough time?”
This doesn’t suffice against Dennett’s argument (“The universe is this old, we see the complexity we do, therefore enough time is at least this long, and we have no other data point to consider”), but it was a popular response to Superintelligence. I personally heard “We’ll kill each other before then” and “We aren’t smart enough to do it.”
Both of these arguments reflect the atheistic version of the faith The Telegraph suggests the reader needs, and that Bostrom holds to throughout the book: given enough time, superintelligence will be all-powerful and all-knowing, near god-like except that it cannot move beyond the physical.
However, much as an atheist can draw value from the Gospels, even the unconvinced can remember a few sentences from Bostrom and take pause. Bostrom’s central concern is how to control technology, particularly technology that neither we nor anyone else knows how it works. Moreover, this should be a concern even when programmers know how a program works but the public using it does not. It is the same concern that makes people assume, nonchalantly, that the government is already tracking their location and their information.
Even without superintelligence, the current conversation about technology is a shrug and an admission that that’s just how it is. Bostrom leans heavily toward pacing ourselves rather than ending up dead. Given our current acceptance of the undesirable in our iPhones, shouldn’t we also wonder whether we should pace ourselves, pausing to examine our current progress in detail rather than excitedly waiting for the next product?
This isn’t to say we should stop technological progress. Instead, alongside innovation, there needs to be analysis of every step.
Ever wonder what’s in your OS’s source code? Could it be tracking you, logging every keystroke and sending it off to some database? What if all software were open source? Wouldn’t that solve the problem?
This isn’t a technological problem, is it? The question of open source for everything is an economic and industrial question, though it may ultimately be solved by technology.
Consider that, in the last twenty years, restaurants and food producers have tied themselves not simply to producing food to eat, but to the type and intent of the food they produce: is it sustainable? Is it safe for the environment? Does it reflect the locale? I imagine not too many people would be surprised to see a credo on a menu alongside the salads these days.
What about software? Are we only to expect that kind of commitment from ex-hippies and brilliant libertarian hackers? What about Apple, Google and Microsoft? It’s an ideal, certainly: once you reveal the Google search algorithm, what’s left but for a competitor to copy it? I don’t have an answer for this, but I understand there is an exchange: Google keeps its competitive edge, and it also keeps all my information.
We are already being victimized by unknown technology, and we shrug or make some snarky comment. Even though Superintelligence argues that certain technology is inevitable, we can still shape how it is made.
Wouldn’t it be great if we started practicing that now?