Why Computers Won’t Make Themselves Smarter | The New Yorker
In this piece, published a year ago, Ted Chiang pours cold water on the idea of a bootstrapping singularity.
How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.
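One way to make this worry concrete (my gloss, not Chiang's) is Wolpert and Macready's "no free lunch" theorem for optimization: averaged over all possible objective functions, every search algorithm performs identically, so gains on one class of problems are necessarily paid for on another. A simplified statement, in my own notation, is

\[
\sum_{f \in \mathcal{F}} P\bigl(d_m^y \mid f, m, a_1\bigr)
\;=\;
\sum_{f \in \mathcal{F}} P\bigl(d_m^y \mid f, m, a_2\bigr),
\]

where \( \mathcal{F} \) is the set of all objective functions on a finite domain, \( a_1 \) and \( a_2 \) are any two search algorithms, and \( d_m^y \) is the sequence of \( m \) cost values an algorithm observes. The caveat is that real-world tasks are not drawn uniformly from \( \mathcal{F} \), so the theorem does not settle the question; it only illustrates why "optimize for everything at once" is a claim that needs an argument rather than a default assumption.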