The story revolves around a new approach to attaining Artificial Intelligence. By the early decades of the new century, it's accepted that both of the traditional strategies probably won't work. "Top down" has run into difficulties because the complexities of what's implied by true "intelligence" were underestimated and can't be expressed in the detail necessary for design; "Bottom up"--inducing an AI to evolve from simple beginnings--needs better guiding principles to steer the entity that's doing the evolving toward what it's supposed to evolve into.
The upgraded form of VR comes into it as a possible vehicle for implementing a variant of the second of the above categories. We don't know what goes on inside human skulls to produce what we call "intelligence." That's why we've been unable to tell artificial systems how to emulate it. All we do know for sure is the externals--how people act and what they say. So, our protagonist, Joe Corrigan, asks, instead of trying to specify the detailed internal code necessary to support people-like behavior, why not program the system to aim at doing what people do, and let the internals self-organize into whatever they need to be?
To be capable of that, naturally, the system would need to be able to observe what people do. The problem now is that, as decades of attempting to devise robots that come even close to mimicking what biological systems do flawlessly have shown, computers don't interact very efficiently with the real, outside world. But they do interact extremely efficiently with their own, internally created worlds. So, Corrigan proposes, let's turn the conventional situation the other way around. Instead of teaching the computer by trying to train robots in our real space, let's do it by projecting people into its virtual space.
The result is a massive simulation of a world that contains two kinds of inhabitants. First, there are real humans projected into it and able to move around, meet, talk, and generally act normally as "surrogates" of themselves. Second, there are "animations," humanesque creations injected and manipulated by the system. The object is a glorified form of Turing test, in which the system is programmed to progressively refine the behavior of its animations until, ideally, it becomes impossible to distinguish them from the human surrogates. Then, it's reasoned, whatever software constructs and linkages have come into being to support such indistinguishable behavior can safely be classed as "intelligent."
At least, that's the theory. But to make it a story there have to be bad guys, and of course, things go wrong. And with a world in which nobody can ever be quite sure who is human, what's real and what isn't, there's plenty of scope for twists.
Marvin Minsky (see TWO FACES OF TOMORROW page) also helped with this one, and in acknowledgment I gave him a walk-on part in the book, which I think he enjoyed.