I apologise in advance, but this is probably going to be a short post. I’ve been having a difficult week and didn’t have much time to work on this (owing mostly to insomnia, which my current meds have been causing; being unable to get enough sleep isn’t fun). That said, that situation is hopefully improving at least a little. I didn’t want to neglect this project entirely this week, so I’ll be analysing the prologue, which is only two pages long. We’ll get into the actual story next time.
The prologue is written from the perspective of “the master computer of the planet Harmony”. It is the only part of this book where we are given anything from the computer’s perspective, which is an interesting choice on Card’s part – though this isn’t exactly uncommon practice; I’ve encountered lots of books that use a one-off perspective in a prologue (I wonder if TV Tropes has a name for this technique). We will get further glimpses into this computer’s perspective in the prologues to later books as well.
The reason I called this an “interesting choice” above is that, at least for the most part, the story tries to pretend that this computer isn’t a computer while it plays the role of a god in the narrative. Framing the story with a prologue that explicitly discusses it as a computer undermines that to some degree, especially because its being a computer is the first thing we learn about it (and this aspect of it is therefore likely to remain salient to the reader). I think Card may actually have been trying to do something interesting and complex here; exploring the implications of a society where a computer actively plays the role of a god and is worshipped by humans could be a genuinely interesting thought experiment – particularly when proposed by a religionist like Card, because you could grapple with some interesting and difficult questions about the implications of religious behaviour and divine command ethics (divine command ethics becomes much more interesting if you believe there actually is such a thing as the divine), or about what it would even mean for something to be a god at all. I think that’s what he was trying for. But the story itself mostly cops out on this, and outside of the prologues it ignores the fact that it’s a computer whenever possible.
So, back to the story – we open with this computer being “afraid”, though Card is quick to explain that this was “not in a way that any human would recognize”. Essentially, the computer has become aware that its abilities are deteriorating and it can no longer influence humanity to the degree it once could, and it worries that it will become unable to fulfil its “mission […] to be the guardian of humanity on this world”. In which case,
“it knew without a doubt – every projection it was capable of making confirmed it – that within a few thousand years humanity would once again be faced with the one enemy that could destroy it: humanity itself, armed with such weapons that a whole planet could be killed.” (page 7)
This is SF, so I won’t object to a computer’s ability to have emotions. Obviously, as far as we know, no current computer is capable of them (though I have occasionally ruminated on the possibility that there could be a subjective experience of what it is like to run Microsoft Word, if consciousness is a strictly emergent phenomenon; needless to say, such speculation is largely pointless because it is completely untestable), but I see no reason for it to be impossible in principle.
The more interesting thing here, I think, is that this is a simultaneously incredibly bleak and incredibly optimistic view of humanity. It would take a few thousand years for humanity to destroy itself? Even if what Card has in mind here is something along the lines of nuclear warfare or anthropogenic climate change, consider that by most metrics it has been less than 500 years since the scientific revolution, and many would argue we are already at high risk of causing our own extinction. Why, then, would it take a few thousand years to reach that point from an already highly developed technological society, even one formerly under the meticulous censorship of a computerised overseer? So Card, or at least his fictional computer, is very optimistic about the timescale; the pessimism is in the assumption of inevitability. I think it’s actually debatable whether this view is correct in-universe: the second book in particular has a great deal of warfare breaking out rather quickly once the censorship ends, but not yet on anything close to an extinction-level scale.
Anyway, the computer decides that it has to act, but realises it doesn’t know how. I’ll quote the relevant passage because I actually sort of like it:
“Yet the master computer had no idea how to act. One of the symptoms of its decline was the very confusion that kept it from being able to make a decision. It couldn’t trust its own conclusions even if it could reach one.” (page 8)
That’s definitely a difficult dilemma. (Also, the pedant in me wants to correct that last sentence: I think it should probably be something like “It wouldn’t be able to trust its own conclusions even if it were able to reach any.” The subjunctive mood exists, even in English.)
It goes on to think about how it needs help, and that there’s only one place it can get it (the text stays vague about what that place is for now), but that place is too far away to communicate with, so the computer will have to go there. Then we get this:
“Once the Oversoul had been capable of movement, but that was forty million years ago and even inside a stasis field there had been decay… It needed human help.” (page 8)
Did you catch it? The thing Card just did that absolutely enrages me?
I’m not sure this can be called an equivocation, per se, because instead of eliding the distinction between usages of the same word, he has suddenly shifted to a different word with vastly different connotations while treating it as interchangeable with the prior one. “The master computer of the planet Harmony” and “the Oversoul” have precisely the same referent, but this switch of terminology is not value-neutral, and certainly not trivial. And yes, the computer will be called “the Oversoul” throughout the rest of the series, and referred to as “the master computer” only occasionally in prologues. This gets much worse when you consider how the human characters use the term “the Oversoul” – very much as an object of reverence and worship, which most of them believe to be an actual supernatural entity rather than a computer; that certainly is an equivocation on Card’s part, even if the original shift in terminology wasn’t quite one. Furthermore, the shift is accomplished in-text with no explanation or forewarning: he simply substitutes “Oversoul” in and expects the reader to take it in stride. Needless to say, I find this quite interesting (and infuriating, in turn) and will have more to say about it in the future, but for now all I can really do is point out what he’s doing.
So the computer spends some time
“[searching] its vast database, evaluating the potential usefulness of every human being currently alive. Most were too stupid or unreceptive; of those who could still receive direct communications from the master computer, only a few were in a position where they could do what was needed.
“Thus it was that the master computer turned its attention to a handful of human beings in the ancient city Basilica. […] it began its work, sending a steady stream of information and instructions in a tightbeam transmission to those who might be useful in the effort to save a world named Harmony.” (page 8)
WHAT?! Essentially, this computer has decided it needs help getting repairs, so it’s going to manipulate some humans into providing that help. Never mind the supposed altruism here, or for that matter the paternalism: its argument is essentially that this is for the humans’ own good in the long term, but it doesn’t actually know that – all it knows is that its intended purpose was to censor humanity for its own safety and that it’s no longer capable of fulfilling that purpose. If we are to treat this computer as having moral agency, then this is fundamentally a selfish motive justified via instrumental utility: “I don’t want to fail in my mission, which happens to be doing this thing that is helpful to humans” (or at least is presumed helpful within the narrative universe of this story). Kant would be appalled at this violation of his categorical imperative (or, in other terms, at this rampant objectification of people as means to an end). One could argue that the computer is justified in pursuing this course of action under other moral theories (e.g. some forms of consequentialism), so I won’t unilaterally condemn it, but I don’t think it’s as obviously good as the book will treat it in future.
Additionally, in order to consider this computer justified in its actions, the reader must make the further assumption that its assessment of the situation is correct – an assumption which the text itself renders uncertain, given that the prologue has just told us the computer can’t trust its own conclusions. What the computer thinks is good for humanity may not agree with what humans think is good for humanity (or, for that matter, with what is actually good for humanity, which may well differ from both).
Let’s see… where else have we seen a hyperintelligent nonhuman entity manipulating humanity for what it believes to be the greater long-term good?
Oh, right. Hi there, Kyubey. So nice to see you.
I will say in closing that I think the premise of this story could actually have been a great way to explore the concept of Friendly AI, its feasibility and its failure modes, and so on, if written by a different author. Somebody like Eliezer Yudkowsky, perhaps. But sadly, we’ll have to be satisfied with only the unrealised potential.
Well, so much for this being a short post; my long wind strikes again (air currents are notoriously chaotic, so I suppose I shouldn’t fault myself too much for this failed prediction). Next time on Homecoming, we’ll meet the actual characters and start in on the actual story! I’m so excited.
*I must correct myself on something I said in a previous post; for some reason I’d thought this series was set at least forty thousand years in the future, but as is clearly evident from one of the above quotes, the actual figure the book gives is forty million. Mea culpa. (Though in fairness to myself, I must say that it seems absurd to me that humanity would change so little in forty million years that the society depicted in these books would be plausible.)
(Image of sentient apple is copyright 1999-2014 Neopets, Inc. Used for non-commercial purposes with permission.)
(Kyubey is a character from Puella Magi Madoka Magica, the image thereof is taken from a screen capture of the anime, and the author claims no rights to either character or image.)