(Homecoming) The Memory of Earth: Prologue

27 Apr

I apologise in advance, but this is probably going to be a short post. I’ve been having a difficult week and didn’t have much time to work on this (owing mostly to insomnia, which my current meds have been causing; being unable to get enough sleep isn’t fun). That said, the situation is hopefully improving at least a little. I didn’t want to neglect this project entirely this week, so I’ll be analysing the prologue, which is only two pages long. We’ll get into the actual story next time.

So. Onward.

The prologue is written from the perspective of “the master computer of the planet Harmony”, and it is the only part of this book where we are given anything from its perspective – an interesting choice on Card’s part, though this isn’t exactly uncommon practice; I’ve encountered lots of books using a one-off perspective in prologues (I wonder if TV Tropes has a name for this technique). We will get further glimpses into this computer’s perspective in the prologues to later books as well.

The reason I find this an “interesting choice”, as I said above, is that, at least for the most part, the story tries to pretend that this computer isn’t a computer while it plays the role of a god in the narrative. Framing the story with a prologue that explicitly discusses it as a computer undermines that to some degree, especially because its being a computer is the first thing we learn about it (and therefore this computerised aspect of it is likely to remain salient to the reader). I think Card may actually have been trying to do something interesting and complex here; exploring the implications of a society in which a computer actively plays the role of a god and is worshipped by humans could be a genuinely interesting thought experiment – particularly when proposed by a religionist like Card, because you could grapple with some difficult questions about the implications of religious behaviour and divine command ethics (divine command ethics becomes much more interesting if you believe there actually is such a thing as the divine), or about what it would even mean for something to be a god at all. I think that’s what he was trying for. But the story itself seems to cop out on this for the most part, and outside of the prologues it ignores the fact that it’s a computer whenever possible.

So, back to the story – we open with this computer being “afraid”, though Card is quick to explain that this was “not in a way that any human would recognize”. Essentially, the computer has become aware of the fact that its abilities are deteriorating and it can no longer influence humanity to the degree it used to do, and it worries that it will become unable to fulfill its “mission […] to be the guardian of humanity on this world”. In which case,

“it knew without a doubt – every projection it was capable of making confirmed it – that within a few thousand years humanity would once again be faced with the one enemy that could destroy it: humanity itself, armed with such weapons that a whole planet could be killed.” (page 7)

This is SF, so I won’t object to a computer’s ability to have emotions. Obviously, as far as we know, no current computer is capable of this (though I have occasionally ruminated on the possibility that there could be a subjective experience of what it is like to run Microsoft Word, if consciousness is a strictly emergent phenomenon; needless to say, such speculation is largely pointless because it is completely untestable), but I see no reason for it to be impossible in principle.

The more interesting thing here, I think, is that this is a simultaneously incredibly bleak and incredibly optimistic view of humanity. It would take a few thousand years for humanity to destroy itself? Even if what Card is talking about here is something along the lines of nuclear warfare or anthropogenic climate change, consider that it has by most metrics been less than 500 years since the scientific revolution, and many would argue we are already at high risk of causing our own extinction. Why, then, would it take a few thousand years to reach this point from an already highly developed technological society, even one whose development had formerly been meticulously censored by a computerised overseer? So Card, or at least his fictional computer, is very optimistic about the time scale here; the pessimism is in the assumption of inevitability. I think it’s actually debatable whether this view is correct in-universe/in-text or not; the second book in particular has a great deal of warfare breaking out rather quickly once the censorship ends, but not yet on anything close to an extinction-level scale.

Anyway, the computer decides that it has to act, but realises it doesn’t know how. I’ll quote the relevant passage because I actually sort of like it:

“Yet the master computer had no idea how to act. One of the symptoms of its decline was the very confusion that kept it from being able to make a decision. It couldn’t trust its own conclusions even if it could reach one.” (page 8)

That’s definitely a difficult dilemma. (Also, the pedant in me wants to correct that last sentence; I think it should probably be something like “It wouldn’t be able to trust its own conclusions even if it were able to reach any.” The subjunctive mood exists, even in English.)

It goes on to think about how it needs help and that there’s only one place it can get it (the text is ambiguous about what that is for now), but that place is too far away to communicate with, so the computer will have to go there. Then we get this:

“Once the Oversoul had been capable of movement, but that was forty million years ago and even inside a stasis field there had been decay… It needed human help.” (page 8)

Did you catch it? The thing Card just did that absolutely enrages me?

I’m not sure if it can be called an equivocation, per se, because instead of eliding the distinction between usages of the same word, he has suddenly shifted to a different word with vastly different connotations while treating it as interchangeable with the prior one. “The master computer of the planet Harmony” and “the Oversoul” have precisely the same referent, but this switch of terminology is not value-neutral, and certainly not trivial. And yes, the computer will be called “the Oversoul” throughout the rest of the series, and only referred to as “the master computer” occasionally in prologues. This gets much worse when you consider how the human characters use the term “the Oversoul” – very much as an object of reverence and worship, which most of them believe to be an actual supernatural entity rather than a computer; that is certainly an equivocation on Card’s part even if the original shift in terminology wasn’t quite one. Furthermore, the shift is accomplished in-text with no explanation or forewarning: he simply substitutes “Oversoul” in and expects the reader to take it in stride. Needless to say, I find this quite interesting (and infuriating, in turn) and will have more to say about it in the future, but for now all I can really do is point out what he’s doing.

So the computer spends some time

“[searching] its vast database, evaluating the potential usefulness of every human being currently alive. Most were too stupid or unreceptive; of those who could still receive direct communications from the master computer, only a few were in a position where they could do what was needed.

“Thus it was that the master computer turned its attention to a handful of human beings in the ancient city Basilica. […] it began its work, sending a steady stream of information and instructions in a tightbeam transmission to those who might be useful in the effort to save a world named Harmony.” (page 8)

WHAT?! Essentially, this computer has decided it needs help getting repairs, so it’s going to manipulate some humans into providing that help. Never mind the supposed altruism here, or for that matter the paternalism: its argument is that this is for their own good in the long term, but it doesn’t actually know that – all it knows is that its intended purpose was to censor humanity for its own safety and that it’s no longer capable of fulfilling that purpose. If we are to treat this computer as if it has moral agency, then this is fundamentally a selfish motive being justified via instrumental utility: “I don’t want to fail in my mission, which happens to be doing this thing that is helpful to humans” (or at least is presumed helpful in the narrative universe of this story). Kant would be appalled at this violation of his categorical imperative (or, in other terms, at this rampant objectification of people as mere means to an end). It’s possible to argue that the computer is justified in pursuing this course of action under other moral theories (e.g. some forms of consequentialism), so I won’t unilaterally condemn it, but I don’t think it’s obviously all-good the way the book will treat it in future.

Additionally, in order to consider this computer justified in its actions, the reader must also assume that its assessment of the situation is correct – an assumption which the text directly contradicts, or at least renders uncertain. What the computer thinks is good for humanity may not necessarily agree with what humans think is good for humanity (or, for that matter, with what is actually good for humanity, which may well disagree with the desires of both).

Let’s see… where else have we seen a hyperintelligent nonhuman entity manipulating humanity for what it believes to be the greater long-term good?

Oh, right. Hi there, Kyubey. So nice to see you.

I will say in closing that I think the premise of this story could actually have been a great way to explore the concept of Friendly AI, its feasibility and its failure modes, and so on, if written by a different author. Somebody like Eliezer Yudkowsky, perhaps. But sadly, we’ll have to be satisfied with only the unrealised potential.

Well, so much for this being a short post; my long wind strikes again (air currents are notoriously chaotic, so I suppose I shouldn’t fault myself too much for this failed prediction). Next time on Homecoming, we’ll meet the actual characters and start in on the actual story! I’m so excited.

*I must correct myself on something I said in a previous post; for some reason I’d thought this series was set at least forty thousand years in the future, but as is clearly evident from one of the above quotes, the actual figure the book gives is forty million. Mea culpa. (Though in fairness to myself, I must say that it seems absurd to me that humanity would change so little in forty million years that the society depicted in these books would be plausible.)

(Image of sentient apple is copyright 1999-2014 Neopets, Inc. Used for non-commercial purposes with permission.)

(Kyubey is a character from Puella Magi Madoka Magica, the image thereof is taken from a screen capture of the anime, and the author claims no rights to either character or image.)



 

7 responses to “(Homecoming) The Memory of Earth: Prologue”

  1. Number27

    April 28, 2014 at 8:58 pm

    Just found this via your link in the current SotD decon. May have profound (or otherwise) insights later. For now, thank you for starting this series.

     
    • Number27

      April 28, 2014 at 8:59 pm

      Also +1 on desire for Eliezer Yudkowsky friendly AI fic.

       
  2. mcbender

    April 28, 2014 at 10:20 pm

    Number27 – I’ve been wanting to deconstruct these books for a long time; I’ve just finally stopped procrastinating. I’m glad you appreciate it. (I should also add, in case you have not yet seen the others, that this is the third post in this series and you may also appreciate the previous ones.)

    As far as Friendly AI fic goes, unfortunately I don’t know of any by Yudkowsky (though I’d definitely love to read one), but until then if you haven’t read it already I definitely recommend “The Metamorphosis of Prime Intellect” by Roger Williams (the full text of which is available for free online, though a hardcopy edition is now available also):

    http://localroger.com/prime-intellect/

    Please do note that this novella should come with a MASSIVE TRIGGER WARNING for violence, sex, sexual violence, gore, and incest. That said, I think it’s a deeply thoughtful engagement with these issues and well worth reading if you can stomach it. I don’t think I entirely agree with the conclusion it comes to in the end (as one reviewer said, the ending fits the characters but it’s harder to judge it as an objective state of affairs), but it’s thought-provoking regardless.

     
  3. Ani J. Sharmin

    May 3, 2014 at 11:56 pm

    Interesting analysis. I’m wondering if there could have been an interesting twist/surprise in the story if the reader didn’t find out right in the beginning that the Oversoul was a computer.

    I’ve encountered lots of books using a one-off perspective in prologues (I wonder if TV Tropes has a name for this technique).

    You made me curious, so I went looking. There is indeed a trope for this: “Intro-Only Point of View”.

     
  4. mcbender

    May 4, 2014 at 1:41 am

    That’s a really interesting question actually. I’m inclined to say I think that would have been a better story, though you’d have to do something interesting with it as opposed to just having it be for shock value and/or revealed and then ignored.

    What I think might be a more interesting consideration is that without the blatant framing of the “master computer” these books would read closer to fantasy than SF, and so might not have appealed as well to the (presumed) target audience, or to Card’s established readership…

    Thanks for trawling through TV Tropes for me, that place is a massive timesink and I didn’t particularly want to set foot there 😛

     
  5. ludeshka

    May 8, 2014 at 9:41 pm

    Ah! The Homecoming Saga! (I also bought these books because they were ridiculously cheap. Hmmm.) There were parts of these books I liked, and parts of these books that made me irrationally angry. Back then, my anger was based on how OSC tried to force things so that the characters who were “right” were allllllllllllllllllllllllllllllllllways “right” even when they were not right, absolutely not right at all.
    However, most of the story is now a blur to me.
    I’d really like to see if we were angry at the same things…
    Will be looking forward to it!

     
  6. mcbender

    May 9, 2014 at 12:03 am

    ludeshka: Heh, I wouldn’t be so quick to label your anger as ‘irrational’; there are plenty of rational reasons to be angry at these books also. I will admit I think the biggest struggle I’m having so far with this series is in trying to channel my incoherent rage into readable blog posts.

    Protagonist-centred morality is definitely a major issue in them, and that definitely gets to me too, but I have to admit for me it’s the gender and sexuality issues that piss me off the most. The next post will be full of ranting about that, if I manage to get it finished; it’s amazing how much doing this is taking out of me. Not that I’m giving up, mind! Somebody’s got to do it.

     
