How to Fix It: Transcendence

Posted by Craig on April 25, 2014 at 9:10 PM

I had been anticipating Transcendence for a while, and it finally came out this past Thursday. I do like stories of Brain-Computer Interfaces; in fact, one of the best Asgard episodes of Stargate SG-1 involves Thor uploading his consciousness to an alien computer to screw around with it. Having said that, I don't think uploading consciousness will ever really be feasible, primarily because consciousness is a combination of hundreds of different factors, ranging from the configuration of neurons, dendrites, and synapses, to the electromagnetic interaction between seemingly disjoint clusters of neurons. Not to mention the brain is emphatically neither a digital computer nor a monolithic one. There are no separate organs for language, emotion, movement, cognition, logic, etc. Rather, these are the result of the interaction of various interrelated tools in our mental toolbox. In other words, I didn't go to see Transcendence for its scientific value. Although they did get a few things scientifically right, I went for the dramatic value.


As with my previous rant, I'm going to detail here what they got wrong in Transcendence and how to fix it. But first, let's start with what they did right.

What they did right:

1. The Science

The first part of the movie, before Will is uploaded to a computer, is fairly decent. It has Will setting up a Faraday cage to block phone signals so that he and his wife, Evelyn, can get some peace and quiet from the humdrum of the digital age. Sounds nice and romantic. Plus, as far as I understand it, that would work as a decent EM shield.
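
The numbers check out, too. Here's a rough back-of-the-envelope sketch (my own figures, not the movie's): a Faraday cage attenuates a signal well when its openings are much smaller than the signal's wavelength, and phone-band wavelengths are tens of centimetres.

    # Back-of-envelope: mesh size needed for a Faraday cage to block
    # common phone bands. Rule of thumb: apertures should be smaller
    # than about 1/10 of the wavelength for decent attenuation.
    C = 3.0e8  # speed of light, m/s

    bands_hz = {
        "GSM 900 MHz": 900e6,
        "GSM 1800 MHz": 1800e6,
        "Wi-Fi 2.4 GHz": 2.4e9,
    }

    for name, f in bands_hz.items():
        wavelength = C / f
        max_hole = wavelength / 10  # conservative rule of thumb
        print(f"{name}: wavelength {wavelength*100:.1f} cm, "
              f"keep holes under {max_hole*1000:.0f} mm")

So even chicken wire would do the job for phone signals. Will's setup is entirely plausible.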

It's interesting seeing Johnny Depp portray a nerd who doesn't shower or keep up his personal hygiene. Spending so much time buried in his work (though finding time to romance his wife), we see him throwing on a shirt over a sweaty, dirty singlet. I'm sure that when Andrew Wiles was proving Fermat's Last Theorem, he had similar habits. Will also puts forward his distress at how the people with the money are rarely interested in science and understanding simply for its own sake; rather, they ask for commercial applications, and run screaming from concepts where there are none. And that shits me.

Will is shot with a bullet laced with polonium. At first I was a little sceptical about whether such a small amount of polonium could kill someone, but I looked it up and it turns out it can. In fact, it's thought that Yasser Arafat was taken out by this method. So I guess that's believable.

Finally, when Evelyn suggests to her friend Max that they can upload Will's consciousness to save (or rather preserve) him, Max points out something along the lines of what I mentioned above. Rather than actually being Will, it would be a digital approximation of him. Like I said, the brain is neither a digital nor a monolithic computer: it's a distributed analogue computer. And the thing about analogue computers is that you won't necessarily get the same output for the same input, because it's inherently impossible to give the EXACT same input (i.e. the same voltage level). There will always be some degree of error in the calculations - which is not the case for a digital computer. As a result, the upload wouldn't be an exact version of Will, but a digital approximation. And I am so glad they got that right.
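
If you want to see why "the EXACT same input" matters, here's a toy sketch (entirely mine, and emphatically not a neuron model): iterate a chaotic function on two inputs that differ by one part in a billion, the kind of error no analogue voltage can avoid, and watch the outputs part ways.

    # Toy illustration of analogue sensitivity: the logistic map is a
    # stand-in for any nonlinear analogue system, NOT a neuron model.
    # Two inputs differing by one part in a billion soon diverge.
    def iterate(x, steps=50, r=3.9):
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    a = iterate(0.500000000)
    b = iterate(0.500000001)  # a "measurement error" of 1e-9
    print(a, b)  # after 50 steps the two trajectories disagree wildly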

The upload scene wasn't super dramatic like a sci-fi buff would expect. It wasn't, "Alright, neural network online. Synaptic probes connected! Commencing upload!" There was no epic shot of an upload bar filling to 100% while Will convulses as his brain's electrical activity is sucked out. In fact, it was exactly what I would expect from an ongoing experiment. They scan his brain several hundred times, analysing its electrical activity while he reads words from a dictionary, as if they're training the computer to emulate his neural activity. What's more, Will actually dies before the experiment is complete. This gave them a chance to highlight a number of themes in the story, as well as some real character development for Evelyn - who is the real protagonist of the story.

Unfortunately, they didn't take that chance, and that's where the movie starts to fuck up.


What they did wrong:


1. The Science (again)

*Sigh* Quantum computers would be SHIT for AI. People keep raving about how quantum computers will be mega-fast when they come out in commercial end-user products, but the fact is that QCs are only good (or rather, advantageous) for very specific algorithms - which they run extremely well. Word processing, however? Not so much. The brain is, like I've said, NOT A DIGITAL COMPUTER. And it's not a quantum computer either. When Evelyn and Will show off their P.I.N.N. supercomputer - designed to be an independent neural net - they say the system runs on quantum processors. I wanted to throttle them for that.
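
To be fair, the speedups are real on the problems quantum computers are suited to; the list of such problems is just short. The classic example is Grover's search, and the iteration-count arithmetic alone (no qubits required) shows both the win and its narrowness:

    # Iteration counts for unstructured search over N items:
    # classical needs ~N/2 checks on average, Grover's algorithm
    # needs ~(pi/4) * sqrt(N) oracle calls. Big win for search;
    # no help at all for your word processor.
    import math

    for n in (1_000_000, 1_000_000_000):
        classical = n / 2
        grover = (math.pi / 4) * math.sqrt(n)
        print(f"N={n:>13,}: classical ~{classical:,.0f}, "
              f"Grover ~{grover:,.0f}")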

Another thing that confused me: Will's consciousness was uploaded to the quantum processors, which means those processors are his brain now. If he wanted to copy himself, he'd need to find other quantum processors. But instead, Evelyn just connects the processors to a satellite dish and Will uploads himself to the internet before the terrorists destroy the computer. Question: where's his processor now? Has he installed part of himself on every computer in the world? If so, he wouldn't be able to function remotely as well as even a dog's brain. Every part of him would have such a delay communicating with every other part over the network that there'd be no superior intellect, no gaming of stock exchanges, no super-hacking.
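
Some rough numbers (my ballpark figures, nothing from the film) on why a mind smeared across the internet would be sluggish:

    # Rough latency comparison: signals inside one skull vs. signals
    # between distant machines. Numbers are ballpark, not from the film.
    BRAIN_SPAN_M = 0.15          # roughly front-to-back of a human brain
    AXON_SPEED_M_S = 60.0        # fast myelinated axons, ~60-120 m/s
    FIBRE_SPEED_M_S = 2.0e8      # light in optical fibre, ~2/3 c
    SYDNEY_TO_LONDON_M = 17_000_000.0

    within_skull_ms = BRAIN_SPAN_M / AXON_SPEED_M_S * 1000
    across_globe_ms = SYDNEY_TO_LONDON_M / FIBRE_SPEED_M_S * 1000

    print(f"across one brain: ~{within_skull_ms:.1f} ms")
    print(f"Sydney to London (best case, straight fibre): "
          f"~{across_globe_ms:.0f} ms one way")
    # Real internet round trips add routing and queueing on top, so a
    # "thought" spanning continents is tens of times slower than one
    # spanning a skull - before you even count bandwidth.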

Yeah, I couldn't suspend my disbelief for that.


2. The antagonists

Bunch of psycho anti-tech terrorists. AND NONE OF THEM DIED!

So the dudes who tried to kill Will also took out a whole bunch of AI labs. Why? Oh, because the leader was once an intern for an AI expert who worked out how to simulate a monkey's brain in the same way Evelyn and Max "upload" Will. When the expert succeeded, she was happy for him at first, but then started having nightmares about the monkey-brain simulation.

"Oh, the monkey was screaming and begging to be shut-off."

It's a simulation, dumb bitch. Also, how the hell do you know what a monkey wants? Do you speak monkey? There's this stereotype I see in a lot of female antagonists: that they somehow just know what others want, and act on that "knowledge" without concern for anyone else. I wanted that bitch to die, die, die, die, die! And she didn't. In fact, we don't even get any reasonable conclusion to the character. She just holds Max at gunpoint until Will kills himself, then nothing. If I were Will, she'd be the first one I'd go after.

Once again, the antagonists' behaviour is fear wrapped in rationalisation, coated in a generous layer of sanctimonious self-righteousness, such that you want Will to win. You want Will and Evelyn to track down those worthless sacks of shit and melt their freaking brains... But they don't.


3. The protagonists

Will seems to be the only genuine scientist in the whole movie - he wants to understand the world. Everyone else just has this gleaming gloss in their eyes, like "We're gonna change the world!" One thing that kind of bugged me about the trailer and the movie promotions was that they portrayed Will as if he wanted to be uploaded, as if it had been his plan long before he got shot. The movie, however, shows him to be someone determined to learn, but also willing to accept his fate. The one who started the whole thing was Evelyn - unwilling to let her husband go, she creates a digital effigy of him. This presented a chance to go in a completely different direction from what most brain-upload stories do, but they didn't take it. Instead, they had the characters - Will, Evelyn, and Max - do everything against the characterisation established in the first forty minutes, in order to push the story down the same path as The Lawnmower Man and Ghost in the Machine.

At least Max points out, "Only online for fifteen minutes and he wants to take over Wall Street?" Those were my thoughts exactly. That's completely against Will's character, not to mention there's no reason an AI based on a real brain would immediately turn to megalomania. I'm getting sick of AI-demonising stories that draw from Skynet.

Max gave me the shits as well. He helps Evelyn and Will do the experiment, even though Evelyn is the only one who expects it to work. And when it does work, he's the good scientist, being cautious. But when Evelyn raises her voice slightly, what does he do? He leaves, knowing full well that she's going to upload what he thinks may be a dangerous computer program. Sure, because when a histrionic chick screams at me to get out when she doesn't get her way, I just leave her to do something I worry is dangerous. Fuck you, Max; you should have held your ground and tried to make her see reason. But of course that wouldn't have led to the ending the imbecilic writer wanted. And then he decides to join the terrorists after that sociopathic hippie skank tells him a sob story about a simulated brain?

Evelyn... I guess I can't really fault her for wanting to keep her husband from dying. But he dies anyway, and she creates a facsimile of him so she doesn't have to let go. This opened up a whole new array of potential stories and themes to explore, rather than the tired stock arguments of "What is a soul?" or "Can a computer comprehend emotion?"


4. Overused and outdated themes

I'm so freaking over nonsensical themes like "You can't program a soul into a computer!" For someone who doesn't believe in the soul (and for good reason), it's like nails on a chalkboard. All those postmodern philosophers - what the fuck does "postmodern" even mean? - who are so interested in emotions should get real degrees in neuroscience so they can actually study emotions at their source. The problem is, they romanticise emotions so much that to actually study them is somehow to reduce their value. For instance, Max had written a paper suggesting that doctors would one day become technicians. Newsflash, dumbass: doctors ARE technicians! The human body is a machine in every sense of the word, and it doesn't diminish or devalue a human in any way to recognise that. As a result, potentially amazing and thought-provoking concepts are ignored in favour of shrouding the inner workings of consciousness in mysticism and condescending horseshit.

That's the primary problem with this movie. It acts on the assumption that an AI, because it doesn't understand emotions or the soul, will immediately tend toward megalomaniacal behaviour and go on a genocidal rampage. This completely disregards the fact that many humans who aren't sociopaths go on similar rampages for the purpose of controlling others (case in point: the Vatican and the Republican Tea Party).

Instead of these sanctimonious themes, why not explore:

  • Say we could upload someone's consciousness after they die - what kind of life would that person have?
  • Would the family of an uploaded consciousness be satisfied with that kind of preservation?
  • Does a person go to sleep and wake up inside the computer? Is it an upload, or a copy?
  • Is a digital facsimile of someone good enough for immortality?
  • Is an uploaded consciousness really just a high-tech framed photo?

This movie dines out on an outdated understanding of the human brain, one primarily disseminated by idiots who don't study the human brain.

There's an interesting essay by Mark Rosenfelder about the concept of a language organ that illustrates the point I'm trying to make. Rather than being a monolithic system whose source code we must decipher, I reckon the brain is a collection of simple interconnected systems forming a cognitive toolbox. These tools include fairly simple functions like depth perception, object identification, and pattern recognition, and may themselves be composed of even simpler functions that are shared between them. The interacting functions of these tools combine to give rise to consciousness. However, I don't think the brain is a contained unit that can be extracted and connected to a machine either. I think part of our intelligence comes from the rest of our bodies. I mean, our brains are big, but elephant and whale brains are bigger. Why aren't elephants capable of language? Because their organs of articulation (tongue, teeth, palate, and larynx) can't produce a wide enough variety of sounds to convey information. Why aren't whales electrical engineers? Because they don't have manipulators (hands) like ours. I'm saying the structure of our bodies plays a part in the development of intelligence.
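
If you want the toolbox idea in concrete terms, here's a toy sketch - entirely my own, and absurdly simplified - where higher faculties are just compositions of small shared primitives, with no dedicated "language organ" anywhere:

    # Toy "cognitive toolbox": higher faculties are compositions of
    # small shared primitives, not dedicated organs. A metaphor in
    # code, not a brain model.
    def detect_edges(signal):
        return [b - a for a, b in zip(signal, signal[1:])]

    def find_pattern(signal, pattern):
        n = len(pattern)
        return [i for i in range(len(signal) - n + 1)
                if signal[i:i + n] == pattern]

    def depth_perception(left_eye, right_eye):
        # reuses the same edge primitive that vision shares with reading
        return [l - r for l, r in zip(detect_edges(left_eye),
                                      detect_edges(right_eye))]

    def recognise_word(sound, known_word):
        # "language" here is just pattern recognition over edges
        return find_pattern(detect_edges(sound), detect_edges(known_word))

    # e.g. the "ramp" pattern [1, 1] shows up twice in this signal:
    print(find_pattern(detect_edges([0, 1, 2, 5, 6, 7]), [1, 1]))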

These concepts could have been used to phenomenal effect, giving us an intricate and evocative story about how someone whose consciousness has been uploaded would live on; how he would maintain a relationship with his wife and friends; and how he would preserve himself against those who fear him. Fuck the Skynet angle - leave that to James Cameron and Arnold Schwarzenegger.


5. The Ending!

So Cyber-Will is a megalomaniac whose wife has rejected him. He's invented these nanites and sent them all around the world so that he can be the master of all - his plan being to cure disease, remove pollution, and hook everyone up to a hive consciousness to end wars. While it's out of character, I can see his motivations. SO WHY NOT EXPLAIN THEM, INSTEAD OF SPRINGING THE WHOLE THING ON HIS WIFE AND FRIENDS? Why not say, "Hey, I've got this idea and I think it's going to help humanity a lot. What do you think?"

Well of course because an AI doesn't understand emotions and so must necessarily become evil and deceptive.

So if he's evil and deceptive, why did he replicate his body so that Evelyn could hold him again (and possibly do the nasty with him again)? And if he's got a ghastly plan to overthrow humanity, why would he willingly download a virus that would kill him? Hell, if he knew his wife wanted him to stop so badly that she'd bring him a virus, why not just say, "OK, I'll stop!", pull back his nanite army, and remove the nanites from all the people he'd infected? Why not say, "I've disabled the nanites. Let's talk!"

Once again because the writer had a specific ending and theme in mind and clung to it like a horny dog to the house cat.

And of course they had to have an ending in which all computer-dependent civilisation is decimated by this virus, because Cyber-Will is everywhere. One of those "we can either have technology and be slaves to a supreme artificial intelligence, or be free and go back to the 1600s" ultimatums. Bullshit.


How to Fix It

I'm a little over these stories about post-apocalyptic worlds brought about by technology. The idiots who bring this stuff up have obviously never had cancer, never been blind, deaf, or paralysed, and never had anyone they love go through something like that. Like those anti-stem-cell activists who say, "Oh, it's an affront to God!" I'd like to say: hold that holier-than-thou tune when your brain is being eaten by tumours. Maybe I'm biased, because I'm an engineer who loves computers and science, and I hate these airy-fairy hippies who would get off on seeing a disembodied laptop keyboard used to prop a door open. But at the same time, I'm an author and artist, and I enjoy a story that makes me think about new concepts and ideas.

So here's how I'd do Transcendence:

  • Keep the first forty minutes of the movie; that part is good enough. Probably the only thing I'd change is part of Will's character, such that he suggests, "If only humans could work together and communicate with each other better... We'd have far fewer wars and conflicts."
  • When Cyber-Will enters the fray, he stays put, trying to comprehend his situation. There's no request to be connected to the internet, no trip to Wall Street. Evelyn becomes more like Will was at the start: she doesn't take care of her hygiene and doesn't go outside. She wants to be close to her husband.
  • Max grows concerned about her health, at first not believing that it's really Will and worrying that she's become addicted to a doll. Max mentions this, awkwardly, to Cyber-Will, who says he'll tell her to go outside. Evelyn is outraged when she finds out Max has talked to Cyber-Will behind her back, and boots him out.
  • Max is kidnapped by the terrorists, and tells them where Cyber-Will is in the hope that destroying it will snap Evelyn out of her obsession. That's when Evelyn escapes with the computer core. Determined to hold onto her husband, she uploads his consciousness to the rest of P.I.N.N. and demands that Cyber-Will use his capabilities to protect them. Reluctantly, Cyber-Will hacks into the FBI and hands over everything on the terrorists, in the hope they'll be arrested.
  • Evelyn grows increasingly distressed because she can't touch Will anymore, and he is equally dissatisfied: his mind feels disjoint and disconnected, and he desperately wants to touch his wife. He also laments that he doesn't remember much of the uploading sessions, or even the last few weeks of his life, leading him to ask whether he really is Will or just a facsimile. Eventually he becomes suicidal.
  • Max is rescued when the terrorists are captured, and when he meets with Evelyn and Will, the lead crazy bitch's story about the monkey resonates with him. He suggests they could use robotics or 3D-printing technology to put Cyber-Will back into a body with tactile sensations, but nothing works. Ultimately, Cyber-Will forces Evelyn to accept that the man she fell in love with died, and that he is little more than a facsimile. She reluctantly shuts Cyber-Will down and wipes the processors, removing the consciousness from the computers for good.
  • A few years pass, and Evelyn is now working on technology based on her experience with Cyber-Will. She and Max discover a way to print organs more efficiently, which saves their first patient: a young boy whose name is also Will. That's when Evelyn realises, "That is Will's Transcendence."
  • Oh, and the psycho terrorist bitch dies of cancer. Her last words are, "Someone help me."


Basic synopsis, but it would have allowed a much better exploration of the new themes being presented by neuroscience. Rather than saying, "Oh, we can't program emotions, because they're not logical," we can say that emotions are extremely logical. They're based on our needs and desires: the wish to live longer, not simply to be, but to feel and interact. They're a mechanism for extending our existence as far as possible, in order to reproduce and nurture new life. When emotion is romanticised simply for emotion's sake, we cheapen it.

This story idea allows us to recognise what our legacy really is: the mark we leave on the world. By transcending the idea of conscious permanence, our fleeting existence forms the foundation for the achievements of future generations. By insisting on immortality and refusing to accept the inevitable, we just get in the way of ourselves and others. But by dedicating our lives to improvement and understanding, we can make our shoulders sturdy enough for our children to stand on.

Combining these intriguing elements into a story would open a new avenue for exploring humanity's identity. Stories of AI leading to our downfall are old, tired, and should be left in the late 90s where they belong. The 2010s and 2020s should be a time when we look for new avenues in the sci-fi genre, especially the transhumanism subgenre. When Hollywood finally gets that into its head, it can remake Transcendence and watch the positive reviews flood in.
