This movie got a lot of buzz. I was not immune to the buzz. It's still in theaters if you want to be buzzed, too. This one lies somewhere between the comfy art house smallness of Robot & Frank and the big budget blockbustingness of RoboCop or Terminator movies.
In the first few seconds of the film, a programmer wins a contest to spend a week with the shadowy founder of a huge tech company in a remote, undisclosed location (the tech company has a search engine named Blue Book, after Wittgenstein's Blue and Brown Books). After being helicoptered in, he finally meets the enigmatic genius, who (spoiler alert, but not really) has a robot! A sexy lady robot. (This fact is slightly less cheesy than it appears once it's explained later. Slightly.)
I don't want to spoil too much (although I will give a pre-spoiler alert that I will discuss the ending in a couple paragraphs), but one of the big philosophical issues has to do with the Turing Test (an idea from Alan Turing, the subject of the recent film The Imitation Game and a founding figure behind the kind of computer on which you're reading this). Nathan (the reclusive genius) contends that a better test of whether a robot is truly capable of intelligent thought is for the judge (in this case his employee/guest, Caleb) to know that he's talking to a robot.
The idea seems to be that if he's talking to a robot while fully aware that she's a robot and he still thinks she's thinking intelligently, then this is somehow even stronger evidence than passing the original Turing Test. I guess this is because her abilities have to overcome his bias against thinking machines. Maybe I misunderstood, but I'm not so sure this is a better version of the Turing Test, since the tester could very well be biased in favor of robot intelligence (especially if the tester is a science fiction nerd like me!). I'm inclined to think that neither version would really test for anything like qualia, or the first-person phenomenon of consciousness (although they do mention the black-and-white Mary thought experiment at one point, too). I do think that Turing Tests might provide some evidence, even if not decisive evidence (an issue dealt with dramatically in the Star Trek: TNG episode "The Measure of a Man"). In any case, it all makes for an interesting movie, and of course, we are left wondering what the real test is.
SPOILER ALERT! I'm going to discuss the end of the movie below. Don't say I didn't warn you.
You have been alerted. At the end of the movie, Ava (the robot) kills Nathan. She steals parts of some other robots. I'll be honest: the image of a white robot literally stealing the skin of Asian robots didn't sit with me any better than the presence of the mute Asian maid/sex slave, Kyoko. Maybe Nathan had it coming to him. He was destroying apparently thinking robots, after all, which may be murder.
What troubled me was Ava's refusal to help her fellow robots and the fact that she apparently left Caleb to die. This seems like a cheap shot at inducing fear in an audience based on our uneasiness about technology: "How to make a bombastic ending...? I know, let's say they're going to kill us!" (See "What does the Ending of Ex Machina Really Mean?" for a far more detailed and somewhat more sympathetic interpretation).
As much as I enjoy The Matrix and the Terminator movies, I don't see any reason to think that the first thing on a powerful AI's list would be to wipe out humanity (or even to murder a few of us, as Ava does). Do people really think that genocidal or homicidal mania is an intrinsic property of intelligence? What would that say about us, at least the non-genocidal/homicidal among us? If the problem is how we treat AIs, then why isn't this an argument for treating them with respect? My take on 2001: A Space Odyssey (probably the greatest philosophical science fiction movie of all time) is that humans made HAL insane through faulty programming, so he's not the inherently murderous AI he's often made out to be; perhaps a similar case could be made for Ava.
I tend to think a powerful AI's attitude toward us would shade from benevolence into total disregard (as in Her and the Culture series of Iain M. Banks). But maybe I'm wrong. The thing is, none of us really knows what a super-powerful AI would do. Getting all bent out of shape about it may make for fun movies, but it's short-sighted and immature, as this excellent piece on AI panic among the media argues.
There are legitimate concerns about technology (I can be quite curmudgeonly about people staring at their phones instead of interacting with the world around them), but AI panic is a shame because it closes us off from really thinking through all the issues that AI brings up about thinking, consciousness, personhood, ethics, etc. Even worse, it could become a self-fulfilling prophecy in making us fearful of powerful AIs should they ever be created. I, for one, would welcome such AIs, not as overlords, but as friends.
My take-away from Ex Machina (with the reservations about fetishization you touch on obliquely) is that Ava's murders are effectively justified. Ethically and legally, you are allowed to kill to escape unjust imprisonment. We don't see Ava plotting murder in the film's denouement. Instead she is simply appreciating the much wider world, which she knows of in incredible detail (as she's loaded with effectively Google's knowledge), but has not experienced for herself.
I don't know the vision behind the film, but I suspect they're shooting for a sympathetic but distinctly alien intelligence. The founder-guy was abusive, but Ava's liberator ultimately miscalculated, and fatally.
They should get _Blindsight_ on the big screen for the most forceful rendition of that brand of alienness.
Thanks for the comment. I understand how killing Nathan would be justifiable, but I don't really see how killing Caleb would be. (Some have suggested that the timer on the power will disable the locks, so maybe Ava didn't leave him to die after all). You raise a good point about the ending, though. It's hard to tell what we're supposed to make of that.