Alex Garland has one thought when it comes to the AI revolution: Bring it on. After a career of writing novels (The Beach) and screenplays (28 Days Later, Dredd), he’s moving into the director’s chair with April’s Ex Machina, a movie that pushes the discussion of AI and ethics into discomfiting territory. When an alpha-male tech bro (played by Oscar Isaac) secretly develops a robot named Ava, he asks one of his employees (Domhnall Gleeson) to evaluate her wares using the Turing Test. Things get tense, even dangerous—but unlike film androids who go all cogito ergo slay, Ava is thoughtful, even kind, and may be a better heiress to the world than the human who created her.
Ex Machina isn’t the usual AI movie. How’d you get interested in sentient computers?
I’m 44, and I’ve grown up keeping pace with developments in videogames and computers. When I was 12 or 13, home computers arrived; your parents bought them expecting there’d be education in them, but all you’d do was play videogames. Still, I did a bit of programming in BASIC, very simple “Hello World” type programs that would give this machine the very barest sentience. I remember quite well the electric sense you’d get that the machine was alive, paired with the certain knowledge that it wasn’t.
Years later, I got into a long series of arguments with a friend of mine whose chief interest is neuroscience. He thinks that computers are never going to become sentient, and he has some good, scientific arguments why that is the case. But on an instinctive level, I just didn’t agree with him. I started doing a lot of reading about AI, mind, and consciousness.
You’ve worked with complex themes before, so this isn’t totally new territory.
As far back as The Beach there’s stuff about multiverse theory and things like that. I worked on a film called Sunshine, which had at its heart an issue to do with the heat death of the universe. Although a tiny bit of the stuff in there was reasonable from a scientific viewpoint, it was largely bullshit; it made about as much sense as putting on the warp drive in Star Trek. It was frustrating. I felt like I’d dropped the ball in some important respect. I’m not tearing the film down (there are things about it I really love), but this was something that bugged me. When I started working on this I thought, “I want something that’s reasonable.”
What was your AI education?
I’ve got an intellectual limitation in terms of what I’m able to understand. It’s partly intelligence, and partly understanding of mathematics; the two collide together to create a pretty impermeable brick wall for me. But what I can read and understand is the philosophical ideas that surround it.
In particular, I came across a book by a guy called Murray Shanahan, a professor of cognitive robotics at Imperial College, the UK’s version of MIT. I felt very sympathetic to its argument when I read it. So when I wrote this script I contacted him, and also a couple of other people, and said, “I want you to be really tough on this script and make sure it stands up.”
And does it?
There were two conceits we made. One is that you can create a sentient machine, and the other is that you have incredibly high-level robotics that let that sentient machine have a face, have nuance. Now those are huge conceits, and somebody might reasonably say it’s equivalent to the warp drive. But it is science fiction, and within those conceits I tried to be quite tough about it.
What kind of AI science fiction did you draw on for inspiration?
You can assume a level of film literacy with cinema audiences that you can’t assume with books. People may or may not have read Heart of Darkness, right? However, they’re very likely to have seen Apocalypse Now. So when you’re working on a sci-fi movie that contains within it artificial intelligence and robots, you can be pretty sure people know something about HAL and 2001. You can be even more sure they know about Blade Runner and replicants. So you write it knowing that you’re aiming at a film-literate audience, because they almost certainly will be one.
And they’ll likely be ready to pick your movie apart, too.
It could create a problem. In designing the robot, I didn’t want people thinking of another movie when she walked onscreen. If she’s colored gold, you immediately think of C-3PO, and the feminine aesthetics wouldn’t cancel that out. We had to steer clear of iconic robots: the one in Metropolis, or the one in that Bjork video directed by Chris Cunningham [“All Is Full of Love”].
People seem to want to compare Ex Machina to Her—the AI is different, but you play with that theme of creating a “perfect woman.”
There are two totally separate strands in this film as far as I’m concerned. One of them is about AI and consciousness, and the other is about social constructs: why this guy would create a machine in the form of a girl in her early twenties in order to present that machine to this young guy for this test.
How important was Ava’s design to the overall look of this film?
It’s super important. It’s crucial because she needed to look beautiful in a particular way, really striking visually, while seeming very familiar yet wholly unique. There might be a little bit of Maria from Metropolis in there, but not much else.
When Nathan explains why he made Ava look the way she does, it’s kind of creepy.
Yeah, but that’s exactly the point. You’re supposed to think it’s creepy. You’re not supposed to warm to him over that stuff; you’re supposed to feel unnerved and that this is uncomfortable. And therefore she needs to be rescued.
Nathan in many ways is an archetypal Silicon Valley guy. Is his character a commentary on those dudes?
It’s more alpha-male-meets-non-alpha-male. I like the mixture of someone who’s incredibly aggressive and kind of bullying, but is couching everything in this dude-bro speak as if that takes the edge off of what he’s actually doing. I’d suspect you’d find that just as easily on Wall Street as you would in Silicon Valley.
Have you been following the recent debates about AI and ethics?
It’s a big question. I think if you’re talking about nonsentient AIs, the advanced versions of the sort we already have, then there’s a lot to be concerned about and a lot to be aware of. It’s not hard to imagine a situation where an AI-controlled drone turns out to be more effective on a battlefield than a human-controlled drone, and maybe doesn’t suffer the post-traumatic stress that humans do. But what you’ve done is hand the machine a life-and-death decision over a human. The ethical problems contained within that are absolutely obvious.
But broadly speaking, if you create a new consciousness in the form of a machine, that isn’t necessarily so different in my mind from two adults creating a child. You do have a problem if that new machine is more intelligent than its parents, but we have some experience with that too. Two parents could create an Einstein, and another two could create a Stalin.
So you’re not worried about Skynet.
I kind of welcome it. Humans are going to die on this planet. It might be because of eco-disasters, or maybe because of changes that happen within the solar system or the sun. But when it happens, we’re not going to go through a wormhole to another galaxy and find a new planet. It’s just not going to happen. What will survive on our behalf is AIs, if we manage to create them. That’s not problematic; it’s desirable.
Ex Machina kind of feels like it was made with that in mind.
I hope that’s implicit in the film. It was definitely conceived of as a pro-AI movie. It’s humans who fuck everything up; machines have a pretty good track record in comparison to us.
It seems like you lucked out with your cast, especially Oscar Isaac and Domhnall Gleeson.
Because they’re in Star Wars?
Well, yeah. When I was setting out to cast Ex Machina, the only thing I knew 100 percent for sure was that it wasn’t going to work to get a “film star.” They could actually sink the whole thing quite easily. So it was mainly just about finding really good actors. The problem we had wasn’t identifying who would be good, but whether we could get them, because other people wanted them too.
So what’s next?
There’s a book [producer] Scott Rudin gave me called Annihilation, a spectacular novel by Jeff VanderMeer. I really loved it, so I’m going to try that. But I have no idea if that’ll work out.
What about another 28 Days Later sequel? 28 Months Later?
Danny Boyle, [producer] Andrew Macdonald, and I never wanted a third one, partly because we didn’t have an idea that we felt excited about. But I think we might now have one.
Can you share the idea?
Are you crazy? I’d get fucking shot if I told you that. [Laughs.] No, I can’t tell you. It’s just at the really early stage. What we have to do is get a writer. What you want is a writer who will subvert the idea, take some ownership of it. Hopefully Andrew will find a writer who says “my idea is better anyway” and rips it to shreds and comes up with something even better. That’s what I’d like.
Angela Watercutter (@waterslicer) interviewed director Michael Mann in issue 23.01.