by Danny Stout
The protagonist, Caleb (Domhnall Gleeson), is invited to an isolated laboratory where clandestine artificial intelligence (AI) research takes place. A programmer himself, he meets a supercomputer that seems to possess “consciousness.” It’s not just that he “isn’t in Kansas anymore”; he’s unsure whether Dorothy, in this case a robot called Ava, has a soul. Caleb is unsettled, torn between the unknown and the possible. The tension thickens when Nathan, Ava’s creator, describes her construction and abilities. “If you’ve created a conscious machine, it’s not the greatest event in the history of man but of gods,” Caleb declares.
He converses with Ava on the origins of language. She’s stumped. “I don’t know,” she responds assertively. When Nathan inquires, “How do you feel about her?” Caleb is taken aback. Why should he have feelings for an android, and is that even possible at any meaningful level? In past AI movies such as Blade Runner, the human impersonator is uncovered when emotion is inadequately simulated. But Ava’s simulated feelings are far more advanced. She tells Caleb, “You ask circumspect questions about me, but I know nothing about you. That’s not how friendship works.”
Physically, Ava is both attractive and peculiar. Her face is strikingly beautiful, but much of her body is a transparent shell revealing circuit boards and wiring. Her legs are alluring nevertheless, and Caleb is smitten in a way he doesn’t quite understand. There’s the hang-up, again. How can one be smitten by a system of metal and wires? The movie’s dominant theme is clearly whether computers are capable of consciousness, and less tech-savvy viewers may wish they had taken a course in AI before seeing the film. Audience members are not handed glossaries to decipher the technical jargon used so freely throughout.
Caleb shakes off insinuations that Ava is anything but a machine, despite Nathan’s declaration, “If you think you’re talking to anything but a machine, it must have intelligence.” Ava’s reading of eye movements and subtle communication cues gives her a degree of emotional insight, further intriguing Caleb. She says, “I’d like us to go on a date. The way your eyes fix on me, your micro-expressions are communicating discomfort.”
He discloses details of his parents’ death in a car crash. Her concern is a segue into insights about Nathan. This part is compelling; she shows a strikingly sophisticated level of discernment: “Do you like Nathan? Are you good friends? You’re wrong about Nathan: he isn’t your friend. You shouldn’t trust anything he says.” Such disclosures set up the action elements of the conclusion, inevitable in much current cinema.
As Caleb is left with questions, so are we. Chad Compton, professor of international cultural studies at BYU-Hawaii, teaches courses on AI and the global Internet. Mathematician Joel Helms is also interested in AI. Recent conversations turn to religious implications, with Helms inquiring about Ava: “Does God hear her prayers?” to which Compton responds, “How can He if there’s no ‘her’?”
We will inevitably confront Compton’s query. At our deaths, will we leave behind supercomputers perpetuating our identities for the benefit of future generations? And a question of my own: now that robot nannies are available, will such androids teach gospel doctrine class in Sunday School? Philosopher Jacques Ellul posits: “[Technology] never observes the distinction between moral and immoral use. It tends, on the contrary, to create a completely independent technical morality.” Future discussions of “media effects” in the LDS faith community will inevitably broaden to include such matters.