Tuesday 10 May 2016

Ex Machina

Ex Machina presents artificial intelligence as the future of human advancement. The movie marks a complete shift from the explosions and guns that characterize Hollywood to psychological turbulence. It raises questions about the imminence of a machine takeover and the replacement of mere mortals. “Can a machine think?” is a resonating question, especially considering the summoning of Caleb, a character in the movie, to participate in a ‘Turing Test’. The test is intended to detect the distinctive features that separate machines from humans (Garland 23).
The movie raises timeless philosophical issues creatively and compellingly. The audience of Ex Machina is more likely to treat the subjective experience of consciousness piecemeal, as an assembly of parts, rather than as a holistically understandable subject. It is hard to understand why a self-aware, artificially intelligent machine would choose to manifest itself through human interactions and human communicative limitations. Such anthropomorphization is compelling as a learning tool, though it is arguable whether artificially intelligent machines can be self-aware and gain knowledge perpetually. In fact, Nathan knows that Ava has vast connections to internet information and resources, so she can outline inferential patterns from aggregates of data. If that claim is true, it begs a question regarding Ava’s stunted evolution. Nathan treats “singularity” as an attainable feat if Ava’s artificial intelligence capabilities are iterated in her next generation; “singularity” is a coined term implying technological advancement toward machine self-awareness. Nathan’s gesture shows that Ava operates within strictly defined parameters, which neutralizes the possibility of self-conscious evolution.
Ex Machina fails to probe the questions that surface regarding Ava’s capacity for moral reasoning. While never articulated explicitly, the movie implies that Ava’s prevailing ethic can be classified as utilitarian. A dilemma exists over Ava’s ability to espouse such an ethic by default. For example, does Ava need to be re-programmed with a deontological code, or does she embrace utilitarianism as a default trait? Is the categorical imperative logically inducible in machines, as Kant attempted for humans? Notably, a self-aware artificial intelligence can be switched on and off, making it hard to convince the audience of its capacity to operate within the confines of ethical virtue. Adequate time is required to instill these virtues and to develop a human soul and consciousness. Unconvincing as it may seem, such a weakness hardly affects viewership, given that the main role of the movie is entertainment.
Caleb does not wonder whether Ava is indeed an artificial being, despite having a clear view of her artificial body. In fact, Nathan shows him multiple parts used in her assembly, including parts of her brain. Still, Caleb cannot tell whether Nathan radios Ava’s answers to his questions, or whether she is a puppet. In movies, anything is possible: a person could feed information directly into Ava’s system through facial-expression and speech simulators. When Ava participates in an interview with Caleb, Nathan is absent. The situation provides perfect conditions for Caleb to assume that Nathan merely watches on a video screen; it is uncertain why he could not assume otherwise, something more than just watching. An initial suspicion that Kyoko (Nathan’s servant) is a robot can be overturned as the events unfold. Evidently, Ava’s autonomy beguiles Caleb because he never questions it as a premise. Solving philosophical problems demands attention to minute details and a willingness to entertain metaphysical assertions. In this light, chances are that the audience is being shown its own beguilement.
The question of whether machines can think and make their own decisions without human influence is metaphysical. It is an age-old debate that elicits different reactions from scholars. Unanimous agreement on an answer is unattainable unless advances in machine technology prove otherwise; only then will the debate be settled. Obtaining an objective, or even an arbitrary, answer involves analyzing the varied opinions of the audience against reality. Currently, technological limits and the passage of time bar a conclusive answer, though speculation can thrive on conspiracies and on the temptation to overshadow science (Müller 43). Therefore, “Can machines think?” is an inquiry that focuses on humans rather than machines. It concerns crediting humans and living creatures with the ability to feel or think: what people actually have to imagine in order to acknowledge other beings as thinkers or as emotional, what must be mentally invoked to ascertain the thoughts of other parties, and how one responds upon learning that the existing criteria fall short of capturing the intended certainty of the outcome.
Human beings can be caught in tight situations that defeat consciousness. As a human, Caleb holds conversations with Ava. She throws questions at him to probe his personality, and he asks questions in return. When asked whether he has a warm personality, he takes time to answer; only her insistence makes him retort that he is indeed a good person. He is embarrassed to answer, possibly because he is convinced of his own goodness. Notably, he fails to ask her the same question, perhaps because he is too embarrassed to, or worse, because he does not think to. Caleb uses the video screen in his bedroom to watch Ava. He notes that she hardly leaves her enclosure or sleeps, though she can lie down. From his video feed he also witnesses an altercation between Nathan and Ava at one point, and watches Nathan tear up the sketch of him (Caleb) that Ava drew.
Creating robots to serve as human slaves and workers may be attained in the distant future. However, legal and moral issues will arise if such machines possess all the attributes of humans. Nathan informs Caleb of Kyoko’s inability to speak English. She has mysterious origins, is docile, and is mostly mute. Nathan verbally abuses Kyoko whenever she accidentally spills water or drinks. It is unclear whether Kyoko is a robot encased in a human body, denied the ability to think and speak.
Artificially intelligent machines can hardly match human intelligence and thinking ability. An altercation between Ava and Nathan prompts Caleb to confront him; instead, he meets Kyoko on his way out. He attempts to start a conversation, and she responds by removing her clothes. When Caleb tries to stop her, she seems unresponsive, and he fumbles to button her blouse just as Nathan appears. Nathan advises Caleb to dance with her rather than talk. When the music and disco lights are switched on, she starts dancing, making it evidently clear that she is indeed a robot with primitive traits.
Nathan conducts surveillance on Ava’s sessions with Caleb from a dimly lit secret chamber whose walls he has covered with dozens of post-it notes. The setup alludes to Searle’s “Chinese Room” thought experiment, which attempted to refute the notion of strong artificial intelligence. Searle suggested that a non-Chinese speaker in an enclosed room, working through algorithms and paper slips, could produce answers indistinguishable from those of a Chinese-speaking machine and thus match a Turing-tested machine, yet the person would not understand Chinese despite all the effort (Searle 213). As such, if Nathan intended to mock Searle’s experiment, it is safe to conclude that he imagined a homunculus in Ava’s mind. However, what the viewer sees is contrary to that outcome: Nathan is not pulling Ava’s strings, nor is he in her mind; he is conducting simple video surveillance.
In summary, Nathan is sensual and human. Compared to the machines, he represents human imperfection, though he is instinctively intelligent; he is egotistical, corrupt, and bipolar. These human weaknesses present Caleb with his opportunity. Humans ask whether artificially intelligent machines and robots can fulfill all the duties that people can, and opinions vary along philosophical and moral lines. Concluding that Ava cannot possess consciousness implies that only humans have the ability and the right to be conscious. Alternatively, all records and facts may indicate that artificially intelligent machines cannot be conscious and hence cannot think. Logically, this second option still leaves open the possibility of consciousness in machines, depending on the definitions, comparisons, and requirements applied.

Works Cited
Garland, Alex. Ex Machina. London: Faber & Faber, 2015. Internet resource.
Müller, Vincent C. Philosophy and Theory of Artificial Intelligence. Berlin: Springer, 2013. Internet resource.
Smith, Barry. John Searle. Cambridge: Cambridge University Press, 2011. Print.
