Today there is an ever-increasing fear of artificial intelligence. Sources of this fear include the loss of jobs to automation, a program rebelling against us, and the creation of robots specifically designed for killing. Of those three, only one should actually concern us: robots designed to kill. The loss of jobs to automation should not be alarming, as most of the jobs vulnerable to automation have already been automated. Current jobs that remain susceptible will likely become more sophisticated, with artificial intelligence working alongside human workers to increase efficiency and productivity. The belief that artificial intelligence will one day turn on us is deeply pessimistic; we humans are in control and will remain so.
Nearly all artificial intelligence projects are built for a specific purpose. An artificial intelligence that specializes in driving cars cannot fly an airplane, and vice versa. A true threat could only come from a general artificial intelligence, but that is unlikely. Rodney Brooks, a renowned roboticist, has stated: “Extreme AI predictions are comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner”. Currently, there is no motive or reason to develop a general artificial intelligence; the whole concept is illogical to begin with. The worst-case scenario is that all tech jobs become obsolete.
The development of military robots that can kill on their own is the only aspect of artificial intelligence we should genuinely fear, as such machines are specifically made to harm humans. In the documentary “The Dawn of Killer Robots,” Motherboard, the science-focused branch of Vice, examines robots that could be considered killer robots in the near future. Notable examples include Atlas, EOD robots, and drones, with Atlas being the most eye-catching.
The Atlas robot shown in the documentary was, at the time, the most sophisticated bipedal robot in the world as well as the most advanced version in the Atlas robot family. Seven Atlas robots were sent to teams around the world for development and competition purposes. One of those Atlas robots is at Virginia Tech. Due to its proximity to Washington, DC, Virginia Tech has become a nursery for the development of artificial intelligence for military use.
The team with the Atlas robot is called ViGIR, and when asked about the robot's purpose, they responded that it is intended for rescue operations. The scientists on team ViGIR seemingly had the best intentions when asked about the end goal of the Atlas robot they were working on. To them, the robot is meant to be a rescue robot, although they acknowledge the possibility that someone will eventually put it to use in combat roles. Despite this, they seem to downplay the negative beliefs that many people hold. Their intention is to use robots to serve humans and perform tasks considered hazardous for people.
Another team, called Valor, was covered as well. They are developing their own robot, called Escher, and are receiving funding from DARPA. Dr. Lattimer, one of the professors overseeing the development of the Escher robot, seemingly giggles when asked about the potential dangers of the type of robot they are working on. Like team ViGIR, they intend to make the robot something that can carry out tasks too dangerous for humans to do.
Judging by how the team members respond to the reporters' questions, it is clear that they are used to being asked about their robots' potential dangers. They are aware of the fear that people harbor and must constantly reassure them that what they are doing is for the greater good. It is here that we can observe the use of ethos. Everyone involved in the development of these robots is a top-level engineer or scientist. Someone of that pedigree carries a great deal of credibility, so people who question them are often reassured that nothing bad will happen.
Eventually the opposing side is shown. One of the most prominent advocates against the development of military artificial intelligence is introduced: Jody Williams. She is a Nobel Peace Laureate responsible for the ban on landmines in war and a spokesperson for the Campaign to Stop Killer Robots. She began by conducting research on drones, and in the process encountered information on bipedal robots. She concluded that surveillance drones would eventually be weaponized, and they were. The same scenario played out with EOD robots. Besides the Atlas robot, she is also concerned with Northrop Grumman's X-47B drone. Here we can see the use of both ethos and logos: Jody Williams has a long record of humanitarian efforts, a Nobel Peace Prize, and a great deal of research data. These qualities make her arguments very strong and highly effective.

Another prominent group of individuals is the victims of the drones. The video shows a family from Pakistan who were victims of a drone strike. In the strike, multiple children were injured and the family's grandmother was blown to pieces. The people in charge of overseeing drones also expressed grief over this and explained the flaws in the drones' surveillance systems. This is a clear product of pathos. At the same time, this use of pathos gives terrorist groups an opening to exploit the appeal-to-fear fallacy, which draws on ethos, pathos, and logos alike.
The video then moves on to Christine Fair, a professor at Georgetown University and a military affairs expert. She believes the use of robots is necessary and notes that there are people in Pakistan who are pro-drone. Those people refer to the drones as ababeel. She explains that the Quran contains a surah called Al-Fil, about an invader with an army of elephants that is destroyed by birds dropping stones on the elephants to repel them. The pro-drone Pakistanis support their position by saying that drones are the only thing that can combat the terrorists. This is a use of ethos and logos: for Muslims, the Quran is a source of guidance. Ethos comes from its holiness and sacredness, while logos comes from the writing itself.
Looking at the comments on the documentary, one might assume it was biased toward the supporting side. I disagree, however, because the supporting side offered more concrete reasoning, while the opposing side had to rely on emotion and straw-man fallacies.
- “Rhetorical Analysis.” University Writing Center (UWC), Texas A&M University, writingcenter.tamu.edu/Students/Writing-Speaking-Guides/Alphabetical-List-of-Guides/Academic-Writing/Analysis/Rhetorical-Analysis.
This guide will help me get an overview of what is expected for the essay. I will consult this web page when researching a suitable article or video for my essay. It gives excellent information on how to identify rhetorical devices and how to classify them. There is a strong emphasis on appeals, with some useful examples and tips on prewriting.
- Gross, Sarah, and Michael Gonchar. “Skills Practice | Persuading an Audience Using Logos, Pathos and Ethos.” The New York Times, 17 Jan. 2014, learning.blogs.nytimes.com/2014/01/17/skills-practice-persuading-an-audience-using-logos-pathos-and-ethos/.
This article demonstrates the applications of persuasion using ethos, pathos, and logos. With it, I can format my essay properly and draw more information from whatever source I am using. This article will complement the first source I have annotated.
- MotherboardTV, director. The Dawn of Killer Robots. YouTube, 16 Apr. 2015, http://www.youtube.com/watch?v=5qBjFZV19p0.
This video focuses on the topic I have chosen to write about: the use of artificial intelligence in military applications. Artificial intelligence is a major topic these days, and this video covers a multitude of opinions. A great deal of rhetoric is present, so this is my primary choice to analyze for the essay.
- Kettley, Sebastian. “Is Artificial Intelligence a Danger? More than Half of UK FEARS Robots Will Take Over.” Express.co.uk, 18 Sept. 2018, http://www.express.co.uk/news/science/1019342/Artificial-intelligence-AI-danger-UK-robots-take-jobs.
This article discusses the general fear of artificial intelligence. It explains how the British public is concerned that the automation of jobs will result in social inequality. It is important for me to understand the basic sources of fear so I can then comprehend the more complex ones, such as the integration of artificial intelligence into the military.
- Hambling, David. “Why the U.S. Is Backing Killer Robots.” Popular Mechanics, Popular Mechanics, 17 Sept. 2018, http://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/.
This article discusses the US military's backing of killer robots. It examines both the efficiency and the ethics of military robots that can kill of their own accord. It will primarily serve as a reference when I focus on Motherboard's video “The Dawn of Killer Robots”. Like the other articles, it contains a great deal of rhetoric, and the use of ethos, pathos, and logos is obvious.