
Aufgabe 2

Aufgaben

TASKS

1
Outline the information about Bartneck's experiment and the implications for machine-human interaction as presented in Text A.
(20 %)
2
Analyze the way the author maintains the reader's interest in the experiment and its consequences. Give evidence from Text A.
(25 %)
3
Mediation
For an international youth conference on “Human-Robot Ethics”, outline what Ulrich Ladurner writes about the use of robots and drones in war.
(Text B)
(20 %)
4
Choose one of the following tasks:
4.1
“And while eventually every participant killed the robot, it took them time to intellectually override their emotional queasiness […].” (Text A, ll. 35/36)
Reflect on the emotional ways people relate to interactive devices and computer games. Refer to Bartneck's findings in Text A and other examples you are familiar with.
(35 %)
OR
4.2
Compare the human-robot interaction in Text A with other experiments in literature or film where humans and machines like robots, surrogates, or avatars come into conflict. Assess how they deal with the situation.
(35 %)
OR
4.3
Write an article for the online magazine “Science News for Students” in which you reflect on the implications of Bartneck's experiment (Text A) and the benefits as well as risks of a robot-aided society.
(35 %)
Text A: Excerpt from the article
No Mercy For Robots: Experiment Tests How Humans Relate To Machines
By Alix Spiegel
Could you say “no” to this face? Christoph Bartneck of the University of Canterbury in New Zealand recently tested whether humans could end the life of a robot as it pleaded for survival.
[…]
Treating Machines Like Social Beings
Many people have studied machine-human relations, and at this point it's clear that without realizing it, we often treat the machines around us like social beings.
Consider the work of Stanford professor Clifford Nass. In 1996, he arranged a series of experiments testing whether people observe the rule of reciprocity with machines.
“Every culture has a rule of reciprocity, which roughly means, if I do something nice for you, you will do something nice for me,” Nass says. “We wanted to see whether people would apply that to technology: Would they help a computer that helped them more than a computer that didn't help them?” […]
So what happens when a machine begs for its life – explicitly addressing us as if it were a social being? Are we able to hold in mind that, in actual fact, this machine cares as much about being turned off as your television or your toaster – that the machine doesn't care about losing it's1 life at all?
Bartneck's Milgram Study With Robots
In Bartneck's study, the robot – an expressive cat that talks like a human – sits side by side with the human research subject, and together they play a game against a computer. Half the time, the cat robot was intelligent and helpful, half the time not.
Bartneck also varied how socially skilled the cat robot was. “So, if the robot would be agreeable, the robot would ask, 'Oh, could I possibly make a suggestion now?' If it were not, it would say, 'It's my turn now. Do this!' ” […]
At the end of the game, whether the robot was smart or dumb, nice or mean, a scientist authority figure modeled on Milgram's would make clear that the human needed to turn the cat robot off, and it was also made clear to them what the consequences of that would be: “They would essentially eliminate everything that the robot was – all of its memories, all of its behavior, all of its personality would be gone forever.”
In videos of the experiment, you can clearly see a moral struggle as the research subject deals with the pleas of the machine. “You are not really going to switch me off, are you?” the cat robot begs, and the humans sit, confused and hesitating. “Yes. No. I will switch you off!” one female research subject says, and then doesn't switch the robot off.
“People started to have dialogues with the robot about this,” Bartneck says, “Saying, 'No! I really have to do it now, I'm sorry! But it has to be done!' But then they still wouldn't do it.”
There they sat, in front of a machine no more soulful than a hair dryer, a machine they knew intellectually was just a collection of electrical pulses and metal, and yet they paused.
And while eventually every participant killed the robot, it took them time to intellectually override their emotional queasiness – in the case of a helpful cat robot, around 35 seconds before they were able to complete the switching-off procedure. How long does it take you to switch off your stereo?
The Implications
On one level, there are clear practical implications to studies like these. Bartneck says the more we know about machine-human interaction, the better we can build our machines.
But on a more philosophical level, studies like these can help to track where we are in terms of our relationship to the evolving technologies in our lives.
“The relationship is certainly something that is in flux,” Bartneck says. “There is no one way of how we deal with technology and it doesn't change – it is something that does change.”
More and more intelligent machines are integrated into our lives. They come into our beds, into our bathrooms. And as they do – and as they present themselves to us differently – both Bartneck and Nass believe, our social responses to them will change.
(645 words)
Source: Spiegel, Alix. “No Mercy For Robots: Experiment Tests How Humans Relate To Machines.” National Public Radio, January 28, 2013. Accessed March 3, 2014. http://www.npr.org/blogs/health/2013/01/28/170272582/do-we-treat-our-gadgets-like-they-re-human.
1its (mistake in original text)
Annotations
Lines
6
reciprocity
in social psychology: responding to a positive action with another action, rewarding kind actions
14
Milgram Study
experiments in the early 1960s measuring the willingness of participants to obey an authority figure who instructed them to perform acts in conflict with their personal conscience, i.e. they were persuaded to punish other participants with painful electric shocks
36
queasiness
here: uneasiness
Text B: Auszug aus dem Artikel
Wenn Roboter töten
Von Ulrich Ladurner […]
Was für eine beruhigende Vorstellung, wenn Roboter für uns in die Schlacht ziehen würden. Schon in weniger als zwei Jahrzehnten könnte es so weit sein. In einem Bericht des amerikanischen Verteidigungsministeriums mit dem sperrigen Titel Unmanned Systems Integrated Roadmap FY 2011–2036 wird die Entwicklung von Waffensystemen beschrieben, bei denen »das Maß der menschlichen Kontrolle« nach und nach abnehmen wird. Bereits um 2030 könnte es Waffen geben, die selbstständig darüber entscheiden, ob sie einen Menschen angreifen oder nicht. […]
Roboter unterminieren das, was man die Gesetze des Krieges nennt. Dazu gehört zum Beispiel, dass man seine Gegner unterscheiden kann. Wer ist ein feindlicher Kämpfer und wer nicht? Dafür reicht es nicht, dass der Kämpfer an einer bestimmten Uniform zu erkennen ist. Es zählt auch seine Absicht. Wie kann aber eine Maschine beurteilen, ob ein Mensch die Absicht hat anzugreifen oder nicht? Und weiter: Wenn eine Maschine beispielsweise unbeteiligte Zivilisten tötet, wer ist dann schuld? Derjenige, der den Roboter losgeschickt hat, oder derjenige, der die Software programmiert hat? Das sind grundlegende Fragen, auf die es keine klaren Antworten gibt.
Robotisierung senkt die Schwelle zum Krieg. Wer Maschinen für sich kämpfen lassen kann, der entscheidet sich schneller und leichter für einen Angriff. Es sterben keine eigenen Soldaten, die Kosten des Einsatzes halten sich in Grenzen. Der Preis ist also insgesamt gering.
Das aber ist eine Illusion, denn am Ende wird der Krieg nie auf die Roboter beschränkt bleiben. Robotisierung führt zu einer Entgrenzung des Krieges, auch auf der gegnerischen Seite. Das kann man im Augenblick in Pakistan beobachten. Taliban haben dort in den letzten Wochen mehr als ein Dutzend Impfhelfer erschossen, weil sie diese Menschen verdächtigen, am Drohnenkrieg mitzuwirken. Damit nämlich Drohnen ein Ziel erfassen können, benötigen sie entsprechende Informationen vom Boden. Die Impfhelfer standen im Verdacht, diese zu liefern.
Krieg findet zwischen Menschen statt, auch wenn Maschinen ihn ausführen. Und nur wer andere Menschen besiegt, sie zur Aufgabe zwingt oder gar unterwirft, wird den Krieg gewinnen können. Die Gefahren einer Robotisierung des Krieges sind so groß, dass man automatisierte Tötungsmaschinen ächten muss. Die Zeit drängt. Denn die modernen Armeen planen bereits mit Maschinenkriegern. Waffen kann man verbieten, wenn es gelingt, einen Konsens darüber herzustellen, dass sie zu gefährlich sind. Es gibt ermutigende Beispiele: Chemiewaffen, Landminen, Streumunition – Roboterkrieger sollten als Nächstes drankommen.
(379 Wörter)
Quelle: Ladurner, Ulrich: „Wenn Roboter töten“. Die Zeit, 3. März 2013. Entnommen am 3. März 2014.
http://www.zeit.de/2013/03/Roboter-Maschinenkrieg-Drohnen.
Annotationen
Zeilen
8
unterminieren
untergraben, schwächen
22
Taliban
Angehöriger einer radikalen islamischen Miliz in Afghanistan
24
Drohne
hier: unbemanntes Luftfahrzeug für militärische Zwecke
Tipps

Teilaufgabe 1

$\blacktriangleright$ Outline the information about Bartneck's experiment and the implications for machine-human interaction as presented in Text A.

In dieser Aufgabe sollst du skizzieren, welche Informationen der Text über das Experiment von Bartneck gibt und welche Schlüsse sich aus diesem Experiment ziehen lassen. Wichtig: Es ist keine vollständige Inhaltsangabe gefordert! Lasse alles aus, was nicht mit dem Experiment und den Schlüssen daraus zu tun hat (alles vor l. 10).

Beschreibe daher:
  • Den Aufbau des Experiments
  • Den Ablauf des Experiments, Reaktionen der Teilnehmer
  • Praktische u. philosophische Bedeutung des Experiments

Teilaufgabe 2

$\blacktriangleright$ Analyze the way the author maintains the reader's interest in the experiment and its consequences. Give evidence from Text A.

In dieser Aufgabe sollst du analysieren, mit welchen Mitteln Alix Spiegel das Interesse seiner Leser am Experiment und seinen Konsequenzen aufrechterhält. Du sollst also nicht den Inhalt untersuchen, sondern die rhetorische Strategie des Autors. Sieh dir dabei vor allem die Textstruktur an: Wie gestaltet Spiegel den Einstieg, gibt es eine Spannungskurve und welche rhetorischen Stilmittel benutzt er?

Diese Punkte solltest du herausarbeiten:
  • Einbeziehung des Lesers (1. Pers. Pl., 2. Pers. Sg.)
  • Kontroverse Fragen
  • Unterhaltsame, weniger wissenschaftliche Beschreibung des Experiments

Teilaufgabe 3

$\blacktriangleright$ Outline what Ulrich Ladurner writes about the use of robots and drones in war (Text B), imagining you were at a youth conference on “Human-Robot-Ethics”. Mediate the text.

In dieser Aufgabe hast du gewissermaßen zwei Aufgaben zu erledigen. Erstens sollst du skizzieren, was Ulrich Ladurner über den Einsatz von Robotern und Drohnen im Krieg sagt. Fasse dich also sehr kurz; du sollst nur eine Übersicht des Texts geben, die das Wichtigste ausdrückt. Zweitens sollst du eine Mediation schreiben, also den Text vermitteln. Vereinfache komplizierte Aussagen. Du sollst den Text nicht wörtlich übersetzen, sondern anderen (Teilnehmern einer Jugendkonferenz) ermöglichen, den Text zu verstehen. Behalte den logischen Aufbau des Texts bei.

Punkte, die du nennen musst, sind:
  • Plan der amerikanischen Regierung (soll 2036 erfüllt werden)
  • Roboter unterscheiden nicht zwischen Zivilisten und Soldaten
  • Roboterkrieg hat mehr Nachteile (u.a. mehr Kriege) als Vorteile (keine Soldaten sterben, weniger Kosten)
  • Der Einsatz von Robotern verschlimmere den Krieg und solle verboten werden

Teilaufgabe 4.1

$\blacktriangleright$ Reflect on the emotional ways people relate to interactive devices and computer games. Refer to Bartneck's findings in Text A and other examples you are familiar with.

In dieser Aufgabe sollst du über das emotionale Verhältnis zwischen Menschen und Spielen/interaktiven Geräten reflektieren. Denke also tiefgründig nach und versuche, verschiedene Standpunkte und mögliche Meinungen zu beachten. Wäge diese Meinungen ab und komme zu einem persönlichen Schluss über das Thema. Bei dieser Aufgabe wird großer Wert darauf gelegt, dass du deine eigene Meinung begründet darlegst. Text A soll dir vor allem als Denkanstoß dienen.

Hierüber kannst du u. a. schreiben:
  • Frust wegen nicht funktionierender Geräte/verlorener Spiele
  • Videospielsucht
  • Verhältnis real life – virtual life
  • Siri/Chatbots

Teilaufgabe 4.2

$\blacktriangleright$ Compare the human-robot interaction in Text A with other experiments in literature/film where humans and machines come into conflict. Assess how they deal with the situation.

In dieser Aufgabe sollst du den Konflikt zwischen Mensch und Maschine im Text A mit einem anderen ähnlichen Konflikt aus einem Buch oder einem Film vergleichen. Am besten sind hierbei Werke, in denen es darum geht, ob Maschinen wie Lebewesen zu behandeln sind oder nicht. Beim Vergleich sollst du zudem bewerten, wie Mensch und Maschine mit der Situation umgehen. Gefragt ist also nicht nur, dass du das menschliche Konfliktverhalten gegenüberstellst, sondern dass du dir auch Gedanken machst, ob dieses Verhalten moralisch/logisch ist.

Mögliche Vergleichspunkte sind:
  • Bewertung von Maschinen nach ihrem Nutzen
  • Moralische Verantwortung von Menschen gegenüber Maschinen
  • Gefühle/Persönlichkeit von Maschinen
  • Recht der Menschen über „Leben und Tod“ der Maschinen

Teilaufgabe 4.3

$\blacktriangleright$ Reflect on the implications of Bartneck's experiment (Text A) and the benefits as well as risks of a robot-aided society. Imagine you were writing an article for the online magazine Science News for Students.

In dieser Aufgabe sollst du über die Bedeutung von Bartnecks Experiment und die Vor- und Nachteile einer auf Roboter gestützten Gesellschaft reflektieren. Denke also tiefgründig nach und versuche, verschiedene Standpunkte und mögliche Meinungen zu beachten. Wäge diese Meinungen ab und komme zu einem persönlichen Schluss über das Thema. Bei dieser Aufgabe wird großer Wert darauf gelegt, dass du deine eigene Meinung begründet darlegst. Text A soll dir vor allem als Denkanstoß dienen. Dein Text soll als Artikel für das Online-Magazin Science News for Students gestaltet sein. Schreibe also möglichst unterhaltsam und nicht zu kompliziert.

Gute Punkte für eine Diskussion sind:
  • Fähigkeit der Roboter, menschliches Verhalten zu analysieren
  • Effektive Arbeitskräfte, aber unflexibel
  • Datenschutz
  • Abhängigkeit von Robotern
Lösungen

Themen:

Science and technology
Personal relations in their social context

Textgrundlagen:

Spiegel, Alix: „No Mercy For Robots: Experiment Tests How Humans Relate To Machines“. National Public Radio, January 28, 2013. Abgerufen am 3. März 2014. http://www.npr.org/blogs/health/2013/01/28/170272582/do-we-treat-our-gadgets-like-they-re-human.
Ladurner, Ulrich: „Wenn Roboter töten“. Die Zeit, 3. März 2013. Entnommen am 3. März 2014. http://www.zeit.de/2013/03/Roboter-Maschinenkrieg-Drohnen.

Teilaufgabe 1

$\blacktriangleright$ Outline the information about Bartneck's experiment and the implications for machine-human interaction as presented in Text A.


In the excerpt from Alix Spiegel's article No Mercy For Robots: Experiment Tests How Humans Relate To Machines, published by National Public Radio on January 28, 2013, the author discusses the interaction between humans and robots, taking an experiment by Christoph Bartneck as a starting point.

In Bartneck's experiment, several people had to play a computer game together with a robot cat that was able to talk. Bartneck programmed the robot to vary in its degree of helpfulness and social skill. After finishing the game, the participants were required to switch the robot off. They were also told that doing so would erase the robot's “personality”. The robot, however, pleaded for its life. Every participant eventually managed to switch the robot off, but those who had played with a helpful version of the robot needed around 35 seconds on average to do so, struggling with their conscience.
According to Bartneck, the experiment helps to improve the quality of robots on the one hand. On the other hand, it demonstrates that human interaction with machines is changing, as humans “often treat the machines around us like social beings” (l. 3).

Teilaufgabe 2

$\blacktriangleright$ Analyze the way the author maintains the reader's interest in the experiment and its consequences. Give evidence from Text A.


At the beginning, Spiegel uses a picture of Bartneck's robot cat to attract his readers' attention; the picture serves both as an eye-catcher and as a visualization of the experiment. The text itself begins with a controversial thesis: Spiegel claims that “we often treat the machines around us like social beings” (l. 3). In this way, he keeps the interest of readers who might lose attention if the text opened with the summary of a scientific experiment. The controversial thesis also marks Bartneck's experiment as highly relevant to the readers.

Spiegel is careful to lead his readers from a general approach to the subject to the experiment's leading question. He begins with a quote by a professor, discussing the relationship between humans and machines in the context of general human behaviour (l. 6-9). The following questions, posed directly to the reader, stress the relevance of Bartneck's experiment. Spiegel uses the first person plural and the second person singular to address his readers directly: “Are we able to hold in mind that, in actual fact, this machine cares as much about being turned off as your television or your toaster […]?” (l. 11-12) In addition to that, he draws a connection between the rather specific experiment and his readers' everyday life, comparing the robot cat with devices they use daily.

Spiegel outlines the experiment only briefly, giving almost no specific data, avoiding numbers and using quotes to help readers understand the experiment on a very practical level: “So, if the robot would be agreeable, the robot would ask, 'Oh, could I possibly make a suggestion now?'” (l. 18-20) Also, Spiegel narrates the experiment as if it were a story, switching to the present tense at one point (l. 26-29) and using a dramatic-sounding quote: “They would essentially eliminate everything that the robot was – all of its memories, all of its behavior, all of its personality would be gone forever.” (l. 24-25) Instead of analyzing the experiment scientifically, Spiegel also uses a dialogue (l. 26-29) and humour (l. 33) to give an exciting and entertaining account of the study.

When illustrating the implications of the experiment, Spiegel again speaks directly to his readers, making clear that these implications affect the readers themselves: “More and more intelligent machines are integrated into our lives. They come into our beds, into our bathrooms.” (l. 42-43)

In conclusion, Spiegel tries to maintain the readers' interest by stressing the relevance of the experiment while using entertaining language.

Teilaufgabe 3

$\blacktriangleright$ Outline what Ulrich Ladurner writes about the use of robots and drones in war (Text B), imagining you were at a youth conference on “Human-Robot-Ethics”. Mediate the text.


Dear listeners, aside from the question of how robots and other machines change our everyday life, and how our attitude towards them changes too, let us take a look at another very serious issue. Progress in the construction of machines also affects the way we live and die – it affects war. Ulrich Ladurner published an alarming article titled “Wenn Roboter töten” (“When Robots Kill”) in Die Zeit on March 3, 2013. In it, he writes about the moral implications and the dangers of robotic warfare.

According to the author, the American government plans to replace human soldiers with robots by 2036. However, the use of robots in war raises ethical problems, as robots cannot differentiate between enemies and civilians. The question of who is responsible when robots kill civilians remains unanswered. Ladurner warns that the benefits of robotic warfare, such as sparing one's own soldiers and lowering the costs of war, are overshadowed by its disadvantages. War could be declared more easily, as no human soldiers have to be sent, and the enemy could also fight back more violently.
Ladurner concludes that people still die when human soldiers are replaced by robots and that the violence of war would not end. He strongly argues for a ban on automated killing machines, as they are too dangerous.

Teilaufgabe 4.1

$\blacktriangleright$ Reflect on the emotional ways people relate to interactive devices and computer games. Refer to Bartneck's findings in Text A and other examples you are familiar with.


We live in times of rapid change and tremendous technological advances. The students now heading for their Abitur are the first generation to have been brought up in an environment in which the internet, smartphones and computer games have long been established. As the world of machines and computers adjusts to human behaviour, we begin to adjust to it as well. Machines are no longer merely convenient devices. They have begun to affect our emotions.

Whereas we are surrounded by machines to which we have barely any relationship at all, such as our cooking devices or washing machines, there are other devices we treat as if they were persons. Bartneck's experiment has revealed that our relationship with machines depends on their helpfulness and their imitation of human behaviour. The participants in Bartneck's experiment even showed pangs of conscience when they were told to switch off a robotic cat that had helped them play a game. While it is not unusual for humans to have emotions towards objects in general – objects can be seen as representations of gods in religious fetishism – we live in times of cyber-fetishism. Sometimes we forget that machines are not human at all, which is admittedly irrational behaviour.

People often show short outbursts of anger, forgetting that machines lack consciousness and personality. Beating a TV that does not switch on is not primarily an attempt to make it work, but a way to punish the machine. This is an application of the universal rule of reciprocity demonstrated by Bartneck's experiment: if the machine is not nice to us, we are not nice to the machine. More than a few short-tempered video gamers destroy controllers or keyboards out of frustration with a game's outcome.
But emotional relationships with machines reach a new level when machines adapt to or learn to imitate human traits. Bartneck's robot had a face and a voice, two very elemental traits by which we recognise persons and judge their personality. The use of human language is what irritates us most. There are chatbots on the internet, such as Cleverbot, that are able to hold a conversation with humans, adapting to their personality and reacting kindly or even fighting back when insulted. The fact that some people spend hours writing to these kinds of bots demonstrates that interactive devices can become persons to us, entities with a distinct personality.

It is worrying that some of these devices manage to replace one's need for social interaction. There are reports of lonely youths who actually think of Apple's software Siri as their girlfriend. Machines can offer positive emotions that are not as easily available in real life, so that virtual life (based on machines) becomes the main focus for some. Some people are able to love electronic devices because these devices learn to adapt ever better to our behaviour. It is their responsiveness that makes them seem so human-like.

As robots and interactive devices progress, they become more and more a part of our lives, sometimes even replacing human relationships. We know that games are not living creatures, so most of us respond to them emotionally only in moments of extreme frustration or joy; devices that imitate our behaviour, however, are more difficult to distinguish from ourselves. While there is nothing wrong with feeling touched by a video game, just as one feels touched by a good book or movie, it is a serious issue when those emotions replace the importance of our real lives.

Teilaufgabe 4.2

$\blacktriangleright$ Compare the human-robot interaction in Text A with other experiments in literature/film where humans and machines come into conflict. Assess how they deal with the situation.


Bartneck's experiment has demonstrated that mankind might be on the way to turning mere interaction with machines into actual relationships with robots, interactive devices etc. As machines come to resemble us more and more, the borderline between people and machines may blur. In Steven Spielberg's movie A.I. Artificial Intelligence, this borderline is almost non-existent. Here, the conflict between humans and machines is much more complicated than having to switch off a robotic cat that cannot feel: David, a childlike robot intended to help Monica and Henry overcome their grief about their son Martin, who is in a coma, is actually able to love.

The participants in Bartneck's experiment were confronted with a moral dilemma, confused about how to react to a machine that pleads for its “life”. Even though they knew that the robotic cat was not a living being, some of them even talked to the cat about its fate. The more helpful the cat was, the longer the participants needed to switch it off. Apparently, they had developed a kind of emotional relationship with the robot.
In A.I. Artificial Intelligence, robots have long been introduced into society, replacing jobs in industry and elsewhere, even replacing prostitutes. However, people do not treat these robots as human beings, maybe because this fictional society is more aware of the opposition between humans and machines than our own. One great exception is a new type of robot that is not only able to feel pain but also able to love, because researchers have found out how emotions develop as electronic signals in the human brain. This causes a moral struggle far greater than the pangs of conscience the participants in Bartneck's experiment had to face. If robots are able to feel, in what way are they different from human beings, and how should humans treat them?

Whereas Alix Spiegel and Bartneck stress the ongoing change in human interaction with machines (ll. 44-49), A.I. Artificial Intelligence suggests that it is a long way from accepting robots in our everyday life to actually treating them as beings in their own right. Monica, for example, is initially appalled by David, vehemently emphasizing that he could never be a replacement for her comatose son. Shortly after his introduction, she sees him as an alien intruder and tries to avoid him. Robots with human features can not only win our favour but also make us feel uneasy, because they are lifeless objects imitating life. Monica, however, experiences pangs of conscience similar to those of the participants in Bartneck's experiment. Because of David's childlike looks and behaviour, she feels bad about treating him like an annoying toy and finally begins to develop motherly feelings for him. After imprinting him, which means that David will love her forever, she fully accepts him and is as proud of David as she would be of a “real” son.

The initial conflict flares up again when Martin wakes up from his coma. Now Monica and Henry are once again unsure how to behave towards David. Martin sees David as a rival, stating that David is a mere toy and that he himself is the one who possesses the right to be loved by his parents. This conflict is far more complex than the one in Bartneck's experiment, as it raises questions not only about how humans interact with machines, but also about whether machines could take over the roles of humans in our lives. In addition, one could argue that David possesses the right to be treated as a human being, as he develops an identity of his own. While the reactions of the participants in Bartneck's experiment are mainly irrational, Martin's rivalry and the parents' confusion are understandable, though unfair towards David. Their reactions are more extreme because David is an active part of their family life and not just a robot helping with a computer game.

After David, in a panic, has accidentally almost killed Martin, Monica finds herself in a situation similar to that of the test subjects told to switch off the robotic cat. Henry wants Monica to return David to the company that constructed him, which would mean his destruction, as his love for Monica will never end. Likewise, the participants in the experiment were told that switching off the robot would mean that “they would essentially eliminate everything that the robot was” (l. 24). Facing a far greater moral dilemma, Monica, unlike the participants in the experiment, does not do as she is told. Instead, she chooses to let David go, hoping that he will survive in the wilderness. However, this decision is much harder for her than having to switch off a talking robot cat. David begins to cry and promises to behave better, pleading not to be abandoned. Bartneck's cat was also able to plead for its “life”, but the participants knew that it had no feelings. Whereas they managed to silence their conscience, Monica does the only thing that guarantees David's survival, and yet she still feels remorse. It is clear that she will always remember this moment, as she feels deeply guilty about having imprinted David and then given him away. The participants in Bartneck's experiment felt uneasy switching off the robot, but I doubt that they will ever think of this action as the greatest mistake of their lives.

In conclusion, the reactions of Bartneck's test subjects are irrational but also understandable, as to them, the robot cat may have been likeable: it was able to talk and, to some, it “behaved” very helpfully. Switching off the robot may not have been easy for some, but it did not pose a great moral dilemma. The treatment of David, however, is much more problematic. Although he is able to feel not only pain but also love, people treat him as if he were just a toy, like the robot cat. Monica, Henry and Martin behave wrongly, as David deserves to be seen as a living creature. Even if his life is of synthetic origin, his autonomous personality makes him more similar to humans than to programmed robots such as Bartneck's cat. Monica lets him live, but she also betrays him and fails to live up to her role as a mother. A.I. Artificial Intelligence demonstrates that mankind might at some point be obliged to take responsibility for its creations.

Task 4.3

$\blacktriangleright$ Reflect on the implications of Bartneck's experiment (Text A) and the benefits as well as risks of a robot-aided society. Imagine you were writing an article for the online magazine Science News for Students.

Tip

In this task you are asked to reflect on the significance of Bartneck's experiment and on the advantages and disadvantages of a robot-aided society. Think the issue through carefully and try to consider different standpoints and possible opinions. Weigh up these opinions and come to a personal conclusion on the topic. In this task, great importance is placed on presenting your own opinion with supporting reasons. Text A should serve mainly as food for thought. Your text is to be written as an article for the online magazine Science News for Students, so write in an entertaining style and avoid overly complicated language.

Good points for a discussion are:
  • robots' ability to analyse human behaviour
  • effective workers, but inflexible
  • data protection
  • dependence on robots

When we talk about robots, artificial intelligence and the change in our relationship with machines, we assume we are talking about the future. However, this change is happening now. Machines have long been part of our everyday life; now they are becoming part of our social life, too. The change may not yet be as dramatic as in movies such as I, Robot or Surrogates, but do we not live in times of science fiction when even robotic cats are treated like human beings? This is exactly what happened in an experiment by the researcher Bartneck.

Bartneck's test subjects had to play a computer game with the aid of a robotic cat that was either helpful and “nice” or not so helpful and “mean”. Those who had played with a helpful cat clearly suffered pangs of conscience when they were told to switch it off. The better a machine is adapted to our behaviour and our expectations, the more likely it is that we develop a social relationship with it. Bartneck's experiment demonstrated that we are at the point of designing robots that can have an impact on our emotions. Machines are not only useful devices to us; we are actually treating them more and more like living creatures. What does this ongoing change mean? Are robots crucial for the progress of our society, or are they a threat to humanity itself? Let us first take a look at what good robots could do for us in the future.

As a generation that has grown up with Google, we have always known the benefits of engines that adapt to our behaviour and are able to analyze our way of thinking. This analytic ability is perhaps one of the most useful traits of robots. Even the most able detective is nothing compared to a simple analytical program that can evaluate thousands of bits and bytes. Robots would be perfectly suited to customer service and similar tasks. New information could easily be installed on a robot's hard disk, whereas humans need time to learn. Robots do not forget, they are reliable, and they will never have a bad day.
Additionally, they could fill a serious gap in our society: there are not enough carers and nurses in hospitals, orphanages and retirement homes. The friendlier a robot can be and the more humanly it can behave, the better for those in need of help and good care. Robots do not sleep, which is a big asset in the healthcare sector.
And now imagine personal assistants that know you better than you yourself or your best friends do! Machines, and even Google, manage to detect almost imperceptible traits and patterns of behaviour. By making these patterns visible, we could improve ourselves tremendously. Robots could open up a new era of self-improvement.

However, robots could also be a serious threat to our society. First of all, they are unable to react to unforeseen scenarios. Robots cannot act autonomously and have to rely on the data on their hard drives. In hospitals or retirement homes, this is a matter of life and death: as long as robots are not prepared for even the rarest scenarios, they are unreliable. Also, robots have to be maintained regularly, as a defect in a healthcare robot could be fatal for its patients. If humanity slowly gives up control, it becomes reliant on the work of robots. We may lose the ability to help ourselves, which would be a very poor trade for the comfort of hardworking machines.
Robots could also be misused as spies. If Google were a robot, what would tell us that it was not sending data about our personality, our everyday life and our behaviour to a big corporation seeking control over our lives? Bartneck's robotic cat was able to manipulate people, as they treated it like a living creature. Do we want to let machines into our living rooms that change the way we feel? Do we want to give up control over our data protection and privacy?

Technology is progressing fast, and even we, a generation brought up with the internet and ever-present media, do not know where we are heading. If humanity is able to build well-adapted and helpful robots that finally revolutionize our healthcare system, this would be a great asset, as long as we manage to control the robots. But I doubt that we have yet learned enough to live with extremely humanlike robots: some of us cannot even distinguish between our virtual and our real lives; some of us are best friends with our smartphones. Robots should not substitute for human interaction. Bartneck's experiment shows that we still have a lot to learn about our relationship with machines. Let us hope that we keep control over the machines we build, and that it is not the robots who control us.
