Social and ethical behavior in the Internet of Things

F Berman, VG Cerf - Communications of the ACM, 2017 - dl.acm.org
FEBRUARY 2017 | VOL. 60 | NO. 2 | COMMUNICATIONS OF THE ACM

Cerf's Up

…fails, is hacked, or acts with negative or unintended consequences, who is accountable, how, and to whom? A high-profile example of this is autonomous vehicles, which make many decisions without "a human in the loop." We currently expect automobile companies to be accountable if automotive systems, such as antilock brakes, fail. As cars begin to drive themselves, who should be responsible for accidents? As systems take on more decisions previously made by humans, it will be increasingly challenging to create a framework for responsibility and accountability.

3. How do we promote the ethical use of IoT technologies? Technologies have no ethics. Many systems can be used for both good and ill: video surveillance may be tremendously helpful in allowing senior citizens to stay in their homes longer and parents to monitor their newborns; it can also expose private behavior to unscrupulous viewers and unwanted intrusion. In his highly popular and visionary books, Isaac Asimov posited four laws of robotics [1, 2] on the basic theme that robots may not harm humans (or humanity), or, by inaction, allow humans (humanity) to come to harm. Asimov's Laws provide a glimpse into the social and ethical challenges that will need to be addressed in the IoT. How do we promote and enforce ethical behavior by both humans and intelligent systems? Will we need to develop and incorporate "artificial ethics" into automated systems to help them respond in environments where there are good and bad choices? If so, whose ethics should be applied?