
Responsibility and Lethality for Unmanned Systems:

We then consider trust and deception from a somewhat non-traditional perspective, i.e., one in which the robot must decide when to trust or deceive its human counterparts. We illustrate that this is feasible through situational analysis and draw upon cognitive models from interdependence theory to explore this capability, while noting the ethical ramifications of endowing machines with such a talent. Finally, we explore the notion of maintaining dignity in human-robot interaction and suggest methods by which this quality of interaction could be achieved. While several meaningful experimental results are presented in this article to illustrate these points, it must be acknowledged that research on applying computational machine ethics to real-world robotic systems is still very much in its infancy. Robot ethics is a nascent field, having its origins in the first decade of the new millennium [Veruggio 05]. Hopefully the small steps toward achieving the goal of ethical human-robot interaction presented here will encourage others to help move the field forward, as the relentless pace of this technology is currently outstripping our ability to fully understand its impact on who we are as individuals, as a society, and as a species.