Ethics of Artificial Intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is concerned both with the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings, and with the moral behavior of artificial moral agents (AMAs).

Curated by Anna Hawes

Curated Facts

A common criticism of the AI project is that a computer only does what it is programmed to do, that it is without the mysterious property called free will and, therefore, can never become "moral." ... Some philosophers might even be tempted to say that a machine, however intelligent, which is without the capacity to value and to make moral choices cannot be considered an end in itself and, therefore, could not be the possessor of certain human rights.

Article: Artificial Intelligence a…
Source: Offline Book/Journal

The best-known set of guidelines for robo-ethics are the “three laws of robotics” coined by Isaac Asimov, a science-fiction writer, in 1942. The laws require robots to protect humans, obey orders and preserve themselves, in that order. Unfortunately, the laws are of little use in the real world. ... Regulating the development and use of autonomous robots will require a rather more elaborate framework.

Article: Robot ethics: Morals and ...
Source: The Economist

Machines that are both autonomous and beneficent will require some kind of moral framework to guide their activities. ... It should be noted, of course, that the type of artificial intelligence of interest to Čapek and today’s writers — that is, truly sentient artificial intelligence — remains a dream, and perhaps an impossible dream. But if it is possible, the stakes of getting it right are serious enough that the issue demands to be taken somewhat seriously, even at this hypothetical stage.

Article: Machine Morality and Huma...
Source: The New Atlantis

[Judea Pearl is] working on a calculus for counterfactuals—sentences that are conditioned on something that didn't happen. ... It's kind of like an alternative reality—you have to give the computer the knowledge. The ability to process that knowledge moves the computer closer to autonomy. It allows them to communicate by themselves, to take responsibility for their actions, a kind of moral sense of behavior. These are issues that are interesting—we could build a society of robots that are able to communicate with the notion of morals.

Article: Artificial Intelligence P...
Source: US News and World Report
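
Pearl's "calculus for counterfactuals" is usually operationalized as a three-step recipe over a structural causal model: abduction (infer the unobserved background conditions consistent with what was actually seen), action (surgically set the variable to the value that "didn't happen"), and prediction (recompute the outcome). The Python sketch below illustrates the idea on a deliberately tiny toy model; the variables (X, Y, u_x, u_y) and the counterfactual() helper are illustrative assumptions of mine, not anything from the quoted article.

    # A minimal sketch of Pearl-style counterfactual reasoning over a toy
    # structural causal model (SCM). The model, assumed for illustration:
    #   X := u_x
    #   Y := X or u_y    (Y occurs if X occurs, or on its own via u_y)

    def counterfactual(observed_x, observed_y, hypothetical_x):
        """Given that we saw (X, Y), what would Y have been had X differed?"""
        # 1. Abduction: enumerate exogenous settings consistent with the evidence.
        consistent = [
            (u_x, u_y)
            for u_x in (False, True)
            for u_y in (False, True)
            if u_x == observed_x and (u_x or u_y) == observed_y
        ]
        # 2. Action: override the mechanism for X with the hypothetical value.
        # 3. Prediction: recompute Y for every consistent noise setting.
        return {hypothetical_x or u_y for _, u_y in consistent}

    # "We saw X and Y both happen; would Y still have happened without X?"
    print(counterfactual(True, True, False))  # {False, True}

The ambiguous answer is the point: the evidence never pins down the background condition u_y, so the machine must carry knowledge about circumstances it did not directly observe, exactly the "alternative reality" knowledge Pearl says has to be given to the computer.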

Human morality tends to be complex and hard to put into practice because of its diversity and relativity, and in practice it is often reduced to professional deontology. Even past and present moral theories (ethical systems) exhibit serious weaknesses, now studied analytically by computer ethics. Such conceptual problems, which can make it difficult for moral norms to function as rules for machines, are not the consequence of theoretical errors; they are generated by the very nature of human morality and its complexity.

Article: Moral Intelligence for Hu…
Source: Offline Book/Journal

It might prove to be the case that no hierarchy of normative principles can do justice to the complexity of personal, moral choice. It also might be that the self-reflexively conscious ego of a sophisticated AI would take no programming at all, and that it would pick and choose its own rules, rules it learns through the trials and errors of time.

Article: Artificial Intelligence a…
Source: Offline Book/Journal

Since evolution selects for the fittest individuals, morality can be viewed as having evolved from the superior benefits it provides. This is demonstrated by mathematical models, as described in game theory, of conflict and cooperation between intelligent rational decision-makers. So while natural selection will invariably lead intelligences to a morality-based cooperation, it is in the best interest of humanity to accelerate the artificial intelligence’s transition from conflict to collaboration.

Article: Morality of the Machine: ...
Source: h+ Magazine
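
The "mathematical models ... of conflict and cooperation" referenced here are typically variants of the iterated prisoner's dilemma. A short Python sketch, using standard textbook payoffs and strategies rather than anything drawn from the quoted article, shows why repeated interaction can favor cooperation:

    # Iterated prisoner's dilemma: cooperation can out-earn defection over
    # repeated play. Payoff values and strategy names are the usual textbook
    # choices, assumed here for illustration.

    PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's previous move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(a, b, rounds=200):
        """Total payoffs for strategies a and b over repeated rounds."""
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(hist_b), b(hist_a)  # each sees the other's past
            score_a += PAYOFF[(move_a, move_b)]
            score_b += PAYOFF[(move_b, move_a)]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score_a, score_b

    print(play(tit_for_tat, tit_for_tat))      # (600, 600): mutual cooperation
    print(play(always_defect, always_defect))  # (200, 200): mutual defection
    print(play(tit_for_tat, always_defect))    # (199, 204): defection edges out
                                               # one pairing, forfeits the rest

A defector beats any single cooperative partner by a hair, yet across a population of repeated encounters the cooperators' mutual payoffs dominate, which is the sense in which selection can favor morality-like cooperation.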

Whether predetermined or not, the fact is that machines are involved in all sorts of decisions, from approving credit card transactions to allocating resources in hospitals. They are even being deployed as automatic sentries on national borders. I'm not sure whether this means that they have "transcended into the sphere of decision making" but it does mean that without direct human oversight machines are selecting among options that have moral consequences.

Article: Morality and Artificial I…
Source: Offline Book/Journal

As they become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency. Weapons systems currently have human operators “in the loop”, but as they grow more sophisticated, it will be possible to shift to “on the loop” operation, with machines carrying out orders autonomously. As that happens, they will be presented with ethical dilemmas.

Article: Robot ethics: Morals and ...
Source: The Economist

Whether you're a utilitarian or Kantian, Christian or Buddhist, you can agree that stabbing the stranger sitting next to you on the train is morally bad... Of course, there's lots of disagreement about what constitutes a harm, and when it is acceptable to cause a harm, but our most basic premise was that most machines that are currently making harmful decisions don't even have the means to take those harms into account when making these decisions.

Article: Morality and Artificial I…
Source: Offline Book/Journal