Robohub is creating a series of "robotics and AI ethics" posts. The first is a roundup of notable rules for AIs and robots, from Asimov's Three Laws through rules published by scholarly and technical groups such as the ACM and IEEE, trade union groups such as UNI, and multistakeholder groups such as the Montréal Declaration for Responsible AI draft principles.
The ACM US Public Policy Council Principles for Algorithmic Transparency and Accountability (Jan 2017)
1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.
2. Access and redress: Regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.
3. Accountability: Institutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results.
4. Explanation: Systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made. This is particularly important in public policy contexts.
5. Data Provenance: A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process.
6. Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.
7. Validation and Testing: Institutions should use rigorous methods to validate their models and document those methods and results.
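The data provenance and auditability principles (5 and 6) can be made concrete with a small sketch: a decision log that records the model version, a description of where the training data came from, and the exact inputs behind each decision, so a suspected harm can be traced later. The class, field names, and loan example below are hypothetical illustrations, not part of the ACM document.

```python
import json
import hashlib
from datetime import datetime, timezone

class AuditLog:
    """Minimal append-only record of algorithmic decisions (illustrative)."""

    def __init__(self):
        self.records = []

    def log_decision(self, model_version, provenance, inputs, decision):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            # Data provenance (principle 5): how the training data was collected
            "data_provenance": provenance,
            # Hash the inputs so this exact case can be matched during an audit
            "input_hash": hashlib.sha256(
                json.dumps(inputs, sort_keys=True).encode()
            ).hexdigest(),
            "inputs": inputs,
            "decision": decision,
        }
        self.records.append(record)
        return record

# Usage: record a hypothetical loan decision so it can be audited later
log = AuditLog()
rec = log.log_decision(
    model_version="credit-model-1.2",
    provenance={"dataset": "applications-2016", "collected_by": "manual entry"},
    inputs={"income": 42000, "years_employed": 3},
    decision="approved",
)
```

This is only a sketch; a production audit trail would also need tamper resistance and retention policies, which the principles leave to implementers.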
A round up of robotics and AI ethics: part 1 principles
(via 4 Short Links)