
AI Ethics and Safety Advisory

Nicholas Kluge

Updated: Nov 11, 2021



It is not uncommon for an individual, or a group of people, to find themselves before a decision-maker responsible for rendering some form of judgment based on a set of observable facts and characteristics (e.g., a judge in a civil court, an HR assessor in a job interview, or a bank manager deciding whether to authorize a loan). What is new is the use of statistical inference models, such as machine learning models, to automate such processes.


As systems built from machine learning increasingly affect people and society, their potential unintended consequences must be studied and recognized. To anticipate, prevent, and mitigate these consequences, we must understand when and how harm can be introduced throughout the life cycle of such systems (i.e., data collection, training, validation, testing, and deployment).
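To give a concrete flavor of what such a life-cycle check can look like, here is a minimal sketch of one possible audit at the validation stage: measuring whether a model's positive-prediction rate differs between two groups (a "demographic parity" gap). The data, group labels, and metric choice below are purely illustrative assumptions, not part of the manual described in this post.

```python
# Illustrative sketch (assumed example, not from the manual): auditing
# a model's validation-set predictions for a demographic parity gap.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs (1 = positive decision).
    groups: list of group labels, one per prediction (exactly two groups).
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Toy loan-approval predictions (1 = approved) for two groups, "a" and "b".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A large gap does not by itself prove the model is unfair, but flagging it during validation, before deployment, is exactly the kind of checkpoint a life-cycle view of harm makes possible.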


However, there are still few proposals for how to implement ethical principles and normative guidelines in the practice of developing these types of systems. The goal of this project is to bridge the gap between discourse and praxis, between abstract principles and technical implementation.


We are developing an AI Ethics and Safety manual: a guide composed of several tools to help developers implement AI systems ethically and robustly. This initiative is part of a partnership between the PUCRS School of Humanities, Tecnopuc, and NAVI AI New Ventures, and we hope to apply our methodology with companies linked to Tecnopuc that seek to embed ethical principles in the development of their products. We intend to publish the manual in open form as soon as it is finalized.



For more information, contact Nicholas Kluge (President of AIRES at PUCRS).
