Artificial intelligence (AI) powers many of the apps and services people use in daily life. With billions of people using AI in fields from business to healthcare to education, it is critical that leading AI companies work to ensure that the benefits of these technologies outweigh the harms, creating the most helpful, safe, and trusted experiences for all.
Responsible AI considers the societal impact of developing and scaling these technologies, including their potential harms and benefits. The AI Principles provide a framework that includes objectives for AI applications, as well as applications we will not pursue in the development of AI systems.
As AI development accelerates and the technology becomes more ubiquitous, it is critical to incorporate Responsible AI practices into every stage of the workflow, from ideation to launch. The following dimensions are key components of Responsible AI and are important to consider throughout the product lifecycle.
Fairness addresses the disparate outcomes that end users may experience through algorithmic decision-making, related to sensitive characteristics such as race, income, sexual orientation, or gender. For example, might a hiring algorithm be biased for or against applicants whose names are associated with a particular gender or ethnicity?
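One common way to surface such disparities is to compare selection rates across groups. The sketch below is a minimal, hypothetical illustration (the data, group labels, and 0.8 rule-of-thumb threshold are assumptions, not part of the original text) of computing per-group selection rates and a disparate impact ratio for a screening process:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-decision (selection) rate per group.

    `decisions` is a list of (group, selected) pairs, where `selected`
    is True if the applicant advanced.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    A widely cited rule of thumb flags ratios below 0.8 for review.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two groups of 100 applicants each.
decisions = ([("A", True)] * 50 + [("A", False)] * 50
             + [("B", True)] * 30 + [("B", False)] * 70)
rates = selection_rates(decisions)
print(rates)                          # {'A': 0.5, 'B': 0.3}
print(disparate_impact_ratio(rates))  # 0.6 -> worth investigating
```

A low ratio does not by itself prove unfairness, but it is a signal that the pipeline's outcomes deserve closer scrutiny.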
Learn more about how machine learning systems might be susceptible to human bias in the accompanying video.
Accountability means being held responsible for the effects of an AI system. This involves transparency, or sharing information about system behavior and organizational process, which may include documenting and sharing how models and datasets were created, trained, and evaluated.
Another dimension of accountability is interpretability: the degree to which humans can understand an ML model's decisions and identify the features that lead to a prediction. Relatedly, explainability is the ability to explain a model's automated decisions in terms humans can understand.
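For some model classes, identifying the features that lead to a prediction is straightforward. A minimal sketch, assuming a simple linear scoring model with made-up weights and feature names (all hypothetical, not from the original text), breaks a prediction into per-feature contributions:

```python
def explain_linear_prediction(weights, features, bias=0.0):
    """Break a linear model's score into per-feature contributions.

    For a linear model, each feature's contribution to the score is
    simply weight * value, which makes the prediction directly
    interpretable: humans can see which features drove the decision.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    # Rank features by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical loan-scoring model with three features.
weights = {"income": 2.0, "debt": -1.0, "tenure": 0.5}
features = {"income": 1.0, "debt": 3.0, "tenure": 2.0}
score, ranked = explain_linear_prediction(weights, features)
print(score)   # 0.0
print(ranked)  # [('debt', -3.0), ('income', 2.0), ('tenure', 1.0)]
```

More complex models (deep networks, large ensembles) are not interpretable this way, which is why dedicated post-hoc explainability techniques exist for them.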
Privacy practices in Responsible AI involve considering the potential privacy implications of using sensitive data. This includes not only respecting legal and regulatory requirements, but also considering social norms and individuals' typical expectations. For example, what safeguards need to be put in place to protect the privacy of individuals, given that ML models may memorize or reveal aspects of the data they were exposed to? What steps are needed to give users adequate transparency and control over their data?
The advent of large generative models introduces new challenges for implementing Responsible AI practices, due to their potentially open-ended output capabilities and their many potential downstream uses.
The quiz is available on Wooclap, but you can already preview the questions.