As discussed in the previous unit, Microsoft has developed and refined its own internal process to govern AI responsibly. This unit explains how that governance system works in practice. While every organization needs its own governance framework and review process, we believe our sensitive use framework can serve as a helpful starting point. One of Microsoft's early steps in its responsible AI governance process was to introduce a sensitive uses review trigger. This framework helps our internal and customer-facing teams identify when specific use cases need more guidance.
Microsoft sensitive use case framework
Per our responsible AI governance documentation, we consider an AI development or deployment scenario a “sensitive use” if it falls into one or more of the following categories:
- Denial of consequential services: The scenario involves the use of AI in a way that may directly result in the denial of consequential services or support to an individual (for example, financial, housing, insurance, education, employment, or healthcare services).
- Risk of harm: The scenario involves the use of AI in a way that may create a significant risk of physical, emotional, or psychological harm to an individual (for example, life or death decisions in military, safety-critical manufacturing environments, healthcare contexts, almost any scenario involving children or other vulnerable people, and so on).
- Infringement on human rights: The scenario involves the use of AI in a way that may result in a significant restriction of personal freedom, opinion or expression, assembly or association, privacy, and so on (for example, in law enforcement or policing).
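The triage rule above is simple: a use case is flagged if it falls into one or more of the three categories. As an illustration only, the check could be sketched as follows; the names here are hypothetical and not part of any Microsoft tooling.

```python
from enum import Enum, auto

class SensitiveUseCategory(Enum):
    """The three sensitive use categories described above."""
    DENIAL_OF_CONSEQUENTIAL_SERVICES = auto()
    RISK_OF_HARM = auto()
    INFRINGEMENT_ON_HUMAN_RIGHTS = auto()

def needs_sensitive_use_review(categories: set) -> bool:
    # A scenario is a "sensitive use" if it falls into one or
    # more categories; any non-empty set triggers further review.
    return len(categories) > 0

# Example: an AI system screening loan applications touches
# "denial of consequential services" and should be escalated.
loan_screening = {SensitiveUseCategory.DENIAL_OF_CONSEQUENTIAL_SERVICES}
print(needs_sensitive_use_review(loan_screening))  # True
print(needs_sensitive_use_review(set()))           # False
```

In practice the judgment of whether a scenario falls into a category is made by people, not code; the sketch only captures the escalation rule that any single category match is enough to trigger review.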
We train our employees to use this framework to determine whether an AI use case should be flagged for further review—whether they’re a seller working with a customer or someone working on an internal AI solution. We also train our Responsible AI Champs for their role as liaisons between employees and central governance teams.