AI has created an interesting and dynamic inflection point for our society. It’s ushered in an era of rapid technological advancement that is revolutionizing industries and reshaping businesses on a global scale, and at the same time it has introduced an array of substantial risks that must not be underestimated. Recognizing the critical importance of safeguarding against these risks, organizations should be thinking hard about how best to navigate this new world.
One way of doing this is to prioritize the development and implementation of AI risk management frameworks and policies. The good news is that a large body of work already exists in this space. I’ve compiled a table of three prominent AI risk frameworks below, and then I’ll spend some time digging into the one I typically defer to, the NIST AI Risk Management Framework, walking through how I think about implementing it at a high level.
Prominent AI Risk Management Frameworks
| Framework | Topics Covered | Best Suited For |
| --- | --- | --- |
| NIST AI RMF 1.0 | Governance, Risk Identification and Assessment, Risk Analysis and Tracking, Risk Management Controls | All organizations, regardless of size or sector, that design, develop, deploy, and use AI systems |
| US DoE AI RM Playbook | AI Risk Education, Prevention Planning, Ethical and Equitable AI Governance, AI Risk Assessment Development and Implementation | AI leaders, practitioners, and procurement teams in organizations |
| Wharton AI Risk and Governance | Data-Related Risks, AI/ML Attacks, Testing and Trust, Compliance | Financial services firms and other organizations adopting AI systems |
References: NIST AI RMF 1.0 / US DoE AI RM Playbook / Wharton AI Risk and Governance
NIST AI Risk Management Framework (AI RMF 1.0)
The National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework (AI RMF 1.0) to help organizations manage risks associated with AI. The framework is intended to improve the development, use, and evaluation of AI products, services, and systems.
The NIST AI RMF is operationalized through four functions: Govern, Map, Measure, and Manage. The ‘Govern’ function establishes governance structures and processes to build a culture of AI risk management across the organization. ‘Map’ identifies and assesses the risks associated with AI systems and the people involved in using AI within the organization. ‘Measure’ assesses, analyzes, and tracks exposure to the identified AI risks. Finally, ‘Manage’ implements and maintains risk management controls to mitigate those risks.
What Does Implementation Look Like?
Now that we have a high-level idea of what the framework provides, how about some more tactical ideas? Below are some suggestions on how an organization might start to think about implementing the AI RMF in a larger enterprise environment.
1. Govern: Establish AI Risk Governance Structures
- Develop clear governance structures and processes to own AI risk management across the organization. This could be your “AI Risk Council”, or your “AI Review Board”.
- Define roles and responsibilities for individuals and teams involved in AI projects to ensure accountability (see the sketch after this list). These people will be your day-to-day interfaces with this governance structure.
- Integrate AI risk management into existing governance frameworks, such as IT governance or compliance programs. Think through how existing teams and other governance approaches will integrate.
- Promote collaboration between business units, IT, and risk management teams to align AI initiatives with overall organizational objectives. We need to make sure people are aware of this work and that they understand how it’s relevant to their specific areas.
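To make that ownership concrete, here’s a minimal sketch (in Python, purely illustrative) of what a machine-readable intake record for an AI Review Board might look like. The record type, role names, and review states are my assumptions, not anything prescribed by the NIST AI RMF.

```python
# Hypothetical intake registry an AI Review Board might keep per AI system.
# Field names and review states are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One AI system and the people accountable for it."""
    name: str
    business_owner: str              # accountable for business outcomes
    technical_owner: str             # accountable for build and operations
    risk_owner: str                  # day-to-day interface with governance
    review_status: str = "pending"   # pending | approved | rejected
    notes: list[str] = field(default_factory=list)

registry: list[AISystemRecord] = [
    AISystemRecord(
        name="support-ticket-triage",
        business_owner="Head of Customer Support",
        technical_owner="ML Platform Team",
        risk_owner="AI Risk Council Liaison",
    ),
]

for record in registry:
    print(f"{record.name}: status={record.review_status}, "
          f"risk owner={record.risk_owner}")
```

Even a lightweight registry like this forces the accountability question: every AI system has to name its owners before it goes anywhere near review.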
2. Map: Identify and Assess AI Risks
- Begin by conducting a thorough assessment of AI systems and the people using AI within your organization. Yes, leg-work time. Get out there, investigate, document and compile!
- Identify potential risks associated with AI, including data privacy, security, bias, and legal compliance. This may vary across industries, but what we want here is a list of critical activities and functions that are associated with or impacted by AI initiatives.
- Assess the impact and likelihood of each identified risk so you can prioritize effectively and focus on the highest-risk activities (see the sketch after this list).
- Collaborate with relevant stakeholders, including data scientists, data engineers, and legal experts, to gain a comprehensive understanding of AI risks. This provides a diverse, well-rounded set of perspectives to inform a holistic view of the scope of risk.
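As a concrete illustration of the prioritization step, here’s a minimal sketch using a simple 1–5 impact × likelihood rubric. The risk entries, categories, and scoring scale are illustrative assumptions; substitute whatever rubric your organization standardizes on.

```python
# A minimal AI risk register, assuming a 1-5 impact x likelihood rubric.
# All entries and scores are illustrative, not findings from a real assessment.

risks = [
    # (risk, category, impact 1-5, likelihood 1-5)
    ("Training data contains personal data", "data privacy", 5, 3),
    ("Model output exhibits demographic bias", "bias", 4, 3),
    ("Prompt injection against customer chatbot", "security", 4, 4),
    ("Use of AI output violates sector regulation", "legal compliance", 5, 2),
]

# Prioritize by a simple impact x likelihood product; real programs may
# weight impact more heavily or use qualitative bands instead.
prioritized = sorted(risks, key=lambda r: r[2] * r[3], reverse=True)

for risk, category, impact, likelihood in prioritized:
    print(f"score={impact * likelihood:>2}  [{category}] {risk}")
```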
3. Measure: Analyze and Track Exposure to AI Risks
- Develop key performance indicators (KPIs) and metrics to assess and analyze AI risk exposure. We need to be able to report back on progress and monitor risk. Dashboard time! A great idea here is to associate an AI risk score with core initiatives or larger departments (see the sketch after this list).
- Regularly monitor and track these metrics to stay informed about changes in risk levels. Think about what recurring meetings or briefings already exist that could serve as regular check-in points.
- Implement AI risk assessment tools and software to automate data collection and analysis. This is still a nascent space, but just as automated security assessment tools have popped up, this capability will be here in no time. Until then, create a standard rubric and follow it consistently.
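Here’s one way the risk-score idea could look in practice: a minimal sketch that rolls individual rubric scores up into per-department metrics suitable for a dashboard. The department names, scores, and the choice to report both worst-case and mean are illustrative assumptions.

```python
# Roll per-assessment rubric scores up into per-department AI risk metrics.
# Departments and scores are illustrative assumptions.
from collections import defaultdict

# (department, risk score 1-25 from the impact x likelihood rubric above)
assessments = [
    ("customer-support", 16),
    ("customer-support", 9),
    ("marketing", 6),
    ("finance", 20),
]

totals: dict[str, list[int]] = defaultdict(list)
for department, score in assessments:
    totals[department].append(score)

# Report both worst case and mean per department.
for department, scores in sorted(totals.items()):
    mean = sum(scores) / len(scores)
    print(f"{department}: worst={max(scores)}, mean={mean:.1f}, n={len(scores)}")
```

Tracking both numbers over time is the point: a rising worst-case score flags a single dangerous initiative, while a rising mean flags broadening exposure.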
4. Manage: Implement and Maintain Risk Management Controls
- Develop and implement risk mitigation strategies based on the risk assessment findings (see the sketch after this list). How do we plan to avoid the pot-holes ahead?
- Establish clear procedures for responding to and containing AI-related incidents and breaches. We hit a pot-hole! Now what? This shouldn’t be a real-time reaction; we need a plan in place before it happens.
- Regularly update risk management controls to adapt to evolving AI technologies and emerging risks. The AI space moves at a pace few technologies match. We need to ensure we’re actively monitoring new developments and are prepared to incorporate them into our planning process.
- Ensure that risk management practices align with legal and regulatory requirements, including GDPR, CCPA, or industry-specific regulations. This is obvious, but important, so I’ve included it. Your AI risk management work needs to support, not contradict, other regulatory and compliance requirements.
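To tie the mitigation, incident-response, and review points together, here’s a minimal sketch of a controls tracker with a staleness check, so mitigations get revisited on a schedule rather than forgotten. The field names, runbook paths, and 90-day review cadence are all illustrative assumptions.

```python
# Track mitigations against identified risks and flag stale controls.
# Field names, runbook paths, and the review cadence are illustrative.
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=90)  # assumed quarterly review cycle

controls = [
    {
        "risk": "Prompt injection against customer chatbot",
        "mitigation": "Input filtering + output moderation layer",
        "incident_runbook": "runbooks/ai-incident-response.md",
        "last_reviewed": date(2024, 1, 15),
    },
    {
        "risk": "Training data contains personal data",
        "mitigation": "PII scrubbing pipeline before training",
        "incident_runbook": "runbooks/data-privacy-breach.md",
        "last_reviewed": date(2023, 9, 1),
    },
]

today = date.today()
for control in controls:
    overdue = today - control["last_reviewed"] > REVIEW_CADENCE
    flag = "REVIEW OVERDUE" if overdue else "ok"
    print(f"{flag:>14} | {control['risk']} -> {control['mitigation']}")
```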
Hopefully this has been helpful, or has at least given you a starting point for thinking through AI risk management. Foundational work on this topic should be a core component of any organization’s AI strategy, and I wouldn’t consider any strategy complete without it.
I know risk management frameworks can be a dry topic, but organizations really need to view this work as an enabler, not a hindrance or a set of hoops to jump through. A solid AI risk management framework not only safeguards business operations but also builds trust with stakeholders and consumers alike. As AI continues to evolve and shape our world, the organizations that prioritize and implement robust AI risk management frameworks will not only thrive in this new era but also lead the way toward a responsible and sustainable AI-enabled future.