Unpacking the Future of Artificial Intelligence Innovation Act
AI isn’t just rapidly becoming part of our lives; it’s also drawing scrutiny from lawmakers at the state and federal levels. The Future of Artificial Intelligence Innovation Act of 2024 (S. 4178), sponsored by Senators Maria Cantwell, Todd Young, Marsha Blackburn, John Hickenlooper, Ben Ray Luján, and Roger Wicker, is a prime example of the forthcoming policies AI companies will need to follow to keep operating successfully.
The Act authorizes the creation of the U.S. AI Safety Institute within the National Institute of Standards and Technology (NIST). This institute will develop voluntary standards and collaborate with national labs to establish testbeds that accelerate AI innovation.
Building on NIST’s AI Risk Management Framework
In practice, the forthcoming U.S. AI Safety Institute will build on NIST’s AI Risk Management Framework (AI RMF). Released in January 2023, the AI RMF provides voluntary guidelines for designing, developing, and deploying AI systems with a focus on risk management. Although non-binding, the framework is widely anticipated to shape future regulatory standards.
The framework is structured around four core functions: Govern, Map, Measure, and Manage. Govern establishes an organizational culture of risk management; Map identifies AI risks in context; Measure assesses, analyzes, and tracks those risks; and Manage prioritizes them and acts on them.
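To make that loop concrete, here is a minimal sketch of how a team might track a single risk through the four functions. All class names, fields, and values below are illustrative assumptions, not structures prescribed by NIST:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Risk:
    """One entry in an AI risk register (fields are illustrative)."""
    name: str
    context: str                         # Map: where and how the risk arises
    severity: Severity = Severity.LOW    # Measure: assessed impact
    mitigation: str = ""                 # Manage: planned or applied response


@dataclass
class RiskRegister:
    """Toy register loosely mirroring the RMF's Govern/Map/Measure/Manage loop."""
    owner: str                           # Govern: accountable party
    risks: list[Risk] = field(default_factory=list)

    def map_risk(self, name: str, context: str) -> Risk:
        risk = Risk(name=name, context=context)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: Severity) -> None:
        risk.severity = severity

    def manage(self, risk: Risk, mitigation: str) -> None:
        risk.mitigation = mitigation


# Usage: track a bias risk through the lifecycle.
register = RiskRegister(owner="ML Platform Team")
bias = register.map_risk("demographic bias", "resume-screening model")
register.measure(bias, Severity.HIGH)
register.manage(bias, "retrain on balanced data; add a fairness audit")
```

However a team chooses to record this, the point of the framework is the same: every identified risk should have an owner, an assessed severity, and a documented response.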
Defining Trustworthy AI Systems
The framework also defines seven characteristics of trustworthy AI systems:
- Safe: Prevents physical or psychological harm.
- Secure and Resilient: Protects against attacks and withstands adverse events.
- Explainable and Interpretable: Offers clear understanding of its mechanisms and outputs.
- Privacy-Enhanced: Safeguards autonomy by protecting confidentiality and control.
- Fair and Bias-Managed: Ensures equity by managing biases.
- Accountable and Transparent: Provides accessible information at various stages of the AI lifecycle.
- Valid and Reliable: Performs as intended, confirmed through continuous testing and monitoring (see the sketch after this list).
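As one example of what “valid and reliable” can mean operationally, here is a toy release gate that blocks a model deployment when reliability metrics regress. The thresholds, metric names, and gating logic are hypothetical, invented for illustration rather than taken from the NIST framework:

```python
import statistics

# Hypothetical thresholds; real values depend on your risk tolerance.
MIN_ACCURACY = 0.95
MAX_LATENCY_MS = 200.0


def validate_release(accuracies: list[float], latencies_ms: list[float]) -> bool:
    """Gate a model release on reliability metrics from a holdout run.

    A toy illustration of the 'valid and reliable' characteristic:
    the model only ships if it performs as intended on recent data.
    """
    mean_accuracy = statistics.mean(accuracies)
    p95_latency = sorted(latencies_ms)[int(0.95 * len(latencies_ms))]

    checks = {
        "accuracy": mean_accuracy >= MIN_ACCURACY,
        "latency": p95_latency <= MAX_LATENCY_MS,
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())


# Usage: fail the deployment pipeline if the candidate model regresses.
if not validate_release(accuracies=[0.97, 0.96, 0.94],
                        latencies_ms=[120.0, 180.0, 210.0]):
    raise SystemExit("Release blocked: reliability checks failed.")
```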
Starting with SOC 2
With these characteristics in mind, AI companies should aim to become SOC 2 or ISO 27001 compliant: both frameworks share many characteristics with the NIST guidelines and with the regulations likely to follow from the Future of Artificial Intelligence Innovation Act of 2024.
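Much of SOC 2 preparation comes down to continuously collecting evidence that security controls are actually in place. As a hedged illustration only, the sketch below checks a hypothetical service inventory against a few common controls; the control names and config schema are invented for the example and are not AICPA criteria:

```python
# Toy evidence check loosely inspired by SOC 2's security criteria.
# The service inventory and control names are hypothetical.
SERVICES = {
    "api-gateway": {"mfa_required": True, "audit_logging": True, "encryption_at_rest": True},
    "batch-worker": {"mfa_required": True, "audit_logging": False, "encryption_at_rest": True},
}

REQUIRED_CONTROLS = ("mfa_required", "audit_logging", "encryption_at_rest")


def collect_evidence(services: dict) -> list[str]:
    """Return a list of control gaps to hand to an auditor (or fix first)."""
    gaps = []
    for name, config in services.items():
        for control in REQUIRED_CONTROLS:
            if not config.get(control, False):
                gaps.append(f"{name}: {control} not enabled")
    return gaps


for gap in collect_evidence(SERVICES):
    print("GAP:", gap)
```

Automating checks like this is what turns compliance from a yearly scramble into a routine byproduct of good engineering.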
Koop’s customer assurance platform helps tech companies seamlessly navigate the complexities of business insurance, regulatory compliance, and security automation in one place.
We provide a comprehensive suite of insurance coverage, including General Liability, Technology Errors & Omissions, Cyber Liability, and Management Liability, coupled with the most cost-effective SOC 2 compliance certification on the market.
Ready to learn more? Visit our website at https://www.koop.ai or drop us a note at [email protected].