Key Takeaways:
- The National Institute of Standards and Technology (NIST) is partnering with MITRE to create an AI Economic Security Center to secure U.S. critical infrastructure from cyber threats.
- The center will focus on developing and adopting AI-driven tools to help security personnel protect critical infrastructure systems, such as power plants and hospitals.
- The partnership aims to drive innovation and adoption of AI technology in the industry, while addressing threats from adversaries’ use of AI and reducing risks from insecure AI.
- The center will work on ensuring the reliability of mission-critical systems that rely on AI models and on developing technologies that mitigate the effects of AI glitches.
- The partnership is part of the Trump administration’s strategy to maintain America’s competitive advantage in AI research and deployment.
Introduction to the Partnership
The National Institute of Standards and Technology (NIST) has announced a partnership with the nonprofit research organization MITRE to create an AI Economic Security Center. The center's primary goal is to drive the development and adoption of AI-driven tools that help security personnel protect critical infrastructure systems, such as power plants and hospitals, from cyber threats. The partnership is part of the Trump administration's strategy to maintain America's competitive advantage in AI research and deployment, particularly amid increasing competition from China. Beyond defending critical infrastructure, the center will also address threats from adversaries' use of AI and work to reduce the risks posed by insecure AI.
The Need for Reliable Automation
Experts believe the new AI security center should prioritize research on the reliability of mission-critical systems that depend on AI models. Nick Reese, chief operating officer at Optica Labs, notes that while AI can simplify data analysis and service delivery, it is equally important that the decisions made using AI are accurate. Rather than focusing solely on protecting AI datasets and models from hackers, Reese suggests the center explore ways to create true AI assurance at the point where humans interact with systems. This is a critical research area: AI systems are often less reliable than traditional mechanical and electrical components, and critical infrastructure facilities have little tolerance for glitches.
The Importance of AI Assurance
Andrew Lohn, a senior fellow at Georgetown University's Center for Security and Emerging Technology, agrees that increasing reliability should be a priority for AI security research. AI can be impressive, Lohn notes, but it often falls short of the reliability demanded of traditional systems and components. He suggests that research focus on technologies that mitigate the risks of AI glitches rather than attempt to eliminate them entirely. This could mean designing systems and standards that account for the possibility of AI failure, much as safety measures are designed to accommodate human failings. By developing such technologies, the center can help ensure that AI systems are reliable and secure enough to be integrated into critical infrastructure.
The Role of NIST and MITRE
NIST and MITRE will work closely together to drive the development and adoption of AI-driven tools, concentrating on areas where collaborative development and pilot testing can demonstrate significant adoption impact at a fast pace of innovation. The partnership's goal is to help U.S. industry make smart choices about AI implementation while addressing pressing national challenges. To that end, the center will produce the technology evaluations and advancements needed to protect U.S. dominance in AI innovation, counter adversaries' use of AI, and reduce risks from reliance on insecure AI.
Conclusion and Future Directions
The partnership between NIST and MITRE is an important step toward securing U.S. critical infrastructure from cyber threats. By prioritizing research on reliable automation and AI assurance, the center can develop technologies that mitigate the risks of AI glitches and keep AI systems safe and secure, work that will be critical to maintaining America's competitive advantage in AI research and deployment. As the center begins its work, its progress will bear watching. With the right approach, it can drive both innovation and adoption of AI technology while addressing the pressing challenges facing the nation.