President Biden has issued an Executive Order (EO) establishing new standards for AI safety and security. The move builds on earlier initiatives aimed at responsible AI innovation, including voluntary commitments from 15 leading technology companies to ensure safe and trustworthy AI development.
The EO’s objectives encompass safeguarding Americans’ privacy, advancing equity and civil rights, protecting consumers and workers, fostering innovation and competition, and bolstering American leadership on the global stage. The order was released in conjunction with the UK’s AI Safety Summit and aligns with the UK’s efforts to create a favorable regulatory framework for AI.
Recognizing the implications of AI growth for safety and security, the order outlines key actions to mitigate potential risks:
- Requiring developers of powerful AI systems to share safety test results and other critical data with the US government.
- Developing standards, tools, and tests to ensure AI system safety, security, and trustworthiness.
- Addressing the risks of AI’s misuse in engineering hazardous biological materials.
- Protecting against AI-enabled fraud and deception by establishing standards for detecting AI-generated content and authenticating official content.
- Establishing an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software.
- Directing the development of a National Security Memorandum to outline further actions on AI and security.
Casey Ellis, CTO and Founder of Bugcrowd, views the EO as a proactive approach to harnessing AI’s potential while mitigating the associated risks. The order also emphasizes the importance of data privacy and calls on Congress to pass bipartisan data privacy legislation. A recent ISACA survey highlighted concerns among digital trust professionals: 68% were worried about AI-related privacy violations and 77% about the spread of disinformation and misinformation.
White House Executive Order Addresses AI and Civil Rights
The White House has underscored the need to combat discrimination, bias, and abuses in justice, healthcare, and housing that can arise from irresponsible AI applications.
The EO seeks to prevent AI algorithms from exacerbating discrimination, acknowledging that some large language models (LLMs) exhibit social bias as a result of biased training data.
Andre Lutermann, from Microsoft Deutschland’s CTO Office, highlighted the importance of responsible AI principles during ISACA’s Digital Trust event in Dublin, emphasizing the need for guidelines on AI’s ethical boundaries, especially in LLM training.
The EO also addresses AI’s impact on employment by planning to produce reports on AI’s labor market effects and to develop principles and best practices that mitigate harm and maximize benefits for workers. A study by ISACA found that AI could be a significant job creator in the digital trust sector: 70% of respondents believed AI would positively affect their jobs, while 81% acknowledged the need for additional training to retain or advance their careers.
Furthermore, the Biden administration aims to promote responsible and effective government use of AI, including the rapid hiring of AI professionals.