What the Latest Executive Order Means for the AI Industry
- President Biden’s new executive order on AI covers eight key areas: AI safety and security; privacy; equity and civil rights; consumer, patient, student, and worker protections; innovation and competition; American leadership abroad; and government use of AI.
- By advancing the safer use of AI, this executive order should build public trust, mitigate much of the negative perception around AI, and in turn accelerate adoption.
- Veritone welcomes these regulations because they directly align with our AI for Good principles and product portfolio, uniquely positioning us at the forefront of companies prepared to meet this new regulatory future in AI.
President Biden’s recent artificial intelligence (AI) executive order marks a significant regulatory step in the ethical and responsible use of AI. For me and Veritone as a whole, it’s not a surprise. We’ve been waiting for direction from the White House for some time now. We’ve witnessed incremental regulations move through state legislatures. And New York City passed regulations around the use of AI in talent acquisition. Federal regulations were the next logical step.
The order contains eight key areas that set out guidelines and regulations on how the federal government intends to protect citizens and the labor force from the unethical use of AI. In this article, I’ll break down the key areas of the executive order and share what this means for the AI industry as a whole.
Key Actions of the Executive Order
Much of the executive order concerns maintaining national security and addressing foundation model technology, such as the models behind ChatGPT. Here’s what the executive order covers:
- AI Safety and Security: Developers of powerful AI systems must share safety test results with the U.S. government. The National Institute of Standards and Technology will set rigorous standards for AI safety and security. Additionally, advanced cybersecurity programs will be established.
- Protecting Privacy: The President calls for Congress to pass data privacy legislation and supports the development of privacy-preserving AI techniques and technologies.
- Advancing Equity and Civil Rights: The government aims to prevent algorithmic discrimination in areas like housing, criminal justice, and education.
- Consumer, Patient, and Student Protection: The government will promote the responsible use of AI in healthcare, education, and product safety.
- Supporting Workers: Measures will be taken to mitigate the impact of AI on jobs, protect workers’ rights, and invest in workforce training.
- Promoting Innovation and Competition: The government will catalyze AI research, support small developers and entrepreneurs, and expand opportunities for skilled immigrants in AI fields.
- American Leadership Abroad: The administration will collaborate with international partners to establish global AI frameworks and standards, promoting responsible AI use worldwide.
- Government Use of AI: Government agencies will issue guidance for responsible AI deployment, enhance procurement, and invest in AI talent development.
What Does This Mean for the AI Industry?
The executive order encompasses a range of measures to ensure that AI technologies are developed and used responsibly and safely, affecting many facets of AI adoption and public trust.
This order addresses three critical imperatives: making AI safer, encouraging its adoption, and increasing public trust. By implementing rigorous safety standards, promoting equitable use, protecting privacy, and fostering innovation, the government aims to strike a balance that allows AI to flourish while safeguarding the interests and rights of citizens, ultimately charting a course toward a more responsible and inclusive AI future.
Safer AI Systems
Enhancing the safety of AI is paramount for its continued development and deployment. By requiring developers to share safety test results and imposing rigorous standards, the executive order lays out concrete steps for making AI systems safer and using them more responsibly.
It ensures that AI models, particularly the most powerful ones, undergo comprehensive safety evaluations and red-team testing before they are made public (in accordance with the Defense Production Act). As a result, the potential risks associated with AI, such as unintended consequences or malicious applications, are minimized.
The executive order also covers regulations around the public’s physical safety, sensitive information, fraudulent information, cybersecurity, and more. Safer AI systems can protect individuals and organizations from harm while encouraging responsible innovation, as developers, users, and the public can have greater confidence in the technology’s reliability.
Faster AI Adoption
The measures outlined in the executive order can have a profound impact on AI adoption. While there is an argument that strict safety requirements could slow down AI development and deployment, they also provide clear guidelines for developers and organizations. These guidelines, when followed, can streamline the process of introducing AI into various sectors.
Addressing safety concerns and promoting the responsible use of AI can instill more confidence in the technology, which will drive broader adoption. A well-regulated AI environment is more likely to improve understanding, encourage investment, and promote the responsible use of AI across industries, from public safety and critical infrastructure to education and consumer products.
Greater Public Trust
AI technology’s safety, adoption, and public trust all go hand-in-hand. By prioritizing the development and use of privacy-preserving techniques and addressing issues of bias and discrimination, the government demonstrates its commitment to ensuring that AI benefits all citizens.
The guidelines set forth for federal agencies to evaluate privacy-preserving techniques, as well as the development of best practices for AI use in areas like the public sector and education, help assure the public that AI will be employed responsibly. As public trust in AI grows, individuals and communities are more likely to embrace and support its integration into their daily lives, along with a greater trust in the organizations that use this technology.
What Does This Mean for Veritone?
At Veritone, we are immensely excited about this development. This executive order strongly aligns with our AI for Good principles, which reflect our enduring dedication to AI safety and the promotion of ethical AI models, applications, and practices. Because of that, Veritone is uniquely positioned at the forefront of AI companies: we’ve taken the initiative to self-regulate so that our technology is transparent, trustworthy, secure, and compliant, and, at the end of the day, empowers people rather than replacing them.
With these principles as the foundation of our technology, people can have the confidence to base their AI practice on the Veritone platform, aiWARE™, and our proven solutions and services. But it’s more than just having trustworthy technology. We expect more definitive regulations to follow from the federal government, requiring companies to conduct audits of their AI technology. Veritone is one of the few companies that has already completed a formal audit.
As I mentioned earlier, in New York City, regulations were passed requiring companies to audit AI in the talent acquisition space to ensure it was not perpetuating bias in hiring practices. We worked with Vera to conduct this audit for our Veritone Hire solutions. Their CEO, Liz O’Sullivan, is one of the fifteen members of President Biden’s AI task force. Not only have we been self-regulating for some time, but we also have experience auditing AI technology in accordance with regulations, which we believe will become the standard moving forward.
It is a source of great pride for us to have actively championed and contributed to the cause of responsible AI, so witnessing these values gain traction at the highest levels of government affirms our foundational belief in the ethical development and use of AI. For years, we’ve been at the forefront of pioneering AI solutions that not only push the boundaries of innovation but also firmly adhere to the principles of fairness, transparency, and accountability. We look forward to adhering to these principles and helping our customers navigate the regulatory future of AI.