Biden Administration Takes Bold Steps to Regulate AI Industry
On Thursday, President Joe Biden held a meeting with CEOs of top AI companies, including Microsoft and Google, where he emphasized the importance of ensuring the safety of AI products before their deployment.
AI has surged in popularity this year, with many companies launching products similar to ChatGPT. However, concerns have arisen over privacy violations, biased employment decisions, and the technology's potential use in scams and misinformation campaigns.
Biden, who has experimented with ChatGPT, stressed the need to mitigate AI's risks to individuals, society, and national security. Participants discussed the importance of transparency with policymakers, evaluating products' safety before release, and protecting AI systems from malicious attacks.
Vice President Kamala Harris raised concerns about safety, privacy, and civil rights, emphasizing that companies bear a legal responsibility to ensure their AI products are safe.
The National Science Foundation has announced a $140 million investment to establish seven new AI research institutes, and the federal government has released policy guidance on AI usage.
However, U.S. regulators have lagged behind European governments, which have imposed stricter technology regulations. To help close this gap, the U.S.-EU Trade & Technology Council is collaborating on AI governance.
The Biden administration has introduced an AI Bill of Rights, a risk management framework, and an executive order directing federal agencies to eliminate AI bias.
Additionally, the Federal Trade Commission and the Department of Justice’s Civil Rights Division have pledged to leverage their legal authorities to combat AI-related harm.
Meanwhile, tech giants have struggled to deliver on their pledges to combat propaganda, fake news, pornography, child exploitation, and hateful messaging on their platforms.