A Unique Alliance for AI Regulation: Averting Uncontrolled Outcomes

Aug 1, 2023 | AI News

In an unprecedented move, four companies leading the race for the latest generation of artificial intelligence (AI) – Google, Microsoft, Anthropic, and OpenAI – announced on July 26 the creation of a new industry body. The organization, named the “Frontier Model Forum,” aims to promote the responsible development of the most sophisticated AI models and to minimize their potential risks.

The Biden Administration Calls for AI Regulation

In the United States, political gridlock in Congress is blocking any effort to regulate AI. The White House is therefore urging the companies concerned to ensure the safety of their own products, invoking their “moral duty.” Last week, the Biden administration secured commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to uphold three principles in AI development: safety, security, and trust.

The Rapid Deployment of Generative AI

The rapid deployment of generative AI through popular interfaces such as ChatGPT (OpenAI), Bing (Microsoft), and Bard (Google) is raising widespread concern among authorities and civil society. Meanwhile, the European Union (EU) is finalizing legislation to regulate AI that will impose obligations on industry players, such as transparency toward users and human oversight of the machine.

Dangerous Capabilities Can Emerge Unexpectedly

Company leaders do not deny the risks. In June, Sam Altman, the head of OpenAI, and Demis Hassabis, the head of DeepMind (Google), called for action against the “extinction risks” posed by AI. During a Congressional hearing, Altman endorsed the widely discussed idea of creating an international agency responsible for AI governance. Meanwhile, OpenAI is working toward a “general” AI with cognitive abilities comparable to those of humans.

In a July 6 publication, the Californian startup defined “frontier model” AI as highly capable foundation models that could pose serious risks to public safety. “Dangerous capabilities can emerge unexpectedly, and it is challenging to truly prevent a deployed model from being used abusively,” OpenAI warns.

