Tech Leaders Sign Letter Calling for ‘Pause’ to Artificial Intelligence 

An open letter signed by Elon Musk, Apple co-founder Steve Wozniak and other prominent high-tech experts and industry leaders is calling on the artificial intelligence industry to pause for six months so that safety protocols for the technology can be developed.

The letter — which as of early Thursday had been signed by nearly 1,400 people — was drafted by the Future of Life Institute, a nonprofit group dedicated to “steering transformative technologies away from extreme, large-scale risks and towards benefiting life.”

In the letter, the group notes the rapidly developing capabilities of AI technology and how it has surpassed human performance in many areas. As an example, the group notes that AI systems built to design new drug treatments could just as easily be used to design deadly pathogens.

Perhaps most significantly, the letter points to the recent introduction of GPT-4, a program developed by San Francisco-based company OpenAI, as a particular cause for concern.

GPT stands for Generative Pre-trained Transformer, a type of language model that uses deep learning to generate human-like conversational text.
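For readers curious what generating human-like text with such a model looks like in practice, the short sketch below uses the open-source Hugging Face transformers library and the small, freely available GPT-2 model; GPT-4 itself is accessible only through OpenAI's hosted service, and the prompt shown is purely illustrative.

    # Minimal sketch: text generation with a small, public GPT-style model (GPT-2).
    # Requires the open-source "transformers" library; GPT-4 is not publicly downloadable.
    from transformers import pipeline

    # Load a text-generation pipeline backed by GPT-2.
    generator = pipeline("text-generation", model="gpt2")

    # Give the model a prompt and let it continue the text.
    prompt = "Artificial intelligence systems should be developed only when"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

    print(result[0]["generated_text"])

The model continues the prompt word by word, which is the same basic mechanism, at vastly smaller scale, that underlies conversational systems such as GPT-4.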

The company has said GPT-4, its latest version, is more accurate and more human-like than earlier versions and can analyze and respond to images. The firm says the program has passed a simulated bar exam, the test prospective lawyers must pass to become licensed attorneys.

In its letter, the group maintains that such powerful AI systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable.”

Noting the potential a program such as GPT-4 could have to create disinformation and propaganda, the letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter says AI labs and independent experts should use the pause “to jointly develop and implement a set of shared safety protocols for advanced AI design and development that will ensure they are safe beyond a reasonable doubt.”

Meanwhile, another group has taken its concerns about the potential harms of GPT-4 a step further.

The nonprofit Center for AI and Digital Policy filed a complaint with the U.S. Federal Trade Commission on Thursday calling on the agency to suspend further deployment of the system and launch an investigation.

In its complaint, the group said the technical description of the GPT-4 system provided by its own makers describes almost a dozen major risks posed by its use, including “disinformation and influence operations, proliferation of conventional and unconventional weapons,” and “cybersecurity.”

Some information for this report was provided by The Associated Press and Reuters.


