
More than a thousand people, including professors and AI developers, have co-signed an open letter addressed to all AI labs, calling on them to suspend the development and training of AI systems more powerful than GPT-4 for at least six months.

The letter is signed by prominent figures in AI development and technology, including Elon Musk, co-founder of OpenAI; Yoshua Bengio, prominent AI professor and founder of Mila; Steve Wozniak, co-founder of Apple; Emad Mostaque, CEO of Stability AI; Stuart Russell, pioneer of AI research; and Gary Marcus, founder of Geometric Intelligence.

The open letter, published by the Future of Life Institute, cites potential risks to society and humanity arising from the rapid development of advanced AI systems in the absence of shared safety protocols.

The problem, the letter argues, is that the potential risks have yet to be fully appreciated and accounted for by a comprehensive governance framework, so the positive effects of the technology are not guaranteed.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” reads the letter.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can reliably understand, predict, or control.”

The letter also warns that modern AI systems are now becoming competitive with humans at general tasks, which raises several existential and ethical questions that humanity has yet to consider, debate, and decide.

Some of the issues highlighted concern the flood of information generated by AIs, the uncontrolled automation of jobs, the development of systems that could outsmart humans and threaten to render them obsolete, and the loss of control over our civilization.


The co-signing experts believe we have reached a point where more advanced AI systems should be trained only under strict oversight, and only after gaining confidence that the risks arising from their deployment are manageable.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” states the open letter.

“This pause must be public and verifiable, and include all key players. If such a pause cannot be enacted quickly, governments must step in and institute a moratorium.”

During this pause, AI development teams would have the opportunity to meet and agree on a set of shared safety protocols, which would then be used for compliance audits carried out by independent outside experts.

In addition, the letter says, policymakers should implement protective measures, such as a watermarking system that reliably distinguishes genuine content from AI-generated content, rules assigning liability for harms caused by AI-generated material, and publicly funded research into the risks of AI.

The letter does not advocate stopping AI development altogether; instead, it highlights the dangers associated with the current competition among AI designers vying for a share of the growing market.

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, design these systems for the benefit of all, and give society a chance to adapt,” the text concludes.
