AI Firms Warned Over Potential to Endanger Humanity

Paytm founder Vijay Shekhar Sharma recently expressed his concerns about the potential consequences of advanced AI systems, including the disempowerment and even extinction of humanity.

He took to Twitter to share his worries, referencing a blog post by OpenAI.

Sharma highlighted some alarming findings from the OpenAI blog post, stating that he is genuinely concerned about the power that certain individuals and countries have already accumulated.

He drew attention to a specific claim in the post suggesting that the development of such systems could lead to the disempowerment of humanity, or even human extinction, in less than seven years.

The blog post, titled “Introducing Superalignment,” discusses the need for scientific and technical breakthroughs to ensure control over AI systems that could surpass human intelligence.

OpenAI is actively dedicating significant computing power and has formed a team led by Ilya Sutskever and Jan Leike to address this issue.

While the arrival of superintelligence may still seem distant, OpenAI believes it could become a reality within this decade.

The post emphasizes the importance of managing the risks associated with superintelligence through new governance institutions and aligning AI systems with human intent.

Current AI alignment techniques rely on human supervision, particularly reinforcement learning from human feedback.

However, these techniques may not be sufficient to align superintelligent AI systems that exceed human capabilities.
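To illustrate the technique the article refers to: RLHF trains a reward model from human preference labels, so that responses humans prefer receive higher scores. The sketch below shows only the core preference-modeling loss (a Bradley-Terry style objective); the function name and scores are illustrative, not from OpenAI's post or codebase.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry style loss: -log sigmoid(r_chosen - r_rejected).

    The loss shrinks as the reward model ranks the human-preferred
    response above the rejected one, and grows when it disagrees
    with the human label.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model agrees with the human label: small loss.
print(round(preference_loss(2.0, -1.0), 4))  # ~0.0486
# Reward model disagrees: large loss.
print(round(preference_loss(-1.0, 2.0), 4))  # ~3.0486
```

The limitation the article describes follows directly from this setup: the labels come from human judgment, so once a system's outputs exceed what humans can reliably evaluate, this supervision signal breaks down.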

OpenAI asserts that new scientific and technical breakthroughs are necessary to tackle this challenge effectively.

OpenAI plans to build a roughly human-level automated alignment researcher to address the issue.

They intend to leverage substantial computing resources to scale their efforts and align superintelligence.

This process involves developing scalable training methods, validating models, and stress-testing the alignment pipeline.

Recognizing that research priorities will evolve, OpenAI aims to provide more details about their roadmap in the future.

They are in the process of assembling a team of leading machine learning researchers and engineers dedicated to addressing the challenge of superintelligence alignment.

OpenAI emphasizes that their work on superintelligence alignment is complementary to their ongoing efforts to improve the safety of existing AI models and address other risks associated with AI.

The concerns raised by Vijay Shekhar Sharma and the findings presented in the OpenAI blog post highlight the need for careful consideration and proactive measures to ensure the responsible development and deployment of advanced AI systems.

While the potential benefits of AI are vast, it is crucial to navigate the risks associated with its exponential growth and mitigate any potential threats to humanity.

No information published in Crypto Intelligence News constitutes financial advice; crypto investments are high-risk and speculative in nature.