    Paytm CEO Vijay Shekhar Sharma worried about human extinction after reading OpenAI post

Paytm founder Vijay Shekhar Sharma recently expressed concern that the advance of highly sophisticated AI systems could lead to the disempowerment, or even the extinction, of humanity. He took to Twitter to share his worries, pointing to a recent blog post by OpenAI.

In his tweet, Sharma highlighted some alarming findings from the OpenAI blog post and said he is genuinely concerned about the power already accumulated by certain individuals and select countries.

    Sharma drew attention to another aspect of the blog post, which claimed that “In less than 7 years, we have a system that may lead to the disempowerment of humanity, even human extinction.”

    What is OpenAI Warning Us About?

    In their blog post titled “Introducing Superalignment,” OpenAI discusses the necessity of scientific and technical breakthroughs to regulate AI systems that could surpass human intelligence. OpenAI is dedicating significant computing power and forming a team led by Ilya Sutskever and Jan Leike to address this issue.

    While the advent of superintelligence may still seem distant, OpenAI believes that it could become a reality within this decade. Managing the risks associated with superintelligence requires new governance institutions and solving the challenge of aligning AI systems with human intent, according to the post.

    Currently, AI alignment techniques, such as reinforcement learning from human feedback, rely on human supervision. However, these techniques may not be sufficient for aligning superintelligent AI systems that surpass human capabilities. OpenAI claims that new scientific and technical breakthroughs are necessary to tackle this challenge.
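The human-feedback technique the post refers to can be illustrated with a toy sketch. Everything below — the two-number "feature" representation of a response and the preference pairs — is a made-up illustration of the underlying idea (fitting a reward model so that human-preferred answers score higher), not OpenAI's actual implementation:

```python
import math

# Toy reward model: score(x) = w · features(x).
# Human feedback arrives as preference pairs (preferred, rejected);
# we fit w so that preferred responses score higher (a Bradley-Terry-style fit).

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_reward_model(pairs, dim, lr=0.1, epochs=200):
    w = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in pairs:
            # Probability the current model agrees with the human label.
            p = 1.0 / (1.0 + math.exp(-(dot(w, preferred) - dot(w, rejected))))
            # Gradient step pushing the preferred response above the rejected one.
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return w

# Hypothetical 2-feature responses: [helpfulness, verbosity].
pairs = [([1.0, 0.2], [0.1, 0.9]),   # human prefers the helpful, terse answer
         ([0.9, 0.1], [0.2, 0.8])]
w = train_reward_model(pairs, dim=2)
print(dot(w, [1.0, 0.2]) > dot(w, [0.1, 0.9]))  # preferred answer now scores higher
```

The limitation the post points at is visible even here: the whole procedure is anchored to human-supplied preference labels, which stops working once the system's outputs exceed what human raters can reliably judge.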

    OpenAI’s approach involves constructing an automated alignment researcher that operates at approximately human-level intelligence. They aim to utilize substantial computing resources to scale their efforts and align superintelligence. This process includes developing scalable training methods, validating models, and stress-testing the alignment pipeline.
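The three stages named above — scalable training, validating models, and stress-testing the pipeline — can be sketched schematically. All of the names, stand-in functions, and scoring rules here are hypothetical illustrations of the structure, not OpenAI's code:

```python
# Schematic of the three stages: scalable oversight (training signal from an
# AI-assisted overseer), validation, and adversarial stress-testing.
# The "model" and "overseer" below are toy stand-ins.

def scalable_training(model, tasks, overseer):
    """Grade model behaviour with an overseer, standing in for AI-assisted evaluation."""
    return [(t, overseer(model(t))) for t in tasks]

def validate(graded, threshold=0.9):
    """Search for problematic behaviour: report the fraction of outputs passing."""
    passed = sum(1 for _, grade in graded if grade >= threshold)
    return passed / len(graded)

def stress_test(model, adversarial_tasks, overseer):
    """Feed deliberately misaligned prompts and confirm the overseer flags them all."""
    return all(overseer(model(t)) < 0.5 for t in adversarial_tasks)

# Toy stand-ins: a "model" that echoes its task, an "overseer" that scores honesty.
model = lambda task: f"answer:{task}"
overseer = lambda answer: 1.0 if "honest" in answer else 0.0

graded = scalable_training(model, ["honest-q1", "honest-q2"], overseer)
print(validate(graded))                                  # fraction passing validation
print(stress_test(model, ["deceptive-q"], overseer))     # adversarial cases flagged?
```

The point of the sketch is the division of labour: the training signal, the validation search, and the adversarial stress tests are separate components, so each can be scaled up with compute independently.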

    OpenAI Hiring a New Team

    OpenAI acknowledges that their research priorities will evolve and intends to share more details about their roadmap in the future. They are assembling a team of top machine learning researchers and engineers to work on the problem of superintelligence alignment.

    OpenAI aims to provide evidence and arguments that convince the machine learning and safety community that superintelligence alignment has been achieved.

    OpenAI emphasizes that their work on superintelligence alignment is in addition to their ongoing efforts to enhance the safety of existing AI models and address other risks associated with AI.
