IndiaNext
High-Profile Safety Team Disbanded By OpenAI Following Chief Scientist Sutskever’s Leave

May 21, 2024

OpenAI has effectively dissolved a high-profile safety team tasked with ensuring the safety of potential future ultra-capable artificial intelligence systems, following the departure of the group's two leaders, including co-founder and chief scientist Ilya Sutskever.

The company told Bloomberg News that rather than maintaining the so-called superalignment team as a stand-alone entity, it is now integrating the group more deeply across its research efforts to help meet its safety goals. The team, founded less than a year ago, was led by Sutskever and fellow OpenAI veteran Jan Leike.

The decision to restructure the team follows a string of recent departures from OpenAI that have raised questions about the company's approach to balancing speed and safety in developing its AI products. Sutskever, a widely respected researcher, announced on Tuesday that he was leaving OpenAI; he and CEO Sam Altman had previously disagreed over how quickly artificial intelligence should advance.

Shortly afterward, Leike announced his own departure with a curt message on social media: "I resigned." For Leike, Sutskever's exit was the final straw after conflicts with the company, according to a person familiar with the matter who asked not to be named discussing private conversations.

In a statement on Friday, Leike said the superalignment team had been battling for resources. "My team has been sailing against the wind for the past few months," Leike posted on X. "There were moments when we were short on processing power and it was becoming more difficult to complete this important research." Altman replied to Leike's post a few hours later. "He's right, we still have a lot of work to do," Altman posted on X. "We're determined to complete it."

In recent months, several other members of the superalignment team have also left the company. OpenAI fired Leopold Aschenbrenner and Pavel Izmailov; their departures were previously reported by The Information. According to a person familiar with the situation, Izmailov had already been moved off the team before leaving. Neither Izmailov nor Aschenbrenner responded to requests for comment.

Co-founder John Schulman, whose research focuses on large language models, will lead OpenAI's alignment efforts going forward, the company announced. In a separate blog post, OpenAI said that research director Jakub Pachocki will succeed Sutskever as chief scientist.

Regarding Pachocki's appointment, Altman said in a statement on Tuesday, "I am very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone." AGI, or artificial general intelligence, refers to AI that can perform most tasks as well as or better than humans. It does not yet exist, but building it is part of the company's mission. Beyond dedicated safety-focused groups, OpenAI also has employees working on AI safety across company-wide teams. One of them, a preparedness team, was created last October to examine and mitigate the "catastrophic risks" associated with AI systems.

The superalignment team's mandate was to head off the most serious long-term dangers. OpenAI formed the team last July, saying it would work on controlling and ensuring the safety of future AI software smarter than humans, a long-stated technological goal of the company. In the announcement, OpenAI said the team's work would receive 20% of its computing power.

Sutskever was one of several OpenAI board members who moved to fire Altman in November, a decision that set off a chaotic five days at the company. OpenAI President Greg Brockman quit in protest, investors revolted, and within days nearly all of the startup's roughly 770 employees signed a letter threatening to leave unless Altman was reinstated. Remarkably, Sutskever also signed the letter and said he regretted his role in Altman's ouster. Altman was reinstated shortly after.

In the months after Altman's ouster and return, Sutskever largely vanished from public view, prompting speculation about his role at the company. He also stopped working from OpenAI's San Francisco headquarters, according to a person familiar with the matter.

In his statement, Leike said he left OpenAI over a series of disagreements about the company's "core priorities," which he believes do not focus enough on safety measures related to developing AI that may be smarter than humans.

In a post earlier this week announcing his departure, Sutskever said he is "confident" that OpenAI, under its current leadership including Altman, will build AGI "that is both safe and beneficial."
