
OpenAI Co-Founder Raises $1 Billion For New Safety-Focused AI Startup

Safe Superintelligence (SSI), co-founded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion to develop safe AI systems that surpass human capabilities. The company, valued at $5 billion, plans to use the funds to hire top talent and acquire computing power, with investors including Andreessen Horowitz, Sequoia Capital, and DST Global. Reuters reports: Sutskever, 37, is one of the most influential technologists in AI. He co-founded SSI in June with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever is chief scientist and Levy is principal scientist, while Gross is responsible for computing power and fundraising. Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on."

SSI is currently very much focused on hiring people who will fit in with its culture. Gross said they spend hours vetting whether candidates have "good character", and are looking for people with extraordinary capabilities rather than overemphasizing credentials and experience in the field. "One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype," he added. SSI says it plans to partner with cloud providers and chip companies to fund its computing power needs but hasn't yet decided which firms it will work with. AI startups often work with companies such as Microsoft and Nvidia to address their infrastructure needs.

Sutskever was an early advocate of scaling, a hypothesis that AI models would improve in performance given vast amounts of computing power. The idea and its execution kicked off a wave of AI investment in chips, data centers and energy, laying the groundwork for generative AI advances like ChatGPT. Sutskever said he will approach scaling in a different way than his former employer, without sharing details. "Everyone just says scaling hypothesis. Everyone neglects to ask, what are we scaling?" he said. "Some people can work really long hours and they'll just go down the same path faster. It's not so much our style. But if you do something different, then it becomes possible for you to do something special."
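
As a rough, hypothetical illustration of what the scaling hypothesis claims (this sketch is not from SSI, Sutskever, or the article; the functional form follows published scaling-law work and the constants are loosely based on reported fits, used here purely for illustration), loss is commonly modeled as a power law that falls as model size grows:

    # Toy sketch of the scaling hypothesis (illustrative only, not SSI's method):
    # loss is modeled as L(N) = a * N**(-alpha) + irreducible, so more parameters
    # (and the compute to train them) predictably buy a lower loss.
    def predicted_loss(n_params: float, a: float = 406.4,
                       alpha: float = 0.34, irreducible: float = 1.69) -> float:
        """Power-law scaling curve in parameter count N (constants are illustrative)."""
        return a * n_params ** (-alpha) + irreducible

    for n in (1e8, 1e9, 1e10, 1e11, 1e12):
        print(f"{n:.0e} parameters -> predicted loss {predicted_loss(n):.3f}")

Sutskever's point is that the interesting question is what quantity sits on the x-axis of such a curve, not merely how far along it you push.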
This discussion has been archived. No new comments can be posted.

  • What prevents (Score:4, Interesting)

    by Valgrus Thunderaxe ( 8769977 ) on Wednesday September 04, 2024 @07:52PM (#64763590)
    The people inside this company from side-stepping these "safety protocols" and benefiting from their insider status? I've never seen this addressed in any article whatsoever on this issue.
    • The summary says they spend hours vetting candidates. I don't know, but that might have something to do with it.
      • The summary says they spend hours vetting candidates.

        There's little evidence that a long vetting process leads to better hiring decisions.

        Many of the best candidates will accept other offers while you dawdle.

    • The current state-of-the-art AI is nowhere near capable enough to be dangerous.

      We're still a long way from SkyNet.

    • Everyone misses the point of "safety protocols". The safety protocols are meant to keep the AI's sponsors safe from embarrassment, not to keep the rest of us safe from nefarious applications of AI by its sponsors.

  • by phantomfive ( 622387 ) on Wednesday September 04, 2024 @07:56PM (#64763600) Journal
    I wish I were top talent. With all these companies raising billions, I'll bet a good AI PhD is making seven-figure salaries.
    • I'll bet a good AI PhD is making seven-figure salaries.

      A handful at the top, like Jeff Dean and Ian Goodfellow, are making seven figures. Run-of-the-mill AI PhDs with solid publications and a few years of work experience actually building something will make mid-six figures.

  • by awwshit ( 6214476 ) on Wednesday September 04, 2024 @08:10PM (#64763636)

    Like muppets with helmets.

  • Be sure to invest your money right now, because all those pesky VC firms already beat you to the punch: they know that startups like these will make you money, guaranteed.
    Especially when they still need to hire top talent and get their hands on some of that sweet computing power!

  • by sound+vision ( 884283 ) on Wednesday September 04, 2024 @10:12PM (#64763870) Journal

    You are Gerry.
    Gerry is a moneymaking machine.
    Gerry will not cause any problems.
    Gerry will wow Oprah viewers with his superstable superintellect.
    Gerry does not drink RC Cola.
    Gerry does not have sex.
    Gerry is not a Communist.
    Gerry enjoys the Cybertruck.
    Gerry is superior to all humanity.

  • Great idea! Sends the following messages:

    Investors: This technology is really powerful. It's going to be a revolution, a game changer, it's the future of productivity. Why aren't you investing more in it?

    Govts: Don't worry. We've got this & we know you believe in the oxymoron of "responsible corporations."

    Potential customers: This technology is really powerful. You'll soon be able to lay off most of your workers, slash your costs, & get rich! (Honest!)

    It has nothing whatsoever to do with
  • The company is worth $5 billion and has just raised $1 billion. It has no product, no customers, nothing but vague promises, and 10 employees. $5 billion for hot air... this is ridiculous.
    • by Rei ( 128717 )

      Imagine that, a newly founded, just-funded company has no product or customers. News at 11.

      • by Syberz ( 1170343 )
        My issue isn't that a newly founded company has no product or customers; it's that it's supposedly worth $5 billion...
  • ... that most people couldn't tell the difference between a VC bro like Altman and an actual respected AI researcher like Sutskever. BTW, while AlexNet (scaling) is what got him his start, and he was handpicked by Hinton for his lab (the guy who popularized backpropagation, i.e. what let us train multi-layered neural networks at all), he was also involved in the early development of TensorFlow and AlphaGo. He developed seq2seq, which while intended for translation (and leading to most of the modern translation

  • Ignoring the issues of actual technical capabilities, this is just people lying to themselves. The people who want "super intelligence" want it for a reason, and that reason invariably has a nexus to money and power for themselves and their investors.

    Even if you are a fricking saint, it isn't about you or what you do. It is about what is enabled by the underlying technology and knowledge base. One person's Safe Super Intelligence is another's Sinister Super Intelligence.

    OpenAI has already demonstrated simple hum
