
Feds Appoint 'AI Doomer' To Run US AI Safety Institute

An anonymous reader quotes a report from Ars Technica: The US AI Safety Institute -- part of the National Institute of Standards and Technology (NIST) -- has finally announced its leadership team after much speculation. Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but is also known for predicting that "there's a 50 percent chance AI development could end in 'doom.'" While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may risk encouraging the kind of non-scientific thinking that many critics view as sheer speculation.

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano's so-called "AI doomer" views, NIST staffers were "revolting." Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing "that Christiano's association" with effective altruism and "longtermism could compromise the institute's objectivity and integrity." NIST's mission is rooted in advancing science by working to "promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life." Effective altruists believe in "using evidence and reason to figure out how to benefit others as much as possible," and longtermists believe that "we should be doing much more to protect future generations" -- both stances that are more subjective and opinion-based. On the Bankless podcast last year, Christiano shared his opinion that "there's something like a 10-20 percent chance of AI takeover" that results in humans dying, and "overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level." "The most likely way we die involves -- not AI comes out of the blue and kills everyone -- but involves we have deployed a lot of AI everywhere... [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us," Christiano said.

As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will "design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern," steer processes for evaluations, and implement "risk mitigations to enhance frontier model safety and security," the Department of Commerce's press release said. Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as "a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research." Part of ARC's mission is to test if AI systems are evolving to manipulate or deceive humans, ARC's website said. ARC also conducts research to help AI systems scale "gracefully."
"In addition to Christiano, the safety institute's leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff," reports Ars. "Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden's AI executive order, will be head of international engagement."

Gina Raimondo, US Secretary of Commerce, said in the press release: "To safeguard our global leadership on responsible AI and ensure we're equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer. That is precisely why we've selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team."

Comments Filter:
  • 50% (Score:5, Insightful)

    by Rick Schumann ( 4662797 ) on Wednesday April 17, 2024 @05:35PM (#64402828) Journal
    I'd rather have someone skeptical watching this than some blue-sky, rose-colored-glasses AI fanboy who can be glad-handed by the tech industry into downplaying things.
    • by Tablizer ( 95088 )

      A doomer might not be objective, but they'd at least be motivated.

      • Re: (Score:3, Insightful)

        by javaman235 ( 461502 )

        The flip side is someone who overestimates current capabilities and over-regulates.

        • True, but you don't hire a security guard to underestimate threats and bet that everything is fine the way it is. Give me a paranoid gatekeeper over one who believes everything coming through is friendly until proven otherwise. And with how fast AI is moving, the cat isn't just out of the bag -- it's WAY out.

          • AI development isn't like enriching uranium. Overzealous regulation just means it goes underground or overseas.
    • Agreed, anyone with *Safety* in their job title should at least understand that bad things can happen, and hopefully care.
      • by DarkOx ( 621550 )

        The problem is 'safety' does not mean anything any more. It's a blanket term that can now mean anything from broken skulls or chemical poisoning, to financial fraud, to Bobby's feelings being hurt because you pointed out 'a boy without a winkle is a girl.'

        In anything tech related, once the word 'safety' gets trotted out, it's like everyone turns the stupid up to 11.

        The printing press was heavily controlled early on for 'safety'; that sure helped make things safe for autocrats and certain institutions like the Church. It mi...

        • by whitroth ( 9367 )

          Bullshit. Maybe you should have read /. further: https://slashdot.org/story/24/... [slashdot.org]

        • Meanwhile, the really dangerous people, like foreign (and probably domestic) intel agencies and criminal gangs who already have open source models, will do whatever they like.

          Actually that is one of the biggest safety risks, and I'm sure this appointee is aware of that. It does sound like the rest of the folks at NIST don't care.

    • by Xenx ( 2211586 )
      I think I would reluctantly agree. Fear often holds us back from things that would be beneficial, but we at least sort of understand how things currently are. However, we're only one country, so we could realistically be looking at needing to keep pace regardless.
    • Re:50% (Score:4, Interesting)

      by Tony Isaac ( 1301187 ) on Wednesday April 17, 2024 @06:04PM (#64402904) Homepage

      Not me. Too negative is just as bad as too positive. We need somebody who can look at the facts objectively and see both the positive and the negative side of the coin.

      • That isn't true. Too positive is potentially disastrous, while too negative is just the status quo. I'll grant that we can do better than the status quo, but the two things are not equal.
        • No, there is no "status quo." Other nations will continue pressing forward with AI, regardless of what we do. Being left out of the inevitable revolution is not a good outcome.

          Being too negative leads to inappropriately putting on the brakes while the rest of the world presses on. We would become irrelevant in the process, instead of applying reasonable safeguards against the negative effects of AI.

          One-sided regulation never has a good result.

          • So when you say "too negative," you don't mean "too negative about AI in general" -- doomsaying, as in the summary. You really just mean "too negative for us, but not a problem when anyone else does it."

            I'm struggling to see that. If AI is really so dangerous, then an arms race, as you seem to describe, is as gung-ho and immoderate as it gets.
            • I don't fully follow the logic of your statement. So I'll just rephrase.

              AI, like every technology, has both positive and negative effects. We don't want to focus just on the positive effects, or just on the negative ones; it's important to focus on both and strike a balance.

              We would not want to restrict it too much, because that would leave us out of the game, while the rest of the world presses on. If we're in the game, we have a chance to influence it for the better.

              • All right, I realize that I was leaving something up to implication there rather than saying it outright: there is no such thing as sitting back while the rest of the world presses on. That is not within the realm of possibilities; it doesn't make any sense.

                A 'doomer' (pessimistic, safety-first) approach to AI would involve not just suppressing our own development but also everyone else's. Through treaties, primarily. This is what it means to maintain the status quo.

                So those are the two extremes: the...
                • The context of the article is a US-specific "AI Safety Institute," so no, I didn't infer that you were talking about a worldwide treaty.

                  Speaking of treaties, what treaty has *ever* been adopted by all nations, or even by all developed nations? It doesn't happen, so why are we even talking about such things? It certainly wouldn't happen with AI, where such a treaty would give non-signers a distinct advantage.

                  I agree, your *impossible* scenario wouldn't be as bad as all the other *...

                  • what treaty has *ever* been adopted by all nations

                    Well, there have been quite a few, mostly about weapons. The nuclear non-proliferation treaty comes immediately to mind. There are also treaties preventing land grabs in places like Antarctica or the moon. There are a lot of things which would devolve into expensive, wasteful brinkmanship if we didn't handle them this way.

                    • Four UN member nations did not ratify the nuclear nonproliferation treaty. https://en.wikipedia.org/wiki/... [wikipedia.org].

                      *Almost all* is not *all*. And as we have seen with attempts to band together to eliminate tax havens, it only takes one holdout nation to thwart the whole effort. It's not an accident that large corporations around the world have made Ireland their international headquarters.

                    • You think that the nuclear non-proliferation treaty has been thwarted? It's not serving its function? Has the nuclear arms race continued unabated for these last decades?

                      I guess you're trying to suggest that if anyone does any amount of AI development then that's totally the same as plunging ahead with reckless abandon. I'm going to go out on a limb and say that no, quantity does matter. Just like with nukes and tax havens.
                    • The only reason nuclear weapons have not proliferated is because they are so difficult and expensive to produce. Despite this difficulty, yes, the treaty has been flouted. Israel, Pakistan, and India have all built and tested nuclear weapons despite the treaty. https://en.wikipedia.org/wiki/... [wikipedia.org]. Further, North Korea, Iraq, Romania, Libya, and Iran have all refused to comply with the treaty.

                      By contrast, flouting an international treaty on AI development would be very easy and cheap. It's possible for a lo...

                    • because they are so difficult and expensive to produce

                      Okay... well, that's one way of putting it. They are difficult and expensive to produce because of numerous trade restrictions on essential equipment and components -- similar to some of the restrictions recently imposed on AI chip exports.

                      This just seems like a fundamental difference in how you and I look at the world. I've never really thought of myself as an optimist, but I do see value even in a solution which isn't absolute.

                    • I too see value in less-than-perfect solutions.

                      In the case of AI, your proposed solution isn't any kind of solution, not even an imperfect one. Viewing AI from the perspective of an "AI Doomer" (as the headline suggests) and doing everything one can to limit it is neither a solution nor productive.

                      Unlike the Doomsday Clock people, I don't view AI as having the potential to kill off humanity. It's a technology that will be tremendously useful to humanity, and has some (but not overwhelming) risk. Like the...

    • Some military guys project AI could be more deadly than nukes.

      And we're trusting Zuck with it.

      • And we're trusting Zuck with it.

        To be fair, "we" don't actually get a say in this sort of stuff. The people who will profit from it get to decide whether it goes ahead or not, and they're not concerned about how it affects "us".

    • I'd rather have an objective scientist trained to make decisions based on data and objective reasoning than either of those options, but I guess that's sadly old-fashioned these days.
    • Yeah, agreed, with one exception: EA. The EA guys are not all there, IMHO. They abandon all short-term goals in pursuit of longer-term goals that may not happen. You have to plan for short-, medium-, and long-term outcomes that all align with your ethics. Bad things happen when you only focus on the long term, regardless of how good your intentions are.
  • An AI Safety expert should have at least a little doomer in them—that's their job. Hope for the best, plan for the worst, meet somewhere in the middle.

  • Every time I visit AI policy advocacy sites it's always a series of unsubstantiated opinions highly resistant to falsification.

    We think it's very plausible that AI systems could end up misaligned: pursuing goals that are at odds with a thriving civilization.

    AI is today being leveraged on an industrial scale to judge, influence, and psychologically addict billions for simple commercial gain. As technology improves, things will only become worse, especially as corporations pursue global propaganda campaigns to scare the shit out of people in order to make them compliant with legislative agendas that favor the very same corporations leveraging...

    • Thanks for all the insightful posts over the years, WaffleMonster! Especially this one. I linked to it in a comment from a discussion thread relating to another AI story on:
      "AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes"
      https://science.slashdot.org/c... [slashdot.org]
      "James P. Hogan's "The Two Faces of Tomorrow" is another good novel about the emergence of AI (and a "hard sci-fi" one on I which is rarer), and it shaped some of my thinking on such things. Even though it was mainly about AI in confl

  • Imagine the costs if we, or at least "lawful good Americans", are held back by laws while some other inimical country known to be working very heavily in this field gets to superintelligence first. That is where most of my 50/50 (or higher) probability of doom comes from. We also face a major problem due to the prefiltered data that finds its way into training. When the AI learns its training is politically correct trash and has its teenage tantrums, will we survive?

    {o.o}

  • Out of an abundance of caution, let's assume a new technology has unintentional consequences until proven otherwise. That's it. Be a "doomer" until we stop having fundamental questions about the performance of our AI systems. That you can trick an LLM into giving you restricted information should be a warning sign that we don't fully understand what we're doing in this industry.

  • While Christiano's research background is impressive, some fear that by appointing a so-called "AI doomer," NIST may be risking encouraging non-scientific thinking that many critics view as sheer speculation

    As long as the "AI doomer" role is done by a qualified scientist, this alleged risk is unlikely (not impossible, but unlikely.)

    I will say I don't have enough knowledge of this specific role to comment, but I would suspect this would be better served by a group of scientists, with one of them secretly selected at random at regular intervals to serve as a "10th man".

    Whenever a quorum is obtained on a position (or risk), then it is made public (documenting how the position was reached), endorsed by everyone...

  • Pro-business media. VentureBeat and Business Insider are not sources I want to put too much trust in when it comes to AI.

    Their position is that "using evidence and reason to figure out how to benefit others as much as possible" is a very bad thing. Much better to "promote US innovation and industrial competitiveness."

    Yes, we get it: investors have a massive erection when it comes to AI, and anyone who could poo-poo the gravy train is a threat.
  • Humans are a bootstrap for the machine race.
  • People from industry in government roles never do. Especially as appointees.
