AI Giants Pledge To Share New Models With Feds

OpenAI and Anthropic will give a U.S. government agency early access to major new model releases under agreements announced on Thursday. From a report: Governments around the world have been pushing for measures -- both legislative and otherwise -- to evaluate the risks of powerful new AI algorithms. Anthropic and OpenAI have each signed a memorandum of understanding to allow formal collaboration with the U.S. Artificial Intelligence Safety Institute, a part of the Commerce Department's National Institute of Standards and Technology. In addition to early access to models, the agreements pave the way for collaborative research around how to evaluate models and their safety as well as methods for mitigating risk. The U.S. AI Safety Institute was set up as part of President Biden's AI executive order.
  • They're just doing this to prevent the government from passing regulation. It's their way of claiming to be open about what they do, so that when the time comes for them to be prosecuted they can claim the government knew what they were up to all along.
    • What do you anticipate will be the reason they are prosecuted?
      • Dubious justification. AI models should not need a pass/fail stamp of approval from the government of all people, whom I trust even less. Especially given the recent revelations about Facebook withholding information from public discourse under pressure from the FBI and then the Biden White House over COVID and the Hunter Biden laptop, I trust the government to regulate the information I get even less than I used to. It is all propaganda. The government today serves one group of people: Government.

        I think this is a 1st Amendment issue.
      • Is this a joke? Half the stuff that's already happening: deep fakes, disinformation campaigns, securities fraud, failures of automated systems, etc., etc., etc. Think of everything Geoffrey Hinton is warning about.
      • Creating a shadow library for commercial gain.

  • "...to evaluate the risks of powerful new AI algorithms."

    It's not the algorithms, it's the data. The models are not well understood; the algorithms are just ways of developing the data and interacting with it. The real concern is what AI systems could be allowed to do. An AI system that doesn't control anything isn't a risk. An AI system in the hands of Elon Musk is a catastrophe waiting to happen. It's the people doing this who are the problem, yet here we are with a rosy article about how these people are cooperating with the government.

    • An AI system in the hands of Elon Musk is a catastrophe waiting to happen.

      Elon Musk is in charge of the most complex AI system ever developed and applied in human history: Tesla FSD Beta.

      The data we imperially do understand is crystal clear. In its application, it has saved thousands of lives:

      https://www.tesla.com/VehicleS... [tesla.com]

      This report is submitted to the California DMV, and they verify and validate it, so let's not pretend the DMV themselves are colluding with Elon Musk to pull a fast one on you. He needs their permission to keep it running on California roads. California is run by Democrats, so they are in no mood to collude with Elon. If you STILL stupidly argue that the data is inaccurate, you are either implying the government is inept at evaluating these complex systems, in which case let's not have them do that, or worse, that they are willing to collude and lie to you about them anyway, in which case let's not have them do that.

      • empirically*

        Tesla FSD Supervised*
      • This report is submitted to the California DMV, and they verify and validate it, so let's not pretend the DMV themselves are colluding with Elon Musk to pull a fast one on you. He needs their permission to keep it running on California roads. California is run by Democrats, so they are in no mood to collude with Elon. If you STILL stupidly argue that the data is inaccurate, you are either implying the government is inept at evaluating these complex systems, in which case let's not have them do that, or worse, that they are willing to collude and lie to you about them anyway, in which case let's not have them do that.

        The biggest problem I have with this is that it doesn't say anything about all the times a human driver prevented the AI from making a catastrophic mistake. It would be one thing to compare humans against algorithms, but that has nothing to do with what this data is saying.

        Next up, there is inherent selection bias in people with sufficient disposable income to kick in $10k, or perpetually rent it for hundreds of dollars a month, just for the honor of being a guinea pig in a software beta test. There are significant confounders here.

        • The biggest problem I have with this is that it doesn't say anything about all the times a human driver prevented the AI from making a catastrophic mistake.

          That is your biggest problem? The system has active driver attention monitoring. Problem solved. The fixes are therefore reflected in the data, which by all accounts is remarkable. Defining the terms of collection is important, and it was not solely Tesla's decision to define the criteria: the California DMV sets the standards by which these reports are generated.

  • Currently, regulations on AI are popping up like weeds. Many states have passed, or are in the process of passing, legislation. Heaven help you if you deploy solutions used by people in Europe, China, or really anywhere; if they don't like you, they can likely prosecute you. See: https://www.ncsl.org/technolog... [ncsl.org]

    The legal definition of AI is so broad that nearly every developer, no matter how opposed to AI, falls under the umbrella. The term has become so abused that it's hard to even know whether what you want to build is legal.
    • The legal definition of AI is so broad that nearly every developer, no matter how opposed to AI, falls under the umbrella.

      I would also put a large part of the blame for this on the AI companies, and corporations in general: when they start labelling almost all of their services as "AI" and "AI-powered," what else are regulators supposed to take from that? If everyone had kept calling this stuff "machine learning," which is what it actually still is, maybe the terminology wouldn't be such a mess.

      They sought to create a business boom cycle out of thin air and hype the shit out of it; that is gonna create some blowback.

  • FBI office gets new AI model.

    Agent: "Boss, we got updated model, and are analyzing it!"

    Boss: "Great! What's it do?"

    Agent: "We need a few months to study it."

    Boss: "Okay, well, what did the last version do?"

    Agent: "Um, we, like, can't make heads or tails of it, it's like a giant bowl of millions of spaghetti noodles."

    Boss: "Isn't that what neural networks essentially are?"

    Agent: "Uh, yes."

  • I'm just going to assume that mainly OpenAI pushed for this, to try to legitimize itself and make a Fair Use decision a fait accompli.

"Mach was the greatest intellectual fraud in the last ten years." "What about X?" "I said `intellectual'." ;login, 9/1990

Working...