
Commerce Department Looks To Craft AI Safety Rules (axios.com)

The federal government is taking what could be the first steps toward requiring safer, more transparent AI systems as a Commerce Department agency invited public comment to help shape specific policy recommendations. From a report: The move is far short of the comprehensive AI legislation critics have advocated. But with the frenzy over generative AI continuing to grow, the Biden administration is trying to get a head start on a government response to the fast-moving industry. The Commerce Department's National Telecommunications and Information Administration (NTIA) is asking the public to weigh in on what role the federal government can play to ensure AI algorithms are acting as claimed and not causing harm.

"We really believe in the promise of AI," Assistant Commerce Secretary Alan Davidson, who runs NTIA, tells Axios. "We do believe it needs to be implanted safely and we're concerned that's not happening right now." Davidson said that the government could take a range of actions to shape AI that don't require new legislation -- including mandating audits as part of its procurement standards or offering prizes or bounties to those who find bias within algorithms. "We need to start the hard work of actually putting in place processes that are going to make people feel like the (AI) tools are doing what they say they are going to do, that models are behaving," Davidson said.



  • How to do the same to AI as Commerce did to the internet in general: allow business interests to hijack AI for their own purposes.

    The safety they are looking for is their own; they'll continue giving away customer information to the bad guys a million or so people at a time.

    • by DarkOx ( 621550 )

      Objective number zero should be to fight like hell to make sure nobody slips in a CDA-230-style free pass for corporate AI interests.

      We need to make damn sure there will be accountability this time!

      • Whoever gets there first wins the world, that's who everyone will be "accountable" to.
      • Even if it were mandated to release the code as open source, and to release the models with weights attached, who would be capable of tracing a particular "thought" or "creative generation" of the AI?

        Even the PhDs who make and test these things often cannot figure out exactly why it learned a particular capability, or it may take them significant time and effort to figure out roughly why a single pattern of inference / generation emerges.

        So what is meant by transparency that is meaningful? An AI that can introspect…
    • Are you suggesting that the Commerce department might actually want commerce? God forbid! And God further forbid that people use the resources available to them to earn a living! How terrible!

      Honestly, I don't know how such naive and perverted conceptions of business and commerce persist. Commerce is a good thing.

      • Are you suggesting that the Commerce department might actually want commerce? God forbid! And God further forbid that people use the resources available to them to earn a living! How terrible!

        Honestly, I don't know how such naive and perverted conceptions of business and commerce persist. Commerce is a good thing.

        The present day internet is an example of your premise. Business interests know your most personal data, follow you all over the internet, and then give it away to people who would like to steal your money, because business interests, being business interests, consider comsec a cost center that eats up money they want. So much so that a weird form of security by obscurity has emerged, since the bad guys have so much information, like CC numbers and bank account numbers, by the billions…

  • The opinions are smelly and intertwined.

  • Uh (Score:4, Interesting)

    by backslashdot ( 95548 ) on Tuesday April 11, 2023 @11:40AM (#63441334)

    It's way too soon to be regulating this stuff. As usual, it will end up that to work around the legislation you would need massive capital, and so basically the well-resourced corporations won't be inhibited but the little guy will be. It's like how big Pharma colludes with the FDA to make sure you need about $1 billion to get a new drug through the approval process. Innovative ideas won't get funded because no investor will gamble $1 billion only to have the drug fail at the last FDA step. If we stifle AI with burdensome regulations, only the elite companies will be able to do AI research.

    • If we stifle AI with burdensome regulations, only the elite companies will be able to do AI research.

      Or foreign countries. Doesn't this fuck up our protectionism? Slow down AI development, why? So China can catch up?

    • As usual, it will end up that to work around the legislation you would need massive capital, and so basically the well-resourced corporations won't be inhibited but the little guy will be.

      Yes, that's their goal. "Open"AI has repeatedly stated they intend to be the arbiters of right and wrong, to control the tech fully, and are seeking a monopoly in the field. Similarly, Google, Amazon, and Musk want OpenAI slowed down because there's a six-month waiting list on NVIDIA H100s right now and they don't want to fall behind.

      • Elon Musk is the biggest hypocrite on AI. He claims AI is very dangerous, yet his cars drive you around in traffic using "the most powerful AI in the world" ... how does that work? If AI is so dangerous, why is he putting it in a position where it can make a decision that kills me or others? ChatGPT is not in charge of any life-or-death systems, but Elon wants THAT stifled.

        • He's not a hypocrite, he's a guy who wants to be the last one with a chair when the music stops. Stop thinking your petty morality matters in the slightest: whoever gets AI first owns the world, hands-down and without contest. It's not left-vs-right or us-vs-them, it's "whoever gets it first will be the future owner of your actual life." There are no good choices, and no, this isn't a testimony to there being a good candidate in the running, because there isn't and never will be.
    • by DarkOx ( 621550 )

      Which is exactly why we don't want to regulate this AT ALL.

      Ultimately it's just software; LLMs are just a software transform over massive amounts of input. AI as it exists today is not special or distinct. What is going to happen if Commerce, especially Biden's Commerce, gets its hands around it is that we will get a bunch of regulation that simultaneously makes it impossible for you, me, or anyone not big tech to play, because compliance will be too complex, but at the same time absolves big tech from any civil complaints…

      • That's a valid point, how do they even enforce this without audits and scouring your code repos? As usual the big companies will be able to skirt around it, as they will have the training data sets and utilize various legal loopholes to develop their own AI systems that are "regulatory compliant."

        • by WDot ( 1286728 )
          Right now, based on similar guidance from the FDA/NIST/FTC, the regulations will involve documenting the hell out of it and being held to the standards you document. Characterize the model, the training set, the validation set, the held-out test set. If the data is from human subjects, what are the demographics? Is there sufficient representation of every kind of person in all of your data splits? Is the data stored securely? How do you increase data over time, and manage the data to prevent overtuning to a…
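The documentation-style compliance the comment above describes can be sketched concretely. This is only an illustration; the record layout and field names below are hypothetical, not taken from any actual FDA/NIST/FTC guidance:

```python
# Hypothetical "model documentation" record of the kind such guidance
# asks teams to write down and then be held to. Illustrative only; this
# is not an official schema from any agency.
model_doc = {
    "model": {"name": "toy-classifier", "version": "0.1"},
    "data_splits": {"training": 8000, "validation": 1000, "held_out_test": 1000},
    "human_subjects": True,
    "demographics_reported": ["age", "sex", "region"],
    "storage": "encrypted at rest",  # a claim you can later be audited against
}

# "Compliance" then reduces to checking the document against reality,
# e.g. that the declared splits account for the whole dataset:
declared_total = sum(model_doc["data_splits"].values())
print(declared_total)  # 10000
```

The point is that each field is a falsifiable commitment: an audit compares what was documented against what the deployed system actually does.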
    • How about a simple: "It is far too soon to be crafting regulation. We can't yet conceive of what regulations might actually be needed or what the outcomes might be, so any regulations passed are more likely than not to be ineffective or downright counterproductive."

      Pharmaceuticals are a bad comparison; that industry existed for centuries (in one form or another) before anyone thought of regulating it.

  • What role to play? Don't play a role. The lobbyist cycle takes two years to complete, and revolutionary elements of AI are coming out every year now. Even by the corrupt standards and goals of the federal government, the new regulations will be short-sighted and ineffective. As for actually preventing any problems, let me laugh even harder. [youtube.com]
  • are doomed to repeat it.

    This is increasingly like when the internet became widely available to the general public, and the pearl clutchers nattered on (and on and on) about how we need rules, and licenses to access the internet, and so on.

    And a year or two later, everybody adapted, and life moved on, because even idiots have some survival instinct.

  • Please help me understand how today's generative AI software is dangerous. Obviously you have the academic fraud and plagiarism to counter, and perhaps a bit of misinformation, but what is the danger of an AI that helps you generate words and graphics? If you still trust anything you read on the internet without researching it, you are the problem, not AI. Also, schools have always had plagiarism and issues with people hiring others to write their papers and take their tests. AI just does it better and cheaper.
    • Sure, someone will do it but then you can just arrest them and destroy their creation.

      We're probably not near AGI, but the usual trope is that if you can achieve it purely in multi-node software, you can't reasonably destroy it — the idea being that only if it absolutely requires special hardware do you have a reasonable chance to contain an even-moderately-higher-than-human intelligence AGI once it figures out how to defeat security, especially if it gets smarter as it gets access to more computing resources.

      I suppose we could still turn everything off manually, though, at least until…

  • One word that, in my opinion, regulators throw around too carelessly is "bias." The reason is that there are multiple statistical definitions of bias or fairness, and it is literally mathematically impossible to satisfy them all simultaneously. (See https://arxiv.org/pdf/1609.058... [arxiv.org] -- I am not one of the coauthors.) So which definition of bias? If someone was offering bounties for finding bias in algorithms, the easiest way to win would be to find which definition of bias the model was tuned to minimize, then…
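The tension the comment points at is easy to demonstrate numerically. A minimal sketch on toy data (not drawn from the linked paper): when two groups have different base rates, even a perfect classifier satisfies equal true-positive rates across groups while violating demographic parity, so "find the bias" depends entirely on which definition the bounty uses.

```python
# Toy demonstration: two groups with different base rates. A classifier
# can equalize one fairness metric while necessarily violating another.

def rates(y_true, y_pred):
    """Return (positive-prediction rate, true-positive rate)."""
    pos_pred = sum(y_pred) / len(y_pred)
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    tpr = sum(positives) / len(positives)
    return pos_pred, tpr

# Group A: base rate 0.5; Group B: base rate 0.25 (hypothetical data).
a_true = [1, 1, 0, 0]
a_pred = [1, 1, 0, 0]   # perfect classifier on group A
b_true = [1, 0, 0, 0]
b_pred = [1, 0, 0, 0]   # perfect classifier on group B

a_ppr, a_tpr = rates(a_true, a_pred)
b_ppr, b_tpr = rates(b_true, b_pred)

# Equal true-positive rates across groups (1.0 in both) ...
print(a_tpr, b_tpr)   # 1.0 1.0
# ... but demographic parity fails: selection rates track the base rates.
print(a_ppr, b_ppr)   # 0.5 0.25
```

With unequal base rates, equalizing the selection rates would force errors into one group, which is exactly the impossibility trade-off the cited paper formalizes.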
  • Seems this issue has been addressed already... perhaps that's a starting point.
