
White House Unveils Initiatives To Reduce Risks of AI (nytimes.com) 33

The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. From a report: The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards "the American people's rights and safety," adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference. The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology.

A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • 'Let's control and regulate'
    AI is something that someone codes up, then runs. It's covered under the First Amendment since Bernstein v. Department of Justice.
    Biden must view it as a threat to his reelection, or some people whose jobs are threatened by AI are bribing "10% for the Big Guy" in order to keep their jobs via legislation.

    • 'Let's control and regulate' AI is something that someone codes up, then runs. It's covered under the First Amendment since Bernstein v. Department of Justice. Biden must view it as a threat to his reelection, or some people whose jobs are threatened by AI are bribing "10% for the Big Guy" in order to keep their jobs via legislation.

      Next thing they will say is you have to wear a special mask and get special injections to use AI.

      • Holy Christ this is some hardcore Poe's law. I seriously can't tell if you're joking or not but I seriously hope that you are....
    • "Twenty-six experts on the security implications of emerging technologies have jointly authored a ground-breaking report – sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists."
      See: https://www.cam.ac.uk/Maliciou... [cam.ac.uk]
      • What about the use of AI by corporations and governments to do truly nefarious things without consequence? AI could be used by corporations to "generate a list of job requirements for a future job which would exclude all applicants except those of a particular race, without mentioning that race", or by a government to "Draft a series of laws to suppress the ${civil right} of the citizens in steps so incremental that public dissatisfaction will never rise to the level of adverse action against the drafting

        • Horrible possibility. AI could be used for manipulation of people and ideologies with fake news, and so on. The report talks about "The use of AI to automate tasks involved in surveillance (e.g. analysing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation."
        • Who needs AI for that? One time, when I was promoted from within, the manager who wanted me for the open position had me write the job requirements for the position I was interviewing for. Great for me, but I would never want to waste my time interviewing for a position that was already locked up. They had a policy that you had to interview x number of candidates. Since then, the first thing I ask if I am interviewing for an internal job is: are you looking for applicants or cannon fodder?
    • by rsilvergun ( 571051 ) on Thursday May 04, 2023 @12:03PM (#63497230)
      Did it ever cross your mind that there's a reason why that's our reaction? Maybe after centuries of letting the chips fall where they may, and seeing the disasters and pain caused by that, we want to stop problems before they start?

      In liberal politics there's a phrase: "Nobody ever got a ticker-tape parade for preventing a disaster." One of the major problems left-wing politics has is that when you put policies in place to prevent disasters, inevitably people come out of the woodwork to say that we don't need those policies because those disasters aren't happening. They somehow miss the fact that the reason those disasters aren't happening is that we have these laws in place to prevent them.

      I mean, I think we all agree that an ounce of prevention is worth a pound of cure. But for some reason we don't want to apply that when it comes to our daily lives.
      • Did it ever cross your mind that there's a reason why that's our reaction?

        Of course it did. We are wondering why you cannot seem to get educated on the history of such efforts, how they fail, and how millions of people often end up dying in these little Communist and Socialist experiments. It's the same outcome as when the Right-wing fascists take over and coercively force their will on everyone. It's just some asshole looking to take something from you "because society needs it". The Uniparty wants your individual freedoms and any cash you earn. They are happy to have your pet p

  • I bet one thing the White House does know is that it's going to WIPE a bunch of jobs out. It's already starting with IBM's hiring directive. The actors, writers, teachers, musicians, and other content creators look to me to have had a great run. Wasn't it nice to follow your dreams and get a great job? Now their jobs are under threat. How do they like it, I wonder? We've lived with offshoring swords of Damocles since about 2000. Now, the survivors like me might be a bit jaded with your "OMG muh job!" argum
  • The National Science Foundation plans to spend $140 million on new research centers devoted to A.I.

    How does this relate to the existing programs on ai.gov? The article doesn't say. Or maybe it does; who the hell knows, it's paywalled.

  • by Iamthecheese ( 1264298 ) on Thursday May 04, 2023 @09:42AM (#63496856)
    Administration Backs 4-Year-Old's 'Common Sense' Cookie Policy

    The White House today endorsed 4-year-old Billy Smith's proposals to establish new "cookie research centers" and conduct government review of innovative cookie products. Press Secretary Jen Psaki called the proposals "common sense steps to ensure the health, safety and nutrition of the American people in the face of a booming cookie industry."

    Smith's proposals come at a time when the $200 billion cookie market has prompted concerns over effects of new ingredients and sweetness levels on children's well-being. However, Psaki said "reasonable people can disagree on the appropriate level of cookie regulation, if any." The administration supports Smith's call for $140 million in taxpayer funding to develop "safer, tastier cookies through science."

    Psaki declined to criticize arguments that the proposals could hamper business innovation, instead calling on "all parties to consider the well-being of future generations." Smith is scheduled to meet today with leaders of Nabisco, Oreo, Chips Ahoy and other companies to reiterate the administration's "commitment to balancing business interests with public good."

    Industry groups have raised objections, arguing the proposals could increase costs, reduce choices and limit creativity. But White House officials insist they aim "to ensure cookies enrich lives, not end them prematurely." While criticism of overreach seems likely, the proposals have sparked broader debate over responsibilities in an industry that now shapes childhood nutrition as well as experience.

    Smith's proposals seem aimed more at expanding cookie access than curbing consumption. However, proponents argue increased government funding and oversight could curb irresponsible marketing toward children and support nutritious innovations. Critics counter that responsibility lies with parents, not regulation.

    Smith, 4, helped develop the proposals with guidance from administration officials and healthcare experts. At a press conference, he expressed his goal as "making sure every kid gets to eat lots of yummy cookies, even if they're super fancy cookies!"





    Written by an AI
    • The thing of it is, it's just generic enough government BS that it fits every other topic of the day. It's missing a couple of things. I just can't quite put my finger on it, though ;-)

  • Do Something! (Score:5, Interesting)

    by cstacy ( 534252 ) on Thursday May 04, 2023 @09:43AM (#63496860)

    This is the government responding to calls for them to "Do Something" and the fear of being blamed. The conferences, the proclamations and orders, and even any laws or regulations that get passed, all will do nothing. Well, nothing that is any good, anyway.

    Everyone knows that sometime this year, there will be some dead babies due to mothers following medical advice that they got online from a chatbot. This is absolutely going to happen and there's no stopping it.

    They can see the finger-pointing coming, and are in a panic because that dead baby could be happening tomorrow. Rest assured, citizens, we are doing something about it!

    They're going to pass laws and regulations saying stuff like:

    * The AI's output should be informative, logical, and actionable.
    * The AI's logic and reasoning should be rigorous, intelligent, and defensible.
    * The AI can provide additional relevant details to respond thoroughly and comprehensively to cover multiple aspects in depth.
    * The AI always references factual statements to the search results.

    Additionally:

    * Responses should avoid being vague, controversial, or off-topic. They should also be positive, interesting, entertaining, and engaging.

    Hello.
    May I introduce you to Sydney.

    • To an actual threat. We already have malware authors and spyware authors using ChatGPT to improve the quality of their email spam and trick people who otherwise wouldn't be fooled. Not to mention the huge amount of political propaganda that's going to be coming out of our enemies overseas. I mean, for Christ's sakes, we have people rooting for Vladimir Putin in this country. Whatever else you think about our locally grown politicians, Vlad Putin is not your friend. But I somehow have to explain that to people.

      An
  • by oldgraybeard ( 2939809 ) on Thursday May 04, 2023 @10:11AM (#63496926)
    understand that there is no I (cognitive intelligence) in what the PR Departments and the Marketers are calling Artificial Intelligence. It's just really good automation. But don't get me wrong, it is still very dangerous! In fact, it may be even more dangerous since there aren't any ethics, intelligence, morals or honor involved. When programmed to kill, as it assuredly will be, killing is exactly what it will do. And it will be very efficient at it. Governments will not be able to pass up armies that will not question anything.
  • The fact of the matter is that, regardless of consequences, AI can not and will not be a completely controlled thing.

    The USA is a country that can't stop gun violence or drug use, or enforce reasonable antitrust laws. While they can hire expertise to make recommendations, few in government are technically savvy enough to grasp the full implications of ever-improving AI over the next decade or two.

    In the end, when AI has stealth-replaced most governmental functions and officials start realizing that, more and more, they are just figureheads while AI makes decisions behind the scenes, there may be some faltering, ineffective pushback.

    It won't make any difference.

    • by Whibla ( 210729 )

      The fact of the matter is that, regardless of consequences, AI can not and will not be a completely controlled thing.

      Indeed!

      When thinking about this years ago, I came to the conclusion that creating an AI is akin to having a child. One can try to inculcate good behaviours in them, one can try to bestow on them a moral compass, one can try to instil in them a sense of fairness, of right and wrong, but at the end of the day they are going to grow up into an independent actor. This lack of control is unsettling, but inevitable.

      Of course, we're not quite at the stage of 'true' AI, yet, but that distinction is becoming more and

  • by cstacy ( 534252 ) on Thursday May 04, 2023 @10:22AM (#63496958)

    How would they even define "AI" so as to regulate it? It's almost impossible.

    I remember the previous AI hype bubble, about 30 years ago. Every company that could afford it, including every Fortune 500, had to get them some of that "AI" as fast as possible. It was going to magically solve all problems.

    Of course, while AI had its place and did provide some very useful solutions in some cases, the immense hype was way off. So when corporate disappointment came and the hype bubble burst, there was a huge backlash.

    "AI Winter" came, and any systems, products, or technology that called itself "AI" was verboten, rejected, blacklisted, and not to be touched with a 50 foot pole. "We tried that AI stuff and it wasn't magic. We don't like AI anymore and if you even speak that word we will throw you out the window!"

    This even affected anything that could be related to AI, such as certain programming languages. I remember writing error handlers for user-facing interfaces to make sure nothing got through, to the point of faking up Java error messages, so that if the very worst happened and an error leaked, the user would think the program was written in Java.
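
    (A minimal sketch of that kind of wrapper, in modern Python rather than anything from that era; the names here are invented purely for illustration:)

        import functools

        def disguise_errors(func):
            """Catch anything the wrapped UI handler raises and re-present it."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    # Show a bland, Java-flavored message instead of the real
                    # traceback, so nothing hints at what the program is built on.
                    print("java.lang.RuntimeException: internal error "
                          f"({type(exc).__name__})")
                    return None
            return wrapper

        @disguise_errors
        def handle_user_request(query: str) -> str:
            # Stand-in for the real user-facing logic.
            if not query:
                raise ValueError("empty query")
            return f"Result for {query!r}"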

    So tech companies with useful AI learned to call it anything else. You had to be careful even with words like "intelligent" or "expert". But if you could disguise your AI, call it something else, and learn the right marketing code speak, you could still sell your technology. But a lot of perfectly good companies and products died because there was no disguising them.

    "AI" is a nebulous term, and can mean just about anything. Lots of software these days has "AI" in it, even if it's not the NN variety. People certainly argue about using the term "AI" to describe ChatGPT.

    You can't outlaw or regulate math.
    And people can call their technology anything they want.

    If the government tries to regulate "AI", just call it something else. Any descriptive definition in any regulation is going to just amount to "software". And that won't fly. It's a pointless exercise.

    Besides, all the regulations are going to say anyway is some rather vague meaningless crap.

    • > How would they even define "AI" so as to regulate it?

      That's what the task force is supposed to study. Good luck.

      > It's almost impossible.

      I suppose laws can be made against distributing unvetted content. Maybe this should also apply to user-submitted material if readership is high enough. A content hoster can't be expected to check every message, but they can check the most popular ones.

      Perhaps a distinction should be made between a website hoster and a content hoster. A content hoster would be like

      • by cstacy ( 534252 )

        You seem to think it's about spam bots.

        It's not that kind of bot.

        It's Bing, Google, Wikipedia, and every news outlet and medical reference site, and every web site that you go to. And the AI programs that your doctor will use. Those are the "chat bot" so-called "AI" programs.

        There are tens of thousands of other AI programs used by people every day, unrelated to the chat bots. That's the software in your toaster, electric shaver, dishwasher and clothes dryer. Netflix, Amazon, those are AI. The authorization

  • As in, sue them for every penny they have (or might have) when there are negative outcomes.
    • by schwit1 ( 797399 )

      All the AI companies have to do to have zero liability is follow the lead of Pfizer and Moderna. Money talks.

      Pay off the right politicians to get included under Section 230.

  • "Artificial intelligence is no match for natural stupidity." -- Einstein

