White House Unveils Initiatives To Reduce Risks of AI (nytimes.com) 33
The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology. From a report: The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards "the American people's rights and safety," adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference. The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology.
A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments. The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs. Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.
Re:Thanks, Biden! #trump2024 (Score:4, Insightful)
AI means the end of the free market as we know it (Score:2)
As with my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
Yes, there may still be some exchange transactions. But the balance in the whole system will shift, especially toward more subsistence, gift, and planned transactions (since most human labor will no longer have much value given AI-powered robot slaves). And also, sadly, there may be more theft transactions if deeper issues about social equity are not addressed.
Re: (Score:1)
Re: AI means the end of the free market as we know (Score:1)
You should run that through ChatGPT to summarize it. Who is going to read all that? Plus your causation analysis of the Great Depression is incomplete.
What you're missing, though, is that we should be developing Dome technology, like in the films Logan's Run and Zardoz.
typical liberal reaction (Score:1, Interesting)
'let's control and regulate'
AI is something that someone codes up, then runs. It's covered under the First Amendment since Bernstein v. Department of Justice.
Biden must view it as a threat to his reelection, or some people whose jobs are threatened by AI are bribing "10% for the Big guy" in order to keep their jobs via legislation.
Re: (Score:2, Funny)
'let's control and regulate' AI is something that someone codes up, then runs. It's covered under the First Amendment since Bernstein v. Department of Justice. Biden must view it as a threat to his reelection, or some people whose jobs are threatened by AI are bribing "10% for the Big guy" in order to keep their jobs via legislation.
Next thing they will say is you have to wear a special mask and get special injections to use AI.
Re: (Score:2)
Re: (Score:2)
See: https://www.cam.ac.uk/Maliciou... [cam.ac.uk]
Re: (Score:2)
What about the use of AI by corporations and governments to do truly nefarious things without consequence? AI could be used by corporations to "generate a list of job requirements for a future job which would exclude all applicants except those of a particular race, without mentioning that race", or by a government to "Draft a series of laws to suppress the ${civil right} of the citizens in steps so incremental that public dissatisfaction will never rise to the level of adverse action against the drafting
Re: (Score:2)
Re: (Score:1)
Web3isgoinggreat.com (Score:4, Insightful)
In liberal politics there's a phrase: "Nobody ever got a ticker tape parade for preventing a disaster." One of the major problems left-wing politics has is that when you put policies in place to prevent disasters, inevitably people come out of the woodwork to say that we don't need those policies because those disasters aren't happening. They somehow miss the fact that the reason disasters aren't happening is that we have these laws in place to prevent them.
I mean, I think we all agree that an ounce of prevention is worth a pound of cure. But for some reason we don't want to apply that when it comes to our daily lives.
Re: (Score:2)
Did it ever cross your mind that there's a reason why that's our reaction?
Of course it did. We are wondering why you cannot seem to get educated on the history of such efforts, how they fail, and how millions of people often end up dying in these little Communist and Socialist experiments. It's the same outcome as when the right-wing fascists take over and coercively force their will on everyone. It's just some asshole looking to take something from you "because society needs it". The Uniparty wants your individual freedoms and any cash you earn. They are happy to have your pet p
How does it feel !? Learn to code, pals. (Score:2)
AI are people! (Score:2)
The National Science Foundation plans to spend $140 million on new research centers devoted to A.I.
How does this relate to the existing programs on ai.gov? The article doesn't say. Or maybe it does; who the hell knows, it's paywalled.
Four year old unveils initiative (Score:3, Interesting)
The White House today endorsed 4-year-old Billy Smith's proposals to establish new "cookie research centers" and conduct government review of innovative cookie products. Press Secretary Jen Psaki called the proposals "common sense steps to ensure the health, safety and nutrition of the American people in the face of a booming cookie industry."
Smith's proposals come at a time when the $200 billion cookie market has prompted concerns over effects of new ingredients and sweetness levels on children's well-being. However, Psaki said "reasonable people can disagree on the appropriate level of cookie regulation, if any." The administration supports Smith's call for $140 million in taxpayer funding to develop "safer, tastier cookies through science."
Psaki declined to criticize arguments that the proposals could hamper business innovation, instead calling on "all parties to consider the well-being of future generations." Smith is scheduled to meet today with leaders of Nabisco, Oreo, Chips Ahoy and other companies to reiterate the administration's "commitment to balancing business interests with public good."
Industry groups have raised objections, arguing the proposals could increase costs, reduce choices and limit creativity. But White House officials insist they aim "to ensure cookies enrich lives, not end them prematurely." While criticism of overreach seems likely, the proposals have sparked broader debate over responsibilities in an industry that now shapes childhood nutrition as well as experience.
Smith's proposals seem aimed more at expanding cookie access than curbing consumption. However, proponents argue increased government funding and oversight could curb irresponsible marketing toward children and support nutritious innovations. Critics counter that responsibility lies with parents, not regulation.
Smith, 4, helped develop the proposals with guidance from administration officials and healthcare experts. At a press conference, he expressed his goal as "making sure every kid gets to eat lots of yummy cookies, even if they're super fancy cookies!"
Written by an AI
Re: (Score:2)
The thing of it is, it is just enough generic government BS that fits every other topic of the day. It's missing a couple things. I just can't quite put my finger on it though ;-)
Do Something! (Score:5, Interesting)
This is the government responding to calls for them to "Do Something" and the fear of being blamed. The conferences, the proclamations and orders, and even any laws or regulations that get passed, all will do nothing. Well, nothing that is any good, anyway.
Everyone knows that sometime this year, there will be some dead babies due to mothers following medical advice that they got online from a chatbot. This is absolutely going to happen and there's no stopping it.
They can see the finger-pointing coming, and are in a panic because that dead baby could be happening tomorrow. Rest assured, citizens, we are doing something about it!
They're going to pass laws and regulations saying stuff like:
* The AI's output should be informative, logical, and actionable.
* The AI's logic and reasoning should be rigorous, intelligent, and defensible.
* The AI can provide additional relevant details to respond thoroughly and comprehensively to cover multiple aspects in depth.
* The AI always references factual statements to the search results.
Additionally:
* Responses should avoid being vague, controversial, or off-topic. They should also be positive, interesting, entertaining, and engaging.
Hello.
May I introduce you to Sydney.
Re: (Score:1)
> I'm not sure how a chatbot here is any different from a web search.
A web search takes you to a site, a chatbot doesn't. I realize Google et al. now have automated summaries, but maybe that's a dumb idea for medical advice, AI or not.
> Before that I could have called up a random friend.
Your random friend probably didn't claim to be a medical expert, and if they did and lied, they could be jailed for practicing medicine without a license.
This is the government responding (Score:3)
An
Wonder if anyone there is smart enough to (Score:3)
Government grabs whale by the tail. Mayhem ensues. (Score:3)
The fact of the matter is that, regardless of consequences, AI can not and will not be a completely controlled thing.
The USA is a country that can't stop gun violence, drug use, or enforce reasonable antitrust laws. While they can hire expertise to make recommendations, few in government are technically savvy enough to grasp the full implications of ever improving AI over the next decade or two.
In the end, when AI has stealth replaced most governmental function and officials start realizing that more and more, they are just figureheads while AI makes decisions behind the scenes, there may be some faltering, ineffective pushback.
It won't make any difference.
Re: (Score:2)
The fact of the matter is that, regardless of consequences, AI can not and will not be a completely controlled thing.
Indeed!
When thinking about this years ago, I came to the conclusion that creating an AI is akin to having a child. One can try to inculcate good behaviours in them, one can try to bestow on them a moral compass, one can try to instil in them a sense of fairness, of right and wrong, but at the end of the day they are going to grow up into an independent actor. This lack of control is unsettling, but inevitable.
Of course, we're not quite at the stage of 'true' AI, yet, but that distinction is becoming more and
It's not AI / AI Winter (Score:3)
How would they even define "AI" so as to regulate it? It's almost impossible.
I remember the previous AI hype bubble, about 30 years ago. Every company that could afford it, including every Fortune 500, had to get them some of that "AI" as fast as possible. It was going to magically solve all problems.
Of course, while AI had its place and did provide some very useful solutions in some cases, the immense hype was way off. So when corporate disappointment came, and the hype bubble burst, there was a huge backlash.
"AI Winter" came, and any systems, products, or technology that called itself "AI" was verboten, rejected, blacklisted, and not to be touched with a 50 foot pole. "We tried that AI stuff and it wasn't magic. We don't like AI anymore and if you even speak that word we will throw you out the window!"
This even affected anything that could be related to AI, such as certain programming languages. I remember writing error handlers for user-facing interfaces to make sure nothing got through, to the point of faking up Java error messages, so that if the very worst happened and an error leaked, the user would think the program was written in Java.
So tech companies with useful AI learned to call it anything else. You had to be careful even with words like "intelligent" or "expert". But if you could disguise your AI, call it something else, and learn the right marketing code speak, you could still sell your technology. But a lot of perfectly good companies and products died because there was no disguising them.
"AI" is a nebulous term, and can mean just about anything. Lots of software these days has "AI" in it, even if it's not the NN variety. People certainly argue about using the term "AI" to describe ChatGPT.
You can't outlaw or regulate math.
And people can call their technology anything they want.
If the government tries to regulate "AI", jsut call it something else. Any descriptive definition in any regulation is going to just amount to "software". And that won't fly. It's a pointless exercise.
Besides, all the regulations are going to say anyway is some rather vague meaningless crap.
Low-hanging fruit (Score:1)
> How would they even define "AI" so as to regulate it?
That's what the task force is supposed to study. Good luck.
> It's almost impossible.
I suppose laws can be made against distributing unvetted content. Maybe this should apply to user-submitted material also if readership is high enough. A content hoster can't be expected to check every message, but they can check the most popular ones.
Perhaps a distinction should be made between a website hoster and a content hoster. A content hoster would be like
Re: (Score:2)
You seem to think it's about spam bots.
It's not that kind of bot.
It's Bing, Google, Wikipedia, and every news outlet and medical reference site, and every web site that you go to. And the AI programs that your doctor will use. Those are the "chat bot" so-called "AI" programs.
There are tens of thousands of other AI programs used by people every day, unrelated to the chat bots. That's the software in your toaster, electric shaver, dishwasher and clothes dryer. Netflix, Amazon, those are AI. The authorization
Just solve it the American Way (Score:1)
Re: (Score:2)
All the AI companies have to do to have zero liability is follow the lead of Pfizer and Moderna. Money talks.
Pay off the right politicians to get included under section 230.
Reminds me of a quote (Score:2)