

Most US Workers Avoid AI Chatbots Despite Productivity Benefits, Pew Finds (pewresearch.org)
Most American workers are not embracing AI chatbots in their jobs, with 55% rarely or never using these tools and 29% completely unfamiliar with them, according to a Pew Research Center survey released Tuesday.
Only 16% of workers report regular use of AI chatbots like ChatGPT, Gemini or Copilot. Adoption is highest among younger workers (23% of those aged 18-29) and those with post-graduate degrees (26%). Among users, research (57%), editing content (52%), and drafting reports (47%) top the list of applications. While 40% find chatbots extremely or very helpful for working faster, just 29% say they significantly improve work quality.
For the majority who don't use AI chatbots, 36% cite lack of relevance to their job as the primary reason. Employer attitudes remain largely neutral, with half neither encouraging nor discouraging usage. The technology sector leads in workplace adoption, with 36% of employers actively promoting chatbot use, followed by financial services (24%).
Small wonder (Score:2)
A lot of them only just finally figured out the slide-rule.
Re: (Score:2)
Re: Small wonder (Score:2)
Probably for the best. They will eventually replace us all; no need to make it happen faster.
TFS brought to you by ChatGPT (Score:2)
Re: (Score:2)
https://www.tiktok.com/@tmz/vi... [tiktok.com]
Sorry for the TikTok link, but it's the only decent video of it I could find. I understand if you don't want to click it. Two AI bots realise the other is a bot and start talking to each other like modems or something.
Because they aren't good at small talk (Score:3)
Re: (Score:2)
In addition to that, the bots are often annoying and give irrelevant or even incorrect answers to the specific thing I'd like to know.
Re:Because they aren't good at small talk (Score:4, Insightful)
Re: (Score:1)
There are still people who can't even effectively use a search engine.
Let's just say that these people don't have good critical thinking skills.
And no, no... I do not mean old people. In fact, it's the younger people who get the totality of their information from things like Facebook feeds.
Re: (Score:2)
chat bot's default helpful/compliance with no motivation or underlying values
Helpful compliance? I'd characterize it as malicious compliance. Recent example from when I was playing with Gemini (mildly paraphrased to cut the verbiage it produced, including numerous apologies for getting things wrong, which were even more annoying since it very clearly was not sorry at all):
Me: How significant was the Higgs discovery result in the paper from ATLAS?
Gemini: It was very significant.
Me: That's not precise enough, what was the exact significance given
Gemini: The
Re: (Score:2)
That's why I do not use AI chatbots. It is like having a chat with an idiot child who is going out of their way to be stupid.
At least the idiot kid knows what it's doing and is having fun with it.
Re: (Score:2)
Hot take: I think it is the other way around. Combine a lack of critical thinking skills and self-awareness with a chatbot's default helpful compliance, with no motivation or underlying values to lead the user unless specifically requested (see critical thinking and self-awareness), and you have the intersection of "garbage in, garbage out" and "ask a stupid question, expect a stupid answer". AI doesn't judge, which is a mixed bag.
Sounds like most of my colleagues... a desire to be helpful would be a massive improvement.
Re: (Score:1)
The only bot I want to chat with is a sex box. And it needs to have a human-like female body.
It also needs to be open-minded.
Re: Because they aren't good at small talk (Score:2)
Re: (Score:2)
Re: (Score:2)
The only bot I want to chat with is a sex box. And it needs to have a human-like female body.
It also needs to be open-minded.
I'd think the benefit was you don't have to talk to it.
The 29% (Score:2)
Are the people who are too ignorant to notice the hallucinations.
Re: (Score:2)
I weep for the next decade where the US slides face first into idiocracy screaming "Americuh first!"
The rest of the world is like "who asked for this crap?"
When the calculator came along, we were not allowed to use those in classrooms until 1996, despite it being invented 30 years prior.
When the computer came along, we still had typewriter labs as late as 1994. They then bought computers that were already out of date in 1988 and kept them past 2000, when Y2K forced their hand.
It doesn't surprise me that chatbot
Re:The 29% (Score:5, Insightful)
I weep for the next decade where the US slides face first into idiocracy screaming "Americuh first!"
The rest of the world is like "who asked for this crap?"
When the calculator came along, we were not allowed to use those in classrooms until 1996, despite it being invented 30 years prior. When the computer came along, we still had typewriter labs as late as 1994. They then bought computers that were already out of date in 1988 and kept them past 2000, when Y2K forced their hand.
It doesn't surprise me that chatbot garbage has seen slow adoption, because the previous precedents, the computer and the calculator, didn't revolutionize anything. They were largely a cost-saving benefit, removing a lot of the physical paper pushing, mailing, and inter-office memos that previously required additional staff and office space to deal with. The computer, once it had a flat screen, literally allowed crappy employers to shrink the 'cubicle' down to a 3' x 2' space big enough for just the screen and keyboard and some elbow room.
What does the chatbot do that you can't do with existing people, other than get rid of them? Nothing. We've had garbage-tier "knowledge base" systems at call centers for literal decades, and staff won't use them over the legacy material that could be read all on one page and Ctrl-F'd through. A chatbot makes that 100 times more frustrating.
Thus far, the one use-case I've seen that they're actually good at is "executive summary" reports. So long as those reports don't require any actual technicalities. But a lot of management types are being convinced by the pushers that I call the AI prophets that it will be a way to either do tons more work with current staff, or lose lots of staff and still get more done, and the puddles of drool filled with day-dreams of dollar signs seem to form around them faster than janitorial staff can clean them up.
Re: (Score:2)
I do find the AI output at several search engines to be helpful.
I'd say that 80% of the time it's helpful. Unfortunately, if it's actually something I'm researching at any depth, I have to fact-check it. A sanity check is often good enough, but I've found instances where it's wrong.
I'd happily correct the "AI", and sometimes I'd take the time to do so, but they don't seem to have a way to do that. I think that might help; or people might screw with it, making it even less accurate.
Re: (Score:2)
Re: (Score:1)
Thus far, the one use-case I've seen that they're actually good at is "executive summary" reports.
I'm in the 50% that have never tried an LLM. Many of my colleagues use them regularly for proofreading.
I'm tempted by automated meeting summary/minutes. But until the security and privacy issues are addressed, no thank you.
Re: (Score:2)
Thus far, the one use-case I've seen that they're actually good at is "executive summary" reports.
I'm in the 50% that have never tried an LLM. Many of my colleagues use them regularly for proofreading.
I'm tempted by automated meeting summary/minutes. But until the security and privacy issues are addressed, no thank you.
I certainly won't use it for anything where privacy is a concern at all. I refuse to use them for my own personal work as an author, and gave up my editing / proofing software when they started shoveling AI nonsense into everything and claiming they need to upload all my documents to their servers in order to better help me. I've gone back to the arduous task of manual edits thanks to that bullshit. But at work, since the management insists, I'm happy to throw write-ups at it to summarize. It's not like we
Re:The 29% (Score:5, Interesting)
Chatbots don't do anything new, they just do it faster and cheaper. They may show a level of competence that the user does not themselves possess, even if they can't be said to possess expertise.
It's like wikipedia. In a mission-critical sense it's not 100% reliable. However for 99.8% of real world use cases, it's totally fine.
And chatbots have not had slow adoption at all. Even the Atari 2600 took something like 5 years before it got big; LLM chatbots came out of nowhere two years ago.
Re: (Score:3)
It's like wikipedia. In a mission-critical sense it's not 100% reliable. However for 99.8% of real world use cases, it's totally fine.
That is vastly overestimating the reliability of ChatGPT, and Gemini is even worse.
Re: (Score:1)
It's like wikipedia. In a mission-critical sense it's not 100% reliable. However for 99.8% of real world use cases, it's totally fine.
That is vastly overestimating the reliability of ChatGPT, and Gemini is even worse.
IME, both statements are true. The real world use case is proofreading English language for non-native English speakers.
Re:The 29% (Score:4, Insightful)
Agreed, but Wikipedia is miles ahead of any of the chat bots at this point.
If you read a Wikipedia article, you may find some of the details to be incorrect, but usually only if you do some significant digging. Usually such errors are about something "human" rather than something "physical", too (so there's room for interpretation). You couldn't base a lawsuit on Wikipedia alone, but you can argue a point in the pub with it perfectly well.
All of the chatbots I've tried get entire subject areas wrong on occasion - so you have to double check just about everything they say. You might be able to sub for the crazy drunk guy at the pub with it (or possibly certain politicians), but no more than that.
Re: (Score:2)
Re: (Score:2)
When the calculator came along, we were not allowed to use those in classrooms until 1996, despite it being invented 30 years prior.
Remember when teachers would cling to the line "you won't have a calculator in your pocket all the time"? Joke's on you, teach.
Re: (Score:2)
Are the people who are too ignorant to notice the hallucinations.
This gets posted to every article on LLMs, and all I can figure out is that people must be using the tools incorrectly. I find the output from ChatGPT 4 to be excellent. Is it always right? Of course not. But I give it at least a 9 out of 10 on accuracy. But what is also incorrect is almost every forum post about whatever I'm trying to solve. I ask ChatGPT a question and I get an answer. I run a search and I get page after page of ads and forum posts of angry people bickering over different wrong ans
Re: (Score:2)
I'm sure it's getting better, but I'll ask it what seem like pretty simple questions "similar to" (since my AI system that I use doesn't save history, this is by memory):
On a Juniper EX4200 how do I set the PVID on a port?
And it'll give me commands that straight up don't work.
Similarly with "On Alma Linux 9, how do I install bla" and it'll list packages that don't exist, or at least don't exist in the default repositories and it doesn't tell me what repository is needed.
I've asked it to write AutoIT scripts
Re: (Score:1)
Re: (Score:2)
I give it a 4/10 on accuracy. Total crap. Might as well ask the Magic 8 Ball.
Re: (Score:2)
Re: (Score:2)
Or the occurrence of hallucinations they have is even higher.
Stop calling it hallucination (Score:2)
So many limitations... (Score:5, Insightful)
So far I don't find them very useful (Score:3)
That's for pretty basic code, if you're writing some fancy math stuff I gather it can do a better job because math doesn't change all the time. But if you're doing application programming with an unfamiliar framework you're still stuck looking up docs and recent tutorials.
Also I'm not going to hand out any info that isn't generic because I can't risk letting their AI get data it shouldn't have, like personal info.
Re: So far I don't find them very useful (Score:4, Insightful)
Re: (Score:2)
For programming you need to give it fairly clear instructions and keep an eye on what it's suggesting, not only do you need to understand the code you're using but it can be very stubborn about certain workflows.
It's best for learning a popular framework, it's worst for working with a niche framework. For instance, if you're doing Polars it will constantly suggest Pandas syntax.
Re: (Score:2)
No, that is not enough. No matter what you do, the LLM will always make small errors in code, no matter how short and simple the program is; I have seen this with a single function doing nothing but a few basic math operations. This happens simply because of how an LLM works. I think they are trying to fix this by running the code through the AI several times to fix those errors, but that sometimes fails for the same reason it fails on the first try. An LLM is not accurate, and code needs to be accurate.
No
Re: (Score:2)
No, that is not enough. No matter what you do, the LLM will always make small errors in code, no matter how short and simple the program is; I have seen this with a single function doing nothing but a few basic math operations. This happens simply because of how an LLM works. I think they are trying to fix this by running the code through the AI several times to fix those errors, but that sometimes fails for the same reason it fails on the first try. An LLM is not accurate, and code needs to be accurate.
Note that even best programmers make similar mistakes. The difference is that they can run the code, see that it has errors, locate the errors and fix them.
For Python at least I think ChatGPT will actually run snippets on its own interpreter, but it's true that LLMs do make errors. It can still give you useful input to incorporate into your code base.
To fix this, they would need something similar to AlphaFold: a system that tries to solve the problem from different perspectives. AlphaFold, to my understanding (a simplified explanation; it is more complex than this), first tries to place the atoms in locations that satisfy the basic requirements while completely ignoring their position in the molecule chain. Then it runs trigonometry and other checks to see whether the positions are possible, and this gives feedback to the first stage, which then corrects its answer; this loop is repeated to fine-tune the answer. This complex system gives you over 90% accuracy, which is considered good enough.
I don't know a lot about AlphaFold other than that it uses RL, which means the model has a way to check the quality of its work. In theory you could do the same with an LLM by adding an agent on top; the question is how you evaluate the answer to maximize the return.
The other issue is RL on its own is infamously resource hu
Re: (Score:3)
I tried using one to write some simple VHDL code. It could not get a simple entity description syntactically correct.
Re: (Score:3)
I tried to get it to help me learn to use C++ coroutines. My thinking was that since cppreference and the final draft are public, it could help me distill them into a quick-start guide.
Joke was on me: first contact with an LLM, no idea how it works. It kept mixing multiple languages along with pre-final-draft syntax, and essentially made me lose half a day, plus another two days understanding WTF LLMs actually are, before I actually read the C++ documentation.
Re: (Score:2)
Which one did you use? The last time I had chatgpt or copilot mix those up was last summer. How you structure your prompt is important too. Something like "I know C++ but coroutines are new to me. Let's write a program together that checks the status of an API endpoint and also serves HTTP traffic, each running on their own coroutine. The Api endpoint is xyz.com/v1/myapi and the json output looks like this { foo: bar } and we want to make sure bar is > 10. The http server should respond "green" if bar is
Re: (Score:2)
Yes, it was last summer.
I put your example into ChatGPT, and indeed it spat out seemingly correct code. But that code is absolutely useless for understanding the basics of coroutines, promises, and awaiters, as it basically just did the boilerplate to replace "return" with "co_return".
Which brings me to my initial point: unusable for understanding something you don't already know well, when you've just started fishing around for how the thing could work.
I asked it for a primer on coroutines, it did do a good job. (At least
16% of users are already using AI Chatbots (Score:2)
There are two ways to spin this data. One is "hardly anyone is using this bleeding-edge technology with a bunch of rough edges".
The other is "wow look at that, 16% of all workers are already finding value in AI Chatbots despite all the hurdles needed to use them".
Look at typewriters, laser printers, computers, etc. It took literal decades for them to be introduced into offices as key tools, whereas AI chatbots already have 16% penetration, possibly 20% by this point.
Re: (Score:2)
Look at typewriters, laser printers, computers etc. It took literal decades for them to be introduced into offices as key tools
All of those were held back by affordability issues.
Re: (Score:2)
Running an LLM on an on-premises GPU is likewise "held back by affordability issues."
Re: (Score:1)
Chatbots are useless (Score:1)
They may be excellent wordsmiths, but their output contains a lot of nonsense
They are kinda like an overconfident bullshitter at a party, trying to impress a drunk girl
Newer deep research tools are starting to get good and useful
Re: (Score:2)
They may be excellent wordsmiths,
No.
Re: (Score:1)
They may be excellent wordsmiths,
No.
Agree on both. But we may compromise on "adequate grammar correctors." My non-native English speaking colleagues feel they help.
Despite Productivity Benefits (Score:5, Insightful)
Whose productivity? I don't have all day to reformat my query until your chatbot understands the issue. Your productivity may have gone through the roof by firing all your support staff. Mine, not so much.
Re: Despite Productivity Benefits (Score:5, Insightful)
Good (Score:1)
Seems Idiocracy is still some way off.
Re: (Score:2)
Seems Idiocracy is still some way off.
False. It just so happens that even Idiocracy isn't looking for the artificial idiot. Yet.
Re: (Score:2)
Trump and the Morons (TM) did not get the 90% (or so) of the votes they would have to get for full idiocracy. But sure, this election has been a major milestone on the way there.
Creepy (Score:4, Funny)
They make me feel uncomfortable and like my brain is going to atrophy from disuse.
Re:Creepy (Score:4, Interesting)
Had the exact opposite effect here: learned more stuff in the past two years than in the 10 before that. It is like having an infinitely patient private tutor.
Re: Creepy (Score:2)
Well, as long as you're OK with the tutor being wrong sometimes.
Re: (Score:2)
Like humans then?
Bullshit (Score:2)
I am 100% convinced that this scales by reading and writing ability and typing speed as well because they usu
Re: (Score:2)
Okay, I work in IT, but still. I can type better, faster, and more accurately than it can, without coming across as overly fake and without having to double-check whether I left in any inaccuracies. By the time I pull it up, describe what I want (with typing and flawless spelling so it knows what I mean), and then proofread the output, I could have just written the email or the summary or the documentation myself. I am 100% convinced that this scales with reading and writing ability and typing speed as well, because people with low skills usually don't use a mouse, copy and paste, or compose requests to chatbots very quickly either, and they certainly aren't light-speed editors.
They're very helpful for the two finger typists among us though. They add lots of extra verbiage when all that needs to be said is, "I did that thing you requested." Which makes it easy to spot who on the staff is using them to write email responses.
ChatGPT responds (Score:2)
I asked ChatGPT to compose a comment on this article for me:
"Looks like AI chatbots are still the new kid on the block for most American workers—only 16% are on the AI hype train. Maybe it’s time for a crash course in ChatGPT 101, but then again, it’s hard to embrace a robot that can’t fetch your coffee! "
My Reasons-- (Score:5, Insightful)
I avoid generative AI in my work because:
1. I know more within my specialization than ChatGPT and similar systems. My work is not easily scrapable online.
2. My human interaction is more personable and adaptable to every different person.
3. I enjoy my work and enjoy improving myself.
I use generative AI when...
1. I'm having a difficult time getting a good search result. I will use ChatGPT as a pseudo-search engine to learn some more-appropriate terms to search and some websites to investigate myself. That means that I will sometimes stumble into ChatGPT totally bullshitting a response, but other times it directs me well.
2. To summarize very large documents and highlight key concepts for further investigation. This is great when analyzing old contracts.
I never...
1. Accept what ChatGPT says as truth. Its goal is not to be truthful... it's to seem confident.
2. I never say, "ChatGPT says..." as if that's enough of a response. Instead, I might say, "I ran this through ChatGPT which directed me to this website/programs/department where I found this information."
Re:My Reasons-- (Score:4, Interesting)
Using a generative AI in your work requires, ironically, higher-order thinking skills. It's perfectly safe for a highly skilled programmer to use generative AI because he can understand the code and how it relates to both the prompt and the system requirements. But I'm not comfortable with the idea of people without years of hand-coding experience shoving prompts into an AI code generator and accepting responses they don't understand.
I think this poses a kind of Catch-22 for the future. In the short term, with AI's ability to handle volumes of data and cases a human being couldn't, we'll see AI-driven advances in a lot of fields. But in the long term, the expertise to understand and critique what AI gives us will wane. When entry level and journeyman coder positions have been gone for decades, and people with pre-AI experience leave the labor market, the software world will be flying blind.
Re: (Score:2)
My friend is big into AI. I lost my job, so he, as part of reviewing my resume, fed it through ChatGPT.
I will admit, ChatGPT did revise bits of my resume - I looked at its output and saw that it had re-worded something I had problems with to something much better. But that's all I took from it. The rest of the output I rejected outright because it was garbage and the hallucinations it added were horrible.
So I admit, I had a little help from AI, but I also had final control - at no point did ChatGPT touch my
Re: (Score:2)
I will admit, ChatGPT did revise bits of my resume - I looked at its output and saw that it had re-worded something I had problems with to something much better.
As hey! ( 33014 ) posted above, "Using a generative AI in your work requires, ironically, higher order thinking skills" and you demonstrated those higher order thinking skills by recognizing the limited improvements where they were. Chances are that you will remember that particular input and thus will be better in that area in the future.
That's a fantastic use for generative AI and it's how any of us should be open to using it. Where it becomes detrimental to humanity (and yes, I mean that with its full we
That's because there are little gains (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
Good for coding and research, dubious for writing (Score:2)
I'm a little surprised that folks use it for drafting reports since I hated it for that (admittedly small part of my job).
My issue with using LLMs for writing is if I need to write something it's because there's a specific thing I want to say!
So I'm either telling the LLM what I want it to say and having it garble or fluff it up, or I'm just writing it myself.
The LLM is fine for editing, but to generate the original text I don't really see the value.
y'all sound like UFO nuts (Score:2)
Re: (Score:2)
The problem I find is - sure rubber ducking an issue is useful.
But so far with AI I often find that by the time I can give it enough information and details to produce something useful, I've basically already written what I need in the prompt.
Part of this is because there's lots of internal knowledge that isn't publicly available for the AI to have even with Web Search that's necessary for answering a ticket or writing documentation or whatever. And don't forget about the issues with what you're even allowe
Re: (Score:2)
Is it learning from me? - why I avoid them (Score:1)
I have no problem with an isolated copy of a chatbot that will be erased or reset after each use. I just have to trust that this erasure is actually happening.
If I or my employer/organization doesn't control the instance of the bot that I am using, or there isn't some legal/contractual obligation to erase the data, I generally don't trust it.
I will use a non-erasing chatbot for work if I know that my interactions will never be used to train up any chatbot outside the organization. I will also use it at wo
Yep (Score:3)
I started using these chatbots about 3 years ago in my coding job. They were fantastic at launch, but now they are often more error-prone, frustrating, and time-consuming to use than just working out the problem on my own.
Once the novelty wears off, you realize this stuff is pretty much snake oil.
Re: (Score:2)
Most US workers? (Score:3)
For how many US workers would ChatGPT even be applicable? Let's look at the most common jobs.
#1 Cashier - Not really a lot of call for a cashier to be using AI in their work.
#2 Food Prep - again not really applicable
#3 Stock Clerk - Pretty manual and not lot of computer work.
#4 Laborer - I think this one is self explanatory
#5 Janitor - Pretty sure AI is not much help there.
#6 Construction worker - See any of the above
#7 Bookkeeper - this one might just be using it
#8 Waitstaff - Not applicable
#9 Medical Assistant - not really applicable here either
#10 Bartender - Keep AI outta my mfing bar.
So out of the 10 most common jobs in America 1 may have a reason to use AI.
It's a ridiculous claim since PEW ignores any American who actually works for a living, and seems to ignore the fact that most American workers have no need for AI.
Re: Most US workers? (Score:2)
Re: (Score:2)
Hello ChatGPT, please create me a bright and cheery, three sentence marketing pitch for selling crystal meth on the street. Incorporate the terms "finna", "peanut butter crank", "type shit", and "skibidi rizzler". Avoid using the words "whimsical" and "tapestry".
Re: (Score:2)
Maybe some of the above might find LLM AI useful as a translator, if they are not fluent in English.
(of course if they are not fluent in English the new administration will be wanting to deport them)
I tried using an AI bot (Score:3)
It was much better at programming than me. Faster, quicker, better code.
So I quickly deleted it. Not having that little shit stealing my job.
Oh look another /. AI story (Score:2)
gossipy hallucinations (Score:2)
That's fine (Score:1)
It's a tool.
Some jobs probably aren't good problem spaces for it yet.
Some are, but not everybody knows how to use them effectively. It's easier to rant against the tool on Slashdot than to admit that you don't know how to use it effectively.
short-term gain for long-term loss (Score:2)
Before chatbots: 1 hour writing, 2 hours debugging
After chatbots: 5 minutes writing, 6 hours debugging
Privacy concerns (Score:2)
Those creepy chatbots that people invite to their online meetings... they're listening to everything that is said, taking dictation, and supplying it to some obscure company that will do who-knows-what with the information. Probably sell it to the highest bidder.
For that reason, my company bans chatbot use without explicit authorization, because the risk of data breach is so high.
My reason... (Score:3)
Terrible title (Score:2)