AI Investment Soars but Profitable Use Remains Elusive for Many Firms, Goldman Sachs Says
Despite soaring investment in AI hardware, most companies are struggling to turn the technology into profitable ventures, Goldman Sachs' latest AI adoption tracker reveals. Equity markets project a $330 billion boost to annual revenues for AI enablers by 2025, up from $250 billion forecast just last quarter, yet only 5% of US firms currently use AI in their production processes.
The disconnect between sky-high investment and tepid adoption underscores the significant hurdles businesses face in implementing AI effectively. Industry surveys by Goldman indicate that while many small businesses are experimenting with the technology, most have yet to define clear use cases or establish comprehensive employee training programs. Data compatibility and privacy concerns remain substantial roadblocks, with many firms reporting their existing tech platforms are ill-equipped to support AI applications.
The lack of in-house expertise and resources further compounds these challenges, leaving many companies unable to bridge the gap between AI's theoretical potential and practical implementation. Even among those organizations actively deploying AI, only 35% have a clearly defined vision for creating business value from the technology. This strategic uncertainty is particularly acute in consumer and retail sectors, where just 30% of executives believe they have adequately prioritized generative AI. The barriers to profitable AI use are not limited to technical and strategic issues. Legal and compliance risks loom large, with 64% of businesses expressing concerns about cybersecurity risks and roughly half worried about misinformation and reputational damage stemming from AI use.
Despite these challenges, investment continues to pour into AI hardware, particularly in semiconductor and cloud computing sectors. Markets anticipate a 50% revenue growth for semiconductor companies by the end of 2025. However, this enthusiasm has yet to translate into widespread job displacement, with AI-related layoffs remaining muted and unemployment rates for AI-exposed jobs tracking closely with broader labor market trends.
Morons (Score:3)
what does a GPT do better than a human, right now? Write articles for your online media company? Create Art that you can sell/license? Write code to create the next generation of must have applications/apps?
As far as I can see, so far it does none of those things as well as a human can.
Hence, morons (read: investors buzzing at each other, repeating buzzwords in an echo chamber) are willing to invest in any company that says it is doing 'AI', but actual business people, who have to make money to keep their company afloat, are just dipping their toes into the AI pond, because if they invest heavily in it right now they may lose the whole business.
How smart do you have to be to see this?
Pretty smart I think, because idiots still write articles like this one.
Re: (Score:2)
I've had the attitude that I'll get excited for AI when it actually does something useful for me. Oracle is approaching it in an intelligent way: linking it to things like database queries, not to go out and look for things on the Internet, but to replace the normal UI used to search for things with more "natural language" approaches. For most businesses, replacing an existing database front end with an AI-powered front end could also work, if you need to check through multiple sources of information.
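To make that "natural-language front end over an existing database" idea concrete, here is a minimal sketch. The ask_llm() call, the toy SQLite schema, and the prompt wording are all hypothetical stand-ins; this is not Oracle's (or any vendor's) actual API.

```python
# Minimal sketch of a natural-language front end over an existing database.
# ask_llm() is a hypothetical stand-in for whatever LLM completion call you
# actually use; the schema and prompt are illustrative only.
import sqlite3

SCHEMA = (
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, "
    "total REAL, placed_on TEXT);"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to your model of choice."""
    raise NotImplementedError("plug in an actual LLM client here")

def natural_language_query(question: str, conn: sqlite3.Connection) -> list:
    # The model only translates the question into SQL against a known schema;
    # the database itself stays the source of truth for the answer.
    prompt = (
        "Given this SQLite schema:\n" + SCHEMA + "\n"
        "Write one read-only SELECT statement that answers: " + question + "\n"
        "Return only the SQL."
    )
    sql = ask_llm(prompt).strip().rstrip(";")
    if not sql.lower().startswith("select"):
        raise ValueError("refusing to run non-SELECT statement: " + sql)
    return conn.execute(sql).fetchall()
```

The point of the guard at the end is that the model never gets to invent an answer; it can only propose a query that the database then answers for real.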
Re: (Score:2)
Re: (Score:2)
The typical "business" approach isn't set up by people who use technology though, so they come up with stupid, "let's rebuild everything around AI!" approach, instead of the intermediate, "let's use it to make things easier for employees".
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Yes, it paraphrases what is commonly said in its data, so that's what it is useful for. Examples are in fields which have a lot of basically repetitive lists of talking points, almost like expert knowledge systems but with less logical rigour. Those generated summaries are then a place to start from to do some human creative thinking.
Re: Morons (Score:3)
Volume. If your job is to produce a large volume of content, regardless of quality (like degenerate marketeers) nothing beats a GPT.
Also, it provides a low cost alternative for content producers whose multimedia content is focused on non-visual media. Like informative YT video channels. They now have access to pretty high quality artwork to put in their videos at a price point that makes sense.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Yeah I see that creepy crap on YT and it goes immediately to "Do not recommend this channel".
Re: (Score:3)
As far as I can see, so far it does none of those things as well as a human can.
The goal might be more humble than "as well as a human can"; more like "good enough for the dollar spent within the timeframe".
So can it write articles better? Well no, but it can take a paragraph and stretch it into a stupid "article" that isn't any worse than the tripe out there that exists only as a way to try to hold eyeballs on a page long enough to be subjected to an adequate number of ads.
Can it create art that can be sold/licensed? Well no, but if you wanted generic soulless "stock photo" grade fodder for so
Re: (Score:2)
Re: (Score:2)
You're missing a great opportunity here. This page of comments can be reworded in an excellent reader by a LLM. Sample: https://pastebin.com/Jpf6C72U [pastebin.com]
Re: (Score:2)
HOT DOG ! now THIS is the future!!!!
Re: (Score:3)
As far as I can see, so far it does none of those things as well as a human can.
Au contraire. AI generates jokes which are funnier than human generated jokes [cbsnews.com].
And as climate change deniers would say, checkmate!
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Re: Investment definition (Score:2)
It's more like you invest in storytellers who may be able to draw more money and attention, all of which may (accidentally, if you're lucky) produce a valuable product that 1000x's the company. This is why everyone knows who Sam Altman is. They're not investing in OpenAI as much as they are investing in Sam Altman's bullshit.
Re: (Score:2)
Re: Investment definition (Score:2)
Not just ChatGPT, but yeah, I was talking about investing as a stockholder, not investing in an idea for your own company.
Since it isn't actually intelligent (Score:2)
what do they expect?
Typical hype cycle (Score:5, Insightful)
LLMs surprised their creators with unexpected progress. Researchers got excited and wondered if the rapid progress would continue. Hypemongers and investors, hungry for the "next big thing", overreacted and made extremely optimistic predictions while pouring billions into anything with the two magic letters, "AI", in its press release. Now the hype train is rolling down the tracks at full speed.
In the short term, expect a tsunami of crappy AI stuff, released far before it's ready, in order to convince investors that the makers are on the cutting edge.
In the longer term, I predict that genuinely useful stuff will be invented, but it's entirely possible that the current hype bubble will explode long before that.
Re: Typical hype cycle (Score:2)
I think what we'll find is that companies like MSFT, with large interconnected software systems, will benefit the most. AI is so bad at so many things that we'll probably need to create micro-LLMs that are purpose-built. Not full products, but tools of integration.
Re: (Score:2)
Re: (Score:2)
Business applications need accuracy (Score:5, Insightful)
Re: (Score:2)
This x100. When 100% accuracy is required, the output of current AI tools requires a human to QA everything. In that case, it's usually still cheaper just to have the human do it in the first place!
Now, I do believe we will see 100% AI accuracy achieved for many use cases in the not-too-distant future, but it's not here yet.
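A toy back-of-the-envelope version of that "review cost vs. doing it yourself" trade-off, with every number invented purely for illustration:

```python
# Toy cost comparison: human-only vs. AI draft plus mandatory human review.
# Every number below is an invented assumption, not a measurement.
human_minutes = 10.0    # time for a human to do the task from scratch
review_minutes = 4.0    # time for a human to check an AI draft
redo_probability = 0.3  # chance the draft fails review and gets redone by hand

ai_expected = review_minutes + redo_probability * human_minutes
print(f"human only:  {human_minutes:.1f} min")
print(f"AI + review: {ai_expected:.1f} min")  # 4 + 0.3 * 10 = 7.0 min

# Push redo_probability toward 0.6 and the two paths cost the same, which is
# the point above about use cases that demand 100% accuracy.
```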
Re: (Score:2)
I think it's the double edged sword, it takes "fuzzy" input, which computers traditionally couldn't really do anything with, but the result is similarly "fuzzy", and thus useless to use verbatim. I think that's likely to just be a natural limitation of the approach that will endure. Potentially very useful in activities that begin and end with a human, but likely to stay very challenged when driving some sort of specific picky mechanism or process.
It's present in human interaction too, that iteration is r
Re: (Score:2)
There is always the problem where stupid people use any tool poorly. You could theoretically use a hammer to knock down a tree, but there are better tools available if that is what you want to do. Going online to find information isn't an intelligent use of AI, but if you have current tools for your business, using AI to make use of those tools and simplify your workflow won't come up with factually incorrect nonsense, because you aren't using AI to be creative.
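A minimal sketch of that "let the model drive existing tools instead of answering from memory" pattern. choose_tool() is a hypothetical LLM call, and the two tools are invented stand-ins for whatever deterministic systems a business already runs:

```python
# Sketch: the model only decides which existing, deterministic tool to run and
# with what argument; it never fabricates the factual answer itself.
from typing import Callable, Dict, Tuple

def lookup_invoice(invoice_id: str) -> str:
    return f"invoice {invoice_id}: status PAID"   # stand-in for a real lookup

def stock_level(sku: str) -> str:
    return f"sku {sku}: 42 units on hand"         # stand-in for a real lookup

TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_invoice": lookup_invoice,
    "stock_level": stock_level,
}

def choose_tool(user_request: str) -> Tuple[str, str]:
    """Placeholder for an LLM call that maps a request to (tool_name, argument)."""
    raise NotImplementedError("plug in an actual LLM client here")

def handle(user_request: str) -> str:
    name, arg = choose_tool(user_request)
    if name not in TOOLS:
        raise ValueError(f"model asked for an unknown tool: {name}")
    # The factual answer comes from the tool, not from the model's imagination.
    return TOOLS[name](arg)
```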
Re: (Score:2)
Re: (Score:2)
The AI solutions so far seem to include a random chance of error. I think that limits the practical business applications to things like art or customer service where errors are perfectly acceptable.
AI doesn't do much that an intelligent human can do. And yeah, it is prone to hallucinations.
One thing I always wondered about, though: after the AI revolution, when AI AIs itself and humans don't add more input (because why do that - AI is the path forward, and the biggest thing evah!), what happens then?
Except that probably isn't going to happen. AI will be as manipulable as cult members. The Google AI hallucinating that a happy white family is somehow a hate crime, but a happy black family is inclusive, or the African or Chi
Re: (Score:2)
Yep. And there is a second limit: you need to control and limit the training data and still have enough to make it work, or it becomes completely unusable in a business context. The current LLMs, for example, are basically just demos.
As to "art", I find the AI stuff to be generic, boring and off-putting. I can now spot most of it without even really looking. I expect many people have the same experience.
Re: (Score:1)
https://yro.slashdot.org/story/24/02/15/2239221/air-canada-found-liable-for-chatbots-bad-advice-on-plane-tickets [slashdot.org]
Because it's not actually AI (Score:2)
It's typeahead, writ large. Clippy, Jr. And all the yelling is because, with cryptocurrency headed downhill, this is the next bubble.
And it *is* a bubble. Those of you who, like me, saw the dot.com bubble 24 years ago have seen this before.
You Know (Score:2)
This sounds familiar... Can't quite say for sure...
What's the killer app? (Score:2)
Misconception (Score:2)
Companies are not all jumping on AI because they scent massive profits; most are jumping on because they see disruption and fear losing their business model. If the matrix is
99% chance - invest; AI has little impact, lose investment
1% chance - invest; massive payoff of new AI development, double growth
99% chance - don't invest; AI has little impact, save investment
1% chance - don't invest; AI transforms everything, competitors who did invest take market, company goes bankrupt
then there's a good case to be made for investing, if only defensively (a rough expected-value sketch follows).
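Here is a toy expected-value reading of the matrix above. The 99%/1% split comes from the comment; the payoff numbers are invented purely to make the asymmetry visible.

```python
# Toy expected-value calculation for the invest / don't-invest matrix above.
# Probabilities are from the comment; payoffs are illustrative assumptions.
p_ai_matters = 0.01

# (payoff if AI fizzles, payoff if AI transforms the market)
invest = (-1.0, 1000.0)   # lose the investment vs. outsized growth
abstain = (0.0, -100.0)   # save the investment vs. lose the business

ev_invest = (1 - p_ai_matters) * invest[0] + p_ai_matters * invest[1]
ev_abstain = (1 - p_ai_matters) * abstain[0] + p_ai_matters * abstain[1]

print(f"EV(invest)       = {ev_invest:+.2f}")   # 0.99*(-1) + 0.01*1000 = +9.01
print(f"EV(don't invest) = {ev_abstain:+.2f}")  # 0.99*0 + 0.01*(-100)  = -1.00

# Even at only a 1% chance of transformation, fear of the downside makes a
# modest hedge investment look rational, which is the comment's argument.
```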
Long term investment (Score:2)
Obviously (Score:2)
They all see something that is not there. And they all suffer from FOMO. Idiots.
Sure, LLMs and generative AI in general will bring some efficiency increases and will cause problems on the job market. But these people think it is somehow revolutionary and will change everything. That is very obviously not the case.
And we were worried (Score:2)
that AI was coming for our jobs. Not so fast, and not so cheap!