AI Businesses The Almighty Buck

AI Investment Soars but Profitable Use Remains Elusive for Many Firms, Goldman Sachs Says

Despite soaring investment in AI hardware, most companies are struggling to turn the technology into profitable ventures, Goldman Sachs' latest AI adoption tracker reveals. Equity markets project a $330 billion boost to annual revenues for AI enablers by 2025, up from $250 billion forecast just last quarter, yet only 5% of US firms currently use AI in their production processes.

The disconnect between sky-high investment and tepid adoption underscores the significant hurdles businesses face in implementing AI effectively. Industry surveys by Goldman indicate that while many small businesses are experimenting with the technology, most have yet to define clear use cases or establish comprehensive employee training programs. Data compatibility and privacy concerns remain substantial roadblocks, with many firms reporting their existing tech platforms are ill-equipped to support AI applications.

The lack of in-house expertise and resources further compounds these challenges, leaving many companies unable to bridge the gap between AI's theoretical potential and practical implementation. Even among those organizations actively deploying AI, only 35% have a clearly defined vision for creating business value from the technology. This strategic uncertainty is particularly acute in consumer and retail sectors, where just 30% of executives believe they have adequately prioritized generative AI. The barriers to profitable AI use are not limited to technical and strategic issues. Legal and compliance risks loom large, with 64% of businesses expressing concerns about cybersecurity risks and roughly half worried about misinformation and reputational damage stemming from AI use.

Despite these challenges, investment continues to pour into AI hardware, particularly in semiconductor and cloud computing sectors. Markets anticipate a 50% revenue growth for semiconductor companies by the end of 2025. However, this enthusiasm has yet to translate into widespread job displacement, with AI-related layoffs remaining muted and unemployment rates for AI-exposed jobs tracking closely with broader labor market trends.


  • by LazarusQLong ( 5486838 ) on Thursday July 11, 2024 @10:30AM (#64618589)
    "The disconnect between sky-high investment and tepid adoption "

    what does a GPT do better than a human, right now? Write articles for your online media company? Create Art that you can sell/license? Write code to create the next generation of must have applications/apps?

    As far as I can see, so far it does none of those things as well as a human can.

    Hence, morons (read: investors buzzing around repeating buzzwords to each other in an echo chamber) are willing to invest in any company that says it is doing 'AI', but actual business people, who have to make money to keep their company afloat, are just dipping their toes into the AI pond, because if they invest heavily in it right now they may lose the whole business.

    How smart do you have to be to see this?

    Pretty smart I think, because idiots still write articles like this one.

    • by Targon ( 17348 )

      I've had the attitude that I'll get excited for AI when it actually does something useful for me. Oracle is approaching it in an intelligent way: link it to things like database queries, not to go out and look for things on the Internet, but to replace the normal UI used to search for things with more "natural language" approaches. For most businesses, replacing an existing database front-end with an AI-powered front end could also work, if you need to check through multiple sources of information.
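
      A minimal sketch of what that kind of natural-language front end might look like, assuming a hypothetical call_llm() helper that turns a question into SQL over a known schema (everything here is illustrative, not Oracle's actual product):

      import sqlite3

      SCHEMA = "orders(id, customer, total, placed_at)"

      def call_llm(prompt: str) -> str:
          # Hypothetical LLM call; swap in whatever model/API you actually use.
          raise NotImplementedError

      def ask(question: str, db_path: str = "business.db"):
          # Ask the model for a single read-only query over the known schema,
          # then run it locally against the existing database.
          sql = call_llm(
              f"Schema: {SCHEMA}\n"
              f"Write one SELECT statement (no writes) answering: {question}"
          )
          if not sql.lstrip().lower().startswith("select"):
              raise ValueError("refusing to run non-SELECT SQL: " + sql)
          with sqlite3.connect(db_path) as conn:
              return conn.execute(sql).fetchall()

      The SELECT-only guard is the point of the sketch: the model's output is treated as untrusted input to an existing tool, not as code to execute blindly.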

      • Sounds reasonable. I do think that still fits my "dipping toes" metaphor, though. I mean, trying it out in ways that are small, aimed at what it has been demonstrated to do fairly well, with no huge investment.
        • by Targon ( 17348 )

          The typical "business" approach isn't set up by people who use technology, though, so they come up with a stupid "let's rebuild everything around AI!" approach instead of the intermediate "let's use it to make things easier for employees" one.

      • Comment removed based on user account deletion
        • by Bongo ( 13261 )

          Yes, it paraphrases what is commonly said in the data, so that's what it is useful for. Examples are in fields which have a lot of basically repetitive lists of talking points. Almost like expert knowledge systems, but with less logical rigour. Those generated summaries are then a place to start from for some human creative thinking.

    • Volume. If your job is to produce a large volume of content, regardless of quality (like degenerate marketeers do), nothing beats a GPT.

      Also, it provides a low cost alternative for content producers whose multimedia content is focused on non-visual media. Like informative YT video channels. They now have access to pretty high quality artwork to put in their videos at a price point that makes sense.

      • recently I have seen many YT channels where they appear to have replaced the narration with AI, and you can tell... it is a loss. Also, it seems that someone has decided to start a flock of YT channels where the AI not only narrates but wrote the script (which is fairly mundane/crappy)... also a loss. Sure, it'll get better, I am sure, but for a multimillion-dollar business? I don't think it is time yet for them to make BIG investments in AI; too much at stake.
      • GPT might be able to generate a large volume of content, but reading time doesn't go up. It just competes for the same slice of the pie as before. We used to have a million times more text than we can read; now we have a billion times more.
      • Yeah I see that creepy crap on YT and it goes immediately to "Do not recommend this channel".

    • by Junta ( 36770 )

      As far as I can see, so far it does none of those things as well as a human can.

      The goal might be more humble than "as well as a human can": just "good enough for the dollar spent within the timeframe".

      So can it write articles better? Well no, but it can take a paragraph and stretch it into a stupid "article" that isn't any worse than the tripe out there that exists only as a way to try to hold eyeballs on a page long enough to be subjected to an adequate number of ads.

      Can it create art that can be sold/licensed? Well no, but if you wanted generic, soulless "stock photo" grade fodder for so

      • You make good points. Lots of the 'tripe', as you call it, is such garbage that I can honestly say a 7-year-old could have done better; a 10-year-old definitely could.
      • > So can it write articles better? Well no, but it can take a paragraph and stretch it into a stupid "article"

        You're missing a great opportunity here. This page of comments can be reworded into an excellent read by an LLM. Sample: https://pastebin.com/Jpf6C72U [pastebin.com]
    • what does a GPT do better than a human, right now? Write articles for your online media company? Create Art that you can sell/license? Write code to create the next generation of must have applications/apps?

      As far as I can see, so far it does none of those things as well as a human can.


      Au contraire. AI generates jokes which are funnier than human generated jokes [cbsnews.com].

      And as climate change deniers would say, checkmate!
      • well, gee. I was not aware of that. I will have to look out for more jokes in my life, other than the huge joke which is my life.
      • having now read the article, I see that they really had to give the AI some help/understanding. The examples I read were not very funny to me, but as the author said, delivery is important too.
  • Comment removed based on user account deletion
    • It's more like you invest in storytellers who may be able to draw more money and attention, all of which may (accidentally, if you're lucky) produce a valuable product that 1000x's the company. This is why everyone knows who Sam Altman is. They're not investing in OpenAI so much as in Sam Altman's bullshit.

  • Typical hype cycle (Score:5, Insightful)

    by MpVpRb ( 1423381 ) on Thursday July 11, 2024 @10:36AM (#64618607)

    LLMs surprised their creators with unexpected progress. Researchers got excited and wondered if the rapid progress would continue. Hypemongers and investors, hungry for the "next big thing", overreacted and made extremely optimistic predictions while pouring billions into anything with the two magic letters, "AI", in its press release. Now the hype train is rolling down the tracks at full speed.

    In the short term, expect a tsunami of crappy AI stuff, released far before it's ready in order to convince investors that the makers are on the cutting edge.

    In the longer term, I predict that genuinely useful stuff will be invented, but it's entirely possible that the current hype bubble will explode long before that.

    • I think what we'll find is that companies like MSFT, with large interconnected software systems, will benefit the most. AI is so bad at so many things that we'll probably need to create purpose-built micro-LLMs. Not full products, but tools of integration.

    • I'm bookmarking this page, and 3 years from now, your comment is the only one that will still be relevant.
    • 99 times out of 100 I am annoyed by an implementation of AI. For example, why does instagram's search need AI? It doesn't pull up anything relevant to my searches anymore. Why would I want to ask instagram a question?
  • by KingFatty ( 770719 ) on Thursday July 11, 2024 @10:40AM (#64618615)
    The AI solutions so far seem to include a random chance of error. I think that limits the practical business applications to things like art or customer service where errors are perfectly acceptable.
    • This x100. When 100% accuracy is required, the output of current AI tools requires a human to QA everything. In that case, it's usually still cheaper just to have the human do it in the first place!

      Now, I do believe we will see 100% AI accuracy achieved for many use cases in the not-too-distant future, but it's not here yet.
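
      As a rough sketch of what that QA gate could look like, assuming a hypothetical score_confidence() helper and an arbitrary threshold (neither comes from the article):

      REVIEW_THRESHOLD = 0.9  # illustrative cutoff, not a real benchmark

      def score_confidence(draft: str) -> float:
          # Hypothetical scorer; in practice this might be a second model,
          # log-probability heuristics, or rule-based checks.
          raise NotImplementedError

      def route(draft: str) -> tuple[str, str]:
          # Ship high-confidence output automatically; queue the rest for a human.
          if score_confidence(draft) >= REVIEW_THRESHOLD:
              return ("auto_publish", draft)
          return ("human_review", draft)

      The point is only that a human stays in the loop wherever accuracy actually matters, which is exactly the cost the parent comment describes.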

      • by Junta ( 36770 )

        I think it's a double-edged sword: it takes "fuzzy" input, which computers traditionally couldn't really do anything with, but the result is similarly "fuzzy" and thus useless to use verbatim. I think that's likely to be a natural limitation of the approach that will endure. Potentially very useful in activities that begin and end with a human, but likely to stay very challenged when driving some sort of specific, picky mechanism or process.

        It's present in human interaction too, that iteration is r

    • by Targon ( 17348 )

      There is always the problem where stupid people use any tool poorly. You could theoretically use a hammer to knock down a tree, but there are better tools available if that is what you want to do. Going online to find information isn't an intelligent use of AI, but if you have current tools for your business, using AI to make use of those tools and simplify your workflow won't come up with factually incorrect nonsense, because you aren't using AI to be creative.

      • The problem is that when you use the tools as you describe, you get back results that have a chance of including errors. I think Google provided a good example illustrating this risk in the very scenario you outlined, when its AI included results advising people to put glue as a topping on their pizza. So if you present those results in a business context, you will lose your customers permanently, because they won't trust you any more.
    • The AI solutions so far seem to include a random chance of error. I think that limits the practical business applications to things like art or customer service where errors are perfectly acceptable.

      AI doesn't do much that an intelligent human can do. And yeah, it is prone to hallucinations.

      One thing I always wondered about, though: what happens after the AI revolution, when AI AIs itself and humans don't add more input - because why do that, AI is the path forward and the biggest thing evah!

      Except that probably isn't going to happen. AI will be as manipulable as cult members. The Google AI hallucinating that a happy white family is somehow a hate crime, but a happy black family is inclusive, or the African or Chi

    • by gweihir ( 88907 )

      Yep. And there is a second limit: you need to control and limit the training data and still have enough to make it work, or it becomes completely unusable in a business context. The current LLMs, for example, are basically just demos.

      As to "art", I find the AI stuff to be generic, boring and off-putting. I can now spot most of it without even really looking. I expect many people have the same experience.

  • It's typeahead, writ large. Clippy, Jr. And all the yelling is because, with cryptocurrency headed downhill, this is the next bubble.

    And it *is* a bubble. Those of you who, like me, saw the dot.com bubble 24 years ago have seen this before.

  • This sounds familiar... Can't quite say for sure...

  • Apart from the usual way of transferring public money to the wealthy that is military procurement,
  • Companies are not all jumping on AI because they scent massive profits; most are jumping on because they see disruption and fear losing their business model. If the matrix is

    99% chance - invest; AI has little impact, lose investment
    1% chance - invest; massive payoff of new AI development, double growth

    99% chance - don't invest; AI has little impact, save investment
    1% chance - don't invest; AI transforms everything, competitors who did invest take market, company goes bankrupt

    then there's a good case to be made for investing anyway (a rough expected-value sketch follows below).
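
    A rough sketch of that asymmetry in numbers, using the 99%/1% odds above and made-up payoffs (the payoff values are purely illustrative):

    # Only the 99%/1% split comes from the comment; the payoff values are invented.
    P_TRANSFORM = 0.01
    INVEST_COST = -1      # wasted spend if AI fizzles
    INVEST_WIN = 100      # payoff if AI transforms the business and you are in
    SKIP_SAVE = 0         # nothing lost, nothing gained if AI fizzles
    SKIP_RUIN = -1000     # business model destroyed if AI wins and you sat out

    ev_invest = (1 - P_TRANSFORM) * INVEST_COST + P_TRANSFORM * INVEST_WIN
    ev_skip = (1 - P_TRANSFORM) * SKIP_SAVE + P_TRANSFORM * SKIP_RUIN

    print(f"expected value if you invest: {ev_invest:+.2f}")   # +0.01
    print(f"expected value if you don't:  {ev_skip:+.2f}")     # -10.00

    With a big enough downside on "sat out and AI won", the expected-value arithmetic favours investing even though the most likely outcome is a write-off, which is the fear-driven behaviour the comment describes.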

  • That's the thing: AI is the kind of tech that we invest in for the long term. 5, 10, maybe more years. It's the kind of "we will land on the moon before the decade is out" thing. However, our world is addicted to short-term satisfaction and profits. "What, a tech that can potentially replace humans? I want it nowwwww!" Next quarter should be amazing! And the next! And the next! For the moment, AI is little more than a parlour trick. Sure, AI can generate articles, texts, images, videos, music, but guess what, we a
  • They all see something that is not there. And they all suffer from FOMO. Idiots.

    Sure, LLMs and generative AI in general will bring some efficiency increases and will cause problems on the job market. But these people think it is somehow revolutionary and will change everything. That is very obviously not the case.

  • that AI was coming for our jobs. Not so fast, and not so cheap!
