Don't Get Used To Cheap AI (axios.com)
AI services may not stay cheap for long, as companies like OpenAI and Anthropic are currently subsidizing usage to rapidly grow market share. As these companies move toward profitability and potential IPOs, Axios reports that investors will likely push them to increase prices and improve margins. An anonymous reader shares an excerpt from the report: Flashback: Silicon Valley has seen this movie before. The so-called "millennial lifestyle subsidy" meant VC money helped underwrite cheap Uber rides and DoorDash deliveries. Before that, Amazon built its base with low prices, free shipping and, for years, no sales tax in most states. Eventually, all of these companies had to charge enough to cover costs -- and make a profit.
Follow the money: The current iteration of AI subsidies won't last forever. Both OpenAI and Anthropic are widely expected to go public. Public investors will demand earnings growth and expanding margins. Even as chips get more efficient, total spending keeps rising. Labs need more capacity, more upgrades and more supply to meet demand.
The bottom line: The costs of AI will keep going down. But total spend from customers will need to keep going up if AI companies are going to become profitable and investors are ever going to get returns on their massive investments.
No shit (Score:5, Interesting)
Re: No shit (Score:5, Insightful)
First one, then the other.
AI companies are hoping that by then there will be enough platform lock-in, and enough hassle in transitioning back to human-powered production, that customers won't try to go back.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nah, brains are overrat... over... rat.. rat.. squeak, rat hungry, rat eat some cheese.
How many cumulative Single Event Upsets to an LLM, might cause it to utter the above? I hope that your rat-token was intentional, because it reads admirably.
Re: No shit (Score:5, Funny)
As we all learned in school, the first hit is always free.
Re: No shit (Score:5, Informative)
Reality: neither.
Inference costs for a given capability level have been declining exponentially with a mean halving time of 7.5 months over the past 3 1/2 years - and show no signs of stopping.
Look at API pricing from *independent small-scale inference hosts* - i.e. pricing guaranteed not to be subsidizing inference: frontier-scale open models are commonly served for less than $1/Mtok averaged over input and output (the main cost is in output). This is an order of magnitude less than the closed models charge, because the closed-model providers profit hand over fist on their inference and use that to subsidize the general (non-paying) public. But the real cost is low. DeepSeek is believed to be providing inference on their platform (for a 671B-param model) at cost, and they charge $0.028/Mtok-in (cache hit) / $0.28/Mtok-in (cache miss) / $0.42/Mtok-out. Independent providers serving the same model are somewhat - but not dramatically - more expensive due to the need to earn a sustainable profit margin.
A question like "What is the capital of Madagascar?" might involve under 100 tokens; "Write the entire next chapter of my novel" might be tens of thousands. It's hard to give a specific number, but if we say 1,000 tokens and go pessimistic at $1/Mtok, then serving a flagship model (with many hundreds of billions of parameters) costs about a tenth of a penny per query. But models can still be useful for general chatbot uses all the way down to a few billion parameters (a thousandth of a penny per query), and for less general uses (for example, summarization) well below even that.
These are not gigantic costs. And remember: the cost for a given quality level has a trend of halving every 7.5 months.
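The arithmetic in this comment can be sketched in a few lines; the $1/Mtok rate, the 1,000-token query size, and the 7.5-month halving time are the comment's own assumptions, not measured values:

```python
# Back-of-the-envelope inference cost, following the figures quoted above.
# All inputs are the comment's assumptions, not measured values.

def cost_per_query(tokens: int, usd_per_mtok: float) -> float:
    """Cost in USD for one query of `tokens` tokens at a blended $/Mtok rate."""
    return tokens / 1_000_000 * usd_per_mtok

def projected_rate(usd_per_mtok: float, months: float,
                   halving_months: float = 7.5) -> float:
    """Project the $/Mtok rate forward, assuming cost halves every `halving_months`."""
    return usd_per_mtok * 0.5 ** (months / halving_months)

flagship = cost_per_query(1_000, 1.00)  # pessimistic $1/Mtok, 1,000-token query
print(f"flagship query: ${flagship:.4f}")  # $0.0010 -> a tenth of a penny

in_two_years = projected_rate(1.00, 24)  # same capability level, 24 months out
print(f"$/Mtok in 24 months: ${in_two_years:.3f}")  # ~ $0.11
```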
The notion that you're going to get AI to go away by being "no longer economical" is, quite simply, fantasy. The flagship model producers make money on inference. They're actually quite profitable business lines. They lose money as a company because they're sinking such vast sums into R&D and scaleup.
Re: (Score:2)
ED: "all the way down to a few billion parameters (a thousandth of a penny per query),"
Re: (Score:3)
Re: (Score:2)
Unlikely. The gap between the open-source models and the top proprietary ones is 6 months to 2 years. An H200 capable of running those models can support about 6 developers and costs $30k, or say $6k per developer, or $250/mo amortised over 2 years. But no developer works 24 hours, and you can rent them on AWS. So you don't need to buy or run the H200. Amazon's 200% markup comes from sharing it. They don't need a H200 either - like Google they have
Yes shit (Score:3)
The bottom line: "The costs of AI will keep going down."
Cheap AI is not only here to stay, it is going to get cheaper.
Google, Microsoft, and Meta offer competing AI products that are good enough, while having other income sources to maintain low prices.
OpenAI and Anthropic will not compete if they raise prices.
The only reason prices would go up is if the supply cannot keep up with the demand.
Re: (Score:3)
Actually, the "open weights" Chinese models do just as well on local hardware, and it is getting trivial to update them with new information, so yeah, "AI" is already cheap enough, for what it can do.
That's the reason the big "tech" companies are trying to corner the hardware market, so that it is impossible to set your own rig up.
It will be a very funny fall.
Nah, we will hardly notice any fall (Score:5, Interesting)
The demand for modern AI is real and growing.
The hardware they are buying up is not being stored in a warehouse to starve the competition; it is being used.
I could maybe be convinced that OpenAI and Anthropic are just Ponzi schemes of a sort, racing for an IPO before the investment money runs out.
But Microsoft, Google, and Meta are not in this for a quick buck.
If OpenAI cannot pay the bills I will bet on Microsoft to absorb what they do not already own of them just for their data centers.
And Meta needs to buy Anthropic so it can catch up.
A little dip in the market on both events, and then back to the moon they go, just like after the dot com bubble, but not even that bad.
People can run a spreadsheet app or word processor on their local hardware and store their files on their own servers; but most do not.
Instead people pay Microsoft and Google for online services for things that everyone easily did at home 20 years ago.
This is not because Microsoft and Google bought up all the CPUs and RAM so you only had a thin client; it is convenience.
Most people do not run Chinese models today because it is a pain in the ass. The majority of people who could pull it off do not bother because the Chinese models are behind the proprietary models, and it is more important to get things done than it is to be "free".
Then there are a few who do it for real productivity and demand the liberty. Thankfully this will get easier and better; but, just like in computing generally, the percentage of people on devices without keyboards and with walled-garden app stores will continue to grow faster than the percentage booting desktop Linux (not Android) and using a mouse. I mean, look at Chromebook sales.
AI will just be part of Google Workspace and Microsoft Teams like word processing, spreadsheets, and presentation apps are today.
You and I will still try to run it locally, and it will be great.
Manufacturing will catch up to demand, but we probably will not see again the abnormally low prices for RAM and hard drives that we were enjoying.
Re: (Score:2)
Where is the "demand"? Is there really so much of it that one can justify dumping a trillion or ten in it?
Of all "AI" peddlers I know of, there's only Palantir that is making money, and that's only because of its government contracts.
Literally everyone else is either
- an edge case like the small companies that run "generate me a pic for $20 a month", which are basically the same ballpark as the people who run the Chinese models for fun;
- or burning investor cash with no end in sight like OpenAI, Anthropic,
Re: (Score:3)
Where is the "demand"? Is there really so much of it that one can justify dumping a trillion or ten in it?
This technology is unlike anything I have seen as far as adoption rate.
However, my view is limited and could be quite skewed, but when my mother is using and paying for the technology there is something there to me.
I see about 50% of the people in my life have adopted ChatGPT or the like, from my boss to my mother, and they are not giving it up; it would be easier to take away all their social media, streaming services, ecommerce, and gig-work services than ChatGPT.
This 50% will easily give up Tiktok to Face
Re: Nah, we will hardly notice any fall (Score:2)
People will always buy things that are far under their cost to produce. That's not a business model at all. Your mom isn't going to want to pay for the electricity she consumes plus a profit when it comes to that.
My mom will not have to pay. (Score:2)
I hear you, the assertion is that today the $20/month or so ChatGPT subscription is not covering the true cost.
As a result either ChatGPT eventually has to raise the cost of the subscription or find another revenue source to keep the lights on in the data centers.
Google and Meta have already solved this problem.
The obvious established revenue source is advertisement and selling "user" data.
Google and Meta do not have my mom's credit card on file, yet she "consumes" electricity at the Google data center when
Re: (Score:2)
Google, Microsoft, and Meta can easily absorb the cost
They hope so.
https://tech.slashdot.org/stor... [slashdot.org]
The demand is insane, more than social media, more than smartphones, more than internet retail, more than internet search, more than I saw with the internet, more than with cell phones.
As fluffernutter says, it is easy when you don't have to pay for it. When you do...
They have proven they have sound business models that are not going anywhere.
Yes, the question is will the giant sucking sound produced by the "AI" be appreciated by the investors. So far they've been lured by the dream of low costs as the companies fire employees and the free government money. This illusion will end soon. The assplosion will be epic.
Re: (Score:2)
Massive layoffs are unsettling, but they now seem like lopsided news with some missing half of the story.
I guess "Meta lays off 22.16% of its employees in 2023" is a much catchier headline than "Meta increased its headcount by 20% in 2022, and 10% in 2024."
Meta had nice looking growth rates for years: https://www.macrotrends.net/st... [macrotrends.net]
I suspect they may be maturing a bit, Covid saw everyone over hiring, and the VR push did not quite pay off; although the Meta glasses seem to be a success in spite of privacy concern
Re: (Score:2)
Unlike you, I'm not trying to guess the reason for the "unsettling layoffs", I'm taking TFA at face value - the layoffs are needed to save money for zuck's "GAI" chase. Is it a plausible interpretation? Yes. It matches what I hear from people who work there, which is that zuck has gone all in on "GAI", as he religiously believes it is right around the corner. It also matches well with their recent spending pattern. This is not a healthy business decision, it is a gold rush.
Is it credible that this is a
Re: (Score:2)
I want to start by asking again for the "free government money" explanation you have stated twice; sorry I buried that request at the end of my last post and I am really most curious about that.
Is it credible that this is a "post-covid" reduction as you claim? Nope, that happened in 2023 already.
Agreed, that is why the job cuts in 2023 were not that scary; they were a good correction for a previous mistake, not the explanation for the 2026 layoffs.
Also why I am not worried that layoffs today are a concern; from what I have heard out of Facebook, there is a lot of fat to cut.
So in at least 1/3 of your "guaranteed profitability" examples we appear to have exactly the opposite.
Facebook, Instagram, WhatsApp, an
Re: (Score:2)
Also why I am not worried that layoffs today are a concern;
They aren't a concern, they are a symptom.
but betting against him in business? I guess time will tell.
I'm not betting, I'm observing. And what I see is desperation - fb is cutting itself up to catch up with the rest of the LLM-peddling pack and appears to be failing, while the rest of the "industry" has no business plan except to fire at least half of the national workforce and sell advertising to the fired.
Once the Iran war and the imminent AI burst put the country formerly known as the USA in the 100 trillion debt zone, zuck, muck and slopman running away to green
Free Government Money (Score:2)
Please now tell me about the free government money.
Re: (Score:2)
You haven't noticed the yuge tax discounts that concentrate a lot of money into the hands of the select billionaire community? You have somehow missed the many initiatives that take a lot of risk from the "investor" and dump it onto the government? Are you blissfully unaware of the ballooning deficits that are funding this largesse? What of the calls for greenspan-level rates (forgetting what came out of it in the end)?
It isn't a single episode, or a single program, it is a whole government of corporate soc
Re: (Score:3)
I am not a big government fan, so anything to throw shade on their actions is welcome to me.
I would not say I am the most informed person, but a quick Google search and asking ChatGPT failed to support the "Free Government Money" claim.
yuge tax discounts that concentrate a lot of money into the hands of the select billionaire community
I cannot defend the logic that treats lower taxes as "free government money".
I would much rather Billionaires spend their money than the government.
initiatives that take a lot of risk from the ‘investor’ and dump it onto the government
I assume you are talking about research funding or grants or something?
OpenAI did just get that Department of Defense contract that Ant
Re:Nah, we will hardly notice any fall (Score:4, Insightful)
> Manufacturing will catch up to demand, but we probably will not see again the abnormally low prices for RAM and hard drives that we were enjoying.
These aren't "abnormally low" - they are market prices that are profitable for the manufacturers. Unfortunately there is currently more demand, from AI, for memory than the industry has capacity for, so those that have longer term purchase agreements, or are willing to pay more will win, and until supply/demand gets back in balance we're going to see higher prices for things like laptops and smartphones.
RAM and SSD Prices (Score:2)
My understanding, and I am failing to find a great source for this, is that we enjoyed an oversupply of RAM and SSD chips from 2023 into 2025.
As a result RAM and SSD prices were at a low (abnormally is perhaps a bad choice of words since it is not the first boom bust in this industry).
The increase in demand from AI flipped the supply and quickly took us from an oversupply situation to an under supply raising prices.
My primary point is that we most likely will not see a return to the low prices in early 2025
Re: (Score:2)
> The only reason prices would go up is if the supply cannot keep up with the demand.
This may well happen for a while, it is hard to ramp up SOTA chip manufacture very quickly, and the industry is notoriously boom and bust, and the chip companies (incl. memory) have learnt their lesson - they are not going to ramp up as hard as they can (which is anyways limited) to meet a short term demand bubble which they expect will level off.
State of the art chip production (Score:2)
Yeah I believe you are right. Even if the industry wanted to ramp up my understanding is that it is about a 3 year process.
Now I was referring to the supply of AI compute, not the supply of hardware; although it seems hard to decouple those.
But the naysayers who are predicting, seemingly cheering for, a fall, believe that the data center (chips and all) build out is an over supply that AI demand will not pay for.
If however the chip shortage instead is a thing, then all the compute that say OpenAI secured wi
Re: No shit (Score:2)
The steps (Score:3)
Re: (Score:3)
The cloud was also more expensive than rolling your own, but you couldn't do that because the megacorps hired all the programmers. Then they realized hardware was cheaper than manpower, so those programmers were let go and the megacorps just bought up all the hardware to make sure nobody can run their own models.
Re: (Score:2)
Re: (Score:2)
Re: The steps (Score:5, Interesting)
You can. I have been experimenting with some this week: Claude Code running locally against qwen3.5-35b, and GLM-4.7 flash. It is very slow on my 3060Ti/8GB, though I have a 16-core CPU and 64GB of RAM. However, it is actually very capable if you can tolerate the wait - still much faster than I could be at writing code by hand, tests, running them, even collecting logs and reverse engineering payloads from my IoT sensors. If it was paid work, it could never compete with the cloud offerings; waiting 5 mins between prompts is common. ChatGPT Codex 5.3 is about 100x faster - I'm doing a free monthly trial right now. Not sure how much better the local models will get. Qwen3 coder next exceeds both the VRAM and RAM capacity of my system. It is, however, possible for me to upgrade to a 16GB-VRAM GPU and 128GB of RAM. But prices are way too high, and I may not pull the trigger on a $1200 upgrade to maybe, at best, get 2x token speed.
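The fit-or-not question the poster is answering by trial and error can be sketched with napkin math; the 4-bit quantization and the 1.2x overhead factor (for KV cache and runtime) are rule-of-thumb assumptions, not measurements:

```python
# Rough fit-check for running a quantized model locally.
# Rule of thumb: weight bytes = params * bits/8, plus overhead for KV cache
# and activations. The 1.2x overhead factor is an assumption, not a measurement.

def model_gib(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Approximate memory needed, in GiB, for a model quantized to `bits` bits."""
    weight_bytes = params_billion * 1e9 * bits / 8
    return weight_bytes * overhead / 2**30

def fits(params_billion: float, vram_gib: float, ram_gib: float,
         bits: int = 4) -> bool:
    """Whether the model plausibly fits in combined VRAM + system RAM."""
    return model_gib(params_billion, bits) <= vram_gib + ram_gib

# A 35B model at 4-bit on an 8 GiB GPU + 64 GiB RAM: it fits, but mostly in
# system RAM, which is why generation is slow.
print(model_gib(35))      # ~19.6 GiB
print(fits(35, 8, 64))    # True
print(fits(671, 8, 64))   # False: a 671B model needs hundreds of GiB
```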
Re: (Score:2)
You forgot ads. (Score:2)
There's a step between 2 and 3 and it is "inject ads everywhere".
Re: (Score:2)
Forget that Google gives it away in exchange for borrowing your eyeballs... priceless.
Distorted reality. (Score:5, Insightful)
If AI execs think that most people would pay for AI, then they are truly delusional. We only do it because it's free and/or forced upon us.
Besides, our occasional use of your tools doesn't justify paying a subscription for it.
If you start charging, or charging more, we'll go somewhere else, even if that means reddit or StackOverflow. I'm sure they'd welcome us back with open arms.
um (Score:3, Insightful)
If you're a programmer, and I mean someone doing it to put bread on the table, your employer (or you as a contractor) should be willing to spend hundreds a month on AI. The productivity gains are just that good. Even one bug a month quashed by AI would pay for it. If you think people are "delusional" about being willing to pay for it at all, I can just say, glad you enjoy programming as a hobby but the world is moving past you. And the programming AI gets exponentially better every six months.
As for the ret
Re: (Score:2)
I assume this guy got downvoted for the slur against the disabled ... but what he's saying about employers paying for AI is dead on.
If you make say $200k/year, even $500/month ($6k a year) is a relative drop in the bucket (3%). Claude makes me far, far more than 3% more productive.
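The break-even arithmetic above is easy to generalize; the salary and subscription figures are the comment's illustrative numbers, not data:

```python
# Break-even productivity gain for an AI subscription, per the comment above.
# Figures are illustrative, not measured.

def breakeven_gain(annual_salary: float, monthly_cost: float) -> float:
    """Fraction of productivity gain needed to cover the subscription cost."""
    return monthly_cost * 12 / annual_salary

gain = breakeven_gain(200_000, 500)  # $200k/year salary, $500/month tool
print(f"{gain:.1%}")  # 3.0% -- any gain above this is net positive
```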
Re: (Score:2)
yes obviously (Score:3, Insightful)
First of all the PAYGO API includes a profit margin, because that is the business model for that product. So just breaking even with the Pro or Max plan versus equivalent usage under the PAYGO rate means that Anthropic has reached the point where it wants to be.
Second, I'm not sure how you calculated the break-even for both models. Are you assuming the Pro/Max people are using the AI around the clock? That is not realistic at all, for one, people only work 8 hours a day, and even while working they do a lot
Re: yes obviously (Score:2)
now read my sig
Re: um (Score:4, Interesting)
I doubt it.
And when 80gb gpus inevitably become affordable in 3-5 years time, even the large model slight edge is likely to be lost.
I just don't see how these hyperscalers will recoup their massive capital investments before consumer grade hardware catches up and their business model collapses.
Re: (Score:3)
It is a different type of work: you do not need to figure out the details of the API or data format. You just need to know what you want to accomplish and how to accomplish it. You still need to know what you are doing, though.
Someone already said that AI will make smart people smarter and stupid p
Re: (Score:3)
You've completely missed the parent's point. The LLM craze is causing the cost of generating words to become so low that it will destroy the revenue stream of the current large-scale American AI providers. They rely on model complexity and extreme resource requirements to keep competition out.
But their bet is broken, we've already seen much smaller models with comparable performance, which means the investments last year are too high for the short time window these companies have remaining to monetize and
Re: (Score:2)
Re: (Score:3)
If you're a programmer, and I mean someone doing it to put bread on the table, your employer (or you as a contractor) should be willing to spend hundreds a month on AI. The productivity gains are just that good. Even one bug a month quashed by AI would pay for it. If you think people are "delusional" about being willing to pay for it at all, I can just say, glad you enjoy programming as a hobby but the world is moving past you. And the programming AI gets exponentially better every six months.
What difference does the dinosaur rhetoric make? If AI is getting "exponentially better every six months" the world isn't just moving past "programmers" it is moving past humanity. Personally I'm tired of all the "delusional" over the top rhetoric.
Re: (Score:3)
I'm actually trying to figure out whether the OP used "exponentially" correctly and meant what you assumed, or if they used "exponentially" the way most people do, to mean something between "a bit" and "a lot, probably."
Re: (Score:2)
He should have said "literally exponentially" which would have clarified everything.
Re: (Score:3, Interesting)
AI is already a game changer in many professions. It changed software development drastically. I think it will change diagnostic medicine and chemical research and many other intellectual professions.
As AI develops further it will be applied to civil engineering and mechanical engineering and other engineering disciplines. AI is just a tool, like excel or CNC m
Re: Distorted reality. (Score:2)
If AI gets applied to civil engineering, people will die, and the world will finally wake up to the illusion of AI.
As for software development, the quality is so low, that we've dumped AI. It felt like mentoring juniors all day long and we've had enough. Bad date. Had to go.
Re: (Score:2)
Re: (Score:2)
The fact that this is moderated insightful is a testament to how far the quality of Slashdot has fallen. This used to be a site for technology people.
Only a person that has no foresight at all can think that AI is just a fad that will go away.
Wish you would invest as much time in surfacing objective evidence to support your positions as you do passing judgement. Nobody even said shit about it just being a fad that will go away.
AI is already a game changer in many professions. It changed software development drastically.
So where are the receipts?
AI is just a tool, like excel or CNC machines.
AI is more like a personal slave with brain damage.
Re: (Score:2)
Um. It's not even actually "AI".
But go on.
Re: (Score:2)
Yea, post the wrong answer to Reddit and someone will be happy to do your homework for you.
No problem. (Score:5, Insightful)
Yet, thanks to skyrocketing RAM prices, I've still had to pay for it.
I'll just run it locally (Score:2)
There are plenty of models you can run on your own hardware. Sure the capabilities will be significantly less, but that's better than being trapped in an ever increasing subscription price.
Re: (Score:2)
Yeah... memory and storage prices are going through the roof right now.
The higher prices do not seem to be impacting the large PC manufacturers as much, though. Amusingly, you can get an entire MacBook Neo with the education discount ($499) for less than the price of a 32 GB high-speed DDR5 memory kit ($550-$600).
Re: (Score:2)
The higher prices do not seem to be impacting the large PC manufacturers as much, though.
The largest of the PC manufacturers had long term contracts (aka memory price hedging). Those long term contracts are ending (or will end soon enough), and the memory manufacturers are now limiting the time frames of the contracts so they can capture the upside(s) due to the AI demand (and the manufacturers are also redirecting their fabs to the higher profit AI preferred memories, further reducing the availability of the classic PC memory supply).
The smaller manufacturers (such as Framework) that do not
Re: (Score:2)
If prices are higher than production costs, they will come down at some point, because extra price means extra profit and every manufacturer wants to get as much of that as they can. So RAM prices will come down.
Cheap AI is here to stay (Score:4, Interesting)
What will more likely happen is that features and functionality will keep expanding to use more processing power. But where is the limit? I say that comes when we can render, at near-realtime 8K/240Hz, a video (or video game if you prefer) with a procedurally generated world, characters, and storyline based on the user's input via whatever real-world data, UI, or sensors you want to use. This might even be possible now if you are a billionaire with access to million-dollar server farms. Probably my imagination isn't broad enough in estimating the limit of "personal computing", but additional computing power beyond that seems pointless for any one individual.
In any case the price of an AI product will depend on the features offered and how much hardware is needed. Probably you can run a 500-billion-parameter LLM on a smart watch in 2055, but if you want that power today it seems to cost about $20 a month. I doubt anyone will ever try to charge more for today's $20 feature set. The price of this stuff will absolutely decrease; the only unknown is how companies will roll out new functionality and how specific future features fit into the pricing tiers over time.
Re: (Score:2)
A lot of the cost is involved with the training. AI companies got caught with their hands in the "stealing public work" cookie jar, and that's unlikely to be allowed to happen again at any meaningful scale. That means new AI cos are unlikely to form. They are more or less out of places to steal from without actually paying someone, and that will increase costs as well.
Facebook's LLaMA is OSS only because of how far they know they are behind, and even to get there they got sued for stealing from pirate sites to train the
Re: (Score:3)
Disney re-using public domain stories over and over is not "stealing" but AI companies using OSS is "stealing". The knowledge is there to build upon. You stole from all of the people before you, who figured out all the math and chemistry and physics that
Re: (Score:2)
Yes, what you do is copyright infringement. But: you are being let off due to an exception. The exception is the GPLv2 bargain you agreed to.
As soon as you don't follow the GPLv2 rules to the letter, you are theoretically a target for damages, because you have no right to do what you do.
Of course, you are presumably a small fish and it is difficult to find you, so you are statistically safe. But it still makes you a criminal if you breach the GPLv2.
Re: (Score:2)
Looking at source code is not copyright infringement.
Re: (Score:2)
Re: (Score:2)
If an LLM regurgitates your code verbatim (or really close so it could reasonably be considered a derivative work) and it's uncredited, then it's copyright infringement and also plagiarism.
Re: (Score:2)
When I want to develop a linux kernel driver, I would go and look how other drivers are implemented. Is that stealing?
Maybe not stealing, but depending on your output it could be a copyright violation.
The normal way to avoid copyright violations is to perform a clean room implementation. https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:2)
I agree. The opportunity for AIs to act as Agents will be worth lots to providers. As always, we are the product.
Re: (Score:2)
Good models also get smaller. Did you try Ministral 8B or Qwen3.5 9B? They beat many much larger models from 6 months ago and run fast on cheap desktop hardware.
Moore's Law is most certainly dead! (Score:2)
Sorry, 10-15% every 3 years is not going to meaningfully impact AI costs. This is not a computation issue, but an algorithm issue. If you want real AI gains, you need a new algorithm/strategy/approach.
Ti
Re: (Score:2)
I question their strategy (Score:2)
AI companies released models early and for free
They also loudly and publicly made bold claims that all jobs will be replaced by AI
I can only guess that they did this to create excitement and attract investment
The general public reacted by using AI to create slop and scams
Non technical people saw only the slop, scams and predictions of job loss, and turned strongly against AI
Investors reacted irrationally
It might have been a better strategy to keep it in the lab and charge for early access, restricted to tec
Re: (Score:2)
Google's strategy was to keep it in the lab, because they didn't need money and they were winning. It was the best strategy for them to just keep going forward while others were sleeping.
OpenAI's strategy on the other hand was to publish old technology as new, make bold claims about AI and get some investors money. OpenAI needs money because they don't have much skills and the only way they know to improve AI is to put more hardware and data behind it. For them, this strategy was optimal.
Because OpenAI went pu
Enterprise will bear the cost - you won't (Score:2)
Technology will get cheaper (Score:2)
Sam Altman: AI will be sold like a utility (Score:2)
And then there's this ...
Sam Altman says AI will eventually be sold like electricity and water — by companies like OpenAI [businessinsider.com] (Mar 13, 2026):
"Fundamentally our business and I think the business of every other model provider is going to look like selling tokens," Altman said, referring to the units AI systems use to process and price input and output data.
"We see a future where intelligence is a utility like electricity or water and people buy it from us on a meter and use it for whatever they want to use it for," he added.
Re: (Score:2)
Sam Altman is quite funny. He seems entirely oblivious to the fact that deep learning means that if you can get access to a good source of data then you can probably train your own model. A good source like a company selling query-answer pairs.
You'd think the constant complaining that Chinese companies were ripping him off by, uh, using his product, would be a tip off.
Don't depend on OpenAI or Anthropic (Score:3)
AI providers that serve freely usable models are already profitable. Only the companies doing their own training are at a loss and may need to raise prices (and they also have the highest prices currently).
And in the end, aim for self-hosting. Not because there won't be cheap APIs, but because you don't want others to see your data and because you don't want to get a problem when the API provider decides that the five year old model is now too old and you need to switch to a newer one that reacts differently and need to adapt code and prompts. Self-Hosting ensures things only change when you want them to change.
"We have no moat" (Score:2)
AI is a commodity, progress is slowing, and there is no shortage of good-enough free models. OpenAI et al. are screwed.
Re: (Score:2)
FrontierMath shows that AI capability has risen from about 10% (2025-Jan) to about 40% (2026-Jan) (currently about 50%)
https://epoch.ai/frontiermath/... [epoch.ai]
I think it would be fair to wait a year and then try to figure out whether progress is slowing down. Between 2025 and 2026, progress was not slow. I agree with you that OpenAI won't be leading the scoreboard a year from now (because of scaling problems), but I am very interested to see where Gemini will be.
Re: (Score:2)
Gemini is the current threat to OpenAI. Investors won't lose interest because AI gets boring, but because it looks like Google is in a much better position, and Google's model seems to confirm that.
This is Enshittification 101. (Score:3)
No, not how it works (Score:2)
People use uber because uber is cheaper than taxis.
Not because uber is subsidized by VC.
The truth is VC wants $$$, so they want to raise the price of everything to profit.
The world doesn't work that way.
If Uber costs more than taxis, people will use taxis.
If AI is more expensive than people, people will use people again.
AI isn't going to get cheaper to the client (Score:3)
Currently, there's a huge amount of electricity being subsidized through burn rate. They can't keep burning forever and they'll want to see ROI one day. That means the customer will have to start paying enough to cover the enormous electric bills.
My guess is they're hoping to be able to burn long enough for the customer to become too de-skilled to do without them.
Bad Analogy (Score:2)
If I don't like Uber, I can't hail Didi.
If I don't like OpenAI/Anthropic/Gemini, I sure can pay for DeepSeek/GLM.
Rugpull in 3... 2... (Score:2)
It seriously is like a game of chicken (Score:3)
OpenSource LLMs are getting better every day (Score:2)
Seems like wishful thinking (Score:2)
It sucks!
(starts to suck less)
It's too expensive!
"Dude, it's $20/month"
Well ... it's going to be too expensive!
Startup economics (Score:2)
Right now I run with a $100/month Anthropic Max subscription, and the net effect is that I have a really smart (but completely unseasoned) Ph.D. in computer science who works for me full time, and a very organized generalist MBA research assistant who works roughly half time. There are a couple of gratis services in that mix (Exa and Perplexity) that I will start paying for in April. Overall this $200-ish monthly expense would cost me around a quarter million annually if I had to hire humans to replace
Then people discovered local LLMs (Score:2)
And the industry went, wait - we don't want that!
Buys up all the hardware in the world, and rents it back to you.
will likely? (Score:2)
Every new technology starts out expensive (Score:2)
Originally, cars had to be painstakingly assembled by hand. Henry Ford didn't invent the car, he invented mass production of cars, making them cheap enough for regular people to buy.
The first IBM PC cost more than $1,500 in 1980 dollars, or about $5,500 adjusted for inflation. Today, you can get a basic PC for less than $500.
You name it, new technology is always expensive to develop, but efficiencies and competition help bring down costs. AI isn't going to be any different.
Re: (Score:2)
I understand the logic, though. The big tech companies want to get people hooked on cheap and easy AI, in the hope that they'll pay for subscriptions once the free-tier plans get neutered.
What they seem to be missing is that running an AI model locally is also becoming cheap and easy. Installing OpenClaw is basically as easy as copying and pasting an install command into a command line window and entering an API key. I'm sure that we'll soon have GUI installers that make that even ea
Re: Ridiculous Millenial Bashing (Score:2)
60 percent? Hell, I might buy a sub if any of these products were close to that
Re: (Score:2)
If you include leaving out important context, choosing inferior solutions, violating KISS and some other things, none of the models will make it to 60%.
Re: (Score:2)
They have had enough of paying filthy peons and being dependent on them.
Eh... hard to blame 'em