Anthropic's Haiku 3.5 Surprises Experts With an 'Intelligence' Price Increase (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: On Monday, Anthropic launched the latest version of its smallest AI model, Claude 3.5 Haiku, in a way that marks a departure from typical AI model pricing trends -- the new model costs four times more to run than its predecessor. The reason for the price increase is causing some pushback in the AI community: more smarts, according to Anthropic. "During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarks -- at a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."
"It's your budget model that's competing against other budget models, why would you make it less competitive," wrote one X user. "People wanting a 'too cheap to meter' solution will now look elsewhere." On X, TakeOffAI developer Mckay Wrigley wrote, "As someone who loves your models and happily uses them daily, that last sentence [about raising the price of Haiku] is *not* going to go over well with people." In a follow-up post, Wrigley said he was not surprised by the price increase or the framing, but saying it out loud might attract ire. "Just say it's more expensive to run," he wrote.
The new Haiku model will cost users $1 per million input tokens and $5 per million output tokens, compared to 25 cents per million input tokens and $1.25 per million output tokens for the previous Claude 3 Haiku version. Presumably being more computationally expensive to run, Claude 3 Opus still costs $15 per million input tokens and a whopping $75 per million output tokens. Speaking of Opus, Claude 3.5 Opus is nowhere to be seen, as AI researcher Simon Willison noted to Ars Technica in an interview. "All references to 3.5 Opus have vanished without a trace, and the price of 3.5 Haiku was increased the day it was released," he said. "Claude 3.5 Haiku is significantly more expensive than both Gemini 1.5 Flash and GPT-4o mini -- the excellent low-cost models from Anthropic's competitors."
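The price jump the summary describes is easy to put in concrete terms. A minimal sketch, using only the per-million-token rates quoted above (the workload numbers are made up for illustration):

```python
# Per-million-token prices (USD) quoted in the summary.
OLD = {"input": 0.25, "output": 1.25}   # Claude 3 Haiku
NEW = {"input": 1.00, "output": 5.00}   # Claude 3.5 Haiku

def cost(pricing, input_tokens, output_tokens):
    """Dollar cost of a workload under a given pricing table."""
    return (input_tokens * pricing["input"]
            + output_tokens * pricing["output"]) / 1_000_000

# Hypothetical monthly workload: 100M input tokens, 20M output tokens.
old_bill = cost(OLD, 100_000_000, 20_000_000)
new_bill = cost(NEW, 100_000_000, 20_000_000)
print(old_bill, new_bill, new_bill / old_bill)  # 50.0 200.0 4.0
```

At those rates the same workload goes from $50 to $200 a month, matching the "four times more to run" figure in the summary.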
Self Reference (Score:2)
They asked Haiku how it should be priced, and it gave them that BS answer.
Re: (Score:2)
They probably realized they can't keep selling it at 2% of their cost.
Huh? (Score:2)
I just read the summary and I feel like I just got dumber. I've used ChatGPT a few times, and it's helped with some general knowledge stuff, but anything else? No clue...
Writing (Score:5, Insightful)
I just read the summary and I feel like I just got dumber. I've used ChatGPT a few times, and it's helped with some general knowledge stuff, but anything else? No clue...
AI is currently a good choice for assistance in writing.
If you look over the reviews of the current AI offerings for writing, you find that AI really shines at the very fine-grained level of writing. It will give you words, sentences, and even paragraphs that appear to be really well written.
But in review after review, everyone notes that after a paragraph or two the writing starts to have flaws, and after a page or two it just doesn't have the level of writing expertise that a human would have.
Things like "take this paragraph and make it more tense/faster/slower/descriptive/visceral" seem to work well. Things like "analyze these paragraphs, and write a new paragraph in my same style that says *this*" work OK as well.
Things like "give me five other phrases that describe *this*" work really well, as does "give 5 other words that mean *that*".
What's missing from all of these are the implied "...and I'll choose the best one".
Right now AI works as a productivity-enhancement tool, not something that can write a complete piece on its own -- at least not more than a page or two of text that holds together.
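The workflow described above -- have the model generate several candidates, then a human picks the best one -- is easy to sketch. Here `ask_model` is a hypothetical stand-in for any real LLM API call, stubbed so the example is self-contained:

```python
def ask_model(prompt: str) -> str:
    # Hypothetical stub standing in for a real API call
    # (e.g. via Anthropic's or OpenAI's SDK).
    return "1. alternative one\n2. alternative two\n3. alternative three"

def candidate_phrases(phrase: str, n: int = 3) -> list[str]:
    """Ask for n alternative phrasings and parse the numbered list."""
    prompt = f"Give me {n} other phrases that describe: {phrase}"
    reply = ask_model(prompt)
    # Simple numbered-list parsing; real replies may need sturdier handling.
    return [line.split(". ", 1)[1] for line in reply.splitlines() if ". " in line]

options = candidate_phrases("a tense silence")
# The implied final step: a human, not the model, chooses the best one.
```

The point of the design is exactly the "...and I'll choose the best one" step: the model broadens the option space cheaply, and the human supplies the editorial judgment.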
The bills are due (Score:5, Informative)
They have to start showing some real revenue other than more rounds of VC money. The novelty is wearing off. AI winter is coming.
Re: (Score:2)
The novelty is wearing off. AI winter is coming.
That would be nice. With the even more massive overpromising they did this time compared to the last few rounds, it is absolutely inevitable, but maybe they can keep the hype going a bit longer. What will be interesting is whether free LLMs will vanish completely (too expensive to run) or whether, by the time the house of cards collapses, they will be efficient enough that some free offers remain. I know I will not pay for this.
Re: (Score:2)
The AI bubble will burst soon. Expect a GUI puss coating the data centres.
wakeup call or another winter coming? (Score:2)
It's not what you think (Score:4, Informative)
The BeOS successor Haiku is still free. https://www.haiku-os.org/ [haiku-os.org]
Haiku (Score:3)
Price hike for more smarts,
Claude 3.5 Haiku laughs,
Wallets weep in sync.
LOGIC! (Score:3)
"During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarks -- at a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."
You know, this price increase wouldn't cause nearly as much ire if they hadn't told the entire world that it's performing better "at a fraction of the cost." Costs go down, price to end-users goes up? I know it's MBA 101, but god damn, you're supposed to babble about supply chain issues and increased operating expenditures when increasing prices, not say it's costing you less. Did this management/public relations team flunk out of business training? WTF?
Re: (Score:2)
You are absolutely right that the PR/messaging management was terrible on this, not least because a couple of weeks ago they implied that there would not be a price hike. That said, the previous version of Haiku was 60 times cheaper than Opus but didn't have anywhere near as good output, while the new version of Haiku is still 15 times cheaper than Opus and outperforms it. My guess is that it's also still borderline whether they are even breaking even on it.
In the end you can only sell products at a price that the market will bear. While this price rise is irritating and badly managed, $1 per million tokens for a model that is highly capable and really quite fast is pretty competitive right now (and will be more so when they roll out the update with vision support). GPT-4o is 2.5 times more expensive. GPT-4o Mini is cheaper but less capable. If you're building stuff in AWS then the Anthropic models also have the advantage that you can run the models right there and get a variety of security, privacy and contractual benefits without having to go to another vendor. As such, I don't think that they are going to lose customers by doing this; they'll just lose a bit of goodwill in the short term.
One thing that adds to the irksomeness is their saying they may not be turning a profit. That strikes me as a them problem, not a customer problem, which adds yet another layer of ick to the messaging: "We cut costs! We'll pass that on to our customers by raising prices!"