Anthropic's Haiku 3.5 Surprises Experts With an 'Intelligence' Price Increase (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: On Monday, Anthropic launched the latest version of its smallest AI model, Claude 3.5 Haiku, in a way that marks a departure from typical AI model pricing trends -- the new model costs four times more to run than its predecessor. The reason for the price increase is causing some pushback in the AI community: more smarts, according to Anthropic. "During final testing, Haiku surpassed Claude 3 Opus, our previous flagship model, on many benchmarks -- at a fraction of the cost," Anthropic wrote in a post on X. "As a result, we've increased pricing for Claude 3.5 Haiku to reflect its increase in intelligence."
"It's your budget model that's competing against other budget models, why would you make it less competitive," wrote one X user. "People wanting a 'too cheap to meter' solution will now look elsewhere." On X, TakeOffAI developer Mckay Wrigley wrote, "As someone who loves your models and happily uses them daily, that last sentence [about raising the price of Haiku] is *not* going to go over well with people." In a follow-up post, Wrigley said he was not surprised by the price increase or the framing, but saying it out loud might attract ire. "Just say it's more expensive to run," he wrote.
The new Haiku model will cost users $1 per million input tokens and $5 per million output tokens, compared to 25 cents per million input tokens and $1.25 per million output tokens for the previous Claude 3 Haiku version. Claude 3 Opus, presumably more computationally expensive to run, still costs $15 per million input tokens and a whopping $75 per million output tokens. Speaking of Opus, Claude 3.5 Opus is nowhere to be seen, as AI researcher Simon Willison noted to Ars Technica in an interview. "All references to 3.5 Opus have vanished without a trace, and the price of 3.5 Haiku was increased the day it was released," he said. "Claude 3.5 Haiku is significantly more expensive than both Gemini 1.5 Flash and GPT-4o mini -- the excellent low-cost models from Anthropic's competitors."
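To put the quoted per-million-token rates in concrete terms, here is a small sketch comparing what a hypothetical daily workload would cost under the old and new Haiku pricing (the workload size is an illustrative assumption; the rates are the ones quoted above):

```python
def cost_usd(input_tokens, output_tokens, in_rate, out_rate):
    """Cost in dollars, given rates in dollars per million tokens."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical workload: 2M input tokens and 500k output tokens per day.
old = cost_usd(2_000_000, 500_000, in_rate=0.25, out_rate=1.25)  # Claude 3 Haiku
new = cost_usd(2_000_000, 500_000, in_rate=1.00, out_rate=5.00)  # Claude 3.5 Haiku

print(old, new)  # 1.125 4.5 -- a uniform 4x increase on both rates
```

Because both the input and output rates rose by exactly 4x, any mix of input and output tokens costs four times as much as before.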
Self Reference (Score:2)
They asked Haiku how it should be priced, and it gave them that BS answer.
Re: (Score:2)
They probably realized they can't keep selling it at 2% of their cost.
Huh? (Score:2)
I just read the summary and I feel like I just got dumber. I've used ChatGPT a few times, and it's helped with some general knowledge stuff, but anything else? No clue...
Writing (Score:4, Interesting)
I just read the summary and I feel like I just got dumber. I've used ChatGPT a few times, and it's helped with some general knowledge stuff, but anything else? No clue...
AI is currently a good choice for assistance in writing.
If you look over reviews of the current AI offerings for writing, you find that AI really shines at the very fine-grained level: it will give you words, sentences, and even paragraphs that appear to be really well written.
But in review after review, people note that after a paragraph or two the writing starts to show flaws, and after a page or two it just doesn't have the level of expertise a human writer would bring.
Things like "take this paragraph and make it more tense/faster/slower/descriptive/visceral" seem to work well. Things like "analyze these paragraphs, and write a new paragraph in my same style that says *this*" work OK as well.
Things like "give me five other phrases that describe *this*" work really well, as does "give 5 other words that mean *that*".
What's missing from all of these are the implied "...and I'll choose the best one".
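That "generate candidates, then I'll choose the best one" loop can be sketched in a few lines. The model call is stubbed out here with a canned list; in practice you would swap in a real LLM client, and the prompt wording, stub phrases, and toy scorer are all illustrative assumptions:

```python
def candidate_prompt(phrase, n=5):
    # Build the kind of request described above: ask for n alternatives.
    return f"Give me {n} other phrases that describe: {phrase}"

def pick_best(candidates, score):
    # The implied final step: rank the model's suggestions and choose one.
    # In real use this is the human's judgment; here a scoring function
    # stands in for it.
    return max(candidates, key=score)

# Stub standing in for a model's response to candidate_prompt("nightfall", 3).
candidates = ["the sky grew dark", "dusk settled in", "night crept closer"]

best = pick_best(candidates, score=len)  # toy scorer: prefer longer phrasing
print(best)
```

The point of the sketch is that the selection step stays outside the model: the AI supplies options at the word/sentence level, and the writer does the choosing.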
Right now AI works well as a productivity enhancement tool, but not as something that can write a complete piece on its own -- at least not more than a page or two of text that holds together.
The bills are due (Score:4, Informative)
They have to start showing some real revenue other than more rounds of VC money. The novelty is wearing off. AI winter is coming.
Re: (Score:2)
The novelty is wearing off. AI winter is coming.
That would be nice. With the even more massive overpromising this time compared to previous hype cycles, it is absolutely inevitable, but maybe they can keep the hype going a bit longer. What will be interesting is whether free LLMs will vanish completely (too expensive to run) or whether, by the time the house of cards collapses, they will be efficient enough that some free offers remain. I know I will not pay for this.