HACKER Q&A
📣 changisaac

Will LLM API costs be negligible in a year?


Hi HN. We’re managing costs at my startup and by far our largest spend is on calls to Anthropic, OpenAI, etc. We’ve considered options like spinning up our own open-source model, but decided it’s not worth it given that we don’t even have PMF yet.

Optimistically though, I see that token prices for LLMs have been dropping a lot over the past few years. Do you think, if this continues, that they’ll eventually become a negligible expense? Or do you think we will forever be gouged by these foundation model companies? (: Much like how cloud computing has gone (AWS, GCP, etc.)


  👤 ben_w Accepted Answer ✓
Define "negligible".

You need to know how much LLM output you need to get your product working, before you even know what you're hoping for regarding a target cost per million tokens. When you do get PMF, can some of the work be offloaded to a smaller and cheaper model? Can you determine this division of labour yet?
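To make that division-of-labour question concrete, here's a back-of-envelope sketch of how routing a share of traffic to a smaller model changes the blended cost. All prices here are made-up placeholders, not any vendor's actual rates:

```python
# Blended cost per 1M tokens when a fraction of traffic is routed
# to a cheaper model. All prices below are hypothetical.

def blended_cost(big_price, small_price, small_share):
    """Cost per 1M tokens when `small_share` of tokens go to the small model."""
    return big_price * (1 - small_share) + small_price * small_share

BIG = 15.00    # $ per 1M tokens, hypothetical frontier-model price
SMALL = 0.60   # $ per 1M tokens, hypothetical small-model price

for share in (0.0, 0.5, 0.9):
    cost = blended_cost(BIG, SMALL, share)
    print(f"{share:.0%} offloaded -> ${cost:.2f} per 1M tokens")
```

The point of the exercise: offloading half your tokens roughly halves the bill, so knowing which parts of your workload tolerate a cheaper model matters more than waiting for headline prices to fall.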

Consider also that "computer" used to be a job title, that since then the cost of doing computations has reduced by a factor of at least 1e14, and yet that you're only asking this question at all because you're still compute limited.


👤 musbemus
If prices do start to become unsustainable, you might see more companies moving to a BYOK or usage-based billing model. In that case, I don't know if the use cases for AI would justify the cost for consumers (though perhaps for businesses). There's been a ton of data-center build-out, so I do think the cost reductions we've seen so far may continue, though possibly at the expense of more performant models. Hard to tell right now, though.

👤 codingdave
At some point AI providers will need to break down profit per token and price accordingly. Right now, they are losing money to gain market share. Likewise, AI consumers will need to factor the expense of AI into their own profit calculations.

Hard to say how it will play out, beyond the fact that both sides will strive to maximize their own benefit; time will tell how the actual numbers balance out.

This is one reason why it matters whether or not the AI bubble is all hype. There is a non-trivial chance that once people truly figure out the monetary value of AI's help on their processes and cut out all the hype-based use cases, the amount they're willing to spend to capture that value won't match what the providers need to run their platforms.


👤 symbolicAGI
When frontier models are released, the vendors operate their UIs and APIs at a substantial profit on inference. Overall, however, the vendors are losing money because they keep paying ever-increasing training costs for the next version of their frontier model.

This money-losing business of the vendors will no doubt continue for at least another year.

There are two ways to expect lower LLM API costs in the future:

1. Be satisfied with an older version of a particular LLM. As inference hardware and software become more efficient, vendors can lower API prices on older models to remain competitive.

2. Eventually - not next year - the return on investment from training the next version of an LLM will decrease relative to the ROI on current LLMs (because the improvements will be less dramatic), and the training cost of such a model will necessarily be spread over a longer period as competition allows. At that point, whenever it comes, training costs might level off or actually decrease, and those savings would be competitively passed along to the API consumer. Coincidentally, that would also be the point at which the vendors become profitable overall.
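A quick sketch of why the amortization period in point 2 matters: spreading a fixed training cost over more tokens served shrinks the training surcharge baked into each token's price. Every number below is invented purely for illustration:

```python
# Amortizing a hypothetical training cost over tokens served during
# the model's revenue lifetime. All numbers are invented.

def training_cost_per_million_tokens(training_cost, tokens_served):
    """Training-cost share, in dollars, per 1M tokens served."""
    return training_cost / (tokens_served / 1_000_000)

TRAINING_COST = 1e9       # $1B to train the model, hypothetical
TOKENS_PER_YEAR = 1e15    # tokens served per year, hypothetical

for years in (1, 3):
    per_m = training_cost_per_million_tokens(TRAINING_COST, TOKENS_PER_YEAR * years)
    print(f"amortized over {years}y -> ${per_m:.2f} of training cost per 1M tokens")
```

Under these made-up figures, stretching amortization from one year to three cuts the per-token training surcharge to a third, which is the mechanism by which slower release cycles could translate into lower API prices.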