Optimistically though, I see that token prices for LLMs have come down a lot in the past few years. Do you think that, if this continues, it'll eventually become a negligible expense? Or do you think we will forever be gouged by these foundation model companies? (: Much like how cloud computing has gone (AWS, GCP, etc.)
You need to know how much LLM output it takes to get your product working before you can even say what target cost per million tokens you're hoping for. And when you do get PMF, can some of the work be offloaded to a smaller, cheaper model? Can you determine that division of labour yet?
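To make that concrete, here's a back-of-the-envelope sketch. Every number in it is a made-up assumption (traffic, token counts, and per-million-token prices), not anyone's real pricing; the point is just what the big-model/small-model split does to a monthly bill:

```python
# Back-of-the-envelope sketch; all numbers are made-up assumptions, not real prices.

def monthly_cost(requests_per_day, tokens_per_request, price_per_million_tokens):
    """Dollar cost for a month of traffic at a flat per-token price."""
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1_000_000 * price_per_million_tokens

# Hypothetical workload: 50k requests/day, ~1,500 tokens each (prompt + completion).
REQS_PER_DAY = 50_000
TOKENS_PER_REQ = 1_500

# Hypothetical prices: $10/M tokens for a frontier model, $0.50/M for a small one.
big_only = monthly_cost(REQS_PER_DAY, TOKENS_PER_REQ, 10.00)

# Suppose 80% of requests turn out to be simple enough for the small model.
split = (monthly_cost(REQS_PER_DAY * 0.2, TOKENS_PER_REQ, 10.00)
         + monthly_cost(REQS_PER_DAY * 0.8, TOKENS_PER_REQ, 0.50))

print(f"frontier model only: ${big_only:,.0f}/month")  # -> $22,500/month
print(f"80/20 split:         ${split:,.0f}/month")     # -> $5,400/month
```

Even with invented numbers, the takeaway is that how much work you can route to a cheaper model often matters more than the headline price per million tokens.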
Consider also that "computer" used to be a job title, that the cost of doing a computation has since fallen by a factor of at least 1e14, and yet you're only asking this question at all because you're still compute-limited.
Hard to say how it will play out, aside from the fact that both sides are going to strive to maximize their own benefit; time will tell how the actual numbers balance out.
This is one reason why it matters whether or not the AI bubble is all hype. There is a non-trivial chance that once people truly figure out the monetary value of AI's help on their processes and cut out all the hype-based use cases... what they're willing to spend to capture that value might not match what the providers need to keep the platforms running.
This money-losing business of the vendors will no doubt continue for at least another year.
There are two ways to expect lower LLM API costs in the future:
1. Be satisfied with an older version of a particular LLM. As inference hardware and software become more efficient, the vendor can lower API prices on older models to remain competitive (see the sketch after this list).
2. Eventually - not next year - the return on investment from training the next version of an LLM will decrease relative to the ROI on current LLMs (because the improvements will be less dramatic), and the training cost of such a model will necessarily be spread out over a longer period, as competition allows. At that point (whenever that is) training costs might level off or actually decrease, and those savings would be competitively passed along to the API consumer. And, coincidentally, that would be the point at which the vendors become profitable overall.
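To put rough numbers on point 1: here's a sketch where an older model tier keeps getting cheaper year over year. The 40%/year decline and the $10/M starting price are assumptions for illustration, not a forecast of any vendor's pricing:

```python
# Sketch of point 1 above; the decline rate and starting price are assumed, not forecast.
TOKENS_PER_MONTH = 2_250_000_000   # hypothetical fixed workload
PRICE_TODAY = 10.00                # hypothetical $/M tokens today
ANNUAL_DECLINE = 0.40              # assumed yearly price drop on the older model tier

for year in range(5):
    price = PRICE_TODAY * (1 - ANNUAL_DECLINE) ** year
    bill = TOKENS_PER_MONTH / 1_000_000 * price
    print(f"year {year}: ${price:5.2f}/M tokens -> ${bill:,.0f}/month")
```

The catch, as point 1 says, is that the savings only accrue if that older model stays good enough for your product.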