HACKER Q&A
📣 pera

Why are there still no signs of increased productivity anywhere?


Several PhD-level reasoning models have been released since September of 2024, and since then there have been many extraordinary claims of 10x to 1000x increases in programming productivity.

Given that it's now October of 2025, I must ask: why are there no signs of such a revolutionary increase in productivity?


  👤 pera Accepted Answer ✓
I also wanted to add a bit more context regarding some of these claims.

For example, back in March Dario Amodei, the CEO and cofounder of Anthropic, said:

> I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code

Other similar claims:

https://edition.cnn.com/2025/09/23/tech/google-study-90-perc...

https://www.cnbc.com/2025/04/29/satya-nadella-says-as-much-a...

Some of these AI predictions seem quite unlikely too, for example AI 2027:

https://news.ycombinator.com/item?id=43571851

> By late 2029, existing SEZs have grown overcrowded with robots and factories, so more zones are created all around the world (early investors are now trillionaires, so this is not a hard sell). Armies of drones pour out of the SEZs, accelerating manufacturing on the critical path to space exploration.

> The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia


👤 necovek
This sounds like a disingenuous "Ask HN": you are ostensibly questioning marketing claims by AI model producers by pointing out that their predictions have not come true.

Everyone knows why that's the case: the claims were never backed by anything except the word of people who had an interest in getting others to buy into them.

There might even be a case of shareholder fraud there for any executive of a public company, but obviously they'll just claim they honestly believed it.


👤 pavel_lishin
> since then there have been many extraordinary claims

Has there been any extraordinary evidence?


👤 AfterHIA
Language models have limited use in many well-established domains like the humanities, literature, and art. The reason "AI" isn't being used to build "the future we always wanted" is that even before LLMs, innovation and incremental improvement weren't "hard": it takes significant financial infrastructure to market products and create "new, better norms," and given that software has moved from "sell people useful tools and support" to "collect and sell massive amounts of data; engage in behavior modification," there's no real reason to create better tools, even if Claude can exponentially reduce development costs and time to working prototypes. We're living beyond the scope of market capitalism. We now live in pre-technofeudalism, so all non-marginal gains are going to serve the oligarchs' potential for rent collection. They aren't for you and me.

Real innovation looks like this: https://worrydream.com/ and https://archive.org/details/humaneinterfacen00rask and https://www.dougengelbart.org/pubs/papers/scanned/Doug_Engel...