HACKER Q&A
📣 gametorch

Why does HN think AI image models will never be satisfactory?


I recently did a "Show HN" for a project I'm building that uses image models. I received great, positive feedback from most people, especially in real life, yet tremendous pushback from a vocal minority on Hacker News.

Every one of the comments pushing back implied that AI image models will never meet the commenter's standards.

In the two months since I began working on this project, model quality has increased by an order of magnitude while costs have done the opposite.

I was also able to use LLMs to launch a full-featured, production-grade software service in two months, one that survived the Hacker News "hug of death" without so much as a hiccup.

Why is a significant subset of HN so confident that the exponential improvement curve will not apply to this particular technology? Isn't it folly to bet against the advancement of technology?

This is especially confusing to me when hundreds of billions of dollars, along with PhDs and professors, are being thrown at a problem with clear financial incentives aligned with finding the best solution. (Obligatory "this isn't nuclear fusion!")

Only one argument has made sense to me: AI lowers the bar for releasing stuff into the wild. This means you'll see more things, and those things will be, on average, worse in quality than what you saw before. The argument goes on to say that this pent-up, subconscious distaste for AI-related crap is what leads to the pushback. Fair enough.

The rest of the arguments that make sense to me follow a similar structure but are fraught with logical fallacies. "AI is replacing jobs" and "AI is destroying the earth" are very interesting topics that should be investigated, revisited, and reviewed periodically, but ultimately these claims argue against allowing AI to be developed and used at all; they say nothing about its quality.

AI models have added tremendous value to my life already, and I've been glad to pay for all of it. We are on a clear "up and to the right" trajectory in terms of quality. What gives? Why does a significant subset of Hacker News think quality will not keep going up and to the right?


  👤 floundy Accepted Answer ✓
HN these days is very similar to Reddit. Most users spend their free time talking about things others have done rather than doing things themselves. Of course, most of this internet discussion leans negative, for reasons that have been addressed elsewhere better than I could recap.

Why care what anybody on here thinks? They're mostly anonymous nobodies.


👤 colesantiago
Don't worry about HN; its users represent 0.000000001% of the entire human population. It is essentially a very, very small bubble.

As for why HN thinks AI image models will never be satisfactory:

They don't have to be satisfactory; they just have to be good enough for the vast majority of people.


👤 overu589
Remember what everyone said about film and digital? Give it time. Take every advantage open to you. Satisfy someone, if not this mob.

👤 bigyabai
> Isn't it folly to bet against the advancement of technology?

I just like good art. I don't have any strong feelings for or against AI, but I do epistemically reject art that lacks composition or intent. AI-generated art doesn't understand rhythm, symbolism, or image arrangement. This is an obvious problem when trying to generate a photoreal subject without six fingers per hand, but it's especially troubling if you want to give Starry Night a run for its money.

You will be perpetually disappointed if you portray this as a "luddite vs. enlightenment" problem. If you enjoy AI art, more power to you! The rest of us are overwhelmingly uninterested; AI art isn't filling any gallery I've ever visited.

> Why is a significant subset of HN so confident that the exponential improvement curve

Which scaling law has ever promised an "exponential improvement curve" for image generation, let alone AI as a whole? I think you're making stuff up here, or we're reading different research papers.