I wish the team could either restrict new accounts from posting or at least offer default filtering so that I only see posts from accounts that meet certain criteria.
I don't want to see HN become Twitter, which is full of bots and noise; that would be a really sad day.
I do think this is relevant though: "HN can't be immune from macro trends" - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Even for posts that are interesting to me, I get the feeling that they're not worth looking at because they were probably made using LLMs. Nothing against LLMs, but I personally thought of Show HNs as doing something for the love of it, with the end result being a bonus.
So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.
Moderators don't have the capacity (and, in fairness, it is impossible) to check whether posters are bots or humans.
There are no good solutions. There are hundreds of thousands of intelligences out there, trained for millions of hours on how to scam humans, capable of spitting out text tirelessly and shamelessly, and there will only be more of them: tens, hundreds, thousands of times more.
Losing that seems too high a price to pay. Yes, there are AI-generated comments; in the past there were script-generated comments. You can report, downvote, or just ignore and move on. I am aware that posts like this exist, but I feel they are being effectively managed.
Try not to be too offended by the notion of these posts existing. Many of them are not malicious; they're just caused by users stepping outside what is considered appropriate. But in a landscape where the footing is quite dynamic and everyone is making their own judgement calls without a clear consensus, guidance seems more appropriate than punishment.
It almost feels like new accounts should be treated like new posts: it's sort of a service that a select few are willing to undertake, upvoting interesting stories early on.
I wish even more that I could block specific users (there are some highly prolific, high-karma users here who are extremely irritating), but that's harder and is probably best handled client-side.
AI spam good actually. More please. Concern levels: zero. upvote for progress
There are still quality submissions by new accounts and HN is good at pulling those needles from the haystack.
From the perspective of someone usually just swinging into a post from the front page: when I do see green, it's usually overtly political trolling, dead from the start. So I had assumed new account = your posts show up gray and no one sees them, at least for a week or two.
I don't envy the "Show HN:" case. It can be intractable. Story time:

Last week, there was a "Show HN:" post for a GitHub link that made it all the way to #2. It was a Flutter app, written up as if it did all the stuff you'd want from an open-source LLM client. I said to myself, "Geez, I knew I took too long to deliver the thing I've been working on for 2 years. The MVP version is insanely popular."

Only after digging into the repo for 10 minutes, with domain expertise, did I realize it was a complete Potemkin village built by Claude. And even then, I was afraid to post something pointing this out, because it required domain expertise and could have read as negative rather than principled.

All that to say: some subsets of The AI Poster problem now require intimate domain expertise and 10 minutes to evaluate. :/
Additionally, the Claude 4.6s and GPT-5.4s are better than me at posting on HN now. :/ And I've been here 16 years. For the last week, pretty much every single one of my comments has been written by Opus 4.6 or GPT-5.4, via: 1) dump the HN post into the prompt, 2) say "I feel $X about this, write me an HN post that communicates this but not negatively".

I'm a little ashamed to admit that if you look through my post history, you'll definitely see a 16-year pattern of someone who is very negative and has a hard time communicating constructively. The models are smart enough now to extrapolate observations the way I want to, while avoiding my own tarpits.
I.e., only surface stories posted or upvoted by those you trust, and the inverse for those you distrust.

Then have trust drop off exponentially as it propagates transitively, and it could be almost workable.
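A minimal sketch of what that propagation could look like, with made-up usernames, trust weights, and a 0.5-per-hop decay (all of which are assumptions for illustration, not anything HN actually implements):

    from collections import defaultdict

    # Hypothetical direct-trust edges: who you trust or distrust, in [-1.0, 1.0].
    direct_trust = {
        "me": {"alice": 1.0, "bob": 0.8, "spammer42": -1.0},
        "alice": {"carol": 0.9},
        "carol": {"dave": 0.7},
    }

    def transitive_trust(root, decay=0.5, max_hops=3):
        """Propagate trust outward from `root`, shrinking its weight
        exponentially at each hop so distant endorsements count for little."""
        scores = defaultdict(float)
        frontier = [(root, 1.0, 0)]
        while frontier:
            node, weight, hops = frontier.pop()
            if hops >= max_hops:
                continue
            for neighbor, edge in direct_trust.get(node, {}).items():
                scores[neighbor] += weight * edge * (decay ** hops)
                if edge > 0:  # only propagate through people you positively trust
                    frontier.append((neighbor, weight * edge, hops + 1))
        return dict(scores)

    print(transitive_trust("me"))
    # e.g. alice 1.0, bob 0.8, spammer42 -1.0, carol 0.45, dave ~0.16

Stories and comments could then be ranked by the submitter's score, with anything below some negative threshold hidden outright.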
I'm not saying your idea is necessarily bad; I'm just offering another perspective.
If you look at the leaderboard (https://news.ycombinator.com/leaders), you'll find a few old accounts that do pretty much nothing but farm links, sometimes posting dozens of times a day, with a very low percentage of comments. Their high "score" isn't an indicator of quality; they just spam enough that a few submissions get good upvotes, but most of their submissions are low quality.
Such a sad development.
Think back to prohibition. Just because we want less public drunkenness doesn't mean it is wisest to ban alcohol. One has to ask: what is the chance the ban is successful? What happens when it cuts the wrong way?
To what degree do we care about (1) "human" versus "AI"; (2) comment quality; (3) sensible methods for revealing social preferences? I care a lot more about the latter two than the first. It doesn't have to be a zero-sum tradeoff, but I think it's a good starting question.
Let's have that discussion and not try to solve the human-vs-AI classification problem.
I find it's worse here now than on X. Literally every discussion turns meta and severely politicized. On certain topics you get flagged out by a mob for stating facts.
At least on X, reply bots are not allowed anymore. Blue checks are useless, though.
(edit: And thus such bots can't easily discover that they shouldn't post, afaict)
Additionally, dang has replied to it: https://news.ycombinator.com/item?id=47050421
My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'. Then anyone can install hackersmacker, and add "sock_puppet_detector" as a friend to see sock-puppets highlighted. Likewise for rules violators.
A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever.
The spammers, on the other hand, know how the rules work and will just build their bots to work around them (waiting 30 days, farming karma).

The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors; who else would jump through hoops just to participate on a web forum?
I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.
While I think a minimum post count or reputation metric could perhaps reduce AI-generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.
Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?
I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?
Other subs are slowly being inundated with hidden history spammers …
Bad times.
randusername_2022
I'm right on the boundary of the slopocene, not sure if in or out.
Moderation is already taxing as it is.
I've read all of the source and I drove the architecture, but it would be a stretch to say I didn't ask for assistance on things that felt fuzzy or foreign to me. I've also generally stopped typing code. I still don't think the LLM made the project, though; it feels like my decision-making.
If the bar for Show HN becomes no AI whatsoever, then you're just going to see a bunch of people covering their AI tracks. I'm reluctant to post it because I'm afraid of getting blasted by the community for using AI. At the same time, it is work that I've poured hundreds of hours into, that I'm proud of, and that I think would be of interest to HN.
I read the Obliteratus post that made it to the front page the other day, and I agree that it is pure slop. While it's frustrating that it took up front-page space, it's evident that the whole community caught on to the sloppiness of it all immediately and called it out. I just don't think HN wants to set the precedent that no AI code should be shared.
I also saw a week or two ago that someone open-sourced a project of theirs that hadn't been open source in the first place. The reason they gave for having kept it closed was that they had vibe-coded it and were embarrassed to be found out. So if you want to get a concept out quickly with AI, you're now hesitant to open source it because of the precedent set by the community. That's a scary thought to me. I would rather know that the tools I'm using are AI-generated/assisted and make the value judgement on whether I trust the code and the project owners.
1. Exist for some time.
2. Vote on stuff that humans would vote for.
3. Avoid voting on traps.
4. Comment occasionally and productively.
5. Post to a limited existing audience, and receive upvotes.
6. Post limitedly to a general audience.
7. Post generally.
It's basic earn-a-reputation behavior.
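A minimal sketch of how a ladder like that could be gated; every stage name and threshold here is invented for illustration, not HN's actual rules:

    from dataclasses import dataclass

    @dataclass
    class Account:
        age_days: int
        good_votes: int           # votes that matched eventual consensus
        trap_votes: int           # votes on honeypot/trap items
        productive_comments: int
        upvotes_received: int

    def posting_privilege(a: Account) -> str:
        if a.age_days < 14:
            return "read-only"                 # 1. exist for some time
        if a.good_votes < 20 or a.trap_votes > 0:
            return "vote-only"                 # 2-3. vote well, avoid traps
        if a.productive_comments < 10:
            return "comment-only"              # 4. comment productively
        if a.upvotes_received < 50:
            return "post-to-limited-audience"  # 5-6. limited posting
        return "post-generally"                # 7. full posting rights

    print(posting_privilege(Account(30, 25, 0, 12, 8)))
    # -> post-to-limited-audience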
Unfortunately, I was not able to "reorganize" comments/posts in a manner that I felt was particularly "better", and didn't keep the plugin, for whatever that's worth.
I think it would be more prudent to overlay a web of trust, where accounts whose links/comments you upvoted are given significantly higher priority in other threads/feeds (unfortunately, downvotes are not made apparent on HN, but factoring in downvotes would also help). Exposing your web of trust may also assist others in determining trusted content.
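Roughly the reranking I have in mind, sketched with hypothetical data shapes and coefficients:

    # Hypothetical vote history: author -> (my upvotes, my downvotes) on their items.
    my_votes = {"alice": (12, 0), "bob": (3, 1), "crank99": (0, 7)}

    def author_weight(author):
        """Boost authors I've upvoted, demote ones I've downvoted;
        unknown authors get a neutral 1.0."""
        ups, downs = my_votes.get(author, (0, 0))
        return 1.0 + 0.1 * ups - 0.2 * downs

    def rerank(items):
        # items are {"title": ..., "author": ..., "score": ...} dicts
        return sorted(items,
                      key=lambda i: i["score"] * author_weight(i["author"]),
                      reverse=True)

    front_page = [
        {"title": "Post A", "author": "bob", "score": 100},
        {"title": "Post B", "author": "alice", "score": 60},
        {"title": "Post C", "author": "crank99", "score": 90},
    ]
    print([i["title"] for i in rerank(front_page)])
    # -> ['Post B', 'Post A', 'Post C']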
Perhaps this web-of-trust approach is dystopian on the order of MeowMeowBeenz, but I have not heard any other practical solutions to the disintegration of trust which is upon us.
Edit: Elsewhere in this thread HackerSmacker was mentioned, which is what I'm describing. That's exciting, I'll be trying it out later.
I'm using a new account and will likely use one forever, as I don't want lots of posts linked together, nor do I care about points or karma or whatever it's called. My first few comments are always shadowbanned. I also see lots of dead posts for new accounts with "showdead" turned on. A lot of them are normal, useful comments, some are inflammatory or just plain stupid. I haven't seen many comments that seem to be AI generated. Maybe they are and I just don't see it, idk.
Anyway, if a comment passes some basic filter (doesn't post shady links or talk about VIAGRA or 11 INCH PENIS or something spammy), I hope it still shows up, even as "dead". On this account I copied one dead comment to give it more visibility, and I've done it a few times before, too. The comment is still dead, btw (id 47262467). And maybe instead of (shadow)banning new users/posts, just make a separate view for old/established accounts and another one for all posters.
I would also gladly solve some CPU- or RAM-intensive task as proof of work (PoW). If I really had to, I'd pay with Monero or something similar, as long as it's a currency with low fees, so that a payment equivalent to 25 cents wouldn't incur a big fee. I wouldn't pay more per account (especially since I rotate them), as I've been a lurker for years and only recently started posting anyway.
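For the CPU-bound case, a classic hashcash-style scheme would do: the server issues a random challenge, the client searches for a nonce whose hash clears a difficulty target, and the server verifies with a single hash. A minimal sketch (the difficulty and byte sizes are arbitrary choices, not anyone's spec):

    import hashlib, os

    def solve_pow(challenge, difficulty_bits=20):
        """Find a nonce such that sha256(challenge || nonce) has
        `difficulty_bits` leading zero bits (~2**20 hashes on average)."""
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify_pow(challenge, nonce, difficulty_bits=20):
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    challenge = os.urandom(16)           # server-issued per signup attempt
    nonce = solve_pow(challenge)         # client burns a second or two of CPU
    assert verify_pow(challenge, nonce)  # server checks instantly

A RAM-intensive variant would swap SHA-256 for a memory-hard function like scrypt or Argon2, so GPU farms don't get a huge edge over ordinary machines.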
Finally, thanks for letting us sign up over Tor. :)
There are barely any bots on Twitter. There were thousands upon thousands of bots before 2023, because the API was free. These days, running a bot on Twitter is expensive.
Fun fact: a company I worked for in the past had access to an undocumented, partners-only API that allowed us to register an unlimited number of accounts. I was personally tasked with handling the integration.
In addition, I've been here on HN since the late 2000s. Look, it's a new profile. Also, sometimes I use AI to help craft better responses. Do with that what you will.
345 comments | 64 hidden | 50 blocked | 15 green
So: I don't see people who annoyed me for one reason or another in the past, I auto-hide the top 1000 accounts by word count, and I hide all green users. This was trivial to write for myself, and I think more people should build something like this for themselves.
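For anyone curious, the core of such a script really is just a few set lookups per comment. A sketch, where every field name and threshold is my own guess rather than the commenter's actual code:

    # Hypothetical personal filter state, built up over time.
    hidden   = {"user_i_muted", "another_one"}   # manually hidden
    blocked  = {"serial_flamer"}                 # manually blocked
    wordiest = {"prolific_1", "prolific_2"}      # pretend top 1000 by word count

    def visible(comment):
        """Drop comments from hidden/blocked users, the wordiest
        accounts, and green (brand-new) users."""
        author = comment["author"]
        if author in hidden or author in blocked or author in wordiest:
            return False
        if comment.get("account_age_days", 9999) < 14:  # threshold guessed
            return False
        return True

    thread = [
        {"author": "serial_flamer", "account_age_days": 3000},
        {"author": "newbie_42", "account_age_days": 2},
        {"author": "ok_user", "account_age_days": 800},
    ]
    print([c["author"] for c in thread if visible(c)])  # -> ['ok_user']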
But then again, some of the most prolific, most-upvoted accounts on this site constantly flood it with political content, nothing is ever done about it, and they get rewarded for it... so yeah. I gave up hope a long time ago.
I think a simple solution (and one that every content platform will eventually have to adopt) is to allow users to tag AI-generated spam. A few years from now, this feature should be the norm, like the existing basic forum features: upvote, downvote, favorite, hide, etc. I know this will require much more development effort than simply blocking new accounts from posting. But on the other hand, you can't block new accounts forever.
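Mechanically, this is barely more than one more vote type plus a threshold. A sketch, with an invented cutoff and no real persistence or one-vote-per-user enforcement:

    from collections import Counter

    # item id -> tag counts from distinct users
    tag_votes = {}

    def tag_item(item_id, tag="ai-spam"):
        tag_votes.setdefault(item_id, Counter())[tag] += 1

    def is_flagged(item_id, threshold=5):
        """Collapse or derank an item once enough users tag it."""
        return tag_votes.get(item_id, Counter())["ai-spam"] >= threshold

    for _ in range(5):
        tag_item(12345)          # made-up item id
    print(is_flagged(12345))     # -> True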
Bots are recognizable and can be selectively ignored. But the echo chamber that would result from measures like this cannot be, because you cannot see the potential comments and posts that were snuffed out because someone didn't bother.
If you want HN to be a place where you feel comfortable and your worldview goes unchallenged, sure, go ahead. But then, we already have Reddit.
I believe it's a policy or moderation-enforcement issue: for example, banning incomprehensible or low-value posts, whether generated by AI or not.