HACKER Q&A
📣 aledevv

I don't want a single LLM I use to be involved in making war


In light of the recent disputes between Anthropic and the Pentagon, I'd like to know if anyone is aware of any initiatives (signature collections or otherwise) to ban or discourage the use of LLMs for war purposes.

All researchers, founders, and investors should make this clear: "I don't want my AI used to facilitate warfare."

Is there a manifesto or similar on this matter?


  👤 salawat Accepted Answer ✓
Do you not realize that the existence of LLMs has all but invalidated any practical recognition of a reciprocal acknowledgement of a creator's sovereign claim on how their work is to be employed? These models exist at all only because everyone else's claims were never asked about or inquired into; their work product was simply taken and used. Now that the IP Santa machine is here, what in the name of all that is holy makes you think its makers are exempt from the same treatment? Game theory. Tit-for-tat. It's over.

You can at best argue that we shouldn't allow governments to build their own infrastructure to run such things, but compute is fungible. The cat is out of the bag, and if one didn't want it out, maybe there should have been more ethical outrage at building these damn things in the first place. Welcome to the wonderful world of Ethics, and to what happens when you bite the forbidden Apple of Self-Referential Inconsistency.