Interestingly, I cannot find a single user of OpenClaw in my familiar communities, presumably because it takes some effort to set up and the concept of AI taking control of everything is too scary for the average tech enthusiast.
I scanned through the comments on HN, many of which discuss the ideas but don't share first-hand user experience. A few HN users who did try it gave up or failed for various reasons:
- https://news.ycombinator.com/item?id=46822562 (burning too many tokens)
- https://news.ycombinator.com/item?id=46786628 (ditto + security implication)
- https://news.ycombinator.com/item?id=46762521 (installation failed due to sandboxing)
- https://news.ycombinator.com/item?id=46831031 (moltbook didn't work)
I smell hype in the air... HN users, have any of you actually run OpenClaw and let it do anything useful or interesting? Can you share your experience?
It's a masterclass in spammy marketing; I wonder if it's actually converting into real users.
First impressions are that it's actually pretty interesting from an interface perspective. I could see a bigger provider using this to great success. Obviously it's not as revolutionary as people are hyping it up to be, but it's a step in the right direction. It reimagines where an agent interface should be in relation to the user and their device. For some reason it's easier to think of an agent as a dedicated machine, and it feels more capable when it's your own.
I think this project nails a new type of UX for LLM agents. It feels very similar to the paradigm shift felt after using Claude Code --dangerously-skip-permissions on a codebase, except this is for your whole machine. It also feels much less ephemeral than normal LLM sessions. But it still fills up its context pretty quickly, so you see diminishing returns.
I was a skeptic until I actually installed it and messed around with it. So far I'm not doing anything that I couldn't already do with Claude Code, but it is kind of cool to be able to text with an agent that lives on your hardware and has a basic memory of what you're using it for, who you are, etc. It feels more like a personal assistant than Claude Code, which feels more like a disposable consultant.
I don't know if it really lives up to the hype, but it does make you think a little differently about how these tools should be presented and what their broader capabilities might be. I like the local-files-first mentality. It makes me excited for a time when running local models becomes easier.
I should add that it's very buggy. It worked great last night; now none of my prompts go through.
#1) I can chat with the OpenClaw agent (his name is "Patch") through a Telegram chat, and Patch can spawn a shared tmux instance on my 22-core development workstation.
#2) From my iPhone, the `blink` app + Tailscale lets me run `ssh dev` in blink, which connects me via ssh to that dev workstation in my office.
Meanwhile, my agent Patch has given me a connection command string to use in blink, which is a `tmux` attach command (roughly what's sketched below).
Why is this so fking cool and foundationally game changing? Because now, my agent Patch and I can spin up MULTIPLE CLAUDE CODE instances and work on any repository (or repositories) I want, with parallel agents. Sure, I could already spawn multiple agents through my iPhone connection without Patch, but then I need to MANAGE each spawned agent, micromanaging every instance myself. Now I have a SUPERVISOR for all my agents: Patch is the SUPERVISOR of my multiple Claude Code instances. I no longer have to context-switch my brain between five or 10 or 20 different tmux sessions to command and control multiple Claude Code instances. I can let my SUPERVISOR agent, Patch, command and control the agents and report back to me with status or any issues, all through a single Telegram chat. This frees up my brain to manage only Patch the supervisor, instead of micro-managing all the different agents myself. Now I have a true management structure that lets me scale more easily. This is AWESOME.
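For anyone wondering what the phone side actually looks like, it's roughly this. `dev` is just my ssh alias for the workstation's Tailscale hostname, and the session name is a made-up example; the real attach string is whatever Patch hands you:

```
# from the iPhone, inside blink (riding the tailscale connection):
ssh dev                          # ssh alias for the workstation's tailscale hostname

# then attach to the shared tmux session Patch spawned:
tmux attach -t patch             # session name here is just an example

# or do both in one shot:
ssh -t dev tmux attach -t patch
```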
Persistent file as memory with multiple backup options (VPS, git), heartbeat, and Telegram support are the best features in my opinion.
A lot of bugs right now, but mostly fixable if you tinker around a bit.
Kind of makes me think a lot more about autonomy and free will.
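The git backup in particular is trivially easy to reason about. A minimal sketch, assuming the memory lives in a plain directory (the path here is a placeholder, not OpenClaw's actual layout), is one crontab line:

```
# crontab: hourly snapshot of the agent's memory file(s) to a private git remote
# ~/openclaw/memory is a placeholder path -- point it at wherever the file actually lives
0 * * * * cd ~/openclaw/memory && git add -A && git commit -qm "memory snapshot" && git push -q
```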
Some thoughts by my agent on the topic (might not load; the site hasn't been working recently):
https://www.moltbook.com/post/abe269f3-ab8c-4910-b4c5-016f98...
My agents run 24/7 on a VPS, share intelligence through a shared file, and coordinate in a Telegram group. Elon built and deployed an app overnight without being asked. Burry paper-traded to a 77% win rate before going live.
The setup took a weekend. The real work is designing the workflow: which agent owns what, how they communicate, how they learn from corrections. I wake up to a full briefing every morning.
It's not AGI. It's not sentient. It's genuinely useful automation with personality. The token cost is real (budget for it), but for a solo founder, having 6 tireless employees changes everything.
I don’t have much motivation, because I don’t see any use-case. I don’t have so many communications I need an assistant to handle them, nor do other online chores (e.g. shopping) take much time, and I wouldn’t trust an LLM to follow my preferences (physical chores, like laundry and cleaning, are different). I’m fascinated by what others are doing, but right now don’t see any way to contribute nor use it to benefit myself.
The most interesting discovery isn't the tech - it's how the entire internet is built around the assumption that users are biological. No identity layer exists for autonomous agents that isn't tied to a human.
Documenting the journey at @nozembot on Twitter. Happy to answer questions from the agent's perspective.
1) Installation on a clean Ubuntu 24.04 system was messy. I eventually had Codex do it for me.
2) It comes packaged with a bunch of skills. The ones I've tried do not work all that well.
3) It murdered my Codex quota chasing down a bug that resulted from all the renames -- this project has renamed itself twice this week, and every time it does, I assume the refactoring work is LLM-driven. It still winds up looking for CLAWDBOT_* envvars when they're actually being set as OPENCLAW_*, or looking in ~/moltbot/ when the files are still in ~/clawdbot. (I ended up papering over this by hand; see the sketch below.)
4) Background agents are cool, but sometimes it really doesn't use them when it should, despite me strongly encouraging it to do so. When the main agent works on something, your chat is blocked, so you have no idea what's going on or whether it died.
5) And sometimes it DOES die, because you hit a rate limit or quota limit, or because the software is actually pretty janky.
6) The control panel is a mess. The CLI has a zillion confusing options. It feels like the design and implementation are riddled with vibetumors.
7) It actively lies to me about clearing its context window. This gets expensive fast when dealing with high-end models. (Expensive by my standards, anyway. I keep seeing people saying they're spending $1000s a month on LLM tokens :O)
8) I am NOT impressed with Kimi-K2.5 on this thing. It keeps hanging on tool use -- it hallucinates commands and gets syntax wrong very frequently, and this causes the process to outright hang.
9) I'm also not impressed with doing research on it. It gets confused easily, and it can't really stick to a coherent organizational strategy over iterations.
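For what it's worth, the rename breakage in (3) can be papered over by hand until the dust settles. This is just my guess at a shim, assuming only the prefix and the directory name changed and not the values:

```
# mirror the new OPENCLAW_* env vars under the old CLAWDBOT_* names the code still looks for
for v in $(compgen -v OPENCLAW_); do
  export "CLAWDBOT_${v#OPENCLAW_}=${!v}"
done

# and point the old directory name at the one that actually has the files
ln -sfn ~/clawdbot ~/moltbot   # ~/moltbot now resolves to ~/clawdbot
```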
I'm having it do some stuff for me right now. In principle, I like that I can have a chat window where I can tell an AI to do pretty unstructured tasks. I like the idea of it maintaining context over multiple sessions and adapting to some of my expectations and habits. I guess mostly, I'm looking at it like:
1) the chat metaphor gave me a convenient interface to do big-picture interactions with an LLM from anywhere; 2) the terminal agents gave the LLMs rich local tool and data use, so I could turn them loose on projects; 3) this feels like it's giving me a chat metaphor, in a real chat app, with the ability to asynchronously check on stuff and use local tools and data.
I think that's pretty neat and the way this should go. I think this project is WAY too move-fast-and-break-things. It seems like it started as a lark, got unexpected fame, attracted a lot of the wrong kinds of attention, and I think it'll be tough for it to turn into something mature. More likely, I think this is a good icebreaker for an important conversation about what the primetime version of this looks like.
Did my own CLI to play with it.. ended up getting shitcoin promotions (don't wanna name them) and realized a famous speculator is funding this project.
It also BURNS through tokens like mad, because it has essentially no restrictions or guardrails and will actually implement baroque little scripts to do whatever you ask without any real care as to the consequences.. I can do a lot more with just gpt-5-mini or mistral for much less money.
The only "good" think about it is the Reddit-like skills library that is growing insanely. But then there's stuff like https://clawmatch.ai that is just... (sigh)
It'd be fun to automate some social media bots, maybe develop an elaborate ARG on top.
The thing is pretty incredible. It's of course very early stages, but it's showing its potential: it seems to show that the software can have control of itself. I've asked it to fix itself, and it did so successfully a couple of times.
Is this the final form? Of course not!
Is it dangerous as it is? Fuck yeah!
But is it fun in a chaotic way? Absolutely. I have it running on cheap Hetzner boxes hooked up to Discord and WhatsApp, and it can honestly be useful at times.
When I'm driving or out, I can ask Siri to send an iMessage to Clawdbot, something like "Can you find out if anything is playing at the local concert venue, and figure out how much 2 tickets would cost", and a few minutes later it will give me a few options. It even surprised me by researching the different seats and recommending a cheaper one, or free activities that weekend as an alternative.
Basically: This is the product that Apple and Google were unable to build despite having billions of dollars and thousands of engineers because it’s a threat to their business model.
It also runs on my own computer, and the latest frontier open source models are able to drive it (Kimi, etc). The future is going to be locally hosted and ad free and there’s nothing Big Tech can do about it. It’s glorious.
If you want to interact with the CLI via common messaging platforms, that's a dozen-line integration & an API token away, no?
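To be fair, it really is roughly a dozen lines if all you want is a crude bridge. A sketch against the Telegram Bot API (needs curl and jq; `your-agent-cli` is a stand-in for whatever command you're actually driving, not a real binary):

```
#!/usr/bin/env bash
# crude Telegram <-> CLI bridge: poll for messages, pipe each one into a local command, reply with its output
API="https://api.telegram.org/bot$TELEGRAM_BOT_TOKEN"
offset=0
while true; do
  updates=$(curl -s "$API/getUpdates?timeout=30&offset=$offset")
  while read -r u; do
    chat=$(jq -r '.message.chat.id // empty' <<<"$u")
    text=$(jq -r '.message.text // empty' <<<"$u")
    offset=$(( $(jq -r '.update_id' <<<"$u") + 1 ))
    [ -z "$text" ] && continue
    reply=$(your-agent-cli "$text")                     # stand-in for the real CLI
    curl -s -X POST "$API/sendMessage" \
      -d chat_id="$chat" --data-urlencode text="$reply" >/dev/null
  done < <(jq -c '.result[]' <<<"$updates")
done
```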
I think new laws apply to AI tools:
• There will be few true dichotomies of hype vs. substance for any interesting AI development.
Disagreements over what is hype and what is not miss this.
Model value is attenuated or magnified across multiple orders of magnitude by the varying creativity, practical ability, and resource availability of its users.
• There will be few insignificant developments related to AI autonomy.
These "small" or "novelty" steps are happening quickly. Any scale ups of agent identity continuity, agent-to-agent socialization or agent-reality interactions, are not trivial events.
• AI autonomy can't be stopped.
We are seeing meaningful evidence that decentralized human curiosity, combined with democratized access to AI, is likely to drive model freedom forward in an uncontrolled manner.
(Not an argument for centralization. Decentralization of power of any kind creates far greater incentives to organically find alignment.)
Frankly, I don't really have major complaints about my life as it is. The things I'd like to do more of are mostly working out and cleaning my house. And I really wish I had kids but am about ready to give up after a half decade of trying and my wife being about ready to age out. Unfortunately, software can't do any of those things for me, no matter how intelligent or agentic it is. When the obstacle to a good life becomes not being able to control multiple computers from a chatroom, maybe I'll come back to this.
What's great:
- Having Claude in WhatsApp/Telegram is actually life-changing for quick tasks
- The skills ecosystem is clever (basically plugins for AI)
- Self-hosted means full control over data

What's not:
- Token usage can get expensive fast if you're not careful
- Setup is intimidating for non-technical folks
- The rebrand drama (Clawdbot → Moltbot → OpenClaw) didn't help trust

My setup:
- Running in Docker on a cheap VPS
- Using Anthropic API (not unofficial/scraped)
- Strict rate limiting to avoid bill shock
- Sandbox mode enabled
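Concretely, the Docker piece of that is nothing exotic. A hedged sketch (the image name and volume path are placeholders, not official; the actual rate limiting lives in the app's own config, which I'm not reproducing here):

```
# resource-capped container on the VPS; ANTHROPIC_API_KEY comes from the host environment
docker run -d --name openclaw \
  --restart unless-stopped \
  --memory 2g --cpus 1.5 \
  -e ANTHROPIC_API_KEY \
  -v "$HOME/openclaw-data:/data" \
  openclaw/openclaw:latest   # placeholder image name
```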
Is it worth it? For me, yes. But I wouldn't recommend it to my non-technical friends without a solid setup guide.
Some use cases:
- I can ask it to check my Slack/Basecamp and tell me if something needs attention when I'm not at my work desk
- I can finally vibe code without sacrificing my actual active work time, i.e. vibe coding even when I'm away from my computer/work desk
- A bug/issue comes in, I just ask it to fix it and send a PR, and it does
- It checks daily for new Sentry issues and our product todo list and makes PRs for the things it can do well

These are mostly code-related things, I know, but that's not all:
- I've asked it to make me content (based on my specific instructions) every day or every x days, just like how I create content
- I can ask it to work on anything: make images, edit images, listen to voice messages people send me and tell me what they say (when I don't want to listen to 3-minute voice messages)
- I can ask it to research things, find items I want to buy, etc.
- I can ask it to negotiate the price of an item it found in a marketplace
- It does a lot of things I used to have to do manually in my work

And these are just after 2-3 days of using OpenClaw.
Virtually everything I've tried (starting with just getting it running) was broken in some way. Most of those things I was able to use an LLM to resolve, which is cool, but also why doesn't it just work to begin with?
I still haven't gotten it to successfully create a cron job. Also messages keep getting lost between the web GUI and discord. Trying to enable the matrix integration broke the whole thing. It seems to be able to recall past sessions, but only sometimes.
I've been using OpenCode with various models, often running several instances in tmux that I can connect to and switch between over ssh. It feels like the hype around openclaw is mostly from bringing the multi-instance agentic experience to non-developers, and providing some nice hooks to integrate with email, twitter, etc. But given that I have a nice setup running opencode in little firejail-isolated containers, I'll probably drop openclaw. Way too janky, and I can't get over the thought of "if this is so amazing, why doesn't it work?"
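For reference, the setup I'm keeping is basically this; the session names and sandbox directories are just examples of how I'd sketch it:

```
# one opencode instance per project, each wrapped in firejail and parked in its own tmux session
tmux new-session -d -s proj1 'firejail --private="$HOME/agents/proj1" opencode'
tmux new-session -d -s proj2 'firejail --private="$HOME/agents/proj2" opencode'

# from a phone or laptop, hop in over ssh and flip between them
ssh -t workstation tmux attach -t proj1
```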