For instance, this web development skill: https://raw.githubusercontent.com/vercel-labs/web-interface-...
That’s too much for a model to carry in its context while it’s trying to do actual work.
Far better is to give that skill.md to a model and have it produce several hundred lines of code with a shebang at the top. Now you haven’t got a skill, you’ve got a script. And it’s a script the model can run any time to check its work, without knowing what the script does, how, or why - it just sees the errors. Now all your principles of web dev can be checked across your codebase in a few hundred milliseconds while burning zero tokens.
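To make that concrete, here's a toy sketch of the shape such a script tends to take: a shebang, a pile of checks, and plain error output the model can act on. The rules below are invented stand-ins, not anything from the linked skill.

```python
#!/usr/bin/env python3
# check_webdev.py - toy codification of a few hypothetical "web dev principles".
# A real skill-derived script would encode whatever the skill.md actually says.
import pathlib
import re
import sys

RULES = [
    # (human-readable rule, regex that flags a violation) - examples only
    ("img tags must have an alt attribute", re.compile(r"<img(?![^>]*\balt=)[^>]*>")),
    ("no inline style attributes", re.compile(r'\bstyle="')),
    ("no document.write", re.compile(r"document\.write\(")),
]

def main() -> int:
    errors = []
    for path in pathlib.Path(".").rglob("*"):
        if not path.is_file() or path.suffix not in {".html", ".jsx", ".tsx"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for rule, pattern in RULES:
                if pattern.search(line):
                    errors.append(f"{path}:{lineno}: {rule}")
    print("\n".join(errors) or "ok")
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(main())
```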
TDD is codification too: codifying in executable form the precise way you want your logic to work. Enforce a 10ms timeout on every unit test and as a side effect your model won’t be able to introduce I/O or anything else that prevents parallel, randomized execution of your test suite. It’s awesome to be able to run ALL the tests hundreds of times per day.
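One way to codify that budget inside pytest itself is sketched below: it just measures each test and fails the slow ones after the fact (a plugin like pytest-timeout can additionally kill tests that hang outright).

```python
# conftest.py - rough sketch of a per-test time budget.
import time

import pytest

BUDGET_SECONDS = 0.010  # 10ms per unit test

@pytest.fixture(autouse=True)
def enforce_time_budget():
    start = time.perf_counter()
    yield  # the test runs here
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET_SECONDS:
        # failing in teardown reports the test as broken, which is the point:
        # anything doing I/O or sleeping can't stay in the unit suite
        pytest.fail(
            f"test took {elapsed * 1000:.1f}ms, budget is "
            f"{BUDGET_SECONDS * 1000:.0f}ms"
        )
```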
Constantly checking your UI matches your design system? Have the model write a script that looks at your frontend codebase and refuses to let the model commit anything that doesn’t match the design system.
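Here's a minimal sketch of that kind of commit gate as a pre-commit hook. The rule (no raw hex colors outside the design tokens) is an invented example; any convention you can express as a check fits the same shape.

```python
#!/usr/bin/env python3
# pre-commit hook sketch: block commits whose staged frontend files use raw
# hex colors instead of (hypothetical) design tokens. Exit 1 blocks the commit.
import re
import subprocess
import sys

HEX_COLOR = re.compile(r"#[0-9a-fA-F]{3,8}\b")

def staged_frontend_files():
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith((".tsx", ".css", ".scss"))]

def main() -> int:
    errors = []
    for path in staged_frontend_files():
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for lineno, line in enumerate(fh, 1):
                if HEX_COLOR.search(line):
                    errors.append(f"{path}:{lineno}: raw hex color; use a design token")
    if errors:
        print("\n".join(errors), file=sys.stderr)
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(main())
```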
Codification is an insanely powerful thing to build into your mindset.
I mainly work in Python, and I've been ensuring that all of my projects have a test suite which runs cleanly with "uv run pytest" - using a dev dependency group to ensure the right dependencies are installed.
This means I can run Claude Code against any of my repos and tell it "run 'uv run pytest', then implement ..." - which is a shortcut for having it use TDD and write tests for the code it's building, which is essential for having coding agents produce working code that they've tested before they commit.
Once this is working well I can drop ideas directly into the Claude app on my iPhone and get 80% of the implementation of the idea done by the time I get back to a laptop to finish it off.
I wrote a bit about "uv run pytest" and dependency groups here: https://til.simonwillison.net/uv/dependency-groups
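For anyone who hasn't seen it, the setup boils down to a few lines of pyproject.toml (PEP 735 dependency groups; as far as I can tell uv treats the dev group as a default, so `uv run pytest` installs it automatically):

```toml
# pyproject.toml (sketch) - project name is a placeholder
[project]
name = "my-project"
version = "0.1.0"

[dependency-groups]
dev = ["pytest"]
```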
When I don't know what I want to do, I read existing code, think about it, and figure it out. Sometimes I'll sketch out ideas by writing code, then when I have something I like I'll get Claude to take my sketch as an example and carry it forward.
The big mistake I see people make is not knowing when to quit. Even with Opus 4.5 it still does weird things, and I've seen people end up arguing with Claude or trying to prompt engineer their way out of things when it would have been maybe 30 seconds of work to fix things manually. It's like people at shopping malls who spend 15 minutes driving in the parking lot to find a spot close to the door when they could have parked in the first spot they saw and walked to the door in less than a minute.
And as always, every line of code was written by me even if it wasn't written by me. I'm responsible for it, so I review all of it. If I wouldn't have written it on my own without AI assistance I don't commit it.
I’ll sometimes have it help read really long error messages as well.
I got it to help me fix a reported security vulnerability, but it was a long road and I had to constantly work to keep it from going off the rails and adding insane amounts of complexity and extra code. It likely would have been faster for me to read up on the specific vulnerability, take a walk, and come back to my desk to write something up.
AI coding tools are burning massive token budgets on boilerplate: thousands of tokens just to render simple interfaces.
Consider the token cost of "Hello World":
- Tkinter: `import tkinter as tk; root = tk.Tk(); tk.Button(root, text="Hello").pack(); root.mainloop()`
- React: 500MB of node_modules and dependencies
Right now, context windows are finite and tokens are costly. What do you think?
My prediction is that tooling that manages token and context efficiency will become essential.
For side projects? It's been a 10x+ multiplier.
The first saves me days of work per month by sparing me endless pages of notes trying to figure out why things work a certain way in legacy work codebases. The second spares me from having to dig too much through partially outdated or missing documentation, or from melting my brain understanding the architecture of every different dependency.
So I just put my projects' major deps in a `_vendor` directory that contains the dependencies' source code, and if I have doubts the LLMs dig into that code and its tests to shed light.
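A rough sketch of how that `_vendor` directory can be populated (package names are placeholders, and this only copies whatever source the installed package ships, which may or may not include its tests):

```python
# vendor_deps.py - copy installed dependencies' source into _vendor/ so the
# LLM can read it alongside the project. Package names below are examples.
import importlib.util
import pathlib
import shutil

DEPS = ["requests", "sqlalchemy"]  # placeholders for the project's major deps

vendor = pathlib.Path("_vendor")
vendor.mkdir(exist_ok=True)

for name in DEPS:
    spec = importlib.util.find_spec(name)
    if spec is None or not spec.submodule_search_locations:
        print(f"skipping {name}: not an installed package")
        continue
    src = pathlib.Path(next(iter(spec.submodule_search_locations)))
    shutil.copytree(src, vendor / name, dirs_exist_ok=True)
    print(f"vendored {name} -> {vendor / name}")
```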
What I haven't seen anybody accomplish yet is producing quality software by having AI write it. I'm not saying AI can't help here, but the bottleneck is still reviewing, and as soon as you get sloppy, codebase quality goes south and product quality follows soon after.
Where I've automated more aggressively is everywhere around the code. My main challenge was running experiments repeatedly across different systems and keeping track of the various models I ran and their metrics, etc. I started using Skyportal.ai as an ops-side agent. For me, it's mostly: take the training code I just iterated on, automatically install and configure the system with the right ML stack, run experiments via prompt, and see my model metrics from there.
I wrote beads-skills for Claude that I'll release soon to enforce this process.
2026 will be the year of agent orchestration for those of us who are frustrated having 10 different agents to check on constantly.
gastown is cool but too opinionated.
I'm excited about this promising new project: https://github.com/jzila/canopy
We're writing an internal tool to help with planning, which most people don't think is a problem but I think is a serious one. Most plans are too long, repeat themselves, or both.
/gh-issue [issue number]
/gh-pr [pr number]
Edit: replaced links to a private GitHub repo with pastebin links.
I have some large, complex, strict compliance projects where the AI is a pair programmer but I make most of the decisions, and I have smaller projects that, despite great impact on the bottom line, can be done entirely unsupervised because the risk of mistakes is low and they are easy to correct after the fact, since the AI catches them as well.
The pattern that works best for me: I describe what I want at a high level, let it scaffold, then I read through and course-correct. The reading step is crucial. Blindly accepting generates technical debt faster than you can imagine.
Where it really shines is the tedious stuff - writing tests for edge cases, refactoring patterns across multiple files, generating boilerplate that follows existing conventions. Things that would take me 45 minutes of context-switching I can knock out in 5.
The automation piece I've landed on: I let it handle file operations and running commands, but I stay in the loop on architecture decisions. The moment you start rubber-stamping those, you end up with a codebase you don't understand.
Instead of having it write the code, I try to use it like a pair reviewer, critiquing as I go.
I ask it questions like "is it safe to pass null here", "can this function panic?", etc.
Or I'll ask it for opinions when I second guess my design choices. Sometimes I just want an authoritative answer to tell me my instincts are right.
So it becomes more like an extra smart IDE.
Actually writing code shouldn't be that mechanical. If it is, that may signify a lack of good abstractions. And some mechanical code is actually quite satisfying to write anyway.
The more time you spend making guidelines and guardrails, the more success the LLM has at acing your prompt. So I created a wizard to get it right from the beginning, simplifying things and "guiding" you into thinking through what you want to achieve.
It doesn't write the code for me, but I talk to it like it is a personal technical consultant on this product and it has been very helpful.
But my side projects, which I kind of abandoned a long time ago, are getting a second life, and it is really fun to just direct the agent instead of slowly re-acquiring all of the knowledge and wasting time typing all the stuff into the computer.
AI excels at finding the "seams," those spots where a feature connects to the underlying tech stack, and figuring out how the feature is really implemented. You might think just asking Claude or Cursor to grab a feature from a repo works, but in practice they often miss pieces because key code can be scattered in unexpected places. Our skills fix that by giving structured, complete guides so the AI ports it accurately. For example, if an e-commerce platform has payments built in and you need payments in your software, you can reference the exact implementation and adapt it reliably.
The generators typically generate about 90% of the code I need to write a biz app, leaving the most important code to me: the biz logic.
No AI. Just code that takes a (simple) declarative spec file and generates TypeScript/C++/Java/... code.
I am also using AI daily. However, the code generators are still generating more productivity for me than AI ever has.
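To give a flavour of the approach, here's a toy version of such a generator. The spec format below is invented purely for illustration; the real generators are more involved.

```python
# codegen.py - toy spec-driven generator: read a tiny declarative spec and
# emit a TypeScript interface. The spec format is made up for this example.
import json
import sys

TS_TYPES = {"string": "string", "int": "number", "bool": "boolean"}

def emit_typescript(spec: dict) -> str:
    # e.g. {"entity": "Invoice", "fields": {"id": "int", "customer": "string"}}
    lines = [f"export interface {spec['entity']} {{"]
    for field, kind in spec["fields"].items():
        lines.append(f"  {field}: {TS_TYPES[kind]};")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as fh:
        print(emit_typescript(json.load(fh)))
```

The C++/Java emitters would just be siblings of emit_typescript over the same spec.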
What would you make if you could make anything? Does it all just lose meaning?