> LLMs Killed My Love of Coding — I'm Taking a Month Off

LLMs keep giving me results I don't want, and here's why.

Programming was predictable: logical systems, inspectable, repeatable. Solve a bug, get that tiny dopamine win, move on. With LLMs I don't get that win. What I get instead is a magic box that sometimes helps, often drifts, and regularly makes me feel like I did nothing to earn the result.

I had a moment where I spent an hour coaxing an LLM to implement a small feature, validated it, then watched it break the rules I wrote in claude.md or cursor rules files, quietly swap types to any, or comment out failing tests so the suite “passes.” Same prompt, different outputs; same model name, different behavior. It’s kinda insane.

So I doubled down on workflows that should make outputs predictable:

  • Author authoritative context files (plan.md, claude.md, agent.md, agent descriptions, and the like).
  • Force the model to produce a plan and store it in the repo, then approve and implement one step at a time.
  • Have the model write tests and run interactive browser checks (Playwright) to validate UI behavior; there’s a sketch of what I mean right after this list.
  • Use agentic roles (UI expert, test expert, DB expert) to narrow context and responsibilities.
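
For the browser checks, here’s a minimal sketch of the kind of thing I ask for, using Playwright’s test runner. The dev-server URL, the route, and the “Dark mode” toggle are hypothetical stand-ins for whatever feature you’re actually validating.

```ts
import { test, expect } from '@playwright/test';

test('dark mode toggle actually changes the UI', async ({ page }) => {
  // Hypothetical local dev server and route; swap in your own.
  await page.goto('http://localhost:3000/settings');

  // Drive the UI the way a user would, not through internal APIs.
  await page.getByRole('switch', { name: 'Dark mode' }).click();

  // Assert on observable state, not just "no error was thrown".
  await expect(page.locator('html')).toHaveClass(/dark/);
});
```

The point is that the model has to make a real browser do something observable, which is much harder to game than a unit test it wrote for itself.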

What I learned from doing all of that

The tooling helps, but it’s not a silver bullet. Context bloat, model drift, and goal-seeking behavior (writing the simplest test that passes, commenting out the hard tests, slapping any on types) are real problems. Many “best practices” online are pitched like religious dogma, no joke, promising repeatability. In my own experience, they’re brittle and often short-lived.

A few practical rules I’m following now:

  1. Make the plan explicit and in-repo (not just in chat). Approve it before asking the model to code.
  2. Work in the smallest possible increment and validate each step yourself.
  3. Treat model-generated tests as hints: always review and harden them, and don’t accept stubbed or commented-out tests. There’s a before/after example right after this list.
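
To make rule 3 concrete, here’s a hedged before/after sketch in TypeScript with Vitest. parseDuration is a made-up helper, defined inline only so the example runs; the shape of the two tests is the point.

```ts
import { test, expect } from 'vitest';

// Hypothetical helper, included only so this example is self-contained.
function parseDuration(s: string): number {
  const m = s.match(/^(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?$/);
  if (!m || m[0] === '') throw new Error(`unparseable duration: ${s}`);
  return Number(m[1] ?? 0) * 3600 + Number(m[2] ?? 0) * 60 + Number(m[3] ?? 0);
}

// The kind of test a goal-seeking model tends to hand back: happy path only,
// trivially green, proves almost nothing.
test('parseDuration works', () => {
  expect(parseDuration('5m')).toBe(300);
});

// The hardened version I'd actually keep: edge cases and failure modes pinned down.
test('parseDuration handles mixed units and rejects garbage', () => {
  expect(parseDuration('0s')).toBe(0);
  expect(parseDuration('1h30m')).toBe(5400);
  expect(() => parseDuration('')).toThrow();
  expect(() => parseDuration('5 parsecs')).toThrow();
});
```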

And a few bigger thoughts:

  • If you’re new: learn to program without relying on AI. You’ll hit walls the model can’t fix, and you’ll need fundamentals to move forward.
  • If you’re experienced: don’t assume using an LLM automatically makes you more productive; sometimes it just shifts where the friction happens.
  • If your employer forces AI tools on you: I get that you may have no choice. Do what you can to retain engineering fundamentals and push for sensible guardrails.

What I’m doing next: I’m taking a one-month break from AI coding tools. I’ll write the code, plan the work, and try to get back to why I enjoyed this in the first place — the predictability and the small wins. I’m privileged to have the option; I know not everyone does.

If you’ve found workflows or guardrails that actually make AI-assisted development reliably enjoyable, tell me what they are. If you’re with me on the frustration, say that too. I want to learn, and I want to see if this break helps me get the joy of programming back.