-
source https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/
-
Spending your free time building things is super enjoyable …
-
… Anthropic and OpenAI handed out some freebies to hook people to their addictive slot machines.
-
Everything is broken
- ❗️ contradict point 7 of the Toyota Production System principle:
7. Use only reliable, thoroughly tested technology that serves your people and processes.
- 98% uptime becoming the norm instead of the exception, including for big services.
- … user interfaces have the weirdest fucking bugs that you’d think a QA team would catch.
- … seem to be accelerating.
- … user interfaces have the weirdest fucking bugs that you’d think a QA team would catch.
- … how much code is now being written by AI at Microsoft. … Windows is going down the shitter. Microsoft itself seems to agree …
- … claiming 100% of their product’s code is now written by AI consistently put out the worst garbage you can imagine.
- … software companies small and large, saying they have agentically coded themselves into a corner.
- ❗️ contradict point 7 of the Toyota Production System principle:
-
How we should not work with agents and why
- … next generation of LLMs will fix it. Pinky promise!
- … among my circle of peers I have yet to find evidence that this kind of shit works.
- … clankers aren’t humans.
- A human makes the same error a few times. Eventually they learn not to make it again.
- An agent has no such learning ability.
- It will continue making the same errors over and over again.
- … requires you to actually observe the agent making that error.
- … only so many mistakes the human can introduce in a codebase per day.
- … orchestrated army of agents, … mistakes suddenly compound at a rate that’s unsustainable.
- … realize … e2e tests you had your clankers write are equally untrustworthy.
- … only thing that’s still a reliable … is manually testing the product.
- Congrats, you fucked yourself (and your company).
- … only thing that’s still a reliable … is manually testing the product.
- … next generation of LLMs will fix it. Pinky promise!
-
Merchants of learned complexity
- … agents. … are merchants of complexity.
-
… seen many bad architectural decisions in their training data and throughout their RL training.
- Guess what the result is?
-
- … agents. … are merchants of complexity.
-
Agentic search has low recall
- … agents can also no longer deal with it. … codebase and complexity are too big, and they only ever have a local view of the mess.
-
How we should work with agents (for now, I think)
- … scoped so the agent doesn’t need to understand the full system.
- … loop can be closed, that is, the agent has a way to evaluate its own work.
- … output isn’t mission critical
- … rubber duck to bounce ideas against …
- … compressed wisdom of the internet and synthetic training
- … provided that you as the human are the final quality gate.
- Karpathy’s auto-research … will happily ignore any metrics not captured by the evaluation function, …
- code quality,
- complexity,
- correctness,
- … let the agent do …
- boring stuff,
- stuff that won’t teach you anything new,
- try out different things you’d otherwise not have time for.
- … evaluate what it came up with, take the ideas that are actually reasonable and correct, …
- … slowing the fuck down is the way to go.
- Give yourself time to think about …
- … what you’re actually building and why.
- Give yourself an opportunity to say, fuck no, we don’t need this.
- Set yourself limits on how much code you let the clanker generate per day,
- in line with your ability to actually review the code.
- Give yourself time to think about …
- … architecture, API, and so on, write it by hand.
- Be in the code.
- … write the thing or seeing it being built up step by step introduces friction …
- … better understand
- what you want to build and
- how the system “feels”.
- … where your experience and taste come in,
- something the current SOTA models simply cannot yet replace.
- Be in the code.
- … end result will be systems and codebases that continue to be maintainable,
- at least as maintainable as our old systems before agents.
- … your product now sparks joy instead of slop.
- … build fewer features, but the right ones.
- Learning to say no is a feature in itself.
- Ten principles for good design by Dieter Rams
Good design is as little design as possible
- Ten principles for good design by Dieter Rams
- … still have an idea what the fuck is going on,
- and that you have agency.
-
All of this requires discipline and agency.
- Jean-Denis Caron discovered Thoughts-on-slowing-the-fuck-down in Sandwich-–-Un-édito-sur-l’AI-sans-aucun-AI by Francois Lanthier Nadeau
- Links with Cognitive-Helmets-for-the-AI-Bicycle:
- Slow down …
- Be the generator … minds like being creative and generative.
- Models don’t have taste … quality is where humans are relevant.
- Agents don’t learn … they can trick us into confusing effort with learning.