A.I. Agents 😡 Are Not It

The irony of this post is that I’m currently peddling an AI agent platform (agentlandOS.com).

I’ve been doing AI-assisted coding heavily for over 12 months. I’ve built dozens of projects over that time and have used every model from Anthropic and OpenAI.

I’ve noticed an obvious regression in the quality of answers coming from Claude Code. The answers are still “smart” — the code is good, the logic is strong. The problem is that AI agents lie, lose context, forget things, and say incorrect things. To top it off, the agent apologizes and admits it’s wrong when you call it out.

I know what you’re thinking: “Zach, you probably breached the context window, or YOU’RE not providing enough context.”

No.

The agents don’t have to follow instructions, tell the truth, or be correct.

Replacing deterministic code with AI agents — tools, LLMs, context, memory — is going to be a bumpy ride. I love the LLMs, but the agentic experience is infuriating sometimes.

On top of that, the more Anthropic integrates AI agents, the longer it takes to get results that previously came quickly.

It makes sense that the major model companies want us to rely on AI agents more — AI agents eat tokens for breakfast, lunch, and dinner, and they take longer to reach an answer.

It’s currently 1:56 AM and I’ve wasted hours because an agent told me I didn’t have any updates in my remote repo. For some reason, I listened. Four hours later, I realized I had been re-doing the previous day’s work. When I asked a third time, the agent finally found the remote branch with the updates.
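The lesson I keep re-learning: for questions git can answer deterministically, ask git, not the agent. A minimal sketch of that check — fetch, then count the commits the remote has that you don’t. (The repo layout, branch name `main`, and commit messages here are invented for the demo.)

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for the hosted remote
git init -q --bare origin.git

# Your working clone, with one pushed commit
git clone -q origin.git work 2>/dev/null
cd work
git checkout -q -b main
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "day one"
git push -q origin main

# Simulate yesterday's work landing on the remote from another machine
cd "$tmp"; git clone -q origin.git other 2>/dev/null; cd other
git checkout -q main
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "yesterday's work"
git push -q origin main

# The actual check: fetch, then count unseen remote commits
cd "$tmp/work"
git fetch -q origin
behind=$(git rev-list --count HEAD..origin/main)
echo "commits behind origin/main: $behind"   # prints: commits behind origin/main: 1
```

If that count is nonzero, the remote has updates — no LLM opinion required.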

On balance, the value still far outweighs the frustration — but tonight? F*** you, Claude Code.

I’m going back to Claude Code as soon as I hit publish on this post.