
Agent Skills

Mar 29, 2026 · 4 min read

I built a set of agent skills that encode my engineering workflow into a format any AI coding agent can use. They cover exploration, planning, implementation, review, and shipping without relying on giant prompts.

The biggest problem I had with AI coding agents was not capability. It was consistency. An agent could write solid code in one session and drift into noise in the next. Generic prompts did not hold up. The agent would second-guess itself, produce speculative findings, or lose track of what mattered.

I needed something more structured. Not a framework or a wrapper, just clear instructions that encode the workflow I actually follow: explore the problem, plan the solution, implement, review, open the pull request, and ship.

From idea to production

The skills follow the same workflow I use every day.

It starts with explore, asking questions one at a time until the task is clear. If the code can answer a question, the agent reads the code instead of asking. Then plan, designing through dialogue, not in isolation. The plan emerges from conversation, grounded in what the code actually looks like today.

For implementation, I also have a TDD skill for the work that actually benefits from it: one test, one implementation, one refactor pass. Vertical slices, not horizontal. I do not use that for everything, and pretending otherwise would be silly.

Once the work is scoped, I file an issue with duplicate checking and approval before writing any code. After implementation, a full audit suite runs against the branch diff. The pull request only gets created after verification passes and review findings are clean.

That full cycle, from explore to ship, is what I run daily while building Acolyte. Each skill works on its own, but together they cover the whole path from idea to production.

Evidence over noise

The review step composes five focused audits into one pass: style, architecture, documentation, security, and tests. Each audit checks a different concern, from naming and pattern consistency to boundary integrity, doc drift, concrete attack paths, and coverage gaps, but they all share the same shape.

Every audit requires concrete code references or plausible failure scenarios before reporting a finding. No speculative concerns, no generic style dogma, no fear-driven security recommendations. That evidence threshold is the part that matters. Without it, agents produce noise. With it, they produce findings I trust enough to act on while I am already working on the next issue.
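As a sketch of what that threshold means in practice (the finding format, file path, and code here are hypothetical, not actual skill output):

```markdown
## Finding: unvalidated path join (security audit)

- Evidence: `src/files.ts:42` joins `req.query.path` into the workspace
  root without resolving it first.
- Failure scenario: a request with `path=../../etc/passwd` escapes the
  workspace boundary.
- Suggested fix: resolve the path and reject anything outside the root.

Rejected as noise: "consider adding rate limiting" — no concrete code
reference or failure scenario, so it never reaches the report.
```

The first finding clears the bar because it names a file, a mechanism, and a concrete way things go wrong; the second is exactly the kind of speculative concern the threshold filters out.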

Explicit by design

The skills are opinionated and explicit. Each one defines a scope, an evidence threshold, a workflow, an output format, and a list of anti-patterns.

The anti-patterns are the most important part. They tell the agent what not to do: no speculative abstractions, no broad rewrites instead of minimal fixes, no fear-driven security recommendations without concrete attack paths, no disappearing to build a plan in isolation and returning with a document for approval. These are failure modes I hit repeatedly before encoding them. Naming them explicitly is what stopped them from coming back.
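A skill along these lines might look like the following. This is an illustrative sketch, not one of the published skills; the frontmatter fields follow the open Agent Skills convention of a name plus description, and everything below them is invented for the example:

```markdown
---
name: review
description: Run focused audits against the current branch diff.
---

## Scope
Only the files changed on this branch. Do not audit untouched code.

## Evidence threshold
Every finding needs a concrete code reference or a plausible failure
scenario. Otherwise, drop it.

## Workflow
1. Collect the branch diff.
2. Run each audit: style, architecture, documentation, security, tests.
3. Merge findings, deduplicate, and report.

## Output format
One finding per heading: evidence, failure scenario, suggested fix.

## Anti-patterns
- No speculative abstractions or broad rewrites instead of minimal fixes.
- No security recommendations without a concrete attack path.
- No disappearing to plan in isolation; design through dialogue.
```

Because the whole thing is one markdown file, the agent reads it like any other instruction; the sections just make scope, threshold, and failure modes impossible to skip.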

The agent does not have to invent process or guess what I expect. It follows the skill, and I get consistent results across sessions.

That consistency is what generic prompts could not give me. The same review that was useful on Monday would drift into speculative cleanup wishlists by Wednesday. With skills, the bar stays the same because the instructions stay the same.

Built from practice

The workflow behind these skills is not something I invented for AI agents. It comes from 15+ years of building production software. The skills encode that experience into a format agents can follow. They took shape over hundreds of sessions building Acolyte, where what did not work got cut along the way.

They follow the open Agent Skills format. One markdown file per skill, no runtime, no dependencies, no lock-in to a specific agent. I can use the same skill with different agents instead of rebuilding the same prompt stack for each one.

I published the project-agnostic ones on skills.sh for anyone to use. The source is at cniska/skills.
