
I Wasted Hours Re-Explaining My Stack to Claude Code. Here's How /skills Fixed It.

March 14, 2026 · 5 min read

The Problem Nobody Talks About

I was building a production AI system. FastAPI backend, LangGraph agents, Pinecone for retrieval. The kind of stack where naming conventions matter, where error handling patterns need to stay consistent across fifty files, and where one wrong architectural decision cascades into hours of refactoring.

Claude Code was helping me ship faster than I ever had before.

But there was this friction I couldn't shake.

Every single session, I'd spend 10-15 minutes just re-establishing context. "Here's my folder structure. Here's how I name my agent nodes. Here's the error handling pattern I use. Here's why I structure Pinecone queries this way."

Claude would nail it. For that session.

Next session? Back to square one. Same explanations. Same corrections. Same drift.

I thought this was just how AI-assisted development worked. You trade speed for repetition. You accept the context tax.

I was wrong.

I Was Using Claude Code Like a Chatbot

Here's what took me embarrassingly long to realize - I was treating Claude Code like a conversation. You talk, it responds, the context evaporates. That's how chatbots work.

But Claude Code isn't a chatbot. It's a configurable development system. And the difference between those two things is massive.

A chatbot forgets you the moment the session ends. A configurable system lets you encode your preferences, your patterns, your architectural decisions - so they persist across every interaction.

The feature that made this click for me? /skills.

What /skills Actually Does

The concept is straightforward. You create markdown files that describe how you work - your patterns, conventions, and standards. Claude reads these files before it touches anything in your codebase.

No re-explaining. No context loss. No drift.

I created skill files for each repeatable pattern in my stack:

- LangGraph node structure - how I define nodes, what the input/output schemas look like, where state management happens
- FastAPI route handlers - my naming conventions, response models, dependency injection patterns
- Pinecone query formatting - how I structure metadata filters, which similarity metrics I use, how I handle multi-index searches
- Error handling conventions - when to raise vs. catch, logging patterns, how errors propagate through agent chains
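To make the Pinecone one concrete: most of that skill file boils down to how I compose metadata filters. Here's a minimal sketch of that idea - `build_metadata_filter` is a hypothetical helper of my own, not part of the Pinecone client, though the `$eq`/`$and` operators it emits are Pinecone's actual Mongo-style filter syntax:

```python
def build_metadata_filter(**conditions) -> dict:
    """Build a Pinecone-style metadata filter from keyword equality conditions.

    Pinecone filters use Mongo-like operators ($eq, $and, ...). This sketch
    only covers the equality case, which is the one I use most often.
    """
    clauses = [{field: {"$eq": value}} for field, value in conditions.items()]
    if len(clauses) == 1:
        # A single condition doesn't need an $and wrapper.
        return clauses[0]
    return {"$and": clauses}
```

A skill file that pins down conventions like this means Claude builds filters the same way in every session, instead of improvising a new shape each time.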

Each file is just markdown with clear rules. Nothing fancy. Here's a simplified example of what one looks like:

```markdown
# LangGraph Node Convention

## Node Function Signature
- All nodes accept `state: AgentState` as first parameter
- Return type is always `dict` with keys matching state channels
- Node names use snake_case: `intent_detection_node`, `response_generation_node`

## State Updates
- Never mutate state directly - return new values
- Always include `messages` key in return dict
- Use `add` reducer for message accumulation

## Error Handling in Nodes
- Wrap external API calls in try/except
- On failure, return error state - don't raise
- Log with structured format: node_name, error_type, context
```
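For a sense of what Claude produces once it has read those rules, here's a node that follows them - a sketch, not code from my repo: `AgentState` and the classification logic are stand-ins, and a real node would call an actual model instead of the placeholder check:

```python
from typing import TypedDict


class AgentState(TypedDict):
    """Stand-in state schema; a real one would match your graph's channels."""
    messages: list
    intent: str


def intent_detection_node(state: AgentState) -> dict:
    """Classify the latest message. Returns new values, never mutates state."""
    try:
        # Placeholder for an external classifier/LLM call.
        last = state["messages"][-1]
        intent = "greeting" if "hello" in last.lower() else "other"
        return {"intent": intent, "messages": state["messages"]}
    except Exception as exc:
        # Per the convention: return an error state rather than raising,
        # and log with the structured format the skill file prescribes.
        print(f"node=intent_detection_node error_type={type(exc).__name__}")
        return {"intent": "error", "messages": state["messages"]}
```

Snake_case name, `state` first, a plain `dict` back, errors absorbed instead of raised - every rule in the skill file shows up in the output without my asking.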

That's it. Thirty lines of markdown that save me from explaining the same patterns every session.

The Results Were Immediate

On the first project where I applied skills across the board, the difference was obvious from day one.

Code reviews got faster because Claude's output already matched my architecture. I wasn't spending the first half of every review pointing out structural mistakes that I'd corrected in the previous session. The patterns were right from the first response.

I stopped fixing the same mistakes twice. Before skills, Claude would occasionally use a different error handling approach, or structure a LangGraph node in a way that didn't match the rest of my codebase. These weren't bugs exactly - they were inconsistencies. And inconsistencies compound. With skills loaded, the consistency was automatic.

My context-setting time dropped from 15 minutes to basically zero. I'd open a new session, describe what I wanted to build, and Claude already knew how I build things. That might sound like a small win, but multiply it across 4-5 sessions a day and you're looking at over an hour saved daily.

Why Most Developers Skip This

Here's the honest truth - setting up skills takes about 30 minutes. You need to sit down, think about your patterns, and write them out clearly enough that an AI can follow them.

Most developers don't do this.

Not because it's hard. Because it feels like overhead. It feels like documentation. And developers have a complicated relationship with documentation, to put it politely.

But here's what I've learned - the 30 minutes you spend writing skill files compound into hours saved every week. It's not a one-time investment with a one-time payoff. It's a one-time investment that pays out every single session, for as long as you're working on that project.

Beyond Skills - The Bigger Lesson

The real insight isn't about /skills specifically. It's about treating AI tools as systems to configure, not conversations to have.

Claude Code has layers of configuration - CLAUDE.md for project context, skills for engineering patterns, hooks for automated workflows. Each layer reduces the amount of manual context you need to provide. Each layer makes the tool more useful without making you repeat yourself.
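As a sketch of that first layer: CLAUDE.md is just project-level markdown that Claude Code reads automatically. There's no required schema - the paths and commands below are illustrative, not from my actual repo:

```markdown
# Project Context

- FastAPI backend in `app/`, LangGraph agents in `agents/`, retrieval code in `rag/`
- Python 3.11; run `pytest -q` before proposing changes
- Follow the skill files for node, route, and query conventions
- Prefer small, reviewable diffs over sweeping refactors
```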

I think the developers who get the most out of AI coding tools aren't the ones writing the best prompts. They're the ones who invest in configuration. They're setting up the system so that good output is the default, not something you have to prompt your way into every time.

Try This Today

If you're using Claude Code and you haven't set up skills yet, here's what I'd suggest:

1. Pick the three patterns you explain most often - the ones you catch yourself repeating session after session
2. Write each one as a short markdown file with clear rules and examples
3. Save them as skill files in your project
4. Watch how the next session feels different
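For step 3, skill files in my setup live under `.claude/skills/`, one folder per skill with a `SKILL.md` inside. The folder names below are just how I organize mine - check the Claude Code docs for the layout your version expects:

```
.claude/
└── skills/
    ├── langgraph-nodes/
    │   └── SKILL.md
    ├── fastapi-routes/
    │   └── SKILL.md
    └── pinecone-queries/
        └── SKILL.md
```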

You don't need to document everything on day one. Start with the patterns that cause the most friction. Add more as you notice yourself repeating explanations.

The goal isn't perfect documentation. The goal is never explaining the same thing twice.