paulund
#ai #devtools #architecture #claude

[Hero image: a cheerful cartoon character at a vintage typewriter surrounded by floating markdown pages and small robots reading from scrolls of plain text]

Everything Is Markdown Now

My AI development setup is almost entirely markdown files. CLAUDE.md for project context. Skills in .claude/skills/. Agent definitions in .claude/agents/. Specs in plans/. System prompts, review checklists, style guides: all markdown.

I didn't plan this. It happened gradually as I moved from using AI tools casually to integrating them properly into how I work. And the interesting thing about ending up here is what it means for portability.

Your AI configuration should be model-agnostic by default. If it isn't, you've built yourself a dependency you didn't need.

The Switching Cost Problem

This week the US Department of Defense's deal with Anthropic collapsed. Anthropic had held a $200M contract to prototype AI for national security work, but negotiations broke down when Anthropic refused Pentagon contract language that would have allowed its AI to be used "for all lawful purposes." The DoD blacklisted Anthropic and promptly signed a new deal with OpenAI. I have no insight into the internal debates. But the thing I keep thinking about is the switching cost.

If your AI workflows are built around model-specific APIs, proprietary formats, or platform-locked configuration, switching providers is painful. Not technically hard. Painful in the sense of rebuild, retest, redeploy for everything that assumes the old provider.

If your workflows are built on markdown (plain text files that describe how the AI should work), switching is mostly just pointing a different model at the same files.

The portability is in the format, not the model.

How Markdown Keeps AI Workflows Portable

CLAUDE.md tells Claude Code about the project: the tech stack, the conventions, the commands, the things to avoid. This file is specific to Claude Code by name, but the content is universal. If I started using a different tool tomorrow, I'd rename it and change almost nothing else. Cursor rules do the same thing for Cursor: near-identical content, slightly different filename. Both are plain text.
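A minimal sketch of what such a file might contain (the project details here are invented for illustration):

```markdown
# Project context

## Stack
- Laravel 11, PHP 8.3, MySQL
- Pest for tests, Pint for formatting

## Conventions
- Controllers stay thin; business logic lives in app/Actions
- Run `composer test` before committing

## Avoid
- Adding new dependencies without discussion
- Editing anything in bootstrap/cache
```

Nothing in the content is Claude-specific. Rename the file, and any assistant that reads project instructions can work from it.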

MCP configurations are JSON. JSON is also plain text. The servers they connect to are protocol-level, not model-level.
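A server entry in one of these configs is just a command and its arguments. Something like the following, where the server name and the directory path are illustrative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```

Any client that speaks the protocol can launch the same server from the same description; nothing in it names a model.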

Spec-driven development stores feature specs as markdown files in a plans/ directory. These get read by whatever AI tool is doing the implementation. I've had Claude, GPT-4o, and Gemini work from the same spec files without modification.
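A spec file needs nothing more exotic than headings and lists. A sketch of one (the feature here is hypothetical):

```markdown
# Feature: export articles as RSS

## Goal
Readers can subscribe to new articles via /feed.xml.

## Requirements
- Include the 20 most recent published articles
- Cache the generated feed for 10 minutes
- Validate against the RSS 2.0 spec

## Out of scope
- Per-tag feeds
```

Because it's plain markdown, the same file works as a prompt for one model, a review checklist for another, and documentation for a human.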

Skills and agents in Claude Code are markdown with YAML frontmatter. The structure is Claude-specific, but the underlying guidance (how to write for a particular audience, how to structure a code review, what makes a good PR description) is reusable anywhere.
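The shape is a few lines of frontmatter followed by ordinary markdown guidance. A rough sketch, with the field names following Claude Code's skill format and the content invented:

```markdown
---
name: technical-writing
description: How to write and review technical articles for this site
---

Write in first person. Short paragraphs. No marketing language.
Open with the problem, not the tool. Code samples must run as pasted.
```

Strip the frontmatter and the body is just a style guide any model can follow.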

Write the intent in plain text, let the model interpret it. The intent is yours. The interpretation is theirs.

Plain Text Has Always Won

This isn't a new observation. The Unix philosophy put it clearly decades ago: write programs that handle text, because text is universal. Configuration as text can be read, edited, diffed, versioned, and understood without special tools.

Every time software has moved away from plain text formats, there's been a reckoning. Binary formats. Proprietary databases for configuration. Formats only readable by one vendor's tools. The long-term direction is always back toward plain text, because plain text has the widest support and the longest lifespan.

We're watching this play out again in AI tooling. Teams that build their AI workflows in plain text can move between providers and iterate on their configurations with little friction. Teams that build around proprietary formats or API-specific structures will face much more.

Skills Are the Clearest Case

Skills (structured markdown files defining how an AI should handle a particular type of work) are where the portability argument is most concrete.

A skill file for technical writing defines what good technical writing looks like, how to structure articles, what tone to use, what to avoid. This guidance doesn't depend on the underlying model. Claude Sonnet can read it. So can GPT-4o. So can whatever comes out in 2027.

The same applies to code review guidelines, API design patterns, and testing conventions. If you've put the intelligence into the markdown, you own the intelligence. The model is just the runtime.

This is the real shift from prompt engineering to skills design. Prompts live in conversations. Skills live in files. Files are portable.

Diagrams Belong in Markdown Too

Planning work is part of this. Architecture diagrams, flow charts, sequence diagrams for complex features: these have traditionally lived in Notion pages, Lucidchart exports, or Figma files that drift out of sync with the actual code within weeks.

Mermaid fixes this. You write diagrams as plain text inside a fenced code block, and they render visually in GitHub, VS Code, and most modern markdown viewers. But the source is still just text.

```mermaid
flowchart TD
    A[Capture idea] --> B[Draft article]
    B --> C{Review}
    C -->|Pass| D[Merge to main]
    C -->|Needs work| B
    D --> E[Generate social posts]
```

That diagram describes the content pipeline for this site. It lives in a markdown file. It's in git. It diffs cleanly when the process changes.

The thing that makes this useful for AI workflows specifically is that the AI doesn't need to render it. The source text describes the structure precisely. When I share a sequence diagram with Claude, it reads the nodes and edges directly and understands how the system fits together, without needing a visual. The diagram serves two audiences at once: the human who wants to see the flow, and the AI that needs to understand how the pieces connect.

This matters for plans/ specs too. A feature spec with an embedded Mermaid diagram is clearer than one written in prose alone. The diagram captures relationships that sentences describe awkwardly. And because it's in the same file as the written spec, it travels with the context rather than sitting in a separate tool that no one remembers to update.

What This Means in Practice

Treat your configuration as code. Put everything in files, and don't keep system prompts locked away in UI fields or platform settings where they can't be versioned. All of it belongs in git alongside the rest of your project.

Use plain text where you can. Markdown handles most guidance and documentation. JSON or YAML handles structured configuration. Avoid platform-specific formats unless they're truly unavoidable. When you're locked into one vendor's format, you've created friction for your future self.

Keep the intelligence separate from the tool. The reasoning about how to structure a feature, how to write a test, how to review code: that belongs in your files. The model executes it. If you can describe how to do something in markdown, you can take that description to any model.

Your .claude/ directory, cursor rules, and MCP configs should all live in git alongside your code. When your AI workflow is captured in version-controlled files, you've built something that outlasts any single tool.

The models will keep changing. The markdown will not.