Builders Log · Mar 2, 2026 · Adura Fayode

I Turned 21,000 Lines of Code Into 43 Files

Anthropic's financial analysis plugin (the one that builds discounted cash flow models) has fewer files than my project's __pycache__ directory. No server. No API. No frontend. Just SKILL.md files and an Excel template.

I had just spent a month building 21,000 lines of code for my own AI-powered financial tool: a business plan and model generator for ecommerce founders. The thing that actually shipped was 43 files totaling 588 KB. Smaller than a single hero image on a landing page.


The Belief I Held

I was building a SaaS product.

The motivation was simpler than that sounds. I wanted hands-on experience with Claude's Agent SDK. I needed a real project, not a tutorial. I have a background in financial analysis, so financial modeling felt like the natural domain: build an AI-powered financial model generator for pre-launch ecommerce founders. A founder answers 22 questions about their business: product category, order volume expectations, channel strategy, margin structure. The system produces a populated six-year Excel financial model and a 7,000-word business plan with sourced market research.

The stack I built to learn the SDK: React for the intake form. FastAPI for the backend with SQLite job tracking. Claude Agent SDK for parallel research subagents that pull benchmarks from curated databases and live web sources. openpyxl for Excel generation with formula recalculation. Pandoc for converting the business plan into a styled Word document. Background task queues. Progress polling endpoints. File download routes. Environment variables. Docker configuration.

Eleven development phases. About a month. I never launched it as a SaaS; that was not the point. But I built full SaaS-grade infrastructure, because the assumption felt obvious: to deliver this kind of value, you need software. A server. A database. An application.

That assumption was wrong.

The "Wait a Minute" Moment

I kept looking for the catch. Where is the backend? There is no backend. Where does the data persist? It does not persist. The user gets their deliverables and the session ends. Where is the orchestration logic that sequences the pipeline? It is in the SKILL.md files. Written in English. Claude reads them and executes the workflow.

That is when I looked at my own codebase and asked a question I should have asked a month earlier: where does the value actually live?

Your Product Might Be Knowledge, Not Code

Most software products encode three things:

  1. Domain knowledge — the mapping tables, heuristics, benchmark data, and decision trees that represent expertise in a specific area
  2. Orchestration logic — the sequencing: do step A, then step B, then step C based on the result of A
  3. Infrastructure — servers, databases, authentication, billing, monitoring, deployment

With large language models as the runtime, the second layer (orchestration) is commoditized. Claude already knows how to sequence tasks, handle conditional logic, manage errors, and format outputs. You do not need to write that code yourself. You describe it in plain English and the model executes it.
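For a sense of what "described in plain English" means in practice, a hypothetical fragment of such an instruction file (invented for illustration, not taken from the actual plugin) might read:

```markdown
## Workflow

1. Read the founder's questionnaire answers.
2. For each financial driver, look it up in the curated benchmark files;
   if it is missing, research it on the web; if that also fails, use the
   regional default and mark the value as low confidence.
3. Write the resolved values into the Excel template, then draft the plan.
```

No scheduler, no state machine, no retry loop: the sequencing lives in the prose, and the model executes it.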

The third layer (infrastructure) disappears entirely. No server to provision. No database to manage. No deployment pipeline. The user's Claude subscription handles compute, and the plugin system handles distribution.

What remains is the first layer: domain knowledge. And that is the part I spent the most time on, without realizing it was the product.

My codebase had 13 curated benchmark JSON files covering 11 product categories across 5 regions: conversion rates, customer acquisition costs, average order values, gross margins, all sourced and attributed. It had a 49-driver catalog defining the type, bounds, and semantic meaning of every financial model input. It had mapping tables that convert "1-5k orders per year" into a specific target of 3,000 orders. It had an equity suggestion algorithm, a conversion rate hierarchy, fixed overhead allocation rules, and channel mix normalization logic.
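As a sketch of what one of those mapping tables looks like in miniature (the band labels and target numbers here are illustrative, not the actual catalog):

```python
# Hypothetical mapping table: convert a questionnaire answer band into a
# concrete modeling target. Bands and targets are invented for illustration.
ORDER_VOLUME_TARGETS = {
    "under 1k orders per year": 500,
    "1-5k orders per year": 3_000,
    "5-20k orders per year": 10_000,
    "20k+ orders per year": 35_000,
}

def resolve_order_target(answer: str, default: int = 1_000) -> int:
    """Map a founder's answer to a point estimate, falling back to a default."""
    return ORDER_VOLUME_TARGETS.get(answer.strip().lower(), default)
```

The point is that a table like this is pure domain knowledge: it survives translation into a reference file unchanged, because there is nothing infrastructural about it.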

That knowledge (curated over days of research, cross-referenced against industry reports, validated through model iteration, and grounded in formal finance training) was the actual product. The 21,000 lines of code were a delivery mechanism I built because I did not realize another delivery mechanism already existed.

How Claude Code Translated Itself

Here is the meta part. I used Claude Code (the same tool that runs plugins) to convert the SaaS into a plugin. It read my Python source files and produced the SKILL.md instructions that replaced them.

The process was not trivial, but it was surprisingly systematic. Each Python module got distilled into a SKILL.md file that captured the domain knowledge and decision logic while discarding all the infrastructure glue. Claude Code read the source, understood what each module actually did versus what it existed to support, and wrote instructions that preserved the former.

A concrete example: my Excel writer module was 1,476 lines of Python. It opened the .xlsx file as a ZIP archive, parsed the XML inside, wrote values to specific cells using regex replacements on XML tags, cleared cached formula values, deleted the calculation chain file, and used xlcalculator to recalculate formulas. The resulting SKILL.md is 472 lines. It describes the same process in plain English, with every cell address preserved, and Claude follows the instructions to produce an identical Excel file. The domain knowledge (which cells to write, what values, in what format) is all there. The implementation machinery (ZIP handling, XML parsing, regex patterns) is gone, because Claude already knows how to do those things.
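The core of that regex-on-XML technique can be sketched in a few lines. This is an illustrative reconstruction, not the author's actual module, and real worksheet XML involves more (shared strings, styles, the calc chain); the toy below only shows the cell-rewrite step:

```python
import re

def write_cell(sheet_xml: str, ref: str, value) -> str:
    """Replace the contents of cell `ref` in raw sheet XML with a numeric value,
    dropping any type attribute and cached value so the cell is rewritten clean."""
    pattern = re.compile(r'<c r="%s"[^>]*>.*?</c>' % re.escape(ref), re.S)
    replacement = f'<c r="{ref}"><v>{value}</v></c>'
    return pattern.sub(replacement, sheet_xml, count=1)

before = '<row r="2"><c r="B2" t="n"><v>0</v></c></row>'
after = write_cell(before, "B2", 3000)
# after == '<row r="2"><c r="B2"><v>3000</v></c></row>'
```

Everything above this level (unzipping the .xlsx, finding the right sheet part, recalculating formulas) is exactly the machinery that the SKILL.md version no longer needs to spell out.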

Another: my research agent was 6,405 lines across fourteen modules with parallel subagents, a three-tier resolution hierarchy (curated database, then live web research, then fallback defaults), source attribution with confidence scoring, and a 49-driver catalog. The plugin equivalent is a 387-line SKILL.md plus the 13 benchmark JSON files and the driver catalog. The resolution logic is described, not coded. The benchmark data, which took weeks to research and curate, ships as-is.
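The resolution hierarchy itself fits in a handful of lines once the orchestration glue is gone. A minimal sketch, with hypothetical driver names and values (the real catalog and benchmark data are far larger):

```python
# Three-tier resolution: curated database -> live lookup -> regional default.
# Data values and the live_lookup callable are stand-ins for illustration.
from typing import Callable, Optional, Tuple

CURATED = {("beauty", "conversion_rate"): (0.021, "curated-db")}
DEFAULTS = {"conversion_rate": (0.015, "regional-default")}

def resolve_driver(
    category: str,
    driver: str,
    live_lookup: Optional[Callable[[str, str], Optional[float]]] = None,
) -> Tuple[float, str]:
    """Return (value, source), preferring curated data, then live research."""
    if (category, driver) in CURATED:
        return CURATED[(category, driver)]
    if live_lookup is not None:
        found = live_lookup(category, driver)
        if found is not None:
            return (found, "live-research")
    return DEFAULTS[driver]
```

In the plugin, this priority order is stated in prose in the SKILL.md; the model applies it without a line of Python.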

Running claude plugin validate returned "Validation passed." The receipts:

| Component | Before (Python SaaS) | After (Plugin) |
| --- | --- | --- |
| Research agent | 6,405 lines across 14 modules | 387-line SKILL.md |
| Excel financial model writer | 1,476 lines | 472-line SKILL.md |
| Business plan writer | ~3,200 lines across 9 modules | 286-line SKILL.md |
| Intake questionnaire | 5,629 lines across 26 files | 391-line SKILL.md |
| Document export | ~1,000 lines | 178-line SKILL.md |
| Total | ~21K lines | 43 files, 588 KB |
| Hosting cost | $5-20/month (estimated) | $0/month (user's Claude subscription covers compute) |

The plugin is open source: github.com/aivant-io/bizplan-plugin.

What survived intact: the 13 curated benchmark JSON files. The Excel template (74 KB of financial modeling formulas). The reference Word document template (37 KB of professional styling). The driver catalog with 49 driver definitions. The mapping tables and decision heuristics.

What disappeared: the FastAPI server, SQLite database, and background task queue. The React frontend with form validation and localStorage persistence. The Pydantic models, TypeScript types, and API routes. The Docker configuration and deployment scripts. Every line of orchestration glue connecting one pipeline stage to the next.

The pattern is not subtle. The domain knowledge survived. The infrastructure evaporated.

The Full Product Lifecycle Used Four Different Claude Interfaces

The plugin conversion was not the only place Claude showed up. The entire product lifecycle used different Claude interfaces, from initial build to final distribution:

Claude Agent SDK built the original research agent. When a founder selects "beauty and personal care" as their product category, the system needs curated conversion rate benchmarks, CAC ranges, and margin profiles. The Agent SDK let me build parallel subagents that pull from a curated database first, then fall back to live web research, then to regional defaults. That research architecture is domain knowledge: the priority hierarchy, the source attribution, the confidence scoring. It survived the translation.

Claude Code performed the SaaS-to-plugin conversion itself. It read my Python source files, understood the domain logic embedded in them, and wrote SKILL.md instructions that capture what matters and discard what does not.

Claude in Excel restructured the financial model. The original workbook was one enormous sheet. Claude in Excel helped me decompose it into four sheets (Assumptions, Model, Financial Statements, and Dashboard) with proper cell references across sheets. The plugin would have worked with everything on one sheet, but the restructuring made the model readable.

The Claude Plugin system is the distribution mechanism. A founder installs the plugin, runs /bizplan, and gets a populated Excel model and a sourced business plan. No deployment. No onboarding. No infrastructure.

The same underlying intelligence, applied through different surfaces, at each stage of the product lifecycle.

The Tradeoffs

This is not a free lunch. Converting a SaaS to a plugin means accepting real constraints, and some of them matter.

You lose determinism. SKILL.md files are instructions, not code. Claude interprets them. Two runs with the same inputs will not produce byte-identical outputs. The financial model will have the same numbers (those are deterministic cell writes) but the 7,000-word business plan will vary in phrasing, emphasis, and structure. For some products, this nondeterminism is unacceptable. For a business plan that gets edited by the founder anyway, it is fine.

There is no billing mechanism. Plugins are free. There is no way to charge users today. This works if distribution matters more than direct monetization right now: lead generation for a consulting practice, brand awareness, market validation. If you need revenue on day one, you still need billing infrastructure, which means you still need a SaaS.

You are dependent on the ecosystem. The plugin system is new. If the marketplace does not grow, your 43 files are well-organized documentation with no distribution. This is a real bet, and you should be honest about whether you are comfortable making it.

Not everything converts. Products that require real-time data pipelines, persistent user state across sessions, sub-second latency, or hardware access do not fit this pattern. If the value of your product is the infrastructure itself, like a real-time analytics dashboard, a collaborative editing environment, or a high-frequency trading system, you still need to build the infrastructure. The knowledge-versus-infrastructure distinction matters.

Should Your SaaS Be a Plugin?

After going through this conversion, I keep returning to four questions. They work as a quick diagnostic for whether a product (yours or a client's) could follow this path.

1. Where does the value live?

Look at your codebase and separate the domain knowledge from everything else. The mapping tables, benchmarks, heuristics, decision trees, curated data, and business rules. That is the domain knowledge. The API routes, database queries, authentication middleware, task queues, and deployment configuration. That is the delivery mechanism.

If the domain knowledge could be written as structured instructions and reference files that a smart generalist could follow to produce the same output, the product is a plugin candidate.

2. Is distribution more important than monetization right now?

Plugins distribute for free through an existing marketplace. If you need reach before revenue, the plugin model trades monetization for distribution. You use the product to generate consulting leads, build credibility in a space, or validate market fit before investing in billing. If you need revenue immediately, you need billing infrastructure, which means you still need software.

3. Can an LLM be the runtime?

The workflow pattern that works: read structured input, apply domain rules, produce structured output. Questionnaire answers go in, a financial model and business plan come out. If your product follows this pattern (input, rules, output) the LLM handles the orchestration natively.

What does not work: sub-second response requirements, persistent multi-user state, hardware integrations, real-time collaboration, or continuous data pipelines. If the workflow needs any of these, it needs traditional infrastructure.

4. How much of your codebase is orchestration glue?

Go through your codebase and tag each file: domain knowledge, orchestration, or infrastructure. If more than half is orchestration glue (connecting module A to module B, handling errors between stages, retrying failed operations, formatting data between steps) the LLM already does that. Your SKILL.md only needs to encode the domain-specific parts.
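A minimal sketch of that tagging exercise, assuming you classify files by path keywords and count lines per category (the keyword lists here are assumptions; adapt them to your own repo layout):

```python
# Classify files as domain / orchestration / infrastructure by path keywords,
# then report the line-count split. Keyword rules are illustrative guesses.
from collections import Counter

RULES = [
    ("infrastructure", ("docker", "deploy", "db", "auth", "routes")),
    ("domain", ("benchmarks", "catalog", "mappings", "rules")),
]

def tag_file(path: str) -> str:
    lower = path.lower()
    for label, keywords in RULES:
        if any(k in lower for k in keywords):
            return label
    return "orchestration"  # everything else is usually glue

def split_by_category(files: dict) -> dict:
    """files: path -> line count. Returns total lines per category."""
    totals = Counter()
    for path, lines in files.items():
        totals[tag_file(path)] += lines
    return dict(totals)
```

Run it over a manifest of your repo and compare the percentages; anything tagged orchestration is a candidate for deletion in a plugin conversion.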

In my case, the split was roughly 25% domain knowledge, 40% orchestration, and 35% infrastructure. The domain knowledge survived as SKILL.md files and reference data. The other 75% (orchestration and infrastructure combined) disappeared.

This is a useful exercise even if you decide not to convert. It forces clarity about where your competitive advantage actually sits.

What Changed in Our Defaults

At Aivant, we help teams figure out where AI fits into their operations. Before this project, our default assumption was that delivering AI-powered capabilities meant building software: an application with a backend, a frontend, and infrastructure to run it.

That default changed.

Before we write a line of code for a client engagement, we now ask: could this be a SKILL.md? Could the client's domain expertise (their decision frameworks, their curated data, their business rules) be encoded as structured instructions that an LLM follows, rather than as code that a server executes?

Sometimes the answer is no, and we build the software. But surprisingly often, the answer is yes. The client's competitive advantage is not their codebase. It is their domain knowledge. Their curated benchmarks. Their decision logic. The code was always just a delivery vehicle. Now there is a lighter vehicle available.

What I Would Do Differently

Start with the SKILL.md. If I were building this again, I would write the instructions first. Describe the questionnaire. Describe how to resolve the assumptions. Describe how to populate the Excel template. If Claude can follow those instructions and produce the right output, you do not need the SaaS. Build the software only when the instructions are not enough.

Curate the knowledge earlier. The benchmark data (11 product categories, 5 regions, conversion rates and CAC and margin ranges with source attribution) is the moat. I would have invested there first instead of building infrastructure I did not need.

Study existing plugins first. Anthropic's own financial analysis plugins were the template I should have followed from the start. Do not invent patterns when good ones already exist.

Try This

If you want to test whether your own project could work as a plugin, here is the minimal structure:

```
your-plugin/
├── .claude-plugin/plugin.json
├── commands/your-command.md
└── skills/
    └── your-skill/
        ├── SKILL.md          # This is the product
        ├── references/       # Schemas, catalogs, rules
        └── data/             # Curated benchmarks, mappings
```

The SKILL.md frontmatter:

```yaml
---
name: your-skill-name
description: >
  What this skill does in 2-3 sentences.
compatibility: What tools or access it needs.
---
```

Validate it:

```bash
claude plugin validate ./your-plugin
```

The question worth asking is not "should I build a SaaS or a plugin?" That frames it as a technology choice. The real question is: where does the value in my product live? If the answer is domain knowledge (curated data, decision logic, expert heuristics) the delivery mechanism might be simpler than you think.

Forty-three files. 588 KB. A month of code, distilled into the knowledge that mattered all along.


We help teams figure out which parts of their operations are knowledge versus infrastructure — and build the right delivery mechanism for each.

For the detailed comparison of Claude Code versus the Agent SDK that informed this decision, see Code or SDK: When You Actually Need the Agent SDK.

Want help putting this into practice?

Book a 15-minute call