← All posts
Part 1 of 8 · May 2026 · ~4 min read

I Got a Claude Subscription and Started Three Projects at Once

What building three apps simultaneously taught me about AI tools

Before we begin: This blog exists because of what it describes. From day one, I organized knowledge for AI — rules, architecture decisions, coding standards, domain context — without knowing it had a name. Over time, I conceptualized it, understood its scales, and it emerged as what I now call AI Knowledge Engineering, formalized through the harness.os methodology. This series tells that story: how it happened, what I built, and what I learned. At the end, I'll show how this blog itself was built using AI Engineering (the execution engine) and AI Knowledge Engineering (the knowledge that made the execution good).

The ideas I never built

I'd always wanted to build my own systems. A system to track my skydive progression — jumps, qualifications, canopy downsizing. Something to manage my financial life properly. A training tracker that understood how different types of exercise feed into each other. A way to organize the knowledge I was accumulating across everything I do.

I'd tried. Notion became my second brain — pages for skydiving, fitness, finances, routines, knowledge management. YouTube playlists, Google Drive folders, Apple Notes, scattered docs. The organization was real, the intent was real, but none of it was a system I'd actually built. It was always someone else's tool, bent into shape.

At work, I was already using Copilot daily. I knew the files in the repo helped the tool give better answers — I'd read the codebase, understood the structure, and the AI benefited from that context passively. But I wasn't organizing anything specifically for the AI. The repo was just there, and Copilot worked better because of it.

When I got a Claude Code subscription, I figured I'd need something similar: structured context the tool could use. But this time I was starting from scratch, building my own projects. And I quickly learned that the frustrations were worse than at work: accidentally closing a session, the computer restarting mid-conversation, opening a new conversation and wishing something from the last one had been saved. At work the repo carried the context. Here, there was nothing.

So naturally, I started three projects at once.

Three apps, one developer

way2fly — a skydive progression tracker. I'm a skydiver, and the existing tools for logging jumps and tracking progression through qualifications are terrible. Paper logbooks and spreadsheets. I knew exactly what I wanted: a clean mobile app that understood the structure of skydiving progression, from A-license through to advanced disciplines.

way2move — training and wellness tracking. Not another generic fitness app. Something that understood the relationship between different types of training, recovery, and how they all feed into each other. The kind of holistic view that no single app gives you.

way2save — personal finance. Budgets, goals, the boring stuff that everyone needs and nobody enjoys tracking. But designed the way I actually think about money, not the way some product manager at a fintech company decided I should.

Three apps. All Flutter, because I wanted them on both iOS and Android. All built solo. All with an AI assistant that, as I was about to find out, had no memory between sessions.

The persistence problem

From the start, I was using Claude to make decisions — not just write code. Which state management approach? Riverpod over Bloc. Which architecture pattern? Repository pattern for the data layer. How to share a design system across three apps. A lot of architecture translates from backend — dependency injection, separation of concerns, clean layering. The UI was new, but the thinking wasn't.

I already knew from using Copilot at work that files in the project helped the AI give better answers. So I created CLAUDE.md files, architecture docs, decision records early on. The basic persistence was there from day one — I wasn't starting from zero each session.
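To make this concrete, here's a sketch of the shape those early files took. The content below is illustrative, reconstructed in the spirit of the post rather than copied from the actual projects, and names like the shared-package path are hypothetical:

```markdown
# CLAUDE.md (way2fly)

## Project
Skydive progression tracker. Flutter, targeting iOS and Android.

## Architecture
- State management: Riverpod with code generation
- Data layer: repository pattern
- Shared design system (hypothetical path): packages/way2_ui

## Conventions
- Feature-first folder structure: lib/features/<feature>/
- One provider file per feature
```

Even a skeleton like this meant a new session started with the stack and the structure already known, instead of re-explained.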

But the frustrations were different from what I expected. The computer would turn off in the middle of a conversation. I'd accidentally close a session. I'd open a new conversation and wish the in-flight context from the last one had survived — not the files I'd already written, but the live thread of reasoning we'd been building together.

With each new conversation, I wasn't starting over. I was figuring out more. Every session revealed another gap — something the existing files didn't cover, a decision that wasn't captured yet, a pattern the AI kept getting wrong despite the docs I'd written.

The real problem wasn't amnesia. It was that the knowledge I needed to persist kept growing. Each session taught me something new about what the AI needed to know, and the files I had weren't keeping up with what I was learning about how to work with AI effectively.

Now multiply that by three projects. Context-switching from way2fly to way2save meant the AI lost the session thread, and sometimes confused which project we were even talking about. I'd ask about a data model and get an answer from the wrong domain: skydiving jumps bleeding into financial transactions. Three projects made the gaps visible faster.

From files to rules

I already had CLAUDE.md files and architecture docs in each project. What I started noticing through daily usage was that I could do more than describe the project: I could enforce behaviors. Rules the AI had to follow. Schemas it had to respect. Constraints that prevented it from drifting.

The first files were descriptions: "here's the project, here's the tech stack." But through continued usage I realized I could write prescriptive rules: always use this pattern, never use that one, follow this naming convention, structure responses this way.

So the files evolved. A .claude/rules/ directory for coding standards that the AI had to follow. Spec files for features I was planning. Architecture decision records that weren't just documentation — they were constraints.

Each project got its own set. Context-switching between projects became possible because each project carried its own knowledge and its own rules. The AI behaved differently in each project because the files told it to.
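As an illustration of the shift from description to prescription, a rules file might look like this. The file name and exact wording are hypothetical, but the declarative, do-this-never-that style is the point:

```markdown
# .claude/rules/state-management.md

- Always use the @riverpod annotation; never hand-write providers.
- Never use StateNotifier (legacy).
- All data access goes through a repository; widgets never call APIs directly.
- Wrap async state in AsyncValue; handle loading and error branches explicitly.
```

Nothing in it explains why. It exists purely to constrain what the AI does next.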

What these files actually were

A few weeks in, I noticed something: these files weren't documentation. Not in the traditional sense.

Documentation is for humans. It explains why. It tells a story. It provides context that helps a person understand a system they're encountering for the first time. Good documentation has narrative structure, progressive disclosure, examples that build intuition.

What I was writing was something different. It was structured knowledge for AI. And AI knowledge needs to be structured differently. It needs to be precise, declarative, and exhaustive about the what and the how. It doesn't need narrative. It needs rules. It doesn't need to build intuition. It needs to be told exactly what to do and what not to do.

Compare:

Human doc:
  "We chose Riverpod for state management because
  it offers better testability than Bloc and its
  code-generation approach reduces boilerplate..."

AI doc:
  State management: Riverpod with code generation.
  - Always use @riverpod annotation
  - Never use StateNotifier (legacy)
  - Repository pattern for data layer
  - AsyncValue for loading/error states

The human version explains a decision. The AI version is an instruction set. Both are valuable. They serve different audiences.

I didn't have a name for it yet, but what I was doing — organizing knowledge specifically for AI consumption — would turn out to be the core practice. Not AI Engineering (building the engine). AI Knowledge Engineering (organizing what the engine needs to know).

Three projects, one lesson

Running three projects simultaneously turned out to be useful, even if it wasn't planned that way. A single project might have let me tolerate the repetition. "Yeah, I have to re-explain the architecture each time, but it's just one project." I might have limped along for months without solving the problem.

Three projects made the friction obvious fast. Three sets of architecture decisions. Three sets of coding standards. Three tech stacks with subtle differences. The repetition cost was tripled, and I reached the point where I had to fix it in days, not months.

That urgency pushed me to solve the problem immediately. Not with a grand plan or a framework, but with the most pragmatic tool available: files. Text files in the right places, with the right content, read by the AI at the start of every session.

Principle #1: AI tools need organized, persistent knowledge to be effective across sessions. Without it, every conversation is an expensive repetition of the last one.

I didn't know it yet, but I'd stumbled onto the first principle of what would become harness.os. The files were messy. The organization was ad hoc. The content was whatever I thought to write down between debugging sessions. But the basic idea was right: the AI is only as good as the knowledge you give it, and that knowledge needs to persist.

If files make the AI better, what's the right set of files? What's the structure? What's the anatomy of the knowledge an AI actually needs?

That question led to the next step.