What is software, really?
Not philosophically. Practically. What does every piece of software that has ever been worth anything actually do?
It improves a process.
Email improved the communication process. Before email, you wrote a memo, printed it, walked it to someone's desk or mailed it. After email, same process, much better execution. The process — "communicate information to someone who isn't here" — didn't change. The means of executing it did.
Excel improved the bookkeeping process. Uber improved the transportation process. Shopify improved the retail process. Slack improved the team communication process. Every successful software product in history is the same story: someone looked at how people do a thing, and said, "software can make this better."
This is so obvious that it's invisible. We don't think of ourselves as building process improvement tools. We think we're building "apps" or "platforms" or "products." But strip away the branding and the UI and the investor pitch, and what's left is always the same thing: a process that existed before the software, now executed more efficiently because of it.
I needed to see this clearly before I could understand what AI actually changes. Because AI doesn't change what software is for. AI changes who participates in the process that software improves.
A new participant
Before AI, the participants in software-mediated process improvement were humans and code. Humans made decisions. Code executed them. The boundary was clear. A human decided what to write in the email; Gmail handled sending it. A human decided what numbers to put in the spreadsheet; Excel handled the formulas. Humans think. Software acts.
AI blurs that line. Not erases it — blurs it. AI can participate in the thinking part. Not all of it. Not reliably for all of it. But meaningfully, for a growing portion of it.
This means the process improvement equation has a new variable. It used to be: Humans + Software = Better Process. Now it's: Humans + AI + Software = Better Process.
The goal is the same. There's just a new participant.
AI isn't a change in what software does. It's a change in who's doing it. Once you see it that way, the question of "what do I build for AI?" becomes clearer: you build whatever helps this new participant do its job within the existing process.
But this participant has a memory problem
Here's the catch. Humans carry process knowledge in their heads. A senior engineer doesn't need to re-read the architecture docs before every code review. A veteran skydiving instructor doesn't need to check the manual before every debrief. Humans accumulate knowledge through experience, store it in memory, and apply it instinctively.
AI doesn't do any of that.
The model has broad training knowledge — it knows what skydiving is, it knows what software architecture means — but it knows nothing about YOUR specific process. Your decisions. Your constraints. Your learnings. Your domain rules. Every session, you have to re-teach it.
Unless you externalize the knowledge.
This is what the outer harness is for. It's the externalized process knowledge that allows AI to participate effectively without starting from scratch every time. Without it, AI is impressive but generic. With it, AI becomes useful in your specific context.
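To make that less abstract, here is roughly what "externalizing the knowledge" looks like in practice: store it as chunks, then assemble the relevant ones into the model's context at the start of every session. This is a minimal sketch, not the actual harness.os implementation; the names (KnowledgeChunk, load_context) and the flat in-memory list are illustrative.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeChunk:
    topic: str    # e.g. "flutter-architecture", "migrations"
    content: str  # the externalized knowledge itself
    source: str   # which session or document it came from

def load_context(chunks: list[KnowledgeChunk], topics: list[str]) -> str:
    """Assemble the chunks relevant to this session, so the model
    starts with your process knowledge instead of from scratch."""
    relevant = [c for c in chunks if c.topic in topics]
    return "\n\n".join(f"[{c.topic}] {c.content}" for c in relevant)

# Every session begins by injecting the relevant knowledge into the prompt:
# prompt = load_context(all_chunks, ["flutter-architecture", "tdd"]) + "\n\n" + task
```

The point isn't the storage mechanism. The point is that the knowledge lives outside the model, in a form that can be handed back to it on demand.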
Four kinds of knowledge
When I started categorizing the knowledge in my harness databases, I expected a mess. Hundreds of chunks accumulated over months of building, organized by project and by vibes. But when I stepped back and looked at what was actually there, a clean taxonomy emerged. Everything fell into one of four categories.
Build knowledge — how we create things. Development workflows, coding patterns, CI/CD configurations, testing strategies, deployment playbooks. The knowledge that governs the creation process itself. "When we build Flutter apps, we use hexagonal architecture. When we write tests, we do TDD. When we deploy, we use this pipeline." This is knowledge about making software.
Product knowledge — what we're creating. Architecture decisions for a specific product. The roadmap. The feature set. Why we chose this database over that one. What the user personas look like. The constraints that shaped the design. This is knowledge about a specific thing being built.
Operations knowledge — how a domain works. Skydiving progression rules. Training periodization principles. Financial planning frameworks. The operational rules and workflows of a real-world domain, independent of any software. This is knowledge about how things work in the world.
Domain knowledge — the actual data. Jump logs. Workout records. Transaction histories. The concrete, user-specific data that the process operates on. This is knowledge about what has actually happened.
Build, Product, Operations, Domain. Four types. They cover the complete lifecycle of process improvement through software. Build is how you make the tool. Product is what tool you're making. Operations is the process the tool improves. Domain is the data the process acts on.
These four categories aren't arbitrary. They map to real organizational boundaries. Build knowledge is owned by the engineering team. Product knowledge is owned by the product team. Operations knowledge is owned by domain experts. Domain knowledge is owned by users. Different owners, different lifecycles, different access patterns. Mixing them is how knowledge systems become unmaintainable.
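If you wanted to encode that split in a system, the simplest version is a category tag on every chunk, with the owner derived from the category. A sketch, assuming nothing beyond what's described above; the class and field names are mine, not harness.os internals.

```python
from enum import Enum

class KnowledgeCategory(Enum):
    BUILD = "build"            # how we create things
    PRODUCT = "product"        # what we're creating
    OPERATIONS = "operations"  # how the domain works, independent of software
    DOMAIN = "domain"          # the actual data the process operates on

# Different categories, different owners (and, by extension,
# different lifecycles and access patterns).
OWNER = {
    KnowledgeCategory.BUILD: "engineering team",
    KnowledgeCategory.PRODUCT: "product team",
    KnowledgeCategory.OPERATIONS: "domain experts",
    KnowledgeCategory.DOMAIN: "users",
}
```

Keeping the tag explicit is what stops the categories from bleeding into each other as the chunk count grows.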
This is not new. This is BPM with a brain.
Business Process Management is a $16 billion-plus market. Companies like Celonis, UiPath, and Bizagi have built businesses on this premise: structuring and improving business processes creates real value.
harness.os isn't disrupting BPM. It's extending it. Traditional BPM structures processes for human participants. harness.os structures the knowledge those processes depend on, so AI can participate alongside humans. Same premise, expanded scope.
BPM has always been a form of knowledge engineering — encoding process knowledge into systems. AI Knowledge Engineering extends this to ALL types of knowledge, not just processes. Build knowledge. Product knowledge. Operations knowledge. Domain knowledge. BPM covers one slice. AI Knowledge Engineering tries to cover all of them.
The recursive loop
This is the part where I want to be honest about what's proven and what's still theoretical.
The proven part: every learning logged in a harness makes the next session better. When an AI session discovers that a particular migration strategy causes issues, that learning gets stored. The next session that touches migrations has that knowledge available from the start. It doesn't repeat the mistake. The knowledge compounds. I've seen this happen across hundreds of sessions.
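Here's the shape of that loop, reduced to its essentials. A minimal sketch, assuming a plain JSON file as the store; the function names, the "area" keying, and the example learning text are illustrative, not the harness's actual pipeline.

```python
import json
from pathlib import Path

LEARNINGS = Path("learnings.json")

def log_learning(area: str, lesson: str) -> None:
    """Store a learning discovered during a session."""
    data = json.loads(LEARNINGS.read_text()) if LEARNINGS.exists() else []
    data.append({"area": area, "lesson": lesson})
    LEARNINGS.write_text(json.dumps(data, indent=2))

def learnings_for(area: str) -> list[str]:
    """Retrieve every prior learning for an area, for the next session."""
    if not LEARNINGS.exists():
        return []
    return [d["lesson"] for d in json.loads(LEARNINGS.read_text()) if d["area"] == area]

# Session N logs what it found:
# log_learning("migrations", "Strategy X locks large tables; batch the copy instead.")
# Session N+1 that touches migrations gets it from the start:
# prior = learnings_for("migrations")
```

Each cycle through this loop is small. The compounding comes from never paying for the same mistake twice.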
The theoretical part: at scale, cross-domain pattern discovery should create broader improvements. A learning from the build process should improve the product process. An operational insight from skydiving should inform training methodology. The mesh of connected harnesses should surface patterns that no single harness contains.
I believe this will work because I've seen hints of it. But I haven't built the metrics pipeline to prove it quantitatively. The compound effect feels real. I need the data to confirm it. Being honest about what's measured and what's intuited matters when you're making claims about recursive improvement.
What this means for what I was building
The takeaway: software improves processes. AI is a new participant in that improvement. The outer harness is data organization for that participant — structured knowledge that allows AI to contribute meaningfully to processes it would otherwise know nothing about.
And that knowledge falls into four clean categories: Build, Product, Operations, Domain. Each with different owners, different lifecycles, and different access patterns. Each serving a different role in the process improvement lifecycle.
This wasn't just a taxonomy for my projects. It was a taxonomy for any process improvement through software and AI. Any team building software is creating Build and Product knowledge. Any domain has Operations knowledge. Any user generates Domain data. The categories are universal.
Which meant that harness.os wasn't just a personal tool anymore. It was a methodology. A way of thinking about knowledge organization for AI-assisted process improvement that could apply to any team, any domain, any scale.
But I was getting ahead of myself. I had the taxonomy. I had the theory. Now I needed to answer the concrete question: what categories of knowledge does this new participant actually need, and how do you structure them so they're genuinely useful instead of just well-organized noise?
The answer turned out to be simpler than I expected.