harness.os
AI Knowledge Engineering

harness.os Blog

Two series — the journey of building harness.os, and the continuous improvement that followed. From files to methodology, from methodology to a system that improves itself.

Marco · May 2026 · 2 series · 15 posts · ~85 min total

First, see what was built

Before the journey — see the destination. A multi-app AI ecosystem, from domain data to full orchestration.

What is AI Knowledge Engineering?

Not an official term — different people call it different things (KPMG: "knowledge engineering", Anthropic: "context engineering", Fowler: "harness engineering"). It's the framing I use for how harness.os works — a useful way to think about the two layers.

AI Engineering

Picking models, writing prompts, building agents, wiring tools. The inner harness — the thin runtime connector. Claude Code, Copilot, custom API agents. Interchangeable by design.

Inner harness: Claude, GPT, Copilot, Codex, custom agents. A thin connector that reads context, calls a model, and routes tools. These tools keep getting better every month.
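The "thin connector" idea can be sketched in a few lines. This is a hypothetical illustration, not the actual harness.os code: the model is stubbed out, and `make_connector`, `stub_model`, and the `read_file` tool are invented names for the sake of the example.

```python
# Minimal sketch of an inner-harness connector (illustrative names only):
# read context, call a model, route tool calls, repeat until an answer.
from typing import Callable, Dict


def make_connector(model: Callable[[str], dict],
                   tools: Dict[str, Callable[[str], str]]):
    """Wrap any model callable in a thin connector.

    `model` takes a prompt and returns either
    {"tool": name, "arg": value} or {"answer": text}.
    """
    def run(context: str, task: str) -> str:
        prompt = f"{context}\n\nTask: {task}"
        while True:
            reply = model(prompt)
            if "tool" in reply:
                # Route the requested tool call and feed the result back.
                result = tools[reply["tool"]](reply["arg"])
                prompt += f"\n[tool:{reply['tool']}] {result}"
            else:
                return reply["answer"]
    return run


# Stub model: requests one file read, then answers.
def stub_model(prompt: str) -> dict:
    if "[tool:read_file]" not in prompt:
        return {"tool": "read_file", "arg": "ARCHITECTURE.md"}
    return {"answer": "done"}


run = make_connector(stub_model, {"read_file": lambda p: f"<contents of {p}>"})
print(run("You are a build agent.", "summarize the architecture"))  # prints "done"
```

Because the connector only reads context, calls a model, and routes tools, swapping Claude for GPT or Copilot means swapping the `model` callable — everything the loop feeds it comes from the outer harness.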

AI Knowledge Engineering

Structuring what the AI needs to know AND how work should be done — so that any AI can use it. The outer harness. Your knowledge AND your processes, organized so AI becomes a competent participant.

Outer harness: build, product, operations, domain knowledge + rules, workflows, process definitions. Stays the same when you swap models. It's yours.

AI Engineering runs tasks; AI Knowledge Engineering makes each run more structured than the last.
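One way to picture the outer harness is knowledge tagged by concern and selected per task, rather than loaded wholesale. The sketch below is hypothetical — the entry names and the flat-list schema are illustrative, not the real harness.os data model.

```python
# Hypothetical sketch of concern-based knowledge selection:
# each entry carries a set of concerns; a task pulls only what matches.
KNOWLEDGE = [
    {"id": "deploy-runbook", "concerns": {"operations"}, "text": "..."},
    {"id": "api-conventions", "concerns": {"build", "product"}, "text": "..."},
    {"id": "domain-glossary", "concerns": {"domain"}, "text": "..."},
]


def context_for(task_concerns: set) -> list:
    """Return only the knowledge entries relevant to this task's concerns."""
    return [e["id"] for e in KNOWLEDGE if e["concerns"] & task_concerns]


print(context_for({"build"}))  # → ['api-conventions']
```

The selection logic never mentions a model: the same filtered context can be handed to Claude Code, Copilot, or a custom agent, which is what makes the inner harness interchangeable.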

This blog is proof of both

This blog was built with build.ai — AI agents that decompose requests into phased pipelines. That's AI Engineering: the agents, the models, the execution. But the blog also draws from 6 weeks of structured knowledge — project history, architecture decisions, methodology documentation, dev workflow patterns — all organized in a harness.os knowledge mesh. That's AI Knowledge Engineering.

AE · AI Engineering
build.ai agents generated, structured, and deployed 8 posts in parallel.

AKE · AI Knowledge Engineering
Harness knowledge made the output accurate — git history, decisions, learnings, all pre-structured.

= The Result
A blog that maintains itself — new posts, updates, and corrections flow through the same pipeline. Built after 6 weeks of structured knowledge accumulation: the detail and accuracy come from AI Knowledge Engineering, not from the tools.

The agents that wrote this blog had no special instructions about the content. They connected to the same knowledge harness that builds software — and produced documentation instead of code. Same outer harness, different output.

For the full technical documentation, see the kernel docs.

Series 1

Building harness.os

Eight posts across four phases — from first commit to methodology

Series 2

Continuous Improvement

The system that improves itself — each post documents a real improvement, written in the session where it happened

1. One Brain, Every Interface
How harness.os connects Claude Code, build.ai, Copilot, and anything else — and why the interface doesn't matter.

2. The Session That Stopped Itself
How a QA session across three apps revealed that working without the harness defeats its own purpose — and the enforcement mechanism we built to prevent it.

3. The Knowledge Precision Problem
Why loading everything is the opposite of intelligence — how task-based routing cut context tokens by 97% across three apps.

4. From Files to a Knowledge API
Why your AI rules don’t scale past one project — and the 4-stage progression from firehose to learned routing.

5. Building the Knowledge API
The implementation story — PostgreSQL tables, MCP tools, concern-based tagging, and the usage tracking that powers learned routing.

6. Token Economics at Scale
Why context cost grows linearly with projects, how a harness creates a crossover point, and the 6-stage progression from expensive to self-funding.

7. Infrastructure That Moves
Why a harness that can't relocate its own hosting, restructure its own repo, and rewire its own connections in a single session isn't really infrastructure.

8. Kernel, Distribution, Mesh
Why “methodology, configuration, mesh” was confusing everyone — and how a Linux analogy gave the three layers names that explain themselves.
More posts added as improvements happen. This series has no planned end — that's the point.