Part 2 of 8 May 2026 ~9 min read

The Artificial Anatomy of AI

How I organized AI knowledge into cortex and spine

From scattered files to a question

After finding that files made AI sessions noticeably better, I had a new problem. I had files everywhere. Markdown in root directories. Rules in nested folders. Architecture notes in docs. Decision records scattered across three projects with no consistent naming, no clear hierarchy, no system.

It worked — sort of. Each file made the AI slightly smarter. But I kept hitting the same frustration from a new angle. Instead of explaining context at the start of every session, I was now hunting for the right file to update, or discovering two files that contradicted each other, or realizing I'd written the same rule in three different places with three slightly different phrasings.

The files had solved the amnesia problem. They hadn't solved the organization problem. And that question — what's the right way to organize knowledge for AI? — turned out to be worth spending time on.

Two kinds of knowing

I spent a weekend looking at every file I'd created across all three projects, and a pattern became clear. Every single file fell into one of two categories.

The first category was knowledge. Things the AI needs to understand. Architecture decisions, domain concepts, feature specifications, data models, API designs, third-party integration details. This was the "what is true about this project" layer. Declarative. Descriptive. The AI reads it and builds a mental model of the system.

The second category was rules. Things the AI must follow. Coding standards, naming conventions, testing requirements, workflow procedures, constraints, forbidden patterns. This was the "what you must do and must not do" layer. Imperative. Prescriptive. The AI reads it and constrains its behavior.

Knowledge tells the AI what exists. Rules tell the AI how to act. They're fundamentally different, and when you mix them together in the same file, both get worse.

Naming the parts

I'm a backend engineer. I think in systems. And when I saw these two categories, my brain immediately reached for a metaphor: the human body.

Cortex — the brain. Where knowledge lives. Everything the AI needs to know about a project, a domain, a system. Architecture, decisions, specs, domain models. This is the thinking layer.

Spine — the backbone. Where rules live. The structural integrity that keeps everything upright. Coding standards, test requirements, workflow rules, constraints. This is the structural layer.

I called the whole thing the Artificial Anatomy of AI. The idea was straightforward: a body of knowledge with distinct parts, each with a specific function. The cortex holds what the AI needs to know. The spine holds how the AI should behave.

The naming wasn't arbitrary. Anatomy implies that the parts are connected and that removing one degrades the whole. In practice, that's what I observed. A project with cortex but no spine had knowledgeable AI that wrote inconsistent code. A project with spine but no cortex had well-formatted AI output that misunderstood the domain.

Organizing the organs

Once I had the two categories, the sub-structure followed naturally.

Cortex broke down into distinct types of knowledge: domain concepts, architecture decisions, feature specifications, data models, API designs, and third-party integration details.

Spine broke down similarly: coding standards, naming conventions, testing requirements, workflow procedures, and constraints on what must never be done.

Each project got the same structure. Different content, same anatomy. way2fly's domain files talked about skydiving. way2save's domain files talked about financial goals. But they lived in the same place, served the same purpose, and were read by the AI in the same way.
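That "same structure, different content" idea can be sketched as a small scaffold script. The folder and file names below are my illustrative stand-ins, since the post names the categories rather than exact filenames:

```python
from pathlib import Path

# Hypothetical layout -- illustrative stand-ins for the cortex/spine split.
ANATOMY = {
    "cortex": ["domain.md", "architecture.md", "decisions.md", "data-model.md"],
    "spine": ["coding-standards.md", "testing-rules.md", "workflow.md"],
}

def scaffold(project_root: str) -> list[Path]:
    """Create the same cortex/spine skeleton under any project."""
    created = []
    for organ, files in ANATOMY.items():
        for name in files:
            path = Path(project_root) / organ / name
            path.parent.mkdir(parents=True, exist_ok=True)
            if not path.exists():
                # Seed each file with a heading so it is never empty.
                path.write_text(f"# {name.removesuffix('.md')}\n")
            created.append(path)
    return created

# Different content, same anatomy: every project gets an identical skeleton.
paths = scaffold("way2fly")
```

Running the same function against way2save or any other project yields the same anatomy, which is what makes the files readable by the AI in the same way everywhere.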

From files to Obsidian

With three projects running simultaneously and each now carrying a growing body of cortex and spine files, I needed visibility. Which project had gaps? Which files were stale? Where was the anatomy incomplete?

Before building anything custom, I turned to Obsidian — a markdown-based knowledge management app that lets you link notes together and query them with plugins like Dataview. It was a natural fit — I already had markdown files organized by project, and Obsidian's query capabilities let me create dashboards over them. I could write Dataview queries to show all decision records across projects, surface rules that hadn't been updated in weeks, or list domain files that referenced stale patterns.

But I soon hit its limits for what I needed. Obsidian gave me the bird's-eye view, and the queries made the file system feel more like a knowledge base than a folder hierarchy. The dashboards, though, were read-only views over files. I could see what was there, but managing it — identifying conflicts between projects, keeping rules consistent across three codebases, updating domain knowledge as I learned — was still manual work. Obsidian is a good tool for personal note-taking and it still has its uses, but it is not a system for managing structured AI knowledge at scale.
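The kind of staleness query I was running in Dataview can be approximated in a few lines of plain Python. The directory layout here is an assumption (projects as top-level folders, each with a spine subfolder); the threshold mirrors "rules that hadn't been updated in weeks":

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 14  # "hadn't been updated in weeks"

def stale_files(root: str, subdir: str) -> list[str]:
    """List markdown files under <project>/<subdir> not modified recently.

    Like the read-only Dataview dashboards, this can *see* staleness,
    but actually fixing the files is still manual work.
    """
    cutoff = time.time() - STALE_AFTER_DAYS * 86400
    return sorted(
        str(p)
        for p in Path(root).glob(f"*/{subdir}/*.md")
        if p.stat().st_mtime < cutoff
    )

# e.g. stale_files("projects", "spine") -> every spine rule untouched for 2+ weeks
```

This is exactly the gap the post describes: surfacing the problem is easy; a system that manages the knowledge is the hard part.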

marco.ai and the domain needs

That's where marco.ai came in. Originally called cuca.ai ("cuca" is Brazilian Portuguese slang for brain/head), it was a personal assistant that could not just view the knowledge files but manage them: read them, edit them, identify conflicts, suggest updates. As I started building views for my personal domains (skydiving progression, fitness, finance), I realized these domains needed their own structured knowledge — not just project docs, but domain-specific models, rules, and context.

The domain views in marco.ai kept growing. Skydiving had progression rules, canopy sizing tables, qualification requirements. Fitness had programming principles, recovery protocols, exercise relationships. Finance had budgeting rules, tax jurisdictions, savings strategies. Each domain had its own knowledge needs, its own rules, its own way of organizing information.

This is where the idea of a domain harness started forming. Each domain didn't just need a folder of files — it needed a structured knowledge container with the same cortex/spine anatomy. The pattern I'd built for software projects applied equally to personal knowledge domains.

The leap to build.ai

So I built build.ai — a custom dashboard that could do what Obsidian couldn't: actively manage harness state, dispatch work to agents, and track the development process across all projects. What started as "I need a better version of my Obsidian dashboards" became an orchestration platform.

I was showing the anatomy concept to a colleague, and they asked: "Could we use this at work?"

The answer was obviously yes. A legal tech company has different domain knowledge than my skydiving apps, but the structure is the same. They need cortex (case law models, regulatory knowledge, client domain specifics) and they need spine (coding standards, security rules, compliance constraints). The anatomy is universal. The content is specific.

That's the seed of what would become cortex.ai — a multi-tenant version. What if any company could plug into the same anatomical framework? Different organs, same body plan. A hospitality company's cortex knows about reservations and guest profiles. A manufacturing company's cortex knows about supply chains and quality control. But both have a cortex, both have a spine, and both benefit from the same organizational structure.

The practice of structuring knowledge so AI agents can use it effectively is emerging under different names. KPMG calls it "knowledge engineering" and published The Knowledge Engineering Imperative. Anthropic calls it "context engineering". Martin Fowler and Thoughtworks call it "harness engineering". The term "knowledge engineering" itself dates back to the 1980s expert systems era. I arrived at the same principles through practice — three apps, growing file systems, and the realization that managing AI knowledge is itself a practice worth structuring.

The structure is the product

Looking back, the Artificial Anatomy was the first time the work shifted from "using AI to build apps" to "building a system for AI to work well." It's a subtle but critical difference.

When you're just using AI, you optimize for output: faster code, better suggestions, fewer bugs. When you're building a system for AI, you optimize for input: better knowledge, clearer rules, more precise context. The output improves as a consequence.

The anatomy gave me a vocabulary. Cortex and spine. Knowledge and rules. It gave me a checklist: does this project have domain files? Does it have decision records? Are the coding rules explicit? It gave me a way to evaluate completeness, which meant I could identify gaps before they caused problems in a session.
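That checklist can itself be expressed as code. The file names below are illustrative guesses — the post names the categories, not exact filenames — but the shape of the check is the point:

```python
from pathlib import Path

# The anatomy checklist, as code. Paths are hypothetical stand-ins.
REQUIRED = {
    "cortex/domain.md": "does this project have domain files?",
    "cortex/decisions.md": "does it have decision records?",
    "spine/coding-standards.md": "are the coding rules explicit?",
}

def anatomy_gaps(project_root: str) -> list[str]:
    """Return the checklist questions a project fails to answer."""
    root = Path(project_root)
    return [q for path, q in REQUIRED.items() if not (root / path).exists()]

# An empty list means the anatomy is complete; anything else is a gap
# worth closing before it causes problems in a session.
```

A completeness check like this is what turns the anatomy from a metaphor into something you can evaluate a project against.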

But I'd built a knowledge layer. A static one. Files on disk, read at the start of a conversation. What I hadn't figured out yet was the dynamic layer — how the AI actually uses that knowledge during execution. How it picks the right files at the right time. How it makes decisions that respect the spine while leveraging the cortex.

I had the anatomy. I had the dashboard. I had the vision of multi-tenant knowledge. What I didn't have yet was the realization that the knowledge layer matters more than the execution engine. That everyone was obsessing over which AI model to use and which tools to give it, while ignoring the much harder, much more valuable problem of what the AI knows when it starts working.

That realization came next, and it shifted what I was building from "a collection of useful files" toward something more like a methodology.