Most developers build one app at a time. You think about your schema, your API, your users, your deployment pipeline. The boundaries are clean. The domain is self-contained. You ship it, maintain it, move on.
I was building three apps simultaneously, all powered by the same knowledge architecture, and I stumbled into a question that most developers never encounter: what happens when an AI assistant needs to reason across multiple domains at once?
The answer shaped how I structured everything afterward.
The Question None of My Apps Could Answer
It started with a simple question. I was talking to marco.ai — my personal assistant that had access to all my harness files — and I asked:
"Can I afford a skydive camp next month?"
Simple question. The assistant gave me a polite non-answer, because answering it properly requires information from three completely separate domains:
- way2save needs to check my budget. Is there enough allocated for discretionary spending? What's committed vs. available? Are there upcoming bills that would eat into that buffer?
- way2fly needs to check camp schedules and whether I'm even eligible. Do I have the right license level? Enough recent jumps to qualify? Is there a camp running at a dropzone within range?
- way2move might need to weigh in too. A week-long camp with 8+ jumps a day is physically demanding. Am I training for it? Should I be? What does the preparation timeline look like?
And the traditional solution — call API A, call API B, call API C, stitch the responses together in the frontend — misses the point completely. The hard part isn't fetching data. The hard part is understanding what the data means across contexts.
A "budget" in way2save relates to an "activity" in way2fly relates to a "training program" in way2move. Those aren't just data relationships. They're semantic relationships. The kind that humans navigate intuitively but that traditional APIs completely ignore.
Structure Data for Cross-Domain Reasoning
The first thing I learned was about data architecture. If you know that AI assistants will need to reason across domains, you need to structure your data so they can.
This isn't the same as building a REST API. An API exposes endpoints: GET /budgets, GET /camps, GET /training-plans. An AI reasoning across domains needs something richer — a knowledge layer that understands what data means and how concepts in one domain map to concepts in another.
The data needed to be structured so an assistant could read and write across all of the apps. Not just "here's the data" but "here's what this data means in the context of everything else."
Concretely, this meant that the way2fly knowledge store couldn't just say "camp costs $2,500." It needed to express that cost as a category the financial domain understood. And the financial domain needed to know that "skydiving camp" wasn't just a line item — it was a progression milestone with scheduling constraints and physical prerequisites.
I started adding cross-domain annotations to the knowledge files. Not full-blown ontologies — nothing that formal. More like breadcrumbs. A field in the way2fly camp data that says budget_category: recreation.sport.skydiving. A field in the way2save budget that says linked_activities: [way2fly.camps, way2move.coaching]. Enough for an AI to follow the thread across domains without needing a human to wire up every possible query path.
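As a rough illustration of how those breadcrumbs let an assistant walk from one domain into another, here is a minimal sketch. The `budget_category` and `linked_activities` fields come from the text above; the store layout, entry keys, and the two helper functions are hypothetical, not the actual way2fly or way2save schema.

```python
# Sketch of breadcrumb-style cross-domain annotations. Only the field
# names budget_category and linked_activities are from the article; the
# store layout and helpers are illustrative assumptions.

knowledge = {
    "way2fly.camps": {
        "skydive-camp": {
            "cost": 2500,
            # breadcrumb into the financial domain's category tree
            "budget_category": "recreation.sport.skydiving",
        },
    },
    "way2save.budget": {
        "recreation": {
            "available": 3000,
            # breadcrumbs out to the activity domains this budget funds
            "linked_activities": ["way2fly.camps", "way2move.coaching"],
        },
    },
}

def linked_stores(store: str, key: str) -> list[str]:
    """Follow an entry's breadcrumbs to the knowledge stores it references."""
    return knowledge[store][key].get("linked_activities", [])

def can_afford(activity_store: str, activity_key: str,
               budget_store: str, budget_key: str) -> bool:
    """Answer the cross-domain question: does the linked budget cover the cost?"""
    cost = knowledge[activity_store][activity_key]["cost"]
    available = knowledge[budget_store][budget_key]["available"]
    return available >= cost
```

The point of the sketch: no query path was pre-wired for "can I afford the camp?" The assistant only needs to follow the breadcrumbs from one store to the next.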
This was really knowledge engineering — structuring knowledge so it could flow between contexts, not just within one.
Feedback-Driven Development
The second thing I noticed was about how software evolves. Traditional development works like this: you build features based on roadmap priorities, ship them, collect analytics, maybe run some user interviews, adjust the roadmap, repeat. The feedback loop is measured in weeks or months.
But when your users interact with an assistant, you get something more useful than click data: you hear exactly what they're trying to do, in their own words.
Every time someone asks the assistant a question it can't answer well, that's a signal. Not a vague engagement metric — a specific, actionable signal about what capability is missing. "Can I afford the camp next month?" told me, in one sentence, that my apps needed cross-domain reasoning. No A/B test would have revealed that.
This inverted the development model. Instead of building features and hoping users want them, the assistant surfaces what users actually struggle with, and that feeds back into what to build next. Metrics, usage patterns, and natural-language requests through the assistant drive development. It's continuous and specific — users don't sugarcoat their needs when they think they're just talking to a computer.
The Feedback Loop as Architecture
I started designing for this loop explicitly. The assistant would log every question it couldn't answer confidently. Those logs fed into build.ai's request pipeline. The most common gaps became the next development priorities. Not because a product manager decided they were important — because users proved they were, through their actual behavior.
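The logging half of that loop can be sketched in a few lines. This is an illustrative reconstruction, not the actual build.ai pipeline; the confidence threshold and the capability labels are assumptions.

```python
# Illustrative sketch of the gap-logging feedback loop: record every
# low-confidence answer, then rank the most common missing capabilities.
# Threshold and label names are hypothetical.
from collections import Counter

CONFIDENCE_THRESHOLD = 0.7
gap_log: list[str] = []

def record_answer(question: str, missing_capability: str, confidence: float) -> None:
    """Log the capability gap whenever the assistant answers below threshold."""
    if confidence < CONFIDENCE_THRESHOLD:
        gap_log.append(missing_capability)

def top_gaps(n: int = 3) -> list[tuple[str, int]]:
    """The most common gaps become the next development priorities."""
    return Counter(gap_log).most_common(n)
```

A handful of "Can I afford the camp?"-style questions logged under the same missing capability is enough to push it to the top of `top_gaps()` — the ranking is the demand signal.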
The development pipeline was responding to real demand signals flowing through the assistant layer, rather than guesses about what users might want.
Early Shape of the Mesh
These two lessons — cross-domain knowledge structure and feedback-driven development — pointed toward something broader than any individual app.
What I was building, without fully naming it yet, was a knowledge mesh. Not "app A talks to app B via API" — a network of knowledge stores that AI agents can query across, where the connections between domains are encoded in the knowledge itself, and where the agents understand enough about each domain to reason about the relationships.
The skydive camp question wasn't a three-API-call problem. It was a reasoning problem over a shared knowledge graph. And once I saw it that way, the architecture question shifted from "how does each app work independently" to "how does the knowledge layer enable cross-domain reasoning."
Different question. Different architecture.
way2do.ai: The Cross-Domain Product
This cross-domain thinking eventually led to the concept of way2do.ai — a consumer hub that bundles way2fly + way2move + way2save with a unified cross-domain assistant. The kind of assistant that can actually answer "Can I afford the camp?" because it understands all three domains natively.
"Plan my week" would pull from all three domains. Training sessions from way2move. Budget constraints from way2save. Weather windows and dropzone schedules from way2fly. Not three separate answers stitched together, but one plan that accounts for the relationships between fitness, finances, and flying.
way2do.ai validated that cross-domain knowledge architecture wasn't just an engineering exercise. It was a user-facing capability that no single-domain app could replicate.
MCP Fit This Well
This is where the Model Context Protocol became practical.
MCP was designed for this kind of use case: tool-based integration that lets an AI connect to multiple knowledge sources through a standardized interface. Instead of building custom glue code for every domain combination, I could expose each knowledge store as an MCP server. The assistant connects to all of them through the same protocol. Adding a new domain means adding a new MCP server, not rewriting the integration layer.
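The shape of that pattern — one protocol, many knowledge servers — can be shown without the real SDK. The sketch below is plain Python standing in for MCP, with hypothetical class and tool names; it is the integration pattern, not the protocol itself.

```python
# Plain-Python stand-in for the MCP pattern (NOT the real MCP SDK):
# each knowledge store exposes named tools behind one uniform interface,
# so adding a domain means adding a server, not new glue code.
from typing import Any, Callable

class KnowledgeServer:
    """Stands in for an MCP server: a named registry of callable tools."""
    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable[..., Any]] = {}

    def tool(self, fn: Callable[..., Any]) -> Callable[..., Any]:
        """Decorator that registers a function as a tool on this server."""
        self.tools[fn.__name__] = fn
        return fn

class Assistant:
    """Talks to every server through the same call interface."""
    def __init__(self):
        self.servers: dict[str, KnowledgeServer] = {}

    def connect(self, server: KnowledgeServer) -> None:
        self.servers[server.name] = server  # one more domain = one more connect()

    def call(self, server: str, tool: str, **kwargs: Any) -> Any:
        return self.servers[server].tools[tool](**kwargs)

way2save = KnowledgeServer("way2save")

@way2save.tool
def available_budget(category: str) -> int:
    return {"recreation": 3000}.get(category, 0)  # stub data for illustration

assistant = Assistant()
assistant.connect(way2save)
```

Connecting a way2fly or way2move server would be the same two lines: construct it, `connect()` it. That uniformity is what MCP standardizes for real.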
When I first adopted MCP, it was early. The protocol was new, the tooling was rough, and plenty of people in the community were skeptical. "Just use function calling," they'd say. "Just build a RAG pipeline."
But MCP got something right that those approaches miss: it standardizes how AI connects to context, not just how it calls functions. The difference matters — it's the difference between an AI that can work with systems and one that just executes commands against them. With 97 million monthly SDK downloads and counting, it's become the industry standard. It was a good bet.
If you're building multiple products or working in a domain-rich organization, think about your knowledge architecture before you think about your API architecture. The data relationships your AI needs to understand are not the same as the endpoint relationships your frontend needs to call. Design for the AI's reasoning path, not just the client's fetch path.
Seeing the Shape
By this point, the shape of the system was becoming clear. Multiple apps, each with their own domain knowledge, connected through a shared knowledge layer that any AI agent could query. The knowledge mesh was running. Cross-domain questions were getting answered. The feedback loop was surfacing real gaps.
But I was still building custom agents. Every new capability meant spinning up a new Claude Code session, writing new prompts, configuring new context injection. The orchestration was getting more sophisticated, but it was also getting heavier. More code to maintain. More edge cases. More things that could break.
I could see the shape of the system now. But I was still building custom agents — and I was about to find out that was the wrong focus.