Jack Gardner
Thesis · 11 min read

How AI-native organizations will work

How leverage and coordination change when agents do the work: an extension of Conway's Law for the AI-native firm.

Published 05 May 2026
San Francisco

This is a thesis about how organizations will be structured when AI agents become the dominant layer that does the work. It is meant as an extension of Conway's Law (1968), which observed that the products organizations build mirror the communication structures of their teams. Conway's Law remains true. The thesis here is that it becomes even more decisive in an AI-native world, where the binding constraint on coordination is no longer human-to-human communication, but the legibility of an organization's knowledge to the agents that increasingly do its work.

The thesis, in one line:

In an AI-native organization, leverage is a function of what is written, not who is hired.

Three forms of the thesis

The same thesis, in three forms:

An AI-native organization's output is constrained by the depth and structure of its written knowledge, not by its headcount.

Companies become what they write down.

To change what your company can produce, change what it has written down.

What is an AI-native organization?

Most companies in 2026 use AI tools. That is not the same as being AI-native. The distinction is about organizational structure, not technology adoption.

An AI-native organization is one in which the work is designed assuming agents are the default layer that does the work, and humans are the exception layer. The question asked when designing any process is "why is a human doing this?" rather than "where can we add AI?" Operating knowledge lives in documents that both humans and agents read. Coordination happens through those documents, not through meetings. The unit of organizational leverage is the judgment-loop: a recurring pattern in which an agent runs autonomously across many steps, surfacing to a person only at the judgment calls where taste or decision is actually required.
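As a sketch of the shape of a judgment-loop (the classes, names, and escalation flag below are illustrative stand-ins, not a reference to any particular agent framework), the pattern is simply this: the agent is the default executor, and a step surfaces to a person only when it is marked as a judgment call.

```python
from dataclasses import dataclass

# Illustrative only: a judgment-loop in miniature. The agent executes every step it
# can on its own and surfaces to a person only where a judgment call is flagged.

@dataclass
class Step:
    description: str
    needs_judgment: bool = False  # True when taste or a decision is required
    result: str | None = None

class Agent:
    def execute(self, description: str) -> str:
        return f"agent handled: {description}"

class Owner:
    def decide(self, description: str) -> str:
        return f"owner decided: {description}"

def run_judgment_loop(steps: list[Step], agent: Agent, owner: Owner) -> list[Step]:
    for step in steps:
        if step.needs_judgment:
            step.result = owner.decide(step.description)   # the exception layer
        else:
            step.result = agent.execute(step.description)  # the default layer
    return steps

if __name__ == "__main__":
    plan = [
        Step("draft the roadmap proposal"),
        Step("choose between the two candidate positionings", needs_judgment=True),
        Step("update the tracking documents"),
    ]
    for step in run_judgment_loop(plan, Agent(), Owner()):
        print(step.result)
```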

Headcount is determined by the number and quality of judgment-loops the company needs, not by task volume. The org chart becomes a map of where humans sit inside the company's written knowledge; the canon, not the chart, is what coordinates the work.

Senior leadership in an AI-native organization extends well beyond the CEO. We will use the term domain owner, or owner for short, to mean anyone who exercises taste over a particular domain and writes down what it should be: the CEO, other executives, VPs, product managers, senior staff individual contributors, function leads (legal, security, design). Any individual in a company has to act as an owner of their own work in the everyday sense; the term as we use it here is more specific. An owner is someone whose written judgment shapes the work other people and other agents do, not just their own. The CEO is one owner among many, at the broadest scope. Owners spend disproportionate time writing the documents that propagate through every agent run in their domain.

What happens when the CEO writes a memo?

Take the most universal example of an owner doing what owners do. A CEO writes a memo: a strategic shift, a values statement, a tradeoff the company is now making. In a traditional firm, the memo lands in Slack or email. Some people read it carefully. Some skim it. Some never see it. Over the following weeks, individual decisions across the company drift back toward the patterns that existed before the memo. Six weeks later, the CEO is in a meeting where someone proposes something that contradicts what the memo said. The CEO re-explains. Maybe they write a follow-up. The leverage of the original memo decays as it travels.

In an AI-native firm, the memo goes into the canon. From that moment, every agent across the company that touches relevant work grounds in it. The product manager's agent drafting the next roadmap proposal grounds in it. The marketing team's agent shaping positioning grounds in it. The legal lead's agent reviewing a contract grounds in it. The CEO has not attended any of those moments. The CEO's judgment is in every output, applied the same way every time, without re-explanation.

Then outcomes feed back. The proposals that ship, the positioning that gets used, the contracts that close. What worked, what did not. Those outcomes feed into the canon as evidence. When the CEO is thinking about the next strategic move, the agent surfaces the original memo alongside the record of how the company has been applying it. The memo is not a broadcast that decays. It is a configuration that propagates, gets tested in the work, and informs whatever the CEO writes next.

Levels of organizational autonomy

Companies sit on a gradient, not at a single point. Most companies in 2026 are at the early end: some AI tools in use, organizational shape unchanged. Fully AI-native sits at the far end: agents handle most of the work, owners configure the knowledge base, and the loop closes continuously. Functions within a single company can be at different levels at the same time. The journey is by function first, by company last.

A six-level scale of organizational autonomy from L0 (pre-AI, coordination through meetings and the management chain) to L5 (self-improving AI-native, with agents maintaining the knowledge base), modeled on the SAE levels of vehicle autonomy.

Why this works

The mechanism is straightforward when traced.

Without an agent layer that reads from the canon. Knowledge living in people's heads was acceptable, because the only thing that needed to read it was other people. Humans transfer knowledge through meetings, mentorship, observation, and shared experience. The transfer is slow and lossy but it works. The org chart was a reasonable proxy for the actual coordination topology, because human-to-human communication was the binding constraint on what a company could build.

With one. Every agent in the company is a potential reader of the canon. Agents cannot read what is not written. Tacit knowledge that lives only in people's heads becomes invisible to the entire layer of the company that increasingly does the work.

This produces an asymmetry between writing and not-writing that did not previously exist. A company that codifies a piece of judgment makes that judgment accessible to thousands of agent runs per day. A company that does not codify the same judgment leaves it accessible only to the humans who happen to remember it. Every well-written principle document raises the floor on every agent run that grounds in it. Every undocumented piece of tacit knowledge is a ceiling on how far the agent layer can run autonomously. Over a few years, two companies with the same headcount but different canon depths produce dramatically different output.

Coordination through the canon

How does coordination work when management chains thin? Through what gets written.

Take a transitional example. Before AI tools existed, a sales leader would document the playbook (the ICP, the positioning, the objections, the qualification criteria) and hand it to a team of SDRs to interpret and apply across hundreds of outbound conversations themselves. The leverage of the playbook depended on how well each SDR carried it (and on how closely the sales manager enforced or coached it). Today, AI sales development tools (11x and other companies in the category) collapse that step. The sales leader configures the playbook in the tool, the AI agent runs the outbound, and salespeople take the meetings the agent books and work to close them. The playbook is applied at full fidelity across every conversation, not interpreted differently by ten different SDRs. The sales leader now has more leverage than they have ever had: changing the playbook changes every outbound conversation that happens next, instantly, without re-teaching anyone. Outcomes feed back too: which messages got responses, which segments converted, what objections came up. The next playbook is informed by what the last one produced. The sales leader has gained both reach and visibility, at a fidelity that was previously impossible: their judgment applied in every conversation, and the evidence of how it played out feeding straight back.

Now take that a step further. In a fully AI-native company, this same loop runs at every altitude and across every kind of boundary, and the work feeding back into the canon happens continuously as part of normal operation rather than as a separate effort. A CEO writes principles about what the company values, and every agent across every function grounds in them. A legal lead documents how the company handles user data, and every team's agents working on anything that touches user data incorporate it automatically (the cross-functional case). A principal engineer writes the architectural canon, and every coding agent in the codebase grounds in it before any pull request is opened (the within-function case). The mechanism is the same in all of them: an owner writes canon for their domain, agents working under that canon produce outcomes, the outcomes feed back into the canon, and the next decision (the CRO's, the head of product's, the CEO's) is grounded in richer context than the last. (Software engineers will recognize this as inversion of control applied to organizational structure: rather than directives flowing top-down through management layers, behavior shifts when the shared knowledge every agent reads from is updated.)
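For the software engineers, a loose sketch of that inversion of control (the Canon and CanonGroundedAgent classes here are hypothetical stand-ins, not any real product or API): behavior changes by rewriting the shared document every agent grounds in, not by instructing each agent directly, and each run writes its outcome back as evidence.

```python
# Illustrative stand-ins only: a shared canon that agents ground in, with outcomes
# written back as evidence. Updating the canon changes the next run of every agent
# that reads it; no agent is re-briefed directly.

class Canon:
    def __init__(self):
        self.documents: dict[str, str] = {}   # owner-written principles and playbooks
        self.evidence: list[str] = []         # outcomes fed back from agent runs

    def write(self, name: str, text: str) -> None:
        self.documents[name] = text

    def read_all(self) -> str:
        return "\n".join(self.documents.values())

    def record(self, outcome: str) -> None:
        self.evidence.append(outcome)

class CanonGroundedAgent:
    def __init__(self, canon: Canon, domain: str):
        self.canon = canon
        self.domain = domain

    def run(self, task: str) -> str:
        grounding = self.canon.read_all()     # every run grounds in the current canon
        outcome = f"[{self.domain}] {task}, grounded in: {grounding!r}"
        self.canon.record(outcome)            # the loop closes: the outcome becomes evidence
        return outcome

canon = Canon()
canon.write("strategy-memo", "Prioritize retention over new-logo growth this year.")

agents = [CanonGroundedAgent(canon, d) for d in ("product", "marketing", "legal")]
for agent in agents:
    print(agent.run("draft next quarter's plan"))

# The owner changes behavior by changing the canon, not by directing each agent.
canon.write("strategy-memo", "Prioritize enterprise expansion this year.")
for agent in agents:
    print(agent.run("draft next quarter's plan"))
```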

Individual contributors feel more autonomous than under traditional management, because no one is constantly directing them. At the same time, owner reach amplifies, because their judgment now propagates without management hops, and the outcomes of every application of their judgment feed back to inform what they do next. Both happen at once, because the canon has absorbed the coordination work that managers used to do.

The shape of an AI-native firm: owners (CEO and other domain owners) above a central canon write into it, while agents inside pods below the canon both read from it and contribute back as their work proves out.

This dynamic predates AI

The pattern this thesis describes is older than AI. Companies that wrote their decisions down, kept better records, and built durable processes have outperformed those that did not, all else equal, for decades. The advantage was modest because the only consumers of internal artifacts were other humans, and humans can fall back on meetings, mentorship, and shared experience when the documentation is thin. The advantage was real, but writing was a marginal contributor to leverage rather than the dominant one.

Where the same pattern shows up in stronger form, it has been transformative. The Toyota Production System turned manufacturing into a written and continuously revised body of standard work. Peer-reviewed journals turned research into a cumulative artifact-driven enterprise. Open-source projects coordinate thousands of contributors through code, documentation, and issues with almost no real-time communication. None of these depend on AI. They depend on a layer (workers, scientists, contributors) that reads and acts on what gets written. AI agents are simply the latest such layer, applied to the rest of the work organizations do.

What this does not mean

It does not mean humans matter less. High-judgment humans matter more, not less, because they are the source of what gets written. AI-native companies will hire fewer people, more carefully, at higher individual leverage. The cost of weak judgment scales with the leverage; talent quality requirements rise.

It does not apply equally across all functions. Functions involving high-judgment, low-volume coordination (strategy, R&D, design, executive decision-making) compress most aggressively under this thesis. Functions involving high-volume operational execution (manufacturing at scale, regulated clinical operations, physical logistics) need a different overlay: a small judgment-loop core for taste and strategy, plus a larger operational layer that runs structured work via agents, with humans handling exceptions.

A note on the moment

Conway's Law was descriptive in 1968 and remained descriptive for fifty years. The thesis here is descriptive of a transitional moment, roughly 2025 to 2030, in which the organizations that internalize it earliest will compound advantages that later entrants will struggle to close. By 2030 it will probably read as common sense rather than insight. The window for asymmetric advantage is short, which is one reason it is worth writing down now.