Jack Gardner

How AI-native organizations will work

How leverage and coordination change when agents do the work: an extension of Conway's Law for the AI-native firm.

Published 07 May 2026
San Francisco

This is a thesis about how organizations will be structured when AI agents become the dominant layer that does the work. I was thinking through Conway's Law (1968), which observed that the products organizations build mirror the communication structures of their teams, and wondering how it would hold up in an AI-native company. In general, Conway's Law remains true. The thesis here is that it becomes even more important in an AI-native world, where the binding constraint on coordination is no longer human-to-human communication but the legibility of an organization's knowledge to the agents that increasingly do its work.

The thesis:

In an AI-native organization, leverage is a function of what is written, not who is hired.

An AI-native organization's output is constrained by the depth and structure of its written knowledge, not by its headcount.

Companies become what they write down.

To change what your company can produce, change what it has written down.

What is an AI-native organization?

Most companies in 2026 use AI tools. That is not the same as being AI-native. The distinction is about organizational structure, not technology adoption.

An AI-native organization is one in which work is designed assuming agents are the default layer that does it and humans are the exception layer. The question asked when designing any process is "why is a human doing this?" rather than "where can we add AI?" Operating knowledge lives in documents that both humans and agents read. Coordination happens through those documents, not through meetings. Work in an AI-native organization occurs in what I call a judgment-loop: a recurring pattern in which an agent runs autonomously across many steps, surfacing to a person only at the judgment calls where taste or a decision is actually required.
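As control flow, the judgment-loop is simple to sketch. A minimal Python illustration follows; every name in it (Step, judgment_loop, ask_owner) is invented for the example rather than drawn from any real system:

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Step:
    description: str
    needs_judgment: bool  # True only where taste or a decision is required
    result: str = ""

def judgment_loop(steps: Iterable[Step], ask_owner: Callable[[Step], str]) -> list[Step]:
    """Run steps autonomously; surface to a human only at judgment calls."""
    done = []
    for step in steps:
        if step.needs_judgment:
            # The exception layer: a person supplies the decision.
            step.result = ask_owner(step)
        else:
            # The default layer: the agent completes the step without interruption.
            step.result = f"agent completed: {step.description}"
        done.append(step)
    return done

steps = [
    Step("pull last quarter's churn data", needs_judgment=False),
    Step("choose which customer segment to prioritize", needs_judgment=True),
    Step("draft outreach for that segment", needs_judgment=False),
]
judgment_loop(steps, ask_owner=lambda s: f"owner decided: {s.description}")
```

The ratio of autonomous branches to ask_owner branches is exactly the lever the rest of this piece is about.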

It follows that company headcount is determined by the number and quality of judgment-loops the company needs, not by task volume. The org chart becomes a map of where humans sit inside the company's written knowledge, rather than the mechanism of coordination itself. The more of the work AI can accomplish (at a quality bar equal to or higher than the person alone), the more actions run autonomously and the fewer judgment-loops remain.

Senior leadership in any organization extends well beyond the CEO, and in an AI-native organization these people are critical. We will use the term owner to mean anyone whose written judgment shapes the work other people and other agents do, not just their own: the CEO, other executives, VPs, product managers, senior individual contributors, function leads (legal, security, design). The CEO is one owner among many, at the broadest scope. In AI-native organizations, owners spend disproportionate time writing the documents that propagate through every agent run in their domain.

What happens when the CEO writes a memo?

Take the most universal example of an owner doing what owners do. A CEO writes a memo: a strategic shift, a values statement, a tradeoff the company is now making. In a traditional firm, the memo lands in Slack or email. Some people read it carefully. Some skim it. Some never see it. Over the following weeks, individual decisions across the company drift back toward the patterns that existed before the memo. Six weeks later, the CEO is in a meeting where someone proposes something that contradicts what the memo said. The CEO re-explains. Maybe they write a follow-up. The leverage of the original memo decays as it travels.

In an AI-native firm, the memo enters the company's knowledge base. From that moment, every agent across the company that touches relevant work grounds in it. The product manager's agent drafting the next roadmap proposal references it. The marketing team's agent shaping positioning aligns with it. The legal lead's agent reviewing a contract grounds in it. The CEO has not attended any of those moments. The CEO's judgment is in every output, applied the same way every time, without re-explanation.

Then outcomes feed back. The proposals that ship, the positioning that gets used, the contracts that close. What worked, what did not. Those outcomes flow back into the knowledge base as evidence. When the CEO is thinking about the next strategic move, their agent surfaces the original memo alongside the record of how the company has been applying it. Instead of a one-off announcement that quickly fades away, the memo becomes a living system: it actively guides the daily work, learns from real-world results, and directly shapes what the CEO decides and writes next.
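To make the loop concrete, here is a sketch in Python. The KnowledgeBase class and its methods are invented for illustration, not any particular product's API; the point is that publishing, grounding, and outcome-recording all touch one shared object:

```python
class KnowledgeBase:
    """Hypothetical store: documents written by owners, evidence written back from agent runs."""

    def __init__(self):
        self.documents: dict[str, str] = {}
        self.evidence: list[dict] = []

    def publish(self, doc_id: str, text: str) -> None:
        self.documents[doc_id] = text  # the CEO's memo enters here

    def ground(self, query: str) -> list[str]:
        # Stand-in for retrieval: every relevant agent run reads the same source.
        return [t for t in self.documents.values() if query.lower() in t.lower()]

    def record_outcome(self, doc_id: str, outcome: dict) -> None:
        # Outcomes feed back: what shipped, what converted, what closed.
        self.evidence.append({"doc": doc_id, **outcome})

kb = KnowledgeBase()
kb.publish("ceo-memo-2026-05", "Strategy: prioritize enterprise reliability over feature velocity")

# Each agent run grounds in the memo rather than in hearsay about it.
context = kb.ground("enterprise reliability")

# And results flow back in, so the next memo is informed by the last one's effects.
kb.record_outcome("ceo-memo-2026-05", {"artifact": "roadmap-q3", "shipped": True})
```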

Levels of organizational autonomy

AI adoption is a spectrum. Most companies in 2026 are at the early end: some AI tools in use, organizational shape unchanged. Fully AI-native sits at the far end: agents handle most of the work, owners configure the knowledge base, and the loop closes continuously. Functions within a single company can be at different levels at the same time. Some functions might be early adopters before wider company rollout.

A six-level scale of organizational autonomy from L0 (pre-AI, coordination through meetings and the management chain) to L5 (self-improving AI-native, with agents maintaining the knowledge base).

What changes

Today, knowledge lives in people's heads, and it travels between heads through meetings, mentorship, observation, and shared experience. The transfer is slow and lossy but it works. Coordination happens through the org chart: managers exist to surface disagreements, resolve them, and propagate the resolution down. The chart isn't just a map of who reports to whom; it's how the company actually gets things done. Bigger scope means more people, more layers, and more places where opinions need resolving on the way.

In an AI-native company this shifts. Knowledge lives in what is written, and the agent layer reads from it directly. Tribal knowledge that stays in people's heads becomes invisible to the layer of the company that increasingly does the work. Coordination is more direct, because the knowledge base handles the alignment that managers used to do; the org chart thins, and the layers of opinion-resolution thin with it.

This creates a new company dynamic. Every well-written principle document raises the floor on every agent run that grounds in it; every undocumented piece of tribal knowledge is a ceiling on how far the agent layer can run autonomously. Companies need fewer people, hired more carefully, whose work is vastly amplified, not because the people who remain are smarter, but because their written judgment compounds through every agent run that reads it. Two companies with the same headcount but different depths of written knowledge produce dramatically different output.

Coordination through the knowledge base

How does coordination work when management chains thin? Through what gets written.

Take a transitional example. Before AI tools existed, a sales leader would document the playbook (the ICP, the positioning, the objections, the qualification criteria) and hand it to a team of SDRs to interpret and apply across hundreds of outbound conversations themselves. The actual impact of the playbook depended on how well each SDR carried it (and on how closely the sales manager enforced or coached it). Today, AI sales development tools (11x, where I worked, and other companies in the category) cut out the middleman. The sales leader configures the playbook in the tool, the AI agent runs the outbound, and salespeople take the meetings the agent books and work to close them. The playbook is applied exactly as intended across every conversation, not interpreted differently by ten different SDRs. The sales leader now has more leverage than they have ever had: changing the playbook changes every outbound conversation that happens next, instantly, without re-teaching anyone. Outcomes feed back too: which messages got responses, which segments converted, what objections came up. The next playbook is informed by what the last one produced. The sales leader has gained both reach and visibility, at a fidelity that was previously impossible: their judgment applied in every conversation, and the evidence of how it played out feeding straight back.
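What "configuring the playbook" amounts to is structured data the agent reads on every run. A hedged sketch, with field names invented for the example rather than taken from 11x's or anyone's actual schema:

```python
# Hypothetical playbook schema. Editing this one object changes every
# outbound conversation the agent runs next, with no re-teaching step.
playbook = {
    "icp": {"segment": "mid-market SaaS", "min_seats": 200},
    "positioning": "Reliability-first: lead with uptime, not feature count",
    "qualification": ["budget confirmed", "security review owner identified"],
    "objections": {
        "too expensive": "Reframe against the cost of downtime; cite SLA terms",
    },
}

# Outcomes accumulate against the same object, so the next revision of the
# playbook is written with evidence of what the last one produced.
outcomes = [
    {"segment": "mid-market SaaS", "replied": True, "objection_hit": "too expensive"},
]
```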

Now take that a step further. In a fully AI-native company, this same loop runs at every level and across every kind of boundary, and the work feeding back into the knowledge base happens continuously as part of normal operation rather than as a separate effort. A CEO writes principles about what the company values, and every agent across every function operates by them. A legal lead documents how the company handles user data, and every team's agents working on anything that touches user data incorporate it automatically (the cross-functional case). A principal engineer writes the architectural principles, and every coding agent in the codebase runs on those principles before any pull request is opened (the within-function case). The mechanism is the same in all of them: an owner writes for their domain, agents read from what is written and produce outcomes, the outcomes feed back into the knowledge base, and the next decision (the CRO's, the head of product's, the CEO's) is grounded in richer context than the last. (Software engineers will recognize this as inversion of control applied to organizational structure: rather than directives flowing top-down through management layers, behavior shifts when the shared knowledge every agent reads from is updated.)
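The inversion-of-control analogy is worth a sketch. In the toy Python below (all names illustrative), no directive is pushed to either agent; both hold a reference to the same shared document, so updating the document shifts behavior everywhere at once:

```python
# One shared, owner-maintained document that every agent reads from.
shared_principles = {"data_handling": "Never export user data outside the EU region"}

class ContractReviewAgent:
    def __init__(self, principles: dict):
        self.principles = principles  # injected dependency: the knowledge base

    def review(self, clause: str) -> str:
        return f"Checked against '{self.principles['data_handling']}': {clause}"

class CodingAgent:
    def __init__(self, principles: dict):
        self.principles = principles  # same source, different function

    def check_pr(self, summary: str) -> str:
        return f"PR gated on '{self.principles['data_handling']}': {summary}"

legal = ContractReviewAgent(shared_principles)
coder = CodingAgent(shared_principles)
print(legal.review("Vendor stores backups in us-east-1"))

# The owner updates the document; both agents' behavior shifts immediately,
# without anything travelling down a management chain.
shared_principles["data_handling"] = "User data may replicate to the US under SCCs"
print(coder.check_pr("adds US replication for user tables"))
```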

Individual contributors feel more autonomous than under traditional management, because no one is constantly directing them. At the same time, owner reach amplifies, because their judgment now instantly scales without managers acting as middlemen, and the outcomes of every application of their judgment feed back to inform what they do next. Both happen at once, because the knowledge base has absorbed the coordination work that managers used to do.

The shape of an AI-native firm: owners (CEO and other domain owners) above a central knowledge base write into it, while agents inside pods below both read from it and contribute back as their work proves out.

This playbook isn't entirely new

The best companies have always run on written knowledge. Communication has always been the foundation of high-performing teams; companies that wrote their decisions down, kept better records, and built durable processes have outperformed those that did not, all else equal, for as long as companies have existed. But the advantage of writing was bounded: the only readers of internal artifacts were other humans, and humans can fall back on meetings, mentorship, and shared experience when the documentation is thin. Writing was a force multiplier, but not the main engine.

What changes now is the share of the work that relies entirely on what is written. As more of a company's output flows through agents that read directly, the distance between companies that have invested in clear, durable knowledge and those that have not stops being a soft advantage. It becomes the ceiling on what each can do.

What this does not mean

It does not mean humans matter less. High-judgment humans matter more, not less, because they are the source of what gets written. When AI amplifies the work, it also amplifies the cost of weak judgment; talent quality requirements rise.

It does not apply equally across all functions. Functions involving high-judgment, low-volume coordination (strategy, R&D, design, executive decision-making) compress under this new AI-native model. Functions involving high-volume operational execution (manufacturing at scale, regulated clinical operations, physical logistics) need a different overlay: a small judgment-loop core for taste and strategy, plus a larger operational layer that runs structured work via agents, with humans handling exceptions.

It does not solve itself. Calling it "the knowledge base," as if it were a single clean object, skips the hardest part: keeping it current, resolving contradictions between owners, knowing which document an agent relied on at decision time. Agents can help (drafting summaries, flagging contradictions, surfacing stale documents for owner review), but the work of arbitrating between owners, deciding what becomes binding, and signing off on changes remains a human owner's job. Versioning, governance, and quality control across the knowledge base are unsolved at company scale. This piece focuses on the organizational shift; the systems that hold a knowledge base together are a separate problem, and a deeper one.

Where this leaves us

In an AI-native organization, leverage is a function of what is written, not who is hired.

The work that compounds is no longer hiring more people, coordinating better, or rolling out more tools. It is writing the company down.

Done well, the company gets quieter, smaller, and more legible to itself.

Done badly, the same system scales bad judgment just as fast.

What's changing isn't how fast we work. It is what work is.