There is a sentence I keep saying that lands harder every time I say it.
If you haven’t been doing open source, you don’t get to do open source.
It is a deliberate exaggeration. There is no certificate at the door. But the gap between companies that have an open-source posture and companies that do not is widening fast right now, and it is going to determine which enterprises can take advantage of the shifts happening in the AI landscape — and which ones are stuck on the rails the closed model providers built for them.
The signal that just got more valuable
For most of the last decade, an enterprise’s open-source involvement was treated as a “nice to have.” Some companies contributed back to the foundations they depended on. Some sponsored a release. Some open-sourced internal tools. Some quietly used everything and contributed nothing. None of that mattered very much in board-level conversations.
It matters now. Not because the politics changed, but because the AI landscape changed underneath everyone’s feet.
The headline shifts of the last year — open weights from credible research labs, permissive-license model families, the rapid catch-up of open inference performance against closed APIs, the appearance of open foundation models trained outside the existing big-three umbrella — all share one property. They are usable by enterprises that already know how to operate open-source software.
The companies that have been contributing to the foundations, running the build systems, integrating open-source pipelines, and respecting the licensing discipline for the last decade are positioned to take advantage of every one of these shifts. The companies that have not are watching from the other side of the glass.
The gating is not legal — it is operational
The interesting thing is that the gating is not really about license compliance, although that matters. It is operational.
To use an open model meaningfully you have to be able to host it, evaluate it, fine-tune it, run inference at production scale, monitor it, version it, decommission it, and audit its outputs. Each of those is a discipline that an open-source-posture organization has been quietly building for ten years. Each of those is a discipline that a closed-SaaS-only organization has been outsourcing for ten years.
When the open model arrives, the second organization does not have the muscle. They cannot just turn it on. They have no internal team that knows how to run it at the scale their business needs. They have no governance precedent for an artifact that is not coming from a vendor with a glossy compliance package. They have no existing pipeline to integrate it into. The model is technically free; using it is years of capability they do not have.
So they default to the path they already know. Build against a closed model's API. Pay per token. Accept the lock-in. Watch the open-source landscape from a distance, knowing it is theoretically cheaper and structurally more durable, and feeling the gap widen quarter over quarter.
What “open source DNA” actually looks like
When I look at a company’s signals, the open-source DNA shows up in specific places. Not in slideware. In the substrate.
It shows up in contribution — code commits to upstream foundations, governance involvement in standards bodies, participation in the boards and working groups that shape how the artifacts evolve. Companies that contribute have engineers who understand the discipline because they have lived it.
It shows up in consumption with discipline — open-source software running in production with a real story for how it is patched, audited, hardened, and supported. Not just “we use it.” We use it, and we know how it is made, who maintains it, and what we will do when the next CVE drops.
It shows up in publication — internal tools released back into the commons, specs published rather than hoarded, learnings written up rather than buried. Each one of those is a hiring signal, a recruiting signal, a posture signal that compounds.
It shows up in standards investment — actual time spent in working groups that produce the specs everyone else will eventually depend on. The companies that show up in those rooms are the ones who get to shape how the future works for them, instead of inheriting how the future was shaped for someone else.
A strong open-source signal is not a vanity metric. It is a structural asset. It is the muscle a company will need the moment open AI artifacts become the rational default for an enterprise workload — which, depending on the workload, is either already happening or about to.
The downstream consequence
Here is the consequence most leadership teams have not internalized yet.
If your AI strategy quietly depends on the closed providers continuing to be the cheapest, fastest, most-capable answer for every workload — that is a bet, and it is a bet against the trajectory of the last twenty-four months. You can make that bet and be right for a while. You will probably be wrong on a meaningful share of your future workloads, and the cost of being wrong is structural, not incremental.
The companies that come out of the next two years in the strongest position are going to be the ones who can move workloads between closed and open foundations fluently — picking the right artifact for each job, running it on infrastructure they control, governing it under standards they have a say in. Doing that fluently is an open-source DNA problem, not a vendor selection problem.
Where this leaves the laggards
If your company has not been doing open source for the last decade, you do not get to start tomorrow with the same posture as the companies that have been at it the whole time. That is the honest version. There is a ramp, and the ramp takes years to climb.
But there is a starting point. Every company sits somewhere on the curve. Find one foundation worth contributing to. Find one internal tool worth open-sourcing. Find one standards working group worth showing up in. Find one open-source artifact worth running with full discipline, in production, with a real on-call story. Each of those is a small move on its own. Together they compound into the muscle the next decade is going to require.
The companies that started building this muscle a decade ago are the ones the AI landscape is now bending in favor of. The companies that never started are the ones who keep paying per token to providers whose business model depends on them never starting.
That is the readiness signal. It was always a signal. Now it is the signal that matters.