March 16, 2026

Context Graphs in the Physical World: Capturing the ‘Why’ Behind Industrial Decisions

The context graph conversation has been dominated by horizontal enterprise players like HubSpot, Glean, and Box, thinking across every workflow and every industry. But barely anyone is talking about context graphs for the physical world. In this piece, Bardin CEO Fay Goldstein argues that industrial application engineering is the most exception-heavy, physics-constrained, and knowledge-dense domain the context graph thesis has ever encountered, and the one where capturing the "why" behind decisions isn't just valuable, it's urgent. With 25% of the industrial workforce over 55 and retiring, the tribal knowledge that took companies decades to build is walking out the door. Here's why that's a trillion-dollar problem, and how Bardin is building the reasoning layer to solve it.

About a month ago, I took a deep dive down a rabbit hole that started with a post Guy Korland from FalkorDB (a great partner of ours at Bardin) shared from Ashu Garg and Jaya Gupta at Foundation Capital on context graphs, and ended several hours later having read the follow-on responses from Dharmesh Shah at HubSpot, the team at Conviva and Carnegie Mellon University, Aaron Levie at Box, Arvind Jain at Glean, and others.

Most of these responses came out months ago, and I've rewritten this piece eight times since (and shockingly even shortened it multiple times). But I finally decided that I needed to share my thoughts, because every week the conversation got more relevant and the missing piece stayed missing.    

Barely anyone in this Silicon Valley bubble was talking about context graphs for the physical world. And to be clear: the physical world isn't missing from broad investment or tech conversations.

Quite the opposite. Reindustrialization, defense manufacturing, physical AI, it's all anyone wants to talk about right now, and honestly, after spending 2024 insisting we needed to be building for manufacturers when that wasn't exactly the sexiest pitch in the room, I'll take the vindication 😉.    

Billions are pouring into robots, factories, and autonomous systems.    

And yet, you can fund the robot, you can build the factory, you can spec the autonomous line, but none of it moves until someone, or something, scopes it, sells it, and supports it.

And yes, that's exactly what agents are for (and we’re building them too). But whether it's a thirty-year veteran or an AI agent doing the scoping, neither one can reason well from a blank slate. They both need access to the “why”. The precedent. The physics-grounded reasoning behind why certain configurations work in certain environments and others don't. Without that layer, the human burns hours reinventing what someone already figured out, and the agent hallucinates a configuration that looks right until it fails in the field.    

I spoke to a system integrator sales leader recently, thirty years in the business, who turned down a defense contract not because they lacked technical capability, but because they didn't have enough application engineers to scope it confidently enough to make it worth the risk.    

More than a human capital problem, it's a missing-reasoning-layer problem. And it's the conversation this context graph debate has been missing entirely.

At Bardin, we've been building that reasoning layer, the decision infrastructure for industrial commerce. And when I read through these context graph blogs, I kept thinking that while this framework puts into words what we've been building, it’s not all the way there yet.    

To understand why it matters so much here specifically and the gap we’re filling, it helps to start with what everyone in this debate got right.    

What the Context Graph Enterprise Conversation Did Get Right    

Ashu and Jaya said it plainly: the next trillion-dollar platforms won't just capture what happened, they'll capture why it happened.    

As they eloquently explained: incumbents know you closed a deal at a 20% discount. They don't know which exceptions applied, what precedent justified it, who approved it and why, or what similar deals influenced the decision. They're built on "current state" storage. They capture final outcomes but can't replay the state of the world when decisions were made.

That gap is what people are calling "tacit knowledge," or what I call the knowledge tax. It’s hard and damn expensive to access this knowledge because those “whys” and “decisions” have never been treated as first-class data until now.    

The accumulated structure of captured decision traces is the context graph: a living, queryable record of how decisions were made, stitched across systems and time so precedent becomes searchable. When I explain it to our customers, I describe it as a neural network of "whys" that can be traversed back.    
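
To make that "traversable why" concrete, here's a minimal sketch of how a single decision trace could be stored and queried in a graph database such as FalkorDB. Everything here is illustrative: the node labels, properties, and relationship names are hypothetical stand-ins, not Bardin's production schema.

```python
# A minimal sketch (hypothetical schema) of a decision trace stored as a
# graph, using FalkorDB's Python client and Cypher.
from falkordb import FalkorDB

db = FalkorDB(host="localhost", port=6379)  # assumes a local instance
g = db.select_graph("application_memory")

# Capture one decision with its "why": the constraint that drove it and
# the precedent that justified it.
g.query("""
    MERGE (d:Decision {id: 'D-2041', choice: 'oversize motor to 7.5 kW'})
    MERGE (c:Constraint {kind: 'thermal', detail: 'ambient 55C, washdown'})
    MERGE (p:Project {id: 'P-0873', environment: 'food processing'})
    MERGE (d)-[:JUSTIFIED_BY]->(c)
    MERGE (p)-[:PRECEDENT_FOR]->(d)
""")

# Traverse the "whys" back: which past projects set precedent for
# decisions driven by similar thermal constraints?
res = g.query("""
    MATCH (p:Project)-[:PRECEDENT_FOR]->(d:Decision)-[:JUSTIFIED_BY]->(c:Constraint)
    WHERE c.kind = 'thermal'
    RETURN p.id, d.choice, c.detail
""")
for row in res.result_set:
    print(row)
```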

Classic context graph use cases like the ones I mentioned offer a powerful frame, but they were designed to describe enterprise software workflows. It's way, way more complex in the industrial and physical space, and there's a structural blind spot in the current thinking.

The Limits and Paradox of Horizontal Context Graph Thinking    

To understand why the physical world is missing from so much of the Twitterverse conversation, look at who's been leading it: Foundation Capital, the CEO of HubSpot, Glean, Box. Brilliant people building important companies. But they're all horizontal players, thinking about context graphs across workflows that span nearly every industry and every use case.

And because of that scope, both Dharmesh and Arvind arrive at the same conclusion from different angles.    

Dharmesh says context graphs are "inevitable" but far off: "Companies are still struggling with basic data unification... Asking companies to capture decision traces when they haven't even deployed agents at scale yet is sort of like asking someone to install a three-car garage when they don't own a single car."    

Arvind says: "Creating this level of knowledge and understanding isn't easy. Building context graphs is hard... At Glean, for example, our task understanding reaches ~80% accuracy, an indicator of how strong all the upstream technology needs to be to make this viable."    

They're both right. For horizontal AI.    

This is where I climb back up to the hill I'm willing to (proverbially) die on: they're missing the magic and value of vertical AI.    

When you start at the vertical level, with deep understanding of a specific domain's physics, workflows, and decision patterns, you can build a context graph that actually works today, not in five years.    

Because in a vertical, the ontology already exists. The questions engineers ask, the order they ask them, the constraints that govern the answers, these are knowable in advance.    

You build the schema first, grounded in real domain knowledge, and let every project either confirm or challenge what's already structured. That's a fundamentally different epistemology, and it only makes sense once you understand what makes industrial so different from every other domain this conversation has touched.    
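
As a toy illustration of what "schema first" means in practice, here's roughly how a vertical ontology might be declared up front. Every type and field below is a hypothetical simplification, not our actual model.

```python
# A toy "schema first" ontology for industrial application engineering.
# Every new project either fits these declared types or surfaces an
# exception worth capturing. All fields are hypothetical simplifications.
from dataclasses import dataclass, field

@dataclass
class Environment:
    ambient_temp_c: float
    ip_rating_required: str                 # e.g. "IP69K" for washdown
    hazards: list[str] = field(default_factory=list)  # dust, chemicals...

@dataclass
class Application:
    load_kw: float
    duty_cycle: str                         # "continuous", "intermittent"
    environment: Environment

@dataclass
class DecisionTrace:
    application: Application
    chosen_config: str
    rejected_alternatives: list[str]
    rationale: str                          # the "why" as first-class data
```

The point isn't the specific fields; it's that the questions and constraints are knowable before the first project arrives, so every new trace lands in structure instead of free text.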

Physical Constraints and Causal Reasoning Are Why Industrial Is Different (And Both Easier and Harder)

In general enterprise AI, context graphs capture procedural logic: “we always give healthcare companies 10% extra because procurement is slow”. “We structured a similar deal for Company X last quarter”. This is organizational memory: valuable, but ultimately about business rules that can change with the next VP of Sales.    

In industrial automation, context graphs must capture physics.    

When an application engineer selects a motor for a conveyor system, they're not making a business policy decision. They're solving a physics problem: thermal environment, mechanical constraints, electrical requirements, environmental factors. The same motor behaves completely differently in a food processing plant versus a lithium mine versus a pharmaceutical cleanroom.    

Standard configurations almost never work unchanged. Unlike business rules that can change with policy, physical constraints are immutable, unless you're Keanu Reeves in the Matrix.    
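
To see why the physics bites, here's a deliberately simplified thermal-derating check. The derating table is a placeholder for illustration only; real factors always come from the manufacturer's datasheet.

```python
# Illustrative only: a toy thermal-derating check for motor selection.
# The factors below are placeholders, not engineering guidance.
DERATE_BY_AMBIENT_C = {40: 1.00, 45: 0.95, 50: 0.90, 55: 0.85, 60: 0.80}

def usable_output_kw(rated_kw: float, ambient_c: float) -> float:
    """Derate a motor's rated output for ambient temperature."""
    # Use the derating factor of the nearest bracket at or above ambient.
    for limit in sorted(DERATE_BY_AMBIENT_C):
        if ambient_c <= limit:
            return rated_kw * DERATE_BY_AMBIENT_C[limit]
    raise ValueError(f"ambient {ambient_c}C exceeds table; needs review")

# The same 5.5 kW motor delivers very different usable power in a
# pharmaceutical cleanroom at 22C versus a foundry bay at 55C.
print(usable_output_kw(5.5, 22))   # 5.5
print(usable_output_kw(5.5, 55))   # 4.675
```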

The reasoning behind those constraints is causal, not just tribal, and it requires people with a very specific and expensive skill set to navigate it. Understanding who those people are, spending over a year speaking with them all, and deeply understanding why their knowledge has never been captured is what makes the opportunity we're after at Bardin so clear.

Application Engineering: The Ultimate Glue Function Industrial AI Can't Ignore    

Foundation Capital identified the highest-value opportunities at what they called "glue functions": roles that exist precisely because no system captures cross-functional context, like RevOps, DevOps, SecOps.    

Application engineering is the industrial world's quintessential glue function.    

Application engineers, also known as FAEs, solution architects, technical solution specialists, and a whole slew of other alphabet-soup titles, sit between customers and sales, product development and engineering, compliance, and all the ManufacturingOps functions that haven't yet made it into the vernacular of the acronym rulers.    

Before any automated system gets built, they answer: given what the customer is trying to do, what system configuration will actually work in the real world?    

This knowledge has never been systematically captured.    

It lives in 30-year veterans who "just know" what works. In engineering notebooks that never get digitized. In support tickets that record the problem but not the physics behind it. In RFQ responses that show the final BOM but none of the reasoning that produced it.    

And this human application skill is expensive.    

It requires people who are both engineers and customer-savvy, who can think inside physical constraints but outside the box when it comes to creative solutions, and who are needed throughout the entire value chain, from pre-PO scoping to solution design to post-sale support.

The reason this knowledge is so hard to systematize, and so valuable when you do, comes down to the nature of the work itself: almost nothing that crosses an application engineer's desk is standard.    

Why "Exception-Heavy" Application Engineering Matters

Foundation Capital identified these non-standard and "exception-heavy decisions" as a key signal for where context graphs work best: workflows where "it depends" is the honest answer and precedent matters more than policy.    

Industrial application engineering is the ultimate exception-heavy workflow.    

Customer needs thermal tolerance beyond component ratings? Requires derating or oversizing.    

Environment has dust, moisture, or chemicals? Invalidates standard IP ratings.    

Space constraints? Forces non-standard mounting.    

Timing requirements? Demands custom control logic.    
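
Here's a rough sketch of what flagging those triggers programmatically might look like. The field names and thresholds are purely hypothetical, and real checks would be far more nuanced.

```python
# A toy exception-flagging pass over a customer request. The point is
# that "it depends" becomes an explicit, capturable list of exceptions
# before anyone sizes the system. All fields are hypothetical.
def flag_exceptions(req: dict) -> list[str]:
    flags = []
    if req["ambient_c"] > req["component_rated_c"]:
        flags.append("thermal: derating or oversizing required")
    if set(req["hazards"]) & {"dust", "moisture", "chemicals"}:
        flags.append("environment: standard IP rating invalidated")
    if req["available_depth_mm"] < req["standard_mount_depth_mm"]:
        flags.append("space: non-standard mounting required")
    if req["required_cycle_ms"] < req["standard_cycle_ms"]:
        flags.append("timing: custom control logic required")
    return flags

request = {
    "ambient_c": 55, "component_rated_c": 40,
    "hazards": ["moisture"],
    "available_depth_mm": 120, "standard_mount_depth_mm": 200,
    "required_cycle_ms": 30, "standard_cycle_ms": 50,
}
for f in flag_exceptions(request):
    print(f)
```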

Navigating these exceptions is the core of the very human, very time-consuming application work: high judgment, high precedent, and no single system that owns the cross-functional workflow.

Exactly the pattern where context graphs deliver the most value.    

Which raises the obvious question: if this problem is so clear, and the value is so high, why hasn't it been solved? The answer comes down to where existing systems sit in the workflow and why that position makes the incumbents, even those building industrial context graphs, structurally unable to build this layer.    

Why Incumbents Can't Build This. And Why Bardin's Position Is Different    

Cognite and Palantir are powerful examples of context graph thinking applied to the physical world. Cognite builds an industrial data fabric that contextualizes operational data to improve how plants run. Palantir's ontology models real-world entities and actions so enterprises can make governed decisions at scale. Both are right about the importance of semantics, relationships, and decision-aware systems.    

But both enter the picture after systems exist: after assets are deployed, after BOMs are finalized, after decisions have already been made. They contextualize the relationships between states.

At Bardin, we’re building agents to exist at the moment of judgment: when an application engineer needs to size a system, when a sales engineer chooses one configuration over another, when constraints are negotiated, exceptions are made, and tradeoffs are accepted.    

We contextualize reasoning while it happens.    

In practice, that means we're not only building the knowledge graph layer; we're also building AI agents and tools that sit directly inside the workflows where application decisions happen, embedded in the actual work.

An agent that helps an engineer scope a new project by surfacing the most similar historical applications. One that compares configuration options against proven solution patterns. One that sizes a system against known physical constraints and flags when a chosen approach has failed in similar environments before. One that supports a sales engineer mid-conversation with a customer, pulling the right precedent at the right moment.    
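
As a sketch of the retrieval behind that first agent, here's a toy version that ranks historical applications by constraint overlap. Real precedent matching would blend structured constraints with semantic search; the tags, projects, and outcomes below are invented.

```python
# Toy precedent retrieval: rank past applications by constraint overlap
# with a new request, using Jaccard similarity on invented tags.
def similarity(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b)

history = [
    ("P-0873", {"washdown", "high-humidity", "continuous-duty"},
     "held up 3 years in the field"),
    ("P-0412", {"dusty", "intermittent-duty"},
     "bearing failure at 14 months"),
]

new_app = {"high-humidity", "continuous-duty", "space-constrained"}
for pid, tags, outcome in sorted(
        history, key=lambda h: similarity(new_app, h[1]), reverse=True):
    print(pid, round(similarity(new_app, tags), 2), outcome)
```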

We're intentionally building tools, agents, and skills that make getting to the next decision in a classic industrial sales JTBD faster and more confident.

Our mantra: Brilliant, but boring, by design.    

By building both at Bardin, the historical mapping AND the system of agents that win because they're in the execution path at decision time, we not only have a deterministic map of what works; we can also capture (and become the system of record for) the physics-driven constraints, rejected alternatives, and implicit tradeoffs that would otherwise evaporate the moment the project moves forward.

This very hard but necessary "do both" motion also exposes a trust dimension most context graph discussions skip over entirely: in enterprise software, you can deploy and iterate. In industrial operations, you can't roll back a motor that burned out or iterate on a conveyor configured for the wrong environment.

We’re learning, sometimes with a slap to the face of a not-as-awesome-as-it-should-have-been pilot, that trust has to come before deployment, not after.    

Which is exactly why we're building Bardin's architecture to start from a structured, physics-grounded schema rather than relying only on accumulated traces to eventually reveal patterns (happy to explain more about how we're actually doing this in private).

From Tribal Knowledge to Queryable Precedent in Industrial AI

Our mission is that soon, a sales engineer won’t have to wait two days for a senior colleague to confirm whether a configuration will hold up in a high-humidity environment. The answer exists in the graph, traced back to a nearly identical application from three years ago, with the constraints documented and the outcome recorded.    

A new hire doesn't spend their first year slowly absorbing tribal knowledge from whoever has time to mentor them. They start from the accumulated judgment of everyone who came before. And when something fails in the field, "why did we choose that configuration?" has an actual answer. Not a guess. Not a reconstructed story. But a record of the reasoning at the time.    

This is what structured application memory makes possible: faster, more confident human (and agentic) judgment.    

And because the value is immediate and practical, real answers to real questions, in the flow of real work, teams actually use it.    

Every time they do, the graph grows, the ontology expands, the precedents multiply. What starts as a productivity tool quietly becomes the place teams go first when an application question arises.    

And as the graph matures, something more significant starts to happen: the system stops just retrieving past answers and starts reasoning across them.    

This is also why AI procurement tools don't solve this problem.    

They're built for buyers who know what they want to buy. In industrial automation, the buyer knows what they need to achieve.    

They don't know exactly how to get there, which configuration will work, which exceptions apply, which product combination has actually performed in their specific environment. That translation layer lives entirely on the seller side, in application reasoning that has always been unstructured and impossible to query.    

The graph structures it.    

And this structured seller knowledge doesn't just make humans faster, it makes industrial seller agents actually possible. A reasoning layer that's trusted and traversable is the substrate for agentic commerce: autonomous scoping, autonomous configuration, agent-to-agent transactions.    

The graph is what makes that future of industrial autonomous commerce safe to deploy.    

Hardware Is Commoditizing. The Industrial AI Knowledge Layer Is the New Margin.    

The market sitting behind all of this is much larger than the hardware numbers suggest, driven by a cost inversion most people in this space haven't fully priced in yet. Global manufacturing sales revenue is $50.8T, with the industrial automation market projected to reach $435B by 2030.

But these numbers miss the hidden cost: the engineering multiplier.    

For every $1 spent on automation hardware, companies spend $0.50–$1.50 on "soft costs": the manual work of translating customer needs into working systems.

Hardware is commoditizing. The "Electric Stack" (batteries, motors, power electronics, compute) has dropped 99% in cost since 1990. But the human layer (selling, configuring, supporting) is climbing a vertical cliff. The cost structure has inverted: hardware is now roughly 30% of project costs and falling, while soft costs (technical sales, engineering, permitting) are around 70% and rising.

Profit margins are being squeezed by what I called "the scissors" in that LI post: as hardware prices drop and labor costs rise, companies that rely on finite, expensive engineers to manually bridge knowledge gaps for every customer are getting cut. That's why AI in manufacturing is expected to hit $106B by 2030 (a 45% CAGR, the fastest in the industry and 5x faster than hardware growth), and why AI in industrial automation (our initial market) is growing at 18.6% annually toward $90B.

Investing in hardware is the trend now, and it's absolutely vital given what's going on in the world (I'm editing this final version between missile attacks in Tel Aviv). But we need to be thinking beyond better hardware specs and focus seriously on automating the high-fidelity application knowledge required to deploy that hardware at scale.

The system of record for that knowledge is itself a significant strategic asset, which is why the question of how it gets built matters enormously.    

Creating a New System of Record Connecting Products, Physics, and Outcomes in Industrial AI    

Foundation Capital outlined three paths for startups building context graphs:    

  1. Replace existing systems entirely: think displacing Salesforce with something that captures reasoning natively from the start.
  2. Replace specific modules: think dropping a smarter quoting engine into an existing ERP.
  3. Create entirely new systems of record for knowledge that was never captured anywhere before.

In industrial automation, nobody has a system for application reasoning. There's no incumbent to replace, no module to swap out. The knowledge doesn't exist in structured form anywhere. Not in the CRM, not in the ERP, not in the product catalog. Bardin is path #3: building the layer that has always been missing.    

We're not replacing product catalogs; suppliers will always own component specs. We're not replacing ERPs; manufacturers need transaction systems. We're replacing the trillion-dollar productivity risk that comes with a human tendency to “Ask Mitch” or “dig into 2017's SharePoint folder…”, and building the missing layer that connects products, physics, and outcomes.

We will be the system that can answer questions no existing system can. "What configuration will work for this application?" "What similar applications have we solved before?" "What are the physics-based constraints we must respect, and which ones have we seen fail in the field?"    

If Palantir and Cognite are about running the physical world more intelligently, Bardin is about deciding how that world gets built in the first place. And because it’s not enough to reconstruct application knowledge after the fact, the companies that start capturing it now will own an asset that compounds over time.    

Physical AI, Context Graphs, and the Closing Window for Institutional Knowledge    

The context graph conversation, from Ashu and Jaya's original piece to Dharmesh, Arvind, Aaron, and everyone building knowledge graph infrastructure like Guy and his team, framed it clearly: the next generation of valuable platforms will capture data plus reasoning. The why. (Simon Sinek would be proud.)

We at Bardin know that industrial application intelligence is where that argument becomes insanely compelling.    

The stakes are highest, the knowledge is hardest to capture, and the compounding effects are most dramatic. Everyone in this debate is right that context graphs are the next major shift. The physical world is where smart, correct, and traceable decisions matter most, and where that reasoning layer has been most completely absent.

But, most importantly, in industrial it’s urgent in a way no enterprise software problem ever is. We’ve got a literal closing window (that leads to a golf course), and a solution like Bardin not only scales expertise, but preserves institutional memory that would otherwise be lost: 25% of the workforce over 55 and retiring. An 85% deficit in engineering roles. 1.9 million manufacturing jobs unfilled by 2033. The tribal knowledge that took a company 30, 40, 50 years to build doesn't survive the retirement party.    

This industrial application memory is a real trillion-dollar opportunity. And the reason nobody else has built it yet is the same reason it's worth building now. It's hard, it's domain-specific, and the market didn't look sexy until physical AI became the thesis of the moment.    

We've been building anyway, and I'm sure as hell glad we stuck with it. 🚀

Bardin is an AI platform built for industrial automation sales and application engineering teams — putting the depth of an application engineer in the hands of every salesperson, at the point of sale. Built for OEMs, distributors, and systems integrators who need to scope, sell, and support complex technical solutions faster and smarter.

Learn more at: www.bardinAI.com

Discover how Bardin accelerates technical deals and scales expertise across your industrial sales team.

Request a Demo