
From Surrogates to Large Physics Models: Making AI-Native Engineering Work in Production
Interview with Nico Haag of PhysicsX
Nico Haag of PhysicsX joins us ahead of his presentation at CDFAM Barcelona, where he will be discussing Large Physics Models and what a genuinely AI-native engineering workflow looks like in practice.
The interview covers the transition from traditional surrogate modeling to reusable physics intelligence, the data pipeline underpinning aerodynamic model training, and how the platform integrates into production environments across aerospace, automotive, energy, and semiconductor applications.
A central theme is the relationship between model capability and engineering judgment.
Nico is direct that physics AI expands what engineers can do rather than removing them from the decision loop, and that LPMs are already deployed in mission-critical programs, not waiting in a research queue.
The conversation is grounded, technically specific, and worth reading before his session.

Can you introduce PhysicsX and outline what you’ll be presenting at CDFAM in terms of Large Physics Models and AI-native engineering workflows?
PhysicsX is a deep-tech company with a clear mission: to accelerate industrial innovation and transform how the world designs, builds, and operates complex physical systems.
We have roots in Formula One — one of the most extreme engineering environments on the planet — where rapid iteration and system-level optimization are critical to competitive advantage and performance.
Today, we apply that same mindset across the industries we work in, including aerospace & defense, energy, materials, semiconductors, and automotive.
At the core of what we do is an AI-native engineering platform. Rather than treating AI as an add-on to existing simulation tools, we embed physics AI directly into engineering workflows — enabling engineers to move from days of simulation and analysis to seconds. This fundamentally changes how hardware is designed, tested, built, and even operated.
At CDFAM Barcelona, I’m looking forward to sharing how Large Physics Models (LPMs) and the PhysicsX platform make this possible. Specifically, how LPMs are integrated into existing engineering workflows and toolchains, enabling teams to explore larger design spaces, accelerate development cycles, and bring higher-performance products to market faster.

How does your work expand upon traditional surrogate modeling, and what does this shift mean for generalization and reuse across engineering domains?
Traditional simulation tools rely on repeatedly solving complex partial differential equations. They are powerful but computationally expensive and time-intensive, often taking hours or days to evaluate a single design configuration. LPMs learn the structure of simulation data and deliver predictions in seconds through AI inference.
We train our models on a combination of high-fidelity simulation data and real-world operational data, ensuring they remain both physically grounded and accurate in deployment.
Because these models learn more general representations of geometry and physics, they can also be reused across a much wider range of designs and applications. This makes it possible to move beyond one-off surrogate models toward reusable physics intelligence that can be applied to complex engineering problems where multi-physics simulation is essential to unlocking system-level optimization.
Our frontier physics AI models are available directly within the PhysicsX platform, alongside leading third-party model families such as NVIDIA PhysicsNeMo. Together with integrated data infrastructure, orchestration, and application layers, the platform gives engineering teams the foundation to solve their most challenging problems.
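The surrogate idea described above can be illustrated in miniature. The sketch below is not PhysicsX's method, just the generic pattern: an expensive solver is sampled offline, a cheap model is fitted to those samples, and new designs are then evaluated by fast inference instead of re-solving. The `expensive_solver` function and the quadratic feature basis are hypothetical stand-ins.

```python
import numpy as np

# Stand-in for an expensive PDE solve: maps two design parameters
# to a scalar response (hypothetical toy function, for illustration).
def expensive_solver(x1, x2):
    return 0.3 + 0.1 * x1**2 + 0.05 * x1 * x2 - 0.02 * x2

# Offline phase: run the "solver" over a small design-of-experiments set.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))
y = expensive_solver(X[:, 0], X[:, 1])

# Quadratic feature basis for a simple least-squares surrogate.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Online phase: new designs are evaluated by near-instant inference.
def surrogate(X_new):
    return features(X_new) @ coef

print(float(surrogate(np.array([[0.4, -0.2]]))[0]))
```

In practice the "solver" is hours of CFD or FEA per sample and the model is a neural network over full geometry rather than a handful of parameters, but the offline/online split is the same: pay the simulation cost once during training, then amortize it across every subsequent design query.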

What does the data flow look like for training and deploying your aerodynamic intelligence models, particularly regarding vehicle geometry representation and simulation ground truth?
In aerodynamics, high-fidelity computational fluid dynamics (CFD) simulations remain essential but extremely computationally intensive.
We focus on retaining — and enhancing, through the integration of real-world operational data — the accuracy of those simulations while dramatically reducing the time it takes for engineers to access meaningful results.
From a data perspective, the process begins with geometry. Vehicle designs can vary from early concepts — usually represented as coarse tessellated STL/3MF files — all the way to very detailed computer-aided design (CAD) assemblies with thousands of parts, including all their aerodynamic features. That geometric information is then paired with high-fidelity CFD simulation results, which provide the ground truth data used to train the models.
During training, the models learn the relationship between geometry, operating conditions, and aerodynamic response. Instead of running a new CFD simulation for each design iteration, the trained model can infer the physical behavior of a design almost instantly.
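The pairing step described above (geometry plus operating conditions, labeled with CFD output) can be sketched roughly as follows. This is a simplification, not PhysicsX's pipeline: the triangle array stands in for a tessellated STL surface, and the geometry descriptors, inlet speed, and drag label are all hypothetical.

```python
import numpy as np

# A toy tessellated geometry: two triangles, each 3 vertices x 3 coords,
# standing in for an STL/3MF surface mesh (hypothetical data).
tris = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
    [[1, 0, 0], [1, 1, 0], [0, 1, 0]],
], dtype=float)

def geometry_features(tris):
    # Per-triangle areas from the cross product of two edge vectors.
    e1 = tris[:, 1] - tris[:, 0]
    e2 = tris[:, 2] - tris[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    # Bounding-box extent as a crude global shape descriptor.
    verts = tris.reshape(-1, 3)
    extent = verts.max(axis=0) - verts.min(axis=0)
    return np.concatenate([[areas.sum()], extent])

feat = geometry_features(tris)

# Pair geometry features with an operating condition and a CFD-computed
# label to form one supervised training sample (both values made up).
inlet_speed = 30.0   # m/s, hypothetical operating condition
cfd_drag = 0.42      # ground-truth label from a CFD run, hypothetical
sample_x = np.concatenate([feat, [inlet_speed]])
print(sample_x.shape)  # 4 geometry features + 1 condition
```

A production model would consume the mesh itself (e.g. through a geometric deep learning architecture) rather than hand-crafted summary features, but the shape of the training data is the same: (geometry representation, operating conditions) as input, simulation fields or integrated quantities as the label.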

What tools and infrastructure does PhysicsX use to support real-time inference and integration of these models into production workflows across design and manufacturing stages?
Supporting real-time inference involves more than just a trained model — it requires a platform that is AI-native from the ground up.
Our platform has been built with inference as a core capability, rather than layered onto traditional simulation software.
This architecture enables near-instant feedback during design exploration, optimization, manufacturing, and even subsequent real-time operation.
But what’s most important to us is that the platform delivers value in production — bringing frontier physics AI to life in mission-critical environments.
Our Delivery team works closely with customers to integrate the platform into their day-to-day workflows and enable them to build applications tailored to their unique use cases, decreasing time to value. The platform massively amplifies what their teams can achieve, enabling engineers to enhance their decision-making with real-time physics intelligence.

In moving toward continuous optimization and AI-augmented engineering, how do you balance model autonomy with human oversight in decision-making and validation?
AI-augmented engineering doesn’t remove the engineer from the loop — it expands what they can achieve.
Our platform accelerates simulation and enables engineers to explore a much larger design space, but validation remains central.
It is the engineer who defines constraints, interprets results, and makes final decisions.
Rather than replacing simulation expertise, physics AI enhances it. By dramatically reducing the time required to evaluate designs, engineers can move beyond waiting for simulation runs and focus on exploring alternatives, understanding trade-offs, and solving more complex problems. In practice, physics AI increases both the scope and the pace of engineering work while keeping human judgment firmly in control.

What do you hope to share with and learn from the CDFAM community, particularly regarding the adoption of physics AI in production engineering environments?
Our key message is that LPMs are rapidly moving from research to production, delivering tangible, lasting impact across use cases and programs. There is often skepticism around AI in safety-critical or performance-critical environments.
We demonstrate that physics AI is not just about research and experiments — it is deployed today, accelerating industrial development and empowering engineers to build beyond human imagination.

If this interview raises questions you want to put directly to Nico and other experts in the adoption and development of AI in engineering, CDFAM Barcelona is the place to do it.
The symposium brings together practitioners across computational design, simulation, AI, and architecture who are working on exactly these problems. Registration is open. We hope to see you there.
