Ahead of his presentation, From Tools to Agents: How Agentic Engineering Workflows Are Reshaping Simulation-Driven Product Development, we spoke with Jon Wilde, Vice President of Product at SimScale, about the evolution of engineering simulation from an expert-driven task to a coordinated, scalable component of product development.

In this interview, he discusses the emergence of agentic engineering workflows, how Engineering AI and Physics AI are being integrated into simulation environments, and the implications of these systems for managing fidelity, traceability, and engineering decision-making across the product lifecycle.

Promotional image for CDFAM Barcelona 2026, featuring the title 'From Tools to Agents', discussing agentic engineering workflows in simulation-driven product development, presented by Jon Wilde from SimScale.

Can you introduce SimScale and explain what you’ll be presenting at CDFAM, including how your work on agentic engineering workflows has evolved since the presentation in Amsterdam last year?

SimScale is an AI-native engineering simulation platform that combines high-fidelity physics solvers with Engineering AI and Physics AI to help teams explore thousands of engineering decisions in seconds.

At CDFAM, I’ll focus on how we see companies evolving from democratizing simulation to scaling implementation of agentic engineering workflows — systems that guide when, how, and at what fidelity simulation is used across NPI processes. The shift is from enabling access to simulation toward orchestrating engineering decisions at scale.

Screenshot of a robotic arm simulation software showing settings for a static analysis. The interface displays a robotic arm model, simulation parameters, and a chat window with an AI agent providing guidance on running the simulation.

How do you define agentic workflows in the context of engineering, and what distinguishes them from traditional automation or scripting approaches?

Agentic workflows are AI-driven systems that reason about engineering intent and coordinate simulation tasks accordingly.

Unlike static automation or scripts, they don’t just execute predefined steps; they can reason. This means they can understand context, assess result quality, and guide next actions. The distinction is adaptive, physics-grounded processes rather than linear execution.

3D visualization of a solenoid valve with fluid flow simulation, showcasing various geometrical components in different colors to represent flow dynamics.

What types of simulation and decision-making tasks are most effectively handled by agentic systems, and how are levels of fidelity and human oversight managed in practice?

Agentic systems are most effective at simplifying time-intensive processes: extracting information from documentation, mesh validation, boundary condition checks, convergence assessment, design variant exploration, and deciding whether Physics AI can be leveraged or a more traditional approach is needed.

In practice, agents operate within defined guardrails. High-fidelity simulations remain the authority and are always there for engineers to refer to. Agents iterate and recommend; engineers interpret and approve. Oversight is built into the workflow, not layered on afterward.
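The guardrail pattern described here — agents iterate and recommend, engineers interpret and approve — can be sketched as a minimal human-in-the-loop structure. This is a purely illustrative sketch; all names (`Recommendation`, `Workflow`, `review`) are assumptions for illustration and do not correspond to SimScale's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an agent-recommends / engineer-approves loop.
# None of these names correspond to SimScale's actual API.

@dataclass
class Recommendation:
    action: str          # e.g. "refine mesh in high-gradient region"
    fidelity: str        # e.g. "physics_ai" or "high_fidelity_solver"
    rationale: str       # why the agent proposes this step

@dataclass
class Workflow:
    approved: list = field(default_factory=list)

    def propose(self, rec: Recommendation) -> Recommendation:
        # The agent iterates within guardrails and surfaces a recommendation.
        return rec

    def review(self, rec: Recommendation, engineer_approves: bool) -> bool:
        # Oversight is built into the workflow: nothing runs unapproved.
        if engineer_approves:
            self.approved.append(rec)
        return engineer_approves

wf = Workflow()
rec = wf.propose(Recommendation(
    action="re-run convergence check with refined mesh",
    fidelity="high_fidelity_solver",
    rationale="residuals plateaued above tolerance",
))
wf.review(rec, engineer_approves=True)
print(len(wf.approved))  # 1
```

The design point is that approval is a required step in the control flow, not an audit added after the fact.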


What does the software and data infrastructure behind these agentic workflows look like, and how are simulation insights orchestrated and reused across product development stages?

The foundation is cloud-native simulation infrastructure with structured data capture: geometry, meshes, solver parameters, results, performance metrics, and all associated metadata.

The agent can also leverage Physics AI models trained on validated simulation data. Insights are stored as reusable objects — not just reports — allowing simulation results to inform downstream design, validation, and optimization loops.
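Storing insights as reusable objects rather than static reports might look like the following sketch, where downstream loops query stored metrics directly. The field names here are assumptions for illustration, not SimScale's actual data model.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Illustrative sketch of a structured, reusable simulation record.
# Field names are hypothetical, not SimScale's actual schema.

@dataclass
class SimulationRecord:
    geometry_id: str
    mesh_id: str
    solver_params: Dict[str, Any]
    results: Dict[str, float]                 # key performance metrics
    metadata: Dict[str, str] = field(default_factory=dict)

    def insight(self, metric: str) -> float:
        # Downstream design, validation, and optimization loops query
        # stored metrics instead of re-reading a static report.
        return self.results[metric]

rec = SimulationRecord(
    geometry_id="pump-v3",
    mesh_id="mesh-0012",
    solver_params={"turbulence_model": "k-omega SST"},
    results={"pressure_drop_pa": 1250.0},
    metadata={"source": "high_fidelity_run"},
)
print(rec.insight("pressure_drop_pa"))  # 1250.0
```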

A wireframe model of a pump and piping, showcasing a yellow pump component, with a chat window discussing pump performance characterization.

In integrating Engineering AI and Physics AI, what are the challenges in ensuring models remain interpretable, traceable, and grounded in domain expertise?

The key challenge is maintaining alignment between AI predictions and validated physics.

Physics AI models are trained on high-fidelity simulation data, either from SimScale or from a customer’s software.

Any Physics AI model can be investigated for quality, and we make it easy to dig into the details. Where a model lacks accuracy, we guide users toward improving it. It is easy to see who trained each model and how.
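Traceability of this kind — who trained a model, on what data, and with what validation accuracy — amounts to recording provenance alongside the model itself. A minimal illustrative sketch, where every field name is an assumption and not SimScale's schema:

```python
from dataclasses import dataclass

# Hypothetical provenance record for a Physics AI model; field names
# are illustrative assumptions, not SimScale's actual schema.

@dataclass(frozen=True)
class ModelProvenance:
    model_id: str
    trained_by: str
    training_dataset: str        # validated simulation data used
    validation_error: float      # e.g. mean relative error vs. solver

    def needs_improvement(self, tolerance: float) -> bool:
        # Flag models whose accuracy is lacking, so users can be
        # guided toward retraining on more validated data.
        return self.validation_error > tolerance

prov = ModelProvenance(
    model_id="valve-flow-v2",
    trained_by="j.smith",
    training_dataset="validated-cfd-runs",
    validation_error=0.04,
)
print(prov.needs_improvement(tolerance=0.05))  # False
```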


What do you hope to share with and learn from the CDFAM community, particularly around building organizational workflows that support scalable, AI-augmented simulation practices?

I hope to share practical lessons on introducing agent-supported workflows without compromising engineering rigor.

From the CDFAM community, I’m interested in hearing how others are leveraging AI-augmented simulation to tackle real-world engineering problems, and, where they are not yet, whether, when, and how they plan to start.

Graphic promoting the CDFAM Barcelona Computational Design Symposium featuring text about leading experts in computational design, AI, and machine learning, with a collage of headshots.

Join us at CDFAM Barcelona April 8-9, 2026 to see the full presentation, and connect with other leading experts in computational design, AI and machine learning for engineering and architecture. Two days of knowledge sharing and networking with industry, academia and software developers on the shore of the Mediterranean.

