
Generative Engineering on Simulation Setup, Design Space Exploration, and Why Physics-AI Demos Miss the Point
Interview with Laurence Cook of Generative Engineering
One of the persistent blockers for adoption of simulation-driven design is not (always) prediction speed; it is setup time.
While much of the AI-for-engineering conversation centers on surrogate models, the bottleneck in real iterative workflows often sits earlier: getting a design to a simulation-ready state in the first place.
Generative Engineering has been working on this problem directly, targeting simulation pre-processing as an automation challenge that requires less training data, tolerates non-parametric design changes, and fits into workflows where designs are still evolving.
We spoke with their team ahead of their presentation at CDFAM Barcelona, where they will share real platform examples and lessons learned from customer projects on where the actual bottlenecks in design space exploration lie.

Can you introduce Generative Engineering and explain what you’ll be presenting at CDFAM, including how this work expands on your previous presentation in New York?
Generative Engineering builds pioneering technology to accelerate engineering. We have been lowering the barrier to entry for design space exploration and simulation-driven design across a diverse range of engineering problems for years.
At our CDFAM presentation in New York, we showed how generating designs rather than manually creating them enables data-driven decision making in engineering, and how AI can lower the barrier to achieving this whilst keeping engineers’ expertise in the driving seat.
Our recent work uses data generated in such a process to automate pre-processing steps for simulation, even with non-parametric design changes. This allows a faster iteration loop between a design hypothesis and insight from simulation. We’ll be presenting this technology and how it’s used in our platform with real examples.

You argue that real-world design iteration isn’t strictly parametric. How does your approach accommodate topological or structural changes in a simulation-driven workflow?
Finding the information needed to fill a simulation template is a lower-dimensional problem than finding a solution field for a physics problem. The geometric features that matter most for simulation setup (such as where loads are applied and where boundary conditions are imposed) won’t vary much across design iterations, even as the topology or structure changes. Structural changes to a design that don’t affect these features can be varied freely without breaking the automated setup.
There is of course a limit to the magnitude of changes that are compatible, but a setup model can accommodate far larger changes than a physics surrogate can, and the analysis relevant to a problem changes over longer timescales than the design itself. Engineering designs iterate through many directions before converging, all while the way performance is measured against requirements stays fixed.
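As an illustration of that lower-dimensional framing, here is a minimal, hypothetical sketch (not Generative Engineering’s actual API): a setup “template” records roughly where each load or boundary condition belongs, and re-applying it to a new design iteration reduces to locating the equivalent face, regardless of how the topology changed in between. The face names and reference points are invented for illustration.

```python
# Hypothetical sketch: re-applying a simulation setup template to a new
# design iteration by locating equivalent features, rather than predicting
# physics. Names and coordinates are illustrative only.
from dataclasses import dataclass

@dataclass
class Face:
    id: str
    centroid: tuple  # (x, y, z)

def dist2(a, b):
    # Squared Euclidean distance between two points.
    return sum((p - q) ** 2 for p, q in zip(a, b))

def apply_template(template, faces):
    """Map each template entry (condition -> reference location) to the
    closest face on the new geometry."""
    setup = {}
    for condition, ref_point in template.items():
        closest = min(faces, key=lambda f: dist2(f.centroid, ref_point))
        setup[condition] = closest.id
    return setup

# A design iteration with different topology but similar load locations.
faces = [Face("f1", (0.0, 0.0, 0.0)),
         Face("f2", (1.0, 0.0, 0.0)),
         Face("f3", (0.5, 1.0, 0.0))]
template = {"fixed_support": (0.1, 0.0, 0.0),
            "applied_load": (0.55, 0.95, 0.0)}

print(apply_template(template, faces))
# {'fixed_support': 'f1', 'applied_load': 'f3'}
```

Because the template lives in the low-dimensional space of “where conditions apply” rather than the geometry itself, the same template survives design changes that would invalidate a trained physics surrogate.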

What does the process look like for using geometry preparation models, and how does this differ from building physics surrogates?
Physics-AI demos are genuinely impressive: models predicting aerodynamic loads in milliseconds across thousands of design variants. But these models need to be trained on massive datasets, and to know what data to collect you have to predict what changes the design will go through by the time the models are ready to be used. Only changes compatible with the models can then be used to help the design. Because of the iterative nature of engineering design, by the time the models are ready the design has often evolved in a different direction, and the models never deliver their intended usefulness.
Instead, we target simulation setup: a problem that requires less data and less knowledge about where the design will iterate to. In many cases it doesn’t even involve training new models: generative methods that are highly guidable at inference time can achieve the automation from a small number of examples. This makes them a natural fit for iterative product workflows, where re-training models on every design iteration would be too much of a slowdown.

What types of data formats or design history are required to support this workflow?
Any example of a geometry that has been successfully pre-processed, meshed, simulated, and post-processed can be used as training data.
The most useful examples are those where automation fails: say, where feature detection applied a load to the wrong face. We have built into our platform the ability to easily correct the system when things go wrong, providing updated input data. The ability to override, correct, and set guardrails is essential for any AI-enabled automation to be useful in real engineering workflows.
A combination of parametric exploration and novel geometries is most powerful: first setting up a full end-to-end parametric workflow that generates real examples, then feeding back the manual iterations that caused automation to fail. That combination gives the fastest workflows.
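The correct-and-feed-back loop described above can be sketched in a few lines. This is an illustrative Python sketch, not the platform’s interface: every reviewed setup becomes a new labelled example, and the cases where the engineer had to override the automated proposal are flagged as the high-value ones.

```python
# Hypothetical sketch of the correction loop: automated setup proposals
# are reviewed, and every engineer override becomes new training data.
def run_with_corrections(geometries, auto_setup, review):
    """Collect (geometry, verified setup) pairs; cases where the proposal
    was overridden are the most valuable examples for the next round."""
    examples, failures = [], []
    for geom in geometries:
        proposal = auto_setup(geom)
        verified = review(geom, proposal)  # engineer fixes any mistakes
        examples.append((geom, verified))
        if verified != proposal:
            failures.append(geom)  # automation failed here: high-value data
    return examples, failures

# Toy stand-ins: a naive heuristic setup, and a review that knows the truth.
auto = lambda g: {"load_face": g["largest_face"]}
fix = lambda g, p: {"load_face": g["true_load_face"]}
geoms = [{"largest_face": "a", "true_load_face": "a"},
         {"largest_face": "b", "true_load_face": "c"}]

examples, failures = run_with_corrections(geoms, auto, fix)
print(len(failures))  # 1
```

The point of the sketch is only the data flow: successes confirm the current behaviour, while overrides are exactly the updated input data mentioned above.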

How do you handle inconsistencies across historical models?
Directly feeding in inconsistent historical data is always going to be hard. Advances in AI don’t remove the need for good data to work from.
But because we’re not trying to build a foundation model that generalises across problems, historical models aren’t as relevant. We aren’t trying to make an AI able to learn from all historical examples to apply to any new problem instantly. Instead, we work with customers initially on a project to convert anything they have into a full end-to-end automated workflow most relevant to their problem, which gives the initial dataset needed to start our iterative process.

In automating the path from sketch or annotation to simulation-ready geometry, what role do you see AI playing as an active collaborator rather than just an accelerator?
Agents are incredibly powerful collaborators that can set up automation scripts for a particular problem, making the setup time for moving into a generative-first workflow low.
Because our platform is built around iteration between automation and human correction, it is a naturally collaborative process that speeds up simulation. But AI becomes a true collaborator when it can analyse the thousands of designs and simulation results a generative process now lets you run, suggesting interesting simulations and correlations across this data and revealing insight that would be too painful to dig through otherwise. All while the simulation remains physics-based and every decision stays with the engineer.

What do you hope to share with and learn from the CDFAM community, especially regarding lowering setup costs and increasing accessibility of simulation for iterative design?
We want to share our experience of where the real bottlenecks in design space exploration and fast simulation-driven design have turned out to lie in our customer projects, and why they’re not always where many AI demos suggest.
We want to learn when simulation setup is a problem we can help with, where other people working on advanced engineering design workflows are finding unexplored bottlenecks, and how the field as a whole is pushing towards novel engineering workflows.

Generative Engineering will be presenting at CDFAM Barcelona alongside researchers, engineers, and software developers working across the full spectrum of computational design and simulation-driven workflows.
If the problems discussed here are relevant to your work, Barcelona is the opportunity to hear more, ask questions directly, and connect with others navigating the same challenges.
Register now; space is limited.