
Constrained Creativity in AI-Accelerated Automotive Design
Interview with Ruben Verhack – Datameister
In this interview ahead of his presentation at CDFAM Amsterdam, Ruben Verhack, founder of Datameister, explains how the company's platform brings generative design into real-world engineering workflows—not through generic automation, but by building application-specific tools trained on domain knowledge, constraints, and goals defined by engineers themselves.
The focus is on assisting, not replacing, the designer: enabling rapid iteration, design exploration, and constraint satisfaction without removing the engineer from the loop.
Drawing from recent work in the automotive sector, Ruben outlines how Datameister’s AI-driven tools integrate with established CAD and simulation processes, helping teams move faster while staying within strict technical boundaries.
By treating geometry generation as a collaborative, constraint-aware process, Datameister opens up new possibilities for performance-driven design that remains grounded in engineering reality.
Can you start by letting us know what you will be presenting at CDFAM Amsterdam, and why automotive design is such an interesting proving ground for Datameister's approach to AI-driven workflows?
I’ll be talking about the need for controlled automation in 3D design—especially in industries where creativity meets hard constraints.
Automotive is a perfect example because the ideation process is long and involves many stakeholders, from clients and designers to various engineering teams. A lot of overhead comes from communication gaps and misaligned interests. The client wants something bold; the engineers are up against physical reality.
Clients sometimes experiment with tools like Midjourney to get rough ideas, which is great for early inspiration. But in automotive, creative freedom is quickly bounded by regulations, physics, aerodynamics, ergonomics, and manufacturing realities. This is exactly where constrained generative 3D methods come in—they allow us to explore the creative space within those boundaries, not in spite of them. Instead of producing ideas that need to be retrofitted to reality, these models generate design options that are viable from the start—bridging the gap between bold ideas and hard constraints.

Additionally, one of the biggest structural problems is design lock-in. In many sectors, including automotive, early design decisions are hard to reverse because the process is so sequential. The key is decoupling these dependencies. That way, early changes become cheap, fast, and low-risk—enabling more iteration and faster delivery. You’re no longer blocked by upstream phases; teams can work in parallel instead of waiting in line.
At CDFAM, I will be presenting how novel constrained 3D generative AI methods can break us out of these rigid pipelines—creating space for more iterative, collaborative, and constraint-aware design.
Many generative design tools struggle with real-world constraints like manufacturability, stakeholder input, and… reality. What were some of the biggest limitations you encountered with traditional AI tools before developing your new approach?
There are three main issues I’ve encountered.
First, there’s a general lack of high-quality 3D data in the public domain. This stands in stark contrast to text- or image-based AI, where vast, diverse datasets are readily available. The 3D data that does exist is heavily biased. For instance, most 3D car datasets are actually game assets—not engineering-grade car designs. That’s a key reason why out-of-the-box generative models struggle to produce useful, let alone manufacturable, 3D outputs for cars.
Second, even with access to infinite high-quality 3D data, we’d still face a major gap: control. In automotive design, you almost never start from scratch. You usually begin with a given platform—say, a specific wheelbase—which already locks down certain dimensions, like overall length and wheel placement. These are what you’d call absolute constraints. On top of that, there are relational constraints. For example, ensuring a minimum driver viewing angle inherently ties the position of the driver’s seat to the shape and placement of the windshield. We haven’t solved every possible constraint, but in principle, there’s nothing preventing these from being modeled and enforced within the generation process.
Third, current 3D generative tools tend to converge on what they “know” a car is supposed to look like. That’s both a strength and a limitation. On one hand, it helps generate plausible-looking results quickly. On the other, it resists deviation—so when you try to push the design in a bold direction, the model often pulls it back toward something more conventional. The challenge, then, is how to steer the algorithm outside of its comfort zone—whether by injecting external style guidance or explicitly locking in certain unconventional design elements.
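To make the second point above a bit more concrete, here is a minimal sketch of how an absolute constraint (a fixed wheelbase) and a relational constraint (a minimum driver viewing angle tying the eye point to the windshield base) might be represented and checked during generation. The parameter names, geometry, and thresholds are illustrative assumptions, not Datameister's actual implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class CarDesign:
    # A candidate design reduced to a few illustrative parameters (all in mm).
    wheelbase: float
    eye_point_x: float        # driver eye point, longitudinal position
    eye_point_z: float        # driver eye point, height
    windshield_base_x: float  # cowl point, longitudinal position
    windshield_base_z: float  # cowl point, height

def wheelbase_fixed(d: CarDesign, target: float = 2700.0, tol: float = 1.0) -> bool:
    """Absolute constraint: the platform locks the wheelbase outright."""
    return abs(d.wheelbase - target) <= tol

def viewing_angle_ok(d: CarDesign, min_angle_deg: float = 5.0) -> bool:
    """Relational constraint: the downward angle from the driver's eye point
    to the windshield base must meet a minimum (simplified 2D geometry)."""
    dx = d.windshield_base_x - d.eye_point_x
    dz = d.eye_point_z - d.windshield_base_z
    return math.degrees(math.atan2(dz, dx)) >= min_angle_deg

def is_feasible(d: CarDesign) -> bool:
    # A constrained generator would reject, repair, or re-sample candidates
    # that fail these checks, rather than fixing them up after the fact.
    return wheelbase_fixed(d) and viewing_angle_ok(d)

if __name__ == "__main__":
    candidate = CarDesign(wheelbase=2700.0, eye_point_x=1500.0, eye_point_z=1200.0,
                          windshield_base_x=2300.0, windshield_base_z=1050.0)
    print("feasible:", is_feasible(candidate))
```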

In our conversations you mentioned an “inside-out” process that integrates AI into the designer’s workflow. How does this change the relationship between designers, engineers, and the AI systems they’re using?
You’re essentially setting up the rules of the game before play begins—defining constraints upfront so the designer is operating within a space that’s already manufacturable. But the deeper shift is that AI stops being just a passive tool and becomes an active design partner.
Instead of generating something and checking feasibility after the fact, the designer is co-creating with a system that already understands the constraints—structural, ergonomic, regulatory, whatever they may be.
That changes the nature of the workflow. Designers don’t have to second-guess whether their idea is buildable; they can push creativity right up to the edge of what’s possible, knowing the system won’t let them fall off a cliff.
It also changes the team dynamic. Engineers become enablers rather than blockers—because their knowledge is embedded in the process from the beginning. And feedback becomes immediate, constraint-aware, and continuous—not something that arrives three days later in a review meeting.
So the relationship between designers, engineers, and AI becomes much more fluid. You’re not just using software—you’re authoring ideas together with a system that understands feasibility in real time. That’s a big shift in how design decisions get made, and by whom.

Given that this approach makes the relationship more collaborative, what kinds of tasks or phases in the design process benefit most from AI support, and where do you think human creativity remains absolutely essential?
Humans are absolutely essential for the creative part of the process. The goal of AI isn’t to replace the designer, but to empower them—to make it easier to explore different directions without getting bogged down by constraints or repetition.
One of AI’s biggest advantages is its ability to account for many factors simultaneously: physical constraints, style requirements, regulatory boundaries, and more.
The fact that you can co-optimize across multiple dimensions while feeding the system high-level design intent is incredibly valuable. But as mentioned earlier, the machine tends to regress toward the mean. Left alone, it will recreate what it already knows a car looks like, based on historical data. That’s why the human is still needed to actively push the system out of its comfort zone—to steer it into new territory. AI accelerates iteration, and by doing so, increases the likelihood of those “happy mistakes” that spark truly novel ideas.
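As a toy illustration of that idea, the sketch below ranks generated candidates by a score that combines a high-level style target with a term rewarding distance from the dataset mean, while hard constraints simply disqualify a candidate. The parameter vectors, weights, and targets are invented for the example and stand in for whatever the real objectives would be.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a design is a small parameter vector, the dataset
# mean is what an unguided model regresses toward, and the style target
# encodes high-level design intent. All values are assumptions for the sketch.
DATASET_MEAN = np.array([0.5, 0.5, 0.5])
STYLE_TARGET = np.array([0.9, 0.2, 0.7])

def feasible(x: np.ndarray) -> bool:
    # Hard constraint: every parameter must stay inside its allowed band.
    return bool(np.all((x >= 0.0) & (x <= 1.0)))

def score(x: np.ndarray, w_style: float = 1.0, w_novelty: float = 0.4) -> float:
    if not feasible(x):
        return float("-inf")                         # infeasible candidates never win
    style_term = -np.linalg.norm(x - STYLE_TARGET)   # match the design intent
    novelty_term = np.linalg.norm(x - DATASET_MEAN)  # push away from the "mean" car
    return w_style * style_term + w_novelty * novelty_term

# Rank a batch of generated candidates by the co-optimized score.
candidates = rng.random((256, 3))
best = max(candidates, key=score)
print("best candidate:", best.round(3), "score:", float(score(best)))
```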
I’d also add that every designer has their own workflow, and that needs to be respected. At the same time, no one wants to spend time doing repetitive tasks over and over.
The key is to let the designer define something in their own style, and then allow the system to propagate or adapt it efficiently.
Think of things like rigging characters in games, laying out repetitive rooms in large buildings, or populating background elements in virtual environments—these are perfect use cases where automation saves time without compromising the designer’s vision.
As a company applying this process across multiple industries, what lessons from automotive design apply more broadly to how we should think about AI-accelerated design in other sectors?
There are many industries that face similar challenges to automotive, and one of the most critical is design lock-in.
Take construction, for example—before you can invest in detailed design work, you first need to understand what you’re legally allowed to build. Typically, you begin with a high-level volume or massing study and then progressively add layers of detail. But if, midway through, the allowable height changes and you lose two floors, a huge amount of downstream work has to be redone.
Ideally, you’d want to decouple those dependencies. That would drastically reduce the cost of changing early decisions and allow for more frequent iteration. It also shortens the overall design timeline, because different phases of the process can run in parallel instead of waiting on one another.
But this idea extends far beyond physical construction. In media and entertainment, for example, outside shoots are expensive, weather-dependent, and logistically painful. That’s why there’s been a steady shift toward virtual production studios, where the entire environment is controlled. The trade-off is that creating those virtual environments requires detailed 3D modeling, which is still expensive and time-consuming. But once built, they unlock massive flexibility and reuse—making the upfront cost worth it.
This shift is also driving demand for high-volume, high-quality 3D asset generation—not just in film, but in game development, where asset pipelines need to meet strict demands around things like topology and real-time performance.
Then there’s a less obvious but rapidly growing area: synthetic data generation for industrial AI applications, especially in manufacturing.
Traditionally, you either capture real-world field data or manually model 3D scenes and inject variation by hand. But if you can use generative systems to produce constrained variation—without having to model everything manually—you get a more scalable, cost-efficient way to generate training data.
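Here is a minimal sketch of that kind of constrained variation, assuming a handful of hypothetical scene parameters for an industrial setup; in practice the sampled configurations would drive a renderer or simulator rather than just being printed.

```python
import random

# Hypothetical parameter ranges for an industrial inspection scene; the names
# and bounds are illustrative assumptions, not a real pipeline configuration.
SCENE_RANGES = {
    "camera_height_m":    (1.2, 2.5),
    "light_intensity":    (0.3, 1.0),
    "part_rotation_deg":  (0.0, 360.0),
    "conveyor_speed_mps": (0.1, 0.6),
}

def sample_scene(rng: random.Random) -> dict:
    """Draw one variation of the scene, constrained to the allowed ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in SCENE_RANGES.items()}

def generate_dataset(n: int, seed: int = 42) -> list:
    """Produce n scene configurations to feed a renderer or simulator."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]

if __name__ == "__main__":
    for scene in generate_dataset(3):
        print(scene)
```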
Zooming out even further, this becomes essential in robotics, where the sim-to-real gap is a major bottleneck. It’s not just about creating a visually convincing 3D world—it needs to be physically plausible.
So while media applications can get away with things just “looking right,” robotics systems need to interact with simulated environments that obey real-world physics. There’s a lot of active research in this space, and we see real commercial potential.
Finally, what are you hoping to share at CDFAM Amsterdam, and what kinds of conversations or collaborations are you looking forward to having with the computational design community?
At Datameister, we specialize in developing AI algorithms and running them on our highly optimized cloud infrastructure. We’re always looking to partner with organizations that need these capabilities integrated into their workflows or products—especially those with clear bottlenecks that, if removed, would unlock significant scalability.
What I’m really hoping to do at CDFAM is connect with people who are facing those kinds of challenges.
Every week, I’m surprised by the range of industries that turn out to have real demand for this kind of technology.
That’s what makes working on such a horizontal platform so exciting—we constantly discover new, unexpected applications and get to collaborate with partners doing incredibly interesting work.
To learn more about how AI is being shaped to serve—not replace—engineering expertise, join us at CDFAM Amsterdam, July 9–10, 2025. You’ll have the chance to connect with Ruben Verhack and other experts developing computational tools for simulation, optimization, and design automation across fields.
Whether you’re building new workflows or rethinking existing ones, CDFAM is where software developers, engineers, and designers come together to share methods, challenges, and possibilities.






