Functional AI for 3D Design Automation: From Path Finding to Generative Modeling for Building Construction
Interview with Hao (Richard) Zhang – Augmenta
Ahead of his presentation at CDFAM Barcelona, we spoke with Hao (Richard) Zhang, VP of AI and R&D at Augmenta, about functional AI, construction automation, and why the gap between “looks right” and “works” is the central challenge in generative design for the built environment.
Augmenta is building toward one of the more ambitious targets in construction technology: fully engineered, construction-ready 3D buildings generated from high-level inputs, with MEP systems that are geometrically valid, code-compliant, and ready to build from day one.

Can you introduce Augmenta and summarize what you’ll be presenting at CDFAM in terms of Functional AI and 3D design automation for the built environment?
Augmenta is a Canadian start-up with an ambitious vision: to generate fully engineered, construction-ready 3D buildings from high-level descriptions. Such descriptions may include text, engineering constraints, floorplans, or any combination of these. The output is not a rendering or a 2D schematic.
It is a complete 3D building design, including all mechanical, electrical, and plumbing (MEP) systems, that is geometrically valid, code-compliant, and ready to build. Our current focus is automated MEP design — the most spatially complex and manually intensive part of the workflow.
My presentation will define the problem space, the technical challenges, and how AI and machine learning can be employed to tackle them. At the heart of this work is what I call functional AI, a term I use deliberately to distinguish it from spatial AI in the generic sense.
Spatial AI asks whether a design fits in space. Functional AI asks whether it works: whether the geometry it produces can actually carry electricity, move air, and serve the people inside the building as intended. That distinction, between looking right and being right, is the central design and research challenge of building automation.
Your approach prioritizes functional outcomes over visual realism in generative modeling. How is this distinction reflected in the AI models and training objectives you use?
Most generative AI in 3D is both trained and benchmarked on visual quality: does the output look plausible? That is the wrong objective for construction. An MEP design with a single unresolved clash or clearance violation is not almost correct; it is unusable. The cost function is binary in a way that visual generation is not.
This forces a fundamental rethinking of training objectives. Rather than optimizing for perceptual plausibility, our models are trained against geometric and physical constraints, taking into account clearance, routing feasibility, and constructibility. The training signal comes from human engineering corrections: when a designer modifies an automated output, that correction is a precise, labeled signal about what functional validity requires. Over time, the models learn not what looks right, but what works.
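As a toy illustration of this kind of objective (a sketch under assumed simplifications, not Augmenta's actual code; all names here are hypothetical), a functional score can treat any hard-constraint violation as disqualifying, rather than averaging it away as a perceptual metric would:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    """An axis-aligned duct/conduit segment: (x, y, z) start and end points."""
    start: tuple
    end: tuple

def count_clashes(design, obstacles, clearance=0.1):
    """Count segment endpoints lying within `clearance` of an obstacle point.

    A crude stand-in for real clash detection: any nonzero count makes
    the design unusable, regardless of how 'plausible' it looks.
    """
    violations = 0
    for seg in design:
        for p in (seg.start, seg.end):
            for obs in obstacles:
                if max(abs(a - b) for a, b in zip(p, obs)) < clearance:
                    violations += 1
    return violations

def functional_score(design, obstacles):
    """Binary validity: 1.0 only if there are *zero* hard-constraint violations."""
    return 1.0 if count_clashes(design, obstacles) == 0 else 0.0
```

The point of the binary score is exactly the asymmetry described above: a route with one unresolved clash scores 0.0, the same as a route with fifty.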

What does the data flow look like when building a foundation model for construction intelligence, especially for non-residential structures with complex systems like HVAC and electrical networks?
The data flow has two sources that work in tandem. The first is synthetic, via generative design: we use purpose-built generators to produce large volumes of realistic building models and design scenarios, encoding deep domain knowledge about how MEP systems are configured in practice. This gives us training data at a scale that real project data alone cannot provide. The second is real: every project that runs through our platform generates a paired dataset — our automated design alongside human corrections. This yields labeled supervision that captures not just what a valid design looks like, but where expert human judgment diverges from automated output, and why.
Together, these two sources feed a foundation model that learns construction intelligence bottom-up, from geometry and constraints, rather than top-down from language. For complex systems like HVAC and electrical networks, this matters enormously: the spatial interactions are too intricate, and the functional requirements too strict, for a model trained primarily on language or visual data to internalize reliably.
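One way to picture the paired-dataset idea is a record holding the automated design and the engineer's corrected version, from which per-element labels fall out (a hypothetical sketch; the record shape and label names are illustrative assumptions, not Augmenta's schema):

```python
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    """One project's paired data: automated design vs. engineer's correction.

    Each design is a dict mapping element IDs to placements (illustrative).
    """
    automated: dict
    corrected: dict

def extract_correction_labels(record):
    """Derive labeled examples from where expert judgment diverged.

    Elements the engineer left untouched become implicit positives;
    elements they moved or deleted are labeled with the correction itself.
    """
    labels = {}
    for elem_id, placement in record.automated.items():
        fixed = record.corrected.get(elem_id)
        if fixed is None:
            labels[elem_id] = ("deleted", placement, None)
        elif fixed != placement:
            labels[elem_id] = ("moved", placement, fixed)
        else:
            labels[elem_id] = ("accepted", placement, placement)
    return labels
```

The "accepted" cases matter as much as the corrections: they tell the model which automated decisions an expert was willing to sign off on.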

How are path-finding agents integrated into your generative pipeline, and what types of constraints or performance criteria do they need to satisfy in real-world design contexts?
Pathfinding agents form the computational backbone of our generative pipeline, solving for valid routes under simultaneous hard constraints: geometric clearance, structural penetration rules, clash avoidance, and cost, among others. AI and machine learning guide the search, while the computational engine ensures physical correctness. Speed is equally important: designers must be able to iterate, which means agents must solve at interactive speed despite the underlying search complexity.
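A minimal sketch of the kind of constrained search involved — plain breadth-first search on a 2D grid where blocked cells stand in for hard constraints such as clashes and no-penetration zones. Real MEP routing engines are far more sophisticated (3D, weighted costs, learned guidance), and every name here is illustrative:

```python
from collections import deque

def find_route(grid_size, start, goal, blocked):
    """Shortest axis-aligned route on a grid, avoiding blocked cells.

    `blocked` models hard constraints: a route through any blocked
    cell is invalid, full stop. Returns None if no feasible route exists.
    """
    if start in blocked or goal in blocked:
        return None
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:   # walk parent pointers back to start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cur
                frontier.append(nxt)
    return None  # no feasible route under the given constraints
```

The hard-constraint character shows up in the return type: the search either produces a fully valid route or reports infeasibility; there is no "almost valid" output.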

What software infrastructure or toolchains does Augmenta rely on or provide to support automated delivery of functional building designs, such as the electrical systems you deployed in Michigan schools?
Augmenta’s Construction Platform (ACP) integrates seamlessly into existing industry workflows, meeting engineers where they already work. Specifically, ACP provides an Autodesk Revit add-in to access a building’s structural geometry. This data is uploaded to our cloud compute platform, where the heavy lifting happens.
On the cloud side, Augmenta’s proprietary generative AI performs three sequential but tightly coupled operations: spatial analysis to understand the building site and its constraints, pathfinding to determine valid and constructible routes for electrical systems through the building, and finally generative design to produce complete, construction-ready electrical raceway layouts.
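The three cloud-side stages can be pictured as a simple sequential composition where each stage enriches a shared context that the next stage consumes (purely illustrative; the function names and context shape are assumptions, not ACP's API):

```python
def run_pipeline(building_geometry, stages):
    """Thread a building model through sequential, tightly coupled stages.

    Each stage receives the accumulated context and returns its result,
    which is added to the context; a failure in any stage halts the run.
    """
    context = {"geometry": building_geometry}
    for name, stage in stages:
        result = stage(context)
        if result is None:
            raise RuntimeError(f"stage {name!r} found no valid solution")
        context[name] = result
    return context

# Illustrative stand-ins for the three operations described above.
stages = [
    ("spatial_analysis", lambda ctx: {"constraints": ["clearance", "penetration"]}),
    ("pathfinding", lambda ctx: {"routes": [("panel", "fixture")]}),
    ("generative_design", lambda ctx: {"raceways": len(ctx["pathfinding"]["routes"])}),
]
```

The coupling is in the shared context: pathfinding reads the constraints found by spatial analysis, and generative design builds its raceway layouts from the routes pathfinding produced.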
The Michigan schools project is a concrete example of this pipeline delivering real output — not a prototype or a proof of concept, but designs that were actually built.
What do you hope to share with and learn from peers at CDFAM, particularly in bridging generative AI and functional design requirements in architectural and engineering applications?
CDFAM brings together exactly the community I want to engage with — researchers and practitioners who understand that generative AI in physical domains, such as design and manufacturing, is a fundamentally different problem from generative AI for visual content. I hope to share Augmenta’s vision and our technical approach, and to provoke an honest conversation about where current AI can succeed in functional design and where it still falls short. That conversation is more valuable when it happens across domains rather than within a single one, such as AEC.
I am equally curious to learn. The tension between generative flexibility and functional validity that we navigate in building design almost certainly appears in other engineering domains, such as product design, each with its own constraint language and its own definition of what it means to function. I suspect the concept of functional AI, and the two-layer architecture of computational engines paired with learned reasoning, may resonate and find analogues across these fields. If CDFAM can surface those parallels and seed collaborations around them, that alone would make it worthwhile.

Join leading experts in computational design, AI, and engineering at CDFAM Barcelona, April 8-9, 2026. Connect with practitioners like Richard Zhang, hear first-hand how these technologies are being applied across architecture, engineering, and manufacturing, and be part of the conversations that matter in this field.





