
AI and the Battle for the Soul of Design
Interview with Chris McComb, Carnegie Mellon University
Chris McComb, Associate Professor at Carnegie Mellon University, returns to CDFAM in NYC this year after presenting at the inaugural symposium in 2023, where his talk “Design for Artificial Intelligence: A Force Multiplier for Additive Manufacturing” explored early questions of human–AI collaboration in design for 3D printing and beyond (even though it was only two years ago).
Since then, his team’s research has evolved from proving that collaboration was possible (and sometimes perilous), to exploring and engineering robust systems that integrate AI agents, workflows, and validation methods into design practice.
Following is an interview ahead of his presentation at CDFAM in NYC where he will share new technical results alongside broader reflections on how today’s choices in tools, interfaces, and norms will shape the trajectory of design for decades to come.


“Since these guys showed up, all this technology has just sort of turned itself on.”
– Dr. Brakish Okun, Independence Day
Since your presentation at the first CDFAM in 2023, how has your research into human–AI collaboration in design evolved, and what new perspectives will you be sharing this year?
Over the last two years, my team’s work has progressively shifted away from being a primarily scientific endeavor (i.e., asking whether collaboration between humans and AI was even possible in design), to something closer to systems engineering.
We’re now building pipelines, agents, and workflows that have to operate robustly in real-world contexts, where questions of reliability, provenance, and validation matter just as much as novelty. That shift also changes the research we’re doing: we’re asking more questions about system scale and robustness, as well as about the nuanced human experience.
Alongside those engineering advances, we’ve also begun paying much more attention to the long-term trajectory of human-AI teaming for design.
It’s not enough to know what works today (which is hard enough!). We also need to think about how the patterns we’re establishing today (through UI/UX, social norms around AI use, feedback loops, and adoption trends) will shape design practice decades from now.
At CDFAM this year, I’ll be sharing some concrete technical results but also raising the question of how our near-term choices in tools and methods will set the stage for what comes after.
We’ve also been doing some exciting work with Ansys/Synopsys through the new Human+AI Design Initiative at CMU, and I hope to share some of that work as well!
How do you compare the role of AI as a coach versus a collaborator in design and engineering, and how might that role shift across different applications or steps within a design process?
A dichotomy that’s really useful for questions like this is thought partners versus production assistants, an idea I learned while participating in the Generative AI Teaching as Research (GAITAR) program.
A thought partner doesn’t execute the work directly, but instead helps you reason through a problem, ask better questions, and adopt alternate framings. A production assistant, by contrast, takes a direction and executes it with a high degree of autonomy.
Depending on where you are in the design process you might need either, neither, or both. In the early phases, acting as a thought partner can be valuable, because it provides a natural way to help expand the search space. Later, in detailed design or engineering analysis, it’s often more powerful to treat AI as a production assistant: generating options, running iterations, or automating time-consuming steps.
There certainly isn’t a “one size fits all” solution here. It’s going to require designers, engineers, and developers to be more mindful of their own patterns of work so that they can reach out for the right type of support at the right time.
What computational tools and AI frameworks are you currently using in your research, and how are they integrated into the design workflow?
First and foremost, I love Hugging Face!
The company has been an incredible boon to the academic community, not only as a place to store datasets and models, but also as a platform to share and disseminate them widely. Their ecosystem has really lowered the barriers to entry for experimentation and collaboration in AI research.
In my team’s work, we’re doing a lot with embeddings combined with conventional machine learning methods, exploring how the two can complement one another.
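As a toy illustration of pairing embeddings with a conventional machine learning method (this is a self-contained sketch, not drawn from the team’s actual pipeline), the snippet below classifies a design description with 1-nearest-neighbor over vectors. The trigram-hashing “encoder” is a deterministic stand-in for a real pretrained embedding model, such as one pulled from Hugging Face:

```python
# Sketch: embeddings + a conventional ML method (1-nearest-neighbor).
# The embed() function is a toy stand-in for a pretrained encoder.
import math
import zlib


def embed(text: str, dim: int = 32) -> list[float]:
    """Hash character trigrams into a fixed-size, L2-normalized vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[zlib.crc32(text[i:i + 3].encode()) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two normalized vectors is just a dot product."""
    return sum(x * y for x, y in zip(a, b))


def nearest_label(query: str, examples: list[tuple[str, str]]) -> str:
    """Classic 1-NN: return the label of the most similar labeled example."""
    q = embed(query)
    return max(examples, key=lambda ex: cosine(q, embed(ex[0])))[1]


examples = [
    ("lightweight lattice bracket for drone arm", "structural"),
    ("heat sink fin array for cpu cooling", "thermal"),
]
print(nearest_label("lattice support bracket for quadcopter", examples))
```

Swapping the toy encoder for a real embedding model changes nothing downstream, which is exactly the appeal: the “conventional” half of the workflow stays simple and interpretable.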
We’re also deeply invested in AI agents, especially the lightweight kind that can run directly on devices like your smartwatch, your phone, or your laptop. Especially when combined with conventional, deterministic workflows, these agents can deliver immense value.
At the same time, we’re running extensive assessments across different model sizes in order to better understand tradeoffs between performance, efficiency, and accessibility. Model size is not a one-dimensional question of “bigger is better”! Rather, it’s a systems engineering consideration.
Of course, once you build agents, you need to connect them somewhere. Lately, we’ve been experimenting with Slack and OnShape as modalities for exposing these AI agents. Together, these integrations start to illustrate what it looks like when AI becomes not just a tool, but an active partner embedded in the platforms where work already happens.
Can you describe the flow of data and decision-making between human designers and AI systems in your collaborative design experiments?
There isn’t a “one-size-fits-all” solution for the flow of data and decision-making in human organizations, and there probably isn’t a “one-size-fits-all” solution for AI systems either.
In human organizations, what makes this challenging is that people aren’t perfectly interchangeable. Likewise, neither are different AI systems. We need to think carefully about how to adapt workflows dynamically, enabling flexibility while also ensuring that AI-generated designs remain valid (more on that below).
To support that adaptability, we study a wide range of ways to share decisions between humans and AI agents: varying degrees of proactivity and reactivity, shifting the AI’s focus between the problem and the process, and adjusting the extent to which the AI emulates human experts. By tying these results together, we are working to identify heuristics that both AIs and humans can use to become better collaborators in design scenarios.
How do you approach validation and quality assurance for AI-generated designs, particularly when the reasoning process behind them may be opaque?
Some of the advances in AI-driven design make the design process much more similar to software development than to traditional engineering, and we should push that similarity as far as it can go.
Here I think the community should be integrating practices from our colleagues in software development, specifically test-driven development and continuous integration.
In software, we don’t release features without first writing tests that specify the required behavior. Doing the same for design would mean encoding requirements as explicit, testable constraints. Continuous integration complements TDD. Instead of treating AI outputs as one-off proposals, we should establish automated pipelines where each new design is immediately validated. If a design fails a test, it never reaches a human; if it passes, it’s logged with metadata about the decision path, making the process auditable.
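A minimal sketch of that idea, with a made-up `Design` record and hypothetical requirements (mass, stress, cost) standing in for whatever representation an AI pipeline actually emits, might look like:

```python
# Sketch: design requirements encoded as explicit, testable constraints.
# Design and the three requirements below are hypothetical examples.
from dataclasses import dataclass, field


@dataclass
class Design:
    mass_kg: float
    max_stress_mpa: float
    cost_usd: float
    metadata: dict = field(default_factory=dict)


# Each requirement is a named, executable check -- the "test" in TDD.
REQUIREMENTS = {
    "mass under 2 kg": lambda d: d.mass_kg < 2.0,
    "stress below yield (250 MPa)": lambda d: d.max_stress_mpa < 250.0,
    "unit cost under $40": lambda d: d.cost_usd < 40.0,
}


def validate(design: Design) -> tuple[bool, list[str]]:
    """Run every requirement; return overall pass/fail plus the failures."""
    failures = [name for name, check in REQUIREMENTS.items()
                if not check(design)]
    return (not failures, failures)


candidate = Design(mass_kg=1.6, max_stress_mpa=180.0, cost_usd=35.0)
ok, failures = validate(candidate)
if ok:
    # In a CI-style pipeline, passing designs would be logged with
    # provenance metadata so the decision path stays auditable.
    candidate.metadata["validated"] = True
print(ok, failures)
```

In a continuous-integration setup, `validate` would run automatically on every AI-generated candidate; failing designs never reach a human, and passing ones carry their validation record with them.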
Rather than trusting the AI itself, we should trust the rigorous engineering tools that have been hard-won over the last century. This not only provides a workflow and process for verifying AI-generated designs, but also lets human designers and engineers benefit from the same practices.
What do you hope to share with, and learn from, the CDFAM community about balancing the creative role of the human designer with the growing influence of AI?
What I hope to share with the CDFAM community is a way of thinking about human-AI balance not as a single destination, but as a set of equilibria that we might eventually settle into depending on economics and energy flows.
Every design activity carries costs, and the natural boundary between what humans do and what AI does will always shift toward places with attractive cost/benefit characteristics.
In other words, we need to start thinking seriously about the endgame.
There are lots of scenarios for us to consider. In one scenario, humans might stay closest to the physical side of design – if AI outpaces physical robotics, this might be an attractive equilibrium.
In another, our role shifts earlier, toward framing problems and setting intent, while AI handles the heavy lifting of iteration and detail.
There’s also a post-scarcity future where the balance is set by preference, with people gravitating toward the parts of design they find most meaningful. And then there’s my preferred future, where humans continue to do the most human parts of design: values, trade-offs, ethics, and meaning.
The question is: which equilibrium are we headed for? How do we settle into the right one? How do we even steer the ship?

Chris’s return to CDFAM highlights both the technical frontiers and the philosophical questions shaping design in the age of AI.
Join us at CDFAM in NYC to hear his perspective on where human–AI collaboration is headed, and to connect with other researchers and practitioners advancing computational design across engineering and architecture.
In the meantime, you can check out the recording of his presentation from 2023 to get an idea of how far we have come in the past two years.





