Design for Artificial Intelligence: A Force Multiplier for Additive Manufacturing

Chris McComb – Carnegie Mellon University

In this nascent phase of integrating artificial intelligence into design and engineering, it is vital that we strategize our approach to Design for Artificial Intelligence (DfAI).

As with any tool, its influence extends beyond the results it enables us to build to the act of using it.
While many researchers are concentrating their efforts on understanding the intricacies of infusing AI into engineering and tackling a plethora of data-related hurdles, Chris McComb and his colleagues at Carnegie Mellon University are investigating not only the mechanical process and individuals’ approaches toward DfAI, but also the dynamics of team-based approaches. Their exploration extends to understanding the nuances of how engineers engage with AI-enabled engineering software and, equally importantly, their interpersonal interactions within this context.

In the upcoming CDFAM Computational Design (+DfAM) Symposium, Chris will be delving into the details of his research, so we posed a few questions about his work, the obstacles he perceives in embracing DfAI, and his tips for integrating AI into design and manufacturing organizations to maximize value.

Can you first tell us a little about your background and how you came to be researching Design for Artificial Intelligence (DfAI) in the context of Mechanical Engineering?

I was one of those kids who liked to ask “why”, and as the son of a short order cook and a heavy equipment operator, there were lots of natural “whys” surrounding me from a young age! It took me a while to figure out that I wanted to do mechanical engineering, but everything clicked once I started down that path. Mechanical engineering gave me the tools to understand so much of the world around me, from the forces in my dad’s chainsaw to the temperature of my mom’s skillet. 

Although it took me a while to find mechanical engineering, I’ve always been fascinated with artificial intelligence.

When I was a toddler, I’d take tools from my dad’s shop and lay them out on the lawn, hoping they’d get struck by lightning and “come alive.”  Thankfully my dad taught me how to put those tools to better use! The methods I use now are a little more rigorous, but I’m still just as driven by fascination and curiosity.

Given that the quality of any AI-generated material depends on the quality of the data used to train the machine learning algorithms, how can we best seed content with metadata to allow AI to create and compare against multiple requirements from the same data?

Metadata is the lifeblood of AI engineering, and this is one of the things that differentiates our niche from our colleagues in computer science – not only do we crave the metadata, but it is absolutely essential for what we need to do!

For example, a simple voxelized file format or an STL retains geometry information about a part. However, for many useful engineering AI applications we need much more than just geometry – we need surface finish information, material specifications, etc. 

There’s a spectrum of metadata that we might be interested in for engineering applications. At one end of the spectrum is direct declarative information, like specifying the surface finish on a face of the part. On the other end of the spectrum is metadata that results from a time-consuming process, such as data related to test builds or high-accuracy simulations. But the spectrum also varies in generalizability – the declarative information is likely much more consistent and standardizable than the more specific metadata. Let’s target that low-hanging fruit! 
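As a toy illustration of what the declarative end of that spectrum might look like in practice, here is a minimal part record that pairs geometry with the kinds of metadata mentioned above. The field names and schema are purely hypothetical, not drawn from any existing standard:

```python
from dataclasses import dataclass, field

@dataclass
class PartRecord:
    """Hypothetical record pairing geometry with engineering metadata."""
    geometry_file: str                    # e.g. an STL or voxel file
    material: str                         # material specification
    # Declarative, easily standardized metadata: face id -> Ra (micrometers)
    surface_finish_um: dict = field(default_factory=dict)
    # Costlier, less standardizable metadata: test builds, simulations, etc.
    process_notes: list = field(default_factory=list)

part = PartRecord(
    geometry_file="bracket.stl",
    material="Ti-6Al-4V",
    surface_finish_um={"face_12": 3.2},
)
part.process_notes.append({"type": "test_build", "result": "pass"})
```

Even a lightweight schema like this makes the declarative metadata queryable alongside the geometry, which is exactly the "low-hanging fruit" described above.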

Many companies in the manufacturing space consider their design, simulation, and manufacturing data to be their ‘secret sauce’ that they are not willing to share, sometimes even internally. How can we build a warehouse of meaningful data both inside and outside of these walled gardens?

This perfectly highlights the “AI Hierarchy of Needs” by Monica Rogati.

We all want to get to the top of the pyramid – deploying powerful AI models that revolutionize our businesses. However, many of us are still at the base of that pyramid, figuring out how to effectively log and clean data. I don’t have answers here, but there are some areas that might yield solutions.

The first is dataset distillation. The idea is this: is it possible to create a very small, synthetic dataset that can replicate the training results of a larger, ground truth dataset?

While this gets around the issues of directly sharing IP-protected data, it could still be possible to extract the “secret sauce” from the synthetic distilled dataset.
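To make the idea concrete, here is a minimal sketch for the special case of linear least squares, where a tiny synthetic dataset that matches the sufficient statistics of the full dataset reproduces training exactly. This is a toy analogue, not the gradient-matching methods used to distill datasets for neural networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Ground truth" dataset: 10,000 points, 3 features (synthetic for this demo).
X = rng.normal(size=(10_000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=10_000)

# Linear least squares depends on the data only through X^T X and X^T y.
# Build a 3-point synthetic dataset that reproduces both exactly.
L = np.linalg.cholesky(X.T @ X)       # X^T X = L L^T
X_syn = L.T                           # X_syn^T X_syn = L L^T = X^T X
y_syn = np.linalg.solve(L, X.T @ y)   # X_syn^T y_syn = X^T y

# Training on either dataset solves the same normal equations.
w_full = np.linalg.lstsq(X, y, rcond=None)[0]
w_syn = np.linalg.lstsq(X_syn, y_syn, rcond=None)[0]
print(np.allclose(w_full, w_syn))     # prints True
```

Note how the caveat above shows up even in this toy: the three synthetic points still encode the normal equations of the original data, so the "secret sauce" is compressed, not erased.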

The second opportunity is federated learning. In this approach, multiple models are trained on multiple, distinct datasets and then combined to produce a single model. 
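A minimal sketch of that idea, using one-shot parameter averaging over locally trained linear models. Production federated learning systems (e.g. FedAvg) iterate this exchange over many rounds, and the datasets here are synthetic stand-ins for each organization's private data:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([1.5, -2.0])

# Three organizations each hold a private dataset that never leaves their walls.
clients = []
for _ in range(3):
    X = rng.normal(size=(500, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=500)
    clients.append((X, y))

# Each client fits a model locally; only model parameters are shared.
local_weights = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in clients]

# The server combines the local models by averaging their parameters.
global_w = np.mean(local_weights, axis=0)
```

The key property for the walled-garden problem is that only `local_weights` crosses organizational boundaries, never the raw `(X, y)` data.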

There might be less technical solutions as well – industry consortia that align incentive structures to encourage data sharing, federal initiatives to generate open data, and others. 

Much of the research you are undertaking starts with understanding the people that are seeking ‘AI superpowers’. Can you explain the process of gathering their needs and how you use that information to then proceed in empowering them?

If the recent waterfall of Large Language Models has shown us anything, it is that the desire for “AI superpowers” is huge!

Although my team’s work focuses on AI, we always seek to keep the human at the center of the experience. We use a lot of different approaches to keep our fingers on the pulse: attending conferences like CDFAM and SFF, conducting brownbag sessions and workshops, fielding surveys, and working closely with industry through graduate internships and co-ops. In many cases, we also serve as our own lead users and pursue the AI superpowers that we have always personally wanted to have!

In general, I think the AI research community should be engaging with stakeholders much more than we are. We must learn to advocate for the needs of the people, rather than the needs of AI.

In the early 80’s, Actor-Network Theory explored how non-human objects play an active role in human interactions (to wildly simplify the research). With AI tools, human-computer interaction is stepped a few notches above a passive tool like parametric CAD, which requires all inputs to be human.

How do you see the social dynamic within a team change when they are collaborating with AI AND each other simultaneously, especially given the tendency for AI to hallucinate/provide different answers to the exact same question/prompt?

One thing that I encourage people to keep in mind when working with AI is that we are not Teflon – just as we influence AI tools, the tools also influence us. Exactly how AI influences us depends on how we work with it, though. 

  1. When we use AI as a tool, our focus shifts away from detailed operations and towards higher-value sensemaking and problem framing activities.
  2. Welcoming AI as a partner, on the same level as other members of the team, makes it possible for hybrid teams to outperform human teams with nearly twice as many people. 
  3. If we empower AI to coach and guide our problem-solving process, we can achieve real-time, flexible guidance that boosts overall problem-solving performance.

While it’s tempting to think that AI is always helpful, that’s unfortunately not the case!

In some of our other work, we built an AI Designer that outperformed human counterparts. However, teams composed of human engineers and an AI designer actually performed worse than the AI or human alone. While we are starting to see huge potential behind AI support for design, we also see equally huge risks. 

How do you see AI collaboration being introduced and applied in a mechanical engineering setting?

We can think of AI as a “digital animal” that should “live” in the digital environments that are already part of the mechanical engineering world.

Over the course of COVID, many of us humans got more comfortable in these digital environments too! I’m thinking of things like Slack, Microsoft Teams, and Zoom (allowing AI to contribute as a team member would), but also CAD and simulation engines (allowing AI to access the tools of the trade).

It doesn’t stop there though – manufacturing machines are increasingly digital in nature and in some cases host the digital twin associated with the machine. AI agents are capable of “inhabiting” that digital twin, and by extension controlling, monitoring, and driving the machine itself.

What other research are you seeing in academia that most interests you in the field of DfAI and why? 

There are many open questions around DfAI but thankfully there are lots of excellent minds on the job. Here are some people and topics to keep an eye on:

  1. Kosa Goucher-Lambert at UC Berkeley studies the intersection of neurocognition and design, building a deep understanding of how AI changes the way that we work and think.
  2. Bryony DuPont at Oregon State is harnessing AI to help us design and manufacture more effective renewable energy systems.
  3. Tahira Reid at Penn State researches how inclusivity and trust influence the characteristics of human-AI collaboration.
  4. Daniel Selva at Texas A&M is exploring how AI can help us design better space missions, important for both understanding our own world and exploring others. 
  5. Researchers in the Human+AI Design Initiative at Carnegie Mellon University are contributing to DfAI from perspectives as diverse as organizational psychology, infrastructure maintenance, and design for emerging markets.

You and your team offer consulting engagements to help teams understand how to apply and adopt AI into their design process. Can you give us an example of a previous engagement, and what engineering problems would indicate a company should reach out to engage your team?

My team sits at the intersection of mechanical engineering, AI/ML, and human-centered design, so we are well-suited to address a variety of problems at that intersection.

As one example, we helped a traditional manufacturer strategically pivot into additive manufacturing, helping them plan capital expenditures and also providing some customized design software to support the types of parts in their catalog.

We are currently working with another organization to examine potential applications of AI in the construction industry. So really, we run the gamut from early-stage business development exercises to the delivery of custom software to support business value.

I am really excited that you will be presenting some of your research at CDFAM 23. What will you be talking about and what else are you looking forward to learning about at the event?

I’m excited too! I’ll be speaking about the DfAI framework. Specifically, I’ll be sharing tips for how to integrate AI seamlessly into design and manufacturing organizations to maximize value.