Learning to Generate Shapes with AI
Karl D.D. Willis, Autodesk
As we delve further into the intersection of artificial intelligence and engineering, we are turning our attention to the work carried out by Karl D.D. Willis and his team at the Autodesk Research AI Lab.
Pushing the boundaries of 3D generative AI research, Willis and his colleagues are not just focusing on how AI can assist in the 3D creation and modeling process, but also on its potential role in the context of full mechanical system assemblies.
This approach moves beyond much of the current research that explores what a geometry ‘might’ look like, to a more nuanced understanding of how components ‘should’ function or be modeled.
Willis delves into some of the research he will be presenting at CDFAM.
From generating parametric CAD, to using simulation for obtaining performant results beyond the constraints of training data, to envisioning what ‘Clippy’ for engineering might look like, Willis offers his insights into the future of AI in mechanical engineering.

Could you start by describing your current role as a Senior Research Manager at Autodesk and sharing some details about the projects you are currently working on?
I work in the Autodesk Research AI Lab. We work on learning-based approaches to solving problems across the design and make process, whether in manufacturing, architecture, or construction.
Can you share an overview of the topics you plan to cover in your upcoming presentation at CDFAM, titled ‘Learning to Generate Shapes’, and how it relates to these projects?
There has been amazing progress with text and image generation using machine learning. We have also seen some great initial work with 3D shape generation. However, one area that remains in its infancy, and will be the focus of my talk, is the generation of editable shapes that a designer can directly manipulate. One example is generating and editing vector graphics such as SVG (see how ChatGPT can attempt this). More specific to manufacturing, we might want to generate parametric CAD models complete with modeling history and constraints.
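To make the idea of ‘editable’ output concrete, here is a small illustration of my own (not taken from the talk): a plate with a centered hole described by a handful of parameters that regenerate valid SVG, which a designer can keep manipulating, unlike a fixed mesh or raster image. The shape, parameter names, and dimensions are all made up for the example.

```python
# Minimal sketch: a parametric, editable shape emitted as SVG.
# Changing any parameter regenerates a consistent drawing.
def plate_svg(width: float, height: float, corner_radius: float, hole_d: float) -> str:
    """Generate an SVG 'plate with a centered hole' from editable parameters."""
    cx, cy = width / 2, height / 2
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<rect width="{width}" height="{height}" rx="{corner_radius}" fill="none" stroke="black"/>'
        f'<circle cx="{cx}" cy="{cy}" r="{hole_d / 2}" fill="none" stroke="black"/>'
        f'</svg>'
    )

print(plate_svg(width=120, height=60, corner_radius=8, hole_d=10))
# Edit a single parameter and the whole shape updates consistently:
print(plate_svg(width=120, height=60, corner_radius=8, hole_d=14))
```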

As you and your team have been exploring machine learning at the Autodesk Research AI Lab for a number of years, how has the recent surge of interest and development in large language models like ChatGPT, as well as early examples such as Point-E’s text-to-2D-to-3D generative AI, influenced your research?
One of the great things about having research happen out in the open is we can very quickly stand on the shoulders of giants to advance what is possible with ML.
Models like CLIP have had a huge impact on the field, and the Autodesk AI Lab has been quick to leverage them; for example, we published the first text-to-3D work in the field.

Sanghi, Aditya, et al., CLIP-Forge: Towards Zero-Shot Text-to-Shape Generation, CVPR 2022.
Using AI to assist in CAD modeling for repetitive processes, similar to autocomplete with text or Clippy guiding a designer through a UX instead of menu diving, has the potential to save time and reduce friction in the learning curve of new modeling processes. Can you explain how you and your team are exploring and thinking about exposing this functionality?
I think autocomplete is a natural application for design for a number of reasons. Firstly, the designer doesn’t need to do anything differently; the ML model does the work of interpreting their design and predicting what might come next. Secondly, the designer can opt in or out of the recommendations provided.
Finally, autocomplete aligns well with how most language models are trained to predict the next word in a sentence.
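As a rough illustration of that analogy (a toy sketch, not Autodesk’s implementation), a CAD modeling history can be treated like a sentence of operation tokens, and a small sequence model can be trained to predict the next operation. The operation vocabulary and toy histories below are assumptions made for the example.

```python
# Minimal sketch of CAD "autocomplete" framed as next-token prediction.
import torch
import torch.nn as nn

OPS = ["<start>", "sketch", "extrude", "fillet", "shell", "hole", "chamfer", "<end>"]
TOK = {op: i for i, op in enumerate(OPS)}

# Toy "modeling histories": ordered lists of feature operations (illustrative only).
histories = [
    ["<start>", "sketch", "extrude", "fillet", "<end>"],
    ["<start>", "sketch", "extrude", "hole", "chamfer", "<end>"],
    ["<start>", "sketch", "extrude", "shell", "fillet", "<end>"],
]

class NextOpModel(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))     # (batch, seq_len, dim)
        return self.head(h)                     # logits over the next operation

model = NextOpModel(len(OPS))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Train to predict each operation from the ones before it.
for _ in range(200):
    for seq in histories:
        ids = torch.tensor([[TOK[t] for t in seq]])
        logits = model(ids[:, :-1])
        loss = loss_fn(logits.reshape(-1, len(OPS)), ids[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

# "Autocomplete": suggest what might come after sketch -> extrude.
prefix = torch.tensor([[TOK["<start>"], TOK["sketch"], TOK["extrude"]]])
probs = model(prefix)[0, -1].softmax(dim=-1)
for i in probs.argsort(descending=True)[:3]:
    print(f"suggest: {OPS[int(i)]}  p={probs[i].item():.2f}")
```

In practice the vocabulary would cover real feature operations and their parameters, but the training objective mirrors next-word prediction in a language model, which is the point of the analogy.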

How can we move beyond AI assisting what happens inside the software, such as 3D modeling, to AI helping with design intent and manufacturing processes that occur outside of the software?
What we can work on is determined by the availability of data. The type of data we might need to understand design intent is very specialized.
I think of it as understanding the different levels of a design, e.g. a trimmed cylinder surface, made with the fillet feature, forming a rounded internal corner, to avoid stress concentrations and reduce machining cost. Obtaining this granularity of data for all aspects of the geometry is challenging, so it’s an open question right now how to tackle this area.

In my conversations with you and other researchers exploring AI for mechanical design and engineering, the lack of rich data that captures design intent and manufacturing processes seems to be a major obstacle in training algorithms. Is it possible to overcome this lack of data by using simulations, which may be computationally expensive, or by enriching existing data with synthetic data? Or, do we need to undertake a manual effort to capture as much real-world data as possible?
Right now, simulation can be really helpful as an evaluation metric or as an additional part of the loss function.
Metrics for evaluating generative models are very tricky and often come down to comparing the distribution of generated designs with that of the training data. This only tells us whether the generated designs are like the training data.
But simulation allows us to evaluate more objectively, so we might be able to find designs that are unlike the training data but perform well, i.e. the network produces something that’s both novel and useful.
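A minimal sketch of that idea, under my own assumptions rather than Autodesk’s pipeline: score each generated design with a simulation stand-in (here a placeholder function where a real solver such as FEA or CFD would go) and report a novelty measure based on distance to the training data, so that designs which are both novel and high-performing stand out.

```python
# Illustrative sketch: simulation as an objective score alongside a novelty measure.
import numpy as np

rng = np.random.default_rng(0)
training_designs = rng.normal(0.0, 1.0, size=(500, 4))   # designs the model was trained on
generated_designs = rng.normal(0.5, 1.2, size=(20, 4))    # designs the model produced

def fake_simulate(design: np.ndarray) -> float:
    """Placeholder 'performance' score; a real pipeline would call a solver here."""
    return float(-np.sum((design - np.array([1.0, 0.0, -1.0, 0.5])) ** 2))

def novelty(design: np.ndarray, reference: np.ndarray) -> float:
    """Distance to the nearest training design: higher = less like the training data."""
    return float(np.min(np.linalg.norm(reference - design, axis=1)))

# Rank generated designs by simulated performance and report novelty for each,
# so designs that are both unlike the training data and performant can be found.
scored = sorted(
    ((fake_simulate(d), novelty(d, training_designs)) for d in generated_designs),
    key=lambda t: t[0],
    reverse=True,
)
for perf, nov in scored[:5]:
    print(f"performance={perf:7.2f}  novelty={nov:5.2f}")
```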
In addition to the exploration of training AI on the 3D modeling process, your team has been collaborating with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to explore the assembly and disassembly of mechanical designs. Could you explain how these two areas of research might connect?
Assemblies are interesting because you get a broader view of how a single part fits in and functions in context.
For example, with Automated Modeling in Fusion 360, the designer defines how they want a single part to be generated inside of an assembly. In this type of problem, the part can be generated using various approaches, including learning-based methods, or alternatively it can be retrieved from a database of off-the-shelf parts that might meet the requirements.
So both geometry synthesis directly within an assembly and the retrieval and placement of parts in an assembly can aid designers.
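For the retrieval side, a minimal sketch of one common approach (my own illustration, not a Fusion 360 feature): embed catalogue parts and the requirement into the same vector space and return the nearest matches. The part names, embeddings, and requirement vector below are placeholders.

```python
# Illustrative sketch: nearest-neighbour retrieval of off-the-shelf parts.
import numpy as np

catalogue = {
    "bracket_a": np.array([0.9, 0.1, 0.3]),
    "bracket_b": np.array([0.2, 0.8, 0.5]),
    "standoff_c": np.array([0.1, 0.2, 0.9]),
}

def retrieve(requirement: np.ndarray, parts: dict, k: int = 2):
    """Return the k catalogue parts most similar (cosine) to the requirement vector."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(parts, key=lambda name: cosine(requirement, parts[name]), reverse=True)
    return ranked[:k]

print(retrieve(np.array([0.8, 0.2, 0.4]), catalogue))  # e.g. ['bracket_a', 'bracket_b']
```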

Once we have AI helping us solve every mechanical design, manufacturing, and assembly process, we will no longer need mechanical engineers. How can we also incorporate aesthetics and styling and negate the need for industrial designers as well?
We have seen AI get very good at text and image generation, but 3D generation for manufacturing still has some way to go.
Real assemblies can contain thousands of parts, all with associated simulation, tooling, and supply chain data. Rather than replace engineers and designers, I’d like to see AI enable human potential by allowing us to focus on the creative problem-solving part of the process rather than the tedious details.
Design should not be about selecting edges to make good fillets, but about coming up with ideas to solve challenging problems.

Great. Now that we’ve successfully alienated everyone except computer scientists and ‘prompt engineers’, what are you most excited to take away from the CDFAM symposium?
Within the machine learning community, mechanical design and manufacturing is viewed as a very specialized domain, when in reality it touches everyone’s lives through the manufactured objects that surround us every day. I’m most excited to meet and learn more about the work of the people pushing the boundaries of ML for design and manufacturing.
Connect with Karl and world-leading experts working at the intersection of AI, engineering, and advanced manufacturing at CDFAM in NYC, June 14–15, 2023.
