Is AI Driven Design a Computational Dead End in Engineering?

Recent academic investigations question whether AI can ever really compete with ‘traditional’ topology optimization.

There is a lot of discussion, and no shortage of claims and hope, that AI-driven design will solve the most complex engineering problems through advanced manufacturing.

In both academia and engineering practice, this would most likely be realized through replacing or augmenting the design-of-experiments and topology optimization workflows that lean heavily on simulation feedback loops to solve high-performance engineering problems.

Problem boundary conditions considered for exemplifying the effect of grey scale and structural disconnections. Image courtesy of R. V. Woldseth et al.

A recent paper by Rebekka Vaarum Woldseth and Ole Sigmund, who leads the TopOpt Group at DTU, along with colleagues Niels Aage and J. Andreas Bærentzen, reviewed academic publications exploring AI in topology optimization and found that the current results do not meet the expectation that AI could efficiently or accurately replace gradient-based topology optimization.

In this interview with the authors, we ask why AI might not currently be viable for advancing topology optimization, and how it might be useful in other supporting tasks in mechanical and structural engineering.


Q. Your recent paper, entitled ‘On the use of artificial neural networks in topology optimisation’, questions the recent trend in academic research of attempting to train AI to solve topology optimization problems.

Even though a lot of attention (and research funding) has focused on using AI to replace gradient-based topology optimization, there seems to be little positive progress. Can you explain why?

A. There are several reasons for this. First of all, ANNs are good at interpolation, whereas optimization seeks the extremes.

Hence, by nature, optimization seeks solutions outside the dataset, and it is hard to imagine that ANNs will be successful in coming up with new and improved solutions going beyond the training set. Second, TO algorithms have been refined over several decades and now routinely and efficiently solve inverse design problems with millions and even billions of variables using just a few hundred (expensive) function evaluations. Such highly refined and specialized algorithms are simply hard to compete with.
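To make the interpolation point concrete for readers, here is a minimal sketch (my own illustration, not taken from the paper) in which a small neural network fits a toy response well inside its training range but fails when asked to extrapolate beyond it, which is exactly the regime optimization pushes into. The response function and network size are arbitrary assumptions.

```python
# Minimal illustration (not from the paper): an MLP trained on samples from
# [-1, 1] interpolates well inside that range but extrapolates poorly.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(500, 1))   # training inputs
y_train = np.sin(3.0 * x_train).ravel()           # toy "performance" response

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(x_train, y_train)

for x in (0.5, 3.0):  # inside vs. outside the training range
    pred = model.predict(np.array([[x]]))[0]
    print(f"x = {x:+.1f}  predicted = {pred:+.3f}  true = {np.sin(3.0 * x):+.3f}")
```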

Image courtesy of R. V. Woldseth et al.

Q. Training AI to solve topology optimization problems would require a large dataset of quality solutions that is unlikely to already exist, forcing the use of simulation tools to synthesize results, which would be computationally expensive.

How many iterations would be required to build such a dataset, and what would be the benefits or risks of this approach?

A. In our review paper we discuss a break-even threshold that indicates how many optimization problems one has to solve before the cost of establishing the dataset and training the network pays off. In the best cases seen in the literature, this break-even number is counted in thousands, and in the worst cases in hundreds of thousands, even for problems with just a few thousand design variables. At the same time, these approaches score low on generality, meaning that they only work for a limited range of boundary conditions and other parameters.
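As a reader's aside, the break-even logic can be written out as simple arithmetic. The numbers below are purely illustrative assumptions, not figures from the paper; the point is only that the dataset and training cost must be amortized over many future problems before a learned surrogate pays off.

```python
# Back-of-the-envelope sketch with assumed (illustrative) costs, expressed in
# units of one conventional TO solve.
cost_per_to_solve  = 1.0       # one conventional TO solve (reference unit)
dataset_size       = 20_000    # assumed number of TO solves needed for training data
training_cost      = 500.0     # assumed cost of training the network
cost_per_ann_query = 0.01      # assumed cost of one trained-network prediction

# Break even when N direct solves cost as much as dataset + training + N queries:
#   N * cost_per_to_solve = dataset_size * cost_per_to_solve + training_cost + N * cost_per_ann_query
break_even_n = (dataset_size * cost_per_to_solve + training_cost) / (
    cost_per_to_solve - cost_per_ann_query
)
print(f"Break-even after ~{break_even_n:,.0f} optimization problems")
```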

Q. In any simulation-driven process there is a need to balance the resolution, i.e. the number of elements (higher gives greater potential accuracy), against computational processing time (longer runs are more expensive).

How does this trade-off come into play when training an AI versus running ‘traditional’ topology optimization?

A. As discussed before, TO algorithms routinely solve problems with millions and even billions of elements, including multiple physical and geometrical constraints, whereas ANNs are still limited to a few thousand variables, simple objectives and a single volume constraint. High resolution is required either to resolve local field properties like stress concentrations or to allow optimization of huge structures like airplane wings or bridge decks.

A single finite element evaluation may take hours on supercomputers, making it infeasible to generate enough data to train an ANN.
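To give a feel for the scale of that cost, here is a rough, purely hypothetical calculation; the hours per analysis, evaluations per solve and dataset size are assumptions chosen only to show how quickly the data-generation bill grows relative to a single direct optimization.

```python
# Rough scale estimate with assumed numbers (not from the paper).
hours_per_fe_analysis = 2.0      # assumed cost of one high-resolution FE analysis
analyses_per_to_solve = 300      # "a few hundred" evaluations per TO run
training_samples      = 10_000   # assumed dataset size for a surrogate

one_solve_hours = hours_per_fe_analysis * analyses_per_to_solve
dataset_hours   = one_solve_hours * training_samples

print(f"One direct TO solve:          ~{one_solve_hours:,.0f} compute-hours")
print(f"Dataset of {training_samples:,} solved problems: ~{dataset_hours:,.0f} compute-hours")
```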

Q. If an AI approach is built that unintentionally optimizes for flaws in the data provided, and we do not know this because of the ‘black box’ nature and ‘hidden layers’ of the network, how could we recognize and counter these flaws?

A. It is a general observation that any optimization will take advantage of errors in the modelling process. This is an issue that has been resolved in the TO field by introducing regularization techniques and geometry control. 

With an AI black box model it becomes hard to identify such errors if they are not directly visible – and even harder to check if a solution is properly optimized. 

One would have to perform post-analysis, if not post-optimization, with established algorithms to check the efficiency and validity of an AI-generated result.


Q. We are seeing claims online that commercial design software is using ‘AI-driven design’.

Are you aware of any software that ‘actually uses AI’ for topology optimization, or whether this would be technically and financially feasible given the issues raised in your paper?

A. The term “AI driven” is often misused. Sometimes it is used for anything computer generated. 

If we limit the term to algorithms that are not pre-programmed, i.e. algorithms whose outcome depends on learning from a large dataset, we are not aware of any commercial product that uses AI as a substitute for TO.

Q. A previous paper published in 2011, titled ‘On the usefulness of non-gradient approaches in topology optimization’, similarly questioned the application of genetic algorithms to topology optimization, exposing them as an elegant theory but a practical dead end.

Are we at a similar place with AI for topology optimization, where we need to take an honest look and call it a dead end? Or should we continue to investigate, with the proviso that current computational processing is inefficient and most likely giving us inaccurate results, but that future technology may converge to make it viable?

A. Indeed, in the review paper we conclude that using current state-of-the-art AI approaches as a direct substitute for TO is a dead end.

However, we also highlight a number of support tasks (like analysis acceleration, post-processing and data reduction) where there is potential, not to replace TO, but to improve its computational speed or result quality.

Q. Any last points you would like to make that were not covered in the paper, or research you are exploring (such as de-homogenization) that you would like to discuss?

A. Actually, even though the review paper was just published, we have already discovered a point that we would have formulated differently. 

In the review paper, we mention an AI-based approach for performing so-called de-homogenization (establishing high-resolution multi-scale structures from coarse-grained TO) that we found had some potential. 

However, in ongoing work, we have found that the same process can be performed just as efficiently, and with better quality, using standard image processing techniques.

“…current state-of-the-art AI approaches as a direct substitute for TO is a dead end.”

I would like to thank both Rebekka and Ole for their time, both in undertaking the research and in answering questions about AI in topology optimization, a topic that is currently murky due to poorly defined terminology, academic research following funding and, at best, misunderstanding by marketers and journalists.

It seems that, with currently available computing technology, AI may be viable for interpolating between known solutions, but the computational expense, and the tendency of optimization algorithms to exploit flaws in modeling, mean that AI is unlikely to be a path to high-performance topology optimization any time in the near future.

If you, your research institution or company are working on, or have incorporated, AI in design research or software, please reach out, as I (along with Ole and Rebekka) am always interested in learning more.


About the Authors.

Rebekka Vaarum Woldseth is a second-year Ph.D. student at the Department of Civil and Mechanical Engineering at the Technical University of Denmark. She has a background in optimization, algorithms and data analysis, and is investigating novel techniques for topology optimization in her Ph.D. project.

Ole Sigmund is a Professor and Villum Investigator at the Department of Civil and Mechanical Engineering at the Technical University of Denmark. He is one of the founders of, and main contributors to, the development of topology optimization methods in academia and industry, and has published more than 300 academic papers.