Shape Of Generative AI

Onur Yüce Gün gave the opening keynote at the inaugural CDFAM symposium in NYC in 2023, setting the tone and ethos of the event with a clear-eyed overview of the current state of computational design and the adoption of AI in design and engineering practice.

Importantly, this was not a purely technical presentation, even though Onur’s work at New Balance requires exacting attention to detail to apply these tools in the mass production of consumer products.

Onur’s keynote also focused on the human in the loop, the importance of a philosophical approach to computational tools and AI in the design process, and the historical context of how other ‘game-changing/revolutionary/hyped’ tools were adopted and leveraged.

At CDFAM Berlin, Onur will present the closing keynote, so that people walk away from the event with this in mind: while shiny new technical tools may be empowering, the way in which we approach, understand, and leverage them may be more important than any given algorithm.

In the following interview we explore what Onur has been working on since CDFAM last year, his views on the developments in AI, and what he will be discussing in his keynote in Berlin.


Don’t miss out on the opportunity to connect with the leading experts in computational design and the adoption of AI in engineering at all scales at CDFAM Berlin, May 7-8, 2024


Exploration and Progress Since CDFAM NYC

Since our last meeting at CDFAM in NYC, could you share the new territories you’ve explored regarding computational, generative, and AI-driven design?

The period in between was more like the future past. We were already in the accelerating hype cycle about AI implementations.

Rather than delving into a list of developing technologies, I believe it’s crucial to underscore the significance of critical approaches. This mindset is key to navigating the AI landscape and its potential impact on design.

I kept playing with, reading about, testing, and implementing these tools. Everyone should shift their focus to the tools that look promising and quickly investigate whether that promise holds. So, I found value in pushing the brake a bit more than the gas, as the overall tendency was to accelerate as much as possible.

Once you realize that only a fraction of what is promised can be realized, it makes more sense to keep your foot on the brake and accelerate with the tools and methods you believe in. The tools that enable more human input proved to be more successful. In the case of image generation, sketch-to-render tools create more tangible and useful results than text-to-image tools. Those will have longevity in design processes.

Some of the developments in AI highlighted the importance of already existing tools, such as evolutionary algorithms. I also found some of the automation-related problems more attractive than those related to generation.
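As a pointer to what such an ‘already existing tool’ looks like in practice, here is a minimal evolutionary-algorithm sketch in Python. The toy fitness function, target vector, and population settings are illustrative assumptions, not any specific design or production setup.

```python
import random

# Toy fitness: how close a candidate "design vector" is to a target spec.
# The target and vector length are illustrative assumptions.
TARGET = [0.2, 0.8, 0.5, 0.3]

def fitness(candidate):
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Perturb each parameter with a small random offset.
    return [c + random.uniform(-rate, rate) for c in candidate]

def evolve(pop_size=20, generations=50):
    # Start from random candidates in [0, 1].
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best candidate:", [round(x, 3) for x in best], "fitness:", round(fitness(best), 4))
```

The human input enters through the choice of fitness function and constraints, which is exactly where this family of tools remains legible and controllable.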

As presented today, the generative models look inexhaustible, but they keep yielding the same result repeatedly. That is not interesting at all, especially in the design context.

More importantly, I spent some time going back to the history of AI and computational design. A couple of events, discussions, and a long paper I recently wrote about Generative AI triggered this.

Most of the answers you are looking for are actually in the past—most of the developments we are seeing were thought of years or decades ago.

I suggest everybody look into What Computers Still Can’t Do by Dreyfus and Turing’s Cathedral by Dyson. The other invaluable resource is the late Professor Patrick Winston’s Human Intelligence Enterprise course, accessible on MIT OpenCourseWare. When you make conceptual connections in a historical context and apply them today, things make much more sense quickly.

The expanding potential of AI tools also made the importance of critical understanding and critical approaches more evident. I presented on this last year at CDFAM New York 2023, and it has somehow proved itself (once again) in the meantime.

Onur Yüce Gün – CDFAM 23 NYC – Opening Keynote

Which aspects of this recent exploration will you highlight in your presentation at CDFAM Berlin?

Bringing together seemingly disparate computational design theories, I embark on a journey of critical evaluation of generative AI technologies. My aim is to unravel their strengths, weaknesses, and the underlying reasons for both. In this exploration, I touch upon computational theories that purposefully detach themselves from conventional digital electronic computing.

I will talk about computing with symbols, in which ideas, thoughts, and concepts are represented through symbols, and compare that with systems in which you compute with the things themselves.

So: symbols and representations on one side, the things themselves on the other.

Interestingly, all AI technologies depend on symbols, as is evident in their dependence on electronic digital computation. When we try to use AI to evaluate or generate text or images, we deal with layers and layers of abstraction, representations, and symbols.

Try to assess how much of the meaning we are dealing with gets transferred to the AI systems. You hit this communication challenge when you spend enough time with digital technologies. The same ambivalent feeling applies to AI.

First, you would think that it is really smart. Then, when you have a clear goal, you need to learn the system’s quirks to the bone to get close enough to what you are trying to generate.

I am keeping the optimization applications out of this context. Although they are generative, they operate within a more bounded design space.

Here, I am discussing open-ended creativity in which ambiguity plays a central role. Performance-oriented applications do not necessarily fit into this category.

Reflections on AI Developments

Over the past year, AI has evolved rapidly with LLMs, VLMs, image generation, and video and 3D mesh synthesis. What developments have you found most surprising in the context of design, and how have these advancements impacted your work?

Overall, I am not too surprised by emerging applications, as most of these tools have been discussed for decades. What is surprising is the speed of development in some areas and the lack of advancement in others.

Through hardware and software implementations, incremental developments should happen faster; that is a given. However, some technologies are developing faster than anticipated, while others are taking a lot more time to resolve.

One evident development concerns text-to-image tools and how quickly they started yielding much more compelling results with each version. 3D mesh synthesis, though, is lagging behind, and of course there are obvious reasons for that.

For images, the LAION-5B dataset has been available for some time now—that is, 5 billion images plus text descriptions. Images are essentially pixel fields distributed in two dimensions. 3D is a much more complex area.

Even to start with, there is an extra dimension and layers of information that need to be handled, such as topology.

The availability of a 3D model dataset for AI training is debatable, and even if one becomes available, its size will remain a fraction of the sheer size of image datasets. And with a limited dataset, you could only generate limited variations of low-fidelity 3D configurations.
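To make that gap concrete, here is a minimal sketch (assuming NumPy and a toy tetrahedron) contrasting an image, which is a dense fixed-size pixel grid, with a mesh, where vertex positions come with explicit connectivity (topology) that must stay consistent.

```python
import numpy as np

# An RGB image is a dense, regular grid: every sample lives at a known (row, col).
image = np.zeros((512, 512, 3), dtype=np.uint8)
print("image:", image.shape, "->", image.size, "values on a fixed grid")

# A mesh is irregular: vertex positions PLUS connectivity (faces) that must stay consistent.
# A tetrahedron is used here purely as a toy example.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

# Topology sanity check: every edge of a closed (watertight) mesh is shared by exactly two faces.
edges = {}
for face in faces:
    for a, b in ((face[0], face[1]), (face[1], face[2]), (face[2], face[0])):
        key = tuple(sorted((int(a), int(b))))
        edges[key] = edges.get(key, 0) + 1

watertight = all(count == 2 for count in edges.values())
print("mesh:", len(vertices), "vertices,", len(faces), "faces, watertight:", watertight)
```

Even this tiny example needs a consistency check that has no equivalent in the pixel-grid case, which is part of why 3D data is harder to collect, normalize, and learn from at scale.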

The impact of AI technologies is, and should be considered, incremental, as one needs to take smartly paced implementation steps. You cannot throw out existing design workflows and processes overnight.

Whether AI-backed or not, each developing tool needs to be evaluated and injected into the existing processes through a value-driven lens.

If you see a rushed sale of “snake oil,” it is probably snake oil.

So overall, AI-backed technologies are helping automate or expand some portions of design workflows, but again, they require insightful implementation more than anything.

Integrating AI with Computational Design in Practice

You have been operating in the computational design space, where design decisions are often informed by hard data on performance and manufacturing constraints. How do you envision the integration of generative AI tools with the computational design approaches you’ve been championing?

Data is a way of world-describing, something that we project onto things… It is a way of measuring, controlling, and constructing. But all this is essentially about understanding, and understanding is about meaning-making.

To cut to the chase, hard data is extremely important, but all data stories revolve around the real, actual, lived, and experienced. A designer’s role is to find the ultimate balance while incorporating the “measured” descriptions of the world, with meanings that cannot be described but can be felt, observed, seen, and experienced.

OK—then how will we integrate generative AI into design processes? The answer is the same: the way in which we incorporated computation, parametric modeling, and generative design into design processes over the past decades. By taking cautious steps, studying these systems rigorously, and developing proofs of concept for validation.

Imagine a novice designer stepping into the parametric design space for the first time. This designer often generates (unintentionally) very complex and heavy models, potentially involving overlapping and unresolved geometries (and hence a lack of control over data structures). Mastery only emerges out of this condition as models become lighter and more precisely crafted. Some generative AI tools, by nature and due to their unrefined development, are akin to such novice parametric designers. They may become promising in the future, but you cannot deliver flawless value using their models today!

A close-up of a pipe

One last point is to remember that “this is not a pipe.”

The building renders you create with AI imagery are not buildings. There is no AI architecture. Or else everybody who makes building images is an AI architect.

Feel free to pick one idea over the other.

You can generate product design concepts by employing AI text-to-image tools and blending them with some image-processing and 3D modeling skills.
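As a hedged sketch of that kind of blend (not the actual workflow used at New Balance), the snippet below assumes the open-source diffusers library with a public Stable Diffusion checkpoint, followed by a simple Pillow post-processing step; the model ID, prompt, and processing choices are illustrative only.

```python
# Sketch only: assumes `pip install diffusers transformers torch pillow` and a GPU.
import torch
from diffusers import StableDiffusionPipeline
from PIL import ImageOps

# Illustrative model ID and prompt, not a production setup.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "concept sketch of a lightweight trail running shoe, studio lighting"
image = pipe(prompt, num_inference_steps=30).images[0]

# Simple image-processing pass before handing the concept to 3D modeling:
# crop to a square and boost contrast so silhouettes read clearly as modeling reference.
concept = ImageOps.autocontrast(ImageOps.fit(image, (768, 768)))
concept.save("footwear_concept.png")
```

The 3D modeling step that turns such a reference image into a usable model is deliberately left out here; that part remains manual skill, which is the point of the paragraphs that follow.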

We have been conducting such experiments to demonstrate in advance how these technologies could affect the design landscape.

We love demonstrating possible designs (such as footwear concepts) that could be hyped in the space we operate in. We generate and critically evaluate the hype to get over it as soon as possible.

Doing so already requires various technical skills, and our computational team at New Balance has plenty of those. However, the actual skill is to know that the concept you create is not the thing itself.

That is where I step in and make sure that we deliver tangible results and create measurable positive impact. We will discuss employing emergent technologies with this kind of awareness without getting lost in the hype.

Guidance for Adopting Computational and AI Processes

For designers, architects, and engineers curious about incorporating computational and AI methodologies into their workflow, what advice would you offer based on your experiences? Are there particular strategies or mindsets that facilitate a smoother transition into these advanced design processes?

Every technological development creates a trend, requiring several skills to be built and/or brought forth. In my Mastering Leadership article, I listed these design trends at one point so we could review where everything ended up:

1996 — Everything will be animated;

2000 — Everything will be parametric;

2004 — Everything will be 3D printed;

2010 — Everything will be robotic;

2015 — Everything will be VR;

2020 — Everything will be MetaVersed (and NFT’ed);

2021 — Everything will be AI … and so on.

So you see, everything never becomes something else; everything remains as everything. Can everything that is made be ever so slightly better? Potentially!

So, our mindset should ask “how?” How can we use advanced design processes to incrementally improve things? Better performing, more sustainable, healthier, and joyous.

When we are using a tool, are we immediately mesmerized or confused? If so, why? Are we lacking an understanding of the tool? Is something truly marvelous, or is it all smoke and mirrors, and we don’t have clear enough vision and wisdom to evaluate?

The best way to use a tool is to employ it critically. To be critical, you first need to know what you are doing and how the tool is doing it.

Imagine a master mechanic-racer who knows how to build a car and knows how to drive it to its limits. This fictional character could develop a cybernetic loop (between the self and the apparatus, the car) to perfect both the vehicle and the driving through constant feedback, by learning and applying, then learning and applying again.

Relying on tools without the necessary skills to understand and implement them is a sure path to personal stagnation.

I have observed two contrasting approaches to developing technologies – one is “oh good, it will do it for me,” the second one is “oh great, now I can do it in a better (maybe faster) way.” The mindset I am advocating for is one that takes initiative and strives for continuous self-improvement.

Learn and apply. Push the tools to their limits, and try to understand why and how things work. If you cannot find the answer, ask.
Do not expect it from an external entity, others, or the tools; do it yourself.

AI does not offer shortcuts in any of the subjects I mentioned here. It can certainly accelerate learning and production, but if you think it will enable shortcuts for achievements, remember one thing—maybe it is not a shortcut, but all the roads are now shorter because of the ubiquitously available technology.

The importance of hard skills and critical thinking is here to stay.

Work hard and approach your work with a value-driven mindset… and then use AI as a tool to advance your goals.

Expectations from Software Developers at CDFAM Berlin

Finally, with many leading software developers attending CDFAM Berlin, what advancements or capabilities do you hope they will showcase? Is there a specific challenge or need you’d urge them to address to better support the integration of AI and computational design in professional practices?

Bridge the gaps. Many digital design and manufacturing technologies have been valuable because they help bridge the gaps between the steps of design processes.

While the goal of using 3D printing for fully custom manufacturing remains to be realized at scale, 3D printing technologies remain valuable because they bridge the gap between ideation in modeling and prototyping.

Photogrammetry popped up and stuck around because it bridged a gap by proposing an inverse digital creation process: digitizing 3D reality instead of realizing (as in rendering) 3D virtuality!

With the development of AI tools, some former gaps between design and modeling steps are becoming more evident, or new gaps are being identified.

Solutions that incorporate more human input, or learnings from specific projects, prove to be more valuable; my example of sketch-to-render tools over text-to-image tools applies here as well.

And, yes, custom AI models or smaller AI models that target specific problems will potentially prove to be more useful in design processes.

But beyond these, the main problem with AI is that it is only as good as the main chunk of its averaged-out dataset. The real challenge is using AI to generate things outside the box.

Here, we are looking for a result that resides outside the median curve and searching for a solution (very controversially) that is outside AI’s capabilities.

Soon, a more significant portion of the population will realize the difference between machine intelligence and human intelligence, as AI will keep producing similar results in massive amounts.

Yet we will keep being mesmerized by striking or sometimes silly anomalies created by humans.

Opening up AI models to have open ends is a big challenge. Developers should ideally work on this, even if many of those roads may turn out to be blocked.



Don’t miss out on the opportunity to personally connect with Onur and other experts in computational design and AI for engineering at CDFAM Berlin, May 7-8, 2024.

