
Design for real-world engineering: integrating uncertainty into product assessment
Interview with Greg Grigoriadis of Metisec
Engineering design has long relied on fixed assumptions for loads, materials, and operating conditions, despite the variability inherent in real-world use. This deterministic approach often leads to conservative designs that may still fail under unexpected conditions.
At CDFAM Barcelona, Grigoris (Greg) Grigoriadis of Metisec will present a probabilistic design toolkit that integrates uncertainty directly into simulation workflows using Monte Carlo methods.
In this interview, he explains how treating inputs as statistical distributions enables engineers to quantify failure risk, develop more realistic digital twins, and make more informed design decisions based on true performance probabilities rather than assumed limits.
Can you introduce Metisec and explain what you’ll be presenting at CDFAM regarding the integration of uncertainty into engineering design workflows?
At Metisec we help clients get the most out of their products and processes while reducing failure risks.
We do that by really getting into the detail of how something works, from its function and structure to its potential failure points, before we optimise the design. That level of rigour is essential, especially with complex or unexpected failures, because the real cause is often hidden in the fine detail.
A major part of how we do this is by developing digital twins using computational modelling and simulation, and exploring how a product behaves long before it reaches the real world.
At CDFAM I will discuss why it is essential to bring uncertainty into the engineering design workflow. Even though we now have huge amounts of data from sensors, simulations, and the field, and advanced analytics are readily available, we still design against a single limit or red line.
It is an old-school deterministic approach in an era where everything around us is variable and data-rich. After all, the design process has always been a risk management activity, but risk can never be fully eliminated and uncertainty is always present.
Probabilistic design helps us quantify what we don’t know by treating inputs as distributions rather than fixed numbers. We have developed a toolkit where any simulation input can be treated as a variable distribution, making the process a virtual replicator of real physical trials.
This allows us to develop a realistic digital twin, which both maximises the potential of any intervention we might perform and aligns designers and engineers with real outcomes, such as a true failure rate, enabling more robust and efficient designs.
At Metisec, precisely because we go to the core of how something works, this process is invaluable: it provides the deeper insights needed to make informed design changes efficiently.
How does your toolkit incorporate Monte Carlo methods into traditional simulation processes, and what types of inputs are most commonly modelled as distributions?
Our toolkit brings Monte Carlo analysis directly into the simulation workflow, by intervening exactly when input parameters are set, before performing the simulation. Instead of running a single simulation with fixed inputs, we allow selected parameters to be represented by statistical distributions. The toolkit then runs large numbers of simulations, each time sampling new values from those distributions based on their likelihood, which builds a full picture of how the design behaves across realistic conditions. Rather than ending up with a simple pass or fail, we obtain a probability of failure and a much clearer understanding of the risks within the design space.
To put it simply, it is as if we suddenly had the ability to run randomised physical trials; we are performing in silico trials. Each trial would naturally give a slightly different result because the conditions vary.
For example, if the test involved a wearable device, differences in user anthropometry would strongly influence the outcome of an abuse case study. With Monte Carlo simulations, this variability can now be fully represented virtually.
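To make the sampling step concrete, here is a minimal sketch in Python with NumPy of such an in silico drop trial: hypothetical distributions for user height, drop height and floor stiffness, and a toy surrogate standing in for the real impact solver. The names, values and response model are illustrative assumptions, not Metisec's toolkit.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n_trials = 10_000  # number of in silico drop trials

# Hypothetical input distributions (illustrative, not real field data):
user_height_m = rng.normal(1.70, 0.10, n_trials)                       # anthropometry
drop_height_m = 0.8 * user_height_m + rng.normal(0.0, 0.05, n_trials)  # release height tied to user height
floor_stiffness = rng.lognormal(np.log(5e7), 0.4, n_trials)            # floor type, N/m

def run_drop_simulation(h_drop, k_floor):
    """Placeholder surrogate for the actual impact solver; returns peak stress in MPa."""
    impact_velocity = np.sqrt(2.0 * 9.81 * h_drop)           # m/s at impact
    return 120.0 * impact_velocity * (k_floor / 5e7) ** 0.3  # toy response

# One simulation per sampled combination of inputs
peak_stress = np.array([run_drop_simulation(h, k)
                        for h, k in zip(drop_height_m, floor_stiffness)])
print(f"{n_trials} trials -> mean peak stress {peak_stress.mean():.1f} MPa")
```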
The inputs most commonly modelled as distributions are those that vary in the real world. These include anthropometric measurements, material properties, manufacturing tolerances, loading conditions, environmental factors such as temperature, and parameters identified through root cause analysis or customer field data.
We usually begin with sensitivity studies to identify which parameters have the greatest influence, so we only model distributions where it genuinely matters. We can then go further and determine the effect of each factor, as well as any correlations, using design of experiments methods such as Taguchi, which is particularly useful in more investigative work like failure analysis.
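As a minimal illustration of that screening idea (a simple one-at-a-time perturbation study, not the design-of-experiments methods or toolkit mentioned above), inputs can be ranked by their effect on a response before any distributions are assigned; all parameter names and values below are hypothetical.

```python
import numpy as np

# Nominal values of candidate inputs (hypothetical)
nominal = {"wall_thickness_mm": 1.2, "youngs_modulus_mpa": 2300.0, "drop_height_m": 1.4}

def model(p):
    """Toy response; in practice this would be a call to the simulation."""
    return p["drop_height_m"] * np.sqrt(p["youngs_modulus_mpa"]) / p["wall_thickness_mm"] ** 2

base = model(nominal)
effect = {}
for name, value in nominal.items():
    perturbed = dict(nominal, **{name: value * 1.05})   # +5% one-at-a-time perturbation
    effect[name] = abs(model(perturbed) - base) / base  # relative change in response

# Rank the inputs: only the influential ones are later given full distributions
for name, e in sorted(effect.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20}: {e:.1%} change in response for a 5% change in input")
```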
How do you interpret simulation outcomes, and how are these results communicated to design teams?
The typical output from this process is a set of statistical insights into the design’s performance. We obtain information on average behaviour, the range of worst case conditions and their likelihood, and the failure rate associated with different segments of the design. In essence, we get everything a standard simulation would provide, but expressed statistically rather than as a single fixed result.
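In terms of post-processing, those statistics can be read off the Monte Carlo outputs roughly as follows; the stress values, allowable limit and variable names are placeholders standing in for the actual simulation results.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the Monte Carlo outputs (one peak stress per trial); in practice
# these come from the sampled simulations described above.
peak_stress = rng.lognormal(np.log(300.0), 0.25, 10_000)  # MPa
allowable_stress = 450.0                                   # MPa, hypothetical limit

mean_stress = peak_stress.mean()
p99_stress = np.percentile(peak_stress, 99)   # near-worst-case level and how rarely it is exceeded
failures = peak_stress > allowable_stress
p_fail = failures.mean()                      # estimated probability of failure
se = np.sqrt(p_fail * (1.0 - p_fail) / failures.size)  # standard error: shows whether more trials are needed

print(f"mean peak stress    : {mean_stress:.1f} MPa")
print(f"99th percentile     : {p99_stress:.1f} MPa")
print(f"failure probability : {p_fail:.2%} +/- {1.96 * se:.2%} (approx. 95% CI)")
```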
Interpreting these outputs is not a simple, one-size-fits-all step. It depends heavily on the specifics of the product or process being assessed. It is very similar to following the scientific method: we form hypotheses early on, usually during the sensitivity analysis phase, about which behaviours or variables may drive particular failure modes, and then we need to prove or disprove them. We may choose to examine certain performance metrics in more detail, for example shear stresses, contact forces or thermal expansion strains, because they more accurately represent a specific failure mechanism.
This shifts our understanding from a broad statement like “failure due to shear stresses” to something far more meaningful, such as “failure when this particular user carries out this particular activity with the product”.
Getting to that level of insight is a collaborative effort between engineers and designers, and it works best when done together rather than simply sharing plots.
The live debugging and discussion around results are extremely valuable and help shape clear strategies for the next round of design optimisation. We always follow this approach with our internal design team, and we aim to do the same with external design teams whenever possible. If that is not feasible, we still follow the same process internally before suggesting any mitigation actions to the client.
We keep the final output very visual and focused on decision making. Instead of delivering raw simulation data, we present probability curves, risk maps and clear visuals that show how likely the design is to meet its performance targets. We also illustrate how design changes shift the risk profile so teams can immediately see the effect of their decisions.
For example, we might show how tightening a tolerance reduces the failure rate, linking the extra manufacturing cost directly to the reduction in risk and the associated savings from fewer failures. This makes trade-offs far easier to evaluate and supports more informed design choices.
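The shape of such a trade-off calculation might look like the sketch below, where the tolerance band, surrogate stress model and cost figures are invented purely to show how failure rate and cost can be put side by side.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
allowable = 450.0  # MPa, hypothetical material limit

def failure_rate(tol_mm):
    """Toy model: wall thickness ~ N(2.0 mm, tol/3), i.e. the tolerance is read as a 3-sigma band."""
    thickness = rng.normal(2.0, tol_mm / 3.0, n)
    stress = 400.0 * (2.0 / thickness) ** 1.5  # surrogate stress model, MPa
    return (stress > allowable).mean()

# Loose vs tight tolerance, with assumed extra manufacturing cost and cost per field failure
cost_per_failure = 40.0  # EUR per unit, assumed
for tol_mm, extra_cost in [(0.20, 0.00), (0.10, 0.35)]:
    p_fail = failure_rate(tol_mm)
    print(f"tol +/-{tol_mm:.2f} mm: P(fail) = {p_fail:.2%}, "
          f"expected failure cost {p_fail * cost_per_failure:.2f} EUR "
          f"vs extra manufacturing cost {extra_cost:.2f} EUR per unit")
```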

How do you then move on to modifying the design of the product or process of interest? What tools do you use? How do you balance manual and automated optimisation techniques?
Once we have developed an optimisation strategy as discussed earlier (even in cases where we are not following a probabilistic design approach), our internal design team carries out the design modifications in a NURBS-based environment such as ALIAS.
A toolkit like this gives full freedom in the design space, allowing us to address exactly what needs addressing. This is not simply a matter of reinforcing weak areas or removing material from over-supported regions. It often involves changing the way a design functions or reshaping geometry so it performs more effectively under a specific loading scenario, always with the intended manufacturing method in mind.
For example, we might isolate sensitive regions, redistribute deformations across a wider area, or change the type of resulting deformations, opening up opportunities for weight reduction or easier manufacturing. If a component is failing in bending, but torsion is not a limiting factor, can we redirect part of the bending load into torsion? This kind of thinking is central to what we do at Metisec. It is why we value design tools that offer a high degree of freedom, and why we follow a largely manual and intentional design process.

At the moment, we do not rely on automated optimisation tools. Once we understand how a product behaves and the reasoning process described earlier is complete, it is true that we could introduce automated tools. But given the way designs are initially set up, making thoughtful manual modifications is neither time-consuming nor labour-intensive once we have set our design optimisation strategies.
Automated tools become more useful at the very end of the process, where they can help capture the last few percentage points of optimisation. I see our approach as achieving roughly the first 95% of what is feasible, after which automated procedures can be used to refine the final 5%. In most cases that 95% already exceeds the project targets, so there is usually little appetite from the client for pursuing the final 5%. When there is interest, the client often has the tools to carry out that final refinement themselves.
Can you give us examples of cases where the probabilistic analysis changed the design direction compared to traditional optimisation?
I will be presenting a couple of case studies at CDFAM: one involving a component of a handheld consumer electronic device under drop loading, where the main focus was weight reduction while maintaining survivability; and another involving a laser guidance mirror under thermal loading and rapid accelerations, where the aim was to minimise both weight and vibrations.
For drop loading, the traditional approach would be to simulate only a few selected drop orientations, assuming them to be the worst cases, and then exaggerate the impact acceleration by choosing an extreme drop height and a rigid floor. However, when drop orientation, user height and floor type are treated as variables with statistical distributions, the assumed worst-case scenario often turns out not to be the worst one at all.
A different combination of variables may carry a much higher probability of failure, even though the original “worst case” load is extremely unlikely to occur. In other words, the person responsible for the design might believe they are working conservatively, but incorporating uncertainty can reveal a very different and more realistic risk profile.
For thermal loading, the traditional approach is to simulate a single case with worst-case temperature values. In reality, rapid acceleration of a heated device can accelerate cooling, meaning certain high temperatures simply cannot occur at certain sections of the design. When both temperature and acceleration are represented as statistical distributions based on field measurements, and when the interaction between them is captured, new optimisation strategies become possible, enabling the design to achieve better performance targets.
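One simple way to capture an interaction like that is to sample the two inputs jointly rather than independently. The sketch below draws temperature and acceleration from a correlated bivariate normal with a negative correlation; the distributions and numbers are illustrative assumptions, not the field measurements referred to above.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical joint distribution of peak temperature (deg C) and peak acceleration (g).
# The negative correlation encodes the physical coupling: rapid acceleration tends to
# bring extra cooling, so the hottest and fastest conditions rarely coincide.
mean = [85.0, 12.0]
std = [10.0, 3.0]
rho = -0.6
cov = [[std[0] ** 2, rho * std[0] * std[1]],
       [rho * std[0] * std[1], std[1] ** 2]]

temperature, acceleration = rng.multivariate_normal(mean, cov, n).T

# How often does the naive "both at their 99th percentile" case actually occur?
t99 = np.percentile(temperature, 99)
a99 = np.percentile(acceleration, 99)
joint = ((temperature > t99) & (acceleration > a99)).mean()
print(f"P(T > {t99:.0f} deg C and a > {a99:.1f} g) = {joint:.4%} "
      f"(would be 0.0100% if the inputs were independent)")
```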
In both case studies, choosing to optimise for an industry-relevant failure rate under realistic loading conditions allowed us to maximise the effect of the intervention compared to traditional methods.
More importantly, this approach is not just about improving optimisation potential. It is primarily an exercise in understanding how the component is truly loaded.
In cases where the traditional worst-case scenario does remain appropriate, the probabilistic approach still helps us understand the influence of key design parameters, identify additional failure points and quantify how confident we can be that the worst-case scenario is the correct one. This is why uncertainty quantification is a key part of submissions to regulatory bodies for components or devices with very tight safety factors, such as medical devices.

What do you hope to share with and learn from the CDFAM community through your participation this year?
At CDFAM this year I hope to share both the strengths and pains of the processes we follow at Metisec, and to advocate for something I feel strongly about: focus should be on developing tools to better inform and empower designers and engineers at the core of the design process rather than replacing them.
I am also keen to stay connected with the latest technological developments, learn about the challenges and constraints faced by current industries, and explore how our work can help address those.
Equally important for me is forming collaborations with others in the community, whether that is to improve workflows, co-develop new tools or simply exchange good practices. I am excited to learn about new technologies we could integrate into our own processes, and ultimately to strengthen my involvement in the international computational design community as an active and contributing member.

Greg Grigoriadis’ work highlights a critical shift in engineering practice, from designing against assumed limits to designing with a quantified understanding of real-world variability. By integrating uncertainty directly into computational workflows, probabilistic methods enable more reliable, efficient, and informed product development.
To learn more about this approach and connect with Greg and others advancing computational design, simulation-driven engineering, and digital product development, register to attend CDFAM Barcelona, taking place April 8–9, 2026. The symposium brings together leading engineers, designers, researchers, and software developers working at the forefront of computational design at all scales.





