
Procedural BIM: Large Scale Metadata Workflows from Design to Manufacture
Keyan Rahimzadeh of Formulate
Can you start by giving us an overview of Formulate and the types of projects you work on?
Formulate is an independent consulting firm focused on using digital tools to realize innovative physical constructions. We specialize in parametric workflows that integrate metadata to manage complexity and scale, and automate work such as engineering analysis, documentation, and manufacturing. Though our roots are in facade design, we create computational workflows for various design processes across architecture, art, engineering, industrial design, and manufacturing.
Our services range from high level consulting to full integration into project teams. Some examples have included:
– High level guidance for computational teams within design firms
– Workshops for advanced modeling techniques for architectural fabrication
– Custom software tools for manufacturers
– Algorithms for quantifying facade systems to support the bidding process
– Tooling to produce fabrication information for large, complex facade projects
I think it’s important to acknowledge, however, that the specific project represented in this article is from my time at Front Inc., to whom I am deeply grateful, and with whom I continue to collaborate.
Your presentation at CDFAM NYC is on “Procedural BIM: Large Scale Metadata Workflows from Design to Manufacture.” Can you explain what this approach entails and how it differs from traditional 3D modeling methods?
In most cases, people are directly modeling the object they see on the screen. They are doing a lot of thinking about why that object is the way it is, and where it goes. But once it’s been placed or sculpted, and you move on, all of that thinking evaporates. If there is a change, that designer or technician has to try and remember all the reasons they put that thing there, and it’s very easy (and frustrating) to forget.
Oftentimes, there is also a need for more than one mode of representation – an exterior wall could be a simple, large surface; it could be a series of surfaces representing openings and panels; or it could go all the way down to individual components, with thickness and bolts and the whole deal. The way you model for a high-resolution rendering is different from the way you model to calculate square footage. Traditionally, these are entirely different models, built by hand, which have little to no relationship to each other and have to be manually coordinated.
In our approach, design decisions are embodied in a series of scripts, generally Grasshopper. But rather than a single master script, which is incredibly cumbersome to build, use, and maintain, we break it up into discrete operations, with inputs and outputs.
What comes out of each step is a dedicated 3D model, parametrically generated, that is annotated with metadata. This allows it to feed into further processes downstream.
Fig. 1 – Metadata is used to access geometry, then relate objects to each other, such as the A and B parameters that are shared. A process is executed on those objects, producing a new object, on the right, which combines all of the information from the objects from which it was generated.
You do this incrementally until you have a whole network of models, with different modes of representation, but where the decision making logic has been inherently preserved. If you need to tweak a decision in one place, that change cascades through the network, without you having to remember every constraint downstream.
Fig 2 – Multiple models are generated, and the different levels of detail are all maintained, with the functional nodes (e.g. the Grasshopper scripts) using the metadata to maintain the sequence and relationships of the different representations
This is why we call it “Procedural” BIM – the model is generated from a process, and the whole routine could be re-run and produce the same outcome.
This gives you the flexibility to work on multiple parts of the design at the same time, as well as multiple modes of representation in an interconnected way.
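The network of discrete operations described above can be sketched in plain Python (not actual Grasshopper/eleFront code). Each step is a function that consumes annotated objects and emits new ones whose metadata is inherited from the inputs; the function names, keys, and values here are all illustrative, not the project's real schema.

```python
# A minimal sketch of Procedural BIM's core idea: each step of the network
# produces a new representation whose metadata is inherited from upstream.

def make_panel(panel_id, width, height):
    """Step 1: a coarse representation -- just an outline with metadata."""
    return {"geometry": "outline",
            "meta": {"panel_id": panel_id, "width": width, "height": height}}

def detail_panel(panel):
    """Step 2: a finer representation; metadata is inherited, then extended."""
    meta = dict(panel["meta"])                  # inherit everything upstream
    meta["frame_length"] = 2 * (meta["width"] + meta["height"])
    return {"geometry": "framed_panel", "meta": meta}

coarse = make_panel("P-001", width=1.5, height=3.0)
fine = detail_panel(coarse)
# The detailed model still "knows" where it came from: fine["meta"] carries
# the original panel_id alongside the newly derived frame_length.
```

Re-running `make_panel` with a tweaked width would regenerate the downstream model automatically, which is the cascade behavior the interview describes.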
CDFAM Computational Design Symposium Brings together the leading experts in computational design and engineering at all scales
Register to attend the next event in NYC, October 2-3, 2024
How is metadata integrated into your design framework, and what impact does it have on the design-to-manufacture workflow?
We typically use Rhino and Grasshopper as our main workhorses. These are augmented with a plugin called eleFront, for which we were lead developers during our years at Front, Inc.
eleFront makes it easier to create and apply metadata to objects directly in Grasshopper. Importantly, the way we use it allows for objects that are downstream to “inherit” the metadata from the inputs that created them. This allows you to build up a ton of useful information in incremental steps, at the time that it makes sense to apply them.
Fig 3. Objects take on an architectural meaning from the metadata applied. They can then be referenced, related to each other, and produce new objects. This illustrates how parameters are “inherited” by the objects derived from them (yellow), but also how some parameters are mutated (pink) where relevant.
The metadata also allows you to create relationships between objects without having to rely on their geometry.
For example, a bracket might support two adjacent panels. In the model, the bracket will have metadata that records which panel is on its left, and which is on its right. To determine the bracket geometry, you can use the metadata to select those panels, and derive the bracket accordingly – no geometric searches in 3D space.
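A hypothetical sketch of that bracket example in plain Python: the bracket records its neighbors by panel ID in its metadata, so deriving its geometry becomes a dictionary lookup rather than a search in 3D space. The IDs, keys, and the "edge height" sizing rule are invented for illustration.

```python
# Relating objects by metadata instead of geometry: the bracket "knows"
# which panels it supports, so no geometric proximity search is needed.

panels = {
    "P-101": {"edge_height": 3.00},
    "P-102": {"edge_height": 3.05},
}
bracket = {"meta": {"left_panel": "P-101", "right_panel": "P-102"}}

def derive_bracket(bracket, panels):
    """Look up the adjacent panels by ID and size the bracket from them."""
    left = panels[bracket["meta"]["left_panel"]]
    right = panels[bracket["meta"]["right_panel"]]
    # e.g. size the bracket to span the step between adjacent panel edges
    return {"span_step": abs(left["edge_height"] - right["edge_height"]),
            "meta": dict(bracket["meta"])}       # inherit the relationships

geo = derive_bracket(bracket, panels)
```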
Taken far enough, we were able to produce fabrication files for CNC manufacturing, where the individual holes on a framing component had metadata such that they ‘knew’ exactly which piece they were attaching to, what type of screw would be used, what the hole spacing should be, and so on. This means we could generate 2D fabrication tickets, but more importantly we could run quality control algorithms to examine a piece and ensure it had the right number of holes, of the right type.
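The quality-control idea above can be sketched as a rule check over hole metadata. This is a hedged, plain-Python illustration; the field names (`attaches_to`, `screw`) and the rule table are made up, not the project's actual fabrication schema.

```python
# QC sketch: every hole carries metadata about its purpose, so an algorithm
# can verify each piece has the right number of holes, of the right type.

holes = [
    {"attaches_to": "mullion-A", "screw": "M6", "spacing": 150},
    {"attaches_to": "mullion-A", "screw": "M6", "spacing": 150},
    {"attaches_to": "transom-B", "screw": "M4", "spacing": 100},
]
rules = {"mullion-A": {"count": 2, "screw": "M6"},
         "transom-B": {"count": 1, "screw": "M4"}}

def check_holes(holes, rules):
    """Return a list of QC problems; an empty list means the piece passes."""
    errors = []
    for piece, rule in rules.items():
        matching = [h for h in holes if h["attaches_to"] == piece]
        if len(matching) != rule["count"]:
            errors.append(f"{piece}: expected {rule['count']} holes, "
                          f"found {len(matching)}")
        for h in matching:
            if h["screw"] != rule["screw"]:
                errors.append(f"{piece}: wrong screw type {h['screw']}")
    return errors

problems = check_holes(holes, rules)   # empty list -> piece passes QC
```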
The project you’ll be discussing involves the design, fabrication, and installation of 23,000 unique curtain wall panels. What were some of the key challenges in managing such a complex system, and how did you address these challenges?
Certainly scale is an issue. There is more geometry than you can process on a single machine in a reasonable amount of time.
This results in a few challenges / innovations:
1) Create the right mode(s) of representation.
Let’s say for example we want to measure how many linear feet of framing we will need. If you have a model with every single framing element in all of its 3D splendor, this will be a ton of data, and actually measuring its length is not straightforward. On the other hand, a simple model of the panel’s outline might be too coarse: it doesn’t account for how the lengths are adjusted at the corners. So instead, you create a node in your network of models that produces a bespoke model for that exact purpose, one that accounts for those specifics but is still connected to the ecosystem of models, so it stays up to date as the design changes. This makes it possible to extract useful information and start making important, impactful decisions quickly.
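A purpose-built "linear feet of framing" node might look like this in plain Python. The corner deduction and the panel dimensions are invented for the example; the point is that the node answers exactly one question from a lightweight representation.

```python
# Illustrative sketch of a bespoke takeoff node: total framing length from
# panel outlines, with a per-corner adjustment (values are made up).

def framing_length(panels, corner_deduction=0.05):
    """Sum perimeter framing, deducting an allowance at each mitred corner."""
    total = 0.0
    for p in panels:
        perimeter = 2 * (p["width"] + p["height"])
        total += perimeter - 4 * corner_deduction   # four corners per panel
    return total

panels = [{"width": 1.5, "height": 3.0}, {"width": 1.2, "height": 3.0}]
length = framing_length(panels)   # (9.0 - 0.2) + (8.4 - 0.2) = 17.0
```

Because this node reads the same panel outlines the rest of the network uses, re-running the network keeps the takeoff current without anyone re-measuring by hand.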
2) Holistic design approach
As suggested above, this makes it possible to consider a huge set of conditions at once. We needed to design a system that could accommodate the entire range of variation in the facade; if you design a component for a panel on Level 6, how do you know it will work on Level 76? By working across multiple levels of detail, we could interrogate the models (things like angles, dimensions, adjacencies) quickly and in parallel. We could test out an idea, and immediately propagate it across all the conditions. Crucially, though, the nature of our discretized network of models is that those tests can be built out and evaluated in their own, sequestered studies. But, if a design is approved, you can move that node into the network of models, and you already have all of the design logic in place.
3) Collaborating as a team
A project of that complexity and scale requires a team. Gone are the days of Grasshopper ‘mega-scripts’. Everybody needs to be able to work in parallel, which is another reason for breaking up the model into discrete steps. The team works together so that each person understands what kind of information they can expect from the others. Let’s use some of the examples above. Say I’m working on the brackets, and my teammate is working on the panels. They don’t have to know why the brackets are the way they are, and I don’t have to know that about the panels. As long as we agree on the format of the metadata we share, our workflows can speak to each other.
4) Encapsulation and legibility
By separating the design tasks into different scripts and models, while maintaining their link to the network of models, you break out all of the myriad decisions into discrete places where they can be assessed and fleshed out independently. Grasshopper’s node-based layout inherently provides a clear, visual explanation of what the script is doing – it’s an explicit record of the steps.
5) Metadata and nomenclature
It’s important that the team is consistent in nomenclature and in how metadata is formatted and stored: something as simple as using “-” instead of “ ” can break the next step in the process. To solve this, we had a project database with the agreed names and the expected formatting of the values; a small bit of code would then check your outputs before you “commit” the change.
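The "check before commit" idea can be sketched as a validator run against a shared schema of agreed key names and value formats. The schema contents below (panel ID and level patterns) are hypothetical stand-ins for the project database described above.

```python
# Sketch of a pre-commit metadata validator: every output is checked
# against the team's agreed nomenclature before it enters the network.
import re

SCHEMA = {
    "panel_id": re.compile(r"^P-\d{3}$"),   # e.g. "P-042", never "P 042"
    "level":    re.compile(r"^L\d{2}$"),    # e.g. "L06"
}

def validate(meta):
    """Return a list of nomenclature problems; empty means safe to commit."""
    problems = []
    for key, pattern in SCHEMA.items():
        value = meta.get(key)
        if value is None:
            problems.append(f"missing key: {key}")
        elif not pattern.match(str(value)):
            problems.append(f"bad format for {key}: {value!r}")
    return problems

ok  = validate({"panel_id": "P-042", "level": "L06"})  # [] -> safe to commit
bad = validate({"panel_id": "P 042"})                  # space and missing level
```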
You used machine learning to reverse engineer the flat shape of twisted panels and other complex geometries. How was this model developed, and how might the same process be applied for other applications, perhaps smaller than the architectural scale?
To put it succinctly, we used material simulations in Finite Element Analysis to simulate the reverse process – going from the twisted shape to the 2D shape. We then took those 2D approximations, rebuilt them, and simulated twisting them into 3D. This allowed us to quantify the differences between the 3D shape in Rhino and the expected 3D shape once twisted.
We then built some prototypes to make sure that our simulations were a good match for reality. Armed with that, we then built a mapping function which would automatically modify our 3D surfaces to predict their true, final shape, such that when they were unrolled, it would be correct.
Fig 4. The blue dots represent the defining input parameters, which are then mapped to the magenta “shape functions”. This creates a mapping function from what we know, the perimeter of the shape, to what we are approximating, which is the curvature in the main area of the panel.
For a more detailed explanation, see the journal paper “Beyond the Hypar: Predicting Buckled Shapes in Bent Glass with Machine Learning.”
Part of the reason this worked is because we had a lot of sample data – thousands of panels which were slight variations of each other. I think on a smaller scale, a very similar process can be used, as long as you have enough sample points. So, perhaps a given project might only have a dozen sample points, but if you have enough projects, you can start to build a sample set that is big enough.
Fig 5. Representation of the thousands of Finite Element Analysis simulations that were executed to populate the sample space of the machine learning model.
I think the critical thing to grapple with is understanding the relevant features. For example, we determined that only three data points were good enough for us: how far the panel is twisted, the curvature at the top, and the curvature at the bottom. These three values were enough to serve as the inputs for our mapping functions. But we only determined that after testing a dozen or so possible inputs, and ultimately finding that some had no correlation whatsoever to our outputs. In machine learning circles these are called “features” and this process is “feature selection” or “feature engineering”. There are techniques that can look at your data and try to identify the most relevant features for you, but honestly we used a combination of Excel and our own intuition of the problem to get there, and it was probably faster. More importantly, we understood it, and it wasn’t as much of a black box.
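The "Excel and intuition" style of feature selection amounts to checking how each candidate input correlates with the output. Here is a toy illustration in plain Python with fabricated sample data (these are not the project's features or values); candidates with weak correlation get dropped.

```python
# Toy feature-selection sketch: rank candidate inputs by the absolute
# Pearson correlation between each candidate and the output.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated samples: "twist" tracks the output; "colour" is noise.
twist  = [1.0, 2.0, 3.0, 4.0, 5.0]
colour = [3.0, 1.0, 4.0, 1.0, 5.0]
output = [2.1, 4.0, 6.2, 7.9, 10.1]

ranked = sorted([("twist",  abs(pearson(twist, output))),
                 ("colour", abs(pearson(colour, output)))],
                key=lambda kv: -kv[1])
# "twist" ranks first; weakly correlated candidates are discarded before
# they ever reach the mapping functions.
```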
How do you ensure a continuous and robust flow of information across different levels of detail, from the overall design to specific fabricated components?
The creation of the models happens in discrete steps – almost all of the “work” of making or modifying geometry and information is done through a script. That means everything is reproducible; in theory, you could run the process from the very first stages and produce all of the intermediate representations afresh.
I think of the system as “static nodes”, which are the repositories of data; and “active nodes”, which are the parts of the system that create or transform information. In our case, this usually corresponds to the models and the scripts respectively.
The scripts read in objects from the input models, and in particular, the metadata on those objects. For any new objects that are created, the relevant existing metadata from the inputs is copied onto them, along with any new metadata that’s relevant for these new objects. This creates the continuity of information.
Fig 6. When a change is made, or an attribute is added to a system in the middle of the workflow, that change is propagated downstream to the other affected systems. This illustrates how the use of inheritance maintains a continuous flow of information from the coarse levels of detail down to the final components.
In the end, even if you are looking at a component of the window frame, it has metadata that has been accumulating through the many parametric steps that generated it, all the way back to the original massing model.
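The inheritance step an "active node" performs can be sketched as a merge of metadata from several parent objects plus whatever is new at that step. This is a plain-Python illustration; the key names and the "later parents win" clash rule are assumptions for the example, not eleFront's actual behavior.

```python
# Sketch of metadata inheritance across the network: a new object's
# metadata is the merge of its parents' metadata plus step-specific keys.

def inherit(parents, new_meta):
    """Merge parent metadata (later parents win on clashes), then add new keys."""
    merged = {}
    for parent in parents:
        merged.update(parent["meta"])
    merged.update(new_meta)          # attributes added at this step
    return merged

massing = {"meta": {"building": "B1", "level": "L06"}}
panel   = {"meta": {"panel_id": "P-042"}}
frame_meta = inherit([massing, panel], {"component": "frame"})
# frame_meta traces the frame component all the way back to the massing
# model: building, level, panel_id, and its own component tag.
```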
What key insights or lessons do you hope the audience will take away from your presentation?
I hope that people see the value in breaking up large, monolithic models into discrete steps, and that their interest is piqued by the possibility of creating 3D models in a way that preserves design decision making.
Rather than conceiving of 3D models as collections of objects, we can think of a system, or a process, that creates them. Hopefully it provides some fundamental ideas and techniques for how to begin to scale up 3D modeling processes, and how metadata can supercharge parametric workflows.
Finally, what do you hope to gain from attending CDFAM NYC?
I’m excited to meet more experts in the field, and to see how these kinds of problems are being approached by other disciplines and industries; and likewise, if there are any interesting challenges out there that I wasn’t aware of, but that I might be able to contribute to or learn from.