BETWEEN INTUITION AND ALGORITHM: RETHINKING THE DESIGN PROCESS
by Fabio Gnassi
interview with Karla Saldaña Ochoa
The evolution of neural networks and machine learning algorithms is profoundly transforming the architectural landscape, turning design practice into a dynamic and ever-expanding research field. In this context, cutting-edge laboratories are emerging, exploring innovative applications of these technological tools and laying the conceptual foundations for a new frontier in design: “data-driven architecture.”
Could you illustrate the main topics explored within the SHARE Lab by describing some of your most significant projects?
The SHARE Lab is a research laboratory that explores the use of artificial intelligence from a human-centered perspective. We apply it to spatial analysis processes, from the scale of a building to that of a city. Our theoretical framework is based on studying the practical applications of AI, distinguishing its use as a “tool” or an “instrument.”
The concept of a “tool,” comparable to a hammer, represents something designed to perform a specific task. In contrast, an “instrument,” like a musical instrument or a sheet of paper, gains meaning and value only through interaction with the user.
This distinction implies that when AI is used as an “instrument,” it enhances the creativity of those who employ it. Conversely, when used as a “tool,” AI must be fast and precise in delivering the best possible outcome for the task at hand.
In the second case, AI as a “tool” is a valuable asset for the common good, since many issues related to big-data analysis directly impact people’s lives. To effectively address the social phenomena emerging from these challenges, the “tool” must be specifically targeted and oriented toward a well-defined goal.
An example of AI used as an “instrument” includes SOMs (Self-Organizing Maps), an unsupervised clustering algorithm. An example of AI used as a “tool” is the training of a supervised machine learning algorithm capable of detecting fruit-bearing trees from satellite images—an application particularly useful in food security and disaster management contexts.
These are the two primary ways I integrate AI into my practice.
From a broader societal perspective, my work extends to larger partnerships with colleges and organizations that address space-related issues.
What I consider truly essential in research is an ongoing reflection on the concept of space: we continuously apply our knowledge to understand what space is, why it matters, and how it can respond to the fundamental questions we ask ourselves.
Do your experiments also involve the use of generative models?
When it comes to generative models, I am particularly fascinated by how they work. Applications like Stable Diffusion can process vast amounts of images and videos; to do so, they develop a unique form of knowledge. This intrigues me because I see it as a kind of digital “common sense” that I find exciting to experiment with. The fact that these models have been trained on an extensive dataset sourced from millions of people drives my curiosity to explore them further.
Let’s consider diffusion models in the text-to-image domain. When we use a text prompt to generate an image, the model starts from a random noise distribution and, guided by the meaning of the words, progressively removes that noise, step by step, until a visually meaningful representation emerges.
What truly interests me is what happens in the intermediate steps of this transformation. I would love to disassemble the model and observe the images generated at each stage of its decision-making process. I want to see how the model arrives at a specific final representation.
This is because we rely on a clear mental model when we use language to describe something. However, images tend to simplify and constrain thought, reducing complexity to a single visual response. But what if we could explore the entire spectrum of information that led to that representation?
If we could see all the alternatives the model considered, we might discover new insights and ideas. We could even challenge what we consider “common knowledge” and decide whether we agree with it or if we want to explore new possibilities.
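The progression described above, from pure noise to a recognizable image, can be illustrated with a toy numerical sketch. This is not Stable Diffusion itself, just a minimal numpy illustration of the diffusion idea: intermediate states are blends of noise and signal on a schedule, and each step is measurably closer to the final representation. The 8×8 "target" pattern and the linear schedule are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image": an 8x8 gradient pattern standing in for the model's target.
target = np.outer(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
noise = rng.standard_normal(target.shape)

# Schedule: alpha_bar runs from 0 (pure noise) to 1 (clean image).
steps = 10
alpha_bar = np.linspace(0.0, 1.0, steps + 1)

# Intermediate states: interpolations between noise and signal,
# the "hidden" stages one would like to inspect in a real model.
intermediates = [
    np.sqrt(a) * target + np.sqrt(1.0 - a) * noise for a in alpha_bar
]

def correlation(a, b):
    """Pearson correlation between two flattened arrays."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Correlation with the target rises as the noise is removed.
corrs = [correlation(x, target) for x in intermediates]
```

Inspecting `intermediates` frame by frame is the toy analogue of "disassembling" the model: each array is one candidate state the process passed through on its way to the final image.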
This is how I utilize these tools. Recently, I wrote an article discussing a concept I refer to as “possibilities and intermediate concepts,” which aims to define an approach that can advance architectural design by broadening the way we conceive and generate new ideas.
What is meant by “Data Aided Design,” and what are the advantages of this approach?
For me, this topic is closely linked to computational design. Computation has been part of human practice since the 19th century, when machines began processing different types of data (in that case, numbers) to support decision-making processes.
I believe that data-driven design bridges the gap between a quantitative and a qualitative approach to design. Through data and metrics, we can describe and measure the objective aspects of a design, but the design process also involves perception and intuition. Sometimes, we know that a design “works”—the shapes, proportions, and relationships between elements feel right. However, a data-driven approach allows us to manage and quantify these qualitative aspects, providing stronger arguments for our design choices.
This is particularly intriguing because today, thanks to computational power and machine learning algorithms, we can transform any type of data—images, 3D models, videos, sounds—into numerical values. This means we can analyze and understand how these pieces of information influence design, abstract new relationships, and build a design process based on more informed decisions.
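The idea that images, models, or sounds become comparable once encoded as numbers can be sketched very simply. In this illustrative example (the "facade" arrays are synthetic stand-ins, not real data), any array-shaped input is flattened into a feature vector, and cosine similarity then quantifies how alike two design artifacts are:

```python
import numpy as np

def to_vector(data) -> np.ndarray:
    """Flatten any array-shaped data (image, voxel grid, audio) to a 1-D feature vector."""
    return np.asarray(data, dtype=float).ravel()

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two feature vectors (1 = identical direction)."""
    va, vb = to_vector(a), to_vector(b)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Synthetic stand-ins: two similar "facade images" and one unrelated one.
rng = np.random.default_rng(1)
facade_a = rng.random((16, 16))
facade_b = facade_a + 0.05 * rng.standard_normal((16, 16))  # slight variation of a
facade_c = rng.random((16, 16))                             # independent pattern

sim_ab = cosine_similarity(facade_a, facade_b)  # high: near-duplicate designs
sim_ac = cosine_similarity(facade_a, facade_c)  # lower: unrelated designs
```

Once everything lives in the same vector space, the relationships the interview mentions, between images, 3D models, and other design data, can be measured, clustered, and fed back into the design process.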
How would you argue that, with the advent of artificial intelligence, an architect’s authorship can be linked to the construction and curation of datasets?
This could be a useful way to describe an architect’s job today. For example, a job listing might state that an architect should have experience in design, drafting, and managing and manipulating datasets.
Today, more than ever, architecture should be seen as the ability to connect different elements and understand the multiple layers that influence a project. Datasets can represent information about communities, energy, materials, and much more. Our work is to integrate all these factors—not just to define the aesthetics or performance of a project, but to create something more meaningful.
Thanks to a data-driven approach, we can now incorporate data that we previously lacked the resources to collect or didn’t consider helpful for design decisions. This is transforming the role of the architect: while the figure of the “master” once dominated the creative process, today we see students and professionals leveraging diffusion models to redefine their working methods.
However, I believe we shouldn’t passively select what these models generate, but rather add layers of complexity, actively contributing to the process. The applications of diffusion models are fascinating, not just for image generation.
SOMs represent a highly interesting tool for interacting with and manipulating datasets. Could you explain what they are and describe some practical applications?
A self-organizing map (SOM) is an unsupervised machine learning algorithm that clusters data by identifying similarities among the feature vectors of a dataset. One of the most intriguing aspects of SOMs is their flexibility: the same algorithm can perform various tasks, such as clustering, prediction, dimensionality reduction, or feature extraction.
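The mechanics behind this are compact enough to sketch. Below is a minimal numpy implementation of the classic Kohonen training loop, not the lab's actual code: each sample pulls its best-matching unit (and, early in training, its grid neighbors) toward itself, so similar feature vectors end up mapped to the same cell of a small grid. Function names, the 2×2 grid, and the decay schedules are illustrative choices.

```python
import numpy as np

def train_som(data, grid=(2, 2), epochs=30, lr0=0.5, sigma0=1.0, seed=0):
    """Train a minimal self-organizing map on row-vector data.

    Returns a (grid_h, grid_w, n_features) array of learned prototypes.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)

    for epoch in range(epochs):
        # Learning rate and neighborhood radius decay over time.
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.1
        for x in rng.permutation(data):
            # Best-matching unit: neuron whose prototype is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the BMU on the grid.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)

    return weights

def best_matching_unit(weights, x):
    """Map a sample to its closest neuron (grid cell)."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Two synthetic clusters of 4-D feature vectors.
rng = np.random.default_rng(42)
cluster_a = rng.normal(loc=0.2, scale=0.05, size=(30, 4))
cluster_b = rng.normal(loc=0.8, scale=0.05, size=(30, 4))
data = np.vstack([cluster_a, cluster_b])

weights = train_som(data, grid=(2, 2))
bmu_a = best_matching_unit(weights, cluster_a.mean(axis=0))
bmu_b = best_matching_unit(weights, cluster_b.mean(axis=0))
```

After training, `bmu_a` and `bmu_b` are different grid cells: the two clusters have been separated on the low-dimensional map, which is the same mechanism the urban case study below exploits to regroup areas by similarity rather than by administrative boundary.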
Currently, one area where I am trying to apply them is digital twins at different scales. Digital twins are digital representations of real environments that respond to simulations or environmental phenomena. They can be instrumental in helping people unfamiliar with maps and graphs make decisions in an immersive way within a space.
The main challenge in digital urban models is their reliance on predefined data-aggregation units, such as ZIP codes in the United States or census tracts. However, these divisions often have little relevance to our research questions: they are arbitrarily drawn and may separate communities that are very similar.
For example, in a case study, we aimed to analyze the correlation between:
- the density of septic tanks,
- urban flooding caused by storms,
- public health impacts,
- effects on housing prices.
However, when data is mapped onto traditional census units, the results are often not meaningful, as these units can be too large or artificially separate similar communities.
To overcome this issue, we used self-organizing maps, which allow us to restructure aggregation units more meaningfully. We employed feature vectors to describe phenomena such as flood maps, housing prices, and septic tank density while also incorporating geographic and social variables.
What makes this particularly interesting is that self-organizing maps transform high-dimensional data into low-dimensional spaces, such as a 2×2 grid. This enables us to identify city areas with similar characteristics, even if they would have been separated in traditional models.
This is an example of how self-organizing maps can be applied to design and socially relevant issues, helping to make decisions and develop urban policies based on new data aggregation units that are more aligned with research needs.
Karla Saldaña Ochoa
Karla Saldaña Ochoa is a tenure-track assistant professor at the School of Architecture at the University of Florida and a faculty affiliate at the AI2 Center, the Center of Latin American Studies, and FIBER. Karla is the director of SHARE Lab, a research group focused on developing projects that leverage the interaction between artificial intelligence (AI) and human intelligence, applied to boost creativity in architectural design and to create tools for analyzing big data on urban phenomena. She collaborates on international projects in Germany, Italy, Switzerland, Mexico, and Ecuador. Karla is an Ecuadorian architect and coder with a Master of Advanced Studies in Landscape Architecture and a Ph.D. in Technology in Architecture from ETH Zurich. Her Ph.D. researched the integration of artificial and human intelligence for an accurate and agile response to natural disasters, leveraging a multimodal fusion approach for ML inference.