A Tour of AI in Architecture

A Virtual Tour at the Arsenal Pavilion

Stanislas Chaillou
Towards Data Science

--

AI & Architecture Virtual Tour | Source: Author

In this article, we present the content of “Artificial Intelligence & Architecture”, an ongoing exhibit on display at the Arsenal Pavilion Museum in Paris. Due to recent events, the museum has closed its doors, but it reopens online and now offers a freely available virtual tour of the exhibition. We unveil here part of the exhibit's content and invite you to visit it using the Arsenal Pavilion's Virtual Tour.

You can now access the Virtual Tour at this address.

Exhibit Virtual Tour Walk-through

I. Introduction

Artificial Intelligence (AI) has already made its way into the industry, providing it with the means to meet new challenges. Its use in the field of architecture is still in its infancy, but the results already obtained are promising. This technology is much more than a mere opportunity: it is without a doubt a decisive step forward, quite capable of transforming architectural practice. This exhibition explores this engagement and its application to the built environment. Defining AI and explaining what it encompasses, both as a set of techniques and as a paradigm, is central to understanding its advent in architecture.

AI first needs to be considered from the perspective of the history of science and that of architecture. Rather than a “disruption”, the technological evolution surrounding and supporting AI is the result of a slow maturation. Indeed, the profession has been undergoing a transformation for quite some time. The progressive adoption of technological solutions has already profoundly changed each stage of its value chain: first by exploiting new construction techniques, then by developing appropriate design software, and now by introducing statistical computing capacities, with data science and artificial intelligence at the forefront. Rather than a radical change of orbit, we prefer to see a change of trajectory whose acceleration ultimately lies in the continuity of a practice that has led architecture to what it is today. Modularity, Computer-Aided Design (CAD), Parametricism and Artificial Intelligence (AI) are the four inexorably interwoven stages of a sequence that frames the slow hybridization of our practices as we live them and can imagine them today.

Bringing together concrete examples and recent results achieved in various fields of research, we showcase for the first time an inventory of AI's presence in architecture and a panorama of the latest advances in this field. Facade, plan, structure, perspective: these are so many scales of the city, real or hypothetical, to which AI can already contribute.

Between current theory and experimentation, this exhibit intends to shed light on the inception of a new technological era, one that empowers architectural practice while offering it renewed relevance.

In the video below, we share the opening lecture of the exhibit, held on February 27th at the Arsenal Pavilion. In this talk, we sum up the overarching goal of the exhibit while framing the potential of AI for Architecture.

II. History

Modularity, Computational Design, Parametricism and finally Artificial Intelligence are the four interwoven steps that have shaped the past 100 years of the systematization of Architecture. AI is simply the latest development of this gradual effort. We unpack each period here and illustrate it with key events and historical figures, in order to trace AI's advent in our discipline, Architecture.

A. Modularity

“Baukasten” by W. Gropius (far-left), Dymaxion House by R. Buckminster Fuller (left), Winslow Ames House by R. W. McLaughlin (right), Habitat 67 by M. Safdie (far-right)

Modularity could be defined as the starting point of systematized architectural design. The “modular grid”, theorized in 1920 by Walter Gropius for the Bauhaus, carries the hope of technical simplicity and the promise of affordable architecture. Initially, it arises as a topic of exploration for academics and practitioners. Gropius, together with Adolf Meyer, introduces the idea of the “Baukasten”, a typical module with strict assembly rules. In the same period, Richard Buckminster Fuller offers a more systemic view of the module, one that integrates pipelines, structures, etc. His Dymaxion House (1929–1946), which pushes modular housing to the extreme, sets a vibrant precedent and provides the first convincing demonstration of the concept for the industry. This standardization is later elaborated with Le Corbusier's “Modulor” (1945), which applies the modular idea to the human scale, making possible, as early as 1946, the holistic implementation of this principle.

With the Modulor, the dimensions of the built environment align to key metrics and ratios derived from the scale of the human body. Consequently, from the “Unité d’habitation” in Marseille (1952) to the Sainte-Marie de La Tourette Priory in Éveux (1959), Le Corbusier systematizes the dimensions and spans in relation to this scale.

Following these early theorists, architects adapt their practice to prioritize the matrix, which amounts to transferring part of the technical aspects of building design to the logic of the module. The arguments prove convincing: a significant improvement in the predictability of construction reduces the complexity and cost of the design. Modularity then swiftly extends to the field as a whole: the Winslow Ames House, built in the United States by Professor Robert W. McLaughlin in 1933, is one of the first large-scale modular projects in the world. This initiative is rightfully perceived as a major breakthrough, much like the Habitat 67 by Moshe Safdie in 1967 in Canada.

Modularity even influences urban planning in the early 1960s, when projects like the “Plug-in City” by Archigram aspire to create entirely modular cities. Through the continual assembly and dismantling of modules fitted onto a three-dimensional structural matrix, cities are expected to find a new logic, addressing both the possibility of growth and the imperative of feasibility. The initial appeal of the theory, however, quickly reaches its limits and is prematurely exhausted. Indeed, constraining architectural design to a simple device for assembling modules mechanically adjusted on a frame ultimately leads to its decline. Architecture cannot resign itself to confining its practice to the role of assembler, a mere guarantor of rules and processes, especially since the results prove monotonous and the assembly systems betray real constructive weaknesses. Nevertheless, if modularity of “strict observance” significantly inflects the practices of the profession, it also pervades them through its system of rules, and leaves a lasting mark on the underlying principles of architectural design.

B. CAD

PRONTO by Patrick Hanratty (far-left), URBAN II by N. Negroponte (left), ‘Seek’ by N. Negroponte (right), GENERATOR by Cedric Price (far-right)

The surge in computer technology (microprocessors, memory, personal computers, networks, etc.) allows for unprecedented complexity in modular design. The early 1980s mark a rehabilitation of the systematization of rule-based architectural design.

In fact, as early as the mid-1950s, a fundamental analysis of the potential of computer design begins in some engineering offices. In 1959, Professor Patrick Hanratty releases PRONTO, the first prototype of CAD (Computer-Assisted Drawing) software, created for designing engineering components. The possibilities this software offers, coupled with the fast-paced evolution of computational power, jump-start the discussion within the architectural field.

From the 1970s onwards, an entire generation of computer scientists and architects goes on to create a new field of research: computer-assisted architectural design (CAD). The Architecture Machine Group (AMG), created at the Massachusetts Institute of Technology (MIT) in 1967 and led by Professor Nicholas Negroponte, is probably its most singular example. Negroponte's book “The Architecture Machine” (1970) encapsulates the essence of the AMG's mission: “investigating how machines can enhance the creative process, and more specifically, the architectural production as a whole”. Culminating with the release of the projects URBAN II and, later, URBAN V, this group demonstrates, even before the industry makes any headway in this field, the potential of CAD applied to space design. A few years later, Cedric Price, then chair of the Department of Architecture at the University of Cambridge, invents the Generator (1976). Following Negroponte, Price takes up the AMG's work on AI and expands it by exploring the idea of a constantly evolving, autonomous building that reacts “intelligently” to adapt to user behavior. According to Price, the term “intelligence” embodies the behavior that the Generator manages to emulate.

Building on this new momentum from MIT, architects and the industry at large actively transform these inventions into numerous innovations. Architect Frank Gehry, certainly the most vibrant advocate of the cause, perceives the application of computation as the means to drastically relax the boundaries of assembly systems and to give his buildings new shapes and geometries. Gehry Technologies, the practice founded by Gehry and Jim Glymph, uses early CAD-CAM software (CAM: computer-aided manufacturing) such as CATIA, from Dassault Systèmes, to tackle complex geometric problems. Setting the precedent for the next thirty years of computational design, Gehry Technologies demonstrates the value of CAD to architects. Over the next fifteen years, the compelling growth of computational power and data storage capacities, combined with increasingly affordable and more user-friendly machines, massively facilitates the adoption of 3D-design software. Designers rapidly take possession of this new system which, by allowing rigorous control of geometry, boosts design reliability and feasibility, curbs the cost of design, facilitates collaboration among designers and architects and, moreover, enables more design iterations than traditional hand-sketching ever could. More tests and more options for better design results: such is the goal. However, shortcomings eventually arise. In particular, the repetitiveness of certain tasks and the lack of control over complex geometric shapes become serious impediments. Faced with these limitations, a new paradigm emerges beyond CAD: Parametricism.

C. Parametricism

Stadium N by Luigi Moretti (far-left), Ivan Sutherland & SketchPad (left), Grasshopper Interface (right), Kartal-Pendik Masterplan by Zaha Hadid Architects (far-right)

Parametricism allows the architect to better master complex shapes while avoiding repetitive tasks. With this new approach, each task is rationalized into a set of simple rules that constitute a procedure. This procedure can be encoded in a program by the architect so as to automate a previously manual and tedious execution. Beyond encoding a given procedure, a parametric program also helps isolate the key parameters affecting the result. The architect is then able to vary these parameters in order to generate different possible scenarios: different shapes or options, generated instantly, by simply varying the previously defined parameters.
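To make the idea concrete, here is a deliberately tiny sketch of a parametric procedure written in Python. The driving parameters (number of floors, floor height, footprint area) and the massing rule are invented for illustration only; they are not taken from any project shown in the exhibit.

```python
# A toy parametric procedure: a few driving parameters are declared
# explicitly, and every new set of values instantly yields a new option.
from dataclasses import dataclass
from itertools import product

@dataclass
class MassingOption:
    floors: int
    floor_height: float    # storey height, in meters (assumed)
    footprint_area: float  # ground-floor area, in square meters (assumed)

    @property
    def total_height(self) -> float:
        return self.floors * self.floor_height

    @property
    def gross_floor_area(self) -> float:
        return self.floors * self.footprint_area

# Varying the parameters generates a family of options rather than one design.
options = [MassingOption(f, h, a)
           for f, h, a in product([4, 8, 12], [3.0, 3.5], [400.0, 600.0])]

for option in options[:3]:
    print(option, "-> GFA:", option.gross_floor_area, "m2")
```

Changing any of the input lists regenerates the whole family of options at once, which is precisely the leverage Parametricism gives the designer.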

In the early 1960s, architect Luigi Moretti initiates the emergence of parametric architecture. His project “Stadium N” is the first clear expression of Parametricism. By defining nineteen driving parameters — among them the spectators' field of view and the sun exposure of the stands — Moretti establishes a strict procedure, directly responsible for the shape of the building. Each variation of the parameter set induces a new form for the stadium. The resulting shape, surprising as it may be, gives the first example of this new parametric aesthetic: although it stems from a quasi-scientific process, the result is striking in its organicity.

Three years later, Ivan Sutherland applies such principles to design software with his creation of Sketchpad, one of the first truly user-friendly CAD programs. Embedded at the heart of the software, the notion of the “atomic constraint” is Sutherland's translation of Moretti's idea of the “parameter”. In a drawing made with Sketchpad, each geometric form is translated for the machine into a set of atomic constraints, in other words, parameters. This notion is the first formulation of parametric design in computational terms. In 1988, Samuel Geisberg, founder of the Parametric Technology Corporation (PTC), rolls out Pro/ENGINEER, the first software program giving its users full access to geometric parameters. As the software is released, Geisberg sums up the parametric ideal perfectly:

“The goal is to create a system that would be flexible enough to encourage the engineer to easily consider a variety of designs. And the cost of making design changes ought to be as close to zero as possible.”

The bridge between design and computation, established by Sutherland and Geisberg, enables a new generation of “parameter-conscious” architects to emerge. In fact, a handful of key individuals adopt Parametricism and translate this new method into practical innovations throughout the industry. Zaha Hadid Architects is an outstanding example of the parametrization of architecture. An Iraqi architect and mathematician trained in the UK, Hadid merges mathematics and architecture through parametric design. Her work is often the result of rules, encoded in a program, that allow unprecedented levels of control over a building's geometry. Each architectural decision is translated into a given set of parameters, resulting in a specific building shape. For architect and engineer Patrik Schumacher, director of Zaha Hadid Architects, the discipline therefore “converges” towards Parametricism, as a design technique but also as an architectural style. In his book Parametricism — A New Global Style for Architecture and Urban Design (2008), he explains that Parametricism is linked to a growing awareness of the concept of the “parameter” at each stage of the built environment. This work would not have been possible without Grasshopper, a piece of software developed by David Rutten in the 2000s. Designed as a visual programming interface, Grasshopper allows architects to easily isolate the driving parameters of their design and to tune them iteratively. The simplicity of its interface, coupled with the intelligence of its built-in features, continues to power the design of buildings across the world and has inspired an entire generation of “parametric” designers.

However, a more profound revolution, driven by parametrization since the early 2000s, makes Parametricism prevalent in the daily practice of most architects: BIM (Building Information Modeling) is its most striking expression. The creation and development of BIM, spearheaded by Philip Bernstein, then a vice president at Autodesk, brings rationality and feasibility to a brand-new level within the construction industry. The underlying idea of BIM is that every element in a 3D building model is a function of parameters (“properties”) that drive each object's shape and document it. From Sutherland's Sketchpad to Revit, the most widely used BIM software today, there is a single common thread: the explicit use of parameters as the driving force of design.

D. Artificial Intelligence

John McCarthy (left), AI-generated faces by Nvidia Research (middle), GAN Model Architecture (right)

Artificial Intelligence (AI) is fundamentally a statistical approach to architecture. AI seems not only to provide a response to the limitations of parametric architecture but also, and above all, to open up a radically new era of architectural design.

In 1956, the American mathematician John McCarthy coins the concept of AI, defined as “using the human brain as a model for machine logic”. Instead of designing a deterministic model, built for a set number of variables and rules, AI lets the computer create intermediary parameters from information either collected from the data or transmitted by the user. Once the “learning phase” is complete, the machine can generate solutions that do not simply answer a set of predefined parameters, but instead emulate the statistical distribution of the information received during the learning phase. This concept is at the core of the paradigm shift brought about by AI. The partial independence of the machine in building its own understanding of the problem, coupled with its ability to digest the complexity of a set of examples, disrupts the premise of Parametricism. Since not all rules and parameters are explicitly declared upfront by the user, the machine can unexpectedly reveal underlying phenomena and even try to emulate them. It is a quantum leap from the world of heuristics (rule-based decision making) to that of statistical modeling.

At the beginning of the 1980s, the sudden increase in computational power and the sharp rise in funding give AI research a second wind. Key to this period are two main revolutions: expert systems and inference engines. The first corresponds to machines able to reason on the basis of a set of rules, using conditional statements. A real breakthrough at the time is Cyc, a project developed by Douglas Lenat, which involves machines geared towards inference reasoning: using a knowledge base (a set of statements established as true), an inference engine is able to deduce the veracity of a new statement.

It is not until the early 1990s, and the advanced mathematization of AI, that the field yields promising results. The emergence of a new type of model reveals a second area of potential for AI: networks and machine learning. Thanks to the use of “layered” computational models, also called “neural networks” because they recall the neural structure of the human brain, a machine can now grasp a greater degree of complexity than previously developed models. Such models can be “trained”, or in other words adjusted for specific tasks. Among the many innovations inspired by this development, the generative adversarial network (GAN) proves particularly relevant for architecture. Theorized in 2014 by Ian Goodfellow, a researcher at Google Brain, this model can generate images from neural networks while ensuring a level of accuracy through a self-correcting feedback loop.

Goodfellow's research takes AI from an analytical tool to a generative agent, and in doing so brings it closer to architectural concerns: design and image production. In other words, AI now represents a new generation of affordable, powerful and relevant tools for the discipline. If Negroponte's or Price's work was initially almost devoid of true machine intelligence, current architectural software can now leverage such possibilities and multiply its potential.

Although the potential AI represents for Architecture is a priori significant, it remains nonetheless contingent upon the designers’ ability to communicate their intention to the machine. To become a dependable aide, the machine must be trained, implying that architects face two main challenges: selecting, in the vast field of AI, the appropriate tools, and choosing a relevant level of abstraction and measurable qualifiers, which can be communicated to the machine. The fulfillment of these two prerequisites determines the success or failure of a form of artificial intelligence compatible with Architecture.

E. Historical Videos

To illustrate and complement the previous chapter, we offer here a few videos that bring to life the most important moments described above.

Historical videos extracts

III. AI in Architecture

Artificial Intelligence represents a new technological wave rather than a disruption. It complements our architectural practice by assisting architectural expertise and enhancing its expression. Today, the results of academic and private research offer the first proof of this evolution. So-called “generative” AI techniques, that is, techniques able to create shapes rather than merely analyze them, are recent. Over the last three years, they have opened up new fields of experimentation.

Generative adversarial networks (GANs) represent one of these potentially promising fields. These models are able to learn to replicate statistically significant phenomena found in the data presented to them. Their structure is a conceptually decisive innovation. Combining two models, the “generator” and the “discriminator”, a GAN follows an analogy similar to that of a student and a teacher. The “generator” (the student) seeks to generate images. The “discriminator” (the teacher) gives the “generator” a “grade” for each new image it produces. This grade assesses the resemblance of the created image to the images found in a training set. Based on this result, the “generator” adapts in order to obtain better results. It is this back-and-forth between the “generator” and the “discriminator” that hones the generated images throughout the training phase of a GAN model. Such a model thus gradually develops its ability to create relevant synthetic images, taking into account phenomena it has opportunely detected in the observed data.
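This back-and-forth can be summarized in a few lines of code. The sketch below, written with PyTorch, uses toy fully connected networks and assumed image dimensions; it illustrates the GAN training principle only and is not the model used for the exhibit.

```python
# Minimal GAN training loop sketch: the "student" (generator) and the
# "teacher" (discriminator) improve through a back-and-forth process.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 28 * 28, 64  # assumed toy dimensions

generator = nn.Sequential(          # the "student": noise -> image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh())

discriminator = nn.Sequential(      # the "teacher": image -> realism score
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. The teacher grades real images and the student's fakes.
    fake_images = generator(torch.randn(batch, NOISE_DIM))
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images.detach()), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. The student adapts so that its fakes earn a better grade.
    g_loss = loss_fn(discriminator(fake_images), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# e.g., for each batch of training images: train_step(batch_of_images)
```

Each call to train_step lets the “teacher” sharpen its grading on a batch of real images before the “student” updates itself to earn a better grade, which is exactly the loop that progressively hones the generated images.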

If GANs represent a great opportunity, it is essential to train them wisely. Formatting the database of images, or “training set”, makes it possible to control the type of information the model will learn. For example, simply defining, for each example, the shape of a parcel and the footprint of the associated building will produce a model that can generically create building footprints from the shape of a pre-existing parcel. However, mimesis is immune neither to blunders nor to rambling if left unguided. Our own “architectural sense” will therefore remain the guarantor of both the quality of the training sets and the quality of the results. In other words, a model will only be relevant if suitably trained by an architect.
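As an indication of what “formatting the training set” could look like in practice, the snippet below pairs each parcel drawing with the footprint of the associated building, in the spirit of image-to-image (pix2pix-style) training. The folder layout, file names and image size are assumptions made purely for illustration.

```python
# Sketch of assembling a "parcel -> footprint" training set from
# paired images (assumed folder layout: dataset/parcels, dataset/footprints).
from pathlib import Path

import numpy as np
from PIL import Image

def load_pair(parcel_path, footprint_path, size=(256, 256)):
    """Return one (input, target) pair as arrays normalized to [-1, 1]."""
    parcel = Image.open(parcel_path).convert("L").resize(size)
    footprint = Image.open(footprint_path).convert("L").resize(size)
    to_array = lambda img: np.asarray(img, dtype=np.float32) / 127.5 - 1.0
    return to_array(parcel), to_array(footprint)

def build_training_set(root="dataset"):
    """Pair every parcel drawing with its ground-truth footprint image."""
    pairs = []
    for parcel_file in sorted(Path(root, "parcels").glob("*.png")):
        footprint_file = Path(root, "footprints", parcel_file.name)
        if footprint_file.exists():  # keep only fully labeled examples
            pairs.append(load_pair(parcel_file, footprint_file))
    return pairs
```

Curating which pairs enter this set is where the architect's judgment intervenes: the model will only learn the phenomena the pairs actually encode.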

As an example, the training sequence displayed below, produced over a day and a half, shows how one of our GAN models gradually learns to lay out rooms and allocate the necessary openings (windows and doors). Although the initial attempts are imprecise, after 250 iterations the machine builds up a certain form of architectural intuition.

Training Sequence | Source: Author

IV. Applications

Artificial Intelligence is finally taking on image creation, a fundamental medium in the practice of architectural design. Indeed, the image has emerged in architecture as the central means of drawing and designing cities. It is, therefore, an obvious bridge between artificial intelligence and architecture: if AI is capable of creating images and of gauging their complexity, applying it to architectural production is a natural extension. Showcased here are recent research results at four distinct building scales: plans, facades, structures, and perspectives.

We present below some examples of the applications featured in the exhibit. For further explanation and an in-depth description of each research project, we invite you to visit the exhibit using the virtual tour.

Examples of AI applications at different architectural scales | Upper-left: GAN-Loci, Kyle Steinfeld; Upper-right: ArchiGAN, Stanislas Chaillou; Bottom-left: Pix2Pix, Isola et al.; Bottom-right: DS LAB, Caitlin Mueller & Renaud Danhaive

V. Future & Perspectives

For architecture, artificial intelligence is ultimately neither a “sui generis” phenomenon nor an untimely disruption, let alone a new intimidating dogma. Signals have been announcing it for decades, and it is only the culminating point of seventy-five years of invention and innovation. To the extent that AI can allow us to reconcile efficiency and organicity while offering a wide variety of options to the designer, we see here tremendous potential. At the very least, AI will enrich our practice and shed light on any blind spots of our discipline.

This exhibition hopes to be at the forefront of this evolution and contribute to the creation of a discussion platform on the relationship between AI and architecture. We urge architects to take an interest in AI and the scientific community to consider architecture as a field of investigation in its own right. We believe that AI is an asset, favoring a statistical approach to architectural design. Its less deterministic and more holistic character is undoubtedly an opportunity for our profession. Rather than using machines to optimize a set of variables, AI could allow us to rely partly on the machine to extract important and relevant architectural qualities and to reproduce them throughout the design process.

Artificial intelligence could never automate the intuition and the sensibility of the architect. However, the real risk for architecture has little to do with any dystopian consideration of the “man-machine” relationship and much more to do, prosaically, with competition and the sovereignty of the profession. Indeed, either the profession will be able to drive the aggregation of disciplines around the intelligent platforms that foreshadow the architectural practice of tomorrow, or it will miss this opportunity and be reduced to an ancillary discipline at the service of more powerful engineering practices emerging from the worlds of construction and technology. In the meantime, let us be practical: architects must realize that benefiting from an intelligent assistant is easier than many think, and that this option, in its earliest phase, should be earnestly studied and experimented with.

In light of the examples offered in this exhibition, and of the number of research projects being developed in industry and academia, let us concede that artificial intelligence in architecture is underway and is gradually becoming an actual field of investigation. Plan, facade, structure, perspective: AI is currently in an experimental phase, bringing solutions to every scale of the design of our built environment. Clearly, the results are there, and the applications of AI are eminently tangible.

Far from considering AI as a new dogma for architecture, we see this area as a new challenge, full of potential and promise.
