I believe a statistical approach to design conception will shape AI's potential for architecture. This approach is less deterministic and more holistic in character. Rather than using machines to optimize a set of variables, relying on them to extract significant qualities and to mimic them throughout the design process represents a paradigm shift.

Let's unpack floor plan design into 3 distinct steps:

(I) footprint massing
(II) program repartition
(III) furniture layout

Each step corresponds to a Pix2Pix GAN model trained to perform one of the 3 tasks above. By nesting these models one after the other, I create an entire apartment building "generation stack" while allowing for user input at each step. Beyond the mere development of a generation pipeline, this attempt aims at demonstrating the potential of GANs for any design process: by nesting GAN models and allowing user input between them, I try to achieve a back-and-forth between humans and machines, between disciplinary intuition and technical innovation. Additionally, by tackling multi-apartment processing, this project scales beyond the simplicity of single-family houses.

Pix2Pix uses a conditional generative adversarial network (cGAN) to learn a mapping from an input image to an output image. The network consists of two main pieces: the Generator and the Discriminator. The Generator transforms the input image into an output image; the Discriminator tries to guess whether an image was produced by the Generator or is an original. The two parts of the network challenge each other, resulting in higher-quality outputs that are difficult to differentiate from the original images.

We use this ability to learn image mappings to let our models learn topological features and space organization directly from floor plan images. We control the type of information the model learns by formatting the images. As an example, just showing our model the shape of a parcel and its associated building footprint yields a model able to create typical building footprints given a parcel's shape.

I used Christopher Hesse's implementation of Pix2Pix. His code uses TensorFlow, as opposed to the original version, which is based on Torch, and has proven easy to deploy. I preferred TensorFlow because its large user base and knowledge base gave me confidence that I could easily find answers if I ran into an issue. I ran fast iterations and tests using an NVIDIA Tesla V100 GPU on Google Cloud Platform (GCP). The simplicity of the NVIDIA GPU Cloud Image for Deep Learning offered on GCP allowed a seamless deployment, installing all the libraries necessary for Pix2Pix (TensorFlow, Keras, etc.) and the packages needed to run this code on the machine's GPU (CUDA & cuDNN). I used TensorFlow 1.4.1, but a newer version of Pix2Pix built on TensorFlow 2.0 is available.

Figure 2 displays the results of a typical training. This sequence first took over a day and a half to train. It eventually took under 2 hours on a Tesla V100 on GCP, allowing for more tests and iterations than running the same training locally. We show how one of my GAN models progressively learns how to lay out rooms and the position of doors and windows, also called fenestration, for a given apartment unit in the sequence in Figure 2.

Figure 2. Apartment architectural sequence

Although the initial attempts proved imprecise, the machine builds some form of intuition after 250 iterations.

The work of Isola et al., enabling image-to-image translation with their model Pix2Pix, paved the way for my research. Zheng and Huang in 2018 first studied floor plan analysis using GANs: the authors proposed to use GANs for floor plan recognition and generation using Pix2PixHD. Floor plan images processed by their GAN architecture get translated into programmatic patches of colors; inversely, patches of colors in their work turn into drawn rooms. If the user specifies the position of openings and rooms, the network lays out the furniture elements. Nathan Peters' thesis at the Harvard Graduate School of Design in the same year tackled the possibility of laying out rooms across a single-family home footprint; Peters' work turns an empty footprint into programmatic patches of color without specified fenestration. Regarding GANs as design assistants, Nono Martinez's thesis at the Harvard GSD in 2017 investigated the idea of a loop between the machine and the designer to refine the very notion of the "design process".

I build upon the previously described precedents to create a 3-step generation stack (Figure 8). As described in Figure 3, each model of the stack handles a specific task of the workflow: (I) footprint massing, (II) program repartition, (III) furniture layout. The challenge of drawing floor plates hosting multiple units marks the difference between single-family houses and apartment buildings.
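For reference, the adversarial setup described above — a Generator conditioned on an input image, challenged by a Discriminator — corresponds to the conditional GAN objective formulated by Isola et al. for Pix2Pix, with an added L1 term pulling outputs toward the ground-truth image:

```latex
\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\left[\log D(x, y)\right]
  + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right]

\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[\lVert y - G(x, z) \rVert_1\right]

G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)
```

Here x is the input image (e.g. a parcel shape), y the target image (e.g. the footprint), and z the generator's noise; the L1 term is what keeps the generated plans close to the drawing conventions seen in training.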
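"Formatting images" to control what the model learns boils down to building paired training examples: the input condition and the target drawing combined in one image, as paired image-to-image trainers such as Pix2Pix expect. A minimal sketch of that pairing step with NumPy (the 256×256 placeholder arrays stand in for real floor plan renders):

```python
import numpy as np

def make_pair(input_img: np.ndarray, target_img: np.ndarray) -> np.ndarray:
    """Concatenate an input image (e.g. parcel shape) and its target
    (e.g. building footprint) side by side into one training example."""
    if input_img.shape != target_img.shape:
        raise ValueError("input and target must share the same shape")
    return np.concatenate([input_img, target_img], axis=1)

# Illustrative 256x256 RGB placeholders standing in for real drawings:
# a black "parcel" image and a white "footprint" image.
parcel = np.zeros((256, 256, 3), dtype=np.uint8)
footprint = np.full((256, 256, 3), 255, dtype=np.uint8)

pair = make_pair(parcel, footprint)
print(pair.shape)  # (256, 512, 3)
```

Swapping what goes on each side of the pair (parcel→footprint, footprint→program, program→furniture) is exactly how one Pix2Pix architecture can be retrained for each step of the stack.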
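Training with Hesse's TensorFlow port is driven from the command line; the invocation below is a sketch based on that repository's README (the `plans/` directory names are placeholders for your own paired-image dataset):

```shell
# Clone Christopher Hesse's TensorFlow implementation of Pix2Pix.
git clone https://github.com/affinelayer/pix2pix-tensorflow.git
cd pix2pix-tensorflow

# Train on side-by-side paired images stored in plans/train.
python pix2pix.py \
  --mode train \
  --input_dir plans/train \
  --output_dir plans_model \
  --which_direction AtoB \
  --max_epochs 200
```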
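The nested "generation stack" — one model per step, with room for human correction between steps — can be sketched as a simple pipeline. The three step functions below are hypothetical stand-ins for the trained Pix2Pix models; only the chaining and the user-in-the-loop hook are the point:

```python
from typing import Callable, List, Optional

# In the real stack each Step would be a trained Pix2Pix model mapping
# one image to the next; strings stand in for images here.
Step = Callable[[str], str]

def footprint_massing(parcel: str) -> str:
    return parcel + " -> footprint"

def program_repartition(footprint: str) -> str:
    return footprint + " -> program"

def furniture_layout(program: str) -> str:
    return program + " -> furniture"

def generation_stack(parcel: str,
                     steps: List[Step],
                     user_edit: Optional[Callable[[str], str]] = None) -> str:
    """Run the nested models in sequence, letting a user optionally
    adjust each intermediate output before it feeds the next model."""
    x = parcel
    for step in steps:
        x = step(x)
        if user_edit is not None:
            x = user_edit(x)  # human-in-the-loop correction
    return x

result = generation_stack(
    "parcel", [footprint_massing, program_repartition, furniture_layout])
print(result)  # parcel -> footprint -> program -> furniture
```

Because each intermediate output is itself a drawing, the designer can redraw or constrain it before the next model runs — the back-and-forth between machine and designer described above.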