Deep Fluids: A Generative Network for Parameterized Fluid Simulations

Byungsoo Kim1, Vinicius C. Azevedo1, Nils Thuerey2, Theodore Kim3, Markus Gross1, Barbara Solenthaler1
1ETH Zurich, 2Technical University of Munich, 3Pixar Animation Studios

Above: Our generative neural network synthesizes fluid velocities continuously in space and time, using a set of input simulations for training and a few parameters for generation. This enables fast reconstruction of velocity fields, continuous interpolation, and latent space simulations.


This paper presents a novel generative model to synthesize fluid simulations from a set of reduced parameters. A convolutional neural network is trained on a collection of discrete, parameterizable fluid simulation velocity fields. Due to the capability of deep learning architectures to learn representative features of the data, our generative model is able to accurately approximate the training data set, while providing plausible interpolated in-betweens. The proposed generative model is optimized for fluids by a novel loss function that guarantees divergence-free velocity fields at all times. In addition, we demonstrate that we can handle complex parameterizations in reduced spaces, and advance simulations in time by integrating in the latent space with a second network. Our method models a wide variety of fluid behaviors, thus enabling applications such as fast construction of simulations, interpolation of fluids with different parameters, time re-sampling, latent space simulations, and compression of fluid simulation data. Reconstructed velocity fields are generated up to 700x faster than traditional CPU solvers, while achieving compression rates of over 1300x.
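One way to guarantee exactly divergence-free velocities, as the abstract claims, is to have the network output a stream function and take its curl, since the divergence of a curl vanishes identically. The sketch below illustrates this idea in 2D with NumPy; the array `psi` merely stands in for a network's output and is not the paper's actual architecture or loss.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): a random 2D
# field stands in for a decoder's predicted stream function psi.
rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))  # axis 0 = y, axis 1 = x

# 2D curl of a scalar stream function: u = d(psi)/dy, v = -d(psi)/dx.
dpsi_dy, dpsi_dx = np.gradient(psi)
u, v = dpsi_dy, -dpsi_dx

# Discrete divergence: du/dx + dv/dy. Because the finite-difference
# operators along distinct axes commute, this is zero up to round-off,
# regardless of what values psi holds.
div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
print(np.max(np.abs(div)))  # ~0 (floating-point round-off)
```

Training on the stream function therefore enforces incompressibility by construction rather than by penalty alone; in 3D the same idea applies with a vector potential and the full curl operator.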


Supplemental video


Paper | Supplemental Material | Code | Presentation | Bibtex


This work was supported by the Swiss National Science Foundation (grant No. 200021_168997) and ERC Starting Grant 637014.