Research Project by Dr. Urs Bergmann & Dr. Nikolay Jetchev
The overarching goal of this project is to advance the field of generative modeling, i.e., to build methods that learn complex probability distributions. In particular, we focus on generative adversarial networks (GANs), a relatively new technique that has shown impressive results in purely data-driven modeling of images.
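To make the adversarial idea concrete, here is a minimal sketch of the standard (non-saturating) GAN losses from Goodfellow et al., written as a hypothetical helper for illustration — this is not the project's actual training code. `d_real` stands for the discriminator's score D(x) on a real sample and `d_fake` for D(G(z)) on a generated one, both assumed to lie in (0, 1):

```python
import math

def gan_losses(d_real, d_fake):
    # Discriminator: classify real data as 1 and generated data as 0.
    d_loss = -math.log(d_real) - math.log(1.0 - d_fake)
    # Generator: fool the discriminator, i.e. push D(G(z)) toward 1.
    g_loss = -math.log(d_fake)
    return d_loss, g_loss

# A maximally uncertain discriminator outputs 0.5 everywhere.
d_loss, g_loss = gan_losses(0.5, 0.5)
```

Training alternates gradient steps that decrease `d_loss` for the discriminator and `g_loss` for the generator, so the two networks improve against each other.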
We have developed a novel method for texture synthesis called Spatial GAN (SGAN). The following features make it a state-of-the-art algorithm for texture synthesis:
- high image quality of the generated textures
- very high scalability w.r.t. the output texture size
- fast, real-time forward generation
- the ability to fuse multiple diverse source images into complex textures
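The scalability comes from applying a convolutional generator to a spatially extended noise tensor: enlarging the noise array enlarges the output texture, with no retraining. A minimal sketch of the transposed-convolution size arithmetic, using illustrative hyperparameters (5 layers, kernel 5, stride 2, padding 2 — not necessarily the paper's exact settings):

```python
def sgan_output_size(z_spatial, n_layers=5, kernel=5, stride=2, pad=2):
    """Spatial size of the generated texture for an input noise tensor of
    width z_spatial, after n_layers of transposed-convolution upsampling.
    Hyperparameters are illustrative, not the paper's exact architecture."""
    size = z_spatial
    for _ in range(n_layers):
        # Standard transposed-convolution output size formula.
        size = (size - 1) * stride - 2 * pad + kernel
    return size

small = sgan_output_size(5)    # a small texture patch
large = sgan_output_size(100)  # a much larger texture from the same network
```

With these settings each layer roughly doubles the spatial size, so doubling the noise extent roughly doubles the texture — the cost of generation stays linear in the number of output pixels.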
Our paper has been accepted at the NIPS 2016 adversarial learning workshop.
The video shows how we can learn a large texture model from an example painting (by Joan Miró) and use it to dynamically morph and paint the Zalando logo.
For example, we can take a satellite image of the city of Barcelona as the source texture and then generate a gigantic random city texture with the statistics of the Barcelona image. In our paper we examine this example in detail and show the superior performance of our approach compared to other techniques.