Research project by Urs Bergmann (former team member), Nikolay Jetchev & Roland Vollgraf
The overarching goal of this project is to advance the field of generative modeling, i.e., to build methods that learn complex probability distributions. In particular, we focus on generative adversarial networks (GANs), a relatively recent technique that has shown impressive results in purely data-driven modeling of images.
Texture created by Zalando Research method:
We have developed a novel method for texture synthesis called Spatial GAN, or SGAN. The following features make it a state-of-the-art algorithm for texture synthesis:
- high image quality of the generated textures
- very high scalability with respect to the output texture size
- fast, real-time forward generation
- the ability to fuse multiple diverse source images into complex textures
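The scalability and fast forward generation come from the same architectural property: the generator is fully convolutional, so it maps a spatial grid of noise vectors to a texture, and enlarging the noise grid enlarges the output without retraining. Below is a minimal PyTorch sketch of this idea; the layer sizes and names are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch of an SGAN-style fully convolutional generator.
# Layer widths, depth, and the noise dimension z_dim are assumptions
# for demonstration, not the published architecture.
import torch
import torch.nn as nn


class SpatialGenerator(nn.Module):
    def __init__(self, z_dim=20, ch=64):
        super().__init__()
        # Each transposed convolution doubles the spatial resolution.
        # There are no fully connected layers, so the spatial size of
        # the input noise grid is unconstrained.
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 4 * ch, 4, stride=2, padding=1),
            nn.BatchNorm2d(4 * ch), nn.ReLU(True),
            nn.ConvTranspose2d(4 * ch, 2 * ch, 4, stride=2, padding=1),
            nn.BatchNorm2d(2 * ch), nn.ReLU(True),
            nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)


gen = SpatialGenerator()
# A 4x4 noise grid yields a 64x64 texture; a 16x16 grid yields 256x256
# from the exact same weights -- this is the scalability property.
small = gen(torch.randn(1, 20, 4, 4))
large = gen(torch.randn(1, 20, 16, 16))
```

Because generation is a single forward pass through a small convolutional network, it runs in real time on a GPU, and arbitrarily large textures can be produced by simply sampling a larger noise grid.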
Zalando box packaging:
Our paper was accepted at the NIPS 2016 adversarial learning workshop, and follow-up work was accepted at the International Conference on Machine Learning (ICML) 2017.
The video shows how we can learn a large texture model from an example painting (by Joan Miró) and use it to dynamically morph and paint the Zalando logo.
Given a satellite image of the city of Barcelona as the source texture, we can generate a gigantic random city texture with the statistics of the Barcelona image. In our paper we examine this example in detail and show the superior performance of our approach compared to other techniques.
If you want to generate infinite textures yourself, check out our method on GitHub: