Swapping of Fashion on People Images with Generative Modelling

Research Project by Urs Bergmann & Nikolay Jetchev

The overarching goal of this project is to advance the field of generative modelling, i.e. to build methods that learn complex probability distributions. In particular, we focus on generative adversarial networks (GANs), a relatively new technique that has shown impressive results in purely data-driven modelling of images.

“How do I look?” – this is a question typically asked when trying on a new piece of clothing or a new pair of shoes. It emphasizes that visual appearance is key to fashion. The look of fashion can even be more important than fit and size: many people are willing to trade worse fit for better looks, in particular for short-term or one-off events. In a digitized world, a look is represented as an image in a computer. The research focus and purpose of this project is to generate and manipulate images in the fashion domain using Machine Learning (ML).

ML is a set of methods to learn from data and to model high-dimensional probability distributions. Images with millions of pixels are a domain where these techniques are very useful. In addition to classical computer vision (e.g. face detection), ML is changing the way we use images to interact with computers and smartphones. Style exploration is one such novel computer-vision application – one that could fundamentally change how we experience the fashion world.

The methodology of Generative Adversarial Networks (GANs), on which our current line of research is focused, allows for learning arbitrary probability distributions and sampling from them. When we model images, the ability to draw new samples from the learned distribution gives rise to exciting new applications. A striking example that recently gathered worldwide publicity transfers motion and pose from one human to another. Many fashion applications of generative modelling require a similar generation of human body poses conditioned on specific inputs – e.g. swapping the clothes that people are wearing, or changing the body pose of a person in an image while keeping a plausible appearance. Our work focuses on using state-of-the-art image modelling techniques to flexibly modify the appearance of human fashion images.
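To sketch the mechanics: in the original GAN formulation of Goodfellow et al. (2014), a generator network $G$ and a discriminator network $D$ play a minimax game,

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

where $D$ learns to distinguish real images $x$ from generated ones $G(z)$, while $G$ learns to fool $D$. At the equilibrium of this game the samples produced by $G$ follow the data distribution – which is exactly what makes generating and manipulating realistic fashion images possible.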

We recently published our work at the International Conference on Computer Vision (ICCV):

Nikolay Jetchev and Urs Bergmann: “The Conditional Analogy GAN: Swapping Fashion Articles on People Images.” ICCV Workshop on Computer Vision for Fashion, October 2017.
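To make the conditional setup concrete, here is a minimal sketch, assuming PyTorch, of how a generator for this task could be wired: it consumes the image of a person, the article they currently wear, and a new article, and emits an image of the person wearing the new article. The class name, layer choices, and image sizes below are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: names and layers are assumptions,
# not the generator architecture from the CAGAN paper.
class SwapGenerator(nn.Module):
    """Maps (person image, worn article, new article) to a swapped image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # 9 input channels: three RGB images stacked along the channel axis
            nn.Conv2d(9, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, person, worn_article, new_article):
        # Condition the generator by concatenating all three images
        x = torch.cat([person, worn_article, new_article], dim=1)
        return self.net(x)

G = SwapGenerator()
person = torch.randn(1, 3, 128, 96)   # image of a person
worn = torch.randn(1, 3, 128, 96)     # article currently worn
target = torch.randn(1, 3, 128, 96)   # article to swap in
swapped = G(person, worn, target)     # person wearing the new article
print(swapped.shape)                  # torch.Size([1, 3, 128, 96])
```

In a full system such a generator is trained adversarially against a discriminator that judges whether a person–article combination looks plausible; the actual architecture and training losses are described in the paper.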

The following Figures show example images generated by the proposed method.

Figure 1: A single fashion item (an orange pullover) can be combined with multiple human models. This enables exploration of how it will look on different people.

Figure 2: A single human image can be combined with different upper-body garments, exploring which of them looks best on the person.

We’re constantly pushing the limits by advancing our methodology – and since the publication above we have made significant progress towards better image quality. See for yourself in the next Figure.

Figure 3: The person in the leftmost image should wear the article shown second from the left; the image on the right is the final result. Our improved methods for Model Transfer increase the image resolution and preserve more texture details.

We’ll soon publish our latest approach – so watch our homepage if you want to know how this is achieved!