# PerceptualGAN

This is a PyTorch implementation of the paper *Image Manipulation with Perceptual Discriminators*.

Diana Sungatullina¹, Egor Zakharov¹, Dmitry Ulyanov¹, Victor Lempitsky¹²

¹ Skolkovo Institute of Science and Technology, ² Samsung Research

European Conference on Computer Vision, 2018

Project page

## Dependencies

## Usage

### 1. Cloning the repository

```shell
$ git clone https://github.com/egorzakharov/PerceptualGAN.git
$ cd PerceptualGAN/
```

### 2. Downloading the paper datasets

Please follow the guidelines from the official repositories:

- CelebA-HQ
- monet2photo, apple2orange (CycleGAN datasets)

### 3. Setting up TensorBoard for PyTorch

All training data (including intermediate results) is displayed via TensorBoard.

Follow the installation instructions in the repository.

To launch, run the following command in the repository folder:

```shell
tensorboard --logdir runs
```

### 4. Training

Example usage:

```shell
$ ./scripts/celebahq_256p_pretrain.sh
$ ./scripts/celebahq_256p_smile.sh
```

In order to achieve the best quality results, you first need to pretrain the network as an autoencoder. To do so, use the scripts with the pretrain suffix for the appropriate dataset. After pretraining, you can launch the main training script.

You also need to set the following options within the scripts:

- `images_path`: for CelebA-HQ, this should point to the folder with images; otherwise it can be ignored.
- `train/test_img_A/B_path`: should point either to the txt list of image names (in the case of CelebA-HQ) or to the image folders (CycleGAN datasets).
- `pretrained_gen_path`: when pretraining is finished, this should point to the folder containing the `latest_gen_B.pkl` file; by default it can be specified as:

```shell
--pretrained_gen_path runs/<model name>/checkpoints
```

For a detailed description of the other options, refer to:

- `train.py`
- `models/translation_generator.py`
- `models/discriminator.py`

You can easily train the model on your own dataset by changing the paths to your data and specifying the input image size and transformations; see the example scripts for reference.
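As a rough sketch, a custom-dataset script might look like the following. The dataset name, folder layout, and the expansion of `train/test_img_A/B_path` into four separate flags are assumptions based on the options described above; check the example scripts and `train.py` for the authoritative flag names.

```shell
#!/bin/bash
# Hypothetical training script for a custom unpaired dataset with a
# folder-per-domain layout (as in the CycleGAN datasets).
# Flag names are assumptions; verify them against ./scripts/*.sh and train.py.
python train.py \
    --train_img_A_path data/my_dataset/trainA \
    --train_img_B_path data/my_dataset/trainB \
    --test_img_A_path data/my_dataset/testA \
    --test_img_B_path data/my_dataset/testB \
    --pretrained_gen_path runs/my_dataset_pretrain/checkpoints
```

As with the paper datasets, the autoencoder pretraining run would be launched first, so that `pretrained_gen_path` points at a checkpoints folder containing `latest_gen_B.pkl`.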

### 5. Testing

To test, run the following command with `input_path` set to the folder with input images (optionally, also set `img_list` to a file listing a subset of these image names), and specify the scaling via `image_size` (required for CelebA-HQ), the file with the network weights (`net_path`), and the output directory (`output_path`).

Example usage:

```shell
python test.py --input_path data/celeba_hq --img_list data/lists_hq/smile_test.txt --image_size 256 \
    --net_path runs/celebahq_256p_smile/checkpoints/latest_gen_B.pkl --output_path results/smile_test
```

### 6. Pretrained models

Models are accessible via the link.

If you want to use the finetuned VGG network for better results, you can download it and put it in the repository folder. You will also have to set the `enc_type` option:

```shell
--enc_type vgg19_pytorch_modified
```

The default PyTorch VGG network is used in the example scripts.
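For instance, assuming the finetuned VGG weights have been downloaded into the repository folder, the testing command from the previous section might become (paths and checkpoint names here are illustrative, mirroring the earlier example):

```shell
# Hypothetical: testing with the finetuned VGG encoder enabled.
# The finetuned weights file is assumed to sit in the repository folder,
# where test.py is expected to find it when enc_type is set.
python test.py --input_path data/celeba_hq --img_list data/lists_hq/smile_test.txt --image_size 256 \
    --net_path runs/celebahq_256p_smile/checkpoints/latest_gen_B.pkl --output_path results/smile_test \
    --enc_type vgg19_pytorch_modified
```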


## Acknowledgements

This work has been supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001).
