GitHub – shaoanlu/faceswap-GAN: A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.


faceswap-GAN: Adding adversarial loss and perceptual loss (VGGface) to deepfakes' (reddit user) auto-encoder architecture.

Updates

  • 2018-08-27 Colab support: A Colab notebook for faceswap-GAN v2.2 is provided.
  • 2018-07-25 Data preparation: Added a new notebook for video pre-processing in which MTCNN is used for face detection as well as face alignment.
  • 2018-06-29 Model architecture: faceswap-GAN v2.2 now supports different output resolutions: 64×64, 128×128, and 256×256. The default RESOLUTION = 64 can be changed in the config cell of the v2.2 notebook.
  • 2018-06-25 New version: faceswap-GAN v2.2 has been released. The main improvements of the v2.2 model are its capability of generating realistic and consistent eye movements (results are shown below, or Ctrl+F for eyes), as well as higher video quality through face alignment.
  • 2018-06-06 Model architecture: Added the self-attention mechanism proposed in SAGAN to the v2 GAN model. (Note: there is still no official code release for SAGAN, so the implementation in this repo could be wrong. We will keep an eye on it.)


Google Colab support

Here is a playground notebook for faceswap-GAN v2.2 on Google Colab. Users can train their own model in the browser.

[Update 2019/10/04] There seem to be import errors in the latest Colab environment due to inconsistent package versions. Please make sure that Keras and TensorFlow match the version numbers shown in the requirements section below.
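If Colab has pulled in newer packages, pinning the versions listed under Requirements from a notebook cell should restore compatibility. A minimal, untested convenience snippet (on a GPU runtime, tensorflow-gpu may be preferable):

```python
# Run in a Colab/Jupyter cell: pin the versions listed in the Requirements section.
!pip install keras==2.1.5 tensorflow==1.6.0
```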

Descriptions

faceswap-GAN v2.2

  • FaceSwap_GAN_v2.2_train_test.ipynb

    • Notebook for model training of faceswap-GAN model version 2.2.
    • This notebook also provides code for still image transformation at the bottom.
    • Requires additional training images (binary eye masks) generated through prep_binary_masks.ipynb.
  • FaceSwap_GAN_v2.2_video_conversion.ipynb

    • Notebook for video conversion of faceswap-GAN model version 2.2.
    • Face alignment using 5-point landmarks is introduced into the video conversion.
  • prep_binary_masks.ipynb

    • Notebook for training data preprocessing. Output binary masks are saved in the ./binary_masks/faceA_eyes and ./binary_masks/faceB_eyes folders.
    • Requires the face_alignment package. (An alternative method for generating binary masks, not requiring the face_alignment and dlib packages, can be found in MTCNN_video_face_detection_alignment.ipynb; a minimal sketch of the landmark-based approach follows this list.)
  • MTCNN_video_face_detection_alignment.ipynb

    • This notebook performs face detection/alignment on the input video.
    • Detected faces are saved in ./faces/raw_faces and ./faces/aligned_faces for non-aligned/aligned results respectively.
    • Crude binary eye masks are also generated and saved in ./faces/binary_masks_eyes. These binary masks can serve as a suboptimal alternative to masks generated through prep_binary_masks.ipynb.
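For a concrete picture of the eye-mask preprocessing: a minimal, hypothetical sketch of landmark-based mask generation in the spirit of prep_binary_masks.ipynb. The function name, landmark indices, and overall flow are illustrative assumptions, not the notebook's actual code.

```python
import cv2
import numpy as np
import face_alignment  # the package required by prep_binary_masks.ipynb

# 2D 68-point landmark detector (face_alignment API as of its 2018-era releases).
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device="cpu")

def eyes_binary_mask(image_bgr):
    """Return a uint8 binary mask (0/255) covering both eye regions."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    landmarks = fa.get_landmarks(image_bgr[..., ::-1])  # detector expects RGB
    if not landmarks:
        return mask  # no face found: empty mask
    pts = landmarks[0].astype(np.int32)
    # 68-point convention: indices 36-41 and 42-47 are the two eyes.
    for eye in (pts[36:42], pts[42:48]):
        cv2.fillConvexPoly(mask, cv2.convexHull(eye), 255)
    return mask
```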

Usage

  1. Run MTCNN_video_face_detection_alignment.ipynb to extract faces from videos. Manually move/rename the aligned face images into the ./faceA/ or ./faceB/ folder.
  2. Run prep_binary_masks.ipynb to generate binary masks of the training images.
    • You can skip this pre-processing step by (1) setting use_bm_eyes=False in the config cell of the train_test notebook (see the config sketch after this list), or (2) using the low-quality binary masks generated in step 1.
  3. Run FaceSwap_GAN_v2.2_train_test.ipynb to train models.
  4. Run FaceSwap_GAN_v2.2_video_conversion.ipynb to create videos using the models trained in step 3.
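The exact contents of the config cell vary by notebook version; the following hypothetical sketch only illustrates the two settings this README documents (RESOLUTION from the 2018-06-29 update and use_bm_eyes from step 2), with the remaining names as illustrative placeholders.

```python
# Hypothetical config cell for the train_test notebook; only RESOLUTION and
# use_bm_eyes are documented in this README, the rest are assumptions.
RESOLUTION = 64       # output resolution: 64, 128, or 256
use_bm_eyes = True    # set False to train without the binary eye masks from step 2

# Training data folders as described under "Training data format" below.
img_dirA = "./faceA"
img_dirB = "./faceB"
```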

Miscellaneous

  • faceswap-GAN_colab_demo.ipynb
    • An all-in-one notebook for demonstration purposes that can be run on Google Colab.

Training data format

  • Face images are supposed to be in the ./faceA/ or ./faceB/ folder for each target respectively.
  • Images will be resized to 256×256 during training.
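A minimal sketch of the implied loading/resizing step, assuming OpenCV; the helper name and file globbing are illustrative, not the repo's data pipeline.

```python
import glob
import cv2

def load_training_faces(folder, size=256):
    """Load all face images from a target folder and resize them to size x size."""
    images = []
    for path in sorted(glob.glob(folder + "/*.jpg") + glob.glob(folder + "/*.png")):
        img = cv2.imread(path)
        if img is None:
            continue  # skip unreadable files
        images.append(cv2.resize(img, (size, size)))
    return images

faces_A = load_training_faces("./faceA")  # training faces for target A
faces_B = load_training_faces("./faceB")  # training faces for target B
```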

Generative adversarial networks for face swapping

1. Architecture

(Figures: encoder (enc_arch3d), decoder (dec_arch3d), and discriminator (dis_arch3d) architectures.)

2. Results

  • Improved output quality: Adversarial loss improves the reconstruction quality of generated images. (Figure: trump_cage)
  • Additional results: This image shows 160 random results generated by the v2 GAN with self-attention mechanism (image format: source -> mask -> transformed).

  • Evaluations: Evaluations of the output quality on the Trump/Cage dataset can be found here.

The Trump/Cage images are obtained from the reddit user deepfakes' project.

3. Features

  • VGGFace perceptual loss: Perceptual loss makes the direction of the eyeballs more realistic and consistent with the input face. It also smooths out artifacts in the segmentation mask, resulting in higher output quality.

  • Attention mask: The model predicts an attention mask that helps with handling occlusion, eliminating artifacts, and producing a natural skin tone (a minimal compositing sketch follows this list).

  • Configurable input/output resolution (v2.2): The model supports 64×64, 128×128, and 256×256 output resolutions.

  • Face tracking/alignment using MTCNN and Kalman filter in video conversion:

    • MTCNN is introduced for more stable detections and reliable face alignment (FA).
    • A Kalman filter smooths the bounding box positions over frames and eliminates jitter on the swapped face. (Figure: comp_FA)
  • Eyes-aware training: Introduces high reconstruction loss and edge loss in the eye region, which guides the model to generate realistic eyes.
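To make the attention-mask feature concrete: a minimal sketch of output compositing, under the assumption that the generator emits an alpha mask alongside its RGB output (variable names are illustrative, not the repo's).

```python
import numpy as np

def apply_attention_mask(alpha, rgb_out, input_face):
    """Composite the generator output with its input using a predicted attention mask.

    alpha:      (H, W, 1) attention mask in [0, 1]; 1 = use generated pixels
    rgb_out:    (H, W, 3) raw generated face
    input_face: (H, W, 3) face fed to the generator
    """
    # Where alpha is near 0 (occluders, background), input pixels pass through
    # unchanged, which suppresses artifacts and preserves natural skin tone.
    return alpha * rgb_out + (1.0 - alpha) * input_face
```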

Frequently asked questions and troubleshooting

1. How does it work?

  • The following illustration shows a very high-level and abstract (but not exactly the same) flowchart of the denoising autoencoder algorithm. The objective functions look like this. (Figure: flow_chart)
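For readers who prefer code to flowcharts: a compact, hypothetical Keras sketch of the deepfakes-style scheme underneath (one shared encoder, one decoder per identity, each autoencoder trained to reconstruct original faces from warped inputs). Layer sizes and names are assumptions for illustration, not this repo's architecture, and the adversarial/perceptual losses are omitted.

```python
from keras.layers import Conv2D, Dense, Flatten, Input, Reshape, UpSampling2D
from keras.models import Model

def build_encoder(size=64):
    inp = Input(shape=(size, size, 3))
    x = Conv2D(128, 5, strides=2, padding="same", activation="relu")(inp)
    x = Conv2D(256, 5, strides=2, padding="same", activation="relu")(x)
    x = Dense(512)(Flatten()(x))                 # bottleneck
    x = Reshape((8, 8, 8))(Dense(8 * 8 * 8)(x))  # back to a spatial tensor
    return Model(inp, x)

def build_decoder():
    inp = Input(shape=(8, 8, 8))
    x = UpSampling2D()(Conv2D(256, 3, padding="same", activation="relu")(inp))
    x = UpSampling2D()(Conv2D(128, 3, padding="same", activation="relu")(x))
    x = UpSampling2D()(Conv2D(64, 3, padding="same", activation="relu")(x))
    return Model(inp, Conv2D(3, 5, padding="same", activation="sigmoid")(x))

encoder = build_encoder()
decoder_A, decoder_B = build_decoder(), build_decoder()

x = Input(shape=(64, 64, 3))
autoencoder_A = Model(x, decoder_A(encoder(x)))  # train: warped faceA -> original faceA
autoencoder_B = Model(x, decoder_B(encoder(x)))  # train: warped faceB -> original faceB
autoencoder_A.compile(optimizer="adam", loss="mae")
autoencoder_B.compile(optimizer="adam", loss="mae")
# Conversion: encode a faceA image and decode with decoder_B to swap identities.
```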

2. Previews look good, but the faces are not transformed in the output videos?

  • The model performs at its full potential when the input images are preprocessed with face alignment methods.
    • (Figure: readme_note001)

Requirements

  • keras 2.1.5
  • Tensorflow 1.6.0
  • Python 3.6.4
  • OpenCV
  • keras-vggface
  • moviepy
  • prefetch_generator (required for the v2.2 model)
  • face-alignment (required for preprocessing in the v2.2 model)

Acknowledgments

Code borrows from tjwei, eriklindernoren, fchollet, keras-contrib, and reddit user deepfakes' project. The generative network is adopted from CycleGAN. Weights and scripts of MTCNN are from FaceNet. Illustrations are from irasutoya.
