Contrastive Unpaired Translation (CUT)

License: MIT
Model Type: Image Generation
Contrastive Unpaired Translation (CUT) is a PyTorch-based framework for unpaired image-to-image translation. It combines patchwise contrastive learning with adversarial training to produce high-quality translations without paired datasets, and trains faster with less memory than traditional methods such as CycleGAN.

Key Features

  • Utilizes patchwise contrastive learning to align corresponding patches between input and output images.
  • Eliminates the need for hand-crafted loss functions and inverse networks.
  • Supports single-image training scenarios, enabling translation with minimal data.
  • Faster and more memory-efficient training compared to CycleGAN.
  • Based on PyTorch and compatible with existing CycleGAN and pix2pix frameworks.
  • Includes implementations of both CUT and FastCUT models.
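The patchwise contrastive objective mentioned above can be illustrated with a minimal sketch: each patch feature from the output image is pulled toward the feature of the input patch at the same location (the positive) and pushed away from features at other locations (the negatives), via an InfoNCE-style cross-entropy. This is a NumPy illustration of the idea, not the repository's implementation; the function name, tensor shapes, and temperature value are assumptions for the example.

```python
import numpy as np

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """InfoNCE-style patchwise contrastive loss (illustrative sketch).

    feat_q: (N, C) features of N patches from the translated output.
    feat_k: (N, C) features of the input patches at the same N locations.
    The i-th row of feat_k is the positive for the i-th row of feat_q;
    all other rows serve as negatives. tau is an assumed temperature.
    """
    # L2-normalize so the dot product is a cosine similarity.
    q = feat_q / np.linalg.norm(feat_q, axis=1, keepdims=True)
    k = feat_k / np.linalg.norm(feat_k, axis=1, keepdims=True)

    # (N, N) similarity matrix; the diagonal holds the positive pairs.
    logits = (q @ k.T) / tau

    # Cross-entropy with the "correct class" being the matching location:
    # subtract the row max for numerical stability, then log-softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the output patch features match their input counterparts, the diagonal dominates and the loss is near zero; mismatched features drive the loss toward log N. In the actual framework these features come from intermediate encoder layers, but the loss structure is the same.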
