Learning generative AI through various mini projects and training on large datasets
This project explored the Pix2Pix model from the pytorch-CycleGAN-and-pix2pix repository to understand image-to-image translation on paired datasets.
- Dataset: Maps (translating input maps (A) to satellite images (B)).
- Results: Visualizations show Input (A), Generated (Fake B), and Ground Truth (Real B).
- Note: The model was trained for only 100 epochs; while it learned the overall mapping, longer training is needed to reduce artifacts and improve fidelity.
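As a sketch of the objective Pix2Pix trains against, the snippet below combines the conditional-GAN term with an L1 reconstruction term, assuming the standard formulation and the paper's default weight of 100. The helper names are illustrative, not taken from the repository's code.

```python
import numpy as np

def l1_loss(fake_b, real_b):
    """Mean absolute error between generated and ground-truth images."""
    return np.mean(np.abs(fake_b - real_b))

def bce_loss(pred, target):
    """Binary cross-entropy on discriminator outputs in (0, 1)."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def pix2pix_generator_loss(d_fake, fake_b, real_b, lambda_l1=100.0):
    """Generator objective: fool the discriminator (cGAN term) plus an L1
    term keeping the output close to the paired target. lambda_l1=100 is
    the default weighting from the Pix2Pix paper."""
    adv = bce_loss(d_fake, np.ones_like(d_fake))  # want D(fake) -> 1
    rec = l1_loss(fake_b, real_b)
    return adv + lambda_l1 * rec

# Toy example on random "images" (patch-style discriminator output)
rng = np.random.default_rng(0)
fake_b = rng.random((1, 3, 8, 8))
real_b = rng.random((1, 3, 8, 8))
d_fake = rng.random((1, 1, 2, 2))
print(pix2pix_generator_loss(d_fake, fake_b, real_b))
```

The L1 term is what pushes generated outputs toward the paired ground truth, which is why paired datasets like Maps suit Pix2Pix.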
| Input (A) | Generated (fake_B) | Ground Truth (real_B) |
|---|---|---|
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
| ![]() | ![]() | ![]() |
This project utilized the CycleGAN model from the pytorch-CycleGAN-and-pix2pix repository to explore unsupervised image-to-image translation.
- Dataset: horse2zebra (translating horses to zebras (A → B) and vice-versa (B → A)).
- Training: 30 epochs were completed.
- Observation: The model successfully captured the texture/style change (e.g., stripes), but more epochs are needed to fully refine the output and improve cycle consistency.
- Key Learning: Verified the effectiveness of cycle-consistency loss in learning mappings between unpaired domains.
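The cycle-consistency idea noted above can be sketched as follows: translating A → B → A (and B → A → B) should recover the original image, which is what lets CycleGAN learn without paired examples. This is a minimal illustration assuming the paper's default cycle weight of 10; the function names are illustrative, not from the repository.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error."""
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(real_a, real_b, g_ab, g_ba, lambda_cyc=10.0):
    """Cycle loss over both directions: reconstructions after a round trip
    through both generators should match the originals.
    lambda_cyc=10 is the default weighting in the CycleGAN paper."""
    rec_a = g_ba(g_ab(real_a))  # A -> fake B -> reconstructed A
    rec_b = g_ab(g_ba(real_b))  # B -> fake A -> reconstructed B
    return lambda_cyc * (l1(rec_a, real_a) + l1(rec_b, real_b))

# Stand-in "generators": identity mappings give a perfect round trip
ident = lambda x: x
rng = np.random.default_rng(0)
a = rng.random((3, 8, 8))
b = rng.random((3, 8, 8))
print(cycle_consistency_loss(a, b, ident, ident))  # → 0.0
```

Any pair of generators whose round trip distorts the input is penalized, which constrains the otherwise underdetermined unpaired mapping.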
| Input (Real A) | Generated (Fake B) |
|---|---|
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |
| ![]() | ![]() |