# sssgan-pytorch

**Repository Path**: zongtao_wang/sssgan-pytorch

## Basic Information

- **Project Name**: sssgan-pytorch
- **Description**: Release code for "Unlocking Efficiency in Fine-Grained Compositional Image Synthesis: A Single-Generator Approach"
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-06-28
- **Last Updated**: 2023-06-28

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# sssgan-pytorch

#### Introduction

This repo contains the code for our work "Unlocking Efficiency in Fine-Grained Compositional Image Synthesis: A Single-Generator Approach".

> Abstract
>
> The use of Generative Adversarial Networks (GANs) has led to significant advancements in the field of compositional image synthesis. In particular, recent progress has focused on achieving synthesis at the semantic part level. However, to enhance performance at this level, existing approaches in the literature tend to prioritize performance over efficiency, utilizing separate local generators for each semantic part. This approach leads to a linear increase in the number of local generators, posing a fundamental challenge for large-scale compositional image synthesis at the semantic part level. In this paper, we introduce a novel model called Single-Generator Semantic-Style GAN (SSSGAN) to improve efficiency in this context. SSSGAN utilizes a single generator to synthesize all semantic parts, thereby reducing the required number of local generators to a constant value. Our experiments demonstrate that SSSGAN achieves superior efficiency while maintaining a minimal impact on performance.

This code is developed based on [SemanticStyleGAN](https://github.com/seasonSH/SemanticStyleGAN). We thank the authors for their help during the development of our project.

#### Installation

This repo is developed using PyTorch.
We use PyTorch 1.13.1, cudatoolkit 11.6, and NVIDIA driver 510.

#### Usage

Mainly, you only need to run the train.py module:

```
python train.py YOUR-CONFIGURATION
```

More details and pre-trained models will be updated soon...
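As a sketch, one way to set up a matching environment and launch training (the conda package names and versions follow the official PyTorch 1.13.1 install matrix; the environment name `sssgan` is our own choice, and `YOUR-CONFIGURATION` is the placeholder from the usage instructions above, not a file shipped here):

```shell
# Create an isolated environment (the name "sssgan" is arbitrary)
conda create -n sssgan python=3.9 -y
conda activate sssgan

# PyTorch 1.13.1 built against CUDA 11.6, per the official install matrix
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia -y

# Launch training with your configuration (placeholder, see Usage above)
python train.py YOUR-CONFIGURATION
```

A system NVIDIA driver of at least the version noted above (510) is still required on the host; conda installs only the CUDA user-space libraries, not the driver.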