BCon: ControlNet-Based Domain Adaptation of Synthetic Construction Images


BCon is a domain adaptation framework that enhances the realism and diversity of synthetic construction images using ControlNet with Stable Diffusion XL, while preserving full annotations essential for training deep neural networks (DNNs). This approach effectively bridges the domain gap inherent in synthetic data, reducing reliance on costly real-world data collection and annotation.
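ControlNet conditions the diffusion model on per-image structural inputs such as depth maps. As a minimal sketch of that preprocessing idea (not BCon's actual pipeline; the function name and normalization choices here are assumptions), a float depth map can be rescaled into a uint8 conditioning image:

```python
import numpy as np

def depth_to_control(depth):
    """Normalize a float depth map to a uint8 image suitable as a
    ControlNet conditioning input. Illustrative only; BCon's actual
    preprocessing may differ."""
    d = np.asarray(depth, dtype=np.float64)
    # Rescale to [0, 1]; guard against a constant depth map.
    d = (d - d.min()) / max(d.max() - d.min(), 1e-8)
    return (d * 255).astype(np.uint8)
```

Because the conditioning image is derived deterministically from the synthetic scene, the original annotations remain valid for the enhanced output.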

Installation

  1. Clone the Repository:
    git clone https://github.com/SinaDavari/bcon
    cd bcon
    

Option 1: Using the Conda Environment YAML File

To create the environment from the provided environment.yaml file, follow these steps:

  1. Create the Environment from the YAML File:
    conda env create -f environment.yaml
    
  2. Activate the Environment:
    conda activate bcon
    

Option 2: Setting Up a New Environment

  1. Create a new Conda environment:

    conda create --name bcon python=3.9.2
    
  2. Activate the environment:

    conda activate bcon
    
  3. Install the dependencies:

    pip install -r requirements.txt
    

Usage

To run the BCon enhancement process:

  1. Set the Paths to Your Datasets:

    • Update the dataset paths in bcon.py to point to your BlendCon images and annotations.
  2. Run the Script: Process the images with BCon, producing the enhanced images along with their preserved annotations:

    python bcon.py
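The path update in step 1 might look like the following. The variable names and helper below are hypothetical (the actual identifiers in bcon.py may differ) and assume the Datasets layout described in the Dataset section:

```python
from pathlib import Path

# Hypothetical path configuration; adjust to match the variables in bcon.py.
DATA_ROOT = Path("Datasets")
BLENDCON_IMGS = DATA_ROOT / "BlendCon_Samples" / "imgs"
BLENDCON_DEPTHS = DATA_ROOT / "BlendCon_Samples" / "depths"
BLENDCON_LABELS = DATA_ROOT / "BlendCon_Samples" / "labels"
OUTPUT_DIR = DATA_ROOT / "BCon_Samples"

def check_dataset_paths(paths):
    """Return the subset of paths that do not exist on disk,
    so a missing dataset is caught before processing starts."""
    return [p for p in paths if not p.is_dir()]
```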
    

Dataset

We provide sample datasets for testing and experimentation:

  • Sample Enhanced Images: 100 random BCon-enhanced images along with their corresponding BlendCon images, depth maps, and semantic masks are available in the Datasets folder.

  • Scraped Test Dataset (Scraped_Test_Set): Our test dataset, consisting of 1,257 scraped real-world construction site images used for evaluation, is included in the Datasets folder.

  • Sample datasets are available at: https://drive.google.com/drive/folders/13ZFP9vP5LWqvBlwDvovzZrwfNicsesSi?usp=sharing

  • The Datasets folder is structured as follows:

    bcon/
    └── Datasets/
        ├── BlendCon_Samples/
        │   ├── depths/
        │   ├── imgs/
        │   ├── labels/
        │   └── masks/
        ├── BCon_Samples/
        └── Scraped_Test_Set/
            ├── imgs/
            └── labels/
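
Given this layout, each BlendCon image can be paired with its depth map, mask, and label by file stem. The helper below is a sketch of that pairing; the file extensions are assumptions and may differ from the released samples:

```python
from pathlib import Path

def companion_files(img_path):
    """Given a BlendCon image path under imgs/, derive the expected
    depth, mask, and label paths by swapping the parent subfolder.
    Assumes matching stems across subfolders; extensions are illustrative."""
    stem = img_path.stem
    root = img_path.parent.parent  # .../BlendCon_Samples
    return {
        "depth": root / "depths" / f"{stem}.png",
        "mask": root / "masks" / f"{stem}.png",
        "label": root / "labels" / f"{stem}.txt",
    }
```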
    
    

Results

The object detection results, tested on the scraped test dataset, are summarized below:

    Dataset            # Images   # Instances   AP50–95 (%)
    -----------------  ---------  -----------   -----------
    BlendCon             25,600       43,000          60.9
    BCon                 25,600       43,000          65.7
    Real SODA + MOCS     12,800       43,000          65.6

These results demonstrate the effectiveness of the BCon framework in improving object detection performance on synthetic data.


Contributing

We welcome contributions from the community. If you'd like to contribute, please follow these steps:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Commit your changes with clear messages.
  4. Submit a pull request describing your changes.

Citation

If you use this code or dataset in your research, please cite our paper:

@article{BCon2025,
  title={ControlNet-Based Domain Adaptation for Synthetic Construction Images via Graphical Simulation and Generative AI},
  author={Sina Davari and Daeho Kim and Ali Tohidifar},
  journal={Automation in Construction},
  year={2025}
}
