BCon is a domain adaptation framework that enhances the realism and diversity of synthetic construction images using ControlNet with Stable Diffusion XL, while preserving full annotations essential for training deep neural networks (DNNs). This approach effectively bridges the domain gap inherent in synthetic data, reducing reliance on costly real-world data collection and annotation.
- Clone the Repository:

  ```bash
  git clone https://github.com/SinaDavari/bcon
  cd bcon
  ```
If you prefer to create the environment using the provided environment.yaml file, follow these steps:

- Create the Environment from the YAML File:

  ```bash
  conda env create -f environment.yaml
  ```

- Activate the Environment:

  ```bash
  conda activate bcon
  ```
Alternatively, set up the environment manually:

- Create a new Conda environment:

  ```bash
  conda create --name bcon python=3.9.2
  ```

- Activate the environment:

  ```bash
  conda activate bcon
  ```

- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
To run the BCon enhancement process:

- Set the Paths to Your Datasets: update the dataset paths in `bcon.py` to point to your BlendCon images and annotations.

- Run the Script: process the images using BCon and output the enhanced images along with their preserved annotations:

  ```bash
  python bcon.py
  ```
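Because BCon preserves the original annotations, the per-image loop amounts to enhancing each image and passing its label file through unchanged. A minimal sketch of that loop, assuming a BlendCon-style folder layout (`imgs/`, `depths/`, `labels/`); `enhance_fn` is a hypothetical stand-in for the actual ControlNet enhancement step, not the repository's API:

```python
from pathlib import Path
import shutil

def enhance_dataset(blendcon_root, output_root, enhance_fn):
    """Walk a BlendCon-style dataset, enhance each image, and copy its
    annotation file through unchanged so the labels stay valid.

    enhance_fn(img_path, depth_path) -> bytes is a placeholder for the
    ControlNet step; depth maps are assumed to share the image filename.
    """
    blendcon_root = Path(blendcon_root)
    output_root = Path(output_root)
    (output_root / "imgs").mkdir(parents=True, exist_ok=True)
    (output_root / "labels").mkdir(parents=True, exist_ok=True)

    for img_path in sorted((blendcon_root / "imgs").glob("*")):
        depth_path = blendcon_root / "depths" / img_path.name
        # Enhance the image (placeholder) and write the result.
        enhanced = enhance_fn(img_path, depth_path)
        (output_root / "imgs" / img_path.name).write_bytes(enhanced)
        # Copy the annotation file verbatim -- this is what keeps the
        # dataset fully labeled after enhancement.
        label_path = blendcon_root / "labels" / (img_path.stem + ".txt")
        if label_path.exists():
            shutil.copy2(label_path, output_root / "labels" / label_path.name)
```

The key design point is that enhancement only changes pixel appearance, so the geometry-derived labels remain correct and are copied byte-for-byte.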
We provide sample datasets for testing and experimentation:

- Sample Enhanced Images: 100 random BCon-enhanced images, along with their corresponding BlendCon images, depth maps, and semantic masks, are available in the Datasets folder.

- Scraped Test Dataset (Scraped_Test_Set): our test dataset, consisting of 1,257 scraped real-world construction site images used for evaluation, is included in the Datasets folder.

- Sample datasets are available at: https://drive.google.com/drive/folders/13ZFP9vP5LWqvBlwDvovzZrwfNicsesSi?usp=sharing

- The Datasets folder structure is as follows:

  ```
  bcon/
  └── Datasets/
      ├── BlendCon_Samples/
      │   ├── depths/
      │   ├── imgs/
      │   ├── labels/
      │   └── masks/
      ├── BCon_Samples/
      └── Scraped_Test_Set/
          ├── imgs/
          └── labels/
  ```
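The depths/ folder holds one depth map per image. Depth-conditioned ControlNet variants typically expect the conditioning input as a 3-channel 8-bit image, so a single-channel depth map needs a small conversion first. A sketch of that step in NumPy; the min-max normalization here is an illustrative assumption, not necessarily what `bcon.py` does:

```python
import numpy as np

def depth_to_control_image(depth: np.ndarray) -> np.ndarray:
    """Convert a single-channel depth map (any numeric range) into the
    H x W x 3 uint8 array a depth ControlNet expects as conditioning."""
    depth = depth.astype(np.float64)
    d_min, d_max = depth.min(), depth.max()
    # Min-max normalize to [0, 255]; guard against a constant depth map.
    if d_max > d_min:
        scaled = (depth - d_min) / (d_max - d_min) * 255.0
    else:
        scaled = np.zeros_like(depth)
    gray = scaled.astype(np.uint8)
    # Replicate the single channel into three identical RGB channels.
    return np.stack([gray, gray, gray], axis=-1)
```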
The object detection results, tested on the scraped test dataset, are summarized below:
| Dataset | # Images | # Instances | AP50–95 (%) |
|---|---|---|---|
| BlendCon | 25,600 | 43,000 | 60.9 |
| BCon | 25,600 | 43,000 | 65.7 |
| Real SODA + MOCS | 12,800 | 43,000 | 65.6 |
These results demonstrate the effectiveness of the BCon framework in improving object detection performance on synthetic data.
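For context, the gain in the table works out to 4.8 AP points for BCon over BlendCon, a relative improvement of roughly 7.9%, which puts BCon within 0.1 points of the real-image baseline. The arithmetic:

```python
# AP50-95 (%) values taken from the results table above.
blendcon_ap, bcon_ap, real_ap = 60.9, 65.7, 65.6

absolute_gain = bcon_ap - blendcon_ap              # AP points gained by BCon
relative_gain = absolute_gain / blendcon_ap * 100  # percent improvement

print(f"absolute gain: {absolute_gain:.1f} AP points")
print(f"relative gain: {relative_gain:.1f}%")
print(f"BCon vs. real baseline: {bcon_ap - real_ap:+.1f} AP points")
```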
We welcome contributions from the community. If you'd like to contribute, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Commit your changes with clear messages.
- Submit a pull request describing your changes.
If you use this code or dataset in your research, please cite our paper:
```bibtex
@article{BCon2025,
  title={ControlNet-Based Domain Adaptation for Synthetic Construction Images via Graphical Simulation and Generative AI},
  author={Sina Davari and Daeho Kim and Ali Tohidifar},
  journal={Automation in Construction},
  year={2025}
}
```
