MITO: Enabling Non-Line-of-Sight Perception using Millimeter-waves through Real-World Datasets and Simulation Tools
Created by: Laura Dodds*, Tara Boroushaki*, and Fadel Adib
Demo video: MITO_Video.mov
This repository is tested on Ubuntu 22.04 with Python 3.10.
- Clone this repository
- Run `python3 -m venv mmwave_venv` to create a new virtual environment
- Run `source mmwave_venv/bin/activate` to activate the virtual environment
- Run `python3 setup.py --install` to run the install. Please note:
    - This installs numpy 1.24.3, which is required for PSBody. If you are not planning to run the simulation, you can use a different version of numpy by changing requirements.txt.
    - This setup will install requirements as needed, depending on the settings in params.json. It will only install GPU requirements if `use_cuda` is true, and it will only install simulation/segmentation requirements if `use_simulation`/`use_segmentation` are true. Please update params.json accordingly before running `python3 setup.py` (a sketch of these settings follows this list).
    - This setup assumes C++11. If your environment uses a different version of C++, please change line 51 in setup.py to match your environment. (Note: this only applies if you are planning to use a GPU for processing.)
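For reference, here is a minimal sketch of what these settings might look like in params.json. Only the three flags named above come from this README; the real file may contain other keys, and the values below are purely illustrative:

```json
{
    "use_cuda": false,
    "use_simulation": true,
    "use_segmentation": true
}
```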
Our data is stored in a Hugging Face repository. To download a copy of the dataset, please follow these steps:
- Make sure you run the following commands from the outermost directory of this repo (i.e., within the MITO_Codebase folder)
- Run `huggingface-cli login`
- Follow the URL printed in the terminal to create a new Hugging Face token (or use an existing one)
- Enter your Hugging Face token. You can choose whether to save this token as a git credential.
- Run `git clone https://huggingface.co/datasets/SignalKinetics/MITO_Dataset`
You should now have a folder called MITO_Dataset which contains the processed files for all objects. If you would like access to the raw signal data, please email us.
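If you prefer downloading from Python instead of git, the `huggingface_hub` package (which provides `huggingface-cli`) can fetch the same dataset repository. This is an optional alternative to the steps above, not part of our pipeline:

```python
from huggingface_hub import snapshot_download

# Download the full dataset repo into a local MITO_Dataset folder.
snapshot_download(
    repo_id="SignalKinetics/MITO_Dataset",
    repo_type="dataset",
    local_dir="MITO_Dataset",
)
```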
Data can be visualized by running `cd src/utils && python3 visualization.py`. More details can be found in the documentation of that Python file (or in the tutorial listed below). For a quick look without the provided script, see the sketch below.
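The following is a minimal sketch for inspecting a single processed image, assuming it is stored as a 2D NumPy array; the exact file names and layout are described in the dataset tutorial, and the path below is a placeholder:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder path: substitute a real file from MITO_Dataset
# (see the dataset tutorial for the actual folder layout).
img = np.load("MITO_Dataset/example_object/mmwave_image.npy")

plt.imshow(np.abs(img), cmap="jet")  # magnitude, in case the array is complex
plt.colorbar(label="Reflected power (a.u.)")
plt.title("mmWave image")
plt.show()
```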
The final classifier weights can be found in `src/classification/checkpoints`. Run `test_classifier.py` to evaluate the final weights on the test set. Run `train_classifier.py` to train the classifier.
Pretrained weights are available here.
This includes a zip file of trained weights for our full classifier and two microbenchmarks (for using only edge or only specular simulations). To test with pretrained weights, create a folder `src/classification/checkpoints` and extract the zip file within that folder before running `test_classifier.py` (see the sketch below).
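The two setup steps above can also be scripted. A minimal sketch, where the zip file name is a placeholder for whatever the downloaded file is called:

```python
from pathlib import Path
import zipfile

# Create the checkpoints folder expected by test_classifier.py.
ckpt_dir = Path("src/classification/checkpoints")
ckpt_dir.mkdir(parents=True, exist_ok=True)

# Placeholder name: use the actual file downloaded from the link above.
with zipfile.ZipFile("pretrained_weights.zip") as zf:
    zf.extractall(ckpt_dir)
```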
The simulation can be run with the `run_simulation.sh` script. See the script (or the tutorial listed below) for more details.
We provide the following tutorials to introduce different features of this repository. All tutorials are in the `tutorials/` folder.
This tutorial introduces the dataset (contents and structure) and shows how to download, access, and visualize the data. If your goal is to build new models using the previously processed images, this tutorial should be sufficient. The remaining tutorials cover more advanced functionality (e.g., building models on this dataset or simulating/processing new images).
This tutorial shows how to use our open-source simulation tool to produce synthetic images for any 3D mesh. This can be used to generate additional synthetic data beyond our initial dataset release.
This tutorial shows our approach for segmenting mmWave images using the SegmentAnything model (https://github.com/facebookresearch/segment-anything). This is a good example of using our data in downstream models.
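As a rough illustration of this kind of pipeline (not our exact code, and with placeholder file names), SegmentAnything can be run on an mmWave image once it is converted to the HxWx3 uint8 format SAM expects:

```python
import numpy as np
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Load a SAM checkpoint (downloaded separately from the segment-anything repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Placeholder path: a single-channel mmWave image from the dataset.
mmwave = np.abs(np.load("mmwave_image.npy"))

# Normalize to 0-255 and replicate to three channels, since SAM
# expects an HxWx3 uint8 RGB image.
lo, hi = mmwave.min(), mmwave.max()
img = (255 * (mmwave - lo) / (hi - lo + 1e-9)).astype(np.uint8)
img_rgb = np.stack([img] * 3, axis=-1)

masks = mask_generator.generate(img_rgb)  # list of dicts with 'segmentation', 'area', ...
print(f"SAM produced {len(masks)} candidate masks")
```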
This tutorial shows our approach for classifying mmWave images, using a custom classification network. This is a good example of building custom models with our dataset.
This is an advanced tutorial explaining in-depth how mmWave imaging works.
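For readers who want the gist before diving into that tutorial: mmWave imaging systems typically synthesize an aperture by moving an antenna and coherently combining measurements. Below is a toy, single-frequency backprojection sketch; real pipelines (including ours) are more involved, e.g., using wideband FMCW signals:

```python
import numpy as np

def backproject(ant_pos, signals, voxels, wavelength):
    """Toy monostatic, single-frequency SAR backprojection.

    ant_pos:    (A, 3) antenna positions
    signals:    (A,)   complex measurements, one per position
    voxels:     (V, 3) scene points to image
    wavelength: carrier wavelength (e.g., ~4 mm at 77 GHz)
    """
    # Distance from each antenna to each voxel: shape (A, V).
    dists = np.linalg.norm(ant_pos[:, None, :] - voxels[None, :, :], axis=-1)
    # Undo the round-trip phase exp(-j*4*pi*d/lambda) accrued by each echo.
    phase = np.exp(1j * 4 * np.pi * dists / wavelength)
    # Coherent sum across the aperture; magnitude is the image intensity.
    return np.abs((signals[:, None] * phase).sum(axis=0))
```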
If you found MITO helpful for your research, please consider citing us with the following BibTeX entry:
@misc{dodds2025mitoenablingnonlineofsightperception,
title={MITO: Enabling Non-Line-of-Sight Perception using Millimeter-waves through Real-World Datasets and Simulation Tools},
author={Laura Dodds and Tara Boroushaki and Fadel Adib},
year={2025},
eprint={2502.10259},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.10259},
}