Given a set of images of a crime scene, we reconstruct the scene in 3D by utilising multiple views of the scene to generate a single 3D rendering. We also look into further optimizing the baseline output of the neural network with a Graph Convolutional Network (GCN).
| Name | SRN | Email |
|---|---|---|
| Dhruval PB | PES1UG19CS313 | dhruvalpb@pesu.pes.edu |
| Sai Mihir J | PES1UG19CS418 | saimihir.j@gmail.com |
| Akash Mehta | PES1UG19CS040 | akashmehta556@gmail.com |
This folder contains the implementation of a Graph Convolutional Network that performs Monocular Depth Estimation.
```shell
cd Code
git clone https://github.com/ArminMasoumian/GCNDepth.git
```

Requirements:

- PyTorch 1.2+, Python 3.5+, CUDA 10.0+
- mmcv==0.4.4
```shell
# This creates a new conda environment to run the model
conda create --name gcndepth python=3.7
conda activate gcndepth

# This installs the right pip and dependencies into the fresh environment
conda install ipython
conda install pip

# Install required packages from requirements.txt
pip install -r requirements.txt
```

Run inference:

```shell
conda activate gcndepth
cd ./GCNDepth
python3 infer.py
```

This will generate depth maps, which are stored in the `GCNDepth/assets/Outputs/Grayscale` folder.
We've written scripts that back-project these disparity maps into point clouds and save them as `.npy` files in the `3DRenders/PointClouds` folder.
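The core of that back-projection step can be sketched as follows. This is a minimal illustration using the standard pinhole camera model, not our actual scripts; the intrinsics (`fx`, `fy`, `cx`, `cy`), the synthetic depth map, and the output filename are placeholder assumptions.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (N, 3) point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    # Drop invalid (zero-depth) pixels
    return points[points[:, 2] > 0]

if __name__ == "__main__":
    # Placeholder intrinsics and a synthetic constant-depth map for illustration
    depth = np.full((4, 4), 2.0)
    cloud = backproject(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    np.save("cloud.npy", cloud)  # same .npy format the scripts produce
```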
This folder contains an implementation of a Monocular SLAM algorithm which we use to estimate camera position and orientation.
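The estimated pose is what lets us fuse per-frame point clouds into one scene: a camera-frame point `X_c` maps to the world frame as `X_w = R X_c + t`. A minimal sketch of that transform, assuming poses are given as 4x4 camera-to-world matrices (the matrix format here is an assumption for illustration, not the exact output format of the SLAM code):

```python
import numpy as np

def to_world(points_cam, pose):
    """Transform an (N, 3) camera-frame cloud into the world frame,
    given a 4x4 camera-to-world pose [R | t; 0 0 0 1]."""
    R, t = pose[:3, :3], pose[:3, 3]
    return points_cam @ R.T + t

if __name__ == "__main__":
    # Hypothetical pose: identity rotation plus a translation of 1 along x
    pose = np.eye(4)
    pose[:3, 3] = [1.0, 0.0, 0.0]
    cloud = np.array([[0.0, 0.0, 1.0]])
    print(to_world(cloud, pose))  # -> [[1. 0. 1.]]
```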
```shell
xhost +local:docker
sudo apt install nvidia-docker2
sudo systemctl daemon-reload
sudo systemctl restart docker
docker build -t twitchslam .
```

```shell
chmod +x ./Run.sh
./Run.sh  # This will start the docker container
```

Once the container is up and running, go to the twitchslam directory:

```shell
cd twitchslam
chmod +x ./Predict.py
./Predict.py  # This generates the camera orientation predictions on the dataset we've used
```

This folder contains the code for plotting the point clouds in 3D using PyQt.
```shell
pip install numpy pyqtgraph
python3 Plotter.py
```
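In outline, the plotting amounts to loading a saved `.npy` cloud and handing it to pyqtgraph's OpenGL scatter item. The sketch below is an illustration of that idea, not the contents of `Plotter.py`; the function names and the example path are assumptions.

```python
import numpy as np

def load_cloud(path):
    """Load an (N, 3) point cloud saved as .npy and drop non-finite rows."""
    pts = np.load(path).reshape(-1, 3)
    return pts[np.isfinite(pts).all(axis=1)]

def show_cloud(points):
    """Display an (N, 3) array as a 3D scatter plot in pyqtgraph's OpenGL view."""
    import pyqtgraph.opengl as gl          # imported lazily so load_cloud stays headless
    from pyqtgraph.Qt import QtWidgets

    app = QtWidgets.QApplication([])
    view = gl.GLViewWidget()
    view.setWindowTitle("Point cloud")
    view.addItem(gl.GLScatterPlotItem(pos=points, size=2.0))
    view.show()
    app.exec_()

# Usage (path is a placeholder; our clouds live in 3DRenders/PointClouds):
# show_cloud(load_cloud("3DRenders/PointClouds/cloud.npy"))
```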