Google Colab workflow for running COLMAP (sparse + dense reconstruction) and generating outputs commonly used in Gaussian Splatting pipelines, plus Blender utilities for post-processing and area estimation.
This repository currently contains:

- `COLMAP-colab.ipynb`: the main end-to-end pipeline in Colab.
- `utils/colmap2nerf.py`: converts COLMAP outputs to `transforms.json` (Instant-NGP / NeRF style format).
- `utils/colmap_depth_viz.py`: renders COLMAP dense depth `.bin` maps into PNG visualizations.
- `utils/barrierEstimate.py`: Blender script to estimate protective barrier surface area over a selected region of a terrain mesh.
Use this project when you want to:
- Reconstruct a scene from photos in Google Colab with COLMAP.
- Export camera transforms for NeRF or Gaussian Splatting tooling.
- Produce dense depth, fused point clouds, and meshes (Poisson method).
- Import the reconstructed mesh into Blender and estimate a draped barrier area over an area of interest.
Prerequisites:

- Google account with Google Drive access.
- Google Colab (GPU runtime).
- Blender (only for `utils/barrierEstimate.py`).
The notebook installs required system packages in Colab, including build tools and graphics dependencies. You still need:
- A GPU runtime (recommended for speed, and required for COLMAP dense reconstruction).
- Enough Drive free space for:
- COLMAP build cache tarballs
- Input images
- Output reconstruction projects
utils/barrierEstimate.py must run inside Blender Python, not a standard terminal Python interpreter.
- Blender with Python API support (standard Blender install).
- Terrain mesh imported and scaled correctly (metric units recommended).
- Script execution from Blender Scripting workspace or text editor.
For stable SfM and dense MVS:
- High overlap between photos (typically 60 to 80 percent or more).
- Consistent exposure where possible.
- Limited motion blur.
- Avoid too many near-duplicate frames with minimal baseline.
The notebook is configured around paths like these:
- Input images source: `/content/drive/MyDrive/Gaussian-Splatting/images/<your_dataset_folder>`
- COLMAP build cache: `/content/drive/MyDrive/COLMAP/colmap-build-cache`
- Final copied outputs: `/content/drive/MyDrive/COLMAP/output`
If you use different folders, update notebook parameters in the setup cells.
```
colmap-gs/
├── COLMAP-colab.ipynb
└── utils/
    ├── barrierEstimate.py
    ├── colmap2nerf.py
    └── colmap_depth_viz.py
```
The notebook is organized into sequential blocks.
- Mounts `/content/drive`.
- Clones this repository into `/content/colmap_gs`.
- Installs build and COLMAP dependencies via `apt`.
- Uses cache metadata and a tarball stored on Drive.
- If cache is valid for current settings, it restores quickly.
- If cache is missing or incompatible, it builds:
- Abseil
- Ceres
- COLMAP (headless, CUDA enabled)
- Caches the resulting install prefix back to Drive as a compressed tarball.
This design avoids rebuilding COLMAP every new Colab session.
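The cache-restore logic can be sketched roughly as follows. This is a simplified illustration, not the notebook's exact implementation; the paths and the `meta.json` format are assumptions.

```python
import json
import tarfile
from pathlib import Path

# Hypothetical paths mirroring the Drive layout described above.
CACHE_DIR = Path("/content/drive/MyDrive/COLMAP/colmap-build-cache")
PREFIX = Path("/usr/local")  # install prefix to restore into

def cache_is_valid(meta_path: Path, expected: dict) -> bool:
    """Compare stored build settings against the current ones."""
    if not meta_path.exists():
        return False
    stored = json.loads(meta_path.read_text())
    return stored == expected

def restore_or_mark_rebuild(expected: dict) -> bool:
    """Return True if the cached prefix was restored, False if a rebuild is needed."""
    meta = CACHE_DIR / "meta.json"
    tarball = CACHE_DIR / "colmap-prefix.tar.gz"
    if cache_is_valid(meta, expected) and tarball.exists():
        with tarfile.open(tarball) as tf:
            tf.extractall(PREFIX)
        return True
    return False
```

If the function returns `False`, the notebook builds Abseil, Ceres, and COLMAP, then re-tars the install prefix back to Drive.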
The notebook creates a timestamped project root:

`/content/<image_folder_name>_<timestamp>`

with key paths:

- `images/`
- `database.db`
- `sparse/`
- `undistorted/`
- `dense/`

It copies images from the selected Drive source into `images/`.
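Creating such a timestamped project root is a few lines of Python. A minimal sketch (the notebook's exact timestamp format may differ):

```python
from datetime import datetime
from pathlib import Path

def make_project_root(base: str, image_folder_name: str) -> Path:
    """Create <base>/<image_folder_name>_<timestamp> with the key subfolders."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    root = Path(base) / f"{image_folder_name}_{stamp}"
    for sub in ("images", "sparse", "undistorted", "dense"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```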
Runs:

- `colmap feature_extractor`
- `colmap exhaustive_matcher`

with GPU use toggled automatically based on runtime detection.
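The GPU toggle can be sketched like this. The detection heuristic here is an assumption (the notebook may detect CUDA differently); `--SiftExtraction.use_gpu` is a real COLMAP option:

```python
import shutil

def has_gpu() -> bool:
    """Heuristic: a Colab GPU runtime exposes nvidia-smi on PATH."""
    return shutil.which("nvidia-smi") is not None

def feature_extractor_cmd(database: str, images: str) -> list:
    """Build a colmap feature_extractor command with the GPU flag set accordingly."""
    use_gpu = "1" if has_gpu() else "0"
    return [
        "colmap", "feature_extractor",
        "--database_path", database,
        "--image_path", images,
        "--SiftExtraction.use_gpu", use_gpu,
    ]
```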
Runs:

- `colmap mapper`
- `colmap image_undistorter` (for the undistorted sparse workspace)
- `colmap model_converter` (binary model to TXT)
- `python colmap_gs/utils/colmap2nerf.py ...` to create `transforms.json`
Runs:

- `colmap image_undistorter` (dense workspace)
- `colmap patch_match_stereo`
- `colmap stereo_fusion` -> `fused.ply`
- `colmap poisson_mesher` -> `meshed-poisson.ply`
The notebook includes an optional block using:

```python
from colmap_gs.utils.colmap_depth_viz import render_colmap_depth_maps
```

to generate colored PNG depth visualizations.
Copies the entire timestamped project folder into:
/content/drive/MyDrive/COLMAP/output
```
<project_root>/
├── images/
├── database.db
├── sparse/
│   └── 0/
├── undistorted/
├── dense/
│   ├── stereo/
│   │   └── depth_maps/
│   ├── fused.ply
│   └── meshed-poisson.ply
└── transforms.json
```
If depth visualization is enabled, an additional `depth_viz/` folder is created.
Purpose:

- Convert a COLMAP text model (`cameras.txt`, `images.txt`) into a NeRF-style `transforms.json`.
- Optionally run FFmpeg to extract frames from video.
- Optionally run COLMAP end-to-end from images.
Key points:

- Supports several camera models from COLMAP.
- Computes per-image sharpness via the variance of the Laplacian.
- Can keep original COLMAP coordinates (`--keep_colmap_coords`) or reorient and recenter.
- Adds `aabb_scale` metadata for downstream NeRF tooling.
- Optional dynamic-object masking with Detectron2 categories (`--mask_categories`).
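The variance-of-Laplacian sharpness measure is simple to reproduce. A NumPy-only sketch (the script itself typically uses OpenCV for this; higher values mean a sharper image):

```python
import numpy as np

# 3x3 Laplacian kernel; sharp edges produce large responses.
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=np.float64)

def sharpness(gray: np.ndarray) -> float:
    """Variance of the Laplacian response of a grayscale image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Manual 3x3 correlation (the kernel is symmetric, so it equals convolution).
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * gray[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())
```

A perfectly flat image scores 0; blurry frames score low and can be filtered out before reconstruction.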
Minimal usage (after the COLMAP sparse model has been exported to TXT):

```shell
python utils/colmap2nerf.py \
    --images /path/to/project/images \
    --text /path/to/project/sparse/0 \
    --out /path/to/project/transforms.json \
    --aabb_scale 16
```

Purpose:
- Read COLMAP dense depth maps (`*.geometric.bin`, `*.photometric.bin`) and produce:
  - colormapped RGB PNGs.
  - optional normalized 16-bit PNG depth images.
Expected input location:
`<project_dir>/dense/stereo/depth_maps/...`
Basic usage:

```python
from utils.colmap_depth_viz import render_colmap_depth_maps

summary = render_colmap_depth_maps(
    "/path/to/project_root",
    cmap="turbo",
    kinds=("geometric", "photometric"),
    pmin=2,
    pmax=98,
)
print(summary)
```

Output location:

`<project_dir>/depth_viz/<stereo_subfolder>/...`
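The `pmin`/`pmax` parameters suggest percentile-based normalization. A hedged sketch of how a raw depth map can be clipped and scaled to 16-bit, not the script's exact code:

```python
import numpy as np

def normalize_depth_u16(depth: np.ndarray, pmin: float = 2, pmax: float = 98) -> np.ndarray:
    """Clip depth to the [pmin, pmax] percentiles of valid pixels, scale to uint16."""
    valid = depth[depth > 0]  # COLMAP marks invalid depth as <= 0
    if valid.size == 0:
        return np.zeros_like(depth, dtype=np.uint16)
    lo, hi = np.percentile(valid, [pmin, pmax])
    scaled = np.clip((depth - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
    return (scaled * 65535).astype(np.uint16)
```

Percentile clipping keeps a few extreme depth outliers from flattening the visible contrast of the rest of the map.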
This script runs inside Blender and estimates the one-sided surface area of a draped "umbrella" mesh over a selected terrain region. It is especially useful for rough sizing of rockfall protection barriers after importing the reconstructed mesh from COLMAP outputs.
- Run `COLMAP-colab.ipynb` and generate a mesh (`dense/meshed-poisson.ply`).
- Download or open that mesh in Blender.
- In Blender Edit Mode, select the vertices or faces corresponding to your area of interest.
- Open `utils/barrierEstimate.py` in Blender's scripting editor.
- Tune `CFG` values if needed, then run the script.
- Read the area result from the Blender console:
  `"[BarrierUmbrella] One-sided area estimate: <value> m^2"`

The script also creates a new object (default name `BarrierUmbrella`).
- Detects which mesh actually has an active selection (handles multi-object edit mode).
- Reads selected points in world coordinates and estimates a mean surface normal.
- Builds a 2D projection plane and computes convex hull of selected points.
- Offsets the hull outward by `margin_m`.
- Lifts the initial polygon above the terrain along `clearance_axis`.
clearance_axis. - Builds an n-gon mesh, triangulates it, and subdivides it.
- Applies shrinkwrap to drape onto terrain and smooths the result.
- Enforces a minimum clearance from terrain along selected axis via BVH ray casting.
- Computes total world-space one-sided area by summing triangulated face areas.
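The projection and hull steps can be reproduced outside Blender. This is a simplified NumPy version of the geometry (PCA plane fit plus Andrew's monotone chain), not the script's own code:

```python
import numpy as np

def plane_basis(points: np.ndarray):
    """Mean-normal plane via PCA: the two largest principal axes span the plane."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0], vt[1], vt[2]  # u, v (in-plane), n (normal)

def _cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull_2d(pts: np.ndarray) -> np.ndarray:
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]  # sort by x, then y
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and _cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower = half(pts)
    upper = half(pts[::-1])
    return np.array(lower[:-1] + upper[:-1])

def project_and_hull(points3d: np.ndarray) -> np.ndarray:
    """Project 3D points onto their mean plane and return the 2D convex hull."""
    u, v, _ = plane_basis(points3d)
    centered = points3d - points3d.mean(axis=0)
    uv = np.stack([centered @ u, centered @ v], axis=1)
    return convex_hull_2d(uv)
```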
- `margin_m`: outward buffer around the selected hull.
- `subdivide_cuts`: draping resolution.
- `smooth_iterations`, `smooth_factor`: smoothing strength.
- `use_shrinkwrap`, `shrinkwrap_offset_m`, `shrinkwrap_method`: draping behavior.
- `force_world_xy`: project in world XY instead of the mean-normal plane.
- `clearance_axis`: axis used as "up" for clearance (`X+`, `X-`, `Y+`, `Y-`, `Z+`, `Z-`).
- `axis_clearance_m`: required minimum stand-off distance.
- `initial_lift_along_axis_m`: initial pre-shrinkwrap lift to reduce wrong-side snapping.
- `apply_scale_to_terrain`: applies terrain scale so units are consistent.
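A plausible shape for the `CFG` block is a plain dictionary. The values below are illustrative guesses only; check `utils/barrierEstimate.py` for the real defaults:

```python
# Illustrative CFG values only -- not the script's actual defaults.
CFG = {
    "margin_m": 2.0,
    "subdivide_cuts": 30,
    "smooth_iterations": 5,
    "smooth_factor": 0.5,
    "use_shrinkwrap": True,
    "shrinkwrap_offset_m": 0.1,
    "shrinkwrap_method": "NEAREST_SURFACEPOINT",
    "force_world_xy": False,
    "clearance_axis": "Z+",
    "axis_clearance_m": 0.5,
    "initial_lift_along_axis_m": 1.0,
    "apply_scale_to_terrain": True,
}
```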
- Start with moderate subdivision (`20` to `40`) and inspect the result.
- If the umbrella sticks to the wrong side, increase `initial_lift_along_axis_m`, switch to `PROJECT` shrinkwrap, and verify `clearance_axis`.
- If the area is too jagged, increase smoothing slightly or reduce subdivision.
- If the area is too small, increase `margin_m`.
- Confirm Blender scene units are metric and that the imported mesh scale is correct.
- The area of interest boundary is convexified by hull construction, so concave regions can be overestimated.
- Output is a one-sided surface area.
- Quality depends on mesh quality and selection quality.
- Very noisy meshes can bias draping and area.
Sparse reconstruction estimates camera poses and 3D points by minimizing the reprojection error:

$$\min_{\{R_i, t_i\}, \{X_j\}} \sum_{i,j} \left\| \pi(R_i X_j + t_i) - x_{ij} \right\|^2$$

where:

- $R_i, t_i$ : camera pose for image $i$
- $X_j$ : 3D point $j$
- $x_{ij}$ : observed pixel
- $\pi(\cdot)$ : camera projection model.
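For a simple pinhole camera, one reprojection residual term is easy to compute. A minimal sketch (COLMAP supports richer camera models with distortion):

```python
import numpy as np

def reproject_residual(R, t, X, x_obs, fx, fy, cx, cy):
    """|| pi(R X + t) - x_obs || for a pinhole camera with intrinsics fx, fy, cx, cy."""
    Xc = R @ X + t                  # point in camera coordinates
    u = fx * Xc[0] / Xc[2] + cx     # perspective projection to pixels
    v = fy * Xc[1] / Xc[2] + cy
    return float(np.hypot(u - x_obs[0], v - x_obs[1]))
```

Bundle adjustment (as run by `colmap mapper`) minimizes the sum of squares of these residuals over all poses and points simultaneously.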
Selected 3D points are projected to a 2D basis, the convex hull is computed, and the polygon area in 2D can be written with the shoelace formula:

$$A_{2D} = \frac{1}{2} \left| \sum_{k=1}^{n} \left( u_k v_{k+1} - u_{k+1} v_k \right) \right|$$

with hull vertices $(u_k, v_k)$ and indices taken modulo $n$.
After draping and triangulation, the 3D surface area is:

$$A = \sum_{t} \frac{1}{2} \left\| (b_t - a_t) \times (c_t - a_t) \right\|$$

for each triangle with vertices $(a_t, b_t, c_t)$.
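The triangle-area sum is straightforward to reproduce with NumPy (a sketch using an indexed vertex/triangle representation):

```python
import numpy as np

def one_sided_area(vertices: np.ndarray, triangles: np.ndarray) -> float:
    """Sum 0.5 * ||(b - a) x (c - a)|| over all triangles of an indexed mesh."""
    a = vertices[triangles[:, 0]]
    b = vertices[triangles[:, 1]]
    c = vertices[triangles[:, 2]]
    cross = np.cross(b - a, c - a)
    return float(0.5 * np.linalg.norm(cross, axis=1).sum())
```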
Clearance constraint enforced per umbrella vertex:

$$a \cdot (p_k - q_k) \ge c$$

where:

- $p_k$ : umbrella vertex
- $q_k$ : terrain hit point from raycast or nearest fallback
- $a$ : chosen clearance axis unit vector
- $c$ : required clearance (`axis_clearance_m`).
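Enforcing the constraint amounts to pushing any vertex that is too close back out along the axis. A sketch for a single vertex, assuming the terrain hit point $q_k$ has already been found by ray casting:

```python
import numpy as np

def enforce_clearance(p: np.ndarray, q: np.ndarray, axis: np.ndarray, c: float) -> np.ndarray:
    """If a . (p - q) < c, move p to q + c * a along the unit axis a; otherwise keep p."""
    a = axis / np.linalg.norm(axis)
    if float(a @ (p - q)) < c:
        return q + c * a
    return p
```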
- Organize images in Google Drive under a dedicated folder.
- Open and run `COLMAP-colab.ipynb` top to bottom.
- Confirm outputs in the timestamped `project_root` and the copied Drive output folder.
- Import `meshed-poisson.ply` into Blender.
- Select the area of interest and run `utils/barrierEstimate.py`.
- Save the reported area with your project metadata (dataset name, timestamp, `CFG` values used).
- `colmap` not found in Colab cells:
  - Ensure the COLMAP build or cache-restore step completed successfully.
- Sparse model folder missing `0`:
  - Check feature matching quality and image overlap.
- Dense reconstruction is too noisy:
  - Adjust PatchMatch triangulation thresholds and improve image quality.
- `barrierEstimate.py` says no selection found:
  - Ensure the terrain mesh is in Edit Mode and vertices or faces are selected.
- Unrealistic barrier area:
  - Verify scale and units in Blender, then retune `margin_m`, subdivision, and clearance settings.
- `barrierEstimate.py` and the Blender operations assume the geometry scale represents real meters.
- Reproducibility improves when you keep a record of notebook parameters and script `CFG` values for each run.