WebNutil builds on PyNutil, a high-performance Python library for spatial analysis of histological brain section images using reference brain atlases. It implements the core quantification algorithms of the QUINT workflow and is designed to replicate and extend the Quantifier feature of the Nutil software (RRID: SCR_017183).
WebNutil performs the quantification step in the QUINT online workflow, a series of web applications integrated in an online platform accessible through EBRAINS at https://quint-online.apps.ebrains.eu/. Documentation for the workflow is available at https://quint-webtools.readthedocs.io/en/latest/
Programming: Arda Balkir and Harry Carey.
Conception, design and validation: Sharon C Yates, Maja A Puchades, Gergely Csucs and Jan G Bjaalie.
Puchades MA, Yates SC, Csucs G, Carey H, Balkir A, Leergaard TB, Bjaalie JG. Software and pipelines for registration and analyses of rodent brain image data in reference atlas space. Front Neuroinform. 2025 Sep 24;19:1629388. https://doi.org/10.3389/fninf.2025.1629388
Webnutil is developed by the Neural Systems Laboratory at the Institute of Basic Medical Sciences, University of Oslo, Norway with support from the EBRAINS infrastructure, and funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Framework Partnership Agreement No. 650003 (HBP FPA) and the European Union’s Horizon Europe Programme for Research Infrastructures Grant Agreement No. 101147319 (EBRAINS 2.0).
Report issues here on GitHub or email support@ebrains.eu
Webnutil-service performs automated spatial quantification of labeled structures in brain tissue sections by:
Object Detection: Identifying connected components in segmented images using connected component analysis.
Atlas Registration: Mapping detected objects to standardized brain atlas coordinates.
Spatial Quantification: Computing region-wise statistics including object counts, areas, and density metrics.
3D Visualization: Generating point clouds for interactive exploration in atlas space.
The service requires two primary inputs:
- Format: `series_abc_registration.json`/`.waln` from QuickNII, VisuAlign, WebAlign, or WebWarp.
- Content: Linear and non-linear transformation parameters, anchoring vectors, slice metadata.
- Format: RGB images with unique color codes for labeled structures, with file names matching registration section numbers, e.g. `_s005`.
- Requirements: Consistent pixel intensities for target structures.
- Supported: Standard image formats (PNG, JPEG, TIFF) and DZI archives stored as zip files (`.dzip`).
- Quantification Reports: Region-wise statistics in CSV/JSON format.
- 3D Point Clouds: Atlas-space coordinates for visualization in MeshView or other PointCloud visualizers.
- Per-section Analysis: Slice-by-slice analysis results.
- Hemispheric Statistics: Left/right brain region comparisons.
- Slice region visualizations (work in progress)
- FastAPI
- Redis
git clone https://github.com/your-org/webnutil-service.git
cd webnutil-service
pip install -r requirements.txt
pip install -r api_requirements.txt
pip install -r ui/requirements.txt  # optional
To set up and start the UI:
run_ui.ps1
To start as a deployment:
docker compose up -d
# Creates the API, worker and a redis pod

from nutil import Nutil
# Initialize the analysis
nt = Nutil(
segmentation_folder="./segmentations/",
alignment_json="./alignment.json",
colour=[255, 0, 0], # Target RGB color
atlas_path="./atlas/annotation_25.nrrd",
label_path="./atlas/labels.csv"
)
# Extract coordinates and quantify
nt.get_coordinates(object_cutoff=10, use_flat=False)
nt.quantify_coordinates()
# Get results as a pandas DataFrame
results = nt.get_region_summary()
print(results.sort_values(by="object_count", ascending=False))

# With hemispheric analysis and damage assessment made in QuickNII
nt = Nutil(
segmentation_folder="./data/segmentations/",
alignment_json="./data/alignment_with_grid.json",
colour=[0, 255, 0],
atlas_path="./atlases/allen_2017_25um.nrrd",
label_path="./atlases/allen_labels.csv",
hemi_path="./atlases/hemisphere_mask.nrrd"
)
nt.get_coordinates(
object_cutoff=5, # Minimum object size (pixels)
use_flat=False, # Use 3D atlas (not flat files)
apply_damage_mask=True # Include damage analysis
)
# Quantification with custom regions
nt.quantify_coordinates()
# Export results
nt.save_analysis_output(
output_folder="./results/",
prepend="experiment_1_"
)

results/
├── whole_series_report/
│ ├── experiment_1_whole_series_report.csv # Aggregated statistics
│ └── experiment_1_settings.json # Analysis parameters
├── per_section_reports/
│ ├── experiment_1_XYZ_s001_report.csv # Per-slice breakdown
│ └── experiment_1_XYZ_s002_report.csv
├── whole_series_meshview/ # Meshview compatible point clouds
│ ├── experiment_1_pixels.json # 3D point cloud (pixels)
│ └── experiment_1_centroids.json # 3D point cloud (centroids)
└── per_section_meshview/
├── experiment_1_XYZ_s001_pixels.json # Section-specific clouds
└── experiment_1_XYZ_s001_centroids.json
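The MeshView point-cloud files can be inspected with plain Python. The snippet below parses a synthetic cloud; the flat `triplets` array of `[x, y, z, ...]` values and the color keys follow the MeshView JSON convention as I understand it, so verify the schema against your own output files.

```python
import json

# Synthetic MeshView-style point cloud (schema assumed, verify against real output):
# a flat "triplets" array holds [x, y, z, x, y, z, ...] atlas coordinates.
raw = '{"name": "red pixels", "r": 255, "g": 0, "b": 0, "triplets": [10.0, 20.0, 30.0, 11.0, 21.0, 31.0]}'
cloud = json.loads(raw)

n_points = len(cloud["triplets"]) // 3
xyz = [cloud["triplets"][i:i + 3] for i in range(0, n_points * 3, 3)]
```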
The system implements a multi-stage coordinate transformation pipeline to map 2D histological sections to 3D atlas space:
For a segmentation image of dimensions $(W_{seg}, H_{seg})$, pixel coordinates $(x, y)$ are first rescaled to the registration image dimensions:

$$x' = x \cdot \frac{W_{reg}}{W_{seg}}, \qquad y' = y \cdot \frac{H_{reg}}{H_{seg}}$$
When non-linear corrections are present, coordinates undergo triangular mesh-based transformation using Delaunay triangulation with marker-based deformation fields.
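A minimal sketch of the marker-based deformation idea: a point inside a source triangle is expressed in barycentric coordinates, and the same weights are applied to the deformed triangle's vertices. The triangle vertices below are made-up values; the service builds the full Delaunay mesh from the alignment markers.

```python
import numpy as np

def barycentric(p, a, b, c):
    # Solve p = a + s*(b - a) + t*(c - a) for the barycentric weights.
    T = np.column_stack((b - a, c - a))
    s, t = np.linalg.solve(T, p - a)
    return np.array([1 - s - t, s, t])

# One triangle of a hypothetical Delaunay mesh, before and after deformation.
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # marker positions in the image
dst = np.array([[1.0, 1.0], [11.0, 0.5], [0.5, 12.0]])   # marker positions after warping

p = np.array([2.0, 3.0])   # pixel inside the source triangle
w = barycentric(p, *src)   # weights sum to 1
p_warped = w @ dst         # same weights applied to the deformed vertices
```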
Transformation to 3D atlas coordinates uses the QuickNII anchoring vectors. For coordinates in the valid range $0 \le x' < W_{reg}$, $0 \le y' < H_{reg}$:

$$\mathbf{p}(x', y') = \mathbf{o} + \frac{x'}{W_{reg}}\,\mathbf{u} + \frac{y'}{H_{reg}}\,\mathbf{v}$$

Where:

- $\mathbf{o}$ = origin vector (3D voxel coordinates of top-left corner)
- $\mathbf{u}$ = horizontal axis vector (3D voxel coordinates of horizontal edge)
- $\mathbf{v}$ = vertical axis vector (3D voxel coordinates of vertical edge)
- $W_{reg}, H_{reg}$ = registration image width and height in pixels
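As a sketch, the anchoring transform can be applied directly with NumPy; the `o`, `u`, `v` vectors and registration dimensions below are illustrative, not taken from a real alignment file.

```python
import numpy as np

# Illustrative anchoring vectors (3D atlas voxel coordinates).
o = np.array([10.0, 20.0, 30.0])   # top-left corner of the section plane
u = np.array([200.0, 0.0, 5.0])    # horizontal edge
v = np.array([0.0, 150.0, -3.0])   # vertical edge
W_reg, H_reg = 400, 300            # registration image size in pixels

def to_atlas(x, y):
    # p = o + (x / W_reg) * u + (y / H_reg) * v
    return o + (x / W_reg) * u + (y / H_reg) * v

corner = to_atlas(0, 0)            # top-left pixel maps to the origin vector o
opposite = to_atlas(W_reg, H_reg)  # bottom-right maps to o + u + v
```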
Objects are identified using scikit-image's `measure.label` with 1-connectivity, so pixels that share an edge (no diagonal gaps) belong to the same object:

labels = measure.label(binary_segmentation, connectivity=1)
objects = measure.regionprops(labels)

For each detected object $O_i$, the following properties are computed:
- Centroid: $c_i = \frac{1}{|O_i|} \sum_{p \in O_i} p$
- Area: $A_i = |O_i|$ (pixel count)
- Bounding region: determined by the object's pixel coordinates
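These per-object properties are what `measure.regionprops` returns; computing them by hand on a toy mask (NumPy only, illustrative values) shows the definitions directly:

```python
import numpy as np

mask = np.zeros((5, 5), dtype=bool)
mask[1:3, 1:4] = True              # one 2x3 object O_i

pixels = np.argwhere(mask)         # coordinates of every p in O_i
centroid = pixels.mean(axis=0)     # c_i = (1/|O_i|) * sum of coordinates
area = len(pixels)                 # A_i = |O_i|
bbox = pixels.min(axis=0), pixels.max(axis=0)  # bounding region from pixel coords
```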
Objects are assigned to atlas regions using majority voting: each object takes the region ID held by the largest share of its pixels,

$$r_i = \arg\max_{r} \,\bigl|\{\, p \in O_i : A(p) = r \,\}\bigr|$$

where $A(p)$ is the atlas region at pixel $p$.
Implementation Status: Validated
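The voting rule can be sketched with `np.bincount` over the atlas IDs under an object's pixels (the toy atlas values below are illustrative):

```python
import numpy as np

# Toy 3x3 atlas map of region IDs and one object's pixel coordinates.
atlas = np.array([[1, 1, 2],
                  [1, 2, 2],
                  [3, 2, 2]])
object_pixels = np.array([[0, 1], [1, 1], [1, 2], [2, 2]])  # (row, col)

region_ids = atlas[object_pixels[:, 0], object_pixels[:, 1]]
assigned = np.bincount(region_ids).argmax()  # region holding most of the object
```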
Pixel-level area splitting ensures accurate quantification when objects span multiple regions: an object's pixels are counted per region rather than wholly assigned to one.

Atlas regions are mapped to segmentation space, and each region's area is calculated as the number of pixels it covers:

$$A_r = \bigl|\{\, p : A(p) = r \,\}\bigr|$$
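Per-region areas and area fractions can be sketched with boolean masks (toy arrays, illustrative values):

```python
import numpy as np

# Toy atlas map in segmentation space and a binary mask of labeled pixels.
atlas = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2]])
labeled = np.array([[1, 0, 1, 1],
                    [0, 0, 0, 1]], dtype=bool)

fractions = {}
for region in np.unique(atlas):
    region_mask = atlas == region
    region_area = region_mask.sum()              # A_r: pixels the region covers
    pixel_count = (labeled & region_mask).sum()  # labeled pixels inside the region
    fractions[int(region)] = pixel_count / region_area
```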
Known Scaling Issue (under investigation): When segmentations are larger than registration images, region areas can differ from Nutil. Nutil applies a global scaling factor, while webnutil-service resizes atlas maps using nearest-neighbor interpolation to preserve region label integrity.
- Binary mask generation: O(HW) (Optional)
- Connected component labeling: O(HW α(HW))
- Region assignment: O(n) where n = number of objects
Overall Complexity: O(HW α(HW)), where α is the inverse Ackermann function and HW is the number of pixels in the image.
| Component | Status | Notes |
|---|---|---|
| Atlas map creation | Validated | Matches VisuAlign output for test datasets |
| Object detection | Validated | Object counts match output for test datasets |
| Area splitting | Validated | Pixel-level accuracy confirmed |
| Area fractions | Validated | Mathematical correctness verified |
| Coordinate transformation | Validated | 3D atlas space mapping accurate |
Reference: Atlas validation documented in PyNutil Issue #38
graph TD
A[Segmentation Images<br/>RGB] --> D[Atlas Registration<br/>Pipeline]
B[Atlas<br/>NRRD Volume] --> D
C[Alignment JSON<br/>QuickNII/VisuAlign] --> D
D --> E[Object Detection &<br/>Quantification]
E --> F[3D Point Clouds<br/>Meshview JSON]
E --> G[Statistical Reports<br/>CSV/JSON]
E --> H[Per-section Analysis<br/>Slice-by-slice]
Main analysis class providing the complete quantification pipeline.
Constructor Parameters:
- `segmentation_folder` (str): Path to directory containing segmentation images
- `alignment_json` (str): Path to QuickNII/VisuAlign alignment file
- `colour` (List[int]): RGB color triplet for target structures [R, G, B]
- `atlas_path` (str): Path to 3D atlas volume (.nrrd file)
- `label_path` (str): Path to atlas labels CSV file
- `hemi_path` (str, optional): Path to hemisphere mask volume
- `custom_region_path` (str, optional): Path to custom region definitions
Key Methods:
- `get_coordinates(object_cutoff=0, use_flat=False, apply_damage_mask=True)`: Extract and transform coordinates
- `quantify_coordinates()`: Perform statistical quantification
- `get_region_summary()`: Return aggregated results DataFrame
- `save_analysis_output(output_folder, prepend="")`: Export all results
| Column | Description | Units |
|---|---|---|
| `idx` | Atlas region ID | - |
| `name` | Region name | - |
| `region_area` | Total region area | pixels |
| `object_count` | Number of detected objects | count |
| `pixel_count` | Total labeled pixels | pixels |
| `area_fraction` | Labeled area / total area | ratio |
| `left_hemi_*` | Left hemisphere statistics | various |
| `right_hemi_*` | Right hemisphere statistics | various |
Columns for damaged and undamaged regions will be added in a future release.
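The report columns combine naturally in pandas; the rows below are synthetic and only mirror the schema in the table above, and `objects_per_kpx` is a derived metric added for illustration, not a report column:

```python
import pandas as pd

# Synthetic rows mimicking the whole-series report schema (illustrative values).
report = pd.DataFrame({
    "idx": [1, 2],
    "name": ["Isocortex", "Thalamus"],
    "region_area": [20000, 5000],
    "object_count": [40, 25],
    "pixel_count": [1600, 500],
})
report["area_fraction"] = report["pixel_count"] / report["region_area"]
# Derived metric (hypothetical): detected objects per 1000 region pixels.
report["objects_per_kpx"] = 1000 * report["object_count"] / report["region_area"]
```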
Based on actual test runs with DZI-compressed brain sections (Allen Mouse Atlas 2017 on ttA_NOP dataset):
| Image Size | Objects Detected | Processing Time | Memory Peak |
|---|---|---|---|
| 5282×3804 | 5,655 | 1.8 seconds | 467 MB |
| 5697×3807 | 4,909 | 1.9 seconds | 474 MB |
| 5822×3983 | 3,507 | 1.8 seconds | 479 MB |
Based on O(HW α(HW)) complexity for object detection and measured performance:
| Image Size | Est. Objects | Est. Time | Est. Memory |
|---|---|---|---|
| 1K × 1K | ~400 | 0.2s | 80 MB |
| 2K × 2K | ~1,600 | 0.7s | 150 MB |
| 4K × 4K | ~6,400 | 2.8s | 350 MB |
| 8K × 8K | ~25,600 | 11.2s | 900 MB |
Estimates assume similar object density (~0.0004 objects/pixel) and include I/O overhead
Symptom: No objects detected despite visible structures.

Solution:
# Check exact RGB values in your image
import cv2
import numpy as np
img = cv2.imread("segmentation.png")
unique_colors = np.unique(img.reshape(-1, 3), axis=0)
print("Available colors:", unique_colors)
# Use color with tolerance
nt.get_coordinates(tolerance=10) # Allow ±10 RGB variation

Symptom: Out of memory errors.

Solution:
# Process sections individually or reduce image size
# Use object_cutoff to filter small objects
nt.get_coordinates(object_cutoff=50) # Filter objects < 50 pixels

Symptom: Objects appear in wrong atlas locations.

Causes:
- Incorrect alignment JSON format
- Missing anchoring vectors
- Mismatched image dimensions
Solution: Verify the structure of the alignment registration file (json/waln) and re-run the registration step (QuickNII or WebAlign/WebWarp).