Disclaimer: All scripts and repository changes were produced by GPT-5 mini and Gemini (AIs) under instructions from Fastrodev.
This repository contains the data, analysis scripts, and a reproducible pipeline used for the "Great Solar Audit" (Audit v2). It consolidates the Eclipse audit analysis, a Jupyter notebook that reproduces the analytic formulas and Monte Carlo checks, and convenience Docker and local-run instructions for reproducing the results.
- `eclipse/` — primary analysis code, data, and notebook
  - `Besselian_Rows_Audit.csv` — per-event inputs
  - `Uncertainty_Propagation.ipynb` — analysis notebook (an executed copy is also produced)
  - `run_uncertainty.py`, `run_uncertainty_nopy.py` — analytic uncertainty propagation (numpy/pandas version and fallback)
  - `run_nonlinear_mc.py` — nonlinear Monte Carlo propagation
  - `plot_audit_shifts.py` — per-event summary + histogram
  - `sensitivity.py` — quick sensitivity checks
  - `docker-entrypoint.sh`, `Dockerfile` — container entrypoint and image
- `requirements.txt` — Python packages needed for local runs
- `logs/` — generated logs and PNGs (created by scripts)
- Docker (recommended — consistent environment):

  ```sh
  # build the image from the repo root (uses eclipse/Dockerfile)
  docker build -f eclipse/Dockerfile -t fastro-eclipse:latest .

  # run the container and persist outputs to the host `logs/` directory:
  docker run --rm -v "$(pwd)/eclipse:/usr/src/app/research/eclipse" -v "$(pwd)/logs:/usr/src/app/logs" fastro-eclipse:latest

  # alternate (mount the full repo):
  docker run --rm -v "$(pwd):/usr/src/app" -v "$(pwd)/logs:/usr/src/app/logs" fastro-eclipse:latest
  ```

- The container installs Python dependencies from `requirements.txt` and runs the entrypoint `research/eclipse/docker-entrypoint.sh`, which executes the audit pipeline inside `/usr/src/app/research/eclipse` and writes logs to `/usr/src/app/logs`.
- When running, you may pass `HOST_UID`/`HOST_GID` so that the produced files are owned by your host user, for example:

  ```sh
  docker run --rm -e HOST_UID=$(id -u) -e HOST_GID=$(id -g) -v "$(pwd):/usr/src/app" -v "$(pwd)/logs:/usr/src/app/logs" fastro-eclipse:latest
  ```

- Local (virtualenv — optional):
  ```sh
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt

  # to execute the notebook with nbconvert you also need:
  pip install nbconvert ipykernel

  # register a kernelspec named 'python3' so nbconvert can execute notebooks that request that kernel
  python -m ipykernel install --user --name=python3 --display-name "python3"

  # run the main scripts (from the repo root)
  python eclipse/run_uncertainty.py
  python eclipse/run_nonlinear_mc.py
  python eclipse/plot_audit_shifts.py
  python eclipse/run_notebook.py
  python eclipse/sensitivity.py
  ```

- An executed copy of the notebook is produced by the project automation:
`eclipse/Uncertainty_Propagation.executed.ipynb`.
- To reproduce it locally with nbconvert (note that the output filename must not repeat the directory, since `--output-dir` is already `eclipse`):

  ```sh
  .venv/bin/python -m nbconvert --to notebook --execute eclipse/Uncertainty_Propagation.ipynb --output Uncertainty_Propagation.executed.ipynb --output-dir=eclipse --ExecutePreprocessor.timeout=600
  ```

## Audit methodology (summary)
- Per-event inputs are taken from `Besselian_Rows_Audit.csv` and used to compute the penumbra width and the model prediction (see `Uncertainty_Propagation.ipynb` for the derivations).
- The package compares Besselian-derived penumbra widths against a fixed-solar-diameter model and reports per-event shifts and parallax differences.
- `plot_audit_shifts.py` produces a per-event summary file, `eclipse/audit_shifts_summary.txt`, and a histogram, `logs/audit_shifts_hist.png` (requires `matplotlib`).
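The nonlinear Monte Carlo step works by repeatedly perturbing each event's inputs within their quoted uncertainties and pushing every draw through the full model, so no linearization is needed. A minimal sketch of that idea, assuming hypothetical column names (`l1`, `sigma_l1`) and a toy width formula, not the actual code in `run_nonlinear_mc.py`:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical per-event inputs: a Besselian element and its 1-sigma uncertainty.
# The real column names live in Besselian_Rows_Audit.csv and may differ.
df = pd.DataFrame({"l1": [0.5358, 0.5402], "sigma_l1": [0.0004, 0.0005]})

N = 100_000  # Monte Carlo draws per event

def penumbra_width(l1):
    """Toy model: width proportional to the penumbral shadow radius l1
    (Earth radii). A stand-in for the notebook's actual derivation."""
    return 2.0 * l1

for i, row in df.iterrows():
    draws = rng.normal(row["l1"], row["sigma_l1"], size=N)  # perturb the input
    widths = penumbra_width(draws)                          # push through the model
    print(f"event {i}: width = {widths.mean():.4f} "
          f"+/- {widths.std(ddof=1):.4f} (Earth radii)")
```

Because the model is applied to every draw, this recovers skew or bias that a first-order formula would miss when the mapping is nonlinear.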
Outputs:

- `logs/Summary_Statistics_uncertainty.log`
- `logs/Summary_Statistics_nonlinear.log`
- `logs/Summary_Statistics_notebook.log`
- `logs/audit_shifts_hist.png`
- `eclipse/audit_shifts_summary.txt`
- `eclipse/Uncertainty_Propagation.executed.ipynb`
- Scripts in `eclipse/` write logs to `../logs` (the top-level `logs/`) and expect `Besselian_Rows_Audit.csv` in the same folder.
- For reproducible runs use Docker; local runs require a working virtualenv and the packages in `requirements.txt`.
- The project includes a fallback analytic script, `run_uncertainty_nopy.py`, that runs without numpy for minimalist environments.
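The analytic scripts rely on first-order (delta-method) uncertainty propagation, which a no-numpy fallback can do with the standard library alone. A hedged sketch under that assumption, using a made-up two-parameter width model and illustrative numbers rather than the project's real formulas:

```python
import math

def width(l1, d_sun):
    """Hypothetical two-parameter penumbra-width model (for illustration only)."""
    return 2.0 * l1 * d_sun

def propagate(f, params, sigmas, eps=1e-6):
    """First-order propagation: sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with each partial derivative estimated by a central difference."""
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        hi = list(params); hi[i] = p + eps
        lo = list(params); lo[i] = p - eps
        dfdx = (f(*hi) - f(*lo)) / (2 * eps)
        var += (dfdx * s) ** 2
    return math.sqrt(var)

w = width(0.5358, 1.0)
sigma_w = propagate(width, [0.5358, 1.0], [0.0004, 0.002])
print(f"width = {w:.4f} +/- {sigma_w:.4f}")
```

This is the standard linearized approximation; it is accurate when the input uncertainties are small relative to the curvature of the model, which is exactly the regime the Monte Carlo script cross-checks.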