Merged
31 changes: 29 additions & 2 deletions .github/workflows/ci-tests.yml
Original file line number Diff line number Diff line change
@@ -7,12 +7,26 @@
name: ci-tests

# Triggers the workflow on pull-request or push events
on: [pull_request, push]
on:
push:
branches:
- "main"
- "v*x"
tags:
- "v*"
pull_request:
branches:
- "*"
workflow_dispatch:

jobs:
tests:
name: "Test Python ${{ matrix.version }}"
name: "Test Python ${{ matrix.version }} session= ${{ matrix.session }}"
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
session: ["tests", "doctests-docs", "doctests-api"]
defaults:
run:
shell: bash -l {0}
@@ -61,6 +75,19 @@ jobs:
mv iris-test-data-${IRIS_TEST_DATA_VERSION} ${GITHUB_WORKSPACE}/iris_test_data_download

- name: "Run tests"
if: matrix.session == 'tests'
run: |
ls ${GITHUB_WORKSPACE}/iris_test_data_download/test_data
OVERRIDE_TEST_DATA_REPOSITORY=${GITHUB_WORKSPACE}/iris_test_data_download/test_data PYTHONPATH=./tests:$PYTHONPATH pytest -v ./tests

- name: "Run doctests: Docs"
if: matrix.session == 'doctests-docs'
run: |
cd docs
pytest --doctest-glob="*.rst"

- name: "Run doctests: API"
if: matrix.session == 'doctests-api'
run: |
cd lib
pytest --doctest-modules
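As a side note on the matrix strategy above: GitHub Actions expands the matrix to the cross-product of its axes and runs one job per combination, so each `session` gets its own independent job. A minimal sketch of that expansion (the `version` axis is assumed here — it is referenced by the job name but defined outside this hunk, and the values below are illustrative):

```python
from itertools import product

# Assumed "version" values; the "session" values come from the matrix above.
versions = ["3.10", "3.11"]
sessions = ["tests", "doctests-docs", "doctests-api"]

# GitHub expands the matrix to the cross-product: one job per combination,
# each named via the "name:" template in the workflow.
jobs = [f"Test Python {v} session= {s}" for v, s in product(versions, sessions)]
for name in jobs:
    print(name)
```

With `fail-fast: false`, a failure in one of these jobs does not cancel the others, so all three sessions always report a result.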
6 changes: 3 additions & 3 deletions .readthedocs.yml
@@ -4,7 +4,6 @@ build:
os: ubuntu-20.04
tools:
python: mambaforge-4.10

jobs:
# Content here largely copied from Iris
# see : https://github.com/SciTools/iris/pull/4855
@@ -19,9 +18,10 @@ build:
pre_install:
- git stash
post_install:
- sphinx-apidoc -Mfe -o ./docs/api ./lib/ncdata
- towncrier build --yes
- git stash pop
pre_build:
- cd docs; make allapi
- cd docs; make towncrier

conda:
environment: requirements/readthedocs.yml
2 changes: 1 addition & 1 deletion docs/Makefile
@@ -19,7 +19,7 @@ allapi:
sphinx-apidoc -Mfe -o ./details/api ../lib/ncdata

towncrier:
towncrier build --yes
towncrier build --keep

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
36 changes: 36 additions & 0 deletions docs/change_log.rst
@@ -23,6 +23,42 @@ Summary of key features by release number:

.. towncrier release notes start

Ncdata 0.3.0.dev4+dirty (2025-07-31)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Features
^^^^^^^^

- Added regular linkcheck gha. (`ISSUE#123 <https://github.com/pp-mo/ncdata/pull/123>`_)
- Make :meth:`~ncdata.iris.to_iris` use the full iris load processing,
instead of :meth:`iris.fileformats.netcdf.loader.load_cubes`.
This means you can use load controls such as callbacks and constraints. (`ISSUE#131 <https://github.com/pp-mo/ncdata/pull/131>`_)


Developer and Internal changes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Switch to towncrier for whats-new management. (`ISSUE#116 <https://github.com/pp-mo/ncdata/pull/116>`_)



v0.2.0
~~~~~~
Overhauled data manipulation APIs. Expanded and improved documentation.
1 change: 1 addition & 0 deletions docs/changelog_fragments/136.dev.rst
@@ -0,0 +1 @@
Made all docs examples into doctests; add doctest CI action.
9 changes: 9 additions & 0 deletions docs/conf.py
@@ -35,8 +35,10 @@
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.doctest",
"sphinx.ext.intersphinx",
"sphinx.ext.napoleon",
"sphinx_copybutton",
]

intersphinx_mapping = {
@@ -108,6 +110,13 @@
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]

# -- copybutton extension -----------------------------------------------------
# See https://sphinx-copybutton.readthedocs.io/en/latest/
copybutton_prompt_text = r">>> |\.\.\. "
copybutton_prompt_is_regexp = True
copybutton_line_continuation_character = "\\"


# Various scheme control settings.
# See https://pydata-sphinx-theme.readthedocs.io/en/stable/user_guide/layout.html

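On the `copybutton_prompt_text` regex added above: because `copybutton_prompt_is_regexp` is set, sphinx-copybutton treats the pattern as a regular expression and strips a matching prompt from the start of each copied line, so users get runnable code without `>>> ` / `... ` prefixes. A quick sketch of the effect (applying the pattern anchored at line start, as copybutton does):

```python
import re

# The prompt pattern from conf.py above, anchored at the start of the line.
prompt = r"^(>>> |\.\.\. )"

lines = [
    ">>> data = NcData('myname')",
    "... print(data)",
    "plain output line",
]
# Strip a leading prompt where present; leave other lines untouched.
stripped = [re.sub(prompt, "", line) for line in lines]
print(stripped)
```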
111 changes: 81 additions & 30 deletions docs/userdocs/getting_started/introduction.rst
@@ -38,49 +38,65 @@ and :attr:`~ncdata.NcData.attributes`:
>>> from ncdata import NcData, NcDimension, NcVariable
>>> data = NcData("myname")
>>> data
<ncdata._core.NcData object at 0x7f88118dd700>
<ncdata._core.NcData object at ...>
>>> print(data)
<NcData: myname
>

>>> dim = NcDimension("x", 3)
>>> data.dimensions.add(dim)
>>> data.dimensions['x'] is dim
True

>>> data.variables.add(NcVariable('vx', ["x"], dtype=float))
>>> print(data)
<NcData: myname
dimensions:
x = 3
<BLANKLINE>
variables:
<NcVariable(float64): vx(x)>
>

>>> data.dimensions['x'] is dim
True


Getting data to+from files
^^^^^^^^^^^^^^^^^^^^^^^^^^
The :mod:`ncdata.netcdf4` module provides simple means of reading and writing
NetCDF files via the `netcdf4-python package <http://unidata.github.io/netcdf4-python/>`_.

.. testsetup::

>>> from subprocess import check_output
>>> def ncdump(path):
... text = check_output(f'ncdump -h {path}', shell=True).decode()
... text = text.replace("\t", " " * 3)
... print(text)


Simple example:

.. code-block:: python

>>> from ncdata.netcdf4 import to_nc4, from_nc4

>>> filepath = "./tmp.nc"
>>> to_nc4(data, filepath)

>>> from subprocess import check_output
>>> print(check_output('ncdump -h tmp.nc', shell=True).decode())
>>> print(check_output("ncdump -h tmp.nc", shell=True).decode()) # doctest: +NORMALIZE_WHITESPACE
netcdf tmp {
dimensions:
x = 3 ;
x = 3 ;
variables:
double vx(x) ;
}

<BLANKLINE>
>>> data2 = from_nc4(filepath)
>>> print(data2)
<NcData: /
dimensions:
x = 3
<BLANKLINE>
variables:
<NcVariable(float64): vx(x)>
>

Please see `Converting between data formats`_ for more details.
@@ -93,32 +109,34 @@ which behaves like a dictionary:

.. code-block:: python

>>> var = NcVariable("vx", dimensions=["x"], dtype=float)
>>> data.variables.add(var)

>>> data.variables
{'vx': <ncdata._core.NcVariable object at ... >}
{'vx': <ncdata._core.NcVariable object at ...>}

>>> data.variables['vx'] is var
True
>>> var = NcVariable("newvar", dimensions=["x"], data=[1, 2, 3])
>>> data.variables.add(var)

>>> print(data)
<NcData: myname
dimensions:
x = 3

<BLANKLINE>
variables:
<NcVariable(float64): vx(x)>
<NcVariable(int64): newvar(x)>
>

>>> # remove again, for simpler subsequent testing
>>> del data.variables["newvar"]


Attributes
^^^^^^^^^^
Variables live in the ``attributes`` property of a :class:`~ncdata.NcData`
Attributes live in the ``attributes`` property of a :class:`~ncdata.NcData`
or :class:`~ncdata.NcVariable`:

.. code-block:: python

>>> var = data.variables["vx"]
>>> var.set_attrval('a', 1)
NcAttribute('a', 1)
>>> var.set_attrval('b', 'this')
@@ -137,7 +155,7 @@ or :class:`~ncdata.NcVariable`:
<NcData: myname
dimensions:
x = 3

<BLANKLINE>
variables:
<NcVariable(float64): vx(x)
vx:a = 1
@@ -182,7 +200,7 @@ There is also a 'rename' method of variables/attributes/groups:
<NcData: myname
dimensions:
x = 3

<BLANKLINE>
variables:
<NcVariable(float64): vx(x)
vx:qq = 'this'
@@ -230,32 +248,65 @@ Example code snippets :

.. code-block:: python

>>> from ndata.threadlock_sharing import enable_lockshare
>>> # (make sure that Iris and Ncdata won't conflict over netcdf access)
>>> from ncdata.threadlock_sharing import enable_lockshare
>>> enable_lockshare(iris=True, xarray=True)

.. code-block:: python

>>> from ncdata.netcdf import from_nc4
>>> ncdata = from_nc4("datapath.nc")
>>> from ncdata.netcdf4 import from_nc4
>>> data = from_nc4("tmp.nc")

.. code-block:: python

>>> from ncdata.iris import to_iris, from_iris
>>> xx, yy = to_iris(ncdata, ['x_wind', 'y_wind'])
>>> vv = (xx * xx + yy * yy) ** 0.5
>>> vv.units = xx.units
>>> from iris import FUTURE
>>> # (avoid some irritating warnings)
>>> FUTURE.save_split_attrs = True

>>> data = NcData(
... dimensions=[NcDimension("x", 3)],
... variables=[
... NcVariable("vx0", ["x"], data=[1, 2, 1],
... attributes={"long_name": "speed_x", "units": "m.s-1"}),
... NcVariable("vx1", ["x"], data=[3, 4, 6],
... attributes={"long_name": "speed_y", "units": "m.s-1"})
... ]
... )
>>> vx, vy = to_iris(data, constraints=['speed_x', 'speed_y'])
>>> print(vx)
speed_x / (m.s-1) (-- : 3)
>>> vv = (0.5 * (vx * vx + vy * vy)) ** 0.5
>>> vv.rename("v_mag")
>>> print(vv)
v_mag / (m.s-1) (-- : 3)

.. code-block:: python

>>> from ncdata.xarray import to_xarray
>>> xrds = to_xarray(from_iris(vv))
>>> xrds.to_zarr(out_path)
>>> xrds = to_xarray(from_iris([vx, vy, vv]))
>>> print(xrds)
<xarray.Dataset> Size: ...
Dimensions: (dim0: 3)
Dimensions without coordinates: dim0
Data variables:
vx0 (dim0) int64 ... dask.array<chunksize=(3,), meta=numpy.ma.MaskedArray>
vx1 (dim0) int64 ... dask.array<chunksize=(3,), meta=numpy.ma.MaskedArray>
v_mag (dim0) float64 ... dask.array<chunksize=(3,), meta=numpy.ma.MaskedArray>
Attributes:
Conventions: CF-1.7

.. code-block:: python

>>> from ncdata.iris_xarray import cubes_from_xarray
>>> vv2 = cubes_from_xarray(xrds)
>>> assert vv2 == vv
>>> readback = cubes_from_xarray(xrds)
>>> # warning: order is indeterminate!
>>> from iris.cube import CubeList
>>> readback = CubeList(sorted(readback, key=lambda cube: cube.name()))
>>> print(readback)
0: speed_x / (m.s-1) (-- : 3)
1: speed_y / (m.s-1) (-- : 3)
2: v_mag / (m.s-1) (-- : 3)


Thread safety
@@ -269,7 +320,7 @@ Thread safety

.. code-block:: python

>>> from ndata.threadlock_sharing import enable_lockshare
>>> from ncdata.threadlock_sharing import enable_lockshare
>>> enable_lockshare(iris=True, xarray=True)

See details at :ref:`thread_safety`.
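A conceptual sketch of what `enable_lockshare` achieves (this is not the ncdata implementation — the class names below are illustrative): two libraries that each serialise netCDF access through their own lock can safely interleave only if both are pointed at one shared lock object, so a thread holding "A's" lock also excludes threads entering via "B".

```python
import threading

# One lock object, shared between two otherwise-independent components.
shared = threading.Lock()

class LibA:          # hypothetical stand-in for e.g. iris's netcdf lock
    lock = shared

class LibB:          # hypothetical stand-in for e.g. ncdata's netcdf lock
    lock = shared

with LibA.lock:
    # Both attributes reference the same object, so another thread
    # acquiring LibB.lock would block until this block exits.
    assert LibB.lock is LibA.lock
print("locks shared:", LibA.lock is LibB.lock)
```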
7 changes: 5 additions & 2 deletions docs/userdocs/user_guide/common_operations.rst
@@ -53,7 +53,9 @@ These however do *not* copy variable data arrays (either real or lazy), but prod

.. code-block::

>>> Construct a simple test dataset
>>> # Construct a simple test dataset
>>> import numpy as np
>>> from ncdata import NcData, NcDimension, NcVariable
>>> ds = NcData(
... dimensions=[NcDimension('x', 12)],
... variables=[NcVariable('vx', ['x'], np.ones(12))]
@@ -73,7 +75,8 @@
>>> # So changing one actually CHANGES THE OTHER ...
>>> ds.variables['vx'].data[6:] = 777
>>> ds_copy.variables['vx'].data
array([1., 1., 1., 1., 1., 1., 777., 777., 777., 777., 777., 777.])
array([ 1., 1., 1., 1., 1., 1., 777., 777., 777., 777., 777.,
777.])

If needed you can of course replace variable data with copies yourself, since you can
freely assign to ``.data``.
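The array-sharing behaviour shown in this hunk can be reproduced with plain numpy, independent of ncdata: a shallow copy of a container is a new container holding the *same* ndarray object, so mutation through one is visible through the other. (The dict below is an illustrative stand-in, not the ncdata API.)

```python
import numpy as np

# A shallow copy shares the underlying array, mirroring how ncdata's
# copy operations keep variable data arrays shared.
original = {"vx": np.ones(12)}
shallow = dict(original)      # new dict, but the same ndarray object

shallow["vx"][6:] = 777       # mutate through the copy...
print(original["vx"])         # ...and the original sees the change

assert original["vx"] is shallow["vx"]
```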