diff --git a/CHANGES.txt b/CHANGES.txt
index 59b20fc9..718a4818 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,10 +1,26 @@
 Changes
 --------------------------------------------------------------------------------
 
+3.0.0
+................................................................................
+
+* Pythonic pipeline creation https://github.com/PDAL/python/pull/91
+
+* Support streaming pipeline execution https://github.com/PDAL/python/pull/94
+
+* Replace Cython with PyBind11 https://github.com/PDAL/python/pull/102
+
+* Remove pdal.pio module https://github.com/PDAL/python/pull/101
+
+* Move readers.numpy and filters.python to separate repository https://github.com/PDAL/python/pull/104
+
+* Miscellaneous refactorings and cleanups
+
 2.3.5
 ................................................................................
 
 * Fix memory leak https://github.com/PDAL/python/pull/74
+
 * Handle metadata with invalid unicode by erroring https://github.com/PDAL/python/pull/74
 
 2.3.0
@@ -29,5 +45,3 @@ Changes
   schedule at https://github.com/PDAL/python
 
 * Extension now builds and works under PDAL OSGeo4W64 on Windows.
-
-
diff --git a/README.rst b/README.rst
index 37ca51b9..481b50e3 100644
--- a/README.rst
+++ b/README.rst
@@ -142,14 +142,14 @@ The following more complex scenario demonstrates the full cycling between
 PDAL and Python:
 
 * Read a small testfile from GitHub into a Numpy array
-* Filters those arrays with Numpy for Intensity
+* Filter the array with Numpy for Intensity
 * Pass the filtered array to PDAL to be filtered again
-* Write the filtered array to an LAS file.
+* Write the final filtered array to a LAS file and a TileDB_ array
+  via the `TileDB-PDAL integration`_ using the `TileDB writer plugin`_
 
 .. code-block:: python
 
     import pdal
-    import numpy as np
 
     data = "https://github.com/PDAL/PDAL/blob/master/test/data/las/1.2-with-color.las?raw=true"
 
@@ -175,9 +175,12 @@ PDAL and Python:
     print(pipeline.execute()) # 387 points
     clamped = pipeline.arrays[0]
 
-    # Write our intensity data to an LAS file
+    # Write our intensity data to a LAS file and a TileDB array. For TileDB it is
+    # recommended to use Hilbert ordering by default with geospatial point cloud data,
+    # which requires specifying a domain extent. This can be determined automatically
+    # from a stats filter that computes statistics about each dimension (min, max, etc.).
     pipeline = pdal.Writer.las(
-        filename="clamped2.las",
+        filename="clamped.las",
         offset_x="auto",
         offset_y="auto",
         offset_z="auto",
@@ -185,8 +188,14 @@ PDAL and Python:
         scale_y=0.01,
         scale_z=0.01,
     ).pipeline(clamped)
+    pipeline |= pdal.Filter.stats() | pdal.Writer.tiledb(array_name="clamped")
     print(pipeline.execute()) # 387 points
 
+    # Dump the TileDB array schema
+    import tiledb
+    with tiledb.open("clamped") as a:
+        print(a.schema)
+
 Executing Streamable Pipelines
 ................................................................................
 Streamable pipelines (pipelines that consist exclusively of streamable PDAL
@@ -293,6 +302,9 @@ USE-CASE : Take a LiDAR map, create a mesh from the ground points, split into ti
 .. _`Numpy`: http://www.numpy.org/
 .. _`schema`: http://www.pdal.io/dimensions.html
 .. _`metadata`: http://www.pdal.io/development/metadata.html
+.. _`TileDB`: https://tiledb.com/
+.. _`TileDB-PDAL integration`: https://docs.tiledb.com/geospatial/pdal
+.. _`TileDB writer plugin`: https://pdal.io/stages/writers.tiledb.html
 
 .. image:: https://github.com/PDAL/python/workflows/Build/badge.svg
     :target: https://github.com/PDAL/python/actions?query=workflow%3ABuild
diff --git a/pdal/__init__.py b/pdal/__init__.py
index 09d58be9..7bc49270 100644
--- a/pdal/__init__.py
+++ b/pdal/__init__.py
@@ -1,4 +1,4 @@
-__version__ = "2.4.2"
+__version__ = "3.0.0"
 __all__ = ["Pipeline", "Stage", "Reader", "Filter", "Writer", "dimensions", "info"]
 
 from . import libpdalpython
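The "Pythonic pipeline creation" entry in 3.0.0, and the README's new `pipeline |= pdal.Filter.stats() | pdal.Writer.tiledb(...)` line, rely on stages overloading the `|` operator. The following is a minimal self-contained sketch of that operator pattern, using hypothetical `Stage` and `Pipeline` classes that only mimic the composition mechanics; it is not pdal's actual implementation.

```python
from typing import List


class Stage:
    """One pipeline stage, e.g. a reader, filter, or writer."""

    def __init__(self, type_: str, **options):
        # JSON-style stage specification: {"type": ..., option: value, ...}
        self.spec = {"type": type_, **options}

    def __or__(self, other: "Stage") -> "Pipeline":
        # stage | stage -> two-stage pipeline
        return Pipeline([self, other])


class Pipeline:
    """An ordered chain of stages."""

    def __init__(self, stages: List[Stage]):
        self.stages = list(stages)

    def __or__(self, other: Stage) -> "Pipeline":
        # pipeline | stage -> extended pipeline; since there is no __ior__,
        # `pipeline |= stage` also falls back to this method
        return Pipeline(self.stages + [other])

    def spec(self) -> list:
        # One dict per stage, in execution order
        return [s.spec for s in self.stages]


# Compose stages the same way the README composes pdal stages.
pipeline = Stage("filters.stats") | Stage("writers.tiledb", array_name="clamped")
pipeline |= Stage("writers.las", filename="clamped.las")
print([s["type"] for s in pipeline.spec()])
# -> ['filters.stats', 'writers.tiledb', 'writers.las']
```

Because `|` returns a new `Pipeline` rather than mutating, `|=` simply rebinds the name, which keeps previously captured pipeline objects unchanged.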