
Update tensorflow to 2.10.0#261

Closed
pyup-bot wants to merge 1 commit into master from pyup-update-tensorflow-2.5.0-to-2.10.0

Conversation

@pyup-bot (Contributor) commented Sep 6, 2022

This PR updates tensorflow from 2.5.0 to 2.10.0.

Changelog

2.10.0

Breaking Changes

*   Causal attention in `keras.layers.Attention` and `keras.layers.AdditiveAttention` is now specified in the `call()` method via the `use_causal_mask` argument (rather than in the constructor), for consistency with other layers.
*   Some files in `tensorflow/python/training` have been moved to `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please update your imports accordingly; the old files will be removed in release 2.11.
*   `tf.keras.optimizers.experimental.Optimizer` will graduate in Release 2.11, which means `tf.keras.optimizers.Optimizer` will be an alias of `tf.keras.optimizers.experimental.Optimizer`. The current `tf.keras.optimizers.Optimizer` will continue to be supported as `tf.keras.optimizers.legacy.Optimizer`, e.g., `tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this change, but please check the [API doc](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental) if any API used in your workflow is changed or deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
*   RNG behavior change for `tf.keras.initializers`. Keras initializers will now use stateless random ops to generate random numbers.
*   Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (`seed=None`), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
*   An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
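The causal-mask change above can be sketched in a couple of lines. This is a minimal illustration with assumed toy shapes, not a full migration guide:

```python
import tensorflow as tf

# Toy query/value batches: (batch=1, timesteps=3, features=4).
query = tf.random.normal((1, 3, 4))
value = tf.random.normal((1, 3, 4))

# As of 2.10, the causal mask is requested at call time via
# `use_causal_mask` rather than in the layer constructor.
attention = tf.keras.layers.Attention()
output = attention([query, value], use_causal_mask=True)
print(output.shape)  # (1, 3, 4)
```

The same `use_causal_mask` call argument applies to `tf.keras.layers.AdditiveAttention`.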

Major Features and Improvements

*   `tf.lite`:

 *   New operations supported:
       * TFLite `SelectV2` now supports 5D.
       * `tf.einsum` is supported with multiple unknown shapes.
       * `tf.unsortedsegmentprod` op is supported.
       * `tf.unsortedsegmentmax` op is supported.
       * `tf.unsortedsegmentsum` op is supported.
 *   Updates to existing operations:
       * tfl.scatter_nd now supports I1 for update arg.
 *   Upgraded Flatbuffers from v1.12.0 to v2.0.5.
 *   Improved support for the `tf_type.variant` type in flatbuffer import/export.

*   `tf.keras`:

 *   `EinsumDense` layer moved from experimental to core. Its import path moved from `tf.keras.layers.experimental.EinsumDense` to `tf.keras.layers.EinsumDense`.
 *   Added `tf.keras.utils.audio_dataset_from_directory` utility to easily generate audio classification datasets from directories of `.wav` files.
 *   Added `subset="both"` support in `tf.keras.utils.image_dataset_from_directory`, `tf.keras.utils.text_dataset_from_directory`, and `audio_dataset_from_directory`, to be used with the `validation_split` argument, for returning both dataset splits at once, as a tuple.
 *   Added `tf.keras.utils.split_dataset` utility to split a `Dataset` object or a list/tuple of arrays into two `Dataset` objects (e.g. train/test).
 *   Added step granularity to `BackupAndRestore` callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
 *   Added [`tf.keras.dtensor.experimental.optimizers.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/dtensor/experimental/optimizers/AdamW). This optimizer is similar to the existing [`keras.optimizers.experimental.AdamW`](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/experimental/AdamW), and works in the DTensor training use case.
 *   Improved masking support for [tf.keras.layers.MultiHeadAttention](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention).
 *   Implicit masks for `query`, `key` and `value` inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any `attention_mask` passed in directly when calling the layer. This can be used with [tf.keras.layers.Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) with `mask_zero=True` to automatically infer a correct padding mask.
 *   Added a `use_causal_mask` call-time argument to the layer. Passing `use_causal_mask=True` will compute a causal attention mask, and optionally combine it with any `attention_mask` passed in directly when calling the layer.
 *   Added `ignore_class` argument in the loss `SparseCategoricalCrossentropy` and metrics `IoU` and `MeanIoU`, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
 *   Added [`tf.keras.models.experimental.SharpnessAwareMinimization`](https://www.tensorflow.org/api_docs/python/tf/keras/models/experimental/SharpnessAwareMinimization). This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.

*   `tf.data`:

 *   Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See
     https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
 *   Added `dataset_id` to `tf.data.experimental.service.register_dataset`. If provided, tf.data service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call `register_dataset` with the same `dataset_id`.
 *   Added a new field, `inject_prefetch`, to `tf.data.experimental.OptimizationOptions`. If set to `True`, tf.data will automatically add a `prefetch` transformation to datasets that end in synchronous transformations, so that data generation can be overlapped with data consumption. This may cause a small increase in memory usage due to buffering.
 *   Added a new value to `tf.data.Options.autotune.autotune_algorithm`: STAGE_BASED. If the autotune algorithm is set to STAGE_BASED, then it runs a new algorithm that can get the same performance with lower CPU/memory usage.
 *   Added [`tf.data.experimental.from_list`](https://www.tensorflow.org/api_docs/python/tf/data/experimental/from_list), a new API for creating `Dataset`s from lists of elements.
 *   Graduated `tf.data.experimental.Counter` to [`tf.data.Dataset.counter`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset/#counter).
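A minimal sketch combining two of the additions above, `from_list` and the `inject_prefetch` option (the toy element list is assumed for illustration):

```python
import tensorflow as tf

# from_list builds a Dataset directly from a Python list of elements.
ds = tf.data.experimental.from_list([1, 2, 3])

# Opt in to automatic prefetch injection (new in 2.10).
options = tf.data.Options()
options.experimental_optimization.inject_prefetch = True
ds = ds.with_options(options)

print(list(ds.as_numpy_iterator()))  # [1, 2, 3]
```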

*   `tf.distribute`:

 *   Added [`tf.distribute.experimental.PreemptionCheckpointHandler`](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/PreemptionCheckpointHandler) to handle worker preemption/maintenance and cluster-wise consistent error reporting for `tf.distribute.MultiWorkerMirroredStrategy`. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.

*   `tf.math`:

 *   Added `tf.math.approx_max_k` and `tf.math.approx_min_k`, which are optimized alternatives to `tf.math.top_k` on TPU. The performance difference ranges from 8x to 100x depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
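A minimal sketch of `approx_max_k` under XLA compilation (toy scores assumed; on CPU the non-optimized fallback makes the result exact):

```python
import tensorflow as tf

scores = tf.constant([0.1, 0.9, 0.4, 0.7])

# Compile with XLA; on TPU this uses the optimized approximate kernel.
@tf.function(jit_compile=True)
def top2(x):
    return tf.math.approx_max_k(x, k=2)

values, indices = top2(scores)
print(values.numpy(), indices.numpy())
```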

*   `tf.train`:

 *  Added `tf.train.TrackableView`, which allows users to inspect TensorFlow Trackable objects (e.g. `tf.Module`, Keras layers and models).
 *  Added `tf.train.CheckpointView` which allows users to inspect SavedModel objects and Checkpoint objects.

*   `tf.vectorized_map`:

 *   Added an optional parameter: `warn`. This parameter controls whether or not warnings will be printed when operations in the provided `fn` fall back to a while loop.
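A minimal sketch of the new `warn` parameter (toy batch assumed):

```python
import tensorflow as tf

batch = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Silence the fallback-to-while-loop warning via the new `warn` parameter.
row_sums = tf.vectorized_map(
    lambda row: tf.reduce_sum(row), batch, warn=False)
print(row_sums.numpy())  # [3. 7.]
```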

*   XLA:
 *   MWMS (`tf.distribute.MultiWorkerMirroredStrategy`) is now compilable with XLA.

Bug Fixes and Other Changes

*  New argument `experimental_device_ordinal` in `LogicalDeviceConfiguration` to control the order of logical devices. (GPU only)

*   `tf.keras`:

 *   Changed the TensorBoard tag names produced by the `tf.keras.callbacks.TensorBoard` callback, so that summaries logged automatically for model weights now include either a `/histogram` or `/image` suffix in their tag names, in order to prevent tag name collisions across summary types.

*   When running on GPU (with cuDNN version 7.6.3 or later), `tf.nn.depthwise_conv2d` backprop to `filter` (and therefore also `tf.keras.layers.DepthwiseConv2D`) now operates deterministically (and `tf.errors.UnimplementedError` is no longer thrown) when op-determinism has been enabled via `tf.config.experimental.enable_op_determinism`. This closes issue [47174](https://github.com/tensorflow/tensorflow/issues/47174).

* `tf.random`
 * Added `tf.random.experimental.stateless_shuffle`, a stateless version of `tf.random.shuffle`.
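A minimal sketch of the stateless shuffle above (seed values are arbitrary; the point is that the same seed yields the same permutation):

```python
import tensorflow as tf

values = tf.range(5)
seed = tf.constant([7, 42], dtype=tf.int64)  # shape-[2] seed, as for stateless RNG ops

# Same seed => same permutation, with no internal RNG state mutated.
a = tf.random.experimental.stateless_shuffle(values, seed=seed)
b = tf.random.experimental.stateless_shuffle(values, seed=seed)
print(bool(tf.reduce_all(a == b)))  # True
```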

Deprecations

*   The C++ `tensorflow::Code` and `tensorflow::Status` will become aliases of respectively `absl::StatusCode` and `absl::Status` in some future release.
 *   Use `tensorflow::OkStatus()` instead of `tensorflow::Status::OK()`.
 *   Stop constructing `Status` objects from `tensorflow::error::Code`.
 *   One MUST NOT access `tensorflow::errors::Code` fields. Accessing
     `tensorflow::error::Code` fields is fine.
      *   Use constructors such as `tensorflow::errors::InvalidArgument` to create a status with an error code without accessing the code directly.
      *   Use free functions such as `tensorflow::errors::IsInvalidArgument` if needed.
      *   As a last resort, use e.g. `static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)` or `static_cast<int>(code)` for comparisons.
*   `tensorflow::StatusOr` will also become an alias of `absl::StatusOr` in a future release, so use `StatusOr::value` instead of `StatusOr::ConsumeValueOrDie`.



Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang

2.9.1

Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.

2.9.0

Breaking Changes

*   Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to [TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests).
*   Build, Compilation and Packaging
 * TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See [the GNU C++ Library docs on Dual ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html).
 * TensorFlow Python wheels now specifically conform to [manylinux2014](https://peps.python.org/pep-0599/), an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see [pypa/manylinux](https://github.com/pypa/manylinux)). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
 * Discussion for these changes can be found on SIG Build's [TensorFlow Community Forum thread](https://discuss.tensorflow.org/t/tensorflow-linux-wheels-are-being-upgraded-to-manylinux2014/8339)
*   The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
 * The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
   * Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
   * Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
   * Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section.
 * In the following rare cases, you need to make more changes when switching to the non-experimental API:
   * If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`:
       * The LossScaleOptimizer constructor takes in different arguments. See the [TF 2.7 documentation of tf.keras.mixed_precision.experimental.LossScaleOptimizer](https://www.tensorflow.org/versions/r2.7/api_docs/python/tf/keras/mixed_precision/experimental/LossScaleOptimizer) for details on the differences, which has examples on how to convert to the non-experimental LossScaleOptimizer.
   * If you passed a value to the `loss_scale` argument (the second argument) of `Policy`:
       * The experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`.
   * If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`:
       * Replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`.
* `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were rarely used and were only useful in TF2 for the now-removed `tf.keras.mixed_precision.experimental` API. They remain available under `tf.compat.v1.mixed_precision`.
* The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
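The mixed-precision migration described above can be sketched in a few lines. This is illustrative only (the toy `Dense` layer is assumed); it is not an exhaustive migration:

```python
import tensorflow as tf

# Old (removed): tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
# Non-experimental replacement:
tf.keras.mixed_precision.set_global_policy("mixed_float16")
policy_name = tf.keras.mixed_precision.global_policy().name
print(policy_name)  # mixed_float16

# layer.dtype_policy replaces experimental.get_layer_policy(layer).
layer = tf.keras.layers.Dense(4)
layer_policy_name = layer.dtype_policy.name
print(layer_policy_name)  # mixed_float16

# Restore the default policy.
tf.keras.mixed_precision.set_global_policy("float32")
```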

Major Features and Improvements

*   `tf.keras`:

 *   Added `tf.keras.applications.resnet_rs` models. This includes the
     `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`,
     `ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The
     ResNetRS models are based on the architecture described in
     [Revisiting ResNets: Improved Training and Scaling Strategies](https://arxiv.org/pdf/2103.07579.pdf)
 *   Added `tf.keras.optimizers.experimental.Optimizer`. The reworked
     optimizer gives more control over different phases of optimizer calls,
     and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and
     RMSprop optimizers based on
     `tf.keras.optimizers.experimental.Optimizer`. Generally the new
     optimizers work in the same way as the old ones, but support new
     constructor arguments. In the future, the symbols
     `tf.keras.optimizers.Optimizer`/`Adam`/etc will point to the new
     optimizers, and the previous generation of optimizers will be moved to
     `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
 *   Added L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
 *   Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer
     that encourages orthogonality between the rows (or columns) of a weight
     matrix.
 *   Added `tf.keras.layers.RandomBrightness` layer for image preprocessing.
 *   Added APIs for switching between interactive logging and absl logging.
     By default, Keras always writes the logs to stdout. However, this is not
     optimal in a non-interactive environment, where you don't have access to
     stdout, but can only view the logs. You can use
     `tf.keras.utils.disable_interactive_logging()` to write the logs to ABSL
     logging. You can also use `tf.keras.utils.enable_interactive_logging()`
     to change it back to stdout, or
     `tf.keras.utils.is_interactive_logging_enabled()` to check if
     interactive logging is enabled.
 *   Changed default value for the `verbose` argument of `Model.evaluate()`
     and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for
     most cases and defaults to `verbose=2` when used with
     `ParameterServerStrategy` or with interactive logging disabled.
 *   Argument `jit_compile` in `Model.compile()` now applies to
     `Model.evaluate()` and `Model.predict()`. Setting `jit_compile=True` in
     `compile()` compiles the model's training, evaluation, and inference
     steps to [XLA](https://www.tensorflow.org/xla). Note that
     `jit_compile=True` may not necessarily work for all models.
 *   Added DTensor-related Keras APIs under `tf.keras.dtensor` namespace. The
     APIs are still classified as experimental. You are welcome to try it
     out. Please check the tutorial and guide on https://www.tensorflow.org/
     for more details about DTensor.
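Of the layers above, `UnitNormalization` is simple to demonstrate. A minimal sketch with an assumed toy input:

```python
import tensorflow as tf

# UnitNormalization L2-normalizes each input row to unit length.
layer = tf.keras.layers.UnitNormalization()
out = layer(tf.constant([[3.0, 4.0]]))
print(out.numpy())  # [[0.6 0.8]]
```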

*   `tf.lite`:

 *   Added TFLite builtin op support for the following TF ops:
     *   `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on
         CPU.
     *   `tf.nn.gelu` op for output data type `tf.float32` and quantization
         on CPU.
 *   Add nominal support for unsigned 16-bit integer tensor types. Note that
     very few TFLite kernels support this type natively, so its use in mobile
     ML authoring is generally discouraged.
 *   Add support for unsigned 16-bit integer tensor types in cast op.
 *   Experimental support for lowering `list_ops.tensor_list_set_item` with
     `DynamicUpdateSlice`.
 *   Enabled a new MLIR-based dynamic range quantization backend by default
     *   The new backend is used for post-training int8 dynamic range
         quantization and post-training float16 quantization.
     *   Set `experimental_new_dynamic_range_quantizer` in
         `tf.lite.TFLiteConverter` to `False` to disable this change.
 *   Native TF Lite variables are now enabled during conversion by default on
     all v2 TfLiteConverter entry points.
     `experimental_enable_resource_variables` on tf.lite.TFLiteConverter is
     now True by default and will be removed in the future.

*   `tf.function`:

 *   Custom classes used as arguments for `tf.function` can now specify rules
     regarding when retracing needs to occur by implementing the Tracing
     Protocol available through
     `tf.types.experimental.SupportsTracingProtocol`.
 *   `TypeSpec` classes (as associated with `ExtensionTypes`) also implement
     the Tracing Protocol which can be overridden if necessary.
 *   The newly introduced `reduce_retracing` option also uses the Tracing
     Protocol to proactively generate generalized traces similar to
     `experimental_relax_shapes` (which has now been deprecated).
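A minimal sketch of `reduce_retracing` (the counter is an assumed illustration trick: Python side effects only run while a function is being traced):

```python
import tensorflow as tf

trace_count = 0

# reduce_retracing replaces the deprecated experimental_relax_shapes flag.
@tf.function(reduce_retracing=True)
def double(x):
    global trace_count
    trace_count += 1  # Python side effects run only during tracing.
    return x * 2

print(double(tf.constant([1.0])).numpy())            # [2.]
print(double(tf.constant([1.0, 2.0])).numpy())       # [2. 4.]
print(double(tf.constant([1.0, 2.0, 3.0])).numpy())  # [2. 4. 6.]
# After the second call generalizes the traced shape, the third call
# reuses that trace, so fewer traces occur than distinct input shapes.
```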

*   Unified eager and `tf.function` execution:

 *   Eager mode can now execute each op as a `tf.function`, allowing for more
     consistent feature support in future releases.
 *   It is available for immediate use.
     *   See the `TF_RUN_EAGER_OP_AS_FUNCTION` environment variable in
         [eager context](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/eager/context.py).
     *   Eager performance should be similar with this feature enabled.
         *   A roughly 5us per-op overhead may be observed when running many
             small functions.
         *   Note a
             [known issue](https://github.com/tensorflow/tensorflow/issues/55414)
             with GPU performance.
     *   The behavior of `tf.function` itself is unaffected.
 *   Note: This feature will be enabled by default in an upcoming version of
     TensorFlow.

*   `tf.experimental.dtensor`: Added DTensor, an extension to TensorFlow for
 large-scale modeling with minimal changes to user code. You are welcome to
 try it out, though be aware that the DTensor API is experimental and subject
 to backward-incompatible changes. DTensor and Keras integration is published
 under `tf.keras.dtensor` in this release (refer to the `tf.keras` entry).
 The tutorial and guide for DTensor will be published on
 https://www.tensorflow.org/. Please stay tuned.

*   [oneDNN CPU performance optimizations](https://github.com/tensorflow/community/blob/master/rfcs/20210930-enable-onednn-ops.md)
 are available in Linux x86, Windows x86, and Linux aarch64 packages.

 *   **Linux x86 packages:**
     *   oneDNN optimizations are *enabled by default* on CPUs with
         neural-network-focused hardware features such as AVX512_VNNI,
         AVX512_BF16, AMX, etc.
         ([Intel Cascade Lake](https://www.intel.com/content/www/us/en/products/platforms/details/cascade-lake.html)
         and newer CPUs.)
         *   [Example performance speedups.](https://medium.com/intel-analytics-software/leverage-intel-deep-learning-optimizations-in-tensorflow-129faa80ee07)
     *   For older CPUs, oneDNN optimizations are disabled by default.
 *   **Windows x86 package:** oneDNN optimizations are disabled by default.
 *   **Linux aarch64 (`--config=mkl_aarch64`) package:**
     *   Experimental oneDNN optimizations are disabled by default.
     *   If you experience issues with oneDNN optimizations enabled, we
         recommend turning them off.
 *   To explicitly enable or disable oneDNN optimizations, set the
     environment variable `TF_ENABLE_ONEDNN_OPTS` to `1` (enable) or `0`
     (disable) before running TensorFlow. (The variable is checked during
     `import tensorflow`.) To fall back to default settings, unset the
     environment variable.
 *   These optimizations can yield slightly different numerical results from
     when they are off due to floating-point round-off errors from different
     computation approaches and orders.
 *   To verify that the optimizations are on, look for a message with
     *"oneDNN custom operations are on"* in the log. If the exact phrase is
     not there, it means they are off.
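Because the variable is checked during `import tensorflow`, it must be set before the import. A minimal sketch (here disabling the optimizations; use `"1"` to force-enable):

```python
import os

# Must be set before TensorFlow is imported; the flag is read at import time.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

import tensorflow as tf  # noqa: E402  (imported after setting the flag)
```

Setting the variable in the shell (`export TF_ENABLE_ONEDNN_OPTS=0`) before launching Python achieves the same effect; unsetting it restores the default behavior.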

Bug Fixes and Other Changes

*   `tf.data`:
 *   Fixed bug in `tf.data.experimental.parse_example_dataset` when `tf.io.RaggedFeatures` would specify `value_key` but no `partitions`. Before the fix, setting `value_key` but no `partitions` would result in the feature key being replaced by the value key, e.g. `{'value_key': <RaggedTensor>}` instead of `{'key': <RaggedTensor>}`. Now the correct feature key will be used. This aligns the behavior of `tf.data.experimental.parse_example_dataset` to match the behavior of `tf.io.parse_example`.
 *   Added a new field, `filter_parallelization`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, tf.data will run `Filter` transformation with multiple threads. Its default value is `False` if not specified.

*   `tf.keras`:
 *   Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are `ShardedVariable`s (used for training with `tf.distribute.experimental.ParameterServerStrategy`).

*   `tf.random`:
 * Added `tf.random.experimental.index_shuffle`, for shuffling a sequence without materializing the sequence in memory.

*   `tf.RaggedTensor`:
 *   Introduced `tf.experimental.RowPartition`, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
 *   Introduced `tf.experimental.DynamicRaggedShape`, which represents the shape of a RaggedTensor.

Security

*   Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216))
*   Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193))
*   Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192))
*   Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194))
*   Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191))
*   Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195))
*   Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197))
*   Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199))
*   Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198))
*   Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200))
*   Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196))
*   Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207))
*   Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205))
*   Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206))
*   Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201))
*   Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203))
*   Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208))
*   Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204))
*   Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202))
*   Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211))
*   Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212))
*   Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213))
*   Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209))
*   Fixes a heap buffer overflow due to incorrect hash function ([CVE-2022-29210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210))
*   Updates `curl` to `7.83.1` to handle [CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576), [CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774), [CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775), [CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776), [CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778), [CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779), [CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780), [CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781), [CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782) and [CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115)
*   Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1)

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09

2.8.2

Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.
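The bound described above can be sketched as a hypothetical `setup.py` fragment; the lower bound shown here is illustrative, only the `< 3.20` cap comes from this note:

```python
# Hypothetical setup.py excerpt pinning protobuf below 3.20, as described
# above. The lower bound is illustrative, not taken from the release note.
REQUIRED_PACKAGES = [
    "protobuf >= 3.9.2, < 3.20",
]

# setuptools.setup(..., install_requires=REQUIRED_PACKAGES) would apply it.
```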

2.8.1

This release introduces several vulnerability fixes:

*   Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216))
*   Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193))
*   Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192))
*   Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194))
*   Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191))
*   Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195))
*   Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197))
*   Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199))
*   Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198))
*   Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200))
*   Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196))
*   Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207))
*   Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205))
*   Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206))
*   Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201))
*   Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203))
*   Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208))
*   Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204))
*   Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202))
*   Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211))
*   Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212))
*   Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213))
*   Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209))
*   Fixes a heap buffer overflow due to incorrect hash function ([CVE-2022-29210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29210))
*   Updates `curl` to `7.83.1` to handle [CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576), [CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774), [CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775), [CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776), [CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778), [CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779), [CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780), [CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781), [CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782) and [CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115)
*   Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to [security issue](https://www.openwall.com/lists/oss-security/2022/03/28/1)

2.8.0

Major Features and Improvements

*   `tf.lite`:

 *   Added TFLite builtin op support for the following TF ops:
     *   `tf.raw_ops.Bucketize` op on CPU.
     *   `tf.where` op for data types
         `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
     *   `tf.random.normal` op for output data type `tf.float32` on CPU.
     *   `tf.random.uniform` op for output data type `tf.float32` on CPU.
     *   `tf.random.categorical` op for output data type `tf.int64` on CPU.

*   `tensorflow.experimental.tensorrt`:

 *   `conversion_params` is now deprecated inside `TrtGraphConverterV2` in
     favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`,
     `minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and
     `allow_build_at_runtime`.
 *   Added a new parameter called `save_gpu_specific_engines` to the
     `.save()` function inside `TrtGraphConverterV2`. When `False`, the
     `.save()` function won't save any TRT engines that have been built. When
     `True` (default), the original behavior is preserved.
 *   `TrtGraphConverterV2` provides a new API called `.summary()` which
     outputs a summary of the conversion performed by TF-TRT, listing each
     `TRTEngineOp` together with the shapes and dtypes of its inputs and
     outputs. A detailed version of the summary additionally prints all the
     TensorFlow ops included in each `TRTEngineOp`.

*   `tf.tpu.experimental.embedding`:

 *   `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional
     argument `output_shape` which can specify the shape of the output
     activation for the feature.
 *   `tf.tpu.experimental.embedding.TPUEmbedding` now has the same behavior
     as `tf.tpu.experimental.embedding.serving_embedding_lookup`, accepting
     dense and sparse tensors of arbitrary rank. For ragged tensors, the
     input tensor must still be rank 2, but the activations can now be rank
     2 or higher by specifying the output shape in the feature config or via
     the build method.

*   Add
 [`tf.config.experimental.enable_op_determinism`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism),
 which makes TensorFlow ops run deterministically at the cost of performance.
 Replaces the `TF_DETERMINISTIC_OPS` environment variable, which is now
 deprecated. The "Bug Fixes and Other Changes" section lists more
 determinism-related changes.
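A minimal opt-in sketch, assuming a TensorFlow 2.8+ installation is available (the API call replaces the deprecated environment variable):

```python
# Sketch: enabling deterministic ops, assuming TensorFlow >= 2.8 is installed.
import tensorflow as tf

tf.config.experimental.enable_op_determinism()  # replaces TF_DETERMINISTIC_OPS=1
tf.random.set_seed(42)  # determinism also requires seeding any RNG ops used
```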

*   (Since TF 2.7) Add
 [PluggableDevice](https://blog.tensorflow.org/2021/06/pluggabledevice-device-plugins-for-TensorFlow.html)
 support to
 [TensorFlow Profiler](https://github.com/tensorflow/community/blob/master/rfcs/20210513-pluggable-profiler-for-tensorflow.md).

Bug Fixes and Other Changes

*   `tf.data`:

 *   Fixed a bug where setting `options.deterministic = False` would only
     modify one transformation to run non-deterministically, leaving other
     transformations deterministic. The option now applies uniformly across
     all transformations.
 *   The `parallel_batch` optimization, which parallelizes copying of batch
     elements, is now enabled by default unless explicitly disabled.
 *   Added the ability for `TensorSliceDataset` to identify and handle inputs
     that are files. This enables creating hermetic SavedModels when using
     datasets created from files.

*   `tf.lite`:

 *   Adds GPU delegation support for serialization to the Java API. This
     speeds up initialization by up to 90% when OpenCL is available.
 *   Deprecated `Interpreter::SetNumThreads`, in favor of
     `InterpreterBuilder::SetNumThreads`.

*   `tf.keras`:

 *   Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to
     TF2 by enabling tracking of nested keras models created in TF1-style,
     when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables`
     decorator.
 *   Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing`
     layer which applies the hashing trick to the concatenation of crossed
     scalar inputs. This provides a stateless way to try adding feature
     crosses of integer or string data to a model.
 *   Removed `keras.layers.experimental.preprocessing.CategoryCrossing`.
     Users should migrate to the `HashedCrossing` layer or use
     `tf.sparse.cross`/`tf.ragged.cross` directly.
 *   Added additional `standardize` and `split` modes to `TextVectorization`:
     *   `standardize="lower"` will lowercase inputs.
     *   `standardize="string_punctuation"` will remove all punctuation.
     *   `split="character"` will split on every unicode character.
 *   Added an `output_mode` argument to the `Discretization` and `Hashing`
     layers with the same semantics as other preprocessing layers. All
     categorical preprocessing layers now support `output_mode`.
 *   All preprocessing layer output will follow the compute dtype of a
     `tf.keras.mixed_precision.Policy`, unless constructed with
     `output_mode="int"` in which case output will be `tf.int64`. The output
     type of any preprocessing layer can be controlled individually by
     passing a `dtype` argument to the layer.
 *   Adopted `tf.random.Generator` for Keras initializers and all RNG code.
 *   Added 3 new APIs to enable/disable/check the use of
     `tf.random.Generator` in the Keras backend, which will become the new
     backend for all RNG in Keras. We plan to switch the new code path on by
     default in TF 2.8; this behavior change is likely to cause some
     breakage on the user side (e.g., if a test checks against a golden
     number). These 3 APIs allow users to disable the new behavior and
     switch back to legacy behavior if they prefer. In the future (e.g., TF
     2.10), we expect to remove the legacy code path (stateful random ops)
     entirely, and these 3 APIs will be removed as well.
 *   `tf.keras.callbacks.experimental.BackupAndRestore` is now available as
     `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is
     deprecated and will be removed in a future release.
 *   `tf.keras.experimental.SidecarEvaluator` is now available as
     `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is
     deprecated and will be removed in a future release.
 *   Metrics update and collection logic in default `Model.train_step()` is
     now customizable via overriding `Model.compute_metrics()`.
 *   Losses computation logic in default `Model.train_step()` is now
     customizable via overriding `Model.compute_loss()`.
 *   `jit_compile` added to `Model.compile()` on an opt-in basis to compile
     the model's training step with [XLA](https://www.tensorflow.org/xla).
     Note that `jit_compile=True` may not necessarily work for all models.
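The hashing trick behind the `HashedCrossing` layer described above can be sketched in plain Python. This is an illustrative stand-in (using `zlib.crc32` as the hash), not the layer's actual implementation, which operates on tensors:

```python
import zlib

def hashed_cross(features, num_bins):
    """Hash the concatenation of crossed scalar inputs into num_bins buckets.

    Plain-Python stand-in for the stateless feature-cross hashing that
    the HashedCrossing preprocessing layer performs.
    """
    key = "_X_".join(str(f) for f in features).encode("utf-8")
    return zlib.crc32(key) % num_bins

# Crossing a string feature with an integer feature into 10 buckets:
bucket = hashed_cross(("US", 3), num_bins=10)
assert 0 <= bucket < 10
# Stateless: the same inputs always map to the same bucket.
assert bucket == hashed_cross(("US", 3), num_bins=10)
```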

*   Deterministic Op Functionality:

 *   Fix regression in deterministic selection of deterministic cuDNN
     convolution algorithms, a regression that was introduced in v2.5. Note
     that nondeterministic out-of-memory events while selecting algorithms
     could still lead to nondeterminism, although this is very unlikely. This
     additional, unlikely source will be eliminated in a later version.
 *   Add deterministic GPU implementations of:
     *   `tf.function(jit_compile=True)`'s that use `Scatter`.
     *   (since v2.7) Stateful ops used in `tf.data.Dataset`
     *   (since v2.7) `tf.convert_to_tensor` when fed with (sparse)
         `tf.IndexedSlices` (because it uses `tf.math.unsorted_segment_sum`)
     *   (since v2.7) `tf.gather` backprop (because `tf.convert_to_tensor`
         reduces `tf.gather`'s (sparse) `tf.IndexedSlices` gradients into its
         dense `params` input)
     *   (since v2.7) `tf.math.segment_mean`
     *   (since v2.7) `tf.math.segment_prod`
     *   (since v2.7) `tf.math.segment_sum`
     *   (since v2.7) `tf.math.unsorted_segment_mean`
     *   (since v2.7) `tf.math.unsorted_segment_prod`
     *   (since v2.7) `tf.math.unsorted_segment_sum`
     *   (since v2.7) `tf.math.unsorted_segment_sqrt_n`
     *   (since v2.7) `tf.nn.ctc_loss` (resolved, possibly in prior release,
         and confirmed with tests)
     *   (since v2.7) `tf.nn.sparse_softmax_cross_entropy_with_logits`
 *   (since v2.7) Run `tf.scatter_nd` and other related scatter functions,
     such as `tf.tensor_scatter_nd_update`, on CPU (with significant
     performance penalty).
 *   Add determinism-unimplemented exception-throwing to the following ops.
     When op-determinism is expected (i.e., after
     `tf.config.experimental.enable_op_determinism` has been called), an
     attempt to use the specified paths through the following ops on a GPU
     will throw `tf.errors.UnimplementedError` (with an informative
     message), unless otherwise specified below.
     *   `FakeQuantWithMinMaxVarsGradient` and
         `FakeQuantWithMinMaxVarsPerChannelGradient`
     *   (since v2.7) `tf.compat.v1.get_seed` if the global random seed has
         not yet been set (via `tf.random.set_seed`). Throws `RuntimeError`
         from Python or `InvalidArgument` from C++
     *   (since v2.7) `tf.compat.v1.nn.fused_batch_norm` backprop to `offset`
         when `is_training=False`
     *   (since v2.7) `tf.image.adjust_contrast` forward
     *   (since v2.7) `tf.image.resize` with `method=ResizeMethod.NEAREST`
         backprop
     *   (since v2.7) `tf.linalg.svd`
     *   (since v2.7) `tf.math.bincount`
     *   (since v2.7) `tf.nn.depthwise_conv2d` backprop to `filter` when not
         using cuDNN convolution
     *   (since v2.7) `tf.nn.dilation2d` gradient
     *   (since v2.7) `tf.nn.max_pool_with_argmax` gradient
     *   (since v2.7) `tf.raw_ops.DebugNumericSummary` and
         `tf.raw_ops.DebugNumericSummaryV2`
     *   (since v2.7) `tf.timestamp`. Throws `FailedPrecondition`
     *   (since v2.7) `tf.Variable.scatter_add` (and other scatter methods,
         both on ref and resource variables)
     *   (since v2.7) The random-number-generating ops in the `tf.random`
         module when the global random seed has not yet been set (via
         `tf.random.set_seed`). Throws `RuntimeError` from Python or
         `InvalidArgument` from C++

*   TensorFlow-oneDNN no longer supports
 [explicit use of oneDNN blocked tensor format](https://github.com/tensorflow/tensorflow/pull/53288),
 e.g., setting the environment variable `TF_ENABLE_MKL_NATIVE_FORMAT` will
 not have any effect.

*   TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2)
 for both GPUs and CPUs.

*   Due to security issues (see the section below), all boosted trees code has
 been deprecated. Users should switch to
 [TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests).
 TF's boosted trees code will be removed before the branch cut for TF 2.9 and
 will no longer be present from that release onward.

Security

*   Fixes a floating point division by 0 when executing convolution operators
 ([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725))
*   Fixes a heap OOB read in shape inference for `ReverseSequence`
 ([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728))
*   Fixes a heap OOB access in `Dequantize`
 ([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726))
*   Fixes an integer overflow in shape inference for `Dequantize`
 ([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727))
*   Fixes a heap OOB access in `FractionalAvgPoolGrad`
 ([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730))
*   Fixes an overflow and divide by zero in `UnravelIndex`
 ([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729))
*   Fixes a type confusion in shape inference for `ConcatV2`
 ([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731))
*   Fixes an OOM in `ThreadPoolHandle`
 ([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732))
*   Fixes an OOM due to integer overflow in `StringNGrams`
 ([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733))
*   Fixes more issues caused by incomplete validation in boosted trees code
 ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
*   Fixes integer overflows in most sparse component-wise ops
 ([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567))
*   Fixes an integer overflow in `AddManySparseToTensorsMap`
 ([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568))
*   Fixes a number of `CHECK`-failures in `MapStage`
 ([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734))
*   Fixes a division by zero in `FractionalMaxPool`
 ([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735))
*   Fixes a number of `CHECK`-fails when building invalid/overflowing tensor
 shapes
 ([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569))
*   Fixes an undefined behavior in `SparseTensorSliceDataset`
 ([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736))
*   Fixes an assertion failure based denial of service via faulty bin count
 operations
 ([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737))
*   Fixes a reference binding to null pointer in `QuantizedMaxPool`
 ([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739))
*   Fixes an integer overflow leading to crash in `SparseCountSparseOutput`
 ([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738))
*   Fixes a heap overflow in `SparseCountSparseOutput`
 ([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740))
*   Fixes an FPE in `BiasAndClamp` in TFLite
 ([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557))
*   Fixes an FPE in depthwise convolutions in TFLite
 ([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741))
*   Fixes an integer overflow in TFLite array creation
 ([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558))
*   Fixes an integer overflow in TFLite
 ([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559))
*   Fixes a dangerous OOB write in TFLite
 ([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561))
*   Fixes a vulnerability leading to read and write outside of bounds in TFLite
 ([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560))
*   Fixes a set of vulnerabilities caused by using insecure temporary files
 ([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563))
*   Fixes an integer overflow in Range resulting in undefined behavior and OOM
 ([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562))
*   Fixes a vulnerability where missing validation causes `tf.sparse.split` to
 crash when `axis` is a tuple
 ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
*   Fixes a `CHECK`-fail when decoding resource handles from proto
 ([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564))
*   Fixes a `CHECK`-fail with repeated `AttrDef`
 ([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565))
*   Fixes a heap OOB write in Grappler
 ([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566))
*   Fixes a `CHECK`-fail when decoding invalid tensors from proto
 ([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571))
*   Fixes a null-dereference when specializing tensor type
 ([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
*   Fixes a crash when type cannot be specialized
 ([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
*   Fixes a heap OOB read/write in `SpecializeType`
 ([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
*   Fixes an uninitialized variable access in `AssignOp`
 ([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
 ([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize`
 ([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
*   Fixes a null dereference in `GetInitOp`
 ([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
*   Fixes a memory leak when a graph node is invalid
 ([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
*   Fixes an abort caused by allocating a vector that is too large
 ([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
*   Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape`
 ([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
*   Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity`
 ([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
*   Fixes multiple `CHECK`-failures in `TensorByteSize`
 ([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
*   Fixes multiple `CHECK`-failures in binary ops due to type confusion
 ([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
*   Fixes a use after free in `DecodePng` kernel
 ([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
*   Fixes a memory leak in decoding PNG images
 ([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
*   Fixes multiple `CHECK`-fails in `function.cc`
 ([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
*   Fixes multiple `CHECK`-fails due to attempting to build a reference tensor
 ([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
*   Fixes an integer overflow in Grappler cost estimation of crop and resize
 operation
 ([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
*   Fixes a null pointer dereference in Grappler's `IsConstant`
 ([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
*   Fixes a `CHECK` failure in constant folding
 ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes a stack overflow due to self-recursive function in `GraphDef`
 ([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
*   Fixes a heap OOB access in `RunForwardTypeInference`
 ([CVE-2022-23592](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23592))
*   Fixes a crash due to erroneous `StatusOr`
 ([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590))
*   Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR)
 ([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594))
*   Fixes a segfault in `simplifyBroadcast` (MLIR)
 ([CVE-2022-23593](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23593))
*   Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA)
 ([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
*   Updates `icu` to `69.1` to handle
 [CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel
Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate,
dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai,
Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225,
Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De
Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek,
jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi
Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo,
Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark
Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael
Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh
Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh
Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin
Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan
Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae,
Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer,
tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan
Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei
Sun, Yong Tang, Yuduo Wu

2.7.3

Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See https://github.com/tensorflow/tensorflow/issues/53234, https://github.com/protocolbuffers/protobuf/issues/9954 and https://github.com/tensorflow/tensorflow/issues/56077.

2.7.2

This release introduces several vulnerability fixes:

*   Fixes a code injection in `saved_model_cli` ([CVE-2022-29216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29216))
*   Fixes a missing validation which causes `TensorSummaryV2` to crash ([CVE-2022-29193](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29193))
*   Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` ([CVE-2022-29192](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29192))
*   Fixes a missing validation which causes denial of service via `DeleteSessionTensor` ([CVE-2022-29194](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29194))
*   Fixes a missing validation which causes denial of service via `GetSessionTensor` ([CVE-2022-29191](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29191))
*   Fixes a missing validation which causes denial of service via `StagePeek` ([CVE-2022-29195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29195))
*   Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` ([CVE-2022-29197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29197))
*   Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` ([CVE-2022-29199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29199))
*   Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` ([CVE-2022-29198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29198))
*   Fixes a missing validation which causes denial of service via `LSTMBlockCell` ([CVE-2022-29200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29200))
*   Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29196))
*   Fixes a `CHECK` failure in depthwise ops via overflows ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles ([CVE-2022-29207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29207))
*   Fixes a segfault due to missing support for quantized types ([CVE-2022-29205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29205))
*   Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` ([CVE-2022-29206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29206))
*   Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` ([CVE-2022-29201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29201))
*   Fixes an integer overflow in `SpaceToBatchND` ([CVE-2022-29203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29203))
*   Fixes a segfault and OOB write due to incomplete validation in `EditDistance` ([CVE-2022-29208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29208))
*   Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` ([CVE-2022-29204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29204))
*   Fixes a denial of service in `tf.ragged.constant` due to lack of validation ([CVE-2022-29202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29202))
*   Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values ([CVE-2022-29211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29211))
*   Fixes a core dump when loading TFLite models with quantization ([CVE-2022-29212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29212))
*   Fixes crashes stemming from incomplete validation in signal ops ([CVE-2022-29213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29213))
*   Fixes a type confusion leading to `CHECK`-failure based denial of service ([CVE-2022-29209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-29209))
*   Updates `curl` to `7.83.1` to handle [CVE-2022-22576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-22576), [CVE-2022-27774](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27774), [CVE-2022-27775](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27775), [CVE-2022-27776](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27776), [CVE-2022-27778](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27778), [CVE-2022-27779](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27779), [CVE-2022-27780](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27780), [CVE-2022-27781](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27781), [CVE-2022-27782](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-27782) and [CVE-2022-30115](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-30115)

@pyup-bot
Contributor Author

Closing this in favor of #283

@pyup-bot pyup-bot closed this Nov 16, 2022
@geblanco geblanco deleted the pyup-update-tensorflow-2.5.0-to-2.10.0 branch November 16, 2022 20:30