
Update tensorflow to 2.8.1 #228

Closed

pyup-bot wants to merge 1 commit into `master` from `pyup-update-tensorflow-2.5.0-to-2.8.1`


Conversation

@pyup-bot
Contributor

This PR updates tensorflow from 2.5.0 to 2.8.1.

Changelog

2.8.0

Major Features and Improvements

*   `tf.lite`:

 *   Added TFLite builtin op support for the following TF ops:
     *   `tf.raw_ops.Bucketize` op on CPU.
     *   `tf.where` op for data types
         `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
     *   `tf.random.normal` op for output data type `tf.float32` on CPU.
     *   `tf.random.uniform` op for output data type `tf.float32` on CPU.
     *   `tf.random.categorical` op for output data type `tf.int64` on CPU.

*   `tensorflow.experimental.tensorrt`:

 *   `conversion_params` is now deprecated inside `TrtGraphConverterV2` in
     favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`,
     `minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and
     `allow_build_at_runtime`.
 *   Added a new parameter called `save_gpu_specific_engines` to the
     `.save()` function inside `TrtGraphConverterV2`. When `False`, the
     `.save()` function won't save any TRT engines that have been built. When
     `True` (default), the original behavior is preserved.
 *   `TrtGraphConverterV2` provides a new API called `.summary()` which
     outputs a summary of the conversion performed by TF-TRT, showing each
     `TRTEngineOp` with the shape and dtype of its input(s) and output(s). A
     detailed version of the summary is available which additionally prints
     all the TensorFlow ops included in each `TRTEngineOp`.

*   `tf.tpu.experimental.embedding`:

 *   `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional
     argument `output_shape` which can specify the shape of the output
     activation for the feature.
 *   `tf.tpu.experimental.embedding.TPUEmbedding` now has the same behavior
     as `tf.tpu.experimental.embedding.serving_embedding_lookup`, accepting
     dense and sparse tensors of arbitrary rank. For ragged tensors, the
     input tensor must still be rank 2, but the activations can now be rank 2
     or higher by specifying the output shape in the feature config or via
     the build method.

*   Add
 [`tf.config.experimental.enable_op_determinism`](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_op_determinism),
 which makes TensorFlow ops run deterministically at the cost of performance.
 Replaces the `TF_DETERMINISTIC_OPS` environment variable, which is now
 deprecated. The "Bug Fixes and Other Changes" section lists more
 determinism-related changes.
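
A minimal sketch of the new API (assuming a TF 2.8+ install): determinism should be enabled at the start of the program, and random ops still need a global seed to be reproducible.

```python
import tensorflow as tf

# Make TF ops run deterministically, at some performance cost. This replaces
# the deprecated TF_DETERMINISTIC_OPS environment variable and should be
# called before any ops execute.
tf.config.experimental.enable_op_determinism()

# With determinism enabled, random ops need a global seed.
tf.random.set_seed(42)
a = tf.random.normal([3])

# Re-seeding replays the same deterministic sequence.
tf.random.set_seed(42)
b = tf.random.normal([3])
```

Run twice with the same seed, `a` and `b` hold identical values.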

*   (Since TF 2.7) Add
 [PluggableDevice](https://blog.tensorflow.org/2021/06/pluggabledevice-device-plugins-for-TensorFlow.html)
 support to
 [TensorFlow Profiler](https://github.com/tensorflow/community/blob/master/rfcs/20210513-pluggable-profiler-for-tensorflow.md).

Bug Fixes and Other Changes

*   `tf.data`:

 *   Fixed a bug where setting `options.deterministic = False` would only
     modify one transformation to run non-deterministically, leaving other
     transformations deterministic. The option will now apply the same across
     all transformations.
 *   The optimization `parallel_batch` is now enabled by default unless
     disabled by users; it parallelizes the copying of batch elements.
 *   Added the ability for `TensorSliceDataset` to identify and handle inputs
     that are files. This enables creating hermetic SavedModels when using
     datasets created from files.
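
For illustration (a sketch assuming TF 2.8), the `deterministic` option now applies uniformly to every transformation in the pipeline rather than just one:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(8)

opts = tf.data.Options()
# Allow non-deterministic element order for performance. As of 2.8 this
# setting applies to ALL transformations in the pipeline, not just one.
opts.deterministic = False

ds = (ds.with_options(opts)
        .map(lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE))

# The same elements are produced, possibly in a different order.
elements = sorted(int(x) for x in ds)
```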

*   `tf.lite`:

 *   Adds GPU delegate serialization support to the Java API. This speeds up
     initialization by up to 90% when OpenCL is available.
 *   Deprecated `Interpreter::SetNumThreads`, in favor of
     `InterpreterBuilder::SetNumThreads`.

*   `tf.keras`:

 *   Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to
     TF2 by enabling tracking of nested keras models created in TF1-style,
     when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables`
     decorator.
 *   Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing`
     layer which applies the hashing trick to the concatenation of crossed
     scalar inputs. This provides a stateless way to try adding feature
     crosses of integer or string data to a model.
 *   Removed `keras.layers.experimental.preprocessing.CategoryCrossing`.
     Users should migrate to the `HashedCrossing` layer or use
     `tf.sparse.cross`/`tf.ragged.cross` directly.
 *   Added additional `standardize` and `split` modes to `TextVectorization`:
     *   `standardize="lower"` will lowercase inputs.
     *   `standardize="string_punctuation"` will remove all punctuation.
     *   `split="character"` will split on every unicode character.
 *   Added an `output_mode` argument to the `Discretization` and `Hashing`
     layers with the same semantics as other preprocessing layers. All
     categorical preprocessing layers now support `output_mode`.
 *   All preprocessing layer output will follow the compute dtype of a
     `tf.keras.mixed_precision.Policy`, unless constructed with
     `output_mode="int"` in which case output will be `tf.int64`. The output
     type of any preprocessing layer can be controlled individually by
     passing a `dtype` argument to the layer.
 *   Added `tf.random.Generator` support for keras initializers and all RNG
     code.
 *   Added 3 new APIs for enabling/disabling/checking the usage of
     `tf.random.Generator` in the Keras backend, which will be the new
     backend for all RNG in Keras. We plan to switch on the new code path by
     default in TF 2.8; this behavior change will likely cause some breakage
     on the user side (e.g., if a test checks against a golden number). These
     3 APIs allow users to disable the new behavior and switch back to the
     legacy behavior if they prefer. In a future release (e.g., TF 2.10), we
     expect to remove the legacy code path (stateful random ops) entirely,
     and these 3 APIs will be removed as well.
 *   `tf.keras.callbacks.experimental.BackupAndRestore` is now available as
     `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is
     deprecated and will be removed in a future release.
 *   `tf.keras.experimental.SidecarEvaluator` is now available as
     `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is
     deprecated and will be removed in a future release.
 *   Metrics update and collection logic in default `Model.train_step()` is
     now customizable via overriding `Model.compute_metrics()`.
 *   Losses computation logic in default `Model.train_step()` is now
     customizable via overriding `Model.compute_loss()`.
 *   `jit_compile` added to `Model.compile()` on an opt-in basis to compile
     the model's training step with [XLA](https://www.tensorflow.org/xla).
     Note that `jit_compile=True` may not necessarily work for all models.
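
To illustrate the idea behind the new `HashedCrossing` layer, here is a plain-Python sketch of the hashing trick it applies (illustrative only; the layer's actual hash function and binning differ):

```python
import hashlib

def hashed_crossing(features, num_bins=10):
    """Toy version of the hashing trick: concatenate the crossed scalar
    inputs, hash the result, and bucket it into a fixed number of bins.
    This is stateless, so no vocabulary needs to be learned."""
    key = "_X_".join(str(f) for f in features)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_bins

# The same feature cross always lands in the same bin.
bin_a = hashed_crossing(("US", 34))
bin_b = hashed_crossing(("US", 34))
```

Because the mapping is stateless, such a cross can be added to a model without an `adapt()` step.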

*   Deterministic Op Functionality:

 *   Fixed a regression, introduced in v2.5, in the deterministic selection
     of deterministic cuDNN convolution algorithms. Note that
     nondeterministic out-of-memory events while selecting algorithms could
     still lead to nondeterminism, although this is very unlikely. This
     additional, unlikely source will be eliminated in a later version.
 *   Add deterministic GPU implementations of:
     *   `tf.function(jit_compile=True)`'s that use `Scatter`.
     *   (since v2.7) Stateful ops used in `tf.data.Dataset`
     *   (since v2.7) `tf.convert_to_tensor` when fed with (sparse)
         `tf.IndexedSlices` (because it uses `tf.math.unsorted_segment_sum`)
     *   (since v2.7) `tf.gather` backprop (because `tf.convert_to_tensor`
         reduces `tf.gather`'s (sparse) `tf.IndexedSlices` gradients into its
         dense `params` input)
     *   (since v2.7) `tf.math.segment_mean`
     *   (since v2.7) `tf.math.segment_prod`
     *   (since v2.7) `tf.math.segment_sum`
     *   (since v2.7) `tf.math.unsorted_segment_mean`
     *   (since v2.7) `tf.math.unsorted_segment_prod`
     *   (since v2.7) `tf.math.unsorted_segment_sum`
     *   (since v2.7) `tf.math.unsorted_segment_sqrt`
     *   (since v2.7) `tf.nn.ctc_loss` (resolved, possibly in prior release,
         and confirmed with tests)
     *   (since v2.7) `tf.nn.sparse_softmax_crossentropy_with_logits`
 *   (since v2.7) Run `tf.scatter_nd` and other related scatter functions,
     such as `tf.tensor_scatter_nd_update`, on CPU (with significant
     performance penalty).
 *   Add determinism-unimplemented exception-throwing to the following ops.
     When op-determinism is expected (i.e. after
     `tf.config.experimental.enable_op_determinism` has been called), an
     attempt to use the specified paths through the following ops on a GPU
     will cause a `tf.errors.UnimplementedError` (with an understandable
     message) to be thrown, unless otherwise specified.
     *   `FakeQuantWithMinMaxVarsGradient` and
         `FakeQuantWithMinMaxVarsPerChannelGradient`
     *   (since v2.7) `tf.compat.v1.get_seed` if the global random seed has
         not yet been set (via `tf.random.set_seed`). Throws `RuntimeError`
         from Python or `InvalidArgument` from C++
     *   (since v2.7) `tf.compat.v1.nn.fused_batch_norm` backprop to `offset`
         when `is_training=False`
     *   (since v2.7) `tf.image.adjust_contrast` forward
     *   (since v2.7) `tf.image.resize` with `method=ResizeMethod.NEAREST`
         backprop
     *   (since v2.7) `tf.linalg.svd`
     *   (since v2.7) `tf.math.bincount`
     *   (since v2.7) `tf.nn.depthwise_conv2d` backprop to `filter` when not
         using cuDNN convolution
     *   (since v2.7) `tf.nn.dilation2d` gradient
     *   (since v2.7) `tf.nn.max_pool_with_argmax` gradient
     *   (since v2.7) `tf.raw_ops.DebugNumericSummary` and
         `tf.raw_ops.DebugNumericSummaryV2`
     *   (since v2.7) `tf.timestamp`. Throws `FailedPrecondition`
     *   (since v2.7) `tf.Variable.scatter_add` (and other scatter methods,
         both on ref and resource variables)
     *   (since v2.7) The random-number-generating ops in the `tf.random`
         module when the global random seed has not yet been set (via
         `tf.random.set_seed`). Throws `RuntimeError` from Python or
         `InvalidArgument` from C++
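
For example (a sketch assuming TF 2.8 on any device), calling a `tf.random` op before setting the global seed now fails once op determinism is enabled, as the notes above describe:

```python
import tensorflow as tf

tf.config.experimental.enable_op_determinism()

# With determinism enabled but no global seed set, random ops refuse to
# run, since their output would not be reproducible.
try:
    tf.random.normal([2])
    raised = False
except RuntimeError:
    raised = True

# After setting the global seed, the same op succeeds.
tf.random.set_seed(0)
x = tf.random.normal([2])
```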

*   TensorFlow-oneDNN no longer supports
 [explicit use of oneDNN blocked tensor format](https://github.com/tensorflow/tensorflow/pull/53288),
 e.g., setting the environment variable `TF_ENABLE_MKL_NATIVE_FORMAT` will
 not have any effect.

*   TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2)
 for both GPUs and CPUs.

*   Due to security issues (see section below), all boosted trees code has been
 deprecated. Users should switch to
 [TensorFlow Decision Forests](https://github.com/tensorflow/decision-forests).
 TF's boosted trees code will be eliminated before the branch cut for TF 2.9
 and will no longer be present from that release onward.

Security

*   Fixes a floating point division by 0 when executing convolution operators
 ([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725))
*   Fixes a heap OOB read in shape inference for `ReverseSequence`
 ([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728))
*   Fixes a heap OOB access in `Dequantize`
 ([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726))
*   Fixes an integer overflow in shape inference for `Dequantize`
 ([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727))
*   Fixes a heap OOB access in `FractionalAvgPoolGrad`
 ([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730))
*   Fixes an overflow and divide by zero in `UnravelIndex`
 ([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729))
*   Fixes a type confusion in shape inference for `ConcatV2`
 ([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731))
*   Fixes an OOM in `ThreadPoolHandle`
 ([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732))
*   Fixes an OOM due to integer overflow in `StringNGrams`
 ([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733))
*   Fixes more issues caused by incomplete validation in boosted trees code
 ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
*   Fixes integer overflows in most sparse component-wise ops
 ([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567))
*   Fixes integer overflows in `AddManySparseToTensorsMap`
 ([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568))
*   Fixes a number of `CHECK`-failures in `MapStage`
 ([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734))
*   Fixes a division by zero in `FractionalMaxPool`
 ([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735))
*   Fixes a number of `CHECK`-fails when building invalid/overflowing tensor
 shapes
 ([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569))
*   Fixes an undefined behavior in `SparseTensorSliceDataset`
 ([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736))
*   Fixes an assertion failure based denial of service via faulty bin count
 operations
 ([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737))
*   Fixes a reference binding to null pointer in `QuantizedMaxPool`
 ([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739))
*   Fixes an integer overflow leading to crash in `SparseCountSparseOutput`
 ([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738))
*   Fixes a heap overflow in `SparseCountSparseOutput`
 ([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740))
*   Fixes an FPE in `BiasAndClamp` in TFLite
 ([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557))
*   Fixes an FPE in depthwise convolutions in TFLite
 ([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741))
*   Fixes an integer overflow in TFLite array creation
 ([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558))
*   Fixes an integer overflow in TFLite
 ([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559))
*   Fixes a dangerous OOB write in TFLite
 ([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561))
*   Fixes a vulnerability leading to read and write outside of bounds in TFLite
 ([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560))
*   Fixes a set of vulnerabilities caused by using insecure temporary files
 ([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563))
*   Fixes an integer overflow in Range resulting in undefined behavior and OOM
 ([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562))
*   Fixes a vulnerability where missing validation causes `tf.sparse.split` to
 crash when `axis` is a tuple
 ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
*   Fixes a `CHECK`-fail when decoding resource handles from proto
 ([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564))
*   Fixes a `CHECK`-fail with repeated `AttrDef`
 ([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565))
*   Fixes a heap OOB write in Grappler
 ([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566))
*   Fixes a `CHECK`-fail when decoding invalid tensors from proto
 ([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571))
*   Fixes a null-dereference when specializing tensor type
 ([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
*   Fixes a crash when type cannot be specialized
 ([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
*   Fixes a heap OOB read/write in `SpecializeType`
 ([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
*   Fixes an uninitialized variable access in `AssignOp`
 ([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
 ([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize`
 ([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
*   Fixes a null dereference in `GetInitOp`
 ([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
*   Fixes a memory leak when a graph node is invalid
 ([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
*   Fixes an abort caused by allocating a vector that is too large
 ([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
*   Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape`
 ([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
*   Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity`
 ([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
*   Fixes multiple `CHECK`-failures in `TensorByteSize`
 ([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
*   Fixes multiple `CHECK`-failures in binary ops due to type confusion
 ([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
*   Fixes a use after free in `DecodePng` kernel
 ([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
*   Fixes a memory leak in decoding PNG images
 ([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
*   Fixes multiple `CHECK`-fails in `function.cc`
 ([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
*   Fixes multiple `CHECK`-fails due to attempting to build a reference tensor
 ([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
*   Fixes an integer overflow in Grappler cost estimation of crop and resize
 operation
 ([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
*   Fixes a null pointer dereference in Grappler's `IsConstant`
 ([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
*   Fixes a `CHECK` failure in constant folding
 ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes a stack overflow due to self-recursive function in `GraphDef`
 ([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
*   Fixes a heap OOB access in `RunForwardTypeInference`
 ([CVE-2022-23592](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23592))
*   Fixes a crash due to erroneous `StatusOr`
 ([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590))
*   Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR)
 ([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594))
*   Fixes a segfault in `simplifyBroadcast` (MLIR)
 ([CVE-2022-23593](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23593))
*   Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA)
 ([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
*   Updates `icu` to `69.1` to handle
 [CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel
Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate,
dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai,
Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225,
Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De
Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek,
jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi
Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo,
Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark
Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael
Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh
Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh
Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin
Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan
Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae,
Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer,
tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan
Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei
Sun, Yong Tang, Yuduo Wu

2.7.1

This release introduces several vulnerability fixes:

*   Fixes a floating point division by 0 when executing convolution operators
 ([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725))
*   Fixes a heap OOB read in shape inference for `ReverseSequence`
 ([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728))
*   Fixes a heap OOB access in `Dequantize`
 ([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726))
*   Fixes an integer overflow in shape inference for `Dequantize`
 ([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727))
*   Fixes a heap OOB access in `FractionalAvgPoolGrad`
 ([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730))
*   Fixes an overflow and divide by zero in `UnravelIndex`
 ([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729))
*   Fixes a type confusion in shape inference for `ConcatV2`
 ([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731))
*   Fixes an OOM in `ThreadPoolHandle`
 ([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732))
*   Fixes an OOM due to integer overflow in `StringNGrams`
 ([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733))
*   Fixes more issues caused by incomplete validation in boosted trees code
 ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
*   Fixes integer overflows in most sparse component-wise ops
 ([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567))
*   Fixes integer overflows in `AddManySparseToTensorsMap`
 ([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568))
*   Fixes a number of `CHECK`-failures in `MapStage`
 ([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734))
*   Fixes a division by zero in `FractionalMaxPool`
 ([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735))
*   Fixes a number of `CHECK`-fails when building invalid/overflowing tensor
 shapes
 ([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569))
*   Fixes an undefined behavior in `SparseTensorSliceDataset`
 ([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736))
*   Fixes an assertion failure based denial of service via faulty bin count
 operations
 ([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737))
*   Fixes a reference binding to null pointer in `QuantizedMaxPool`
 ([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739))
*   Fixes an integer overflow leading to crash in `SparseCountSparseOutput`
 ([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738))
*   Fixes a heap overflow in `SparseCountSparseOutput`
 ([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740))
*   Fixes an FPE in `BiasAndClamp` in TFLite
 ([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557))
*   Fixes an FPE in depthwise convolutions in TFLite
 ([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741))
*   Fixes an integer overflow in TFLite array creation
 ([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558))
*   Fixes an integer overflow in TFLite
 ([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559))
*   Fixes a dangerous OOB write in TFLite
 ([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561))
*   Fixes a vulnerability leading to read and write outside of bounds in TFLite
 ([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560))
*   Fixes a set of vulnerabilities caused by using insecure temporary files
 ([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563))
*   Fixes an integer overflow in Range resulting in undefined behavior and OOM
 ([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562))
*   Fixes a vulnerability where missing validation causes `tf.sparse.split` to
 crash when `axis` is a tuple
 ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
*   Fixes a `CHECK`-fail when decoding resource handles from proto
 ([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564))
*   Fixes a `CHECK`-fail with repeated `AttrDef`
 ([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565))
*   Fixes a heap OOB write in Grappler
 ([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566))
*   Fixes a `CHECK`-fail when decoding invalid tensors from proto
 ([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571))
*   Fixes a null-dereference when specializing tensor type
 ([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
*   Fixes a crash when type cannot be specialized
 ([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
*   Fixes a heap OOB read/write in `SpecializeType`
 ([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
*   Fixes an uninitialized variable access in `AssignOp`
 ([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
 ([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize`
 ([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
*   Fixes a null dereference in `GetInitOp`
 ([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
*   Fixes a memory leak when a graph node is invalid
 ([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
*   Fixes an abort caused by allocating a vector that is too large
 ([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
*   Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape`
 ([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
*   Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity`
 ([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
*   Fixes multiple `CHECK`-failures in `TensorByteSize`
 ([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
*   Fixes multiple `CHECK`-failures in binary ops due to type confusion
 ([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
*   Fixes a use after free in `DecodePng` kernel
 ([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
*   Fixes a memory leak in decoding PNG images
 ([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
*   Fixes multiple `CHECK`-fails in `function.cc`
 ([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
*   Fixes multiple `CHECK`-fails due to attempting to build a reference tensor
 ([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
*   Fixes an integer overflow in Grappler cost estimation of crop and resize
 operation
 ([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
*   Fixes a null pointer dereference in Grappler's `IsConstant`
 ([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
*   Fixes a `CHECK` failure in constant folding
 ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes a stack overflow due to self-recursive function in `GraphDef`
 ([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
*   Fixes a crash due to erroneous `StatusOr`
 ([CVE-2022-23590](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23590))
*   Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR)
 ([CVE-2022-23594](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23594))
*   Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA)
 ([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
*   Updates `icu` to `69.1` to handle
 [CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)

2.7.0

Breaking Changes

*   `tf.keras`:

 *   The methods `Model.fit()`, `Model.predict()`, and `Model.evaluate()`
     will no longer uprank input data of shape `(batch_size,)` to become
     `(batch_size, 1)`. This enables `Model` subclasses to process scalar
     data in their `train_step()`/`test_step()`/`predict_step()` methods. \
     Note that this change may break certain subclassed models. You can
     revert to the previous behavior by adding the upranking yourself in the
     `train_step()`/`test_step()`/`predict_step()` methods, e.g. `if
     x.shape.rank == 1: x = tf.expand_dims(x, axis=-1)`. Functional models as
     well as Sequential models built with an explicit input shape are not
     affected.
 *   The methods `Model.to_yaml()` and `keras.models.model_from_yaml` have
     been replaced to raise a `RuntimeError`, as they could be abused to
     cause arbitrary code execution. It is recommended to use JSON
     serialization instead of YAML or, as a better alternative, to serialize
     to H5.
 *   `LinearModel` and `WideDeepModel` are moved to the
     `tf.compat.v1.keras.models.` namespace
     (`tf.compat.v1.keras.models.LinearModel` and
     `tf.compat.v1.keras.models.WideDeepModel`), and their `experimental`
     endpoints (`tf.keras.experimental.models.LinearModel` and
     `tf.keras.experimental.models.WideDeepModel`) are being deprecated.
 *   RNG behavior change for all `tf.keras.initializers` classes. Any class
     constructed with a fixed seed will no longer generate the same value
     when invoked multiple times. Instead, it will return different values
     in a deterministic sequence. This change aligns the initializer
     behavior between v1 and v2.
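
The upranking workaround suggested above can be sketched as follows (a hypothetical subclassed model, assuming TF 2.7+; the model and its layers are illustrative):

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.dense(x)

    def train_step(self, data):
        x, y = data
        # Restore the pre-2.7 behavior: uprank (batch_size,) inputs to
        # (batch_size, 1) ourselves, since fit() no longer does it.
        if x.shape.rank == 1:
            x = tf.expand_dims(x, axis=-1)
        if y.shape.rank == 1:
            y = tf.expand_dims(y, axis=-1)
        return super().train_step((x, y))

model = MyModel()
model.compile(optimizer="sgd", loss="mse")
# Scalar (rank-1) training data now reaches train_step un-upranked.
history = model.fit(tf.constant([1., 2., 3.]),
                    tf.constant([2., 4., 6.]),
                    epochs=1, verbose=0)
```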

*   `tf.lite`:

 *   Renamed fields in the `SignatureDef` table in the schema to maximize
     parity with TF SavedModel's Signature concept.
 *   Deprecate Makefile builds. Makefile users need to migrate their builds
     to CMake or Bazel. Please refer to the
     [Build TensorFlow Lite with CMake](https://www.tensorflow.org/lite/guide/build_cmake)
     and
     [Build TensorFlow Lite for ARM boards](https://www.tensorflow.org/lite/guide/build_arm)
     guides for the migration.
 *   Deprecate `tflite::OpResolver::GetDelegates`. The list returned by
     TfLite's `BuiltinOpResolver::GetDelegates` is now always empty.
     Instead, use the new method `tflite::OpResolver::GetDelegateCreators`
     to achieve lazy initialization of TfLite delegate instances.

*   TF Core:

 *   `tf.Graph.get_name_scope()` now always returns a string, as documented.
     Previously, when called within `name_scope("")` or `name_scope(None)`
     contexts, it returned `None`; now it returns the empty string.
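A quick illustration of the new contract (a sketch; assumes graph-mode scoping via `as_default()`):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    top_level = g.get_name_scope()   # "" (a string), no longer None
    with tf.name_scope("outer"):
        inner = g.get_name_scope()   # "outer"
```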
 *   `tensorflow/core/ir/` contains a new MLIR-based Graph dialect that is
     isomorphic to GraphDef and will be used to replace GraphDef-based (e.g.,
     Grappler) optimizations.
 *   Deprecated and removed the `attrs()` function in shape inference. All
     attributes should now be queried by name (rather than returned as a
     range) to allow changing the underlying storage.
 *   The following Python symbols were accidentally added in earlier versions
     of TensorFlow and now are removed. Each symbol has a replacement that
     should be used instead, but note the replacement's argument names are
     different.
     *   `tf.quantize_and_dequantize_v4` (accidentally introduced in
         TensorFlow 2.4): Use `tf.quantization.quantize_and_dequantize_v2`
         instead.
     *   `tf.batch_mat_mul_v3` (accidentally introduced in TensorFlow 2.6):
         Use `tf.linalg.matmul` instead.
     *   `tf.sparse_segment_sum_grad` (accidentally introduced in TensorFlow
         2.6): Use `tf.raw_ops.SparseSegmentSumGrad` instead. Directly
         calling this op is typically not necessary, as it is automatically
         used when computing the gradient of `tf.sparse.segment_sum`.
 *   Renamed `tensorflow::int64` to `int64_t` (the former is an alias for
     the latter) in numerous places. This may require regenerating
     selective op registration headers; otherwise execution fails with an
     unregistered-kernels error.

*   Modular File System Migration:

 *   Support for S3 and HDFS file systems has been migrated to a modular
     file-system approach and is now available in
     https://github.com/tensorflow/io. Install the `tensorflow-io` Python
     package for S3 and HDFS support with TensorFlow.

Major Features and Improvements

*   Improvements to the TensorFlow debugging experience:

 *   Previously, TensorFlow error stack traces involved many internal frames,
     which could be challenging to read through, while not being actionable
     for end users. As of TF 2.7, TensorFlow filters internal frames in most
     errors that it raises, to keep stack traces short, readable, and focused
     on what's actionable for end users (their own code).

 This behavior can be disabled by calling
 `tf.debugging.disable_traceback_filtering()`, and can be re-enabled via
 `tf.debugging.enable_traceback_filtering()`. If you are debugging a
 TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to
 disable traceback filtering. You can check whether this feature is currently
 enabled by calling `tf.debugging.is_traceback_filtering_enabled()`.

 Note that this feature is only available with Python 3.7 or higher.
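A minimal sketch of toggling the filter:

```python
import tensorflow as tf

tf.debugging.disable_traceback_filtering()
filtering_off = tf.debugging.is_traceback_filtering_enabled()  # False
tf.debugging.enable_traceback_filtering()
filtering_on = tf.debugging.is_traceback_filtering_enabled()   # True
```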

 *   Improve the informativeness of error messages raised by Keras
     `Layer.__call__()`, by adding the full list of argument values passed to
     the layer in every exception.

*   Introduce the `tf.compat.v1.keras.utils.track_tf1_style_variables`
 decorator, which enables using large classes of tf1-style variable_scope,
 `get_variable`, and `compat.v1.layer`-based components from within TF2
 models running with TF2 behavior enabled.

*   `tf.data`:

 *   tf.data service now supports auto-sharding. Users specify the sharding
     policy with the `tf.data.experimental.service.ShardingPolicy` enum. It
     can be one of `OFF` (equivalent to today's `"parallel_epochs"` mode),
     `DYNAMIC` (equivalent to today's `"distributed_epoch"` mode), or one of
     the static sharding policies: `FILE`, `DATA`, `FILE_OR_DATA`, or `HINT`
     (corresponding to the values of `tf.data.experimental.AutoShardPolicy`).

     Static (auto-)sharding requires the number of tf.data service workers
     to be fixed. Users need to specify the worker addresses in
     `tf.data.experimental.service.DispatcherConfig`.

 *   `tf.data.experimental.service.register_dataset` now accepts an
     optional `compression` argument.

*   Keras:

 *   `tf.keras.layers.Conv` now includes a public `convolution_op` method.
     This method can be used to simplify the implementation of Conv
     subclasses. There are two primary ways to use this new method. The
     first is to use the method directly in your own `call` method:

     ```python
     class StandardizedConv2D(tf.keras.layers.Conv2D):
         def call(self, inputs):
             mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
             return self.convolution_op(
                 inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))
     ```

     Alternatively, you can override `convolution_op`:

     ```python
     class StandardizedConv2D(tf.keras.layers.Conv2D):
         def convolution_op(self, inputs, kernel):
             mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
             # Author code uses std + 1e-5
             return super().convolution_op(
                 inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
     ```
 *   Added `merge_state()` method to `tf.keras.metrics.Metric` for use in
     distributed computations.
 *   Added `sparse` and `ragged` options to
     `tf.keras.layers.TextVectorization` to allow for `SparseTensor` and
     `RaggedTensor` outputs from the layer.
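A short sketch of the new `ragged` option (the training strings are illustrative):

```python
import tensorflow as tf

layer = tf.keras.layers.TextVectorization(output_mode="int", ragged=True)
layer.adapt(["the quick brown fox", "the lazy dog"])
out = layer(tf.constant(["the quick dog", "fox"]))
# `out` is a tf.RaggedTensor with one row per input string
```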

*   distribute.experimental.rpc package:

 *   The distribute.experimental.rpc package introduces APIs to create a
     gRPC-based server to register tf.function methods and a gRPC client to
     invoke registered remote methods. RPC APIs are intended for
     multi-client setups, i.e. the server and clients are started
     independently in separate binaries.

 *   Example usage to create a server:

     ```python
     server = tf.distribute.experimental.rpc.Server.create(
         "grpc", "127.0.0.1:1234")

     @tf.function(input_signature=[
         tf.TensorSpec([], tf.int32),
         tf.TensorSpec([], tf.int32)])
     def _remote_multiply(a, b):
         return tf.math.multiply(a, b)

     server.register("multiply", _remote_multiply)
     ```

 *   Example usage to create a client:

     ```python
     client = tf.distribute.experimental.rpc.Client.create("grpc", address)
     a = tf.constant(2, dtype=tf.int32)
     b = tf.constant(3, dtype=tf.int32)
     result = client.multiply(a, b)
     ```

*   `tf.lite`:

 *   Add the experimental API `experimental_from_jax` to support conversion
     from JAX models to TensorFlow Lite.
 *   Support uint32 data type for cast op.
 *   Support int8 data type for cast op.
 *   Add the experimental quantization debugger `tf.lite.QuantizationDebugger`.
 *   Add the lite.experimental.authoring.compatible API
     *   A Python decorator that provides a way to check a `tf.function`
         for TFLite compatibility issues. It returns a callable object that
         validates TFLite compatibility. If an incompatible operation is
         encountered during execution, an exception is raised with
         information about the incompatible ops.
 *   Add lite.experimental.Analyzer API
     *   An experimental tool to analyze TFLite flatbuffer models. This API
         can be used to investigate a TFLite model's structure and check
         compatibility with the GPU delegate.

*   Extension Types

 *   Add an experimental API to define new Python classes that can be
     handled by TensorFlow APIs. To create an extension type, simply define
     a Python class with `tf.experimental.ExtensionType` as its base, and
     use type annotations to specify the type of each field. E.g.:

     ```python
     class MaskedTensor(tf.experimental.ExtensionType):
         values: tf.Tensor
         mask: tf.Tensor
     ```

     The `tf.experimental.ExtensionType` base class works similarly to
     [`typing.NamedTuple`](https://docs.python.org/3/library/typing.html#typing.NamedTuple)
     and
     [`dataclasses.dataclass`](https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass)
     from the standard Python library.
 *   Extension types are supported by Keras, tf.data, TF-hub, SavedModel,
     tf.function, control flow ops, py_function, and distribution strategy.
 *   Add "dispatch decorators" that can be used to override the default
     behavior of TensorFlow ops (such as `tf.add` or `tf.concat`) when they
     are applied to ExtensionType values.
 *   The `BatchableExtensionType` API can be used to define extension types
     that support APIs that make use of batching, such as `tf.data.Dataset`
     and `tf.map_fn`.
 *   For more information, see the
     [Extension types guide](https://www.tensorflow.org/guide/extension_type).

Bug Fixes and Other Changes

*   TF Core:
 *   Random number generation (RNG) system
     *   Add argument `alg` to `tf.random.stateless_*` functions to
         explicitly select the RNG algorithm.
     *   Add `tf.nn.experimental.stateless_dropout`, a stateless version of
         `tf.nn.dropout`.
     *   `tf.random.Generator` can now be created inside the scope of
         `tf.distribute.experimental.ParameterServerStrategy` and
         `tf.distribute.experimental.CentralStorageStrategy`.
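Of the RNG additions above, `tf.nn.experimental.stateless_dropout` can be sketched as follows (seed values are arbitrary):

```python
import tensorflow as tf

x = tf.ones([4, 4])
# Stateless: the dropout mask is a pure function of the seed.
y1 = tf.nn.experimental.stateless_dropout(x, rate=0.5, seed=[7, 11])
y2 = tf.nn.experimental.stateless_dropout(x, rate=0.5, seed=[7, 11])
```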
 *   Add an experimental session config
     `tf.experimental.disable_functional_ops_lowering` which disables
     functional control flow op lowering optimization. This is useful when
     executing within a portable runtime where control flow op kernels may
     not be loaded due to selective registration.
 *   Add a new experimental argument `experimental_is_anonymous` to
     `tf.lookup.StaticHashTable.__init__` to create the table in anonymous
     mode. In this mode, the table resource can only be accessed via resource
     handles (not resource names) and will be deleted automatically when all
     resource handles pointing to it are gone.
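A minimal sketch of anonymous-mode table creation (the key/value data is illustrative):

```python
import tensorflow as tf

init = tf.lookup.KeyValueTensorInitializer(
    keys=tf.constant([1, 2], dtype=tf.int64),
    values=tf.constant([10, 20], dtype=tf.int64))
# Anonymous mode: the table is reachable only through resource handles
# and is deleted once all handles are gone.
table = tf.lookup.StaticHashTable(
    init, default_value=-1, experimental_is_anonymous=True)
result = table.lookup(tf.constant([1, 3], dtype=tf.int64))
```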
*   `tf.data`:
 *   Introduce the `tf.data.experimental.at` API, which provides random
     access for input pipelines that consist of transformations that
     support random access. The initial set of transformations supporting
     random access includes: `tf.data.Dataset.from_tensor_slices`,
     `tf.data.Dataset.shuffle`, `tf.data.Dataset.batch`,
     `tf.data.Dataset.shard`, `tf.data.Dataset.map`, and
     `tf.data.Dataset.range`.
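A short sketch of random access (the dataset is illustrative):

```python
import tensorflow as tf

# Random access into a pipeline built from random-access-friendly
# transformations (range + map here).
ds = tf.data.Dataset.range(10).map(lambda x: x * 2)
elem = tf.data.experimental.at(ds, 3)  # the fourth element
```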
 *   Promote `tf.data.Options.experimental_deterministic` API to
     `tf.data.Options.deterministic` and deprecate the experimental endpoint.
 *   Move autotuning options from
     `tf.data.Options.experimental_optimization.autotune*` to the newly
     created `tf.data.Options.autotune.*` and remove support for
     `tf.data.Options.experimental_optimization.autotune_buffers`.
 *   Add support for user-defined names in the tf.data core Python API,
     which can be used to disambiguate tf.data events in the TF Profiler
     Trace Viewer.
 *   Promote `tf.data.experimental.sample_from_datasets` API to
     `tf.data.Dataset.sample_from_datasets` and deprecate the experimental
     endpoint.
 *   Added `TF_GPU_ALLOCATOR=cuda_malloc_async`, which uses
     `cudaMallocAsync` from CUDA 11.2. This could become the default in the
     future.
*   TF SavedModel:
 *   Custom gradients are now saved by default. See
     `tf.saved_model.SaveOptions` to disable this.
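A minimal sketch of opting out via `SaveOptions` (the `Scale` module and the temp directory are illustrative):

```python
import tempfile

import tensorflow as tf

class Scale(tf.Module):
    def __init__(self):
        super().__init__()
        self.v = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return self.v * x

path = tempfile.mkdtemp()
# Custom gradients are saved by default; opt out explicitly:
opts = tf.saved_model.SaveOptions(experimental_custom_gradients=False)
tf.saved_model.save(Scale(), path, options=opts)
loaded = tf.saved_model.load(path)
```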
 *   The saved_model_cli's `--input_examples` inputs are now restricted to
     python literals to avoid code injection.
*   XLA:
 *   Add a new API that allows custom call functions to signal errors. The
     old API will be deprecated in a future release. See
     https://www.tensorflow.org/xla/custom_call for details.
 *   XLA:GPU reductions are deterministic by default (reductions within
     `jit_compile=True` are now deterministic).
 *   XLA:GPU works with Horovod (OSS contribution by Trent Lo from NVIDIA)
 *   XLA:CPU and XLA:GPU can compile tf.unique and tf.where when shapes are
     provably correct at compile time.
*   `tf.saved_model.save`:
 *   When saving a model, not specifying a namespace whitelist for custom ops
     with a namespace will now default to allowing rather than rejecting them
     all.
*   Deterministic Op Functionality (enabled by setting the environment variable
 `TF_DETERMINISTIC_OPS` to `"true"` or `"1"`):
 *   Add deterministic GPU implementations of:
     *   `tf.math.segment_sum`
     *   `tf.math.segment_prod`
     *   `tf.math.segment_mean`
     *   `tf.math.unsorted_segment_sum`
     *   `tf.math.unsorted_segment_prod`
     *   `tf.math.unsorted_segment_sqrt`
     *   `tf.math.unsorted_segment_mean`
     *   `tf.gather` backprop
     *   `tf.convert_to_tensor` when fed with (sparse) `tf.IndexedSlices`
     *   `tf.nn.sparse_softmax_crossentropy_with_logits`
     *   `tf.nn.ctc_loss` (resolved, possibly in prior release, and confirmed
         with tests)
     *   stateful ops used in `tf.data.Dataset`
 *   Run the following ops on CPU (with significant performance penalty):
     *   `tf.scatter_nd` and other related scatter functions, such as
         `tf.tensor_scatter_nd_update`
 *   Add determinism-unimplemented exception-throwing to the following ops.
     When op-determinism is expected (i.e. when the environment variable
     `TF_DETERMINISTIC_OPS` is set to `"true"` or `"1"`), an attempt to use
     the specified paths through the following ops on a GPU will throw
     `tf.errors.UnimplementedError` (with an understandable message),
     unless otherwise specified.
     *   `tf.compat.v1.nn.fused_batch_norm` backprop to `offset` when
         `is_training=False`
     *   `tf.image.adjust_contrast` forward
     *   `tf.nn.depthwise_conv2d` backprop to `filter` when not using cuDNN
         convolution
     *   `tf.image.resize` with `method=ResizeMethod.NEAREST` backprop
     *   `tf.math.bincount`
     *   `tf.raw_ops.DebugNumericSummary` and
         `tf.raw_ops.DebugNumericSummaryV2`
     *   `tf.Variable.scatter_add` (and other scatter methods, both on ref
         and resource variables)
     *   `tf.linalg.svd`
     *   `tf.nn.dilation2d` gradient
     *   `tf.nn.max_pool_with_argmax` gradient
     *   `tf.timestamp`. Throws `FailedPrecondition`
     *   The random-number-generating ops in the `tf.random` module when the
         global random seed has not yet been set (via `tf.random.set_seed`).
         Throws `RuntimeError` from Python or `InvalidArgument` from C++
     *   `tf.compat.v1.get_seed` if the global random seed has not yet been
         set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or
         `InvalidArgument` from C++

Security

*   Fixes a code injection issue in `saved_model_cli`
 ([CVE-2021-41228](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41228))
*   Fixes a vulnerability due to use of an uninitialized value in TensorFlow
 ([CVE-2021-41225](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41225))
*   Fixes a heap OOB in `FusedBatchNorm` kernels
 ([CVE-2021-41223](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41223))
*   Fixes an arbitrary memory read in `ImmutableConst`
 ([CVE-2021-41227](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41227))
*   Fixes a heap OOB in `SparseBinCount`
 ([CVE-2021-41226](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41226))
*   Fixes a heap OOB in `SparseFillEmptyRows`
 ([CVE-2021-41224](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41224))
*   Fixes a segfault due to negative splits in `SplitV`
 ([CVE-2021-41222](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41222))
*   Fixes segfaults and vulnerabilities caused by accesses to invalid memory
 during shape inference in `Cudnn*` ops
 ([CVE-2021-41221](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41221))
*   Fixes a null pointer exception when `Exit` node is not preceded by `Enter`
 op
 ([CVE-2021-41217](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41217))
*   Fixes an integer division by 0 in `tf.raw_ops.AllToAll`
 ([CVE-2021-41218](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41218))
*   Fixes a use after free and a memory leak in `CollectiveReduceV2`
 ([CVE-2021-41220](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41220))
*   Fixes an undefined behavior via `nullptr` reference binding in sparse matrix
 multiplication
 ([CVE-2021-41219](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41219))
*   Fixes a heap buffer overflow in `Transpose`
 ([CVE-2021-41216](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41216))
*   Prevents deadlocks arising from mutually recursive `tf.function` objects
 ([CVE-2021-41213](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41213))
*   Fixes a null pointer exception in `DeserializeSparse`
 ([CVE-2021-41215](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41215))
*   Fixes an undefined behavior arising from reference binding to `nullptr` in
 `tf.ragged.cross`
 ([CVE-2021-41214](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41214))
*   Fixes a heap OOB read in `tf.ragged.cross`
 ([CVE-2021-41212](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41212))
*   Fixes a heap OOB in shape inference for `QuantizeV2`
 ([CVE-2021-41211](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41211))
*   Fixes a heap OOB read in all `tf.raw_ops.QuantizeAndDequantizeV*` ops
 ([CVE-2021-41205](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41205))
*   Fixes an FPE in `ParallelConcat`
 ([CVE-2021-41207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41207))
*   Fixes FPE issues in convolutions with zero size filters
 ([CVE-2021-41209](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41209))
*   Fixes a heap OOB read in `tf.raw_ops.SparseCountSparseOutput`
 ([CVE-2021-41210](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41210))
*   Fixes vulnerabilities caused by incomplete validation in boosted trees code
 ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
*   Fixes vulnerabilities caused by incomplete validation of shapes in multiple
 TF ops
 ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
*   Fixes a segfault produced while copying constant resource tensor
 ([CVE-2021-41204](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41204))
*   Fixes a vulnerability caused by uninitialized access in
 `EinsumHelper::ParseEquation`
 ([CVE-2021-41201](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41201))
*   Fixes several vulnerabilities and segfaults caused by missing validation
 during checkpoint loading
 ([CVE-2021-41203](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41203))
*   Fixes an overflow producing a crash in `tf.range`
 ([CVE-2021-41202](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41202))
*   Fixes an overflow producing a crash in `tf.image.resize` when size is large
 ([CVE-2021-41199](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41199))
*   Fixes an overflow producing a crash in `tf.tile` when tiling tensor is large
 ([CVE-2021-41198](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41198))
*   Fixes a vulnerability produced due to incomplete validation in
 `tf.summary.create_file_writer`
 ([CVE-2021-41200](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41200))
*   Fixes multiple crashes due to overflow and `CHECK`-fail in ops with large
 tensor shapes
 ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes a crash in `max_pool3d` when size argument is 0 or negative
 ([CVE-2021-41196](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41196))
*   Fixes a crash in `tf.math.segment_*` operations
 ([CVE-2021-41195](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41195))
*   Updates `curl` to `7.78.0` to handle
 [CVE-2021-22922](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22922),
 [CVE-2021-22923](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22923),
 [CVE-2021-22924](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22924),
 [CVE-2021-22925](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22925),
 and
 [CVE-2021-22926](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-22926).

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

8bitmp3, Abhilash Majumder, abhilash1910, AdeshChoudhar, Adrian Garcia
Badaracco, Adrian Ratiu, ag.ramesh, Aleksandr Nikolaev, Alexander Bosch,
Alexander Grund, Annie Tallund, Anush Elangovan, Artem Sokolovskii, azazhu,
Balint Cristian, Bas Aarts, Ben Barsdell, bhack, cfRod, Cheney-Wang, Cheng Ren,
Christopher Bate, collin, Danila Bespalov, David Datascientist, Deven Desai,
Duncan Riach, Ehsan Kia, Ellie, Fan Du, fo40225, Frederic Bastien, fsx950223,
Gauri1 Deshpande, geetachavan1, Guillaume Klein, guozhong.zhuang, helen, Håkon
Sandsmark, japm48, jgehw, Jinzhe Zeng, Jonathan Dekhtiar, Kai Zhu, Kaixi Hou,
Kanvi Khanna, Koan-Sin Tan, Koki Ibukuro, Kulin Seth, KumaTea, Kun-Lu, Lemo,
lipracer, liuyuanqiang, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia,
mdfaijul, metarutaiga, Michal Szutenberg, nammbash, Neil Girdhar, Nishidha
Panpaliya, Nyadla-Sys, Patrice Vignola, Peter Kasting, Philipp Hack, PINTO0309,
Prateek Gupta, puneeshkhanna, Rahul Butani, Rajeshwar Reddy T, Reza Rahimi,
RinozaJiffry, rmothukuru, Rohit Santhanam, Saduf2019, Samuel Marks, sclarkson,
Sergii Khomenko, Sheng, Yang, Sidong-Wei, slowy07, Srinivasan Narayanamoorthy,
Srishti Srivastava, stanley, Stella Alice Schlotter, Steven I Reeves,
stevenireeves, svobora, Takayoshi Koizumi, Tamas Bela Feher, Thibaut
Goetghebuer-Planchon, Trent Lo, Twice, Varghese, Jojimon, Vishnuvardhan
Janapati, Wang Yanzhang, Wang,Quintin, William Muir, William Raveane, Yasir
Modak, Yasuhiro Matsumoto, Yi Li, Yong Tang, zhaozheng09, Zhoulong Jiang,
zzpmiracle

2.6.3

This release introduces several vulnerability fixes:

*   Fixes a floating point division by 0 when executing convolution operators
 ([CVE-2022-21725](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21725))
*   Fixes a heap OOB read in shape inference for `ReverseSequence`
 ([CVE-2022-21728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21728))
*   Fixes a heap OOB access in `Dequantize`
 ([CVE-2022-21726](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21726))
*   Fixes an integer overflow in shape inference for `Dequantize`
 ([CVE-2022-21727](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21727))
*   Fixes a heap OOB access in `FractionalAvgPoolGrad`
 ([CVE-2022-21730](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21730))
*   Fixes an overflow and divide by zero in `UnravelIndex`
 ([CVE-2022-21729](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21729))
*   Fixes a type confusion in shape inference for `ConcatV2`
 ([CVE-2022-21731](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21731))
*   Fixes an OOM in `ThreadPoolHandle`
 ([CVE-2022-21732](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21732))
*   Fixes an OOM due to integer overflow in `StringNGrams`
 ([CVE-2022-21733](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21733))
*   Fixes more issues caused by incomplete validation in boosted trees code
 ([CVE-2021-41208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41208))
*   Fixes integer overflows in most sparse component-wise ops
 ([CVE-2022-23567](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23567))
*   Fixes integer overflows in `AddManySparseToTensorsMap`
 ([CVE-2022-23568](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23568))
*   Fixes a number of `CHECK`-failures in `MapStage`
 ([CVE-2022-21734](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21734))
*   Fixes a division by zero in `FractionalMaxPool`
 ([CVE-2022-21735](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21735))
*   Fixes a number of `CHECK`-fails when building invalid/overflowing tensor
 shapes
 ([CVE-2022-23569](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23569))
*   Fixes an undefined behavior in `SparseTensorSliceDataset`
 ([CVE-2022-21736](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21736))
*   Fixes an assertion failure based denial of service via faulty bin count
 operations
 ([CVE-2022-21737](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21737))
*   Fixes a reference binding to null pointer in `QuantizedMaxPool`
 ([CVE-2022-21739](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21739))
*   Fixes an integer overflow leading to crash in `SparseCountSparseOutput`
 ([CVE-2022-21738](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21738))
*   Fixes a heap overflow in `SparseCountSparseOutput`
 ([CVE-2022-21740](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21740))
*   Fixes an FPE in `BiasAndClamp` in TFLite
 ([CVE-2022-23557](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23557))
*   Fixes an FPE in depthwise convolutions in TFLite
 ([CVE-2022-21741](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-21741))
*   Fixes an integer overflow in TFLite array creation
 ([CVE-2022-23558](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23558))
*   Fixes an integer overflow in TFLite
 ([CVE-2022-23559](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23559))
*   Fixes a dangerous OOB write in TFLite
 ([CVE-2022-23561](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23561))
*   Fixes a vulnerability leading to read and write outside of bounds in TFLite
 ([CVE-2022-23560](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23560))
*   Fixes a set of vulnerabilities caused by using insecure temporary files
 ([CVE-2022-23563](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23563))
*   Fixes an integer overflow in Range resulting in undefined behavior and OOM
 ([CVE-2022-23562](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23562))
*   Fixes a vulnerability where missing validation causes `tf.sparse.split` to
 crash when `axis` is a tuple
 ([CVE-2021-41206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41206))
*   Fixes a `CHECK`-fail when decoding resource handles from proto
 ([CVE-2022-23564](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23564))
*   Fixes a `CHECK`-fail with repeated `AttrDef`
 ([CVE-2022-23565](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23565))
*   Fixes a heap OOB write in Grappler
 ([CVE-2022-23566](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23566))
*   Fixes a `CHECK`-fail when decoding invalid tensors from proto
 ([CVE-2022-23571](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23571))
*   Fixes a null-dereference when specializing tensor type
 ([CVE-2022-23570](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23570))
*   Fixes a crash when type cannot be specialized
 ([CVE-2022-23572](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23572))
*   Fixes a heap OOB read/write in `SpecializeType`
 ([CVE-2022-23574](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23574))
*   Fixes an uninitialized variable access in `AssignOp`
 ([CVE-2022-23573](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23573))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize`
 ([CVE-2022-23575](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23575))
*   Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize`
 ([CVE-2022-23576](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23576))
*   Fixes a null dereference in `GetInitOp`
 ([CVE-2022-23577](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23577))
*   Fixes a memory leak when a graph node is invalid
 ([CVE-2022-23578](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23578))
*   Fixes an abort caused by allocating a vector that is too large
 ([CVE-2022-23580](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23580))
*   Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape`
 ([CVE-2022-23581](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23581))
*   Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity`
 ([CVE-2022-23579](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23579))
*   Fixes multiple `CHECK`-failures in `TensorByteSize`
 ([CVE-2022-23582](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23582))
*   Fixes multiple `CHECK`-failures in binary ops due to type confusion
 ([CVE-2022-23583](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23583))
*   Fixes a use after free in `DecodePng` kernel
 ([CVE-2022-23584](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23584))
*   Fixes a memory leak in decoding PNG images
 ([CVE-2022-23585](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23585))
*   Fixes multiple `CHECK`-fails in `function.cc`
 ([CVE-2022-23586](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23586))
*   Fixes multiple `CHECK`-fails due to attempting to build a reference tensor
 ([CVE-2022-23588](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23588))
*   Fixes an integer overflow in Grappler cost estimation of crop and resize
 operation
 ([CVE-2022-23587](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23587))
*   Fixes a null pointer dereference in Grappler's `IsConstant`
 ([CVE-2022-23589](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23589))
*   Fixes a `CHECK` failure in constant folding
 ([CVE-2021-41197](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41197))
*   Fixes a stack overflow due to self-recursive function in `GraphDef`
 ([CVE-2022-23591](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23591))
*   Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA)
 ([CVE-2022-23595](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23595))
*   Updates `icu` to `69.1` to handle
 [CVE-2020-10531](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10531)

2.6.2

Fixes an issue where `keras`, `tensorflow_estimator` and `tensorboard` were
missing proper upper bounds, which resulted in broken installs after the TF
2.7 release.

2.6.1

This release introduces several vulnerability fixes:

*   Fixes a code injection issue in `saved_model_cli`
 ([CVE-2021-41228](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41228))
*   Fixes a vulnerability due to use of an uninitialized value in TensorFlow
 ([CVE-2021-41225](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41225))
*   Fixes a heap OOB in `FusedBatchNorm` kernels
 ([CVE-2021-41223](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41223))
*   Fixes an arbitrary memory read in `ImmutableConst`
 ([CVE-2021-41227](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41227))
*   Fixes a heap OOB in `SparseBinCount`
 ([CVE-2021-41226](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41226))
*   Fixes a heap OOB in `SparseFillEmptyRows`
 ([CVE-2021-41224](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41224))
*   Fixes a segfault due to negative spl

@pyup-bot
Contributor Author

Closing this in favor of #231

@pyup-bot pyup-bot closed this May 23, 2022
@geblanco geblanco deleted the pyup-update-tensorflow-2.5.0-to-2.8.1 branch May 23, 2022 18:09