diff --git a/index.bs b/index.bs
index 6016c0ab..959b4d4b 100644
--- a/index.bs
+++ b/index.bs
@@ -318,19 +318,19 @@ video summarization such as [[Video-Summarization-with-LSTM]].
### Noise Suppression ### {#usecase-noise-suppression}
-A web-based video conferencing application records received audio streams, but
-usually the background noise is everywhere. The application leverages real-time
-noise suppression using Recurrent Neural Network such as [[RNNoise]] for
-suppressing background dynamic noise like baby cry or dog barking to improve
+A web-based video conferencing application records received audio streams, but
+background noise is often present. The application leverages real-time noise
+suppression using a recurrent neural network such as [[RNNoise]] to suppress
+dynamic background noise, such as a crying baby or a barking dog, to improve
audio experiences in video conferences.
### Detecting fake video ### {#usecase-detecting-fake-video}
-A user is exposed to realistic fake videos generated by ‘deepfake’ on the web.
-The fake video can swap the speaker’s face into the president’s face to incite
-a user politically or to manipulate user’s opinion. The deepfake detection
-applications such as [[FaceForensics++]] analyze the videos and protect a user against
-the fake videos or images. When she watches a fake video on the web, the
+A user is exposed to realistic fake videos generated by ‘deepfake’ techniques on
+the web. A fake video can swap the speaker’s face with the president’s face to
+politically incite a user or to manipulate the user’s opinion. Deepfake detection
+applications such as [[FaceForensics++]] analyze videos and protect users against
+fake videos or images. When she watches a fake video on the web, the
detection application alerts her of the fraud video in real-time.
## Framework Use Cases ## {#usecases-framework}
@@ -472,7 +472,7 @@ during inference, as well as the output values of inference.
At inference time, every {{MLOperand}} will be bound to a tensor (the actual data).
The {{MLGraphBuilder}} interface enables the creation of {{MLOperand}}s.
-A key part of the {{MLGraphBuilder}} interface are the operations (such as
+A key part of the {{MLGraphBuilder}} interface is the set of operations (such as
{{MLGraphBuilder}}.{{MLGraphBuilder/gemm()}} and {{MLGraphBuilder}}.{{MLGraphBuilder/softmax()}}). The operations have a functional
semantics, with no side effects.
Each operation invocation conceptually returns a distinct new value, without
@@ -481,7 +481,7 @@ changing the value of any other {{MLOperand}}.
The runtime values (of {{MLOperand}}s) are tensors, which are essentially multidimensional
arrays. The representation of the tensors is implementation dependent, but it typically
includes the array data stored in some buffer (memory) and some metadata describing the
-array data (such as its shape).
+array data (such as its shape).
As mentioned above, the operations have a functional semantics. This allows the implementation
to potentially share the array data between multiple tensors. For example, the implementation
@@ -495,27 +495,27 @@ Before the execution, the computation graph that is used to compute one or more
There are multiple ways by which the graph may be compiled. The {{MLGraphBuilder}}.{{MLGraphBuilder/build()}} method compiles the graph in the background without blocking the calling thread, and returns a {{Promise}} that resolves to an {{MLGraph}}. The {{MLGraphBuilder}}.{{MLGraphBuilder/buildSync()}} method compiles the graph immediately on the calling thread, which must be a worker thread running on CPU or GPU device, and returns an {{MLGraph}}. Both compilation methods produce an {{MLGraph}} that represents a compiled graph for optimal execution.
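The two compilation paths can be sketched with a mock. The real {{MLGraphBuilder}} compiles a neural-network graph; this stand-in merely "compiles" a trivial function so the calling pattern (synchronous return vs. a {{Promise}}) can be shown. The names `MockGraphBuilder`, `desc`, and `run` are illustrative assumptions, not the WebNN API surface.

```javascript
// Illustrative mock of the two compilation paths, NOT the WebNN API.
class MockGraphBuilder {
  constructor(desc) { this.desc = desc; }
  // Like buildSync(): compiles immediately on the calling thread and
  // returns the compiled graph directly.
  buildSync() { return { run: this.desc }; }
  // Like build(): compiles without blocking the caller and returns a
  // Promise that resolves to the same kind of compiled graph.
  build() { return Promise.resolve(this.buildSync()); }
}

const builder = new MockGraphBuilder((x) => x * 2);
const graph = builder.buildSync();                    // graph available immediately
console.log(graph.run(21));                           // 42
builder.build().then((g) => console.log(g.run(21)));  // 42, delivered asynchronously
```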
Once the {{MLGraph}} is constructed, there are multiple ways by which the graph may be executed. The
-{{MLContext}}.{{MLContext/computeSync()}} method represents a way the execution of the graph is carried out immediately
-on the calling thread, which must also be a worker thread, either on a CPU or GPU device. The execution
+{{MLContext}}.{{MLContext/computeSync()}} method carries out the execution of the graph immediately
+on the calling thread, which must also be a worker thread, on either a CPU or GPU device. The execution
produces the results of the computation from all the inputs bound to the graph.
The {{MLContext}}.{{MLContext/compute()}} method represents a way the execution of the graph is performed asynchronously
-either on a parallel timeline in a separate worker thread for the CPU execution or on a GPU timeline in a GPU
-command queue. This method returns immediately without blocking the calling thread while the actual execution is
-offloaded to a different timeline. This type of execution is appropriate when the responsiveness of the calling
-thread is critical to good user experience. The computation results will be placed at the bound outputs at the
-time the operation is successfully completed on the offloaded timeline at which time the calling thread is
+either on a parallel timeline in a separate worker thread for CPU execution or on a GPU timeline in a GPU
+command queue. This method returns immediately without blocking the calling thread while the actual execution is
+offloaded to a different timeline. This type of execution is appropriate when the responsiveness of the calling
+thread is critical to a good user experience. The computation results will be placed in the bound outputs when
+the operation successfully completes on the offloaded timeline, at which time the calling thread is
signaled. This type of execution supports both the CPU and GPU device.
-In both the {{MLContext}}.{{MLContext/compute()}} and {{MLContext}}.{{MLContext/computeSync()}} execution methods, the caller supplies
+In both the {{MLContext}}.{{MLContext/compute()}} and {{MLContext}}.{{MLContext/computeSync()}} execution methods, the caller supplies
the input values using {{MLNamedArrayBufferViews}}, binding the input {{MLOperand}}s to their values. The caller
then supplies pre-allocated buffers for output {{MLOperand}}s using {{MLNamedArrayBufferViews}}.
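The binding convention above can be sketched with a mock. A real {{MLContext}} executes a compiled graph; this stand-in runs a hard-coded element-wise "add" so the shape of the call can be shown: inputs and pre-allocated outputs are both supplied as name-to-`ArrayBufferView` maps, and results are written into the bound output buffers. The names `computeMock`, `a`, `b`, and `c` are illustrative assumptions, not part of this specification.

```javascript
// Illustrative mock of the compute() calling convention, NOT the WebNN API.
function computeMock(graph, inputs, outputs) {
  // The caller binds every graph input by name...
  const a = inputs['a'];
  const b = inputs['b'];
  // ...and supplies a pre-allocated buffer for every output by name.
  const out = outputs['c'];
  for (let i = 0; i < out.length; ++i) {
    out[i] = graph.op(a[i], b[i]);  // results land in the bound output buffer
  }
}

const graph = { op: (x, y) => x + y };        // pretend compiled "add" graph
const inputs = { a: new Float32Array([1, 2]), b: new Float32Array([3, 4]) };
const outputs = { c: new Float32Array(2) };   // caller pre-allocates the output
computeMock(graph, inputs, outputs);          // outputs.c now holds [4, 6]
```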
-The {{MLCommandEncoder}} interface created by the {{MLContext}}.{{MLContext/createCommandEncoder()}} method supports
-a graph execution method that provides the maximum flexibility to callers that also utilize WebGPU in their
-application. It does this by placing the workload required to initialize and compute the results of the
-operations in the graph onto a {{GPUCommandBuffer}}. The callers are responsible for the eventual submission
-of this workload on the {{GPUQueue}} through the WebGPU queue submission mechanism. Once the submitted workload
+The {{MLCommandEncoder}} interface created by the {{MLContext}}.{{MLContext/createCommandEncoder()}} method supports
+a graph execution method that provides the maximum flexibility to callers that also utilize WebGPU in their
+application. It does this by placing the workload required to initialize and compute the results of the
+operations in the graph onto a {{GPUCommandBuffer}}. Callers are responsible for the eventual submission
+of this workload on the {{GPUQueue}} through the WebGPU queue submission mechanism. Once the submitted workload
-is completely executed, the result is avaialble in the bound output buffers.
+is completely executed, the result is available in the bound output buffers.
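The record-then-submit pattern described above can be sketched with a mock. The real interfaces are {{MLCommandEncoder}}, {{GPUCommandBuffer}}, and {{GPUQueue}}; the `MockEncoder` and `MockQueue` below are illustrative stand-ins only, showing that recorded work does not execute, and results do not appear in the bound output buffer, until the caller submits the command buffer to the queue.

```javascript
// Illustrative mock of the command-encoder pattern, NOT the WebNN/WebGPU API.
class MockEncoder {
  constructor() { this.commands = []; }
  record(fn) { this.commands.push(fn); }  // like recording graph init/compute
  finish() { return this.commands; }      // like GPUCommandEncoder.finish()
}
class MockQueue {
  submit(commandBuffer) {                 // like GPUQueue.submit()
    for (const cmd of commandBuffer) cmd();
  }
}

const output = new Float32Array(1);             // caller-bound output buffer
const encoder = new MockEncoder();
encoder.record(() => { output[0] = 2 + 3; });   // recorded, not yet executed
const commandBuffer = encoder.finish();

console.log(output[0]);                         // 0: nothing has run yet
new MockQueue().submit(commandBuffer);          // caller submits the workload
console.log(output[0]);                         // 5: result is now in the buffer
```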
## Device Selection ## {#programming-model-device-selection}
@@ -539,10 +539,10 @@ The following table summarizes the types of resource supported by the context cr
API {#api}
=====================
-## navigator.ml ## {#api-navigator-ml}
+## The navigator.ml interface ## {#api-navigator-ml}
A {{ML}} object is available in the {{Window}} and {{DedicatedWorkerGlobalScope}} contexts through the {{Navigator}}
-and {{WorkerNavigator}} interfaces respectively and is exposed via `navigator.ml`:
+and {{WorkerNavigator}} interfaces, respectively, and is exposed via `navigator.ml`.
-## ML ## {#api-ml}
+## The ML interface ## {#api-ml}
+### Permissions Policy Integration ### {#permissions-policy-integration}
+
+This specification defines a policy-controlled feature identified by the
+string "webnn".
+Its default allowlist is 'self'.
+
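As with other policy-controlled features, use of WebNN can be delegated through the standard Permissions Policy mechanisms. A non-normative sketch (the third-party origin is a placeholder):

```html
<!-- HTTP response header granting "webnn" to the page's own origin (the
     default allowlist) plus one third-party origin:
     Permissions-Policy: webnn=(self "https://example.com") -->

<!-- Delegating the feature to a cross-origin iframe: -->
<iframe src="https://example.com/inference.html" allow="webnn"></iframe>
```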
+### The {{ML/createContext()}} method ### {#api-ml-createcontext}
The {{ML/createContext()}} method steps are:
-1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=allowed to use=] the [=webnn-feature|webnn=] feature, then throw a "{{SecurityError!!exception}}" {{DOMException}} and abort these steps.
-1. Let |promise| be [=a new promise=].
+1. If [=this=]'s [=relevant global object=]'s [=associated Document=] is not [=allowed to use=] the [=webnn-feature|webnn=] feature, return [=a new promise=] [=rejected=] with a "{{SecurityError}}" {{DOMException}} and abort these steps.
+1. Return [=a new promise=] |promise| and run the following steps [=in parallel=]:
1. Let |context| be a new {{MLContext}} object.
-1. Switch on the method's first argument:
+1. Let |options| be the first argument.
+1. Switch on |options|:
webnn".
-Its default allowlist is 'self'.
-
-## MLContext ## {#api-mlcontext}
+## The MLContext interface ## {#api-mlcontext}
The {{MLContext}} interface represents a global state of neural network compute workload and execution processes. Each {{MLContext}} object has associated [=context type=], [=device type=] and [=power preference=].
The context type is the type of the execution context that manages the resources and facilitates the compilation and execution of the neural network graph:
@@ -660,9 +656,6 @@ interface MLContext {};
: \[[powerPreference]] of type [=power preference=]
::
The {{MLContext}}'s [=power preference=].
- : \[[implementation]]
- ::
- The underlying implementation provided by the User Agent.
// The mean reductions happen over the spatial dimensions of the input
@@ -1719,7 +1730,7 @@ partial interface MLGraphBuilder {
const mean = builder.reduceMean(input, reduceOptions);
const variance = builder.reduceMean(
builder.pow(
- builder.sub(input, mean),
+ builder.sub(input, mean),
-      buider.constant(2)),
+      builder.constant(2)),
reduceOptions
);
@@ -1733,7 +1744,7 @@ partial interface MLGraphBuilder {
builder.div(
builder.sub(input, mean),
-    buidler.pow(
+    builder.pow(
- builder.add(variance, options.epsilon),
+ builder.add(variance, options.epsilon),
builder.constant(0.5))
)
),
@@ -1743,7 +1754,7 @@ partial interface MLGraphBuilder {