Clarify/simplify graph execution on GPU / MLCommandEncoder #333

@zolkis

A few purely API-design-related comments on MLCommandEncoder.
(Context: I am trying to update some of the algorithms and specify the missing ones, and I stumbled upon these questions.)

Graph initialization

  • Currently the note in MLGraphBuilder refers to graph initialization (for the default context), but it links to the MLCommandEncoder graph initialization, and it is not clear how the two are related.
  • It looks like we need a generic graph initialization step, probably in the MLGraph section or in MLGraphBuilder, and perhaps also one in MLCommandEncoder if it differs. But then we need to be specific about how it differs.
  • Why is MLCommandEncoder.initializeGraph() synchronous? It looks highly parallel, running on a different timeline. If it is indeed synchronous with no side effects, and done exactly once per graph, why not include it in createCommandEncoder(), which is in effect the constructor (currently a factory method)? To clarify this, we need an algorithm, or at least a note that informally describes graph initialization (per context type, eventually).
  • The MLCommandEncoder object is created from a context, and the initializeGraph() method takes a graph as an argument. One might think that a single MLCommandEncoder object could be used to handle multiple graphs, but from the text the intent seems to be that a command encoder covers the execution phases of a single graph. For another graph, one would use a different MLCommandEncoder object (even if the implementation binds them to a singleton).
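For reference, this is the call sequence I read out of the current spec text. Everything below is a mock I wrote to illustrate the shape; none of these objects are the real API:

```javascript
// Hypothetical sketch of the flow the current spec seems to imply.
// MockCommandEncoder only records call order; it mimics the IDL shape,
// it is not an implementation.
class MockCommandEncoder {
  constructor() { this.calls = []; }
  initializeGraph(graph) { this.calls.push("initializeGraph"); this.graph = graph; }
  dispatch(graph, inputs, outputs) { this.calls.push("dispatch"); }
  finish(descriptor = {}) { this.calls.push("finish"); return { kind: "GPUCommandBuffer" }; }
}

const graph = { name: "mockGraph" };       // stands in for an MLGraph
const encoder = new MockCommandEncoder();  // context.createCommandEncoder() in the spec

encoder.initializeGraph(graph);            // synchronous; why not part of creation?
encoder.dispatch(graph, {}, {});           // may this be called more than once? unclear
const buffer = encoder.finish();

console.log(encoder.calls.join(" -> "));   // initializeGraph -> dispatch -> finish
```

Note that the encoder ends up bound to the graph only implicitly, via the initializeGraph() call, which is part of what the proposal below tries to make explicit.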

Proposal (question): merge the command encoder factory with graph initialization, binding the MLCommandEncoder to that graph (and context). That might even be an async method:

partial interface MLContext {
    Promise<MLCommandEncoder> createCommandEncoder(MLGraph graph);
};
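Script usage under that proposal could look like the following. MockContext is invented here purely to illustrate the proposed shape; the dispatch/finish members are placeholders:

```javascript
// Hypothetical usage of the proposed async factory. MockContext only
// mimics the proposed IDL; it is not a real MLContext.
class MockContext {
  async createCommandEncoder(graph) {
    // Graph initialization (e.g. weight upload) would happen here,
    // off the script timeline, before the encoder is handed back.
    return {
      boundGraph: graph,                               // encoder bound at creation
      dispatch(inputs, outputs) {},                    // placeholder
      finish() { return { kind: "GPUCommandBuffer" }; } // placeholder
    };
  }
}

async function run() {
  const context = new MockContext();
  const graph = { name: "mockGraph" };
  // One encoder per graph: the binding is explicit and async.
  const encoder = await context.createCommandEncoder(graph);
  console.log(encoder.boundGraph === graph); // true
}
run();
```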

Command encoder steps

Assuming the change above, the interface becomes:

[SecureContext, Exposed=(Window, DedicatedWorker)]
interface MLCommandEncoder { 
    undefined dispatch(MLGraph graph, MLNamedGPUResources inputs, MLNamedGPUResources outputs);
    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
  • It is not clear why dispatch() and finish() need to be separated. I guess it is to make clear when the optional descriptor may be used? We need some examples to clarify this. How are scripts supposed to manage the command encoding? May dispatch() be called multiple times before finish()? The intent is not clear, since AFAIK a single dispatch will manage all the parallel operations involved in queuing graph execution. I understand this follows the DirectML API flow, but I would argue that usage in this API can be simpler, because it is not a generic GPU API but a specific application of one.

  • In principle, we should encapsulate what we can and expose only the control points scripts actually need.
    In fact, I would even ask whether (in the WebML spec) a single async method on MLContext would be enough:

partial interface MLContext {
    Promise<GPUCommandBuffer> createCommandEncoder(  // name might change
        MLGraph graph, 
        MLNamedGPUResources inputs, 
        MLNamedGPUResources outputs,
        optional GPUCommandBufferDescriptor descriptor = {});
};
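With that shape, the whole encode step collapses into one awaited call. Again, MockContext below is a made-up stand-in to show the ergonomics, not a real implementation:

```javascript
// Hypothetical single-call flow per the proposal above. Initialization,
// dispatch and finish are folded into one async step; the script only
// ever sees the resulting command buffer.
class MockContext {
  async createCommandEncoder(graph, inputs, outputs, descriptor = {}) {
    // An implementation would initialize the graph (once), record the
    // dispatch, and finish the encoder internally.
    return { kind: "GPUCommandBuffer", label: descriptor.label ?? "" };
  }
}

async function run() {
  const context = new MockContext();
  const buffer = await context.createCommandEncoder(
    { name: "mockGraph" },    // MLGraph
    { input: "gpuBuffer" },   // MLNamedGPUResources (inputs)
    { output: "gpuBuffer" },  // MLNamedGPUResources (outputs)
    { label: "inference" });  // GPUCommandBufferDescriptor
  console.log(buffer.kind);   // GPUCommandBuffer
}
run();
```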

The implementation may still follow the workflow of initializing the graph, then dispatching and finishing, but why expose that to WebML scripts?

  • WebML should also specify how the resulting GPUCommandBuffer is to be used.
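Presumably the answer is the standard WebGPU path: a finished GPUCommandBuffer is executed by submitting it to a GPUQueue via GPUQueue.submit(). A sketch, where the device is a mock so the snippet is self-contained (in a real page it would come from navigator.gpu.requestAdapter() / requestDevice()):

```javascript
// In WebGPU, a GPUCommandBuffer is executed via GPUQueue.submit().
// The device here is a mock that just records submissions; only the
// submit([commandBuffer]) call pattern matches the real WebGPU API.
const device = {
  queue: {
    submitted: [],
    submit(buffers) { this.submitted.push(...buffers); },
  },
};

const commandBuffer = { kind: "GPUCommandBuffer" }; // e.g. from encoder.finish()
device.queue.submit([commandBuffer]);               // hand the recorded work to the GPU

console.log(device.queue.submitted.length); // 1
```

If that is indeed the intended path, the spec should say so explicitly, including any ordering constraints relative to other work on the same queue.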

In summary, I would appreciate more explanation, examples, and script use cases for this (graph initialization and graph execution on GPU).

I am aware of this relevant comment and the (still open) discussion in #264 as well, which is somewhat related to this.
