12 changes: 9 additions & 3 deletions docs/langref/hybrid_script.rst
@@ -68,17 +68,23 @@ to LLVM module.
Tuning
~~~~~~

**Under construction, not supported yet.**

Following the example above, you can use some TVM-like interfaces to tune the code:

.. code-block:: python

    i, j = c.op.axis
    sch = tvm.create_schedule(op)
    jo, ji = sch[c].split(j, 4)
    sch[c].vectorize(ji)

``split``, ``reorder``, and loop_annotation will be supported!
For now, you can use loop annotations (``unroll``, ``parallel``, ``vectorize``, and ``bind``),
loop manipulation (``split`` and ``fuse``), and ``reorder``.

.. note::

    This is a preliminary feature, so users are responsible for the correctness of
    the functionality after tuning. Specifically, be careful when fusing and
    reordering imperfect loops.
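
As a further illustration, here is a minimal sketch of the other primitives (reusing ``a``, ``b``, ``c``, and ``op`` from the example above, and assuming the loops are perfect so that fusing them is safe):

.. code-block:: python

    i, j = c.op.axis
    sch = tvm.create_schedule(op)
    # fuse the two loop levels into one and run the fused loop in parallel
    fused = sch[c].fuse(i, j)
    sch[c].parallel(fused)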

Loops
~~~~~
3 changes: 3 additions & 0 deletions include/tvm/operation.h
@@ -459,6 +459,8 @@ class HybridOpNode : public OperationNode {
Array<Tensor> inputs;
/*! \brief Symbolic placeholder representation of outputs */
Array<Tensor> outputs;
/*! \brief The axis of iterations */
Array<IterVar> axis;
/*! \brief the statement that generates the computation. This is
* slightly different from the body in ExternOpNode. All the output
* tensors keep its own name specified by users in the script.
@@ -500,6 +502,7 @@ class HybridOpNode : public OperationNode {
v->Visit("attrs", &attrs);
v->Visit("inputs", &inputs);
v->Visit("outputs", &outputs);
v->Visit("axis", &axis);
v->Visit("body", &body);
}
EXPORT static Operation make(std::string name,
7 changes: 5 additions & 2 deletions python/tvm/tensor.py
@@ -152,7 +152,7 @@ class ComputeOp(Operation):
"""Compute operation."""
@property
def axis(self):
"""Represent axis of IterVar, only defined when it is a ComputeOp"""
"""Represent axis of IterVar, defined when it is a ComputeOp"""
return self.__getattr__("axis")

@property
@@ -184,4 +184,7 @@ class ExternOp(Operation):
@register_node
class HybridOp(Operation):
"""Hybrid operation."""
pass
@property
def axis(self):
"""Represent axis of IterVar, also defined when it is a HybridOp"""
return self.__getattr__("axis")
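
A rough usage sketch of what this new property enables (``c`` is assumed to be the output tensor of a two-level hybrid script operator, such as the ``outer_product`` example in the hybrid script documentation):

.. code-block:: python

    # c.op is a HybridOp; before this change only ComputeOp exposed ``axis``
    i, j = c.op.axis                  # IterVars of the two outer loops
    sch = tvm.create_schedule(c.op)
    sch[c].reorder(j, i)              # the usual schedule primitives apply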
1 change: 1 addition & 0 deletions src/op/compute_op.cc
@@ -212,6 +212,7 @@ void ComputeOpNode::GatherBound(
const Operation& self,
const std::unordered_map<Tensor, TensorDom>& tensor_dom,
std::unordered_map<IterVar, Range>* out_dom_map) const {
CHECK_EQ(self.operator->(), this);
const TensorDom& tdom = tensor_dom.at(self.output(0));
for (size_t i = 0; i < this->axis.size(); ++i) {
Range r = arith::Union(tdom.data.at(i)).cover_range(this->axis[i]->dom);