Closed. Changes from all commits (530 commits).
a71b34d
[ATTR] Introduce Integer container (#1994)
tqchen Oct 25, 2018
d74b7bb
[RELAY] Add structural hashing for Relay (#1977)
jroesch Oct 25, 2018
824db6f
fix typo in resnet definition (#1995)
eqy Oct 26, 2018
096fa48
[RELAY] Fix compilation under clang-4.0 (#1998)
tqchen Oct 26, 2018
f7b9f3b
[RELAY][OP] Split (#1876)
srkreddy1238 Oct 26, 2018
27d30f2
initialize base class in copy constructors (#2006)
yangchen-MS Oct 26, 2018
cf39ff1
[RELAY] Add occurs check before unification (#2012)
wweic Oct 27, 2018
2563f36
[RELAY]reshape_like (#1950)
siju-samuel Oct 27, 2018
f4b0383
[TOPI][CUDA] batched int8 conv2d (#1961)
vinx13 Oct 27, 2018
21dc6a4
[RELAY][OP] Fix conv2d NHWC type inference. (#2019)
tqchen Oct 28, 2018
247ea6d
[OPENCL][RUNTIME] Fix race condition of modules (#2018)
kazum Oct 28, 2018
d061fd4
[DOCKER] temporary revert cuda version to cuda8 (#2021)
tqchen Oct 28, 2018
af96077
save (#2015)
MarisaKirisame Oct 28, 2018
d331f1f
[TF] ignore Truncate in cast (#2022)
yzhliu Oct 28, 2018
9b0ec34
[DOCKER][GOLANG] fix golang version. (#2023)
srkreddy1238 Oct 28, 2018
8949716
[RELAY][PASS] FoldScaleAxis Forward (#2020)
tqchen Oct 28, 2018
8c352ab
Add attrs package (#2025)
jroesch Oct 29, 2018
d79633a
[intrin]support fmod for cuda (#1964)
xqdan Oct 29, 2018
d915318
Do not mutate GlobalVar's checked_type field. (#2026)
jroesch Oct 29, 2018
fd87cad
[Relay] DQN Port (#2009)
joshpoll Oct 29, 2018
0308989
[PASS]unroll loops with extent=1 (#2027)
xqdan Oct 29, 2018
1260671
[Relay] DCGAN port (#2010)
joshpoll Oct 29, 2018
bfc8c68
[RELAY]prelu op support (#2016)
siju-samuel Oct 29, 2018
5984fab
Refine porting x86 NCHWc conv to AutoTVM (#1993)
yzhliu Oct 29, 2018
4fb2d7e
[PASS] add a pass for the specific hardware accelarator when it is no…
kun-zh Oct 29, 2018
4823d55
[Frontend][MXNet] Change mxnet graph traversal from recursion to iter…
icemelon Oct 29, 2018
feca27e
[RELAY][PASS] FoldScaleAxis Backward (#2024)
tqchen Oct 30, 2018
ea74668
Conditional Loop Partitioning - Extending to remove if conditions (#1…
anijain2305 Oct 30, 2018
4f7da63
[YOLO]yolo op added in frontend and removed from topi (#1974)
siju-samuel Oct 30, 2018
7941ea6
Fix a bug in inject-virtual-thread (#2039)
kun-zh Oct 30, 2018
1570a1a
[TOPI][AUTOTVM] Improve style (#2034)
merrymercy Oct 30, 2018
6c4aa81
[RELAY][OP] Maketuple to be resolved when containing incompleteType …
tqchen Oct 30, 2018
be4b7c1
[RELAY][RUNTIME] Add Relay interpreter and compiler for TVM runtime s…
jroesch Oct 30, 2018
8400038
[TEAM] Add Zhi Chen as a reviewer. (#2040)
ZihengJiang Oct 31, 2018
a5a3651
[AUTOTVM] Misc fix to document and style (#2035)
merrymercy Oct 31, 2018
e74a8ca
Better gemm support for cublas and cpu (#1967)
cnuernber Oct 31, 2018
3bed874
[RELAY/PASS] Simplify inference. (#2033)
ZihengJiang Oct 31, 2018
6837dcb
[RELAY] MobileNet (#1997)
eqy Oct 31, 2018
8a6690d
[TOPI] Add dilation argument to conv2d and depthwise_conv2d (#1970)
vinx13 Oct 31, 2018
c4ada6c
[NNVM/TOPI][OP] gather_nd (#2041)
icemelon Oct 31, 2018
801ab88
[Cleanliness] [Easy] Make TVM leak-sanitizer and Wnon-virtual-dtor cl…
ajtulloch Nov 1, 2018
3fd770b
[RELAY][RUNTIME] Refactor interpreter and graph_runtime into consiste…
jroesch Nov 1, 2018
dff4bb6
Refine CMakeLists.txt (#2049)
lixiaoquan Nov 1, 2018
1baac57
[TOPI] Fix adding dilation arguments (#2047)
merrymercy Nov 1, 2018
75bbf44
[FRONTEND][TENSORFLOW] Enhancements. (#1923)
srkreddy1238 Nov 1, 2018
657ec0c
[NNVM][OP] Allow two input tensors with different type in reshape_lik…
icemelon Nov 2, 2018
e23116f
Rename relay::Environment to relay::Module (#2054)
jroesch Nov 2, 2018
7f01770
[RELAY][RUNTIME] Add compute and schedule attributes for all ops in r…
jroesch Nov 5, 2018
8617947
[RELAY][BACKEND] CompileEngine refactor. (#2059)
tqchen Nov 5, 2018
365b52b
print import_llvm ir in tensorize tutorial (#2064)
yzhliu Nov 6, 2018
187ee42
Add a testcase of dilated conv2d int8 (#2065)
vinx13 Nov 6, 2018
ba3aeb2
[FRONTEND][ONNX] fixed operator converter for Split in onnx frontend…
Nov 6, 2018
a0e8998
[CODEGEN][LLVM] Cache packed func ptr, lift alloca (#2070)
tqchen Nov 6, 2018
b9bfa76
Allow to use negative index of array in python (#2069)
vinx13 Nov 6, 2018
7499b46
fix asan check heap-use-after-free (#2071)
xqdan Nov 6, 2018
f77cf82
Fix a crash in android_deploy demo. (#2073)
DaoDaoSu Nov 7, 2018
f62740e
[Frontend][MXNet] argmax, argmin ops support (#2048)
imorinaga Nov 7, 2018
cf64a12
Fix conv2d int8 schedule on CUDA (#2074)
vinx13 Nov 9, 2018
733dac4
[OPENCL] Make use of cpu device when gpu device doesn't exist. (#2076)
lixiaoquan Nov 9, 2018
aa96b87
[TEAM] vinx13 -> Reviewer (#2083)
tqchen Nov 9, 2018
c1af6fc
[RELAY] CompileEngine update, nn conv2d, fix dense, pool. (#2082)
tqchen Nov 9, 2018
43ee739
[TVM] [NNPACK] Modernize and improve NNPACK bindings (#2084)
ajtulloch Nov 10, 2018
f4a24b2
Add NNPACK to CI (#2085)
ajtulloch Nov 10, 2018
073d43d
[TOPI][CUDA] int8 group conv2d (#2075)
vinx13 Nov 10, 2018
30aa366
[VTA] Improved RPC for VTA (#2043)
liangfu Nov 12, 2018
075f595
[TOPI] depthwise-conv2d in NCHW[x]c layout for x86 (#2045)
yzhliu Nov 13, 2018
fe6510c
[FRONTEND][ONNX]add Pad, ReduceMax, ReduceMin, ReduceMean and ReduceS…
Nov 13, 2018
528f684
[RELAY] Fix type info after mutation in simplify inference (#2093)
vinx13 Nov 13, 2018
b4cd00b
[RELAY][PASS] General OpFusion. (#2090)
tqchen Nov 13, 2018
0d1ba8c
[RELAY][OP] strided_slice (#2094)
tqchen Nov 13, 2018
c548266
[Relay][OP]NMS (#1929)
kevinthesun Nov 13, 2018
9731dff
[Jenkinsfile] Build NNPACK and run tests in `ci-cpu` (#2095)
ajtulloch Nov 14, 2018
e3bfedc
Fix error in fuse_ops.cc (#2098)
MarisaKirisame Nov 14, 2018
80ddfc1
Update fuse_ops.cc (#2102)
MarisaKirisame Nov 14, 2018
28d4c1c
[RELAY][PASS] Bind, FoldConstant (#2100)
tqchen Nov 14, 2018
654e8c5
[RELAY][PASS] FuseOps, fix input fusion rule for conv2d (#2110)
tqchen Nov 15, 2018
0f40aa2
[Bugfix] Recover original layout when alter_layout function return No…
yzhliu Nov 15, 2018
1387e78
[RELAY] bugfix type functor caching (#2113)
tqchen Nov 15, 2018
6e4e3b2
[RELAY][PASS] Make FoldConst context and target invariant (#2114)
tqchen Nov 15, 2018
d5dade4
[NNPACK] temporary disable nnpack test (#2115)
tqchen Nov 15, 2018
f666b42
Docs: Fix links (#2118)
ruslo Nov 15, 2018
e5443cd
Fix doc of strided_slice (#2103)
vinx13 Nov 15, 2018
6792040
[NNPACK] Add check for NNPACK being available (`nnp_initialize()` suc…
ajtulloch Nov 15, 2018
e46ac1a
clarify NNVM’s LLVM requirement (#2117)
headupinclouds Nov 16, 2018
194a373
[RELAY][[PASS] Consolidate ForwardRewrite pass. (#2124)
tqchen Nov 16, 2018
f6119e4
[TOPI] Improve performance for dilated convolution (#2107)
Rasterer Nov 17, 2018
55ee7c6
[nnvm] Add caffe2 frontend (#1981)
hlu1 Nov 18, 2018
03c78fa
[Relay] compute & schedule for relu, softmax (#2127)
yzhliu Nov 18, 2018
81ff1ef
[SCHEDULE] Fix boundary check (#2126)
junrushao Nov 18, 2018
6fef53b
[TOPHUB] fix x86 backend after introducing dilation (#2129)
merrymercy Nov 19, 2018
49bd3b0
[HYBRID FRONTEND] Modify hybrid script to new interface; hybrid op su…
were Nov 19, 2018
8ac7417
Update README.md typo (#2132)
javierlorenzod Nov 19, 2018
70e140f
Relay Op sprint (part 2) - Level 1 - log_softmax (#2128)
anijain2305 Nov 19, 2018
291dbfb
[COMMUNITY] new community guideline (#2077)
tqchen Nov 19, 2018
a51a120
[TOPI] Minor fix in the LSTM recipe (#2131)
junrushao Nov 19, 2018
5444c67
[WIP] [RPC] clean up uploaded modules (#2121)
eqy Nov 19, 2018
a43dd3b
[RELAY]sch & comp for ops in nn.py (#2092)
siju-samuel Nov 19, 2018
510c9d5
[RELAY][BACKEND] Enable PlanMemory in the graph runtime. (#2120)
tqchen Nov 19, 2018
7199a4a
[Relay][Op] Add test for batch_flatten (#2134)
slyubomirsky Nov 20, 2018
5ab9847
[RELAY]Slice_like support (#2014)
siju-samuel Nov 20, 2018
b51524b
[COMMUNITY] Update contributor list to reflect new guideline. (#2138)
tqchen Nov 20, 2018
af47719
Update CONTRIBUTORS.md
tqchen Nov 20, 2018
d3aa793
[TEAM] Huyuwei -> committer (#2139)
tqchen Nov 20, 2018
3359baf
[TEAM] adityaatluri -> committer (#2140)
tqchen Nov 20, 2018
99d8706
[TEAM] Laurawly -> committer (#2141)
tqchen Nov 20, 2018
699bc5b
[TEAM] Lianmin Zheng -> committer (#2142)
yzhliu Nov 20, 2018
b91c076
Add nick to committer (#2143)
icemelon Nov 20, 2018
253a356
Update CONTRIBUTORS.md
tqchen Nov 20, 2018
c1506ea
fix dcgan layer naming overlap (#2145)
joshpoll Nov 21, 2018
55c3cdf
Fix relative import in x86 conv2d (#2149)
vinx13 Nov 21, 2018
7725518
[RELAY] Move Layout to tvm Node system (#2125)
merrymercy Nov 21, 2018
984f4fb
tensorflow frontend supports user given outputs (#1913)
Nov 21, 2018
3d1a415
[FRONTEND][TENSORFLOW] Enable strided_slice with fix. (#2002)
srkreddy1238 Nov 21, 2018
54d776f
[FRONTEND][TENSORFLOW] Fix a typo in _matmul (#2152)
alexeyr Nov 21, 2018
b5bb67e
[Relay] Port LSTM to Relay for testing (#2011)
slyubomirsky Nov 21, 2018
150fb85
Alter op layout for group_conv2d on CUDA (#2148)
vinx13 Nov 21, 2018
02c28d7
[TOPI] Fix atlest1d for reduce and squeeze (#2147)
tqchen Nov 21, 2018
3f220e2
Reverse shape dims of weight type (#2155)
slyubomirsky Nov 22, 2018
d76c982
[DOCS] fix link (#2157)
imorinaga Nov 22, 2018
ef02bec
[APPS] add an external dll call example (#2156)
tqchen Nov 22, 2018
de004c5
[RELAY][PASS] CombineParallelConv2D (#2089)
vinx13 Nov 22, 2018
fc1a823
[RELAY]Testing Inception, Squeezenet, VGG port (#2013)
siju-samuel Nov 24, 2018
fd330f1
[Relay][Op] Add compute, schedule, and tests for expand_dims and sque…
slyubomirsky Nov 25, 2018
0fff6b8
Compare relay and numpy outputs in graph runtime test (#2164)
masahi Nov 25, 2018
29928a2
Relay reshape reshape_like compute and schedule (#2159)
siju-samuel Nov 25, 2018
30ea64f
[RELAY][FRONTEND] Initial MXNet frontend support. (#2163)
tqchen Nov 25, 2018
2377f6d
Fix str decoding error on non-English Windows (#2158)
kice Nov 25, 2018
ed8725d
[COMMUNITY] @phisiart -> Committer (#2165)
tqchen Nov 26, 2018
1a7e01e
[RELAY][OP] Move computes to cxx, enable concat as injective (#2166)
tqchen Nov 26, 2018
22950ab
[RELAY]sch and compute for reduce ops (#2091)
siju-samuel Nov 26, 2018
9b77dff
[PASS] PostOrderVisit (#2169)
ZihengJiang Nov 26, 2018
8b23838
[RELAY] Add multiref trigger to ForwardRewrite (#2168)
tqchen Nov 26, 2018
deeb5d3
[Relay] Register compute and schedule for upsampling, with miscellane…
masahi Nov 26, 2018
e29021e
[RELAY]take and transpose comp and schd (#2135)
siju-samuel Nov 26, 2018
5de489d
[Relay] Densenet benchmark (#2154)
slyubomirsky Nov 26, 2018
56853d2
[Relay][Pass] Fix CombineParallelConv2D (#2167)
vinx13 Nov 27, 2018
e8f28d0
[RELAY]full, full_like compute and schedule (#2170)
siju-samuel Nov 27, 2018
a93369d
[RELAY][IR] Introduce IdNode to preserve var id across rewriting (#2178)
tqchen Nov 27, 2018
f7f54e0
[Relay]resize op compute and schedule (#2172)
siju-samuel Nov 28, 2018
d329e2d
fixing nnvm tutorial typo (#2188)
manimo-rl Nov 28, 2018
0157a9a
[Relay]where compute and schedule (#2179)
siju-samuel Nov 28, 2018
f90be3f
[BACKEND][CODEGEN] C codegen with tests (#2161)
mutinifni Nov 28, 2018
69b1b63
[Tutorial]NLP Sequence to sequence model for translation (#1815)
siju-samuel Nov 29, 2018
7a69969
[FRONTEND][TENSORFLOW] Support AttrValue that has different types of …
lixiaoquan Nov 29, 2018
6de0572
[TOPI] Add tensor multiplication. (#2106)
Vooblin Nov 29, 2018
04a5ee7
Update comments for the API tvm.lower (#2193)
liangdzou Nov 29, 2018
85c7c78
[COMMUNITY] @grwlf -> Reviewer (#2190)
tqchen Nov 29, 2018
ce9a07e
dockerfile cpu changes (#2191)
joshpoll Nov 29, 2018
5cf3265
[DOCS] Introduction to Relay IR. (#2185)
tqchen Nov 29, 2018
8c7e409
[Relay]collapse_sum and broadcast_to compute & schedule (#2180)
siju-samuel Nov 29, 2018
618f6b1
[RELAY]missing schedules updated (#2196)
siju-samuel Nov 29, 2018
c749507
[TVM] Fix segfault for CanonicalSimplify(x % -1) (#2194)
sgrechanik-h Nov 29, 2018
e7307d6
Remove redundant item from langref/relay_op.rst (#2192)
liangdzou Nov 29, 2018
c754a79
Fix logging in autotvm record (#2195)
vinx13 Nov 29, 2018
6001894
fix llvm dependency bug (#2198)
mutinifni Nov 29, 2018
ae2c7fd
[Relay] Alter Op Layout (#2150)
merrymercy Nov 30, 2018
6ddfab9
[PASS] InstrumentBoundCheckers pass (#2079)
denis0x0D Nov 30, 2018
b90411a
[Relay] Add support for tuple node in operator fusion (#2187)
masahi Nov 30, 2018
3ca4326
NOTICE (#2203)
tqchen Nov 30, 2018
555308a
[TVM] Fix llvm codegen (div by power of 2) (#2204)
sgrechanik-h Nov 30, 2018
bd9d031
[Relay][Pass] Fold constant tuple (#2201)
vinx13 Nov 30, 2018
03fe5f7
added int type axis for relay reduce ops (#2199)
siju-samuel Nov 30, 2018
4eb187a
[SCHEDULE] Fix code lowering when loop condition depends on outer axi…
tqchen Dec 1, 2018
ab297f0
Python security issue about mktemp() and abspath() (#2202)
lihaozhehw Dec 1, 2018
ed6b9ad
[DOCKER] inheritate javahome (#2210)
tqchen Dec 1, 2018
a8a0dc2
Update arm cpu depthwise convolution based on latest code
FrozenGene Dec 1, 2018
5d60d6f
Modify lint issue
FrozenGene Dec 1, 2018
41d3295
Fix depthwise convolution infer shape error and lint issue.
FrozenGene Dec 1, 2018
b0664d6
Fix conv2d infer shape issue in HWOI kernel layout
FrozenGene Dec 1, 2018
5d9efaf
[RELAY][PASS] Memorize FoldScaleAxis backward transform result (#2214)
vinx13 Dec 2, 2018
0fd635c
Run verifier during LLVM code generation (#2211)
ajtulloch Dec 2, 2018
e6f5e21
[RELAY][OP] end to end support for pad op. (#2213)
srkreddy1238 Dec 2, 2018
8f89bbe
[DOC][Relay]: Add API docs for Relay. (#1750)
jroesch Dec 2, 2018
1cb2271
[RELAY] bugfix. (#2215)
srkreddy1238 Dec 2, 2018
29b2e01
[Relay][RFC] Relay IR Text Format (#1781)
joshpoll Dec 2, 2018
8fd1dfe
[Relay] Parser Tests (#2209)
joshpoll Dec 3, 2018
2403d1b
Fix misprint (#2223)
ruslo Dec 3, 2018
0bbbd81
[RELAY][PASS] Fix expr subst and CombineParallelConv2D (#2218)
vinx13 Dec 4, 2018
c9d6870
Port from_nnvm to NNVM as to_relay (#2144)
jroesch Dec 4, 2018
d2bf9a2
[DEBUG]Fix debugger message mess in display_debug_result (#2228)
Rasterer Dec 4, 2018
5062650
[RELAY][PASS] Check Positiveness in FoldScaleAxis (#2220)
vinx13 Dec 4, 2018
19c3d0e
Remove redact date_vec op
FrozenGene Dec 5, 2018
71d6423
[TOPHUB] Set vulkan as alias for opencl (#2230)
denis0x0D Dec 5, 2018
68fdb34
[contrib][nnpack] remove training-optimized ops (#2224)
hlu1 Dec 6, 2018
8afd9b5
[typo] fucntion ==> function (#2239)
liangdzou Dec 6, 2018
2003619
[typo] sin ==> in (#2238)
liangdzou Dec 6, 2018
35cdddf
fix dump ir (#2235)
xqdan Dec 6, 2018
301f979
Fix misprint (#2243)
ruslo Dec 6, 2018
79b3846
Add test case of argmax for detecting out of bound access (#2234)
FrozenGene Dec 6, 2018
46d755a
[typo] fucn => func (#2240)
liangdzou Dec 6, 2018
467f3c6
[FRONTEND][TENSORFLOW]Add Split and realdiv op support (#2123)
Rasterer Dec 6, 2018
0ae7aa5
Use unsafe_get in nnvm (#2247)
changlan Dec 6, 2018
98a6160
[COMMUNITY] @masahi -> Committer (#2252)
yzhliu Dec 7, 2018
97c1606
GetChar() in base64.h should return int, not char (#2255)
hcho3 Dec 7, 2018
9d9c283
add c backend to CreateTarget (#2256)
liangdzou Dec 7, 2018
f0b0383
Fix missing sigmoid intrinsic in C++ (#2231)
sergei-mironov Dec 7, 2018
3fd0ce4
allows constant param in op construct (#2257)
were Dec 8, 2018
e8a5694
Generate predicates for non-root iteration variables as well (#2258)
derisavi Dec 8, 2018
033cd47
[COMMUNITY] @ajtulloch -> Reviewer (#2236)
ZihengJiang Dec 8, 2018
130c7d5
[RUNTIME][GOLANG] TVM runtime for golang v0.1 (#1470)
srkreddy1238 Dec 9, 2018
aa0a7b5
Improve CanonicalSimplify to handle Min, Max(#2248) (#2261)
wweic Dec 9, 2018
83b0f5a
[Hybrid Script] Support logical and/or; support 0 < a < 5 clause (#2264)
were Dec 11, 2018
3c11bef
Fix serialization issue (#2263)
jroesch Dec 11, 2018
f1279dc
Allow long type values in shape list (#1806)
apivovarov Dec 11, 2018
a518a05
Fix misprint (#2272)
ruslo Dec 11, 2018
bbf441b
correct mistake in muladd function logic (#2269)
SeanDongX Dec 11, 2018
4444e75
[DOC]Remove non-existent parameter doc (#2277)
wweic Dec 12, 2018
45981a8
Testcases of onnx (#2274)
siju-samuel Dec 12, 2018
75b72c2
Fix a issue when running with graph_runtime_debug in python (#2271)
liangfu Dec 12, 2018
06940ee
[FRONTEND][TENSORFLOW] Bugfix (#2267)
srkreddy1238 Dec 12, 2018
55c9efd
[AUTOTVM] Use range in AnnotateSpace to fix JSON serialization (#2278)
vinx13 Dec 12, 2018
07bc51c
typo: Xlinx => Xilinx (#2283)
liangdzou Dec 13, 2018
7b06e38
[BUGFIX] [Hybrid Script] fix in-correct value index in hybrid script …
were Dec 13, 2018
85dca0f
[FRONTEND][TENSORFLOW] Support Unstack and Split (#2105)
alexeyr Dec 13, 2018
397241e
[DOC]Update documentation (#2286)
wweic Dec 14, 2018
4ad048a
[DOC] fix installation doc (#2290)
pn11 Dec 14, 2018
946b5aa
[RELAY] Fix alter_op_layout (#2289)
merrymercy Dec 14, 2018
c287e0c
[TOPI] NCHWc added input shape 4 condition, intel graphics conv2d sch…
Laurawly Dec 14, 2018
93ba792
[CI] Golang unit test trigger for Jenkins (#2266)
srkreddy1238 Dec 15, 2018
f2e1bbf
[RELAY] Support concatenate. (#2298)
ZihengJiang Dec 17, 2018
0728920
[RELAY] Add broadcast_to operator (#2276)
jroesch Dec 18, 2018
1739cf4
added error checking to loading symbol json (#2301)
samskalicky Dec 18, 2018
94ff83a
[Relay][doc] Update the description of returns in mxnet.py (#2309)
zhiics Dec 18, 2018
6fb304d
[PASS] Avoid recursion in FoldScaleAxis (#2299)
tqchen Dec 18, 2018
36d6318
add relay and autotvm in readme (#2312)
yongwww Dec 18, 2018
19c4ba1
Bundled interpreter demo (#2297)
ajtulloch Dec 19, 2018
1d148c1
[Hybrid Script] Inter-function call supported! (#2287)
were Dec 19, 2018
f007917
[DOC] Codebase walkthrough with vector add example (#2273)
masahi Dec 20, 2018
f6f21c2
[TVM] Move check_numerical_grads to tvm.testing_ (#2314)
sgrechanik-h Dec 20, 2018
6feb5b8
[relay][op] multibox_transform_loc (#2315)
zhiics Dec 20, 2018
e54408d
[COMMUNITY] @eqy -> Committer (#2311)
icemelon Dec 20, 2018
e78e432
[Relay][Frontend] Add MXNet test example for relay (#2316)
icemelon Dec 20, 2018
21e3a5d
[BUGFIX] Seg fault in memory planing for symbolic shape (#2317)
icemelon Dec 21, 2018
8aee172
Small refactors and bug fixes. (#2281)
jroesch Dec 22, 2018
bb9e184
[NNVM] Fix dtype of output of pad. (#2331)
lixiaoquan Dec 24, 2018
e12f310
[ROCM] Make sure all bit code files exist (#2323)
masahi Dec 24, 2018
a2e77a8
[Relay][docs] Details on comp. graphs in Relay dev intro (#2324)
slyubomirsky Dec 24, 2018
2be6673
[RELAY] Add missing arg in vgg (#2329)
vinx13 Dec 24, 2018
d37c088
[Relay][Docs] Fix broken bullet points in Relay operator addition tut…
slyubomirsky Dec 24, 2018
7d4ea4d
[RELAY][AUTOTVM] Extract tuning tasks from Relay programs (#2181)
eqy Dec 24, 2018
d7ff19a
[FRONTEND][TENSORFLOW] Bugfix (#2326)
dominicsymes Dec 24, 2018
3a187c9
[DOCS] typo "@func myfunc" => "func @myfunc" (#2333)
liangdzou Dec 25, 2018
fa1315a
[relay][frontend] Enable ssd test by attaching schedules to multibox …
zhiics Dec 25, 2018
9aabf96
Add a the ability to trigger debugging in the interpreter without rec…
jroesch Dec 25, 2018
9c48af8
[TOPI][CUDA] Add reorder option in int8 conv2d (#2327)
vinx13 Dec 25, 2018
fcb0981
[RELAY] Inline scalar compute (#2335)
vinx13 Dec 26, 2018
97dd830
[NNVM] Fix dtype of output of mean. (#2334)
lixiaoquan Dec 26, 2018
7e9e45d
[Relay][OP] Add cast op (#2319)
icemelon Dec 26, 2018
8d91569
[COMMUNITY] @srkreddy1238 -> Committer (#2339)
tqchen Dec 27, 2018
bfc259b
Merge branch 'master' into arm_cpu_depthwise_convolution
FrozenGene Dec 27, 2018
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -1 +1 @@
-Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from others in the community.
+Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/dmlc/tvm/blob/master/CONTRIBUTORS.md#reviewers).
28 changes: 25 additions & 3 deletions .gitignore
@@ -91,10 +91,8 @@ ENV/
*~
*.pyc
*~
build
config.mk
config.cmake
build_*
Win32
*.dir
perf
@@ -179,15 +177,39 @@ perf
*.h5
synset.txt
cat.jpg
cat.png
docs.tgz
cat.png
*.mlmodel
tvm_u.*
tvm_t.*
# Mac OS X
.DS_Store
build*

# Jetbrain
.idea
.ipython
.jupyter
.nv
.pylint.d
.python_history
.pytest_cache
.local

# tmp file
.nfs*

# keys
*.pem
*.p12
*.pfx
*.cer
*.crt
*.der

# patch sentinel
patched.txt

# Python type checking
.mypy_cache/
.pyre/
6 changes: 3 additions & 3 deletions .gitmodules
@@ -1,9 +1,9 @@
 [submodule "dmlc-core"]
-  path = dmlc-core
+  path = 3rdparty/dmlc-core
   url = https://github.com/dmlc/dmlc-core
 [submodule "HalideIR"]
-  path = HalideIR
+  path = 3rdparty/HalideIR
   url = https://github.com/dmlc/HalideIR
 [submodule "dlpack"]
-  path = dlpack
+  path = 3rdparty/dlpack
   url = https://github.com/dmlc/dlpack
1 change: 1 addition & 0 deletions 3rdparty/HalideIR
Submodule HalideIR added at a08e26
210 changes: 210 additions & 0 deletions 3rdparty/compiler-rt/builtin_fp16.h
@@ -0,0 +1,210 @@
/*
 * Copyright (c) 2009-2015 by llvm/compiler-rt contributors
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
 */

/*!
 * Copyright (c) 2018 by Contributors
 * \file builtin_fp16.h
 * \brief Functions for conversion between fp32 and fp16, adopted from compiler-rt.
 */

#include <cstdint>

static inline uint32_t __clz(uint32_t x) {
  // count leading zeros
  int n = 32;
  uint32_t y;

  y = x >> 16; if (y) { n = n - 16; x = y; }
  y = x >> 8;  if (y) { n = n - 8;  x = y; }
  y = x >> 4;  if (y) { n = n - 4;  x = y; }
  y = x >> 2;  if (y) { n = n - 2;  x = y; }
  y = x >> 1;  if (y) return n - 2;
  return n - x;
}

template <typename SRC_T, typename SRC_REP_T, int SRC_SIG_BITS,
          typename DST_T, typename DST_REP_T, int DST_SIG_BITS>
static inline DST_T __truncXfYf2__(SRC_T a) {
  // Various constants whose values follow from the type parameters.
  // Any reasonable optimizer will fold and propagate all of these.
  const int srcBits = sizeof(SRC_T) * 8;
  const int srcExpBits = srcBits - SRC_SIG_BITS - 1;
  const int srcInfExp = (1 << srcExpBits) - 1;
  const int srcExpBias = srcInfExp >> 1;

  const SRC_REP_T srcMinNormal = SRC_REP_T(1) << SRC_SIG_BITS;
  const SRC_REP_T srcSignificandMask = srcMinNormal - 1;
  const SRC_REP_T srcInfinity = (SRC_REP_T)srcInfExp << SRC_SIG_BITS;
  const SRC_REP_T srcSignMask = SRC_REP_T(1) << (SRC_SIG_BITS + srcExpBits);
  const SRC_REP_T srcAbsMask = srcSignMask - 1;
  const SRC_REP_T roundMask = (SRC_REP_T(1) << (SRC_SIG_BITS - DST_SIG_BITS)) - 1;
  const SRC_REP_T halfway = SRC_REP_T(1) << (SRC_SIG_BITS - DST_SIG_BITS - 1);
  const SRC_REP_T srcQNaN = SRC_REP_T(1) << (SRC_SIG_BITS - 1);
  const SRC_REP_T srcNaNCode = srcQNaN - 1;

  const int dstBits = sizeof(DST_T) * 8;
  const int dstExpBits = dstBits - DST_SIG_BITS - 1;
  const int dstInfExp = (1 << dstExpBits) - 1;
  const int dstExpBias = dstInfExp >> 1;

  const int underflowExponent = srcExpBias + 1 - dstExpBias;
  const int overflowExponent = srcExpBias + dstInfExp - dstExpBias;
  const SRC_REP_T underflow = (SRC_REP_T)underflowExponent << SRC_SIG_BITS;
  const SRC_REP_T overflow = (SRC_REP_T)overflowExponent << SRC_SIG_BITS;

  const DST_REP_T dstQNaN = DST_REP_T(1) << (DST_SIG_BITS - 1);
  const DST_REP_T dstNaNCode = dstQNaN - 1;

  // Break a into a sign and representation of the absolute value
  const union { SRC_T f; SRC_REP_T i; } src_rep = {.f = a};
  const SRC_REP_T aRep = src_rep.i;
  const SRC_REP_T aAbs = aRep & srcAbsMask;
  const SRC_REP_T sign = aRep & srcSignMask;
  DST_REP_T absResult;

  if (aAbs - underflow < aAbs - overflow) {
    // The exponent of a is within the range of normal numbers in the
    // destination format. We can convert by simply right-shifting with
    // rounding and adjusting the exponent.
    absResult = aAbs >> (SRC_SIG_BITS - DST_SIG_BITS);
    absResult -= (DST_REP_T)(srcExpBias - dstExpBias) << DST_SIG_BITS;

    const SRC_REP_T roundBits = aAbs & roundMask;
    // Round to nearest
    if (roundBits > halfway)
      absResult++;
    // Ties to even
    else if (roundBits == halfway)
      absResult += absResult & 1;
  }
  else if (aAbs > srcInfinity) {
    // a is NaN.
    // Conjure the result by beginning with infinity, setting the qNaN
    // bit and inserting the (truncated) trailing NaN field.
    absResult = (DST_REP_T)dstInfExp << DST_SIG_BITS;
    absResult |= dstQNaN;
    absResult |= ((aAbs & srcNaNCode) >> (SRC_SIG_BITS - DST_SIG_BITS)) & dstNaNCode;
  }
  else if (aAbs >= overflow) {
    // a overflows to infinity.
    absResult = (DST_REP_T)dstInfExp << DST_SIG_BITS;
  }
  else {
    // a underflows on conversion to the destination type or is an exact
    // zero. The result may be a denormal or zero. Extract the exponent
    // to get the shift amount for the denormalization.
    const int aExp = aAbs >> SRC_SIG_BITS;
    const int shift = srcExpBias - dstExpBias - aExp + 1;

    const SRC_REP_T significand = (aRep & srcSignificandMask) | srcMinNormal;

    // Right shift by the denormalization amount with sticky.
    if (shift > SRC_SIG_BITS) {
      absResult = 0;
    } else {
      const bool sticky = significand << (srcBits - shift);
      SRC_REP_T denormalizedSignificand = significand >> shift | sticky;
      absResult = denormalizedSignificand >> (SRC_SIG_BITS - DST_SIG_BITS);
      const SRC_REP_T roundBits = denormalizedSignificand & roundMask;
      // Round to nearest
      if (roundBits > halfway)
        absResult++;
      // Ties to even
      else if (roundBits == halfway)
        absResult += absResult & 1;
    }
  }

  // Apply the signbit to (DST_T)abs(a).
  const DST_REP_T result = absResult | sign >> (srcBits - dstBits);
  const union { DST_T f; DST_REP_T i; } dst_rep = {.i = result};
  return dst_rep.f;
}

template <typename SRC_T, typename SRC_REP_T, int SRC_SIG_BITS,
          typename DST_T, typename DST_REP_T, int DST_SIG_BITS>
static inline DST_T __extendXfYf2__(SRC_T a) {
  // Various constants whose values follow from the type parameters.
  // Any reasonable optimizer will fold and propagate all of these.
  const int srcBits = sizeof(SRC_T) * 8;
  const int srcExpBits = srcBits - SRC_SIG_BITS - 1;
  const int srcInfExp = (1 << srcExpBits) - 1;
  const int srcExpBias = srcInfExp >> 1;

  const SRC_REP_T srcMinNormal = SRC_REP_T(1) << SRC_SIG_BITS;
  const SRC_REP_T srcInfinity = (SRC_REP_T)srcInfExp << SRC_SIG_BITS;
  const SRC_REP_T srcSignMask = SRC_REP_T(1) << (SRC_SIG_BITS + srcExpBits);
  const SRC_REP_T srcAbsMask = srcSignMask - 1;
  const SRC_REP_T srcQNaN = SRC_REP_T(1) << (SRC_SIG_BITS - 1);
  const SRC_REP_T srcNaNCode = srcQNaN - 1;

  const int dstBits = sizeof(DST_T) * 8;
  const int dstExpBits = dstBits - DST_SIG_BITS - 1;
  const int dstInfExp = (1 << dstExpBits) - 1;
  const int dstExpBias = dstInfExp >> 1;

  const DST_REP_T dstMinNormal = DST_REP_T(1) << DST_SIG_BITS;

  // Break a into a sign and representation of the absolute value
  const union { SRC_T f; SRC_REP_T i; } src_rep = {.f = a};
  const SRC_REP_T aRep = src_rep.i;
  const SRC_REP_T aAbs = aRep & srcAbsMask;
  const SRC_REP_T sign = aRep & srcSignMask;
  DST_REP_T absResult;

  // If sizeof(SRC_REP_T) < sizeof(int), the subtraction result is promoted
  // to (signed) int. To avoid that, explicitly cast to SRC_REP_T.
  if ((SRC_REP_T)(aAbs - srcMinNormal) < srcInfinity - srcMinNormal) {
    // a is a normal number.
    // Extend to the destination type by shifting the significand and
    // exponent into the proper position and rebiasing the exponent.
    absResult = (DST_REP_T)aAbs << (DST_SIG_BITS - SRC_SIG_BITS);
    absResult += (DST_REP_T)(dstExpBias - srcExpBias) << DST_SIG_BITS;
  }
  else if (aAbs >= srcInfinity) {
    // a is NaN or infinity.
    // Conjure the result by beginning with infinity, then setting the qNaN
    // bit (if needed) and right-aligning the rest of the trailing NaN
    // payload field.
    absResult = (DST_REP_T)dstInfExp << DST_SIG_BITS;
    absResult |= (DST_REP_T)(aAbs & srcQNaN) << (DST_SIG_BITS - SRC_SIG_BITS);
    absResult |= (DST_REP_T)(aAbs & srcNaNCode) << (DST_SIG_BITS - SRC_SIG_BITS);
  }
  else if (aAbs) {
    // a is denormal.
    // renormalize the significand and clear the leading bit, then insert
    // the correct adjusted exponent in the destination type.
    const int scale = __clz(aAbs) - __clz(srcMinNormal);
    absResult = (DST_REP_T)aAbs << (DST_SIG_BITS - SRC_SIG_BITS + scale);
    absResult ^= dstMinNormal;
    const int resultExponent = dstExpBias - srcExpBias - scale + 1;
    absResult |= (DST_REP_T)resultExponent << DST_SIG_BITS;
  }
  else {
    // a is zero.
    absResult = 0;
  }

  // Apply the signbit to (DST_T)abs(a).
  const DST_REP_T result = absResult | (DST_REP_T)sign << (dstBits - srcBits);
  const union { DST_T f; DST_REP_T i; } dst_rep = {.i = result};
  return dst_rep.f;
}
1 change: 1 addition & 0 deletions 3rdparty/dlpack
Submodule dlpack added at bee4d1
1 change: 1 addition & 0 deletions 3rdparty/dmlc-core
Submodule dmlc-core added at 519d01