Conversation

@jcf94
Contributor

@jcf94 jcf94 commented May 26, 2021

Add fast_softmax support in fast_math pass.

P.S. The AutoScheduler can easily support the fast_softmax implementation.
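For context, a "fast" softmax typically trades some numerical accuracy for speed by replacing the exact exponential with a cheap approximation. Below is a minimal pure-Python sketch of that idea; the `fast_exp` polynomial is illustrative only and is not TVM's actual topi implementation:

```python
import math

def fast_exp(x):
    # Illustrative cheap exp: write exp(x) as 2**n * exp(f) by splitting
    # x / ln(2) into an integer part n and a fractional part, then use
    # ldexp for the power of two and a low-order Taylor polynomial for
    # the remainder. Accuracy is limited; this is only a sketch.
    y = x * 1.4426950408889634      # x / ln(2)
    n = math.floor(y)
    f = (y - n) * 0.6931471805599453  # remainder mapped back to [0, ln 2)
    # 4th-order Taylor polynomial for exp(f)
    p = 1.0 + f * (1.0 + f * (0.5 + f * (1.0 / 6.0 + f / 24.0)))
    return math.ldexp(p, n)

def fast_softmax(xs):
    # Same structure as the exact softmax (max-subtraction kept for
    # stability), but with the approximate exponential.
    m = max(xs)
    exps = [fast_exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]
```

On inputs like `[1.0, 2.0, 3.0]` this stays within about 1% of the exact softmax, which is the kind of accuracy/speed trade-off the FastMath pass opts into.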

@jcf94 jcf94 requested review from comaniac and merrymercy May 26, 2021 11:37
@comaniac comaniac merged commit 4344540 into apache:main May 26, 2021
@comaniac
Contributor

Thanks @jcf94

@merrymercy
Member

This PR will break other code that does not use the auto-scheduler.
If an existing script enables the FastMath pass without using the auto-scheduler, its performance will degrade after this PR (on CPU backends), or it will fail to run entirely (on GPU backends).

@jcf94
Contributor Author

jcf94 commented May 27, 2021

This PR will break other code that does not use the auto-scheduler.
If an existing script enables the FastMath pass without using the auto-scheduler, its performance will degrade after this PR (on CPU backends), or it will fail to run entirely (on GPU backends).

Yeah, I'm currently writing CUDA schedules for fast_softmax, and this does turn out to be a problem.

@jcf94 jcf94 deleted the fast_softmax branch May 27, 2021 14:55
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Jun 17, 2021
* Add fast_softmax support in fast_math pass

* Lintfix

* Update
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Jun 17, 2021
* Add fast_softmax support in fast_math pass

* Lintfix

* Update