
render equations in markdown files #1721

Merged
wanghan-iapcm merged 11 commits into deepmodeling:devel from njzjz:math
May 21, 2022

Conversation

@njzjz (Member) commented May 19, 2022

@codecov-commenter commented May 19, 2022

Codecov Report

Merging #1721 (d3baa41) into devel (962c9a8) will not change coverage.
The diff coverage is n/a.

```
@@           Coverage Diff           @@
##            devel    #1721   +/-   ##
=======================================
  Coverage   76.11%   76.11%
=======================================
  Files          95       95
  Lines        7929     7929
=======================================
  Hits         6035     6035
  Misses       1894     1894
```

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 962c9a8...d3baa41. Read the comment docs.

@njzjz njzjz marked this pull request as ready for review May 20, 2022 00:08

where $L_e$, $L_f$, and $L_v$ denote the losses in energy, force, and virial, respectively, and $p_e$, $p_f$, and $p_v$ are the prefactors of the energy, force, and virial losses. The prefactors are not necessarily constant; rather, they change linearly with the learning rate. Taking the force prefactor as an example, at training step $t$ it is given by

$$p_f(t) = p_f^0 \frac{ \alpha(t) }{ \alpha(0) } + p_f^\infty ( 1 - \frac{ \alpha(t) }{ \alpha(0) })$$
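The interpolation above can be sketched in a few lines of Python. This is a minimal illustration, not the DeePMD-kit implementation; per the review comment below, $p_f^0$ is the value set by `start_pref_f`, while the limit value $p_f^\infty$ is assumed here to come from an analogous option (called `limit_pref_f` in this sketch).

```python
def force_prefactor(lr_t, lr_0, start_pref_f, limit_pref_f):
    """p_f(t) = p_f^0 * a(t)/a(0) + p_f^inf * (1 - a(t)/a(0)).

    lr_t / lr_0 is the current-to-initial learning-rate ratio a(t)/a(0).
    """
    ratio = lr_t / lr_0
    return start_pref_f * ratio + limit_pref_f * (1.0 - ratio)

# At t = 0 the learning rate equals its initial value, so p_f = start_pref_f:
print(force_prefactor(1e-3, 1e-3, 1000.0, 1.0))  # 1000.0
# As the learning rate decays toward zero, p_f approaches limit_pref_f:
print(force_prefactor(0.0, 1e-3, 1000.0, 1.0))   # 1.0
```

As the learning rate decays, the loss thus shifts weight smoothly from the starting prefactor to the limiting one.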
Collaborator:

We should tell the user that $p_f^0$ is set by `start_pref_f`, and so on.

@wanghan-iapcm (Collaborator) commented May 20, 2022:

Or we do not use LaTeX math here.

@njzjz (Member, Author):

I kept both equations.

where `t` is the training step.
* During the training, the learning rate decays exponentially from `start_lr` to `stop_lr` following the formula:

$$ \alpha(t) = \alpha_0 \lambda ^ { t / \tau } $$
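The decay formula can be checked with a short sketch. This is a hedged illustration, not the library's code: $\alpha_0$ corresponds to `start_lr`, and the decay rate $\lambda$ is assumed here to be derived from `start_lr`, `stop_lr`, the total number of training steps, and the decay interval $\tau$ so that the rate reaches `stop_lr` at the final step.

```python
def learning_rate(t, start_lr, decay_rate, decay_steps):
    """a(t) = a_0 * lambda ** (t / tau)."""
    return start_lr * decay_rate ** (t / decay_steps)

def decay_rate_for(start_lr, stop_lr, num_steps, decay_steps):
    """Choose lambda so that a(num_steps) == stop_lr (an assumption of
    this sketch, matching the exponential decay described above)."""
    return (stop_lr / start_lr) ** (decay_steps / num_steps)

start_lr, stop_lr = 1e-3, 1e-8
rate = decay_rate_for(start_lr, stop_lr, num_steps=1_000_000, decay_steps=5000)
print(learning_rate(0, start_lr, rate, 5000))          # 0.001 (start_lr)
print(learning_rate(1_000_000, start_lr, rate, 5000))  # ~1e-8 (stop_lr)
```

With these settings the rate is multiplied by the same factor every `decay_steps` steps, giving the exponential decay from `start_lr` to `stop_lr`.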
Collaborator:

Again, we should tell the user that $\alpha_0$ is set by `start_lr`. Or we do not use LaTeX math.

@wanghan-iapcm wanghan-iapcm merged commit eb2a3c3 into deepmodeling:devel May 21, 2022
@njzjz njzjz deleted the math branch May 21, 2022 02:56
mingzhong15 pushed a commit to mingzhong15/deepmd-kit that referenced this pull request Jan 15, 2023
* render equations in markdown files

* enable dollarmath in the sphinx

* fix Angstrom symbol; it's not supported by MathJax

* fix equations of the learning rate

* fix equations

* fix equations

* fix equations

* fix equations

* Update lammps-command.md

* use \cdots for ...

* add the detailed keys

3 participants