BIT-601 Scaling law on EMA loss

Merged

Conversation
The neural language model scaling law [1] is typically meant to be computed on a loss averaged over the entire training data. Currently it is computed within-batch only, which frequently sees losses below 1.69 (the natural entropy of text). Here we now compute the scaling law, and the resultant effective number of model parameters, on the exponentially moving average (EMA) loss for a server, which should greatly improve the definition of the result.
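For reference, here is a minimal sketch of the loss-to-parameters inversion, assuming the constants from Kaplan et al. [1] (N_c ≈ 8.8e13, alpha_N ≈ 0.076) and a floor at the 1.69-nat entropy of text; the actual scaling_law_loss_to_params in the codebase may differ in detail:

import torch

def scaling_law_loss_to_params(loss: torch.Tensor) -> torch.Tensor:
    # Invert the scaling law L(N) = (N_c / N) ** alpha_N from Kaplan et al. [1]:
    #   N = N_c / L ** (1 / alpha_N), with N_c ~ 8.8e13 and alpha_N ~ 0.076 (assumed).
    # Clamp the loss at 1.69 nats (the natural entropy of text) so that
    # below-entropy losses do not inflate the parameter estimate.
    clamped = torch.clamp(loss, min=1.69)
    return torch.exp(torch.log(torch.tensor(8.8e13)) - torch.log(clamped) / 0.076)

Smoothing the loss with an EMA before this inversion means transient within-batch dips below 1.69 no longer pin the estimate at the clamp boundary.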
Contributor
Author
Comparative analysis of BIT-601

We compare the neuron stats of two types of Nakamoto validators: one on the current master branch, and the other on the BIT-601 branch. The change from master is that BIT-601 now applies the scaling law to the average loss across multiple batches, instead of to each batch_loss separately as master does.

master:

# estimate the effective number of model parameters from the batch_loss
_num_params = scaling_law_loss_to_params(_loss)

BIT-601:

# estimate the effective number of model parameters from EMA loss
_num_params = scaling_law_loss_to_params(torch.tensor(stats['loss_nxt']))

We expect a change in the estimated effective number of model parameters.
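For context, here is a hypothetical sketch of the EMA bookkeeping behind stats['loss_nxt']; the key name follows the snippet above, but the smoothing factor alpha and the exact update rule are assumptions rather than the validator's actual implementation:

def update_ema_loss(stats: dict, batch_loss: float, alpha: float = 0.05) -> None:
    # Exponentially weighted moving average of the per-batch loss; the first
    # observed batch seeds the average. alpha = 0.05 is an assumed smoothing
    # factor, not taken from the repository.
    if 'loss_nxt' not in stats:
        stats['loss_nxt'] = batch_loss
    else:
        stats['loss_nxt'] = (1 - alpha) * stats['loss_nxt'] + alpha * batch_loss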
[1] (OpenAI scaling laws) Kaplan, Jared, et al. "Scaling laws for neural language models." arXiv:2001.08361 (2020).