Conversation
loubnabnl left a comment:
Looks great! I just left a comment on the order of the losses and the batch size we can fit
```python
    batch = {k: v[:args.train_batch_size_select] for k, v in batch.items()}
elif args.selection_method == "rholoss":
    with torch.no_grad():
        out = model(batch, labels=batch, use_cache=False).loss
```
Comment about the batch size: we're assuming we can fit a batch size of 320 across our workers, but I think we can only fit 12 sequences per A100 40GB (so on 16 workers: a batch of 16 * 12 = 192).
We should probably either incorporate gradient accumulation and store the losses over 2 iterations (2 * 10 (small bz) * 16 GPUs = 320), or change the batch sizes from 320/32 to something that suits us while keeping the 10% ratio, like 160/16. The paper only talks about the 10% ratio, but I'm not sure whether using large batches is also important.
There are no gradients here, which means that (a) we can likely fit a bigger batch size than 12, and (b) instead of gradient accumulation we can just run multiple forward passes right after one another and store the losses if the full batch doesn't fit.
Yes, right! By grad acc. I also meant doing similar iterations over the losses.
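To make the option discussed above concrete, here is a minimal sketch of running several forward passes back to back under `torch.no_grad()` and storing the losses until the selection pool reaches 320 sequences, then keeping the top 10% (32) by loss. The helper names (`per_sequence_losses`, `select_top_loss`), the chunk size of 12, and the use of a `reduction="none"` cross-entropy to get one loss per sequence are illustrative assumptions, not the script's actual implementation; the full RHO-Loss criterion also subtracts a holdout model's loss, which this sketch omits.

```python
import torch
import torch.nn.functional as F


def per_sequence_losses(model, input_ids):
    # Hypothetical helper: per-sequence causal LM loss (mean over tokens).
    # model(..., labels=...).loss returns a single scalar mean, so we
    # recompute the cross-entropy with reduction="none" instead.
    with torch.no_grad():
        logits = model(input_ids, use_cache=False).logits
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    token_loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    ).view(shift_labels.size())
    return token_loss.mean(dim=1)  # one loss per sequence


def select_top_loss(model, input_ids, pool_size=320, select_size=32, chunk_size=12):
    # Run several forward passes right after one another (no gradients, so
    # no real grad accumulation is needed) and store the losses until the
    # whole pool has been scored.
    assert input_ids.size(0) == pool_size
    losses = torch.cat(
        [per_sequence_losses(model, chunk) for chunk in input_ids.split(chunk_size)]
    )
    # Keep the 10% of sequences with the highest loss (pool_size / select_size = 10).
    top_idx = losses.topk(select_size).indices
    return input_ids[top_idx]
```

Since nothing here requires a backward pass, the per-GPU chunk size only bounds activation memory; the effective selection pool of 320 is reached purely by concatenating the stored losses.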
(Three resolved review threads on examples/research_projects/codeparrot/scripts/codeparrot_training.py, marked outdated.)
…ing.py Co-authored-by: Loubna Ben Allal <44069155+loubnabnl@users.noreply.github.com>
…ing.py Co-authored-by: Loubna Ben Allal <44069155+loubnabnl@users.noreply.github.com>
…ing.py Co-authored-by: Loubna Ben Allal <44069155+loubnabnl@users.noreply.github.com>
…nd update accelerate
Summary of the changes I added:
Amazing work! Do you want me to add the last point you mentioned?
You can add it if you have time; otherwise I will add it later 🤗
Done, but not tested. May have a bug 👻
* Typos/fixes to link syntax
* Trying section headers
* Add header formatting for Rule #3
* added flash attention for opt
* added to list
* fix use cache (#3)
* style fix
* fix text
* test fix2
* reverted until 689f599
* torch fx tests are working now!
* small fix
* added TODO docstring
* changes
* comments and .md file modification

---------

Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Add a convenience method for building in your own name scope
* Second attempt at auto layer building
* Revert "Second attempt at auto layer building" (this reverts commit e03a3aaecf9ec41a805582b83cbdfe3290a631be)
* Attempt #3
* Revert "Attempt #3" (this reverts commit b9df7a0857560d29b5abbed6127d9e9eca77cf47)
* Add missing attributes that we're going to need later
* Add some attributes we're going to need later
* A fourth attempt! Feel the power flow through you!
* Revert "A fourth attempt! Feel the power flow through you!" (this reverts commit 6bf4aaf3875d6f28485f50187617a4c616c8aff7)
* Add more values we'll need later
* TF refactor that we'll need later
* Revert "TF refactor that we'll need later" (this reverts commit ca07202fb5b7b7436b893baa8d688b4f348ea7b9)
* Revert "Revert "TF refactor that we'll need later"" (this reverts commit 1beb0f39f293ed9c27594575e1c849aadeb15c13)
* make fixup
* Attempt five!
* Revert "Attempt five!" (this reverts commit 3302207958dfd0374b0447a51c06eea51a506044)
* Attempt six - this time don't add empty methods
* Revert "Attempt six - this time don't add empty methods" (this reverts commit 67d60129be75416b6beb8f47c7d38d77b18d79bb)
* Attempt seven - better base model class detection!
* Revert "Attempt seven - better base model class detection!" (this reverts commit 5f14845e92ea0e87c598da933bfbfee10f553bc9)
* Another attribute we'll need later
* Try again with the missing attribute!
* Revert "Try again with the missing attribute!" (this reverts commit 760c6f30c5dffb3e04b0e73c34a77d1882a0fef7)
* This is the attempt that will pierce the heavens!
* Revert "This is the attempt that will pierce the heavens!" (this reverts commit c868bb657de057aca7a5260350a3f831fc4dfee6)
* Attempt seven - snag list is steadily decreasing
* Revert "Attempt seven - snag list is steadily decreasing" (this reverts commit 46fbd975deda64429bfb3e5fac4fc0370c00d316)
* Attempt eight - will an empty snag list do it?
* Revert "Attempt eight - will an empty snag list do it?" (this reverts commit 7c8a3c2b083253649569e9877e02054ae5cec67b)
* Fixes to Hubert issues that cause problems later
* Trying again with Conv1D/SeparableConv fixes
* Revert "Trying again with Conv1D/SeparableConv fixes" (this reverts commit 55092bca952bc0f750aa1ffe246a640bf1e2036e)
* Apply the build shape fixes to Wav2Vec2 as well
* One more attempt!
* Revert "One more attempt!" (this reverts commit 5ac3e4cb01b9458cc93312873725f9444ae7261c)
* Another attempt!
* Revert "Another attempt!" (this reverts commit ea16d890e019d7de8792a3b8e72f3b1c02adae50)
* Let's see how many failures we get without the internal build method
* Fix OpenAI
* Fix MobileBERT
* (Mostly) fix GroupVIT
* Fix BLIP
* One more BLIP fix
* One more BLIP fix!
* Fix Regnet
* Finally fully fix GroupViT
* Fix Data2Vec and add the new AdaptivePool
* Fix Segformer
* Fix Albert
* Fix Deberta/DebertaV2
* Fix XLM
* Actually fix XLM
* Fix Flaubert
* Fix lxmert
* Fix Resnet
* Fix ConvBERT
* Fix ESM
* Fix Convnext / ConvnextV2
* Fix SAM
* Fix Efficientformer
* Fix LayoutLMv3
* Fix speech_to_text
* Fix mpnet and mobilevit
* Fix Swin
* Fix CTRL
* Fix CVT
* Fix DPR
* Fix Wav2Vec2
* Fix T5
* Fix Hubert
* Fix GPT2
* Fix Whisper
* Fix DeiT
* Fix the encoder-decoder / dual-encoder classes
* make fix-copies
* build in name scope
* Fix summarization test
* Fix tied weight names for BART + Blenderbot
* Fix tied weight name building
* Fix to TFESM weight building
* Update TF SAM
* Expand all the shapes out into Big Boy Shapes
No description provided.