Add optimized training: 14% faster (107→92 ms/step on M3 Max) #21
Closed
tomdif wants to merge 2 commits into maderix:main from
Conversation
New train_opt target with NEON-vectorized Adam, fp16 activation/gradient caching, concurrent dW dispatch, pre-allocated buffers, and optional Metal GPU support. Tested on M3 Max with stories110M. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed from 3b188f5 to 09e9c99
train_opt had a hardcoded MODEL_PATH that didn't match the working directory, causing fallback to random init. Now accepts positional model path argument (e.g., ./train_opt stories110M.bin). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author
Closing this — after benchmarking, there's no point adding complexity that doesn't improve on what you already have. Nice work on the ANE offload approach.
dev-erik added a commit to dev-erik/ANE that referenced this pull request on Mar 3, 2026:
…timized training (train_opt), double-buffered async ANE training (train_double_buffer), Qwen2.5-0.5B LLM inference (inference/). Added get_path() env var support and SEC_FLAGS to all new targets. Skipped PR maderix#22 (binary blob risk).
Summary
Adds an optimized train_opt variant alongside the existing train_large, achieving significant speedups on M3 Max with stories110M. No changes to the original training code — this is purely additive.

Benchmarks (M3 Max, stories110M, steady-state):

| Target | ms/step |
| --- | --- |
| train_large | 107 |
| train_opt | 92 |

Note: the macOS 15.4 Sequoia update improved ANE scheduling significantly (~23% faster baseline vs 15.3).
What's changed
New files:
- training/stories_cpu_ops_opt.h — NEON-vectorized Adam optimizer + vectorized embedding ops
- training/train_opt.m — Optimized training loop with all improvements below

Modified files:
- training/stories_io.h — Added io_read_raw_fp16() helper for raw fp16 memcpy from IOSurface
- training/Makefile — Added train_opt build target

Optimizations
1. NEON-vectorized Adam optimizer (~3x faster Adam)
4-wide NEON intrinsics with vrsqrteq_f32 plus one Newton-Raphson iteration for a fast reciprocal sqrt, and a scalar tail for the non-aligned remainder. A significant win across the 110M parameters.
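For illustration, a minimal sketch of what a 4-wide NEON Adam step with vrsqrteq_f32 + one Newton-Raphson refinement could look like. The function name, parameter layout, and bias-correction handling are assumptions, not the actual stories_cpu_ops_opt.h code:

```objc
#include <arm_neon.h>
#include <math.h>
#include <stddef.h>

// Sketch of a NEON Adam update: 4 floats per iteration, fast rsqrt via
// vrsqrteq_f32 refined by one Newton-Raphson step, scalar tail for the
// remainder. Bias correction is assumed to be folded into lr by the caller.
static void adam_update_neon_sketch(float *w, float *m, float *v,
                                    const float *g, size_t n,
                                    float lr, float beta1, float beta2,
                                    float eps) {
    float32x4_t vb1 = vdupq_n_f32(beta1), vb2 = vdupq_n_f32(beta2);
    float32x4_t v1m = vdupq_n_f32(1.0f - beta1), v2m = vdupq_n_f32(1.0f - beta2);
    float32x4_t vlr = vdupq_n_f32(lr), veps = vdupq_n_f32(eps);

    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t gi = vld1q_f32(g + i);
        // m = beta1*m + (1-beta1)*g ; v = beta2*v + (1-beta2)*g^2
        float32x4_t mi = vmlaq_f32(vmulq_f32(vb1, vld1q_f32(m + i)), v1m, gi);
        float32x4_t vi = vmlaq_f32(vmulq_f32(vb2, vld1q_f32(v + i)), v2m,
                                   vmulq_f32(gi, gi));
        vst1q_f32(m + i, mi);
        vst1q_f32(v + i, vi);

        // Fast 1/sqrt(v + eps): hardware estimate + one Newton-Raphson step.
        float32x4_t x = vaddq_f32(vi, veps);
        float32x4_t r = vrsqrteq_f32(x);
        r = vmulq_f32(r, vrsqrtsq_f32(vmulq_f32(x, r), r));

        // w -= lr * m * rsqrt(v + eps)
        vst1q_f32(w + i, vmlsq_f32(vld1q_f32(w + i), vlr, vmulq_f32(mi, r)));
    }
    for (; i < n; i++) {  // scalar tail for the non-multiple-of-4 remainder
        m[i] = beta1 * m[i] + (1.0f - beta1) * g[i];
        v[i] = beta2 * v[i] + (1.0f - beta2) * g[i] * g[i];
        w[i] -= lr * m[i] / sqrtf(v[i] + eps);
    }
}
```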
2. fp16 activation & gradient caching (~3.4 ms saved per step)

Activations and gradients are cached as raw _Float16* via memcpy from the IOSurface, skipping the fp16→fp32 NEON conversion on the main thread. Conversion is deferred to the dW dispatch blocks.
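A hedged sketch of that pattern: the real io_read_raw_fp16() signature in stories_io.h isn't shown in this excerpt, so the helper names and the deferred-conversion loop below are assumptions:

```objc
#include <arm_neon.h>
#include <string.h>
#include <IOSurface/IOSurfaceRef.h>

// Assumed shape of the raw fp16 read: copy the half-precision bytes out of
// the IOSurface as-is, with no fp32 conversion on the main thread.
static void read_raw_fp16_sketch(IOSurfaceRef surf, _Float16 *dst, size_t count) {
    IOSurfaceLock(surf, kIOSurfaceLockReadOnly, NULL);
    memcpy(dst, IOSurfaceGetBaseAddress(surf), count * sizeof(_Float16));
    IOSurfaceUnlock(surf, kIOSurfaceLockReadOnly, NULL);
}

// Later, inside a dW dispatch block, the cached fp16 values are widened to
// fp32 right before the sgemm that needs them (4 lanes per NEON conversion).
static void fp16_to_fp32_sketch(const _Float16 *src, float *dst, size_t n) {
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        vst1q_f32(dst + i, vcvt_f32_f16(vld1_f16((const __fp16 *)(src + i))));
    for (; i < n; i++)
        dst[i] = (float)src[i];
}
```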
3. Pre-allocated per-step buffers (~132 malloc/free calls eliminated per step)

A LayerCaptures struct pre-allocates all 11 fp32 + 5 fp16 dW capture buffers per layer at startup. Dispatch blocks just memcpy into pre-allocated memory instead of malloc + memcpy + free.
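The actual LayerCaptures layout isn't included in this excerpt; a minimal sketch of the pre-allocation pattern, with the buffer counts taken from the text above and all field names hypothetical:

```objc
#include <stdlib.h>
#include <string.h>

// Hypothetical per-layer capture buffers, allocated once at startup.
// The 11 fp32 + 5 fp16 counts come from the PR description; names and
// per-buffer sizes are illustrative.
typedef struct {
    float    *f32_caps[11];   // fp32 dW capture buffers
    _Float16 *f16_caps[5];    // fp16 activation/gradient caches
} LayerCapturesSketch;

static void layer_captures_init(LayerCapturesSketch *lc,
                                size_t f32_elems, size_t f16_elems) {
    for (int i = 0; i < 11; i++) lc->f32_caps[i] = malloc(f32_elems * sizeof(float));
    for (int i = 0; i < 5;  i++) lc->f16_caps[i] = malloc(f16_elems * sizeof(_Float16));
}

// Inside a dispatch block: no allocation, just a copy into the preallocated slot.
static void capture_fp32(LayerCapturesSketch *lc, int slot,
                         const float *src, size_t elems) {
    memcpy(lc->f32_caps[slot], src, elems * sizeof(float));
}
```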
4. Concurrent dW dispatch queue

Changed DISPATCH_QUEUE_SERIAL → DISPATCH_QUEUE_CONCURRENT for the weight-gradient computation, so individual sgemm calls are dispatched independently. Added setenv("VECLIB_MAXIMUM_THREADS", "2", 1) to prevent cblas thread oversubscription.
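A sketch of what that queue change could look like with GCD; the function names, dispatch-group bookkeeping, and the exact sgemm transposes/leading dimensions are assumptions about layout, not the PR's actual code:

```objc
#include <dispatch/dispatch.h>
#include <stdlib.h>
#include <Accelerate/Accelerate.h>

// Concurrent dW queue: each per-weight sgemm is dispatched independently,
// and a dispatch group lets the step wait for all dW work before the
// optimizer runs. VECLIB_MAXIMUM_THREADS=2 caps each cblas_sgemm's thread
// pool so concurrent blocks don't oversubscribe the cores.
static dispatch_queue_t dw_queue;
static dispatch_group_t dw_group;

static void dw_init(void) {
    setenv("VECLIB_MAXIMUM_THREADS", "2", 1);
    dw_queue = dispatch_queue_create("dw.concurrent", DISPATCH_QUEUE_CONCURRENT);
    dw_group = dispatch_group_create();
}

// Assumed layout: act is [K tokens x M in-dim], grad is [K x N out-dim],
// dW is [M x N]; dW += act^T * grad.
static void dw_submit(const float *act, const float *grad, float *dW,
                      int M, int N, int K) {
    dispatch_group_async(dw_group, dw_queue, ^{
        cblas_sgemm(CblasRowMajor, CblasTrans, CblasNoTrans,
                    M, N, K, 1.0f, act, M, grad, N, 1.0f, dW, N);
    });
}

static void dw_wait(void) {
    dispatch_group_wait(dw_group, DISPATCH_TIME_FOREVER);
}
```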
5. Dead read elimination (~1.1 ms saved)

Removed unnecessary h1/h3 reads from the fwdFFN IOSurface output during the forward pass — the backward pass already copies them directly via io_copy.
6. Vectorized embedding ops

- embed_lookup_opt: memcpy rows + a single vDSP_mtrans (vs a scalar scatter).
- embed_backward_opt: vDSP_mtrans + vDSP_vadd per token row.
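A hedged sketch of both ops as described in the list above; the real embed_lookup_opt / embed_backward_opt signatures aren't shown in this excerpt, so the [dim x T] activation layout and the scratch-buffer arguments are assumptions:

```objc
#include <string.h>
#include <Accelerate/Accelerate.h>

// Lookup: memcpy each token's embedding row into a [T x dim] scratch buffer,
// then one vDSP_mtrans flips it into the assumed [dim x T] activation layout.
static void embed_lookup_sketch(const float *table, const int *tokens, int T,
                                int dim, float *scratch_T_dim, float *out_dim_T) {
    for (int t = 0; t < T; t++)
        memcpy(scratch_T_dim + (size_t)t * dim,
               table + (size_t)tokens[t] * dim, dim * sizeof(float));
    // Output C has dim rows x T columns; input A is T rows x dim columns.
    vDSP_mtrans(scratch_T_dim, 1, out_dim_T, 1, dim, T);
}

// Backward: transpose the [dim x T] gradient back to [T x dim], then
// accumulate each token row into the embedding-gradient table with vDSP_vadd.
static void embed_backward_sketch(const float *grad_dim_T, const int *tokens,
                                  int T, int dim, float *scratch_T_dim,
                                  float *dtable) {
    vDSP_mtrans(grad_dim_T, 1, scratch_T_dim, 1, T, dim);
    for (int t = 0; t < T; t++) {
        float *row = dtable + (size_t)tokens[t] * dim;
        vDSP_vadd(row, 1, scratch_T_dim + (size_t)t * dim, 1, row, 1, dim);
    }
}
```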
7. Optional Metal GPU for dW (off by default)

Uses MPSMatrixMultiplication for weight gradients on the GPU. Disabled by default because it causes memory-bandwidth contention with the ANE on M3 Max (~28 ms regression). Available via the --metal flag for testing on other hardware.
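For reference, a hedged sketch of an MPSMatrixMultiplication dW product; the PR's actual --metal path in train_opt.m is not shown here, and the device/buffer management and matrix layout below are assumptions for illustration only:

```objc
#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

// Assumed layout, matching the CPU sketch above: act is [K x M], grad is
// [K x N], and dW [M x N] accumulates act^T * grad (alpha = 1, beta = 1).
static void dw_metal_sketch(id<MTLDevice> dev, id<MTLCommandQueue> cq,
                            id<MTLBuffer> actKxM, id<MTLBuffer> gradKxN,
                            id<MTLBuffer> dWMxN, int M, int N, int K) {
    MPSMatrixDescriptor *dA = [MPSMatrixDescriptor matrixDescriptorWithRows:K
        columns:M rowBytes:M * sizeof(float) dataType:MPSDataTypeFloat32];
    MPSMatrixDescriptor *dB = [MPSMatrixDescriptor matrixDescriptorWithRows:K
        columns:N rowBytes:N * sizeof(float) dataType:MPSDataTypeFloat32];
    MPSMatrixDescriptor *dC = [MPSMatrixDescriptor matrixDescriptorWithRows:M
        columns:N rowBytes:N * sizeof(float) dataType:MPSDataTypeFloat32];

    MPSMatrix *A = [[MPSMatrix alloc] initWithBuffer:actKxM  descriptor:dA];
    MPSMatrix *B = [[MPSMatrix alloc] initWithBuffer:gradKxN descriptor:dB];
    MPSMatrix *C = [[MPSMatrix alloc] initWithBuffer:dWMxN   descriptor:dC];

    MPSMatrixMultiplication *mm = [[MPSMatrixMultiplication alloc]
        initWithDevice:dev transposeLeft:YES transposeRight:NO
        resultRows:M resultColumns:N interiorColumns:K alpha:1.0 beta:1.0];

    id<MTLCommandBuffer> cb = [cq commandBuffer];
    [mm encodeToCommandBuffer:cb leftMatrix:A rightMatrix:B resultMatrix:C];
    [cb commit];
    [cb waitUntilCompleted];
}
```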
Build & test

Drops in alongside train_large — existing code untouched.
Test plan

- train_large baseline (10.7% faster on 15.4, 14.1% on 15.3)
- --metal flag (functional but slower due to bandwidth contention)

🤖 Generated with Claude Code