
sdstanton1 and others added 12 commits May 22, 2025 17:59
Add HuggingFace ecosystem compatibility while preserving cortex innovations:

- NeuralTreeConfig: PretrainedConfig subclass for HF ecosystem integration
- NeuralTreeModel: PreTrainedModel wrapper for cortex architecture
- HuggingFaceRoot: Any HF transformer as cortex root node
- Dual-mode configuration: Both HF native and Hydra compatibility
- JSON serialization support for the HF Model Hub
- Comprehensive test coverage with modern pytest patterns

Enables cortex models to work with HF pipelines, the Model Hub, and surrounding
tooling while maintaining all existing ML innovations and backward compatibility
(a configuration sketch follows below).

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
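For illustration, here is a minimal sketch of how a `PretrainedConfig` subclass along the lines of `NeuralTreeConfig` could support HF-native JSON round-tripping. The class name and field names (`root_configs`, `task_configs`) are assumptions for the sketch, not the PR's actual API:

```python
# Illustrative sketch only; the actual NeuralTreeConfig in this PR may
# differ. Field names (root_configs, task_configs) are assumptions.
from transformers import PretrainedConfig


class NeuralTreeConfigSketch(PretrainedConfig):
    """PretrainedConfig subclass so cortex trees plug into HF tooling."""

    model_type = "neural_tree"  # registers the config under an HF model type

    def __init__(self, root_configs=None, task_configs=None, **kwargs):
        # Plain dicts keep the config JSON-serializable for the HF Model Hub;
        # the same dicts could be built from a Hydra config for dual-mode use.
        self.root_configs = root_configs or {}
        self.task_configs = task_configs or {}
        super().__init__(**kwargs)


# HF-native round trip: save_pretrained/from_pretrained come for free,
# because PretrainedConfig handles the JSON (de)serialization.
config = NeuralTreeConfigSketch(root_configs={"seq": {"model_dim": 64}})
config.save_pretrained("/tmp/neural_tree_config")
reloaded = NeuralTreeConfigSketch.from_pretrained("/tmp/neural_tree_config")
```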
Move tokenization from model forward pass to dataloader workers for
parallel execution and ~2x GPU utilization improvement:

- CortexDataset: Base class with transform separation (dataloader vs model)
- SequenceDataset: Concrete implementation for sequence data
- TransformerRootV2: Updated root accepting pre-tokenized inputs
- RedFluorescentProteinDatasetV2: Migration example for existing datasets
- Comprehensive integration tests validating GPU utilization improvements

Key benefit: eliminates the tokenization bottleneck by running string processing
in parallel dataloader workers while the GPU processes the previous batch
(see the dataset sketch below).

Maintains backward compatibility with deprecation warnings.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
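The general pattern behind this change is sketched below. `TokenizingDataset` is a hypothetical stand-in, not the PR's `CortexDataset`/`SequenceDataset` API: tokenization moves into `Dataset.__getitem__`, so `DataLoader` workers run it in parallel.

```python
# Sketch of the pattern: tokenize in __getitem__ so dataloader workers do
# the string processing while the GPU consumes the previous batch.
from torch.utils.data import DataLoader, Dataset


class TokenizingDataset(Dataset):
    def __init__(self, sequences, tokenizer, max_length=128):
        self.sequences = sequences
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        # Runs inside a DataLoader worker process, off the training loop.
        enc = self.tokenizer(
            self.sequences[idx],
            padding="max_length",
            truncation=True,
            max_length=self.max_length,
            return_tensors="pt",
        )
        # Drop the batch dim the tokenizer adds; default collate re-batches.
        return {k: v.squeeze(0) for k, v in enc.items()}


# num_workers > 0 is what actually parallelizes the tokenization:
# loader = DataLoader(TokenizingDataset(seqs, tok), batch_size=32, num_workers=4)
```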
Implement a static corruption system that enables torch.compile optimization
for the neural tree architecture. A sketch of the branch-free pattern follows
below.

- Separate corruption processes for tokens (masking) and embeddings (Gaussian noise)
- Static computation graphs with no dynamic branching
- Comprehensive test coverage: all 14 tests passing

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
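A branch-free corruption sketch in the spirit of this commit; the function names and signatures are illustrative, not the PR's. The point is that there is no data-dependent Python control flow, so torch.compile can trace a single static graph:

```python
import torch


def mask_corrupt_tokens(tokens, mask_token_id, corrupt_rate):
    # torch.where applies a Bernoulli mask without branching: the same ops
    # execute regardless of how many positions get corrupted.
    corrupt_mask = torch.rand(tokens.shape, device=tokens.device) < corrupt_rate
    return torch.where(corrupt_mask, torch.full_like(tokens, mask_token_id), tokens)


def gaussian_corrupt_embeddings(embeddings, noise_scale):
    # Additive Gaussian noise; noise_scale == 0.0 reduces to the identity
    # without an `if`, keeping the computation graph static.
    return embeddings + noise_scale * torch.randn_like(embeddings)


# Should compile cleanly since neither function branches on tensor values.
compiled_corrupt = torch.compile(mask_corrupt_tokens)
```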
Modernize the PyTorch Lightning integration with a callback-based architecture and comprehensive multi-task training support. Features clean separation of model and training logic, weight-averaging callbacks, and full compatibility with the v2/v3 neural tree infrastructure, including torch.compile support.

Key Components:
- NeuralTreeLightningV2: Modern Lightning module with multi-task training
- WeightAveragingCallback: callback-based EMA with state management (a sketch follows below)
- Comprehensive test suite: 26/26 tests passing
- HuggingFace compatibility: Works with TransformerRootV2/V3
- Documentation: Parameter standardization roadmap

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
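A minimal EMA callback sketch showing the callback-based pattern; `EMACallbackSketch` and its state layout are illustrative, and the real `WeightAveragingCallback` presumably does more (e.g. swapping the averaged weights in for evaluation):

```python
import lightning.pytorch as pl  # or pytorch_lightning, depending on version


class EMACallbackSketch(pl.Callback):
    def __init__(self, decay=0.999):
        self.decay = decay
        self.ema_params = None

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        current = [p.detach() for p in pl_module.parameters()]
        if self.ema_params is None:
            self.ema_params = [p.clone() for p in current]
            return
        # ema <- decay * ema + (1 - decay) * current
        for ema, p in zip(self.ema_params, current):
            ema.mul_(self.decay).add_(p, alpha=1.0 - self.decay)

    def state_dict(self):
        # Lightning checkpoints callback state, so averaging survives restarts.
        return {"decay": self.decay, "ema_params": self.ema_params}

    def load_state_dict(self, state_dict):
        self.decay = state_dict["decay"]
        self.ema_params = state_dict["ema_params"]
```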
Following up on Milestone 4 (Lightning Integration v2), this commit:
- Removes deprecated model classes (TransformerRootV2/V3, CortexDataset, neural_tree_model)
- Updates imports and tests to use modernized components
- Adds HuggingFace configuration files for protein tasks
- Adds working example in examples/hf_fluorescence_fast.py
- Updates CLAUDE.md with project instructions and milestones
- Cleans up test suite to match new architecture

This sets the stage for implementing parallel tokenization in dataloaders
with tokenizer ownership by root nodes.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit adds support for efficient HuggingFace dataset integration, with
tokenization that leverages dataset.map() multiprocessing instead of repeated
per-batch work in the dataloader:

1. **Tokenizer ownership by roots**: HuggingFaceRoot now provides a
   get_tokenizer_config() method that returns the configuration for tokenizer
   instantiation in data loaders

2. **HFTaskDataModule**: New data module that uses dataset.map() for
   efficient tokenization following HuggingFace best practices:
   - Lazy/memory-mapped datasets with Apache Arrow format
   - Parallel tokenization with multiprocessing support
   - Disk caching of tokenized results
   - Batch processing to control memory usage

3. **Updated RegressionTask**: Now supports both HuggingFace tokenized
   inputs and legacy column-based inputs, enabling gradual migration

4. **Tree building**: NeuralTreeLightningV2.build_tree() now passes
   tokenizer config from root nodes to task data modules

5. **Test coverage**: Added comprehensive unit tests for both the new
   HFTaskDataModule and updated RegressionTask

The design ensures tokenization happens once during dataset preparation
rather than repeatedly in the dataloader, while maintaining the principle
that roots own their tokenizers (a sketch of the map-based flow follows below).

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
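A sketch of the map-based tokenization flow described above. The `tokenizer_config` dict is an assumption about what `get_tokenizer_config()` might return, since the actual return shape isn't shown in this PR:

```python
from datasets import Dataset
from transformers import AutoTokenizer

# Hypothetical config, as a root node might provide it.
tokenizer_config = {"pretrained_model_name_or_path": "bert-base-uncased"}
tokenizer = AutoTokenizer.from_pretrained(**tokenizer_config)

dataset = Dataset.from_dict({"sequence": ["MKTAYIAKQR", "GSHMLEDPVE"]})


def tokenize_batch(batch):
    return tokenizer(batch["sequence"], padding="max_length", truncation=True, max_length=32)


# Batched map tokenizes once up front; results are Arrow-backed and cached
# to disk, so dataloaders just read tensors across epochs. Add num_proc=N
# for multiprocess tokenization on real datasets.
tokenized = dataset.map(tokenize_batch, batched=True, batch_size=1000)
```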