Reorganize AlphaFold 3 modules into dedicated subpackage #115
Conversation
- Add frame_aligned_point_error: AlphaFold's Frame Aligned Point Error (FAPE) loss
- Add distogram_loss: AlphaFold 3 distogram loss for distance-distribution prediction
- Add smooth_local_distance_difference_test: smooth, differentiable LDDT loss (see the usage sketch after this list)
- Fix compose_rotation_matrix for torch.compile compatibility
- Fix rotation_matrix_to_quaternion to handle arbitrary batch dimensions
- All operators use torch.finfo(dtype).eps for automatic numerical stability
- Comprehensive test suites with property-based testing using Hypothesis
- Full ASV benchmark coverage for performance monitoring
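As a quick illustration of how the differentiable smooth LDDT operator might be used as a training loss, here is a minimal sketch; the import location and call signature are assumptions for illustration, not the confirmed beignet API.

```python
import torch
import beignet  # assumed import location for the new operators

# Hypothetical shapes and signature, for illustration only.
predicted = torch.randn(2, 64, 3, requires_grad=True)  # predicted coordinates
target = torch.randn(2, 64, 3)                         # reference coordinates

# The smooth LDDT loss is differentiable, so it can be minimized directly.
loss = beignet.smooth_local_distance_difference_test(predicted, target)
loss.backward()
```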
Implements Algorithm 28 from the AlphaFold 3 paper for weighted rigid alignment of 3D point sets. This is a weighted generalization of the Kabsch algorithm that finds the optimal rotation and translation aligning input points to target points using per-point weights (a sketch follows the list). Key features:

- Handles arbitrary batch dimensions
- Uses torch.finfo(dtype).eps for numerical stability
- Stops gradients on the output (detach), as specified in the algorithm
- Handles reflection correction via an SVD determinant check
- Full test coverage with property-based testing
- ASV benchmarks for performance monitoring
- torch.compile compatible
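For reference, the core of a weighted Kabsch alignment as described above can be sketched as follows. This is a minimal illustration under assumed tensor shapes, not the beignet implementation, and the helper name is hypothetical.

```python
import torch


def weighted_rigid_align_sketch(x, target, weights):
    # Hypothetical helper, not the beignet API.
    # x, target: (..., N, 3) point clouds; weights: (..., N) per-point weights.
    w = weights.unsqueeze(-1)
    denominator = w.sum(dim=-2, keepdim=True) + torch.finfo(x.dtype).eps

    # Weighted centroids and centered point clouds.
    mu_x = (w * x).sum(dim=-2, keepdim=True) / denominator
    mu_t = (w * target).sum(dim=-2, keepdim=True) / denominator
    x_c, t_c = x - mu_x, target - mu_t

    # Weighted cross-covariance and its singular value decomposition.
    m = torch.einsum("...ni,...nj->...ij", w * t_c, x_c)
    u, _, vh = torch.linalg.svd(m)

    # Reflection correction: flip the last column of U when det(U Vh) < 0 so
    # the result is a proper rotation rather than a roto-reflection.
    d = torch.sign(torch.linalg.det(u @ vh))[..., None, None]
    u = torch.cat([u[..., :-1], u[..., -1:] * d], dim=-1)
    rotation = u @ vh

    # Rotate the centered inputs into the target frame and stop gradients,
    # as the algorithm specifies.
    return (x_c @ rotation.transpose(-2, -1) + mu_t).detach()
```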
Implementation includes:

- OuterProductMean: computes outer products across MSA sequences, then averages (see the sketch after this list)
- MultipleSequenceAlignment: main MSA processing block with multiple sub-modules
- MSAPairWeightedAveraging: attention-based weighted averaging from the pair representation to the MSA
- Triangle multiplication modules (incoming/outgoing): triangular updates
- Triangle attention modules (starting/ending node): triangular attention
- Transition: MLP with gated linear units
- Coordinate frame operations: express_coordinates_in_frame, compute_alignment_error

Critical bug fixes:

- Fixed OuterProductMean to preserve MSA evolutionary information
- Corrected the softmax dimension in MSAPairWeightedAveraging attention
- Added comprehensive input validation and error handling
- Improved numerical stability and tensor dimension handling

All modules support:

- Gradient computation and autograd compatibility
- Batch processing and torch.compile optimization
- Multiple precisions (float32/float64) and device support
- Comprehensive testing with property-based validation
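To make the outer-product-mean update concrete, here is a from-scratch sketch of the core computation under assumed channel sizes; it is not the beignet module, only an illustration of the idea: projected MSA features are combined as per-residue-pair outer products and averaged over sequences to update the pair representation.

```python
import torch
from torch import nn


class OuterProductMeanSketch(nn.Module):
    """Minimal illustration of the outer-product-mean idea (hypothetical sizes)."""

    def __init__(self, c_msa: int = 64, c_hidden: int = 32, c_pair: int = 128):
        super().__init__()
        self.project_a = nn.Linear(c_msa, c_hidden)
        self.project_b = nn.Linear(c_msa, c_hidden)
        self.output = nn.Linear(c_hidden * c_hidden, c_pair)

    def forward(self, msa: torch.Tensor) -> torch.Tensor:
        # msa: (num_sequences, num_residues, c_msa)
        a = self.project_a(msa)  # (S, N, c_hidden)
        b = self.project_b(msa)  # (S, N, c_hidden)

        # Outer product of the two projections for every residue pair,
        # averaged over the sequence dimension so evolutionary signal from
        # the whole MSA flows into the pair representation.
        outer = torch.einsum("sic,sjd->ijcd", a, b) / msa.shape[0]
        return self.output(outer.flatten(start_dim=-2))  # (N, N, c_pair)
```

A real implementation would typically add layer normalization and handle batched MSA tensors; this sketch only shows the central contraction.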
- Rename MSA -> AlphaFold3MSA
- Rename ConfidenceHead -> AlphaFold3Confidence
- Rename DiffusionModule -> AlphaFold3Diffusion
- Rename DistogramHead -> AlphaFold3Distogram
- Rename TemplateEmbedder -> AlphaFold3TemplateEmbedder
- Rename AlphaFold3Inference -> AlphaFold3

Update all imports, exports, tests, benchmarks, and documentation to use the consistent AlphaFold3-prefixed naming convention.
- Move all AlphaFold 3 modules to src/beignet/nn/alphafold3/
- Extract AtomAttentionDecoder from SampleDiffusion into a separate module
- Clean up diffusers integration with proper imports
- Simplify the root nn/__init__.py to only export the main AlphaFold3 class (see the import sketch below)
- Add a comprehensive alphafold3/__init__.py with all component exports
- Fix import paths throughout the codebase for the new package structure
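After this change, downstream code would import the top-level model from beignet.nn and individual components from the subpackage. The exact re-exports below are inferred from this PR's description and may differ from the released API.

```python
# Paths inferred from this PR's description; they may differ in the release.
from beignet.nn import AlphaFold3                # only top-level nn export
from beignet.nn.alphafold3 import (              # component exports
    AlphaFold3Confidence,
    AlphaFold3Distogram,
)
```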
- Reorganized all AlphaFold 3 modules into the dedicated alphafold3 subpackage
- Fixed circular imports in the sample diffusion module
- Corrected missing function exports in the main module
- Updated all test imports for the new package structure
- Resolved tensor dimension mismatches in neural network modules
- Fixed Hypothesis deadline issues in property-based tests
- Added comprehensive test coverage for all AlphaFold 3 components
- Ensured all 355 tests pass with the new module organization
Claude finished @0x00b1's task

AlphaFold 3 Module Reorganization Review

I've conducted a comprehensive review of this major reorganization. This is an impressive implementation with excellent code quality overall.

Note: Unable to execute commands due to permission restrictions, but conducted thorough static analysis.

✅ Strengths

- Architecture & Organization
- Code Quality
- Testing Excellence
- Performance & Benchmarking
Summary
- Reorganize the AlphaFold 3 modules into a dedicated alphafold3 subpackage under src/beignet/nn/alphafold3/ and src/beignet/nn/functional/alphafold3/

Test plan

- Tests pass (uv run python -m pytest)
- Linting and formatting pass (uv run ruff check and uv run ruff format)