At the 2025 MSST conference it was brought to my attention that there was no generic S3 interoperability testing framework. I suggested we just get generative AI to create it.
Why wait?
MSST-S3 is a comprehensive interoperability testing framework designed to validate S3 API compatibility across different storage implementations. Whether you're developing an S3-compatible storage system, evaluating vendor solutions, or ensuring consistent behavior across multiple S3 providers, MSST-S3 provides a standardized test suite to verify compliance and identify implementation differences.
618 comprehensive S3 API tests with 94.2% pass rate on MinIO:
- ✅ 592 tests ported from versitygw
- ✅ 75 test files covering all major S3 operations
- ✅ 96% MinIO S3 API compatibility
- ✅ Full documentation of all test results
📊 View Complete Test Results →
MSST-S3 enables you to:
- Validate S3 Implementations: Test any S3-compatible storage system against a comprehensive suite of API tests
- Compare Vendor Solutions: Run the same tests against multiple S3 providers to identify behavioral differences
- Ensure API Compliance: Verify that your S3 implementation correctly handles standard S3 operations
- Identify Edge Cases: Discover implementation-specific quirks and limitations across different S3 systems
- Benchmark Performance: Compare performance characteristics across different S3 implementations
Typical use cases:
- Storage Vendor Testing: Validate that your S3-compatible storage product correctly implements the S3 API
- Migration Planning: Test compatibility between source and destination S3 systems before migration
- Multi-Cloud Strategy: Ensure consistent behavior across AWS S3, MinIO, Ceph RGW, and other S3 providers
- CI/CD Integration: Automated testing of S3 compatibility in your development pipeline
- Compliance Verification: Ensure your S3 implementation meets organizational requirements
- Python 3.8 or higher
- Access to one or more S3-compatible endpoints (or use local MinIO)
- S3 credentials (access key and secret key)
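Before running the suite, it can help to confirm that your endpoint and credentials work at all. Here is a minimal boto3 sketch (the endpoint URL and credentials below are placeholders for your own values):

```python
# Sanity check: can we reach the endpoint and list buckets with these credentials?
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",    # placeholder: your S3 endpoint
    aws_access_key_id="minioadmin",          # placeholder credentials
    aws_secret_access_key="minioadmin",
)
print([b["Name"] for b in s3.list_buckets().get("Buckets", [])])
```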
# Setup
git clone https://github.com/your-org/msst-s3.git
cd msst-s3
make venv
source venv/bin/activate
# Configure for MinIO Docker
make defconfig-docker-demo
docker run -d -p 9000:9000 -e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=minioadmin minio/minio server /data
# Run validation
python scripts/production-validation.py --config s3_config.yaml --quick
Expected result: ✓ PRODUCTION READY - All requirements met
# Clone the repository
git clone https://github.com/your-org/msst-s3.git
cd msst-s3
# Install dependencies
make install-deps
The simplest way to start testing S3 compatibility:
# Full automated demo with MinIO, synthetic data, and tests
make defconfig-docker-demo
make test-with-docker
# This will:
# 1. Start MinIO in Docker
# 2. Populate synthetic test data
# 3. Run compatibility tests
# 4. Generate results report
# Start MinIO manually
docker run -d --name minio \
-p 9000:9000 -p 9001:9001 \
-e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=minioadmin \
quay.io/minio/minio server /data --console-address ":9001"
# Configure and run basic tests
make defconfig-basic
make test
# Load AWS configuration template
make defconfig-aws-s3
# Edit .config to add your AWS credentials
vi .config
# Set CONFIG_S3_ACCESS_KEY and CONFIG_S3_SECRET_KEY
# Run tests
make test
Pre-configured test profiles for common scenarios:
Basic Configurations:
- basic: Minimal configuration for quick S3 compatibility checks
- minio-local: Test against local MinIO instance (localhost:9000)
- aws-s3: Full test suite for AWS S3 (requires credentials)
Docker-based Configurations:
- docker-demo: Automated demo with MinIO and synthetic data
- docker-rustfs: Test against RustFS (Rust-based S3 storage)
- docker-localstack: Test against LocalStack (AWS emulator)
- docker-ceph: Test against Ceph RadosGW
- docker-garage: Test against Garage S3
- docker-seaweedfs: Test against SeaweedFS S3
SDK Configurations: (see docs/DEFCONFIGS.md)
- defconfigs/boto3_latest.yaml: Test with latest Python boto3 SDK
- defconfigs/boto3_1.26.yaml: Test with boto3 version 1.26.x
- defconfigs/aws_sdk_go_v2.yaml: Test with Go SDK v2
- defconfigs/aws_sdk_java_v2.yaml: Test with Java SDK v2
For custom configurations, use the interactive menu system:
# Configure your S3 endpoints and test parameters
make menuconfig
Navigate through the menu to configure:
- S3 endpoint URLs
- Authentication credentials
- Test categories to run
- Performance test parameters
- Output preferences
# Run all enabled tests
make test
# Run specific test categories
make test-basic # Basic CRUD operations
make test-multipart # Multipart upload tests
make test-versioning # Object versioning tests
make test-acl # Access control tests
make test-performance # Performance benchmarks
# Run a specific test
make test TEST=001
# Run tests for a specific group
make test GROUP=acl
MSST-S3 includes Docker configurations for testing multiple S3 implementations without manual setup:
| Provider | Port | Endpoint | Description |
|---|---|---|---|
| MinIO | 9000 | http://localhost:9000 | High-performance S3-compatible storage |
| RustFS | 9002 | http://localhost:9002 | High-performance Rust-based S3 storage |
| LocalStack | 4566 | http://localhost:4566 | AWS services emulator |
| Ceph RadosGW | 8082 | http://localhost:8082 | Ceph's S3 interface |
| Garage | 3900 | http://localhost:3900 | Distributed S3 storage |
| SeaweedFS | 8333 | http://localhost:8333 | Distributed file system with S3 API |
# Start all S3 providers
make docker-up
# Start specific provider
make docker-minio
make docker-localstack
make docker-ceph
# Check status
make docker-status
# View logs
make docker-logs PROVIDER=minio
# Stop all containers
make docker-down
Generate test data automatically:
# Populate data after configuring endpoint
make populate-data
# The script creates:
# - Multiple buckets with different configurations
# - Objects of various sizes (1KB to 10MB)
# - Different file types (binary, text, JSON, CSV)
# - Nested directory structures
# - Versioned objects (if supported)
# Complete automated test with Docker provider
make defconfig-docker-demo # Configure for Docker MinIO
make docker-minio # Start MinIO container
make populate-data # Generate test data
make test # Run tests
# Or use the all-in-one command:
make defconfig-docker-demo
make test-with-docker
MSST-S3 includes a comparison tool to evaluate different S3 backends side-by-side, measuring both compatibility and performance.
We provide a pre-generated comparison report and tools to run your own:
📊 View Comparison Report → compare-s3-minio-vs-rustfs.md
| Metric | MinIO | RustFS |
|---|---|---|
| Pass Rate | 98.7% | 94.7% |
| Avg Test Duration | 0.867s | 0.792s |
| Performance | Baseline | 10% faster |
| Maturity | Production | Alpha |
| License | AGPL v3 | Apache 2.0 |
# Ensure Python dependencies are installed
python3 -m venv .venv
source .venv/bin/activate
pip install boto3 pyyaml click
# Start MinIO (if not already running)
docker run -d --name msst-minio \
-p 9000:9000 -p 9001:9001 \
-e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=minioadmin \
quay.io/minio/minio server /data --console-address ":9001"
# Start RustFS
docker run -d --name msst-rustfs \
-p 9002:9000 -p 9003:9001 \
rustfs/rustfs:latest
# Basic comparison (basic, multipart, versioning tests)
python scripts/compare-backends.py \
-b minio -b rustfs \
-g basic -g multipart -g versioning \
--no-start-containers
# Full comparison with performance tests
python scripts/compare-backends.py \
-b minio -b rustfs \
-g basic -g multipart -g versioning -g performance \
-o comparison-results \
-r my-comparison-report.md \
--no-start-containers
# With automatic container management
python scripts/compare-backends.py \
-b minio -b rustfs \
-g basic -g multipart \
--start-containers \
--stop-containers

| Option | Description |
|---|---|
| -b, --backends | Backends to compare (minio, rustfs) |
| -g, --groups | Test groups to run (basic, multipart, versioning, etc.) |
| -o, --output-dir | Directory for detailed results |
| -r, --report | Output markdown report filename |
| -j, --parallel-jobs | Number of parallel test workers |
| --start-containers | Auto-start Docker containers |
| --stop-containers | Auto-stop containers after tests |
The generated report includes:
- Executive Summary: Overall pass rates and performance metrics
- Visual Comparisons: ASCII bar charts for quick assessment
- Category Breakdown: Results per test category (basic, multipart, etc.)
- Test Differences: Tests that passed on one backend but failed on another
- Feature Analysis: Which S3 features each backend supports
- Performance Deep Dive: Slowest tests and timing comparisons
- Recommendations: Use case guidance for each backend
MSST-S3 includes a comprehensive production validation suite to verify S3 systems are ready for production deployment. All validation tests pass on MinIO (100% success rate).
Choose the right validation level for your needs:
| Strategy | Time | Tests | Command |
|---|---|---|---|
| Smoke Test | 2-5 min | Basic ops | make test TEST="001 002 003" |
| Critical Path | 5-10 min | Data integrity & errors | python scripts/production-validation.py --quick |
| Feature Test | 15-30 min | Specific features | make test GROUP=multipart |
| Full Validation | 30-60 min | All tests | python scripts/production-validation.py |
Run critical tests only (5-10 minutes):
python scripts/production-validation.py --config s3_config.yaml --quick
Complete production readiness assessment (30-60 minutes):
python scripts/production-validation.py --config s3_config.yaml
The framework has been fully validated against MinIO, achieving:
- 100% pass rate across all 11 production tests
- Data integrity: MD5/ETag validation confirmed
- Performance: <50ms latency for small objects, >10MB/s for large
- Concurrency: 50+ operations/second sustained
- Production ready status confirmed
| Category | Tests | Coverage | Requirement |
|---|---|---|---|
| Critical - Data Integrity | 004-006 | 30% | 100% pass |
| Error Handling | 011-012 | 20% | 100% pass |
| Multipart Upload | 100-102 | 15% | 100% pass |
| Versioning | 200 | 5% | 80% pass |
| Performance | 600-601 | 10% | 90% pass |
The validation script checks:
- ✅ Data Integrity: 100% verification with checksums
- ✅ Latency: p99 < 1 second for small objects
- ✅ Throughput: > 10 MB/s for large objects
- ✅ Concurrency: Handle 50+ simultaneous operations
- ✅ Error Recovery: Automatic retry with exponential backoff
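For a sense of what the data-integrity check involves, here is a hedged sketch of the kind of MD5/ETag round-trip verification the suite performs (illustrative only, not the production-validation.py implementation; bucket name and endpoint are placeholders):

```python
# Sketch of an MD5/ETag integrity round-trip (illustrative only).
import hashlib
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="minioadmin",
                  aws_secret_access_key="minioadmin")

body = b"integrity-check-payload"
expected_md5 = hashlib.md5(body).hexdigest()

s3.put_object(Bucket="validation-bucket", Key="integrity/object", Body=body)
obj = s3.get_object(Bucket="validation-bucket", Key="integrity/object")
data = obj["Body"].read()

# For simple (single-part, non-encrypted) uploads the ETag is normally the quoted MD5.
assert hashlib.md5(data).hexdigest() == expected_md5
assert obj["ETag"].strip('"') == expected_md5
```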
S3 PRODUCTION VALIDATION SUITE
================================================================================
✓ Critical Data Integrity: 100.0% (3/3 passed)
✓ Error Handling & Recovery: 100.0% (2/2 passed)
✓ Multipart Operations: 100.0% (3/3 passed)
✓ Performance Benchmarks: 100.0% (2/2 passed)
Overall: 100.0% passed
================================================================================
✓ PRODUCTION READY - All requirements met
Reports are generated in:
- validation-report.json - Machine-readable results
- validation-report.txt - Human-readable summary
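If you want to post-process results in scripts, the JSON report is the one to parse. A rough sketch follows; the field names used here are assumptions, not a documented schema, so check the actual file before relying on them:

```python
# Hypothetical post-processing of validation-report.json; the keys used below
# ("categories", "pass_rate") are assumptions, not a documented schema.
import json

with open("validation-report.json") as fh:
    report = json.load(fh)

for category, result in report.get("categories", {}).items():
    print(f"{category}: {result.get('pass_rate', 'n/a')}")
```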
MSST-S3 includes a sophisticated SDK capability system that allows testing against different AWS SDK implementations (boto3, aws-sdk-go-v2, aws-sdk-java-v2, etc.) while accounting for their behavioral differences.
Different AWS SDK implementations have different behaviors:
- URL encoding varies (e.g. '+' in object keys treated as a space vs. a literal '+')
- Different retry policies (standard, adaptive, legacy)
- Different checksum algorithms (CRC32C vs MD5)
- Different addressing styles (virtual-hosted vs path-style)
The SDK capability system allows tests to adapt their expectations based on the SDK being used.
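As an illustration of the kind of difference the capability system has to absorb, retry mode and addressing style are explicit knobs in boto3 but default differently in other SDKs. A sketch (not the framework's internal mechanism; endpoint and credentials are placeholders):

```python
# boto3 exposes retry mode and addressing style directly; other SDKs default
# differently, which is the sort of behavioral gap the capability system tracks.
import boto3
from botocore.config import Config

cfg = Config(
    retries={"max_attempts": 5, "mode": "adaptive"},  # vs "standard" or "legacy"
    s3={"addressing_style": "path"},                  # vs "virtual" hosted-style
)
s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="minioadmin",
                  aws_secret_access_key="minioadmin",
                  config=cfg)
```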
# Test with boto3 (Python)
python3 scripts/test-runner.py --sdk boto3 --sdk-version latest -v
# Test with Go SDK v2
python3 scripts/test-runner.py --sdk aws-sdk-go-v2 --sdk-version 1.30.0 -v
# Test with Java SDK v2
python3 scripts/test-runner.py --sdk aws-sdk-java-v2 --sdk-version latest -v
# Use a defconfig file
python3 scripts/test-runner.py --defconfig defconfigs/boto3_latest.yaml -v
📘 Complete SDK Capability Guide →
- Configure Multiple Endpoints
Create configuration profiles for each S3 system:
# Configure AWS S3
make menuconfig
# Save as .config.aws
# Configure MinIO
make menuconfig
# Save as .config.minio
# Configure Ceph RGW
make menuconfig
# Save as .config.ceph
- Run Tests Against Each Implementation
# Test AWS S3
cp .config.aws .config
make test
mv results/latest results/aws-s3
# Test MinIO
cp .config.minio .config
make test
mv results/latest results/minio
# Test Ceph RGW
cp .config.ceph .config
make test
mv results/latest results/ceph-rgw
- Compare Results
The test suite generates detailed reports showing:
- Pass/fail status for each test
- Response time comparisons
- Error messages and incompatibilities
- Performance metrics
For automated testing across multiple vendors, use the Ansible integration:
# playbooks/inventory/hosts
[s3_vendors]
aws ansible_host=s3.amazonaws.com
minio ansible_host=minio.example.com
ceph ansible_host=ceph.example.com
# Run tests on all vendors
make ansible-run
make ansible-results
Complete S3 API Coverage - 618 tests across all major operations:
- ✅ CreateBucket, DeleteBucket, ListBuckets
- ✅ Bucket policies (PutBucketPolicy, GetBucketPolicy, DeleteBucketPolicy)
- ✅ Bucket ACLs (PutBucketAcl, GetBucketAcl)
- ✅ Bucket tagging (PutBucketTagging, GetBucketTagging, DeleteBucketTagging)
- ✅ Bucket versioning configuration
- ⚠️ Bucket CORS (limited MinIO support)
- ⚠️ Bucket ownership controls (limited MinIO support)
- ✅ PutObject with metadata, checksums, conditionals
- ✅ GetObject with ranges, conditionals
- ✅ HeadObject with conditionals
- ✅ DeleteObject with versioning
- ✅ DeleteObjects (bulk delete)
- ✅ CopyObject with metadata, directives, conditionals
- ✅ GetObjectAttributes with checksums
- ✅ CreateMultipartUpload with metadata, checksums, storage class
- ✅ UploadPart with checksums, size validation
- ✅ UploadPartCopy with ranges, checksums
- ✅ CompleteMultipartUpload with ordering, checksums, MpuObjectSize
- ✅ AbortMultipartUpload with race conditions
- ✅ ListMultipartUploads with pagination, markers
- ✅ ListParts with pagination, markers
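For reference, the shape of the multipart flow these tests exercise looks roughly like this in boto3 (a minimal sketch; bucket/key names are placeholders, error handling is omitted, and the actual tests add checksums, ordering, and abort/race-condition cases on top):

```python
# Minimal multipart upload round-trip with boto3 (illustrative).
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="minioadmin",
                  aws_secret_access_key="minioadmin")

mpu = s3.create_multipart_upload(Bucket="test-bucket", Key="big-object")
part = s3.upload_part(Bucket="test-bucket", Key="big-object",
                      UploadId=mpu["UploadId"], PartNumber=1,
                      Body=b"x" * (5 * 1024 * 1024))
s3.complete_multipart_upload(
    Bucket="test-bucket", Key="big-object", UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": [{"PartNumber": 1, "ETag": part["ETag"]}]},
)
```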
- ✅ PutBucketVersioning (Enabled, Suspended)
- ✅ GetBucketVersioning
- ✅ Object operations with version IDs
- ✅ ListObjectVersions with pagination
- ✅ Delete markers
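A quick boto3 sketch of the versioning round-trip these tests build on (bucket name is a placeholder; credentials assumed to come from the environment):

```python
# Enable versioning, write two versions, then list them (illustrative sketch).
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

s3.put_bucket_versioning(Bucket="test-bucket",
                         VersioningConfiguration={"Status": "Enabled"})
v1 = s3.put_object(Bucket="test-bucket", Key="doc", Body=b"v1")["VersionId"]
v2 = s3.put_object(Bucket="test-bucket", Key="doc", Body=b"v2")["VersionId"]

versions = s3.list_object_versions(Bucket="test-bucket", Prefix="doc")
print([v["VersionId"] for v in versions.get("Versions", [])])  # newest first: v2, v1
```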
- ✅ PutObjectTagging, GetObjectTagging, DeleteObjectTagging
- ✅ Tag limits (10 tags max)
- ✅ Tag key/value length limits
- ✅ Tagging with versioning
- ✅ PutObjectLockConfiguration, GetObjectLockConfiguration
- ✅ PutObjectRetention, GetObjectRetention
- ✅ PutObjectLegalHold, GetObjectLegalHold
⚠️ Some features have limited MinIO support
- ✅ ListObjectsV1 with prefixes, delimiters, markers
- ✅ ListObjectsV2 with continuation tokens, start-after
- ✅ Pagination and filtering
- ✅ Special character handling
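The continuation-token pagination covered above follows the standard ListObjectsV2 pattern; a boto3 sketch (bucket and prefix are placeholders, credentials assumed from the environment):

```python
# Paginate ListObjectsV2 with continuation tokens.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

keys, token = [], None
while True:
    kwargs = {"Bucket": "test-bucket", "Prefix": "data/", "MaxKeys": 1000}
    if token:
        kwargs["ContinuationToken"] = token
    page = s3.list_objects_v2(**kwargs)
    keys += [obj["Key"] for obj in page.get("Contents", [])]
    if not page.get("IsTruncated"):
        break
    token = page["NextContinuationToken"]
```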
- ✅ Special characters in keys (spaces, unicode, etc.)
- ✅ Empty objects (0 bytes)
- ✅ Large objects (multi-GB)
- ✅ Conditional requests (If-Match, If-None-Match, etc.)
- ✅ Checksums (CRC32, SHA1, SHA256, CRC32C)
- ✅ ETags and metadata preservation
- ✅ Race conditions in multipart uploads
- ✅ Error handling and edge cases
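As a concrete example of the conditional-request and checksum behaviors exercised here, in boto3 these map to request parameters such as IfNoneMatch and ChecksumAlgorithm. A sketch (bucket name is a placeholder; support for individual checksum algorithms varies by backend):

```python
# Conditional GET plus an explicit checksum on upload (illustrative).
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

put = s3.put_object(Bucket="test-bucket", Key="cond", Body=b"data",
                    ChecksumAlgorithm="SHA256")
etag = put["ETag"]

try:
    # Expect a 304 Not Modified style error when the ETag matches.
    s3.get_object(Bucket="test-bucket", Key="cond", IfNoneMatch=etag)
except ClientError as err:
    print(err.response["Error"]["Code"])
```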
📊 View Detailed Test Coverage →
Test runs provide comprehensive results:
- Console Output: Real-time test execution with pass/fail status
- pytest Reports: Standard pytest output with detailed failure information
- Test Summary: Pass/fail/skip counts and execution time
- Failure Analysis: Detailed error messages and root cause identification
Pass Rate: 94.2% (582/618 tests) on MinIO
- ✅ Passed (582): Tests that execute successfully
- ❌ Failed (8): Known compatibility differences (documented)
- ⏭️ Skipped (28): Features not supported by MinIO
📊 View Complete Test Results & Analysis →
Common incompatibility patterns documented in test results:
- Unsupported Features: CORS, ownership controls (MinIO specific)
- Behavioral Differences: Different error codes or optional features
- Performance Variations: Documented in performance metrics
- Edge Cases: Key length limits, special characters
All failures and skips are fully documented with:
- Root cause analysis
- Impact assessment
- Workarounds and alternatives
- MinIO-specific compatibility notes
- Baseline Testing: Always test against AWS S3 as the reference implementation
- Isolated Environments: Use dedicated test buckets to avoid interference
- Credential Management: Store credentials securely, never commit them
- Regular Testing: Run tests regularly to catch regressions
- Custom Tests: Extend the framework with vendor-specific tests when needed
Create new test files in the appropriate category directory:
# tests/basic/099-custom-test.py
from tests.common.fixtures import s3_client, test_bucket

def test_custom_operation(s3_client, test_bucket):
    """Test vendor-specific S3 operation."""
    # Illustrative round-trip (assumes the s3_client fixture yields a boto3 client):
    # put an object, read it back, and verify the payload survived intact.
    s3_client.put_object(Bucket=test_bucket, Key="custom/key", Body=b"payload")
    response = s3_client.get_object(Bucket=test_bucket, Key="custom/key")
    assert response["Body"].read() == b"payload"
Add vendor-specific settings in the configuration:
make menuconfig
# Navigate to "Vendor-Specific Settings"
# Configure vendor-specific parameters
name: S3 Compatibility Tests
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v2
with:
python-version: '3.9'
- name: Install dependencies
run: make install-deps
- name: Configure tests
run: |
echo "CONFIG_S3_ENDPOINT_URL=\"${{ secrets.S3_ENDPOINT }}\"" > .config
echo "CONFIG_S3_ACCESS_KEY=\"${{ secrets.S3_ACCESS_KEY }}\"" >> .config
echo "CONFIG_S3_SECRET_KEY=\"${{ secrets.S3_SECRET_KEY }}\"" >> .config
- name: Run tests
run: make test
- name: Upload results
uses: actions/upload-artifact@v2
with:
name: test-results
path: results/
Format Python code before committing:
make style
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Format code with make style
Comprehensive documentation is available in the docs/ directory:
| Document | Description |
|---|---|
| TEST_RESULTS.md | Complete test execution results, pass/fail analysis, MinIO compatibility report |
| TEST_PORTING_STATUS.md | Detailed porting history, all 592 tests documented with batch information |
| TESTING_GUIDE.md | Guide to running tests, test organization, and best practices |
| PRODUCTION_TEST_PLAN.md | Production validation strategies and critical path testing |
| DOCKER_SETUP.md | Docker configuration for testing multiple S3 implementations |
| VALIDATION_STRATEGIES.md | Different validation approaches and when to use them |
| SDK_CAPABILITIES.md | SDK capability system guide - test against multiple AWS SDKs (boto3, Go, Java, etc.) |
| SDK_IMPLEMENTATION_SUMMARY.md | Implementation summary of the SDK capability system |
| DEFCONFIGS.md | Guide to using SDK defconfig files for different SDK implementations |
- 📊 View Full Test Results - Complete pass/fail analysis
- 📝 Test Porting History - Detailed porting progress (100% complete!)
- 🎯 MinIO Compatibility - 96% compatibility rating
For issues, questions, or contributions, please visit the GitHub repository.
This test suite incorporates comprehensive S3 API tests ported from:
- versitygw - 592 S3 integration tests (Apache License 2.0)
Special thanks to the versitygw project for their excellent S3 API test coverage.
- Luis Chamberlain mcgrof@kernel.org - Project lead and supervisor
- Claude AI - Test porting, documentation, and automation
Porting Achievement: 🎉 100% Complete - All 592 versitygw tests successfully ported!