# SecShare

Your files, your control — password-protected, audited, and secure.
## Table of Contents

- Why We Needed Something Better
- What Inspired It: The Cost of Getting It Wrong
- So, What Is SecShare?
- Why We Built It
- The Impact at Upstox
- How It Works
- Architecture Overview
- Key Features
- Screenshots
- Getting Started
- Configuration
- Code-Level Highlights
- Background Jobs & Cron Tasks
- Security
- Project Structure
- Deployment
- Contributing
- Contributors
- License
## Why We Needed Something Better

At Upstox, like most companies, we rely on multiple teams — from marketing to operations — to work with external vendors. And very often, that means sharing files: reports, invoices, ad creatives, customer lists, compliance documents.
Now here's the problem:
Third-party platforms like Dropbox or Google Drive are convenient — but they aren't secure enough for handling PII or business-sensitive data.
As a result, we put in place a strict security policy: no usage of unsanctioned file-sharing tools. Dropbox, Drive, and others are blocked across our network. But that brought a new problem — how do teams securely send files externally when email limits them to 25MB?
This left everyone stuck: they had files to send, but no secure channel to send them through.
And that's why we built SecShare.
Security isn't about blocking business — it's about enabling it, the right way. SecShare was our answer to that need.
## What Inspired It: The Cost of Getting It Wrong

Our decision to block third-party sharing tools wasn't just about policy — it was about preventing real risks.
Over the years, multiple high-profile incidents have shown what can go wrong when file sharing isn't done securely:
- Dropbox Breach (2012–2016): Nearly 70 million user records leaked after employee credentials were reused, partly due to insecure file handling.
- Accellion File Transfer Hack (2021): Dozens of institutions — including Shell and universities — were affected after a legacy file-sharing platform was exploited.
- Misconfigured Google Drive Links: Multiple global incidents have seen sensitive HR files, legal documents, and even passport scans exposed due to open sharing permissions.
These were wake-up calls. Files were leaving companies without encryption, without audit trails, without approval.
We knew we had to solve it at the source — not just block usage.
## So, What Is SecShare?

SecShare is an internal file-sharing tool we built at Upstox to allow employees to securely share files with external parties — while maintaining full control over:
- Who can upload — Azure AD B2C SSO authentication
- Why it's being shared — Business justification required
- Who's approving it — Manager approval workflow
- How long it stays accessible — Configurable TTL (1-7 days)
- Who downloaded it, and when — Complete audit trail
In short: it enforces accountability at every step of the process — without getting in the way of business.
## Why We Built It

As an organization that values data privacy, we wanted a solution that met both security and operational needs. That meant:
- Every file uploaded is tied to a business justification.
- Every shared file goes through manager approval.
- Every action leaves behind a verifiable audit trail for compliance and accountability.
- All of this is achieved cost-effectively, without relying on third-party enterprise tools that are expensive and complicated.
What started as a necessity within the AppSec team quickly became an invaluable tool for the entire company. Today, SecShare is not just a project — it's a core part of our daily operations.
## The Impact at Upstox

Today, SecShare is the only approved way to share files at Upstox. All other mechanisms (e.g., Dropbox, Google Drive, email attachments) are blocked for security reasons. Here's the impact:
- Single secure file-sharing channel for internal and external sharing
- Full audit trail for compliance with regulations and internal policies
- Annual savings of $100-150k compared to third-party commercial tools, while maintaining control over the infrastructure
What started as a security necessity has grown into a vital part of the company's operations.
## How It Works

1. Upload: Employees log in via SSO, upload a file, specify a TTL (expiry time), enter a business justification, and select recipient emails (all protected by Cloudflare Turnstile).
2. Manager Approval: The system identifies the employee's manager and sends them an approval email with a password-protected preview.
3. Approve/Reject: Managers approve or reject from the email or via the portal.
4. Link Sharing: If approved, a secure password-protected link is sent to the recipient. If rejected, the file is never shared.
5. Auto Expiry & Archival: Links expire after the TTL. Files are automatically moved to Deep Archive after 15 days, and metadata is archived too.
All of this happens with minimal effort from employees — but full security under the hood.
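To make the expiry step concrete: the TTL check reduces to a simple timestamp comparison. A minimal sketch of the idea (hypothetical helper, not the actual SecShare code):

```python
from datetime import datetime, timedelta

def is_link_expired(uploaded_on: datetime, ttl_days: int) -> bool:
    """A download link is valid only until uploaded_on + TTL days (1-7)."""
    expiry = uploaded_on + timedelta(days=ttl_days)
    return datetime.utcnow() >= expiry
```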
## Architecture Overview

```
┌─────────────┐
│ Employee │
│ (Internal) │
└──────┬──────┘
│
│ 1. SSO Login (Azure AD B2C)
▼
┌─────────────────────┐
│ Upload Application │
│ (Port 5000) │
│ Private Network │
└──────┬──────────────┘
│
│ 2. Upload File
│ - Cloudflare Turnstile verification
│ - Rate limiting
│ - File validation
│
│ 3. Encrypt File
│ - AES-CTR streaming encryption
│ - KMS Data Encryption Key (DEK)
│ - Nonce stored in S3 metadata
│
│ 4. Upload to S3
│ - Multipart upload
│ - Server-side KMS encryption
│ - Metadata stored
│
│ 5. Store Metadata (MySQL)
│ - Unique ID generation
│ - TTL calculation
│ - Status: pending
▼
┌─────────────────────┐
│ AWS S3 + KMS │
│ MySQL RDS │
└──────┬──────────────┘
│
│ 6. Identify Manager
│ - Query employee_manager_details table
│
│ 7. Send Approval Email
│ - Manager password generated
│ - Approval/reject links
│ - File preview option
▼
┌─────────────────────┐
│ Manager │
└──────┬──────────────┘
│
│ 8. Review & Decide
│ - Approve: Generate user passwords
│ - Reject: Notify uploader
▼
┌─────────────────────┐
│ Upload Application │
└──────┬──────────────┘
│
│ 9. On Approval:
│ - Generate download passwords
│ - Update status: approved
│ - Send download links to recipients
▼
┌─────────────────────┐
│ Download Application│
│ (Port 5001) │
│ Public Subdomain │
└──────┬──────────────┘
│
│ 10. Recipient clicks link
│
│ 11. Password Verification
│ - Cloudflare Turnstile
│ - Hash comparison
│ - TTL validation
│
│ 12. Decrypt & Stream File
│ - Retrieve DEK from S3 metadata
│ - Decrypt DEK with KMS
│ - AES-CTR streaming decryption
│ - Stream to browser
▼
┌─────────────┐
│ End User │
│ (External) │
└─────────────┘
```
Here's how the components fit together:
### Upload Application

- Azure AD B2C authentication via MSAL
- Matches Azure AD SAML login for employees
- Secure session management with URLSafeTimedSerializer
- Port: 5000 (configurable via `INTERNAL_HTTP_PORT`)
- Access: Internal network, Azure AD B2C protected
- Features:
  - Cloudflare Turnstile bot protection
  - Rate limiting (Flask-Limiter)
  - File upload with AES-CTR streaming encryption
  - Manager approval workflow
  - Admin interface (`/appsec` routes)
  - Upload history with pagination
  - Employee-manager mapping
### Download Application

- Port: 5001 (configurable via `EXTERNAL_HTTP_PORT`)
- Access: Public subdomain, password-protected downloads
- Features:
  - Cloudflare Turnstile verification on password submit
  - Rate limiting
  - Password verification
  - AES-CTR streaming decryption
  - TTL validation
  - Direct file streaming to browser
### S3 Storage

- Encryption: AES-CTR streaming upload + KMS envelope encryption
- Storage Class: Standard → Deep Archive (after 15 days)
- Metadata: Nonce and encrypted DEK stored in object metadata
- Multipart Upload: Supports large files (up to 3GB, configurable)
### MySQL Database

- Read/Write Separation: Separate reader and writer endpoints
- Tables:
  - `file_metadata`: Live file metadata, TTL, approvals, justification
  - `file_metadata_archive`: Archived metadata with `deep_archive_status`
  - `employee_manager_details`: Employee-manager mapping (synced from Azure AD)
- Duplicate Handling: `ON DUPLICATE KEY UPDATE` for efficient upserts
### Email Notifications

- SMTP: TLS-encrypted email delivery
- Notifications:
  - Approval emails to managers
  - Download links to recipients
  - Rejection notifications to uploaders
  - Status updates
### Background Jobs

- Archive Job (`move_to_archive.py`): Moves expired files to S3 Deep Archive and archives metadata
- Employee Sync (`get_details.py`): Syncs the employee-manager mapping from the Azure AD Graph API
### Admin Portal

- Routes: `/appsec/*` (limited access)
- Features:
  - View live file metadata
  - View archived file metadata
  - Manage employee-manager details
  - Update manager emails for urgent uploads
  - Audit and debug capabilities
### Cloudflare Turnstile

- Protects uploads and public downloads from abuse
- Bot protection and rate limiting
- Site key and secret stored in AWS Secrets Manager
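For reference, server-side Turnstile validation is a single POST to Cloudflare's `siteverify` endpoint. A minimal sketch of that check (function name and wiring are illustrative):

```python
import requests

TURNSTILE_VERIFY_URL = "https://challenges.cloudflare.com/turnstile/v0/siteverify"

def verify_turnstile(token: str, secret: str, remote_ip: str | None = None) -> bool:
    """Validate the Turnstile token submitted with an upload or download form."""
    payload = {"secret": secret, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    resp = requests.post(TURNSTILE_VERIFY_URL, data=payload, timeout=10)
    return resp.json().get("success", False)
```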
### Tech Stack

- Backend: Python 3.10+, Flask 3.1.0
- Authentication: Azure AD B2C (MSAL)
- Storage: AWS S3 with KMS encryption
- Database: MySQL (RDS) with read/write separation
- Encryption: PyCryptodome (AES-256-CTR)
- Email: SMTP with TLS
- Bot Protection: Cloudflare Turnstile
- Rate Limiting: Flask-Limiter
- Archiving: AWS Glacier Deep Archive
## Key Features

- ✅ 3GB Uploads: Configurable via S3-stored config file (`config_upshare.yaml`)
- ✅ Password Protection: Both managers and recipients get unique, hashed passwords
- ✅ TTL Enforcement: Links and passwords auto-expire after the set duration (1-7 days)
- ✅ Audit Logging: Who shared what, with whom, when, and why — all traceable
- ✅ Manager Approval Flow: No sharing happens without explicit approval
- ✅ CAPTCHA + Rate Limiting: Prevents abuse of both upload and download endpoints
- ✅ S3 + KMS Encryption: Files are encrypted with customer-managed keys (CMK)
- ✅ Metadata Archival: MySQL metadata is moved to archive table after 15 days
- ✅ Deep Archive: Files are moved to cost-effective S3 Deep Archive after TTL
- ✅ Admin Portal: See employee-manager mapping, view live + archived file metadata, override manager emails for urgent uploads (limited access only)
- ✅ AES-CTR Streaming Encryption: Efficient streaming encryption for large files
- ✅ KMS Envelope Encryption: Data Encryption Keys (DEK) encrypted with KMS CMK
- ✅ Multipart Upload: Supports large files with streaming encryption
- ✅ Read/Write Database Separation: Optimized database access patterns
- ✅ Azure AD Integration: Automatic employee-manager mapping sync
- ✅ Comprehensive Error Handling: Graceful error handling and user feedback
- ✅ Logging: Comprehensive logging for debugging and audit trails
## Getting Started

### Prerequisites

- Python 3.10 or higher
- AWS Account with appropriate IAM permissions:
  - S3 read/write access
  - KMS key usage permissions
  - Secrets Manager read access
- Azure AD B2C tenant configured
- MySQL database (RDS recommended) with read/write separation
- SMTP server for email notifications
- Cloudflare Turnstile account (optional but recommended)
### Installation

1. Clone the repository

   ```bash
   git clone https://github.com/your-org/secshare.git
   cd secshare
   ```

2. Create a virtual environment

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

4. Configure AWS credentials

   ```bash
   aws configure
   # Or set environment variables:
   export AWS_ACCESS_KEY_ID=your_access_key
   export AWS_SECRET_ACCESS_KEY=your_secret_key
   export AWS_DEFAULT_REGION=ap-south-1
   ```

5. Set environment variables

   ```bash
   export NODE_ENV=uat  # or 'prod' for production
   export INTERNAL_HTTP_PORT=5000
   export EXTERNAL_HTTP_PORT=5001
   ```

6. Create the configuration file in S3

   Upload `config_upshare.yaml` to your S3 bucket. The bucket name should be:

   - `company-uat-secshare-appsec` for UAT
   - `company-secshare-appsec` for Production

   Example configuration structure:

   ```yaml
   uat:
     s3:
       bucket_name: your-uat-bucket
       region: ap-south-1
     db:
       mysql:
         writer_host: your-writer-host.rds.amazonaws.com
         reader_host: your-reader-host.rds.amazonaws.com
         writer_user: secshare_writer
         reader_user: secshare_reader
         database: secshare_db
         port: 3306
     azure:
       client_id: your-azure-client-id
       tenant_id: your-azure-tenant-id
     secrets:
       secret_name: arn:aws:secretsmanager:region:account:secret:name
       region_name: ap-south-1
     mail:
       mail_server: smtp.example.com
       mail_port: 587
     turnstile:
       site_key: your-turnstile-site-key
     file:
       max_size: 3221225472  # 3GB in bytes
       allowed_extensions: ['pdf', 'doc', 'docx', 'xls', 'xlsx', 'txt', 'zip', 'rar']
     admin_access:
       users: ['admin@example.com']
   ```
7. Configure AWS Secrets Manager

   Create a secret in AWS Secrets Manager with the following keys (a fetch sketch follows these installation steps):

   - `api.external.db.mysql.writer_password`
   - `api.external.db.mysql.reader_password`
   - `api.external.azure.client_secret`
   - `api.external.salt_secret`
   - `api.external.secret_key`
   - `api.external.smtp.smtp_username`
   - `api.external.smtp.smtp_password`
   - `api.external.turnstile_secret`
   - `api.external.s3.kms_key_secret`
8. Set up the database schema

   Create the following tables in your MySQL database:

   ```sql
   CREATE TABLE file_metadata (
       unique_id VARCHAR(255) PRIMARY KEY,
       file_name VARCHAR(500) NOT NULL,
       ttl INT NOT NULL,
       password_hash VARCHAR(255),
       manager_password VARCHAR(255),
       recipients TEXT,
       business_justification TEXT,
       message TEXT,
       expiry DATETIME NOT NULL,
       status ENUM('pending', 'approved', 'rejected') DEFAULT 'pending',
       uploaded_on DATETIME DEFAULT CURRENT_TIMESTAMP,
       uploaded_by VARCHAR(255),
       user_password_hash VARCHAR(255),
       INDEX idx_expiry (expiry),
       INDEX idx_status (status),
       INDEX idx_uploaded_by (uploaded_by)
   );

   CREATE TABLE employee_manager_details (
       employee_email VARCHAR(255) PRIMARY KEY,
       employee_name VARCHAR(255),
       employee_designation VARCHAR(255),
       manager_email VARCHAR(255),
       manager_name VARCHAR(255),
       manager_designation VARCHAR(255),
       department VARCHAR(255),
       INDEX idx_manager_email (manager_email)
   );

   CREATE TABLE file_metadata_archive (
       unique_id VARCHAR(255) PRIMARY KEY,
       file_name VARCHAR(500) NOT NULL,
       ttl INT NOT NULL,
       password_hash VARCHAR(255),
       manager_password VARCHAR(255),
       recipients TEXT,
       business_justification TEXT,
       message TEXT,
       expiry DATETIME NOT NULL,
       status ENUM('pending', 'approved', 'rejected') DEFAULT 'pending',
       uploaded_on DATETIME DEFAULT CURRENT_TIMESTAMP,
       uploaded_by VARCHAR(255),
       user_password_hash VARCHAR(255),
       deep_archive_status ENUM('yes', 'no') DEFAULT 'no',
       INDEX idx_deep_archive_status (deep_archive_status),
       INDEX idx_expiry (expiry)
   );
   ```
9. Run the application

   ```bash
   python main.py
   ```

   This will start both applications:

   - Upload app: http://localhost:5000
   - Download app: http://localhost:5001
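As a companion to step 7, here is a minimal sketch of reading those keys at startup with boto3, assuming the secret is stored as a single JSON blob (the project's actual loader lives in `shared/libs/secretsManager.py`):

```python
import json
import boto3

def load_secrets(secret_name: str, region_name: str) -> dict:
    """Fetch the JSON secret blob from AWS Secrets Manager and return it as a dict."""
    client = boto3.client("secretsmanager", region_name=region_name)
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

# Usage (secret name/region come from config_upshare.yaml):
# secrets = load_secrets("arn:aws:secretsmanager:region:account:secret:name", "ap-south-1")
# db_password = secrets["api.external.db.mysql.writer_password"]
```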
## Configuration

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `NODE_ENV` | Environment (`uat`/`prod`) | `uat` |
| `INTERNAL_HTTP_PORT` | Upload app port | `5000` |
| `EXTERNAL_HTTP_PORT` | Download app port | `5001` |
### Application Configuration

The application uses a YAML configuration file (`config_upshare.yaml`) stored in an S3 bucket and secrets from AWS Secrets Manager. The configuration includes:
- Environment-specific settings (UAT/PROD)
- Database connection parameters: Writer/reader hosts, users, database name, port
- S3 bucket names and KMS key IDs
- Azure AD B2C settings: Client ID, tenant ID
- Mail server configuration: SMTP server, port
- File size limits and allowed extensions: Max size (default 3GB), allowed extensions
- Admin access user list
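Putting those pieces together, config loading might look like the sketch below. The bucket names and object key follow the conventions described above; the helper itself is illustrative, not the project's actual `config.py`:

```python
import os
import boto3
import yaml

def load_config() -> dict:
    """Fetch config_upshare.yaml from S3 and return the section for this environment."""
    env = os.environ.get("NODE_ENV", "uat")
    bucket = "company-secshare-appsec" if env == "prod" else "company-uat-secshare-appsec"
    obj = boto3.client("s3").get_object(Bucket=bucket, Key="config_upshare.yaml")
    config = yaml.safe_load(obj["Body"].read())
    return config[env]  # top-level keys are environment names (uat/prod)
```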
### Database Schema

#### `file_metadata`

Purpose: Stores file upload information and status.

Columns and Descriptions:

- `unique_id`: Unique identifier (generated from file hash, TTL, filename, uploader)
- `file_name`: Original filename (normalized)
- `ttl`: Time-to-live in days (1-7)
- `password_hash`: Hashed user download password (bcrypt)
- `manager_password`: Hashed manager approval password (bcrypt)
- `recipients`: Comma-separated recipient emails
- `business_justification`: Required upload justification
- `message`: Optional user message
- `expiry`: Calculated expiration timestamp (`uploaded_on` + TTL)
- `status`: File status (pending/approved/rejected)
- `uploaded_on`: Upload timestamp
- `uploaded_by`: Uploader email (from Azure AD)
- `user_password_hash`: Additional user password hash
#### `employee_manager_details`

Purpose: Maps employees to their managers (synced from Azure AD).

Columns and Descriptions:

- `employee_email`: Employee email address (PRIMARY KEY)
- `employee_name`: Employee display name
- `employee_designation`: Employee job title
- `manager_email`: Manager email address
- `manager_name`: Manager display name
- `manager_designation`: Manager job title
- `department`: Department name

Note: Uses `ON DUPLICATE KEY UPDATE` for efficient syncs.
#### `file_metadata_archive`

Purpose: Stores archived file metadata.

Columns and Descriptions:

- All fields from the `file_metadata` table
- `deep_archive_status`: Glacier Deep Archive status (yes/no)

Note: File records are moved to this table after 15 days or TTL expiration.
## Code-Level Highlights

### File Upload & Encryption

Files are uploaded to AWS S3 with the following security measures:

- AES-CTR Streaming Encryption: Files are encrypted using AES in CTR mode with a randomly generated nonce (8 bytes)
- KMS Envelope Encryption:
  - A Data Encryption Key (DEK) is generated using AWS KMS (`generate_data_key`)
  - The DEK is encrypted with a KMS Customer Managed Key (CMK)
  - The encrypted DEK is stored in S3 object metadata as `x-amz-key`
- Multipart Upload: Large files are uploaded in 5MB chunks with streaming encryption
- Metadata Storage: The nonce is stored in S3 object metadata as `nonce` (hex-encoded)
- Server-Side Encryption: Additional KMS encryption at the S3 level (`ServerSideEncryption='aws:kms'`)
Code Location: `upload/libs/s3Handler.py`

```python
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

# dek / encrypted_dek come from kms.generate_data_key (see above)

# Generate nonce for CTR mode
nonce = get_random_bytes(8)
cipher = AES.new(dek, AES.MODE_CTR, nonce=nonce)

# Store nonce and encrypted DEK in S3 object metadata
metadata = {'x-amz-key': encrypted_dek, 'nonce': nonce.hex()}
```

### Database Operations

File metadata is tracked in MySQL with the following features:
- Read/Write Separation: Separate reader and writer connections for performance
- Duplicate Handling: `ON DUPLICATE KEY UPDATE` for efficient upserts in the employee-manager table
- Unique ID Generation: Generated from the file hash, TTL, filename, and uploader email
- TTL Validation: Expiry calculated as `uploaded_on` + TTL days
- Status Management: Tracks the pending → approved/rejected workflow

Code Location: `upload/models/dbModel.py`, `download/models/dbModel.py`
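The exact derivation lives in `dbModel.py`; purely as an illustration, a deterministic ID over those four inputs could be built like this (hypothetical helper, not the project's actual function):

```python
import hashlib

def make_unique_id(file_hash: str, ttl_days: int, filename: str, uploader: str) -> str:
    """Derive a stable identifier from file hash, TTL, filename, and uploader email."""
    material = f"{file_hash}:{ttl_days}:{filename}:{uploader}".encode()
    return hashlib.sha256(material).hexdigest()
```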
### Manager Approval Workflow

The system implements a secure approval workflow:

- Manager Identification: Queries the `employee_manager_details` table based on the uploader's email
- Password Generation: Generates a unique manager password (hashed with bcrypt)
- Email Notification: Sends an approval email with:
  - Approval link: `/api/approve/<unique_id>`
  - Reject link: `/api/reject/<unique_id>`
  - Manager preview link: `/manager/<unique_id>`
- On Approval:
  - Generates user download passwords (one per recipient)
  - Updates status to 'approved'
  - Sends download links to recipients
- On Rejection:
  - Updates status to 'rejected'
  - Notifies the uploader via email

Code Location: `upload/controllers/apiController.py`
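To illustrate the per-recipient password step on approval, a minimal sketch using `secrets` and bcrypt (names are illustrative; only the hash would be persisted):

```python
import secrets
import bcrypt

def generate_recipient_passwords(recipients: list[str]) -> dict[str, tuple[str, bytes]]:
    """One random password per recipient; the plaintext is emailed, the hash stored."""
    result = {}
    for email in recipients:
        password = secrets.token_urlsafe(12)
        hashed = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
        result[email] = (password, hashed)
    return result
```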
### Archival & Deep Archive

Files are moved to S3 Deep Archive after 15 days to minimize storage costs:

- Archive Job: `jobs/move_to_archive.py` runs periodically (cron)
- Process:
  - Queries `file_metadata_archive` for files with `deep_archive_status = 'no'`
  - Checks the S3 storage class (avoids duplicate transitions)
  - Transitions objects to the `DEEP_ARCHIVE` storage class
  - Updates `deep_archive_status = 'yes'` in the database
- Metadata Archival: Metadata is moved to the `file_metadata_archive` table after 15 days
- Database Optimization: Keeps the primary `file_metadata` table lean

Code Location: `jobs/move_to_archive.py`
### Streaming Encryption/Decryption

- Upload: AES-CTR mode with streaming encryption (`upload/libs/s3Handler.py`)
- Download: AES-CTR mode with streaming decryption (`download/libs/fileDecryptor.py`)
- Key Management: AWS KMS for DEK encryption/decryption
- Nonce Handling: 8-byte nonce stored in S3 metadata, retrieved during decryption
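A condensed sketch of the download path described above, assuming the encrypted DEK is base64-encoded to fit S3's string-only metadata (the real logic lives in `download/libs/fileDecryptor.py`):

```python
import base64
import boto3
from Crypto.Cipher import AES

def stream_decrypt(bucket: str, key: str, chunk_size: int = 8 * 1024 * 1024):
    """Yield decrypted chunks of an encrypted S3 object without buffering the whole file."""
    s3 = boto3.client("s3")
    kms = boto3.client("kms")
    obj = s3.get_object(Bucket=bucket, Key=key)
    meta = obj["Metadata"]
    # Decrypt the envelope: KMS unwraps the DEK stored in object metadata
    dek = kms.decrypt(CiphertextBlob=base64.b64decode(meta["x-amz-key"]))["Plaintext"]
    cipher = AES.new(dek, AES.MODE_CTR, nonce=bytes.fromhex(meta["nonce"]))
    for chunk in obj["Body"].iter_chunks(chunk_size):
        yield cipher.decrypt(chunk)  # CTR is a stream cipher, so chunk-wise is safe
```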
## Background Jobs & Cron Tasks

### Archive Job (`move_to_archive.py`)

Purpose: Moves expired files to S3 Deep Archive and archives metadata.

Schedule: Run daily (recommended: 2 AM).

Crontab Entry:

```bash
# Add to crontab
0 2 * * * /usr/bin/python3 /path/to/jobs/move_to_archive.py
```

What it does:
- Fetches all files from `file_metadata_archive` with `deep_archive_status = 'no'`
- Checks that files exist in S3 and are not already in Deep Archive
- Transitions files to the `DEEP_ARCHIVE` storage class
- Updates `deep_archive_status = 'yes'` in the database
- Logs all operations for audit

Configuration: Uses the same config as the main application (S3 bucket, KMS key, database).
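The storage-class transition itself is an in-place S3 copy. A simplified sketch of what the job might do per file (error handling omitted):

```python
import boto3

def move_to_deep_archive(bucket: str, key: str) -> None:
    """Transition an object to DEEP_ARCHIVE via an in-place copy."""
    s3 = boto3.client("s3")
    head = s3.head_object(Bucket=bucket, Key=key)
    if head.get("StorageClass") == "DEEP_ARCHIVE":
        return  # already transitioned; avoid a duplicate copy
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        StorageClass="DEEP_ARCHIVE",
        MetadataDirective="COPY",  # keep the nonce and encrypted DEK metadata
    )
```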
### Employee Sync (`get_details.py`)

Purpose: Syncs the employee-manager mapping from the Azure AD Graph API.

Schedule: Run weekly (recommended: Sunday 2 AM).

Crontab Entry:

```bash
# Add to crontab
0 2 * * 0 /usr/bin/python3 /path/to/jobs/get_details.py
```

What it does:
- Authenticates with Azure AD using client credentials
- Fetches all users from the Microsoft Graph API
- Fetches manager details for each user
- Inserts/updates the `employee_manager_details` table using `ON DUPLICATE KEY UPDATE`
- Handles pagination for large organizations

Configuration: Requires an Azure AD app registration with Graph API permissions.
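A minimal sketch of that sync loop with `msal` and `requests`, paginating via `@odata.nextLink` (simplified; the real job also writes the results to MySQL):

```python
import msal
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def fetch_users_with_managers(tenant_id: str, client_id: str, client_secret: str):
    """Client-credentials auth, then walk /users and each user's /manager."""
    app = msal.ConfidentialClientApplication(
        client_id,
        authority=f"https://login.microsoftonline.com/{tenant_id}",
        client_credential=client_secret,
    )
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    headers = {"Authorization": f"Bearer {token['access_token']}"}
    url = f"{GRAPH}/users"
    while url:  # Graph pages results via @odata.nextLink
        page = requests.get(url, headers=headers, timeout=30).json()
        for user in page.get("value", []):
            mgr = requests.get(f"{GRAPH}/users/{user['id']}/manager", headers=headers, timeout=30)
            yield user, (mgr.json() if mgr.ok else None)
        url = page.get("@odata.nextLink")
```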
## Security

### Encryption

- File Encryption: AES-256-CTR mode with streaming encryption
- Key Management: AWS KMS Customer Managed Keys (CMK) for envelope encryption
- Key Storage: Encrypted Data Encryption Keys (DEK) stored in S3 object metadata
- Transport Security: HTTPS/TLS for all communications
- Nonce Management: Random 8-byte nonce per file, stored securely in S3 metadata
### Authentication & Access Control

- Internal Access: Azure AD B2C SSO with MSAL (Microsoft Authentication Library)
- Session Management: URLSafeTimedSerializer for secure, signed cookies
- Manager Verification: Database-backed manager role verification
- Admin Access: Configurable admin user list in S3 config
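For context, `URLSafeTimedSerializer` (from `itsdangerous`) issues signed, expirable tokens. A minimal sketch of the pattern (key and max-age values are illustrative):

```python
from itsdangerous import BadSignature, SignatureExpired, URLSafeTimedSerializer

serializer = URLSafeTimedSerializer("app-secret-key")  # real key comes from Secrets Manager

def issue_session(email: str) -> str:
    return serializer.dumps({"email": email})

def verify_session(token: str, max_age: int = 3600):
    """Return the payload, or None if the token is tampered with or expired."""
    try:
        return serializer.loads(token, max_age=max_age)
    except (BadSignature, SignatureExpired):
        return None
```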
### Application Security

- Password Hashing: bcrypt with salt for all passwords
- Bot Protection: Cloudflare Turnstile integration on upload and download
- Rate Limiting: Flask-Limiter with configurable limits per endpoint
- Input Validation: Comprehensive sanitization and validation
- SQL Injection Prevention: Parameterized queries throughout
- XSS Protection: Input sanitization and secure Jinja2 templates
- CSRF Protection: Secure cookie settings (HttpOnly, Secure, SameSite)
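As an illustration of the rate-limiting setup above, a minimal Flask-Limiter sketch (the route and limits are hypothetical, not SecShare's actual values):

```python
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
# Key requests by client IP; apply a default ceiling to every endpoint
limiter = Limiter(key_func=get_remote_address, app=app, default_limits=["200 per hour"])

@app.route("/api/verify-password", methods=["POST"])
@limiter.limit("5 per minute")  # tighter limit to slow password brute-forcing
def verify_password():
    return "ok"
```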
### Data Protection & Auditing

- TTL Enforcement: Automatic expiration of links and passwords
- Audit Logging: Comprehensive logging of all operations (who, what, when, why)
- File Expiration: Automatic TTL enforcement with database queries
- Archive Management: Automated archiving to Glacier Deep Archive
- Metadata Tracking: Complete audit trail in database
- Business Justification: Required for all uploads (compliance requirement)
## Project Structure

```
secshare/
├── upload/ # Internal upload application
│ ├── __init__.py # Flask app factory
│ ├── run.py # Application entry point
│ ├── config.py # Configuration loader (S3 + Secrets Manager)
│ ├── controllers/ # Request handlers
│ │ ├── appController.py # Main app routes (home, history, approvals)
│ │ ├── apiController.py # API endpoints (upload, approve, reject)
│ │ ├── authController.py # Authentication (Azure AD B2C)
│ │ └── adminController.py # Admin routes (/appsec/*)
│ ├── models/ # Data models
│ │ └── dbModel.py # Database operations (FileModel, EmployeeModel, DbManager)
│ ├── libs/ # Business logic
│ │ ├── fileEncryption.py # File encryption (KMS DEK generation)
│ │ ├── s3Handler.py # S3 operations (multipart upload, AES-CTR)
│ │ ├── mailHandler.py # Email notifications (SMTP)
│ │ ├── cookieHandler.py # Cookie management
│ │ └── secretsManager.py # AWS Secrets Manager
│ ├── routes/ # Route blueprints
│ │ ├── appRoute.py # App routes blueprint
│ │ ├── apiRoute.py # API routes blueprint
│ │ ├── authRoute.py # Auth routes blueprint
│ │ └── adminRoute.py # Admin routes blueprint
│ ├── templates/ # HTML templates
│ │ ├── home.html # Upload interface
│ │ ├── history.html # Upload history
│ │ ├── approvals.html # Manager approvals dashboard
│ │ ├── processing.html # Processing status
│ │ └── admin/ # Admin templates
│ │ ├── home.html
│ │ ├── table.html
│ │ └── update_manager_email.html
│ └── tests/ # Unit tests
│ ├── test_api_controller.py
│ ├── test_app_controller.py
│ └── test_utils.py
│
├── download/ # External download application
│ ├── __init__.py # Flask app factory
│ ├── run.py # Application entry point
│ ├── config.py # Configuration loader
│ ├── controllers/ # Request handlers
│ │ ├── appController.py # Main app routes (download pages)
│ │ └── apiController.py # API endpoints (password verification)
│ ├── models/ # Data models
│ │ └── dbModel.py # Database operations
│ ├── libs/ # Business logic
│ │ └── fileDecryptor.py # File decryption (AES-CTR streaming)
│ ├── routes/ # Route blueprints
│ │ ├── appRoute.py # App routes blueprint
│ │ └── apiRoute.py # API routes blueprint
│ ├── templates/ # HTML templates
│ │ ├── download_home.html
│ │ ├── password_prompt.html
│ │ ├── manager_password_prompt.html
│ │ ├── download_and_redirect.html
│ │ └── error.html
│ └── tests/ # Unit tests
│ ├── test_api_controller.py
│ └── test_app_controller.py
│
├── shared/ # Shared utilities
│ ├── libs/
│ │ └── secretsManager.py # Shared secrets manager
│ ├── utils/
│ │ ├── logger.py # Logging configuration
│ │ └── decorators.py # Common decorators (login_required)
│ └── static/
│ └── favicon.png
│
├── jobs/ # Background jobs
│ ├── move_to_archive.py # Archive job (S3 → Deep Archive)
│ └── get_details.py # Employee-manager sync (Azure AD → MySQL)
│
├── docker/ # Docker configuration
│ └── Dockerfile
│
├── os_assets/ # Screenshots and diagrams
│ └── *.png
│
├── main.py # Application launcher (starts both apps)
├── requirements.txt # Python dependencies
├── README # This file
└── pom.xml                 # Maven configuration (if applicable)
```
## Deployment

1. Build the Docker image

   ```bash
   docker build -t secshare:latest -f docker/Dockerfile .
   ```

2. Run the container

   ```bash
   docker run -d \
     -p 5000:5000 \
     -p 5001:5001 \
     -e NODE_ENV=prod \
     -e INTERNAL_HTTP_PORT=5000 \
     -e EXTERNAL_HTTP_PORT=5001 \
     -e AWS_ACCESS_KEY_ID=your_key \
     -e AWS_SECRET_ACCESS_KEY=your_secret \
     --name secshare \
     secshare:latest
   ```

3. Set up a reverse proxy (Nginx recommended)

   ```nginx
   # Upload app (internal)
   server {
       listen 443 ssl;
       server_name secshare.company.app;

       location / {
           proxy_pass http://localhost:5000;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
       }
   }

   # Download app (external)
   server {
       listen 443 ssl;
       server_name secshare.company.com;

       location / {
           proxy_pass http://localhost:5001;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
       }
   }
   ```
4. Set up a process manager (systemd recommended)

   Create `/etc/systemd/system/secshare.service`:

   ```ini
   [Unit]
   Description=SecShare File Sharing Application
   After=network.target

   [Service]
   Type=simple
   User=secshare
   WorkingDirectory=/opt/secshare
   Environment="NODE_ENV=prod"
   Environment="INTERNAL_HTTP_PORT=5000"
   Environment="EXTERNAL_HTTP_PORT=5001"
   ExecStart=/usr/bin/python3 /opt/secshare/main.py
   Restart=always
   RestartSec=10

   [Install]
   WantedBy=multi-user.target
   ```

   Enable and start:

   ```bash
   sudo systemctl enable secshare
   sudo systemctl start secshare
   ```

- Configure SSL certificates (Let's Encrypt recommended)
- Set up monitoring and logging (CloudWatch, Datadog, etc.)
- Configure automated backups (RDS automated backups, S3 versioning)
Set up cron jobs for the archive and sync tasks:

```bash
# Edit crontab
crontab -e

# Add these lines:
# Archive job - runs daily at 2 AM
0 2 * * * /usr/bin/python3 /opt/secshare/jobs/move_to_archive.py >> /var/log/secshare/archive.log 2>&1

# Employee-manager sync - runs weekly on Sunday at 2 AM
0 2 * * 0 /usr/bin/python3 /opt/secshare/jobs/get_details.py >> /var/log/secshare/sync.log 2>&1
```

## Contributing

We believe in the power of open source to drive innovation and help others solve similar problems. SecShare is available under the Apache 2.0 license. Here's how you can get involved:
- Deploy it: Use it as-is or customize it for your team or company
- Extend it: Build new features such as:
  - Integration with other tools (Slack, Teams, etc.)
  - Advanced admin controls
  - Multi-cloud support (Azure Blob, GCS)
  - Enhanced reporting and analytics
- Improve it: Fix bugs, update documentation, or contribute new ideas to the project
To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests for new functionality
- Ensure all tests pass (`pytest`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Code style guidelines:

- Follow PEP 8 style guidelines
- Use type hints where appropriate
- Add docstrings to all functions and classes
- Keep functions focused and small
- Write meaningful commit messages
- Update documentation for new features
## License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Built with ❤️ for secure file sharing
Your files, your control — password-protected, audited, and secure.
Explore and contribute to SecShare on GitHub. Your contributions will help make this tool better and more secure for teams worldwide.









