diff --git a/CANDIDATE_README.md b/CANDIDATE_README.md index e69de29..967803a 100644 --- a/CANDIDATE_README.md +++ b/CANDIDATE_README.md @@ -0,0 +1,444 @@ +# AI Meeting Digest - Full-Stack Implementation + +## 1. Technology Choices + +* **Frontend:** `Next.js 15 with TypeScript and shadcn/ui` +* **Backend:** `FastAPI (Python)` +* **Database:** `PostgreSQL` +* **AI Service:** `Google Gemini 2.0 Flash` + +### Why I chose this stack: + +**Next.js 15 with TypeScript**: Selected for its excellent developer experience, built-in TypeScript support, App Router for modern React patterns, and excellent performance with automatic code splitting. The server-side rendering capabilities provide great SEO and initial load performance. + +**shadcn/ui**: Chosen for its modern, accessible components built on Radix UI and Tailwind CSS. Provides a consistent design system with copy-paste components that are fully customizable and follow best practices for accessibility. + +**FastAPI**: Selected for its modern async/await support, automatic API documentation generation, excellent type safety with Pydantic, and high performance. FastAPI's built-in OpenAPI documentation makes API development and testing seamless. + +**PostgreSQL**: Chosen as a robust, production-ready relational database with excellent JSON support, ACID compliance, and strong ecosystem. Perfect for storing structured meeting digest data with UUID support for shareable links. + +**Google Gemini 2.0 Flash**: Selected for its speed, cost-effectiveness, and excellent structured output capabilities. Gemini 2.0 Flash provides both standard and streaming responses, making it ideal for real-time user experiences. 
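Gemini's "structured output" still arrives as model-generated text, so the backend cannot assume valid JSON on every call and needs a tolerant parsing chain (JSON, then a fenced-block regex, then a plain-text fallback). A minimal sketch of that idea — the function name, field names, and fallback shape here are illustrative assumptions, not the project's actual `ai_service.py` code:

```python
import json
import re

def parse_digest_response(raw: str) -> dict:
    """Parse a model reply into digest fields, tolerating non-JSON output."""
    # 1. Strict JSON: the happy path when the model obeys the prompt.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        pass
    # 2. Regex fallback: pull a ```json ... ``` fenced block out of chatty replies.
    match = re.search(r"`{3}(?:json)?\s*(\{.*?\})\s*`{3}", raw, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(1))
        except json.JSONDecodeError:
            pass
    # 3. Manual fallback: keep the raw text as the overview so nothing is lost.
    return {"summary_overview": raw.strip(), "key_decisions": [], "action_items": []}
```

The same chain degrades gracefully: a well-behaved reply costs one `json.loads`, while a malformed one still yields a usable digest record.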
+
+**Additional Libraries**:
+- **SQLAlchemy 2.0**: Modern ORM with excellent async support and type safety
+- **Alembic**: Database migration management for schema versioning
+- **Pydantic**: Data validation and serialization with automatic API documentation
+- **Uvicorn**: High-performance ASGI server
+- **Tailwind CSS**: Utility-first CSS framework for rapid UI development
+- **Sonner**: Modern toast notifications for better UX
+
+## 2. How to Run the Project
+
+### Prerequisites
+- Node.js 18+ and npm
+- Python 3.8+
+- PostgreSQL 12+
+- Google Gemini API key from [AI Studio](https://aistudio.google.com/app/apikey)
+
+### Backend Setup
+
+1. **Navigate to Backend Directory**
+   ```bash
+   cd backend
+   ```
+
+2. **Create Virtual Environment**
+   ```bash
+   python -m venv work4u
+   source work4u/bin/activate  # On Windows: work4u\Scripts\activate
+   ```
+
+3. **Install Dependencies**
+   ```bash
+   pip install -r requirements.txt
+   ```
+
+4. **Configure Environment**
+   ```bash
+   cp .env.example .env
+   ```
+
+   Edit `.env` with your settings:
+   ```env
+   # Database
+   DATABASE_URL=postgresql://your_user:your_password@localhost:5432/meeting_digest_db
+   DB_USER=your_user
+   DB_PASSWORD=your_password
+   DB_HOST=localhost
+   DB_PORT=5432
+   DB_NAME=meeting_digest_db
+
+   # Google Gemini API
+   GEMINI_API_KEY=your_actual_gemini_api_key_here
+
+   # App Settings
+   SECRET_KEY=your-production-secret-key
+   DEBUG=true
+   HOST=0.0.0.0
+   PORT=8000
+   ```
+
+5. **Set Up the Database**
+   ```bash
+   # Make sure PostgreSQL is running
+   python setup_db.py
+   ```
+
+6. **Run Tests (Optional)**
+   ```bash
+   pytest
+   # or run each test separately, e.g.
+   python test_config.py
+   ```
+
+7. **Start the Server**
+   ```bash
+   python run_server.py
+   ```
+
+8. **Access the API**
+   - API: http://localhost:8000
+   - Interactive Docs: http://localhost:8000/docs
+   - Alternative Docs: http://localhost:8000/redoc
+
+### Frontend Setup
+
+1. **Navigate to Frontend Directory**
+   ```bash
+   cd frontend
+   ```
+
+2. 
**Install Dependencies** + ```bash + npm install + ``` + +3. **Configure Environment (Optional)** + ```bash + # Create .env.local file + echo "NEXT_PUBLIC_API_URL=http://localhost:8000" > .env.local + ``` + +4. **Start the Frontend Server** + ```bash + npm run dev + ``` + +### Access the Application + +- **Frontend**: http://localhost:3000 (or the port shown in terminal) +- **Backend API**: http://localhost:8000 +- **API Documentation**: http://localhost:8000/docs + +## 3. Project Structure + +### Backend Structure +``` +backend/ +├── run_server.py # Main application entry point +├── setup_db.py # Database initialization script +├── requirements.txt # Python dependencies +├── alembic.ini # Database migration configuration +├── alembic/ # Database migration files +│ ├── env.py +│ ├── script.py.mako +│ └── versions/ +│ └── 001_initial_migration.py +└── src/ + ├── main.py # FastAPI application setup + ├── config.py # Environment configuration + ├── database.py # Database connection and models + ├── models.py # SQLAlchemy models + ├── schemas.py # Pydantic schemas for API + ├── services.py # Business logic layer + ├── ai_service.py # AI/LLM integration + └── api/ + └── digests.py # API endpoints +``` + +### Frontend Structure +``` +frontend/ +├── src/ +│ ├── app/ # Next.js App Router +│ │ ├── page.tsx # Landing page +│ │ ├── layout.tsx # Root layout +│ │ ├── globals.css # Global styles +│ │ ├── digests/ +│ │ │ └── page.tsx # Digest list page +│ │ └── digest/ +│ │ ├── [id]/ +│ │ │ └── page.tsx # Individual digest view +│ │ └── share/ +│ │ └── [publicId]/ +│ │ └── page.tsx # Public sharing page +│ ├── components/ +│ │ ├── ui/ # shadcn/ui components +│ │ └── DigestCreator.tsx # Main digest creation component +│ └── lib/ +│ ├── api.ts # API client functions +│ └── utils.ts # Utility functions +├── components.json # shadcn/ui configuration +├── tailwind.config.js # Tailwind CSS configuration +└── package.json # Node.js dependencies +``` + +### API Usage Examples + +**Create 
a digest:** +```bash +curl -X POST "http://localhost:8000/api/v1/digests/" \ + -H "Content-Type: application/json" \ + -d '{"transcript": "Your meeting transcript here..."}' +``` + +**Get all digests:** +```bash +curl "http://localhost:8000/api/v1/digests/" +``` + +**Get digest by ID:** +```bash +curl "http://localhost:8000/api/v1/digests/1" +``` + +**Get shareable digest:** +```bash +curl "http://localhost:8000/api/v1/digests/share/{uuid}" +``` + +## 4. Frontend Implementation + +### Core Features + +**DigestCreator Component** (`src/components/DigestCreator.tsx`) +- Real-time streaming digest generation with word-by-word animation +- Sample transcript loading for testing +- Auto-expanding text areas +- Visibility controls (public/private) +- Toast notifications for user feedback + +**Digest Management** (`src/app/digests/page.tsx`) +- Grid-based digest listing +- Delete functionality with confirmation +- Share link generation +- Visibility status indicators +- Responsive design + +**Individual Digest View** (`src/app/digest/[id]/page.tsx`) +- Full digest display with transcript +- Visibility toggle controls +- Share link management +- Navigation breadcrumbs + +**Public Sharing** (`src/app/digest/share/[publicId]/page.tsx`) +- Public access without authentication +- Clean, read-only interface +- UUID-based secure sharing + +### Key Technical Features + +**1. Streaming Implementation** +- Server-Sent Events (SSE) for real-time digest generation +- Word-by-word animation with smooth transitions +- Proper error handling and connection management +- Automatic reconnection on failures + +**2. UI/UX Enhancements** +- Gradient backgrounds and modern design +- Smooth animations and transitions +- Responsive layout for all screen sizes +- Loading states and progress indicators + +**3. TypeScript Integration** +- Full type safety across all components +- API response type definitions +- Props and state type validation +- IDE intellisense support + +## 5. 
Design Decisions & Trade-offs
+
+### Architecture Decisions
+
+**1. Full-Stack Architecture**
+- **Frontend**: Next.js 15 with App Router for modern React development
+- **Backend**: FastAPI with layered architecture for scalability
+- **Database**: PostgreSQL with SQLAlchemy ORM
+- **AI Integration**: Google Gemini 2.0 Flash for intelligent content generation
+
+**2. Frontend Architecture**
+- **Component-Based**: shadcn/ui components for consistent design
+- **Type Safety**: Full TypeScript implementation across all layers
+- **State Management**: React hooks with proper error boundaries
+- **Routing**: Next.js App Router for file-based routing
+
+**3. Backend Layered Architecture**
+- **API Layer** (`api/digests.py`): HTTP request handling and validation
+- **Service Layer** (`services.py`): Business logic and data transformation
+- **Data Layer** (`models.py`, `database.py`): Database operations and schema
+- **External Services** (`ai_service.py`): AI integration abstraction
+
+*Trade-off*: More files and complexity, but excellent separation of concerns and testability.
+
+**4. Database Design**
+```sql
+meeting_digests (
+    id: Integer (Primary Key)
+    public_id: UUID (For sharing)
+    original_transcript: Text
+    summary_overview: Text
+    key_decisions: Text (JSON)
+    action_items: Text (JSON)
+    created_at: DateTime
+    updated_at: DateTime
+    is_public: Boolean
+)
+```
+
+*Trade-off*: Storing JSON as text vs. separate normalized tables. Chose JSON-as-text for simplicity; migrating these columns to PostgreSQL's native `jsonb` type would additionally enable server-side JSON querying and indexing.
+
+**5. Streaming Implementation**
+- Server-Sent Events for real-time communication
+- Word-by-word content delivery for better UX
+- Proper CORS configuration for cross-origin requests
+- Error handling and reconnection logic
+
+**6. Configuration Management**
+- Pydantic Settings for type-safe environment variable loading
+- Automatic validation and type conversion
+- Clear separation of development/production configs
+
+**7. 
Error Handling Strategy** +- Comprehensive exception handling in AI service +- Graceful fallback parsing when JSON fails +- HTTP status codes follow REST conventions +- Detailed error messages for debugging + +### Challenge Features Implemented + +**✅ Full-Stack Implementation** +- Complete Next.js frontend with modern UI +- FastAPI backend with streaming capabilities +- Real-time digest generation with word-by-word display +- Responsive design for all device sizes + +**✅ Shareable Digest Links** +- UUID-based public identifiers +- Separate endpoint `/share/{public_id}` for public access +- Visibility control with `is_public` flag +- Secure, non-guessable URLs + +**✅ Real-time Streaming Response** +- Server-Sent Events (SSE) implementation +- Streaming endpoint `/digests/stream` +- Progressive text rendering capability +- Word-by-word animation for better UX +- Fallback to standard response if streaming fails + +**✅ Modern UI/UX** +- shadcn/ui component library integration +- Gradient backgrounds and smooth animations +- Auto-expanding text areas +- Loading states and progress indicators +- Toast notifications for user feedback + +### AI Integration Approach + +**Prompt Engineering**: +- Structured JSON output requirements +- Clear formatting guidelines +- Fallback parsing for non-JSON responses +- Context-aware instructions + +**Error Resilience**: +- Multiple parsing strategies (JSON → Regex → Manual) +- Graceful degradation when AI service fails +- Timeout handling for long requests + +### What I Would Do Differently With More Time + +1. **Enhanced Testing** + - Integration tests with test database + - AI service mocking for consistent testing + - Load testing for streaming endpoints + - Frontend integration tests with Cypress/Playwright + - Component testing with React Testing Library + +2. 
**Production Readiness** + - Docker containerization for both frontend and backend + - Database connection pooling optimization + - Redis caching for frequent requests + - Rate limiting and authentication + - Monitoring and logging improvements + - CI/CD pipeline setup + +3. **Advanced Features** + - User authentication and accounts + - Bulk digest processing + - Custom prompt templates + - Digest export (PDF, Word) + - Real-time collaboration features + - Advanced search and filtering + - Analytics and usage tracking + - Multi-language support + +4. **Performance Optimizations** + - Database indexing strategy + - Response caching + - Background job processing + - CDN for static assets + +## 6. Implementation Summary + +This project demonstrates a complete full-stack application with: + +### Backend Highlights +- **FastAPI Framework**: Modern, fast, and well-documented Python API +- **Layered Architecture**: Clean separation of concerns for maintainability +- **SQLAlchemy ORM**: Type-safe database operations with migrations +- **Google Gemini Integration**: Advanced AI-powered content generation +- **Streaming Support**: Real-time digest generation with SSE + +### Frontend Highlights +- **Next.js 15**: Latest React framework with App Router +- **shadcn/ui**: Modern, accessible component library +- **TypeScript**: Full type safety across the application +- **Responsive Design**: Works seamlessly on all device sizes +- **Real-time Features**: Streaming digest generation with animations + +### Key Technical Achievements +- **Word-by-word streaming**: Implemented SSE for real-time content delivery +- **Public sharing**: UUID-based secure digest sharing +- **Error resilience**: Comprehensive error handling and fallback strategies +- **Type safety**: Full TypeScript implementation with proper API types +- **Modern UI**: Gradient backgrounds, animations, and responsive design + +## 7. 
API Endpoints Summary + +- `POST /api/v1/digests/` - Create digest from transcript +- `POST /api/v1/digests/stream` - Create digest with streaming response +- `GET /api/v1/digests/` - List all digests (paginated) +- `GET /api/v1/digests/{id}` - Get digest by integer ID +- `GET /api/v1/digests/share/{uuid}` - Get digest by public UUID +- `DELETE /api/v1/digests/{id}` - Delete digest +- `PATCH /api/v1/digests/{id}/visibility` - Update sharing settings +- `GET /api/v1/health` - Health check +- `GET /docs` - Interactive API documentation + +## Database Schema + +The `meeting_digests` table stores all digest information with: +- Primary key for internal references +- UUID for secure public sharing +- Full transcript preservation +- Structured AI output (JSON) +- Timestamps for audit trail +- Visibility control for sharing + +This backend implementation provides a robust, scalable foundation for the AI Meeting Digest service with modern Python practices, comprehensive error handling, and production-ready architecture. + +## 8. 
AI Usage Log + +### GitHub Copilot Usage Throughout Development + +--- \ No newline at end of file diff --git a/backend/.env.example b/backend/.env.example new file mode 100644 index 0000000..9252386 --- /dev/null +++ b/backend/.env.example @@ -0,0 +1,19 @@ +# Database +DATABASE_URL=postgresql://postgres:password@localhost:5432/meeting_digest_db +DB_USER=postgres +DB_PASSWORD=password +DB_HOST=localhost +DB_PORT=5432 +DB_NAME=meeting_digest_db + +# Google Gemini API +GEMINI_API_KEY=your_gemini_api_key_here + +# App Settings +SECRET_KEY=your-secret-key-here +DEBUG=True +HOST=0.0.0.0 +PORT=8000 + +# CORS +ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173 diff --git a/backend/.gitignore b/backend/.gitignore new file mode 100644 index 0000000..c4dd8d7 --- /dev/null +++ b/backend/.gitignore @@ -0,0 +1,180 @@ +# Byte-compiled / optimized / DLL files +__pycache__/ +*.py[cod] +*$py.class + +# C extensions +*.so + +# Distribution / packaging +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +share/python-wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST + +# PyInstaller +# Usually these files are written by a python script from a template +# before PyInstaller builds the exe, so as to inject date/other infos into it. 
+*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.nox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +*.py,cover +.hypothesis/ +.pytest_cache/ +cover/ + +# Translations +*.mo +*.pot + +# Django stuff: +*.log +local_settings.py +db.sqlite3 +db.sqlite3-journal + +# Flask stuff: +instance/ +.webassets-cache + +# Scrapy stuff: +.scrapy + +# Sphinx documentation +docs/_build/ + +# PyBuilder +.pybuilder/ +target/ + +# Jupyter Notebook +.ipynb_checkpoints + +# IPython +profile_default/ +ipython_config.py + +# pyenv +# For a library or package, you might want to ignore these files since the code is +# intended to run in multiple environments; otherwise, check them in: +# .python-version + +# pipenv +# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. +# However, in case of collaboration, if having platform-specific dependencies or dependencies +# having no cross-platform support, pipenv may install dependencies that don't work, or not +# install all needed dependencies. +#Pipfile.lock + +# poetry +# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. +# This is especially recommended for binary packages to ensure reproducibility, and is more +# commonly ignored for libraries. +# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control +#poetry.lock + +# pdm +# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. +#pdm.lock +# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it +# in version control. +# https://pdm.fming.dev/#use-with-ide +.pdm.toml + +# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm +__pypackages__/ + +# Celery stuff +celerybeat-schedule +celerybeat.pid + +# SageMath parsed files +*.sage.py + +# Environments +.env +.venv +env/ +venv/ +ENV/ +env.bak/ +venv.bak/ + +# Spyder project settings +.spyderproject +.spyproject + +# Rope project settings +.ropeproject + +# mkdocs documentation +/site + +# mypy +.mypy_cache/ +.dmypy.json +dmypy.json + +# Pyre type checker +.pyre/ + +# pytype static type analyzer +.pytype/ + +# Cython debug symbols +cython_debug/ + +# PyCharm +# JetBrains specific template is maintained in a separate JetBrains.gitignore that can +# be added to the global gitignore or merged into this project gitignore. For a PyCharm +# project, it is recommended to ignore the .idea/ folder. +.idea/ + +# VS Code +.vscode/ + +# OS generated files +.DS_Store +.DS_Store? +._* +.Spotlight-V100 +.Trashes +ehthumbs.db +Thumbs.db + +# Database +*.db +*.sqlite +*.sqlite3 + +# Alembic +alembic/versions/*.py +!alembic/versions/001_initial_migration.py diff --git a/backend/README.md b/backend/README.md new file mode 100644 index 0000000..6332dd2 --- /dev/null +++ b/backend/README.md @@ -0,0 +1,328 @@ +# AI Meeting Digest Backend + +A FastAPI-based backend service that processes meeting transcripts and generates AI-powered summaries using Google's Gemini API. 
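To give a concrete feel for the streaming feature described below, here is a minimal client-side sketch. The `data:`-line framing with one JSON object per event and a `word` field are assumptions for illustration, not this service's documented wire format:

```python
import json

def iter_sse_payloads(lines):
    """Yield decoded JSON payloads from Server-Sent-Events 'data:' lines.

    Skips blank keep-alive lines and a terminal '[DONE]' sentinel
    (a common SSE convention, assumed here for illustration).
    """
    for line in lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield json.loads(payload)

# Wiring it to the streaming endpoint with httpx (not executed here):
#   with httpx.stream("POST", "http://localhost:8000/api/v1/digests/stream",
#                     json={"transcript": "..."}) as resp:
#       for event in iter_sse_payloads(resp.iter_lines()):
#           print(event.get("word", ""), end="", flush=True)
```

Keeping the line parser separate from the HTTP client makes it trivial to unit-test without a running server.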
+ +## Features + +- 🤖 **AI-Powered Summarization**: Uses Google Gemini API to generate structured meeting summaries +- 📊 **PostgreSQL Database**: Persistent storage for transcripts and summaries +- 🔄 **Real-time Streaming**: Optional streaming responses for better UX +- 🔗 **Shareable Links**: Public URLs for sharing digest summaries +- 🚀 **FastAPI Framework**: Modern, fast, and well-documented API +- 🔒 **CORS Support**: Configured for frontend integration + +## API Endpoints + +### Core Endpoints + +- `POST /api/v1/digests/` - Create a new digest from transcript +- `POST /api/v1/digests/stream` - Create digest with streaming response +- `GET /api/v1/digests/` - List all digests (paginated) +- `GET /api/v1/digests/{id}` - Get specific digest by ID +- `GET /api/v1/digests/share/{public_id}` - Get shared digest by public ID +- `DELETE /api/v1/digests/{id}` - Delete a digest +- `PATCH /api/v1/digests/{id}/visibility` - Update digest sharing settings + +### Utility Endpoints + +- `GET /` - API welcome message +- `GET /api/v1/health` - Health check +- `GET /docs` - Interactive API documentation (Swagger UI) +- `GET /redoc` - Alternative API documentation (ReDoc) + +## Project Structure + +``` +backend/ +├── src/ +│ ├── __init__.py +│ ├── main.py # FastAPI application +│ ├── config.py # Configuration settings +│ ├── database.py # Database connection and session +│ ├── models.py # SQLAlchemy models +│ ├── schemas.py # Pydantic schemas +│ ├── services.py # Business logic services +│ ├── ai_service.py # Google Gemini integration +│ └── api/ +│ ├── __init__.py # API router setup +│ └── digests.py # Digest endpoints +├── alembic/ # Database migrations +│ ├── versions/ +│ ├── env.py +│ └── script.py.mako +├── requirements.txt # Python dependencies +├── .env.example # Environment variables template +├── alembic.ini # Alembic configuration +├── setup_db.py # Database setup script +└── run_server.py # Development server script +``` + +## Quick Start + +### 1. 
Prerequisites + +- Python 3.8+ +- PostgreSQL 12+ +- Google Gemini API key + +### 2. Environment Setup + +```bash +# Clone and navigate to backend +cd backend + +# Create virtual environment +python -m venv venv +source venv/bin/activate # On Windows: venv\Scripts\activate + +# Install dependencies +pip install -r requirements.txt + +# Copy environment template +cp .env.example .env +``` + +### 3. Configure Environment + +Edit `.env` file with your settings: + +```env +# Database +DATABASE_URL=postgresql://your_user:your_password@localhost:5432/meeting_digest_db +DB_USER=your_user +DB_PASSWORD=your_password +DB_HOST=localhost +DB_PORT=5432 +DB_NAME=meeting_digest_db + +# Google Gemini API +GEMINI_API_KEY=your_gemini_api_key_here + +# App Settings +SECRET_KEY=your-secret-key-here +DEBUG=True +HOST=0.0.0.0 +PORT=8000 + +# CORS +ALLOWED_ORIGINS=http://localhost:3000,http://localhost:5173 +``` + +### 4. Database Setup + +```bash +# Make sure PostgreSQL is running +# Create database and run migrations +python setup_db.py +``` + +### 5. 
Start the Server + +```bash +# Development server +python run_server.py + +# Or using uvicorn directly +uvicorn src.main:app --reload --host 0.0.0.0 --port 8000 +``` + +The API will be available at: +- Main API: http://localhost:8000 +- Documentation: http://localhost:8000/docs +- Alternative docs: http://localhost:8000/redoc + +## Database Schema + +### MeetingDigest Table + +| Column | Type | Description | +|--------|------|-------------| +| id | Integer | Primary key | +| public_id | UUID | Public identifier for sharing | +| original_transcript | Text | Raw meeting transcript | +| summary_overview | Text | Brief meeting overview | +| key_decisions | Text | JSON array of key decisions | +| action_items | Text | JSON array of action items | +| full_summary | Text | Complete AI response | +| created_at | DateTime | Creation timestamp | +| updated_at | DateTime | Last update timestamp | +| is_public | Boolean | Whether digest is shareable | + +## API Usage Examples + +### Create a Digest + +```bash +curl -X POST "http://localhost:8000/api/v1/digests/" \ + -H "Content-Type: application/json" \ + -d '{ + "transcript": "Meeting started at 9 AM. John discussed the quarterly results. We decided to increase the marketing budget by 20%. Sarah will prepare the presentation by Friday." 
+ }' +``` + +### Get All Digests + +```bash +curl "http://localhost:8000/api/v1/digests/" +``` + +### Get Shared Digest + +```bash +curl "http://localhost:8000/api/v1/digests/share/{public_id}" +``` + +## Development + +### Running Tests + +```bash +# Install test dependencies +pip install pytest pytest-asyncio httpx + +# Run tests +pytest +``` + +### Database Migrations + +```bash +# Create new migration +alembic revision --autogenerate -m "description" + +# Apply migrations +alembic upgrade head + +# Rollback migration +alembic downgrade -1 +``` + +### Code Formatting + +```bash +# Install formatting tools +pip install black isort + +# Format code +black src/ +isort src/ +``` + +## Configuration Options + +### Environment Variables + +| Variable | Default | Description | +|----------|---------|-------------| +| DATABASE_URL | postgresql://... | Complete database URL | +| GEMINI_API_KEY | "" | Google Gemini API key | +| DEBUG | True | Enable debug mode | +| HOST | 0.0.0.0 | Server host | +| PORT | 8000 | Server port | +| ALLOWED_ORIGINS | localhost:3000,localhost:5173 | CORS allowed origins | + +### Feature Flags + +- **Streaming**: Enable/disable streaming responses +- **Public Sharing**: Control digest sharing functionality +- **Debug Mode**: Enhanced logging and error details + +## Error Handling + +The API includes comprehensive error handling: + +- **400 Bad Request**: Invalid input data +- **404 Not Found**: Resource not found +- **500 Internal Server Error**: Server or AI service errors + +All errors return JSON with descriptive messages: + +```json +{ + "detail": "Error description", + "error_code": "OPTIONAL_ERROR_CODE" +} +``` + +## Performance Considerations + +- **Database Connection Pooling**: Configured for concurrent requests +- **AI API Rate Limiting**: Handles Gemini API limitations +- **Response Caching**: Future enhancement for repeated queries +- **Pagination**: List endpoints support skip/limit parameters + +## Security + +- **CORS 
Configuration**: Restricts frontend origins
+- **Input Validation**: Pydantic schemas validate all inputs
+- **SQL Injection Protection**: SQLAlchemy ORM prevents injection
+- **API Key Security**: Environment-based configuration
+
+## Deployment
+
+### Docker (Optional)
+
+```dockerfile
+FROM python:3.11-slim
+
+WORKDIR /app
+COPY requirements.txt .
+RUN pip install -r requirements.txt
+
+COPY src/ ./src/
+COPY alembic/ ./alembic/
+COPY alembic.ini .
+
+EXPOSE 8000
+CMD ["uvicorn", "src.main:app", "--host", "0.0.0.0", "--port", "8000"]
+```
+
+### Production Considerations
+
+- Use a production ASGI setup (Gunicorn with Uvicorn workers)
+- Configure proper logging
+- Set up health checks
+- Use environment-specific configurations
+- Enable database connection pooling
+- Configure reverse proxy (Nginx)
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Database Connection Failed**
+   - Check PostgreSQL is running
+   - Verify connection credentials
+   - Ensure database exists
+
+2. **Gemini API Errors**
+   - Verify API key is correct
+   - Check quota/rate limits
+   - Ensure internet connectivity
+
+3. **Import Errors**
+   - Verify virtual environment is activated
+   - Check all dependencies are installed
+   - Ensure Python path is correct
+
+### Debugging
+
+Enable debug mode for detailed error information:
+
+```env
+DEBUG=True
+```
+
+Check logs for detailed error traces and database queries.
+
+## Contributing
+
+1. Follow PEP 8 style guidelines
+2. Add type hints to all functions
+3. Write docstrings for public methods
+4. Include tests for new features
+5. Update this README for significant changes
+
+## License
+
+This project is part of the work4u interview assignment.
diff --git a/backend/alembic.ini b/backend/alembic.ini
new file mode 100644
index 0000000..6001d0e
--- /dev/null
+++ b/backend/alembic.ini
@@ -0,0 +1,109 @@
+# A generic, single database configuration. 
+ +[alembic] +# path to migration scripts +script_location = alembic + +# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s +# Uncomment the line below if you want the files to be prepended with date and time +# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s + +# sys.path path, will be prepended to sys.path if present. +# defaults to the current working directory. +prepend_sys_path = . + +# timezone to use when rendering the date within the migration file +# as well as the filename. +# If specified, requires the python-dateutil library that can be +# installed by adding `alembic[tz]` to the pip requirements +# string value is passed to dateutil.tz.gettz() +# leave blank for localtime +# timezone = + +# max length of characters to apply to the +# "slug" field +# truncate_slug_length = 40 + +# set to 'true' to run the environment during +# the 'revision' command, regardless of autogenerate +# revision_environment = false + +# set to 'true' to allow .pyc and .pyo files without +# a source .py file to be detected as revisions in the +# versions/ directory +# sourceless = false + +# version path separator; As mentioned above, this is the character used to split +# version_locations. The default within new alembic.ini files is "os", which uses +# os.pathsep. If this key is omitted entirely, it falls back to the legacy +# behavior of splitting on spaces and/or commas. 
+# Valid values for version_path_separator are: +# +# version_path_separator = : +# version_path_separator = ; +# version_path_separator = space +version_path_separator = os + +# set to 'true' to search source files recursively +# in each "version_locations" directory +# new in Alembic version 1.10 +# recursive_version_locations = false + +# the output encoding used when revision files +# are written from script.py.mako +# output_encoding = utf-8 + +sqlalchemy.url = postgresql://username:password@localhost:5432/meeting_digest_db + + +[post_write_hooks] +# post_write_hooks defines scripts or Python functions that are run +# on newly generated revision scripts. See the documentation for further +# detail and examples + +# format using "black" - use the console_scripts runner, against the "black" entrypoint +# hooks = black +# black.type = console_scripts +# black.entrypoint = black +# black.options = -l 79 REVISION_SCRIPT_FILENAME + +# lint with attempts to fix using "ruff" - use the exec runner, execute a binary +# hooks = ruff +# ruff.type = exec +# ruff.executable = %(here)s/.venv/bin/ruff +# ruff.options = --fix REVISION_SCRIPT_FILENAME + +# Logging configuration +[loggers] +keys = root,sqlalchemy,alembic + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = WARN +handlers = console +qualname = + +[logger_sqlalchemy] +level = WARN +handlers = +qualname = sqlalchemy.engine + +[logger_alembic] +level = INFO +handlers = +qualname = alembic + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(levelname)-5.5s [%(name)s] %(message)s +datefmt = %H:%M:%S diff --git a/backend/alembic/env.py b/backend/alembic/env.py new file mode 100644 index 0000000..a0fbaa7 --- /dev/null +++ b/backend/alembic/env.py @@ -0,0 +1,86 @@ +from logging.config import fileConfig +from sqlalchemy import engine_from_config +from sqlalchemy import pool +from alembic import context 
+import os +import sys + +# Add the src directory to the path so we can import our models +sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'src')) + +from src.models import Base +from src.config import settings + +# this is the Alembic Config object, which provides +# access to the values within the .ini file in use. +config = context.config + +# Interpret the config file for Python logging. +# This line sets up loggers basically. +if config.config_file_name is not None: + fileConfig(config.config_file_name) + +# add your model's MetaData object here +# for 'autogenerate' support +target_metadata = Base.metadata + +# other values from the config, defined by the needs of env.py, +# can be acquired: +# my_important_option = config.get_main_option("my_important_option") +# ... etc. + +def get_url(): + return settings.database_url + +def run_migrations_offline() -> None: + """Run migrations in 'offline' mode. + + This configures the context with just a URL + and not an Engine, though an Engine is acceptable + here as well. By skipping the Engine creation + we don't even need a DBAPI to be available. + + Calls to context.execute() here emit the given string to the + script output. + + """ + url = get_url() + context.configure( + url=url, + target_metadata=target_metadata, + literal_binds=True, + dialect_opts={"paramstyle": "named"}, + ) + + with context.begin_transaction(): + context.run_migrations() + + +def run_migrations_online() -> None: + """Run migrations in 'online' mode. + + In this scenario we need to create an Engine + and associate a connection with the context. 
+ + """ + configuration = config.get_section(config.config_ini_section) + configuration["sqlalchemy.url"] = get_url() + connectable = engine_from_config( + configuration, + prefix="sqlalchemy.", + poolclass=pool.NullPool, + ) + + with connectable.connect() as connection: + context.configure( + connection=connection, target_metadata=target_metadata + ) + + with context.begin_transaction(): + context.run_migrations() + + +if context.is_offline_mode(): + run_migrations_offline() +else: + run_migrations_online() diff --git a/backend/alembic/script.py.mako b/backend/alembic/script.py.mako new file mode 100644 index 0000000..55df286 --- /dev/null +++ b/backend/alembic/script.py.mako @@ -0,0 +1,24 @@ +"""${message} + +Revision ID: ${up_revision} +Revises: ${down_revision | comma,n} +Create Date: ${create_date} + +""" +from alembic import op +import sqlalchemy as sa +${imports if imports else ""} + +# revision identifiers, used by Alembic. +revision = ${repr(up_revision)} +down_revision = ${repr(down_revision)} +branch_labels = ${repr(branch_labels)} +depends_on = ${repr(depends_on)} + + +def upgrade() -> None: + ${upgrades if upgrades else "pass"} + + +def downgrade() -> None: + ${downgrades if downgrades else "pass"} diff --git a/backend/alembic/versions/001_initial_migration.py b/backend/alembic/versions/001_initial_migration.py new file mode 100644 index 0000000..5cd8469 --- /dev/null +++ b/backend/alembic/versions/001_initial_migration.py @@ -0,0 +1,40 @@ +"""Initial migration - Create meeting_digests table + +Revision ID: 001 +Revises: +Create Date: 2025-01-26 10:00:00.000000 + +""" +from alembic import op +import sqlalchemy as sa +from sqlalchemy.dialects import postgresql + +# revision identifiers +revision = '001' +down_revision = None +branch_labels = None +depends_on = None + +def upgrade(): + # Create meeting_digests table + op.create_table( + 'meeting_digests', + sa.Column('id', sa.Integer(), nullable=False), + sa.Column('public_id', 
postgresql.UUID(as_uuid=True), nullable=True), + sa.Column('original_transcript', sa.Text(), nullable=False), + sa.Column('summary_overview', sa.Text(), nullable=True), + sa.Column('key_decisions', sa.Text(), nullable=True), + sa.Column('action_items', sa.Text(), nullable=True), + sa.Column('full_summary', sa.Text(), nullable=True), + sa.Column('created_at', sa.DateTime(), nullable=True), + sa.Column('updated_at', sa.DateTime(), nullable=True), + sa.Column('is_public', sa.Boolean(), nullable=True), + sa.PrimaryKeyConstraint('id') + ) + op.create_index(op.f('ix_meeting_digests_id'), 'meeting_digests', ['id'], unique=False) + op.create_index(op.f('ix_meeting_digests_public_id'), 'meeting_digests', ['public_id'], unique=True) + +def downgrade(): + op.drop_index(op.f('ix_meeting_digests_public_id'), table_name='meeting_digests') + op.drop_index(op.f('ix_meeting_digests_id'), table_name='meeting_digests') + op.drop_table('meeting_digests') diff --git a/backend/requirements.txt b/backend/requirements.txt new file mode 100644 index 0000000..5f3c3c8 --- /dev/null +++ b/backend/requirements.txt @@ -0,0 +1,13 @@ +fastapi==0.104.1 +uvicorn[standard]==0.24.0 +sqlalchemy==2.0.23 +psycopg2-binary==2.9.9 +alembic==1.13.0 +pydantic==2.5.0 +pydantic-settings==2.1.0 +python-dotenv==1.0.0 +google-generativeai==0.3.2 +python-multipart==0.0.6 +fastapi-cors==0.0.6 +pytest==7.4.3 +httpx==0.25.2 diff --git a/backend/run_server.py b/backend/run_server.py new file mode 100755 index 0000000..df9b75a --- /dev/null +++ b/backend/run_server.py @@ -0,0 +1,28 @@ +#!/usr/bin/env python3 +""" +Development server script for the AI Meeting Digest backend. 
+""" + +import uvicorn +import sys +from pathlib import Path + +# Add src to path +sys.path.append(str(Path(__file__).parent / "src")) + +from src.config import settings + +if __name__ == "__main__": + print("🚀 Starting AI Meeting Digest API Server...") + print(f"📍 Host: {settings.host}:{settings.port}") + print(f"📚 Docs: http://{settings.host}:{settings.port}/docs") + print(f"🔧 Debug mode: {settings.debug}") + print("-" * 50) + + uvicorn.run( + "src.main:app", + host=settings.host, + port=settings.port, + reload=settings.debug, + log_level="info" if settings.debug else "warning" + ) diff --git a/backend/setup_db.py b/backend/setup_db.py new file mode 100755 index 0000000..976a3c5 --- /dev/null +++ b/backend/setup_db.py @@ -0,0 +1,108 @@ +#!/usr/bin/env python3 +""" +Database setup script for the AI Meeting Digest backend. +This script creates the database and runs migrations. +""" + +import os +import sys +import subprocess +from pathlib import Path + +# Add src to path +sys.path.append(str(Path(__file__).parent / "src")) + +from src.config import settings +from src.database import engine +from sqlalchemy import create_engine, text +import psycopg2 +from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT + +def create_database(): + """Create the PostgreSQL database if it doesn't exist.""" + try: + # Connect to PostgreSQL server (without specifying database) + conn = psycopg2.connect( + host=settings.db_host, + port=settings.db_port, + user=settings.db_user, + password=settings.db_password + ) + conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) + cursor = conn.cursor() + + # Check if database exists + cursor.execute(f"SELECT 1 FROM pg_database WHERE datname = '{settings.db_name}'") + exists = cursor.fetchone() + + if not exists: + cursor.execute(f"CREATE DATABASE {settings.db_name}") + print(f"✅ Database '{settings.db_name}' created successfully!") + else: + print(f"✅ Database '{settings.db_name}' already exists!") + + cursor.close() + conn.close() + + except 
Exception as e: + print(f"❌ Error creating database: {e}") + return False + + return True + +def run_migrations(): + """Run Alembic migrations.""" + try: + # Run migrations + result = subprocess.run(["alembic", "upgrade", "head"], + capture_output=True, text=True) + + if result.returncode == 0: + print("✅ Database migrations completed successfully!") + return True + else: + print(f"❌ Migration failed: {result.stderr}") + return False + + except Exception as e: + print(f"❌ Error running migrations: {e}") + return False + +def test_connection(): + """Test database connection.""" + try: + with engine.connect() as conn: + result = conn.execute(text("SELECT version()")) + version = result.fetchone()[0] + print(f"✅ Database connection successful!") + print(f"📊 PostgreSQL version: {version}") + return True + except Exception as e: + print(f"❌ Database connection failed: {e}") + return False + +def main(): + print("🚀 Setting up AI Meeting Digest Database...") + print(f"📍 Database: {settings.db_name}") + print(f"🏠 Host: {settings.db_host}:{settings.db_port}") + print(f"👤 User: {settings.db_user}") + print("-" * 50) + + # Step 1: Create database + if not create_database(): + sys.exit(1) + + # Step 2: Test connection + if not test_connection(): + sys.exit(1) + + # Step 3: Run migrations + if not run_migrations(): + sys.exit(1) + + print("-" * 50) + print("🎉 Database setup completed successfully!") + print("🚀 You can now start the FastAPI server with: uvicorn src.main:app --reload") + +if __name__ == "__main__": + main() diff --git a/backend/src/__init__.py b/backend/src/__init__.py new file mode 100644 index 0000000..fd90962 --- /dev/null +++ b/backend/src/__init__.py @@ -0,0 +1,2 @@ +# AI Meeting Digest Backend +__version__ = "1.0.0" diff --git a/backend/src/ai_service.py b/backend/src/ai_service.py new file mode 100644 index 0000000..5cce434 --- /dev/null +++ b/backend/src/ai_service.py @@ -0,0 +1,143 @@ +import google.generativeai as genai +from typing import Generator, 
Dict, Any +import json +import re +from .config import settings + +class GeminiService: + def __init__(self): + genai.configure(api_key=settings.gemini_api_key) + self.model = genai.GenerativeModel('gemini-2.0-flash') + + def get_prompt(self) -> str: + return """ + Please analyze the following meeting transcript and provide a structured summary in JSON format. + + The JSON response should have exactly this structure: + { + "overview": "A brief one-paragraph overview of the meeting", + "key_decisions": ["Decision 1", "Decision 2", "Decision 3"], + "action_items": ["Action item 1 - Assigned to Person", "Action item 2 - Assigned to Person"] + } + + Important guidelines: + - Keep the overview to 2-3 sentences maximum + - Extract only concrete decisions that were actually made + - For action items, always include who is responsible when mentioned + - If no decisions or action items are found, use empty arrays + - Ensure the response is valid JSON + + Meeting Transcript: + """ + + def generate_digest(self, transcript: str) -> Dict[str, Any]: + """Generate a digest for a meeting transcript.""" + try: + prompt = self.get_prompt() + transcript + response = self.model.generate_content(prompt) + + # Extract JSON from the response + response_text = response.text.strip() + + # Try to find JSON in the response + json_match = re.search(r'\{.*\}', response_text, re.DOTALL) + if json_match: + json_str = json_match.group() + try: + parsed_response = json.loads(json_str) + return self._validate_response(parsed_response) + except json.JSONDecodeError: + pass + + # Fallback: parse manually if JSON parsing fails + return self._parse_fallback_response(response_text) + + except Exception as e: + print(f"Error generating digest: {str(e)}") + return { + "overview": "Error processing transcript. 
Please try again.", + "key_decisions": [], + "action_items": [] + } + + def generate_digest_stream(self, transcript: str) -> Generator[str, None, None]: + """Generate a digest with streaming response.""" + try: + prompt = self.get_prompt() + transcript + response = self.model.generate_content(prompt, stream=True) + + buffer = "" + for chunk in response: + if chunk.text: + buffer += chunk.text + # Split buffer into words and yield them one by one + words = buffer.split() + if len(words) > 1: + # Keep the last word in buffer (it might be incomplete) + complete_words = words[:-1] + buffer = words[-1] + + for word in complete_words: + yield word + " " + # If this is likely the end of a sentence or chunk, yield the buffer + elif chunk.text.endswith(('.', '!', '?', '\n', '}')) and buffer.strip(): + yield buffer + buffer = "" + + # Yield any remaining content in buffer + if buffer.strip(): + yield buffer + + except Exception as e: + yield f"Error: {str(e)}" + + def _validate_response(self, response: Dict[str, Any]) -> Dict[str, Any]: + """Validate and clean the AI response.""" + return { + "overview": response.get("overview", "No overview provided"), + "key_decisions": response.get("key_decisions", []) if isinstance(response.get("key_decisions"), list) else [], + "action_items": response.get("action_items", []) if isinstance(response.get("action_items"), list) else [] + } + + def _parse_fallback_response(self, response_text: str) -> Dict[str, Any]: + """Fallback parser for when JSON parsing fails.""" + lines = response_text.split('\n') + overview = "" + key_decisions = [] + action_items = [] + + current_section = None + + for line in lines: + line = line.strip() + if not line: + continue + + # Try to identify sections + if "overview" in line.lower() or "summary" in line.lower(): + current_section = "overview" + # Extract overview from the same line if present + if ":" in line: + overview = line.split(":", 1)[1].strip() + elif "decision" in line.lower(): + current_section 
= "decisions" + elif "action" in line.lower(): + current_section = "actions" + elif line.startswith("-") or line.startswith("•") or line.startswith("*"): + # This is a bullet point + item = line[1:].strip() + if current_section == "decisions": + key_decisions.append(item) + elif current_section == "actions": + action_items.append(item) + elif current_section == "overview" and not overview: + overview = line + + return { + "overview": overview or "Meeting summary not available", + "key_decisions": key_decisions, + "action_items": action_items + } + +# Initialize the service +gemini_service = GeminiService() diff --git a/backend/src/api/__init__.py b/backend/src/api/__init__.py new file mode 100644 index 0000000..1bc6c1b --- /dev/null +++ b/backend/src/api/__init__.py @@ -0,0 +1,12 @@ +from fastapi import APIRouter +from .digests import router as digests_router + +api_router = APIRouter() + +# Include all API routes +api_router.include_router(digests_router) + +# Health check endpoint +@api_router.get("/health") +async def health_check(): + return {"status": "healthy", "message": "AI Meeting Digest API is running"} diff --git a/backend/src/api/digests.py b/backend/src/api/digests.py new file mode 100644 index 0000000..1afe223 --- /dev/null +++ b/backend/src/api/digests.py @@ -0,0 +1,150 @@ +from fastapi import APIRouter, Depends, HTTPException, status +from fastapi.responses import StreamingResponse +from sqlalchemy.orm import Session +from typing import List +import uuid +import json +from .. 
import schemas
+from ..database import get_db
+from ..services import DigestService, convert_to_digest_response, convert_to_digest_detail, convert_to_digest_list
+from ..ai_service import gemini_service
+
+router = APIRouter(prefix="/api/v1/digests", tags=["digests"])
+
+@router.options("/stream")
+async def options_stream():
+    """Handle preflight OPTIONS request for streaming endpoint."""
+    return {"message": "OK"}
+
+@router.post("/", response_model=schemas.DigestResponse, status_code=status.HTTP_201_CREATED)
+async def create_digest(
+    request: schemas.TranscriptRequest,
+    db: Session = Depends(get_db)
+):
+    """Create a new meeting digest from transcript."""
+    try:
+        service = DigestService(db)
+        digest = service.create_digest(request.transcript)
+        return convert_to_digest_response(digest)
+    except Exception as e:
+        raise HTTPException(
+            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+            detail=f"Failed to process transcript: {str(e)}"
+        )
+
+@router.post("/stream")
+async def create_digest_stream(
+    request: schemas.TranscriptRequest,
+    db: Session = Depends(get_db)
+):
+    """Create a digest with streaming response."""
+    import asyncio
+
+    async def generate():
+        try:
+            # First, get the complete response
+            ai_response = gemini_service.generate_digest(request.transcript)
+
+            # Serialize with json.dumps so quotes or newlines inside the overview
+            # cannot break the payload (manually interpolating the overview into
+            # an f-string would yield invalid JSON in those cases)
+            formatted_response = json.dumps(ai_response, indent=2)
+
+            # Stream word by word
+            words = formatted_response.split()
+            for i, word in enumerate(words):
+                yield f"data: {json.dumps({'content': word + ' ', 'is_complete': False})}\n\n"
+                await asyncio.sleep(0.05)  # Small delay for word-by-word effect
+
+            # Save to database
+            service = DigestService(db)
+            digest = service.create_digest_from_parsed_response(
+                request.transcript,
+                ai_response
+            )
+
+            yield f"data: {json.dumps({'content': '', 'is_complete': True,
'digest_id': digest.id})}\n\n" + + except Exception as e: + yield f"data: {json.dumps({'error': str(e)})}\n\n" + + return StreamingResponse( + generate(), + media_type="text/plain", + headers={ + "Cache-Control": "no-cache", + "Connection": "keep-alive", + "Access-Control-Allow-Origin": "*" + } + ) + +@router.get("/", response_model=List[schemas.DigestListResponse]) +async def get_all_digests( + skip: int = 0, + limit: int = 100, + db: Session = Depends(get_db) +): + """Get all meeting digests.""" + service = DigestService(db) + digests = service.get_all_digests(skip=skip, limit=limit) + return [convert_to_digest_list(digest) for digest in digests] + +@router.get("/{digest_id}", response_model=schemas.DigestDetailResponse) +async def get_digest(digest_id: int, db: Session = Depends(get_db)): + """Get a specific digest by ID.""" + service = DigestService(db) + digest = service.get_digest_by_id(digest_id) + + if not digest: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Digest not found" + ) + + return convert_to_digest_detail(digest) + +@router.get("/share/{public_id}", response_model=schemas.DigestDetailResponse) +async def get_shared_digest(public_id: uuid.UUID, db: Session = Depends(get_db)): + """Get a shared digest by public ID.""" + service = DigestService(db) + digest = service.get_digest_by_public_id(public_id) + + if not digest: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Shared digest not found or not public" + ) + + return convert_to_digest_detail(digest) + +@router.delete("/{digest_id}", status_code=status.HTTP_204_NO_CONTENT) +async def delete_digest(digest_id: int, db: Session = Depends(get_db)): + """Delete a digest.""" + service = DigestService(db) + success = service.delete_digest(digest_id) + + if not success: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Digest not found" + ) + +@router.patch("/{digest_id}/visibility", response_model=schemas.DigestDetailResponse) 
+async def update_digest_visibility( + digest_id: int, + is_public: bool, + db: Session = Depends(get_db) +): + """Update digest visibility for sharing.""" + service = DigestService(db) + digest = service.update_digest_visibility(digest_id, is_public) + + if not digest: + raise HTTPException( + status_code=status.HTTP_404_NOT_FOUND, + detail="Digest not found" + ) + + return convert_to_digest_detail(digest) diff --git a/backend/src/config.py b/backend/src/config.py new file mode 100644 index 0000000..99ed59c --- /dev/null +++ b/backend/src/config.py @@ -0,0 +1,39 @@ +import os +from pydantic_settings import BaseSettings, SettingsConfigDict +from typing import List + +class Settings(BaseSettings): + # Database + database_url: str = "postgresql://username:password@localhost:5432/meeting_digest_db" + db_user: str = "username" + db_password: str = "password" + db_host: str = "localhost" + db_port: int = 5432 + db_name: str = "meeting_digest_db" + + # Google Gemini API + gemini_api_key: str = "" + + # App Settings + secret_key: str = "your-secret-key-here" + debug: bool = True + host: str = "0.0.0.0" + port: int = 8000 + + # CORS - String that will be split into a list + allowed_origins: str = "http://localhost:3000,http://localhost:3001,http://localhost:5173" + + # Pydantic v2 configuration + model_config = SettingsConfigDict( + env_file=".env", + env_file_encoding="utf-8", + case_sensitive=False, + extra="ignore" + ) + + @property + def allowed_origins_list(self) -> List[str]: + """Convert the comma-separated string to a list.""" + return [origin.strip() for origin in self.allowed_origins.split(",") if origin.strip()] + +settings = Settings() diff --git a/backend/src/database.py b/backend/src/database.py new file mode 100644 index 0000000..d6b23ae --- /dev/null +++ b/backend/src/database.py @@ -0,0 +1,26 @@ +from sqlalchemy import create_engine, MetaData +from sqlalchemy.ext.declarative import declarative_base +from sqlalchemy.orm import sessionmaker +from .config 
import settings
+
+# Create SQLAlchemy engine
+engine = create_engine(
+    settings.database_url,
+    echo=settings.debug,
+    pool_pre_ping=True,
+    pool_recycle=300
+)
+
+# Create SessionLocal class
+SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+
+# Create Base class
+Base = declarative_base()
+
+# Dependency to get DB session
+def get_db():
+    db = SessionLocal()
+    try:
+        yield db
+    finally:
+        db.close() diff --git a/backend/src/example.py b/backend/src/example.py index e69de29..61ab9ee 100644 --- a/backend/src/example.py +++ b/backend/src/example.py @@ -0,0 +1,65 @@ +from fastapi import FastAPI, HTTPException
+from fastapi.middleware.cors import CORSMiddleware
+from fastapi.responses import JSONResponse
+import uvicorn
+from contextlib import asynccontextmanager
+
+from .config import settings
+from .database import engine, Base
+from .api import api_router
+
+# Create tables on startup
+@asynccontextmanager
+async def lifespan(app: FastAPI):
+    # Startup
+    Base.metadata.create_all(bind=engine)
+    yield
+    # Shutdown
+    pass
+
+# Create FastAPI app
+app = FastAPI(
+    title="AI Meeting Digest API",
+    description="A FastAPI backend for generating AI-powered meeting digests",
+    version="1.0.0",
+    docs_url="/docs",
+    redoc_url="/redoc",
+    lifespan=lifespan
+)
+
+# Add CORS middleware
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=settings.allowed_origins_list,  # use the parsed list, not the raw comma-separated string
+    allow_credentials=True,
+    allow_methods=["*"],
+    allow_headers=["*"],
+)
+
+# Include API routes
+app.include_router(api_router)
+
+# Root endpoint
+@app.get("/")
+async def root():
+    return {
+        "message": "Welcome to AI Meeting Digest API",
+        "docs": "/docs",
+        "health": "/api/v1/health"
+    }
+
+# Global exception handler
+@app.exception_handler(Exception)
+async def global_exception_handler(request, exc):
+    return JSONResponse(
+        status_code=500,
+        content={"detail": "Internal server error", "error": str(exc)}
+    )
+
+if __name__ == "__main__":
+    uvicorn.run(
+        "main:app",
host=settings.host, + port=settings.port, + reload=settings.debug + ) \ No newline at end of file diff --git a/backend/src/main.py b/backend/src/main.py new file mode 100644 index 0000000..283acbd --- /dev/null +++ b/backend/src/main.py @@ -0,0 +1,65 @@ +from fastapi import FastAPI, HTTPException +from fastapi.middleware.cors import CORSMiddleware +from fastapi.responses import JSONResponse +import uvicorn +from contextlib import asynccontextmanager + +from .config import settings +from .database import engine, Base +from .api import api_router + +# Create tables on startup +@asynccontextmanager +async def lifespan(app: FastAPI): + # Startup + Base.metadata.create_all(bind=engine) + yield + # Shutdown + pass + +# Create FastAPI app +app = FastAPI( + title="AI Meeting Digest API", + description="A FastAPI backend for generating AI-powered meeting digests", + version="1.0.0", + docs_url="/docs", + redoc_url="/redoc", + lifespan=lifespan +) + +# Add CORS middleware +app.add_middleware( + CORSMiddleware, + allow_origins=settings.allowed_origins_list, + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + +# Include API routes +app.include_router(api_router) + +# Root endpoint +@app.get("/") +async def root(): + return { + "message": "Welcome to AI Meeting Digest API", + "docs": "/docs", + "health": "/api/v1/health" + } + +# Global exception handler +@app.exception_handler(Exception) +async def global_exception_handler(request, exc): + return JSONResponse( + status_code=500, + content={"detail": "Internal server error", "error": str(exc)} + ) + +if __name__ == "__main__": + uvicorn.run( + "main:app", + host=settings.host, + port=settings.port, + reload=settings.debug + ) \ No newline at end of file diff --git a/backend/src/models.py b/backend/src/models.py new file mode 100644 index 0000000..c57c859 --- /dev/null +++ b/backend/src/models.py @@ -0,0 +1,22 @@ +from sqlalchemy import Column, Integer, String, Text, DateTime, Boolean +from 
sqlalchemy.dialects.postgresql import UUID
+from datetime import datetime
+import uuid
+from .database import Base
+
+class MeetingDigest(Base):
+    __tablename__ = "meeting_digests"
+
+    id = Column(Integer, primary_key=True, index=True)
+    public_id = Column(UUID(as_uuid=True), default=uuid.uuid4, unique=True, index=True)
+    original_transcript = Column(Text, nullable=False)
+    summary_overview = Column(Text)
+    key_decisions = Column(Text)  # JSON string of decisions list
+    action_items = Column(Text)  # JSON string of action items list
+    full_summary = Column(Text)  # Complete AI response
+    created_at = Column(DateTime, default=datetime.utcnow)
+    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
+    is_public = Column(Boolean, default=False)  # For shareable links
+
+    def __repr__(self):
+        return f"<MeetingDigest(id={self.id}, public_id={self.public_id})>" diff --git a/backend/src/schemas.py b/backend/src/schemas.py new file mode 100644 index 0000000..513bce8 --- /dev/null +++ b/backend/src/schemas.py @@ -0,0 +1,51 @@ +from pydantic import BaseModel, Field
+from typing import List, Optional
+from datetime import datetime
+import uuid
+
+class TranscriptRequest(BaseModel):
+    transcript: str = Field(..., min_length=1, description="The meeting transcript to analyze")
+
+class DigestResponse(BaseModel):
+    id: int
+    public_id: uuid.UUID
+    summary_overview: str
+    key_decisions: List[str]
+    action_items: List[str]
+    created_at: datetime
+    is_public: bool = False
+
+    class Config:
+        from_attributes = True
+
+class DigestListResponse(BaseModel):
+    id: int
+    public_id: uuid.UUID
+    summary_overview: str
+    created_at: datetime
+    is_public: bool = False
+
+    class Config:
+        from_attributes = True
+
+class DigestDetailResponse(BaseModel):
+    id: int
+    public_id: uuid.UUID
+    original_transcript: str
+    summary_overview: str
+    key_decisions: List[str]
+    action_items: List[str]
+    created_at: datetime
+    updated_at: datetime
+    is_public: bool = False
+
+    class Config:
+        from_attributes = True
+
+class
StreamResponse(BaseModel): + content: str + is_complete: bool = False + +class ErrorResponse(BaseModel): + detail: str + error_code: Optional[str] = None diff --git a/backend/src/services.py b/backend/src/services.py new file mode 100644 index 0000000..55b9180 --- /dev/null +++ b/backend/src/services.py @@ -0,0 +1,125 @@ +from sqlalchemy.orm import Session +from typing import List, Optional +import json +import uuid +from . import models, schemas +from .ai_service import gemini_service + +class DigestService: + def __init__(self, db: Session): + self.db = db + + def create_digest(self, transcript: str) -> models.MeetingDigest: + """Create a new meeting digest from transcript.""" + # Generate AI summary + ai_response = gemini_service.generate_digest(transcript) + + # Create database record + db_digest = models.MeetingDigest( + original_transcript=transcript, + summary_overview=ai_response["overview"], + key_decisions=json.dumps(ai_response["key_decisions"]), + action_items=json.dumps(ai_response["action_items"]), + full_summary=json.dumps(ai_response), + is_public=True # Enable sharing by default + ) + + self.db.add(db_digest) + self.db.commit() + self.db.refresh(db_digest) + + return db_digest + + def create_digest_from_parsed_response(self, transcript: str, ai_response: dict) -> models.MeetingDigest: + """Create a new meeting digest from already parsed AI response.""" + # Create database record + db_digest = models.MeetingDigest( + original_transcript=transcript, + summary_overview=ai_response["overview"], + key_decisions=json.dumps(ai_response["key_decisions"]), + action_items=json.dumps(ai_response["action_items"]), + full_summary=json.dumps(ai_response), + is_public=True # Enable sharing by default + ) + + self.db.add(db_digest) + self.db.commit() + self.db.refresh(db_digest) + + return db_digest + + def get_all_digests(self, skip: int = 0, limit: int = 100) -> List[models.MeetingDigest]: + """Get all meeting digests with pagination.""" + return 
self.db.query(models.MeetingDigest)\ + .order_by(models.MeetingDigest.created_at.desc())\ + .offset(skip)\ + .limit(limit)\ + .all() + + def get_digest_by_id(self, digest_id: int) -> Optional[models.MeetingDigest]: + """Get a specific digest by ID.""" + return self.db.query(models.MeetingDigest)\ + .filter(models.MeetingDigest.id == digest_id)\ + .first() + + def get_digest_by_public_id(self, public_id: uuid.UUID) -> Optional[models.MeetingDigest]: + """Get a specific digest by public ID (for sharing).""" + return self.db.query(models.MeetingDigest)\ + .filter(models.MeetingDigest.public_id == public_id)\ + .filter(models.MeetingDigest.is_public == True)\ + .first() + + def delete_digest(self, digest_id: int) -> bool: + """Delete a digest by ID.""" + digest = self.get_digest_by_id(digest_id) + if digest: + self.db.delete(digest) + self.db.commit() + return True + return False + + def update_digest_visibility(self, digest_id: int, is_public: bool) -> Optional[models.MeetingDigest]: + """Update the visibility of a digest.""" + digest = self.get_digest_by_id(digest_id) + if digest: + digest.is_public = is_public + self.db.commit() + self.db.refresh(digest) + return digest + return None + +def convert_to_digest_response(digest: models.MeetingDigest) -> schemas.DigestResponse: + """Convert database model to response schema.""" + return schemas.DigestResponse( + id=digest.id, + public_id=digest.public_id, + summary_overview=digest.summary_overview, + key_decisions=json.loads(digest.key_decisions) if digest.key_decisions else [], + action_items=json.loads(digest.action_items) if digest.action_items else [], + created_at=digest.created_at, + is_public=digest.is_public + ) + +def convert_to_digest_detail(digest: models.MeetingDigest) -> schemas.DigestDetailResponse: + """Convert database model to detailed response schema.""" + return schemas.DigestDetailResponse( + id=digest.id, + public_id=digest.public_id, + original_transcript=digest.original_transcript, + 
summary_overview=digest.summary_overview, + key_decisions=json.loads(digest.key_decisions) if digest.key_decisions else [], + action_items=json.loads(digest.action_items) if digest.action_items else [], + created_at=digest.created_at, + updated_at=digest.updated_at, + is_public=digest.is_public + ) + +def convert_to_digest_list(digest: models.MeetingDigest) -> schemas.DigestListResponse: + """Convert database model to list response schema.""" + return schemas.DigestListResponse( + id=digest.id, + public_id=digest.public_id, + summary_overview=digest.summary_overview[:200] + "..." if len(digest.summary_overview) > 200 else digest.summary_overview, + created_at=digest.created_at, + is_public=digest.is_public + ) diff --git a/backend/test_basic.py b/backend/test_basic.py new file mode 100644 index 0000000..139c7c8 --- /dev/null +++ b/backend/test_basic.py @@ -0,0 +1,65 @@ +""" +Basic tests for the AI Meeting Digest backend. +Run with: pytest test_basic.py +""" + +import pytest +import sys +from pathlib import Path + +# Add src to path +sys.path.append(str(Path(__file__).parent / "src")) + +def test_imports(): + """Test that all modules can be imported successfully.""" + try: + from src.main import app + from src.config import settings + from src.database import Base, engine + from src.models import MeetingDigest + from src.ai_service import gemini_service + print("✅ All imports successful!") + assert True + except Exception as e: + print(f"❌ Import error: {e}") + assert False, f"Import failed: {e}" + +def test_config_loading(): + """Test configuration loading.""" + from src.config import settings + + # Check that essential settings are loaded + assert settings.database_url is not None + assert settings.gemini_api_key is not None + assert settings.host is not None + assert settings.port is not None + print("✅ Configuration loaded successfully!") + +def test_database_models(): + """Test database model creation.""" + from src.models import MeetingDigest + + # Test model 
instantiation + digest = MeetingDigest( + original_transcript="Test transcript", + summary_overview="Test overview", + key_decisions='["Decision 1"]', + action_items='["Action 1"]' + ) + + assert digest.original_transcript == "Test transcript" + assert digest.summary_overview == "Test overview" + print("✅ Database models work correctly!") + +def test_ai_service_structure(): + """Test AI service structure.""" + from src.ai_service import GeminiService, gemini_service + + # Test service instantiation + assert gemini_service is not None + assert hasattr(gemini_service, 'generate_digest') + assert hasattr(gemini_service, 'generate_digest_stream') + print("✅ AI service structure is correct!") + +if __name__ == "__main__": + pytest.main([__file__, "-v"]) diff --git a/backend/test_config.py b/backend/test_config.py new file mode 100644 index 0000000..c95b4c3 --- /dev/null +++ b/backend/test_config.py @@ -0,0 +1,52 @@ +#!/usr/bin/env python3 +""" +Test script to verify that environment variables are being loaded correctly. 
+""" + +import sys +from pathlib import Path + +# Add src to path +sys.path.append(str(Path(__file__).parent / "src")) + +def test_config(): + print("🔧 Testing Configuration Loading...") + print("-" * 50) + + try: + from src.config import settings + + print(f"✅ Database URL: {settings.database_url}") + print(f"✅ Database Host: {settings.db_host}") + print(f"✅ Database Port: {settings.db_port}") + print(f"✅ Database Name: {settings.db_name}") + print(f"✅ Debug Mode: {settings.debug}") + print(f"✅ Server Host: {settings.host}") + print(f"✅ Server Port: {settings.port}") + print(f"✅ Allowed Origins: {settings.allowed_origins_list}") + + # Check if Gemini API key is set (but don't show the actual key) + if settings.gemini_api_key and settings.gemini_api_key != "your_gemini_api_key_here": + print("✅ Gemini API Key: ***CONFIGURED***") + else: + print("⚠️ Gemini API Key: NOT SET (using default)") + + print("-" * 50) + print("✅ Configuration loaded successfully!") + + # Test if .env file is being read + import os + if os.path.exists(".env"): + print("✅ .env file exists and is being read") + else: + print("⚠️ .env file not found - using defaults") + + return True + + except Exception as e: + print(f"❌ Configuration error: {e}") + return False + +if __name__ == "__main__": + success = test_config() + sys.exit(0 if success else 1) diff --git a/backend/test_endpoints.py b/backend/test_endpoints.py new file mode 100644 index 0000000..45a6871 --- /dev/null +++ b/backend/test_endpoints.py @@ -0,0 +1,73 @@ +""" +Quick test to demonstrate the backend functionality and endpoint usage. +""" + +import requests +import json + +# Sample transcript for testing +SAMPLE_TRANSCRIPT = """Meeting started at 2 PM. John discussed the quarterly budget increase. +Sarah will prepare the financial report by Friday. +We decided to hire two new developers for the mobile team. 
+Action items: Mark will review the vendor contracts by Tuesday.""" + +def test_backend_functionality(): + """Test the main backend functionality.""" + base_url = "http://localhost:8000" + + print("🚀 Testing AI Meeting Digest Backend") + print("=" * 50) + + try: + # Test health endpoint + print("1. Testing health endpoint...") + response = requests.get(f"{base_url}/api/v1/health") + if response.status_code == 200: + print("✅ Health check passed") + + # Test digest creation + print("\n2. Creating digest...") + response = requests.post( + f"{base_url}/api/v1/digests/", + json={"transcript": SAMPLE_TRANSCRIPT} + ) + + if response.status_code == 201: + digest = response.json() + print("✅ Digest created successfully!") + print(f" ID: {digest['id']}") + print(f" Public ID: {digest['public_id']}") + print(f" Overview: {digest['summary_overview']}") + + # Test both endpoint types + digest_id = digest['id'] + public_id = digest['public_id'] + + print(f"\n3. Testing endpoint access...") + print(f" Integer ID endpoint: /api/v1/digests/{digest_id}") + print(f" UUID endpoint: /api/v1/digests/share/{public_id}") + + # Test integer endpoint + response = requests.get(f"{base_url}/api/v1/digests/{digest_id}") + if response.status_code == 200: + print("✅ Integer ID endpoint works") + + # Test UUID endpoint + response = requests.get(f"{base_url}/api/v1/digests/share/{public_id}") + if response.status_code == 200: + print("✅ UUID sharing endpoint works") + + print(f"\n🎉 Backend is fully functional!") + print(f"📚 API docs: {base_url}/docs") + + else: + print(f"❌ Digest creation failed: {response.status_code}") + print(f" Response: {response.text}") + + except requests.exceptions.ConnectionError: + print("❌ Server not running. 
Start with: python run_server.py") + except Exception as e: + print(f"❌ Error: {e}") + +if __name__ == "__main__": + test_backend_functionality() diff --git a/backend/test_transcript.py b/backend/test_transcript.py new file mode 100644 index 0000000..e920505 --- /dev/null +++ b/backend/test_transcript.py @@ -0,0 +1,193 @@ +""" +Test the backend with a sample transcript using direct API calls. +""" + +import sys +from pathlib import Path +import requests +import json +import time + +# Test transcript +TRANSCRIPT = """Meeting Transcript +Meeting Title: Project Phoenix - Weekly Sync +Date: July 25, 2025 +Time: 10:00 AM +Attendees: Priya Sharma (Project Manager), Mark Chen (Lead Engineer), Sarah Jenkins (Head of Marketing), David Miller (UX/UI Designer) + +Priya Sharma (10:01:15): "Alright everyone, good morning! Thanks for joining. Hope you all had a good week. Let's kick off our weekly sync for Project Phoenix. The main goals for today are to get a status update on the beta build, review the initial marketing assets, and address the user feedback from our internal testing last week. Let's start with you, Mark. How's the engineering team looking?" + +Mark Chen (10:02:05): "Morning, Priya. Things are progressing well. We've successfully integrated the new payment API from Stripe. It's stable. However, we did run into a significant bug on the Android build. The push notification service is crashing the app on older Android versions, specifically Android 11 and 12." + +Priya Sharma (10:02:40): "Oof, that's not great. What's the impact on our launch timeline?" + +Mark Chen (10:02:55): "It's a high-priority issue. We've dedicated two developers to it full-time. Best case, we have a patch by end-of-day Tuesday. Worst case, it pushes our beta code freeze back by a full week. The core issue seems to be a deprecated library we're using. We need to refactor that module." + +Sarah Jenkins (10:03:30): "A week's delay would be a problem for us. 
We have the influencer campaign scheduled to kick off on August 15th. Any delay in the public beta link would mean we have to reschedule with them, and that's always messy." + +Priya Sharma (10:04:05): "Understood, Sarah. Okay, Mark, let's make this bug our number one priority. Can you provide a status update in the main channel by EOD Monday? We need to know if that Tuesday timeline is holding. For now, let's tentatively plan for the delay and Sarah, can you check what a one-week slip would do to the influencer contracts? Just as a contingency." + +Sarah Jenkins (10:04:45): "Will do. On a more positive note, the creative team has finalized the initial set of social media ads for the launch campaign. I've dropped a link in the chat. We focused on the 'effortless collaboration' angle we discussed. David, we'd love your team's eyes on them to ensure they're consistent with the app's UI." + +David Miller (10:05:20): "Thanks, Sarah. Just opened them... wow, these look sharp. The color palette is perfect. One minor thought: the screenshot used in the third ad shows the old dashboard layout. We updated that in the last sprint to include the new 'Quick Add' button." + +Mark Chen (10:05:55): "Oh, good catch, David. Yeah, that's the old UI. We can get you a high-res screenshot of the new dashboard by this afternoon." + +Sarah Jenkins (10:06:10): "Perfect, thanks, Mark! That's an easy fix. David, any other feedback from the internal user testing you wanted to share?" + +David Miller (10:06:30): "Yes. Overall, feedback was positive. People found the onboarding process very intuitive. The main point of friction was the file-sharing feature. Users reported that it wasn't clear if they were sharing with an individual or with the entire project team. The dialog box needs to be more explicit." + +Priya Sharma (10:07:15): "Okay, that sounds like a critical usability issue. Is that a quick fix, David?" 
+ +David Miller (10:07:30): "My team has already mocked up a new design for the sharing modal. It uses clearer iconography and text. I'll share the Figma link with you and Mark right after this call. I think it's a straightforward change for the front-end team." + +Mark Chen (10:08:00): "Okay, send it over. If it's just a front-end tweak, we can probably squeeze it into the next sprint without impacting the bug fix timeline." + +Priya Sharma (10:08:25): "Excellent. So, to recap the key decisions and actions: + +Engineering's top priority is the Android push notification bug. Mark will update us on Monday. + +Sarah will investigate the contingency plan for a one-week slip in the marketing campaign. + +Mark will provide Sarah with an updated screenshot of the new dashboard for the ad creative. + +David is sending the revised sharing modal design, and Mark's team will assess its inclusion in the next sprint. + +Priya Sharma (10:09:40): "Does that cover everything? Any other business?" + +(Silence) + +Priya Sharma (10:09:55): "Great. Let's keep the communication flowing in Slack, especially on that bug. Thanks for a productive meeting, everyone. Let's connect again next Friday, same time. Have a great weekend." + +Sarah Jenkins (10:10:10): "You too, Priya. Bye all." + +Mark Chen (10:10:12): "Thanks. Bye." + +David Miller (10:10:15): "See you." 
+ +Recording Stopped: 10:10:30 AM""" + +def test_backend_with_transcript(): + """Test the backend by starting server and making real API calls.""" + + base_url = "http://localhost:8000" + + print("🧪 Testing Backend with Sample Transcript...") + print("-" * 60) + + try: + # Test 1: Check if server is running + print("1️⃣ Testing server connection...") + response = requests.get(f"{base_url}/", timeout=5) + + if response.status_code == 200: + print("✅ Server is running!") + print(f" Response: {response.json()}") + else: + print(f"❌ Server returned status {response.status_code}") + return False + + except requests.exceptions.ConnectionError: + print("❌ Cannot connect to server. Make sure it's running on localhost:8000") + print(" Start server with: python run_server.py") + return False + except Exception as e: + print(f"❌ Error connecting to server: {e}") + return False + + try: + # Test 2: Check health endpoint + print("\n2️⃣ Testing health endpoint...") + response = requests.get(f"{base_url}/api/v1/health", timeout=5) + + if response.status_code == 200: + print("✅ Health check passed!") + print(f" Response: {response.json()}") + else: + print(f"❌ Health check failed with status {response.status_code}") + + except Exception as e: + print(f"❌ Health check error: {e}") + + try: + # Test 3: Create digest with transcript + print("\n3️⃣ Testing digest creation with sample transcript...") + print(" 📝 Sending transcript to AI service...") + + payload = {"transcript": TRANSCRIPT} + response = requests.post( + f"{base_url}/api/v1/digests/", + json=payload, + timeout=30 # AI requests can take longer + ) + + if response.status_code == 201: + print("✅ Digest created successfully!") + result = response.json() + + print("\n📊 AI-Generated Summary:") + print("-" * 40) + print(f"Overview: {result['summary_overview']}") + print(f"\nKey Decisions:") + for i, decision in enumerate(result['key_decisions'], 1): + print(f" {i}. 
{decision}") + print(f"\nAction Items:") + for i, action in enumerate(result['action_items'], 1): + print(f" {i}. {action}") + print(f"\nCreated: {result['created_at']}") + print(f"Public ID: {result['public_id']}") + print(f"Shareable: {result['is_public']}") + + # Test 4: Get all digests + print("\n4️⃣ Testing digest retrieval...") + response = requests.get(f"{base_url}/api/v1/digests/", timeout=5) + + if response.status_code == 200: + digests = response.json() + print(f"✅ Retrieved {len(digests)} digests") + + if digests: + digest_id = digests[0]['id'] + + # Test 5: Get specific digest + print(f"\n5️⃣ Testing specific digest retrieval (ID: {digest_id})...") + response = requests.get(f"{base_url}/api/v1/digests/{digest_id}", timeout=5) + + if response.status_code == 200: + print("✅ Retrieved specific digest successfully!") + detailed = response.json() + print(f" Original transcript length: {len(detailed['original_transcript'])} characters") + else: + print(f"❌ Failed to retrieve specific digest: {response.status_code}") + + # Test 6: Test shareable link + public_id = digests[0]['public_id'] + print(f"\n6️⃣ Testing shareable link (Public ID: {public_id})...") + response = requests.get(f"{base_url}/api/v1/digests/share/{public_id}", timeout=5) + + if response.status_code == 200: + print("✅ Shareable link works!") + else: + print(f"❌ Shareable link failed: {response.status_code}") + else: + print(f"❌ Failed to retrieve digests: {response.status_code}") + + elif response.status_code == 500: + print("❌ Server error - likely database or AI service issue") + print(f" Response: {response.text}") + else: + print(f"❌ Digest creation failed with status {response.status_code}") + print(f" Response: {response.text}") + + except requests.exceptions.Timeout: + print("❌ Request timed out - AI service might be slow or unavailable") + except Exception as e: + print(f"❌ Error testing digest creation: {e}") + + print("\n" + "=" * 60) + print("🎉 Backend testing completed!") + 
print("💡 Start the server with: python run_server.py") + print("📚 View API docs at: http://localhost:8000/docs") + +if __name__ == "__main__": + test_backend_with_transcript() diff --git a/backend/work4u/bin/Activate.ps1 b/backend/work4u/bin/Activate.ps1 new file mode 100644 index 0000000..2fb3852 --- /dev/null +++ b/backend/work4u/bin/Activate.ps1 @@ -0,0 +1,241 @@ +<# +.Synopsis +Activate a Python virtual environment for the current PowerShell session. + +.Description +Pushes the python executable for a virtual environment to the front of the +$Env:PATH environment variable and sets the prompt to signify that you are +in a Python virtual environment. Makes use of the command line switches as +well as the `pyvenv.cfg` file values present in the virtual environment. + +.Parameter VenvDir +Path to the directory that contains the virtual environment to activate. The +default value for this is the parent of the directory that the Activate.ps1 +script is located within. + +.Parameter Prompt +The prompt prefix to display when this virtual environment is activated. By +default, this prompt is the name of the virtual environment folder (VenvDir) +surrounded by parentheses and followed by a single space (ie. '(.venv) '). + +.Example +Activate.ps1 +Activates the Python virtual environment that contains the Activate.ps1 script. + +.Example +Activate.ps1 -Verbose +Activates the Python virtual environment that contains the Activate.ps1 script, +and shows extra information about the activation as it executes. + +.Example +Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv +Activates the Python virtual environment located in the specified location. + +.Example +Activate.ps1 -Prompt "MyPython" +Activates the Python virtual environment that contains the Activate.ps1 script, +and prefixes the current prompt with the specified string (surrounded in +parentheses) while the virtual environment is active. 
+ +.Notes +On Windows, it may be required to enable this Activate.ps1 script by setting the +execution policy for the user. You can do this by issuing the following PowerShell +command: + +PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser + +For more information on Execution Policies: +https://go.microsoft.com/fwlink/?LinkID=135170 + +#> +Param( + [Parameter(Mandatory = $false)] + [String] + $VenvDir, + [Parameter(Mandatory = $false)] + [String] + $Prompt +) + +<# Function declarations --------------------------------------------------- #> + +<# +.Synopsis +Remove all shell session elements added by the Activate script, including the +addition of the virtual environment's Python executable from the beginning of +the PATH variable. + +.Parameter NonDestructive +If present, do not remove this function from the global namespace for the +session. + +#> +function global:deactivate ([switch]$NonDestructive) { + # Revert to original values + + # The prior prompt: + if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { + Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt + Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT + } + + # The prior PYTHONHOME: + if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { + Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME + Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME + } + + # The prior PATH: + if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { + Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH + Remove-Item -Path Env:_OLD_VIRTUAL_PATH + } + + # Just remove the VIRTUAL_ENV altogether: + if (Test-Path -Path Env:VIRTUAL_ENV) { + Remove-Item -Path env:VIRTUAL_ENV + } + + # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: + if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { + Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force + } + + # Leave deactivate function in the global namespace if requested: + if (-not 
$NonDestructive) { + Remove-Item -Path function:deactivate + } +} + +<# +.Description +Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the +given folder, and returns them in a map. + +For each line in the pyvenv.cfg file, if that line can be parsed into exactly +two strings separated by `=` (with any amount of whitespace surrounding the =) +then it is considered a `key = value` line. The left hand string is the key, +the right hand is the value. + +If the value starts with a `'` or a `"` then the first and last character is +stripped from the value before being captured. + +.Parameter ConfigDir +Path to the directory that contains the `pyvenv.cfg` file. +#> +function Get-PyVenvConfig( + [String] + $ConfigDir +) { + Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" + + # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). + $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue + + # An empty map will be returned if no config file is found. + $pyvenvConfig = @{ } + + if ($pyvenvConfigPath) { + + Write-Verbose "File exists, parse `key = value` lines" + $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath + + $pyvenvConfigContent | ForEach-Object { + $keyval = $PSItem -split "\s*=\s*", 2 + if ($keyval[0] -and $keyval[1]) { + $val = $keyval[1] + + # Remove extraneous quotations around a string value. 
+ if ("'""".Contains($val.Substring(0, 1))) { + $val = $val.Substring(1, $val.Length - 2) + } + + $pyvenvConfig[$keyval[0]] = $val + Write-Verbose "Adding Key: '$($keyval[0])'='$val'" + } + } + } + return $pyvenvConfig +} + + +<# Begin Activate script --------------------------------------------------- #> + +# Determine the containing directory of this script +$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition +$VenvExecDir = Get-Item -Path $VenvExecPath + +Write-Verbose "Activation script is located in path: '$VenvExecPath'" +Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" +Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" + +# Set values required in priority: CmdLine, ConfigFile, Default +# First, get the location of the virtual environment, it might not be +# VenvExecDir if specified on the command line. +if ($VenvDir) { + Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" +} +else { + Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." + $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") + Write-Verbose "VenvDir=$VenvDir" +} + +# Next, read the `pyvenv.cfg` file to determine any required value such +# as `prompt`. +$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir + +# Next, set the prompt from the command line, or the config file, or +# just use the name of the virtual environment folder. +if ($Prompt) { + Write-Verbose "Prompt specified as argument, using '$Prompt'" +} +else { + Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" + if ($pyvenvCfg -and $pyvenvCfg['prompt']) { + Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" + $Prompt = $pyvenvCfg['prompt']; + } + else { + Write-Verbose " Setting prompt based on parent's directory's name. 
(Is the directory name passed to venv module when creating the virutal environment)" + Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" + $Prompt = Split-Path -Path $venvDir -Leaf + } +} + +Write-Verbose "Prompt = '$Prompt'" +Write-Verbose "VenvDir='$VenvDir'" + +# Deactivate any currently active virtual environment, but leave the +# deactivate function in place. +deactivate -nondestructive + +# Now set the environment variable VIRTUAL_ENV, used by many tools to determine +# that there is an activated venv. +$env:VIRTUAL_ENV = $VenvDir + +if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { + + Write-Verbose "Setting prompt to '$Prompt'" + + # Set the prompt to include the env name + # Make sure _OLD_VIRTUAL_PROMPT is global + function global:_OLD_VIRTUAL_PROMPT { "" } + Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT + New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt + + function global:prompt { + Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " + _OLD_VIRTUAL_PROMPT + } +} + +# Clear PYTHONHOME +if (Test-Path -Path Env:PYTHONHOME) { + Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME + Remove-Item -Path Env:PYTHONHOME +} + +# Add the venv to the PATH +Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH +$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff --git a/backend/work4u/bin/activate b/backend/work4u/bin/activate new file mode 100644 index 0000000..283a1b3 --- /dev/null +++ b/backend/work4u/bin/activate @@ -0,0 +1,66 @@ +# This file must be used with "source bin/activate" *from bash* +# you cannot run it directly + +deactivate () { + # reset old environment variables + if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then + PATH="${_OLD_VIRTUAL_PATH:-}" + export PATH + unset _OLD_VIRTUAL_PATH + fi + if [ -n 
"${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then + PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" + export PYTHONHOME + unset _OLD_VIRTUAL_PYTHONHOME + fi + + # This should detect bash and zsh, which have a hash command that must + # be called to get it to forget past commands. Without forgetting + # past commands the $PATH changes we made may not be respected + if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r 2> /dev/null + fi + + if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then + PS1="${_OLD_VIRTUAL_PS1:-}" + export PS1 + unset _OLD_VIRTUAL_PS1 + fi + + unset VIRTUAL_ENV + if [ ! "${1:-}" = "nondestructive" ] ; then + # Self destruct! + unset -f deactivate + fi +} + +# unset irrelevant variables +deactivate nondestructive + +VIRTUAL_ENV="/Users/william/Downloads/work4u-interview-main/backend/work4u" +export VIRTUAL_ENV + +_OLD_VIRTUAL_PATH="$PATH" +PATH="$VIRTUAL_ENV/bin:$PATH" +export PATH + +# unset PYTHONHOME if set +# this will fail if PYTHONHOME is set to the empty string (which is bad anyway) +# could use `if (set -u; : $PYTHONHOME) ;` in bash +if [ -n "${PYTHONHOME:-}" ] ; then + _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" + unset PYTHONHOME +fi + +if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then + _OLD_VIRTUAL_PS1="${PS1:-}" + PS1="(work4u) ${PS1:-}" + export PS1 +fi + +# This should detect bash and zsh, which have a hash command that must +# be called to get it to forget past commands. Without forgetting +# past commands the $PATH changes we made may not be respected +if [ -n "${BASH:-}" -o -n "${ZSH_VERSION:-}" ] ; then + hash -r 2> /dev/null +fi diff --git a/backend/work4u/bin/activate.csh b/backend/work4u/bin/activate.csh new file mode 100644 index 0000000..fd251aa --- /dev/null +++ b/backend/work4u/bin/activate.csh @@ -0,0 +1,25 @@ +# This file must be used with "source bin/activate.csh" *from csh*. +# You cannot run it directly. +# Created by Davide Di Blasi . 
+# Ported to Python 3.3 venv by Andrew Svetlov + +alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; test "\!:*" != "nondestructive" && unalias deactivate' + +# Unset irrelevant variables. +deactivate nondestructive + +setenv VIRTUAL_ENV "/Users/william/Downloads/work4u-interview-main/backend/work4u" + +set _OLD_VIRTUAL_PATH="$PATH" +setenv PATH "$VIRTUAL_ENV/bin:$PATH" + + +set _OLD_VIRTUAL_PROMPT="$prompt" + +if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then + set prompt = "(work4u) $prompt" +endif + +alias pydoc python -m pydoc + +rehash diff --git a/backend/work4u/bin/activate.fish b/backend/work4u/bin/activate.fish new file mode 100644 index 0000000..be6a6b7 --- /dev/null +++ b/backend/work4u/bin/activate.fish @@ -0,0 +1,64 @@ +# This file must be used with "source /bin/activate.fish" *from fish* +# (https://fishshell.com/); you cannot run it directly. + +function deactivate -d "Exit virtual environment and return to normal shell environment" + # reset old environment variables + if test -n "$_OLD_VIRTUAL_PATH" + set -gx PATH $_OLD_VIRTUAL_PATH + set -e _OLD_VIRTUAL_PATH + end + if test -n "$_OLD_VIRTUAL_PYTHONHOME" + set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME + set -e _OLD_VIRTUAL_PYTHONHOME + end + + if test -n "$_OLD_FISH_PROMPT_OVERRIDE" + functions -e fish_prompt + set -e _OLD_FISH_PROMPT_OVERRIDE + functions -c _old_fish_prompt fish_prompt + functions -e _old_fish_prompt + end + + set -e VIRTUAL_ENV + if test "$argv[1]" != "nondestructive" + # Self-destruct! + functions -e deactivate + end +end + +# Unset irrelevant variables. +deactivate nondestructive + +set -gx VIRTUAL_ENV "/Users/william/Downloads/work4u-interview-main/backend/work4u" + +set -gx _OLD_VIRTUAL_PATH $PATH +set -gx PATH "$VIRTUAL_ENV/bin" $PATH + +# Unset PYTHONHOME if set. 
+if set -q PYTHONHOME + set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME + set -e PYTHONHOME +end + +if test -z "$VIRTUAL_ENV_DISABLE_PROMPT" + # fish uses a function instead of an env var to generate the prompt. + + # Save the current fish_prompt function as the function _old_fish_prompt. + functions -c fish_prompt _old_fish_prompt + + # With the original prompt function renamed, we can override with our own. + function fish_prompt + # Save the return status of the last command. + set -l old_status $status + + # Output the venv prompt; color taken from the blue of the Python logo. + printf "%s%s%s" (set_color 4B8BBE) "(work4u) " (set_color normal) + + # Restore the return status of the previous command. + echo "exit $old_status" | . + # Output the original/"old" prompt. + _old_fish_prompt + end + + set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV" +end diff --git a/backend/work4u/bin/alembic b/backend/work4u/bin/alembic new file mode 100755 index 0000000..07e4be8 --- /dev/null +++ b/backend/work4u/bin/alembic @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from alembic.config import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/backend/work4u/bin/dotenv b/backend/work4u/bin/dotenv new file mode 100755 index 0000000..ed8c5a8 --- /dev/null +++ b/backend/work4u/bin/dotenv @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from dotenv.__main__ import cli +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(cli()) diff --git a/backend/work4u/bin/httpx b/backend/work4u/bin/httpx new file mode 100755 index 0000000..776b828 --- /dev/null +++ b/backend/work4u/bin/httpx @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 
+# -*- coding: utf-8 -*- +import re +import sys +from httpx import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/backend/work4u/bin/mako-render b/backend/work4u/bin/mako-render new file mode 100755 index 0000000..8df67d7 --- /dev/null +++ b/backend/work4u/bin/mako-render @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from mako.cmd import cmdline +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(cmdline()) diff --git a/backend/work4u/bin/normalizer b/backend/work4u/bin/normalizer new file mode 100755 index 0000000..ca8048c --- /dev/null +++ b/backend/work4u/bin/normalizer @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from charset_normalizer import cli +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(cli.cli_detect()) diff --git a/backend/work4u/bin/pip b/backend/work4u/bin/pip new file mode 100755 index 0000000..7c4a940 --- /dev/null +++ b/backend/work4u/bin/pip @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/backend/work4u/bin/pip3 b/backend/work4u/bin/pip3 new file mode 100755 index 0000000..7c4a940 --- /dev/null +++ b/backend/work4u/bin/pip3 @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + 
sys.exit(main()) diff --git a/backend/work4u/bin/pip3.9 b/backend/work4u/bin/pip3.9 new file mode 100755 index 0000000..7c4a940 --- /dev/null +++ b/backend/work4u/bin/pip3.9 @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pip._internal.cli.main import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/backend/work4u/bin/py.test b/backend/work4u/bin/py.test new file mode 100755 index 0000000..2dffffa --- /dev/null +++ b/backend/work4u/bin/py.test @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pytest import console_main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(console_main()) diff --git a/backend/work4u/bin/pygmentize b/backend/work4u/bin/pygmentize new file mode 100755 index 0000000..8016466 --- /dev/null +++ b/backend/work4u/bin/pygmentize @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from pygments.cmdline import main +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(main()) diff --git a/backend/work4u/bin/pyrsa-decrypt b/backend/work4u/bin/pyrsa-decrypt new file mode 100755 index 0000000..f097e7f --- /dev/null +++ b/backend/work4u/bin/pyrsa-decrypt @@ -0,0 +1,8 @@ +#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3 +# -*- coding: utf-8 -*- +import re +import sys +from rsa.cli import decrypt +if __name__ == '__main__': + sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) + sys.exit(decrypt()) diff --git a/backend/work4u/bin/pyrsa-encrypt b/backend/work4u/bin/pyrsa-encrypt new file mode 100755 index 0000000..a68172a --- /dev/null +++ 
b/backend/work4u/bin/pyrsa-encrypt
@@ -0,0 +1,8 @@
+#!/Users/william/Downloads/work4u-interview-main/backend/work4u/bin/python3
+# -*- coding: utf-8 -*-
+import re
+import sys
+from rsa.cli import encrypt
+if __name__ == '__main__':
+    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
+    sys.exit(encrypt())
[The diff also commits the rest of the virtualenv's auto-generated `bin/` console-script stubs, each identical to the one above except for the imported entry point: pyrsa-keygen (`rsa.cli.keygen`), pyrsa-priv2pub (`rsa.util.private_to_public`), pyrsa-sign (`rsa.cli.sign`), pyrsa-verify (`rsa.cli.verify`), pytest (`pytest.console_main`), tqdm (`tqdm.cli.main`), uvicorn (`uvicorn.main.main`), watchfiles (`watchfiles.cli.cli`), and websockets (`websockets.cli.main`); plus symlinks `python -> python3`, `python3 -> /Applications/Xcode.app/Contents/Developer/usr/bin/python3`, and `python3.9 -> python3`.]
diff --git a/backend/work4u/pyvenv.cfg b/backend/work4u/pyvenv.cfg
new file mode 100644
index 0000000..8e83703
--- /dev/null
+++ b/backend/work4u/pyvenv.cfg
@@ -0,0 +1,3 @@
+home = /Applications/Xcode.app/Contents/Developer/usr/bin
+include-system-site-packages = false
+version = 3.9.6
diff --git a/frontend/.gitignore b/frontend/.gitignore
new file mode 100644
index 0000000..5ef6a52
--- /dev/null
+++ b/frontend/.gitignore
@@ -0,0 +1,41 @@
+# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
+ +# dependencies +/node_modules +/.pnp +.pnp.* +.yarn/* +!.yarn/patches +!.yarn/plugins +!.yarn/releases +!.yarn/versions + +# testing +/coverage + +# next.js +/.next/ +/out/ + +# production +/build + +# misc +.DS_Store +*.pem + +# debug +npm-debug.log* +yarn-debug.log* +yarn-error.log* +.pnpm-debug.log* + +# env files (can opt-in for committing if needed) +.env* + +# vercel +.vercel + +# typescript +*.tsbuildinfo +next-env.d.ts diff --git a/frontend/README.md b/frontend/README.md new file mode 100644 index 0000000..e215bc4 --- /dev/null +++ b/frontend/README.md @@ -0,0 +1,36 @@ +This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app). + +## Getting Started + +First, run the development server: + +```bash +npm run dev +# or +yarn dev +# or +pnpm dev +# or +bun dev +``` + +Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. + +You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file. + +This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel. + +## Learn More + +To learn more about Next.js, take a look at the following resources: + +- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API. +- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial. + +You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome! + +## Deploy on Vercel + +The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js. 
+ +Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details. diff --git a/frontend/components.json b/frontend/components.json new file mode 100644 index 0000000..ffe928f --- /dev/null +++ b/frontend/components.json @@ -0,0 +1,21 @@ +{ + "$schema": "https://ui.shadcn.com/schema.json", + "style": "new-york", + "rsc": true, + "tsx": true, + "tailwind": { + "config": "", + "css": "src/app/globals.css", + "baseColor": "neutral", + "cssVariables": true, + "prefix": "" + }, + "aliases": { + "components": "@/components", + "utils": "@/lib/utils", + "ui": "@/components/ui", + "lib": "@/lib", + "hooks": "@/hooks" + }, + "iconLibrary": "lucide" +} \ No newline at end of file diff --git a/frontend/eslint.config.mjs b/frontend/eslint.config.mjs new file mode 100644 index 0000000..c85fb67 --- /dev/null +++ b/frontend/eslint.config.mjs @@ -0,0 +1,16 @@ +import { dirname } from "path"; +import { fileURLToPath } from "url"; +import { FlatCompat } from "@eslint/eslintrc"; + +const __filename = fileURLToPath(import.meta.url); +const __dirname = dirname(__filename); + +const compat = new FlatCompat({ + baseDirectory: __dirname, +}); + +const eslintConfig = [ + ...compat.extends("next/core-web-vitals", "next/typescript"), +]; + +export default eslintConfig; diff --git a/frontend/next.config.ts b/frontend/next.config.ts new file mode 100644 index 0000000..e9ffa30 --- /dev/null +++ b/frontend/next.config.ts @@ -0,0 +1,7 @@ +import type { NextConfig } from "next"; + +const nextConfig: NextConfig = { + /* config options here */ +}; + +export default nextConfig; diff --git a/frontend/package.json b/frontend/package.json new file mode 100644 index 0000000..25694a1 --- /dev/null +++ b/frontend/package.json @@ -0,0 +1,37 @@ +{ + "name": "frontend", + "version": "0.1.0", + "private": true, + "scripts": { + "dev": "next dev --turbopack", + "build": "next build", + "start": "next start", + "lint": "next lint" + }, + 
"dependencies": { + "@radix-ui/react-scroll-area": "^1.2.9", + "@radix-ui/react-separator": "^1.1.7", + "@radix-ui/react-slot": "^1.2.3", + "class-variance-authority": "^0.7.1", + "clsx": "^2.1.1", + "lucide-react": "^0.526.0", + "next": "15.4.4", + "next-themes": "^0.4.6", + "react": "19.1.0", + "react-dom": "19.1.0", + "sonner": "^2.0.6", + "tailwind-merge": "^3.3.1" + }, + "devDependencies": { + "@eslint/eslintrc": "^3", + "@tailwindcss/postcss": "^4", + "@types/node": "^20", + "@types/react": "^19", + "@types/react-dom": "^19", + "eslint": "^9", + "eslint-config-next": "15.4.4", + "tailwindcss": "^4", + "tw-animate-css": "^1.3.6", + "typescript": "^5" + } +} diff --git a/frontend/postcss.config.mjs b/frontend/postcss.config.mjs new file mode 100644 index 0000000..c7bcb4b --- /dev/null +++ b/frontend/postcss.config.mjs @@ -0,0 +1,5 @@ +const config = { + plugins: ["@tailwindcss/postcss"], +}; + +export default config; diff --git a/frontend/src/app/digest/[id]/page.tsx b/frontend/src/app/digest/[id]/page.tsx new file mode 100644 index 0000000..95b5697 --- /dev/null +++ b/frontend/src/app/digest/[id]/page.tsx @@ -0,0 +1,293 @@ +'use client'; + +import { useState, useEffect } from 'react'; +import { useParams, useRouter } from 'next/navigation'; +import Link from 'next/link'; +import { Button } from '@/components/ui/button'; +import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; +import { Badge } from '@/components/ui/badge'; +import { Separator } from '@/components/ui/separator'; +import { ScrollArea } from '@/components/ui/scroll-area'; +import { Alert, AlertDescription } from '@/components/ui/alert'; +import { apiClient, DigestDetailResponse } from '@/services/api'; +import { toast } from 'sonner'; + +export default function DigestPage() { + const params = useParams(); + const router = useRouter(); + const [digest, setDigest] = useState(null); + const [isLoading, setIsLoading] = useState(true); + const [error, 
setError] = useState(null); + const [showTranscript, setShowTranscript] = useState(false); + + const digestId = params.id as string; + + useEffect(() => { + if (digestId) { + loadDigest(); + } + }, [digestId]); + + const loadDigest = async () => { + try { + setIsLoading(true); + setError(null); + + // Try to parse as number first (internal ID), then try as UUID (public ID) + const numericId = parseInt(digestId); + let result: DigestDetailResponse; + + if (!isNaN(numericId)) { + result = await apiClient.getDigest(numericId); + } else { + result = await apiClient.getSharedDigest(digestId); + } + + setDigest(result); + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'Failed to load digest'; + setError(errorMessage); + toast.error(errorMessage); + } finally { + setIsLoading(false); + } + }; + + const copyShareLink = async () => { + if (!digest) return; + + try { + if (!digest.is_public) { + await apiClient.updateDigestVisibility(digest.id, true); + setDigest({ ...digest, is_public: true }); + } + + const shareUrl = `${window.location.origin}/digest/share/${digest.public_id}`; + await navigator.clipboard.writeText(shareUrl); + toast.success('Share link copied to clipboard!'); + } catch (err) { + toast.error('Failed to copy share link'); + } + }; + + const toggleVisibility = async () => { + if (!digest) return; + + try { + const newVisibility = !digest.is_public; + const updatedDigest = await apiClient.updateDigestVisibility(digest.id, newVisibility); + setDigest(updatedDigest); + toast.success(`Digest is now ${newVisibility ? 
'public' : 'private'}`); + } catch (err) { + toast.error('Failed to update visibility'); + } + }; + + const deleteDigest = async () => { + if (!digest || !confirm('Are you sure you want to delete this digest?')) { + return; + } + + try { + await apiClient.deleteDigest(digest.id); + toast.success('Digest deleted successfully'); + router.push('/digests'); + } catch (err) { + toast.error('Failed to delete digest'); + } + }; + + if (isLoading) { + return ( +
+ {/* [JSX markup lost in extraction] Render: a loading spinner with "Loading digest..."; an error alert showing the error message (or "Digest not found") with a back link; a header "Meeting Digest #{digest.id}" with 📅 created date/time, a 🔄 updated date when updated_at differs from created_at, and Share, visibility-toggle, and Delete buttons; a "Meeting Summary" card with a "Public"/"Private" badge and sections "📝 Summary Overview" (summary_overview), "🎯 Key Decisions" (numbered list, or "No key decisions identified in this meeting."), and "✅ Action Items" (list, or "No action items identified in this meeting."); and a collapsible "Original Transcript" panel shown when showTranscript is true. */}
+ ); +} diff --git a/frontend/src/app/digest/share/[publicId]/page.tsx b/frontend/src/app/digest/share/[publicId]/page.tsx new file mode 100644 index 0000000..a312e27 --- /dev/null +++ b/frontend/src/app/digest/share/[publicId]/page.tsx @@ -0,0 +1,253 @@ +'use client'; + +import { useState, useEffect } from 'react'; +import { useParams } from 'next/navigation'; +import Link from 'next/link'; +import { Button } from '@/components/ui/button'; +import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; +import { Badge } from '@/components/ui/badge'; +import { Separator } from '@/components/ui/separator'; +import { ScrollArea } from '@/components/ui/scroll-area'; +import { Alert, AlertDescription } from '@/components/ui/alert'; +import { apiClient, DigestDetailResponse } from '@/services/api'; +import { toast } from 'sonner'; + +export default function SharedDigestPage() { + const params = useParams(); + const [digest, setDigest] = useState(null); + const [isLoading, setIsLoading] = useState(true); + const [error, setError] = useState(null); + const [showTranscript, setShowTranscript] = useState(false); + + const publicId = params.publicId as string; + + useEffect(() => { + if (publicId) { + loadDigest(); + } + }, [publicId]); + + const loadDigest = async () => { + try { + setIsLoading(true); + setError(null); + + const result = await apiClient.getSharedDigest(publicId); + setDigest(result); + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'Failed to load shared digest'; + setError(errorMessage); + } finally { + setIsLoading(false); + } + }; + + const copyShareLink = async () => { + if (!digest) return; + + try { + const shareUrl = `${window.location.origin}/digest/share/${digest.public_id}`; + await navigator.clipboard.writeText(shareUrl); + toast.success('Share link copied to clipboard!'); + } catch (err) { + toast.error('Failed to copy share link'); + } + }; + + if (isLoading) { + return ( +
+ {/* [JSX markup lost in extraction] Render: a loading spinner with "Loading shared digest..."; an error alert ("Shared digest not found or is no longer public") with a home link; a header with the 🤖 logo, "Shared Meeting Digest", the creation date/time, and a copy-share-link button; a "Meeting Summary" card with a "Public" badge and sections "📝 Summary Overview", "🎯 Key Decisions" (numbered, or "No key decisions identified in this meeting."), and "✅ Action Items" (✓-marked, or "No action items identified in this meeting."); a collapsible "Original Transcript" panel gated by showTranscript; and a footer "Powered by AI Meeting Digest" with a call-to-action link. */}
+ ); +} diff --git a/frontend/src/app/digests/page.tsx b/frontend/src/app/digests/page.tsx new file mode 100644 index 0000000..d986e7e --- /dev/null +++ b/frontend/src/app/digests/page.tsx @@ -0,0 +1,212 @@ +'use client'; + +import { useState, useEffect } from 'react'; +import Link from 'next/link'; +import { Button } from '@/components/ui/button'; +import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; +import { Badge } from '@/components/ui/badge'; +import { ScrollArea } from '@/components/ui/scroll-area'; +import { Alert, AlertDescription } from '@/components/ui/alert'; +import { apiClient, DigestListResponse } from '@/services/api'; +import { toast } from 'sonner'; + +export default function DigestsPage() { + const [digests, setDigests] = useState([]); + const [isLoading, setIsLoading] = useState(true); + const [error, setError] = useState(null); + + useEffect(() => { + loadDigests(); + }, []); + + const loadDigests = async () => { + try { + setIsLoading(true); + setError(null); + const result = await apiClient.getAllDigests(); + setDigests(result); + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'Failed to load digests'; + setError(errorMessage); + toast.error(errorMessage); + } finally { + setIsLoading(false); + } + }; + + const copyShareLink = async (digest: DigestListResponse) => { + try { + if (!digest.is_public) { + await apiClient.updateDigestVisibility(digest.id, true); + // Update local state + setDigests(prevDigests => + prevDigests.map(d => + d.id === digest.id ? 
{ ...d, is_public: true } : d + ) + ); + } + + const shareUrl = `${window.location.origin}/digest/share/${digest.public_id}`; + await navigator.clipboard.writeText(shareUrl); + toast.success('Share link copied to clipboard!'); + } catch (err) { + toast.error('Failed to copy share link'); + } + }; + + const deleteDigest = async (id: number) => { + if (!confirm('Are you sure you want to delete this digest?')) { + return; + } + + try { + await apiClient.deleteDigest(id); + setDigests(prevDigests => prevDigests.filter(d => d.id !== id)); + toast.success('Digest deleted successfully'); + } catch (err) { + toast.error('Failed to delete digest'); + } + }; + + if (isLoading) { + return ( +
+ {/* [JSX markup lost in extraction] Render: a loading spinner with "Loading digests..."; a header "Meeting Digests" with the tagline "Your past meeting summaries and insights" and a new-digest link; an error alert when error is set; an empty state ("📝", "No digests yet", "Start by creating your first meeting digest") with a create link; otherwise one card per digest showing "#{digest.id}", "Meeting Digest", 📅 created date/time, a "🌐 Public"/"🔒 Private" badge, a line-clamped summary_overview, and view/share/delete actions; plus a Refresh button at the bottom. */}
+ ); +} diff --git a/frontend/src/app/favicon.ico b/frontend/src/app/favicon.ico new file mode 100644 index 0000000..718d6fe Binary files /dev/null and b/frontend/src/app/favicon.ico differ diff --git a/frontend/src/app/globals.css b/frontend/src/app/globals.css new file mode 100644 index 0000000..a39919a --- /dev/null +++ b/frontend/src/app/globals.css @@ -0,0 +1,159 @@ +@import "tailwindcss"; +@import "tw-animate-css"; + +@custom-variant dark (&:is(.dark *)); + +/* Custom utility classes */ +.line-clamp-3 { + display: -webkit-box; + -webkit-box-orient: vertical; + -webkit-line-clamp: 3; + overflow: hidden; +} + +/* Smooth streaming animation */ +.streaming-text { + transform: translateZ(0); + backface-visibility: hidden; + perspective: 1000px; +} + +/* Custom scrollbar */ +.scrollbar-thin { + scrollbar-width: thin; +} + +.scrollbar-thumb-gray-300::-webkit-scrollbar { + width: 6px; +} + +.scrollbar-thumb-gray-300::-webkit-scrollbar-track { + background: transparent; +} + +.scrollbar-thumb-gray-300::-webkit-scrollbar-thumb { + background: #d1d5db; + border-radius: 3px; +} + +.dark .scrollbar-thumb-gray-600::-webkit-scrollbar-thumb { + background: #4b5563; +} + +@theme inline { + --color-background: var(--background); + --color-foreground: var(--foreground); + --font-sans: var(--font-geist-sans); + --font-mono: var(--font-geist-mono); + --color-sidebar-ring: var(--sidebar-ring); + --color-sidebar-border: var(--sidebar-border); + --color-sidebar-accent-foreground: var(--sidebar-accent-foreground); + --color-sidebar-accent: var(--sidebar-accent); + --color-sidebar-primary-foreground: var(--sidebar-primary-foreground); + --color-sidebar-primary: var(--sidebar-primary); + --color-sidebar-foreground: var(--sidebar-foreground); + --color-sidebar: var(--sidebar); + --color-chart-5: var(--chart-5); + --color-chart-4: var(--chart-4); + --color-chart-3: var(--chart-3); + --color-chart-2: var(--chart-2); + --color-chart-1: var(--chart-1); + --color-ring: var(--ring); + 
--color-input: var(--input); + --color-border: var(--border); + --color-destructive: var(--destructive); + --color-accent-foreground: var(--accent-foreground); + --color-accent: var(--accent); + --color-muted-foreground: var(--muted-foreground); + --color-muted: var(--muted); + --color-secondary-foreground: var(--secondary-foreground); + --color-secondary: var(--secondary); + --color-primary-foreground: var(--primary-foreground); + --color-primary: var(--primary); + --color-popover-foreground: var(--popover-foreground); + --color-popover: var(--popover); + --color-card-foreground: var(--card-foreground); + --color-card: var(--card); + --radius-sm: calc(var(--radius) - 4px); + --radius-md: calc(var(--radius) - 2px); + --radius-lg: var(--radius); + --radius-xl: calc(var(--radius) + 4px); +} + +:root { + --radius: 0.625rem; + --background: oklch(1 0 0); + --foreground: oklch(0.145 0 0); + --card: oklch(1 0 0); + --card-foreground: oklch(0.145 0 0); + --popover: oklch(1 0 0); + --popover-foreground: oklch(0.145 0 0); + --primary: oklch(0.205 0 0); + --primary-foreground: oklch(0.985 0 0); + --secondary: oklch(0.97 0 0); + --secondary-foreground: oklch(0.205 0 0); + --muted: oklch(0.97 0 0); + --muted-foreground: oklch(0.556 0 0); + --accent: oklch(0.97 0 0); + --accent-foreground: oklch(0.205 0 0); + --destructive: oklch(0.577 0.245 27.325); + --border: oklch(0.922 0 0); + --input: oklch(0.922 0 0); + --ring: oklch(0.708 0 0); + --chart-1: oklch(0.646 0.222 41.116); + --chart-2: oklch(0.6 0.118 184.704); + --chart-3: oklch(0.398 0.07 227.392); + --chart-4: oklch(0.828 0.189 84.429); + --chart-5: oklch(0.769 0.188 70.08); + --sidebar: oklch(0.985 0 0); + --sidebar-foreground: oklch(0.145 0 0); + --sidebar-primary: oklch(0.205 0 0); + --sidebar-primary-foreground: oklch(0.985 0 0); + --sidebar-accent: oklch(0.97 0 0); + --sidebar-accent-foreground: oklch(0.205 0 0); + --sidebar-border: oklch(0.922 0 0); + --sidebar-ring: oklch(0.708 0 0); +} + +.dark { + --background: 
oklch(0.145 0 0); + --foreground: oklch(0.985 0 0); + --card: oklch(0.205 0 0); + --card-foreground: oklch(0.985 0 0); + --popover: oklch(0.205 0 0); + --popover-foreground: oklch(0.985 0 0); + --primary: oklch(0.922 0 0); + --primary-foreground: oklch(0.205 0 0); + --secondary: oklch(0.269 0 0); + --secondary-foreground: oklch(0.985 0 0); + --muted: oklch(0.269 0 0); + --muted-foreground: oklch(0.708 0 0); + --accent: oklch(0.269 0 0); + --accent-foreground: oklch(0.985 0 0); + --destructive: oklch(0.704 0.191 22.216); + --border: oklch(1 0 0 / 10%); + --input: oklch(1 0 0 / 15%); + --ring: oklch(0.556 0 0); + --chart-1: oklch(0.488 0.243 264.376); + --chart-2: oklch(0.696 0.17 162.48); + --chart-3: oklch(0.769 0.188 70.08); + --chart-4: oklch(0.627 0.265 303.9); + --chart-5: oklch(0.645 0.246 16.439); + --sidebar: oklch(0.205 0 0); + --sidebar-foreground: oklch(0.985 0 0); + --sidebar-primary: oklch(0.488 0.243 264.376); + --sidebar-primary-foreground: oklch(0.985 0 0); + --sidebar-accent: oklch(0.269 0 0); + --sidebar-accent-foreground: oklch(0.985 0 0); + --sidebar-border: oklch(1 0 0 / 10%); + --sidebar-ring: oklch(0.556 0 0); +} + +@layer base { + * { + @apply border-border outline-ring/50; + } + body { + @apply bg-background text-foreground; + } +} diff --git a/frontend/src/app/layout.tsx b/frontend/src/app/layout.tsx new file mode 100644 index 0000000..4c52431 --- /dev/null +++ b/frontend/src/app/layout.tsx @@ -0,0 +1,36 @@ +import type { Metadata } from "next"; +import { Geist, Geist_Mono } from "next/font/google"; +import { Toaster } from "@/components/ui/sonner"; +import "./globals.css"; + +const geistSans = Geist({ + variable: "--font-geist-sans", + subsets: ["latin"], +}); + +const geistMono = Geist_Mono({ + variable: "--font-geist-mono", + subsets: ["latin"], +}); + +export const metadata: Metadata = { + title: "AI Meeting Digest", + description: "Transform your meeting transcripts into structured summaries with AI", +}; + +export default function 
RootLayout({ + children, +}: Readonly<{ + children: React.ReactNode; +}>) { + return ( + + + {children} + + + + ); +} diff --git a/frontend/src/app/page.tsx b/frontend/src/app/page.tsx new file mode 100644 index 0000000..d2ae5e6 --- /dev/null +++ b/frontend/src/app/page.tsx @@ -0,0 +1,109 @@ +'use client'; + +import { useState } from 'react'; +import Link from 'next/link'; + +import { Button } from '@/components/ui/button'; +import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; +import { DigestCreator } from '@/components/DigestCreator'; + +export default function Home() { + return ( +
+ {/* [JSX markup lost in extraction] Render: a header with the 🤖 logo, title "AI Meeting Digest", tagline "Transform your meeting transcripts into structured summaries with AI-powered insights, key decisions, and action items", and a link to past digests; a main "Create New Digest" card ("Paste your meeting transcript below and get an AI-generated summary with key decisions and action items") wrapping the DigestCreator component; feature cards "📝 Smart Summaries", "✅ Action Items", and "🔗 Share Easily" with their descriptions; and a footer "Powered by Google Gemini AI • Secure and Private". */}
+ ); +} diff --git a/frontend/src/components/DigestCreator.tsx b/frontend/src/components/DigestCreator.tsx new file mode 100644 index 0000000..3d71b98 --- /dev/null +++ b/frontend/src/components/DigestCreator.tsx @@ -0,0 +1,435 @@ +'use client'; + +import { useState, useRef, useEffect } from 'react'; +import { Button } from '@/components/ui/button'; +import { Textarea } from '@/components/ui/textarea'; +import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card'; +import { Badge } from '@/components/ui/badge'; +import { Separator } from '@/components/ui/separator'; +import { Alert, AlertDescription } from '@/components/ui/alert'; +import { apiClient, DigestResponse } from '@/services/api'; +import { toast } from 'sonner'; + +interface StreamChunk { + content: string; + is_complete: boolean; + digest_id?: number; + error?: string; +} + +export function DigestCreator() { + const [transcript, setTranscript] = useState(''); + const [isLoading, setIsLoading] = useState(false); + const [digest, setDigest] = useState(null); + const [streamingContent, setStreamingContent] = useState(''); + const [displayedContent, setDisplayedContent] = useState(''); + const [error, setError] = useState(null); + const [useStreaming, setUseStreaming] = useState(true); + const abortControllerRef = useRef(null); + const streamingIntervalRef = useRef(null); + + // Word-by-word animation effect + useEffect(() => { + if (streamingContent && streamingContent !== displayedContent) { + // Clear existing interval + if (streamingIntervalRef.current) { + clearInterval(streamingIntervalRef.current); + } + + const words = streamingContent.split(' '); + const currentWords = displayedContent.split(' '); + let currentIndex = currentWords.length; + + if (currentIndex < words.length) { + streamingIntervalRef.current = setInterval(() => { + if (currentIndex < words.length) { + const newContent = words.slice(0, currentIndex + 1).join(' '); + setDisplayedContent(newContent); 
+ currentIndex++; + } else { + if (streamingIntervalRef.current) { + clearInterval(streamingIntervalRef.current); + streamingIntervalRef.current = null; + } + } + }, 40); // Slightly slower for smoother animation + } + } + + return () => { + if (streamingIntervalRef.current) { + clearInterval(streamingIntervalRef.current); + } + }; + }, [streamingContent, displayedContent]); + + const handleSubmit = async (e: React.FormEvent) => { + e.preventDefault(); + if (!transcript.trim()) { + toast.error('Please enter a meeting transcript'); + return; + } + + setIsLoading(true); + setError(null); + setDigest(null); + setStreamingContent(''); + setDisplayedContent(''); + + try { + if (useStreaming) { + await handleStreamingSubmit(); + } else { + await handleRegularSubmit(); + } + } catch (err) { + const errorMessage = err instanceof Error ? err.message : 'An unexpected error occurred'; + setError(errorMessage); + toast.error(errorMessage); + } finally { + setIsLoading(false); + } + }; + + const handleRegularSubmit = async () => { + const result = await apiClient.createDigest(transcript); + setDigest(result); + toast.success('Digest created successfully!'); + }; + + const handleStreamingSubmit = async () => { + abortControllerRef.current = new AbortController(); + + try { + const stream = await apiClient.createDigestStream(transcript); + const reader = stream.getReader(); + const decoder = new TextDecoder(); + + while (true) { + const { done, value } = await reader.read(); + + if (done) break; + + const chunk = decoder.decode(value); + const lines = chunk.split('\n').filter(line => line.trim()); + + for (const line of lines) { + if (line.startsWith('data: ')) { + try { + const data: StreamChunk = JSON.parse(line.slice(6)); + + if (data.error) { + throw new Error(data.error); + } + + if (data.content) { + setStreamingContent(prev => prev + data.content); + } + + if (data.is_complete && data.digest_id) { + // Fetch the complete digest + const completeDigest = await 
apiClient.getDigest(data.digest_id); + setDigest(completeDigest); + setStreamingContent(''); + toast.success('Digest created successfully!'); + break; + } + } catch (e) { + console.error('Error parsing stream data:', e); + } + } + } + } + } catch (err) { + if (err instanceof Error && err.name === 'AbortError') { + toast.info('Stream cancelled'); + } else { + throw err; + } + } + }; + + const handleCancel = () => { + if (abortControllerRef.current) { + abortControllerRef.current.abort(); + } + if (streamingIntervalRef.current) { + clearInterval(streamingIntervalRef.current); + streamingIntervalRef.current = null; + } + setIsLoading(false); + setStreamingContent(''); + setDisplayedContent(''); + toast.info('Operation cancelled'); + }; + + const copyShareLink = async () => { + if (!digest) return; + + try { + // First make the digest public if it's not already + if (!digest.is_public) { + await apiClient.updateDigestVisibility(digest.id, true); + setDigest({ ...digest, is_public: true }); + } + + const shareUrl = `${window.location.origin}/digest/share/${digest.public_id}`; + await navigator.clipboard.writeText(shareUrl); + toast.success('Share link copied to clipboard!'); + } catch (err) { + toast.error('Failed to copy share link'); + } + }; + + const sampleTranscript = `Meeting Notes - Project Alpha Planning +Date: July 26, 2025 +Attendees: Sarah Johnson (PM), Mike Chen (Engineering), Lisa Park (Design), Tom Wilson (QA) + +Sarah: Good morning everyone. Let's start with our project roadmap. Mike, can you give us an update on the backend API development? + +Mike: Sure. We've completed about 70% of the core API endpoints. The user authentication system is done, and we're working on the data processing modules. I estimate we'll be finished by August 15th. + +Lisa: That sounds good. On the design side, we've finalized the UI mockups and created the design system. Sarah, I'll need your approval on the color scheme by Friday so we can start implementation. 
+ +Sarah: Absolutely. I'll review them tomorrow and get back to you. Tom, what's the testing plan looking like? + +Tom: I've drafted a comprehensive test plan covering unit tests, integration tests, and user acceptance testing. We should start testing as soon as the first beta build is ready. I recommend we allocate 2 weeks for the full testing cycle. + +Sarah: Perfect. Let's make sure we have enough buffer time. Our client deadline is September 30th, so we need to account for any unexpected issues. + +Mike: Speaking of issues, we might need to revisit the database schema. The current design might not scale well with the expected user load. + +Sarah: Good point. Mike, can you prepare a proposal for database optimization by next Wednesday? We'll discuss it in our next planning meeting. + +Lisa: Also, I think we should consider doing user testing sessions before the final release. It would help us catch usability issues early. + +Tom: I agree. I can coordinate with the UX research team to set up testing sessions in early September. + +Sarah: Excellent. Let me summarize our action items: Mike will finish the API by August 15th and prepare a database optimization proposal by next Wednesday. Lisa will get design approval from me by Friday. Tom will coordinate user testing sessions for early September. Our next check-in is scheduled for Friday at 2 PM. + +Mike: Sounds good. One more thing - we should consider implementing automated deployment to streamline our release process. + +Sarah: Great suggestion. Mike, can you research deployment automation tools and present options in our Friday meeting? + +Mike: Will do. + +Sarah: Perfect. Thanks everyone. Meeting adjourned.`; + + return ( +
+ {/* [JSX markup lost in extraction; the diff is truncated mid-file here] Start of the DigestCreator render — presumably, given the state above: the transcript textarea with a sample-transcript loader, the useStreaming toggle, submit/cancel controls, and the streaming/final digest display. */}
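One note on the streaming code in `DigestCreator.tsx` above: it splits each decoded network chunk on newlines independently, so an SSE `data:` line whose JSON straddles two reads can fail to parse (the `catch` around `JSON.parse` silently drops it). A buffered variant avoids this by holding back the trailing partial line until the next chunk arrives. This is a sketch under that assumption; `createSSEParser` is a hypothetical helper, not part of the diff:

```typescript
// Sketch of a buffered SSE line parser (hypothetical helper, not in the diff).
// It accumulates raw chunks and only parses lines known to be complete, so a
// `data: {...}` event split across two network reads is neither lost nor
// mis-parsed.
function createSSEParser(onEvent: (data: unknown) => void): (chunk: string) => void {
  let buffer = '';
  return (chunk: string) => {
    buffer += chunk;
    const lines = buffer.split('\n');
    // The last element is either '' (chunk ended on a newline) or an
    // incomplete line; keep it in the buffer for the next chunk.
    buffer = lines.pop() ?? '';
    for (const line of lines) {
      if (line.startsWith('data: ')) {
        onEvent(JSON.parse(line.slice(6)));
      }
    }
  };
}
```

In the component, the `while` loop over `reader.read()` would call the returned function with each `decoder.decode(value)` instead of splitting the chunk directly.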