diff --git a/README.md b/README.md index bac2352..fbe4a6a 100755 --- a/README.md +++ b/README.md @@ -2,23 +2,21 @@ HyperFleet API - Simple REST API for cluster lifecycle management. Provides CRUD operations for clusters and status sub-resources. Pure data layer with PostgreSQL integration - no business logic or event creation. Stateless design enables horizontal scaling. -![HyperFleet](rhtap-hyperfleet_sm.png) - ## Architecture ### Technology Stack -- **Language**: Go 1.24 or higher -- **API Definition**: TypeSpec → OpenAPI 3.0.3 -- **Code Generation**: openapi-generator-cli v7.16.0 +- **Language**: Go 1.24+ +- **API Definition**: OpenAPI 3.0 +- **Code Generation**: openapi-generator-cli - **Database**: PostgreSQL with GORM ORM - **Container Runtime**: Podman - **Testing**: Gomega + Resty ### Core Features -* TypeSpec-based API specification -* OpenAPI 3.0 code generation workflow +* OpenAPI 3.0 specification +* Automated Go code generation from OpenAPI * Cluster and NodePool lifecycle management * Adapter-based status reporting with Kubernetes-style conditions * Pagination and search capabilities @@ -26,676 +24,144 @@ HyperFleet API - Simple REST API for cluster lifecycle management. 
Provides CRUD * Database migrations with GORM * Embedded OpenAPI specification using `//go:embed` -## Project Structure +### Project Structure -``` +```text hyperfleet-api/ ├── cmd/hyperfleet-api/ # Application entry point ├── pkg/ │ ├── api/ # API models and handlers -│ │ ├── openapi/ # Generated Go models from OpenAPI -│ │ │ ├── api/ # Embedded OpenAPI specification -│ │ │ └── model_*.go # Generated model structs -│ │ └── openapi_embed.go # Go embed directive │ ├── dao/ # Data access layer │ ├── db/ # Database setup and migrations │ ├── handlers/ # HTTP request handlers -│ ├── services/ # Business logic -│ └── server/ # Server configuration +│ └── services/ # Business logic ├── openapi/ # API specification source -│ └── openapi.yaml # TypeSpec-generated OpenAPI spec (32KB) -├── test/ -│ ├── integration/ # Integration tests -│ └── factories/ # Test data factories +├── test/ # Integration tests and factories +├── docs/ # Detailed documentation └── Makefile # Build automation ``` -## API Resources - -### Cluster Management - -Cluster resources represent Kubernetes clusters managed across different cloud providers. - -**Endpoints:** -``` -GET /api/hyperfleet/v1/clusters -POST /api/hyperfleet/v1/clusters -GET /api/hyperfleet/v1/clusters/{cluster_id} -GET /api/hyperfleet/v1/clusters/{cluster_id}/statuses -POST /api/hyperfleet/v1/clusters/{cluster_id}/statuses -``` - -**Data Model:** -```json -{ - "kind": "Cluster", - "id": "string", - "name": "string", - "generation": 1, - "spec": { - "region": "us-west-2", - "version": "4.15", - "nodes": 3 - }, - "labels": { - "env": "production" - }, - "status": { - "phase": "Ready", - "observed_generation": 1, - "adapters": [...] 
- } -} -``` - -**Status Phases:** -- `NotReady` - Cluster is being provisioned or has failing conditions -- `Ready` - All adapter conditions report success -- `Failed` - Cluster provisioning or operation failed - -### NodePool Management - -NodePool resources represent groups of compute nodes within a cluster. - -**Endpoints:** -``` -GET /api/hyperfleet/v1/nodepools -GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools -POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools -GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id} -GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses -POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses -``` - -**Data Model:** -```json -{ - "kind": "NodePool", - "id": "string", - "name": "string", - "owner_references": { - "kind": "Cluster", - "id": "cluster_id" - }, - "spec": { - "instance_type": "m5.2xlarge", - "replicas": 3, - "disk_size": 120 - }, - "labels": {}, - "status": { - "phase": "Ready", - "adapters": [...] - } -} -``` - -### Adapter Status Pattern - -Resources report status through adapter-specific condition sets following Kubernetes conventions. - -**Structure:** -```json -{ - "adapter": "dns-adapter", - "observed_generation": 1, - "created_time": "2025-11-17T15:04:05Z", - "last_report_time": "2025-11-17T15:04:05Z", - "conditions": [ - { - "type": "Ready", - "status": "True", - "reason": "ClusterProvisioned", - "message": "Cluster successfully provisioned", - "last_transition_time": "2025-11-17T15:04:05Z" - } - ], - "data": {} -} -``` - -**Note**: The `created_time`, `last_report_time`, and `last_transition_time` fields are set by the service. 
- - -**Condition Types:** -- `Ready` - Resource is operational -- `Available` - Resource is available for use -- `Progressing` - Resource is being modified -- Custom types defined by adapters - -### List Response Pattern - -All list endpoints return consistent pagination metadata: - -```json -{ - "kind": "ClusterList", - "page": 1, - "size": 10, - "total": 100, - "items": [...] -} -``` - -**Pagination Parameters:** -- `?page=N` - Page number (default: 1) -- `?pageSize=N` - Items per page (default: 100) - -**Search Parameters:** -- Uses TSL (Tree Search Language) query syntax -- Supported fields: `name`, `status.phase`, `labels.` -- Supported operators: `=`, `in`, `and`, `or` -- Examples: - ```bash - # Simple query - curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ - --data-urlencode "search=name='my-cluster'" - - # AND query - curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ - --data-urlencode "search=status.phase='Ready' and labels.env='production'" - - # OR query - curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ - --data-urlencode "search=labels.env='dev' or labels.env='staging'" - ``` - -## Development Workflow +## Quick Start ### Prerequisites -Before running hyperfleet-api, ensure these prerequisites are installed. See [PREREQUISITES.md](./PREREQUISITES.md) for details. +- **Go 1.24+**, **Podman**, **PostgreSQL 13+**, **Make** -- Go 1.24 or higher -- Podman -- PostgreSQL 13+ -- Make +See [PREREQUISITES.md](PREREQUISITES.md) for installation instructions. -### Initial Setup +### Installation ```bash -# 1. Generate OpenAPI code (must run first as pkg/api/openapi is required by go.mod) -make generate +# 1. Generate OpenAPI code and mocks +make generate-all # 2. Install dependencies go mod download -# 3. Build the binary +# 3. Build binary make binary -# 4. Setup PostgreSQL database +# 4. Setup database make db/setup -# 5. Run database migrations +# 5. Run migrations ./hyperfleet-api migrate -# 6. 
Verify database schema -make db/login -psql -h localhost -U hyperfleet hyperfleet -\dt +# 6. Start service (no auth) +make run-no-auth ``` -**Note**: The `pkg/api/openapi/` directory is not tracked in git. You must run `make generate` after cloning or pulling changes to the OpenAPI specification. - -### Pre-commit Hooks (Optional) +**Note**: Generated code is not tracked in git. You must run `make generate-all` after cloning. -This project uses pre-commit hooks for code quality and security checks. +### Accessing the API -**For Red Hat internal contributors:** -```bash -# Install pre-commit -brew install pre-commit # macOS -# or -pip install pre-commit +The service starts on `localhost:8000`: -# Install hooks -pre-commit install -pre-commit install --hook-type pre-push +- **REST API**: `http://localhost:8000/api/hyperfleet/v1/` +- **OpenAPI spec**: `http://localhost:8000/api/hyperfleet/v1/openapi` +- **Swagger UI**: `http://localhost:8000/api/hyperfleet/v1/openapi.html` +- **Health check**: `http://localhost:8083/healthcheck` +- **Metrics**: `http://localhost:8080/metrics` -# Test -pre-commit run --all-files +```bash +# Test the API +curl http://localhost:8000/api/hyperfleet/v1/clusters | jq ``` -**For external contributors:** - -The `.pre-commit-config.yaml` includes `rh-pre-commit` which requires access to Red Hat's internal GitLab. External contributors have two options: - -1. **Skip the internal hook** (recommended): - ```bash - # Skip rh-pre-commit when committing - SKIP=rh-pre-commit git commit -m "your message" - ``` - -2. **Comment out the internal hook**: - Edit `.pre-commit-config.yaml` and locate the repo block with `repo: https://gitlab.cee.redhat.com/...` (the Red Hat internal GitLab URL) or whose hooks include `id: rh-pre-commit`. Comment out or remove that entire repo block. - -The other hooks (`rh-hooks-ai`) are publicly accessible and will work for all contributors. 
+## API Resources -**Updating hooks:** +### Clusters -To update all hooks to their latest versions: -```bash -# Update all hooks to latest releases -pre-commit autoupdate +Kubernetes clusters with provider-specific configurations, labels, and adapter-based status reporting. -# Test updated hooks -pre-commit run --all-files -``` +**Main endpoints:** +- `GET/POST /api/hyperfleet/v1/clusters` +- `GET /api/hyperfleet/v1/clusters/{id}` +- `GET/POST /api/hyperfleet/v1/clusters/{id}/statuses` -This command updates the `rev` fields in `.pre-commit-config.yaml` to the latest available versions. Review the changes before committing to ensure compatibility. +### NodePools -See [PRE_COMMIT_DEMO.md](./PRE_COMMIT_DEMO.md) for detailed setup instructions and troubleshooting. +Groups of compute nodes within clusters. -### Running the Service +**Main endpoints:** +- `GET /api/hyperfleet/v1/nodepools` +- `GET/POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools` +- `GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}` +- `GET/POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses` -**Local development (no authentication):** -```bash -make run-no-auth -``` +Both resources support pagination, label-based search, and adapter status reporting. See [docs/api-resources.md](docs/api-resources.md) for complete API documentation. 
-The service starts on `localhost:8000`: -- REST API: `http://localhost:8000/api/hyperfleet/v1/` -- OpenAPI spec: `http://localhost:8000/openapi` -- Swagger UI: `http://localhost:8000/openapi-ui` -- Health check: `http://localhost:8083/healthcheck` -- Metrics: `http://localhost:8080/metrics` +## Example Usage -**Test the API:** ```bash -# Check API metadata -curl http://localhost:8000/api/hyperfleet | jq - -# List clusters -curl http://localhost:8000/api/hyperfleet/v1/clusters | jq - # Create a cluster curl -X POST http://localhost:8000/api/hyperfleet/v1/clusters \ -H "Content-Type: application/json" \ - -d '{ - "kind": "Cluster", - "name": "prod-cluster-1", - "spec": { - "region": "us-west-2", - "version": "4.15", - "nodes": 3 - }, - "labels": { - "env": "production" - } - }' | jq -``` - -### Configuration - -HyperFleet API can be configured via environment variables: - -#### Schema Validation - -**`OPENAPI_SCHEMA_PATH`** -- **Description**: Path to the OpenAPI specification file used for validating cluster and nodepool spec fields -- **Default**: `openapi/openapi.yaml` (repository base schema) -- **Required**: No (service will start with default schema if not specified) -- **Usage**: - - **Local development**: Uses default repository schema - - **Production**: Set via Helm deployment to inject provider-specific schema from ConfigMap - -**Example:** -```bash -# Local development (uses default) -./hyperfleet-api serve - -# Custom schema path -export OPENAPI_SCHEMA_PATH=/path/to/custom/openapi.yaml -./hyperfleet-api serve - -# Production (Helm sets this automatically) -# OPENAPI_SCHEMA_PATH=/etc/hyperfleet/schemas/openapi.yaml -``` - -**How it works:** -1. The schema validator loads the OpenAPI specification at startup -2. When POST/PATCH requests are made to create or update resources, the `spec` field is validated against the schema -3. Invalid specs return HTTP 400 with detailed field-level error messages -4. 
Unknown resource types or missing schemas are gracefully handled (validation skipped) - -**Provider-specific schemas:** -In production deployments, cloud providers can inject their own OpenAPI schemas via Helm: -```bash -helm install hyperfleet-api ./chart \ - --set-file provider.schema=gcp-schema.yaml -``` - -The injected schema is mounted at `/etc/hyperfleet/schemas/openapi.yaml` and automatically used for validation. - -### Testing - -```bash -# Unit tests -make test + -d '{"kind": "Cluster", "name": "my-cluster", "spec": {...}, "labels": {...}}' | jq -# Integration tests (requires running database) -make test-integration -``` - -**Test Coverage:** - -All 12 API endpoints have integration test coverage: - -| Endpoint | Coverage | -|----------|----------| -| GET /compatibility | ✓ | -| GET /clusters | ✓ (list, pagination, search) | -| POST /clusters | ✓ | -| GET /clusters/{id} | ✓ | -| GET /clusters/{id}/statuses | ✓ | -| POST /clusters/{id}/statuses | ✓ | -| GET /nodepools | ✓ (list, pagination) | -| GET /clusters/{id}/nodepools | ✓ | -| POST /clusters/{id}/nodepools | ✓ | -| GET /clusters/{id}/nodepools/{nodepool_id} | ✓ | -| GET /clusters/{id}/nodepools/{nodepool_id}/statuses | ✓ | -| POST /clusters/{id}/nodepools/{nodepool_id}/statuses | ✓ | - -## Code Generation Workflow - -### TypeSpec to OpenAPI - -The API specification is defined using TypeSpec and compiled to OpenAPI 3.0 from [hyperfleet-api-spec](https://github.com/openshift-hyperfleet/hyperfleet-api-spec): - -``` -TypeSpec definitions (.tsp files) - ↓ -tsp compile - ↓ -openapi/openapi.yaml (32KB, source specification) -``` - -### OpenAPI to Go Models - -Generated Go code is created via Docker-based workflow: - -``` -openapi/openapi.yaml - ↓ -make generate (podman + openapi-generator-cli v7.16.0) - ↓ -pkg/api/openapi/model_*.go (Go model structs) -pkg/api/openapi/api/openapi.yaml (44KB, fully resolved spec) -``` - -**Generation process:** -1. `make generate` removes existing generated code -2. 
Builds Docker image with openapi-generator-cli -3. Runs code generator inside container -4. Copies generated files to host - -**Generated artifacts:** -- Model structs with JSON tags -- Type definitions for all API resources -- Validation tags for required fields -- Fully resolved OpenAPI specification - -**Important**: Generated files in `pkg/api/openapi/` are not tracked in git. Developers must run `make generate` after cloning or pulling changes to the OpenAPI specification. - -### Runtime Embedding - -The fully resolved OpenAPI specification is embedded at compile time using Go 1.16+ `//go:embed`: - -```go -// pkg/api/openapi_embed.go -//go:embed openapi/api/openapi.yaml -var openapiFS embed.FS - -func GetOpenAPISpec() ([]byte, error) { - return fs.ReadFile(openapiFS, "openapi/api/openapi.yaml") -} -``` - -This embedded specification is: -- Compiled into the binary -- Served at `/openapi` endpoint -- Used by Swagger UI at `/openapi-ui` -- Zero runtime file I/O required - -## Database Schema - -### Core Tables - -**clusters** -- Primary resources for cluster management -- Includes spec (region, version, nodes) -- Stores metadata (labels, generation) -- Tracks created_by, updated_by - -**node_pools** -- Child resources owned by clusters -- Contains spec (instance_type, replicas, disk_size) -- Maintains owner_id foreign key to clusters -- Soft delete support - -**adapter_statuses** -- Polymorphic status records -- owner_type: 'Cluster' or 'NodePool' -- owner_id: References clusters or node_pools -- Stores adapter name and conditions JSON -- Tracks observed_generation - -**labels** -- Key-value pairs for resource categorization -- owner_type and owner_id for polymorphic relationships -- Supports filtering and search - -## OpenAPI Specification Structure - -**Source file (`openapi/openapi.yaml` - 32KB):** -- TypeSpec compilation output -- Uses `$ref` for parameter reuse (78 references) -- Compact, maintainable structure -- Input for code generation - -**Generated 
file (`pkg/api/openapi/api/openapi.yaml` - 44KB):** -- openapi-generator output -- Fully resolved (no external `$ref`) -- Inline parameter definitions (54 references) -- Includes server configuration -- Embedded in Go binary - -**Key differences:** -- Source file: Optimized for maintainability -- Generated file: Optimized for runtime serving - -## Build Commands - -```bash -# Generate OpenAPI client code -make generate - -# Build binary -make binary - -# Run database migrations -./hyperfleet-api migrate - -# Start server (no auth) -make run-no-auth - -# Run tests -make test -make test-integration - -# Database management -make db/setup # Create PostgreSQL container -make db/teardown # Remove PostgreSQL container -make db/login # Connect to database shell -``` - -## Container Image - -Build and push container images using the multi-stage Dockerfile: - -```bash -# Build container image -make image - -# Build with custom tag -make image IMAGE_TAG=v1.0.0 - -# Build and push to default registry -make image-push - -# Build and push to personal Quay registry (for development) -QUAY_USER=myuser make image-dev +# Search clusters +curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ + --data-urlencode "search=labels.env='production'" | jq ``` -Default image: `quay.io/openshift-hyperfleet/hyperfleet-api:latest` - -## Kubernetes Deployment - -### Using Helm Chart +## Development -The project includes a Helm chart for Kubernetes deployment with configurable PostgreSQL support. 
+### Common Commands -**Development deployment (with built-in PostgreSQL):** ```bash -helm install hyperfleet-api ./charts/ \ - --namespace hyperfleet-system \ - --create-namespace +make binary # Build binary +make run-no-auth # Run without authentication +make test # Run unit tests +make test-integration # Run integration tests +make generate # Generate OpenAPI models +make generate-mocks # Generate test mocks +make generate-all # Generate OpenAPI models and mocks +make db/setup # Create PostgreSQL container +make image # Build container image ``` -**Production deployment (with external database like GCP Cloud SQL):** -```bash -# First, create a secret with database credentials -kubectl create secret generic hyperfleet-db-external \ - --namespace hyperfleet-system \ - --from-literal=db.host= \ - --from-literal=db.port=5432 \ - --from-literal=db.name=hyperfleet \ - --from-literal=db.user=hyperfleet \ - --from-literal=db.password= - -# Deploy with external database -helm install hyperfleet-api ./charts/ \ - --namespace hyperfleet-system \ - --set database.postgresql.enabled=false \ - --set database.external.enabled=true \ - --set database.external.secretName=hyperfleet-db-external -``` +See [docs/development.md](docs/development.md) for detailed workflows. -**Custom image deployment:** -```bash -helm install hyperfleet-api ./charts/ \ - --namespace hyperfleet-system \ - --set image.registry=quay.io/myuser \ - --set image.repository=hyperfleet-api \ - --set image.tag=v1.0.0 -``` +### Pre-commit Hooks -**Upgrade deployment:** -```bash -helm upgrade hyperfleet-api ./charts/ --namespace hyperfleet-system -``` +This project uses [pre-commit](https://pre-commit.io/) for code quality checks. See [docs/development.md](docs/development.md#pre-commit-hooks-optional) for setup instructions. 
-**Uninstall:** -```bash -helm uninstall hyperfleet-api --namespace hyperfleet-system -``` +## Documentation -### Helm Values +### Core Documentation -| Parameter | Description | Default | -|-----------|-------------|---------| -| `image.registry` | Container registry | `quay.io/openshift-hyperfleet` | -| `image.repository` | Image repository | `hyperfleet-api` | -| `image.tag` | Image tag | `latest` | -| `database.postgresql.enabled` | Deploy built-in PostgreSQL | `true` | -| `database.external.enabled` | Use external database | `false` | -| `database.external.secretName` | Secret with db credentials | `""` | -| `auth.enableJwt` | Enable JWT authentication | `true` | -| `auth.enableAuthz` | Enable authorization | `true` | +- **[API Resources](docs/api-resources.md)** - API endpoints, data models, and search capabilities +- **[Development Guide](docs/development.md)** - Local setup, testing, code generation, and workflows +- **[Database](docs/database.md)** - Schema, migrations, and data model +- **[Deployment](docs/deployment.md)** - Container images, Kubernetes deployment, and configuration +- **[Authentication](docs/authentication.md)** - Development and production auth -## API Authentication +### Additional Resources -**Development mode (no auth):** -```bash -make run-no-auth -curl http://localhost:8000/api/hyperfleet/v1/clusters -``` - -**Production mode (OCM auth):** -```bash -make run -ocm login --token=${OCM_ACCESS_TOKEN} --url=http://localhost:8000 -ocm get /api/hyperfleet/v1/clusters -``` - -## Example Usage - -### Create Cluster and NodePool - -```bash -# 1. Create cluster -CLUSTER=$(curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \ - -H "Content-Type: application/json" \ - -d '{ - "kind": "Cluster", - "name": "production-cluster", - "spec": { - "region": "us-east-1", - "version": "4.16", - "nodes": 5 - }, - "labels": { - "env": "production", - "team": "platform" - } - }') - -CLUSTER_ID=$(echo $CLUSTER | jq -r '.id') - -# 2. 
Create node pool -curl -X POST http://localhost:8000/api/hyperfleet/v1/clusters/$CLUSTER_ID/nodepools \ - -H "Content-Type: application/json" \ - -d '{ - "kind": "NodePool", - "name": "worker-pool", - "spec": { - "instance_type": "m5.2xlarge", - "replicas": 10, - "disk_size": 200 - }, - "labels": { - "pool_type": "worker" - } - }' | jq - -# 3. Report adapter status -curl -X POST http://localhost:8000/api/hyperfleet/v1/clusters/$CLUSTER_ID/statuses \ - -H "Content-Type: application/json" \ - -d '{ - "adapter": "dns-adapter", - "observed_generation": 1, - "observed_time": "2025-11-17T15:04:05Z", - "conditions": [ - { - "type": "Ready", - "status": "True", - "reason": "ClusterProvisioned", - "message": "Cluster successfully provisioned" - } - ] - }' | jq - -# 4. Get cluster with aggregated status -curl http://localhost:8000/api/hyperfleet/v1/clusters/$CLUSTER_ID | jq - -# 5. Search with AND condition -curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ - --data-urlencode "search=status.phase='Ready' and labels.env='production'" | jq - -# 6. Search with OR condition -curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ - --data-urlencode "search=labels.env='dev' or labels.env='staging'" | jq -``` +- **[PREREQUISITES.md](PREREQUISITES.md)** - Prerequisite installation +- **[docs/continuous-delivery-migration.md](docs/continuous-delivery-migration.md)** - CD migration guide +- **[docs/dao.md](docs/dao.md)** - Data access patterns +- **[docs/testcontainers.md](docs/testcontainers.md)** - Testcontainers usage ## License diff --git a/docs/api-resources.md b/docs/api-resources.md new file mode 100644 index 0000000..fbf068d --- /dev/null +++ b/docs/api-resources.md @@ -0,0 +1,414 @@ +# API Resources + +This document provides detailed information about the HyperFleet API resources, including endpoints, request/response formats, and usage patterns. 
+ +## Cluster Management + +### Endpoints + +```text +GET /api/hyperfleet/v1/clusters +POST /api/hyperfleet/v1/clusters +GET /api/hyperfleet/v1/clusters/{cluster_id} +GET /api/hyperfleet/v1/clusters/{cluster_id}/statuses +POST /api/hyperfleet/v1/clusters/{cluster_id}/statuses +``` + +### Create Cluster + +**POST** `/api/hyperfleet/v1/clusters` + +**Request Body:** +```json +{ + "kind": "Cluster", + "name": "my-cluster", + "spec": {}, + "labels": { + "environment": "production" + } +} +``` + +**Response (201 Created):** +```json +{ + "kind": "Cluster", + "id": "2abc123...", + "href": "/api/hyperfleet/v1/clusters/2abc123...", + "name": "my-cluster", + "generation": 1, + "spec": {}, + "labels": { + "environment": "production" + }, + "created_time": "2025-01-01T00:00:00Z", + "updated_time": "2025-01-01T00:00:00Z", + "created_by": "user@example.com", + "updated_by": "user@example.com", + "status": { + "phase": "NotReady", + "observed_generation": 0, + "last_transition_time": "2025-01-01T00:00:00Z", + "last_updated_time": "2025-01-01T00:00:00Z", + "conditions": [] + } +} +``` + +**Note**: Status is initially `NotReady` with empty conditions until adapters report status. 
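
In practice a client creates the cluster and then polls `status.phase` until adapters report in. A sketch of that flow (the `curl` lines assume the Quick Start service and are shown commented; the parsing below runs against a canned response matching the one above):

```bash
# With the service running (see README Quick Start), the flow is:
#   id=$(curl -s -X POST http://localhost:8000/api/hyperfleet/v1/clusters \
#     -H "Content-Type: application/json" \
#     -d '{"kind":"Cluster","name":"my-cluster","spec":{},"labels":{}}' | jq -r '.id')
#   curl -s "http://localhost:8000/api/hyperfleet/v1/clusters/$id" | jq -r '.status.phase'

# A fresh resource reports NotReady until an adapter posts status;
# extracting the phase from a canned create response without jq:
response='{"kind":"Cluster","status":{"phase":"NotReady","conditions":[]}}'
phase=${response#*\"phase\":\"}   # drop everything through "phase":"
phase=${phase%%\"*}               # drop everything from the next quote
echo "$phase"
```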
+ +### Get Cluster + +**GET** `/api/hyperfleet/v1/clusters/{cluster_id}` + +**Response (200 OK):** +```json +{ + "kind": "Cluster", + "id": "2abc123...", + "href": "/api/hyperfleet/v1/clusters/2abc123...", + "name": "my-cluster", + "generation": 1, + "spec": {}, + "labels": { + "environment": "production" + }, + "created_time": "2025-01-01T00:00:00Z", + "updated_time": "2025-01-01T00:00:00Z", + "created_by": "user@example.com", + "updated_by": "user@example.com", + "status": { + "phase": "Ready", + "observed_generation": 1, + "last_transition_time": "2025-01-01T00:00:00Z", + "last_updated_time": "2025-01-01T00:00:00Z", + "conditions": [ + { + "type": "ValidationSuccessful", + "status": "True", + "reason": "AllValidationsPassed", + "message": "All validations passed", + "observed_generation": 1, + "created_time": "2025-01-01T00:00:00Z", + "last_updated_time": "2025-01-01T00:00:00Z", + "last_transition_time": "2025-01-01T00:00:00Z" + }, + { + "type": "DNSSuccessful", + "status": "True", + "reason": "DNSProvisioned", + "message": "DNS successfully configured", + "observed_generation": 1, + "created_time": "2025-01-01T00:00:00Z", + "last_updated_time": "2025-01-01T00:00:00Z", + "last_transition_time": "2025-01-01T00:00:00Z" + } + ] + } +} +``` + +### List Clusters + +**GET** `/api/hyperfleet/v1/clusters?page=1&pageSize=10` + +**Response (200 OK):** +```json +{ + "kind": "ClusterList", + "page": 1, + "size": 10, + "total": 100, + "items": [ + { + "kind": "Cluster", + "id": "2abc123...", + "name": "my-cluster", + ... + } + ] +} +``` + +### Report Cluster Status + +**POST** `/api/hyperfleet/v1/clusters/{cluster_id}/statuses` + +Adapters use this endpoint to report their status. 
+ +**Request Body:** +```json +{ + "adapter": "validator", + "observed_generation": 1, + "observed_time": "2025-01-01T10:00:00Z", + "conditions": [ + { + "type": "Available", + "status": "True", + "reason": "AllValidationsPassed", + "message": "All validations passed" + }, + { + "type": "Applied", + "status": "True", + "reason": "ValidationJobApplied", + "message": "Validation job applied successfully" + }, + { + "type": "Health", + "status": "True", + "reason": "OperationsCompleted", + "message": "All adapter operations completed successfully" + } + ], + "data": { + "job_name": "validator-job-abc123", + "attempt": 1 + } +} +``` + +**Response (201 Created):** +```json +{ + "adapter": "validator", + "observed_generation": 1, + "conditions": [ + { + "type": "Available", + "status": "True", + "reason": "AllValidationsPassed", + "message": "All validations passed", + "last_transition_time": "2025-01-01T10:00:00Z" + }, + { + "type": "Applied", + "status": "True", + "reason": "ValidationJobApplied", + "message": "Validation job applied successfully", + "last_transition_time": "2025-01-01T10:00:00Z" + }, + { + "type": "Health", + "status": "True", + "reason": "OperationsCompleted", + "message": "All adapter operations completed successfully", + "last_transition_time": "2025-01-01T10:00:00Z" + } + ], + "data": { + "job_name": "validator-job-abc123", + "attempt": 1 + }, + "created_time": "2025-01-01T10:00:00Z", + "last_report_time": "2025-01-01T10:00:00Z" +} +``` + +**Note**: The API automatically sets `created_time`, `last_report_time`, and `last_transition_time` fields. 
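
The phase reported on the parent resource is derived from these conditions. A rough illustration of the rule (all conditions must succeed for `Ready`; the real aggregation is server-side and also tracks observed generations and stale adapters):

```bash
# Illustrative only: a resource stays NotReady while any reported
# condition has status "False" or "Unknown".
conditions='{"type":"Available","status":"True"}
{"type":"Applied","status":"True"}
{"type":"Health","status":"False"}'

if echo "$conditions" | grep -Eq '"status":"(False|Unknown)"'; then
  phase="NotReady"
else
  phase="Ready"
fi
echo "$phase"
```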
+ +### Status Phases + +- `NotReady` - Cluster is being provisioned or has failing conditions +- `Ready` - All adapter conditions report success +- `Failed` - Cluster provisioning or operation failed + +## NodePool Management + +### Endpoints + +```text +GET /api/hyperfleet/v1/nodepools +GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools +POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools +GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id} +GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses +POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses +``` + +### Create NodePool + +**POST** `/api/hyperfleet/v1/clusters/{cluster_id}/nodepools` + +**Request Body:** +```json +{ + "kind": "NodePool", + "name": "worker-pool", + "spec": {}, + "labels": { + "role": "worker" + } +} +``` + +**Response (201 Created):** +```json +{ + "kind": "NodePool", + "id": "2def456...", + "href": "/api/hyperfleet/v1/nodepools/2def456...", + "name": "worker-pool", + "owner_references": { + "kind": "Cluster", + "id": "2abc123..." + }, + "generation": 1, + "spec": {}, + "labels": { + "role": "worker" + }, + "created_time": "2025-01-01T00:00:00Z", + "updated_time": "2025-01-01T00:00:00Z", + "created_by": "user@example.com", + "updated_by": "user@example.com", + "status": { + "phase": "NotReady", + "observed_generation": 0, + "last_transition_time": "2025-01-01T00:00:00Z", + "last_updated_time": "2025-01-01T00:00:00Z", + "conditions": [] + } +} +``` + +### Get NodePool + +**GET** `/api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}` + +**Response (200 OK):** +```json +{ + "kind": "NodePool", + "id": "2def456...", + "href": "/api/hyperfleet/v1/nodepools/2def456...", + "name": "worker-pool", + "owner_references": { + "kind": "Cluster", + "id": "2abc123..." 
+ }, + "generation": 1, + "spec": {}, + "labels": { + "role": "worker" + }, + "created_time": "2025-01-01T00:00:00Z", + "updated_time": "2025-01-01T00:00:00Z", + "created_by": "user@example.com", + "updated_by": "user@example.com", + "status": { + "phase": "Ready", + "observed_generation": 1, + "last_transition_time": "2025-01-01T00:00:00Z", + "last_updated_time": "2025-01-01T00:00:00Z", + "conditions": [...] + } +} +``` + +### Report NodePool Status + +**POST** `/api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses` + +Same format as cluster status reporting (see above). + +## Pagination and Search + +### Pagination + +All list endpoints support pagination: + +```text +GET /api/hyperfleet/v1/clusters?page=1&pageSize=10 +``` + +**Parameters:** +- `page` - Page number (default: 1) +- `pageSize` - Items per page (default: 100) + +**Response:** +```json +{ + "kind": "ClusterList", + "page": 1, + "size": 10, + "total": 100, + "items": [...] +} +``` + +### Search + +Search using TSL (Tree Search Language) query syntax: + +```bash +# Simple equality +curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ + --data-urlencode "search=name='my-cluster'" + +# AND query +curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ + --data-urlencode "search=status.phase='Ready' and labels.environment='production'" + +# OR query +curl -G http://localhost:8000/api/hyperfleet/v1/clusters \ + --data-urlencode "search=labels.environment='dev' or labels.environment='staging'" +``` + +**Supported fields:** +- `name` - Resource name +- `status.phase` - Status phase (NotReady, Ready, Failed) +- `labels.` - Label values + +**Supported operators:** +- `=` - Equality +- `in` - In list +- `and` - Logical AND +- `or` - Logical OR + +## Field Descriptions + +### Common Fields + +- `kind` - Resource type (Cluster, NodePool) +- `id` - Unique resource identifier (auto-generated, format: `2`) +- `href` - Resource URI +- `name` - Resource name (user-defined) +- `generation` 
- Spec version counter (incremented on spec updates) +- `spec` - Provider-specific configuration (JSONB, validated against OpenAPI schema) +- `labels` - Key-value pairs for categorization and search +- `created_time` - When resource was created (API-managed) +- `updated_time` - When resource was last updated (API-managed) +- `created_by` - User who created the resource (email) +- `updated_by` - User who last updated the resource (email) + +### Status Fields + +- `phase` - Current resource phase (NotReady, Ready, Failed) +- `observed_generation` - Last spec generation processed (min across all adapters) +- `last_transition_time` - When phase last changed +- `last_updated_time` - Min of all adapter last_report_time (detects stale adapters) +- `conditions` - Array of resource conditions from adapters + +### Condition Fields + +**In AdapterStatus POST request (ConditionRequest):** +- `type` - Condition type (Available, Applied, Health) +- `status` - Condition status (True, False, Unknown) +- `reason` - Machine-readable reason code +- `message` - Human-readable message + +**In Cluster/NodePool status (ResourceCondition):** +- All above fields plus: +- `observed_generation` - Generation this condition reflects +- `created_time` - When condition was first created (API-managed) +- `last_updated_time` - When adapter last reported (API-managed, from AdapterStatus.last_report_time) +- `last_transition_time` - When status last changed (API-managed) + +## Related Documentation + +- [Example Usage](../README.md#example-usage) - Practical examples +- [Authentication](authentication.md) - API authentication +- [Database](database.md) - Database schema diff --git a/docs/authentication.md b/docs/authentication.md new file mode 100644 index 0000000..92995d9 --- /dev/null +++ b/docs/authentication.md @@ -0,0 +1,160 @@ +# Authentication + +This document describes authentication mechanisms for the HyperFleet API. + +## Overview + +HyperFleet API supports two authentication modes: + +1. 
**Development Mode (No Auth)**: For local development and testing +2. **Production Mode (OCM Auth)**: JWT-based authentication via OpenShift Cluster Manager + +## Development Mode (No Auth) + +For local development and testing, authentication can be disabled. + +### Usage + +```bash +# Start service without authentication +make run-no-auth + +# Access API without tokens +curl http://localhost:8000/api/hyperfleet/v1/clusters | jq +``` + +### Configuration + +```bash +export AUTH_ENABLED=false +./hyperfleet-api serve +``` + +**Important**: Never disable authentication in production environments. + +## Production Mode (OCM Auth) + +Production deployments use JWT-based authentication integrated with OpenShift Cluster Manager (OCM). + +### Usage + +```bash +# Start service with authentication +make run + +# Login to OCM +ocm login --token=${OCM_ACCESS_TOKEN} --url=http://localhost:8000 + +# Access API with authentication +ocm get /api/hyperfleet/v1/clusters +``` + +### JWT Authentication + +HyperFleet API validates JWT tokens issued by Red Hat SSO. + +**Token validation checks:** +1. Signature - Token signed by trusted issuer +2. Issuer - Matches configured `JWT_ISSUER` +3. Audience - Matches configured `JWT_AUDIENCE` +4. Expiration - Token not expired +5. Claims - Required claims present + +**Token format:** +```text +Authorization: Bearer <token> +``` + +Example request: +```bash +curl -H "Authorization: Bearer ${TOKEN}" \ + http://localhost:8000/api/hyperfleet/v1/clusters +``` + +## Authorization + +HyperFleet API implements resource-based authorization.
+ +### Resource Ownership + +Resources track ownership via `created_by` and `updated_by` fields: + +```json +{ + "id": "cluster-123", + "name": "my-cluster", + "created_by": "user@example.com", + "updated_by": "user@example.com" +} +``` + +### Access Control + +- **Create**: Users can create resources +- **Read**: Users can read resources they created or have access to +- **Update**: Users can update resources they own +- **Delete**: Users can delete resources they own + +Users within the same organization can access shared resources based on organizational membership. + +## Configuration + +### Environment Variables + +```bash +# Development (no auth) +export AUTH_ENABLED=false + +# Production (with auth) +export AUTH_ENABLED=true +export OCM_URL=https://api.openshift.com +export JWT_ISSUER=https://sso.redhat.com/auth/realms/redhat-external +export JWT_AUDIENCE=https://api.openshift.com +``` + +See [Deployment](deployment.md) for complete configuration options. + +### Kubernetes Deployment + +Configure via Helm values: + +```yaml +# values.yaml +auth: + enabled: true + ocmUrl: https://api.openshift.com + jwtIssuer: https://sso.redhat.com/auth/realms/redhat-external + jwtAudience: https://api.openshift.com +``` + +Deploy: +```bash +helm install hyperfleet-api ./charts/ --values values.yaml +``` + +## Troubleshooting + +### Common Issues + +**401 Unauthorized** +- Check token is valid and not expired +- Verify `JWT_ISSUER` and `JWT_AUDIENCE` match token claims +- Ensure `Authorization` header is correctly formatted + +**403 Forbidden** +- User authenticated but lacks permissions +- Check resource ownership +- Verify organizational membership + +**Token debugging** +```bash +# Decode JWT token (header and payload only, not verified) +echo $TOKEN | cut -d. 
-f2 | base64 -d | jq + +# Check token expiration +ocm token --refresh +``` + +## Related Documentation + +- [Deployment](deployment.md) - Authentication configuration and Kubernetes setup diff --git a/docs/database.md b/docs/database.md new file mode 100644 index 0000000..38c5c53 --- /dev/null +++ b/docs/database.md @@ -0,0 +1,83 @@ +# Database + +This document describes the database architecture used by HyperFleet API. + +## Overview + +HyperFleet API uses PostgreSQL with GORM ORM. The schema follows a simple relational model with polymorphic associations. + +## Core Tables + +### clusters +Primary resources for cluster management. Contains cluster metadata and JSONB spec field for provider-specific configuration. + +### node_pools +Child resources owned by clusters, representing groups of compute nodes. Uses foreign key relationship with cascade delete. + +### adapter_statuses +Polymorphic status records for both clusters and node pools. Stores adapter-reported conditions in JSONB format. + +**Polymorphic pattern:** +- `owner_type` + `owner_id` allows one table to serve both clusters and node pools +- Enables efficient status lookups across resource types + +### labels +Key-value pairs for resource categorization and search. Uses polymorphic association to support both clusters and node pools. + +## Schema Relationships + +```text +clusters (1) ──→ (N) node_pools + │ │ + │ │ + └────────┬───────────┘ + │ + ├──→ adapter_statuses (polymorphic) + └──→ labels (polymorphic) +``` + +## Key Design Patterns + +### JSONB Fields + +Flexible schema storage for: +- **spec** - Provider-specific cluster/nodepool configurations +- **conditions** - Adapter status condition arrays +- **data** - Adapter metadata + +**Benefits:** +- Support multiple cloud providers without schema changes +- Runtime validation against OpenAPI schema +- PostgreSQL JSON query capabilities + +### Soft Delete + +Resources use GORM's soft delete pattern with `deleted_at` timestamp. 
Soft-deleted records are excluded from queries by default. + +### Migration System + +Uses GORM AutoMigrate: +- Non-destructive (never drops columns or tables) +- Additive (creates missing tables, columns, indexes) +- Run via `./hyperfleet-api migrate` + +## Database Setup + +```bash +# Create PostgreSQL container +make db/setup + +# Run migrations +./hyperfleet-api migrate + +# Connect to database +make db/login +``` + +See [development.md](development.md) for detailed setup instructions. + +## Related Documentation + +- [Development Guide](development.md) - Database setup and migrations +- [Deployment](deployment.md) - Database configuration and connection settings +- [API Resources](api-resources.md) - Resource data models diff --git a/docs/deployment.md b/docs/deployment.md new file mode 100644 index 0000000..b374b28 --- /dev/null +++ b/docs/deployment.md @@ -0,0 +1,354 @@ +# Deployment Guide + +This guide covers building container images and deploying HyperFleet API to Kubernetes using Helm. + +## Container Image + +### Building Images + +Build and push container images: + +```bash +# Build container image with default tag +make image + +# Build with custom tag +make image IMAGE_TAG=v1.0.0 + +# Build and push to default registry +make image-push + +# Build and push to personal Quay registry (for development) +QUAY_USER=myuser make image-dev +``` + +### Default Image + +The default container image is: +```text +quay.io/openshift-hyperfleet/hyperfleet-api:latest +``` + +### Custom Registry + +To use a custom container registry: + +```bash +# Build with custom registry +make image \ + IMAGE_REGISTRY=your-registry.io/yourorg \ + IMAGE_TAG=v1.0.0 + +# Push to custom registry +podman push your-registry.io/yourorg/hyperfleet-api:v1.0.0 +``` + +## Configuration + +HyperFleet API is configured via environment variables. 
+ +### Schema Validation + +**`OPENAPI_SCHEMA_PATH`** - Path to OpenAPI specification for spec validation + +The API validates cluster and nodepool `spec` fields against an OpenAPI schema. This allows different providers (GCP, AWS, Azure) to have different spec structures. + +- Default: Uses `openapi/openapi.yaml` from the repository +- Custom: Set via `OPENAPI_SCHEMA_PATH` environment variable for provider-specific schemas + +```bash +export OPENAPI_SCHEMA_PATH=/path/to/custom-schema.yaml +``` + +### Environment Variables + +**Database:** +- `DB_HOST` - PostgreSQL hostname (default: `localhost`) +- `DB_PORT` - PostgreSQL port (default: `5432`) +- `DB_NAME` - Database name (default: `hyperfleet`) +- `DB_USER` - Database username (default: `hyperfleet`) +- `DB_PASSWORD` - Database password (required) +- `DB_SSLMODE` - SSL mode: `disable`, `require`, `verify-ca`, `verify-full` (default: `disable`) + +**Authentication:** +- `AUTH_ENABLED` - Enable JWT authentication (default: `true`) +- `OCM_URL` - OpenShift Cluster Manager API URL (default: `https://api.openshift.com`) +- `JWT_ISSUER` - JWT token issuer URL (default: `https://sso.redhat.com/auth/realms/redhat-external`) +- `JWT_AUDIENCE` - JWT token audience (default: `https://api.openshift.com`) + +**Server:** +- `PORT` - API server port (default: `8000`) +- `METRICS_PORT` - Metrics endpoint port (default: `8080`) +- `HEALTH_PORT` - Health check port (default: `8083`) + +**Logging:** +- `LOG_LEVEL` - Logging level: `debug`, `info`, `warn`, `error` (default: `info`) +- `LOG_FORMAT` - Log format: `json`, `text` (default: `json`) + +## Kubernetes Deployment + +### Using Helm Chart + +The project includes a Helm chart for Kubernetes deployment with configurable PostgreSQL support. 
+ +#### Development Deployment + +Deploy with built-in PostgreSQL for development and testing: + +```bash +helm install hyperfleet-api ./charts/ \ + --namespace hyperfleet-system \ + --create-namespace +``` + +This creates: +- HyperFleet API deployment +- PostgreSQL StatefulSet +- Services for both components +- ConfigMaps and Secrets + +#### Production Deployment + +Deploy with external database (recommended for production): + +##### Step 1: Create database secret + +```bash +kubectl create secret generic hyperfleet-db-external \ + --namespace hyperfleet-system \ + --from-literal=db.host=<host> \ + --from-literal=db.port=5432 \ + --from-literal=db.name=hyperfleet \ + --from-literal=db.user=hyperfleet \ + --from-literal=db.password=<password> +``` + +##### Step 2: Deploy with external database + +```bash +helm install hyperfleet-api ./charts/ \ + --namespace hyperfleet-system \ + --set database.postgresql.enabled=false \ + --set database.external.enabled=true \ + --set database.external.secretName=hyperfleet-db-external +``` + +#### Custom Image Deployment + +Deploy with custom container image: + +```bash +helm install hyperfleet-api ./charts/ \ + --namespace hyperfleet-system \ + --set image.registry=quay.io \ + --set image.repository=myuser/hyperfleet-api \ + --set image.tag=v1.0.0 +``` + +#### Upgrade Deployment + +Upgrade to a new version: + +```bash +helm upgrade hyperfleet-api ./charts/ \ + --namespace hyperfleet-system \ + --set image.tag=v1.1.0 +``` + +#### Uninstall + +Remove the deployment: + +```bash +helm uninstall hyperfleet-api --namespace hyperfleet-system +``` + +## Helm Values + +### Key Configuration Options + +| Parameter | Description | Default | +|-----------|-------------|---------| +| `image.registry` | Container registry | `quay.io` | +| `image.repository` | Image repository | `openshift-hyperfleet/hyperfleet-api` | +| `image.tag` | Image tag | `latest` | +| `image.pullPolicy` | Image pull policy | `IfNotPresent` | +| `auth.enableJwt` | Enable JWT
authentication | `true` | +| `database.postgresql.enabled` | Enable built-in PostgreSQL | `true` | +| `database.external.enabled` | Use external database | `false` | +| `database.external.secretName` | Secret containing database credentials | `hyperfleet-db-external` | +| `replicaCount` | Number of API replicas | `1` | +| `resources.limits.cpu` | CPU limit | `500m` | +| `resources.limits.memory` | Memory limit | `512Mi` | + +### Custom Values File + +Create a `values.yaml` file: + +```yaml +# values.yaml +image: + registry: quay.io + repository: myuser/hyperfleet-api + tag: v1.0.0 + +auth: + enableJwt: true + +database: + postgresql: + enabled: false + external: + enabled: true + secretName: hyperfleet-db-external + +replicaCount: 3 + +resources: + limits: + cpu: 1000m + memory: 1Gi + requests: + cpu: 500m + memory: 512Mi +``` + +Deploy with custom values: +```bash +helm install hyperfleet-api ./charts/ \ + --namespace hyperfleet-system \ + --values values.yaml +``` + +## Helm Operations + +### Check Deployment Status + +```bash +# Get deployment status +helm status hyperfleet-api --namespace hyperfleet-system + +# List all releases +helm list --namespace hyperfleet-system + +# Check pods +kubectl get pods --namespace hyperfleet-system + +# Check services +kubectl get svc --namespace hyperfleet-system +``` + +### View Logs + +```bash +# View API logs +kubectl logs -f deployment/hyperfleet-api --namespace hyperfleet-system + +# View logs from all pods +kubectl logs -f -l app=hyperfleet-api --namespace hyperfleet-system + +# View PostgreSQL logs (if using built-in) +kubectl logs -f statefulset/hyperfleet-postgresql --namespace hyperfleet-system +``` + +### Troubleshooting + +```bash +# Describe pod for events and status +kubectl describe pod --namespace hyperfleet-system + +# Check deployment events +kubectl get events --namespace hyperfleet-system --sort-by='.lastTimestamp' + +# Exec into pod for debugging +kubectl exec -it deployment/hyperfleet-api --namespace 
hyperfleet-system -- /bin/sh + +# Check secrets +kubectl get secrets --namespace hyperfleet-system + +# Verify ConfigMaps +kubectl get configmaps --namespace hyperfleet-system +``` + +## Health Checks + +The deployment includes liveness and readiness probes at `GET /healthcheck` (port 8083). + +## Scaling + +Scale replicas: +```bash +# Manual scaling +kubectl scale deployment hyperfleet-api --replicas=3 --namespace hyperfleet-system + +# Via Helm +helm upgrade hyperfleet-api ./charts/ \ + --namespace hyperfleet-system \ + --set replicaCount=3 +``` + +Enable autoscaling via Helm values (`autoscaling.enabled=true`). + +## Monitoring + +Prometheus metrics are available at `http://<host>:8080/metrics`. + +For Prometheus Operator, enable ServiceMonitor via Helm values (`serviceMonitor.enabled=true`). + +## Production Best Practices + +- Use external managed database (Cloud SQL, RDS, Azure Database) +- Enable authentication with `auth.enableJwt=true` +- Set resource limits and use multiple replicas +- Use specific image tags instead of `latest` +- Enable monitoring and regular database backups + +## Complete Deployment Example + +### GKE Deployment + +```bash +# 1. Build and push image +export QUAY_USER=myuser +podman login quay.io +make image-dev + +# 2. Get GKE credentials +gcloud container clusters get-credentials my-cluster \ + --zone=us-central1-a \ + --project=my-project + +# 3. Create namespace +kubectl create namespace hyperfleet-system +kubectl config set-context --current --namespace=hyperfleet-system + +# 4. Create database secret (for production) +kubectl create secret generic hyperfleet-db-external \ + --from-literal=db.host=10.10.10.10 \ + --from-literal=db.port=5432 \ + --from-literal=db.name=hyperfleet \ + --from-literal=db.user=hyperfleet \ + --from-literal=db.password=secretpassword + +# 5.
Deploy with Helm +helm install hyperfleet-api ./charts/ \ + --set image.registry=quay.io \ + --set image.repository=myuser/hyperfleet-api \ + --set image.tag=dev-abc123 \ + --set auth.enableJwt=false \ + --set database.postgresql.enabled=false \ + --set database.external.enabled=true + +# 6. Verify deployment +kubectl get pods +kubectl logs -f deployment/hyperfleet-api + +# 7. Access API (port-forward for testing) +kubectl port-forward svc/hyperfleet-api 8000:8000 +curl http://localhost:8000/api/hyperfleet/v1/clusters +``` + +## Related Documentation + +- [Development Guide](development.md) - Local development setup +- [Authentication](authentication.md) - Authentication configuration diff --git a/docs/development.md b/docs/development.md new file mode 100644 index 0000000..3e75734 --- /dev/null +++ b/docs/development.md @@ -0,0 +1,379 @@ +# Development Guide + +This guide covers the complete development workflow for HyperFleet API, from initial setup to running tests. + +## Prerequisites + +Before running hyperfleet-api, ensure these prerequisites are installed. See [PREREQUISITES.md](../PREREQUISITES.md) for detailed installation instructions. + +- **Go 1.24 or higher** +- **Podman** +- **PostgreSQL 13+** +- **Make** + +Verify installations: +```bash +go version # Should show 1.24+ +podman version +make --version +``` + +## Initial Setup + +Set up your local development environment: + +```bash +# 1. Generate OpenAPI code and mocks +make generate-all + +# 2. Install dependencies +go mod download + +# 3. Build the binary +make binary + +# 4. Setup PostgreSQL database +make db/setup + +# 5. Run database migrations +./hyperfleet-api migrate + +# 6. Verify database schema +make db/login +\dt +``` + +**Important**: Generated code is not tracked in git. You must run `make generate-all` after cloning to generate both OpenAPI models and mocks. + +## Pre-commit Hooks (Optional) + +This project uses pre-commit hooks for code quality and security checks. 
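A minimal `.pre-commit-config.yaml` has this shape (the repo and hook IDs below are generic examples from the pre-commit project, not this repository's actual configuration):

```yaml
# Hypothetical minimal config; this project's real file also
# includes the internal rh-pre-commit hook mentioned below.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```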
+ +### Setup + +```bash +# Install pre-commit +brew install pre-commit # macOS +# or +pip install pre-commit + +# Install hooks +pre-commit install +pre-commit install --hook-type pre-push + +# Test +pre-commit run --all-files +``` + +### For External Contributors + +The `.pre-commit-config.yaml` includes `rh-pre-commit` which requires access to Red Hat's internal GitLab. External contributors can skip it: + +```bash +# Skip internal hook when committing +SKIP=rh-pre-commit git commit -m "your message" +``` + +Or comment out the internal hook in `.pre-commit-config.yaml`. + +### Update Hooks + +```bash +pre-commit autoupdate +pre-commit run --all-files +``` + +## Running the Service + +### Local Development (No Authentication) + +```bash +make run-no-auth +``` + +The service starts on `localhost:8000`: +- REST API: `http://localhost:8000/api/hyperfleet/v1/` +- OpenAPI spec: `http://localhost:8000/api/hyperfleet/v1/openapi` +- Swagger UI: `http://localhost:8000/api/hyperfleet/v1/openapi.html` +- Health check: `http://localhost:8083/healthcheck` +- Metrics: `http://localhost:8080/metrics` + +### Testing the API + +```bash +# List clusters +curl http://localhost:8000/api/hyperfleet/v1/clusters | jq + +# Create a cluster +curl -X POST http://localhost:8000/api/hyperfleet/v1/clusters \ + -H "Content-Type: application/json" \ + -d '{ + "kind": "Cluster", + "name": "prod-cluster-1", + "spec": {...}, + "labels": {"env": "production"} + }' | jq +``` + +### Production Mode (OCM Authentication) + +```bash +make run +ocm login --token=${OCM_ACCESS_TOKEN} --url=http://localhost:8000 +ocm get /api/hyperfleet/v1/clusters +``` + +See [Deployment](deployment.md) and [Authentication](authentication.md) for complete configuration options. + +## Testing + +```bash +# Unit tests +make test + +# Integration tests (requires running database) +make test-integration +``` + +All API endpoints have integration test coverage. 
+ +## Build Commands + +### Common Commands + +```bash +# Generate OpenAPI client code +make generate + +# Generate mocks for testing +make generate-mocks + +# Generate both OpenAPI and mocks +make generate-all + +# Build binary +make binary + +# Run database migrations +./hyperfleet-api migrate + +# Start server (no auth) +make run-no-auth + +# Run tests +make test +make test-integration + +# Database management +make db/setup # Create PostgreSQL container +make db/teardown # Remove PostgreSQL container +make db/login # Connect to database shell +``` + +### Build Targets + +| Command | Description | +|---------|-------------| +| `make generate` | Generate Go models from OpenAPI spec | +| `make generate-mocks` | Generate mock implementations for testing | +| `make generate-all` | Generate both OpenAPI models and mocks | +| `make binary` | Build hyperfleet-api executable | +| `make test` | Run unit tests | +| `make test-integration` | Run integration tests | +| `make run-no-auth` | Start server without authentication | +| `make run` | Start server with OCM authentication | +| `make db/setup` | Create PostgreSQL container | +| `make db/teardown` | Remove PostgreSQL container | +| `make db/login` | Connect to database shell | + +## Development Workflow + +### Code Generation + +HyperFleet API generates Go models from OpenAPI specifications using `openapi-generator-cli`. 
+ +**Workflow**: +```text +openapi/openapi.yaml + ↓ +make generate (podman + openapi-generator-cli) + ↓ +pkg/api/openapi/model_*.go (Go structs) +pkg/api/openapi/api/openapi.yaml (embedded spec) +``` + +**Generated artifacts**: +- Go model structs with JSON tags (`model_*.go`) +- Fully resolved OpenAPI specification (embedded in binary) + +**Important**: +- Generated files are NOT tracked in git +- Must run `make generate` after cloning +- Must run after OpenAPI spec updates + +**OpenAPI spec source**: +The `openapi/openapi.yaml` is maintained in the [hyperfleet-api-spec](https://github.com/openshift-hyperfleet/hyperfleet-api-spec) repository using TypeSpec. When the spec changes, the compiled YAML is copied here. Developers working on hyperfleet-api only need to run `make generate` - no TypeSpec knowledge required. + +**Commands**: +```bash +# Generate Go models from OpenAPI spec +make generate + +# Generate both OpenAPI models and mocks +make generate-all +``` + +**Troubleshooting**: +```bash +# If "pkg/api/openapi not found" +make generate +go mod download + +# If generator container fails +podman info # Check podman is running +make generate +``` + +### Mock Generation + +Mock implementations of service interfaces are used for unit testing. Mocks are generated using `mockgen`. 
+ +**When to regenerate mocks**: +- After modifying service interface definitions in `pkg/services/` +- When adding or removing methods from service interfaces +- After initial clone (mocks are not committed to git) + +**How it works**: +Service files contain `//go:generate` directives that specify how to generate mocks: +```go +//go:generate mockgen-v0.6.0 -source=cluster.go -package=services -destination=cluster_mock.go +``` + +**Commands**: +```bash +# Generate mocks only +make generate-mocks + +# Generate OpenAPI models and mocks together +make generate-all +``` + +### Tool Dependency Management (Bingo) + +HyperFleet API uses [bingo](https://github.com/bwplotka/bingo) to manage Go tool dependencies with pinned versions. + +**Managed tools**: +- `mockgen` - Mock generation for testing +- `golangci-lint` - Code linting +- `gotestsum` - Enhanced test output + +**Common operations**: +```bash +# Install all tools +bingo get + +# Install a specific tool +bingo get <tool> + +# Update a tool to latest version +bingo get <tool>@latest + +# List all managed tools +bingo list +``` + +Tool versions are tracked in `.bingo/*.mod` files and loaded automatically via `include .bingo/Variables.mk` in the Makefile. + +### Making Changes + +1. **Create a feature branch**: + ```bash + git checkout -b feature/my-feature + ``` + +2. **Make your changes** to the code + +3. **Update OpenAPI spec if needed**: + - Make changes in the [hyperfleet-api-spec](https://github.com/openshift-hyperfleet/hyperfleet-api-spec) repository + - Copy updated `openapi.yaml` to this repository + - Run `make generate` to regenerate Go models + +4. **Regenerate mocks if service interfaces changed**: + ```bash + make generate-mocks + ``` + +5. **Run tests**: + ```bash + make test + make test-integration + ``` + +6. **Commit your changes**: + ```bash + git add . + git commit -m "feat: add new feature" + # Pre-commit hooks will run automatically + ``` + +7.
**Push and create pull request**: + ```bash + git push origin feature/my-feature + ``` + +## Troubleshooting + +### "pkg/api/openapi not found" + +**Problem**: Missing generated OpenAPI code + +**Solution**: +```bash +make generate +go mod download +``` + +### "undefined: Mock*" or missing mock files + +**Problem**: Missing generated mock implementations + +**Solution**: +```bash +make generate-mocks +``` + +### Database Connection Errors + +**Problem**: Cannot connect to PostgreSQL + +**Solution**: +```bash +# Check if container is running +podman ps | grep postgres + +# Restart database +make db/teardown +make db/setup +``` + +### Test Failures + +**Problem**: Integration tests failing + +**Solution**: +```bash +# Ensure database is running +make db/setup + +# Run migrations +./hyperfleet-api migrate + +# Run tests again +make test-integration +``` + +## Related Documentation + +- [Database](database.md) - Database schema and migrations +- [Deployment](deployment.md) - Container and Kubernetes deployment +- [API Resources](api-resources.md) - API endpoints and data models diff --git a/docs/testcontainers.md b/docs/testcontainers.md index e2405c0..40c89a1 100755 --- a/docs/testcontainers.md +++ b/docs/testcontainers.md @@ -11,7 +11,7 @@ testcontainers project only supports Docker officially and some errors can appea If you encounter the following error: -``` +```text Failed to start PostgreSQL testcontainer: create container: container create: Error response from daemon: container create: unable to find network with name or ID bridge: network not found: creating reaper failed ``` It can happen because testcontainers spin up an additional [testcontainers/ryuk](https://github.com/testcontainers/moby-ryuk) container that manages the lifecycle of the containers used in the tests and performs cleanup in case there are fails. @@ -25,7 +25,7 @@ TESTCONTAINERS_RYUK_DISABLED=true Or setting a property in `~/.testcontainers.properties` -``` +```text ryuk.disabled=true ```