Skip to content

fix(#84): Make Redis optional in ServiceDefaults to fix integration tests#88

Merged
mpaulosky merged 12 commits intomainfrom
squad/aspire-redis-cache
Feb 17, 2026
Merged

fix(#84): Make Redis optional in ServiceDefaults to fix integration tests#88
mpaulosky merged 12 commits intomainfrom
squad/aspire-redis-cache

Conversation

@mpaulosky
Copy link
Copy Markdown
Owner

Fix: Redis Health Check Integration Test Failure (Issue #84)

Nebula investigated and resolved test failures caused by Redis being unconditionally required in ServiceDefaults while integration tests don't have Redis available.

Problem

ServiceDefaults unconditionally registered RedisHealthCheck, which required IConnectionMultiplexer from StackExchange.Redis. Integration tests only spin up MongoDB via TestContainers — no Redis available. The health endpoint returned HTTP 500/503 when trying to resolve the missing dependency.

Solution

Made Redis registration optional in ServiceDefaults/Extensions.cs:

  • Added Redis:Enabled configuration flag (defaults to rue for production)
  • Registered IConnectionMultiplexer singleton only when Redis is enabled
  • Conditionally added RedisHealthCheck to health checks only when Redis is enabled
  • Set Redis__Enabled=false environment variable in IssueTrackerTestFactory BEFORE calling �ase.ConfigureWebHost()

Key Learning

WebApplicationFactory configuration timing matters. In-memory config from ConfigureAppConfiguration runs AFTER Program.cs initialization, so it can't affect services registered during startup. Environment variables work because they're available during host builder construction.

Test Results

All 418 tests passing

  • 11 integration tests
  • 4 architecture tests
  • 72 service unit tests
  • 12 ServiceDefaults tests
  • 37 PlugIns unit tests
  • 86 CoreBusiness unit tests
  • 142 UI unit tests
  • 54 PlugIns integration tests

Closes

Closes #84


Branch: squad/aspire-redis-cache → main
Co-authored-by: Nebula (Tester)

mpaulosky and others added 11 commits February 16, 2026 16:43
Wong fixed the "tools: Tool names must be unique" error by:
- Restructuring .ai-team/skills/auth0-integration/ (moved from loose file to proper subdirectory)
- Adding standardized YAML frontmatter with unique 'name' fields to all skill definitions
- Auditing tool names across skills and MCP servers (zero conflicts found)

Result: Copilot CLI operational. No remaining blockers.

Status: Complete ✓

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Add Aspire.Hosting.Redis v13.0.0 and Microsoft.Extensions.Caching.StackExchangeRedis v10.0.0 to Directory.Packages.props
- Update StackExchange.Redis to v2.9.32 for Aspire compatibility
- Add Aspire.Hosting.Redis to AppHost.csproj
- Add Redis resource to AppHost Program.cs with health check and data volume
- Update UI service reference to include Redis
- Create RedisHealthCheck implementation in ServiceDefaults.HealthChecks
- Add Redis distributed cache registration in ServiceDefaults.Extensions
- Create ServiceDefaults.Tests project with integration tests for cache registration

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Create ICacheService interface with GetAsync, SetAsync, and RemoveAsync methods
- Implement CacheService wrapping IDistributedCache with JSON serialization
- Register ICacheService in ServiceDefaults.Extensions
- Add comprehensive unit tests for cache operations
- Tests cover cache hits/misses, expiration, serialization, and error handling
- Use 5-minute default TTL for distributed cache operations
- Include XML documentation on all public methods

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Documents implementation details, design decisions, test results, and
next phase recommendations for cache service layer.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Created Integration.Tests project with 11 comprehensive cache operation tests
- Tests cover: SetAsync/GetAsync/RemoveAsync operations, TTL expiration, concurrent ops, error handling, performance baselines
- Validates ICacheService registration and end-to-end JSON serialization
- Tests graceful degradation on corrupted cache entries
- Verifies cache performance (< 5ms hit latency)
- All 11 integration tests pass
- No regressions in existing unit tests (364 passing)
- ServiceDefaults.Tests maintains 12/12 pass rate

Acceptance criteria met:
✓ Cache operations work end-to-end
✓ Concurrent operations (100+) succeed
✓ Cache serialization/deserialization validated
✓ Error handling tested
✓ Performance baseline established
✓ Integration tests comprehensive

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Comprehensive test report documenting:
- All 11 integration tests passing
- Cache operations validated (Set/Get/Remove/TTL/Expiration)
- 100+ concurrent operations stress test passed
- Error handling and graceful degradation confirmed
- Performance baseline: < 1ms cache hit latency
- All 8 acceptance criteria met
- 364 tests total (no regressions)
- Ready for production deployment

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- docs/Aspire.md: System architecture, topology, resource configuration, local setup
- docs/Cache-Strategy.md: Three-tier strategy, usage patterns, invalidation approaches
- docs/Health-Checks.md: Readiness/liveness probes, interpreting responses, troubleshooting
- docs/Running-Aspire-Locally.md: Quick start guide, prerequisites, step-by-step setup
- docs/Production-Readiness.md: Persistence, replication, monitoring, scaling, security

All documentation follows markdown standards and includes code examples from the project.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Session: 2026-02-17-issue84-resolution
Requested by: mpaulosky

Changes:
- Merged Nebula's Redis health check fix decision
- Logged test investigation and resolution
- Captured team directive: all commits require passing tests
- Updated decisions.md with new team policies
- Archived all inbox decision files

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…ests

Tests were failing because Redis was unconditionally required but not
available in integration test environment. Now Redis registration is
optional and can be disabled via Redis__Enabled configuration.

- Made IConnectionMultiplexer registration conditional on Redis:Enabled config
- Added RedisHealthCheck conditionally only when Redis is enabled
- Set Redis__Enabled=false in IssueTrackerTestFactory for test environment
- All 418 integration and unit tests now passing

Fixes #84

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 17, 2026 18:28
Copy link
Copy Markdown

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This pull request addresses Issue #84 by making Redis optional in ServiceDefaults to fix integration test failures. The problem was that ServiceDefaults unconditionally registered RedisHealthCheck which required IConnectionMultiplexer, but integration tests only spin up MongoDB via TestContainers without Redis. The solution introduces a Redis:Enabled configuration flag (defaulting to true) to conditionally register Redis-related services including IConnectionMultiplexer, RedisHealthCheck, and the distributed cache implementation.

Changes:

  • Made Redis registration conditional in ServiceDefaults based on Redis:Enabled configuration
  • Added Redis container to AppHost orchestration with health checks
  • Created CacheService implementation with JSON serialization support for distributed caching
  • Added comprehensive test coverage (12 ServiceDefaults tests, 11 integration tests)
  • Added extensive documentation covering Aspire architecture, cache strategy, health checks, running locally, and production readiness

Reviewed changes

Copilot reviewed 26 out of 26 changed files in this pull request and generated 11 comments.

Show a summary per file
File Description
src/ServiceDefaults/Extensions.cs Added conditional Redis registration based on configuration
src/ServiceDefaults/HealthChecks/RedisHealthCheck.cs New Redis health check with 2-second timeout
src/ServiceDefaults/CacheService.cs New ICacheService interface and implementation with JSON serialization
src/ServiceDefaults/GlobalUsings.cs Added required imports for Redis and caching
src/ServiceDefaults/ServiceDefaults.csproj Added Redis and caching package references
src/AppHost/Program.cs Added Redis container resource to orchestration
src/AppHost/AppHost.csproj Added Aspire.Hosting.Redis package reference
tests/IssueTracker.PlugIns.Tests.Integration/IssueTrackerTestFactory.cs Set Redis__Enabled environment variable before ServiceDefaults initialization
tests/ServiceDefaults.Tests/* New test project with cache service and extensions tests
tests/Integration.Tests/* New test project with cache integration tests
docs/*.md Five new comprehensive documentation files
Directory.Packages.props Added Redis-related package versions
IssueTracker.slnx Added new test projects to solution

};
return ConnectionMultiplexer.Connect(config);
});
}
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

When Redis is disabled (Redis:Enabled=false), ICacheService is still registered but IDistributedCache is not registered by AddStackExchangeRedisCache. This will cause a runtime exception when ICacheService tries to resolve IDistributedCache. Either register a fallback in-memory IDistributedCache implementation when Redis is disabled, or conditionally register ICacheService only when Redis is enabled.

Suggested change
}
}
else
{
// Fallback: use in-memory distributed cache when Redis is disabled
builder.Services.AddDistributedMemoryCache();
}

Copilot uses AI. Check for mistakes.
Comment on lines +33 to +54
builder.Services.AddStackExchangeRedisCache(options =>
{
options.ConfigurationOptions = new StackExchange.Redis.ConfigurationOptions
{
EndPoints = { redisEndpoint },
AbortOnConnectFail = false,
ConnectTimeout = 2000,
SyncTimeout = 2000,
};
});

// Redis Connection: Required by RedisHealthCheck
builder.Services.AddSingleton<IConnectionMultiplexer>(sp =>
{
var config = new StackExchange.Redis.ConfigurationOptions
{
EndPoints = { redisEndpoint },
AbortOnConnectFail = false,
ConnectTimeout = 2000,
SyncTimeout = 2000,
};
return ConnectionMultiplexer.Connect(config);
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Redis configuration options are duplicated in two places: AddStackExchangeRedisCache (lines 35-41) and IConnectionMultiplexer registration (lines 47-53). The configuration is identical. Consider extracting this to a shared ConfigurationOptions instance to follow the DRY principle and ensure consistency if configuration values change.

Suggested change
builder.Services.AddStackExchangeRedisCache(options =>
{
options.ConfigurationOptions = new StackExchange.Redis.ConfigurationOptions
{
EndPoints = { redisEndpoint },
AbortOnConnectFail = false,
ConnectTimeout = 2000,
SyncTimeout = 2000,
};
});
// Redis Connection: Required by RedisHealthCheck
builder.Services.AddSingleton<IConnectionMultiplexer>(sp =>
{
var config = new StackExchange.Redis.ConfigurationOptions
{
EndPoints = { redisEndpoint },
AbortOnConnectFail = false,
ConnectTimeout = 2000,
SyncTimeout = 2000,
};
return ConnectionMultiplexer.Connect(config);
var redisConfigOptions = new StackExchange.Redis.ConfigurationOptions
{
EndPoints = { redisEndpoint },
AbortOnConnectFail = false,
ConnectTimeout = 2000,
SyncTimeout = 2000,
};
builder.Services.AddStackExchangeRedisCache(options =>
{
options.ConfigurationOptions = redisConfigOptions;
});
// Redis Connection: Required by RedisHealthCheck
builder.Services.AddSingleton<IConnectionMultiplexer>(sp =>
{
return ConnectionMultiplexer.Connect(redisConfigOptions);

Copilot uses AI. Check for mistakes.
@@ -0,0 +1,73 @@
// ============================================
// Copyright (c) 2023. All rights reserved.
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The copyright year is 2023, but this file was created in 2026 (according to the PR date). The copyright year should be updated to match the current year or the year of creation.

Suggested change
// Copyright (c) 2023. All rights reserved.
// Copyright (c) 2026. All rights reserved.

Copilot uses AI. Check for mistakes.
@@ -0,0 +1,133 @@
// ============================================
// Copyright (c) 2023. All rights reserved.
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The copyright year is 2023, but this file was created in 2026 (according to the PR date). The copyright year should be updated to match the current year or the year of creation.

Suggested change
// Copyright (c) 2023. All rights reserved.
// Copyright (c) 2026. All rights reserved.

Copilot uses AI. Check for mistakes.
Comment on lines +1 to +563
## Production Readiness Guide

### Overview

This guide outlines best practices and configurations for deploying Issue Tracker to production
environments. It covers Redis persistence, cache behavior, health checks, monitoring, and
performance tuning.

### Redis Persistence & Data Safety

In production, Redis must persist data to survive restarts and failures.

#### Persistence Strategies

##### RDB (Snapshot) - Default

**How it works**: Periodically saves entire dataset to disk

**Configuration**:

```yaml
# docker-compose.yml for production
services:
redis:
image: redis:7-alpine
command: redis-server --save 60 1000 --appendonly no
volumes:
- redis-data:/data
ports:
- "6379:6379"

volumes:
redis-data:
driver: local
```

**Options**:

- `--save 60 1000` - Save every 60 seconds if 1000+ keys changed
- Adjust to `--save 300 10` for less frequent saves (slower recovery, better performance)

**Pros**: Simple, low overhead

**Cons**: Can lose data between snapshots

##### AOF (Append-Only File) - Safer

**How it works**: Logs every write command, replays on recovery

**Configuration**:

```yaml
services:
redis:
image: redis:7-alpine
command: redis-server --appendonly yes --appendfsync everysec
volumes:
- redis-data:/data
ports:
- "6379:6379"
```

**Options**:

- `--appendfsync everysec` - Fsync once per second (balanced safety/performance)
- `--appendfsync always` - Fsync after every write (safest, slowest)
- `--appendfsync no` - Let OS decide when to fsync (fastest, riskier)

**Pros**: Safer, minimal data loss

**Cons**: Slower writes, larger disk footprint

##### Hybrid (RDB + AOF) - Recommended

**Configuration**:

```yaml
services:
redis:
image: redis:7-alpine
command: |
redis-server
--save 60 1000
--appendonly yes
--appendfsync everysec
volumes:
- redis-data:/data
ports:
- "6379:6379"
```

On recovery, Redis uses AOF first (more recent), then RDB if AOF unavailable.

### Redis Replication (High Availability)

For production with downtime requirements, deploy Redis in a replicated setup:

```yaml
version: '3.8'
services:
redis-primary:
image: redis:7-alpine
command: redis-server --port 6379
volumes:
- redis-primary:/data
ports:
- "6379:6379"

redis-replica:
image: redis:7-alpine
command: redis-server --port 6380 --slaveof redis-primary 6379
depends_on:
- redis-primary
volumes:
- redis-replica:/data
ports:
- "6380:6380"

volumes:
redis-primary:
redis-replica:
```

**Application Configuration**:

```csharp
// Connects to primary for writes, can read from replicas
var options = new StackExchange.Redis.ConfigurationOptions
{
EndPoints = { "redis-primary:6379", "redis-replica:6380" },
TieBreaker = "",
ServiceName = "mymaster"
};

var connection = await StackExchange.Redis.ConnectionMultiplexer.ConnectAsync(options);
```

### Cache Behavior: Local vs. Production

#### Local Development (AppHost)

- **Scope**: Single developer machine
- **Persistence**: Volumes created/destroyed with AppHost
- **TTLs**: Short (5-10 minutes) for rapid iteration
- **Invalidation**: Manual (restart AppHost to clear all cache)
- **Monitoring**: Aspire dashboard provides visibility

#### Production

- **Scope**: Multiple servers, distributed load
- **Persistence**: Persistent volumes (RDB/AOF)
- **TTLs**: Longer (30+ minutes) for cost/performance optimization
- **Invalidation**: Careful coordination (invalidate only what changed)
- **Monitoring**: OpenTelemetry metrics, alerting on cache misses

#### Key Differences

| Aspect | Local | Production |
|--------|-------|------------|
| **Replication** | Single instance | Master-slave or cluster |
| **Failover** | Manual restart | Automatic (Redis Sentinel or Cluster) |
| **Data Persistence** | Docker volume | Persistent storage + backups |
| **TTL Strategy** | Aggressive (fast iteration) | Conservative (cost/consistency) |
| **Invalidation** | Full cache wipes OK | Surgical, event-driven |

### Health Check Configuration for Production

Health checks must be configured differently for startup vs. ongoing operation:

#### Startup Phase (High Confidence Required)

During container startup, services must be "ready" before accepting traffic:

```csharp
// In Program.cs
var healthChecks = builder.Services.AddHealthChecks()
.AddCheck<MongoDbHealthCheck>(
"mongodb",
HealthStatus.Unhealthy, // Fail on any error
tags: new[] { "startup" }
)
.AddCheck<RedisHealthCheck>(
"redis",
HealthStatus.Unhealthy, // Fail on any error
tags: new[] { "startup" }
);
```

Kubernetes probe:

```yaml
readinessProbe:
httpGet:
path: /health
port: 5000
initialDelaySeconds: 30 # Wait for startup
periodSeconds: 10
failureThreshold: 3
```

#### Ongoing Phase (Graceful Degradation)

Once running, services should degrade rather than fail:

```csharp
// After startup, mark as "Degraded" instead of "Unhealthy"
var healthChecks = builder.Services.AddHealthChecks()
.AddCheck<MongoDbHealthCheck>(
"mongodb",
HealthStatus.Degraded, // Non-fatal issues
tags: new[] { "liveness" }
)
.AddCheck<RedisHealthCheck>(
"redis",
HealthStatus.Degraded, // Cache is optional, not critical
tags: new[] { "liveness" }
);
```

### Monitoring & Observability

#### OpenTelemetry Metrics Collection

Issue Tracker exports metrics via OpenTelemetry. Configure exporters in production:

```csharp
// In ServiceDefaults/Extensions.cs (already implemented)
builder.Services.AddOpenTelemetry()
.WithMetrics(metrics => metrics
.AddAspNetCoreInstrumentation()
.AddHttpClientInstrumentation()
.AddOtlpExporter(options =>
{
options.Endpoint = new Uri("http://otel-collector:4317");
})
);
```

**Metrics to Export**:

- Cache hit/miss rates
- MongoDB query latency
- Redis command latency
- HTTP request latency
- Error rates per service

#### Prometheus Scraping

Configure Prometheus to scrape metrics endpoint:

```yaml
global:
scrape_interval: 15s

scrape_configs:
- job_name: 'issuetracker'
static_configs:
- targets: ['issuetracker-ui:5000']
metrics_path: '/metrics'
```

#### Application Insights (Azure)

For Azure deployments, configure Application Insights:

```csharp
builder.Services.AddApplicationInsightsTelemetry();

builder.Services.ConfigureOpenTelemetryMeterProvider(metrics =>
metrics.AddAzureMonitorMetricExporter()
);
```

### Performance Tuning

#### Cache TTL Optimization

Analyze cache hit/miss rates and adjust TTLs:

**Strategy**:

- **High Hit Rate (>80%)**: TTL is good, no change needed
- **Low Hit Rate (<50%)**: Increase TTL (data is reusable longer)
- **Stale Data Complaints**: Decrease TTL (data changes more frequently)

**Recommended Production TTLs**:

```csharp
// Query results: Re-execute queries every 30 minutes
const int QueryResultsTTL = 30;

// Output/reports: Re-render every 60 minutes
const int ReportTTL = 60;

// Session data: Keep user prefs for 24 hours
const int SessionTTL = 24 * 60;
```

#### Redis Memory Management

Configure Redis memory limits and eviction policy:

```yaml
services:
redis:
image: redis:7-alpine
command: |
redis-server
--maxmemory 512mb
--maxmemory-policy allkeys-lru
ports:
- "6379:6379"
```

**Policies**:

- `allkeys-lru` - Evict least recently used keys when limit reached
- `volatile-lru` - Evict keys with TTL when limit reached
- `allkeys-random` - Random eviction

**Monitoring**:

```bash
# Check memory usage
redis-cli INFO memory

# Expected output:
# used_memory_human:125.42M
# maxmemory:512000000
```

#### Database Query Optimization

Ensure MongoDB indexes are created for frequently-queried fields:

```csharp
// In migration or setup
var collection = database.GetCollection<Issue>("issues");
var indexModel = new CreateIndexModel<Issue>(
Builders<Issue>.IndexKeys.Ascending(x => x.CreatedBy)
);
await collection.Indexes.CreateOneAsync(indexModel);
```

**Verify indexes**:

```bash
mongosh --host localhost --port 27017
use devissuetracker
db.issues.getIndexes()
```

### Backup & Disaster Recovery

#### MongoDB Backups

Schedule daily backups using `mongodump`:

```bash
#!/bin/bash
# backup-mongodb.sh
mongodump \
--host mongodb-prod:27017 \
-u admin -p $MONGO_PASSWORD \
--out /backups/mongo-$(date +%Y%m%d)
```

#### Redis Backups

Copy RDB/AOF files to persistent storage:

```bash
#!/bin/bash
# backup-redis.sh
docker exec redis-prod redis-cli BGSAVE
docker cp redis-prod:/data/dump.rdb /backups/dump-$(date +%Y%m%d).rdb
```

#### Recovery Procedures

**MongoDB Recovery**:

```bash
mongorestore \
--host mongodb-prod:27017 \
-u admin -p $MONGO_PASSWORD \
/backups/mongo-20240101
```

**Redis Recovery**:

```bash
docker cp /backups/dump-20240101.rdb redis-prod:/data/dump.rdb
docker restart redis-prod
```

### Scaling Strategies

#### Horizontal Scaling (Multiple UI Instances)

Use load balancer in front of multiple UI instances:

```yaml
services:
loadbalancer:
image: nginx:latest
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf

ui-1:
image: issuetracker-ui:latest
depends_on:
- mongodb
- redis

ui-2:
image: issuetracker-ui:latest
depends_on:
- mongodb
- redis
```

#### Redis Cluster (Horizontal Cache)

For massive cache volumes, use Redis Cluster:

```yaml
services:
redis-cluster:
image: redis:7-alpine
command: redis-server --cluster-enabled yes
environment:
- REDIS_CLUSTER_NODES=6
```

Application connects to any node; Redis handles sharding automatically.

### Troubleshooting Production Issues

#### Symptom: Slow Response Times

1. Check health endpoint:
```bash
curl https://prod.example.com/health
```

2. Inspect OpenTelemetry traces for slow queries/services

3. Review cache hit rates (low hit rate = increasing database load)

4. Check Redis and MongoDB resource usage:
```bash
docker stats
```

#### Symptom: High Memory Usage

1. Check Redis memory:
```bash
redis-cli INFO memory
```

2. If near limit, keys are being evicted; consider increasing memory or adjusting TTL

3. Check MongoDB memory:
```bash
mongosh --eval "db.stats()"
```

#### Symptom: Frequent Health Check Failures

1. Review health check timeout thresholds (may be too strict)

2. Check network latency between containers

3. Increase timeout values if infrastructure is slow:
```csharp
private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(5); // Increased from 3
```

### Security Considerations

#### Network Isolation

Ensure MongoDB and Redis are not exposed to the internet:

```yaml
services:
mongodb:
ports:
- "127.0.0.1:27017:27017" # Localhost only

redis:
ports:
- "127.0.0.1:6379:6379" # Localhost only
```

#### Authentication

Enable authentication for both services:

**MongoDB**:

```yaml
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
```

**Redis**:

```yaml
command: redis-server --requirepass ${REDIS_PASSWORD}
```

#### Encryption

Enable TLS for production connections:

**MongoDB TLS**:

```csharp
var settings = MongoClientSettings.FromConnectionString(
"mongodb+srv://admin:pass@mongodb.example.com/?ssl=true"
);
var client = new MongoClient(settings);
```

**Redis TLS**:

```csharp
var options = ConfigurationOptions.Parse(
"redis-prod.example.com:6380,ssl=true,sslProtocols=Tls12"
);
var connection = await ConnectionMultiplexer.ConnectAsync(options);
```

### Pre-Deployment Checklist

- [ ] Redis persistence enabled (RDB or AOF)
- [ ] MongoDB backups configured and tested
- [ ] Health checks configured for startup + ongoing operation
- [ ] OpenTelemetry metrics exporter configured
- [ ] Cache TTLs optimized for production load
- [ ] MongoDB indexes created for query performance
- [ ] Network security in place (no internet exposure)
- [ ] Authentication enabled for MongoDB and Redis
- [ ] TLS/HTTPS enabled for all external communication
- [ ] Monitoring and alerting configured
- [ ] Disaster recovery procedures documented
- [ ] Load balancer configured (if scaling horizontally)

### Post-Deployment Monitoring

After deployment:

1. Monitor health check endpoint every 5 minutes
2. Track cache hit/miss rates daily
3. Review performance metrics weekly
4. Test backup/recovery procedures monthly
5. Rotate credentials quarterly
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This documentation file uses H4 (####) and H5 (#####) heading levels. According to the project's markdown standards (guideline 1000002), content should be restructured to use only H2 and H3 headings. H4 usage should be restructured, and H5 is more strongly discouraged. Consider promoting some sections or reorganizing the hierarchy to eliminate these deeper heading levels.

Copilot generated this review using guidance from repository custom instructions.
Comment on lines +41 to +46
// Handle test scenarios where Redis is not available
if (_connection == null)
{
return HealthCheckResult.Unhealthy("Redis connection is not configured");
}

Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The null check for _connection will never be true because the constructor parameter is not nullable and will throw before this line is reached if null is passed. Since RedisHealthCheck is only registered when Redis is enabled (line 67 in Extensions.cs), and IConnectionMultiplexer is registered as a singleton when Redis is enabled (line 45-55 in Extensions.cs), this defensive null check is unnecessary and unreachable. Consider removing it or making the constructor parameter nullable if null checks are truly needed.

Suggested change
// Handle test scenarios where Redis is not available
if (_connection == null)
{
return HealthCheckResult.Unhealthy("Redis connection is not configured");
}

Copilot uses AI. Check for mistakes.
@@ -0,0 +1,95 @@
// ============================================
// Copyright (c) 2023. All rights reserved.
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The copyright year is 2023, but these test files were created in 2026 (according to the PR date). The copyright year should be updated to match the current year or the year of creation.

Suggested change
// Copyright (c) 2023. All rights reserved.
// Copyright (c) 2026. All rights reserved.

Copilot uses AI. Check for mistakes.
Comment thread docs/Health-Checks.md
Comment on lines +1 to +381
## Health Checks & Service Monitoring

### Overview

Health checks are automated probes that verify the availability and responsiveness of external
dependencies (MongoDB, Redis). Issue Tracker exposes two standardized health check endpoints that
return the aggregate health status and detailed per-service information.

### Health Check Endpoints

#### `/health` - Readiness Probe

Indicates whether the application is **ready to accept traffic**.

**Purpose**: Used by load balancers, orchestrators, and deployment tools to determine readiness

**HTTP Status Codes**:

- `200 OK` - All services healthy, application ready
- `503 Service Unavailable` - One or more services degraded or unhealthy

**Response Example (All Healthy)**:

```json
{
"status": "Healthy",
"checks": {
"mongodb": {
"status": "Healthy",
"description": "MongoDB connection is responsive"
},
"redis": {
"status": "Healthy",
"description": "Redis connection is responsive"
}
}
}
```

**Response Example (MongoDB Degraded)**:

```json
{
"status": "Degraded",
"checks": {
"mongodb": {
"status": "Unhealthy",
"description": "MongoDB connection timed out after 3s"
},
"redis": {
"status": "Healthy",
"description": "Redis connection is responsive"
}
}
}
```

#### `/health/live` - Liveness Probe

Indicates whether the application **process is alive** (not applicable to Issue Tracker currently,
but reserved for future implementation).

**Purpose**: Used to detect and restart dead processes

**HTTP Status Codes**:

- `200 OK` - Process is running
- `503 Service Unavailable` - Process is deadlocked or hung

### Interpreting Health Responses

#### Status Levels

| Status | Meaning | Action |
|--------|---------|--------|
| **Healthy** | Service responds within timeout, fully operational | No action needed |
| **Degraded** | Service responds but with issues (slow, partial failure) | Investigate logs, consider restart |
| **Unhealthy** | Service unresponsive, timed out, or failed | Restart service, check container logs |

#### Common Issues and Meanings

| Response | Cause | Solution |
|----------|-------|----------|
| `"MongoDB connection is responsive"` | MongoDB healthy | None |
| `"MongoDB connection timed out after 3s"` | MongoDB slow or offline | Check `docker logs`, restart container |
| `"Redis connection is responsive"` | Redis healthy | None |
| `"Redis connection timed out after 2s"` | Redis slow or offline | Check `docker logs`, restart container |
| `"MongoDB ping returned zero response time"` | Unexpected response | Restart MongoDB, check network |

### MongoDB Health Check

**Service**: `mongodb`

**Probe Mechanism**: Sends a `ping` command to MongoDB admin database

**Timeout**: 3 seconds

**Implementation Details**:

```csharp
public class MongoDbHealthCheck : IHealthCheck
{
private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(3);

public async Task<HealthCheckResult> CheckHealthAsync(...)
{
using var timeoutCts = new CancellationTokenSource(Timeout);

var database = _client.GetDatabase("admin");
var pingCommand = new BsonDocument("ping", 1);

await database.RunCommandAsync<BsonDocument>(pingCommand, ...);

return HealthCheckResult.Healthy("MongoDB connection is responsive");
}
}
```

**Troubleshooting**:

```bash
# Check MongoDB container is running
docker ps | grep mongodb

# Inspect container logs
docker logs <mongodb-container-id>

# Test connection manually
mongosh --host localhost --port 27017 -u course -p whatever

# If failed, restart container
docker restart <mongodb-container-id>
```

### Redis Health Check

**Service**: `redis`

**Probe Mechanism**: Sends a `PING` command to Redis server

**Timeout**: 2 seconds

**Implementation Details**:

```csharp
public class RedisHealthCheck : IHealthCheck
{
private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(2);

public async Task<HealthCheckResult> CheckHealthAsync(...)
{
using var timeoutCts = new CancellationTokenSource(Timeout);

var server = _connection.GetServer(_connection.GetEndPoints().First());
var pong = await server.PingAsync(flags: CommandFlags.DemandMaster);

if (pong != TimeSpan.Zero)
{
return HealthCheckResult.Healthy("Redis connection is responsive");
}

return HealthCheckResult.Unhealthy("Redis ping returned zero response time");
}
}
```

**Troubleshooting**:

```bash
# Check Redis container is running
docker ps | grep redis

# Inspect container logs
docker logs <redis-container-id>

# Test connection manually
redis-cli -h localhost -p 6379 PING

# If failed, restart container
docker restart <redis-container-id>
```

### Troubleshooting Unhealthy Services

#### Scenario 1: MongoDB Timeout

**Symptoms**:

- Health check shows: `"MongoDB connection timed out after 3s"`
- Blazor UI cannot load issue data

**Steps**:

1. Check if container is running:
```bash
docker ps | grep mongodb
```

2. View container logs:
```bash
docker logs <mongodb-container-id> --tail 50
```

3. If container is not running, restart AppHost:
```bash
# Stop AppHost (Ctrl+C)
# Then restart
dotnet run --project src/AppHost/AppHost.csproj
```

4. If container is running but slow, check resource constraints:
```bash
docker stats <mongodb-container-id>
```

5. If memory/CPU usage is high, restart the container:
```bash
docker restart <mongodb-container-id>
```

#### Scenario 2: Redis Unreachable

**Symptoms**:

- Health check shows: `"Redis connection timed out after 2s"`
- Cache operations fail, no caching available

**Steps**:

1. Verify Redis is running:
```bash
redis-cli -h localhost -p 6379 PING
```

2. If command fails, check if container exists:
```bash
docker ps -a | grep redis
```

3. Restart Redis container:
```bash
docker restart <redis-container-id>
```

4. Or restart entire AppHost:
```bash
dotnet run --project src/AppHost/AppHost.csproj
```

#### Scenario 3: Intermittent Unhealthy Status

**Symptoms**:

- Health check occasionally shows "Degraded"
- Application works but is slow

**Causes**:

- High database/network load
- Container resource constraints
- DNS resolution delays

**Steps**:

1. Monitor container resources:
```bash
docker stats
```

2. Check network latency:
```bash
ping localhost
```

3. Increase timeouts if acceptable for your use case (edit health check code)

4. Scale or optimize database queries

### Integration with Kubernetes/Container Orchestrators

Health check endpoints integrate with container orchestrators (Kubernetes, Docker Swarm, etc.)
for automated service recovery.

#### Kubernetes Probe Configuration

```yaml
apiVersion: v1
kind: Pod
metadata:
name: issuetracker
spec:
containers:
- name: ui
image: issuetracker-ui:latest

# Readiness probe: is service ready for traffic?
readinessProbe:
httpGet:
path: /health
port: 5000
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3

# Liveness probe: is process still alive?
livenessProbe:
httpGet:
path: /health/live
port: 5000
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
```

**Behavior**:

- **initialDelaySeconds**: Wait 10 seconds after container starts before first probe
- **periodSeconds**: Check every 10 seconds
- **failureThreshold**: Restart container after 3 consecutive failures

#### Docker Compose Health Check

```yaml
services:
ui:
image: issuetracker-ui:latest
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
interval: 10s
timeout: 5s
retries: 3
start_period: 30s
```

### Monitoring Health Metrics

#### Check Health Endpoint from CLI

```bash
# Using curl
curl http://localhost:5000/health | jq

# Using PowerShell
Invoke-WebRequest -Uri "http://localhost:5000/health" | ConvertFrom-Json | ConvertTo-Json -Depth 5
```

#### Parse Health Response

```csharp
public class HealthResponse
{
public string Status { get; set; }
public Dictionary<string, HealthCheckData> Checks { get; set; }
}

public class HealthCheckData
{
public string Status { get; set; }
public string Description { get; set; }
}
```

#### Metrics to Track

- **Health Check Latency**: Time to complete health check probe
- **Failure Rate**: Percentage of failed health checks
- **Recovery Time**: Time to transition from Unhealthy → Healthy

Use OpenTelemetry metrics collection (see Production-Readiness.md) to export these metrics to
monitoring systems.

### Health Check Best Practices

1. **Run health checks frequently** (every 10 seconds) to detect failures quickly
2. **Use appropriate timeouts** (MongoDB: 3s, Redis: 2s) to avoid cascading failures
3. **Alert on degraded status**, not just unhealthy
4. **Test health endpoints manually** during deployment
5. **Log health check results** for debugging
6. **Implement gradual restarts** (exponential backoff) to avoid thundering herd
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This documentation file uses H4 (####) heading levels. According to the project's markdown standards (guideline 1000002), content should be restructured to use only H2 and H3 headings. Consider promoting some sections or reorganizing the hierarchy to eliminate these deeper heading levels.

Copilot generated this review using guidance from repository custom instructions.
Comment thread docs/Cache-Strategy.md
Comment on lines +1 to +387
## Cache Strategy & Implementation

### Overview

Issue Tracker implements a three-tier caching strategy using **Redis** as the distributed cache
backend. Caching improves application performance by storing frequently accessed data, reducing
database load and latency.

### What Is Cached and Why

Caching is applied to operations where:

- Data is **read-heavy** (more reads than writes)
- Response time matters for user experience
- Data does not require strict real-time consistency
- Stale data is acceptable within a defined window

**Data NOT cached**:

- Authentication tokens (use session storage)
- User passwords or sensitive credentials
- Frequently-changing metrics (real-time dashboards)
- Transient error states

### Three-Tier Caching Strategy

#### Tier 1: Query Results (5-Minute TTL)

Stores the results of expensive database queries.

**Use Case**: Listing all issues, fetching user profiles

**TTL**: 5 minutes (`TimeSpan.FromMinutes(5)`)

**Example**:

```csharp
var cacheKey = "issues:all";
var issues = await _cacheService.GetAsync<List<Issue>>(cacheKey);

if (issues is null)
{
// Cache miss: fetch from database
issues = await _issueRepository.GetAllAsync();

// Store in cache for 5 minutes
await _cacheService.SetAsync(cacheKey, issues, TimeSpan.FromMinutes(5));
}

return issues;
```

**Invalidation**: When issues are created, updated, or deleted

#### Tier 2: Output (10-Minute TTL)

Stores rendered or processed output, such as formatted reports or aggregated data.

**Use Case**: Issue statistics, dashboard summaries

**TTL**: 10 minutes (`TimeSpan.FromMinutes(10)`)

**Example**:

```csharp
var cacheKey = "report:issue-summary";
var report = await _cacheService.GetAsync<IssueReport>(cacheKey);

if (report is null)
{
// Cache miss: generate report
report = await _reportService.GenerateSummaryAsync();

// Store in cache for 10 minutes
await _cacheService.SetAsync(cacheKey, report, TimeSpan.FromMinutes(10));
}

return report;
```

**Invalidation**: When underlying data changes or on schedule

#### Tier 3: Session (1-Hour TTL)

Stores user-specific state and preferences.

**Use Case**: User settings, recent searches, filter state

**TTL**: 1 hour (`TimeSpan.FromHours(1)`)

**Example**:

```csharp
var userId = currentUser.Id;
var cacheKey = $"session:user:{userId}:preferences";
var prefs = await _cacheService.GetAsync<UserPreferences>(cacheKey);

if (prefs is null)
{
prefs = new UserPreferences { /* defaults */ };
await _cacheService.SetAsync(cacheKey, prefs, TimeSpan.FromHours(1));
}

return prefs;
```

**Invalidation**: When user updates preferences or session expires

### Using ICacheService

#### Injection

Register `ICacheService` in your service class via constructor injection:

```csharp
public class IssueService
{
private readonly ICacheService _cacheService;
private readonly IIssueRepository _repository;
private readonly ILogger<IssueService> _logger;

public IssueService(
ICacheService cacheService,
IIssueRepository repository,
ILogger<IssueService> logger)
{
_cacheService = cacheService;
_repository = repository;
_logger = logger;
}
}
```

#### Core Operations

**Get from Cache**:

```csharp
public async Task<Issue?> GetIssueByIdAsync(string id)
{
var cacheKey = $"issue:{id}";
var issue = await _cacheService.GetAsync<Issue>(cacheKey);

if (issue is not null)
{
_logger.LogDebug("Cache hit for issue {IssueId}", id);
return issue;
}

// Fetch from database and cache
issue = await _repository.GetByIdAsync(id);
if (issue is not null)
{
await _cacheService.SetAsync(cacheKey, issue, TimeSpan.FromMinutes(5));
}

return issue;
}
```

**Set in Cache**:

```csharp
await _cacheService.SetAsync(
key: "mydata:key",
value: myObject,
expiration: TimeSpan.FromMinutes(5)
);
```

**Remove from Cache**:

```csharp
await _cacheService.RemoveAsync("issue:123");
```

### Cache Key Naming Convention

Use hierarchical, dot-separated keys for clarity and organization.

**Format**: `{domain}:{entity}:{id}:{variant}`

**Examples**:

```
issues:all # All issues (Tier 1)
issues:all:active # Active issues only
issues:list:page:1 # Paginated list, page 1
issue:123 # Single issue by ID
issue:123:comments # Issue with comments
issue:123:activity:full # Full activity audit
report:issue-summary # Issue summary report
session:user:john-doe:preferences # User preferences
session:user:john-doe:recent-search # Recent searches
```

**Benefits**:

- Easy to find related cache entries
- Clear pattern for debugging
- Simplifies bulk invalidation (use prefix matching)

### Cache Invalidation Patterns

#### Pattern 1: Immediate Invalidation (On Write)

Invalidate cache immediately when data changes.

```csharp
public async Task UpdateIssueAsync(string id, UpdateIssueRequest request)
{
// Update in database
var issue = await _repository.UpdateAsync(id, request);

// Invalidate related caches
await _cacheService.RemoveAsync($"issue:{id}");
await _cacheService.RemoveAsync("issues:all");

return issue;
}
```

**Pros**: Data consistency, no stale cache

**Cons**: Cache may become empty frequently, reducing hit rates

#### Pattern 2: Time-Based Expiration (TTL)

Let cache expire naturally after TTL.

```csharp
// Cache expires after 5 minutes automatically
await _cacheService.SetAsync(
"issues:all",
issues,
TimeSpan.FromMinutes(5)
);
```

**Pros**: Simple, less code

**Cons**: Stale data for up to TTL duration

#### Pattern 3: Lazy Invalidation

Combine both: invalidate on critical updates, let others expire naturally.

```csharp
public async Task DeleteIssueAsync(string id)
{
// Critical operation: invalidate immediately
await _cacheService.RemoveAsync($"issue:{id}");
await _cacheService.RemoveAsync("issues:all");

// Delete from database
await _repository.DeleteAsync(id);
}

public async Task GetIssuesAsync()
{
// Non-critical: rely on TTL
var cacheKey = "issues:all";
var issues = await _cacheService.GetAsync<List<Issue>>(cacheKey);

if (issues is null)
{
issues = await _repository.GetAllAsync();
await _cacheService.SetAsync(cacheKey, issues, TimeSpan.FromMinutes(5));
}

return issues;
}
```

**Pros**: Balanced performance and consistency

**Cons**: Requires careful key management

#### Pattern 4: Event-Driven Invalidation

Publish events when data changes; subscribe to invalidate caches.

```csharp
// When an issue is updated
public class IssueUpdatedEvent
{
public string IssueId { get; set; }
}

// Subscriber
public class CacheInvalidationHandler : INotificationHandler<IssueUpdatedEvent>
{
private readonly ICacheService _cache;

public async Task Handle(IssueUpdatedEvent notification, CancellationToken ct)
{
await _cache.RemoveAsync($"issue:{notification.IssueId}");
await _cache.RemoveAsync("issues:all");
}
}
```

**Pros**: Decoupled, scales well with many cache keys

**Cons**: Requires event infrastructure (MediatR, etc.)

### When NOT to Cache

**Security Data**:

- Passwords, API keys, tokens (store in session only)

**Frequently-Changing Data**:

- Real-time metrics, stock prices, user counts

**Large Objects**:

- Videos, files (use CDN or object storage instead)

**User-Specific Data**:

- Sensitive information that must not leak between users (be careful with key scoping)

**Data with Strict Consistency Requirements**:

- Financial transactions, critical operations

### Monitoring Cache Performance

Monitor cache hit/miss rates to optimize TTLs:

```csharp
logger.LogInformation(
"Cache operation: {CacheKey}, Status: {Status}",
cacheKey,
hitOrMiss
);
```

Check logs for patterns:

- High hit rate on Tier 1 (Query Results) = good strategy
- Low hit rate = TTL too short or keys not reused
- Stale data complaints = TTL too long

### Cache Serialization

`ICacheService` uses `System.Text.Json` for serialization. Ensure cached objects are JSON-serializable:

- Use `[JsonPropertyName]` for property mapping if needed
- Avoid circular references
- Use standard .NET types (List, Dictionary, etc.)

**Example**:

```csharp
public class Issue
{
[JsonPropertyName("id")]
public string Id { get; set; }

[JsonPropertyName("title")]
public string Title { get; set; }
}

// This will cache/deserialize correctly
await _cache.SetAsync("issue:1", issue, TimeSpan.FromMinutes(5));
```

### Error Handling

`ICacheService` logs serialization errors and removes corrupted entries:

```csharp
try
{
var result = await _cache.GetAsync<Issue>(key);
}
catch (JsonException ex)
{
logger.LogWarning(ex, "Failed to deserialize cached value");
// Entry is automatically removed; next request fetches fresh data
}
```

Your code does not need explicit error handling for cache operations.
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This documentation file uses H4 (####) heading levels. According to the project's markdown standards (guideline 1000002), content should be restructured to use only H2 and H3 headings. Consider promoting some sections or reorganizing the hierarchy to eliminate these deeper heading levels.

Copilot generated this review using guidance from repository custom instructions.
Comment on lines +1 to +349
## Running Aspire Locally - Quick Start

### Prerequisites

Before running Issue Tracker locally, ensure you have:

- **.NET 10 SDK** (check with `dotnet --version`, minimum: 10.0.100)
- **Docker Desktop** (required for container provisioning)
- **Git** (for cloning the repository)
- **At least 4 GB RAM** available for Docker containers
- **Open ports**: 5000, 5001, 6379, 27017, 18888

### Step 1: Clone the Repository

```bash
git clone https://github.com/mpaulosky/IssueTracker.git
cd IssueTracker
```

### Step 2: Verify Prerequisites

```bash
# Check .NET version
dotnet --version
# Should output: 10.0.x

# Check Docker is running
docker ps
# Should succeed without error
```

If Docker fails, start Docker Desktop and retry.

### Step 3: Restore Dependencies

```bash
dotnet restore
```

This downloads all NuGet packages specified in `Directory.Packages.props`.

**Expected output**:

```
Determining projects to restore...
Restore completed in 1.23 sec for E:\github\IssueTracker\IssueTracker.slnx
```

### Step 4: Start AppHost

```bash
dotnet run --project src/AppHost/AppHost.csproj
```

**Expected output**:

```
Aspire.Hosting[0]
Running on: http://localhost:18888
```

This command starts:

1. **Aspire Orchestrator** - Manages services and containers
2. **MongoDB Container** - Document database (port 27017)
3. **Redis Container** - In-memory cache (port 6379)
4. **Blazor UI** - Web application (ports 5000/5001)

Do **not** close this terminal window while developing.

### Step 5: Verify Services Are Running

#### Option A: Using Aspire Dashboard

Open your browser and navigate to: `http://localhost:18888`

**Dashboard shows**:

- All running services (MongoDB, Redis, UI)
- Health status: `Healthy`, `Degraded`, or `Unhealthy`
- Log streams from each service
- OpenTelemetry traces
- Resource usage

**Expected services**:

- `mongodb` - Status: Healthy (or Degraded/Unhealthy if not ready)
- `redis` - Status: Healthy (or Degraded/Unhealthy if not ready)
- `ui` - Status: Healthy (UI service running)

#### Option B: Using Command Line

```bash
# Check containers are running
docker ps

# Should show:
# - issuetracker-mongodb or my-mongodb
# - issuetracker-redis or redis
# - issuetracker-ui or ui
```

#### Option C: Check Health Endpoint

```bash
# Using curl
curl http://localhost:5000/health

# Using PowerShell
Invoke-WebRequest http://localhost:5000/health | ConvertFrom-Json | ConvertTo-Json -Depth 5
```

**Expected response**:

```json
{
"status": "Healthy",
"checks": {
"mongodb": {
"status": "Healthy",
"description": "MongoDB connection is responsive"
},
"redis": {
"status": "Healthy",
"description": "Redis connection is responsive"
}
}
}
```

### Step 6: Access the Application

#### Web Application

- **HTTP**: `http://localhost:5000`
- **HTTPS**: `https://localhost:5001`

Both URLs serve the Blazor UI. Accept any SSL warnings in your browser (development certificate).

#### Aspire Dashboard

- **URL**: `http://localhost:18888`
- **Features**: Real-time logs, traces, metrics for all services

### Services Reference

| Service | Port | URL | Purpose |
|---------|------|-----|---------|
| Blazor UI | 5000/5001 | `http://localhost:5000` | Issue Tracker web app |
| MongoDB | 27017 | `mongodb://localhost:27017` | Document database |
| Redis | 6379 | `redis://localhost:6379` | Distributed cache |
| Aspire Dashboard | 18888 | `http://localhost:18888` | Monitoring & diagnostics |

### Accessing Services During Development

#### MongoDB Connection

From within the Blazor application, MongoDB is accessed via the connection string configured in
AppHost:

```csharp
var mongodb = builder.AddMongoDB("mongodb");
```

The UI service automatically receives the connection string from Aspire:

```csharp
var ui = builder
.AddProject<Projects.IssueTracker_UI>("ui")
.WithReference(mongodb);
```

**Manual Connection** (for debugging):

```bash
# Using mongosh (MongoDB shell)
mongosh --host localhost --port 27017 -u course -p whatever

# Or from MongoDB Compass: mongodb://course:whatever@localhost:27017
```

#### Redis Connection

Redis is similarly injected into the UI service:

```csharp
var redis = builder.AddRedis("redis");
```

**Manual Connection** (for debugging):

```bash
# Using redis-cli
redis-cli -h localhost -p 6379

# Test connection
redis-cli -h localhost -p 6379 PING
# Should return: PONG
```

### Stopping Services

#### Graceful Shutdown

Press `Ctrl+C` in the terminal running AppHost:

```
^C
Hosting stopped
```

This cleanly shuts down:

1. Blazor UI
2. Redis container
3. MongoDB container
4. Aspire orchestrator

All data is persisted to Docker volumes and restored on next startup.

#### Force Shutdown

If Ctrl+C does not work:

```bash
# PowerShell: Find and stop the AppHost process
Get-Process -Name "dotnet" | Where-Object {$_.CommandLine -like "*AppHost*"} | Stop-Process -Force

# Or manually stop Docker containers
docker stop $(docker ps -q)
```

### Clearing Data for Fresh Start

To reset all development data and start clean:

#### Option 1: Just Clear Data Volumes

```bash
# Identify Docker volumes
docker volume ls | grep issuetracker

# Remove specific volumes
docker volume rm issuetracker-mongodb_data issuetracker-redis_data

# Restart AppHost (will recreate empty volumes)
dotnet run --project src/AppHost/AppHost.csproj
```

#### Option 2: Complete Docker Reset

```bash
# Stop all containers
docker stop $(docker ps -q)

# Remove all Issue Tracker containers
docker ps -a | grep issuetracker | awk '{print $1}' | xargs docker rm

# Remove all volumes
docker volume rm $(docker volume ls | grep issuetracker | awk '{print $2}')

# Restart AppHost
dotnet run --project src/AppHost/AppHost.csproj
```

**Warning**: This removes all development data. Use only for testing.

### Common Issues

#### Issue: "Aspire dashboard not accessible (Connection refused)"

**Symptoms**: Cannot reach `http://localhost:18888`

**Solutions**:

1. Check AppHost is still running (terminal should show active process)
2. Verify port 18888 is not blocked by firewall
3. Restart AppHost

#### Issue: "MongoDB connection timeout"

**Symptoms**: Health check shows MongoDB unhealthy

**Solutions**:

```bash
# Check MongoDB container logs
docker logs <mongodb-container-id> --tail 20

# Restart MongoDB
docker restart <mongodb-container-id>

# Or restart AppHost
dotnet run --project src/AppHost/AppHost.csproj
```

#### Issue: "Redis connection refused"

**Symptoms**: Cache operations fail, `/health` shows Redis unhealthy

**Solutions**:

```bash
# Verify Redis is running
docker ps | grep redis

# Check Redis logs
docker logs <redis-container-id>

# Test Redis manually
redis-cli -h localhost -p 6379 PING

# Restart Redis
docker restart <redis-container-id>
```

#### Issue: "Port 5000 already in use"

**Symptoms**: AppHost fails to start, error: `Address already in use`

**Solutions**:

```bash
# Find process using port 5000 (PowerShell)
Get-NetTCPConnection -LocalPort 5000 -ErrorAction SilentlyContinue | Select-Object OwningProcess
tasklist /FI "PID eq <PID>"

# Kill the process (replace PID)
Stop-Process -Id <PID> -Force

# Or find and stop on port 6379 or 27017 if those are conflicting
```

### Performance Tips

1. **Allocate sufficient Docker resources** (Settings → Resources: 4 GB RAM, 2 CPUs minimum)
2. **Use HTTPS for production testing** (`https://localhost:5001`)
3. **Monitor Aspire dashboard** for slow services
4. **Check health endpoint** if services feel unresponsive
5. **Clear volumes periodically** to prevent disk clutter

### Next Steps

After AppHost is running:

- Read [Cache-Strategy.md](Cache-Strategy.md) to understand caching
- Review [Health-Checks.md](Health-Checks.md) for monitoring
- Check [Aspire.md](Aspire.md) for architecture details
- See [Production-Readiness.md](Production-Readiness.md) for deployment guidance
Copy link

Copilot AI Feb 17, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This documentation file uses H4 (####) heading levels. According to the project's markdown standards (guideline 1000002), content should be restructured to use only H2 and H3 headings. Consider promoting some sections or reorganizing the hierarchy to eliminate these deeper heading levels.

Copilot generated this review using guidance from repository custom instructions.
@mpaulosky mpaulosky merged commit dd030ec into main Feb 17, 2026
2 of 3 checks passed
@mpaulosky mpaulosky deleted the squad/aspire-redis-cache branch February 17, 2026 18:40
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[BUG] Tests Failing

2 participants