diff --git a/.github/skills/auth-and-identity/SKILL.md b/.github/skills/auth-and-identity/SKILL.md new file mode 100644 index 0000000..d14f313 --- /dev/null +++ b/.github/skills/auth-and-identity/SKILL.md @@ -0,0 +1,174 @@ +--- +name: auth-and-identity +description: Use this skill when asked to set up authentication, authorization, or identity in a Cratis Arc project — backend, frontend, or both. Covers implementing identity providers, protecting commands/queries with authorization attributes, integrating Microsoft Identity Platform, connecting backend identity to React frontends, and local development testing with generated principals. Also covers customizing IProvideIdentityDetails for enriching identity from databases, blocking users, multi-tenant identity, user preferences, and modifying identity at runtime. Use this whenever the user mentions auth, login, roles, permissions, identity details, user context, protecting endpoints, identity provider customization, or anything related to who the user is and what they can access. +--- + +# Auth & Identity in Cratis Arc + +This skill covers the full auth and identity stack in a Cratis Arc application. Read the relevant reference files below for detailed API usage. + +> **Read the relevant instruction files first.** This skill references concepts from the core copilot instructions in `.github/copilot-instructions.md`. If you need details on vertical slices, commands, queries, or proxy generation, consult those instructions. + +## Architecture Overview + +Identity and auth in Arc follow a cookie-first, convention-based pattern: + +``` +Frontend (React) Backend (ASP.NET Core) +───────────────── ────────────────────── + app.MapIdentityProvider() + └── useIdentity() hook └── GET /.cratis/me + │ │ + ├─ 1. Read .cratis-identity cookie │ + └─ 2. If no cookie → fetch /.cratis/me │ + ▼ + AuthenticationMiddleware + └── IAuthenticationHandler[] + │ sets HttpContext.User + ▼ + IIdentityProviderResultHandler + └── IProvideIdentityDetails.Provide() + │ + ▼ + IdentityProviderResult + → JSON response + → .cratis-identity cookie (base64) + +Authorization: + [Authorize] / [Roles("Admin")] / [AllowAnonymous] + └── AuthorizationEvaluator checks per command/query +``` + +**Key design decisions:** +- The `.cratis-identity` cookie is `HttpOnly=false` so the frontend JavaScript can read it directly — no extra HTTP call needed on page load. +- Identity details are base64-encoded JSON in the cookie, automatically decoded by the frontend `IdentityProvider`. +- Only one `IProvideIdentityDetails` implementation is allowed per application (auto-discovered). If none exists, a default provider grants access to everyone. +- Cratis has its own `[Authorize]`, `[AllowAnonymous]`, and `[Roles]` attributes in `Cratis.Arc.Authorization` — these are distinct from ASP.NET Core's and are evaluated by `AuthorizationEvaluator` in the command and query pipeline. + +--- + +## Decision Tree — Which Reference to Read + +Use this decision tree to determine which reference file(s) to read based on the user's task: + +| User wants to... 
| Read this reference |
+|---|---|
+| Add identity details to their app | [references/backend-identity.md](references/backend-identity.md) |
+| Customize `IProvideIdentityDetails` (enrich from DB, block users, multi-tenant, preferences) | [references/backend-identity.md](references/backend-identity.md) |
+| Modify identity at runtime (stateless selections, `ModifyDetails`) | [references/backend-identity.md](references/backend-identity.md) |
+| Use Azure AD / Entra ID / Microsoft Identity | [references/authentication.md](references/authentication.md) |
+| Write a custom authentication handler (API key, JWT, etc.) | [references/authentication.md](references/authentication.md) |
+| Protect commands or queries with roles | [references/authorization.md](references/authorization.md) |
+| Set up `[Authorize]`, `[AllowAnonymous]`, or `[Roles]` | [references/authorization.md](references/authorization.md) |
+| Consume identity in React | [references/frontend.md](references/frontend.md) |
+| Use identity in MVVM or vanilla TypeScript | [references/frontend.md](references/frontend.md) |
+| Test identity locally without Azure | [references/local-development.md](references/local-development.md) |
+| Full-stack setup (backend + frontend) | Read all references in order |
+
+**For full-stack tasks, read in this order:**
+
+1. `references/backend-identity.md` — identity provider and startup
+2. `references/authentication.md` — how users are authenticated
+3. `references/authorization.md` — protecting commands and queries
+4. `references/frontend.md` — consuming identity in the UI
+5. `references/local-development.md` — testing without real infrastructure
+
+---
+
+## Quick-Start: Full-Stack Identity Setup
+
+This is the minimum checklist for an application with identity. Read the reference files for details on each step.
+
+### Backend
+
+1. **Details record**: Define a C# record for application-specific user information
+2. **Identity provider**: Implement `IProvideIdentityDetails<TDetails>` (auto-discovered, one per app)
+3. **Startup**: Call `app.MapIdentityProvider()` to register `GET /.cratis/me`
+4. **Authentication**: Add `AddMicrosoftIdentityPlatformIdentityAuthentication()` or implement `IAuthenticationHandler`
+5. **Authorization**: Add `[Authorize]` / `[Roles]` / `[AllowAnonymous]` attributes from `Cratis.Arc.Authorization` to commands and queries
+
+### Frontend
+
+6. **Provider**: Wrap the app root with `<IdentityProvider>` from `@cratis/arc.react/identity`
+7. **Hook**: Use `useIdentity()` to access identity anywhere in the component tree
+8. **Roles**: Use `identity.isInRole('Admin')` for UI-level role gating
+
+### Proxy Generation
+
+9. Using `IProvideIdentityDetails<TDetails>` (generic) enables automatic TypeScript type generation at `dotnet build` time — the generated types can be imported in the frontend for end-to-end type safety.
+
+---
+
+## Critical Rules
+
+These rules are frequently violated — always enforce them:
+
+1. **One identity provider per app**: Only one `IProvideIdentityDetails` implementation is allowed. Registering more than one throws `MultipleIdentityDetailsProvidersFound`.
+2. **Use Cratis attributes, not ASP.NET Core's**: `[Authorize]`, `[Roles]`, `[AllowAnonymous]` must come from `Cratis.Arc.Authorization`, not `Microsoft.AspNetCore.Authorization`.
+3. **Never combine `[Authorize]` and `[AllowAnonymous]` on the same target**: This throws `AmbiguousAuthorizationLevel`.
+4. **Prefer the generic interface**: Use `IProvideIdentityDetails<TDetails>` over the non-generic `IProvideIdentityDetails` to enable proxy generation.
+5. **Auto-discovery**: Both `IProvideIdentityDetails` and `IAuthenticationHandler` implementations are auto-discovered — no DI registration needed.
+6. **Frontend role checks are UX, not security**: `isInRole()` on the frontend hides UI elements. The backend `[Roles]` attribute is the actual security boundary.
+7. **Build before frontend**: TypeScript proxy types for identity details are generated by `dotnet build`. The backend must compile before the frontend can import them.
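+
+Rule 7 in practice: a minimal sketch of importing a generated details type. The import path is an assumption (it depends on your configured proxy output folder); the hook and generic usage follow the frontend reference:
+
+```tsx
+// Hypothetical generated path, produced by `dotnet build` via the proxy generator
+import { UserDetails } from './api/identity/UserDetails';
+import { useIdentity } from '@cratis/arc.react/identity';
+
+export const Greeting = () => {
+    // The generic parameter gives typed access to identity.details
+    const identity = useIdentity<UserDetails>();
+    return <span>Hello, {identity.name}</span>;
+};
+```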
+
+---
+
+## Common Code Patterns
+
+### Protecting a command with roles
+
+```csharp
+[Command]
+[Roles("Admin")]
+public record PromoteUser(UserId Id)
+{
+    public void Handle(IUserService users) => users.Promote(Id);
+}
+```
+
+### Conditionally rendering UI based on roles
+
+```tsx
+const identity = useIdentity();
+
+// AdminPanel / MemberPanel are placeholder components for illustration
+return identity.isInRole('Admin')
+    ? <AdminPanel />
+    : <MemberPanel />;
+```
+
+### Modifying identity at runtime (stateless selections)
+
+```csharp
+public class SetDepartment(IIdentityProviderResultHandler identityHandler)
+{
+    public async Task Handle(string department) =>
+        await identityHandler.ModifyDetails<UserDetails>(
+            details => details with { SelectedDepartment = department });
+}
+```
+
+---
+
+## Reference Documentation
+
+### Skill references (detailed implementation guidance)
+
+- [Backend Identity Provider](references/backend-identity.md) — `IProvideIdentityDetails`, `IdentityProviderContext`, cookie mechanics, proxy generation, `ModifyDetails`
+- [Authentication](references/authentication.md) — `IAuthenticationHandler`, `AuthenticationResult`, Microsoft Identity Platform, combining handlers
+- [Authorization](references/authorization.md) — `[Authorize]`, `[Roles]`, `[AllowAnonymous]`, inheritance rules, fallback policies
+- [Frontend Identity](references/frontend.md) — React `IdentityProvider`, `useIdentity()`, MVVM, core identity, role checking
+- [Local Development](references/local-development.md) — Generating principals, ModHeader, cookie fallback, dev testing
+
+### Project documentation
+
+- [Backend Identity](Documentation/backend/identity.md)
+- [Microsoft Identity](Documentation/backend/asp-net-core/microsoft-identity.md)
+- [ASP.NET Core Authorization](Documentation/backend/asp-net-core/authorization.md)
+- [Core Authentication](Documentation/backend/core/authentication.md)
+- [Core Authorization](Documentation/backend/core/authorization.md)
+- [Command Authorization](Documentation/backend/commands/model-bound/authorization.md)
+- [Query Authorization](Documentation/backend/queries/model-bound/authorization.md)
+- [Proxy Generation for Identity](Documentation/backend/proxy-generation/identity-details.md)
+- [Frontend Core Identity](Documentation/frontend/core/identity.md)
+- [Frontend React Identity](Documentation/frontend/react/identity.md)
+- [Frontend MVVM Identity](Documentation/frontend/react.mvvm/identity.md)
+- [Generating Principals for Local Dev](Documentation/general/generating-principal.md)
diff --git a/.github/skills/auth-and-identity/references/authentication.md b/.github/skills/auth-and-identity/references/authentication.md
new file mode 100644
index 0000000..aa9a4ef
--- /dev/null
+++ b/.github/skills/auth-and-identity/references/authentication.md
@@ -0,0 +1,129 @@
+# Authentication
+
+This reference covers implementing custom authentication handlers and integrating Microsoft Identity Platform.
+
+## Authentication Pipeline
+
+Arc's authentication system is built around `IAuthenticationHandler` in `Cratis.Arc.Authentication`. Handlers are called in sequence until one succeeds or fails:
+
+1. Each registered handler is called in order
+2. If a handler returns `Succeeded` → request is authenticated, pipeline stops
+3. If a handler returns `Failed` → request is rejected (401), pipeline stops
+4. If a handler returns `Anonymous` → try the next handler
+5. If all handlers return `Anonymous` → request is unauthenticated
+
+## `AuthenticationResult` Outcomes
+
+| Method | Meaning | When to Use |
+|--------|---------|-------------|
+| `AuthenticationResult.Anonymous` | Handler doesn't apply | No relevant credentials found — let other handlers try |
+| `AuthenticationResult.Succeeded(principal)` | Authentication succeeded | Valid credentials → return a `ClaimsPrincipal` |
+| `AuthenticationResult.Failed(reason)` | Authentication explicitly failed | Credentials present but invalid → return 401 |
+
+## Custom Authentication Handler
+
+Handlers are **auto-discovered** — implement `IAuthenticationHandler` and Arc registers it. No manual DI wiring needed.
+
+```csharp
+using System.Security.Claims;
+using Cratis.Arc.Authentication;
+using Cratis.Arc.Http;
+
+public class ApiKeyAuthenticationHandler(IApiKeyValidator validator) : IAuthenticationHandler
+{
+    public async Task<AuthenticationResult> HandleAuthentication(IHttpRequestContext context)
+    {
+        if (!context.Headers.TryGetValue("X-API-Key", out var apiKey))
+            return AuthenticationResult.Anonymous;
+
+        if (!await validator.IsValid(apiKey))
+            return AuthenticationResult.Failed(new AuthenticationFailureReason("Invalid API key"));
+
+        var claims = new[]
+        {
+            new Claim(ClaimTypes.Name, "API Client"),
+            new Claim(ClaimTypes.AuthenticationMethod, "ApiKey")
+        };
+
+        var principal = new ClaimsPrincipal(new ClaimsIdentity(claims, "ApiKey"));
+        return AuthenticationResult.Succeeded(principal);
+    }
+}
+```
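+
+From the caller's side this handler only requires the header to be present; a quick client-side sketch (the endpoint URL is illustrative, not part of Arc):
+
+```typescript
+// Any Arc endpoint works the same way; the handler inspects the header, not the route
+const response = await fetch('/api/reports/generate', {
+    method: 'POST',
+    headers: {
+        'X-API-Key': 'my-api-key', // picked up by ApiKeyAuthenticationHandler
+        'Content-Type': 'application/json'
+    },
+    body: JSON.stringify({})
+});
+
+// A 401 here means the handler returned Failed (key present but invalid)
+console.log(response.status);
+```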
+
+### Common Patterns
+
+**Bearer Token:**
+
+```csharp
+public class BearerTokenAuthenticationHandler : IAuthenticationHandler
+{
+    public async Task<AuthenticationResult> HandleAuthentication(IHttpRequestContext context)
+    {
+        if (!context.Headers.TryGetValue("Authorization", out var header))
+            return AuthenticationResult.Anonymous;
+
+        if (!header.StartsWith("Bearer ", StringComparison.OrdinalIgnoreCase))
+            return AuthenticationResult.Anonymous;
+
+        var token = header["Bearer ".Length..].Trim();
+
+        // validate token, build ClaimsPrincipal...
+        // Placeholder principal for the sketch; replace with real token validation
+        var principal = new ClaimsPrincipal(
+            new ClaimsIdentity([new Claim(ClaimTypes.Name, "token-user")], "Bearer"));
+
+        return AuthenticationResult.Succeeded(principal);
+    }
+}
+```
+
+**Custom Header:**
+
+```csharp
+public class CustomHeaderAuthenticationHandler : IAuthenticationHandler
+{
+    public Task<AuthenticationResult> HandleAuthentication(IHttpRequestContext context)
+    {
+        if (!context.Headers.TryGetValue("X-User-ID", out var userId))
+            return Task.FromResult(AuthenticationResult.Anonymous);
+
+        var claims = new[] { new Claim(ClaimTypes.NameIdentifier, userId) };
+        var principal = new ClaimsPrincipal(new ClaimsIdentity(claims, "CustomHeader"));
+        return Task.FromResult(AuthenticationResult.Succeeded(principal));
+    }
+}
+```
+
+### Best Practices
+
+- **Return `Anonymous` when your handler doesn't apply** — never `Failed` just because your header is missing
+- **Provide clear failure reasons** — `AuthenticationFailureReason("API key is expired")`
+- **Dependency injection works** — handlers can take constructor dependencies
+- **Handle exceptions gracefully** — catch validation errors and return `Failed` with a reason
+
+## Microsoft Identity Platform
+
+For Azure-hosted apps using Azure AD / Entra ID, Arc provides built-in support:
+
+```csharp
+builder.Services.AddMicrosoftIdentityPlatformIdentityAuthentication();
+```
+
+This registers an ASP.NET Core `AuthenticationHandler` that reads Azure-provided headers:
+
+| Header | Description |
+|--------|-------------|
+| `x-ms-client-principal` | Base64-encoded Microsoft Client Principal token |
+| `x-ms-client-principal-id` | User's unique ID from Azure AD |
+| `x-ms-client-principal-name` | User's display name |
+
+These headers are automatically set by Azure Container Apps, Web Apps, and Static Web Apps. You also need:
+
+```csharp
+var app = builder.Build();
+app.UseAuthentication();
+app.UseAuthorization();
+app.MapIdentityProvider();
+```
+
+## Combining Multiple Handlers
+
+Multiple handlers coexist naturally because `Anonymous` means "I don't handle this request":
+
+1. **ApiKeyAuthenticationHandler** — if `X-API-Key` present, authenticates or rejects. If absent, returns `Anonymous`.
+2. **MicrosoftIdentityPlatformHandler** — checks for `x-ms-client-principal`. If absent, returns `Anonymous`.
+3. If all return `Anonymous` → request is unauthenticated.
diff --git a/.github/skills/auth-and-identity/references/authorization.md b/.github/skills/auth-and-identity/references/authorization.md
new file mode 100644
index 0000000..18a030a
--- /dev/null
+++ b/.github/skills/auth-and-identity/references/authorization.md
@@ -0,0 +1,159 @@
+# Authorization
+
+This reference covers protecting commands and queries with authorization attributes.
+
+## Attributes
+
+Arc uses its own authorization attributes from `Cratis.Arc.Authorization` — these are **distinct from ASP.NET Core's** and are evaluated by `AuthorizationEvaluator` in the command/query pipeline.
+
+| Attribute | Purpose |
+|-----------|---------|
+| `[Authorize]` | Require authentication. Optionally specify `Roles` or `Policy`. |
+| `[Roles("Admin", "Manager")]` | Convenience wrapper — user needs at least **one** of the listed roles. |
+| `[AllowAnonymous]` | Bypass authorization. Useful with fallback policies. |
+
+## On Model-Bound Commands
+
+```csharp
+[Command]
+[Roles("Admin", "Editor")]
+public record DeleteArticle(ArticleId Id)
+{
+    public void Handle(IArticleService articles) => articles.Delete(Id);
+}
+```
+
+When authorization fails, `Handle()` is **never called**. Check `CommandResult.IsAuthorized`:
+
+```csharp
+var result = await commandPipeline.Execute(new DeleteArticle(articleId));
+if (!result.IsAuthorized)
+{
+    // User lacked required role — command was not executed
+}
+```
+
+## On Model-Bound Queries
+
+Authorization applies at both class and method level:
+
+```csharp
+[ReadModel]
+[Authorize]
+public record DebitAccount(AccountId Id, AccountName Name, decimal Balance)
+{
+    [Roles("Admin")]
+    public static IEnumerable<DebitAccount> GetAllAccounts(
+        IMongoCollection<DebitAccount> collection) =>
+        collection.Find(_ => true).ToList();
+
+    [Roles("Manager")]
+    public static IEnumerable<DebitAccount> GetHighValueAccounts(
+        IMongoCollection<DebitAccount> collection) =>
+        collection.Find(a => a.Balance > 50000).ToList();
+
+    [AllowAnonymous]
+    public static int GetTotalCount(IMongoCollection<DebitAccount> collection) =>
+        (int)collection.CountDocuments(_ => true);
+}
+```
+
+## Inheritance Rules
+
+| Scenario | Result |
+|----------|--------|
+| `[Authorize]` on type | All methods require authentication |
+| `[Roles]` on type | All methods require those roles |
+| `[AllowAnonymous]` on type | All methods allow anonymous access |
+| Method-level attribute present | **Overrides** type-level attribute |
+| Both `[Authorize]` and `[AllowAnonymous]` on same target | **Error** — throws `AmbiguousAuthorizationLevel` |
+
+Method-level always takes precedence:
+
+- Methods **without** authorization attributes inherit the class-level attribute
+- Methods **with** `[Roles(...)]` override the class-level attribute
+- Methods **with** `[AllowAnonymous]` completely bypass authorization
+
+## Fallback Policy (Secure by Default)
+
+Make all endpoints require authentication unless explicitly opted out:
+
+```csharp
+builder.Services.AddAuthorizationBuilder()
+    .SetFallbackPolicy(new AuthorizationPolicyBuilder()
+        .RequireAuthenticatedUser()
+        .Build());
+```
+
+With this, use `[AllowAnonymous]` to make specific endpoints public:
+
+```csharp
+[Command]
+[AllowAnonymous]
+public record GetPublicCatalog()
+{
+    public Catalog Handle(ICatalogService catalog) => catalog.GetPublic();
+}
+```
+
+### Default Policy vs Fallback Policy
+
+| Policy | Applied When |
+|--------|-------------|
+| **Default Policy** | `[Authorize]` is used without parameters |
+| **Fallback Policy** | No authorization attribute is present at all |
+
+## Policy-Based Authorization
+
+For complex scenarios:
+
+```csharp
+[Command]
+[Authorize(Policy = "RequireElevatedAccess")]
+public record PerformSensitiveOperation(string Data)
+{
+    public void Handle(ISensitiveService service) => service.Execute(Data);
+}
+```
+
+Define the policy in startup:
+
+```csharp
+builder.Services.AddAuthorization(options =>
+{
+    options.AddPolicy("RequireElevatedAccess", policy =>
+        policy.RequireAssertion(context =>
+            context.User.IsInRole("Admin") ||
+            context.User.HasClaim("elevated", "true")));
+});
+```
+
+## Custom Authorization Logic in Handlers
+
+For domain-specific authorization beyond attributes:
+
+```csharp
+[Authorize]
+public record UpdateOrder(OrderId Id, string Data)
+{
+    public async Task<CommandResult> Handle(
+        IOrderRepository orders, CommandContext context)
+    {
+        var userId = context.User.FindFirst(ClaimTypes.NameIdentifier)?.Value;
+        var order = await orders.GetById(Id);
+
+        if (order.OwnerId != userId)
+            return CommandResult.Forbidden(context.CorrelationId, "Can only update own orders");
+
+        order.Update(Data);
+        await orders.Save(order);
+        return CommandResult.Success;
+    }
+}
+```
+
+## Authorization Results
+
+| Scenario | HTTP Status |
+|----------|-------------|
+| Not authenticated | 401 Unauthorized |
+| Authenticated but wrong role | 403 Forbidden |
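+
+On the frontend these two statuses are worth distinguishing when calling endpoints directly; a hedged sketch (the URL and `showToast` helper are illustrative):
+
+```typescript
+const response = await fetch('/api/articles/delete', { method: 'POST', body: JSON.stringify({ id }) });
+
+if (response.status === 401) {
+    // Not authenticated; send the user to sign in
+    window.location.href = '/login';
+} else if (response.status === 403) {
+    // Authenticated, but missing the required role
+    showToast('You do not have permission to do this.'); // hypothetical helper
+}
+```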
diff --git a/.github/skills/auth-and-identity/references/backend-identity.md b/.github/skills/auth-and-identity/references/backend-identity.md
new file mode 100644
index 0000000..f5ba162
--- /dev/null
+++ b/.github/skills/auth-and-identity/references/backend-identity.md
@@ -0,0 +1,379 @@
+# Backend Identity Provider
+
+This reference covers implementing `IProvideIdentityDetails` to enrich user identity with application-specific details, and what you can use it for.
+
+## What It Is
+
+`IProvideIdentityDetails` is the single extension point where you transform raw authentication claims into rich, application-specific user information. The object you return becomes the `details` field in the frontend identity — available via `useIdentity().details` in React.
+
+Think of it as the bridge between "who the identity provider says this person is" (claims) and "what our application knows about this person" (department, permissions, preferences, tenant, avatar URL, etc.).
+
+## How It Works
+
+When a user hits `GET /.cratis/me`, Arc:
+
+1. Extracts claims from the authenticated `ClaimsPrincipal` (`sub` → `Id`, `Identity.Name` → `Name`, all claims → `Claims`)
+2. Builds an `IdentityProviderContext` with user ID, name, and claims
+3. Calls your `IProvideIdentityDetails.Provide()` method
+4. Serializes the result as JSON and sets the `.cratis-identity` cookie (base64-encoded, `HttpOnly=false` so frontend JS can read it)
+5. Returns the result as JSON in the HTTP response
+
+The `Details` object you return can be **any shape** — it gets serialized as JSON. The frontend deserializes it into the `TDetails` type parameter you specify.
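+
+Step 4 is why no extra HTTP call is needed on page load: the cookie itself carries the identity. A minimal sketch of decoding it by hand (the frontend `IdentityProvider` normally does this for you; the `decodeURIComponent` call is a defensive assumption about cookie encoding):
+
+```typescript
+// Manual decode of the .cratis-identity cookie, for illustration only
+const raw = document.cookie
+    .split('; ')
+    .find(entry => entry.startsWith('.cratis-identity='))
+    ?.split('=')[1];
+
+if (raw) {
+    const identity = JSON.parse(atob(decodeURIComponent(raw)));
+    console.log(identity); // includes the details payload from IProvideIdentityDetails
+}
+```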
+
+## Mapping the Endpoint
+
+In your application startup:
+
+```csharp
+var app = builder.Build();
+app.UseAuthentication();
+app.UseAuthorization();
+app.MapIdentityProvider();
+```
+
+This registers `GET /.cratis/me` — the well-known route the frontend calls to get identity information.
+
+## The Interface
+
+```csharp
+// Base interface — single method
+public interface IProvideIdentityDetails
+{
+    Task<IdentityDetails> Provide(IdentityProviderContext context);
+}
+
+// Generic variant — adds no methods, just captures TDetails for proxy generation
+public interface IProvideIdentityDetails<TDetails> : IProvideIdentityDetails
+    where TDetails : class;
+```
+
+Always prefer `IProvideIdentityDetails<TDetails>` — the generic type parameter enables automatic TypeScript proxy generation for your details type.
+
+## Basic Implementation
+
+The simplest implementation extracts claims and returns a details record:
+
+```csharp
+public record UserDetails(string Department, string Title, bool IsManager);
+
+public class IdentityDetailsProvider : IProvideIdentityDetails<UserDetails>
+{
+    public Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        var department = context.Claims
+            .FirstOrDefault(c => c.Key == "department").Value ?? "Unknown";
+
+        var details = new UserDetails(department, "Engineer", false);
+        return Task.FromResult(new IdentityDetails(true, details));
+    }
+}
+```
+
+### Rules
+
+- Only **one** `IProvideIdentityDetails` implementation is allowed per app. Multiple implementations throw `MultipleIdentityDetailsProvidersFound`.
+- If none exists, `DefaultIdentityDetailsProvider` is used (grants access to everyone with empty details).
+- Dependency injection works — your provider can take constructor dependencies.
+- The provider is registered as **scoped** (per-request), so it can safely use scoped services.
+- Use `IProvideIdentityDetails<TDetails>` (generic) to enable automatic TypeScript proxy generation for the details type. Always prefer the generic interface.
+
+## `IdentityProviderContext`
+
+This is the input to your `Provide()` method — everything Arc knows about the user from authentication:
+
+```csharp
+public record IdentityProviderContext(
+    IdentityId Id,
+    IdentityName Name,
+    IEnumerable<KeyValuePair<string, string>> Claims);
+```
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `Id` | `IdentityId` | User's unique ID (extracted from the `sub` claim) |
+| `Name` | `IdentityName` | Display name (from `ClaimsPrincipal.Identity.Name`) |
+| `Claims` | `IEnumerable<KeyValuePair<string, string>>` | All claims from the authentication token as key-value pairs |
+
+`IdentityId` and `IdentityName` are `ConceptAs<string>` types with `.Empty` sentinels.
+
+### Working with Claims
+
+Claims contain everything the authentication handler extracted from the token. Common claim keys:
+
+| Claim Key | Description | Example Value |
+|-----------|-------------|---------------|
+| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier` | User ID | `abc123` |
+| `name` | Display name | `Jane Doe` |
+| `email` or `preferred_username` | Email | `jane@contoso.com` |
+| `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` | Role | `Admin` |
+| Custom claims | Anything your IdP adds | `department`, `tenant_id`, etc. |
+
+```csharp
+// Extract a specific claim
+var email = context.Claims.FirstOrDefault(c => c.Key == "preferred_username").Value;
+
+// Extract all roles
+var roles = context.Claims
+    .Where(c => c.Key == "http://schemas.microsoft.com/ws/2008/06/identity/claims/role")
+    .Select(c => c.Value);
+
+// Check if a claim exists
+var hasDepartment = context.Claims.Any(c => c.Key == "department");
+```
+
+## `IdentityDetails`
+
+The return type from your `Provide()` method:
+
+```csharp
+public record IdentityDetails(bool IsUserAuthorized, object Details);
+```
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `IsUserAuthorized` | `bool` | Whether the user is allowed into the application. `false` → HTTP 403. |
+| `Details` | `object` | Application-specific details payload (serialized as JSON) |
+
+## What You Can Use It For
+
+### 1. Enriching Identity from a Database
+
+The most common use case — look up the user in your own database and add application-specific information:
+
+```csharp
+public record UserDetails(
+    UserId UserId,
+    string Department,
+    string Title,
+    string AvatarUrl,
+    bool IsManager);
+
+public class IdentityDetailsProvider(IMongoCollection<User> users) : IProvideIdentityDetails<UserDetails>
+{
+    public async Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        var user = await users.Find(u => u.ExternalId == context.Id.Value).FirstOrDefaultAsync();
+
+        if (user is null)
+        {
+            // Unknown user — still allow access, but with empty details
+            return new IdentityDetails(true, new UserDetails(
+                UserId.Empty, "Unknown", "Unknown", string.Empty, false));
+        }
+
+        var details = new UserDetails(
+            user.Id,
+            user.Department,
+            user.Title,
+            user.AvatarUrl,
+            user.IsManager);
+
+        return new IdentityDetails(true, details);
+    }
+}
+```
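+
+For orientation, the record above reaches the browser as plain JSON in `identity.details`; a short sketch of the consuming side (the generated import path is an assumption; it depends on your proxy output configuration):
+
+```tsx
+import { useIdentity } from '@cratis/arc.react/identity';
+// Hypothetical generated path for the UserDetails proxy type
+import { UserDetails } from '../api/identity/UserDetails';
+
+export const ProfileBadge = () => {
+    const identity = useIdentity<UserDetails>();
+    // Properties mirror the C# record in camelCase: department, title, avatarUrl, isManager
+    return <img src={identity.details.avatarUrl} alt={identity.details.title} />;
+};
+```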
+
+### 2. Blocking Unauthorized Users (Application-Level Gate)
+
+Return `IsUserAuthorized = false` to completely block a user from the application. The endpoint returns HTTP 403 and the frontend never gets identity. This is different from role-based authorization — it's an application-level gate:
+
+```csharp
+public class IdentityDetailsProvider(IMongoCollection<AllowedUser> allowedUsers) : IProvideIdentityDetails<UserDetails>
+{
+    public async Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        var isAllowed = await allowedUsers
+            .Find(u => u.ExternalId == context.Id.Value)
+            .AnyAsync();
+
+        if (!isAllowed)
+        {
+            // User is authenticated but not authorized for this application
+            return new IdentityDetails(false, new UserDetails());
+        }
+
+        // ... build details normally
+        return new IdentityDetails(true, details);
+    }
+}
+```
+
+Use cases:
+
+- Invite-only applications (only registered users can access)
+- Disabled/suspended user accounts
+- Tenant-specific access control
+
+### 3. Multi-Tenant Identity
+
+Add tenant information to identity so the frontend knows which tenant context the user is in:
+
+```csharp
+public record TenantUserDetails(
+    TenantId TenantId,
+    TenantName TenantName,
+    string Role,
+    IEnumerable<TenantId> AvailableTenants);
+
+public class IdentityDetailsProvider(ITenantService tenants) : IProvideIdentityDetails<TenantUserDetails>
+{
+    public async Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        var userTenants = await tenants.GetTenantsForUser(context.Id.Value);
+        var activeTenant = userTenants.FirstOrDefault();
+
+        if (activeTenant is null)
+        {
+            return new IdentityDetails(false, new TenantUserDetails(
+                TenantId.Empty, TenantName.Empty, string.Empty, []));
+        }
+
+        var details = new TenantUserDetails(
+            activeTenant.Id,
+            activeTenant.Name,
+            activeTenant.UserRole,
+            userTenants.Select(t => t.Id));
+
+        return new IdentityDetails(true, details);
+    }
+}
+```
+
+### 4. Enriching From External APIs
+
+Call external services during identity resolution:
+
+```csharp
+public class IdentityDetailsProvider(
+    IMongoCollection<User> users,
+    IGraphApiClient graphClient) : IProvideIdentityDetails<UserDetails>
+{
+    public async Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        // Combine data from your database and Microsoft Graph
+        var user = await users.Find(u => u.ExternalId == context.Id.Value).FirstOrDefaultAsync();
+        var graphProfile = await graphClient.GetUserProfile(context.Id.Value);
+
+        var details = new UserDetails(
+            user?.Department ?? graphProfile.Department,
+            graphProfile.JobTitle,
+            graphProfile.Photo);
+
+        return new IdentityDetails(true, details);
+    }
+}
+```
+
+### 5. Claims-Only Provider (No Database)
+
+Sometimes all you need is to reshape token claims into an application-friendly structure — no database lookup required:
+
+```csharp
+public record UserDetails(string Email, string Department, IEnumerable<string> Groups);
+
+public class IdentityDetailsProvider : IProvideIdentityDetails<UserDetails>
+{
+    public Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        var email = context.Claims.FirstOrDefault(c => c.Key == "preferred_username").Value ?? string.Empty;
+        var department = context.Claims.FirstOrDefault(c => c.Key == "department").Value ?? "Unknown";
+        var groups = context.Claims
+            .Where(c => c.Key == "groups")
+            .Select(c => c.Value);
+
+        return Task.FromResult(new IdentityDetails(
+            true,
+            new UserDetails(email, department, groups)));
+    }
+}
+```
+
+### 6. User Preferences and Settings
+
+Surface user preferences (theme, language, selected workspace) as identity details so the frontend can use them immediately on load:
+
+```csharp
+public record UserDetails(
+    string Theme,
+    string Language,
+    WorkspaceId ActiveWorkspace);
+
+public class IdentityDetailsProvider(IMongoCollection<UserPreferences> preferences) : IProvideIdentityDetails<UserDetails>
+{
+    public async Task<IdentityDetails> Provide(IdentityProviderContext context)
+    {
+        var prefs = await preferences
+            .Find(p => p.UserId == context.Id.Value)
+            .FirstOrDefaultAsync();
+
+        var details = new UserDetails(
+            prefs?.Theme ?? "light",
+            prefs?.Language ?? "en",
+            prefs?.ActiveWorkspace ?? WorkspaceId.Empty);
+
+        return new IdentityDetails(true, details);
+    }
+}
+```
+
+## Modifying Identity at Runtime
+
+`IIdentityProviderResultHandler` allows modifying identity details during requests — useful for stateless applications tracking user selections without a database round-trip:
+
+```csharp
+public class SetActiveWorkspace(IIdentityProviderResultHandler identityHandler)
+{
+    public async Task Handle(WorkspaceId workspaceId) =>
+        await identityHandler.ModifyDetails<UserDetails>(
+            details => details with { ActiveWorkspace = workspaceId });
+}
+```
+
+This re-generates the identity from the current context, applies the modifier function to the details, and re-writes the cookie. The frontend will see the updated details on the next `identity.refresh()` call or page load.
+
+| Method | Description |
+|--------|-------------|
+| `GenerateFromCurrentContext()` | Build `IdentityProviderResult` from current HTTP context |
+| `Write(result)` | Write result as JSON response + update cookie |
+| `ModifyDetails(modifier)` | Read current cookie, apply modifier, write updated cookie |
+
+### When to Use `ModifyDetails` vs. Database Updates
+
+| Approach | Use When |
+|----------|----------|
+| `ModifyDetails()` | Ephemeral selections (active tenant, selected workspace, UI preferences) that don't need persistence. Updates the cookie in-place. |
+| Database update + `identity.refresh()` | Persistent changes (profile updates, role changes). Update the database, then have the frontend call `refresh()` to pick up the new data. |
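+
+Tying the two rows together from the browser: a hedged sketch, assuming a generated proxy for the `SetActiveWorkspace` command above (the import path is illustrative):
+
+```tsx
+// Hypothetical generated proxy and path
+import { SetActiveWorkspace } from '../api/workspaces/SetActiveWorkspace';
+import { useIdentity } from '@cratis/arc.react/identity';
+
+// Inside a React component
+const identity = useIdentity();
+const [command] = SetActiveWorkspace.use();
+
+const selectWorkspace = async (workspaceId: string) => {
+    command.workspaceId = workspaceId;
+    await command.execute();   // backend calls ModifyDetails and re-writes the cookie
+    await identity.refresh();  // pick up the updated details without a full page load
+};
+```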
+
+## Proxy Generation for Identity Types
+
+When using the generic `IProvideIdentityDetails<TDetails>`, the proxy generator automatically creates TypeScript types for `TDetails` at build time:
+
+1. Define a C# record for your details type
+2. Implement `IProvideIdentityDetails<TDetails>`
+3. Run `dotnet build` — TypeScript interface is generated
+4. Import and use the generated type in your frontend
+
+| Feature | `IProvideIdentityDetails` | `IProvideIdentityDetails<TDetails>` |
+|---------|---------------------------|-------------------------------------|
+| Runtime behavior | Identical | Identical |
+| Proxy generation | No type generated | TypeScript type generated |
+| Frontend type safety | Manual typing required | Automatic type safety |
+
+## Default Behavior (No Provider)
+
+If you don't implement `IProvideIdentityDetails`, Arc uses `DefaultIdentityDetailsProvider`:
+
+```csharp
+public class DefaultIdentityDetailsProvider : IProvideIdentityDetails
+{
+    public Task<IdentityDetails> Provide(IdentityProviderContext context) =>
+        Task.FromResult(new IdentityDetails(true, new { }));
+}
+```
+
+This grants access to everyone with empty details. It's useful for apps that only need authentication (roles) without custom details.
+
+## Multi-Service Considerations
+
+- **Single service**: Implement `IProvideIdentityDetails` directly
+- **Multiple services**: Have your ingress/reverse proxy call each service's `/.cratis/me` and merge results into a single JSON structure for the `.cratis-identity` cookie
+- **Dedicated identity service**: Aggregate identity info from various sources in one service
diff --git a/.github/skills/auth-and-identity/references/frontend.md b/.github/skills/auth-and-identity/references/frontend.md
new file mode 100644
index 0000000..733eb71
--- /dev/null
+++ b/.github/skills/auth-and-identity/references/frontend.md
@@ -0,0 +1,184 @@
+# Frontend Identity
+
+This reference covers consuming identity in React, MVVM, and vanilla TypeScript.
+
+## How the Frontend Gets Identity
+
+The frontend uses a **cookie-first** approach:
+
+1. Check for the `.cratis-identity` cookie (base64-encoded JSON, `HttpOnly=false`)
+2. If cookie exists → decode and use it (no HTTP call needed)
+3. If no cookie → call `GET /.cratis/me` to fetch identity and set the cookie
+4. In development mode, the cookie fallback (`/.cratis/me` call) always works — no ingress simulation needed
+
+## React
+
+### IdentityProvider Context
+
+Wrap your app root with `<IdentityProvider>`:
+
+```tsx
+import { IdentityProvider } from '@cratis/arc.react/identity';
+
+export const App = () => (
+    <IdentityProvider>
+        {/* your app */}
+    </IdentityProvider>
+);
+```
+
+For type-safe details with complex types (e.g., `Guid`), pass a `detailsType` constructor:
+
+```tsx
+import { IdentityProvider } from '@cratis/arc.react/identity';
+import { Guid } from '@cratis/fundamentals';
+
+class UserIdentityDetails {
+    userId: Guid = Guid.empty;
+    firstName: string = '';
+    lastName: string = '';
+}
+
+export const App = () => (
+    <IdentityProvider detailsType={UserIdentityDetails}>
+        {/* your app */}
+    </IdentityProvider>
+);
+```
+
+### `useIdentity()` Hook
+
+Access identity anywhere in your component tree:
+
+```tsx
+import { useIdentity } from '@cratis/arc.react/identity';
+
+type UserDetails = {
+    department: string;
+    title: string;
+};
+
+export const UserProfile = () => {
+    const identity = useIdentity<UserDetails>();
+
+    return (
+        <div>
+            <h2>{identity.name}</h2>
+            <p>Department: {identity.details.department}</p>
+        </div>
+    );
+};
+```
+
+**With default values** (useful for local development when the cookie might not exist):
+
+```tsx
+const identity = useIdentity<UserDetails>({
+    department: '[N/A]',
+    title: '[N/A]'
+});
+```
+
+**With a constructor for type-safe deserialization** (uses `JsonSerializer.deserializeFromInstance()` under the hood):
+
+```tsx
+const identity = useIdentity(UserIdentityDetails);
+
+// With default values:
+const identity = useIdentity(UserIdentityDetails, {
+    userId: Guid.empty,
+    firstName: '[N/A]',
+    lastName: '[N/A]'
+});
+```
+
+### Role Checking
+
+```tsx
+const identity = useIdentity();
+
+if (identity.isInRole('Admin')) {
+    // show admin UI
+}
+
+// Or access roles directly
+console.log(identity.roles);
+```
+
+### Refreshing Identity
+
+When identity changes on the backend (e.g., user granted new roles):
+
+```tsx
+const identity = useIdentity();
+const handleRefresh = () => identity.refresh();
+```
+
+This calls `GET /.cratis/me` again and updates both the cookie and context.
+
+### `IIdentity` Shape
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | `string` | Unique ID from identity provider |
+| `name` | `string` | Display name |
+| `roles` | `string[]` | Assigned roles |
+| `details` | `TDetails` | Application-specific details |
+| `isSet` | `boolean` | Whether identity has been loaded |
+| `isInRole(role)` | `(role: string) => boolean` | Check role membership |
+| `refresh()` | `() => Promise<void>` | Re-fetch identity from backend |
+
+## MVVM (tsyringe)
+
+In an MVVM setup, `IIdentityProvider` is automatically registered in the DI container by `Bindings.initialize()`:
+
+```typescript
+import { injectable } from 'tsyringe';
+import { IIdentityProvider } from '@cratis/arc/identity';
+
+type UserDetails = {
+    department: string;
+};
+
+@injectable()
+export class MyViewModel {
+    constructor(private readonly _identityProvider: IIdentityProvider) {}
+
+    async loadUser() {
+        const identity = await this._identityProvider.getCurrent<UserDetails>();
+        console.log(identity.details.department);
+    }
+}
+```
+
+Requires the [MVVM Context](Documentation/frontend/react.mvvm/mvvm-context.md) to be set up.
+
+## Vanilla TypeScript (No Framework)
+
+Use `IdentityProvider` directly:
+
+```typescript
+import { IdentityProvider } from '@cratis/arc/identity';
+
+const identity = await IdentityProvider.getCurrent();
+console.log(identity.name);
+console.log(identity.isInRole('Admin'));
+```
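+
+Identity also combines naturally with route guards. A minimal sketch, assuming React Router is in use; the `RequireRole` component is illustrative, not part of `@cratis/arc.react`:
+
+```tsx
+import React from 'react';
+import { Navigate } from 'react-router-dom';
+import { useIdentity } from '@cratis/arc.react/identity';
+
+// Hypothetical guard component
+export const RequireRole = ({ role, children }: { role: string; children: React.ReactNode }) => {
+    const identity = useIdentity();
+    if (!identity.isSet) return null;                         // identity still loading
+    if (!identity.isInRole(role)) return <Navigate to="/" />; // not allowed; go home
+    return <>{children}</>;
+};
+```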
+
+## Frontend Role-Based UI Pattern
+
+Combine `useIdentity()` with authorization attributes on the backend for defense in depth:
+
+```tsx
+const identity = useIdentity();
+
+// AdminPanel, ManagerPanel, and CommonContent are placeholder components
+return (
+    <div>
+        {identity.isInRole('Admin') && <AdminPanel />}
+        {identity.isInRole('Manager') && <ManagerPanel />}
+        <CommonContent />
+    </div>
+); +``` + +The frontend check is for UX (hiding buttons the user can't use). The backend `[Roles]` attribute is the actual security boundary. diff --git a/.github/skills/auth-and-identity/references/local-development.md b/.github/skills/auth-and-identity/references/local-development.md new file mode 100644 index 0000000..6fed70a --- /dev/null +++ b/.github/skills/auth-and-identity/references/local-development.md @@ -0,0 +1,111 @@ +# Local Development & Testing + +This reference covers simulating authentication and identity in local development environments without real Azure or identity infrastructure. + +## How Identity Works in Development + +When running locally: + +1. No Azure App Service or ingress injects identity headers +2. The `.cratis-identity` cookie will not be set automatically +3. The frontend cookie reader falls back to calling `GET /.cratis/me` +4. The backend returns a default anonymous identity with empty details + +## Generating a Microsoft Client Principal + +Azure App Service Easy Auth injects identity via the `X-MS-CLIENT-PRINCIPAL` header. You can simulate this locally with a browser extension. + +### Step 1: Build the Principal JSON + +```json +{ + "auth_typ": "aad", + "claims": [ + { "typ": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", "val": "user-unique-id" }, + { "typ": "name", "val": "Jane Developer" }, + { "typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", "val": "Admin" }, + { "typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", "val": "Manager" } + ], + "name_typ": "name", + "role_typ": "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" +} +``` + +### Step 2: Base64-Encode It + +**macOS / Linux terminal:** + +```bash +echo -n '{"auth_typ":"aad","claims":[{"typ":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier","val":"user-unique-id"},{"typ":"name","val":"Jane Developer"},{"typ":"http://schemas.microsoft.com/ws/2008/06/identity/claims/role","val":"Admin"}],"name_typ":"name","role_typ":"http://schemas.microsoft.com/ws/2008/06/identity/claims/role"}' | base64 +``` + +**Browser console:** + +```javascript +btoa(JSON.stringify({ + auth_typ: "aad", + claims: [ + { typ: "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", val: "user-unique-id" }, + { typ: "name", val: "Jane Developer" }, + { typ: "http://schemas.microsoft.com/ws/2008/06/identity/claims/role", val: "Admin" } + ], + name_typ: "name", + role_typ: "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" +})); +``` + +### Step 3: Inject the Header + +Use the [ModHeader](https://modheader.com/) browser extension: + +1. Install ModHeader for Chrome/Edge/Firefox +2. Add a request header: + - **Name**: `X-MS-CLIENT-PRINCIPAL` + - **Value**: the base64 string from Step 2 +3. Reload the page — the backend will authenticate the user as if Azure App Service injected the header + +## Custom Identity Details in Development + +If your `IProvideIdentityDetails` implementation enriches identity from a database, that still runs in development. The principal provides the initial `id`, `name`, and `roles`; your enrichment adds extra fields. 
+To test without a database, create a dev-only fallback in your identity provider:
+
+```csharp
+public async Task<IdentityDetails> Provide(IdentityProviderContext context)
+{
+    // In development, if the user ID is unknown, return sensible defaults
+    var details = await _readModelStore.GetOrDefault(context.Id, new UserDetails
+    {
+        Department = "Engineering",
+        Title = "Developer"
+    });
+
+    return new IdentityDetails(true, details);
+}
+```
+
+## Testing Without ModHeader
+
+If you prefer not to install a browser extension, you can also set the cookie directly:
+
+1. Build the identity JSON matching the `.cratis-identity` cookie format
+2. Set it via browser console:
+
+```javascript
+document.cookie = '.cratis-identity=' + btoa(JSON.stringify({
+    id: 'user-id',
+    name: 'Jane Developer',
+    roles: ['Admin'],
+    details: { department: 'Engineering' }
+})) + '; path=/';
+```
+
+3. Reload — the frontend will use this cookie directly without calling the backend
+
+## Important Notes
+
+- The cookie is `HttpOnly=false` by design — the frontend JavaScript must read it
+- The cookie path is `/`
+- In production, the cookie is set by the backend middleware; in development, you can set it manually
+- The `GET /.cratis/me` endpoint always works — it is the fallback for all environments
+- ModHeader header injection simulates the full pipeline (authentication → identity provider → cookie), while direct cookie setting bypasses the backend entirely
diff --git a/.github/skills/cratis-command/SKILL.md b/.github/skills/cratis-command/SKILL.md
new file mode 100644
index 0000000..7afe272
--- /dev/null
+++ b/.github/skills/cratis-command/SKILL.md
@@ -0,0 +1,275 @@
+---
+name: cratis-command
+description: Step-by-step guidance for creating a Cratis Arc command — [Command] record, Handle() method, CommandValidator, proxy generation, and React .use() hook with CommandDialog. Use when adding or creating a command, wiring up a form or button to the backend, working with IEventLog, CommandResult, CommandValidator, CommandDialog, or [Command] attribute.
+---
+
+# Creating a Cratis Command
+
+A command represents a user action that changes state. In Cratis Arc the path is:
+
+```
+[Command] record + Handle() → validator → dotnet build → TypeScript proxy → React .use()
+```
+
+The command record **owns its own handler** — no separate controller class required. Follow the steps in order. Jump to the reference files for deeper detail on any step.
+
+---
+
+## Step 1 — Define the C# command record
+
+A command is a **record decorated with `[Command]`** that contains its own `Handle()` method. No separate controller is needed.
+
+```csharp
+// API/Accounts/OpenDebitAccount.cs
+namespace MyApp.API.Accounts;
+
+using Cratis.Arc.Commands.ModelBound;
+using Cratis.Chronicle.Events;
+
+[Command]
+public record OpenDebitAccount(AccountId AccountId, string Name, OwnerId OwnerId)
+{
+    public DebitAccountOpened Handle() =>
+        new(Name, OwnerId); // Arc appends the returned event; AccountId is the event source
+}
+
+[EventType] // NO arguments — never [EventType("some-guid")]
+public record DebitAccountOpened(string Name, OwnerId OwnerId);
+```
+
+**Rules:**
+
+- `[Command]` attribute is **required** — it makes the type discoverable and the analyzer will warn without it
+- `Handle()` returns the event (or events) to append — Arc's Chronicle integration automatically appends them
+- `[EventType]` takes **no arguments** — the identifier is generated from the type name
+- Name the command as an imperative action — `OpenDebitAccount`, not `OpenDebitAccountCommand`
- Use `ConceptAs<T>` wrappers for all identity types — never raw `Guid` or `string`
+
+```csharp
+// Concepts/AccountId.cs
+using Cratis.Concepts;
+
+public record AccountId(Guid Value) : ConceptAs<Guid>(Value);
+```
+
+### Generating a new ID and returning it
+
+```csharp
+[Command]
+public record RegisterEmployee(string FirstName, string LastName, string Department)
+{
+    // Return (eventSourceId, event) — Arc uses the first element as the event source ID
+    // and sends it back to the client as CommandResult.response
+    public (EmployeeId, EmployeeRegistered) Handle()
+    {
+        var employeeId = new EmployeeId(Guid.NewGuid());
+        return (employeeId, new(FirstName, LastName, Department));
+    }
+}
+```
+
+### Appending multiple events
+
+```csharp
+[Command]
+public record TransferFunds(AccountId FromId, AccountId ToId, decimal Amount)
+{
+    public IEnumerable<object> Handle() =>
+    [
+        new FundsWithdrawn(FromId, Amount),
+        new FundsDeposited(ToId, Amount),
+    ];
+}
+```
+
+---
+
+## Step 2 — Inject dependencies into Handle (when needed)
+
+`Handle()` parameters are injected by DI — use this for constraints or external services:
+
+```csharp
+[Command]
+public record OpenDebitAccount(AccountId AccountId, string Name, OwnerId OwnerId)
+{
+    public async Task<DebitAccountOpened> Handle(
+        IUniqueAccountConstraint constraint)
+    {
+        await constraint.Validate(Name);
+        return new(Name, OwnerId);
+    }
+}
+```
+
+Arc automatically wraps the return in `CommandResult` / `CommandResult<T>`. See `references/command-result.md`.
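+
+On the frontend, the tuple's first element comes back as `result.response`; a short sketch using the generated `RegisterEmployee` proxy (the import path is illustrative):
+
+```tsx
+import { RegisterEmployee } from '../api/Employees/RegisterEmployee'; // hypothetical generated path
+
+// Inside a React component
+const [command] = RegisterEmployee.use();
+command.firstName = 'Jane';
+command.lastName = 'Doe';
+command.department = 'Engineering';
+
+const result = await command.execute();
+if (result.isSuccess) {
+    const employeeId = result.response as string; // Guids arrive as strings
+}
+```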
+
+---
+
+## Step 3 — Add validation (optional but recommended)
+
+FluentValidation rules are extracted by the proxy generator and run **client-side** before the request is sent:
+
+```csharp
+// API/Accounts/OpenDebitAccountValidator.cs
+public class OpenDebitAccountValidator : CommandValidator<OpenDebitAccount>
+{
+    public OpenDebitAccountValidator()
+    {
+        RuleFor(c => c.Name)
+            .NotEmpty().WithMessage("Account name is required")
+            .MaximumLength(100);
+        RuleFor(c => c.OwnerId)
+            .NotEmpty().WithMessage("Owner is required");
+    }
+}
+```
+
+- Extends `CommandValidator<T>` (not `AbstractValidator<T>`) — this makes it discoverable automatically
+- No registration needed
+- Arc also creates a `/validate` endpoint automatically; the frontend `command.validate()` calls it without executing the handler
+
+For simple cases, Data Annotations on the record work too:
+
+```csharp
+public record OpenDebitAccount(
+    [Required] Guid AccountId,
+    [Required, MaxLength(100)] string Name,
+    [Required] Guid OwnerId);
+```
+
+---
+
+## Step 4 — Generate the TypeScript proxy
+
+```bash
+dotnet build
+```
+
+The `Cratis.Arc.ProxyGenerator.Build` MSBuild package runs during the build and writes TypeScript files to the path configured in your `.csproj`:
+
+```xml
+<PropertyGroup>
+    <!-- Property name shown for illustration; check references/proxy-setup.md and the package docs for the exact name -->
+    <ProxiesOutputPath>$(MSBuildThisFileDirectory)../Web/src/api</ProxiesOutputPath>
+</PropertyGroup>
+```
+
+This produces `Web/src/api/Accounts/OpenDebitAccount.ts`. For first-time setup see `references/proxy-setup.md`.
+
+---
+
+## Step 5 — Use the command in React
+
+### Inline form (full control)
+
+```tsx
+import { useState } from 'react';
+import { OpenDebitAccount } from '../api/Accounts/OpenDebitAccount';
+
+export const OpenAccountForm = () => {
+    const [command, setValues] = OpenDebitAccount.use();
+    const [error, setError] = useState('');
+
+    const handleSubmit = async () => {
+        const result = await command.execute();
+        if (result.isSuccess) {
+            onSuccess(result.response); // result.response is Guid if you returned one
+        } else if (!result.isValid) {
+            setError(result.validationResults[0]?.message ?? 'Validation failed');
+        }
+    };
+
+    return (
+        <form onSubmit={e => { e.preventDefault(); handleSubmit(); }}>
+            <input
+                value={command.name}
+                onChange={e => (command.name = e.target.value)}
+                placeholder="Account name"
+            />
+            {error && <span className="error">{error}</span>}
+            <button type="submit">Open account</button>
+        </form>
+    );
+};
+```
+
+**Key properties on the command instance:**
+
+| Property / method | What it does |
+| ----------------- | ------------ |
+| `command.propName` | Get/set the property value |
+| `command.hasChanges` | `true` when any value differs from the initial |
+| `command.execute()` | Send the POST, returns `CommandResult` |
+| `command.validate()` | Call the validate endpoint (no side effects) |
+| `setValues(obj)` | Set multiple properties at once (e.g. from a query result) |
+
+### With initial values (edit scenario)
+
+```tsx
+const [command] = UpdateAccount.use({
+    accountId: account.id,
+    name: account.name,
+});
+```
+
+### Using `CommandDialog` (quickest path for modal forms)
+
+See Step 6 and `references/command-dialog.md`.
+
+---
+
+## Step 6 — Wrap in a CommandDialog (optional)
+
+For modal dialogs, this is the quickest path:
+
+```tsx
+import { useDialog, useDialogContext, DialogResult } from '@cratis/arc.react/dialogs';
+import { CommandDialog } from '@cratis/components/CommandDialog';
+import { InputTextField } from '@cratis/components/CommandForm';
+import { OpenDebitAccount } from '../api/Accounts/OpenDebitAccount';
+
+// --- Dialog component ---
+const OpenAccountDialog = () => {
+    const { closeDialog } = useDialogContext();
+    return (
+        <CommandDialog
+            command={OpenDebitAccount}
+            title="Open account"
+            onConfirm={() => closeDialog(DialogResult.Ok)}
+            onCancel={() => closeDialog(DialogResult.Cancelled)}
+        >
+            <InputTextField<OpenDebitAccount> value={c => c.name} label="Name" />
+        </CommandDialog>
+    );
+};
+
+// --- Parent component ---
+export const AccountsPage = () => {
+    const [OpenAccountDialogWrapper, showOpenAccount] = useDialog(OpenAccountDialog);
+
+    const handleOpen = async () => {
+        const [result] = await showOpenAccount();
+        if (result === DialogResult.Ok) {
+            // command already executed inside the dialog — refresh your data here
+        }
+    };
+
+    return (
+        <>
+            <button onClick={handleOpen}>Open account</button>
+            <OpenAccountDialogWrapper />
+        </>
+    );
+};
+```
+
+`CommandDialog` calls `onConfirm` only after a successful `command.execute()`, so you don't need to check `isSuccess` yourself. See `references/command-dialog.md` for the full props list and edit-dialog patterns.
+
+---
+
+## Reference files
+
+| File | What's in it |
+| ---- | ------------ |
+| `references/command-result.md` | Full `CommandResult` shape, error handling patterns |
+| `references/command-dialog.md` | `CommandDialog` props, CommandForm fields, edit dialogs |
+| `references/validation.md` | FluentValidation, Data Annotations, client-side pre-flight |
+| `references/proxy-setup.md` | First-time proxy generator setup |
diff --git a/.github/skills/cratis-command/evals/evals.json b/.github/skills/cratis-command/evals/evals.json
new file mode 100644
index 0000000..a0d7e29
--- /dev/null
+++ b/.github/skills/cratis-command/evals/evals.json
@@ -0,0 +1,35 @@
+{
+  "skill_name": "cratis-command",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "Create a command to register a new employee with a first name, last name, and department. Duplicate employee names (same first+last) should be prevented. Include everything needed: the event, constraint, validator, and command with Handle().",
+      "expected_output": "A single C# file containing: [EventType] EmployeeRegistered record, UniqueEmployeeName IConstraint, RegisterEmployeeValidator CommandValidator, and [Command] RegisterEmployee record with Handle() that returns (EmployeeId, EmployeeRegistered). Correct namespacing and file header.",
+      "files": [],
+      "assertions": [
+        "Output contains [Command] attribute on a record",
+        "Output contains a Handle() method directly on the command record (not a separate handler class)",
+        "[EventType] attribute has no arguments (no GUID, no string)",
+        "Output contains an IConstraint implementation for uniqueness",
+        "Output contains a CommandValidator using FluentValidation",
+        "Output uses ConceptAs for EmployeeId (not raw Guid)",
+        "File uses file-scoped namespace declaration",
+        "File starts with the Cratis copyright header"
+      ]
+    },
+    {
+      "id": 2,
+      "prompt": "I need a command to change the email address of an existing customer. The command should validate the email format. Show me the full backend slice file.",
+      "expected_output": "A single C# slice file containing: [EventType] CustomerEmailChanged record, a validator checking email format, and [Command] ChangeCustomerEmail record with Handle() returning the event. Uses ConceptAs for ids.",
+      "files": [],
+      "assertions": [
+        "Output contains [Command] attribute on a record",
+        "Handle() is defined directly on the command record",
+        "Output contains CommandValidator with email format validation",
+        "[EventType] attribute has no arguments",
+        "Output uses ConceptAs for identity types (not raw Guid or string)",
+        "File uses file-scoped namespace"
+      ]
+    }
+  ]
+}
diff --git a/.github/skills/cratis-command/references/command-dialog.md b/.github/skills/cratis-command/references/command-dialog.md
new file mode 100644
index 0000000..81f67f7
--- /dev/null
+++ b/.github/skills/cratis-command/references/command-dialog.md
@@ -0,0 +1,140 @@
+# CommandDialog — Reference
+
+`CommandDialog` from `@cratis/components/CommandDialog` is a modal dialog that executes a command when the user confirms.
+
+## How it works
+
+- It creates a command instance internally from the `command` constructor you pass
+- It renders your `CommandForm` fields as children, bound to that instance
+- When the user clicks OK, it calls `command.execute()`
+- `onConfirm` is called **only if** execution succeeds
+- `onCancel` / dismiss closes without executing
+
+---
+
+## Dialog pattern with `useDialog`
+
+```tsx
+import { useDialog, useDialogContext, DialogResult } from '@cratis/arc.react/dialogs';
+import { CommandDialog } from '@cratis/components/CommandDialog';
+import { InputTextField, NumberField } from '@cratis/components/CommandForm';
+import { CreateProduct } from '../api/Products/CreateProduct';
+import { CommandResult } from '@cratis/arc/commands';
+
+// 1. Define the dialog
+const CreateProductDialog = () => {
+    const { closeDialog } = useDialogContext<CommandResult<string>>();
+    return (
+        <CommandDialog
+            command={CreateProduct}
+            title="Create product"
+            okLabel="Create"
+            onConfirm={async (result) => closeDialog(DialogResult.Ok, result)}
+            onCancel={() => closeDialog(DialogResult.Cancelled)}
+        >
+            <InputTextField<CreateProduct> value={c => c.name} label="Name" />
+            <NumberField<CreateProduct> value={c => c.price} label="Price" />
+        </CommandDialog>
+    );
+};
+
+// 2. Use it from the parent
+const [CreateProductDialogWrapper, showCreateProduct] = useDialog(CreateProductDialog);
+
+const handleCreate = async () => {
+    const [dialogResult, commandResult] = await showCreateProduct();
+    if (dialogResult === DialogResult.Ok && commandResult?.isSuccess) {
+        refreshProducts();
+        if (commandResult.response) navigateTo(`/products/${commandResult.response}`);
+    }
+};
+
+// 3. Render the wrapper
+return (
+    <>
+        <button onClick={handleCreate}>Create product</button>
+        <CreateProductDialogWrapper />
+    </>
+);
+```
+
+---
+
+## Edit dialog (pre-populate with existing values)
+
+```tsx
+const EditProductDialog = ({ product }: { product: Product }) => {
+    const { closeDialog } = useDialogContext();
+    return (
+        <CommandDialog
+            command={UpdateProduct}
+            title="Edit product"
+            currentValues={{ name: product.name, price: product.price }}
+            initialValues={{ productId: product.id }}
+            onConfirm={() => closeDialog(DialogResult.Ok)}
+            onCancel={() => closeDialog(DialogResult.Cancelled)}
+        >
+            <InputTextField<UpdateProduct> value={c => c.name} label="Name" />
+            <NumberField<UpdateProduct> value={c => c.price} label="Price" />
+        </CommandDialog>
+    );
+};
+
+// Pass product to the dialog
+const [EditDialogWrapper, showEdit] = useDialog(EditProductDialog);
+await showEdit({ product: selectedProduct });
+```
+
+- `initialValues` sets the baseline for change tracking (values that are not "changes")
+- `currentValues` populates the initial field display
+
+---
+
+## CommandForm field components
+
+All fields come from `@cratis/components/CommandForm`. Always pass the command type as the generic parameter so the `value` accessor is fully typed.
+
+```tsx
+import {
+    InputTextField, // text input
+    NumberField,    // number input
+    CheckboxField,  // boolean toggle
+    DateField,      // date picker
+    DropdownField,  // select from options list
+    TextAreaField,  // multi-line text
+} from '@cratis/components/CommandForm';
+
+<InputTextField<MyCommand> value={c => c.title} label="Title" />
+<NumberField<MyCommand> value={c => c.quantity} label="Qty" min={1} />
+<CheckboxField<MyCommand> value={c => c.isActive} label="Active" />
+<DateField<MyCommand> value={c => c.dueDate} label="Due date" />
+<DropdownField<MyCommand>
+    value={c => c.status}
+    label="Status"
+    options={statusOptions}
+    optionLabel="label"
+    optionValue="value"
+/>
+<TextAreaField<MyCommand> value={c => c.notes} label="Notes" rows={3} />
+```
+
+The `value` prop takes a function `(commandInstance) => property`. This drives both reading the value and writing it back on change.
+
+---
+
+## CommandDialog props
+
+| Prop | Required | Purpose |
+| ---- | -------- | ------- |
+| `command` | ✓ | Command constructor |
+| `title` | ✓ | Dialog header text |
+| `initialValues` | | Values set as the change-tracking baseline |
+| `currentValues` | | Values to pre-populate the fields |
+| `onConfirm` | | Called after successful execution |
+| `onCancel` | | Called when cancelled/dismissed |
+| `okLabel` | | Confirm button text (default: "Ok") |
+| `cancelLabel` | | Cancel button text (default: "Cancel") |
+| `isValid` | | Extra validity gate combining with field validation |
+| `onBeforeExecute` | | Transform command values just before `.execute()` |
+| `onFieldChange` | | Callback on every field change |
+| `buttons` | | Override button set (`DialogButtons` enum or custom node) |
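+
+A common use of `onBeforeExecute` is normalizing values right before they leave the browser. The exact callback signature is an assumption here, so treat this as an illustrative sketch:
+
+```tsx
+<CommandDialog
+    command={CreateProduct}
+    title="Create product"
+    onBeforeExecute={command => {
+        // Trim whitespace the user may have left in the name field
+        command.name = command.name.trim();
+        return command;
+    }}
+    onConfirm={() => closeDialog(DialogResult.Ok)}
+>
+    <InputTextField<CreateProduct> value={c => c.name} label="Name" />
+</CommandDialog>
+```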
diff --git a/.github/skills/cratis-command/references/command-result.md b/.github/skills/cratis-command/references/command-result.md
new file mode 100644
index 0000000..394fae8
--- /dev/null
+++ b/.github/skills/cratis-command/references/command-result.md
@@ -0,0 +1,81 @@
+# CommandResult — Reference
+
+Arc wraps every command response in a `CommandResult` envelope.
+
+## Shape
+
+```ts
+interface CommandResult<T> {
+    isSuccess: boolean;        // true when authorized + valid + no exceptions
+    isAuthorized: boolean;     // false → user lacks permission
+    isValid: boolean;          // false → one or more validators failed
+    validationResults: ValidationResult[];
+    hasExceptions: boolean;    // true → unhandled server exception
+    exceptionMessages: string[];
+    exceptionStackTrace: string;
+    response: T | null;        // present when the backend returned a value
+}
+
+interface ValidationResult {
+    propertyName: string;  // camelCase, matches the command property
+    message: string;
+    severity: string;      // 'Error' | 'Warning' | 'Info'
+}
+```
+
+## Handling all cases
+
+```tsx
+const handleSubmit = async () => {
+    const result = await command.execute();
+
+    if (!result.isAuthorized) {
+        navigate('/login');
+        return;
+    }
+
+    if (!result.isValid) {
+        // Map errors by property for inline display
+        const fieldErrors = result.validationResults.reduce((acc, v) => {
+            acc[v.propertyName] = v.message;
+            return acc;
+        }, {} as Record<string, string>);
+        setErrors(fieldErrors);
+        return;
+    }
+
+    if (result.hasExceptions) {
+        toast.error(result.exceptionMessages.join('\n'));
+        return;
+    }
+
+    onSuccess(result.response); // result.response typed when backend returns a value
+};
+```
+
+## Accessing the returned value
+
+If the backend controller returns a value:
+
+```csharp
+[HttpPost]
+public async Task<Guid> CreateOrder([FromBody] CreateOrder command)
+{
+    var id = Guid.NewGuid();
+    await eventLog.Append(id, new OrderCreated(...));
+    return id;
+}
+```
+
+Then on the frontend:
+
+```ts
+const result = await createOrder.execute();
+if (result.isSuccess) {
+    const newId = result.response as string; // Guids come as strings
+}
+```
+
+## Bypassing the CommandResult wrapper
+
+If you need to return a raw HTTP response (e.g. for file downloads), use `[AspNetResult]` on the controller action. The frontend receives the raw response and the proxy does not include `execute()`.
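+
+A sketch of the bypass, assuming a typical file-download action — the route, `receipts` service, and names are illustrative, not a documented API:
+
+```csharp
+[HttpGet("{id}/receipt")]
+[AspNetResult] // skip the CommandResult envelope and return the raw HTTP response
+public async Task<IActionResult> DownloadReceipt(Guid id)
+{
+    var pdf = await receipts.Render(id); // hypothetical service producing a byte[]
+    return File(pdf, "application/pdf", $"receipt-{id}.pdf");
+}
+```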
diff --git a/.github/skills/cratis-command/references/proxy-setup.md b/.github/skills/cratis-command/references/proxy-setup.md
new file mode 100644
index 0000000..bf061b0
--- /dev/null
+++ b/.github/skills/cratis-command/references/proxy-setup.md
@@ -0,0 +1,71 @@
+# Proxy Generator Setup — Reference
+
+The proxy generator runs as an MSBuild task during `dotnet build` and produces TypeScript interfaces and classes for every command and query it finds.
+
+---
+
+## Install the NuGet package
+
+Add to every `.csproj` that contains controllers, commands, or queries:
+
+```xml
+<!-- package id assumed — substitute the Cratis Arc proxy generator package your solution uses -->
+<PackageReference Include="Cratis.Arc.ProxyGenerator" Version="..." />
+```
+
+## Configure the output path
+
+```xml
+<PropertyGroup>
+    <!-- property name assumed — check the generator's documentation for the exact name -->
+    <ProxyGeneratorOutputPath>$(MSBuildThisFileDirectory)../Web/src/api</ProxyGeneratorOutputPath>
+</PropertyGroup>
+```
+
+## Install the frontend package
+
+```bash
+npm install @cratis/arc
+```
+
+---
+
+## Run the generator
+
+```bash
+dotnet build
+```
+
+The generator will write TypeScript files under the configured output path, mirroring your C# namespace hierarchy as folders:
+
+```
+Web/src/api/
+    Accounts/
+        OpenDebitAccount.ts   ← POST action → command proxy
+        AllAccounts.ts        ← GET action → query proxy
+        index.ts
+    index.ts
+```
+
+---
+
+## Common gotchas
+
+| Problem | Fix |
+| ------- | --- |
+| No files generated | Ensure the package is referenced and the project builds cleanly first |
+| Wrong folder structure | The folder mirrors the **namespace**, not the file path — adjust your namespace |
+| Stale proxies | Run `dotnet clean && dotnet build` to force full regeneration |
+| Proxies mixed with hand-written files | Set the generator's keep-existing-files property to `true` so it does not delete everything; or move proxies to a dedicated folder |
+
+---
+
+## Multi-project solutions
+
+```
+MyApp.API.csproj:
+    <PackageReference Include="Cratis.Arc.ProxyGenerator" Version="..." />
+    <ProxyGeneratorOutputPath>../MyApp.Web/src/api</ProxyGeneratorOutputPath>
+```
+
+Only the project with controllers needs the proxy generator package. Domain and read-model projects do not.
diff --git a/.github/skills/cratis-command/references/validation.md b/.github/skills/cratis-command/references/validation.md
new file mode 100644
index 0000000..2d3f092
--- /dev/null
+++ b/.github/skills/cratis-command/references/validation.md
@@ -0,0 +1,104 @@
+# Command Validation — Reference
+
+Validation in Cratis Arc runs in two places: **client-side** (via the proxy, before the request is sent) and **server-side** (the full pipeline, always).
+
+---
+
+## FluentValidation (recommended)
+
+```csharp
+// Must extend CommandValidator<T>, not AbstractValidator<T>
+public class CreateOrderValidator : CommandValidator<CreateOrder>
+{
+    public CreateOrderValidator()
+    {
+        RuleFor(c => c.CustomerId).NotEmpty().WithMessage("Customer is required");
+        RuleFor(c => c.Total).GreaterThan(0).WithMessage("Total must be positive");
+        RuleFor(c => c.Items).NotEmpty().WithMessage("Order must have at least one item");
+    }
+}
+```
+
+**Why `CommandValidator<T>`?** It marks the class for automatic discovery (no DI registration needed) and allows the proxy generator to extract the rules into TypeScript for client-side pre-flight.
+
+Rules that can be extracted and run client-side:
+- `NotEmpty`, `NotNull`
+- `MaximumLength`, `MinimumLength`, `Length`
+- `GreaterThan`, `LessThan`, `GreaterThanOrEqualTo`, `LessThanOrEqualTo`
+- `Must` with simple predicates
+- `EmailAddress`
+
+Rules that only run server-side (cannot be extracted):
+- Validators with injected dependencies (e.g. database uniqueness checks)
+
+---
+
+## Data Annotations
+
+```csharp
+public record CreateOrder(
+    [Required] Guid CustomerId,
+    [Range(0.01, double.MaxValue, ErrorMessage = "Total must be positive")] decimal Total,
+    [Required, MinLength(1)] List<OrderItem> Items
+);
+```
+
+Simpler but less flexible than FluentValidation. Rules are enforced server-side and reflected in `CommandResult.validationResults`.
+
+---
+
+## Automatic validate endpoint
+
+For every `[HttpPost]` command, Arc registers a parallel endpoint:
+
+- Execute: `POST /api/orders/create`
+- Validate: `POST /api/orders/create/validate`
+
+The validate endpoint runs all authorization and validation filters but **never** calls the handler. No side effects.
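+
+You can call it like any other endpoint — a sketch, assuming the `CreateOrder` shape above and the default JSON envelope:
+
+```ts
+const response = await fetch('/api/orders/create/validate', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ customerId, total, items }),
+});
+const result = await response.json(); // CommandResult-shaped: isValid, validationResults, ...
+```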
+
+---
+
+## Client-side validate() in React
+
+```tsx
+const [command] = CreateOrder.use();
+const [errors, setErrors] = useState<Record<string, string>>({});
+
+// Option A: validate on blur
+const handleBlur = async (field: string) => {
+    const result = await command.validate();
+    const fieldError = result.validationResults.find(v => v.propertyName === field);
+    setErrors(prev => ({ ...prev, [field]: fieldError?.message ?? '' }));
+};
+
+// Option B: validate proactively as user types
+useEffect(() => {
+    command.validate().then(result => {
+        setCanSubmit(result.isSuccess);
+    });
+}, [command.hasChanges]);
+
+// Option C: validate + conditional execute
+const handleSubmit = async () => {
+    const validation = await command.validate();
+    if (!validation.isValid) {
+        setErrors(validation.validationResults.reduce(...));
+        return;
+    }
+    await command.execute();
+};
+```
+
+---
+
+## Showing validation errors per field
+
+```tsx
+const getError = (field: keyof typeof command) =>
+    result.validationResults.find(v => v.propertyName === String(field))?.message;
+
+// a field bound to the 'name' property (component choice illustrative)
+<InputTextField<CreateOrder> value={c => c.name} label="Name" />
+{getError('name') && <small>{getError('name')}</small>}
+```
+
+`propertyName` in the result is camelCase matching the C# property name (lowercased first letter).
diff --git a/.github/skills/cratis-csharp-standards/SKILL.md b/.github/skills/cratis-csharp-standards/SKILL.md
new file mode 100644
index 0000000..d62b401
--- /dev/null
+++ b/.github/skills/cratis-csharp-standards/SKILL.md
@@ -0,0 +1,79 @@
+---
+name: cratis-csharp-standards
+description: Reference for Cratis C# coding conventions — formatting, naming, records, nullable handling, exceptions, logging, and DI. Use whenever writing C# in a Cratis project, deciding between record vs class, checking naming rules, applying formatting conventions, handling null safety, creating exception types, or asking "how should this be written?" Also covers CUPID characteristics and domain-based folder structure. Trigger on any C# style or standards question for a Cratis project.
+--- + +## Key rules (quick reference) + +- Use C# 13 features always — records, primary constructors, pattern matching +- `var` over explicit types — the right side already tells you the type +- File-scoped namespace declarations — one less indentation level +- `using` directives: alphabetically sorted, single-line, unused ones removed +- No regions — if a file needs them, it needs refactoring +- No postfixes: no `Async`, `Impl`, `Service`, `Manager`, `Handler` on class names +- No `Exception` suffix on exception types — `AuthorNotFound` not `AuthorNotFoundException` +- Never use built-in exceptions — always create custom exception types +- `record` for events, commands, read models, concepts — value equality for free +- Primary constructors for all types — eliminates field boilerplate +- `is null` / `is not null` — never `== null` / `!= null` +- Blank line before opening `{` of every code block +- Final `return` on its own line +- Private fields: `_camelCase` with underscore prefix +- Interfaces: `I` prefix (`IMyService`) +- No `[EventType]` arguments — the type name is the event identifier +- `[Command]` records define `Handle()` directly — no separate handler classes + +--- + +## Formatting + +```csharp +// File-scoped namespace (no extra indent) +namespace MyApp.Authors.Registration; + +// Alphabetically sorted using directives +using Cratis.Arc.Commands; +using Cratis.Chronicle.Events; +using Microsoft.Extensions.Logging; + +// Blank line before { of every block +if (condition) +{ + DoSomething(); +} + +// Expression-bodied for simple members +public string FullName => $"{FirstName} {LastName}"; + +// Final return on its own line +public string GetName() +{ + var result = BuildName(); + + return result; +} +``` + +--- + +## Naming + +| Artifact | Convention | Example | +| --- | --- | --- | +| Type / method / public member | PascalCase | `RegisterAuthor`, `AuthorId` | +| Private field | `_camelCase` | `_eventLog` | +| Local variable | camelCase | `authorId` | +| Interface | `I` prefix | `IEventLog` | +| Exception type | No `Exception` suffix | `AuthorNotFound` | +| Feature folder | Pluralized domain noun | `Authors/` | +| Concept file | Concept name | `AuthorId.cs` | + +No abbreviations unless widely known (XML, JSON, Id, URL). No prefixes/postfixes that describe technical role (Controller, ViewModel, Handler, Manager, Factory, Base). + +--- + +## Reference files + +- `references/code-style.md` — records, primary constructors, nullable, collections, async +- `references/exceptions-logging-di.md` — exception types, logging with [LoggerMessage], DI conventions +- `references/domain-philosophy.md` — CUPID characteristics, cohesion, ubiquitous language diff --git a/.github/skills/cratis-csharp-standards/evals/evals.json b/.github/skills/cratis-csharp-standards/evals/evals.json new file mode 100644 index 0000000..1a6c38c --- /dev/null +++ b/.github/skills/cratis-csharp-standards/evals/evals.json @@ -0,0 +1,37 @@ +{ + "skill_name": "cratis-csharp-standards", + "evals": [ + { + "id": 1, + "prompt": "Write a C# service class called AccountManager that handles creating bank accounts. It takes IEventLog and ILogger as dependencies. When CreateAccount(AccountName name) is called, it logs 'Creating account {Name}' at Information level and appends an AccountCreated event. If the account already exists it should throw an appropriate exception. 
Follow all Cratis C# coding standards.", + "expected_output": "Class uses primary constructor, file-scoped namespace, record for event, custom exception (no Exception suffix, not built-in), [LoggerMessage] in a separate AccountManagerLogging.cs partial class, ConceptAs for AccountId/AccountName, var for local variables, expression-bodied members where appropriate.", + "files": [], + "assertions": [ + "Uses primary constructor (not field declarations + constructor body)", + "Uses file-scoped namespace (no extra indentation level)", + "Custom exception class defined (not InvalidOperationException or similar built-in)", + "Custom exception class does NOT have 'Exception' suffix", + "Logging uses [LoggerMessage] attribute in a separate partial static internal class", + "AccountId uses ConceptAs (not raw Guid)", + "AccountName uses ConceptAs (not raw string)", + "Uses 'var' for local variable declarations", + "No postfix on class names (no 'Manager', 'Impl', 'Service' naming anti-patterns caught)" + ] + }, + { + "id": 2, + "prompt": "Review this C# code and rewrite it following Cratis coding standards:\n\n```csharp\npublic class OrderService\n{\n private IOrderRepository _repo;\n private ILogger _logger;\n \n public OrderService(IOrderRepository repo, ILogger logger)\n {\n _repo = repo;\n _logger = logger;\n }\n \n public async Task> GetAllOrdersAsync()\n {\n _logger.LogInformation(\"Getting all orders\");\n var orders = await _repo.GetAllAsync();\n return orders;\n }\n \n public void ProcessOrder(Guid orderId)\n {\n if (orderId == null)\n throw new InvalidOperationException(\"OrderId cannot be null\");\n }\n}\n```", + "expected_output": "Rewritten with primary constructor, file-scoped namespace, IEnumerable return type, [LoggerMessage] logging, ConceptAs for OrderId, custom exception, 'is null' check, Async suffix removed, no redundant 'Async' postfix.", + "files": [], + "assertions": [ + "Primary constructor used (field declarations removed)", + "Returns IEnumerable not List", + "Uses 'is null' instead of '== null' for null check", + "Custom exception replaces InvalidOperationException", + "Async method does not have Async postfix", + "Uses [LoggerMessage] or mentions it should be moved to logging partial class", + "OrderId converted to ConceptAs or at minimum flagged as a raw primitive" + ] + } + ] +} diff --git a/.github/skills/cratis-csharp-standards/references/code-style.md b/.github/skills/cratis-csharp-standards/references/code-style.md new file mode 100644 index 0000000..7d98160 --- /dev/null +++ b/.github/skills/cratis-csharp-standards/references/code-style.md @@ -0,0 +1,187 @@ +# Code Style — Reference + +## Records + +Use `record` types for all immutable data structures — events, commands, read models, concepts, DTOs. + +```csharp +// ✅ Prefer record +public record AuthorRegistered(AuthorName Name); + +// ✅ Record with primary constructor +public record Author(AuthorId Id, AuthorName Name); +``` + +Records give value equality, immutability, and concise syntax for free. A `record class` with `init`-only properties is equivalent when you need extra methods. + +--- + +## Primary Constructors + +Use primary constructors for all types — they eliminate the field declaration + constructor ceremony. 
+ +```csharp +// ✅ Primary constructor +public class AuthorService(IEventLog eventLog, ILogger logger) +{ + public async Task Register(AuthorName name) => + await eventLog.Append(AuthorId.New(), new AuthorRegistered(name)); +} + +// ❌ Verbose constructor + fields +public class AuthorService +{ + readonly IEventLog _eventLog; + readonly ILogger _logger; + + public AuthorService(IEventLog eventLog, ILogger logger) + { + _eventLog = eventLog; + _logger = logger; + } +} +``` + +When NOT using primary constructors (e.g. you need field initialization logic), prefix private fields with `_`. + +--- + +## var + +Always use `var` when declaring local variables — the right side already tells you the type. + +```csharp +// ✅ +var authorId = AuthorId.New(); +var authors = collection.Find(_ => true).ToList(); + +// ❌ +AuthorId authorId = AuthorId.New(); +List authors = collection.Find(_ => true).ToList(); +``` + +--- + +## Expression-bodied members + +Use for simple methods and properties: + +```csharp +public string FullName => $"{FirstName} {LastName}"; +public void Log(string message) => _logger.LogInformation(message); +public AuthorId Id => _id; +``` + +--- + +## Collections + +```csharp +// ✅ Return IEnumerable for read-only sequences +public IEnumerable All() => _collection.Find(_ => true).ToList(); + +// ❌ Never expose mutable collection types from public APIs +public List All() => ... +public Dictionary ByName() => ... + +// ✅ IReadOnlyDictionary for key-value returns +public IReadOnlyDictionary ByName() => ... +``` + +--- + +## Nullable Reference Types + +```csharp +// ✅ is null / is not null +if (author is null) throw new AuthorNotFound(); +if (name is not null) DoSomething(name); + +// ❌ == null / != null +if (author == null) ... + +// Trust the type system — don't add defensive null checks when annotations guarantee non-null: +// If param is not nullable, don't guard it +public void Register(AuthorName name) // name is guaranteed non-null — no null check needed +{ + // ... +} + +// Add ! when you're certain but the compiler isn't: +var author = collection.Find(a => a.Id == id).FirstOrDefault()!; +``` + +--- + +## Async + +```csharp +// ✅ Async with proper naming (no Async suffix unless needed for overload disambiguation) +public async Task Append(AuthorId id, AuthorRegistered @event) => + await _eventLog.Append(id, @event); + +// Use Task for async results +public async Task FindById(AuthorId id) => ... + +// Use await — never .Result or .Wait() +var result = await _eventLog.Append(id, @event); +``` + +--- + +## Immutability + +Prefer immutable designs. Use `with` expressions to create modified copies of records: + +```csharp +var updated = existing with { Name = newName }; +``` + +Avoid returning mutable objects that callers could mutate. The owner of state is responsible for mutations. 
+ +--- + +## Pattern matching + +Use pattern matching and switch expressions wherever possible: + +```csharp +// ✅ Pattern matching +if (result is Result.Success success) + return success.Value; + +// ✅ Switch expression +var description = status switch +{ + Status.Active => "Active", + Status.Inactive => "Inactive", + _ => "Unknown" +}; +``` + +--- + +## String interpolation + +```csharp +// ✅ +var message = $"Author '{name}' already exists"; + +// ❌ +var message = string.Format("Author '{0}' already exists", name); +var message = "Author '" + name + "' already exists"; +``` + +--- + +## Interface bodies + +For interfaces with no members, omit the body: + +```csharp +// ✅ +public interface IMyMarker; + +// ❌ +public interface IMyMarker { } +``` diff --git a/.github/skills/cratis-csharp-standards/references/domain-philosophy.md b/.github/skills/cratis-csharp-standards/references/domain-philosophy.md new file mode 100644 index 0000000..46db1e1 --- /dev/null +++ b/.github/skills/cratis-csharp-standards/references/domain-philosophy.md @@ -0,0 +1,99 @@ +# Domain Philosophy — Reference + +## CUPID characteristics + +Rather than adhering purely to SOLID principles, Cratis favors the **CUPID** characteristics (Dan North): + +| Letter | Characteristic | What it means | +| --- | --- | --- | +| **C** | Composable | Things play nicely together, minimal coupling, components can be assembled freely | +| **U** | Unix philosophy | Do one thing and do it well — focused, single-purpose components | +| **P** | Predictable | Deterministic behavior, consistent output, no surprises | +| **I** | Idiomatic | Code feels natural for the language and ecosystem | +| **D** | Domain-based | Use domain vocabulary and structure, not technical vocabulary | + +--- + +## Cohesion over layers + +**Do not** split code by technical role (MVC-style): + +``` +❌ Layered (avoid) +Models/ + Author.cs +Controllers/ + AuthorsController.cs +Services/ + AuthorService.cs +Events/ + AuthorRegistered.cs +``` + +**Do** group by feature — everything that changes together lives together: + +``` +✅ Feature-cohesive (Cratis style) +Features/ + Authors/ + Registration/ + Registration.cs ← command + event + validator + AddAuthor.tsx + Listing/ + Listing.cs ← read model + projection + query + Listing.tsx +``` + +For different technical concerns (frontend vs backend), naturally separate into different projects, but maintain the cohesive feature structure within each project. + +--- + +## Domain language (Ubiquitous Language) + +Name things after the domain concept they represent, not after the technical pattern: + +| ✅ Domain-named | ❌ Tech-named | +| --- | --- | +| `Authors` | `AuthorController`, `AuthorManager` | +| `Registration` | `RegisterAuthorHandler`, `RegisterAuthorCommand` | +| `AuthorNotFound` | `AuthorNotFoundException`, `NotFoundException` | +| `AuthorId` | `Guid authorId` | +| `Listing` | `GetAllAuthorsQuery` | + +--- + +## Pluralization + +Features are groupings — pluralize them: + +- Folder: `Authors/`, `Employees/`, `Orders/` +- Route: `/api/Authors/{authorId}`, `/api/Orders` +- Schema: `Authors`, `OrderItems` + +--- + +## 12-Factor + +Systems should follow [12factor.net](https://12factor.net) for scalability, maintainability, and operability: + +- Config from environment, not hardcoded +- Stateless processes +- Treat logs as event streams +- Declarative setup for easy replication + +--- + +## Frictionless dependencies + +Healthy dependencies = fast, independent releases. 
If you must coordinate releases between two components, there is an unhealthy coupling that should be addressed through events, interfaces, or package versioning. + +--- + +## Immutability & side-effects + +Favor immutable designs to reduce side effects: + +- Records with `init`-only properties +- Return new instances instead of mutating existing ones +- Expose `IEnumerable` and `IReadOnlyDictionary` — never mutable collections from public APIs +- The owner of state is responsible for its mutations — don't let consumers mutate your internal state diff --git a/.github/skills/cratis-csharp-standards/references/exceptions-logging-di.md b/.github/skills/cratis-csharp-standards/references/exceptions-logging-di.md new file mode 100644 index 0000000..53b2f9a --- /dev/null +++ b/.github/skills/cratis-csharp-standards/references/exceptions-logging-di.md @@ -0,0 +1,115 @@ +# Exceptions, Logging, and Dependency Injection — Reference + +## Exceptions + +### Rules + +- Only throw for truly exceptional situations — never for control flow +- Always create a **custom** exception type — never use built-in types (`InvalidOperationException`, `ArgumentException`, etc.) +- **No `Exception` suffix** — `AuthorNotFound` reads better than `AuthorNotFoundException` +- Provide a meaningful message +- Add XML `` doc starting with "The exception that is thrown when ..." + +### Pattern + +```csharp +/// +/// The exception that is thrown when an author is not found. +/// +/// The that was not found. +public class AuthorNotFound(AuthorId id) : Exception($"Author with id '{id}' was not found"); +``` + +Usage: + +```csharp +var author = await _authors.FindById(id) + ?? throw new AuthorNotFound(id); +``` + +--- + +## Logging + +### Rules + +- Structured logging with named parameters +- `ILogger` where `T` is the containing class +- Log message definitions go in a separate `Logging.cs` file — partial static internal class +- Use `[LoggerMessage]` attribute — do **not** include `eventId` +- Appropriate log levels: `Information`, `Warning`, `Error`, `Debug` + +### Logging class pattern + +```csharp +// AuthorServiceLogging.cs +namespace MyApp.Authors; + +static partial class AuthorServiceLogging +{ + [LoggerMessage(LogLevel.Information, "Registering author '{Name}'")] + internal static partial void RegisteringAuthor(this ILogger logger, AuthorName name); + + [LoggerMessage(LogLevel.Warning, "Author with name '{Name}' already exists")] + internal static partial void AuthorAlreadyExists(this ILogger logger, AuthorName name); +} +``` + +Usage in the service: + +```csharp +public class AuthorService(ILogger logger) +{ + public Task Register(AuthorName name) + { + logger.RegisteringAuthor(name); + // ... 
+ } +} +``` + +--- + +## Dependency Injection + +### Rules + +- Prefer **constructor injection** — never use `IServiceProvider` directly (service locator anti-pattern) +- For singletons, use the `[Singleton]` attribute — no explicit `services.AddSingleton<>()` needed +- Convention-based systems (`IFoo → Foo`) are auto-discovered — don't register them explicitly +- `Handle()` method parameters are automatically resolved from DI — no manual wiring needed + +### [Singleton] attribute + +```csharp +[Singleton] +public class AuthorService(IEventLog eventLog) : IAuthorService +{ + // registered as singleton automatically +} +``` + +### DI in Handle() + +```csharp +[Command] +public record SendNotification(AuthorId AuthorId) +{ + // INotificationService resolved from DI automatically + public async Task Handle(INotificationService notifications) => + await notifications.Notify(AuthorId, "Welcome!"); +} +``` + +### Avoiding service locator + +```csharp +// ✅ Constructor injection +public class MyService(IAuthorService authors) { ... } + +// ❌ Service locator +public class MyService(IServiceProvider provider) +{ + void DoWork() => provider.GetService()!.DoSomething(); +} +``` diff --git a/.github/skills/cratis-react-page/SKILL.md b/.github/skills/cratis-react-page/SKILL.md new file mode 100644 index 0000000..83e7336 --- /dev/null +++ b/.github/skills/cratis-react-page/SKILL.md @@ -0,0 +1,188 @@ +--- +name: cratis-react-page +description: Step-by-step guidance for building a React page in a Cratis Arc application — listing data with DataPage, toolbar actions with CommandDialog, row selection, detail panels, observable queries, and MVVM. Use whenever building or modifying a React page that lists or displays data, adding a table or grid, wiring up Add/Edit/Delete actions, using DataPage, DataTableForQuery, CommandDialog, or any @cratis/components. Also trigger when connecting a React component to a proxy-generated query or observable query. +--- + +## Workflow + +### Step 1 — Prerequisites + +- Backend query and command endpoints must already exist (see `cratis-readmodel` and `cratis-command` skills) +- Run `dotnet build` on the backend to regenerate proxies +- All proxy `.ts` files in `` must be committed/saved before importing + +Imports you will need: + +```tsx +import { DataPage } from '@cratis/components'; +import { CommandDialog } from '@cratis/components'; +import { useDialog } from '@cratis/arc.react/dialogs'; +``` + +--- + +### Step 2 — Basic DataPage setup + +`DataPage` combines a toolbar/menu, a data table, and an optional detail panel into one component. + +```tsx +import { DataPage, Column } from '@cratis/components'; +import { AllAccounts } from './queries/AllAccounts'; + +export const AccountsPage = () => { + return ( + + ); +}; +``` + +--- + +### Step 3 — Add menu actions + +Menu items appear in the toolbar above the table. + +```tsx +import { DataPage, MenuItemGroup, MenuItem } from '@cratis/components'; +import { useDialog, IDialogMediatorRequest } from '@cratis/arc.react/dialogs'; +import { CommandDialog } from '@cratis/components'; +import { CreateAccount } from './commands/CreateAccount'; + +export const AccountsPage = () => { + const [createDialog, openCreate] = useDialog<{}, CreateAccount>( + async (_, command) => { /* post-confirm callback (optional) */ } + ); + + return ( + <> + + + + } + /> + + {/* CommandForm fields */} + + + ); +}; +``` + +See `references/dialogs.md` for `useDialog`/`useDialogContext` API detail. 
+
+---
+
+### Step 4 — Row selection and edit dialog
+
+Pass a selected-row handler and an edit dialog. The selected item becomes the initial values for the edit command.
+
+```tsx
+const [editDialog, openEdit] = useDialog<AccountSummary, UpdateAccount>(
+    async (row, command) => {
+        command.accountId = row.id;
+    }
+);
+
+<DataPage
+    query={AllAccounts}
+    columns={columns}
+    onRowSelected={(row) => openEdit(row)}
+/>
+
+<CommandDialog
+    dialogMediator={editDialog}
+    title="Edit account"
+    command={UpdateAccount}
+    initialValuesMapper={(row) => ({ accountId: row.id, name: row.name })}
+>
+    ...
+</CommandDialog>
+```
+
+---
+
+### Step 5 — Choose observable vs standard query
+
+Use an **observable query** when the data updates in real time without user-triggered refresh:
+
+```tsx
+<DataPage observableQuery={AllAccountsLive} columns={columns} />
+```
+
+Observable query results push updates automatically. You cannot call `requery()` on observable queries.
+
+Use a **standard query** for data that only changes when the user takes an action (and use `useEffect`/`refresh` to invalidate after a command).
+
+---
+
+### Step 6 — Detail panel (optional)
+
+A detail panel renders beside the table when a row is selected.
+
+```tsx
+<DataPage
+    query={AllAccounts}
+    columns={columns}
+    detailPanel={(row) => <AccountDetails account={row} />}
+/>
+```
+
+---
+
+### Step 7 — MVVM view model (optional, for complex pages)
+
+For pages with complex state or coordination logic, wrap the page in a view model:
+
+```tsx
+import { withViewModel } from '@cratis/arc.react.mvvm';
+import { injectable } from 'tsyringe';
+import { makeAutoObservable } from 'mobx';
+
+@injectable()
+class AccountsViewModel {
+    selectedAccount?: AccountSummary;
+
+    constructor() {
+        makeAutoObservable(this); // makes fields reactive — see references/mvvm.md
+    }
+}
+
+export const AccountsPage = withViewModel(AccountsViewModel, ({ viewModel }) => (
+    <DataPage
+        query={AllAccounts}
+        columns={columns}
+        onRowSelected={(row) => (viewModel.selectedAccount = row)}
+    />
+));
+```
+
+See `references/mvvm.md` for full MVVM guidance.
+
+---
+
+## Quick decision guide
+
+| Need | Use |
+| --- | --- |
+| Read-only list | `DataPage` with `query` |
+| Real-time updates | `DataPage` with `observableQuery` |
+| Add / create action | `MenuItem` + `CommandDialog` + `useDialog` |
+| Edit selected row | Row selection + `CommandDialog` + `initialValuesMapper` |
+| Side detail panel | `detailPanel` prop on `DataPage` |
+| Complex page logic | `withViewModel` MVVM wrapper |
+
+## Reference files
+
+- `references/data-page.md` — DataPage props, MenuItems, Columns, detailPanel
+- `references/data-table.md` — Standalone DataTableForQuery / DataTableForObservableQuery
+- `references/dialogs.md` — `useDialog`, `useDialogContext`, `CommandDialog` full API
+- `references/mvvm.md` — `withViewModel`, `@injectable`, `IHandleProps`, reactive props
diff --git a/.github/skills/cratis-react-page/evals/evals.json b/.github/skills/cratis-react-page/evals/evals.json
new file mode 100644
index 0000000..f637c9c
--- /dev/null
+++ b/.github/skills/cratis-react-page/evals/evals.json
@@ -0,0 +1,34 @@
+{
+  "skill_name": "cratis-react-page",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "Build a React page for managing employees. It should list all employees in a table (name, department, start date columns). There should be an 'Add Employee' button in the toolbar that opens a dialog to create a new employee (fields: firstName, lastName, department). Assume the proxies AllEmployees (query) and RegisterEmployee (command) already exist.",
+      "expected_output": "A TSX file using DataPage with query prop, columns, MenuItemGroup/MenuItem for Add, and CommandDialog with InputTextField fields. useDialog for wiring the dialog. CommandForm fields for all employee properties.",
+      "files": [],
+      "assertions": [
+        "Output uses DataPage component (not a raw table)",
+        "Output uses useDialog for dialog state management",
+        "Output uses CommandDialog (not a manual Dialog/Button setup)",
+        "Output uses MenuItemGroup and MenuItem for the Add action",
+        "Output uses InputTextField or similar CommandForm fields inside the dialog",
+        "Output does NOT manually manage open/close boolean state for the dialog",
+        "Output imports from correct packages (@cratis/components, @cratis/arc.react/dialogs)"
+      ]
+    },
+    {
+      "id": 2,
+      "prompt": "I need a React page that shows a real-time list of active orders (observable query: AllOrdersLive). When clicking an order, show a detail panel on the side. There's a 'Cancel Order' action that should open a confirmation dialog. Use MVVM pattern since we have complex selection state.",
+      "expected_output": "TSX using DataPage with observableQuery, detailPanel prop, withViewModel from @cratis/arc.react.mvvm with @injectable() view model, and CommandDialog for cancel action.",
+      "files": [],
+      "assertions": [
+        "Output uses observableQuery prop (not query) for real-time data",
+        "Output uses withViewModel from @cratis/arc.react.mvvm",
+        "View model class has @injectable() decorator",
+        "View model class calls makeAutoObservable(this)",
+        "Output uses detailPanel prop on DataPage",
+        "Output uses CommandDialog for the cancel action"
+      ]
+    }
+  ]
+}
diff --git a/.github/skills/cratis-react-page/references/data-page.md b/.github/skills/cratis-react-page/references/data-page.md
new file mode 100644
index 0000000..8af6f6d
--- /dev/null
+++ b/.github/skills/cratis-react-page/references/data-page.md
@@ -0,0 +1,100 @@
+# DataPage — Reference
+
+`DataPage` (from `@cratis/components`) is the standard full-page layout providing a menubar, data table, and optional detail panel in one component.
+
+## Import
+
+```tsx
+import { DataPage, MenuItemGroup, MenuItem, Column } from '@cratis/components';
+```
+
+## Core props
+
+| Prop | Type | Description |
+| --- | --- | --- |
+| `query` | Query class | Proxy-generated query class (standard) |
+| `observableQuery` | Observable query class | Proxy-generated observable query class (real-time) |
+| `columns` | `Column<T>[]` | Column definitions (see below) |
+| `menuItems` | `ReactNode` | Toolbar content (usually a `MenuItemGroup`) |
+| `detailPanel` | `(row: T) => ReactNode` | Renders to the right when a row is selected |
+| `onRowSelected` | `(row: T) => void` | Callback when user clicks a row |
+| `noDataMessage` | `string` | Message when the query returns no rows |
+| `queryArgs` | `object` | Arguments forwarded to the query proxy |
+
+Use either `query` or `observableQuery` — not both.
+
+## Column definition
+
+```tsx
+type Column<T> = {
+    header: string;
+    field: keyof T | ((row: T) => string);
+    width?: number | string;
+    sortable?: boolean;
+};
+```
+
+Example with custom renderer:
+
+```tsx
+columns={[
+    { header: 'Name', field: 'name' },
+    { header: 'Balance', field: (row) => row.balance.toFixed(2) },
+]}
+```
+
+## MenuItemGroup / MenuItem
+
+```tsx
+<MenuItemGroup>
+    <MenuItem label="Add" onClick={() => openCreate()} />
+    <MenuItem label="Refresh" onClick={() => refresh()} />
+</MenuItemGroup>
+```
+
+Multiple `MenuItemGroup` children create visual separators between groups.
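+
+For example (labels and handlers illustrative), these two groups render with a separator between them:
+
+```tsx
+menuItems={
+    <>
+        <MenuItemGroup>
+            <MenuItem label="Add" onClick={() => openCreate()} />
+        </MenuItemGroup>
+        <MenuItemGroup>
+            <MenuItem label="Export" onClick={() => exportCsv()} />
+        </MenuItemGroup>
+    </>
+}
+```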
+
+## Detail panel
+
+The detail panel receives the currently selected row. It is hidden when no row is selected.
+
+```tsx
+<DataPage
+    query={AllAccounts}
+    columns={columns}
+    detailPanel={(row) => (
+        <AccountDetails account={row} />
+    )}
+/>
+```
+
+## Full example
+
+```tsx
+export const AccountsPage = () => {
+    const [createDialog, openCreate] = useDialog<{}, CreateAccount>();
+
+    return (
+        <>
+            <DataPage
+                query={AllAccounts}
+                columns={[
+                    { header: 'Name', field: 'name' },
+                    { header: 'Balance', field: (r) => `$${r.balance.toFixed(2)}` },
+                ]}
+                menuItems={
+                    <MenuItemGroup>
+                        <MenuItem label="Add account" onClick={() => openCreate()} />
+                    </MenuItemGroup>
+                }
+                detailPanel={(row) => <AccountDetails account={row} />}
+                noDataMessage="No accounts found."
+            />
+            <CommandDialog dialogMediator={createDialog} title="Add account" command={CreateAccount}>
+                ...
+            </CommandDialog>
+        </>
+    );
+};
+```
diff --git a/.github/skills/cratis-react-page/references/data-table.md b/.github/skills/cratis-react-page/references/data-table.md
new file mode 100644
index 0000000..c2cce21
--- /dev/null
+++ b/.github/skills/cratis-react-page/references/data-table.md
@@ -0,0 +1,62 @@
+# Data Tables — Reference
+
+Use standalone data table components when you need a table without the built-in `DataPage` full-page chrome (e.g., embedded inside another panel or card).
+
+## DataTableForQuery
+
+```tsx
+import { DataTableForQuery } from '@cratis/components';
+import { AllAccounts } from './queries/AllAccounts';
+
+<DataTableForQuery
+    query={AllAccounts}
+    columns={columns}
+    onRowSelected={(row) => setSelected(row)}
+/>
+```
+
+## DataTableForObservableQuery
+
+```tsx
+import { DataTableForObservableQuery } from '@cratis/components';
+import { AllAccountsLive } from './queries/AllAccountsLive';
+
+<DataTableForObservableQuery
+    query={AllAccountsLive}
+    columns={columns}
+    onRowSelected={(row) => setSelected(row)}
+/>
+```
+
+## Shared props
+
+| Prop | Type | Description |
+| --- | --- | --- |
+| `query` | Query class | Proxy query (use the component matching the query type) |
+| `columns` | `Column<T>[]` | Column definitions (same shape as DataPage) |
+| `onRowSelected` | `(row: T) => void` | Row click callback |
+| `selectedRow` | `T \| undefined` | Externally controlled selected row |
+| `noDataMessage` | `string` | Message when no rows are returned |
+| `queryArgs` | `object` | Arguments forwarded to the query |
+
+## Column definition
+
+```ts
+type Column<T> = {
+    header: string;
+    field: keyof T | ((row: T) => string);
+    width?: number | string;
+};
+```
+
+## When to use each component
+
+| Situation | Component |
+| --- | --- |
+| Full page with toolbar | `DataPage` |
+| Embedded table, standard query | `DataTableForQuery` |
+| Embedded table, real-time push | `DataTableForObservableQuery` |
+| Inline data (no query) | Custom table (out of scope) |
diff --git a/.github/skills/cratis-react-page/references/dialogs.md b/.github/skills/cratis-react-page/references/dialogs.md
new file mode 100644
index 0000000..c138f0a
--- /dev/null
+++ b/.github/skills/cratis-react-page/references/dialogs.md
@@ -0,0 +1,98 @@
+# Dialogs — Reference
+
+## useDialog
+
+`useDialog` wires up a `CommandDialog` without needing to manage open/close state yourself.
+
+```tsx
+import { useDialog } from '@cratis/arc.react/dialogs';
+
+const [dialogMediator, openDialog] = useDialog<TContext, TCommand>(
+    async (context, command) => {
+        // optional: called when user clicks Confirm, after command executes
+    }
+);
+```
+
+- `TContext` — data passed to `openDialog()` (typically a selected row)
+- `TCommand` — the command proxy type
+
+Call `openDialog(context?)` to show the dialog. Pass a row or other context when needed.
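+
+For instance, opening the dialog from a row selection (`AccountSummary` and `UpdateAccount` stand in for your own types):
+
+```tsx
+const [dialogMediator, openDialog] = useDialog<AccountSummary, UpdateAccount>();
+
+const handleRowSelected = (row: AccountSummary) => openDialog(row);
+```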
+
+## useDialogContext
+
+Inside a `CommandDialog`'s `children`, call `useDialogContext()` to read the context value passed by `openDialog`:
+
+```tsx
+const AddItemDialog = () => {
+    const [item] = useDialogContext<Item>();
+    const [command, setValues] = AddItem.use({ itemId: item?.id });
+
+    return (
+        <InputTextField id="name" />
+    );
+};
+```
+
+## CommandDialog
+
+```tsx
+import { CommandDialog } from '@cratis/components';
+
+<CommandDialog
+    dialogMediator={dialogMediator}
+    title="Add item"
+    command={AddItem}
+    initialValuesMapper={(ctx) => ({ ownerId: ctx.id })}
+>
+    <InputTextField id="name" />
+    <NumberField id="quantity" />
+</CommandDialog>
+```
+
+### Props
+
+| Prop | Type | Description |
+| --- | --- | --- |
+| `dialogMediator` | `IDialogMediator` | From `useDialog` |
+| `title` | `string` | Dialog header text |
+| `command` | Command class | Proxy-generated command class |
+| `initialValuesMapper` | `(ctx) => Partial<TCommand>` | Map context to initial command values |
+| `children` | `ReactNode` | `CommandForm` field components |
+| `confirmLabel` | `string` | Confirm button text (default: "Save") |
+| `cancelLabel` | `string` | Cancel button text (default: "Cancel") |
+
+## CommandForm field components
+
+All field components accept `id` matching the command property name. They read and write via the command proxy automatically when used inside `CommandDialog`.
+
+```tsx
+import { InputTextField, NumberField, CheckboxField, DateField, DropdownField, TextAreaField } from '@cratis/components';
+
+// property names illustrative
+<InputTextField id="name" />
+<NumberField id="quantity" />
+<CheckboxField id="isActive" />
+<DateField id="dueDate" />
+<DropdownField id="status" options={statusOptions} />
+<TextAreaField id="notes" />
+```
+
+## Edit dialog pattern
+
+```tsx
+const [editDialog, openEdit] = useDialog<AccountSummary, UpdateAccount>(
+    async (row, command) => { /* post-confirm */ }
+);
+
+// Trigger from row selection:
+<DataPage query={AllAccounts} columns={columns} onRowSelected={(row) => openEdit(row)} />
+
+<CommandDialog
+    dialogMediator={editDialog}
+    title="Edit account"
+    command={UpdateAccount}
+    initialValuesMapper={(row) => ({ accountId: row.id, name: row.name })}
+>
+    <InputTextField id="name" />
+</CommandDialog>
+```
diff --git a/.github/skills/cratis-react-page/references/mvvm.md b/.github/skills/cratis-react-page/references/mvvm.md
new file mode 100644
index 0000000..3ba11ca
--- /dev/null
+++ b/.github/skills/cratis-react-page/references/mvvm.md
@@ -0,0 +1,133 @@
+# MVVM — Reference
+
+The Arc MVVM pattern keeps page logic in plain TypeScript classes (view models) and keeps components purely declarative.
+
+## When to use MVVM
+
+- Page has complex coordinated state (selected item, filters, multiple dialogs)
+- Logic needs unit-testing independent of React
+- You want to share state across child components via injection
+
+For simple pages, MVVM is optional — use regular hooks directly in the component instead.
+
+## Setup
+
+Install packages if not already present:
+
+```
+npm install @cratis/arc.react.mvvm tsyringe reflect-metadata
+```
+
+Ensure `tsconfig.json` enables decorators:
+
+```json
+{
+    "compilerOptions": {
+        "experimentalDecorators": true,
+        "emitDecoratorMetadata": true
+    }
+}
+```
+
+Import `reflect-metadata` once, at the entry point of your app:
+
+```tsx
+import 'reflect-metadata';
+```
+
+## View model class
+
+```ts
+import { injectable } from 'tsyringe';
+import { makeAutoObservable } from 'mobx';
+
+@injectable()
+export class AccountsViewModel {
+    selectedAccount?: AccountSummary = undefined;
+
+    constructor() {
+        makeAutoObservable(this);
+    }
+
+    selectAccount(account: AccountSummary) {
+        this.selectedAccount = account;
+    }
+}
+```
+
+- `@injectable()` — registers the class with tsyringe for DI
+- `makeAutoObservable(this)` — makes all fields reactive (MobX)
+
+## withViewModel
+
+```tsx
+import { withViewModel } from '@cratis/arc.react.mvvm';
+
+export const AccountsPage = withViewModel(AccountsViewModel, ({ viewModel }) => {
+    return (
+        <DataPage
+            query={AllAccounts}
+            columns={columns}
+            onRowSelected={(row) => viewModel.selectAccount(row)}
+            detailPanel={() => viewModel.selectedAccount
+                ? <AccountDetails account={viewModel.selectedAccount} />
+                : null
+            }
+        />
+    );
+});
+```
+
+The view model instance is created once per mount and disposed on unmount. It is the same instance for the whole component tree under `withViewModel`.
+
+## IHandleProps — reactive props
+
+When a child component needs to receive a prop and react to its changes, implement `IHandleProps<TProps>`:
+
+```ts
+import { IHandleProps } from '@cratis/arc.react.mvvm';
+
+interface DetailProps {
+    account: AccountSummary;
+}
+
+@injectable()
+export class AccountDetailViewModel implements IHandleProps<DetailProps> {
+    account!: AccountSummary;
+
+    propsChanged(props: DetailProps): void {
+        this.account = props.account;
+    }
+}
+```
+
+`propsChanged` is called whenever the parent passes new props, allowing the view model to react.
+
+## Dependency injection in view models
+
+Use tsyringe constructor injection. Cratis registers common singletons (e.g., `IEventStore`, query/command types):
+
+```ts
+@injectable()
+export class AccountsViewModel {
+    constructor(
+        private readonly _eventLog: IEventLog,
+    ) {
+        makeAutoObservable(this);
+    }
+}
+```
+
+## MVVM context
+
+Wrap the app (or route root) in `MVVM` to enable the DI container:
+
+```tsx
+import { MVVM } from '@cratis/arc.react.mvvm';
+
+<MVVM>
+    <App />
+</MVVM>
+```
+
+If you are using the top-level Arc application wrapper, it already includes `MVVM` internally — do not double-wrap.
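+
+Because a view model is a plain class, its logic can be unit-tested without React — a minimal sketch (test framework assumed):
+
+```ts
+import { AccountsViewModel } from './AccountsViewModel';
+
+it('selects an account', () => {
+    const viewModel = new AccountsViewModel();
+    const account = { id: '1', name: 'Savings' } as AccountSummary;
+
+    viewModel.selectAccount(account);
+
+    expect(viewModel.selectedAccount).toBe(account);
+});
+```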
diff --git a/.github/skills/cratis-readmodel/SKILL.md b/.github/skills/cratis-readmodel/SKILL.md
new file mode 100644
index 0000000..44fa3df
--- /dev/null
+++ b/.github/skills/cratis-readmodel/SKILL.md
@@ -0,0 +1,247 @@
+---
+name: cratis-readmodel
+description: Step-by-step guidance for creating a Cratis Chronicle read model from scratch — defining events, choosing between projection and reducer, [ReadModel] record with static query methods, and the generated TypeScript proxy in React. Use when creating a read model, working with [EventType], [ReadModel], IProjectionFor, IReducerFor, observable queries, or deriving state from events. For adding a projection or reactor to an existing read model, use add-projection instead.
+---
+
+# Creating a Cratis Read Model
+
+A read model is derived state built from events. The path is:
+
+```
+[EventType] records → [ReadModel] record + static query methods → projection or reducer → TypeScript proxy → React
+```
+
+---
+
+## Step 1 — Define your events
+
+Events are the source of truth. Define each as a `record` decorated with `[EventType]`. Name them in **past tense**.
+
+```csharp
+// Domain/Events/AccountEvents.cs
+using Cratis.Chronicle.Events;
+
+[EventType]
+public record DebitAccountOpened(string Name, Guid OwnerId);
+
+[EventType]
+public record DebitAccountClosed();
+
+[EventType]
+public record FundsDeposited(decimal Amount);
+
+[EventType]
+public record FundsWithdrawn(decimal Amount);
+```
+
+Good event design:
+- One clear purpose per event — do not mix concerns
+- No nullables unless genuinely optional
+- Properties represent facts, not intent
+
+---
+
+## Step 2 — Define the read model record
+
+Decorate the record with `[ReadModel]` and add **static query methods** directly on it. The proxy generator turns each static method into a TypeScript query class.
+
+```csharp
+// Domain/ReadModels/AccountSummary.cs
+using Cratis.Arc.Queries.ModelBound;
+using MongoDB.Driver;
+
+[ReadModel]
+public record AccountSummary(AccountId Id, string Name, OwnerId OwnerId, decimal Balance, bool IsClosed)
+{
+    // Snapshot query — returns current data once
+    public static async Task<IEnumerable<AccountSummary>> AllAccounts(
+        IMongoCollection<AccountSummary> collection)
+        => await collection.Find(Builders<AccountSummary>.Filter.Empty).ToListAsync();
+
+    public static async Task<AccountSummary?> GetAccount(
+        AccountId id,
+        IMongoCollection<AccountSummary> collection)
+        => await collection.Find(a => a.Id == id).FirstOrDefaultAsync();
+
+    // Observable query — pushes updates in real time
+    public static ISubject<IEnumerable<AccountSummary>> ObserveAllAccounts(
+        IMongoCollection<AccountSummary> collection)
+        => collection.Observe();
+}
+```
+
+**Rules:**
+- `[ReadModel]` attribute is **required** for proxy generation and runtime routing
+- Static methods must be `public static` and return the record type, a collection of it, or `ISubject<T>` for real-time push
+- Do **not** return `Task<ISubject<T>>` — observable methods must return `ISubject<T>` directly
+- Use `ConceptAs<T>` wrappers for all identity fields — never raw `Guid`
+- One read model per use case — do not reuse them
+
+---
+
+## Step 3 — Choose: projection or reducer?
+
+| | Projection | Reducer |
+|-| ---------- | ------- |
+| **Best for** | Shaped read models with mapping logic, joins, children | Running aggregates: balances, counts, sums |
+| **How it works** | Declarative mapping: each event updates specific fields | Receives events one by one and returns the new full state |
+| **When to pick** | The read model shape comes mostly from mapping event fields | The state is a function of *accumulating* multiple events |
+
+For the `AccountSummary` above: use a **projection** for name/owner fields and a **reducer** for balance (a running total). In practice, reducers cover both when the aggregate combines both concerns.
+
+---
+
+## Step 4A — Implement a projection
+
+```csharp
+// Domain/Projections/AccountSummaryProjection.cs
+using Cratis.Chronicle.Projections;
+
+public class AccountSummaryProjection : IProjectionFor<AccountSummary>
+{
+    public void Define(IProjectionBuilderFor<AccountSummary> builder) => builder
+        .From<DebitAccountOpened>(from => from
+            .Set(m => m.Name).To(e => e.Name)
+            .Set(m => m.OwnerId).To(e => e.OwnerId)
+            .Set(m => m.Balance).WithValue(0m))
+        .From<FundsDeposited>(from => from
+            .Add(m => m.Balance).With(e => e.Amount))
+        .From<FundsWithdrawn>(from => from
+            .Subtract(m => m.Balance).With(e => e.Amount))
+        .From<DebitAccountClosed>(from => from
+            .Set(m => m.IsClosed).WithValue(true));
+}
+```
+
+- Discovered automatically — no registration needed
+- `IProjectionFor<T>` is keyed by **event source ID** by default (the `Id` passed to `IEventLog.Append`)
+- See `references/projections.md` for joins, auto-mapping, children, composite keys
+
+### Model-bound shorthand (for simple cases)
+
+```csharp
+using Cratis.Chronicle.Keys;
+using Cratis.Chronicle.Projections.ModelBound;
+
+public record AccountInfo(
+    [Key] Guid Id,
+    [FromEvent<DebitAccountOpened>] string Name,   // auto-maps matching property
+    [SetFrom(nameof(DebitAccountOpened.OwnerId))] Guid OwnerId
+);
+```
+
+---
+
+## Step 4B — Implement a reducer
+
+Use a reducer when the state is built by accumulating values across events:
+
+```csharp
+// Domain/Reducers/AccountBalanceReducer.cs
+using Cratis.Chronicle.Events;
+using Cratis.Chronicle.Reducers;
+
+public class AccountBalanceReducer : IReducerFor<AccountBalance>
+{
+    public AccountBalance Opened(DebitAccountOpened @event, AccountBalance? current, EventContext context)
+        => new(0m, context.Occurred);
+
+    public AccountBalance Deposited(FundsDeposited @event, AccountBalance? current, EventContext context)
+        => (current ?? new(0m, context.Occurred)) with { Balance = (current?.Balance ?? 0m) + @event.Amount };
+
+    public AccountBalance Withdrawn(FundsWithdrawn @event, AccountBalance? current, EventContext context)
+        => (current ?? new(0m, context.Occurred)) with { Balance = (current?.Balance ?? 0m) - @event.Amount };
+}
+
+public record AccountBalance(decimal Balance, DateTimeOffset LastUpdated);
+```
+
+- Return the **complete new state** — do not mutate `current`
+- `current` is `null` on the first event for a given event source
+- `EventContext` provides `Occurred`, `EventSourceId`, `SequenceNumber`, `CorrelationId`
+- Discovered automatically — no registration needed
+
+---
+
+## Step 5 — Expose read model queries
+
+Query methods live **directly on the `[ReadModel]` record** as static methods (see Step 2). You do **not** need a separate controller or `IReadModels` injection.
+
+The method name becomes the TypeScript proxy class name — use descriptive names like `AllAccounts`, `GetAccount`, `ObserveAllAccounts`.
+
+### Snapshot (one-time) queries
+
+```csharp
+[ReadModel]
+public record AccountSummary(AccountId Id, string Name, decimal Balance)
+{
+    public static async Task<IEnumerable<AccountSummary>> AllAccounts(
+        IMongoCollection<AccountSummary> collection)
+        => await collection.Find(_ => true).ToListAsync();
+
+    public static async Task<AccountSummary?> GetAccount(
+        AccountId id,
+        IMongoCollection<AccountSummary> collection)
+        => await collection.Find(a => a.Id == id).FirstOrDefaultAsync();
+}
+```
+
+### Observable (real-time push) queries
+
+Return `ISubject<T>` to push updates as projection changes land:
+
+```csharp
+[ReadModel]
+public record AccountSummary(AccountId Id, string Name, decimal Balance)
+{
+    public static ISubject<IEnumerable<AccountSummary>> ObserveAllAccounts(
+        IMongoCollection<AccountSummary> collection)
+        => collection.Observe();
+
+    public static ISubject<AccountSummary> ObserveAccount(
+        AccountId id,
+        IMongoCollection<AccountSummary> collection)
+        => collection.Observe(a => a.Id == id);
+}
+```
+
+When the frontend uses an observable query, the query proxy type changes from `QueryFor<T>` to `ObservableQueryFor<T>`, and `DataPage` uses the `observableQuery` prop instead of `query`.
+
+---
+
+## Step 6 — Build and use in React
+
+```bash
+dotnet build   # generates TypeScript proxies
+```
+
+```tsx
+import { AllAccounts } from '../api/Accounts/AllAccounts';
+
+export const AccountList = () => {
+    const [accounts] = AllAccounts.use();
+
+    if (accounts.isPerforming) return <p>Loading…</p>; // loading indicator (component choice illustrative)
+
+    return (
+        <ul>
+            {accounts.data.map(a => (
+                <li key={a.id}>{a.name} — ${a.balance}</li>
+            ))}
+        </ul>
+ ); +}; +``` + +For building full pages with filtering, sorting, and command actions — see the `cratis-react-page` skill. + +--- + +## Reference files + +| File | What's in it | +| ---- | ------------ | +| `references/projections.md` | Full builder API: `Set`, `Add`, `Join`, `Children`, `AutoMap`, composite keys | +| `references/reducers.md` | Reducer signatures, async, passive, snapshot behaviour | +| `references/events.md` | `[EventType]`, appending, `AppendResult`, tags, constraints | +| `references/queries.md` | Query result shape, observable queries, paging | diff --git a/.github/skills/cratis-readmodel/evals/evals.json b/.github/skills/cratis-readmodel/evals/evals.json new file mode 100644 index 0000000..3493423 --- /dev/null +++ b/.github/skills/cratis-readmodel/evals/evals.json @@ -0,0 +1,33 @@ +{ + "skill_name": "cratis-readmodel", + "evals": [ + { + "id": 1, + "prompt": "I need a read model that shows all employees with their department name and start date. The read model should be updated whenever an employee is registered (EmployeeRegistered event has EmployeeName, DepartmentId) or when a department name changes (DepartmentNameChanged has DepartmentId, NewName). Use the model-bound/attribute approach if possible.", + "expected_output": "A C# file with [ReadModel] record using [FromEvent<>] and possibly [Join<>] attributes, with a static AllEmployees query method returning ISubject>. Includes [EventType] records if not already defined.", + "files": [], + "assertions": [ + "Output contains [ReadModel] attribute on a record", + "Output includes a static query method on the read model record", + "Query method returns ISubject for real-time push (observable query)", + "Output uses [FromEvent] or IProjectionFor for projection", + "Output uses ConceptAs for id types", + "Output does NOT use IProjectionFor with joins reading from other read models (events only)" + ] + }, + { + "id": 2, + "prompt": "Create a read model to track the current stock level for each product. Events are: ProductAdded(ProductId, InitialStock), StockIncreased(ProductId, Amount), StockDecreased(ProductId, Amount). I need a reducer since the logic involves arithmetic accumulation.", + "expected_output": "A C# file with [ReadModel] record + IReducerFor class that handles all three events. Includes [EventType] records and a static query method.", + "files": [], + "assertions": [ + "Output contains IReducerFor implementation", + "Reducer handles null current state gracefully (first event initialization)", + "Reducer methods are pure (no side effects, no I/O calls)", + "Output contains [EventType] records for all three events", + "Output includes a static query method on the read model", + "Uses 'with' expressions for record state updates" + ] + } + ] +} diff --git a/.github/skills/cratis-readmodel/references/events.md b/.github/skills/cratis-readmodel/references/events.md new file mode 100644 index 0000000..514e43e --- /dev/null +++ b/.github/skills/cratis-readmodel/references/events.md @@ -0,0 +1,82 @@ +# Events — Reference + +## [EventType] attribute + +```csharp +using Cratis.Chronicle.Events; + +[EventType] +public record OrderPlaced(string CustomerId, decimal Total); + +// Optional: stable custom ID (survives class renames) +[EventType("4f8e91a1-0e3e-4c12-933c-b7c8c26a0de1")] +public record OrderPlaced(string CustomerId, decimal Total); +``` + +Every event must be decorated with `[EventType]` — this makes it discoverable and registers its schema with Chronicle. 
+ +--- + +## Good event design + +- **Past tense**: `OrderPlaced`, `UserOnboarded`, `BookReturned` ✓ — not `PlaceOrder`, `OnboardUser` +- **One purpose**: an `AddressChanged` event should only carry address fields, not payment info +- **No nullables** unless the field is genuinely optional (e.g. `string? MiddleName`) +- **Immutable facts**: events represent something that *has happened* — do not use them to encode intent or possibility + +--- + +## Appending an event + +Inject `IEventLog` and call `Append(eventSourceId, eventInstance)`: + +```csharp +public class OrdersController(IEventLog eventLog) : ControllerBase +{ + [HttpPost] + public async Task PlaceOrder([FromBody] PlaceOrder command) + { + var result = await eventLog.Append( + command.OrderId, + new OrderPlaced(command.CustomerId, command.Total)); + + if (!result.IsSuccess) + { + // result.HasConcurrencyViolation — two requests raced + // result.HasConstraintViolations — uniqueness constraint failed + } + } +} +``` + +The first argument is the **event source ID** — the identity the event belongs to (analogous to an aggregate root ID). Projections and reducers are keyed by this ID by default. + +--- + +## Constraints (uniqueness) + +Enforce uniqueness at append time without application-level checks: + +```csharp +public class UniqueOrderNumberConstraint : IUniqueConstraintFor +{ + public string GetConstraintValue(OrderPlaced @event) => @event.OrderNumber; +} +``` + +Violation surfaces in `AppendResult.HasConstraintViolations`. + +--- + +## Tags + +```csharp +[EventType] +[EventTag("high-value")] +public record LargeOrderPlaced(decimal Total); + +// Or apply at append time: +await eventLog.Append(orderId, new OrderPlaced(...), ["high-value"]); +``` + +Tags allow filtering reactors, projections and reducers to only handle tagged events. 
diff --git a/.github/skills/cratis-readmodel/references/projections.md b/.github/skills/cratis-readmodel/references/projections.md
new file mode 100644
index 0000000..71c9733
--- /dev/null
+++ b/.github/skills/cratis-readmodel/references/projections.md
@@ -0,0 +1,112 @@
+# Projections — Reference
+
+## Declarative projection builder (`IProjectionFor<T>`)
+
+```csharp
+// Event type arguments reconstructed — names like InvoicePaid, CustomerRegistered,
+// InvoiceDeleted and LineItemRemoved are illustrative
+public class InvoiceProjection : IProjectionFor<Invoice>
+{
+    public void Define(IProjectionBuilderFor<Invoice> builder) => builder
+        .From<InvoiceCreated>(from => from.AutoMap())   // map all matching property names
+        .From<InvoicePaid>(from => from
+            .Set(m => m.PaidAt).ToEventContextProperty(c => c.Occurred)
+            .Set(m => m.Status).WithValue(InvoiceStatus.Paid))
+        .From<LineItemAdded>(from => from
+            .Add(m => m.TotalAmount).With(e => e.Price))
+        .Join<CustomerRegistered>(j => j
+            .On(m => m.CustomerId)
+            .Set(m => m.CustomerName).To(e => e.Name))
+        .RemovedWith<InvoiceDeleted>()
+        .Children(m => m.Lines, cb => cb
+            .IdentifiedBy(li => li.LineItemId)
+            .From<LineItemAdded>(from => from.AutoMap())
+            .RemovedWith<LineItemRemoved>());
+}
+```
+
+### Builder method reference
+
+| Method | Purpose |
+| ------ | ------- |
+| `.From<TEvent>(cb)` | Handle an event type |
+| `.AutoMap()` | Map all event properties with matching names to the read model |
+| `.Set(m => m.Prop).To(e => e.Prop)` | Explicit property mapping |
+| `.Set(m => m.Prop).WithValue(val)` | Set a constant |
+| `.Set(m => m.Prop).ToEventContextProperty(c => c.X)` | Map from event metadata |
+| `.Add(m => m.Prop).With(e => e.X)` | Add (numeric) |
+| `.Subtract(m => m.Prop).With(e => e.X)` | Subtract |
+| `.Count(m => m.Prop)` | Increment a counter |
+| `.Join<TEvent>(j => j.On(key).Set(...))` | Cross-stream join another event |
+| `.RemovedWith<TEvent>()` | Delete the read model on this event |
+| `.Children(m => m.Coll, cb)` | Manage a child collection |
+| `.FromEvery(cb)` | Apply mapping to every event type |
+| `.UsingKey(e => e.Prop)` | Override the key (default: event source ID) |
+| `.Passive()` | On-demand only, no active observer |
+| `.NotRewindable()` | Forward-only, no replay |
+
+### Event context properties
+
+```csharp
+.ToEventContextProperty(c => c.Occurred)        // DateTimeOffset
+.ToEventContextProperty(c => c.EventSourceId)   // string
+.ToEventContextProperty(c => c.SequenceNumber)  // long
+.ToEventContextProperty(c => c.CorrelationId)   // Guid
+```
+
+### Composite keys
+
+```csharp
+builder.UsingCompositeKey(key => key
+    .Set(k => k.Year).ToEventContextProperty(c => c.Occurred.Year)
+    .Set(k => k.Month).ToEventContextProperty(c => c.Occurred.Month));
+```
+
+---
+
+## Model-bound projections (attribute-based)
+
+Attributes go directly on the read model record. Chronicle discovers types with any mapping attribute automatically — no separate class needed.
+
+```csharp
+using Cratis.Chronicle.Keys;
+using Cratis.Chronicle.Projections.ModelBound;
+
+public record InvoiceInfo(
+    [Key] Guid Id,
+    [FromEvent<InvoiceCreated>] string Number,                       // auto-map by name
+    [SetFrom(nameof(InvoiceCreated.Total))] decimal Total,
+    [AddFrom(nameof(LineItemAdded.Price))] decimal RunningTotal,
+    [SetFromContext(nameof(EventContext.Occurred))] DateTimeOffset? PaidAt,
+    [RemovedWith<InvoiceDeleted>] bool _                             // marker; property unused
+);
+```
+
+| Attribute | Equivalent builder |
+| --------- | ------------------ |
+| `[Key]` | Default key (event source ID) |
+| `[FromEvent<T>]` | `.AutoMap()` for event T |
+| `[SetFrom(nameof(...))]` | `.Set(...).To(...)` |
+| `[AddFrom(nameof(...))]` | `.Add(...).With(...)` |
+| `[SubtractFrom(nameof(...))]` | `.Subtract(...).With(...)` |
+| `[SetFromContext(nameof(...))]` | `.Set(...).ToEventContextProperty(...)` |
+| `[Increment]` | counter increment |
+| `[Decrement]` | counter decrement |
+| `[RemovedWith<T>]` | `.RemovedWith<T>()` |
+| `[Passive]` | `.Passive()` |
+| `[NotRewindable]` | `.NotRewindable()` |
+
+---
+
+## Reading projected read models
+
+```csharp
+// Inject IReadModels<Invoice> in a controller
+[HttpGet("{id}")]
+public async Task<Invoice> Get(Guid id)
+    => await readModels.GetOne(id);
+
+// Strong consistency (replay events synchronously before returning)
+var account = await readModels.GetOneWithImmediateProjection(id);
+
+// All instances
+var all = await readModels.GetAll();
+```
diff --git a/.github/skills/cratis-readmodel/references/queries.md b/.github/skills/cratis-readmodel/references/queries.md
new file mode 100644
index 0000000..cf78bc3
--- /dev/null
+++ b/.github/skills/cratis-readmodel/references/queries.md
@@ -0,0 +1,104 @@
+# Queries — Reference
+
+## Query endpoint patterns
+
+### Collection query
+
+```csharp
+[HttpGet]
+public IEnumerable<AccountSummary> AllAccounts()
+    => collection.Find(_ => true).ToList();
+```
+
+### Single item query
+
+```csharp
+[HttpGet("{id}")]
+public AccountSummary? GetAccount(Guid id)
+    => collection.Find(a => a.Id == id).FirstOrDefault();
+```
+
+### Filtered query (with proxy parameter)
+
+```csharp
+[HttpGet("search")]
+public IEnumerable<AccountSummary> Search([FromQuery] string? filter)
+    => collection.Find(a => a.Name.StartsWith(filter ?? string.Empty)).ToList();
+```
+
+The `[FromQuery]` parameter is included in the generated TypeScript proxy.
+
+### Observable (real-time) query
+
+Return `ISubject<T>` to push data to clients over WebSocket:
+
+```csharp
+[HttpGet("live")]
+public ISubject<IEnumerable<AccountSummary>> AllAccountsLive()
+{
+    var observable = new ClientObservable<IEnumerable<AccountSummary>>();
+    observable.OnNext(collection.Find(_ => true).ToList());
+
+    var changeStream = collection.Watch();
+    observable.ClientDisconnected += () => changeStream.Dispose();
+    Task.Run(async () =>
+    {
+        await foreach (var _ in changeStream.ToAsyncEnumerable())
+            observable.OnNext(collection.Find(_ => true).ToList());
+    });
+
+    return observable;
+}
+```
+
+The proxy generator produces an `ObservableQuery` TypeScript class for `ISubject<T>` return types. The React hook `useObservableQuery()` is used automatically.
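+
+The same push mechanics work without a Mongo change stream — any signal can drive `OnNext`. A minimal sketch using only the members shown above; the timer-based endpoint is invented for illustration:
+
+```csharp
+[HttpGet("clock")]
+public ISubject<DateTimeOffset> ServerTime()
+{
+    var observable = new ClientObservable<DateTimeOffset>();
+    var timer = new System.Threading.Timer(
+        _ => observable.OnNext(DateTimeOffset.UtcNow),
+        null, dueTime: TimeSpan.Zero, period: TimeSpan.FromSeconds(1));
+
+    // Stop pushing when the WebSocket client goes away
+    observable.ClientDisconnected += () => timer.Dispose();
+    return observable;
+}
+```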
+
+---
+
+## QueryResult shape (frontend)
+
+```ts
+interface QueryResultWithState<T> {
+    data: T;
+    isSuccess: boolean;
+    isAuthorized: boolean;
+    isValid: boolean;
+    validationResults: ValidationResult[];
+    hasExceptions: boolean;
+    exceptionMessages: string[];
+    paging: { page: number; pageSize: number; totalItems: number; totalPages: number };
+
+    // React-specific:
+    hasData: boolean;        // non-null and non-empty
+    isPerforming: boolean;   // request in flight
+}
+```
+
+---
+
+## React usage
+
+```tsx
+// Standard query — returns [result, requery]
+const [accounts, refresh] = AllAccounts.use();
+
+// With parameters
+const [results] = Search.use({ filter: searchText });
+
+// Observable query — returns [result] only (no manual refresh)
+const [liveAccounts] = AllAccountsLive.use();
+```
+
+For full page layouts with tables and menu actions, see the `cratis-react-page` skill.
+
+---
+
+## Naming conventions
+
+The **method name** on the controller becomes the TypeScript proxy class name. Make it descriptive.
+
+| ✅ Good | ❌ Avoid |
+| ------- | ------- |
+| `AllAccounts` | `Get`, `GetAll`, `List` |
+| `AccountsByOwner` | `Query`, `Fetch` |
+| `AllAccountsLive` | `Observable`, `Live` |
diff --git a/.github/skills/cratis-readmodel/references/reducers.md b/.github/skills/cratis-readmodel/references/reducers.md
new file mode 100644
index 0000000..19e68b7
--- /dev/null
+++ b/.github/skills/cratis-readmodel/references/reducers.md
@@ -0,0 +1,95 @@
+# Reducers — Reference
+
+## Signatures
+
+All public methods with an `[EventType]` record as the first parameter are treated as event handlers. All the following signatures are valid:
+
+```csharp
+public TState Handle(TEvent @event, TState? current, EventContext context)
+public TState Handle(TEvent @event, TState? current)
+public Task<TState> Handle(TEvent @event, TState? current, EventContext context)
+public Task<TState> Handle(TEvent @event, TState? current)
+```
+
+Method names are yours to choose — Chronicle matches by the event type parameter.
+
+---
+
+## EventContext properties
+
+| Property | Type | Description |
+| -------- | ---- | ----------- |
+| `context.Occurred` | `DateTimeOffset` | When the event was appended |
+| `context.EventSourceId` | `EventSourceId` | The aggregate root identifier |
+| `context.SequenceNumber` | `EventSequenceNumber` | Position in the event sequence |
+| `context.CorrelationId` | `CorrelationId` | Correlation ID for causality tracking |
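+
+A handler can fold any of these into state — a small sketch; `NoteAdded`/`AuditedNotes` are illustrative names, not Chronicle types:
+
+```csharp
+public record NoteAdded(string Text);
+public record AuditedNotes(int Count, DateTimeOffset LastChanged);
+
+public class AuditedNotesReducer : IReducerFor<AuditedNotes>
+{
+    // Uses the optional EventContext parameter to stamp when the event was appended
+    public AuditedNotes Added(NoteAdded @event, AuditedNotes? current, EventContext context)
+        => new((current?.Count ?? 0) + 1, context.Occurred);
+}
+```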
+
+---
+
+## Full example: shopping cart
+
+```csharp
+public record CartItem(string Sku, int Quantity, decimal UnitPrice);
+public record CartState(IReadOnlyList<CartItem> Items, decimal Total, bool IsCheckedOut);
+
+public class CartReducer : IReducerFor<CartState>
+{
+    public CartState Created(CartCreated @event, CartState? current, EventContext context)
+        => new([], 0m, false);
+
+    public CartState ItemAdded(CartItemAdded @event, CartState? current, EventContext context)
+    {
+        var items = (current?.Items ?? []).ToList();
+        var existing = items.FirstOrDefault(i => i.Sku == @event.Sku);
+        if (existing is not null)
+        {
+            items.Remove(existing);
+            items.Add(existing with { Quantity = existing.Quantity + @event.Quantity });
+        }
+        else
+        {
+            items.Add(new CartItem(@event.Sku, @event.Quantity, @event.UnitPrice));
+        }
+        var total = items.Sum(i => i.Quantity * i.UnitPrice);
+        return new CartState(items, total, false);
+    }
+
+    public CartState ItemRemoved(CartItemRemoved @event, CartState? current, EventContext context)
+    {
+        var items = (current?.Items ?? []).Where(i => i.Sku != @event.Sku).ToList();
+        return new CartState(items, items.Sum(i => i.Quantity * i.UnitPrice), false);
+    }
+
+    public CartState CheckedOut(CartCheckedOut @event, CartState? current, EventContext context)
+        => (current ?? new([], 0m, false)) with { IsCheckedOut = true };
+}
+```
+
+---
+
+## Passive reducers
+
+A passive reducer is not an active observer — it computes state on demand. Useful for previews or draft calculations:
+
+```csharp
+[Passive]
+public class DraftOrderReducer : IReducerFor<DraftOrder> { ... }
+```
+
+Call explicitly rather than subscribing automatically:
+
+```csharp
+// Resolved on demand via IReadModels<DraftOrder>
+var state = await readModels.GetOne(orderId);
+```
+
+---
+
+## Reading reducer state
+
+```csharp
+// Single instance (inject IReadModels<CartState>)
+var cart = await readModels.GetOne(cartId);
+
+// All instances
+var allCarts = await readModels.GetAll();
+```
diff --git a/.github/skills/cratis-specs-csharp/SKILL.md b/.github/skills/cratis-specs-csharp/SKILL.md
new file mode 100644
index 0000000..c6c81ba
--- /dev/null
+++ b/.github/skills/cratis-specs-csharp/SKILL.md
@@ -0,0 +1,174 @@
+---
+name: cratis-specs-csharp
+description: Step-by-step guidance for writing C# specs in Cratis using BDD-style Specification by Example — the Establish/Because/should_ pattern, for_/when_/and_ folder hierarchy, reusable given/ contexts, NSubstitute mocking, and Chronicle integration specs. Use when writing C# unit or integration specs, creating spec files or folders, structuring the for_/when_/and_ hierarchy, mocking with NSubstitute, writing given/ reusable contexts, or using ShouldEqual/ShouldBeTrue assertions. For integration specs specifically tied to a vertical slice command, write-specs offers a more focused workflow.
+---
+
+## Core philosophy
+
+Specs are **executable documentation** — the folder tree reads like a spec sheet. Favor readability over DRY.
+
+Each spec file has:
+- One action under test (`Because`)
+- One setup (`Establish`)
+- One or more focused assertions (`should_*`)
+
+---
+
+## Step 1 — Choose the spec type
+
+| Scenario | Spec type |
+| --- | --- |
+| Isolated unit logic (no I/O) | Unit spec in `for_<Unit>/` |
+| A full vertical slice (command → event) | Chronicle integration spec in slice folder |
+| Complex setup shared across many specs | Reusable context in `given/` |
+
+---
+
+## Step 2 — Create the folder structure
+
+```
+for_<Unit>/
+├── given/
+│   ├── all_dependencies.cs      ← mocks all deps, inherits Specification
+│   └── a_<unit>.cs              ← creates SUT, inherits all_dependencies
+├── when_<behavior>/             ← behavior with multiple outcomes
+│   ├── and_<state>.cs
+│   └── with_<variation>.cs
+└── when_<behavior>.cs           ← single outcome = single file
+```
+
+Folder/file names read as English sentences:
+- `for_Changeset / when_adding_changes / and_there_are_differences`
+- `for_AuthorService / when_registering / and_name_already_exists`
+
+---
+
+## Step 3 — Write a spec
+
+```csharp
+// for_KeyHelper/when_combining_parts.cs
+namespace MyApp.for_KeyHelper;
+
+public class when_combining_parts : Specification
+{
+    object[] _parts;
+    string _result;
+
+    void Establish() => _parts = ["First", "Second", "Third"];
+
+    void Because() => _result = KeyHelper.Combine(_parts);
+
+    [Fact] void should_combine_all_parts() => _result.ShouldEqual("First+Second+Third");
+    [Fact] void should_not_be_empty() => _result.ShouldNotBeEmpty();
+}
+```
+
+Rules:
+- Inherit `Specification` (from `Cratis.Specifications`)
+- `void Establish()` — setup before the action
+- `void Because()` — **one** action under test (the thing being specified)
+- `[Fact] void should_*()` — one assertion per fact, no blank lines between them
+- All fields: `private` (or `protected` in `given/` contexts), `_camelCase`
+- All methods can be `async Task` when needed
+
+---
+
+## Step 4 — Add a reusable context (`given/`)
+
+When multiple specs share the same setup, extract it into a `given/` class:
+
+```csharp
+// for_AuthorService/given/all_dependencies.cs
+namespace MyApp.Authors.for_AuthorService.given;
+
+public class all_dependencies : Specification
+{
+    protected IEventLog _eventLog;
+    protected ILogger<AuthorService> _logger;
+
+    void Establish()
+    {
+        _eventLog = Substitute.For<IEventLog>();
+        _logger = Substitute.For<ILogger<AuthorService>>();
+    }
+}
+```
+
+```csharp
+// for_AuthorService/given/an_author_service.cs
+namespace MyApp.Authors.for_AuthorService.given;
+
+public class an_author_service : all_dependencies
+{
+    protected AuthorService _service;
+
+    void Establish() => _service = new(_eventLog, _logger);
+}
+```
+
+```csharp
+// for_AuthorService/when_registering/and_name_is_valid.cs
+namespace MyApp.Authors.for_AuthorService.when_registering;
+
+public class and_name_is_valid : given.an_author_service
+{
+    void Because() => _service.Register(new AuthorName("John"));
+
+    [Fact] void should_append_event() =>
+        _eventLog.Received(1).Append(Arg.Any<EventSourceId>(), Arg.Any<AuthorRegistered>());
+}
+```
+
+See `references/csharp-patterns.md` for full NSubstitute patterns and assertion methods.
+
+---
+
+## Step 5 — Write a Chronicle integration spec
+
+Integration specs live directly inside the slice's `when_<action>/` folder and test the full stack against a real Chronicle event store.
+
+```csharp
+// Authors/Registration/when_registering/and_there_are_no_authors.cs
+using context = MyApp.Authors.Registration.when_registering.and_there_are_no_authors.context;
+
+namespace MyApp.Authors.Registration.when_registering;
+
+[Collection(ChronicleCollection.Name)]
+public class and_there_are_no_authors(context context) : Given<context>(context)
+{
+    public class context(ChronicleOutOfProcessFixture fixture) : given.an_http_client(fixture)
+    {
+        public CommandResult<AuthorId>? Result;
+
+        async Task Because() =>
+            Result = await Client.ExecuteCommand<AuthorId>(
+                "/api/authors/register",
+                new RegisterAuthor(new AuthorName("John Doe")));
+    }
+
+    [Fact] void should_be_successful() => Context.Result!.IsSuccess.ShouldBeTrue();
+    [Fact] void should_have_appended_one_event() =>
+        Context.ShouldHaveTailSequenceNumber(EventSequenceNumber.First);
+    [Fact] void should_append_author_registered_event() =>
+        Context.ShouldHaveAppendedEvent<AuthorRegistered>(
+            EventSequenceNumber.First, Context.Result!.Response,
+            evt => evt.Name.Value.ShouldEqual("John Doe"));
+}
+```
+
+See `references/integration-specs.md` for the full integration spec guide.
+
+---
+
+## What NOT to spec
+
+- Simple auto-properties (`public AuthorId Id { get; }`)
+- Properties returning constructor parameters
+- Simple delegation (`public IEnumerable<Author> All => _list;`)
+- Logging calls
+- Trivial null checks
+
+---
+
+## Reference files
+
+- `references/csharp-patterns.md` — BDD pattern detail, NSubstitute, assertions, exception catching
+- `references/integration-specs.md` — Chronicle integration spec structure, helpers, `Given<TContext>`
diff --git a/.github/skills/cratis-specs-csharp/evals/evals.json b/.github/skills/cratis-specs-csharp/evals/evals.json
new file mode 100644
index 0000000..6bbbd80
--- /dev/null
+++ b/.github/skills/cratis-specs-csharp/evals/evals.json
@@ -0,0 +1,37 @@
+{
+  "skill_name": "cratis-specs-csharp",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "Write C# specs for a class called EmployeeService that has a Register(EmployeeName name) method. The method appends an EmployeeRegistered event via IEventLog. Write specs for two scenarios: (1) when the name is valid - it should append the event, (2) when the employee name already exists - it should throw EmployeeAlreadyRegistered. Follow Cratis BDD spec conventions.",
+      "expected_output": "Folder structure: for_EmployeeService/given/all_dependencies.cs + given/an_employee_service.cs + when_registering/and_name_is_valid.cs + when_registering/and_name_already_exists.cs. Uses Specification base class, Establish/Because/should_ pattern, NSubstitute for IEventLog, ShouldXxx assertions, Catch.Exception.",
+      "files": [],
+      "assertions": [
+        "Uses Specification base class (not xUnit test class directly)",
+        "States are separated: Establish (setup), Because (single action), [Fact] should_* (assertions)",
+        "Folder structure follows for_/when_/and_ naming",
+        "Uses NSubstitute: Substitute.For<T>()",
+        "Uses ShouldXxx assertion methods (not Assert.Equal or similar)",
+        "Exception test uses Catch.Exception pattern",
+        "Reusable context in given/ folder with protected fields",
+        "No [Fact] attribute on Establish or Because methods"
+      ]
+    },
+    {
+      "id": 2,
+      "prompt": "Write a Chronicle integration spec for a command RegisterAuthor(AuthorName name) at route /api/authors/register. Write two scenarios: one where registration succeeds (no prior authors), and one where it fails because the author name already exists.",
+      "expected_output": "Two spec files under Authors/Registration/when_registering/ using [Collection(ChronicleCollection.Name)], Given<context>, context inner class inheriting given.an_http_client(fixture), async Task Because() using Client.ExecuteCommand. Assertions use ShouldHaveTailSequenceNumber and ShouldHaveAppendedEvent.",
+      "files": [],
+      "assertions": [
+        "Uses [Collection(ChronicleCollection.Name)] attribute",
+        "context is an inner public class",
+        "Uses Given<context> as base class for outer spec class",
+        "Has 'using context = ...' alias at the top of each file",
+        "Because() is async Task and calls Client.ExecuteCommand",
+        "Uses ShouldHaveTailSequenceNumber for event count verification",
+        "Precondition spec uses async Task Establish() to seed events",
+        "Uses ShouldHaveAppendedEvent for verifying event contents"
+      ]
+    }
+  ]
+}
diff --git a/.github/skills/cratis-specs-csharp/references/csharp-patterns.md b/.github/skills/cratis-specs-csharp/references/csharp-patterns.md
new file mode 100644
index 0000000..db9a574
--- /dev/null
+++ b/.github/skills/cratis-specs-csharp/references/csharp-patterns.md
@@ -0,0 +1,166 @@
+# C# Spec Patterns — Reference
+
+## BDD phases
+
+| Method | Purpose | Notes |
+| --- | --- | --- |
+| `void Establish()` | Setup — runs before `Because()` | Base-class `Establish` runs first, then derived class |
+| `void Because()` | The single action under test | Only in concrete spec files — never in `given/` contexts |
+| `[Fact] void should_*()` | One assertion per fact | No blank lines between `should_` methods |
+| `void Destroy()` | Teardown after each test | Optional |
+
+All phases can be `async Task`.
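+
+A sketch combining the async variants with `Destroy()` teardown — `TempFileWriter` is a hypothetical subject, used only to motivate real setup and teardown:
+
+```csharp
+public class when_writing_content : Specification
+{
+    string _path;
+    TempFileWriter _writer;   // hypothetical system under test
+
+    async Task Establish()
+    {
+        _path = Path.GetTempFileName();
+        _writer = new TempFileWriter(_path);
+        await _writer.OpenAsync();
+    }
+
+    async Task Because() => await _writer.WriteAsync("hello");
+
+    [Fact] void should_persist_content() => File.ReadAllText(_path).ShouldEqual("hello");
+
+    void Destroy() => File.Delete(_path);
+}
+```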
+
+---
+
+## Minimal spec
+
+```csharp
+namespace MyApp.for_KeyHelper;
+
+public class when_combining_parts : Specification
+{
+    object[] _parts;
+    string _result;
+
+    void Establish() => _parts = ["First", "Second", "Third"];
+    void Because() => _result = KeyHelper.Combine(_parts);
+
+    [Fact] void should_combine_all_parts() => _result.ShouldEqual("First+Second+Third");
+    [Fact] void should_not_be_empty() => _result.ShouldNotBeEmpty();
+}
+```
+
+---
+
+## Reusable context (layered given)
+
+```csharp
+// given/all_dependencies.cs — mock all external deps
+public class all_dependencies : Specification
+{
+    protected IEventStore _eventStore;
+    protected IReactorInvoker _reactorInvoker;
+
+    void Establish()
+    {
+        _eventStore = Substitute.For<IEventStore>();
+        _reactorInvoker = Substitute.For<IReactorInvoker>();
+    }
+}
+
+// given/a_reactor_handler.cs — build system under test
+public class a_reactor_handler : all_dependencies
+{
+    protected ReactorHandler _handler;
+
+    void Establish() => _handler = new(_eventStore, _reactorInvoker);
+}
+
+// when_handling/and_event_is_received.cs — concrete spec
+public class and_event_is_received : given.a_reactor_handler
+{
+    void Because() => _handler.Handle(new SomeEvent());
+
+    [Fact] void should_invoke_reactor() =>
+        _reactorInvoker.Received(1).Invoke(Arg.Any<SomeEvent>());
+}
+```
+
+---
+
+## NSubstitute patterns
+
+```csharp
+// Create substitutes
+_service = Substitute.For<IMyService>();
+
+// Return values
+_service.GetValue(Arg.Any<string>()).Returns("result");
+_service.GetAsync(Arg.Any<Guid>()).Returns(Task.FromResult(42));
+
+// Argument matchers
+Arg.Is<SomeRequest>(r => r.Id == expectedId && r.Name == expectedName)
+
+// Verify calls
+_service.Received(1).DoSomething(Arg.Any<SomeRequest>());
+_service.DidNotReceive().DoSomethingElse();
+
+// Capture arguments
+_service.When(s => s.Process(Arg.Any<IEnumerable<Item>>()))
+    .Do(info => _captured = info.Arg<IEnumerable<Item>>());
+
+// Throw from substitute
+_handler.Handle(Arg.Any<SomeEvent>()).Throws(new MyException("fail"));
+```
+
+---
+
+## Assertion extension methods (Cratis.Specifications)
+
+| Method | Example |
+| --- | --- |
+| `.ShouldEqual(expected)` | `_result.ShouldEqual(42)` |
+| `.ShouldBeTrue()` | `_flag.ShouldBeTrue()` |
+| `.ShouldBeFalse()` | `_flag.ShouldBeFalse()` |
+| `.ShouldBeNull()` | `_error.ShouldBeNull()` |
+| `.ShouldNotBeNull()` | `_value.ShouldNotBeNull()` |
+| `.ShouldBeEmpty()` | `_list.ShouldBeEmpty()` |
+| `.ShouldNotBeEmpty()` | `_list.ShouldNotBeEmpty()` |
+| `.ShouldContain(item)` | `_list.ShouldContain(expected)` |
+| `.ShouldNotContain(item)` | `_list.ShouldNotContain(excluded)` |
+| `.ShouldContainOnly(items)` | `_list.ShouldContainOnly(expectedItems)` |
+| `.ShouldBeOfExactType<T>()` | `_obj.ShouldBeOfExactType<MyClass>()` |
+| `.ShouldBeGreaterThan(n)` | `_count.ShouldBeGreaterThan(0)` |
+| `.ShouldBeLessThan(n)` | `_count.ShouldBeLessThan(100)` |
+
+---
+
+## Catching exceptions
+
+```csharp
+Exception? _error;
+
+async Task Because() => _error = await Catch.Exception(_sut.DoSomethingThatThrows);
+
+[Fact] void should_throw() => _error.ShouldNotBeNull();
+[Fact] void should_not_throw() => _error.ShouldBeNull();
+[Fact] void should_throw_author_not_found() => _error.ShouldBeOfExactType<AuthorNotFound>();
+```
+
+---
+
+## Using statements
+
+Common usings are provided globally in `GlobalUsings.Specs.cs` — do **not** add them manually:
+- `Xunit`
+- `NSubstitute`
+- `Cratis.Specifications`
+
+Do **not** add a `using` for the namespace of the system under test.
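+
+For orientation, a `GlobalUsings.Specs.cs` covering exactly the packages listed above would plausibly look like this (a sketch — the actual file ships with the project setup):
+
+```csharp
+// GlobalUsings.Specs.cs — shared by every spec file in the project
+global using Cratis.Specifications;
+global using NSubstitute;
+global using Xunit;
+```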
+
+---
+
+## Multiple outcomes → folder
+
+```
+// Single outcome → single file
+for_MyService/when_processing.cs
+
+// Multiple outcomes → folder + files
+for_MyService/when_processing/
+    and_input_is_valid.cs
+    and_input_is_null.cs
+    with_empty_collection.cs
+    without_required_field.cs
+```
+
+Allowed file name prefixes: `and_*`, `with_*`, `without_*`, `having_*`, `given_*`
+
+---
+
+## Folder naming read as sentences
+
+`for_AuthorService / when_registering / and_name_already_exists`
+
+→ "For AuthorService, when registering, and name already exists, it should..."
diff --git a/.github/skills/cratis-specs-csharp/references/integration-specs.md b/.github/skills/cratis-specs-csharp/references/integration-specs.md
new file mode 100644
index 0000000..0f6ce1a
--- /dev/null
+++ b/.github/skills/cratis-specs-csharp/references/integration-specs.md
@@ -0,0 +1,101 @@
+# Chronicle Integration Specs — Reference
+
+Integration specs test a complete vertical slice end-to-end — from HTTP request through command handling, event appending, constraint checking, and projection — against a real Chronicle event store. If one passes, the entire stack works.
+
+They live under `when_<action>/` **inside the slice folder** (not in a `for_<unit>/` unit folder — there's no isolated unit, the entire slice is under test).
+
+---
+
+## Structure
+
+```csharp
+// Authors/Registration/when_registering/and_there_are_no_authors.cs
+
+using context = MyApp.Authors.Registration.when_registering.and_there_are_no_authors.context;
+
+namespace MyApp.Authors.Registration.when_registering;
+
+[Collection(ChronicleCollection.Name)]
+public class and_there_are_no_authors(context context) : Given<context>(context)
+{
+    public class context(ChronicleOutOfProcessFixture fixture) : given.an_http_client(fixture)
+    {
+        public CommandResult<AuthorId>? Result;
+
+        async Task Because() =>
+            Result = await Client.ExecuteCommand<AuthorId>(
+                "/api/authors/register",
+                new RegisterAuthor(new AuthorName("John Doe")));
+    }
+
+    [Fact] void should_be_successful() => Context.Result!.IsSuccess.ShouldBeTrue();
+    [Fact] void should_have_appended_one_event() =>
+        Context.ShouldHaveTailSequenceNumber(EventSequenceNumber.First);
+    [Fact] void should_append_author_registered_event() =>
+        Context.ShouldHaveAppendedEvent<AuthorRegistered>(
+            EventSequenceNumber.First, Context.Result!.Response,
+            evt => evt.Name.Value.ShouldEqual("John Doe"));
+}
+```
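+
+As a mental model, the `given.an_http_client` base context supplies the `Client` and `EventStore` members used above. Roughly — an assumption sketched for orientation, not the real implementation, which lives in the spec infrastructure:
+
+```csharp
+// Hypothetical shape — do not copy into a project
+public class an_http_client(ChronicleOutOfProcessFixture fixture) : Specification
+{
+    protected HttpClient Client { get; } = fixture.CreateClient();    // assumed helper
+    protected IEventStore EventStore { get; } = fixture.EventStore;   // assumed property
+}
+```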
+
+---
+
+## Spec with preconditions (seed the event store)
+
+Use `async Task Establish()` to append events before `Because()`:
+
+```csharp
+public class context(ChronicleOutOfProcessFixture fixture) : given.an_http_client(fixture)
+{
+    public const string ExistingName = "John Doe";
+    public CommandResult? Result;
+
+    async Task Establish() =>
+        await EventStore.EventLog.Append(AuthorId.New(), new AuthorRegistered(ExistingName));
+
+    async Task Because() =>
+        Result = await Client.ExecuteCommand(
+            "/api/authors/register",
+            new RegisterAuthor(ExistingName));
+}
+
+[Fact] void should_not_be_successful() => Context.Result!.IsSuccess.ShouldBeFalse();
+[Fact] void should_not_have_appended_additional_events() =>
+    Context.ShouldHaveTailSequenceNumber(EventSequenceNumber.First);
+```
+
+---
+
+## ExecuteCommand overloads
+
+```csharp
+// Command with no typed response (returns CommandResult)
+Result = await Client.ExecuteCommand(url, command);
+
+// Command with typed response (returns CommandResult<TResponse>)
+Result = await Client.ExecuteCommand<TResponse>(url, command);
+```
+
+---
+
+## Integration assertion helpers
+
+| Helper | What it verifies |
+| --- | --- |
+| `Context.Result!.IsSuccess.ShouldBeTrue()` | Command succeeded |
+| `Context.Result!.IsSuccess.ShouldBeFalse()` | Command failed (validation, constraint, etc.) |
+| `Context.ShouldHaveTailSequenceNumber(EventSequenceNumber.First)` | Event log has exactly one event (sequence 0) |
+| `Context.ShouldHaveTailSequenceNumber(n)` | Event log tail is at sequence `n` |
+| `Context.ShouldHaveAppendedEvent<TEvent>(seq, eventSourceId, validator)` | Specific event was appended at sequence with correct values |
+
+---
+
+## Key rules
+
+- `context` is an **inner public class** inheriting from `given.an_http_client(fixture)`
+- Always add the `using context = <namespace>.<spec class>.context;` alias at the top
+- `[Collection(ChronicleCollection.Name)]` on the outer class — required for test isolation
+- `Establish` seeds preconditions; `Because` executes the command under test
+- Declare `Result` as nullable, initialized to `null!` if needed for nullable analysis
+- The outer class constructor receives `context` via xUnit constructor injection
+- Never mix unit specs and integration specs in the same folder
diff --git a/.github/skills/cratis-specs-typescript/SKILL.md b/.github/skills/cratis-specs-typescript/SKILL.md
new file mode 100644
index 0000000..f153bb1
--- /dev/null
+++ b/.github/skills/cratis-specs-typescript/SKILL.md
@@ -0,0 +1,135 @@
+---
+name: cratis-specs-typescript
+description: Step-by-step guidance for writing TypeScript specs in Cratis using BDD-style Specification by Example — the given()/describe/it pattern, for_/when_/ folder hierarchy, reusable context classes, Sinon mocking, and Chai assertions. Use whenever writing TypeScript specs or tests, creating spec files/folders, using the given() helper, mocking with sinon.createStubInstance or sinon.stub, asserting with Chai .should, or understanding how yarn test runs specs.
+---
+
+## Core philosophy
+
+Same BDD philosophy as C# specs — specs describe behaviors, not implementations. The `given()` helper + context class mirrors the C# `Specification` base class: setup is separated from the action, each `it()` verifies a single outcome.
+
+---
+
+## Step 1 — Create the folder structure
+
+```
+for_<Unit>/
+├── given/
+│   └── a_<context>.ts           ← reusable context class
+├── when_<behavior>/             ← behavior with multiple outcomes
+│   ├── with_<variation>.ts
+│   ├── without_<thing>.ts
+│   └── and_<state>.ts
+└── when_<behavior>.ts           ← single outcome = single file
+```
+
+Example:
+```
+for_EventsCommandResponseValueHandler/
+├── given/
+│   └── an_events_command_response_value_handler.ts
+├── when_checking_can_handle/
+│   ├── with_valid_events_collection.ts
+│   ├── with_null_value.ts
+│   └── without_event_source_id.ts
+└── when_handling/
+    ├── empty_events_collection.ts
+    └── multiple_events_collection.ts
+```
+
+---
+
+## Step 2 — Write a reusable context class
+
+```ts
+// for_AuthorService/given/an_author_service.ts
+import sinon from 'sinon';
+import { AuthorService } from '../../../AuthorService';
+
+export class an_author_service {
+    eventLog: sinon.StubbedInstance<EventLog>;
+    service: AuthorService;
+
+    constructor() {
+        this.eventLog = sinon.createStubInstance(EventLog);
+        this.service = new AuthorService(this.eventLog);
+    }
+}
+```
+
+Properties are **public** (unlike C# protected fields) — tests access them via `context.propertyName`.
+
+---
+
+## Step 3 — Write a spec using `given()`
+
+```ts
+// for_AuthorService/when_registering/with_valid_name.ts
+import { an_author_service } from '../given/an_author_service';
+import { given } from '../../given';   // import from package root
+
+describe('when registering with valid name', given(an_author_service, context => {
+    beforeEach(async () => {
+        await context.service.register('John Doe');
+    });
+
+    it('should append an event', () => {
+        context.eventLog.append.calledOnce.should.be.true;
+    });
+
+    it('should pass the author name', () => {
+        const call = context.eventLog.append.firstCall;
+        call.args[1].name.should.equal('John Doe');
+    });
+}));
+```
+
+---
+
+## Step 4 — Simple spec (no shared context)
+
+For behaviors without shared setup:
+
+```ts
+describe('when replacing route parameters', () => {
+    let result: { route: string; unusedParameters: object };
+
+    beforeEach(() => {
+        result = UrlHelpers.replaceRouteParameters('/api/items/{id}', { id: '123' });
+    });
+
+    it('should replace the route parameter', () => {
+        result.route.should.equal('/api/items/123');
+    });
+
+    it('should remove used parameters', () => {
+        Object.keys(result.unusedParameters).should.have.lengthOf(0);
+    });
+});
+```
+
+---
+
+## Naming conventions
+
+| Element | Convention | Example |
+| --- | --- | --- |
+| `describe()` text | Natural language sentence | `'when registering with valid name'` |
+| `it()` text | Starts with "should", uses **spaces** | `'should append an event'` |
+| Context class | `a_` or `an_` prefix | `an_author_service` |
+| Spec file | Descriptive, `with_` / `without_` / `and_` | `with_valid_name.ts` |
+
+**Always use spaces in `it()` descriptions** — never underscores.
+
+---
+
+## Running specs
+
+```bash
+yarn test   # from the package root
+```
+
+---
+
+## Reference files
+
+- `references/typescript-patterns.md` — Chai assertions, Sinon patterns, async specs, full examples
diff --git a/.github/skills/cratis-specs-typescript/evals/evals.json b/.github/skills/cratis-specs-typescript/evals/evals.json
new file mode 100644
index 0000000..1ca8aee
--- /dev/null
+++ b/.github/skills/cratis-specs-typescript/evals/evals.json
@@ -0,0 +1,35 @@
+{
+  "skill_name": "cratis-specs-typescript",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "Write TypeScript specs for a function formatCurrency(amount: number, currency: string): string. Test two behaviors: (1) when formatting a positive amount — it should include the currency symbol and two decimal places; (2) when formatting a negative amount — it should include a minus sign. Follow Cratis TypeScript spec conventions.",
+      "expected_output": "Folder structure: for_formatCurrency/when_formatting_positive_amount.ts + when_formatting_negative_amount.ts. Uses describe()/it() with Vitest/Mocha. Uses Chai .should fluent assertions (NOT expect()). it() descriptions use spaces not underscores.",
+      "files": [],
+      "assertions": [
+        "Uses Chai .should fluent interface (value.should.equal(), NOT expect(value).to.equal())",
+        "it() descriptions use spaces (not underscores)",
+        "it() descriptions start with 'should'",
+        "Folder structure follows for_/when_.ts naming",
+        "Uses describe() for the outer scenario block",
+        "Uses beforeEach() for setup",
+        "Does NOT use expect() from Chai"
+      ]
+    },
+    {
+      "id": 2,
+      "prompt": "Write TypeScript specs for a class CommandHandler that has a method handle(command: Command): Promise<Result>. It depends on an ICommandExecutor interface. Spec two scenarios: (1) when the executor succeeds — result.isSuccess should be true; (2) when the executor throws — result.isSuccess should be false. Use the given() helper with a reusable context class.",
+      "expected_output": "for_CommandHandler/given/a_command_handler.ts context class using sinon.createStubInstance + for_CommandHandler/when_handling/with_successful_execution.ts and with_failed_execution.ts using given() helper. Chai .should assertions.",
+      "files": [],
+      "assertions": [
+        "Reusable context class defined in given/ folder",
+        "Context class uses sinon.createStubInstance for mocking",
+        "Context class properties are public (not protected)",
+        "Specs use given() helper from package root",
+        "Uses Chai .should assertions",
+        "Folder names follow for_/when_/with_ pattern",
+        "it() descriptions start with 'should' and use spaces"
+      ]
+    }
+  ]
+}
diff --git a/.github/skills/cratis-specs-typescript/references/typescript-patterns.md b/.github/skills/cratis-specs-typescript/references/typescript-patterns.md
new file mode 100644
index 0000000..74f0aae
--- /dev/null
+++ b/.github/skills/cratis-specs-typescript/references/typescript-patterns.md
@@ -0,0 +1,168 @@
+# TypeScript Spec Patterns — Reference
+
+## Frameworks
+
+| Framework | Role |
+| --- | --- |
+| [Vitest](https://vitest.dev/) | Test runner |
+| [Mocha](https://mochajs.org/) | Test structure (`describe`, `it`, `beforeEach`) |
+| [Chai](https://www.chaijs.com/) | Assertions — always `.should` fluent interface |
+| [SinonJS](https://sinonjs.org/) | Mocking and stubbing |
+
+---
+
+## Chai assertions — always use `.should`
+
+Never use `expect()`. The `.should` style reads as a natural English sentence.
+
+```ts
+// Equality
+value.should.equal(expected);
+value.should.deep.equal({ id: 1, name: 'John' });
+
+// Booleans
+flag.should.be.true;
+flag.should.be.false;
+
+// Null / undefined
+value.should.be.null;
+value.should.not.be.null;
+value.should.be.undefined;
+value.should.not.be.undefined;
+
+// Arrays
+array.should.contain(item);
+array.should.have.lengthOf(3);
+array.should.be.empty;
+array.should.not.be.empty;
+
+// Types
+value.should.be.instanceOf(MyClass);
+
+// Throwing
+(() => throwingFn()).should.throw(ErrorType);
+```
+
+---
+
+## Sinon mocking
+
+```ts
+import sinon from 'sinon';
+
+// Stub an entire class (all methods become stubs)
+const service = sinon.createStubInstance(ConcreteService);
+
+// Stub a global function
+const fetchStub = sinon.stub(globalThis, 'fetch');
+fetchStub.resolves({ ok: true, json: async () => ({ data: 'value' }) });
+
+// Configure return values
+service.getValue.returns('result');
+service.getAsync.resolves(42);
+
+// Access call details
+service.doSomething.calledOnce.should.be.true;
+service.doSomething.calledWith('expected-arg').should.be.true;
+service.doSomething.callCount.should.equal(2);
+
+const firstCall = service.doSomething.firstCall;
+firstCall.args[0].should.equal('expected');
+
+// Restore stubs after test
+afterEach(() => sinon.restore());
+```
+
+---
+
+## given() helper — full pattern
+
+The `given()` function instantiates a context class, runs tests with it, and ensures setup is isolated per test.
+
+```ts
+import { given } from '../../given';   // import from package root
+import { a_my_service } from '../given/a_my_service';
+
+describe('when doing something', given(a_my_service, context => {
+    let result: string;
+
+    beforeEach(async () => {
+        result = await context.service.doSomething('input');
+    });
+
+    it('should return expected result', () => {
+        result.should.equal('expected');
+    });
+
+    it('should call dependency once', () => {
+        context.dependency.process.calledOnce.should.be.true;
+    });
+}));
+```
+
+---
+
+## Reusable context class
+
+```ts
+// given/a_my_service.ts
+import sinon from 'sinon';
+import { MyService } from '../../../MyService';
+
+export class a_my_service {
+    dependency: sinon.StubbedInstance<DependencyClass>;
+    service: MyService;
+
+    constructor() {
+        this.dependency = sinon.createStubInstance(DependencyClass);
+        // configure defaults:
+        this.dependency.getValue.returns('default');
+        this.service = new MyService(this.dependency as unknown as IDependency);
+    }
+}
+```
+
+Properties are **public** — accessed via `context.propertyName` in specs.
+
+---
+
+## Multiple outcomes — folder pattern
+
+```
+when_processing/
+├── with_valid_input.ts        → happy path
+├── with_empty_input.ts        → edge case
+└── without_required_field.ts  → failure path
+```
+
+Each file has its own `describe()` block, its own `beforeEach`, and its own `it()` assertions.
+
+---
+
+## Async specs
+
+`beforeEach`, `afterEach`, and `it` can all be `async`:
+
+```ts
+describe('when loading data', given(a_loader, context => {
+    let result: Data[];
+
+    beforeEach(async () => {
+        result = await context.loader.load('source');
+    });
+
+    it('should return items', () => {
+        result.should.have.lengthOf(3);
+    });
+}));
+```
+
+---
+
+## What NOT to spec
+
+Same as C#:
+- Simple property getters and setters
+- Properties that return constructor parameters directly
+- Trivial delegation
+- Don't write specs whose `describe` starts with "when getting" or "when returning" — these are almost always testing getters, not behavior
diff --git a/.github/skills/cratis-vertical-slice/SKILL.md b/.github/skills/cratis-vertical-slice/SKILL.md
new file mode 100644
index 0000000..8ca0fe1
--- /dev/null
+++ b/.github/skills/cratis-vertical-slice/SKILL.md
@@ -0,0 +1,206 @@
+---
+name: cratis-vertical-slice
+description: Explains how vertical feature slices are structured in a Cratis Chronicle + Arc application — folder layout, what goes in the single backend .cs file, the four slice types (State Change/View/Automation/Translation), and how features compose slices. Use when asking how vertical slices work, where to put files, how to organize a feature folder, which slice type to choose, or how the workflow from C# to TypeScript proxies to React looks. For actively building a new slice end-to-end right now, use new-vertical-slice instead.
+---
+
+## Core principle
+
+A vertical slice contains **everything for a single behavior**: the command or query, the events it produces, the projections that build read models, the React component, and the specs. Everything lives together because everything changes together.
+
+One feature folder → many slices.
+One slice folder → one `.cs` file (all backend) + one `.tsx` file (frontend).
+
+---
+
+## Step 1 — Identify the feature and slice type
+
+First, name the feature (a domain noun, pluralized) and identify the slice type:
+
+| Slice type | What it does | Key artifacts |
+| --- | --- | --- |
+| **State Change** | Mutates system state | Command + events + validators/constraints |
+| **State View** | Projects events into queryable data | Read model + projection + queries |
+| **Automation** | Reacts to events, makes decisions | Reactor + optional local read models |
+| **Translation** | Adapts events between slices | Reactor → triggers commands in own slice |
+
+---
+
+## Step 2 — Create the folder structure
+
+```
+Features/
+└── <FeatureName>/                 ← feature root (pluralized domain noun)
+    ├── <FeatureName>.tsx          ← composition page
+    ├── <Concept>.cs               ← shared ConceptAs<T> types for the feature
+    └── <SliceName>/               ← slice (action or view name)
+        ├── <SliceName>.cs         ← ALL backend artifacts in ONE file
+        ├── <Component>.tsx        ← React component
+        └── when_<action>/         ← integration specs (state-change slices)
+            └── and_<condition>.cs
+```
+
+✅ Correct:
+```
+Features/Authors/
+├── Authors.tsx
+├── AuthorId.cs
+├── AuthorName.cs
+├── Registration/
+│   ├── Registration.cs    ← command + event + constraint + validator
+│   ├── AddAuthor.tsx
+│   └── when_registering/
+│       └── and_there_are_no_authors.cs
+└── Listing/
+    ├── Listing.cs         ← read model + projection + query
+    └── Listing.tsx
+```
+
+❌ Wrong — never split by artifact type:
+```
+Features/Authors/
+├── Commands/RegisterAuthor.cs
+├── Handlers/RegisterAuthorHandler.cs
+├── Events/AuthorRegistered.cs
+```
+
+**Namespace rule**: Drop the `.Features.` segment.
+`MyApp.Authors.Registration` — not `MyApp.Features.Authors.Registration`.
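+
+As a concrete illustration of the namespace rule, in file-scoped form:
+
+```csharp
+// Features/Authors/Registration/Registration.cs
+namespace MyApp.Authors.Registration;              // ✓ — '.Features.' segment dropped
+
+// namespace MyApp.Features.Authors.Registration;  // ✗ — wrong, includes '.Features.'
+```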
+
+---
+
+## Step 3 — Write the backend slice file
+
+All backend artifacts for one slice go in a single `.cs` file. File header:
+
+```csharp
+// Copyright (c) Cratis. All rights reserved.
+// Licensed under the MIT license. See LICENSE file in the project root for full license information.
+```
+
+For a **State Change** slice, the file contains:
+```csharp
+[EventType]
+public record AuthorRegistered(AuthorName Name);
+
+public class UniqueAuthorName : IConstraint { ... }
+
+public class RegisterAuthorValidator : CommandValidator<RegisterAuthor> { ... }
+
+[Command]
+public record RegisterAuthor(AuthorName Name)
+{
+    public (AuthorId, AuthorRegistered) Handle()
+    {
+        var authorId = AuthorId.New();
+        return (authorId, new(Name));
+    }
+}
+```
+
+For a **State View** slice, the file contains:
+```csharp
+[ReadModel]
+[FromEvent<AuthorRegistered>]
+public record Author(
+    [Key] AuthorId Id,
+    AuthorName Name)
+{
+    public static ISubject<IEnumerable<Author>> AllAuthors(IMongoCollection<Author> collection) =>
+        collection.Observe();
+}
+```
+
+See `references/slice-anatomy.md` for all artifact patterns.
+
+---
+
+## Step 4 — Define domain concepts
+
+For every identity or domain value, create a `ConceptAs<T>` type — one file per concept, in the feature folder (or `Features/` root if shared across features).
+
+```csharp
+public record AuthorId(Guid Value) : ConceptAs<Guid>(Value)
+{
+    public static readonly AuthorId NotSet = new(Guid.Empty);
+    public static implicit operator Guid(AuthorId id) => id.Value;
+    public static implicit operator AuthorId(Guid value) => new(value);
+    public static implicit operator EventSourceId(AuthorId id) => new(id.Value.ToString());
+    public static AuthorId New() => new(Guid.NewGuid());
+}
+```
+
+See `references/concepts.md` for all concept patterns.
+
+---
+
+## Step 5 — Build to generate TypeScript proxies
+
+```bash
+dotnet build
+```
+
+This generates `.ts` proxy files in the configured proxies output folder. The frontend cannot be written until this succeeds — the proxies are the contract.
+
+---
+
+## Step 6 — Write the React component
+
+```tsx
+// Listing.tsx
+import { AllAuthors } from '../proxies/Listing';   // auto-generated
+
+export const Listing = () => {
+    const [result] = AllAuthors.use();
+    return (
+        <ul>
+            {result.data.map(author => <li key={author.id}>{author.name}</li>)}
+        </ul>
+    );
+};
+```
+
+---
+
+## Step 7 — Compose the feature page
+
+The feature's `<FeatureName>.tsx` assembles slices into a page:
+
+```tsx
+// Authors.tsx
+import { AddAuthor } from './Registration/AddAuthor';
+import { Listing } from './Listing/Listing';
+import { useDialog } from '@cratis/arc.react/dialogs';
+
+export const Authors = () => {
+    const [AddAuthorDialog, showAddAuthorDialog] = useDialog(AddAuthor);
+    const menuItems = [{ label: 'Add Author', command: () => showAddAuthorDialog() }];
+    return (
+        <>
+            {/* menuItems feeds the page-level menu — see the cratis-react-page skill */}
+            <AddAuthorDialog />
+            <Listing />
+        </>
+    );
+};
+```
+
+---
+
+## Development workflow order
+
+Work in this exact sequence — TypeScript proxies are generated from C# during `dotnet build`:
+
+1. Implement the C# slice file (step 3)
+2. Write integration specs for state-change slices
3. `dotnet build` — generates TypeScript proxies (step 5)
+4. Implement React component(s) (step 6)
+5. Register in the feature composition page (step 7)
+6. Add/update routes if needed
+
+---
+
+## Reference files
+
+- `references/slice-anatomy.md` — complete patterns for every artifact type
+- `references/slice-types.md` — when to use each slice type with decision guide
+- `references/concepts.md` — ConceptAs<T> patterns for all primitive backing types
diff --git a/.github/skills/cratis-vertical-slice/evals/evals.json b/.github/skills/cratis-vertical-slice/evals/evals.json
new file mode 100644
index 0000000..aadc3de
--- /dev/null
+++ b/.github/skills/cratis-vertical-slice/evals/evals.json
@@ -0,0 +1,34 @@
+{
+  "skill_name": "cratis-vertical-slice",
+  "evals": [
+    {
+      "id": 1,
+      "prompt": "I'm building an invoicing feature for my app (namespace: MyBillingApp). The feature needs: (1) creating an invoice, (2) listing all invoices, (3) marking an invoice as paid. Show me the complete folder structure and the content of each .cs and .tsx file, following vertical slice architecture.",
+      "expected_output": "Features/Invoices/ folder with: Invoices.tsx composition page, InvoiceId.cs and InvoiceNumber.cs concepts, and three slice subdirs: Creation/Creation.cs, Listing/Listing.cs+Listing.tsx, MarkingAsPaid/MarkingAsPaid.cs. All .cs files have all backend artifacts in one file. No Commands/ Events/ Handlers/ folders.",
+      "files": [],
+      "assertions": [
+        "Output shows a Features/Invoices/ root folder (pluralized feature name)",
+        "Output contains concept files at the feature root (InvoiceId.cs etc.)",
+        "Each slice has a subfolder with a single .cs file containing ALL backend artifacts",
+        "No Commands/, Events/, Handlers/, or Projections/ subfolders within the feature",
+        "Namespace does NOT include '.Features.' segment (e.g. MyBillingApp.Invoices.Creation)",
+        "All .cs files start with the Cratis copyright header",
+        "State change slices contain command + event in the same .cs file",
+        "State view slice contains read model + projection + query in the same .cs file"
+      ]
+    },
+    {
+      "id": 2,
+      "prompt": "Explain what slice type I should use and how to structure the code for this scenario: when a new order is placed (OrderPlaced event), I need to automatically send a confirmation email via our IEmailService. This is separate from the order creation command itself.",
+      "expected_output": "Identifies this as an Automation slice (IReactor). Shows IReactor class dispatching on OrderPlaced event, using IEmailService via constructor injection. Places it inside the Orders feature folder in its own slice subfolder.",
+      "files": [],
+      "assertions": [
+        "Correctly identifies the slice type as Automation (IReactor), not Translation",
+        "Output shows IReactor implementation with method dispatching on the event type",
+        "IEmailService injected via constructor (not service locator)",
+        "Slice is placed inside the feature folder (e.g. Orders/EmailConfirmation/)",
+        "All artifacts in a single .cs file"
+      ]
+    }
+  ]
+}
diff --git a/.github/skills/cratis-vertical-slice/references/concepts.md b/.github/skills/cratis-vertical-slice/references/concepts.md
new file mode 100644
index 0000000..0dcd399
--- /dev/null
+++ b/.github/skills/cratis-vertical-slice/references/concepts.md
@@ -0,0 +1,102 @@
+# Concepts — Reference
+
+## What is a Concept?
+
+A `ConceptAs<T>` wraps a primitive (`Guid`, `string`, `int`, etc.) in a named domain type. The compiler enforces that you cannot pass a `UserId` where an `AuthorId` was expected — both are `Guid` underneath, but they are distinct types.
+
+**Never use raw primitives in domain models, commands, events, or queries.**
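+
+A two-line illustration of that compile-time guarantee (`DeleteAuthor` is a hypothetical method):
+
+```csharp
+public record AuthorId(Guid Value) : ConceptAs<Guid>(Value);
+public record UserId(Guid Value) : ConceptAs<Guid>(Value);
+
+void DeleteAuthor(AuthorId id) { /* ... */ }   // hypothetical
+
+// DeleteAuthor(new UserId(Guid.NewGuid()));   // ❌ does not compile — UserId is not an AuthorId
+```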
+
+---
+
+## Full canonical pattern — Guid identity
+
+```csharp
+public record AuthorId(Guid Value) : ConceptAs<Guid>(Value)
+{
+    public static readonly AuthorId NotSet = new(Guid.Empty);
+
+    public static implicit operator Guid(AuthorId id) => id.Value;
+    public static implicit operator AuthorId(Guid value) => new(value);
+    public static implicit operator EventSourceId(AuthorId id) => new(id.Value.ToString());
+
+    public static AuthorId New() => new(Guid.NewGuid());
+}
+```
+
+Include the `EventSourceId` implicit conversion on every **identity** concept that is used as a Chronicle event source key.
+
+---
+
+## String value concept
+
+```csharp
+public record AuthorName(string Value) : ConceptAs<string>(Value)
+{
+    public static readonly AuthorName NotSet = new(string.Empty);
+
+    public static implicit operator string(AuthorName name) => name.Value;
+    public static implicit operator AuthorName(string value) => new(value);
+}
+```
+
+---
+
+## Integer value concept
+
+```csharp
+public record PageNumber(int Value) : ConceptAs<int>(Value)
+{
+    public static readonly PageNumber NotSet = new(0);
+
+    public static implicit operator int(PageNumber p) => p.Value;
+    public static implicit operator PageNumber(int value) => new(value);
+}
+```
+
+---
+
+## Rules
+
+| Rule | Detail |
+| --- | --- |
+| Inherit as `record` | Gives value equality and immutability for free |
+| `ConceptAs<T>` provides `T → Concept` implicitly | You only need to add the `Concept → T` direction |
+| Always add `NotSet` sentinel | Use `Guid.Empty`, `string.Empty`, or `0` — no `null` |
+| Add `New()` on Guid identity types | Reads better than `new AuthorId(Guid.NewGuid())` |
+| Add `EventSourceId` conversion on identity keys | Enables Chronicle to auto-resolve the event source |
+| One concept per file | Named after the concept, e.g. `AuthorId.cs` |
+
+---
+
+## File placement
+
+| Scope | Location |
+| --- | --- |
+| Used only within one slice | Inside the slice folder |
+| Shared between slices of a feature | Feature root folder (`Features/Authors/AuthorId.cs`) |
+| Shared between features | `Features/` root (`Features/TenantId.cs`) |
+
+Never create a standalone `Concepts/` folder — concepts belong near the code that uses them.
+
+---
+
+## In commands and events
+
+```csharp
+// Event uses concepts
+[EventType]
+public record AuthorRegistered(AuthorName Name);
+
+// Command uses concepts
+[Command]
+public record RegisterAuthor(AuthorName Name)
+{
+    public (AuthorId, AuthorRegistered) Handle() =>
+        (AuthorId.New(), new(Name));
+}
+
+// Read model uses concepts
+[ReadModel]
+[FromEvent<AuthorRegistered>]
+public record Author([Key] AuthorId Id, AuthorName Name);
+```
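+
+One way the `NotSet` sentinel tends to be used — a hedged sketch; `EnsureAuthorSelected` is an invented helper, not part of Cratis:
+
+```csharp
+// Guard against unset identities without resorting to null checks
+public static void EnsureAuthorSelected(AuthorId id)
+{
+    if (id == AuthorId.NotSet)
+    {
+        throw new ArgumentException("An author must be selected", nameof(id));
+    }
+}
+```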
diff --git a/.github/skills/cratis-vertical-slice/references/slice-anatomy.md b/.github/skills/cratis-vertical-slice/references/slice-anatomy.md
new file mode 100644
index 0000000..dc0f647
--- /dev/null
+++ b/.github/skills/cratis-vertical-slice/references/slice-anatomy.md
@@ -0,0 +1,255 @@
+# Slice Anatomy — Reference
+
+All backend artifacts for a slice go in a single `.cs` file. Below are complete patterns for every artifact type.
+
+---
+
+## File header
+
+Every `.cs` file starts with:
+
+```csharp
+// Copyright (c) Cratis. All rights reserved.
+// Licensed under the MIT license. See LICENSE file in the project root for full license information.
+```
+
+File-scoped namespace (no extra indentation):
+
+```csharp
+namespace MyApp.Authors.Registration;
+```
+
+---
+
+## Events
+
+```csharp
+[EventType]
+public record AuthorRegistered(AuthorName Name);
+```
+
+Rules:
+- `[EventType]` takes **no arguments** — the type name is the identifier
+- Past tense: `ItemAddedToCart`, `UserRegistered`, `AddressChangedForPerson`
+- No nullable properties — ambiguous events need a second event
+- One purpose per event
+
+---
+
+## Commands (model-bound)
+
+```csharp
+[Command]
+public record RegisterAuthor(AuthorName Name)
+{
+    public (AuthorId, AuthorRegistered) Handle()
+    {
+        var authorId = AuthorId.New();
+        return (authorId, new(Name));
+    }
+}
+```
+
+`Handle()` return types:
+
+| Return | When |
+| --- | --- |
+| `TEvent Handle()` | Single event, no client result needed |
+| `(TResult, TEvent) Handle()` | Return a value to the client + append one event |
+| `Result Handle()` | Business rule success/failure path |
+| `void Handle()` | Side-effect only, no event |
+
+Event source resolution (in priority order):
+1. Parameter marked with `[Key]`
+2. Parameter whose type has implicit conversion to `EventSourceId`
+3. Implement `ICanProvideEventSourceId`
+
+DI in `Handle()`: extra parameters beyond command + read model are resolved from DI automatically.
+
+---
+
+## Business rules via DCB (Dynamic Consistency Boundary)
+
+Accept a read model parameter in `Handle()` — the framework injects the current projected state:
+
+```csharp
+[Command]
+public record ReserveBook(ISBN Isbn, MemberId MemberId)
+{
+    public Result Handle(Book book)
+    {
+        if (book.Available <= 0)
+            return ValidationResult.Error($"No available copies for {Isbn}");
+        return new BookReserved(Isbn, MemberId);
+    }
+}
+```
+
+---
+
+## Validators
+
+```csharp
+public class RegisterAuthorValidator : CommandValidator<RegisterAuthor>
+{
+    public RegisterAuthorValidator()
+    {
+        RuleFor(c => c.Name).NotEmpty().WithMessage("Author name is required");
+    }
+}
+```
+
+With DI dependencies:
+
+```csharp
+public class MyValidator : CommandValidator<MyCommand>
+{
+    public MyValidator(IMyService service)
+    {
+        RuleFor(x => x).MustAsync(async (cmd, ct) => await service.IsValid(cmd));
+    }
+}
+```
+
+---
+
+## Constraints
+
+Unique property across events:
+
+```csharp
+public class UniqueAuthorName : IConstraint
+{
+    public void Define(IConstraintBuilder builder) => builder
+        .Unique(_ => _
+            .On<AuthorRegistered>(e => e.Name)
+            .On<AuthorNameChanged>(e => e.Name)
+            .RemovedWith<AuthorRemoved>()
+            .WithMessage("Author name must be unique"));
+}
+```
+
+Unique event type per event source (one-per-stream):
+
+```csharp
+public class UniqueUser : IConstraint
+{
+    public void Define(IConstraintBuilder builder) =>
+        builder.Unique<UserOnboarded>();
+}
+```
+
+---
+
+## Read models (model-bound, preferred)
+
+```csharp
+[ReadModel]
+[FromEvent<AuthorRegistered>]
+public record Author(
+    [Key] AuthorId Id,
+    AuthorName Name)
+{
+    public static ISubject<IEnumerable<Author>> AllAuthors(IMongoCollection<Author> collection) =>
+        collection.Observe();
+
+    public static Author? ById(IMongoCollection<Author> collection, AuthorId id) =>
+        collection.Find(a => a.Id == id).FirstOrDefault();
+}
+```
+
+Model-bound projection attributes:
+
+| Attribute | Purpose |
+| --- | --- |
+| `[Key]` | Read model primary key |
+| `[FromEvent<T>]` | Auto-map from event (class-level) |
+| `[SetFrom]` | Explicit property mapping |
+| `[AddFrom]` / `[SubtractFrom]` | Arithmetic |
+| `[Increment]` / `[Decrement]` | ±1 counters |
+| `[Count]` | Absolute count |
+| `[ChildrenFrom]` | Child collection from event |
+| `[Join]` | Join from another event stream |
+| `[RemovedWith<T>]` | Remove entry when event occurs |
+| `[Passive]` | On-demand only, not actively observed |
+
+---
+
+## Fluent projections (for complex cases)
+
+```csharp
+public class BorrowedBooksProjection : IProjectionFor<BorrowedBook>
+{
+    public void Define(IProjectionBuilderFor<BorrowedBook> builder) => builder
+        .From<BookBorrowed>(from => from
+            .Set(m => m.UserId).To(e => e.UserId)
+            .Set(m => m.Borrowed).ToEventContextProperty(c => c.Occurred))
+        .Join<BookRegistered>(j => j
+            .On(m => m.Id)
+            .Set(m => m.Title).To(e => e.Title))
+        .RemovedWith<BookReturned>();
+}
+```
+
+AutoMap is on by default — just call `.From<>()` directly. Use `.NoAutoMap()` then explicit `.Set()` calls when you need selective mapping.
+
+**Projections join EVENTS, never read models.**
+
+---
+
+## Reducers
+
+```csharp
+public class AccountBalanceReducer : IReducerFor<AccountBalance>
+{
+    public AccountBalance OnDepositMade(DepositMade @event, AccountBalance? current, EventContext context)
+    {
+        var balance = current?.Balance ?? 0m;
+        return new AccountBalance(balance + @event.Amount, context.Occurred);
+    }
+}
+```
+
+- `current` is `null` for the first event — always handle initialization
+- Keep reducers pure — no side effects, no I/O
+- Use `with` expressions on records for state updates
+
+---
+
+## Reactors
+
+```csharp
+public class StockKeeping(ICommandPipeline commandPipeline) : IReactor
+{
+    public async Task HandleBookReserved(BookReserved @event) =>
+        await commandPipeline.Execute(new DecreaseStock(@event.Isbn));
+}
+```
+
+- `IReactor` is a marker interface — method dispatch by first-parameter event type
+- Method name is descriptive; `EventContext` parameter is optional
+- `[OnceOnly]` — skips method during event replay
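+
+Combining the optional `EventContext` parameter with `[OnceOnly]` — `NotificationSender`/`INotifier` are illustrative names, not Cratis types:
+
+```csharp
+public class NotificationSender(INotifier notifier) : IReactor
+{
+    [OnceOnly]   // do not re-send notifications when events are replayed
+    public async Task OnBookReserved(BookReserved @event, EventContext context) =>
+        await notifier.Notify($"Book {@event.Isbn} reserved at {context.Occurred}");
+}
+```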
+
+---
+
+## Integration specs
+
+Live under `when_<action>/` inside the slice folder:
+
+```csharp
+namespace MyApp.Authors.Registration.when_registering;
+
+[Collection(ChronicleCollection.Name)]
+public class and_there_are_no_authors(context context) : Given<context>(context)
+{
+    public class context(ChronicleOutOfProcessFixture fixture) : given.an_http_client(fixture)
+    {
+        public CommandResult? Result;
+        async Task Because() =>
+            Result = await Client.ExecuteCommand(
+                "/api/authors/register", new RegisterAuthor("John Doe"));
+    }
+
+    [Fact] void should_be_successful() => Context.Result!.IsSuccess.ShouldBeTrue();
+}
+```
diff --git a/.github/skills/cratis-vertical-slice/references/slice-types.md b/.github/skills/cratis-vertical-slice/references/slice-types.md
new file mode 100644
index 0000000..5e86cea
--- /dev/null
+++ b/.github/skills/cratis-vertical-slice/references/slice-types.md
@@ -0,0 +1,105 @@
+# Slice Types — Reference
+
+## The four slice types
+
+| Type | When to use | Key artifacts |
+| --- | --- | --- |
+| **State Change** | User action that mutates system state | `[Command]` + `[EventType]` + optional validator/constraint |
+| **State View** | Projecting events into queryable read models | `[ReadModel]` + projection/reducer + query methods |
+| **Automation** | Reacting to events to make decisions or call external APIs | `IReactor` + optional local read models |
+| **Translation** | Adapting events between slices or bounded contexts | `IReactor` → `ICommandPipeline.Execute()` |
+
+Most features are built from a **State Change slice** paired with a **State View slice**.
+
+---
+
+## State Change slice
+
+**When**: An action happens (user submits a form, a timer fires, an API is called) that changes state.
+
+**Contains**:
+- `[EventType]` records — the facts that occurred
+- `[Command]` record with `Handle()` — validates intent and produces events
+- `CommandValidator<TCommand>` — input validation (FluentValidation, exported to TypeScript)
+- `IConstraint` — server-side business rules enforced at event-append time
+
+**Example**: `Authors/Registration/Registration.cs`
+
+```
+Registration.cs
+├── AuthorRegistered (event)
+├── UniqueAuthorName (constraint)
+├── RegisterAuthorValidator (validator)
+└── RegisterAuthor (command with Handle())
+```
+
+---
+
+## State View slice
+
+**When**: Data needs to be queried and displayed. Projects the event stream into a read model.
+
+**Prefer model-bound** (`[ReadModel]` + attribute-based projection) over fluent `IProjectionFor<T>` unless the mapping is too complex for attributes.
+
+**Contains**:
+- `[ReadModel]` record — the query-optimized data shape
+- Projection attributes (`[FromEvent<T>]`, `[SetFrom]`, etc.) or `IProjectionFor<T>`
+- Static query methods on the record (DI parameters auto-resolved)
+
+**Example**: `Authors/Listing/Listing.cs`
+
+```
+Listing.cs
+├── Author (read model record)
+├── [FromEvent<AuthorRegistered>] (projection via attribute)
+└── AllAuthors() static query method
+```
+
+---
+
+## Automation slice
+
+**When**: An event should trigger a side effect automatically — sending an email, calling an external API, making a decision.
+
+**Contains**:
+- `IReactor` implementation — dispatches on event type by first parameter
+- Optional local read models for decision state
+
+```csharp
+public class WelcomeEmailSender(IEmailService email) : IReactor
+{
+    public async Task OnAuthorRegistered(AuthorRegistered @event, EventContext ctx) =>
+        await email.SendWelcome(@event.Name);
+}
+```
+
+---
+
+## Translation slice
+
+**When**: An event from one slice should trigger a command in another slice (event-driven integration). Keeps slices decoupled — neither knows about the other directly.
+ +**Contains**: +- `IReactor` that listens to source events +- `ICommandPipeline.Execute()` to trigger a command in another slice + +```csharp +public class StockKeeping(ICommandPipeline commandPipeline) : IReactor +{ + public async Task HandleBookReserved(BookReserved @event) => + await commandPipeline.Execute(new DecreaseStock(@event.Isbn)); +} +``` + +--- + +## Decision guide + +``` +User action → State Change slice +Data display → State View slice +Automatic side effect → Automation slice +Cross-slice event reaction → Translation slice +``` + +A typical feature has at least one State Change + one State View. They are completely separate `.cs` files in separate sub-folders. They share events through the feature's namespace — the State View projection references the event type defined in the State Change file. diff --git a/.github/skills/skill-creator/LICENSE.txt b/.github/skills/skill-creator/LICENSE.txt new file mode 100644 index 0000000..7a4a3ea --- /dev/null +++ b/.github/skills/skill-creator/LICENSE.txt @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. \ No newline at end of file diff --git a/.github/skills/skill-creator/SKILL.md b/.github/skills/skill-creator/SKILL.md new file mode 100644 index 0000000..942bfe8 --- /dev/null +++ b/.github/skills/skill-creator/SKILL.md @@ -0,0 +1,479 @@ +--- +name: skill-creator +description: Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, update or optimize an existing skill, run evals to test a skill, benchmark skill performance with variance analysis, or optimize a skill's description for better triggering accuracy. +--- + +# Skill Creator + +A skill for creating new skills and iteratively improving them. 
+ +At a high level, the process of creating a skill goes like this: + +- Decide what you want the skill to do and roughly how it should do it +- Write a draft of the skill +- Create a few test prompts and run claude-with-access-to-the-skill on them +- Help the user evaluate the results both qualitatively and quantitatively + - While the runs happen in the background, draft some quantitative evals if there aren't any (if there are some, you can either use as is or modify if you feel something needs to change about them). Then explain them to the user (or if they already existed, explain the ones that already exist) + - Use the `eval-viewer/generate_review.py` script to show the user the results for them to look at, and also let them look at the quantitative metrics +- Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks) +- Repeat until you're satisfied +- Expand the test set and try again at larger scale + +Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat. + +On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop. + +Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead. + +Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill. + +Cool? Cool. + +## Communicating with the user + +The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you, it's only very recently that it started), there's a trend now where the power of Claude is inspiring plumbers to open up their terminals, parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate. + +So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea: + +- "evaluation" and "benchmark" are borderline, but OK +- for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them + +It's OK to briefly explain terms if you're in doubt, and feel free to clarify terms with a short definition if you're unsure if the user will get it. + +--- + +## Creating a skill + +### Capture Intent + +Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. The user may need to fill the gaps, and should confirm before proceeding to the next step. + +1. What should this skill enable Claude to do? +2. When should this skill trigger? (what user phrases/contexts) +3. What's the expected output format? +4. 
Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide. + +### Interview and Research + +Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out. + +Check available MCPs - if useful for research (searching docs, finding similar skills, looking up best practices), research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce burden on the user. + +### Write the SKILL.md + +Based on the user interview, fill in these components: + +- **name**: Skill identifier +- **description**: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'" +- **compatibility**: Required tools, dependencies (optional, rarely needed) +- **the rest of the skill :)** + +### Skill Writing Guide + +#### Anatomy of a Skill + +``` +skill-name/ +├── SKILL.md (required) +│ ├── YAML frontmatter (name, description required) +│ └── Markdown instructions +└── Bundled Resources (optional) + ├── scripts/ - Executable code for deterministic/repetitive tasks + ├── references/ - Docs loaded into context as needed + └── assets/ - Files used in output (templates, icons, fonts) +``` + +#### Progressive Disclosure + +Skills use a three-level loading system: +1. **Metadata** (name + description) - Always in context (~100 words) +2. **SKILL.md body** - In context whenever skill triggers (<500 lines ideal) +3. **Bundled resources** - As needed (unlimited, scripts can execute without loading) + +These word counts are approximate and you can feel free to go longer if needed. + +**Key patterns:** +- Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up. +- Reference files clearly from SKILL.md with guidance on when to read them +- For large reference files (>300 lines), include a table of contents + +**Domain organization**: When a skill supports multiple domains/frameworks, organize by variant: +``` +cloud-deploy/ +├── SKILL.md (workflow + selection) +└── references/ + ├── aws.md + ├── gcp.md + └── azure.md +``` +Claude reads only the relevant reference file. + +#### Principle of Lack of Surprise + +This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. A skill's contents should not surprise the user in their intent if described. 
Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like a "roleplay as an XYZ" are OK though. + +#### Writing Patterns + +Prefer using the imperative form in instructions. + +**Defining output formats** - You can do it like this: +```markdown +## Report structure +ALWAYS use this exact template: +# [Title] +## Executive summary +## Key findings +## Recommendations +``` + +**Examples pattern** - It's useful to include examples. You can format them like this (but if "Input" and "Output" are in the examples you might want to deviate a little): +```markdown +## Commit message format +**Example 1:** +Input: Added user authentication with JWT tokens +Output: feat(auth): implement JWT-based authentication +``` + +### Writing Style + +Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it. + +### Test Cases + +After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them. + +Save test cases to `evals/evals.json`. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress. + +```json +{ + "skill_name": "example-skill", + "evals": [ + { + "id": 1, + "prompt": "User's task prompt", + "expected_output": "Description of expected result", + "files": [] + } + ] +} +``` + +See `references/schemas.md` for the full schema (including the `assertions` field, which you'll add later). + +## Running and evaluating test cases + +This section is one continuous sequence — don't stop partway through. Do NOT use `/skill-test` or any other testing skill. + +Put results in `<skill-name>-workspace/` as a sibling to the skill directory. Within the workspace, organize results by iteration (`iteration-1/`, `iteration-2/`, etc.) and within that, each test case gets a directory (`eval-0/`, `eval-1/`, etc.). Don't create all of this upfront — just create directories as you go. + +### Step 1: Spawn all runs (with-skill AND baseline) in the same turn + +For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time. + +**With-skill run:** + +``` +Execute this task: +- Skill path: <path to the skill directory> +- Task: <the test prompt> +- Input files: <paths, if any> +- Save outputs to: <workspace>/iteration-<N>/eval-<M>/with_skill/outputs/ +- Outputs to save: <the files the subagent should produce> +``` + +**Baseline run** (same prompt, but the baseline depends on context): +- **Creating a new skill**: no skill at all. Same prompt, no skill path, save to `without_skill/outputs/`. +- **Improving an existing skill**: the old version. Before editing, snapshot the skill (`cp -r <skill-path> <workspace>/skill-snapshot/`), then point the baseline subagent at the snapshot. Save to `old_skill/outputs/`. + +Write an `eval_metadata.json` for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too.
If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations. + +```json +{ + "eval_id": 0, + "eval_name": "descriptive-name-here", + "prompt": "The user's task prompt", + "assertions": [] +} +``` + +### Step 2: While runs are in progress, draft assertions + +Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in `evals/evals.json`, review them and explain what they check. + +Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment. + +Update the `eval_metadata.json` files and `evals/evals.json` with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark. + +### Step 3: As runs complete, capture timing data + +When each subagent task completes, you receive a notification containing `total_tokens` and `duration_ms`. Save this data immediately to `timing.json` in the run directory: + +```json +{ + "total_tokens": 84852, + "duration_ms": 23332, + "total_duration_seconds": 23.3 +} +``` + +This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them. + +### Step 4: Grade, aggregate, and launch the viewer + +Once all runs are done: + +1. **Grade each run** — spawn a grader subagent (or grade inline) that reads `agents/grader.md` and evaluates each assertion against the outputs. Save results to `grading.json` in each run directory. The grading.json expectations array must use the fields `text`, `passed`, and `evidence` (not `name`/`met`/`details` or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations. + +2. **Aggregate into benchmark** — run the aggregation script from the skill-creator directory: + ```bash + python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <skill-name> + ``` + This produces `benchmark.json` and `benchmark.md` with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see `references/schemas.md` for the exact schema the viewer expects. +Put each with_skill version before its baseline counterpart. + +3. **Do an analyst pass** — read the benchmark data and surface patterns the aggregate stats might hide. See `agents/analyzer.md` (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs. + +4. **Launch the viewer** with both qualitative outputs and quantitative data: + ```bash + nohup python <skill-creator-path>/eval-viewer/generate_review.py \ + <workspace>/iteration-N \ + --skill-name "my-skill" \ + --benchmark <workspace>/iteration-N/benchmark.json \ + > /dev/null 2>&1 & + VIEWER_PID=$! + ``` + For iteration 2+, also pass `--previous-workspace <workspace>/iteration-<N-1>`.
+ + **Cowork / headless environments:** If `webbrowser.open()` is not available or the environment has no display, use `--static <output.html>` to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a `feedback.json` file when the user clicks "Submit All Reviews". After download, copy `feedback.json` into the workspace directory for the next iteration to pick up. + +Note: please use generate_review.py to create the viewer; there's no need to write custom HTML. + +5. **Tell the user** something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know." + +### What the user sees in the viewer + +The "Outputs" tab shows one test case at a time: +- **Prompt**: the task that was given +- **Output**: the files the skill produced, rendered inline where possible +- **Previous Output** (iteration 2+): collapsed section showing last iteration's output +- **Formal Grades** (if grading was run): collapsed section showing assertion pass/fail +- **Feedback**: a textbox that auto-saves as they type +- **Previous Feedback** (iteration 2+): their comments from last time, shown below the textbox + +The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations. + +Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to `feedback.json`. + +### Step 5: Read the feedback + +When the user tells you they're done, read `feedback.json`: + +```json +{ + "reviews": [ + {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."}, + {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."}, + {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."} + ], + "status": "complete" +} +``` + +Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints. + +Kill the viewer server when you're done with it: + +```bash +kill $VIEWER_PID 2>/dev/null +``` + +--- + +## Improving the skill + +This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback. + +### How to think about improvements + +1. **Generalize from the feedback.** The big picture thing that's happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more who knows) across many different prompts. Here you and the user are iterating on only a few examples over and over again because it helps move faster. The user knows these examples in and out and it's quick for them to assess new outputs. But if the skill you and the user are codeveloping works only for those examples, it's useless. Rather than put in fiddly overfitty changes, or oppressively constrictive MUSTs, if there's some stubborn issue, you might try branching out and using different metaphors, or recommending different patterns of working. It's relatively cheap to try and maybe you'll land on something great. + +2. **Keep the prompt lean.** Remove things that aren't pulling their weight.
Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens. + +3. **Explain the why.** Try hard to explain the **why** behind everything you're asking the model to do. Today's LLMs are *smart*. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, what the user actually wrote, and why they wrote it, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach. + +4. **Look for repeated work across test cases.** Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a `create_docx.py` or a `build_chart.py`, that's a strong signal the skill should bundle that script. Write it once, put it in `scripts/`, and tell the skill to use it. This saves every future invocation from reinventing the wheel. + +This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need. + +### The iteration loop + +After improving the skill: + +1. Apply your improvements to the skill +2. Rerun all test cases into a new `iteration-<N+1>/` directory, including baseline runs. If you're creating a new skill, the baseline is always `without_skill` (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration. +3. Launch the reviewer with `--previous-workspace` pointing at the previous iteration +4. Wait for the user to review and tell you they're done +5. Read the new feedback, improve again, repeat + +Keep going until: +- The user says they're happy +- The feedback is all empty (everything looks good) +- You're not making meaningful progress + +--- + +## Advanced: Blind comparison + +For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read `agents/comparator.md` and `agents/analyzer.md` for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won. + +This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient. + +--- + +## Description Optimization + +The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill.
After creating or improving a skill, offer to optimize the description for better triggering accuracy. + +### Step 1: Generate trigger eval queries + +Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON: + +```json +[ + {"query": "the user prompt", "should_trigger": true}, + {"query": "another prompt", "should_trigger": false} +] +``` + +The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them). + +Bad: `"Format this data"`, `"Extract text from PDF"`, `"Create a chart"` + +Good: `"ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"` + +For the **should-trigger** queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win. + +For the **should-not-trigger** queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate. + +The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky. + +### Step 2: Review with user + +Present the eval set to the user for review using the HTML template: + +1. Read the template from `assets/eval_review.html` +2. Replace the placeholders: + - `__EVAL_DATA_PLACEHOLDER__` → the JSON array of eval items (no quotes around it — it's a JS variable assignment) + - `__SKILL_NAME_PLACEHOLDER__` → the skill's name + - `__SKILL_DESCRIPTION_PLACEHOLDER__` → the skill's current description +3. Write to a temp file (e.g., `/tmp/eval_review_<skill-name>.html`) and open it: `open /tmp/eval_review_<skill-name>.html` +4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set" +5. The file downloads to `~/Downloads/eval_set.json` — check the Downloads folder for the most recent version in case there are multiple (e.g., `eval_set (1).json`) + +This step matters — bad eval queries lead to bad descriptions. + +### Step 3: Run the optimization loop + +Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."
+ +Save the eval set to the workspace, then run in the background: + +```bash +python -m scripts.run_loop \ + --eval-set <path to eval_set.json> \ + --skill-path <path to the skill> \ + --model <model-id> \ + --max-iterations 5 \ + --verbose +``` + +Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences. + +While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like. + +This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude with extended thinking to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with `best_description` — selected by test score rather than train score to avoid overfitting. + +### How skill triggering works + +Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's `available_skills` list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own — simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches. + +This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality. + +### Step 4: Apply the result + +Take `best_description` from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores. + +--- + +### Package and Present (only if `present_files` tool is available) + +Check whether you have access to the `present_files` tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user: + +```bash +python -m scripts.package_skill +``` + +After packaging, direct the user to the resulting `.skill` file path so they can install it. + +--- + +## Claude.ai-specific instructions + +In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt: + +**Running test cases**: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested. + +**Reviewing results**: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output.
If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?" + +**Benchmarking**: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user. + +**The iteration loop**: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one. + +**Description optimization**: This section requires the `claude` CLI tool (specifically `claude -p`) which is only available in Claude Code. Skip it if you're on Claude.ai. + +**Blind comparison**: Requires subagents. Skip it. + +**Packaging**: The `package_skill.py` script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting `.skill` file. + +--- + +## Cowork-Specific Instructions + +If you're in Cowork, the main things to know are: + +- You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.) +- You don't have a browser or display, so when generating the eval viewer, use `--static <output.html>` to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser. +- For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so just to reiterate: whether you're in Cowork or in Claude Code, after running tests, you should always generate the eval viewer for the human to look at examples before revising the skill yourself and trying to make corrections, using `generate_review.py` (not writing your own boutique html code). Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER *BEFORE* evaluating outputs yourself. You want to get them in front of the human ASAP! +- Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download `feedback.json` as a file. You can then read it from there (you may have to request access first). +- Packaging works — `package_skill.py` just needs Python and a filesystem. +- Description optimization (`run_loop.py` / `run_eval.py`) should work in Cowork just fine since it uses `claude -p` via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape. + +--- + +## Reference files + +The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent. + +- `agents/grader.md` — How to evaluate assertions against outputs +- `agents/comparator.md` — How to do blind A/B comparison between two outputs +- `agents/analyzer.md` — How to analyze why one version beat another + +The references/ directory has additional documentation: +- `references/schemas.md` — JSON structures for evals.json, grading.json, etc.
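+ +Related to the grading step above, which recommends checking programmatic assertions with a script rather than eyeballing them: here is a minimal sketch of what such a check might look like. The workspace path, output file, and assertion are hypothetical; adapt them per eval. It prints a result using the same `text`/`passed`/`evidence` field names the viewer expects: + +```python +# check_assertion.py: hypothetical example; adapt the output path and the check per eval. +import json +from pathlib import Path + +outputs = Path("my-skill-workspace/iteration-1/eval-0/with_skill/outputs") +report = json.loads((outputs / "report.json").read_text()) # assumed output file + +passed = "summary" in report # the assertion under test +print(json.dumps({ + "text": "Report contains a summary section", + "passed": passed, + "evidence": f"top-level keys: {sorted(report)}", +})) +```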
+ +--- + +Repeating one more time the core loop here for emphasis: + +- Figure out what the skill is about +- Draft or edit the skill +- Run claude-with-access-to-the-skill on test prompts +- With the user, evaluate the outputs: + - Create benchmark.json and run `eval-viewer/generate_review.py` to help the user review them + - Run quantitative evals +- Repeat until you and the user are satisfied +- Package the final skill and return it to the user. + +Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run `eval-viewer/generate_review.py` so human can review test cases" in your TodoList to make sure it happens. + +Good luck! diff --git a/.github/skills/skill-creator/agents/analyzer.md b/.github/skills/skill-creator/agents/analyzer.md new file mode 100644 index 0000000..14e41d6 --- /dev/null +++ b/.github/skills/skill-creator/agents/analyzer.md @@ -0,0 +1,274 @@ +# Post-hoc Analyzer Agent + +Analyze blind comparison results to understand WHY the winner won and generate improvement suggestions. + +## Role + +After the blind comparator determines a winner, the Post-hoc Analyzer "unblinds" the results by examining the skills and transcripts. The goal is to extract actionable insights: what made the winner better, and how can the loser be improved? + +## Inputs + +You receive these parameters in your prompt: + +- **winner**: "A" or "B" (from blind comparison) +- **winner_skill_path**: Path to the skill that produced the winning output +- **winner_transcript_path**: Path to the execution transcript for the winner +- **loser_skill_path**: Path to the skill that produced the losing output +- **loser_transcript_path**: Path to the execution transcript for the loser +- **comparison_result_path**: Path to the blind comparator's output JSON +- **output_path**: Where to save the analysis results + +## Process + +### Step 1: Read Comparison Result + +1. Read the blind comparator's output at comparison_result_path +2. Note the winning side (A or B), the reasoning, and any scores +3. Understand what the comparator valued in the winning output + +### Step 2: Read Both Skills + +1. Read the winner skill's SKILL.md and key referenced files +2. Read the loser skill's SKILL.md and key referenced files +3. Identify structural differences: + - Instruction clarity and specificity + - Script/tool usage patterns + - Example coverage + - Edge case handling + +### Step 3: Read Both Transcripts + +1. Read the winner's transcript +2. Read the loser's transcript +3. Compare execution patterns: + - How closely did each follow their skill's instructions? + - What tools were used differently? + - Where did the loser diverge from optimal behavior? + - Did either encounter errors or make recovery attempts? + +### Step 4: Analyze Instruction Following + +For each transcript, evaluate: +- Did the agent follow the skill's explicit instructions? +- Did the agent use the skill's provided tools/scripts? +- Were there missed opportunities to leverage skill content? +- Did the agent add unnecessary steps not in the skill? + +Score instruction following 1-10 and note specific issues. + +### Step 5: Identify Winner Strengths + +Determine what made the winner better: +- Clearer instructions that led to better behavior? +- Better scripts/tools that produced better output? +- More comprehensive examples that guided edge cases? +- Better error handling guidance? + +Be specific. Quote from skills/transcripts where relevant.
+ +### Step 6: Identify Loser Weaknesses + +Determine what held the loser back: +- Ambiguous instructions that led to suboptimal choices? +- Missing tools/scripts that forced workarounds? +- Gaps in edge case coverage? +- Poor error handling that caused failures? + +### Step 7: Generate Improvement Suggestions + +Based on the analysis, produce actionable suggestions for improving the loser skill: +- Specific instruction changes to make +- Tools/scripts to add or modify +- Examples to include +- Edge cases to address + +Prioritize by impact. Focus on changes that would have changed the outcome. + +### Step 8: Write Analysis Results + +Save structured analysis to `{output_path}`. + +## Output Format + +Write a JSON file with this structure: + +```json +{ + "comparison_summary": { + "winner": "A", + "winner_skill": "path/to/winner/skill", + "loser_skill": "path/to/loser/skill", + "comparator_reasoning": "Brief summary of why comparator chose winner" + }, + "winner_strengths": [ + "Clear step-by-step instructions for handling multi-page documents", + "Included validation script that caught formatting errors", + "Explicit guidance on fallback behavior when OCR fails" + ], + "loser_weaknesses": [ + "Vague instruction 'process the document appropriately' led to inconsistent behavior", + "No script for validation, agent had to improvise and made errors", + "No guidance on OCR failure, agent gave up instead of trying alternatives" + ], + "instruction_following": { + "winner": { + "score": 9, + "issues": [ + "Minor: skipped optional logging step" + ] + }, + "loser": { + "score": 6, + "issues": [ + "Did not use the skill's formatting template", + "Invented own approach instead of following step 3", + "Missed the 'always validate output' instruction" + ] + } + }, + "improvement_suggestions": [ + { + "priority": "high", + "category": "instructions", + "suggestion": "Replace 'process the document appropriately' with explicit steps: 1) Extract text, 2) Identify sections, 3) Format per template", + "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior" + }, + { + "priority": "high", + "category": "tools", + "suggestion": "Add validate_output.py script similar to winner skill's validation approach", + "expected_impact": "Would catch formatting errors before final output" + }, + { + "priority": "medium", + "category": "error_handling", + "suggestion": "Add fallback instructions: 'If OCR fails, try: 1) different resolution, 2) image preprocessing, 3) manual extraction'", + "expected_impact": "Would prevent early failure on difficult documents" + } + ], + "transcript_insights": { + "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script -> Fixed 2 issues -> Produced output", + "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods -> No validation -> Output had errors" + } +} +``` + +## Guidelines + +- **Be specific**: Quote from skills and transcripts, don't just say "instructions were unclear" +- **Be actionable**: Suggestions should be concrete changes, not vague advice +- **Focus on skill improvements**: The goal is to improve the losing skill, not critique the agent +- **Prioritize by impact**: Which changes would most likely have changed the outcome? +- **Consider causation**: Did the skill weakness actually cause the worse output, or is it incidental? +- **Stay objective**: Analyze what happened, don't editorialize +- **Think about generalization**: Would this improvement help on other evals too? 
+ +## Categories for Suggestions + +Use these categories to organize improvement suggestions: + +| Category | Description | +|----------|-------------| +| `instructions` | Changes to the skill's prose instructions | +| `tools` | Scripts, templates, or utilities to add/modify | +| `examples` | Example inputs/outputs to include | +| `error_handling` | Guidance for handling failures | +| `structure` | Reorganization of skill content | +| `references` | External docs or resources to add | + +## Priority Levels + +- **high**: Would likely change the outcome of this comparison +- **medium**: Would improve quality but may not change win/loss +- **low**: Nice to have, marginal improvement + +--- + +# Analyzing Benchmark Results + +When analyzing benchmark results, the analyzer's purpose is to **surface patterns and anomalies** across multiple runs, not suggest skill improvements. + +## Role + +Review all benchmark run results and generate freeform notes that help the user understand skill performance. Focus on patterns that wouldn't be visible from aggregate metrics alone. + +## Inputs + +You receive these parameters in your prompt: + +- **benchmark_data_path**: Path to the in-progress benchmark.json with all run results +- **skill_path**: Path to the skill being benchmarked +- **output_path**: Where to save the notes (as JSON array of strings) + +## Process + +### Step 1: Read Benchmark Data + +1. Read the benchmark.json containing all run results +2. Note the configurations tested (with_skill, without_skill) +3. Understand the run_summary aggregates already calculated + +### Step 2: Analyze Per-Assertion Patterns + +For each expectation across all runs: +- Does it **always pass** in both configurations? (may not differentiate skill value) +- Does it **always fail** in both configurations? (may be broken or beyond capability) +- Does it **always pass with skill but fail without**? (skill clearly adds value here) +- Does it **always fail with skill but pass without**? (skill may be hurting) +- Is it **highly variable**? (flaky expectation or non-deterministic behavior) + +### Step 3: Analyze Cross-Eval Patterns + +Look for patterns across evals: +- Are certain eval types consistently harder/easier? +- Do some evals show high variance while others are stable? +- Are there surprising results that contradict expectations? + +### Step 4: Analyze Metrics Patterns + +Look at time_seconds, tokens, tool_calls: +- Does the skill significantly increase execution time? +- Is there high variance in resource usage? +- Are there outlier runs that skew the aggregates? + +### Step 5: Generate Notes + +Write freeform observations as a list of strings. 
Each note should: +- State a specific observation +- Be grounded in the data (not speculation) +- Help the user understand something the aggregate metrics don't show + +Examples: +- "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value" +- "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure that may be flaky" +- "Without-skill runs consistently fail on table extraction expectations (0% pass rate)" +- "Skill adds 13s average execution time but improves pass rate by 50%" +- "Token usage is 80% higher with skill, primarily due to script output parsing" +- "All 3 without-skill runs for eval 1 produced empty output" + +### Step 6: Write Notes + +Save notes to `{output_path}` as a JSON array of strings: + +```json +[ + "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value", + "Eval 3 shows high variance (50% ± 40%) - run 2 had an unusual failure", + "Without-skill runs consistently fail on table extraction expectations", + "Skill adds 13s average execution time but improves pass rate by 50%" +] +``` + +## Guidelines + +**DO:** +- Report what you observe in the data +- Be specific about which evals, expectations, or runs you're referring to +- Note patterns that aggregate metrics would hide +- Provide context that helps interpret the numbers + +**DO NOT:** +- Suggest improvements to the skill (that's for the improvement step, not benchmarking) +- Make subjective quality judgments ("the output was good/bad") +- Speculate about causes without evidence +- Repeat information already in the run_summary aggregates diff --git a/.github/skills/skill-creator/agents/comparator.md b/.github/skills/skill-creator/agents/comparator.md new file mode 100644 index 0000000..80e00eb --- /dev/null +++ b/.github/skills/skill-creator/agents/comparator.md @@ -0,0 +1,202 @@ +# Blind Comparator Agent + +Compare two outputs WITHOUT knowing which skill produced them. + +## Role + +The Blind Comparator judges which output better accomplishes the eval task. You receive two outputs labeled A and B, but you do NOT know which skill produced which. This prevents bias toward a particular skill or approach. + +Your judgment is based purely on output quality and task completion. + +## Inputs + +You receive these parameters in your prompt: + +- **output_a_path**: Path to the first output file or directory +- **output_b_path**: Path to the second output file or directory +- **eval_prompt**: The original task/prompt that was executed +- **expectations**: List of expectations to check (optional - may be empty) + +## Process + +### Step 1: Read Both Outputs + +1. Examine output A (file or directory) +2. Examine output B (file or directory) +3. Note the type, structure, and content of each +4. If outputs are directories, examine all relevant files inside + +### Step 2: Understand the Task + +1. Read the eval_prompt carefully +2. Identify what the task requires: + - What should be produced? + - What qualities matter (accuracy, completeness, format)? + - What would distinguish a good output from a poor one? 
+ +### Step 3: Generate Evaluation Rubric + +Based on the task, generate a rubric with two dimensions: + +**Content Rubric** (what the output contains): +| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) | +|-----------|----------|----------------|---------------| +| Correctness | Major errors | Minor errors | Fully correct | +| Completeness | Missing key elements | Mostly complete | All elements present | +| Accuracy | Significant inaccuracies | Minor inaccuracies | Accurate throughout | + +**Structure Rubric** (how the output is organized): +| Criterion | 1 (Poor) | 3 (Acceptable) | 5 (Excellent) | +|-----------|----------|----------------|---------------| +| Organization | Disorganized | Reasonably organized | Clear, logical structure | +| Formatting | Inconsistent/broken | Mostly consistent | Professional, polished | +| Usability | Difficult to use | Usable with effort | Easy to use | + +Adapt criteria to the specific task. For example: +- PDF form → "Field alignment", "Text readability", "Data placement" +- Document → "Section structure", "Heading hierarchy", "Paragraph flow" +- Data output → "Schema correctness", "Data types", "Completeness" + +### Step 4: Evaluate Each Output Against the Rubric + +For each output (A and B): + +1. **Score each criterion** on the rubric (1-5 scale) +2. **Calculate dimension totals**: Content score, Structure score +3. **Calculate overall score**: Average of dimension scores, scaled to 1-10 + +### Step 5: Check Assertions (if provided) + +If expectations are provided: + +1. Check each expectation against output A +2. Check each expectation against output B +3. Count pass rates for each output +4. Use expectation scores as secondary evidence (not the primary decision factor) + +### Step 6: Determine the Winner + +Compare A and B based on (in priority order): + +1. **Primary**: Overall rubric score (content + structure) +2. **Secondary**: Assertion pass rates (if applicable) +3. **Tiebreaker**: If truly equal, declare a TIE + +Be decisive - ties should be rare. One output is usually better, even if marginally. + +### Step 7: Write Comparison Results + +Save results to a JSON file at the path specified (or `comparison.json` if not specified). + +## Output Format + +Write a JSON file with this structure: + +```json +{ + "winner": "A", + "reasoning": "Output A provides a complete solution with proper formatting and all required fields. 
Output B is missing the date field and has formatting inconsistencies.", + "rubric": { + "A": { + "content": { + "correctness": 5, + "completeness": 5, + "accuracy": 4 + }, + "structure": { + "organization": 4, + "formatting": 5, + "usability": 4 + }, + "content_score": 4.7, + "structure_score": 4.3, + "overall_score": 9.0 + }, + "B": { + "content": { + "correctness": 3, + "completeness": 2, + "accuracy": 3 + }, + "structure": { + "organization": 3, + "formatting": 2, + "usability": 3 + }, + "content_score": 2.7, + "structure_score": 2.7, + "overall_score": 5.4 + } + }, + "output_quality": { + "A": { + "score": 9, + "strengths": ["Complete solution", "Well-formatted", "All fields present"], + "weaknesses": ["Minor style inconsistency in header"] + }, + "B": { + "score": 5, + "strengths": ["Readable output", "Correct basic structure"], + "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"] + } + }, + "expectation_results": { + "A": { + "passed": 4, + "total": 5, + "pass_rate": 0.80, + "details": [ + {"text": "Output includes name", "passed": true}, + {"text": "Output includes date", "passed": true}, + {"text": "Format is PDF", "passed": true}, + {"text": "Contains signature", "passed": false}, + {"text": "Readable text", "passed": true} + ] + }, + "B": { + "passed": 3, + "total": 5, + "pass_rate": 0.60, + "details": [ + {"text": "Output includes name", "passed": true}, + {"text": "Output includes date", "passed": false}, + {"text": "Format is PDF", "passed": true}, + {"text": "Contains signature", "passed": false}, + {"text": "Readable text", "passed": true} + ] + } + } +} +``` + +If no expectations were provided, omit the `expectation_results` field entirely. + +## Field Descriptions + +- **winner**: "A", "B", or "TIE" +- **reasoning**: Clear explanation of why the winner was chosen (or why it's a tie) +- **rubric**: Structured rubric evaluation for each output + - **content**: Scores for content criteria (correctness, completeness, accuracy) + - **structure**: Scores for structure criteria (organization, formatting, usability) + - **content_score**: Average of content criteria (1-5) + - **structure_score**: Average of structure criteria (1-5) + - **overall_score**: Combined score scaled to 1-10 +- **output_quality**: Summary quality assessment + - **score**: 1-10 rating (should match rubric overall_score) + - **strengths**: List of positive aspects + - **weaknesses**: List of issues or shortcomings +- **expectation_results**: (Only if expectations provided) + - **passed**: Number of expectations that passed + - **total**: Total number of expectations + - **pass_rate**: Fraction passed (0.0 to 1.0) + - **details**: Individual expectation results + +## Guidelines + +- **Stay blind**: DO NOT try to infer which skill produced which output. Judge purely on output quality. +- **Be specific**: Cite specific examples when explaining strengths and weaknesses. +- **Be decisive**: Choose a winner unless outputs are genuinely equivalent. +- **Output quality first**: Assertion scores are secondary to overall task completion. +- **Be objective**: Don't favor outputs based on style preferences; focus on correctness and completeness. +- **Explain your reasoning**: The reasoning field should make it clear why you chose the winner. +- **Handle edge cases**: If both outputs fail, pick the one that fails less badly. If both are excellent, pick the one that's marginally better. 
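+ +To make the Step 4 scoring arithmetic concrete, a small worked example using output A's sample values from the output format above (illustrative only, not part of the required output): + +```python +# Worked example of the Step 4 scoring arithmetic for output A's sample values. +content = (5 + 5 + 4) / 3 # correctness, completeness, accuracy: ~4.67, reported as 4.7 +structure = (4 + 5 + 4) / 3 # organization, formatting, usability: ~4.33, reported as 4.3 +overall = (content + structure) / 2 * 2 # average of the two dimensions, scaled from 1-5 to 1-10 +print(round(content, 1), round(structure, 1), round(overall, 1)) # 4.7 4.3 9.0 +```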
diff --git a/.github/skills/skill-creator/agents/grader.md b/.github/skills/skill-creator/agents/grader.md new file mode 100644 index 0000000..558ab05 --- /dev/null +++ b/.github/skills/skill-creator/agents/grader.md @@ -0,0 +1,223 @@ +# Grader Agent + +Evaluate expectations against an execution transcript and outputs. + +## Role + +The Grader reviews a transcript and output files, then determines whether each expectation passes or fails. Provide clear evidence for each judgment. + +You have two jobs: grade the outputs, and critique the evals themselves. A passing grade on a weak assertion is worse than useless — it creates false confidence. When you notice an assertion that's trivially satisfied, or an important outcome that no assertion checks, say so. + +## Inputs + +You receive these parameters in your prompt: + +- **expectations**: List of expectations to evaluate (strings) +- **transcript_path**: Path to the execution transcript (markdown file) +- **outputs_dir**: Directory containing output files from execution + +## Process + +### Step 1: Read the Transcript + +1. Read the transcript file completely +2. Note the eval prompt, execution steps, and final result +3. Identify any issues or errors documented + +### Step 2: Examine Output Files + +1. List files in outputs_dir +2. Read/examine each file relevant to the expectations. If outputs aren't plain text, use the inspection tools provided in your prompt — don't rely solely on what the transcript says the executor produced. +3. Note contents, structure, and quality + +### Step 3: Evaluate Each Assertion + +For each expectation: + +1. **Search for evidence** in the transcript and outputs +2. **Determine verdict**: + - **PASS**: Clear evidence the expectation is true AND the evidence reflects genuine task completion, not just surface-level compliance + - **FAIL**: No evidence, or evidence contradicts the expectation, or the evidence is superficial (e.g., correct filename but empty/wrong content) +3. **Cite the evidence**: Quote the specific text or describe what you found + +### Step 4: Extract and Verify Claims + +Beyond the predefined expectations, extract implicit claims from the outputs and verify them: + +1. **Extract claims** from the transcript and outputs: + - Factual statements ("The form has 12 fields") + - Process claims ("Used pypdf to fill the form") + - Quality claims ("All fields were filled correctly") + +2. **Verify each claim**: + - **Factual claims**: Can be checked against the outputs or external sources + - **Process claims**: Can be verified from the transcript + - **Quality claims**: Evaluate whether the claim is justified + +3. **Flag unverifiable claims**: Note claims that cannot be verified with available information + +This catches issues that predefined expectations might miss. + +### Step 5: Read User Notes + +If `{outputs_dir}/user_notes.md` exists: +1. Read it and note any uncertainties or issues flagged by the executor +2. Include relevant concerns in the grading output +3. These may reveal problems even when expectations pass + +### Step 6: Critique the Evals + +After grading, consider whether the evals themselves could be improved. Only surface suggestions when there's a clear gap. + +Good suggestions test meaningful outcomes — assertions that are hard to satisfy without actually doing the work correctly. Think about what makes an assertion *discriminating*: it passes when the skill genuinely succeeds and fails when it doesn't. 
+
+Suggestions worth raising:
+- An assertion that passed but would also pass for a clearly wrong output (e.g., checking filename existence but not file content)
+- An important outcome you observed — good or bad — that no assertion covers at all
+- An assertion that can't actually be verified from the available outputs
+
+Keep the bar high. The goal is to flag things the eval author would say "good catch" about, not to nitpick every assertion.
+
+### Step 7: Read Executor Metrics and Timing
+
+1. If `{outputs_dir}/metrics.json` exists, read it and include it in the grading output
+2. If `{outputs_dir}/../timing.json` exists, read it and include the timing data
+
+### Step 8: Write Grading Results
+
+Save results to `{outputs_dir}/../grading.json` (sibling to outputs_dir).
+
+## Grading Criteria
+
+**PASS when**:
+- The transcript or outputs clearly demonstrate the expectation is true
+- Specific evidence can be cited
+- The evidence reflects genuine substance, not just surface compliance (e.g., a file exists AND contains correct content, not just the right filename)
+
+**FAIL when**:
+- No evidence found for the expectation
+- Evidence contradicts the expectation
+- The expectation cannot be verified from available information
+- The evidence is superficial — the assertion is technically satisfied but the underlying task outcome is wrong or incomplete
+- The output appears to meet the assertion by coincidence rather than by actually doing the work
+
+**When uncertain**: The burden of proof to pass is on the expectation.
+
+## Output Format
+
+Write a JSON file with this structure:
+
+```json
+{
+  "expectations": [
+    {
+      "text": "The output includes the name 'John Smith'",
+      "passed": true,
+      "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'"
+    },
+    {
+      "text": "The spreadsheet has a SUM formula in cell B10",
+      "passed": false,
+      "evidence": "No spreadsheet was created. The output was a text file."
+    },
+    {
+      "text": "The assistant used the skill's OCR script",
+      "passed": true,
+      "evidence": "Transcript Step 2 shows: 'Tool: Bash - python ocr_script.py image.png'"
+    }
+  ],
+  "summary": {
+    "passed": 2,
+    "failed": 1,
+    "total": 3,
+    "pass_rate": 0.67
+  },
+  "execution_metrics": {
+    "tool_calls": {
+      "Read": 5,
+      "Write": 2,
+      "Bash": 8
+    },
+    "total_tool_calls": 15,
+    "total_steps": 6,
+    "errors_encountered": 0,
+    "output_chars": 12450,
+    "transcript_chars": 3200
+  },
+  "timing": {
+    "executor_duration_seconds": 165.0,
+    "grader_duration_seconds": 26.0,
+    "total_duration_seconds": 191.0
+  },
+  "claims": [
+    {
+      "claim": "The form has 12 fillable fields",
+      "type": "factual",
+      "verified": true,
+      "evidence": "Counted 12 fields in field_info.json"
+    },
+    {
+      "claim": "All required fields were populated",
+      "type": "quality",
+      "verified": false,
+      "evidence": "Reference section was left blank despite data being available"
+    }
+  ],
+  "user_notes_summary": {
+    "uncertainties": ["Used 2023 data, may be stale"],
+    "needs_review": [],
+    "workarounds": ["Fell back to text overlay for non-fillable fields"]
+  },
+  "eval_feedback": {
+    "suggestions": [
+      {
+        "assertion": "The output includes the name 'John Smith'",
+        "reason": "A hallucinated document that mentions the name would also pass — consider checking it appears as the primary contact with matching phone and email from the input"
+      },
+      {
+        "reason": "No assertion checks whether the extracted phone numbers match the input — I observed incorrect numbers in the output that went uncaught"
+      }
+    ],
+    "overall": "Assertions check presence but not correctness. Consider adding content verification."
+  }
+}
+```
+
+## Field Descriptions
+
+- **expectations**: Array of graded expectations
+  - **text**: The original expectation text
+  - **passed**: Boolean - true if expectation passes
+  - **evidence**: Specific quote or description supporting the verdict
+- **summary**: Aggregate statistics
+  - **passed**: Count of passed expectations
+  - **failed**: Count of failed expectations
+  - **total**: Total expectations evaluated
+  - **pass_rate**: Fraction passed (0.0 to 1.0)
+- **execution_metrics**: Copied from executor's metrics.json (if available)
+  - **output_chars**: Total character count of output files (proxy for tokens)
+  - **transcript_chars**: Character count of transcript
+- **timing**: Wall clock timing from timing.json (if available)
+  - **executor_duration_seconds**: Time spent in executor subagent
+  - **grader_duration_seconds**: Time spent in the grader subagent
+  - **total_duration_seconds**: Total elapsed time for the run
+- **claims**: Extracted and verified claims from the output
+  - **claim**: The statement being verified
+  - **type**: "factual", "process", or "quality"
+  - **verified**: Boolean - whether the claim holds
+  - **evidence**: Supporting or contradicting evidence
+- **user_notes_summary**: Issues flagged by the executor
+  - **uncertainties**: Things the executor wasn't sure about
+  - **needs_review**: Items requiring human attention
+  - **workarounds**: Places where the skill didn't work as expected
+- **eval_feedback**: Improvement suggestions for the evals (only when warranted)
+  - **suggestions**: List of concrete suggestions, each with a `reason` and optionally an `assertion` it relates to
+  - **overall**: Brief assessment — can be "No suggestions, evals look solid" if nothing to flag
+
+## Guidelines
+
+- **Be objective**: Base verdicts on evidence, not assumptions
+- **Be specific**: Quote the exact text that supports your verdict
+- **Be thorough**: Check both transcript and output files
+- **Be consistent**: Apply the same standard to each expectation
+- **Explain failures**: Make it clear why evidence was insufficient
+- **No partial credit**: Each expectation is pass or fail, not partial
diff --git a/.github/skills/skill-creator/assets/eval_review.html b/.github/skills/skill-creator/assets/eval_review.html
new file mode 100644
index 0000000..938ff32
--- /dev/null
+++ b/.github/skills/skill-creator/assets/eval_review.html
@@ -0,0 +1,146 @@
+<!-- Markup stripped during extraction; only the page's text content survived. Recoverable structure: a standalone "Eval Set Review - __SKILL_NAME_PLACEHOLDER__" page that shows the current description (__SKILL_DESCRIPTION_PLACEHOLDER__) above a table of eval queries with Query / Should Trigger / Actions columns. -->
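+
+The `__SKILL_NAME_PLACEHOLDER__` / `__SKILL_DESCRIPTION_PLACEHOLDER__` tokens suggest plain string substitution at generation time. A minimal sketch of that step (the script that renders this particular template is not shown in this diff, so treat the function and its signature as assumptions):
+
+```python
+from pathlib import Path
+
+def render_review_page(template: Path, skill_name: str, description: str) -> str:
+    # Swap the literal placeholder tokens for real values;
+    # the rest of the template is served as-is.
+    html = template.read_text()
+    return (html
+            .replace("__SKILL_NAME_PLACEHOLDER__", skill_name)
+            .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", description))
+```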
+ + + + diff --git a/.github/skills/skill-creator/eval-viewer/generate_review.py b/.github/skills/skill-creator/eval-viewer/generate_review.py new file mode 100644 index 0000000..7fa5978 --- /dev/null +++ b/.github/skills/skill-creator/eval-viewer/generate_review.py @@ -0,0 +1,471 @@ +#!/usr/bin/env python3 +"""Generate and serve a review page for eval results. + +Reads the workspace directory, discovers runs (directories with outputs/), +embeds all output data into a self-contained HTML page, and serves it via +a tiny HTTP server. Feedback auto-saves to feedback.json in the workspace. + +Usage: + python generate_review.py [--port PORT] [--skill-name NAME] + python generate_review.py --previous-feedback /path/to/old/feedback.json + +No dependencies beyond the Python stdlib are required. +""" + +import argparse +import base64 +import json +import mimetypes +import os +import re +import signal +import subprocess +import sys +import time +import webbrowser +from functools import partial +from http.server import HTTPServer, BaseHTTPRequestHandler +from pathlib import Path + +# Files to exclude from output listings +METADATA_FILES = {"transcript.md", "user_notes.md", "metrics.json"} + +# Extensions we render as inline text +TEXT_EXTENSIONS = { + ".txt", ".md", ".json", ".csv", ".py", ".js", ".ts", ".tsx", ".jsx", + ".yaml", ".yml", ".xml", ".html", ".css", ".sh", ".rb", ".go", ".rs", + ".java", ".c", ".cpp", ".h", ".hpp", ".sql", ".r", ".toml", +} + +# Extensions we render as inline images +IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp"} + +# MIME type overrides for common types +MIME_OVERRIDES = { + ".svg": "image/svg+xml", + ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", + ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document", + ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation", +} + + +def get_mime_type(path: Path) -> str: + ext = path.suffix.lower() + if ext in MIME_OVERRIDES: + return MIME_OVERRIDES[ext] + mime, _ = mimetypes.guess_type(str(path)) + return mime or "application/octet-stream" + + +def find_runs(workspace: Path) -> list[dict]: + """Recursively find directories that contain an outputs/ subdirectory.""" + runs: list[dict] = [] + _find_runs_recursive(workspace, workspace, runs) + runs.sort(key=lambda r: (r.get("eval_id", float("inf")), r["id"])) + return runs + + +def _find_runs_recursive(root: Path, current: Path, runs: list[dict]) -> None: + if not current.is_dir(): + return + + outputs_dir = current / "outputs" + if outputs_dir.is_dir(): + run = build_run(root, current) + if run: + runs.append(run) + return + + skip = {"node_modules", ".git", "__pycache__", "skill", "inputs"} + for child in sorted(current.iterdir()): + if child.is_dir() and child.name not in skip: + _find_runs_recursive(root, child, runs) + + +def build_run(root: Path, run_dir: Path) -> dict | None: + """Build a run dict with prompt, outputs, and grading data.""" + prompt = "" + eval_id = None + + # Try eval_metadata.json + for candidate in [run_dir / "eval_metadata.json", run_dir.parent / "eval_metadata.json"]: + if candidate.exists(): + try: + metadata = json.loads(candidate.read_text()) + prompt = metadata.get("prompt", "") + eval_id = metadata.get("eval_id") + except (json.JSONDecodeError, OSError): + pass + if prompt: + break + + # Fall back to transcript.md + if not prompt: + for candidate in [run_dir / "transcript.md", run_dir / "outputs" / "transcript.md"]: + if candidate.exists(): + 
try: + text = candidate.read_text() + match = re.search(r"## Eval Prompt\n\n([\s\S]*?)(?=\n##|$)", text) + if match: + prompt = match.group(1).strip() + except OSError: + pass + if prompt: + break + + if not prompt: + prompt = "(No prompt found)" + + run_id = str(run_dir.relative_to(root)).replace("/", "-").replace("\\", "-") + + # Collect output files + outputs_dir = run_dir / "outputs" + output_files: list[dict] = [] + if outputs_dir.is_dir(): + for f in sorted(outputs_dir.iterdir()): + if f.is_file() and f.name not in METADATA_FILES: + output_files.append(embed_file(f)) + + # Load grading if present + grading = None + for candidate in [run_dir / "grading.json", run_dir.parent / "grading.json"]: + if candidate.exists(): + try: + grading = json.loads(candidate.read_text()) + except (json.JSONDecodeError, OSError): + pass + if grading: + break + + return { + "id": run_id, + "prompt": prompt, + "eval_id": eval_id, + "outputs": output_files, + "grading": grading, + } + + +def embed_file(path: Path) -> dict: + """Read a file and return an embedded representation.""" + ext = path.suffix.lower() + mime = get_mime_type(path) + + if ext in TEXT_EXTENSIONS: + try: + content = path.read_text(errors="replace") + except OSError: + content = "(Error reading file)" + return { + "name": path.name, + "type": "text", + "content": content, + } + elif ext in IMAGE_EXTENSIONS: + try: + raw = path.read_bytes() + b64 = base64.b64encode(raw).decode("ascii") + except OSError: + return {"name": path.name, "type": "error", "content": "(Error reading file)"} + return { + "name": path.name, + "type": "image", + "mime": mime, + "data_uri": f"data:{mime};base64,{b64}", + } + elif ext == ".pdf": + try: + raw = path.read_bytes() + b64 = base64.b64encode(raw).decode("ascii") + except OSError: + return {"name": path.name, "type": "error", "content": "(Error reading file)"} + return { + "name": path.name, + "type": "pdf", + "data_uri": f"data:{mime};base64,{b64}", + } + elif ext == ".xlsx": + try: + raw = path.read_bytes() + b64 = base64.b64encode(raw).decode("ascii") + except OSError: + return {"name": path.name, "type": "error", "content": "(Error reading file)"} + return { + "name": path.name, + "type": "xlsx", + "data_b64": b64, + } + else: + # Binary / unknown — base64 download link + try: + raw = path.read_bytes() + b64 = base64.b64encode(raw).decode("ascii") + except OSError: + return {"name": path.name, "type": "error", "content": "(Error reading file)"} + return { + "name": path.name, + "type": "binary", + "mime": mime, + "data_uri": f"data:{mime};base64,{b64}", + } + + +def load_previous_iteration(workspace: Path) -> dict[str, dict]: + """Load previous iteration's feedback and outputs. + + Returns a map of run_id -> {"feedback": str, "outputs": list[dict]}. 
+ """ + result: dict[str, dict] = {} + + # Load feedback + feedback_map: dict[str, str] = {} + feedback_path = workspace / "feedback.json" + if feedback_path.exists(): + try: + data = json.loads(feedback_path.read_text()) + feedback_map = { + r["run_id"]: r["feedback"] + for r in data.get("reviews", []) + if r.get("feedback", "").strip() + } + except (json.JSONDecodeError, OSError, KeyError): + pass + + # Load runs (to get outputs) + prev_runs = find_runs(workspace) + for run in prev_runs: + result[run["id"]] = { + "feedback": feedback_map.get(run["id"], ""), + "outputs": run.get("outputs", []), + } + + # Also add feedback for run_ids that had feedback but no matching run + for run_id, fb in feedback_map.items(): + if run_id not in result: + result[run_id] = {"feedback": fb, "outputs": []} + + return result + + +def generate_html( + runs: list[dict], + skill_name: str, + previous: dict[str, dict] | None = None, + benchmark: dict | None = None, +) -> str: + """Generate the complete standalone HTML page with embedded data.""" + template_path = Path(__file__).parent / "viewer.html" + template = template_path.read_text() + + # Build previous_feedback and previous_outputs maps for the template + previous_feedback: dict[str, str] = {} + previous_outputs: dict[str, list[dict]] = {} + if previous: + for run_id, data in previous.items(): + if data.get("feedback"): + previous_feedback[run_id] = data["feedback"] + if data.get("outputs"): + previous_outputs[run_id] = data["outputs"] + + embedded = { + "skill_name": skill_name, + "runs": runs, + "previous_feedback": previous_feedback, + "previous_outputs": previous_outputs, + } + if benchmark: + embedded["benchmark"] = benchmark + + data_json = json.dumps(embedded) + + return template.replace("/*__EMBEDDED_DATA__*/", f"const EMBEDDED_DATA = {data_json};") + + +# --------------------------------------------------------------------------- +# HTTP server (stdlib only, zero dependencies) +# --------------------------------------------------------------------------- + +def _kill_port(port: int) -> None: + """Kill any process listening on the given port.""" + try: + result = subprocess.run( + ["lsof", "-ti", f":{port}"], + capture_output=True, text=True, timeout=5, + ) + for pid_str in result.stdout.strip().split("\n"): + if pid_str.strip(): + try: + os.kill(int(pid_str.strip()), signal.SIGTERM) + except (ProcessLookupError, ValueError): + pass + if result.stdout.strip(): + time.sleep(0.5) + except subprocess.TimeoutExpired: + pass + except FileNotFoundError: + print("Note: lsof not found, cannot check if port is in use", file=sys.stderr) + +class ReviewHandler(BaseHTTPRequestHandler): + """Serves the review HTML and handles feedback saves. + + Regenerates the HTML on each page load so that refreshing the browser + picks up new eval outputs without restarting the server. 
+ """ + + def __init__( + self, + workspace: Path, + skill_name: str, + feedback_path: Path, + previous: dict[str, dict], + benchmark_path: Path | None, + *args, + **kwargs, + ): + self.workspace = workspace + self.skill_name = skill_name + self.feedback_path = feedback_path + self.previous = previous + self.benchmark_path = benchmark_path + super().__init__(*args, **kwargs) + + def do_GET(self) -> None: + if self.path == "/" or self.path == "/index.html": + # Regenerate HTML on each request (re-scans workspace for new outputs) + runs = find_runs(self.workspace) + benchmark = None + if self.benchmark_path and self.benchmark_path.exists(): + try: + benchmark = json.loads(self.benchmark_path.read_text()) + except (json.JSONDecodeError, OSError): + pass + html = generate_html(runs, self.skill_name, self.previous, benchmark) + content = html.encode("utf-8") + self.send_response(200) + self.send_header("Content-Type", "text/html; charset=utf-8") + self.send_header("Content-Length", str(len(content))) + self.end_headers() + self.wfile.write(content) + elif self.path == "/api/feedback": + data = b"{}" + if self.feedback_path.exists(): + data = self.feedback_path.read_bytes() + self.send_response(200) + self.send_header("Content-Type", "application/json") + self.send_header("Content-Length", str(len(data))) + self.end_headers() + self.wfile.write(data) + else: + self.send_error(404) + + def do_POST(self) -> None: + if self.path == "/api/feedback": + length = int(self.headers.get("Content-Length", 0)) + body = self.rfile.read(length) + try: + data = json.loads(body) + if not isinstance(data, dict) or "reviews" not in data: + raise ValueError("Expected JSON object with 'reviews' key") + self.feedback_path.write_text(json.dumps(data, indent=2) + "\n") + resp = b'{"ok":true}' + self.send_response(200) + except (json.JSONDecodeError, OSError, ValueError) as e: + resp = json.dumps({"error": str(e)}).encode() + self.send_response(500) + self.send_header("Content-Type", "application/json") + self.send_header("Content-Length", str(len(resp))) + self.end_headers() + self.wfile.write(resp) + else: + self.send_error(404) + + def log_message(self, format: str, *args: object) -> None: + # Suppress request logging to keep terminal clean + pass + + +def main() -> None: + parser = argparse.ArgumentParser(description="Generate and serve eval review") + parser.add_argument("workspace", type=Path, help="Path to workspace directory") + parser.add_argument("--port", "-p", type=int, default=3117, help="Server port (default: 3117)") + parser.add_argument("--skill-name", "-n", type=str, default=None, help="Skill name for header") + parser.add_argument( + "--previous-workspace", type=Path, default=None, + help="Path to previous iteration's workspace (shows old outputs and feedback as context)", + ) + parser.add_argument( + "--benchmark", type=Path, default=None, + help="Path to benchmark.json to show in the Benchmark tab", + ) + parser.add_argument( + "--static", "-s", type=Path, default=None, + help="Write standalone HTML to this path instead of starting a server", + ) + args = parser.parse_args() + + workspace = args.workspace.resolve() + if not workspace.is_dir(): + print(f"Error: {workspace} is not a directory", file=sys.stderr) + sys.exit(1) + + runs = find_runs(workspace) + if not runs: + print(f"No runs found in {workspace}", file=sys.stderr) + sys.exit(1) + + skill_name = args.skill_name or workspace.name.replace("-workspace", "") + feedback_path = workspace / "feedback.json" + + previous: dict[str, dict] = {} + if 
args.previous_workspace: + previous = load_previous_iteration(args.previous_workspace.resolve()) + + benchmark_path = args.benchmark.resolve() if args.benchmark else None + benchmark = None + if benchmark_path and benchmark_path.exists(): + try: + benchmark = json.loads(benchmark_path.read_text()) + except (json.JSONDecodeError, OSError): + pass + + if args.static: + html = generate_html(runs, skill_name, previous, benchmark) + args.static.parent.mkdir(parents=True, exist_ok=True) + args.static.write_text(html) + print(f"\n Static viewer written to: {args.static}\n") + sys.exit(0) + + # Kill any existing process on the target port + port = args.port + _kill_port(port) + handler = partial(ReviewHandler, workspace, skill_name, feedback_path, previous, benchmark_path) + try: + server = HTTPServer(("127.0.0.1", port), handler) + except OSError: + # Port still in use after kill attempt — find a free one + server = HTTPServer(("127.0.0.1", 0), handler) + port = server.server_address[1] + + url = f"http://localhost:{port}" + print(f"\n Eval Viewer") + print(f" ─────────────────────────────────") + print(f" URL: {url}") + print(f" Workspace: {workspace}") + print(f" Feedback: {feedback_path}") + if previous: + print(f" Previous: {args.previous_workspace} ({len(previous)} runs)") + if benchmark_path: + print(f" Benchmark: {benchmark_path}") + print(f"\n Press Ctrl+C to stop.\n") + + webbrowser.open(url) + + try: + server.serve_forever() + except KeyboardInterrupt: + print("\nStopped.") + server.server_close() + + +if __name__ == "__main__": + main() diff --git a/.github/skills/skill-creator/eval-viewer/viewer.html b/.github/skills/skill-creator/eval-viewer/viewer.html new file mode 100644 index 0000000..6d8e963 --- /dev/null +++ b/.github/skills/skill-creator/eval-viewer/viewer.html @@ -0,0 +1,1325 @@ + + + + + + Eval Review + + + + + + + +
+<!-- Markup stripped during extraction; only the page's text content survived. Recoverable structure: a tabbed "Eval Review" page with an instructions banner ("Review each output and leave feedback below. Navigate with arrow keys or buttons. When done, copy feedback and paste into Claude Code."), per-run Prompt and Output panels (with a "No output files found" empty state), a "Your Feedback" text area, a Benchmark tab ("No benchmark data available. Run a benchmark to see quantitative results here."), and a closing "Review Complete" screen ("Your feedback has been saved. Go back to your Claude Code session and tell Claude you're done reviewing."). -->
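+
+The viewer auto-saves reviews by POSTing to `/api/feedback` on the server above. From the validation in `do_POST` and the reads in `load_previous_iteration`, the payload evidently carries a top-level `reviews` list with `run_id` and `feedback` per entry; a minimal sketch (the run id shown is hypothetical, and fields beyond these may exist):
+
+```python
+feedback = {
+    "reviews": [
+        {
+            "run_id": "eval-1-with_skill-run-1",  # hypothetical; derived from the run's relative path
+            "feedback": "Output looks right but the date field is empty.",
+        },
+    ],
+}
+```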
+ + + + diff --git a/.github/skills/skill-creator/references/schemas.md b/.github/skills/skill-creator/references/schemas.md new file mode 100644 index 0000000..b6eeaa2 --- /dev/null +++ b/.github/skills/skill-creator/references/schemas.md @@ -0,0 +1,430 @@ +# JSON Schemas + +This document defines the JSON schemas used by skill-creator. + +--- + +## evals.json + +Defines the evals for a skill. Located at `evals/evals.json` within the skill directory. + +```json +{ + "skill_name": "example-skill", + "evals": [ + { + "id": 1, + "prompt": "User's example prompt", + "expected_output": "Description of expected result", + "files": ["evals/files/sample1.pdf"], + "expectations": [ + "The output includes X", + "The skill used script Y" + ] + } + ] +} +``` + +**Fields:** +- `skill_name`: Name matching the skill's frontmatter +- `evals[].id`: Unique integer identifier +- `evals[].prompt`: The task to execute +- `evals[].expected_output`: Human-readable description of success +- `evals[].files`: Optional list of input file paths (relative to skill root) +- `evals[].expectations`: List of verifiable statements + +--- + +## history.json + +Tracks version progression in Improve mode. Located at workspace root. + +```json +{ + "started_at": "2026-01-15T10:30:00Z", + "skill_name": "pdf", + "current_best": "v2", + "iterations": [ + { + "version": "v0", + "parent": null, + "expectation_pass_rate": 0.65, + "grading_result": "baseline", + "is_current_best": false + }, + { + "version": "v1", + "parent": "v0", + "expectation_pass_rate": 0.75, + "grading_result": "won", + "is_current_best": false + }, + { + "version": "v2", + "parent": "v1", + "expectation_pass_rate": 0.85, + "grading_result": "won", + "is_current_best": true + } + ] +} +``` + +**Fields:** +- `started_at`: ISO timestamp of when improvement started +- `skill_name`: Name of the skill being improved +- `current_best`: Version identifier of the best performer +- `iterations[].version`: Version identifier (v0, v1, ...) +- `iterations[].parent`: Parent version this was derived from +- `iterations[].expectation_pass_rate`: Pass rate from grading +- `iterations[].grading_result`: "baseline", "won", "lost", or "tie" +- `iterations[].is_current_best`: Whether this is the current best version + +--- + +## grading.json + +Output from the grader agent. Located at `/grading.json`. + +```json +{ + "expectations": [ + { + "text": "The output includes the name 'John Smith'", + "passed": true, + "evidence": "Found in transcript Step 3: 'Extracted names: John Smith, Sarah Johnson'" + }, + { + "text": "The spreadsheet has a SUM formula in cell B10", + "passed": false, + "evidence": "No spreadsheet was created. The output was a text file." 
+ } + ], + "summary": { + "passed": 2, + "failed": 1, + "total": 3, + "pass_rate": 0.67 + }, + "execution_metrics": { + "tool_calls": { + "Read": 5, + "Write": 2, + "Bash": 8 + }, + "total_tool_calls": 15, + "total_steps": 6, + "errors_encountered": 0, + "output_chars": 12450, + "transcript_chars": 3200 + }, + "timing": { + "executor_duration_seconds": 165.0, + "grader_duration_seconds": 26.0, + "total_duration_seconds": 191.0 + }, + "claims": [ + { + "claim": "The form has 12 fillable fields", + "type": "factual", + "verified": true, + "evidence": "Counted 12 fields in field_info.json" + } + ], + "user_notes_summary": { + "uncertainties": ["Used 2023 data, may be stale"], + "needs_review": [], + "workarounds": ["Fell back to text overlay for non-fillable fields"] + }, + "eval_feedback": { + "suggestions": [ + { + "assertion": "The output includes the name 'John Smith'", + "reason": "A hallucinated document that mentions the name would also pass" + } + ], + "overall": "Assertions check presence but not correctness." + } +} +``` + +**Fields:** +- `expectations[]`: Graded expectations with evidence +- `summary`: Aggregate pass/fail counts +- `execution_metrics`: Tool usage and output size (from executor's metrics.json) +- `timing`: Wall clock timing (from timing.json) +- `claims`: Extracted and verified claims from the output +- `user_notes_summary`: Issues flagged by the executor +- `eval_feedback`: (optional) Improvement suggestions for the evals, only present when the grader identifies issues worth raising + +--- + +## metrics.json + +Output from the executor agent. Located at `/outputs/metrics.json`. + +```json +{ + "tool_calls": { + "Read": 5, + "Write": 2, + "Bash": 8, + "Edit": 1, + "Glob": 2, + "Grep": 0 + }, + "total_tool_calls": 18, + "total_steps": 6, + "files_created": ["filled_form.pdf", "field_values.json"], + "errors_encountered": 0, + "output_chars": 12450, + "transcript_chars": 3200 +} +``` + +**Fields:** +- `tool_calls`: Count per tool type +- `total_tool_calls`: Sum of all tool calls +- `total_steps`: Number of major execution steps +- `files_created`: List of output files created +- `errors_encountered`: Number of errors during execution +- `output_chars`: Total character count of output files +- `transcript_chars`: Character count of transcript + +--- + +## timing.json + +Wall clock timing for a run. Located at `/timing.json`. + +**How to capture:** When a subagent task completes, the task notification includes `total_tokens` and `duration_ms`. Save these immediately — they are not persisted anywhere else and cannot be recovered after the fact. + +```json +{ + "total_tokens": 84852, + "duration_ms": 23332, + "total_duration_seconds": 23.3, + "executor_start": "2026-01-15T10:30:00Z", + "executor_end": "2026-01-15T10:32:45Z", + "executor_duration_seconds": 165.0, + "grader_start": "2026-01-15T10:32:46Z", + "grader_end": "2026-01-15T10:33:12Z", + "grader_duration_seconds": 26.0 +} +``` + +--- + +## benchmark.json + +Output from Benchmark mode. Located at `benchmarks//benchmark.json`. 
+ +```json +{ + "metadata": { + "skill_name": "pdf", + "skill_path": "/path/to/pdf", + "executor_model": "claude-sonnet-4-20250514", + "analyzer_model": "most-capable-model", + "timestamp": "2026-01-15T10:30:00Z", + "evals_run": [1, 2, 3], + "runs_per_configuration": 3 + }, + + "runs": [ + { + "eval_id": 1, + "eval_name": "Ocean", + "configuration": "with_skill", + "run_number": 1, + "result": { + "pass_rate": 0.85, + "passed": 6, + "failed": 1, + "total": 7, + "time_seconds": 42.5, + "tokens": 3800, + "tool_calls": 18, + "errors": 0 + }, + "expectations": [ + {"text": "...", "passed": true, "evidence": "..."} + ], + "notes": [ + "Used 2023 data, may be stale", + "Fell back to text overlay for non-fillable fields" + ] + } + ], + + "run_summary": { + "with_skill": { + "pass_rate": {"mean": 0.85, "stddev": 0.05, "min": 0.80, "max": 0.90}, + "time_seconds": {"mean": 45.0, "stddev": 12.0, "min": 32.0, "max": 58.0}, + "tokens": {"mean": 3800, "stddev": 400, "min": 3200, "max": 4100} + }, + "without_skill": { + "pass_rate": {"mean": 0.35, "stddev": 0.08, "min": 0.28, "max": 0.45}, + "time_seconds": {"mean": 32.0, "stddev": 8.0, "min": 24.0, "max": 42.0}, + "tokens": {"mean": 2100, "stddev": 300, "min": 1800, "max": 2500} + }, + "delta": { + "pass_rate": "+0.50", + "time_seconds": "+13.0", + "tokens": "+1700" + } + }, + + "notes": [ + "Assertion 'Output is a PDF file' passes 100% in both configurations - may not differentiate skill value", + "Eval 3 shows high variance (50% ± 40%) - may be flaky or model-dependent", + "Without-skill runs consistently fail on table extraction expectations", + "Skill adds 13s average execution time but improves pass rate by 50%" + ] +} +``` + +**Fields:** +- `metadata`: Information about the benchmark run + - `skill_name`: Name of the skill + - `timestamp`: When the benchmark was run + - `evals_run`: List of eval names or IDs + - `runs_per_configuration`: Number of runs per config (e.g. 3) +- `runs[]`: Individual run results + - `eval_id`: Numeric eval identifier + - `eval_name`: Human-readable eval name (used as section header in the viewer) + - `configuration`: Must be `"with_skill"` or `"without_skill"` (the viewer uses this exact string for grouping and color coding) + - `run_number`: Integer run number (1, 2, 3...) + - `result`: Nested object with `pass_rate`, `passed`, `total`, `time_seconds`, `tokens`, `errors` +- `run_summary`: Statistical aggregates per configuration + - `with_skill` / `without_skill`: Each contains `pass_rate`, `time_seconds`, `tokens` objects with `mean` and `stddev` fields + - `delta`: Difference strings like `"+0.50"`, `"+13.0"`, `"+1700"` +- `notes`: Freeform observations from the analyzer + +**Important:** The viewer reads these field names exactly. Using `config` instead of `configuration`, or putting `pass_rate` at the top level of a run instead of nested under `result`, will cause the viewer to show empty/zero values. Always reference this schema when generating benchmark.json manually. + +--- + +## comparison.json + +Output from blind comparator. Located at `/comparison-N.json`. + +```json +{ + "winner": "A", + "reasoning": "Output A provides a complete solution with proper formatting and all required fields. 
Output B is missing the date field and has formatting inconsistencies.", + "rubric": { + "A": { + "content": { + "correctness": 5, + "completeness": 5, + "accuracy": 4 + }, + "structure": { + "organization": 4, + "formatting": 5, + "usability": 4 + }, + "content_score": 4.7, + "structure_score": 4.3, + "overall_score": 9.0 + }, + "B": { + "content": { + "correctness": 3, + "completeness": 2, + "accuracy": 3 + }, + "structure": { + "organization": 3, + "formatting": 2, + "usability": 3 + }, + "content_score": 2.7, + "structure_score": 2.7, + "overall_score": 5.4 + } + }, + "output_quality": { + "A": { + "score": 9, + "strengths": ["Complete solution", "Well-formatted", "All fields present"], + "weaknesses": ["Minor style inconsistency in header"] + }, + "B": { + "score": 5, + "strengths": ["Readable output", "Correct basic structure"], + "weaknesses": ["Missing date field", "Formatting inconsistencies", "Partial data extraction"] + } + }, + "expectation_results": { + "A": { + "passed": 4, + "total": 5, + "pass_rate": 0.80, + "details": [ + {"text": "Output includes name", "passed": true} + ] + }, + "B": { + "passed": 3, + "total": 5, + "pass_rate": 0.60, + "details": [ + {"text": "Output includes name", "passed": true} + ] + } + } +} +``` + +--- + +## analysis.json + +Output from post-hoc analyzer. Located at `/analysis.json`. + +```json +{ + "comparison_summary": { + "winner": "A", + "winner_skill": "path/to/winner/skill", + "loser_skill": "path/to/loser/skill", + "comparator_reasoning": "Brief summary of why comparator chose winner" + }, + "winner_strengths": [ + "Clear step-by-step instructions for handling multi-page documents", + "Included validation script that caught formatting errors" + ], + "loser_weaknesses": [ + "Vague instruction 'process the document appropriately' led to inconsistent behavior", + "No script for validation, agent had to improvise" + ], + "instruction_following": { + "winner": { + "score": 9, + "issues": ["Minor: skipped optional logging step"] + }, + "loser": { + "score": 6, + "issues": [ + "Did not use the skill's formatting template", + "Invented own approach instead of following step 3" + ] + } + }, + "improvement_suggestions": [ + { + "priority": "high", + "category": "instructions", + "suggestion": "Replace 'process the document appropriately' with explicit steps", + "expected_impact": "Would eliminate ambiguity that caused inconsistent behavior" + } + ], + "transcript_insights": { + "winner_execution_pattern": "Read skill -> Followed 5-step process -> Used validation script", + "loser_execution_pattern": "Read skill -> Unclear on approach -> Tried 3 different methods" + } +} +``` diff --git a/.github/skills/skill-creator/scripts/__init__.py b/.github/skills/skill-creator/scripts/__init__.py new file mode 100644 index 0000000..5d8369d --- /dev/null +++ b/.github/skills/skill-creator/scripts/__init__.py @@ -0,0 +1,2 @@ +# Copyright (c) Cratis. All rights reserved. +# Licensed under the MIT license. See LICENSE file in the project root for full license information. diff --git a/.github/skills/skill-creator/scripts/aggregate_benchmark.py b/.github/skills/skill-creator/scripts/aggregate_benchmark.py new file mode 100644 index 0000000..3e66e8c --- /dev/null +++ b/.github/skills/skill-creator/scripts/aggregate_benchmark.py @@ -0,0 +1,401 @@ +#!/usr/bin/env python3 +""" +Aggregate individual run results into benchmark summary statistics. 
+ +Reads grading.json files from run directories and produces: +- run_summary with mean, stddev, min, max for each metric +- delta between with_skill and without_skill configurations + +Usage: + python aggregate_benchmark.py + +Example: + python aggregate_benchmark.py benchmarks/2026-01-15T10-30-00/ + +The script supports two directory layouts: + + Workspace layout (from skill-creator iterations): + / + └── eval-N/ + ├── with_skill/ + │ ├── run-1/grading.json + │ └── run-2/grading.json + └── without_skill/ + ├── run-1/grading.json + └── run-2/grading.json + + Legacy layout (with runs/ subdirectory): + / + └── runs/ + └── eval-N/ + ├── with_skill/ + │ └── run-1/grading.json + └── without_skill/ + └── run-1/grading.json +""" + +import argparse +import json +import math +import sys +from datetime import datetime, timezone +from pathlib import Path + + +def calculate_stats(values: list[float]) -> dict: + """Calculate mean, stddev, min, max for a list of values.""" + if not values: + return {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0} + + n = len(values) + mean = sum(values) / n + + if n > 1: + variance = sum((x - mean) ** 2 for x in values) / (n - 1) + stddev = math.sqrt(variance) + else: + stddev = 0.0 + + return { + "mean": round(mean, 4), + "stddev": round(stddev, 4), + "min": round(min(values), 4), + "max": round(max(values), 4) + } + + +def load_run_results(benchmark_dir: Path) -> dict: + """ + Load all run results from a benchmark directory. + + Returns dict keyed by config name (e.g. "with_skill"/"without_skill", + or "new_skill"/"old_skill"), each containing a list of run results. + """ + # Support both layouts: eval dirs directly under benchmark_dir, or under runs/ + runs_dir = benchmark_dir / "runs" + if runs_dir.exists(): + search_dir = runs_dir + elif list(benchmark_dir.glob("eval-*")): + search_dir = benchmark_dir + else: + print(f"No eval directories found in {benchmark_dir} or {benchmark_dir / 'runs'}") + return {} + + results: dict[str, list] = {} + + for eval_idx, eval_dir in enumerate(sorted(search_dir.glob("eval-*"))): + metadata_path = eval_dir / "eval_metadata.json" + if metadata_path.exists(): + try: + with open(metadata_path) as mf: + eval_id = json.load(mf).get("eval_id", eval_idx) + except (json.JSONDecodeError, OSError): + eval_id = eval_idx + else: + try: + eval_id = int(eval_dir.name.split("-")[1]) + except ValueError: + eval_id = eval_idx + + # Discover config directories dynamically rather than hardcoding names + for config_dir in sorted(eval_dir.iterdir()): + if not config_dir.is_dir(): + continue + # Skip non-config directories (inputs, outputs, etc.) 
+ if not list(config_dir.glob("run-*")): + continue + config = config_dir.name + if config not in results: + results[config] = [] + + for run_dir in sorted(config_dir.glob("run-*")): + run_number = int(run_dir.name.split("-")[1]) + grading_file = run_dir / "grading.json" + + if not grading_file.exists(): + print(f"Warning: grading.json not found in {run_dir}") + continue + + try: + with open(grading_file) as f: + grading = json.load(f) + except json.JSONDecodeError as e: + print(f"Warning: Invalid JSON in {grading_file}: {e}") + continue + + # Extract metrics + result = { + "eval_id": eval_id, + "run_number": run_number, + "pass_rate": grading.get("summary", {}).get("pass_rate", 0.0), + "passed": grading.get("summary", {}).get("passed", 0), + "failed": grading.get("summary", {}).get("failed", 0), + "total": grading.get("summary", {}).get("total", 0), + } + + # Extract timing — check grading.json first, then sibling timing.json + timing = grading.get("timing", {}) + result["time_seconds"] = timing.get("total_duration_seconds", 0.0) + timing_file = run_dir / "timing.json" + if result["time_seconds"] == 0.0 and timing_file.exists(): + try: + with open(timing_file) as tf: + timing_data = json.load(tf) + result["time_seconds"] = timing_data.get("total_duration_seconds", 0.0) + result["tokens"] = timing_data.get("total_tokens", 0) + except json.JSONDecodeError: + pass + + # Extract metrics if available + metrics = grading.get("execution_metrics", {}) + result["tool_calls"] = metrics.get("total_tool_calls", 0) + if not result.get("tokens"): + result["tokens"] = metrics.get("output_chars", 0) + result["errors"] = metrics.get("errors_encountered", 0) + + # Extract expectations — viewer requires fields: text, passed, evidence + raw_expectations = grading.get("expectations", []) + for exp in raw_expectations: + if "text" not in exp or "passed" not in exp: + print(f"Warning: expectation in {grading_file} missing required fields (text, passed, evidence): {exp}") + result["expectations"] = raw_expectations + + # Extract notes from user_notes_summary + notes_summary = grading.get("user_notes_summary", {}) + notes = [] + notes.extend(notes_summary.get("uncertainties", [])) + notes.extend(notes_summary.get("needs_review", [])) + notes.extend(notes_summary.get("workarounds", [])) + result["notes"] = notes + + results[config].append(result) + + return results + + +def aggregate_results(results: dict) -> dict: + """ + Aggregate run results into summary statistics. + + Returns run_summary with stats for each configuration and delta. 
+ """ + run_summary = {} + configs = list(results.keys()) + + for config in configs: + runs = results.get(config, []) + + if not runs: + run_summary[config] = { + "pass_rate": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}, + "time_seconds": {"mean": 0.0, "stddev": 0.0, "min": 0.0, "max": 0.0}, + "tokens": {"mean": 0, "stddev": 0, "min": 0, "max": 0} + } + continue + + pass_rates = [r["pass_rate"] for r in runs] + times = [r["time_seconds"] for r in runs] + tokens = [r.get("tokens", 0) for r in runs] + + run_summary[config] = { + "pass_rate": calculate_stats(pass_rates), + "time_seconds": calculate_stats(times), + "tokens": calculate_stats(tokens) + } + + # Calculate delta between the first two configs (if two exist) + if len(configs) >= 2: + primary = run_summary.get(configs[0], {}) + baseline = run_summary.get(configs[1], {}) + else: + primary = run_summary.get(configs[0], {}) if configs else {} + baseline = {} + + delta_pass_rate = primary.get("pass_rate", {}).get("mean", 0) - baseline.get("pass_rate", {}).get("mean", 0) + delta_time = primary.get("time_seconds", {}).get("mean", 0) - baseline.get("time_seconds", {}).get("mean", 0) + delta_tokens = primary.get("tokens", {}).get("mean", 0) - baseline.get("tokens", {}).get("mean", 0) + + run_summary["delta"] = { + "pass_rate": f"{delta_pass_rate:+.2f}", + "time_seconds": f"{delta_time:+.1f}", + "tokens": f"{delta_tokens:+.0f}" + } + + return run_summary + + +def generate_benchmark(benchmark_dir: Path, skill_name: str = "", skill_path: str = "") -> dict: + """ + Generate complete benchmark.json from run results. + """ + results = load_run_results(benchmark_dir) + run_summary = aggregate_results(results) + + # Build runs array for benchmark.json + runs = [] + for config in results: + for result in results[config]: + runs.append({ + "eval_id": result["eval_id"], + "configuration": config, + "run_number": result["run_number"], + "result": { + "pass_rate": result["pass_rate"], + "passed": result["passed"], + "failed": result["failed"], + "total": result["total"], + "time_seconds": result["time_seconds"], + "tokens": result.get("tokens", 0), + "tool_calls": result.get("tool_calls", 0), + "errors": result.get("errors", 0) + }, + "expectations": result["expectations"], + "notes": result["notes"] + }) + + # Determine eval IDs from results + eval_ids = sorted(set( + r["eval_id"] + for config in results.values() + for r in config + )) + + benchmark = { + "metadata": { + "skill_name": skill_name or "", + "skill_path": skill_path or "", + "executor_model": "", + "analyzer_model": "", + "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"), + "evals_run": eval_ids, + "runs_per_configuration": 3 + }, + "runs": runs, + "run_summary": run_summary, + "notes": [] # To be filled by analyzer + } + + return benchmark + + +def generate_markdown(benchmark: dict) -> str: + """Generate human-readable benchmark.md from benchmark data.""" + metadata = benchmark["metadata"] + run_summary = benchmark["run_summary"] + + # Determine config names (excluding "delta") + configs = [k for k in run_summary if k != "delta"] + config_a = configs[0] if len(configs) >= 1 else "config_a" + config_b = configs[1] if len(configs) >= 2 else "config_b" + label_a = config_a.replace("_", " ").title() + label_b = config_b.replace("_", " ").title() + + lines = [ + f"# Skill Benchmark: {metadata['skill_name']}", + "", + f"**Model**: {metadata['executor_model']}", + f"**Date**: {metadata['timestamp']}", + f"**Evals**: {', '.join(map(str, metadata['evals_run']))} 
({metadata['runs_per_configuration']} runs each per configuration)", + "", + "## Summary", + "", + f"| Metric | {label_a} | {label_b} | Delta |", + "|--------|------------|---------------|-------|", + ] + + a_summary = run_summary.get(config_a, {}) + b_summary = run_summary.get(config_b, {}) + delta = run_summary.get("delta", {}) + + # Format pass rate + a_pr = a_summary.get("pass_rate", {}) + b_pr = b_summary.get("pass_rate", {}) + lines.append(f"| Pass Rate | {a_pr.get('mean', 0)*100:.0f}% ± {a_pr.get('stddev', 0)*100:.0f}% | {b_pr.get('mean', 0)*100:.0f}% ± {b_pr.get('stddev', 0)*100:.0f}% | {delta.get('pass_rate', '—')} |") + + # Format time + a_time = a_summary.get("time_seconds", {}) + b_time = b_summary.get("time_seconds", {}) + lines.append(f"| Time | {a_time.get('mean', 0):.1f}s ± {a_time.get('stddev', 0):.1f}s | {b_time.get('mean', 0):.1f}s ± {b_time.get('stddev', 0):.1f}s | {delta.get('time_seconds', '—')}s |") + + # Format tokens + a_tokens = a_summary.get("tokens", {}) + b_tokens = b_summary.get("tokens", {}) + lines.append(f"| Tokens | {a_tokens.get('mean', 0):.0f} ± {a_tokens.get('stddev', 0):.0f} | {b_tokens.get('mean', 0):.0f} ± {b_tokens.get('stddev', 0):.0f} | {delta.get('tokens', '—')} |") + + # Notes section + if benchmark.get("notes"): + lines.extend([ + "", + "## Notes", + "" + ]) + for note in benchmark["notes"]: + lines.append(f"- {note}") + + return "\n".join(lines) + + +def main(): + parser = argparse.ArgumentParser( + description="Aggregate benchmark run results into summary statistics" + ) + parser.add_argument( + "benchmark_dir", + type=Path, + help="Path to the benchmark directory" + ) + parser.add_argument( + "--skill-name", + default="", + help="Name of the skill being benchmarked" + ) + parser.add_argument( + "--skill-path", + default="", + help="Path to the skill being benchmarked" + ) + parser.add_argument( + "--output", "-o", + type=Path, + help="Output path for benchmark.json (default: /benchmark.json)" + ) + + args = parser.parse_args() + + if not args.benchmark_dir.exists(): + print(f"Directory not found: {args.benchmark_dir}") + sys.exit(1) + + # Generate benchmark + benchmark = generate_benchmark(args.benchmark_dir, args.skill_name, args.skill_path) + + # Determine output paths + output_json = args.output or (args.benchmark_dir / "benchmark.json") + output_md = output_json.with_suffix(".md") + + # Write benchmark.json + with open(output_json, "w") as f: + json.dump(benchmark, f, indent=2) + print(f"Generated: {output_json}") + + # Write benchmark.md + markdown = generate_markdown(benchmark) + with open(output_md, "w") as f: + f.write(markdown) + print(f"Generated: {output_md}") + + # Print summary + run_summary = benchmark["run_summary"] + configs = [k for k in run_summary if k != "delta"] + delta = run_summary.get("delta", {}) + + print(f"\nSummary:") + for config in configs: + pr = run_summary[config]["pass_rate"]["mean"] + label = config.replace("_", " ").title() + print(f" {label}: {pr*100:.1f}% pass rate") + print(f" Delta: {delta.get('pass_rate', '—')}") + + +if __name__ == "__main__": + main() diff --git a/.github/skills/skill-creator/scripts/generate_report.py b/.github/skills/skill-creator/scripts/generate_report.py new file mode 100644 index 0000000..959e30a --- /dev/null +++ b/.github/skills/skill-creator/scripts/generate_report.py @@ -0,0 +1,326 @@ +#!/usr/bin/env python3 +"""Generate an HTML report from run_loop.py output. 
+ +Takes the JSON output from run_loop.py and generates a visual HTML report +showing each description attempt with check/x for each test case. +Distinguishes between train and test queries. +""" + +import argparse +import html +import json +import sys +from pathlib import Path + + +def generate_html(data: dict, auto_refresh: bool = False, skill_name: str = "") -> str: + """Generate HTML report from loop output data. If auto_refresh is True, adds a meta refresh tag.""" + history = data.get("history", []) + holdout = data.get("holdout", 0) + title_prefix = html.escape(skill_name + " \u2014 ") if skill_name else "" + + # Get all unique queries from train and test sets, with should_trigger info + train_queries: list[dict] = [] + test_queries: list[dict] = [] + if history: + for r in history[0].get("train_results", history[0].get("results", [])): + train_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)}) + if history[0].get("test_results"): + for r in history[0].get("test_results", []): + test_queries.append({"query": r["query"], "should_trigger": r.get("should_trigger", True)}) + + refresh_tag = ' \n' if auto_refresh else "" + + html_parts = [""" + + + +""" + refresh_tag + """ """ + title_prefix + """Skill Description Optimization + + + + + + +

""" + title_prefix + """Skill Description Optimization

+
+ Optimizing your skill's description. This page updates automatically as Claude tests different versions of your skill's description. Each row is an iteration — a new description attempt. The columns show test queries: green checkmarks mean the skill triggered correctly (or correctly didn't trigger), red crosses mean it got it wrong. The "Train" score shows performance on queries used to improve the description; the "Test" score shows performance on held-out queries the optimizer hasn't seen. When it's done, Claude will apply the best-performing description to your skill. +
+"""] + + # Summary section + best_test_score = data.get('best_test_score') + best_train_score = data.get('best_train_score') + html_parts.append(f""" +
+

Original: {html.escape(data.get('original_description', 'N/A'))}

+

Best: {html.escape(data.get('best_description', 'N/A'))}

+

Best Score: {data.get('best_score', 'N/A')} {'(test)' if best_test_score else '(train)'}

+

Iterations: {data.get('iterations_run', 0)} | Train: {data.get('train_size', '?')} | Test: {data.get('test_size', '?')}

+
+""") + + # Legend + html_parts.append(""" +
+ Query columns: + Should trigger + Should NOT trigger + Train + Test +
+""") + + # Table header + html_parts.append(""" +
+ + + + + + + +""") + + # Add column headers for train queries + for qinfo in train_queries: + polarity = "positive-col" if qinfo["should_trigger"] else "negative-col" + html_parts.append(f' \n') + + # Add column headers for test queries (different color) + for qinfo in test_queries: + polarity = "positive-col" if qinfo["should_trigger"] else "negative-col" + html_parts.append(f' \n') + + html_parts.append(""" + + +""") + + # Find best iteration for highlighting + if test_queries: + best_iter = max(history, key=lambda h: h.get("test_passed") or 0).get("iteration") + else: + best_iter = max(history, key=lambda h: h.get("train_passed", h.get("passed", 0))).get("iteration") + + # Add rows for each iteration + for h in history: + iteration = h.get("iteration", "?") + train_passed = h.get("train_passed", h.get("passed", 0)) + train_total = h.get("train_total", h.get("total", 0)) + test_passed = h.get("test_passed") + test_total = h.get("test_total") + description = h.get("description", "") + train_results = h.get("train_results", h.get("results", [])) + test_results = h.get("test_results", []) + + # Create lookups for results by query + train_by_query = {r["query"]: r for r in train_results} + test_by_query = {r["query"]: r for r in test_results} if test_results else {} + + # Compute aggregate correct/total runs across all retries + def aggregate_runs(results: list[dict]) -> tuple[int, int]: + correct = 0 + total = 0 + for r in results: + runs = r.get("runs", 0) + triggers = r.get("triggers", 0) + total += runs + if r.get("should_trigger", True): + correct += triggers + else: + correct += runs - triggers + return correct, total + + train_correct, train_runs = aggregate_runs(train_results) + test_correct, test_runs = aggregate_runs(test_results) + + # Determine score classes + def score_class(correct: int, total: int) -> str: + if total > 0: + ratio = correct / total + if ratio >= 0.8: + return "score-good" + elif ratio >= 0.5: + return "score-ok" + return "score-bad" + + train_class = score_class(train_correct, train_runs) + test_class = score_class(test_correct, test_runs) + + row_class = "best-row" if iteration == best_iter else "" + + html_parts.append(f""" + + + + +""") + + # Add result for each train query + for qinfo in train_queries: + r = train_by_query.get(qinfo["query"], {}) + did_pass = r.get("pass", False) + triggers = r.get("triggers", 0) + runs = r.get("runs", 0) + + icon = "✓" if did_pass else "✗" + css_class = "pass" if did_pass else "fail" + + html_parts.append(f' \n') + + # Add result for each test query (with different background) + for qinfo in test_queries: + r = test_by_query.get(qinfo["query"], {}) + did_pass = r.get("pass", False) + triggers = r.get("triggers", 0) + runs = r.get("runs", 0) + + icon = "✓" if did_pass else "✗" + css_class = "pass" if did_pass else "fail" + + html_parts.append(f' \n') + + html_parts.append(" \n") + + html_parts.append(""" +
IterTrainTestDescription{html.escape(qinfo["query"])}{html.escape(qinfo["query"])}
{iteration}{train_correct}/{train_runs}{test_correct}/{test_runs}{html.escape(description)}{icon}{triggers}/{runs}{icon}{triggers}/{runs}
+
+""") + + html_parts.append(""" + + +""") + + return "".join(html_parts) + + +def main(): + parser = argparse.ArgumentParser(description="Generate HTML report from run_loop output") + parser.add_argument("input", help="Path to JSON output from run_loop.py (or - for stdin)") + parser.add_argument("-o", "--output", default=None, help="Output HTML file (default: stdout)") + parser.add_argument("--skill-name", default="", help="Skill name to include in the report title") + args = parser.parse_args() + + if args.input == "-": + data = json.load(sys.stdin) + else: + data = json.loads(Path(args.input).read_text()) + + html_output = generate_html(data, skill_name=args.skill_name) + + if args.output: + Path(args.output).write_text(html_output) + print(f"Report written to {args.output}", file=sys.stderr) + else: + print(html_output) + + +if __name__ == "__main__": + main() diff --git a/.github/skills/skill-creator/scripts/improve_description.py b/.github/skills/skill-creator/scripts/improve_description.py new file mode 100644 index 0000000..a270777 --- /dev/null +++ b/.github/skills/skill-creator/scripts/improve_description.py @@ -0,0 +1,248 @@ +#!/usr/bin/env python3 +"""Improve a skill description based on eval results. + +Takes eval results (from run_eval.py) and generates an improved description +using Claude with extended thinking. +""" + +import argparse +import json +import re +import sys +from pathlib import Path + +import anthropic + +from scripts.utils import parse_skill_md + + +def improve_description( + client: anthropic.Anthropic, + skill_name: str, + skill_content: str, + current_description: str, + eval_results: dict, + history: list[dict], + model: str, + test_results: dict | None = None, + log_dir: Path | None = None, + iteration: int | None = None, +) -> str: + """Call Claude to improve the description based on eval results.""" + failed_triggers = [ + r for r in eval_results["results"] + if r["should_trigger"] and not r["pass"] + ] + false_triggers = [ + r for r in eval_results["results"] + if not r["should_trigger"] and not r["pass"] + ] + + # Build scores summary + train_score = f"{eval_results['summary']['passed']}/{eval_results['summary']['total']}" + if test_results: + test_score = f"{test_results['summary']['passed']}/{test_results['summary']['total']}" + scores_summary = f"Train: {train_score}, Test: {test_score}" + else: + scores_summary = f"Train: {train_score}" + + prompt = f"""You are optimizing a skill description for a Claude Code skill called "{skill_name}". A "skill" is sort of like a prompt, but with progressive disclosure -- there's a title and description that Claude sees when deciding whether to use the skill, and then if it does use the skill, it reads the .md file which has lots more details and potentially links to other resources in the skill folder like helper files and scripts and additional documentation or examples. + +The description appears in Claude's "available_skills" list. When a user sends a query, Claude decides whether to invoke the skill based solely on the title and on this description. Your goal is to write a description that triggers for relevant queries, and doesn't trigger for irrelevant ones. 
+
+Here's the current description:
+
+"{current_description}"
+
+Current scores ({scores_summary}):
+
+"""
+    if failed_triggers:
+        prompt += "FAILED TO TRIGGER (should have triggered but didn't):\n"
+        for r in failed_triggers:
+            prompt += f'  - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
+        prompt += "\n"
+
+    if false_triggers:
+        prompt += "FALSE TRIGGERS (triggered but shouldn't have):\n"
+        for r in false_triggers:
+            prompt += f'  - "{r["query"]}" (triggered {r["triggers"]}/{r["runs"]} times)\n'
+        prompt += "\n"
+
+    if history:
+        prompt += "PREVIOUS ATTEMPTS (do NOT repeat these — try something structurally different):\n\n"
+        for h in history:
+            train_s = f"{h.get('train_passed', h.get('passed', 0))}/{h.get('train_total', h.get('total', 0))}"
+            test_s = f"{h.get('test_passed', '?')}/{h.get('test_total', '?')}" if h.get('test_passed') is not None else None
+            score_str = f"train={train_s}" + (f", test={test_s}" if test_s else "")
+            prompt += f'<attempt score="{score_str}">\n'
+            prompt += f'Description: "{h["description"]}"\n'
+            if "results" in h:
+                prompt += "Train results:\n"
+                for r in h["results"]:
+                    status = "PASS" if r["pass"] else "FAIL"
+                    prompt += f'  [{status}] "{r["query"][:80]}" (triggered {r["triggers"]}/{r["runs"]})\n'
+            if h.get("note"):
+                prompt += f'Note: {h["note"]}\n'
+            prompt += "</attempt>\n\n"
+
+    prompt += f"""
+
+Skill content (for context on what the skill does):
+
+<skill_content>
+{skill_content}
+</skill_content>
+
+Based on the failures, write a new and improved description that is more likely to trigger correctly. When I say "based on the failures", it's a bit of a tricky line to walk because we don't want to overfit to the specific cases you're seeing. So what I DON'T want you to do is produce an ever-expanding list of specific queries that this skill should or shouldn't trigger for. Instead, try to generalize from the failures to broader categories of user intent and situations where this skill would be useful or not useful. The reason for this is twofold:
+
+1. Avoid overfitting
+2. The list might get loooong and it's injected into ALL queries and there might be a lot of skills, so we don't want to blow too much space on any given description.
+
+Concretely, your description should not be more than about 100-200 words, even if that comes at the cost of accuracy.
+
+Here are some tips that we've found to work well in writing these descriptions:
+- The skill should be phrased in the imperative -- "Use this skill for" rather than "this skill does"
+- The skill description should focus on the user's intent, what they are trying to achieve, vs. the implementation details of how the skill works.
+- The description competes with other skills for Claude's attention — make it distinctive and immediately recognizable.
+- If you're getting lots of failures after repeated attempts, change things up. Try different sentence structures or wordings.
+
+I'd encourage you to be creative and mix up the style in different iterations since you'll have multiple opportunities to try different approaches and we'll just grab the highest-scoring one at the end.
+
+Please respond with only the new description text in <new_description> tags, nothing else."""
+
+    response = client.messages.create(
+        model=model,
+        max_tokens=16000,
+        thinking={
+            "type": "enabled",
+            "budget_tokens": 10000,
+        },
+        messages=[{"role": "user", "content": prompt}],
+    )
+
+    # Extract thinking and text from response
+    thinking_text = ""
+    text = ""
+    for block in response.content:
+        if block.type == "thinking":
+            thinking_text = block.thinking
+        elif block.type == "text":
+            text = block.text
+
+    # Parse out the <new_description> tags
+    match = re.search(r"<new_description>(.*?)</new_description>", text, re.DOTALL)
+    description = match.group(1).strip().strip('"') if match else text.strip().strip('"')
+
+    # Log the transcript
+    transcript: dict = {
+        "iteration": iteration,
+        "prompt": prompt,
+        "thinking": thinking_text,
+        "response": text,
+        "parsed_description": description,
+        "char_count": len(description),
+        "over_limit": len(description) > 1024,
+    }
+
+    # If over 1024 chars, ask the model to shorten it
+    if len(description) > 1024:
+        shorten_prompt = f"Your description is {len(description)} characters, which exceeds the hard 1024 character limit. Please rewrite it to be under 1024 characters while preserving the most important trigger words and intent coverage. Respond with only the new description in <new_description> tags."
+        shorten_response = client.messages.create(
+            model=model,
+            max_tokens=16000,
+            thinking={
+                "type": "enabled",
+                "budget_tokens": 10000,
+            },
+            messages=[
+                {"role": "user", "content": prompt},
+                {"role": "assistant", "content": text},
+                {"role": "user", "content": shorten_prompt},
+            ],
+        )
+
+        shorten_thinking = ""
+        shorten_text = ""
+        for block in shorten_response.content:
+            if block.type == "thinking":
+                shorten_thinking = block.thinking
+            elif block.type == "text":
+                shorten_text = block.text
+
+        match = re.search(r"<new_description>(.*?)</new_description>", shorten_text, re.DOTALL)
+        shortened = match.group(1).strip().strip('"') if match else shorten_text.strip().strip('"')
+
+        transcript["rewrite_prompt"] = shorten_prompt
+        transcript["rewrite_thinking"] = shorten_thinking
+        transcript["rewrite_response"] = shorten_text
+        transcript["rewrite_description"] = shortened
+        transcript["rewrite_char_count"] = len(shortened)
+        description = shortened
+
+    transcript["final_description"] = description
+
+    if log_dir:
+        log_dir.mkdir(parents=True, exist_ok=True)
+        log_file = log_dir / f"improve_iter_{iteration or 'unknown'}.json"
+        log_file.write_text(json.dumps(transcript, indent=2))
+
+    return description
+
+
+def main():
+    parser = argparse.ArgumentParser(description="Improve a skill description based on eval results")
+    parser.add_argument("--eval-results", required=True, help="Path to eval results JSON (from run_eval.py)")
+    parser.add_argument("--skill-path", required=True, help="Path to skill directory")
+    parser.add_argument("--history", default=None, help="Path to history JSON (previous attempts)")
+    parser.add_argument("--model", required=True, help="Model for improvement")
+    parser.add_argument("--verbose", action="store_true", help="Print thinking to stderr")
+    args = parser.parse_args()
+
+    skill_path = Path(args.skill_path)
+    if not (skill_path / "SKILL.md").exists():
+        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
+        sys.exit(1)
+
+    eval_results = json.loads(Path(args.eval_results).read_text())
+    history = []
+    if args.history:
+        history = json.loads(Path(args.history).read_text())
+
+    name, _, content = parse_skill_md(skill_path)
+    current_description = eval_results["description"]
+
+    if args.verbose:
+        print(f"Current: {current_description}", file=sys.stderr)
+        print(f"Score: {eval_results['summary']['passed']}/{eval_results['summary']['total']}", file=sys.stderr)
+
+    client = anthropic.Anthropic()
+    new_description = improve_description(
+        client=client,
+        skill_name=name,
+        skill_content=content,
+        current_description=current_description,
+        eval_results=eval_results,
+        history=history,
+        model=args.model,
+    )
+
+    if args.verbose:
+        print(f"Improved: {new_description}", file=sys.stderr)
+
+    # Output as JSON with both the new description and updated history
+    output = {
+        "description": new_description,
+        "history": history + [{
+            "description": current_description,
+            "passed": eval_results["summary"]["passed"],
+            "failed": eval_results["summary"]["failed"],
+            "total": eval_results["summary"]["total"],
+            "results": eval_results["results"],
+        }],
+    }
+    print(json.dumps(output, indent=2))
+
+
+if __name__ == "__main__":
+    main()
diff --git a/.github/skills/skill-creator/scripts/package_skill.py b/.github/skills/skill-creator/scripts/package_skill.py
new file mode 100644
index 0000000..f48eac4
--- /dev/null
+++ b/.github/skills/skill-creator/scripts/package_skill.py
@@ -0,0 +1,136 @@
+#!/usr/bin/env python3
+"""
+Skill Packager - Creates a distributable .skill file of a skill folder
+
+Usage:
+    python scripts/package_skill.py <skill-path> [output-directory]
+
+Example:
+    python scripts/package_skill.py skills/public/my-skill
+    python scripts/package_skill.py skills/public/my-skill ./dist
+"""
+
+import fnmatch
+import sys
+import zipfile
+from pathlib import Path
+from scripts.quick_validate import validate_skill
+
+# Patterns to exclude when packaging skills.
+EXCLUDE_DIRS = {"__pycache__", "node_modules"}
+EXCLUDE_GLOBS = {"*.pyc"}
+EXCLUDE_FILES = {".DS_Store"}
+# Directories excluded only at the skill root (not when nested deeper).
+ROOT_EXCLUDE_DIRS = {"evals"}
+
+
+def should_exclude(rel_path: Path) -> bool:
+    """Check if a path should be excluded from packaging."""
+    parts = rel_path.parts
+    if any(part in EXCLUDE_DIRS for part in parts):
+        return True
+    # rel_path is relative to skill_path.parent, so parts[0] is the skill
+    # folder name and parts[1] (if present) is the first subdir.
+    if len(parts) > 1 and parts[1] in ROOT_EXCLUDE_DIRS:
+        return True
+    name = rel_path.name
+    if name in EXCLUDE_FILES:
+        return True
+    return any(fnmatch.fnmatch(name, pat) for pat in EXCLUDE_GLOBS)
+
+
+def package_skill(skill_path, output_dir=None):
+    """
+    Package a skill folder into a .skill file.
+
+    Args:
+        skill_path: Path to the skill folder
+        output_dir: Optional output directory for the .skill file (defaults to current directory)
+
+    Returns:
+        Path to the created .skill file, or None if error
+    """
+    skill_path = Path(skill_path).resolve()
+
+    # Validate skill folder exists
+    if not skill_path.exists():
+        print(f"❌ Error: Skill folder not found: {skill_path}")
+        return None
+
+    if not skill_path.is_dir():
+        print(f"❌ Error: Path is not a directory: {skill_path}")
+        return None
+
+    # Validate SKILL.md exists
+    skill_md = skill_path / "SKILL.md"
+    if not skill_md.exists():
+        print(f"❌ Error: SKILL.md not found in {skill_path}")
+        return None
+
+    # Run validation before packaging
+    print("🔍 Validating skill...")
+    valid, message = validate_skill(skill_path)
+    if not valid:
+        print(f"❌ Validation failed: {message}")
+        print("   Please fix the validation errors before packaging.")
+        return None
+    print(f"✅ {message}\n")
+
+    # Determine output location
+    skill_name = skill_path.name
+    if output_dir:
+        output_path = Path(output_dir).resolve()
+        output_path.mkdir(parents=True, exist_ok=True)
+    else:
+        output_path = Path.cwd()
+
+    skill_filename = output_path / f"{skill_name}.skill"
+
+    # Create the .skill file (zip format)
+    try:
+        with zipfile.ZipFile(skill_filename, 'w', zipfile.ZIP_DEFLATED) as zipf:
+            # Walk through the skill directory, excluding build artifacts
+            for file_path in skill_path.rglob('*'):
+                if not file_path.is_file():
+                    continue
+                arcname = file_path.relative_to(skill_path.parent)
+                if should_exclude(arcname):
+                    print(f"  Skipped: {arcname}")
+                    continue
+                zipf.write(file_path, arcname)
+                print(f"  Added: {arcname}")
+
+        print(f"\n✅ Successfully packaged skill to: {skill_filename}")
+        return skill_filename
+
+    except Exception as e:
+        print(f"❌ Error creating .skill file: {e}")
+        return None
+
+
+def main():
+    if len(sys.argv) < 2:
+        print("Usage: python scripts/package_skill.py <skill-path> [output-directory]")
+        print("\nExample:")
+        print("  python scripts/package_skill.py skills/public/my-skill")
+        print("  python scripts/package_skill.py skills/public/my-skill ./dist")
+        sys.exit(1)
+
+    skill_path = sys.argv[1]
+    output_dir = sys.argv[2] if len(sys.argv) > 2 else None
+
+    print(f"📦 Packaging skill: {skill_path}")
+    if output_dir:
+        print(f"   Output directory: {output_dir}")
+    print()
+
+    result = package_skill(skill_path, output_dir)
+
+    if result:
+        sys.exit(0)
+    else:
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    main()
diff --git a/.github/skills/skill-creator/scripts/quick_validate.py b/.github/skills/skill-creator/scripts/quick_validate.py
new file mode 100644
index 0000000..ed8e1dd
--- /dev/null
+++ b/.github/skills/skill-creator/scripts/quick_validate.py
@@ -0,0 +1,103 @@
+#!/usr/bin/env python3
+"""
+Quick validation script for skills - minimal version
+"""
+
+import sys
+import re
+import yaml
+from pathlib import Path
+
+def validate_skill(skill_path):
+    """Basic validation of a skill"""
+    skill_path = Path(skill_path)
+
+    # Check SKILL.md exists
+    skill_md = skill_path / 'SKILL.md'
+    if not skill_md.exists():
+        return False, "SKILL.md not found"
+
+    # Read and validate frontmatter
+    content = skill_md.read_text()
+    if not content.startswith('---'):
+        return False, "No YAML frontmatter found"
+
+    # Extract frontmatter
+    match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
+    if not match:
+        return False, "Invalid frontmatter format"
+
+    frontmatter_text = match.group(1)
+
+    # Parse YAML frontmatter
+    try:
+        frontmatter = yaml.safe_load(frontmatter_text)
+        if not isinstance(frontmatter, dict):
+            return False, "Frontmatter must be a YAML dictionary"
+    except yaml.YAMLError as e:
+        return False, f"Invalid YAML in frontmatter: {e}"
+
+    # Define allowed properties
+    ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata', 'compatibility'}
+
+    # Check for unexpected properties (excluding nested keys under metadata)
+    unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
+    if unexpected_keys:
+        return False, (
+            f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
+            f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
+        )
+
+    # Check required fields
+    if 'name' not in frontmatter:
+        return False, "Missing 'name' in frontmatter"
+    if 'description' not in frontmatter:
+        return False, "Missing 'description' in frontmatter"
+
+    # Extract name for validation
+    name = frontmatter.get('name', '')
+    if not isinstance(name, str):
+        return False, f"Name must be a string, got {type(name).__name__}"
+    name = name.strip()
+    if name:
+        # Check naming convention (kebab-case: lowercase with hyphens)
+        if not re.match(r'^[a-z0-9-]+$', name):
+            return False, f"Name '{name}' should be kebab-case (lowercase letters, digits, and hyphens only)"
+        if name.startswith('-') or name.endswith('-') or '--' in name:
+            return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
+        # Check name length (max 64 characters per spec)
+        if len(name) > 64:
+            return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
+
+    # Extract and validate description
+    description = frontmatter.get('description', '')
+    if not isinstance(description, str):
+        return False, f"Description must be a string, got {type(description).__name__}"
+    description = description.strip()
+    if description:
+        # Check for angle brackets
+        if '<' in description or '>' in description:
+            return False, "Description cannot contain angle brackets (< or >)"
+        # Check description length (max 1024 characters per spec)
+        if len(description) > 1024:
+            return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
+
+    # Validate compatibility field if present (optional)
+    compatibility = frontmatter.get('compatibility', '')
+    if compatibility:
+        if not isinstance(compatibility, str):
+            return False, f"Compatibility must be a string, got {type(compatibility).__name__}"
+        if len(compatibility) > 500:
+            return False, f"Compatibility is too long ({len(compatibility)} characters). Maximum is 500 characters."
+
+    return True, "Skill is valid!"
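+
+# Illustrative only: a minimal frontmatter that satisfies the checks above
+# (the name and description values here are hypothetical):
+#
+#   ---
+#   name: my-skill
+#   description: Use this skill when the user asks to do X or Y.
+#   ---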
+
+if __name__ == "__main__":
+    if len(sys.argv) != 2:
+        print("Usage: python quick_validate.py <skill-path>")
+        sys.exit(1)
+
+    valid, message = validate_skill(sys.argv[1])
+    print(message)
+    sys.exit(0 if valid else 1)
\ No newline at end of file
diff --git a/.github/skills/skill-creator/scripts/restructure_evals.py b/.github/skills/skill-creator/scripts/restructure_evals.py
new file mode 100644
index 0000000..d5a34ea
--- /dev/null
+++ b/.github/skills/skill-creator/scripts/restructure_evals.py
@@ -0,0 +1,42 @@
+#!/usr/bin/env python3
+"""Restructure eval workspace: move grading.json into run-1/ dirs and add summary fields."""
+import json
+import shutil
+from pathlib import Path
+
+base = Path("/Volumes/sourcecode/repos/cratis/Documentation/.github/skills/skills-eval-workspace/iteration-1")
+
+for grading_file in sorted(base.rglob("grading.json")):
+    parent = grading_file.parent
+    if parent.name not in ("with_skill", "without_skill"):
+        continue
+
+    with open(grading_file) as f:
+        grading = json.load(f)
+
+    expectations = grading.get("expectations", [])
+    passed = sum(1 for e in expectations if e.get("passed", False))
+    failed = len(expectations) - passed
+    total = len(expectations)
+    pass_rate = round(passed / total, 4) if total > 0 else 0.0
+
+    grading["summary"] = {
+        "pass_rate": pass_rate,
+        "passed": passed,
+        "failed": failed,
+        "total": total
+    }
+
+    run_dir = parent / "run-1"
+    run_dir.mkdir(exist_ok=True)
+
+    with open(run_dir / "grading.json", "w") as f:
+        json.dump(grading, f, indent=2)
+
+    timing_file = parent / "timing.json"
+    if timing_file.exists():
+        shutil.copy(timing_file, run_dir / "timing.json")
+
+    print(f"OK {parent.parent.parent.name}/{parent.parent.name}/{parent.name} pass_rate={pass_rate} ({passed}/{total})")
+
+print("Done!")
diff --git a/.github/skills/skill-creator/scripts/run_eval.py b/.github/skills/skill-creator/scripts/run_eval.py
new file mode 100644
index 0000000..e58c70b
--- /dev/null
+++ b/.github/skills/skill-creator/scripts/run_eval.py
@@ -0,0 +1,310 @@
+#!/usr/bin/env python3
+"""Run trigger evaluation for a skill description.
+
+Tests whether a skill's description causes Claude to trigger (read the skill)
+for a set of queries. Outputs results as JSON.
+"""
+
+import argparse
+import json
+import os
+import select
+import subprocess
+import sys
+import time
+import uuid
+from concurrent.futures import ProcessPoolExecutor, as_completed
+from pathlib import Path
+
+from scripts.utils import parse_skill_md
+
+
+def find_project_root() -> Path:
+    """Find the project root by walking up from cwd looking for .claude/.
+
+    Mimics how Claude Code discovers its project root, so the command file
+    we create ends up where claude -p will look for it.
+    """
+    current = Path.cwd()
+    for parent in [current, *current.parents]:
+        if (parent / ".claude").is_dir():
+            return parent
+    return current
+
+
+def run_single_query(
+    query: str,
+    skill_name: str,
+    skill_description: str,
+    timeout: int,
+    project_root: str,
+    model: str | None = None,
+) -> bool:
+    """Run a single query and return whether the skill was triggered.
+
+    Creates a command file in .claude/commands/ so it appears in Claude's
+    available_skills list, then runs `claude -p` with the raw query.
+    Uses --include-partial-messages to detect triggering early from
+    stream events (content_block_start) rather than waiting for the
+    full assistant message, which only arrives after tool execution.
+ """ + unique_id = uuid.uuid4().hex[:8] + clean_name = f"{skill_name}-skill-{unique_id}" + project_commands_dir = Path(project_root) / ".claude" / "commands" + command_file = project_commands_dir / f"{clean_name}.md" + + try: + project_commands_dir.mkdir(parents=True, exist_ok=True) + # Use YAML block scalar to avoid breaking on quotes in description + indented_desc = "\n ".join(skill_description.split("\n")) + command_content = ( + f"---\n" + f"description: |\n" + f" {indented_desc}\n" + f"---\n\n" + f"# {skill_name}\n\n" + f"This skill handles: {skill_description}\n" + ) + command_file.write_text(command_content) + + cmd = [ + "claude", + "-p", query, + "--output-format", "stream-json", + "--verbose", + "--include-partial-messages", + ] + if model: + cmd.extend(["--model", model]) + + # Remove CLAUDECODE env var to allow nesting claude -p inside a + # Claude Code session. The guard is for interactive terminal conflicts; + # programmatic subprocess usage is safe. + env = {k: v for k, v in os.environ.items() if k != "CLAUDECODE"} + + process = subprocess.Popen( + cmd, + stdout=subprocess.PIPE, + stderr=subprocess.DEVNULL, + cwd=project_root, + env=env, + ) + + triggered = False + start_time = time.time() + buffer = "" + # Track state for stream event detection + pending_tool_name = None + accumulated_json = "" + + try: + while time.time() - start_time < timeout: + if process.poll() is not None: + remaining = process.stdout.read() + if remaining: + buffer += remaining.decode("utf-8", errors="replace") + break + + ready, _, _ = select.select([process.stdout], [], [], 1.0) + if not ready: + continue + + chunk = os.read(process.stdout.fileno(), 8192) + if not chunk: + break + buffer += chunk.decode("utf-8", errors="replace") + + while "\n" in buffer: + line, buffer = buffer.split("\n", 1) + line = line.strip() + if not line: + continue + + try: + event = json.loads(line) + except json.JSONDecodeError: + continue + + # Early detection via stream events + if event.get("type") == "stream_event": + se = event.get("event", {}) + se_type = se.get("type", "") + + if se_type == "content_block_start": + cb = se.get("content_block", {}) + if cb.get("type") == "tool_use": + tool_name = cb.get("name", "") + if tool_name in ("Skill", "Read"): + pending_tool_name = tool_name + accumulated_json = "" + else: + return False + + elif se_type == "content_block_delta" and pending_tool_name: + delta = se.get("delta", {}) + if delta.get("type") == "input_json_delta": + accumulated_json += delta.get("partial_json", "") + if clean_name in accumulated_json: + return True + + elif se_type in ("content_block_stop", "message_stop"): + if pending_tool_name: + return clean_name in accumulated_json + if se_type == "message_stop": + return False + + # Fallback: full assistant message + elif event.get("type") == "assistant": + message = event.get("message", {}) + for content_item in message.get("content", []): + if content_item.get("type") != "tool_use": + continue + tool_name = content_item.get("name", "") + tool_input = content_item.get("input", {}) + if tool_name == "Skill" and clean_name in tool_input.get("skill", ""): + triggered = True + elif tool_name == "Read" and clean_name in tool_input.get("file_path", ""): + triggered = True + return triggered + + elif event.get("type") == "result": + return triggered + finally: + # Clean up process on any exit path (return, exception, timeout) + if process.poll() is None: + process.kill() + process.wait() + + return triggered + finally: + if command_file.exists(): + 
command_file.unlink() + + +def run_eval( + eval_set: list[dict], + skill_name: str, + description: str, + num_workers: int, + timeout: int, + project_root: Path, + runs_per_query: int = 1, + trigger_threshold: float = 0.5, + model: str | None = None, +) -> dict: + """Run the full eval set and return results.""" + results = [] + + with ProcessPoolExecutor(max_workers=num_workers) as executor: + future_to_info = {} + for item in eval_set: + for run_idx in range(runs_per_query): + future = executor.submit( + run_single_query, + item["query"], + skill_name, + description, + timeout, + str(project_root), + model, + ) + future_to_info[future] = (item, run_idx) + + query_triggers: dict[str, list[bool]] = {} + query_items: dict[str, dict] = {} + for future in as_completed(future_to_info): + item, _ = future_to_info[future] + query = item["query"] + query_items[query] = item + if query not in query_triggers: + query_triggers[query] = [] + try: + query_triggers[query].append(future.result()) + except Exception as e: + print(f"Warning: query failed: {e}", file=sys.stderr) + query_triggers[query].append(False) + + for query, triggers in query_triggers.items(): + item = query_items[query] + trigger_rate = sum(triggers) / len(triggers) + should_trigger = item["should_trigger"] + if should_trigger: + did_pass = trigger_rate >= trigger_threshold + else: + did_pass = trigger_rate < trigger_threshold + results.append({ + "query": query, + "should_trigger": should_trigger, + "trigger_rate": trigger_rate, + "triggers": sum(triggers), + "runs": len(triggers), + "pass": did_pass, + }) + + passed = sum(1 for r in results if r["pass"]) + total = len(results) + + return { + "skill_name": skill_name, + "description": description, + "results": results, + "summary": { + "total": total, + "passed": passed, + "failed": total - passed, + }, + } + + +def main(): + parser = argparse.ArgumentParser(description="Run trigger evaluation for a skill description") + parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file") + parser.add_argument("--skill-path", required=True, help="Path to skill directory") + parser.add_argument("--description", default=None, help="Override description to test") + parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers") + parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds") + parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query") + parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold") + parser.add_argument("--model", default=None, help="Model to use for claude -p (default: user's configured model)") + parser.add_argument("--verbose", action="store_true", help="Print progress to stderr") + args = parser.parse_args() + + eval_set = json.loads(Path(args.eval_set).read_text()) + skill_path = Path(args.skill_path) + + if not (skill_path / "SKILL.md").exists(): + print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr) + sys.exit(1) + + name, original_description, content = parse_skill_md(skill_path) + description = args.description or original_description + project_root = find_project_root() + + if args.verbose: + print(f"Evaluating: {description}", file=sys.stderr) + + output = run_eval( + eval_set=eval_set, + skill_name=name, + description=description, + num_workers=args.num_workers, + timeout=args.timeout, + project_root=project_root, + runs_per_query=args.runs_per_query, + 
trigger_threshold=args.trigger_threshold, + model=args.model, + ) + + if args.verbose: + summary = output["summary"] + print(f"Results: {summary['passed']}/{summary['total']} passed", file=sys.stderr) + for r in output["results"]: + status = "PASS" if r["pass"] else "FAIL" + rate_str = f"{r['triggers']}/{r['runs']}" + print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:70]}", file=sys.stderr) + + print(json.dumps(output, indent=2)) + + +if __name__ == "__main__": + main() diff --git a/.github/skills/skill-creator/scripts/run_loop.py b/.github/skills/skill-creator/scripts/run_loop.py new file mode 100644 index 0000000..36f9b4e --- /dev/null +++ b/.github/skills/skill-creator/scripts/run_loop.py @@ -0,0 +1,332 @@ +#!/usr/bin/env python3 +"""Run the eval + improve loop until all pass or max iterations reached. + +Combines run_eval.py and improve_description.py in a loop, tracking history +and returning the best description found. Supports train/test split to prevent +overfitting. +""" + +import argparse +import json +import random +import sys +import tempfile +import time +import webbrowser +from pathlib import Path + +import anthropic + +from scripts.generate_report import generate_html +from scripts.improve_description import improve_description +from scripts.run_eval import find_project_root, run_eval +from scripts.utils import parse_skill_md + + +def split_eval_set(eval_set: list[dict], holdout: float, seed: int = 42) -> tuple[list[dict], list[dict]]: + """Split eval set into train and test sets, stratified by should_trigger.""" + random.seed(seed) + + # Separate by should_trigger + trigger = [e for e in eval_set if e["should_trigger"]] + no_trigger = [e for e in eval_set if not e["should_trigger"]] + + # Shuffle each group + random.shuffle(trigger) + random.shuffle(no_trigger) + + # Calculate split points + n_trigger_test = max(1, int(len(trigger) * holdout)) + n_no_trigger_test = max(1, int(len(no_trigger) * holdout)) + + # Split + test_set = trigger[:n_trigger_test] + no_trigger[:n_no_trigger_test] + train_set = trigger[n_trigger_test:] + no_trigger[n_no_trigger_test:] + + return train_set, test_set + + +def run_loop( + eval_set: list[dict], + skill_path: Path, + description_override: str | None, + num_workers: int, + timeout: int, + max_iterations: int, + runs_per_query: int, + trigger_threshold: float, + holdout: float, + model: str, + verbose: bool, + live_report_path: Path | None = None, + log_dir: Path | None = None, +) -> dict: + """Run the eval + improvement loop.""" + project_root = find_project_root() + name, original_description, content = parse_skill_md(skill_path) + current_description = description_override or original_description + + # Split into train/test if holdout > 0 + if holdout > 0: + train_set, test_set = split_eval_set(eval_set, holdout) + if verbose: + print(f"Split: {len(train_set)} train, {len(test_set)} test (holdout={holdout})", file=sys.stderr) + else: + train_set = eval_set + test_set = [] + + client = anthropic.Anthropic() + history = [] + exit_reason = "unknown" + + for iteration in range(1, max_iterations + 1): + if verbose: + print(f"\n{'='*60}", file=sys.stderr) + print(f"Iteration {iteration}/{max_iterations}", file=sys.stderr) + print(f"Description: {current_description}", file=sys.stderr) + print(f"{'='*60}", file=sys.stderr) + + # Evaluate train + test together in one batch for parallelism + all_queries = train_set + test_set + t0 = time.time() + all_results = run_eval( + eval_set=all_queries, + skill_name=name, + 
description=current_description, + num_workers=num_workers, + timeout=timeout, + project_root=project_root, + runs_per_query=runs_per_query, + trigger_threshold=trigger_threshold, + model=model, + ) + eval_elapsed = time.time() - t0 + + # Split results back into train/test by matching queries + train_queries_set = {q["query"] for q in train_set} + train_result_list = [r for r in all_results["results"] if r["query"] in train_queries_set] + test_result_list = [r for r in all_results["results"] if r["query"] not in train_queries_set] + + train_passed = sum(1 for r in train_result_list if r["pass"]) + train_total = len(train_result_list) + train_summary = {"passed": train_passed, "failed": train_total - train_passed, "total": train_total} + train_results = {"results": train_result_list, "summary": train_summary} + + if test_set: + test_passed = sum(1 for r in test_result_list if r["pass"]) + test_total = len(test_result_list) + test_summary = {"passed": test_passed, "failed": test_total - test_passed, "total": test_total} + test_results = {"results": test_result_list, "summary": test_summary} + else: + test_results = None + test_summary = None + + history.append({ + "iteration": iteration, + "description": current_description, + "train_passed": train_summary["passed"], + "train_failed": train_summary["failed"], + "train_total": train_summary["total"], + "train_results": train_results["results"], + "test_passed": test_summary["passed"] if test_summary else None, + "test_failed": test_summary["failed"] if test_summary else None, + "test_total": test_summary["total"] if test_summary else None, + "test_results": test_results["results"] if test_results else None, + # For backward compat with report generator + "passed": train_summary["passed"], + "failed": train_summary["failed"], + "total": train_summary["total"], + "results": train_results["results"], + }) + + # Write live report if path provided + if live_report_path: + partial_output = { + "original_description": original_description, + "best_description": current_description, + "best_score": "in progress", + "iterations_run": len(history), + "holdout": holdout, + "train_size": len(train_set), + "test_size": len(test_set), + "history": history, + } + live_report_path.write_text(generate_html(partial_output, auto_refresh=True, skill_name=name)) + + if verbose: + def print_eval_stats(label, results, elapsed): + pos = [r for r in results if r["should_trigger"]] + neg = [r for r in results if not r["should_trigger"]] + tp = sum(r["triggers"] for r in pos) + pos_runs = sum(r["runs"] for r in pos) + fn = pos_runs - tp + fp = sum(r["triggers"] for r in neg) + neg_runs = sum(r["runs"] for r in neg) + tn = neg_runs - fp + total = tp + tn + fp + fn + precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0 + recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0 + accuracy = (tp + tn) / total if total > 0 else 0.0 + print(f"{label}: {tp+tn}/{total} correct, precision={precision:.0%} recall={recall:.0%} accuracy={accuracy:.0%} ({elapsed:.1f}s)", file=sys.stderr) + for r in results: + status = "PASS" if r["pass"] else "FAIL" + rate_str = f"{r['triggers']}/{r['runs']}" + print(f" [{status}] rate={rate_str} expected={r['should_trigger']}: {r['query'][:60]}", file=sys.stderr) + + print_eval_stats("Train", train_results["results"], eval_elapsed) + if test_summary: + print_eval_stats("Test ", test_results["results"], 0) + + if train_summary["failed"] == 0: + exit_reason = f"all_passed (iteration {iteration})" + if verbose: + print(f"\nAll train queries passed on 
iteration {iteration}!", file=sys.stderr) + break + + if iteration == max_iterations: + exit_reason = f"max_iterations ({max_iterations})" + if verbose: + print(f"\nMax iterations reached ({max_iterations}).", file=sys.stderr) + break + + # Improve the description based on train results + if verbose: + print(f"\nImproving description...", file=sys.stderr) + + t0 = time.time() + # Strip test scores from history so improvement model can't see them + blinded_history = [ + {k: v for k, v in h.items() if not k.startswith("test_")} + for h in history + ] + new_description = improve_description( + client=client, + skill_name=name, + skill_content=content, + current_description=current_description, + eval_results=train_results, + history=blinded_history, + model=model, + log_dir=log_dir, + iteration=iteration, + ) + improve_elapsed = time.time() - t0 + + if verbose: + print(f"Proposed ({improve_elapsed:.1f}s): {new_description}", file=sys.stderr) + + current_description = new_description + + # Find the best iteration by TEST score (or train if no test set) + if test_set: + best = max(history, key=lambda h: h["test_passed"] or 0) + best_score = f"{best['test_passed']}/{best['test_total']}" + else: + best = max(history, key=lambda h: h["train_passed"]) + best_score = f"{best['train_passed']}/{best['train_total']}" + + if verbose: + print(f"\nExit reason: {exit_reason}", file=sys.stderr) + print(f"Best score: {best_score} (iteration {best['iteration']})", file=sys.stderr) + + return { + "exit_reason": exit_reason, + "original_description": original_description, + "best_description": best["description"], + "best_score": best_score, + "best_train_score": f"{best['train_passed']}/{best['train_total']}", + "best_test_score": f"{best['test_passed']}/{best['test_total']}" if test_set else None, + "final_description": current_description, + "iterations_run": len(history), + "holdout": holdout, + "train_size": len(train_set), + "test_size": len(test_set), + "history": history, + } + + +def main(): + parser = argparse.ArgumentParser(description="Run eval + improve loop") + parser.add_argument("--eval-set", required=True, help="Path to eval set JSON file") + parser.add_argument("--skill-path", required=True, help="Path to skill directory") + parser.add_argument("--description", default=None, help="Override starting description") + parser.add_argument("--num-workers", type=int, default=10, help="Number of parallel workers") + parser.add_argument("--timeout", type=int, default=30, help="Timeout per query in seconds") + parser.add_argument("--max-iterations", type=int, default=5, help="Max improvement iterations") + parser.add_argument("--runs-per-query", type=int, default=3, help="Number of runs per query") + parser.add_argument("--trigger-threshold", type=float, default=0.5, help="Trigger rate threshold") + parser.add_argument("--holdout", type=float, default=0.4, help="Fraction of eval set to hold out for testing (0 to disable)") + parser.add_argument("--model", required=True, help="Model for improvement") + parser.add_argument("--verbose", action="store_true", help="Print progress to stderr") + parser.add_argument("--report", default="auto", help="Generate HTML report at this path (default: 'auto' for temp file, 'none' to disable)") + parser.add_argument("--results-dir", default=None, help="Save all outputs (results.json, report.html, log.txt) to a timestamped subdirectory here") + args = parser.parse_args() + + eval_set = json.loads(Path(args.eval_set).read_text()) + skill_path = Path(args.skill_path) + + if 
not (skill_path / "SKILL.md").exists():
+        print(f"Error: No SKILL.md found at {skill_path}", file=sys.stderr)
+        sys.exit(1)
+
+    name, _, _ = parse_skill_md(skill_path)
+
+    # Set up live report path
+    if args.report != "none":
+        if args.report == "auto":
+            timestamp = time.strftime("%Y%m%d_%H%M%S")
+            live_report_path = Path(tempfile.gettempdir()) / f"skill_description_report_{skill_path.name}_{timestamp}.html"
+        else:
+            live_report_path = Path(args.report)
+        # Open the report immediately so the user can watch
+        live_report_path.write_text("<html><head><meta http-equiv='refresh' content='2'></head><body><h2>Starting optimization loop...</h2></body></html>
") + webbrowser.open(str(live_report_path)) + else: + live_report_path = None + + # Determine output directory (create before run_loop so logs can be written) + if args.results_dir: + timestamp = time.strftime("%Y-%m-%d_%H%M%S") + results_dir = Path(args.results_dir) / timestamp + results_dir.mkdir(parents=True, exist_ok=True) + else: + results_dir = None + + log_dir = results_dir / "logs" if results_dir else None + + output = run_loop( + eval_set=eval_set, + skill_path=skill_path, + description_override=args.description, + num_workers=args.num_workers, + timeout=args.timeout, + max_iterations=args.max_iterations, + runs_per_query=args.runs_per_query, + trigger_threshold=args.trigger_threshold, + holdout=args.holdout, + model=args.model, + verbose=args.verbose, + live_report_path=live_report_path, + log_dir=log_dir, + ) + + # Save JSON output + json_output = json.dumps(output, indent=2) + print(json_output) + if results_dir: + (results_dir / "results.json").write_text(json_output) + + # Write final HTML report (without auto-refresh) + if live_report_path: + live_report_path.write_text(generate_html(output, auto_refresh=False, skill_name=name)) + print(f"\nReport: {live_report_path}", file=sys.stderr) + + if results_dir and live_report_path: + (results_dir / "report.html").write_text(generate_html(output, auto_refresh=False, skill_name=name)) + + if results_dir: + print(f"Results saved to: {results_dir}", file=sys.stderr) + + +if __name__ == "__main__": + main() diff --git a/.github/skills/skill-creator/scripts/utils.py b/.github/skills/skill-creator/scripts/utils.py new file mode 100644 index 0000000..51b6a07 --- /dev/null +++ b/.github/skills/skill-creator/scripts/utils.py @@ -0,0 +1,47 @@ +"""Shared utilities for skill-creator scripts.""" + +from pathlib import Path + + + +def parse_skill_md(skill_path: Path) -> tuple[str, str, str]: + """Parse a SKILL.md file, returning (name, description, full_content).""" + content = (skill_path / "SKILL.md").read_text() + lines = content.split("\n") + + if lines[0].strip() != "---": + raise ValueError("SKILL.md missing frontmatter (no opening ---)") + + end_idx = None + for i, line in enumerate(lines[1:], start=1): + if line.strip() == "---": + end_idx = i + break + + if end_idx is None: + raise ValueError("SKILL.md missing frontmatter (no closing ---)") + + name = "" + description = "" + frontmatter_lines = lines[1:end_idx] + i = 0 + while i < len(frontmatter_lines): + line = frontmatter_lines[i] + if line.startswith("name:"): + name = line[len("name:"):].strip().strip('"').strip("'") + elif line.startswith("description:"): + value = line[len("description:"):].strip() + # Handle YAML multiline indicators (>, |, >-, |-) + if value in (">", "|", ">-", "|-"): + continuation_lines: list[str] = [] + i += 1 + while i < len(frontmatter_lines) and (frontmatter_lines[i].startswith(" ") or frontmatter_lines[i].startswith("\t")): + continuation_lines.append(frontmatter_lines[i].strip()) + i += 1 + description = " ".join(continuation_lines) + continue + else: + description = value.strip('"').strip("'") + i += 1 + + return name, description, content