5 changes: 5 additions & 0 deletions .github/instructions/generic.instructions.md
@@ -43,3 +43,8 @@ Provide project context and coding guidelines that AI should follow when generating code

- When using `getConfiguration().inspect()`, always pass a scope/Uri to `getConfiguration(section, scope)` — otherwise `workspaceFolderValue` will be `undefined` because VS Code doesn't know which folder to inspect (1)
- **path.normalize() vs path.resolve()**: On Windows, `path.normalize('\test')` keeps it as `\test`, but `path.resolve('\test')` adds the current drive → `C:\test`. When comparing paths, use `path.resolve()` on BOTH sides or they won't match (2)
- **Path comparisons vs user display**: Use `normalizePath()` from `pathUtils.ts` when comparing paths or using them as map keys, but preserve original paths for user-facing output like settings, logs, and UI (1)
- **CI test jobs need webpack build**: Smoke/E2E/integration tests run in a real VS Code instance against `dist/extension.js` (built by webpack). CI jobs must run `npm run compile` (webpack), not just `npm run compile-tests` (tsc). Without webpack, the extension code isn't built and tests run against stale/missing code (1)
- **Use inspect() for setting checks with defaults from other extensions**: When checking `python.useEnvironmentsExtension`, use `config.inspect()` and only check explicit user values (`globalValue`, `workspaceValue`, `workspaceFolderValue`). Ignore `defaultValue` as it may come from other extensions' package.json even when not installed (1)
- **API is flat, not nested**: Use `api.getEnvironments()`, NOT `api.environments.getEnvironments()`. The extension exports a flat API object (1)
- **PythonEnvironment has `envId`, not `id`**: The environment identifier is `env.envId` (a `PythonEnvironmentId` object with `id` and `managerId`), not a direct `id` property (1)
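The `inspect()` learnings above can be sketched as a small helper. This is an illustration, not code from the repository; the interface mirrors the shape returned by VS Code's `WorkspaceConfiguration.inspect()`.

```typescript
// Shape of the result of vscode WorkspaceConfiguration.inspect()
// (only the fields relevant to the learning above).
interface InspectResult<T> {
    defaultValue?: T;
    globalValue?: T;
    workspaceValue?: T;
    workspaceFolderValue?: T;
}

// Returns the user's explicit setting, ignoring defaultValue, which may
// come from another extension's package.json even when that extension
// isn't installed. Precedence: workspace folder > workspace > global.
function explicitValue<T>(info: InspectResult<T> | undefined): T | undefined {
    if (!info) {
        return undefined;
    }
    return info.workspaceFolderValue ?? info.workspaceValue ?? info.globalValue;
}
```

In the extension, this would be fed from `getConfiguration('python', scope).inspect('useEnvironmentsExtension')` — passing a scope so `workspaceFolderValue` is populated.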
89 changes: 89 additions & 0 deletions .github/skills/debug-failing-test/SKILL.md
@@ -0,0 +1,89 @@
---
name: debug-failing-test
description: Debug a failing test using an iterative logging approach, then clean up and document the learning.
---

Debug a failing unit test by iteratively adding verbose logging, running the test, and analyzing the output until the root cause is found and fixed.

## Workflow

### Phase 1: Initial Assessment

1. **Run the failing test** to capture the current error message and stack trace
2. **Read the test file** to understand what is being tested
3. **Read the source file** being tested to understand the expected behavior
4. **Identify the assertion that fails** and what values are involved

### Phase 2: Iterative Debugging Loop

Repeat until the root cause is understood:

1. **Add verbose logging** around the suspicious code:
- Use `console.log('[DEBUG]', ...)` with descriptive labels
- Log input values, intermediate states, and return values
- Log before/after key operations
- Add timestamps if timing might be relevant

2. **Run the test** and capture output

3. **Assess the logging output:**
- What values are unexpected?
- Where does the behavior diverge from expectations?
- What additional logging would help narrow down the issue?

4. **Decide next action:**
- If root cause is clear → proceed to fix
- If more information needed → add more targeted logging and repeat

### Phase 3: Fix and Verify

1. **Implement the fix** based on findings
2. **Run the test** to verify it passes
3. **Run related tests** to ensure no regressions

### Phase 4: Clean Up

1. **Remove ALL debugging artifacts:**
- Delete all `console.log('[DEBUG]', ...)` statements added
- Remove any temporary variables or code added for debugging
- Ensure the code is in a clean, production-ready state

2. **Verify the test still passes** after cleanup

### Phase 5: Document and Learn

1. **Provide a summary** to the user (1-3 sentences):
- What was the bug?
- What was the fix?

2. **Record the learning** by following the learning instructions (if you have them):
- Extract a single, clear learning from this debugging session
- Add it to the "Learnings" section of the most relevant instruction file
- If a similar learning already exists, increment its counter instead

## Logging Conventions

When adding debug logging, use this format for easy identification and removal:

```typescript
console.log('[DEBUG] <location>:', <value>);
console.log('[DEBUG] before <operation>:', { input, state });
console.log('[DEBUG] after <operation>:', { result, state });
```

## Example Debug Session

```typescript
// Added logging example:
console.log('[DEBUG] getEnvironments input:', { workspaceFolder });
const envs = await manager.getEnvironments(workspaceFolder);
console.log('[DEBUG] getEnvironments result:', { count: envs.length, envs });
```

## Notes

- Prefer targeted logging over flooding the output
- Start with the failing assertion and work backwards
- Consider async timing issues, race conditions, and mock setup problems
- Check that mocks are returning expected values
- Verify test setup/teardown is correct
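As a quick sanity check on the last two notes, a hand-rolled stub can confirm both what a mock returns and that it was actually called. The names here are illustrative, not from the codebase:

```typescript
// Minimal hand-rolled stub: records call count and returns a fixed value,
// so a test can confirm the mock actually fires and returns what's expected.
function makeStub<T>(value: T): { fn: () => T; calls: () => number } {
    let count = 0;
    return {
        fn: () => {
            count += 1;
            return value;
        },
        calls: () => count,
    };
}

const stub = makeStub(['env-1', 'env-2']);
console.log('[DEBUG] stub returns:', stub.fn());
console.log('[DEBUG] call count:', stub.calls());
```

If the call count stays at zero, the code under test never reached the mock — a common root cause that no amount of logging inside the mock will reveal.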
125 changes: 125 additions & 0 deletions .github/skills/run-e2e-tests/SKILL.md
@@ -0,0 +1,125 @@
---
name: run-e2e-tests
description: Run E2E tests to verify complete user workflows like environment discovery, creation, and selection. Use this before releases or after major changes.
---

Run E2E (end-to-end) tests to verify complete user workflows work correctly.

## When to Use This Skill

- Before submitting a PR with significant changes
- After modifying environment discovery, creation, or selection logic
- Before a release to validate full workflows
- When user reports a workflow is broken

**Note:** Run smoke tests first. If smoke tests fail, E2E tests will also fail.

## Quick Reference

| Action | Command |
| ----------------- | -------------------------------------------------------------- |
| Run all E2E tests | `npm run compile && npm run compile-tests && npm run e2e-test` |
| Run specific test | `npm run e2e-test -- --grep "discovers"` |
| Debug in VS Code | Debug panel → "E2E Tests" → F5 |

## How E2E Tests Work

Unlike unit tests (mocked) and smoke tests (quick checks), E2E tests:

1. Launch a real VS Code instance with the extension
2. Exercise complete user workflows via the real API
3. Verify end-to-end behavior (discovery → selection → execution)

They take longer (1-3 minutes) but catch integration issues.

## Workflow

### Step 1: Compile and Run

```bash
npm run compile && npm run compile-tests && npm run e2e-test
```

### Step 2: Interpret Results

**Pass:**

```
E2E: Environment Discovery
✓ Can trigger environment refresh
✓ Discovers at least one environment
✓ Environments have required properties
✓ Can get global environments

4 passing (45s)
```

**Fail:** Check error message and see Debugging section.

## Debugging Failures

| Error | Cause | Fix |
| ---------------------------- | ---------------------- | ------------------------------------------- |
| `No environments discovered` | Python not installed | Install Python, verify it's on PATH |
| `Extension not found` | Build failed | Run `npm run compile` |
| `API not available` | Activation error | Debug with F5, check Debug Console |
| `Timeout exceeded` | Slow operation or hang | Increase timeout or check for blocking code |

For detailed debugging: Debug panel → "E2E Tests" → F5

## Prerequisites

E2E tests have system requirements:

- **Python installed** - At least one Python interpreter must be discoverable
- **Extension builds** - Run `npm run compile` before tests
- **CI needs webpack build** - In CI, run `npm run compile` (webpack), not just `npm run compile-tests` (tsc)

## Adding New E2E Tests

Create files in `src/test/e2e/` with pattern `*.e2e.test.ts`:

```typescript
import * as assert from 'assert';
import * as vscode from 'vscode';
import { waitForCondition } from '../testUtils';
import { ENVS_EXTENSION_ID } from '../constants';

suite('E2E: [Workflow Name]', function () {
    this.timeout(120_000); // 2 minutes

    let api: ExtensionApi; // the type of `extension.exports`; import it from the extension's API declarations

    suiteSetup(async function () {
        const extension = vscode.extensions.getExtension(ENVS_EXTENSION_ID);
        assert.ok(extension, 'Extension not found');
        if (!extension.isActive) await extension.activate();
        api = extension.exports;
    });

    test('[Test description]', async function () {
        // Use the real API (flat structure, not nested!)
        // api.getEnvironments(), not api.environments.getEnvironments()
        await waitForCondition(
            async () => (await api.getEnvironments('all')).length > 0,
            60_000,
            'No environments found',
        );
    });
});
```

## Test Files

| File | Purpose |
| ----------------------------------------------- | ------------------------------------ |
| `src/test/e2e/environmentDiscovery.e2e.test.ts` | Discovery workflow tests |
| `src/test/e2e/index.ts` | Test runner entry point |
| `src/test/testUtils.ts` | Utilities (`waitForCondition`, etc.) |
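A helper like `waitForCondition` from `src/test/testUtils.ts` can be sketched as follows. The signature is inferred from its usage above, and the poll interval is an assumption — treat this as an illustration, not the project's actual implementation:

```typescript
// Polls an async predicate until it returns true or the timeout elapses.
// Signature inferred from usage: waitForCondition(condition, timeoutMs, message).
async function waitForCondition(
    condition: () => Promise<boolean>,
    timeoutMs: number,
    failureMessage: string,
    pollMs = 100, // poll interval: an assumed default
): Promise<void> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (await condition()) {
            return;
        }
        await new Promise((resolve) => setTimeout(resolve, pollMs));
    }
    throw new Error(failureMessage);
}
```

Polling rather than awaiting a single promise keeps the test resilient to discovery finishing in multiple increments, while the timeout converts a hang into a readable failure message.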

## Notes

- E2E tests are slower than smoke tests (expect 1-3 minutes)
- They may create/modify files - cleanup happens in `suiteTeardown`
- First run downloads VS Code (~100MB, cached in `.vscode-test/`)
- For more details on E2E tests and how they compare to other test types, refer to the project's testing documentation.
112 changes: 112 additions & 0 deletions .github/skills/run-integration-tests/SKILL.md
@@ -0,0 +1,112 @@
---
name: run-integration-tests
description: Run integration tests to verify that extension components work together correctly. Use this after modifying component interactions or event handling.
---

Run integration tests to verify that multiple components (managers, API, settings) work together correctly.

## When to Use This Skill

- After modifying how components communicate (events, state sharing)
- After changing the API surface
- After modifying managers or their interactions
- When components seem out of sync (UI shows stale data, events not firing)

## Quick Reference

| Action | Command |
| ------------------------- | ---------------------------------------------------------------------- |
| Run all integration tests | `npm run compile && npm run compile-tests && npm run integration-test` |
| Run specific test | `npm run integration-test -- --grep "manager"` |
| Debug in VS Code | Debug panel → "Integration Tests" → F5 |

## How Integration Tests Work

Integration tests run in a real VS Code instance but focus on **component interactions**:

- Does the API reflect manager state?
- Do events fire when state changes?
- Do different scopes return appropriate data?

They're faster than E2E (which test full workflows) but more thorough than smoke tests.

## Workflow

### Step 1: Compile and Run

```bash
npm run compile && npm run compile-tests && npm run integration-test
```

### Step 2: Interpret Results

**Pass:**

```
Integration: Environment Manager + API
✓ API reflects manager state after refresh
✓ Different scopes return appropriate environments
✓ Environment objects have consistent structure

3 passing (25s)
```

**Fail:** Check error message and see Debugging section.

## Debugging Failures

| Error | Cause | Fix |
| ------------------- | --------------------------- | ------------------------------- |
| `API not available` | Extension activation failed | Check Debug Console |
| `Event not fired` | Event wiring issue | Check event registration |
| `State mismatch` | Components out of sync | Add logging, check update paths |
| `Timeout` | Async operation stuck | Check for deadlocks |

For detailed debugging: Debug panel → "Integration Tests" → F5

## Adding New Integration Tests

Create files in `src/test/integration/` with pattern `*.integration.test.ts`:

```typescript
import * as assert from 'assert';
import * as vscode from 'vscode';
import { waitForCondition, TestEventHandler } from '../testUtils';
import { ENVS_EXTENSION_ID } from '../constants';

suite('Integration: [Component A] + [Component B]', function () {
    this.timeout(120_000);

    let api: ExtensionApi; // the type of `extension.exports`; import it from the extension's API declarations

    suiteSetup(async function () {
        const extension = vscode.extensions.getExtension(ENVS_EXTENSION_ID);
        assert.ok(extension, 'Extension not found');
        if (!extension.isActive) await extension.activate();
        api = extension.exports;
    });

    test('[Interaction test]', async function () {
        // Exercise one component, then assert the other observes the change
    });
});
```
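A common interaction check is that an event fires when state changes. A framework-free sketch of that pattern — with illustrative names, not the project's actual `TestEventHandler` API — looks like this:

```typescript
// Collects payloads from a subscribe-style event source so a test can
// assert on what fired and in what order.
type Listener<T> = (payload: T) => void;

class EventCollector<T> {
    readonly received: T[] = [];
    subscribe(on: (listener: Listener<T>) => void): void {
        on((payload) => this.received.push(payload));
    }
}

// Toy emitter standing in for a manager's onDidChange-style event.
const listeners: Listener<string>[] = [];
const onDidChange = (listener: Listener<string>) => listeners.push(listener);

const collector = new EventCollector<string>();
collector.subscribe(onDidChange);
listeners.forEach((listener) => listener('refresh'));
// collector.received now holds the fired payloads for assertions
```

In a real integration test, the emitter would be the extension's own event (subscribed before triggering the state change), and the assertion would wait for the payload rather than assume it fires synchronously.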

## Test Files

| File | Purpose |
| -------------------------------------------------------- | -------------------------------------------------- |
| `src/test/integration/envManagerApi.integration.test.ts` | Manager + API tests |
| `src/test/integration/index.ts` | Test runner entry point |
| `src/test/testUtils.ts` | Utilities (`waitForCondition`, `TestEventHandler`) |

## Prerequisites

- **Extension builds** - Run `npm run compile` before tests
- **CI needs webpack build** - In CI, run `npm run compile` (webpack), not just `npm run compile-tests` (tsc)

## Notes

- Integration tests are faster than E2E (30s-2min vs 1-3min)
- Focus on testing component boundaries, not full user workflows
- First run downloads VS Code (~100MB, cached in `.vscode-test/`)