chore(workflows): Deploy autonomous workflow system #23

Open

blackboxprogramming wants to merge 1 commit into main from automation/autonomous-workflows

Conversation

@blackboxprogramming
Owner

Autonomous Workflow Deployment

Deploying the enhanced autonomous workflow system:

Workflows Added

  • autonomous-orchestrator.yml - Master coordinator (test, build, review, deploy)
  • autonomous-self-healer.yml - Auto-fixes failures
  • autonomous-cross-repo.yml - Cross-repo sync
  • autonomous-dependency-manager.yml - Smart dependency updates
  • autonomous-issue-manager.yml - AI-powered issue triage

Features

  • Auto-detects project type
  • AI-powered code review
  • Self-healing on failures
  • Auto-merge when tests pass
  • Memory persistence between runs

Deployed by BlackRoad Automation

Deployed workflows:
- autonomous-orchestrator.yml
- autonomous-self-healer.yml
- autonomous-cross-repo.yml
- autonomous-dependency-manager.yml
- autonomous-issue-manager.yml

Co-Authored-By: BlackRoad Bot <bot@blackroad.ai>
Copilot AI review requested due to automatic review settings February 6, 2026 00:57

@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ffe5bb4f8c

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

workflow_run:
workflows: ["Autonomous Orchestrator", "CI", "Test"]
types: [completed]
conclusions: [failure]


P1: Remove unsupported workflow_run conclusions filter

workflow_run does not accept a conclusions key, so this trigger is not actually filtered to failures; in this file there is no job-level if guard on github.event.workflow_run.conclusion, which means the self-healer can run after successful upstream workflows and attempt unnecessary fixes/commits. This should be moved to a supported condition (e.g., job if) to keep the workflow scoped to real failures.
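The supported pattern is to trigger on completion and gate at the job level; a minimal sketch (job name and step are placeholders):

```yaml
on:
  workflow_run:
    workflows: ["Autonomous Orchestrator", "CI", "Test"]
    types: [completed]

jobs:
  self-heal:
    # workflow_run has no `conclusions` filter, so scope to failures here instead
    if: github.event.workflow_run.conclusion == 'failure'
    runs-on: ubuntu-latest
    steps:
      - run: echo "Upstream run ${{ github.event.workflow_run.id }} failed"
```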


# Check for common vulnerabilities
echo "Checking for common vulnerabilities..."
if [ -f "package.json" ]; then
npm audit --json 2>/dev/null | jq -r '.metadata.vulnerabilities | to_entries[] | select(.value > 0)' && VULNS=$((VULNS + 1))


P1: Count npm vulnerabilities only when audit finds any

This jq command exits successfully even when it prints nothing (e.g., all vulnerability counts are zero), so the && VULNS=$((VULNS + 1)) branch runs for any repository that has a package.json. As a result, steps.scan.outputs.vulns becomes non-zero on clean projects and the later auto-fix step is triggered unnecessarily on every run.
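One fix is to capture the jq output and only increment when it is non-empty, since jq exits 0 even when `select` matches nothing. A sketch of the guard logic — the audit pipeline is stubbed with fixed strings here, standing in for `npm audit --json | jq ...`:

```shell
#!/bin/sh
# An empty string means jq matched no vulnerability entries, so VULNS stays 0.
VULNS=0

FINDINGS=""                       # clean project: jq prints nothing but exits 0
if [ -n "$FINDINGS" ]; then
  VULNS=$((VULNS + 1))
fi
echo "clean: VULNS=$VULNS"

FINDINGS="high: 2"                # vulnerable project: jq printed a finding
if [ -n "$FINDINGS" ]; then
  VULNS=$((VULNS + 1))
fi
echo "dirty: VULNS=$VULNS"
```

With this guard, `steps.scan.outputs.vulns` stays zero on clean projects and the auto-fix step is not triggered spuriously.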


-H "Content-Type: application/json" \
-d '{
"title": "'"$TITLE"'",
"body": "'"$(echo "$BODY" | head -c 2000 | jq -Rs .)"'",


P1: Build valid JSON body for AI issue analysis request

The request body injects jq -Rs . output inside extra quotes, which produces invalid JSON (e.g., "body": ""text""), so the API call fails and falls back to {} every time. That means AI-derived labels/priority/assignee are never populated, breaking the intended triage behavior for newly opened issues.
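The usual fix is to let jq construct the entire JSON document rather than splicing its output into a hand-written string; a sketch under the assumption that jq is available (variable names taken from the snippet above):

```shell
#!/bin/sh
TITLE='Issue title with "quotes"'
BODY='body text
with newlines and "quotes"'

# jq -n with --arg escapes every value, so the payload is always valid JSON,
# unlike wrapping `jq -Rs .` output in extra shell quotes.
PAYLOAD=$(jq -n \
  --arg title "$TITLE" \
  --arg body "$(printf '%s' "$BODY" | head -c 2000)" \
  '{title: $title, body: $body}')
echo "$PAYLOAD"
```

The resulting `$PAYLOAD` can be passed directly to `curl -d "$PAYLOAD"`.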



Copilot AI left a comment


Pull request overview

This PR introduces a comprehensive autonomous workflow system designed to automate repository operations including testing, building, code review, deployment, dependency management, issue triage, self-healing, and cross-repository synchronization. The system consists of five interconnected GitHub Actions workflows that make decisions and modifications with minimal human oversight.

Changes:

  • Adds autonomous orchestrator for master coordination of repository operations
  • Implements self-healing workflow to automatically detect and fix common failures
  • Adds intelligent issue management with AI-powered triage and stale issue cleanup
  • Introduces automated dependency update management with PR creation
  • Provides cross-repository synchronization for workflows, configs, and dependencies

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 35 comments.

Summary per file:

  • .github/workflows/autonomous-orchestrator.yml - Master coordinator that handles testing, building, security scanning, code review, auto-merge, auto-deploy, and scheduled maintenance
  • .github/workflows/autonomous-self-healer.yml - Diagnoses workflow failures and attempts automated fixes for lint, dependency, security, and code issues
  • .github/workflows/autonomous-issue-manager.yml - Automatically triages new issues with labels, manages stale issues, creates failure reports, and generates statistics
  • .github/workflows/autonomous-dependency-manager.yml - Manages dependency updates for npm and Python projects with configurable update strategies and PR creation
  • .github/workflows/autonomous-cross-repo.yml - Synchronizes workflows, configurations, and dependencies across multiple repositories in the BlackRoad ecosystem


Comment on lines +314 to +316
if grep -rn --include="*.js" --include="*.ts" --include="*.py" --include="*.json" \
-E "(password|secret|api_key|token)\s*[:=]\s*['\"][^'\"]+['\"]" . 2>/dev/null | \
grep -v node_modules | grep -v ".git" | head -5; then

Copilot AI Feb 6, 2026


The security scanner uses basic regex patterns to detect hardcoded credentials, but the patterns are insufficient and could miss many real security issues. For example, the pattern (password|secret|api_key|token)\s*[:=]\s*['\"][^'\"]+['\"] won't catch credentials without quotes, environment variable assignments, or credentials split across lines. Additionally, the scanner only checks a limited set of file extensions and excludes node_modules but doesn't exclude other common locations where test fixtures or example credentials might legitimately appear (like test/, examples/, docs/).

Use a dedicated secret scanning tool like TruffleHog, GitGuardian, or GitHub's built-in secret scanning instead of regex patterns. If regex must be used, expand the patterns and add context-aware filtering to reduce false positives.

Suggested change
if grep -rn --include="*.js" --include="*.ts" --include="*.py" --include="*.json" \
-E "(password|secret|api_key|token)\s*[:=]\s*['\"][^'\"]+['\"]" . 2>/dev/null | \
grep -v node_modules | grep -v ".git" | head -5; then
if grep -RIn \
--exclude-dir={.git,node_modules,tests,test,docs,documentation,examples,example,fixtures} \
--include="*.js" --include="*.ts" --include="*.py" --include="*.json" \
--include="*.yml" --include="*.yaml" --include="*.env" --include=".env.*" \
-E '([Pp]ass(word)?|[Pp]asswd|[Ss]ecret|[Aa][Pp][Ii][_:-]?[Kk]ey|[Tt]oken|AUTH_TOKEN)\s*[:=]\s*([\"\']?).{8,}\2|^(export\s+)?[A-Z0-9_]*(PASS|SECRET|TOKEN|KEY)[A-Z0-9_]*\s*=\s*.+|AKIA[0-9A-Z]{16}' \
. 2>/dev/null | grep -viE 'dummy|example|sample|test' | head -5; then

Comment on lines +431 to +458
auto-merge:
name: "Auto-Merge"
needs: [analyze, test-and-build, security-scan, code-review]
if: |
github.event_name == 'pull_request' &&
needs.test-and-build.outputs.test_result != 'failed' &&
needs.test-and-build.outputs.build_result != 'failed' &&
needs.security-scan.outputs.severity != 'critical'
runs-on: ubuntu-latest

steps:
- name: Enable Auto-Merge
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo "Enabling auto-merge for PR #${{ github.event.pull_request.number }}"

# Try multiple merge strategies
gh pr merge ${{ github.event.pull_request.number }} \
--repo ${{ github.repository }} \
--auto \
--squash \
--delete-branch 2>/dev/null || \
gh pr merge ${{ github.event.pull_request.number }} \
--repo ${{ github.repository }} \
--squash 2>/dev/null || \
echo "Auto-merge queued - waiting for required checks"


Copilot AI Feb 6, 2026


The auto-merge job will automatically merge pull requests without human review based solely on automated checks. The conditions only check that tests/builds didn't fail and that there are no critical security issues, but they don't verify that tests actually ran (they could be skipped), don't check code quality, and don't ensure that the changes are actually correct or appropriate. The workflow uses --auto --squash --delete-branch which means it will merge and delete branches without any human oversight.

This poses significant risks:

  1. PRs with skipped tests will be auto-merged
  2. Changes that pass tests but introduce logical errors will be merged
  3. Malicious changes that don't trigger the basic security scan will be merged
  4. No human review of the actual changes

Auto-merge should require explicit opt-in per PR (via labels or comments) and should have stricter conditions, including verification that tests actually ran and passed, code review approval, and checks for breaking changes.
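An opt-in gate could look like the sketch below, keying on a PR label and requiring tests to have explicitly passed rather than merely not failed (the label name auto-merge-ok is an assumption, not from the workflow):

```yaml
auto-merge:
  name: "Auto-Merge"
  needs: [analyze, test-and-build, security-scan, code-review]
  if: |
    github.event_name == 'pull_request' &&
    contains(github.event.pull_request.labels.*.name, 'auto-merge-ok') &&
    needs.test-and-build.outputs.test_result == 'passed' &&
    needs.test-and-build.outputs.build_result == 'passed' &&
    needs.security-scan.outputs.severity != 'critical'
  runs-on: ubuntu-latest
```

Requiring `== 'passed'` instead of `!= 'failed'` means skipped tests no longer qualify a PR for merging.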

Comment on lines +316 to +349
- name: Create Issue for Manual Review
if: needs.diagnose.outputs.fixable == 'maybe'
uses: actions/github-script@v7
with:
script: |
const failureType = '${{ needs.diagnose.outputs.failure_type }}';
const details = `${{ needs.diagnose.outputs.failure_details }}`;

await github.rest.issues.create({
owner: context.repo.owner,
repo: context.repo.repo,
title: `[Self-Healer] ${failureType.replace(/_/g, ' ').replace(/\b\w/g, l => l.toUpperCase())} Needs Review`,
body: `## Automated Failure Analysis

**Failure Type:** ${failureType}
**Run ID:** ${{ github.event.workflow_run.id || 'N/A' }}

### Error Details
\`\`\`
${details.substring(0, 2000)}
\`\`\`

### AI Analysis
The self-healing agent attempted to analyze this issue but requires human review.

### Suggested Actions
1. Review the error logs above
2. Check recent changes that may have caused this
3. Apply appropriate fix

---
*Created by BlackRoad Self-Healing Agent*`,
labels: ['bug', 'automated', 'needs-triage']
});

Copilot AI Feb 6, 2026


The AI fix job makes an unauthenticated API call to an external service and then creates GitHub issues based on potentially untrusted data from that service. The details field from the diagnosis is inserted directly into the issue body without any sanitization or validation. If the failure logs contain malicious content or GitHub formatting exploits, they will be included verbatim in the issue. The substring(0, 2000) truncation is applied but doesn't prevent injection of malicious markdown or HTML.

Sanitize and validate all data from external sources before using it in GitHub API calls. Consider using GitHub's built-in issue templates instead of dynamically generated content. Add authentication to the external API calls to ensure responses come from a trusted source.
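A minimal sanitizer sketch for the details field (the helper name is hypothetical): strip backticks so log content cannot terminate the surrounding code fence, then truncate.

```javascript
// Hypothetical helper: neutralize fence breakouts before embedding untrusted
// log output inside a ``` block in a generated issue body.
function sanitizeDetails(details, maxLen = 2000) {
  return String(details)
    .replace(/`/g, "'")   // backticks could close the code fence early
    .replace(/\r/g, '')   // normalize line endings
    .substring(0, maxLen);
}

console.log(sanitizeDetails('error `rm -rf` at line 3'));
```

This only addresses markdown fence escape; validating the upstream API response shape is still needed separately.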

Comment on lines +199 to +272
stale-cleanup:
name: "Stale Cleanup"
if: github.event_name == 'schedule' || github.event.inputs.action == 'cleanup_stale'
runs-on: ubuntu-latest

steps:
- name: Find Stale Issues
uses: actions/github-script@v7
with:
script: |
const staleDays = parseInt('${{ env.STALE_DAYS }}');
const closeDays = parseInt('${{ env.CLOSE_DAYS }}');
const now = new Date();

// Get open issues
const issues = await github.rest.issues.listForRepo({
owner: context.repo.owner,
repo: context.repo.repo,
state: 'open',
per_page: 100
});

for (const issue of issues.data) {
// Skip PRs
if (issue.pull_request) continue;

const updatedAt = new Date(issue.updated_at);
const daysSinceUpdate = Math.floor((now - updatedAt) / (1000 * 60 * 60 * 24));

const hasStaleLabel = issue.labels.some(l => l.name === 'stale');
const isProtected = issue.labels.some(l =>
['pinned', 'security', 'priority:high', 'in-progress'].includes(l.name)
);

if (isProtected) continue;

// Already marked stale - check if should close
if (hasStaleLabel && daysSinceUpdate >= closeDays) {
await github.rest.issues.update({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
state: 'closed',
state_reason: 'not_planned'
});

await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
body: `This issue has been automatically closed due to inactivity.\n\nIf this is still relevant, please reopen it with additional context.\n\n---\n*Closed by BlackRoad Autonomous Agent*`
});

console.log(`Closed stale issue #${issue.number}`);
}
// Mark as stale
else if (!hasStaleLabel && daysSinceUpdate >= staleDays) {
await github.rest.issues.addLabels({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
labels: ['stale']
});

await github.rest.issues.createComment({
owner: context.repo.owner,
repo: context.repo.repo,
issue_number: issue.number,
body: `This issue has been automatically marked as **stale** because it has not had recent activity.\n\nIt will be closed in ${closeDays} days if no further activity occurs.\n\n---\n*Marked by BlackRoad Autonomous Agent*`
});

console.log(`Marked issue #${issue.number} as stale`);
}
}

Copilot AI Feb 6, 2026


The stale cleanup job automatically closes issues after they've been marked stale for the configured number of days, but it doesn't check if the issue has unread comments, pending actions, or is actively being worked on. An issue could be closed while someone is writing a response or has just added new information. The workflow uses state_reason: 'not_planned' for all closed stale issues, which is inaccurate - some might be resolved, some might be waiting for information, etc.

Add checks to verify that there are no recent comments or activity before auto-closing. Consider using different close reasons based on issue state. Allow issues to be un-marked as stale if there's new activity rather than having a hard deadline.
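The activity check can be as simple as comparing the newest comment timestamp against the close threshold; a sketch (the field name and function are hypothetical, not the real GitHub API shape):

```javascript
// Hypothetical guard: an issue is only closeable when nothing has happened
// for `closeDays` days, counting the latest comment as activity.
function shouldCloseStale(lastActivityIso, closeDays, now = new Date()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const idleDays = (now - new Date(lastActivityIso)) / msPerDay;
  return idleDays >= closeDays;
}
```

In the workflow, `lastActivityIso` would come from the most recent of `issue.updated_at` and the newest comment's `created_at`, fetched via `issues.listComments`.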

echo "Synced workflows to ${{ matrix.repo }}"

- name: Create PR
if: github.event.inputs.dry_run != 'true'

Copilot AI Feb 6, 2026


The cross-repo workflow has a dry_run input parameter that defaults to true, but the condition checking for dry run uses github.event.inputs.dry_run != 'true'. For scheduled or push-triggered runs (not workflow_dispatch), github.event.inputs.dry_run will be empty/null, which != 'true', so the workflow will proceed with actual changes. This means the "safety" of dry_run being true by default only applies to manual runs, not automated ones.

Change the default behavior to be safe for automated runs. Use a condition like github.event.inputs.dry_run == 'false' so that changes only happen when explicitly set to false, or add a separate check to ensure automated runs don't make changes without explicit configuration.

Suggested change
if: github.event.inputs.dry_run != 'true'
if: github.event.inputs.dry_run == 'false'

Comment on lines +144 to +167
run: |
BRANCH="deps/npm-update-$(date +%Y%m%d)"
git config user.name "BlackRoad Dependency Bot"
git config user.email "deps@blackroad.ai"

# Check if branch already exists
if git ls-remote --exit-code origin "$BRANCH" 2>/dev/null; then
echo "Branch already exists, updating..."
git fetch origin "$BRANCH"
git checkout "$BRANCH"
git merge main --no-edit || true
else
git checkout -b "$BRANCH"
fi

git add package.json package-lock.json
git commit -m "chore(deps): Update npm dependencies

Update type: ${{ github.event.inputs.update_type || 'safe' }}
Tests: ${{ steps.test.outputs.result || 'not run' }}

Co-Authored-By: BlackRoad Bot <bot@blackroad.ai>" || true

git push -u origin "$BRANCH" --force

Copilot AI Feb 6, 2026


The workflow uses git push -u origin "$BRANCH" --force which force-pushes updates to the dependency branch. Force pushing can cause problems if multiple runs of this workflow overlap or if someone else has checked out the branch. Additionally, the logic attempts to merge main into an existing branch with git merge main --no-edit || true, which silently ignores merge conflicts with || true. If there are conflicts, the merge fails silently and the branch may be in an inconsistent state before being force-pushed.

Remove the --force flag from the push command, or check if the branch needs updating before force-pushing. Handle merge conflicts explicitly instead of suppressing them with || true. Consider using a unique branch name for each update cycle to avoid collisions.
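A sketch of both mitigations, giving each run its own branch and pushing with --force-with-lease (the push is shown as a comment since it needs a remote; GITHUB_RUN_ID is provided by Actions and stubbed here):

```shell
#!/bin/sh
# A run-scoped branch name avoids collisions between overlapping workflow runs.
GITHUB_RUN_ID="${GITHUB_RUN_ID:-12345}"   # stub when running outside Actions
BRANCH="deps/npm-update-$(date +%Y%m%d)-${GITHUB_RUN_ID}"
echo "$BRANCH"

# Then push without clobbering commits this run has not seen:
#   git push -u origin "$BRANCH" --force-with-lease
```

Unlike `--force`, `--force-with-lease` refuses the push if the remote branch moved since it was last fetched, so a concurrent run cannot be silently overwritten.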

issue_comment:
types: [created]
schedule:
- cron: '0 */4 * * *' # Every 4 hours

Copilot AI Feb 6, 2026


The scheduled maintenance job runs every 4 hours (line 17: cron: '0 */4 * * *') and includes operations like updating dependencies, cleaning stale branches, and potentially creating health issues. Running dependency updates every 4 hours is excessive and could create significant noise with constant PRs and commits. Most projects update dependencies weekly or monthly, not multiple times per day.

Additionally, the maintenance job creates a "Repository Health Alert" issue every 4 hours if the health score is below 70, which could spam the repository with duplicate issues.

Change the schedule to a more reasonable frequency (daily or weekly) for maintenance tasks, or separate different maintenance tasks into different schedules (e.g., dependency updates weekly, health checks daily, stale cleanup daily).
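One way to split cadences within a single workflow is multiple cron entries dispatched on github.event.schedule; a sketch with placeholder times:

```yaml
on:
  schedule:
    - cron: '0 6 * * 1'   # weekly, Monday 06:00 UTC: dependency updates
    - cron: '0 3 * * *'   # daily, 03:00 UTC: health check and stale cleanup

jobs:
  deps:
    if: github.event.schedule == '0 6 * * 1'
    runs-on: ubuntu-latest
    steps:
      - run: echo "weekly dependency pass"
  health:
    if: github.event.schedule == '0 3 * * *'
    runs-on: ubuntu-latest
    steps:
      - run: echo "daily health pass"
```

The health-alert job would additionally need to search for an existing open alert issue before creating another, to avoid duplicates.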

workflow_run:
workflows: ["Autonomous Orchestrator", "CI", "Test"]
types: [completed]
conclusions: [failure]

Copilot AI Feb 6, 2026


The self-healer workflow listens for workflow_run events with conclusion "failure", but the conclusions field is not a valid workflow_run trigger filter. The correct field name is conclusion (singular), not conclusions (plural). This means the workflow will never be triggered by failed workflow runs as intended.

Change conclusions: [failure] to conclusion: failure on line 10 to fix the trigger configuration.

Suggested change
conclusions: [failure]
conclusion: failure

Comment on lines +209 to +231
- name: Update Dependencies
id: update
run: |
pip install pip-tools safety 2>/dev/null || true

if [ -f "requirements.txt" ]; then
# Backup original
cp requirements.txt requirements.txt.bak

# Update all packages
pip install --upgrade $(cat requirements.txt | grep -v "^#" | cut -d'=' -f1 | tr '\n' ' ') 2>/dev/null || true

# Regenerate requirements with updated versions
pip freeze > requirements.txt.new

# Check for changes
if ! diff -q requirements.txt requirements.txt.new > /dev/null 2>&1; then
mv requirements.txt.new requirements.txt
echo "changes=true" >> $GITHUB_OUTPUT
else
echo "changes=false" >> $GITHUB_OUTPUT
fi
fi

Copilot AI Feb 6, 2026


The Python dependency update logic is fundamentally flawed. It tries to upgrade all packages by parsing requirements.txt, extracting package names by removing version pins with cut -d'=' -f1, and then installing the latest versions. However, this approach:

  1. Doesn't preserve important version constraints (like package>=1.0,<2.0)
  2. Uses pip freeze to generate a new requirements.txt, which includes ALL installed packages (including transitive dependencies), dramatically expanding the requirements file
  3. Doesn't account for packages installed from git repos or with extras (e.g., package[extra])
  4. The || true suppresses all errors, so if installations fail, the workflow continues silently

This will likely break Python projects. Instead, use tools like pip-tools (pip-compile --upgrade) or poetry to manage updates properly while preserving dependency constraints.
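The cut -d'=' -f1 failure mode is easy to demonstrate: on a pinned range it leaves a trailing comparison operator, whereas stripping from the first specifier character recovers the bare name (a sketch, not a full PEP 508 parser):

```shell
#!/bin/sh
LINE='requests>=2.0,<3.0'

# cut splits on '=' and keeps everything before the first one:
BROKEN=$(printf '%s' "$LINE" | cut -d'=' -f1)        # yields 'requests>', not a valid name
# Stripping from the first comparison operator keeps just the package name:
NAME=$(printf '%s' "$LINE" | sed -E 's/[<>=!~;[].*//')

echo "cut gives: $BROKEN"
echo "sed gives: $NAME"
```

Even with the name extracted correctly, regenerating the file via pip freeze would still lose the original constraints, which is why tools like pip-compile --upgrade are the safer route.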

Comment on lines +248 to +290

if [ "${{ needs.analyze.outputs.has_tests }}" = "true" ]; then
case "${{ needs.analyze.outputs.project_type }}" in
nodejs)
npm test 2>&1 && RESULT="passed" || RESULT="failed"
;;
python)
pytest -v 2>&1 && RESULT="passed" || python -m unittest discover 2>&1 && RESULT="passed" || RESULT="failed"
;;
go)
go test ./... 2>&1 && RESULT="passed" || RESULT="failed"
;;
rust)
cargo test 2>&1 && RESULT="passed" || RESULT="failed"
;;
esac
fi

echo "result=$RESULT" >> $GITHUB_OUTPUT
echo "Test result: $RESULT"

- name: Run Build
id: build
continue-on-error: true
run: |
RESULT="skipped"

if [ "${{ needs.analyze.outputs.has_build }}" = "true" ]; then
case "${{ needs.analyze.outputs.project_type }}" in
nodejs)
npm run build 2>&1 && RESULT="passed" || RESULT="failed"
;;
rust)
cargo build --release 2>&1 && RESULT="passed" || RESULT="failed"
;;
cloudflare-worker)
npx wrangler build 2>/dev/null && RESULT="passed" || RESULT="skipped"
;;
esac
fi

echo "result=$RESULT" >> $GITHUB_OUTPUT
echo "Build result: $RESULT"

Copilot AI Feb 6, 2026


The test execution uses short-circuit operators that can mask test failures. For example: npm test 2>&1 && RESULT="passed" || RESULT="failed" will set RESULT to "failed" even if npm test doesn't exist. Similarly, pytest -v 2>&1 && RESULT="passed" || python -m unittest discover 2>&1 && RESULT="passed" || RESULT="failed" has confusing logic where if pytest fails but unittest passes, RESULT will be "passed" on the last command, but the overall exit code depends on the last command.

The Python test command chain is particularly problematic:

  • If pytest fails (exit 1), the OR operator tries unittest
  • If unittest passes (exit 0), RESULT="passed" executes and succeeds
  • But the overall exit code might still be from pytest failing

This creates ambiguity in test results. Run each test framework separately with explicit result tracking instead of using chained OR operators.

Suggested change
if [ "${{ needs.analyze.outputs.has_tests }}" = "true" ]; then
case "${{ needs.analyze.outputs.project_type }}" in
nodejs)
npm test 2>&1 && RESULT="passed" || RESULT="failed"
;;
python)
pytest -v 2>&1 && RESULT="passed" || python -m unittest discover 2>&1 && RESULT="passed" || RESULT="failed"
;;
go)
go test ./... 2>&1 && RESULT="passed" || RESULT="failed"
;;
rust)
cargo test 2>&1 && RESULT="passed" || RESULT="failed"
;;
esac
fi
echo "result=$RESULT" >> $GITHUB_OUTPUT
echo "Test result: $RESULT"
- name: Run Build
id: build
continue-on-error: true
run: |
RESULT="skipped"
if [ "${{ needs.analyze.outputs.has_build }}" = "true" ]; then
case "${{ needs.analyze.outputs.project_type }}" in
nodejs)
npm run build 2>&1 && RESULT="passed" || RESULT="failed"
;;
rust)
cargo build --release 2>&1 && RESULT="passed" || RESULT="failed"
;;
cloudflare-worker)
npx wrangler build 2>/dev/null && RESULT="passed" || RESULT="skipped"
;;
esac
fi
echo "result=$RESULT" >> $GITHUB_OUTPUT
echo "Build result: $RESULT"
STATUS=0
if [ "${{ needs.analyze.outputs.has_tests }}" = "true" ]; then
case "${{ needs.analyze.outputs.project_type }}" in
nodejs)
if npm test 2>&1; then
RESULT="passed"
else
RESULT="failed"
STATUS=1
fi
;;
python)
PYTEST_STATUS=1
UNITTEST_STATUS=1
# Try pytest first, if available
if command -v pytest >/dev/null 2>&1; then
if pytest -v 2>&1; then
PYTEST_STATUS=0
else
PYTEST_STATUS=$?
fi
fi
# If pytest was not successful, fall back to unittest
if [ "$PYTEST_STATUS" -ne 0 ]; then
if python -m unittest discover 2>&1; then
UNITTEST_STATUS=0
else
UNITTEST_STATUS=$?
fi
fi
if [ "$PYTEST_STATUS" -eq 0 ] || [ "$UNITTEST_STATUS" -eq 0 ]; then
RESULT="passed"
else
RESULT="failed"
STATUS=1
fi
;;
go)
if go test ./... 2>&1; then
RESULT="passed"
else
RESULT="failed"
STATUS=1
fi
;;
rust)
if cargo test 2>&1; then
RESULT="passed"
else
RESULT="failed"
STATUS=1
fi
;;
esac
fi
echo "result=$RESULT" >> $GITHUB_OUTPUT
echo "Test result: $RESULT"
exit $STATUS
- name: Run Build
id: build
continue-on-error: true
run: |
RESULT="skipped"
STATUS=0
if [ "${{ needs.analyze.outputs.has_build }}" = "true" ]; then
case "${{ needs.analyze.outputs.project_type }}" in
nodejs)
if npm run build 2>&1; then
RESULT="passed"
else
RESULT="failed"
STATUS=1
fi
;;
rust)
if cargo build --release 2>&1; then
RESULT="passed"
else
RESULT="failed"
STATUS=1
fi
;;
cloudflare-worker)
if npx wrangler build 2>/dev/null; then
RESULT="passed"
else
RESULT="skipped"
fi
;;
esac
fi
echo "result=$RESULT" >> $GITHUB_OUTPUT
echo "Build result: $RESULT"
exit $STATUS
