18 commits:

- c00f12e: Add EmissionSuppression storage + share zeroing (ppolewicz, Feb 10, 2026)
- 3a6c098: Add EmissionSuppressionOverride (root-only) (ppolewicz, Feb 10, 2026)
- b7982b8: Add voting extrinsic + on-epoch vote collection (ppolewicz, Feb 10, 2026)
- 24b53af: Add emission suppression tests (ppolewicz, Feb 10, 2026)
- f4fda36: Add KeepRootSellPressureOnSuppressedSubnets (ppolewicz, Feb 10, 2026)
- 6694b52: Address code review: events, root guard, coldkey swap conflict (ppolewicz, Feb 11, 2026)
- 11b0d5f: Add 8 additional test scenarios from code review (ppolewicz, Feb 11, 2026)
- 68748b9: Add /fix and /ship Claude Code skills for automated lint, test, and s… (ppolewicz, Feb 12, 2026)
- c3283ab: Address code review: R-1 fail-fast validation, R-2 weight fix, R-5 Ow… (ppolewicz, Feb 12, 2026)
- df8f8f4: Fix unused variable warnings in emission_suppression tests (ppolewicz, Feb 12, 2026)
- b9dbe95: Fix skills location (ppolewicz, Feb 12, 2026)
- bf0b574: Replace KeepRootSellPressure bool with RootSellPressureOnSuppressedSu… (ppolewicz, Feb 13, 2026)
- 0cc440f: cargo fmt (ppolewicz, Feb 13, 2026)
- 884ac91: Fix recycle mode tests: use add_dynamic_network, add tao_outflow trac… (ppolewicz, Feb 13, 2026)
- 6d8ca77: Address code review: R-1 swap fallback, R-3 cache suppression, R-8 co… (ppolewicz, Feb 13, 2026)
- 684169a: Add basic CLAUDE.md (ppolewicz, Feb 13, 2026)
- ce3a4e9: Merge remote-tracking branch 'origin/devnet-ready' into emission_supp… (ppolewicz, Feb 13, 2026)
- 136c1a4: Address code review: Disable mode recycles root alpha to validators, … (ppolewicz, Feb 17, 2026)
45 changes: 45 additions & 0 deletions .claude/skills/fix/SKILL.md
@@ -0,0 +1,45 @@
---
name: fix
description: Commit changes, run Rust fix tools, run tests, and amend with any fixes
---

# Fix Skill

Commit current changes with a descriptive message; run the Rust fix tools one by one, amending the commit after each tool that produced changes; then run unit tests and fix any failures.

## Steps

1. **Initial commit**: Stage all changes and create a commit with a descriptive message summarizing the changes (use `git add -A && git commit -m "<descriptive message>"`). If there are no changes to commit, skip the commit but still proceed with the fix tools below.

2. **Run each fix tool in order**. After EACH tool, check `git status --porcelain` for changes. If there are changes, stage them and amend the commit (`git add -A && git commit --amend --no-edit`).

The tools to run in order:

a. `cargo check --workspace`
b. `cargo clippy --fix --workspace --all-features --all-targets --allow-dirty`
c. `cargo fix --workspace --all-features --all-targets --allow-dirty`
d. `cargo fmt --all`

3. **Run unit tests in a Sonnet subagent**: Launch a Task subagent (subagent_type: `general-purpose`, model: `sonnet`) that runs:
```
cargo test -p pallet-subtensor --lib
```
The subagent must:
- Run the test command and capture full output.
- If all tests pass, report success and return.
- If any tests fail, analyze the failures: read the failing test code AND the source code it tests, determine the root cause, apply fixes using Edit tools, and re-run the tests to confirm the fix works.
- After fixing, if there are further failures, repeat (up to 3 fix-and-retest cycles).
- Return a summary of: which tests failed, what was fixed, and whether all tests pass now.

4. **Amend commit with test fixes**: After the subagent returns, if any code changes were made (check `git status --porcelain`), stage and amend the commit (`git add -A && git commit --amend --no-edit`). Then re-run the fix tools from step 2 (since code changes from test fixes may need formatting/clippy cleanup), amending after each if there are changes.

5. **Final output**: Show `git log --oneline -1` so the user can see the resulting commit.

## Important

- Use `--allow-dirty` flags on clippy --fix and cargo fix since the working tree may have unstaged changes between steps.
- If a fix tool fails (step 2/4), stop and report the error to the user rather than continuing.
- Do NOT run `scripts/fix_rust.sh` itself — run the individual commands listed above instead.
- Do NOT skip any step. Run all four fix tools even if earlier ones produced no changes.
- The test subagent must fix source code to make tests pass, NOT modify tests to make them pass (unless the test itself is clearly wrong).
- If the test subagent cannot fix all failures after 3 cycles, it must return the remaining failures so the main agent can report them to the user.
94 changes: 94 additions & 0 deletions .claude/skills/ship/SKILL.md
@@ -0,0 +1,94 @@
---
name: ship
description: "Ship the current branch: fix, push, create PR, watch CI, fix failures, code review"
---

# Ship Skill

Ship the current branch: fix, push, create PR if needed, watch CI, fix failures, and perform code review.

## Phase 1: Fix and Push

1. **Run `/fix`** — invoke the fix skill to commit, lint, and format.
2. **Push the branch** to origin: `git push -u origin HEAD`.
3. **Create a PR if none exists**:
- Check: `gh pr view --json number 2>/dev/null` — if it fails, no PR exists yet.
- If no PR exists, create one:
- Use `git log main..HEAD --oneline` to understand all commits on the branch.
- Read the changed files with `git diff main...HEAD --stat` to understand scope.
- Create the PR with `gh pr create --title "<concise title>" --body "<detailed markdown description>" --label "skip-cargo-audit"`.
- The description must include: a **Summary** section (bullet points of what changed and why), a **Changes** section (key files/modules affected), and a **Test plan** section.
- If a PR already exists, just note its number/URL.

## Phase 2: Watch CI and Fix Failures

4. **Poll CI status** in a loop:
- Run: `gh pr checks --json name,state,conclusion,link --watch --fail-fast 2>/dev/null || gh pr checks`
- If `--watch` is not available, poll manually every 90 seconds using `gh pr checks --json name,state,conclusion,link` until all checks have completed (no checks with state "pending" or conclusion "").
- **Ignore these known-flaky/irrelevant checks** — treat them as passing even if they fail:
- `validate-benchmarks` (benchmark CI — not relevant)
- Any `Contract E2E Tests` check that failed only due to a timeout (look for timeout in the failure link/logs)
- `cargo-audit` (we already added the skip label)
- Also ignore any checks related to `check-spec-version` and `e2e` tests — these are environment-dependent and not fixable from code.

5. **If there are real CI failures** (failures NOT in the ignore list above):
- For EACH distinct failing check, launch a **separate Task subagent** (subagent_type: `general-purpose`, model: `sonnet`) in parallel. Each subagent must:
- Fetch the failed check's logs: use `gh run view <run-id> --log-failed` or the check link to get failure details.
- Investigate the root cause by reading relevant source files.
- Return a **fix plan**: a description of what needs to change and in which files, with specific code snippets showing the fix.
- **Wait for all subagents** to return their fix plans.

6. **Aggregate and apply fixes**:
- Review all returned fix plans for conflicts or overlaps.
- Apply the fixes using Edit/Write tools.
- Run `/fix` again (invoke the fix skill) to commit, lint, and format the fixes.
- Push: `git push`.

7. **Re-check CI**: Go back to step 4 and poll again. Repeat the fix cycle up to **3 times**. If CI still fails after 3 rounds, report the remaining failures to the user and stop.

## Phase 3: Code Review

8. **Once CI is green** (or only ignored checks are failing), perform a thorough code review.

9. **Launch a single Opus subagent** (subagent_type: `general-purpose`, model: `opus`) for the review:
- It must get the full PR diff: `git diff main...HEAD`.
- It must read every changed file in full.
- It must produce a numbered list of **issues** found, where each issue has:
- A unique sequential ID (e.g., `R-1`, `R-2`, ...).
- **Severity**: critical / major / minor / nit.
- **File and line(s)** affected.
- **Description** of the problem.
- The review must check for: correctness, safety (no panics, no unchecked arithmetic, no indexing), edge cases, naming, documentation gaps, test coverage, and adherence to Substrate/Rust best practices.
- Return the full list of issues.

10. **For each issue**, launch TWO subagents **in parallel**:
- **Fix designer** (subagent_type: `general-purpose`, model: `sonnet`): Given the issue description and relevant code context, design a concrete proposed fix with exact code changes (old code -> new code). Return the fix as a structured plan.
- **Fix reviewer** (subagent_type: `general-purpose`, model: `opus`): Given the issue description, the relevant code context, and the proposed fix. Launch it only after the fix designer for that issue returns, so each reviewer runs after its designer, while reviewers for different issues run in parallel with each other. The reviewer must check:
- Does the fix actually solve the issue?
- Does it introduce new problems?
- Is it the simplest correct fix?
- Return: approved / rejected with reasoning.

Implementation note: For each issue, first launch the fix designer. Once the fix designer for that issue returns, launch the fix reviewer for that issue. But all issues should be processed in parallel — i.e., launch all fix designers at once, then as each designer returns, launch its corresponding reviewer. You may batch reviewers if designers finish close together.

11. **Report to user**: Present a formatted summary:
```
## Code Review Results

### R-1: <title> [severity]
**File**: path/to/file.rs:42
**Issue**: <description>
**Proposed fix**: <summary of fix>
**Review**: Approved / Rejected — <reasoning>

### R-2: ...
```
Ask the user which fixes to apply (all approved ones, specific ones by ID, or none).

## Important Rules

- Never force-push. Always use regular `git push`.
- All CI polling must have a maximum total wall-clock timeout of 45 minutes. If CI hasn't finished by then, report current status and stop waiting.
- When fetching CI logs, if `gh run view` output is very long, focus on the failed step output only.
- Do NOT apply code review fixes automatically — always present them for user approval first.
- Use HEREDOC syntax for PR body and commit messages to preserve formatting.
2 changes: 2 additions & 0 deletions CLAUDE.md
@@ -0,0 +1,2 @@
- never use slice indexing like `arr[n..]` or `arr[i]`; use `.get(n..)`, `.get(i)` etc. instead to avoid panics (clippy::indexing_slicing)
- never use `*`, `+`, `-`, `/` for arithmetic; use `.saturating_mul()`, `.saturating_add()`, `.saturating_sub()`, `.saturating_div()` or checked variants instead (clippy::arithmetic_side_effects)
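
A minimal sketch of both rules in plain Rust (the array and values are illustrative): `.get()` returns an `Option` instead of panicking on a bad index, and the `saturating_*` methods clamp at the numeric bounds instead of overflowing.

```rust
fn main() {
    let arr = [10u64, 20, 30];

    // Safe indexing: .get(i) returns Option<&T> instead of panicking.
    assert_eq!(arr.get(1).copied(), Some(20));
    assert!(arr.get(10).is_none()); // out of bounds: None, no panic

    // Safe slicing: .get(range) instead of arr[n..].
    let tail = arr.get(1..).unwrap_or(&[]);
    assert_eq!(tail, &[20, 30]);

    // Saturating arithmetic: clamps at the type's bounds instead of overflowing.
    assert_eq!(u64::MAX.saturating_add(1), u64::MAX);
    assert_eq!(5u64.saturating_sub(10), 0);
}
```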
5 changes: 5 additions & 0 deletions pallets/subtensor/src/coinbase/root.rs
@@ -300,6 +300,11 @@ impl<T: Config> Pallet<T> {
SubnetTaoFlow::<T>::remove(netuid);
SubnetEmaTaoFlow::<T>::remove(netuid);

// --- 12b. Emission suppression.
EmissionSuppression::<T>::remove(netuid);
EmissionSuppressionOverride::<T>::remove(netuid);
let _ = EmissionSuppressionVote::<T>::clear_prefix(netuid, u32::MAX, None);

// --- 13. Token / mechanism / registration toggles.
TokenSymbol::<T>::remove(netuid);
SubnetMechanism::<T>::remove(netuid);
42 changes: 38 additions & 4 deletions pallets/subtensor/src/coinbase/run_coinbase.rs
@@ -187,6 +187,7 @@ impl<T: Config> Pallet<T> {

// --- 3. Inject ALPHA for participants.
let cut_percent: U96F32 = Self::get_float_subnet_owner_cut();
let root_sell_pressure_mode = KeepRootSellPressureOnSuppressedSubnets::<T>::get();

for netuid_i in subnets_to_emit_to.iter() {
// Get alpha_out for this block.
@@ -212,6 +213,9 @@
let root_proportion = Self::root_proportion(*netuid_i);
log::debug!("root_proportion: {root_proportion:?}");

// Check if subnet emission is suppressed (compute once to avoid double storage read).
let is_suppressed = Self::is_subnet_emission_suppressed(*netuid_i);

// Get root alpha from root prop.
let root_alpha: U96F32 = root_proportion
.saturating_mul(alpha_out_i) // Total alpha emission per block remaining.
@@ -239,10 +243,37 @@
});

if root_sell_flag {
// Only accumulate root alpha divs if root sell is allowed.
PendingRootAlphaDivs::<T>::mutate(*netuid_i, |total| {
*total = total.saturating_add(tou64!(root_alpha).into());
});
// Determine disposition of root alpha based on suppression mode.
if is_suppressed
&& root_sell_pressure_mode == RootSellPressureOnSuppressedSubnetsMode::Disable
{
// Disable mode: recycle root alpha back to subnet validators.
PendingValidatorEmission::<T>::mutate(*netuid_i, |total| {
*total = total.saturating_add(tou64!(root_alpha).into());
});
} else if is_suppressed
&& root_sell_pressure_mode == RootSellPressureOnSuppressedSubnetsMode::Recycle
{
// Recycle mode: swap alpha → TAO via AMM, then burn the TAO.
let root_alpha_currency = AlphaCurrency::from(tou64!(root_alpha));
if let Ok(swap_result) = Self::swap_alpha_for_tao(
*netuid_i,
root_alpha_currency,
TaoCurrency::ZERO, // no price limit
true, // drop fees
) {
Self::record_tao_outflow(*netuid_i, swap_result.amount_paid_out);
Self::recycle_tao(swap_result.amount_paid_out);
} else {
// Swap failed: recycle alpha back to subnet to prevent loss.
Self::recycle_subnet_alpha(*netuid_i, root_alpha_currency);
}
} else {
// Enable mode (or non-suppressed subnet): accumulate for root validators.
PendingRootAlphaDivs::<T>::mutate(*netuid_i, |total| {
*total = total.saturating_add(tou64!(root_alpha).into());
});
}
} else {
// If we are not selling the root alpha, we should recycle it.
Self::recycle_subnet_alpha(*netuid_i, AlphaCurrency::from(tou64!(root_alpha)));
Expand All @@ -269,6 +300,9 @@ impl<T: Config> Pallet<T> {
if Self::should_run_epoch(netuid, current_block)
&& Self::is_epoch_input_state_consistent(netuid)
{
// Collect emission suppression votes for this subnet.
Self::collect_emission_suppression_votes(netuid);

// Restart counters.
BlocksSinceLastStep::<T>::insert(netuid, 0);
LastMechansimStepBlock::<T>::insert(netuid, current_block);
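
The branching over root alpha in the hunk above can be sketched as a standalone dispatch. The enum mirrors `RootSellPressureOnSuppressedSubnetsMode` from the diff; the returned bucket names are illustrative labels, not the pallet's storage items.

```rust
#[derive(PartialEq)]
enum RootSellPressureMode {
    Enable,
    Disable,
    Recycle,
}

// Returns where root alpha goes, given suppression state and the global mode.
// Labels are illustrative stand-ins for PendingValidatorEmission,
// the swap-then-burn path, and PendingRootAlphaDivs respectively.
fn root_alpha_disposition(is_suppressed: bool, mode: RootSellPressureMode) -> &'static str {
    if is_suppressed && mode == RootSellPressureMode::Disable {
        "subnet_validators" // recycled back to subnet validators
    } else if is_suppressed && mode == RootSellPressureMode::Recycle {
        "burned_tao" // alpha swapped to TAO via the AMM, TAO then recycled
    } else {
        "root_validators" // Enable mode, or subnet not suppressed
    }
}

fn main() {
    assert_eq!(root_alpha_disposition(true, RootSellPressureMode::Disable), "subnet_validators");
    assert_eq!(root_alpha_disposition(true, RootSellPressureMode::Recycle), "burned_tao");
    assert_eq!(root_alpha_disposition(true, RootSellPressureMode::Enable), "root_validators");
    assert_eq!(root_alpha_disposition(false, RootSellPressureMode::Recycle), "root_validators");
}
```

Note this sketch omits the outer `root_sell_flag` check from the diff, which recycles root alpha unconditionally when root selling is disallowed.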
81 changes: 80 additions & 1 deletion pallets/subtensor/src/coinbase/subnet_emissions.rs
@@ -4,6 +4,10 @@ use safe_math::FixedExt;
use substrate_fixed::transcendental::{exp, ln};
use substrate_fixed::types::{I32F32, I64F64, U64F64, U96F32};

/// Emission suppression threshold (50%). Subnets with suppression fraction
/// above this value are considered emission-suppressed.
const EMISSION_SUPPRESSION_THRESHOLD: f64 = 0.5;

impl<T: Config> Pallet<T> {
pub fn get_subnets_to_emit_to(subnets: &[NetUid]) -> Vec<NetUid> {
// Filter out root subnet.
@@ -27,7 +31,8 @@
block_emission: U96F32,
) -> BTreeMap<NetUid, U96F32> {
// Get subnet TAO emissions.
let shares = Self::get_shares(subnets_to_emit_to);
let mut shares = Self::get_shares(subnets_to_emit_to);
Self::apply_emission_suppression(&mut shares);
log::debug!("Subnet emission shares = {shares:?}");

shares
@@ -246,4 +251,78 @@
})
.collect::<BTreeMap<NetUid, U64F64>>()
}

/// Normalize shares so they sum to 1.0.
pub(crate) fn normalize_shares(shares: &mut BTreeMap<NetUid, U64F64>) {
let sum: U64F64 = shares
.values()
.copied()
.fold(U64F64::saturating_from_num(0), |acc, v| {
acc.saturating_add(v)
});
if sum > U64F64::saturating_from_num(0) {
for s in shares.values_mut() {
*s = s.safe_div(sum);
}
}
}

/// Check if a subnet is currently emission-suppressed, considering override first.
pub(crate) fn is_subnet_emission_suppressed(netuid: NetUid) -> bool {
match EmissionSuppressionOverride::<T>::get(netuid) {
Some(true) => true,
Some(false) => false,
None => {
EmissionSuppression::<T>::get(netuid)
> U64F64::saturating_from_num(EMISSION_SUPPRESSION_THRESHOLD)
}
}
}

/// Zero the emission share of any subnet whose suppression fraction exceeds 50%
/// (or is force-suppressed via override), then re-normalize the remaining shares.
pub(crate) fn apply_emission_suppression(shares: &mut BTreeMap<NetUid, U64F64>) {
let zero = U64F64::saturating_from_num(0);
let mut any_zeroed = false;
for (netuid, share) in shares.iter_mut() {
if Self::is_subnet_emission_suppressed(*netuid) {
*share = zero;
any_zeroed = true;
}
}
if any_zeroed {
Self::normalize_shares(shares);
}
}

/// Collect emission suppression votes from root validators for a subnet
/// and update the EmissionSuppression storage.
/// Called once per subnet per epoch. No-op for root subnet.
pub(crate) fn collect_emission_suppression_votes(netuid: NetUid) {
if netuid.is_root() {
return;
}
let root_n = SubnetworkN::<T>::get(NetUid::ROOT);
let mut suppress_stake = U64F64::saturating_from_num(0u64);
let mut total_root_stake = U64F64::saturating_from_num(0u64);

for uid in 0..root_n {
let hotkey = Keys::<T>::get(NetUid::ROOT, uid);
let root_stake = Self::get_stake_for_hotkey_on_subnet(&hotkey, NetUid::ROOT);
let stake_u64f64 = U64F64::saturating_from_num(u64::from(root_stake));
total_root_stake = total_root_stake.saturating_add(stake_u64f64);

let coldkey = Owner::<T>::get(&hotkey);
if let Some(true) = EmissionSuppressionVote::<T>::get(netuid, &coldkey) {
suppress_stake = suppress_stake.saturating_add(stake_u64f64);
}
}

let suppression = if total_root_stake > U64F64::saturating_from_num(0u64) {
suppress_stake.safe_div(total_root_stake)
} else {
U64F64::saturating_from_num(0u64)
};
EmissionSuppression::<T>::insert(netuid, suppression);
}
}
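
For illustration, the vote tally and share re-normalization above reduce to the following, assuming plain `f64` in place of `U64F64`, a flat list of `(stake, voted)` pairs in place of the root-subnet storage walk, and illustrative netuids:

```rust
use std::collections::BTreeMap;

const EMISSION_SUPPRESSION_THRESHOLD: f64 = 0.5;

// Stake-weighted suppression fraction: stake voting to suppress / total root stake.
fn suppression_fraction(validators: &[(u64, bool)]) -> f64 {
    let total: u64 = validators.iter().map(|(stake, _)| stake).sum();
    if total == 0 {
        return 0.0; // no root stake: no suppression
    }
    let voting: u64 = validators
        .iter()
        .filter(|(_, voted)| *voted)
        .map(|(stake, _)| stake)
        .sum();
    voting as f64 / total as f64
}

// Zero the shares of suppressed subnets, then re-normalize the rest to sum to 1.0.
fn apply_suppression(shares: &mut BTreeMap<u16, f64>, suppressed: &[u16]) {
    for netuid in suppressed {
        if let Some(share) = shares.get_mut(netuid) {
            *share = 0.0;
        }
    }
    let sum: f64 = shares.values().sum();
    if sum > 0.0 {
        for share in shares.values_mut() {
            *share /= sum;
        }
    }
}

fn main() {
    // 60 of 100 total stake votes to suppress: fraction 0.6 exceeds the 0.5 threshold.
    let frac = suppression_fraction(&[(60, true), (40, false)]);
    assert!(frac > EMISSION_SUPPRESSION_THRESHOLD);

    // Zeroing subnet 1's 0.5 share redistributes emission to subnets 2 and 3.
    let mut shares = BTreeMap::from([(1u16, 0.5), (2, 0.25), (3, 0.25)]);
    apply_suppression(&mut shares, &[1]);
    assert_eq!(shares.get(&1).copied(), Some(0.0));
    assert_eq!(shares.get(&2).copied(), Some(0.5)); // 0.25 / 0.5
    assert_eq!(shares.get(&3).copied(), Some(0.5));
}
```

The pallet code uses `U64F64` with `safe_div`/`saturating_*` rather than raw `f64` division; the arithmetic here is simplified for clarity.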