Team Sorter divides participants into balanced teams based on ratings, achieving optimal minimum rating difference across all teams.
- Optimal balancing: Uses an exact branch-and-bound backtracking algorithm to find the global minimum rating difference across teams (when ratings are provided)
- Flexible ratings: Supports participant `rating` as either a decimal or a list of decimals (1.0 to 10.0) with one-decimal precision
- Placeholder padding: Automatically adds median-rated placeholder participants when the participant count is not divisible by the team count
- Equal team sizes: Maintains equal team sizes; each team must have at least 1 real (non-placeholder) participant
- Solution enumeration: Counts all optimal solutions and randomly selects one per run via reservoir sampling for rated input; unrated input uses direct random assignment
- JSON API: Language-agnostic interface via JSON stdin/stdout or files
- CLI & Library: Available as both a standalone binary and importable Go package
Input must satisfy:
- `number_of_teams` >= 2
- `participants` is not empty
- Number of participants ≥ `number_of_teams` + 1 (minimum 2 real participants per configuration)
- Each participant:
  - Has a unique name (case-insensitive)
  - Name is not empty
  - Rating value(s) are in range [1.0, 10.0] (finite, not NaN/Inf)
- Ratings are all-or-none: either all participants have ratings, or none have ratings
- If `rating` is a list for one participant, all participants must provide the same list length
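The validation rules above can be expressed compactly in Go. The sketch below is illustrative only: the `Participant` type and `validate` function are hypothetical names, not the package's actual API.

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// Participant mirrors the input schema; Ratings holds one or more values.
type Participant struct {
	Name    string
	Ratings []float64
}

// validate sketches the input rules listed above (hypothetical helper,
// not the package's actual implementation).
func validate(teams int, ps []Participant) error {
	if teams < 2 {
		return fmt.Errorf("number_of_teams must be at least 2")
	}
	if len(ps) == 0 {
		return fmt.Errorf("participants must not be empty")
	}
	if len(ps) < teams+1 {
		return fmt.Errorf("participants count must be at least number_of_teams + 1")
	}
	seen := map[string]bool{}
	ratingLen := -1
	for _, p := range ps {
		if p.Name == "" {
			return fmt.Errorf("participant name must not be empty")
		}
		key := strings.ToLower(p.Name) // uniqueness is case-insensitive
		if seen[key] {
			return fmt.Errorf("participant name must be unique")
		}
		seen[key] = true
		if ratingLen == -1 {
			ratingLen = len(p.Ratings) // first participant fixes the rating shape
		} else if len(p.Ratings) != ratingLen {
			return fmt.Errorf("all participants must have the same number of rating values")
		}
		for _, r := range p.Ratings {
			if math.IsNaN(r) || math.IsInf(r, 0) || r < 1.0 || r > 10.0 {
				return fmt.Errorf("participant rating must be between 1.0 and 10.0")
			}
		}
	}
	return nil
}

func main() {
	// One participant for 3 teams violates the count rule.
	fmt.Println(validate(3, []Participant{{Name: "Ali", Ratings: []float64{10}}}))
}
```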
The sorter has two execution modes:
- Rated input: uses exact backtracking with pruning to guarantee the globally optimal solution
- Unrated input: uses a per-call randomizer to produce balanced team sizes without synthetic ratings
Rated mode flow:
- Padding: If `participants % teams != 0`, adds `(teams - remainder)` placeholder participants with the median rating of real participants
- Optimal assignment: Explores all valid assignments respecting:
  - Equal team sizes (or size + 1 for the first `remainder` teams)
  - Placeholder cap per team: `ceil(placeholder_count / team_count)`
  - Each team must include ≥ 1 real participant
  - Multi-rating objective: balances per-criterion team spreads (e.g., attack/mid/defense) and total team spread together
- Solution counting: Tracks all distinct assignments that achieve the minimum `rating_diff`
- Random selection: Uses reservoir sampling to randomly pick one among all optimal solutions
Unrated mode flow:
- Padding: If `participants % teams != 0`, adds placeholders (without ratings)
- Random assignment: Randomly shuffles/assigns participants into valid equal-size teams
```json
{
  "number_of_teams": 3,
  "participants": [
    {"name": "Alice", "rating": [9.5, 8.0, 7.5]},
    {"name": "Bob", "rating": [8.0, 8.5, 7.0]},
    {"name": "Charlie", "rating": [7.5, 7.0, 8.0]},
    {"name": "Diana", "rating": [10.0, 9.0, 8.5]}
  ]
}
```

Fields:
- `number_of_teams` (int): Number of teams to create
- `participants` (array):
  - `name` (string): Unique participant identifier
  - `rating` (number or number[]): Skill/strength rating value(s) in range [1.0, 10.0]
    - A number keeps single-rating behavior
    - A list enables multi-criteria balancing: teams are optimized to balance each criterion spread (e.g., attack/mid/defense) and average team rating spread
```json
{
  "teams": [
    {
      "name": "Team 1",
      "total_rating": 18.5,
      "members": [
        {"name": "Alice", "rating": 9.5},
        {"name": "Charlie", "rating": 9.0}
      ]
    },
    ...
  ],
  "meta": {
    "team_count": 3,
    "participant_count": 4,
    "placeholder_count": 2,
    "members_per_team": 2,
    "solution_count": 12,
    "min_team_rating": 17.5,
    "max_team_rating": 19.0,
    "rating_diff": 1.5
  }
}
```

Team Fields:
- `name`: Auto-generated team identifier (e.g., "Team 1")
- `total_rating`: Sum of all member ratings
- `members`: List of assigned participants with an `is_placeholder` flag (omitted for real participants)
Meta Fields:
- `team_count`: Number of teams created
- `participant_count`: Count of real (non-placeholder) participants from input
- `placeholder_count`: Number of auto-added placeholder participants
- `members_per_team`: Exact number of members in each team
- `solution_count`: Total number of distinct optimal solutions with the same minimum `rating_diff`
- `min_team_rating`: Lowest team rating summary
- `max_team_rating`: Highest team rating summary
- `rating_diff`: Difference between max and min summaries (0 = perfect balance)
When input uses rating lists with n values, each meta rating field becomes a list with n+1 values: `[criterion_1, criterion_2, ..., criterion_n, avg]`.
The last value is the participant's average rating across all criteria, used for optimization and team balancing.
For single-value ratings, these fields stay as scalars for backward compatibility.
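As a concrete illustration of the n+1 shape, a team's rating vector can be built by summing each criterion over its members and appending the average across criteria. The `teamRatingVector` helper below is a sketch, not the package's actual computation:

```go
package main

import "fmt"

// teamRatingVector sums each criterion over a team's members and appends
// the average across criteria, matching the n+1 layout described above
// (illustrative sketch).
func teamRatingVector(members [][]float64) []float64 {
	n := len(members[0])
	out := make([]float64, n+1)
	for _, m := range members {
		for i, r := range m {
			out[i] += r
		}
	}
	sum := 0.0
	for i := 0; i < n; i++ {
		sum += out[i]
	}
	out[n] = sum / float64(n) // average across the n criteria
	return out
}

func main() {
	// Alice [9.5, 8.0, 7.5] and Bob [8.0, 8.5, 7.0] on one team.
	fmt.Println(teamRatingVector([][]float64{{9.5, 8.0, 7.5}, {8.0, 8.5, 7.0}}))
}
```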
```go
package main

import (
	"fmt"
	"log"

	"team_sorter/pkg/teamsorter"
)

func main() {
	req := teamsorter.SortTeamsRequest{
		NumberOfTeams: 3,
		Participants: []teamsorter.Participant{
			{Name: "Alice", Ratings: []teamsorter.Rating{9.5, 8.0, 7.5}},
			{Name: "Bob", Ratings: []teamsorter.Rating{8.0, 8.5, 7.0}},
			{Name: "Charlie", Ratings: []teamsorter.Rating{7.5, 7.0, 8.0}},
			{Name: "Diana", Ratings: []teamsorter.Rating{10.0, 9.0, 8.5}},
		},
	}
	resp, err := teamsorter.SortTeams(req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Optimal solutions: %d\n", resp.Meta.SolutionCount)
}
```

```sh
go build -o bin/sorter ./cmd/sorter/
echo '{"number_of_teams": 3, "participants": [...]}' | ./bin/sorter
./bin/sorter -f input.json -o output.json
jq . output.json
go run ./cmd/demo
```

CLI Options:
- `-d '<json>'`: Pass JSON as a command-line string
- `-f <file>`: Read JSON from file
- `-o <file>`: Write JSON output to file (default: stdout)
- `-list`: Output all optimal solutions as list items in the form `{ "Team N": ["Name", ...] }`
Example:
```sh
./bin/sorter -f input.json -list | jq .
```

If ratings are provided and `solution_count` is 24, this mode returns a JSON array with 24 objects.
If ratings are not provided, this mode returns all valid team permutations (which can be very large).
Given 4 participants and 3 teams:
```json
{
  "number_of_teams": 3,
  "participants": [
    {"name": "Ali", "rating": 10.0},
    {"name": "Mira", "rating": 10.0},
    {"name": "Bek", "rating": 10.0},
    {"name": "Dana", "rating": 9.0}
  ]
}
```

The sorter:
- Calculates that 2 placeholders are needed (6 total / 3 teams = 2 each)
- Computes the median of [10, 10, 10, 9] = 10.0
- Adds 2 placeholders with rating 10.0
- Finds 12 optimal solutions with `rating_diff = 1.0`:
  - Each team has 2 members
  - One team = 19.0 (real + real, e.g., Ali + Dana)
  - Two teams = 20.0 each (real + placeholder)
- Randomly selects one solution per run
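The padding arithmetic from this walkthrough fits in a few lines of Go. `medianRating` is a hypothetical helper showing one common median convention (mean of the two middle values for an even count), not necessarily the package's exact formula:

```go
package main

import (
	"fmt"
	"sort"
)

// medianRating sorts a copy of the ratings and returns the middle value,
// or the mean of the two middle values for an even count (sketch).
func medianRating(rs []float64) float64 {
	s := append([]float64(nil), rs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

func main() {
	ratings := []float64{10, 10, 10, 9}
	teams := 3
	// Placeholders needed so the count divides evenly: 4 % 3 = 1, so add 2.
	placeholders := (teams - len(ratings)%teams) % teams
	fmt.Println(placeholders, medianRating(ratings)) // 2 10
}
```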
Sample meta output:
```json
{
  "team_count": 3,
  "participant_count": 4,
  "placeholder_count": 2,
  "members_per_team": 2,
  "solution_count": 12,
  "rating_diff": 1.0
}
```

All validation errors return JSON with an `error` field:
```json
{"error": "participant rating must be between 1.0 and 10.0"}
```

Common errors:
- `"number_of_teams must be at least 2"`
- `"participants must not be empty"`
- `"participants count must be at least number_of_teams + 1"`
- `"participant name must not match reserved format Placeholder "`
- `"participant rating must be between 1.0 and 10.0"`
- `"all participants must have the same number of rating values"`
- `"participant name must be unique"`
- `"no team sorting solution found"`
All ratings and team totals are normalized to one decimal place (e.g., 9.5, 10.0, 6.2). JSON output uses string-formatted decimals to preserve precision:

```json
{"rating": 9.5, "total_rating": 19.0}
```

Run the test suite with:

```sh
go test ./...
```

- For typical inputs (< 20 participants, 2-6 teams), execution is instantaneous
- Exact algorithm explores all valid combinations; worst case is exponential but pruning (early termination, empty-team symmetry) makes it practical
- Placeholder padding reduces effective participant count without affecting solution quality
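To make the two pruning ideas concrete, here is a self-contained toy version of an exhaustive equal-size split with empty-team symmetry breaking and early termination at a perfect split. It is a sketch of the technique only; `bestDiff` is not the package's implementation and omits placeholder caps and multi-criteria objectives:

```go
package main

import "fmt"

// bestDiff exhaustively assigns ratings to `teams` equal-size teams and
// returns the minimum possible max-min spread of team totals. It applies
// two prunings: empty-team symmetry breaking and early termination once a
// zero-diff (perfect) split is found. Illustrative sketch only.
func bestDiff(ratings []float64, teams int) float64 {
	size := len(ratings) / teams
	sums := make([]float64, teams)
	counts := make([]int, teams)
	best := -1.0
	var rec func(i int)
	rec = func(i int) {
		if best == 0 { // early termination: a perfect split cannot be beaten
			return
		}
		if i == len(ratings) {
			lo, hi := sums[0], sums[0]
			for _, s := range sums[1:] {
				if s < lo {
					lo = s
				}
				if s > hi {
					hi = s
				}
			}
			if best < 0 || hi-lo < best {
				best = hi - lo
			}
			return
		}
		for t := 0; t < teams; t++ {
			if counts[t] == size {
				continue
			}
			sums[t] += ratings[i]
			counts[t]++
			rec(i + 1)
			sums[t] -= ratings[i]
			counts[t]--
			// Empty-team symmetry: placing this item into a later, equally
			// empty team would only permute an already-explored layout.
			if counts[t] == 0 {
				break
			}
		}
	}
	rec(0)
	return best
}

func main() {
	// The worked example above: 4 real participants plus 2 placeholders at 10.0.
	fmt.Println(bestDiff([]float64{10, 10, 10, 9, 10, 10}, 3)) // 1
}
```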
See LICENSE file.