From 5a2c92522cff058f72009bfdc85ee1420241ea12 Mon Sep 17 00:00:00 2001
From: Klappy
Date: Sun, 10 May 2026 23:09:52 +0000
Subject: [PATCH] Refresh virtues-revisited essay: incorporate survey,
 personality framing, radar
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The original closing arc described the constraints survey as 'what I am
building next.' That survey was built the same day, with depth the
four-bullet sketch could not anticipate: a seven-phase method, dual-state
(current + desired) framing, DOLCHEO Constraints/Observations/Opens as
typed outputs, and a runtime contract wiring oddkit_preflight, _gate,
_challenge, _encode, _validate to phase boundaries.

This revision:

- Replaces 'What This Becomes Next' with 'What This Became' — compresses
  the seven-phase shape into a paragraph, names the dual-state output,
  introduces the three DOLCHEO artifact types, names the runtime contract
  wiring without re-specifying the method.
- Adds 'The Survey Is a Personality Test for the Product' as the new payoff
  section. Embeds the canon worked-example radar SVG inline (pilot-stage
  authentication service, dual polygons, tension-opposite axis ordering,
  MoSCoW scale). Names the three things the picture makes visible: the gap
  is the roadmap, tension-opposite axes show the tradeoff structurally, an
  honest profile is asymmetric.
- Updates the top blockquote and hook to reflect the new arc.
- Adds canon/methods/quality-attribute-tension-survey.md to derives_from.
- Adds 'tension-survey' and 'personality-radar' tags.
- Adds Where to Go Next entry pointing at the survey method canon.
- Refreshes og_description and twitter_description for social-graph parity
  with the new framing.

Original load-bearing sections preserved intact: 2018 reflection, What the
Article Got Right, What the Article Only Sketched, What the Matrix Looks
Like, Article Lacked an Operator, Honest Is Overwhelming, What I Stand By.
The five-claim central thesis is unchanged. Three ledger Opens (P1
observability inclusion, P2 ODD tie strength, P3 MoSCoW position)
deliberately not addressed — those remain pending operator review per
odd/ledger/2026-05-10-software-virtues-canon-package.
---
 writings/software-virtues-revisited.md | 170 +++++++++++++++++++++----
 1 file changed, 144 insertions(+), 26 deletions(-)

diff --git a/writings/software-virtues-revisited.md b/writings/software-virtues-revisited.md
index 335b55f..7f59edb 100644
--- a/writings/software-virtues-revisited.md
+++ b/writings/software-virtues-revisited.md
@@ -18,32 +18,34 @@ tags:
   - tradeoffs
   - ilities
   - tension-matrix
+  - tension-survey
+  - personality-radar
   - revisit
 epoch: E0008.4
 date: 2026-05-10

 # Discovery
-hook: "Eight years later, a partner asked me which '-ilities' actually trade off against which. I sent him the article and told him I still stand by it — then realized it never built the full map."
-description: "Eight years ago I wrote about ten software virtues and the tensions among them. A partner asked today which actually trade off against which, in which use cases — and I realized the article I'd been recommending for a decade had only listed tensions per-virtue and never assembled the full map. So I built it, and then realized the map was a worked example of something larger: a constraints survey for new work with agents."
+hook: "Eight years later, a partner asked me which '-ilities' actually trade off against which. I sent him the article, told him I still stand by it — then realized it never built the full map. So I built the map, then a survey on top of it, then a radar that shows your product its own personality."
+description: "Eight years ago I wrote about ten software virtues and the tensions among them. A partner asked today which actually trade off against which, and the article I'd been recommending for a decade only listed tensions per-virtue. So I built the full matrix, then a seven-phase constraints survey on top of it, then realized the cleanest framing was that the survey is a personality test for the product — and the radar it produces is the roadmap, visualized."
 slug: software-virtues-revisited

 # Social graph
 og_title: "Software Virtues, Revisited"
-og_description: "Eight years later, a partner asked me which '-ilities' actually trade off against which. I sent him the article and told him I still stand by it — then realized it never built the full map."
+og_description: "Eight years later, a partner asked me which '-ilities' actually trade off against which. The article I'd been recommending for a decade never built the full map. So I built it — and a personality test for the product."
 og_type: article
 twitter_card: summary_large_image
 twitter_title: "Software Virtues, Revisited"
-twitter_description: "Eight years later, a partner asked me which '-ilities' actually trade off against which. I sent him the article and told him I still stand by it — then realized it never built the full map."
+twitter_description: "Eight years later, a partner asked me which '-ilities' actually trade off against which. The article I'd been recommending for a decade never built the full map. So I built it — and a personality test for the product."
 # Relationships
-derives_from: "https://medium.com/@klappy/what-are-software-virtues-and-how-to-prioritize-them-f0b583741afe (2018-09-09 — original article), canon/observations/quality-attribute-tension-matrix.md, canon/principles/quality-attributes-are-in-tension.md, canon/definitions/software-virtues-vocabulary.md, odd/maturity.md"
+derives_from: "https://medium.com/@klappy/what-are-software-virtues-and-how-to-prioritize-them-f0b583741afe (2018-09-09 — original article), canon/observations/quality-attribute-tension-matrix.md, canon/principles/quality-attributes-are-in-tension.md, canon/definitions/software-virtues-vocabulary.md, canon/methods/quality-attribute-tension-survey.md, odd/maturity.md"
 complements: "writings/agentic-software-development.md"
 status: active
 ---

 # Software Virtues, Revisited — What I Wrote in 2018, What I Stand By, and What I Owed the Reader

-> I wrote about ten software virtues and the tensions among them in 2018. Eight years later, a partner asked me which "-ilities" actually trade off against which, in which use cases. I sent him the article and told him I still stand by it. Then I realized the article only listed the tensions named per-virtue — it never built the full map. So I built it, and then realized the map was a worked example of something larger: a constraints survey for new work with agents.
+> I wrote about ten software virtues and the tensions among them in 2018. Eight years later, a partner asked me which "-ilities" actually trade off against which, in which use cases. I sent him the article and told him I still stand by it. Then I realized the article only listed the tensions named per-virtue — it never built the full map. So I built it. Then I realized the map was a worked example of something larger — a constraints survey for new work with agents — so I built that too.
And then I realized the cleanest framing of the whole thing is that the survey is a personality test for the product, and the radar it draws is the roadmap.

 ---

@@ -153,26 +155,140 @@ If you have read the original article and felt overwhelmed, the matrix is what y

 Ian got the table he asked for. You get the table he asked for too.

-## What This Becomes Next
-
-Here is the part I did not see clearly until this morning, and that I think is actually the most useful turn in the whole eight-year arc:
-
-The matrix is not a reference document. It is **a constraints survey for new work with agents.**
-
-When you scope a new project — a new product, a new service, a new feature, a new agent task — you are implicitly ranking quality attributes whether you say so or not. Most projects fail to articulate the ranking, hit the tensions sideways, and call the resulting confusion "scope creep" or "technical debt" or "we shipped the wrong thing." The article from 2018 named the ten virtues. The matrix names the tensions among them. The constraints survey is what you do *with* the matrix when you sit down to start something new.
-
-The flow looks like this:
-
-- Pick the quality attributes that matter for *this* project. (Usually a subset of the ten, occasionally with additions from the broader universe — securability, auditability, accessibility, observability, whatever the project actually requires.)
-- Rank them for the project's current phase.
-- Reference the tension graph — for the canonical ten, the matrix; for any other set, generate the graph dynamically against the same principle.
-- Encode the chosen priorities and predicted sacrifices as constraints the agents working on the project inherit.
-
-That last step is what makes this an agent-work tool rather than an architectural diagram.
Agents — whether you mean LLM-driven coding agents, autonomous task runners, or the next generation of whatever this becomes — work best when the constraints they are operating under are explicit. A constraints survey grounded in the tension graph gives them a coherent answer to the question "what am I optimizing for, and what am I willing to sacrifice?" That answer is what stops an agent from cheerfully refactoring for maintainability while shipping a project whose phase calls for urgency, or from optimizing for originality on a project whose users need stability above all else.
-
-I did not see this clearly when I started writing the matrix this morning. I saw it when I told a partner I had built the table, and the next sentence out of his mouth was that he hates reinventing the wheel and would absolutely use the abstractions. The article — once a piece of management writing aimed at a human audience that mostly bounced off it — turns out to be exactly the right shape for instrumented agent work. Or rather: the article was the seed, the matrix is the worked example, and the constraints survey is what makes both into infrastructure.
-
-That last layer is what I am building next. The article is staying load-bearing for one more turn.
+## What This Became
+
+The first draft of this essay closed with a forward-looking sketch: *the matrix is a constraints survey for new work with agents, and that's what I'm building next.* By the time I finished publishing, the survey was already built. The canon now carries the [seven-phase method](klappy://canon/methods/quality-attribute-tension-survey) the sketch promised. The arc landed faster than the essay anticipated, which is the right way for an essay about a tradeoff system to age — the system kept moving while the writing was settling.
+
+The shape of the method is small enough to describe in a paragraph and big enough that it took the rest of the day to specify.
You start by naming the project's stage — proof of concept, pilot, production, refactor, or *other*-with-a-name. You select the ilities that apply, both the ones the project optimizes for **today** and the ones it should optimize for **going forward**; the two lists are usually close and never identical. You rank each ility on the MoSCoW scale — *Won't, Could, Should, Must* — for both states. You consult the tension matrix (or generate the graph dynamically for non-canonical ilities) to surface the sacrifices the desired ranking commits to and the sacrifices the current state is already living with. You acknowledge each sacrifice explicitly, and you let the inevitable rejections kick you back to re-rank. Then you encode the result.
+
+The encoding is where the survey stops being a workshop artifact and starts being infrastructure. Three [DOLCHEO](klappy://canon/definitions/dolcheo-vocabulary) artifact types fall out of one survey pass: **Constraints** capturing the desired state (the priorities the project commits to, the sacrifices it accepts, the ilities it explicitly does not optimize for), **Observations** capturing the current state (the observed level of each ility, with the source — telemetry, code review, contributor survey, lived experience), and **Opens** capturing the gap between them (one roadmap item per non-zero gap, ranked by magnitude, direction of investment named). The agent working on the project inherits the Constraints automatically. The Observations are the falsifiable baseline against which future surveys measure drift. The Opens are the prioritized roadmap.
+
+The whole loop is wired to [oddkit](https://oddkit.klappy.dev) actions: `preflight` opens the survey, `gate` enforces every phase boundary, `challenge` pressure-tests sacrifices before they're accepted, `encode` produces the typed output, `validate` closes the survey. The runtime contract is mechanical — it adds no rules, it just enforces the ones canon already carries.
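The dual-state encoding described above (Constraints from the desired ranking, Observations from the current one, one Open per non-zero gap, ranked by magnitude) can be sketched in a few lines of Python. This is an illustrative sketch only; the `Artifact` type and `encode_survey` helper are hypothetical names, not the canon method's actual schema.

```python
from dataclasses import dataclass

# MoSCoW levels as ordered integers (Won't < Could < Should < Must).
MOSCOW = {"wont": 1, "could": 2, "should": 3, "must": 4}

@dataclass
class Artifact:            # hypothetical type, for illustration only
    kind: str              # "constraint" | "observation" | "open"
    ility: str
    detail: str

def encode_survey(current, desired):
    """One survey pass yields the three artifact types described above."""
    out = [Artifact("constraint", i, f"desired={lvl}") for i, lvl in desired.items()]
    out += [Artifact("observation", i, f"current={lvl}") for i, lvl in current.items()]
    # One Open per non-zero gap, ranked by gap magnitude, largest first.
    gaps = {i: desired[i] - current.get(i, 0) for i in desired}
    for ility, gap in sorted(gaps.items(), key=lambda kv: -abs(kv[1])):
        if gap != 0:
            out.append(Artifact("open", ility, f"gap={gap:+d}"))
    return out

# A pilot-stage service committing to Stability while staying low-urgency:
arts = encode_survey(
    current={"stability": MOSCOW["could"], "urgency": MOSCOW["wont"]},
    desired={"stability": MOSCOW["must"], "urgency": MOSCOW["wont"]},
)
# Stability carries a +2 gap, so it yields one Open; urgency is on-target.
```

The point of the sketch is the shape of the output, not the code: one pass, three typed artifact lists, and the Opens already sorted into a roadmap.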
A re-run is triggered by phase transition, scope change, tension surprise, stakeholder change, or measured drift; the prior artifacts are archived, and the new ones supersede them.
+
+That description is the spec. The reason the spec is interesting is not that the spec is interesting. It is that the spec produces a picture.
+
+## The Survey Is a Personality Test for the Product
+
+The cleanest way to think about the survey's output is not as a list of constraints but as a personality profile. The same shape humans use to map traits onto a person — a radar chart with one axis per trait — maps naturally onto a project. Each ility is a trait. The project's ranking on each ility is its score. The polygon connecting the scores is the project's *shape*. Two products can have similar functions and entirely different personalities, and that personality difference is the most important thing a stakeholder, a reviewer, or an inheriting agent needs to know about the project up front.
+
+The framing earns the radar chart for free. It is also the framing that lets the survey reach audiences who would never read a method doc. "What's your product's personality?" is a question that gets answered. "Have you completed your quality-attribute tension survey?" is a question that gets ignored. The personality framing is the spoonful of sugar; the survey is the medicine.
+
+The default radar has one axis per ility, ordered so that **ilities in strong mutual tension sit across from each other on the chart**, and ilities that reinforce each other sit adjacent. The visual benefit is structural: a spike on one axis with a valley directly across shows the polygon *leaning* toward what the project prioritizes and away from what it sacrifices. Adjacent synergistic ilities form smooth contours. The polygon's silhouette becomes the tradeoff record by construction.
+
+The scale is the same MoSCoW scale the 2018 article surfaced.
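As a minimal sketch (the `MoSCoW` enum name is an assumption, not canon), the four-point scale is just an ordered enumeration, which is what makes rankings comparable without pretending to numeric precision:

```python
from enum import IntEnum

class MoSCoW(IntEnum):   # hypothetical name, for illustration only
    WONT = 1
    COULD = 2
    SHOULD = 3
    MUST = 4

# Ordering comes from the enum values, so rankings compare directly:
# a Must outranks a Should without any fine-grained scoring involved.
assert MoSCoW.MUST > MoSCoW.SHOULD > MoSCoW.COULD > MoSCoW.WONT
```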
Quantitative 0–100 scoring is rejected as false precision; *Won't / Could / Should / Must* matches the level at which decisions are actually made.
+
+And the survey produces *two* polygons, not one — because every survey is dual-state. The **desired** polygon (solid, full opacity) is the personality the project commits to. The **current** polygon (dashed outline, lighter) is the personality the project actually has today. Where the polygons coincide, the project is on-target on that axis. Where they diverge, the gap is a roadmap item. The radar *is* the roadmap, visualized.
+
+Here is the worked example from canon — a hypothetical pilot-stage authentication service:
+
+[Inline SVG: "Project Personality Radar — Pilot-stage Authentication Service (Current and Desired)." Radar chart with ten axes showing both the current and desired priority profiles of a hypothetical pilot-stage authentication service. The desired polygon (solid blue) reaches level 4 on Stability, Reality, and Interoperability and level 3 on Maintainability and Affordability. The current polygon (dashed orange) is smaller, sitting at level 2 on Stability, Maintainability, and Interoperability and level 3 on Reality and Affordability. Gaps between the two polygons on Stability, Maintainability, Reality, and Interoperability visualize the roadmap. Axes ordered tension-opposite clockwise from top: Stability, Maintainability, Reality, Affordability, Efficiency, Urgency, Versatility, Originality, Usability, Interoperability. Scale rings 1–4, labeled MoSCoW (1=Won't, 2=Could, 3=Should, 4=Must). Legend: Desired (solid), Current (dashed).]
+
+Three things the picture makes visible at a glance that the table buries:
+
+**The gap between the polygons is the roadmap.** The visible gap segments — Stability (top, +2), Maintainability (upper-right, +1), Reality (right, +1), Interoperability (upper-left, +2) — are the four committed investment directions, ranked by gap magnitude. The bottom and bottom-left of the chart show coincident polygons: those axes are on-target. The radar audits the project's trajectory, not just its commitments.
+
+**Tension-opposite axes show their tradeoff structurally.** Stability sits at the top and Urgency directly across at the bottom — the project's spike on Stability (level 4) against its valley on Urgency (level 1) shows the strongest single tradeoff being made, visualized as the radar's most pronounced lean. The same pattern holds across Reality/Originality, Affordability/Usability, and Interoperability/Efficiency. The silhouette is the tradeoff record.
+
+**An honest profile is asymmetric, and the two polygons should usually differ.** A near-circular polygon means the ranking was rubber-stamped — no real prioritization happened. A perfectly coincident pair of polygons means either the project is already at its committed personality (rare on a first survey) or the current state was reported aspirationally rather than observationally. Both failure modes are diagnosable from the picture.
+
+A project asked which virtues it prioritizes will sometimes claim all of them. A project asked which personality it has will not — because *all of them* is not a personality.
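The two structural rules named above, tension-opposite axis placement and gap-magnitude roadmap ranking, can be checked mechanically. Here is a sketch using the worked example's axis order and levels; the `AXES` list and `opposite` helper are illustrative assumptions, not the canon's code:

```python
# Axis order from the worked example, clockwise from top. Tension-opposed
# ilities sit half the circle apart, so index i and i + n/2 form the
# strongest mutual tradeoff by construction.
AXES = ["Stability", "Maintainability", "Reality", "Affordability",
        "Efficiency", "Urgency", "Versatility", "Originality",
        "Usability", "Interoperability"]

def opposite(ility):
    n = len(AXES)
    return AXES[(AXES.index(ility) + n // 2) % n]

# The worked example's dual-state levels on the four gapped axes:
desired = {"Stability": 4, "Maintainability": 3, "Reality": 4, "Interoperability": 4}
current = {"Stability": 2, "Maintainability": 2, "Reality": 3, "Interoperability": 2}

# Roadmap = gaps ranked largest first (all four are non-zero here).
roadmap = sorted(((d - current[i], i) for i, d in desired.items()), reverse=True)
```

For this example, `opposite("Stability")` is `"Urgency"`, and the roadmap ranking puts the two +2 gaps (Stability, Interoperability) ahead of the two +1 gaps, matching the reading of the chart above.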
The reframe extracts an honest answer where the direct question extracted a defensive one. That is the point.

 ## Where to Go Next

@@ -184,4 +300,6 @@ If you want the principle behind the map: [Quality Attributes Are In Tension](kl

 If you want the vocabulary the canon uses: [Software Virtues Vocabulary](klappy://canon/definitions/software-virtues-vocabulary).

+If you want the survey itself — the seven-phase method that turns the matrix into an operational artifact and produces the radar — start at [Quality Attribute Tension Survey](klappy://canon/methods/quality-attribute-tension-survey).
+
 If you want the discipline that makes the map worth carrying: [the rest of klappy.dev](https://klappy.dev).