Affordances in Software Engineering
Module 6 of 6 · Advanced · 45 min

Affordance Analysis Framework

Prerequisites: affordance-theory-for-engineers, code-as-communication, system-architecture-affordances, anti-affordances-designed-friction, affordance-decay-evolution

What You'll Learn

Why This Matters

You have spent five modules building vocabulary. You can now name what you see: signifiers that mislead, perceived affordances that diverge from real ones, constraints that guide, anti-affordances that protect, and affordance degradation that accumulates over time. But vocabulary alone does not produce action. What you need is a repeatable way to pick up any codebase or API surface and produce a critique that colleagues can read, act on, and use to guide refactoring decisions.

This module delivers that. The affordance analysis framework is not new theory — it is a structured application of everything you have already learned. Think of it as the difference between knowing the vocabulary of chess and knowing how to sit down and evaluate a board position. The pieces are the same. The framework is the method for seeing what the position actually says.

Engineering teams make expensive decisions — API versioning strategies, library boundaries, deprecation schedules — based on informal intuitions about what is clear and what is not. The framework gives you a way to make that judgment rigorous, repeatable, and communicable.

Core Concept

The framework organizes affordance analysis across four evaluation dimensions. Apply each dimension to every layer of the system you are analyzing: code, component, and system.

Discoverability: What does this system make visible and easy to find? What is hidden? Evaluate signifier quality: naming, type signatures, module structure, error messages, documentation that supplements (not substitutes for) structure. A system with strong discoverability affords correct usage to engineers without prior familiarity. A system with weak discoverability forces engineers to reverse-engineer intent from implementation — a reliable symptom of hidden affordances and affordance debt.

The key question: can an engineer who has never seen this system before determine what to do next from the system's own structure, without asking for help?

Constraint quality: What does this system make hard? Classify each constraint you find using the subtypes from the shared contract: physical (type systems, immutability, compile-time enforcement), logical (required parameters, versioned API contracts, schemas), or cultural (naming conventions, review gates, linter rules). Then determine: is each constraint intentional — a deliberate anti-affordance authored to prevent misuse — or is it the residue of affordance degradation, a friction point that emerged without design?

This distinction matters. A RefundAuthorization required parameter is a physical constraint and an intentional anti-affordance: it makes unauthorized refunds impossible without explicit elevation. A confusing method name like process() in a namespace with forty other process() variants is unproductive friction — it constrains engineers, but toward wrong usage, not correct usage. One protects; the other harms.
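A minimal TypeScript sketch of the three subtypes, using hypothetical names apart from RefundAuthorization, which comes from the running example:

```typescript
// Physical constraint: the type system makes the invalid state unrepresentable.
// Raw numbers cannot be passed where a validated amount is required.
type PositiveCents = number & { readonly __brand: "PositiveCents" };
function positiveCents(n: number): PositiveCents {
  if (!Number.isInteger(n) || n <= 0) {
    throw new Error("amount must be a positive integer number of cents");
  }
  return n as PositiveCents;
}

// Logical constraint: a required parameter encodes a domain rule.
// "Refund without an authorization" cannot be expressed at the call site.
interface RefundAuthorization {
  approverId: string;
  reason: string;
}
declare function refund(
  chargeId: string,
  amount: PositiveCents,
  auth: RefundAuthorization
): Promise<void>;

// Cultural constraint: a convention enforced by lint rules or review rather than
// the compiler, e.g. "one canonical method per operation" or is/has boolean prefixes.
```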

Feedback completeness: Does the system tell engineers when they are using it incorrectly, and does it guide them toward correction? Evaluate: types that reject invalid states, linter rules that catch drift, runtime errors that explain rather than stack-trace-dump, test failures that locate the problem precisely, error messages that include actionable guidance. Feedback is the mechanism that closes the loop between perceived affordance and real affordance. When feedback is absent, engineers must discover misuse the hard way: in production.

A system can have clear discoverability and still fail on feedback. A function that looks obvious to call — clear name, reasonable signature — but silently swallows errors and returns null is a feedback failure. The perceived affordance (safety, clarity) diverges from the real affordance (silent failure). This divergence is among the most expensive classes of affordance failure.
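As a sketch of that failure mode and one way to repair it (the functions and names below are hypothetical, not from any library in this module):

```typescript
interface User { id: string; email: string; }
declare const db: { findUser(id: string): Promise<User | undefined> };

// Feedback failure: the signature looks safe, the body hides every error.
async function getUserSilent(id: string): Promise<User | null> {
  try {
    return (await db.findUser(id)) ?? null;
  } catch {
    return null; // caller cannot tell "not found" from "database unreachable"
  }
}

// Feedback restored: the return type names every outcome the caller must handle.
type UserLookup =
  | { kind: "found"; user: User }
  | { kind: "notFound" }
  | { kind: "error"; cause: Error };

async function getUser(id: string): Promise<UserLookup> {
  try {
    const user = await db.findUser(id);
    return user ? { kind: "found", user } : { kind: "notFound" };
  } catch (cause) {
    return { kind: "error", cause: cause as Error };
  }
}
```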

Audience fit: For whom are these affordances designed? What background does the system assume? What breaks for engineers outside that assumed audience? The same signifier communicates differently to a senior engineer who wrote the library and to a junior engineer onboarding six months after it shipped. Affordances are relational. This dimension makes the relational nature explicit: name the assumed audience, then evaluate what the system does for engineers outside it.

Audience fit is where affordance debt becomes most visible. As a system evolves, the assumed audience often shifts — a library built for internal use gets adopted externally, an API designed for synchronous patterns gains async consumers — and the affordances that worked for the original audience become silent landmines for the new one.

Applying the framework: at each layer, ask three questions for every dimension. What does this layer make easy? What does it make hard? What does it hide or misrepresent?

Organize findings by dimension, not by layer. The complete output of this analysis is the system's affordance landscape — the map of what it makes easy, hard, hidden, and misleading for a given engineer audience. This produces a critique that is actionable: dimension-level findings map cleanly to specific types of improvement work (naming refactor, type strengthening, error message revision, audience documentation).

Concrete Example

Here is a full affordance analysis of the PaymentService library. This is the system you have seen across every prior module. What follows is what a real critique looks like.


PaymentService Affordance Critique

System: PaymentService library, v1–v3. Audience: five product teams with varying familiarity.

Dimension 1: Discoverability

Code layer — what is easy: The top-level surface is well-named. PaymentClient, PaymentConfig, PaymentResult signal their purpose without inspection. A new engineer can identify the entry point immediately.

Code layer — what is hidden: PaymentClient.charge() accepts an undocumented third argument: idempotencyKey. The capability exists — it is a real affordance — but nothing in the method signature, JSDoc, or type definitions signals it. An engineer who does not already know it exists will not discover it. This is a hidden affordance: real, working, and invisible. Engineers who miss it write non-idempotent charge calls. This is not an edge case; idempotency in payment processing is critical.
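A sketch of the shape of this finding; the signatures are illustrative reconstructions, not quotes from the library:

```typescript
type PaymentResult = { status: "success" | "failure" | "pending" };

// What the published typings and JSDoc advertise:
interface PaymentClientAsDocumented {
  charge(amount: number, currency: string): Promise<PaymentResult>;
}

// What the implementation actually accepts: an undocumented third argument.
// The capability is real, but nothing on the surface signals it.
interface PaymentClientAsImplemented {
  charge(amount: number, currency: string, idempotencyKey?: string): Promise<PaymentResult>;
}
```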

Code layer — what is misleading: In v3, the library exposes both getTransaction() and fetchTransaction(). These are not aliases. They hit different internal paths with subtly different caching behaviors. Nothing in the names signals the difference. An engineer choosing between them will guess — and will be wrong approximately half the time. This is affordance failure: the system actively misleads.

Component layer: LegacyPaymentClient sits alongside PaymentClient in the public module. It is not clearly marked as internal. Engineers encountering it for the first time cannot tell whether it is the current implementation or a deprecated one. The @Deprecated annotation exists but is not surfaced in the module-level exports. A new engineer who picks up LegacyPaymentClient because it appeared first in autocomplete is using a signifier system that failed them.

System layer: v1, v2, and v3 coexist in the deployed infrastructure. No consumer-facing documentation specifies which version supports which feature. Engineers integrating the library must read the changelog, the ADRs, and the source to reconstruct the version matrix. The cost of discovery is high.

Dimension 2: Constraint Quality

Well-designed constraints: refund() requires a RefundAuthorization object — a physical constraint that makes unauthorized refunds impossible without explicit permission elevation. This is a deliberate anti-affordance: it adds friction, but the friction is productive. The cost (one extra object to construct) is small; the protection (audit trail, role enforcement) is significant.

PaymentConfig is a builder that enforces a required initialization sequence. You cannot call build() without providing a gatewayUrl and a clientSecret. This is a logical constraint with physical enforcement: the required parameters encode the rule, and the builder's types turn incomplete configuration into a compile-time error rather than a runtime surprise.
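One way such a builder could achieve that compile-time guarantee, sketched with assumed field names and defaults (the real PaymentConfig may be shaped differently):

```typescript
interface PaymentConfigValues {
  gatewayUrl: string;
  clientSecret: string;
  timeoutMs?: number;
}

class PaymentConfig {
  private constructor(
    readonly gatewayUrl: string,
    readonly clientSecret: string,
    readonly timeoutMs: number
  ) {}

  // The required fields are parameters of build(), so omitting clientSecret is a
  // type error at the call site rather than a failure at request time.
  static build(values: PaymentConfigValues): PaymentConfig {
    return new PaymentConfig(values.gatewayUrl, values.clientSecret, values.timeoutMs ?? 30_000);
  }
}

// PaymentConfig.build({ gatewayUrl: "https://..." }); // compile error: clientSecret is missing
const config = PaymentConfig.build({
  gatewayUrl: "https://payments.example.internal",
  clientSecret: "example-secret",
});
```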

Degraded constraints: The coexistence of getTransaction() and fetchTransaction() is an example of a cultural constraint that has broken down. The original convention (one canonical method per operation) has drifted. No linter enforces it. No code review checklist flags it. The constraint that was supposed to prevent ambiguity now produces it. This is affordance degradation — accumulated drift, not intentional design.

Affordance debt assessment: The v1-to-v3 versioning situation carries significant affordance debt. The library is technically functional: payments charge, refunds process, transactions resolve. The implementation quality is not the problem. The problem is communicative clarity: engineers cannot determine from the library's own surface which version to use, which methods are canonical, or which behaviors are guaranteed. That gap between what the system should signal and what it actually signals is affordance debt.

Dimension 3: Feedback Completeness

Strengths: Type errors on missing RefundAuthorization are immediate and clear. PaymentResult's union type — success, failure, pending — forces engineers to handle all three variants at the call site. This is strong feedback: the type system makes incomplete handling a compile-time error.
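A sketch of how that union can force exhaustive handling; the variant fields are assumptions, not the library's actual shape:

```typescript
type PaymentResult =
  | { status: "success"; transactionId: string }
  | { status: "failure"; reason: string }
  | { status: "pending"; retryAfterMs: number };

function describe(result: PaymentResult): string {
  switch (result.status) {
    case "success":
      return `charged: ${result.transactionId}`;
    case "failure":
      return `declined: ${result.reason}`;
    case "pending":
      return `pending, retry in ${result.retryAfterMs} ms`;
    default: {
      // If a fourth variant is ever added, this assignment becomes a compile error,
      // pointing at every call site that has not handled the new case.
      const unhandled: never = result;
      return unhandled;
    }
  }
}
```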

Weaknesses: When charge() is called without an idempotencyKey and the network request is retried, the library does not warn. It succeeds silently — twice. A double-charge occurs with no feedback at the call site, no log warning, and no exception. The feedback loop is broken exactly at the point where it matters most.

Error messages from the library are generic HTTP status codes with no context. A 402 response from charge() does not tell the engineer whether the failure was a card decline, a configuration error, or an upstream timeout. Each failure mode requires different remediation; the undifferentiated error makes correct recovery guesswork.
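One way to differentiate those failure modes, sketched with hypothetical error codes rather than the library's actual surface:

```typescript
// Instead of a bare 402, name the failure mode and what recovery it permits.
type ChargeError =
  | { code: "card_declined"; declineReason: string; retryable: false }
  | { code: "configuration_error"; missingField: string; retryable: false }
  | { code: "upstream_timeout"; timeoutMs: number; retryable: true };

function remediate(error: ChargeError): string {
  switch (error.code) {
    case "card_declined":
      return `Ask the customer for another payment method (${error.declineReason}).`;
    case "configuration_error":
      return `Fix the client setup: ${error.missingField} is not configured.`;
    case "upstream_timeout":
      return `Safe to retry with an idempotency key after ${error.timeoutMs} ms.`;
  }
}
```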

Dimension 4: Audience Fit

The library was designed for the platform team that wrote it and the two initial product teams who onboarded during v1. For that audience, the hidden idempotencyKey parameter was acceptable — it was communicated through Slack and onboarding sessions.

By v3, three additional product teams have integrated the library. They did not receive the Slack briefing. They do not know about idempotencyKey. For them, the hidden affordance is an active hazard. The library's affordance landscape was calibrated to an audience that no longer exists as the primary consumer.

This is the most common audience fit failure: a system designed for a small, informed, homogeneous audience gets adopted by a larger, more diverse audience, and the assumptions encoded in the signifiers no longer hold.


Actionable Recommendations:

1. Surface idempotencyKey: add it to the typed signature and documentation of charge(), and warn (or fail) when a charge is issued without one.
2. Consolidate getTransaction() and fetchTransaction() into a single canonical method; deprecate the other and back the convention with a lint rule.
3. Remove LegacyPaymentClient from the public exports, or surface its deprecation where autocomplete and module-level imports will show it.
4. Publish a version-and-feature matrix for v1–v3 so consumers can determine the canonical integration path without reading the changelog, the ADRs, and the source.
5. Replace bare HTTP status codes with differentiated, typed error results that name the failure mode and the expected remediation.

Analogy

Think of the framework as a building inspection checklist. An inspector does not decide what a building should look like. They apply a standard set of dimensions — structural integrity, electrical safety, plumbing, egress — to whatever building they are evaluating. Each dimension produces findings; the findings produce a report; the report drives remediation.

The affordance analysis framework works the same way. You are not deciding what the PaymentService library should be. You are applying four dimensions — discoverability, constraint quality, feedback completeness, audience fit — to what it already is. The findings surface what the system communicates well and where it fails. The recommendations follow from the findings.

Like a building inspection, the framework is a diagnostic tool, not a design generator. It tells you what is wrong. Fixing it requires judgment, architectural skill, and the concepts from Modules 01–05.

Going Deeper

Calibrating dimensions to the system type. The four dimensions carry different weight for different systems. A public-facing SDK used by thousands of external developers demands extreme discoverability: the hidden idempotencyKey pattern would be catastrophic. An internal domain service used by three teams might tolerate lower discoverability if the team maintains strong onboarding rituals and documentation. Constraint quality matters more for safety-critical systems. Feedback completeness matters more for systems that fail silently. Audience fit matters most when the consumer audience is diverse or unfamiliar. A skilled analysis notes which dimensions are most load-bearing for the system at hand.

Affordance debt quantification. The framework surfaces affordance debt qualitatively. Some teams find it useful to make it quantitative: count the number of hidden affordances (capabilities with no signifiers), count signifier inconsistencies (pairs of methods with different names for the same operation), measure onboarding time to first successful integration. None of these metrics replace judgment, but they make affordance debt visible in ways that drive prioritization.
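If a team wants to track these over time, the record can be as simple as the following; the field names are illustrative:

```typescript
interface AffordanceDebtSnapshot {
  capturedAt: string;               // ISO date of the analysis
  hiddenAffordances: number;        // capabilities with no signifiers at the public surface
  signifierInconsistencies: number; // e.g. method pairs naming the same operation differently
  medianOnboardingDays: number;     // time to a consumer's first successful integration
}
```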

Embedding affordance language in code review. The framework's vocabulary transfers directly into code review comments. Instead of "this naming is confusing," write: "This is a signifier failure — the name communicates X but the behavior is Y." Instead of "this is hard to use," write: "This creates unproductive friction — the constraint does not protect against a real failure mode." Precise language produces more actionable feedback and builds a shared vocabulary on the team.

Revisiting hidden affordances. Module 01 introduced the idempotencyKey as the canonical hidden affordance: the capability exists, it works, and nothing signals it. The framework's Discoverability dimension is specifically designed to surface these. Hidden affordances are uniquely dangerous because they are invisible by definition — engineers cannot act on what they cannot see. During an analysis, actively look for capabilities that exist in tests or internal documentation but are absent from the public API surface. Those are your hidden affordances.

Common Misconceptions

"A high-scoring analysis means the system is well-designed." This is wrong. The framework surfaces findings; it does not produce a score. A system can perform well on three dimensions and catastrophically on one. The PaymentService library has strong constraint quality (RefundAuthorization, PaymentResult union types) and poor discoverability (hidden idempotencyKey, getTransaction/fetchTransaction ambiguity). That profile does not average out. The discoverability failure produces double-charges in production. Treat each dimension as independent.

"Affordance analysis and code review are the same thing." Standard code review evaluates correctness: does the code do what it says? Affordance analysis evaluates communication: does the system tell engineers what they need to know to use it correctly? A function can be perfectly correct and an affordance failure simultaneously. The charge() function without idempotency warning is correct — it charges exactly once — but it is a feedback failure because it permits a usage pattern (retry without key) that will cause double-charges silently. Code review would pass it. Affordance analysis catches it.

"The framework tells you what to build." The framework is diagnostic, not prescriptive. It identifies where signifiers are weak, where constraints are missing or broken, where feedback loops are absent, where audience assumptions have drifted. It does not generate solutions. Knowing that idempotencyKey is a hidden affordance does not tell you whether to make it a required parameter, a required field in a configuration object, a default with a documented opt-out, or something else. That design decision requires judgment informed by the full context: backward compatibility, team conventions, consumer needs. The framework gives you the problem statement. The engineering skills from earlier modules give you the solution space.

Check Your Understanding

  1. You are reviewing a new TypeScript HTTP client library before your team adopts it. The library has a request() method with the following signature:

    // Poor affordance
    async request(config: object): Promise<any>

    Apply the Discoverability dimension. What are the signifier failures? What hidden affordances might exist? What would a stronger version of this signature look like?

Answer: The signature has three signifier failures: `config: object` communicates nothing about what configuration is expected, `Promise<any>` conceals the response shape, and `request` is so generic it fails to communicate what kind of request it makes (HTTP verb, retry behavior, timeout handling). Hidden affordances include whatever configuration keys `object` secretly accepts (auth headers, timeout, retry count), which exist but are invisible. A stronger version: `async request(config: RequestConfig): Promise<TypedResponse>` where `RequestConfig` is a typed interface with required and optional fields made explicit. The return type forces handling of the typed response structure. (One possible shape for this signature is sketched after these questions.)
  2. A team argues that their internal service does not need strong signifiers because "everyone on the team knows how it works." Evaluate this claim using the Audience Fit dimension. Under what conditions is it defensible, and when does it become a liability?

Answer: The claim is defensible only if the team is stable, onboarding is infrequent, and the service's scope is not growing. It becomes a liability the moment any of those assumptions break: a new hire joins, a team member leaves, the service is adopted by another team, or the system grows complex enough that even current team members cannot hold it entirely in memory. Affordances are relational — they are calibrated to an audience. The implicit assumption encoded in weak signifiers is that the audience is homogeneous and informed. That assumption has an expiration date.
  3. The PaymentService library's v1–v3 version coexistence is described as affordance debt. Why is this affordance debt rather than just technical debt?

Answer: Technical debt measures implementation cost: the v1–v3 coexistence requires maintaining three code paths, three sets of integration tests, and backward-compatibility shims. That is technical debt. Affordance debt measures communicative clarity: engineers integrating the library cannot determine from the library's own surface which version to use, which methods are canonical, or which behaviors are guaranteed. A library that is technically functional but communicatively opaque has high affordance debt with potentially low technical debt. The two dimensions are orthogonal. Resolving the technical debt (consolidating to a single version) would likely also reduce affordance debt, but that is not guaranteed — a single version that is still confusingly named and poorly signaled would fix the technical problem while leaving the affordance problem intact.
  4. You apply the Feedback Completeness dimension to an API and discover that error responses are untyped and contain only HTTP status codes. What is the affordance failure here, and what does it cost engineers in practice?

Answer: The failure is a broken feedback loop: the perceived affordance of calling the API (I will know what went wrong) diverges from the real affordance (I will receive a number). When the feedback loop breaks, engineers cannot write correct error-handling code from the API contract alone. They must test failure scenarios manually, read internal source code, or consult documentation — all of which are forms of unproductive friction. In practice this means: incorrect retry logic (retrying on 402, which is a card decline, not a transient failure), missing error-specific recovery paths, and runtime bugs discovered in production rather than at development time. The cost scales with the number of distinct error modes the API can produce.
  5. A colleague proposes adding twenty new linter rules to enforce naming conventions across your codebase. Apply the Constraint Quality dimension. What questions would you ask to evaluate whether these are productive anti-affordances or unproductive friction?

Answer: Ask: Does each rule protect against a real failure mode? A rule that enforces `isLoading`/`hasError` prefix conventions for boolean variables prevents a naming ambiguity that has actually caused bugs. A rule that mandates trailing underscores on private fields in a TypeScript codebase where `private` already enforces access control adds friction with no protective benefit. Second: Is the rule enforceable at the right time? Compile-time or lint-time rules are productive; runtime rules are almost always unproductive. Third: Does the rule block legitimate use cases? An over-specified naming rule that requires every async function to end in `Async` makes cross-platform code (where naming is dictated by a protocol) impossible to write without workarounds. Constraints that force workarounds are themselves an affordance failure.
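For question 1 above, one possible shape of the stronger signature, with RequestConfig and TypedResponse as hypothetical names (a sketch of one reasonable design, not the only one):

```typescript
interface RequestConfig {
  url: string;
  method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE";
  headers?: Record<string, string>;
  timeoutMs?: number; // previously a hidden key buried inside `config: object`
  retries?: number;   // previously a hidden key buried inside `config: object`
  body?: unknown;
}

interface TypedResponse<T> {
  status: number;
  headers: Record<string, string>;
  data: T;
}

// The generic return type makes callers state the shape they expect back, and the
// config interface turns formerly hidden options into visible, typed fields.
declare function request<T>(config: RequestConfig): Promise<TypedResponse<T>>;
```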

Key Takeaways
