40.42 Structured Loop

A task executes repeatedly with a single pre-test or post-test condition at a single entry/exit point. The loop has exactly one back-edge and exactly one exit. Contrast with the Arbitrary Cycles pattern, which allows multiple entry and exit points and unstructured back-edges.


Motivating Scenario

An AI PR review agent processes a shared work queue. The loop structure is: check whether the queue is empty, fetch the next PR if not, run automated code review, post a comment, then check again. The pre-test variant checks first — if the queue is empty on arrival, no fetch occurs and the agent terminates immediately. The post-test variant fetches first and checks afterward, guaranteeing at least one execution.

The key insight: the agent has no knowledge of total queue depth at startup. The loop condition is re-evaluated from live state on each iteration. This is structurally different from a static batch (Fan-Out over a known list) because the queue may receive new items between iterations. The loop terminates only when the queue is confirmed empty at check time.
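The pre-test variant above can be sketched as a minimal drain loop. This is an illustrative sketch, not a production agent: the `review` and `post_comment` callables are hypothetical stand-ins for the AI review and GitHub-comment steps, and a `deque` stands in for the live work queue.

```python
from collections import deque

def drain_queue(queue, review, post_comment):
    """Pre-test loop: the condition is checked before each fetch,
    so an empty queue on arrival yields zero iterations."""
    iterations = 0
    while queue:                     # condition re-evaluated from live state
        pr = queue.popleft()         # fetch next PR
        findings = review(pr)        # AI code review (hypothetical helper)
        post_comment(pr, findings)   # post results (hypothetical helper)
        iterations += 1
    return iterations                # exits only when queue is confirmed empty

# Usage: the agent has no knowledge of queue depth at startup.
q = deque(["pr-101", "pr-102"])
n = drain_queue(q,
                review=lambda pr: f"findings for {pr}",
                post_comment=lambda pr, f: None)
```

Because the condition is re-checked each pass, items enqueued between iterations are picked up in the same run, which is what distinguishes this from a static Fan-Out over a known list.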

Structure

Diagram: AI PR review queue processor (pre-test variant)

Node | What it does | What it receives | What it produces
Queue Empty? | Evaluates the queue state before each fetch. XOR-split: routes to Fetch if items exist, exits if empty. This is the single loop condition — both entry check and exit gate. | Queue length signal from work queue | Route token: "queue non-empty" or "queue empty"
Fetch Next PR | Dequeues the next PR from the work queue. Atomically removes the item to prevent double-processing by concurrent agents. | Work queue reference | PR diff, metadata, linked issue context
AI Code Review | Runs static analysis, logic checks, and style review over the PR diff. Produces structured findings at line-level granularity. | PR diff + repo context | Review findings: severity, line refs, suggested fixes
Post Comment | Formats findings into GitHub review comments and calls the GitHub API. Marks the PR as reviewed. | Review findings + PR metadata | Posted review comments, PR status update

Key Metrics

Metric | Signal
Iterations per run | How many PRs processed per agent invocation. Tracks throughput and queue drain rate.
Loop exit reason | Did the loop exit on empty queue, timeout, or error? Unexpected exits signal queue or agent health issues.
Iteration duration variance | High variance signals inconsistent PR complexity or external API latency. Stable loops have narrow duration distributions.
Queue growth rate vs drain rate | If inbound rate exceeds drain rate, the loop will never terminate. Monitor as a capacity signal.

When to Use

Use when
Avoid when

Value Profile

Origin of Value | Where it appears | How it is captured
Future Cashflow | AI Code Review node | Review quality is constant per PR — the loop structure multiplies that quality across all items in the queue. The value per iteration is fixed; total value scales linearly with queue depth.
Governance | Queue Empty? check node | The exit condition encodes the processing policy — what counts as "done". A queue that allows concurrent writes while the agent processes means the condition must be evaluated atomically. Race conditions here break the governance invariant.
Conditional Action | Each iteration | Each loop body execution is a compute spend. Pre-test variant avoids any spend on empty queues. The condition check is cheap; the review is expensive. Correct routing at the gate is high-leverage.
Risk Exposure | Fetch Next PR node | Non-atomic dequeue creates duplicate processing risk in multi-agent deployments. Two agents checking simultaneously may both see "non-empty" and fetch the same PR. Fix: queue must implement atomic pop semantics.
VCM analog: Work Token loop. The queue is the token pool. Each dequeue consumes one token. The agent loops until the pool is empty. Token issuance (new PRs arriving) and token consumption (reviews completing) are concurrent — the loop condition observes a live token count, not a snapshot.
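The atomic-pop requirement can be illustrated with Python's thread-safe `queue.Queue`: `get_nowait()` removes exactly one token per call, so two concurrent agents can never dequeue the same PR. A sketch under the assumption of in-process threads (a distributed deployment would need the same semantics from a durable queue):

```python
import queue
import threading

work = queue.Queue()
for pr in ["pr-1", "pr-2", "pr-3", "pr-4"]:
    work.put(pr)                       # token issuance

seen = []
seen_lock = threading.Lock()

def agent():
    """One agent loop: pop tokens until the pool is empty."""
    while True:
        try:
            pr = work.get_nowait()     # atomic token consumption
        except queue.Empty:
            return                     # pool drained: loop terminates
        with seen_lock:
            seen.append(pr)            # stand-in for review + comment

threads = [threading.Thread(target=agent) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# each PR is processed exactly once across both agents
```

The same loop runs unchanged in every agent; only the queue enforces the no-double-processing invariant.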

Dynamics and Failure Modes

Infinite loop from non-terminating condition

New PRs are pushed to the queue faster than the agent can process them. The queue is never empty at check time. The loop runs indefinitely, consuming compute without progress toward termination. Fix: decouple the termination condition from queue depth — add a wall-clock timeout, a maximum iteration count, or a "drain mode" signal that closes the queue to new writes while the agent finishes.
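The decoupled termination condition can be sketched as a loop with three exit paths: empty queue, iteration cap, and wall-clock deadline. `pop_next` and `process` are hypothetical callables; the returned exit reason maps onto the "Loop exit reason" metric above.

```python
import time

def drain_bounded(pop_next, process, max_iterations=100, timeout_s=60.0):
    """Termination no longer depends solely on queue depth:
    exit on empty queue, iteration cap, or wall-clock timeout."""
    deadline = time.monotonic() + timeout_s
    for i in range(max_iterations):
        if time.monotonic() >= deadline:
            return ("timeout", i)             # external termination signal
        item = pop_next()
        if item is None:
            return ("empty", i)               # normal drain: queue confirmed empty
        process(item)
    return ("iteration_cap", max_iterations)  # bounded worst case

# A queue that refills faster than it drains would otherwise loop forever;
# the iteration cap guarantees a worst-case termination path.
items = iter(["pr-1", "pr-2", "pr-3"])
reason, count = drain_bounded(lambda: next(items, None), lambda pr: None)
```

Emitting the exit reason as a metric makes unexpected terminations (timeout, cap) directly observable.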

Post-test overshoot

In the post-test variant, the agent fetches and reviews a PR, then checks the queue. If the queue was empty when the final fetch happened but a new PR arrived before the check, the agent continues for one more iteration than the calling process expected. This is semantically correct but surprises callers that assume "queue empty at call time means zero iterations". Fix: document the variant explicitly and prefer pre-test when zero-iteration behavior is required.
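The at-least-one-iteration guarantee of the post-test variant is visible in a minimal do-while sketch (Python has no native do-while, so `while True` with a trailing check is the idiomatic translation; `process` is a hypothetical stand-in for the review-and-comment body):

```python
from collections import deque

def drain_post_test(queue, process):
    """do-while: the body (fetch + process) runs once before the
    condition is first evaluated, so at least one iteration occurs."""
    iterations = 0
    while True:
        item = queue.popleft() if queue else None  # body: fetch first
        if item is not None:
            process(item)
        iterations += 1
        if not queue:                              # condition checked after the body
            return iterations
```

Called on an empty queue, this still completes one (empty) iteration, which is exactly the caller-surprising behavior the failure mode above describes.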

Partial iteration failure breaks loop invariant

AI Code Review completes but Post Comment fails (GitHub API timeout). The PR is dequeued and reviewed but the comment is never posted. The loop re-checks and the queue shows the next item — the failed PR is silently skipped. Fix: use a two-phase approach. Mark the PR as "in review" on dequeue, and remove the in-review mark only after Post Comment succeeds. Failed reviews remain visible for retry.
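A minimal sketch of the two-phase approach, assuming an in-memory queue (a real deployment would back this with a durable store); the `fetch`/`ack`/`nack` names are hypothetical, not a specific library's API:

```python
class TwoPhaseQueue:
    """Items move pending -> in_review on fetch and are removed only
    after the comment is confirmed posted. A failure between review
    and post leaves the item visible for retry."""

    def __init__(self, items):
        self.pending = list(items)
        self.in_review = set()

    def fetch(self):
        if not self.pending:
            return None
        pr = self.pending.pop(0)
        self.in_review.add(pr)        # phase 1: mark, don't delete
        return pr

    def ack(self, pr):
        self.in_review.discard(pr)    # phase 2: remove only after success

    def nack(self, pr):
        self.in_review.discard(pr)
        self.pending.append(pr)       # failed post: requeue for retry

q = TwoPhaseQueue(["pr-1", "pr-2"])
pr = q.fetch()       # pr-1 marked in-review
q.ack(pr)            # comment posted: mark cleared
failed = q.fetch()   # pr-2 fetched...
q.nack(failed)       # ...post fails: pr-2 returns to pending
```

The loop invariant restored: no PR leaves the queue until its comment is posted.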

Variants

Variant | Modification | When to use
Pre-Test Loop (while) | Condition checked before body executes — zero iterations possible if condition is false on first check | Empty queue on arrival is valid and should produce no work — the common case for queue drain agents
Post-Test Loop (do-while) | Condition checked after body executes — at least one iteration is guaranteed | The loop body must execute once before termination can be evaluated (e.g., initial state setup or mandatory first pass)
Counted Loop | Iteration counter is the exit condition, not a data state check | Processing exactly N items per invocation — pagination, batch size limits, or rate-limiting scenarios
Loop with Accumulator | A result artifact grows across iterations (e.g., summary of all reviews). Final output is produced outside the loop after exit. | Aggregation across all iterations is required — e.g., a weekly PR quality report summarizing all reviews in the batch
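The Loop with Accumulator variant can be sketched in a few lines: per-iteration findings grow inside the loop, and the final report is assembled only after exit. `review` is a hypothetical per-PR helper.

```python
def drain_with_accumulator(queue, review):
    """Loop with Accumulator: findings grow across iterations;
    the aggregate report is produced outside the loop, after exit."""
    findings = []
    while queue:                        # pre-test loop over the live queue
        pr = queue.pop(0)
        findings.append(review(pr))     # accumulate per-iteration result
    return {"reviewed": len(findings),  # final artifact built after the loop
            "findings": findings}

report = drain_with_accumulator(["pr-1", "pr-2"], lambda pr: f"ok: {pr}")
```

Keeping the aggregation step outside the loop body keeps each iteration independent, which matters if iterations can fail and be retried.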

Related Patterns

Pattern | Relationship
40.42 Arbitrary Cycles | Unstructured generalization — multiple entry and exit points. Use only when the structured single-exit constraint cannot be satisfied.
50.52 Recursion | A loop where the body invokes the parent process. Use when items decompose hierarchically rather than processing uniformly.
10.15 Evaluator-Optimizer | Quality-gated loop — body executes until output meets a quality threshold rather than until a queue is empty.
30.31 Feedback Loop | Close the loop across process instances — outputs from one run influence the next run's parameters rather than looping within a single instance.

Investment Signal

Structured loops are the backbone of queue-draining agent architectures. The pattern is observable at the infrastructure level: a well-instrumented loop emits a metric per iteration, making throughput, error rate, and drain rate directly auditable. Systems built on structured loops are easier to scale than arbitrary cycle systems because the single exit condition is a clean scaling seam — add more agents, each running the same loop independently.

The moat is in queue design, not loop design. An atomic, durable work queue that prevents double-processing and survives agent failures is the critical infrastructure. Organizations that have built or integrated high-quality queue primitives compound their advantage because the loop pattern applies to every sequential agent workload in the system.

Red flag: a loop with no iteration limit and no external termination signal is an unbounded compute commitment. If the queue can grow faster than it drains, the agent is running a cost sink, not a work processor. Due diligence should verify that every structured loop has a bounded worst-case termination path.