30.35 Multiple Instances with a Priori Design-Time Knowledge

Multiple instances of a task execute concurrently; the number is fixed at design time and all must complete before the process continues. The instance count is a workflow constant, not a runtime variable.


Motivating Scenario

A compliance system runs exactly three parallel audit agents on every vendor application: Legal, Financial, and Technical. This structure is mandated by the firm's vendor onboarding policy — three dimensions, always, no exceptions. The workflow definition encodes the number 3. All three agents must complete before the Approve/Reject decision node activates.

The key insight: the instance count is a policy constant, not a data-driven variable. "Three audit dimensions" reflects a regulatory or organizational requirement that does not change per execution. This is structurally different from runtime-known counts (30.36) — here, no count-determination step is needed because the count is embedded in the process definition itself.
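The fixed fan-out can be made concrete with a minimal sketch: the three agents are hard-coded in the workflow definition and joined with `asyncio.gather`, which resumes only after every instance completes. The agent bodies and report fields here are illustrative placeholders, not a prescribed schema.

```python
import asyncio

# Hypothetical audit agents -- each returns a structured report.
async def legal_audit(app: dict) -> dict:
    return {"dimension": "legal", "status": "pass", "findings": []}

async def financial_audit(app: dict) -> dict:
    return {"dimension": "financial", "status": "pass", "risk_score": 0.2}

async def technical_audit(app: dict) -> dict:
    return {"dimension": "technical", "status": "pass", "security_score": 0.9}

# The fan-out of 3 is a constant in the process definition, not data.
AUDITS = (legal_audit, financial_audit, technical_audit)

async def run_vendor_audit(application: dict) -> dict:
    # AND split: launch all three instances concurrently.
    # AND join: gather() resumes only after every instance completes.
    reports = await asyncio.gather(*(audit(application) for audit in AUDITS))
    approved = all(r["status"] == "pass" for r in reports)
    return {"decision": "approved" if approved else "rejected", "reports": reports}

decision = asyncio.run(run_vendor_audit({"vendor": "Acme"}))
```

Because `AUDITS` is a module-level constant, changing the number of dimensions requires changing the process definition itself — exactly the design-time property this pattern describes.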

Structure

Concrete example: Vendor compliance audit pipeline

Node | What it does | What it receives | What it produces
Start Audit | AND split: simultaneously activates the Legal, Financial, and Technical audit agents. Fixed fan-out of 3. | Vendor application package | Three parallel audit activations, each with a context slice
Legal Audit | Checks regulatory compliance, contract terms, sanctions screening, and jurisdiction risks. | Vendor legal documents + jurisdiction context | Legal audit report: pass/flag/fail + findings
Financial Audit | Reviews financial statements, credit risk indicators, payment history, and fraud signals. | Vendor financial records + credit data | Financial audit report: risk score + red flags
Technical Audit | Assesses security posture, data handling practices, API compliance, and infrastructure maturity. | Vendor technical documentation + questionnaire | Technical audit report: security score + gaps
Approve / Reject | AND join: waits for all three audit reports, applies decision rules across the combined findings, and produces a vendor decision with reasoning. | Three audit reports | Vendor decision: approved / rejected / conditional

Key Metrics

Metric | Signal
Per-instance completion time | Latency distribution per audit type — identifies the consistent bottleneck across runs
Decision accuracy vs. human reviewer | Fraction of Approve/Reject decisions matching expert human review — the primary quality signal
Join wait time | Time between first and last instance completion — quantifies the cost of the slowest-instance bottleneck
Instance failure rate | Fraction of instances that fail or time out per cycle — a high rate on one dimension signals a fragile agent

When to Use

Use when
- The instance count is a design-time constant fixed by policy or process definition and does not vary per execution
- Every instance must complete before the downstream decision can proceed
- The parallel branches are independent and benefit from concurrent execution

Avoid when
- The instance count is only known at runtime (see 30.36); a count-determination step would then be needed
- Downstream steps do not need all instances to finish; an unsynchronized variant avoids the join bottleneck

Value Profile

Origin of Value | Where it appears | How it is captured
Future Cashflow | Approve/Reject decision quality | Parallel execution compresses audit cycle time from days to minutes. Faster vendor onboarding directly increases throughput.
Governance | AND join at decision node | The join enforces the policy that all three dimensions must be evaluated. Skipping any dimension is architecturally impossible — governance is encoded in the topology.
Risk Exposure | Slowest audit instance | Total latency equals the slowest agent. A single stuck Technical Audit blocks the decision regardless of the other two completing. Monitor per-instance latency independently.
VCM analog: Governance Token. The AND join is a governance constraint. No vendor is approved without all three audits passing through the join. This structural guarantee cannot be bypassed at runtime — the workflow engine enforces it. Equivalent to requiring M-of-M signatures before a transaction clears.

Dynamics and Failure Modes

Slowest-instance bottleneck

The decision node cannot activate until all three audits complete. If the Financial Audit contacts a slow external credit API, the Legal and Technical agents finish minutes earlier and idle. The effective wall-clock time equals the slowest agent. Fix: set per-instance timeouts; if an instance exceeds its budget, it submits a partial result flagged as "timeout" and the decision node applies degraded-mode rules.
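One way to sketch the timeout fix, assuming asyncio-based agents (the budget value and report fields are illustrative):

```python
import asyncio

AUDIT_BUDGET_S = 0.2  # hypothetical per-instance time budget

async def run_with_budget(name: str, coro) -> dict:
    # Enforce a per-instance timeout; on expiry, submit a flagged
    # partial result instead of blocking the AND join indefinitely.
    try:
        report = await asyncio.wait_for(coro, timeout=AUDIT_BUDGET_S)
        return {"dimension": name, "timed_out": False, **report}
    except asyncio.TimeoutError:
        return {"dimension": name, "timed_out": True, "status": "unknown"}

async def slow_financial_audit() -> dict:
    await asyncio.sleep(1.0)  # simulates a slow external credit API
    return {"status": "pass"}

async def fast_legal_audit() -> dict:
    return {"status": "pass"}

async def main():
    reports = await asyncio.gather(
        run_with_budget("legal", fast_legal_audit()),
        run_with_budget("financial", slow_financial_audit()),
    )
    # Degraded-mode rule: any timeout forces a conditional decision.
    decision = "conditional" if any(r["timed_out"] for r in reports) else "approved"
    return decision, reports

decision, reports = asyncio.run(main())
```

The join still receives exactly one result per instance, so its cardinality contract is preserved; only the result's quality is degraded.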

Instance result schema mismatch

Each audit agent produces output in a slightly different format. The Approve/Reject node expects a uniform schema. If one agent produces a non-conformant report (e.g., Financial Audit returns a free-text paragraph instead of a structured risk score), the decision node cannot process it. Fix: define a shared audit report schema at design time; each agent validates its output before submitting to the join.
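A minimal sketch of such a design-time schema check, with illustrative field names (the real schema would carry whatever fields the decision rules need):

```python
# Shared audit report schema, fixed at design time. Each agent validates
# its own output before submitting to the join.
REQUIRED_FIELDS = {
    "dimension": str,   # "legal" | "financial" | "technical"
    "status": str,      # "pass" | "flag" | "fail"
    "findings": list,   # structured findings, never free text
}

def validate_report(report: dict) -> dict:
    for field, expected in REQUIRED_FIELDS.items():
        if field not in report:
            raise ValueError(f"audit report missing field: {field}")
        if not isinstance(report[field], expected):
            raise ValueError(f"field {field!r} must be {expected.__name__}")
    return report

ok = validate_report({"dimension": "legal", "status": "pass", "findings": []})

# A free-text paragraph instead of a structured report is rejected
# before it ever reaches the decision node.
try:
    validate_report({"dimension": "financial", "status": "low risk overall"})
    schema_error = None
except ValueError as err:
    schema_error = str(err)
```

Catching the mismatch at the producing agent keeps the failure local; the join never has to reason about malformed inputs.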

Correlated failures across instances

All three agents call the same vendor portal to fetch documents. If the portal is down, all three fail simultaneously. The AND join never receives its required inputs. Fix: route document fetching to a pre-fetch step that runs before the AND split; distribute cached artifacts to each audit agent, breaking the shared dependency.
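The pre-fetch restructuring can be sketched as follows; the portal call and document names are hypothetical, and a counter stands in for observing that the shared dependency is hit exactly once:

```python
import asyncio

FETCH_COUNT = 0  # counts portal calls, to show the shared dependency is gone

async def fetch_vendor_documents(vendor_id: str) -> dict:
    # Single call to the (hypothetical) vendor portal, made once
    # in a pre-fetch step that runs before the AND split.
    global FETCH_COUNT
    FETCH_COUNT += 1
    return {"vendor_id": vendor_id, "documents": ["contract.pdf", "financials.xlsx"]}

async def audit(dimension: str, cached: dict) -> dict:
    # Each agent works from the cached artifacts; no live portal dependency.
    return {"dimension": dimension, "doc_count": len(cached["documents"])}

async def main():
    cached = await fetch_vendor_documents("acme-123")  # pre-fetch step
    return await asyncio.gather(                       # AND split
        audit("legal", cached),
        audit("financial", cached),
        audit("technical", cached),
    )

reports = asyncio.run(main())
```

If the portal is down, the failure now occurs once, before the split, where a single retry policy can handle it — rather than three correlated failures starving the join.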

Variants

Variant | Modification | When to use
M-of-N Approval | Join activates when M of N instances complete (e.g., 2 of 3 audits) | One dimension is optional or best-effort; the process must not stall on a single low-priority instance
Weighted Decision | Approve/Reject node applies different weights per audit dimension | Legal findings are more consequential than Technical gaps; decision logic must reflect this asymmetry
Tiered Audit | A fast pre-screening agent runs before the AND split; only passes proceed to the full three-way audit | High volume of applications; most fail basic eligibility before the full audit cost is incurred
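The M-of-N variant from the table above can be sketched with `asyncio.wait`: the join releases once at least M instances have completed, and stragglers past the budget are cancelled. The task bodies, budget, and M value are illustrative.

```python
import asyncio

async def m_of_n_join(tasks, m: int, budget_s: float) -> list:
    # Activate downstream once M of N instances complete, instead of
    # waiting for all N. Remaining stragglers are cancelled.
    done: list = []
    pending = {asyncio.ensure_future(t) for t in tasks}
    while pending and len(done) < m:
        finished, pending = await asyncio.wait(
            pending, timeout=budget_s, return_when=asyncio.FIRST_COMPLETED
        )
        if not finished:
            break  # budget exhausted with fewer than M results
        done.extend(f.result() for f in finished)
    for p in pending:
        p.cancel()
    await asyncio.gather(*pending, return_exceptions=True)
    return done

async def quick(name: str) -> dict:
    return {"dimension": name}

async def stuck(name: str) -> dict:
    await asyncio.sleep(60)  # simulates a hung low-priority instance
    return {"dimension": name}

async def main():
    return await m_of_n_join(
        [quick("legal"), quick("financial"), stuck("technical")],
        m=2, budget_s=0.5,
    )

reports = asyncio.run(main())
```

Note that relaxing the join to M-of-N trades away the structural governance guarantee of the full AND join; the decision rules must then record which dimensions were actually evaluated.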

Related Patterns

Pattern | Relationship
60.61 MI No Sync | The unsynchronized variant — use when downstream does not need all instances to complete
30.36 MI Runtime | When the instance count is not a design-time constant but is known before execution starts
20.25 Consensus (M-of-N) | When instances vote rather than audit — consensus requires agreement, not just completion
20.22 Human-in-the-Loop | Adds a human review gate after the AND join for high-stakes decisions