A task is enabled by an external signal. The trigger is lost if no task instance is waiting to consume it when it fires. Semantics are fire-and-forget: the signal source does not know whether the trigger was consumed or dropped, and does not retry. Contrast with 70.72 Persistent Trigger, which queues signals until a consumer is available.
A market data pipeline monitors equity prices and fires a "Price Alert" signal whenever a stock crosses a predefined threshold. An AI analysis agent is designed to respond to these alerts by running a short-term momentum analysis. However, the analysis agent pool has limited capacity — only a fixed number of agents run concurrently. When all agents are busy, a threshold crossing fires the alert but no agent is idle to consume it. The trigger is dropped: the move is logged but not analyzed.
The key design decision: the pipeline owner has determined that a price signal older than a few seconds has no analysis value. Queuing stale alerts would cause the analysis agent to run on outdated market conditions, potentially producing harmful recommendations. Dropping the trigger is the correct behavior — only real-time signals are worth consuming. This is the defining criterion for the transient trigger: staleness makes the signal worthless, so loss is preferable to delay.
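The drop-or-consume semantics can be sketched in a few lines. This is a minimal single-threaded illustration, not a production dispatcher; `TransientDispatcher`, `Alert`, and the `analyze` method are hypothetical names chosen for the example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Alert:
    ticker: str
    price: float
    fired_at: float = field(default_factory=time.monotonic)

class TransientDispatcher:
    """Fire-and-forget: deliver the alert to an idle agent or drop it.
    Never blocks, never queues, never retries."""
    def __init__(self, idle_agents):
        self.idle_agents = idle_agents   # pool of currently idle agents
        self.dropped = []                # drop log for post-hoc audit

    def fire(self, alert):
        if self.idle_agents:
            agent = self.idle_agents.pop()
            agent.analyze(alert)         # consumed: analysis runs now
            return True
        self.dropped.append(alert)       # dropped: recorded, never retried
        return False
```

Note that `fire` returns immediately in both branches: the source learns nothing beyond its own call site, matching the fire-and-forget contract.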
| Metric | What it measures and signals |
|---|---|
| Trigger drop rate | Fraction of fired signals that were not consumed. Target: below the acceptable miss threshold for the signal type. Rising drop rate signals capacity shortfall. |
| Agent utilization at trigger time | Fraction of agents busy when a trigger fires. High utilization explains drops. If consistently above 80%, pool size is under-provisioned for the signal rate. |
| Signal-to-drop correlation with volatility | Are drops clustered around high-volatility periods? Confirms that coverage collapse happens exactly when it is most costly. |
| Time from trigger to analysis completion | For consumed triggers: end-to-end latency. Measures whether analysis is fast enough to be actionable given signal time-value decay. |
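The first two metrics fall directly out of the event log. A minimal sketch, assuming each log entry is a dict with an `outcome` field (`"consumed"` or `"dropped"`) and `busy_agents`/`pool_size` sampled at trigger time; these field names are illustrative, not a prescribed schema.

```python
def drop_rate(events):
    """Fraction of fired triggers that were not consumed."""
    fired = len(events)
    dropped = sum(1 for e in events if e["outcome"] == "dropped")
    return dropped / fired if fired else 0.0

def mean_utilization_at_trigger(events):
    """Average fraction of agents busy at the moment each trigger fired."""
    if not events:
        return 0.0
    return sum(e["busy_agents"] / e["pool_size"] for e in events) / len(events)
```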
| Node | What it does | What it receives | What it produces |
|---|---|---|---|
| Price Monitor | Continuously evaluates incoming market feed against threshold conditions. Fires an alert signal when a threshold crossing is detected. XOR-split: routes to Alert Handler or terminates the monitoring cycle. | Real-time market feed, threshold configuration | Alert signal with timestamp, ticker, price, threshold |
| Alert Handler | Checks whether an idle analysis agent is available. Routes the signal to the analysis agent if one is waiting; otherwise routes to Trigger Dropped. This is the transient semantics gate — it does not wait or queue. | Alert signal | Route token: consumed or dropped |
| Price Analysis | Runs momentum analysis on the triggered price event: recent trend, volume profile, correlated assets. Produces a structured analysis for the event log. | Alert signal + real-time market context | Analysis report: signal strength, trend, recommendation |
| Trigger Dropped | Records that a trigger was fired but not consumed. Writes to the event log with the original signal metadata. Enables post-hoc audit of missed coverage. | Alert signal (undelivered) | Drop record: timestamp, ticker, reason (no agent available) |
| Event Log | XOR-join: receives either an analysis report or a drop record and appends it to the durable event log. Downstream consumers read the log for reporting and capacity planning. | Analysis result or drop record | Appended event log entry |
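The routing between these nodes can be sketched as one function: the Alert Handler XOR-splits to Price Analysis or Trigger Dropped, and the Event Log XOR-joins both paths. A sketch under assumed shapes (alerts as dicts with `ticker`/`ts`, the log as an in-memory list); real nodes would be separate services.

```python
def handle_alert(alert, idle_agents, event_log):
    """Alert Handler: the transient semantics gate. Does not wait or queue."""
    if idle_agents:
        agent = idle_agents.pop()
        report = agent.analyze(alert)          # Price Analysis node
        event_log.append(("analysis", report)) # Event Log: XOR-join, consumed path
    else:
        drop = {"ticker": alert["ticker"], "ts": alert["ts"],
                "reason": "no agent available"}  # Trigger Dropped node
        event_log.append(("drop", drop))         # Event Log: XOR-join, dropped path
```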
| Origin of Value | Where it appears | How it is captured |
|---|---|---|
| Future Cashflow | Price Analysis node | Value is created only when a trigger is consumed and analysis runs on a live signal. Dropped triggers represent opportunity cost — the fraction of threshold crossings analyzed is the effective coverage rate. Higher consumer capacity improves coverage, directly increasing value produced. |
| Governance | Alert Handler decision node | The agent-availability check is a capacity governance constraint. It prevents over-commitment: the system will not queue work beyond what can be processed in real time. The drop-or-consume decision is a policy, not a failure — it encodes the organization's time-value judgment on stale signals. |
| Conditional Action | Price Monitor | The monitor runs continuously and fires only on threshold crossings. This is a conditional action cost model — the system pays for monitoring continuously but pays for analysis only on consumed triggers. Cost scales with signal frequency, not with consumer capacity. |
| Risk Exposure | Trigger Dropped node | Systematic drops during high-volatility periods mean the analysis agent is least available precisely when signals are most frequent and most valuable. Drop rate and signal value are inversely correlated — capacity planning must target peak signal rate, not average rate. |
VCM analog: Spot market token. The trigger is a token that expires immediately if not consumed. No queueing, no reservation, no retry. The source issues the token; the consumer either takes it or it vanishes. Value from unclaimed tokens is permanently lost — there is no settlement mechanism.
Market volatility events produce signal bursts — many threshold crossings in seconds. All agents become occupied analyzing the first signals, and the burst tail is entirely dropped. The analysis log shows a gap precisely when market conditions were most interesting. Fix: size the agent pool for burst capacity, not steady-state. Use historical volatility distributions to size the pool for the P95 signal rate, not the mean rate.
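One way to turn that sizing rule into a number, using only the standard library. This is a sketch: `rates` is a hypothetical historical series of signals-per-second samples, and the pool size follows Little's law (concurrent work = arrival rate × service time) applied at the 95th percentile.

```python
import math
import statistics

def target_pool_size(rates, mean_analysis_seconds):
    """Agents needed to absorb the P95 burst rate, not the average rate."""
    p95_rate = statistics.quantiles(rates, n=20)[-1]  # 95th percentile
    return math.ceil(p95_rate * mean_analysis_seconds)
```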
The Trigger Dropped path is implemented but the log write fails silently (disk full, network partition). Dropped triggers produce no record. The event log appears clean, giving a false impression of full coverage. Fix: the drop record write must be synchronous and must itself have a fallback — write to a local buffer and flush asynchronously with at-least-once delivery to the event log.
The Alert Handler checks an agent availability registry that is updated with a 500ms lag. An agent that received a trigger 200ms ago is still marked as idle in the registry. The next trigger is routed to it, producing a conflict — two triggers delivered to one agent simultaneously. Fix: availability checks must be synchronous with agent state. Use a pull-based model where agents claim triggers from a queue rather than a push-based model where the handler selects an agent.
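A sketch of the pull-based fix: an agent registers readiness before any trigger can be routed to it, so the availability signal is never stale. The class and method names are illustrative; a small timing race remains between an agent timing out and a trigger in flight (the alert then waits for the next ready agent), which a production version would have to close.

```python
import queue
import threading

class ClaimBoard:
    """Agents claim triggers; the handler never selects an agent directly."""
    def __init__(self):
        self._ready = threading.Semaphore(0)  # one permit per waiting agent
        self._alerts = queue.Queue()

    def agent_wait(self, timeout=None):
        """Agent side: announce readiness, then block for one alert."""
        self._ready.release()
        try:
            return self._alerts.get(timeout=timeout)
        except queue.Empty:
            self._ready.acquire(blocking=False)  # try to reclaim the permit
            return None

    def fire(self, alert):
        """Handler side: deliver only if an agent is already waiting."""
        if self._ready.acquire(blocking=False):
            self._alerts.put(alert)
            return True
        return False  # no waiting agent: transient drop
```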
| Variant | Modification | When to use |
|---|---|---|
| Transient with TTL | Trigger is queued but expires after a fixed time-to-live. Consumed if an agent becomes available within the TTL; dropped on expiry. | Short-lived tolerance for delay — a few seconds of delay is acceptable, but stale signals beyond a threshold should not be processed |
| Sampled Transient | Only a fraction of fired triggers are routed to consumers (e.g., 1 in N). The rest are dropped by design. | Signal rate far exceeds consumer capacity and a representative sample is sufficient — e.g., statistical monitoring of high-frequency sensor streams |
| Priority Transient | Triggers carry priority scores. When all agents are busy, a high-priority trigger preempts the lowest-priority in-progress analysis; lower-priority triggers are dropped as usual. | Not all signals are equal — a large threshold crossing is more valuable than a small one and should preempt in-progress analysis of lower-priority signals |
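The Transient-with-TTL variant is the simplest to sketch: alerts queue briefly but are discarded at claim time once older than the TTL. Class and parameter names are illustrative; the optional `now` parameter exists only to make the expiry logic testable.

```python
import time
from collections import deque

class TTLTrigger:
    """Queue triggers, but expire any not claimed within ttl_seconds."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._q = deque()  # (fired_at, alert), oldest first

    def fire(self, alert, now=None):
        self._q.append((now if now is not None else time.monotonic(), alert))

    def claim(self, now=None):
        """Return the oldest still-fresh alert, silently expiring stale ones."""
        now = now if now is not None else time.monotonic()
        while self._q:
            fired_at, alert = self._q.popleft()
            if now - fired_at <= self.ttl:
                return alert       # consumed within the TTL
        return None                # everything pending had expired
```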
| Pattern | Relationship |
|---|---|
| 80.84 Persistent Trigger | The durable alternative — signals are queued and never dropped. Use when every signal must eventually be processed regardless of consumer availability. |
| 70.71 Deferred Choice | Multiple competing event sources — the process waits for any of several triggers. Transient semantics apply when the first consumed trigger cancels the others. |
| 10.14 Retry-Fallback | For cases where a dropped trigger should trigger a retry on a lower-priority consumer rather than a silent drop. |
Transient trigger systems are the correct architecture for real-time signal processing where latency matters more than completeness. The pattern is a first-class design choice, not a limitation to engineer around. Organizations that explicitly model drop semantics are making a time-value judgment — they have decided that stale analysis is worse than no analysis. This is a meaningful architectural commitment that reveals how the organization thinks about information freshness.
The strategic value is in drop rate management. Systems that maintain low drop rates during peak signal periods have invested in agent pool sizing and fast agent startup. These systems have a capacity moat — competitors with slower agent initialization or smaller pools miss more signals during the moments that matter most.
Red flag: a system described as "transient" that has no drop logging is a system that has accepted signal loss without measuring it. Drop rate is the primary health metric for this pattern. If the organization cannot report their historical drop rate per signal type, they have no visibility into the actual coverage of their monitoring system.