Conversational Try/Catch: How RGS Handles Late Intent Switching in Chatbots

2026-04-27 by Chris

Most chatbot builders break when users change their mind mid-conversation. RGS makes intent switching a first-class feature — no workarounds, no duct tape.

If you've ever built a conversational flow, you know the problem: users don't follow your script. They start asking about their order, then halfway through, they want your office address. Traditional flow builders treat this as an edge case. Reactive Graph Sequencing treats it as business as usual.

In this post, we'll walk through a concrete scenario — an order details chatbot — that demonstrates how RGS handles late intent switching using a pattern we call Conversational Try/Catch.

You can try and inspect a live example here: https://wanderer-flow.de/flows/Catch-unexpected-inputs-with-Try-and-Throw-nodes-lfvdiy3m25xrk68ogeras94402y46fyz


The Scenario

A customer opens a chat with your e-commerce support bot. The conversation is designed as follows:

  1. A Prompt node captures their initial message.
  2. A GPT node receives the input, classifies the intent, and determines what the user needs.
  3. A Queue node picks up the identified intent and routes it to the appropriate Task node — in this case, "Order Details."
  4. The task flow continues. A second Prompt node further down the conversation asks the user for their order number.
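
The happy path above can be sketched in a few lines of plain Python. This is an illustrative stand-in, not the Wanderer API: `classify_intent` plays the role of the GPT node, the `TASKS` dict plays the Queue node, and the messages are invented for the example.

```python
def classify_intent(message: str) -> str:
    """Toy stand-in for the GPT node's intent classification."""
    if "order" in message.lower():
        return "order_details"
    if "address" in message.lower():
        return "company_address"
    return "unknown"

def order_details_task() -> str:
    # The Task node's follow-up prompt asking for the order number.
    return "Sure! What's your order number?"

def company_address_task() -> str:
    return "We're at 123 Example Street."  # hypothetical address

# Queue node: route the classified intent to the matching Task node.
TASKS = {
    "order_details": order_details_task,
    "company_address": company_address_task,
}

user_message = "I'd like to check on my order."   # Prompt node output
intent = classify_intent(user_message)            # GPT node
reply = TASKS[intent]()                           # Queue -> Task
print(reply)                                      # asks for the order number
```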

Simple enough. But here's where it gets interesting.

Instead of entering an order number, the user types: "What's your address?"

This has nothing to do with order tracking. In most chatbot systems, this is where things go sideways. But in RGS, this is where things get elegant.

  1. A Throw node attached to the order-number prompt detects that the input doesn't match expectations. It performs a clean exit and throws its state upward.
  2. A Try node, positioned before the GPT node at the top of the conversation graph, catches that thrown state.
  3. The intercepted prompt — the one that couldn't be handled in the specialized order-number context — is sent back to the GPT node for re-evaluation.
  4. The GPT node classifies the new intent. The Queue and Task nodes react accordingly. The conversation seamlessly pivots to answering the address question.
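
The four steps above can be approximated with Python's own exception machinery. This is only an analogy, not how RGS works internally (RGS pushes state rather than raising exceptions), and every name here is hypothetical:

```python
class IntentSwitch(Exception):
    """Plays the role of the Throw node: carries the unhandled input upward."""
    def __init__(self, user_input: str):
        self.user_input = user_input

def expect_order_number(user_input: str) -> str:
    # The specialized order-number prompt: anything non-numeric doesn't belong here.
    if not user_input.strip().isdigit():
        raise IntentSwitch(user_input)   # clean exit, state thrown upward
    return f"Looking up order {user_input}..."

def classify_intent(message: str) -> str:
    # Toy stand-in for the GPT node's re-evaluation.
    return "company_address" if "address" in message.lower() else "order_details"

def handle(user_input: str) -> str:
    try:                                  # plays the role of the Try node
        return expect_order_number(user_input)
    except IntentSwitch as e:
        # Feed the intercepted input back into the classification pipeline.
        intent = classify_intent(e.user_input)
        return f"Re-routed to intent: {intent}"

print(handle("12345"))                    # stays in the order flow
print(handle("What's your address?"))     # pivots to the address intent
```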

The user never notices the plumbing. The conversation just flows.


How Traditional Chatbot Builders Handle This (Spoiler: Poorly)

If you've worked with conventional flow builders — whether visual drag-and-drop platforms, Dialogflow-style NLU systems, or custom-coded solutions — you've likely encountered these pain points when dealing with mid-conversation intent changes:

1. Rigid, Linear Flows

Most chatbot builders model conversations as decision trees or linear sequences. Once a user enters a branch (say, "Order Details"), the flow is locked in. There's no native mechanism to jump back to an earlier decision point and re-evaluate. If the user says something unexpected, the bot either asks again, shows a fallback message, or breaks entirely.

2. Fallback Hell

The standard approach to unexpected input is a "fallback intent" — a catch-all that fires when nothing matches. But fallbacks are blunt instruments. They don't know where the user was in the conversation or what context was active. They can't redirect the user to a meaningful new path. At best, they say "Sorry, I didn't understand that. Could you try again?" At worst, they restart the entire conversation.

3. Global Intent Detection as a Band-Aid

Some platforms try to solve this with "global intents" — intents that can fire from anywhere in the conversation. But this creates its own problems: global intents compete with local ones, priority conflicts emerge, and the flow designer ends up managing a tangled web of overrides. The more intents you add, the more brittle the system becomes.

4. Manual State Management

Developers who need real intent-switching capability often resort to custom code. They store conversation state in variables, write middleware to detect out-of-scope inputs, manually rewind the flow to an earlier point, and hope the re-entry doesn't corrupt the context. This works — until the conversation gets complex. Then it becomes a maintenance nightmare.

5. No Retroactive Re-Evaluation

This is the fundamental limitation. Traditional flow builders execute forward. Once a node has run, its result is final. There's no concept of going back to a previous node, changing its state, and having everything downstream automatically recalculate. If you want to re-route a conversation, you have to explicitly code every possible re-routing path.


Why Conversational Try/Catch Works So Well in RGS

The RGS approach to this problem isn't a workaround or a special feature bolted on top. It emerges naturally from how Reactive Graph Sequencing works at a fundamental level.

The Key Insight: Influencing Earlier Node States

In RGS, every node holds state. And crucially, state can be pushed from one node to any other node in the graph — including nodes that are earlier in the sequence. This is the superpower that makes Conversational Try/Catch possible.

When the Throw node fires, it doesn't just emit an error signal. It pushes its state — the unhandled user input — to the Try node that sits before the GPT node at the top of the graph. This changes the Try node's state.

And here's where RGS fundamentally differs from every traditional flow builder: when a node's state changes, the entire graph re-traverses and generates a new execution sequence.

The graph doesn't just "continue from where it left off." It re-evaluates everything. The GPT node sees the new input. The Queue and Task nodes receive the new intent classification. The entire downstream sequence recalculates. The conversation adapts.
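
A minimal sketch of that state-driven behavior, assuming a deliberately simplified graph (this is not RGS's actual implementation): any state change triggers a full re-traversal from the top, rather than resuming mid-flow.

```python
class Graph:
    """Toy state-driven graph: every state change re-sequences everything."""
    def __init__(self):
        self.state = {"input": None}
        self.log = []   # records each full traversal's outcome

    def set_state(self, key, value):
        self.state[key] = value
        self.resequence()   # any state change re-traverses the whole graph

    def resequence(self):
        msg = self.state["input"]
        if msg is None:
            return
        # GPT node: classify whatever input the graph currently holds.
        intent = "address" if "address" in msg else "order"
        # Queue/Task nodes: react to the current classification.
        task = {"order": "ask for order number",
                "address": "send office address"}[intent]
        self.log.append((msg, intent, task))

g = Graph()
g.set_state("input", "track my order")        # initial sequence
g.set_state("input", "what's your address")   # Throw pushes new input upward
print(g.log[-1])   # downstream nodes reacted to the new state automatically
```

Note that the GPT and Queue/Task logic never checks *why* the state changed; it simply re-runs against the current state, which is the point.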

It's Not Exception Handling — It's Re-Sequencing

In traditional programming, try/catch is about handling errors. In RGS, the Try/Throw pattern is about redirecting the flow of state through the graph. The Throw node doesn't signal "something went wrong." It signals "this input belongs somewhere else" — and it knows exactly where to send it.

This is possible because the Throw node has access to the inherited traversal sequence, so it knows exactly which Try node preceded it. No manual wiring is involved. This is pure graph theory.

Continuous Re-Evaluation, Not One-Shot Execution

Traditional flow builders are event-driven: a message arrives, the flow processes it, done. RGS is state-driven: when state changes anywhere in the graph, the entire graph re-sequences. This means:

  • The GPT node doesn't need to "know" that the input was rerouted. It just processes whatever state it currently has.
  • The Queue and Task nodes don't need special "intent changed" logic. They simply respond to the new sequence.
  • The conversation messages recalculate automatically. Old messages that no longer apply can become invalid. New messages appear based on the current path.

The graph continuously asks: "Given everything we know right now, what should be happening?" That question is re-answered every time state changes. This is what makes late intent switching feel seamless rather than forced.

Clean Separation of Concerns

Notice how each node in this pattern has a single, clear responsibility:

  • Prompt node: Capture user input
  • GPT node: Classify intent
  • Queue node: Route to the right task
  • Task node: Execute the task logic
  • Throw node: Signal that input doesn't belong here
  • Try node: Catch rerouted input and feed it back into the classification pipeline

No node needs to understand the full conversation flow. No node contains special-case logic for intent switching. The behavior emerges from the graph structure and RGS's reactive re-sequencing. Add a new intent? Add a new task node. The Try/Throw pattern handles the rerouting automatically.
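
To make that extensibility concrete, here is a sketch using a hypothetical dict-based routing table (again, invented names, not the Wanderer API): adding an intent means registering one new Task entry, with no changes to any rerouting logic.

```python
# Queue node as a routing table: intent name -> Task node.
TASKS = {
    "order_details": lambda: "What's your order number?",
    "company_address": lambda: "We're at 123 Example Street.",
}

# Add a new intent: one new entry, nothing else changes.
TASKS["shipping_info"] = lambda: "Standard shipping takes 2-4 days."

def route(intent: str) -> str:
    # Unknown intents fall through to a generic reply.
    return TASKS.get(intent, lambda: "Sorry, I can't help with that yet.")()

print(route("shipping_info"))
```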


The Bigger Picture

This order-tracking example is just one instance of a much broader capability. The Conversational Try/Catch pattern works for any situation where user input might not match the current context:

  • A user configuring a product who suddenly asks about shipping
  • A patient filling out a medical form who asks about insurance coverage
  • A lead in a sales flow who pivots to a support question

In each case, the specialized context (product config, medical form, sales flow) can Throw the unexpected input back to a general-purpose classifier, which re-evaluates and re-routes the conversation — all without losing context, without custom code, and without the user ever feeling like the bot got confused.

This is what happens when intent switching isn't an edge case you handle, but a natural consequence of how your system works.


Wanderer is the first flow builder built on Reactive Graph Sequencing. Start building your own adaptive conversations at wanderer-flow.de.