Your user sends: "Opening hours? Address? I want skincare products." Most chatbot builders pick one. Wanderer processes all three.
Try it yourself. Just type to the chat on this page: "I would like to buy a skin cream, please, and I have questions about my last order. And what are the opening hours?"
Watch what happens. The bot doesn't panic. It doesn't pick the first intent and ignore the rest. It doesn't ask you to "please repeat your other questions one at a time." It identifies all three requests, tells you it found them, and works through each one — sequentially, transparently, and completely.
This is multi-intent serialization, and it's one of the most practical things Wanderer can do.
Think about how people actually talk to chatbots. They don't send one clean, perfectly scoped request per message. They dump everything at once. It's natural. It's efficient. It's how you'd talk to a human.
"What are your opening hours, where are you located, and do you carry organic skincare?"
That's three distinct intents packed into a single input. And here's what happens in most chatbot builders on the market: the system detects one intent — usually the first or the strongest match — and runs with it. The other two? Gone. Silently dropped. The user has to repeat themselves, which means frustration, which means abandonment.
This isn't a niche edge case. It happens constantly. Anyone who's ever used a chatbot has experienced it. Anyone who's ever built one has struggled with it. The standard workaround is to tell users to ask one thing at a time. That's not a solution. That's an admission of failure.
Wanderer solves this with a structural approach, not a hack. Here's the actual architecture behind multi-intent serialization:
The user's message enters through a Prompt Input Node. From there, it's sent to a GPT Node, which analyzes the raw input and breaks it down into structured intents — returned as JSON. This is where the intelligence lives: the GPT Node doesn't just classify the message into a single category. It identifies every intent present and outputs them in a structured format that the rest of the graph can work with.
It looks like this:
{
  "intent": {
    "order": true,
    "store": false,
    "product": true
  }
}
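To make the structure concrete, here's a minimal sketch of how a downstream node might consume that JSON. The `active_intents` helper is hypothetical, not Wanderer's actual API; it just shows the idea of filtering the boolean flags down to the intents that were actually detected:

```python
import json

def active_intents(gpt_output: str) -> list[str]:
    """Parse the GPT Node's JSON output and keep only intents flagged true."""
    data = json.loads(gpt_output)
    return [name for name, flagged in data["intent"].items() if flagged]

# The structure shown above, as the GPT Node might return it:
gpt_output = '{"intent": {"order": true, "store": false, "product": true}}'
print(active_intents(gpt_output))  # ['order', 'product']
```

The single-category classifier of a traditional builder would collapse this to one label; here, every detected intent survives the parse.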
The structured JSON flows into a Queue Node. This is the orchestrator. The Queue Node doesn't execute anything itself — it organizes. Using Reactive Graph Sequencing (RGS), it discovers which Task Nodes in the graph are contextually relevant based on the intents it received. The edges connecting the Queue Node to the Task Nodes determine which tasks get activated: if the GPT Node identified "opening hours," "skincare products," and "order inquiry" as intents, only the Task Nodes connected via matching edges get unlocked.
The order in which tasks are processed is determined by edge strength — the weight of the connection between the Queue Node and each Task Node. This gives you full visual control over task priority, right there in the editor.
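The unlock-and-prioritize step can be sketched as follows. The edge table and task names are illustrative assumptions, not Wanderer internals; the point is that only edges matching a detected intent unlock a task, and edge strength alone decides the order:

```python
# Hypothetical model of the Queue Node's edges: (intent, task, edge_strength).
# In the editor these are the visual connections from the Queue Node to each Task Node.
EDGES = [
    ("store",   "answer_opening_hours", 0.9),
    ("product", "show_skincare",        0.6),
    ("order",   "handle_order_inquiry", 0.3),
]

def build_queue(detected: set[str]) -> list[str]:
    """Unlock only tasks whose intent was detected, strongest edge first."""
    unlocked = [(task, weight) for intent, task, weight in EDGES
                if intent in detected]
    unlocked.sort(key=lambda tw: tw[1], reverse=True)  # edge strength = priority
    return [task for task, _ in unlocked]

print(build_queue({"order", "product"}))
# ['show_skincare', 'handle_order_inquiry']
```

The `store` task never enters the queue here because its intent wasn't detected, and `show_skincare` runs before `handle_order_inquiry` purely because its edge is stronger.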
Once the queue knows what to do and in what order, it begins working. It sets the first Task Node to running = true. That task's entire subflow executes — messages are sent, user input is collected, APIs are called, whatever the flow requires.
Each Task Node is aware of its position in the sequence. It knows whether it's the first, second, or third task, and it knows the total task count. This allows the bot to adapt its messaging dynamically: different framing for the first task versus transitions into subsequent ones, and different behavior depending on whether there's one task or five.
Here's the critical piece. Further down each task's branch sits a Done Node. When a Done Node executes, it signals to the Queue Node that its parent task is complete. The queue then moves on to the next task and sets that one to running = true.
The beauty of this design is that the Done Node determines when a task is finished — not the Queue Node, and not a timer. This means it doesn't matter whether a task completes instantly (like sending store hours) or requires extended interaction (like asking for an order number, waiting for the user to type it, validating it, and forwarding it to support). The Done Node fires when it fires. The queue waits. No polling, no timeouts, no race conditions.
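That completion mechanic can be reduced to a few lines. This is a sketch under assumptions (a plain Python class standing in for the Queue Node, with `done()` playing the role of the Done Node), but it captures the essential property: the queue advances only when a task explicitly reports completion, never on a timer:

```python
class Queue:
    """Minimal sketch of Done-Node-driven sequencing: one task runs at a
    time, and only an explicit done() call advances the queue."""

    def __init__(self, tasks):
        self.tasks = tasks
        self.index = -1
        self.running = None
        self._advance()

    def _advance(self):
        self.index += 1
        if self.index < len(self.tasks):
            self.running = self.tasks[self.index]  # running = true
        else:
            self.running = None                    # queue drained

    def done(self, task):
        """The Done Node fires: mark the running task complete, move on."""
        assert task == self.running, "only the running task may finish"
        self._advance()

q = Queue(["hours", "skincare", "order"])
q.done("hours")       # an instant task: fires done immediately
print(q.running)      # 'skincare'
q.done("skincare")
print(q.running)      # 'order' -- stays running as long as the user needs
```

Note what's absent: no sleep, no timeout, no polling loop. If the `order` task spends five minutes waiting for the user to find their order number, the queue simply waits with it.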
This is what makes the system genuinely sequential rather than just parallel with a wrapper. Each task gets the time and space it needs to fully resolve before the next one begins.
Because each Task Node receives metadata from the Queue — its position, the total count, what's been completed — the bot can shape its communication at every stage.
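A hypothetical framing function shows how that metadata translates into wording. The specific phrasings are placeholders (in Wanderer they'd be whatever you configure in the editor); what matters is that position and total count drive the tone:

```python
def frame(position: int, total: int, topic: str) -> str:
    """Shape the message from the task's position in the queue."""
    if total == 1:
        return f"Here you go: {topic}."  # single request: no ceremony
    if position == 1:
        return f"I found {total} requests. First: {topic}."
    if position == total:
        return f"Finally, {topic} -- that covers everything."
    return f"Next, regarding {topic}."

print(frame(1, 3, "opening hours"))
print(frame(2, 3, "skincare products"))
print(frame(1, 1, "opening hours"))
```

The same Task Node produces an opener, a transition, a closer, or a bare answer depending on where it sits and how much work the queue holds.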
Look at how this plays out in practice:
Three tasks:
→ Announces all three requests up front → Handles each in turn with explicit transitions → Closes with a completion summary
Two tasks:
→ Handles opening hours → Handles skincare
One task:
→ Handles it directly. No "let's begin," no completion summary. The framing scales down because it doesn't need ceremony for a single request.
This isn't cosmetic. It's structural intelligence. The transitions between tasks ("Regarding the question about...", "Let's take a look at your order now") happen because each task knows it's not the first one in the queue and adjusts accordingly. All of this is configurable visually in the editor — you decide what the bot says before, between, and after tasks.
Without multi-intent handling:

User: "I would like to buy a skin cream, and I have questions about my last order. And what are the opening hours?"
Bot: "We're open Monday to Friday, 9:00–18:00."

(Skincare? Order question? Never happened.)
With Wanderer's multi-intent serialization:

User: "I would like to buy a skin cream, and I have questions about my last order. And what are the opening hours?"
Bot: "I found three requests in your message. Let's start with the opening hours: we're open Monday to Friday, 9:00–18:00."
Bot: "As for skincare: yes, we carry a full range of skin creams. Here are the options."
Bot: "Now, about your order: could you give me your order number?"
User: "12345"
Bot: "Thanks, I've forwarded that to support. That covers all three of your requests."
Notice the third task. It didn't just fire off a canned response. It asked a follow-up question, waited for user input, processed it, and then the Done Node fired to complete the task. That's real sequential task execution with full conversational depth per task — not string concatenation.
In a traditional chatbot builder, conversation flow is linear. One input maps to one intent maps to one response path. The architecture doesn't have a concept of "multiple things to do." You'd have to build a custom middleware layer to split inputs, manage a queue, track completion state per task, handle tasks that require user interaction mid-sequence, and reassemble everything into a coherent conversation.
Wanderer's underlying technology — Reactive Graph Sequencing (RGS) — makes this behavior architecturally natural. RGS continuously evaluates the state of the entire graph. When the Queue Node populates with tasks, the graph doesn't choke. It re-sequences. When a Done Node fires, the graph reacts — the queue advances, the next Task Node activates, and the conversation continues. There's no polling loop, no state machine you have to hand-code. The graph is the state machine, and RGS keeps it moving.
Multi-intent isn't an academic curiosity. It directly impacts metrics that businesses care about: fewer repeated questions, less user frustration, less abandonment.
For support bots, e-commerce assistants, intake forms, and FAQ systems, multi-intent handling isn't a nice-to-have. It's the difference between a bot that feels like a tool and a bot that feels like a wall.
Look across the chatbot builder landscape. Dialogflow, Botpress, ManyChat, Voiceflow, Landbot — pick your favorite. Try sending a multi-intent message. In the vast majority of cases, you'll get a single-intent response. Some platforms have partial workarounds, but none offer a clean, structural, visual solution for decomposing and sequentially processing multiple intents from a single input — let alone one where each task is context-aware, completion is driven by explicit Done Nodes, and the bot dynamically frames the entire conversation around the workload.
This is one of those problems that's so universal, so obvious, that it's almost invisible.
Multi-intent serialization isn't a plugin or a premium feature. It's just how Wanderer works.