Connect Any Custom Bot to Genesys Digital Channels
Built your own bot with LangChain, LlamaIndex, or a custom LLM pipeline? If it exposes a REST API, ContextQue connects it to Genesys web chat, SMS, WhatsApp, and Messenger through Digital Bot Connector v2.
The Challenge: Custom Bots Have the Hardest Integration Path
If you have built a custom AI bot — whether it is powered by GPT-4, Claude, Gemini, an open-source model like Llama or Mistral, or a retrieval-augmented generation (RAG) pipeline — there is no native Genesys integration for your stack. You must build Digital Bot Connector v2 middleware from scratch.
That means implementing the full DBC v2 protocol: session management, message format translation, health check endpoints, and error handling. You also need to solve the timeout problem. LLM-powered bots routinely need 20 to 60 seconds (or more) per response, especially when they perform retrieval, chain-of-thought reasoning, or call external tools. The default DBC v2 timeout configuration will terminate these requests well before the bot finishes generating a response.
On top of that, every custom bot has a different API contract. Your LangChain agent returns data differently from a raw OpenAI API wrapper, which differs from a LlamaIndex query engine. Building a one-off DBC v2 adapter means writing translation logic specific to your bot's request and response format — and rewriting it every time your bot's API evolves.
How ContextQue Solves It
ContextQue's REST adapter defines a simple, documented API contract that your bot needs to implement. Accept a JSON payload with the user's message and conversation context; return a JSON payload with the bot's response. That is it. ContextQue handles everything else: DBC v2 protocol compliance, session state, timeout management, and message format translation for each Genesys channel.
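As a sketch of that contract, the bot side can be as small as a single request handler. The field names below (`conversationId`, `message`, `reply`) are illustrative assumptions, not ContextQue's published schema:

```python
# Minimal sketch of the bot-side contract: accept the user's message plus
# conversation context, return the bot's reply. Field names here are
# illustrative assumptions, not ContextQue's actual payload schema.

def handle_turn(payload: dict) -> dict:
    """One conversational turn: JSON in, JSON out."""
    user_text = payload["message"]["text"]
    conversation_id = payload["conversationId"]

    # Your LLM / RAG pipeline would run here; a simple echo stands in
    # for illustration.
    answer = f"You said: {user_text}"

    return {
        "conversationId": conversation_id,
        "reply": {"type": "text", "text": answer},
    }

example = handle_turn({
    "conversationId": "abc-123",
    "message": {"text": "Where is my order?"},
})
print(example["reply"]["text"])
```

Expose a function like this behind a single HTTP POST endpoint in whatever framework your bot already uses, and the contract is satisfied.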
Extended timeout configuration is where ContextQue delivers the most value for custom bots. You configure the maximum response time your bot needs — whether that is 15 seconds for a fast intent classifier or 90 seconds for a complex RAG pipeline with multiple LLM calls — and ContextQue manages the DBC v2 session to keep the Genesys connection alive for the full duration.
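To make the two ends of that range concrete, here is a hedged sketch of per-bot timeout settings. The keys are assumptions about what ContextQue's configuration surface might look like, not its actual schema:

```python
# Illustrative per-bot timeout settings; key names and endpoints are
# hypothetical, shown only to contrast a fast classifier with a slow
# RAG pipeline.
RAG_BOT = {
    "botEndpoint": "https://bots.internal.example.com/rag",
    "responseTimeoutSeconds": 90,  # multiple LLM calls plus retrieval
}

INTENT_BOT = {
    "botEndpoint": "https://bots.internal.example.com/intent",
    "responseTimeoutSeconds": 15,  # fast single-pass classifier
}
```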
Authentication is flexible: API keys, OAuth 2.0 bearer tokens, or mutual TLS. Your bot runs on your infrastructure, behind your firewall, with your security policies. ContextQue connects to it without requiring you to change hosting, expose public endpoints, or adopt a specific framework. There is no vendor lock-in.
REST API Integration Capabilities
Any LLM or Framework
Works with LangChain, LlamaIndex, Haystack, custom GPT-4/Claude/Gemini wrappers, open-source models (Llama, Mistral, Mixtral), and any other bot that can serve a REST endpoint.
Extended Timeout Configuration
LLM bots need time to think. Configure per-bot timeouts from 10 seconds to over 90 seconds. ContextQue manages the DBC v2 session lifecycle to prevent Genesys from dropping the connection.
Flexible Authentication
Secure the connection to your bot with API keys, OAuth 2.0 bearer tokens, or mutual TLS. Your bot stays on your infrastructure with your security policies intact.
Message Format Translation
ContextQue translates between DBC v2's structured message format and your bot's API contract. Rich responses (buttons, cards, images) are mapped to each Genesys channel's supported format.
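One way to picture that mapping: a rich "buttons" reply can pass through unchanged on channels that support buttons, and be flattened to numbered plain text on channels that do not (such as SMS). The response shapes below are assumptions for illustration, not ContextQue's internal types:

```python
# Hedged sketch of channel-aware downgrading. The reply shapes and
# channel names are illustrative assumptions, not ContextQue internals.

def adapt_for_channel(reply: dict, channel: str) -> dict:
    """Pass rich replies through where supported; flatten elsewhere."""
    if reply.get("type") != "buttons":
        return reply
    if channel in ("webchat", "messenger"):
        # These channels render buttons natively.
        return reply
    # Button-less channels (e.g. SMS) get a numbered text menu instead.
    lines = [reply["text"]] + [
        f"{i}. {button['label']}"
        for i, button in enumerate(reply["buttons"], start=1)
    ]
    return {"type": "text", "text": "\n".join(lines)}

sms_reply = adapt_for_channel(
    {
        "type": "buttons",
        "text": "How can I help?",
        "buttons": [{"label": "Billing"}, {"label": "Support"}],
    },
    "sms",
)
print(sms_reply["text"])
```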
DIY Integration vs. ContextQue
Build It Yourself
- ✕ Implement the full DBC v2 protocol specification
- ✕ Solve LLM timeout issues with no reference code
- ✕ Build session state management from scratch
- ✕ Write channel-specific message formatting
- ✕ Re-implement when your bot's API changes
Use ContextQue
- ✓ DBC v2 protocol fully implemented
- ✓ Configurable timeouts up to 90+ seconds
- ✓ Session state handled automatically
- ✓ Simple API contract — your bot just returns JSON
- ✓ No vendor lock-in — bring your own infrastructure
Frequently Asked Questions
What does my bot's API need to look like?
Your bot needs a single HTTP POST endpoint that accepts a JSON payload containing the user's message text, conversation ID, and optional context metadata. It returns a JSON response with the bot's reply text and optional structured elements (buttons, cards, handoff signals). ContextQue provides a documented API specification with examples for common frameworks.
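As a hedged illustration only (the field names are assumptions, not ContextQue's published specification), a single turn might look like this on the wire. Request to your bot:

```json
{
  "conversationId": "abc-123",
  "message": { "text": "Where is my order?" },
  "context": { "channel": "sms", "locale": "en-US" }
}
```

And your bot's response:

```json
{
  "conversationId": "abc-123",
  "reply": {
    "type": "buttons",
    "text": "I can help with that. What do you need?",
    "buttons": [ { "label": "Track order" }, { "label": "Talk to an agent" } ]
  },
  "handoff": false
}
```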
My LLM bot takes 30–60 seconds to respond. Will Genesys drop the connection?
Not with ContextQue. Standard DBC v2 configurations time out well before that, which is why most custom LLM integrations fail. ContextQue manages the DBC v2 session lifecycle with configurable extended timeouts, keeping the Genesys connection alive while your bot completes its reasoning, retrieval, and generation steps.
Can I use this with a bot running on my private network?
Yes. ContextQue supports connecting to bots on private networks through secure tunneling or VPN peering. Your bot never needs to be exposed to the public internet. Authentication options include API keys, OAuth 2.0, and mutual TLS to match your organization's security requirements.
Ready to Connect Your Bot to Genesys?
Join the waitlist to get early access when ContextQue launches.
Free to join. No spam — just launch updates.
Genesys Partner or ISV? Join our Partner Program instead.