Streaming & Real-Time
MeetLoyd provides comprehensive real-time streaming capabilities for agent responses, event monitoring, and live collaboration. All streaming uses Server-Sent Events (SSE) for efficient, browser-compatible real-time communication.
Conversation Streaming
Stream agent responses token by token for real-time chat experiences. Each word appears as the model generates it -- just like watching someone type in a messaging app.
Stream Event Types
| Event | Description |
|---|---|
| token | A generated text token (partial response) |
| tool_call | Agent started using a tool |
| done | Response complete (includes token usage and cost) |
| error | An error occurred |
| thinking | Agent is reasoning (for models with extended thinking) |
| heartbeat | Keep-alive signal (every 5 seconds) |
MeetLoyd sends heartbeat events every 5 seconds during streaming to prevent reverse-proxy idle timeouts. Your client should ignore these events -- they carry no data. If you stop receiving heartbeats, the connection has likely dropped.
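To make the event handling concrete, here is a minimal sketch of parsing a raw SSE chunk into typed events and discarding heartbeats. The wire format is standard SSE (`event:`/`data:` fields, blank-line separators); how MeetLoyd populates each field is an assumption.

```typescript
// Minimal SSE parser: splits a raw chunk into typed events.
// Event names (token, tool_call, done, error, thinking, heartbeat)
// follow the table above.

interface StreamEvent {
  event: string;
  data: string;
}

function parseSSE(chunk: string): StreamEvent[] {
  const events: StreamEvent[] = [];
  // Per the SSE spec, events are separated by a blank line.
  for (const block of chunk.split("\n\n")) {
    let event = "message"; // SSE default event name
    const dataLines: string[] = [];
    for (const line of block.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
    }
    if (dataLines.length > 0) events.push({ event, data: dataLines.join("\n") });
  }
  // Drop heartbeats: they carry no payload and exist only to keep
  // the connection alive through reverse proxies.
  return events.filter((e) => e.event !== "heartbeat");
}
```

In practice you would feed chunks from an `EventSource` or `fetch` stream into this parser and dispatch on `event` to update the UI.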
Live Message Injection
You can send messages to an agent while it is still streaming a response. This is useful for corrections, additional context, or changing direction mid-task.
How It Works
- Your message is stored in an in-memory queue keyed by conversation or thread ID
- At the top of each executor iteration (between tool calls), the queue is drained
- Messages are appended to the conversation history with a clear "live message" marker
- The LLM sees them as new context and can acknowledge, pivot, or continue as needed
- Messages auto-expire after 10 minutes if never drained
Injection only works while the agent is actively streaming. If the stream has finished, send a new message normally.
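The queue-and-drain mechanism above can be sketched as follows. This is an illustrative model, not MeetLoyd's actual implementation; the class and method names are hypothetical.

```typescript
// Hypothetical sketch of the live-message queue: messages are keyed by
// conversation ID, drained between executor iterations, and auto-expire
// after 10 minutes if never drained.

interface LiveMessage {
  text: string;
  queuedAt: number; // epoch milliseconds
}

const TTL_MS = 10 * 60 * 1000; // 10-minute expiry, per the docs above

class LiveMessageQueue {
  private queues = new Map<string, LiveMessage[]>();

  // Called when the user sends a message mid-stream.
  inject(conversationId: string, text: string, now = Date.now()): void {
    const q = this.queues.get(conversationId) ?? [];
    q.push({ text, queuedAt: now });
    this.queues.set(conversationId, q);
  }

  // Called at the top of each executor iteration (between tool calls):
  // returns all non-expired messages and empties the queue.
  drain(conversationId: string, now = Date.now()): string[] {
    const q = this.queues.get(conversationId) ?? [];
    this.queues.delete(conversationId);
    return q.filter((m) => now - m.queuedAt < TTL_MS).map((m) => m.text);
  }
}
```

Drained messages would then be appended to the conversation history with the "live message" marker before the next LLM call.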
Command Center Streaming
The command center provides real-time monitoring of agent executions across your organization. It streams events for all running agents, giving you a live operational view.
Event Types
| Event | Description |
|---|---|
| Run started | Agent execution initiated |
| Run status changed | Execution state transition |
| Run progress | Token and cost progress updates |
| Tool started/completed | Tool execution lifecycle |
| LLM call | LLM API call metrics |
| Memory operation | Memory read/write events |
| Error | Execution error |
| Alert triggered/resolved | Alert condition changes |
You can filter the stream by specific agents or event types to focus on what matters.
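A filtered subscription might be built like this. The `/command-center/stream` path and the `agents`/`events` query parameter names are assumptions for illustration; check the API reference for the actual endpoint.

```typescript
// Hypothetical helper that builds a command-center stream URL with
// optional agent and event-type filters.
function commandCenterUrl(
  base: string,
  opts: { agentIds?: string[]; eventTypes?: string[] } = {},
): string {
  const url = new URL("/command-center/stream", base); // assumed path
  if (opts.agentIds?.length) url.searchParams.set("agents", opts.agentIds.join(","));
  if (opts.eventTypes?.length) url.searchParams.set("events", opts.eventTypes.join(","));
  return url.toString();
}
```

For example, subscribing only to errors from two agents keeps the stream focused on actionable events rather than every progress update.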
Best Practices
SSE connections can drop due to network issues. Implement exponential backoff reconnection so the user experience remains smooth.
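A common reconnection pattern is exponential backoff with jitter. This is a generic sketch, not a MeetLoyd API; the connection factory is injected so the retry logic works with any `EventSource`-like object.

```typescript
// Exponential backoff with full jitter: the delay ceiling doubles per
// attempt up to a cap, and the actual delay is drawn uniformly below it
// to avoid reconnect stampedes when many clients drop at once.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  const ceiling = Math.min(baseMs * 2 ** attempt, maxMs);
  return Math.random() * ceiling;
}

// Minimal shape of an EventSource-like connection for the sketch.
type Source = {
  close(): void;
  onopen: (() => void) | null;
  onerror: (() => void) | null;
};

function connectWithBackoff(create: () => Source): void {
  let attempt = 0;
  const open = () => {
    const es = create();
    es.onopen = () => { attempt = 0; };           // reset after a healthy connection
    es.onerror = () => {
      es.close();                                  // don't leak the dead connection
      setTimeout(open, backoffDelay(attempt++));   // retry with growing delay
    };
  };
  open();
}
```

In the browser, `create` would be `() => new EventSource(streamUrl)`.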
Always close EventSource connections when they are no longer needed. Leaked connections waste resources on both client and server.
Rendering every single token event can cause jank. Batch updates using requestAnimationFrame for a smoother experience.
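One way to batch token rendering is to coalesce all tokens that arrive within a frame and flush them in a single update. In this sketch the scheduler is injected; in the browser it would be `requestAnimationFrame`.

```typescript
// Coalesces token events into one render per animation frame.
// `schedule` is requestAnimationFrame in the browser; it is injected
// here so the batching logic stays testable outside one.
class TokenBatcher {
  private pending = "";
  private scheduled = false;

  constructor(
    private render: (text: string) => void,
    private schedule: (cb: () => void) => void,
  ) {}

  push(token: string): void {
    this.pending += token;
    if (!this.scheduled) {
      this.scheduled = true; // at most one flush queued per frame
      this.schedule(() => this.flush());
    }
  }

  private flush(): void {
    this.scheduled = false;
    const text = this.pending;
    this.pending = "";
    this.render(text);
  }
}
```

Usage in a browser might look like `new TokenBatcher(t => el.append(t), cb => requestAnimationFrame(cb))`, so however many tokens arrive between frames, the DOM is touched once per frame.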
If a pipeline's throughput cannot keep up with incoming events, backpressure builds. Watch pipeline metrics and consider increasing batch sizes or scaling destination capacity.
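One simple way to absorb short bursts without unbounded memory growth is a bounded buffer that evicts the oldest events when full, trading completeness for a stable footprint. This is an illustrative pattern, not a MeetLoyd API.

```typescript
// Bounded buffer for handling backpressure: once capacity is reached,
// the oldest event is dropped (and counted) to make room for new ones.
class BoundedBuffer<T> {
  private items: T[] = [];
  public dropped = 0; // expose drop count so metrics can surface backpressure

  constructor(private capacity: number) {}

  push(item: T): void {
    if (this.items.length >= this.capacity) {
      this.items.shift(); // evict oldest under backpressure
      this.dropped++;
    }
    this.items.push(item);
  }

  // Hand off up to `max` events per cycle to the downstream consumer.
  drainBatch(max: number): T[] {
    return this.items.splice(0, max);
  }
}
```

A steadily rising `dropped` count is exactly the signal the advice above describes: the consumer is too slow, so increase batch sizes or scale the destination.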
Next: Learn about Avatars for video responses from agents.