Streaming & Real-Time

MeetLoyd provides comprehensive real-time streaming capabilities for agent responses, event monitoring, and live collaboration. All streaming uses Server-Sent Events (SSE) for efficient, browser-compatible real-time communication.

Conversation Streaming

Stream agent responses token-by-token for real-time chat experiences. As the agent thinks, you see each word appear -- just like typing in a messaging app.

Stream Event Types

| Event | Description |
| --- | --- |
| `token` | A generated text token (partial response) |
| `tool_call` | Agent started using a tool |
| `done` | Response complete (includes token usage and cost) |
| `error` | An error occurred |
| `thinking` | Agent is reasoning (for models with extended thinking) |
| `heartbeat` | Keep-alive signal (every 5 seconds) |
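Client-side handling can be sketched as a dispatch over these event types. The payload shapes below are assumptions for illustration, not the documented schema:

```typescript
// Assumed payload shapes -- check the actual API schema for your version.
type StreamEvent =
  | { type: "token"; text: string }
  | { type: "tool_call"; tool: string }
  | { type: "done"; usage: { tokens: number; costUsd: number } }
  | { type: "error"; message: string }
  | { type: "thinking"; text: string }
  | { type: "heartbeat" };

// Fold one stream event into the accumulated response text.
function applyEvent(response: string, event: StreamEvent): string {
  switch (event.type) {
    case "token":
      return response + event.text; // append partial text
    case "heartbeat":
      return response; // keep-alive: carries no data, ignore
    case "error":
      throw new Error(event.message);
    default:
      return response; // tool_call / thinking / done: no text to append
  }
}
```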
Heartbeat

MeetLoyd sends heartbeat events every 5 seconds during streaming to prevent reverse-proxy idle timeouts. Your client should ignore these events -- they carry no data. If you stop receiving heartbeats, the connection has likely dropped.
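A simple way to detect a dropped connection is a staleness check against the last heartbeat. The 5-second interval comes from the docs above; the three-missed-beats threshold is an assumption you can tune:

```typescript
// Heartbeats arrive every 5s; treating three missed beats (15s of
// silence) as a dropped connection is an assumed threshold.
const HEARTBEAT_INTERVAL_MS = 5_000;
const STALE_AFTER_MS = 3 * HEARTBEAT_INTERVAL_MS;

function isStale(lastHeartbeatMs: number, nowMs: number): boolean {
  return nowMs - lastHeartbeatMs > STALE_AFTER_MS;
}
```

On each heartbeat event, record `Date.now()`; a periodic timer can then call `isStale` and trigger a reconnect when it returns true.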

Live Message Injection

You can send messages to an agent while it is still streaming a response. This is useful for corrections, additional context, or changing direction mid-task.

How It Works

  1. Your message is stored in an in-memory queue keyed by conversation or thread ID
  2. At the top of each executor iteration (between tool calls), the queue is drained
  3. Messages are appended to the conversation history with a clear "live message" marker
  4. The LLM sees them as new context and can acknowledge, pivot, or continue as needed
  5. Messages auto-expire after 10 minutes if never drained
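The queue-and-drain mechanism above can be sketched as follows. The class and method names are illustrative, not MeetLoyd's actual internals; only the 10-minute expiry comes from the docs:

```typescript
// Sketch of the server-side injection queue described above.
const TTL_MS = 10 * 60 * 1000; // messages expire after 10 minutes

interface QueuedMessage {
  text: string;
  queuedAt: number;
}

class LiveMessageQueue {
  private queues = new Map<string, QueuedMessage[]>();

  // Step 1: store the message keyed by conversation/thread ID.
  enqueue(conversationId: string, text: string, now = Date.now()): void {
    const q = this.queues.get(conversationId) ?? [];
    q.push({ text, queuedAt: now });
    this.queues.set(conversationId, q);
  }

  // Steps 2 and 5: called at the top of each executor iteration;
  // returns all non-expired messages and clears the queue.
  drain(conversationId: string, now = Date.now()): string[] {
    const q = this.queues.get(conversationId) ?? [];
    this.queues.delete(conversationId);
    return q.filter((m) => now - m.queuedAt <= TTL_MS).map((m) => m.text);
  }
}
```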
> **Info:** Injection only works while the agent is actively streaming. If the stream has finished, send a new message normally.

Command Center Streaming

The command center provides real-time monitoring of agent executions across your organization: it streams events for all running agents, giving you a live operational view.

Event Types

| Event | Description |
| --- | --- |
| Run started | Agent execution initiated |
| Run status changed | Execution state transition |
| Run progress | Token and cost progress updates |
| Tool started/completed | Tool execution lifecycle |
| LLM call | LLM API call metrics |
| Memory operation | Memory read/write events |
| Error | Execution error |
| Alert triggered/resolved | Alert condition changes |

You can filter the stream by specific agents or event types to focus on what matters.
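A client-side filter can be expressed as a predicate over the event envelope. The field names (`agentId`, `type`) are assumptions about the payload shape:

```typescript
// Build a predicate that keeps only events matching the requested
// agents and/or event types; omitted filters match everything.
interface CommandCenterEvent {
  agentId: string;
  type: string;
}

function makeFilter(opts: { agentIds?: string[]; eventTypes?: string[] }) {
  return (e: CommandCenterEvent): boolean =>
    (!opts.agentIds || opts.agentIds.includes(e.agentId)) &&
    (!opts.eventTypes || opts.eventTypes.includes(e.type));
}
```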

Best Practices

Handle Reconnection Gracefully

SSE connections can drop due to network issues. Implement exponential backoff reconnection so the user experience remains smooth.
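Capped exponential backoff with full jitter is a common reconnection scheme; the base and cap values below are illustrative:

```typescript
// Delay before reconnection attempt N: uniform random in [0, cap),
// where cap doubles per attempt up to a maximum.
const BASE_MS = 500;
const MAX_MS = 30_000;

function backoffDelay(
  attempt: number,
  random: () => number = Math.random,
): number {
  const cap = Math.min(MAX_MS, BASE_MS * 2 ** attempt);
  return Math.floor(random() * cap); // full jitter
}
```

In a reconnect loop, wait `backoffDelay(attempt)` milliseconds before opening a new EventSource, and reset `attempt` to zero once events flow again.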

Clean Up Connections

Always close EventSource connections when they are no longer needed. Leaked connections waste resources on both client and server.

Buffer UI Updates

Rendering every single token event can cause jank. Batch updates using requestAnimationFrame for a smoother experience.
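One way to batch is to buffer tokens and render at most once per frame. The scheduler is injected so the logic works outside a browser; in the browser, pass `requestAnimationFrame`:

```typescript
// Accumulate tokens between frames; flush them in one render call.
class TokenBuffer {
  private pending = "";
  private scheduled = false;

  constructor(
    private render: (text: string) => void,
    private schedule: (cb: () => void) => void,
  ) {}

  push(token: string): void {
    this.pending += token;
    if (!this.scheduled) {
      this.scheduled = true; // only one flush scheduled at a time
      this.schedule(() => this.flush());
    }
  }

  private flush(): void {
    this.scheduled = false;
    const text = this.pending;
    this.pending = "";
    this.render(text); // one render per frame, regardless of token count
  }
}
```

Usage in the browser might look like `new TokenBuffer(text => el.append(text), cb => requestAnimationFrame(cb))`.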

Monitor Pipeline Backpressure

If a pipeline's throughput cannot keep up with incoming events, backpressure builds. Watch pipeline metrics and consider increasing batch sizes or scaling destination capacity.
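On the consuming side, a bounded buffer that drops the oldest events when full is one simple response to backpressure; the drop counter doubles as a backpressure signal. The capacity below is illustrative:

```typescript
// Bounded event buffer: when full, drop the oldest item to make room
// and count the drop so backpressure is observable.
class BoundedBuffer<T> {
  private items: T[] = [];
  dropped = 0;

  constructor(private capacity: number) {}

  push(item: T): void {
    if (this.items.length >= this.capacity) {
      this.items.shift(); // evict oldest
      this.dropped++;
    }
    this.items.push(item);
  }

  drainAll(): T[] {
    const out = this.items;
    this.items = [];
    return out;
  }
}
```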


Next: Learn about Avatars for video responses from agents.