The messaging layer is always the part everyone gets wrong. Real-time communication sounds simple until you need presence tracking, scoped messaging, peer-to-peer video, and horizontal scaling - all working together without falling over. We looked at what existed and decided to start from scratch.
Symple is a complete real-time messaging and presence protocol. JSON-based, built on Socket.IO, designed as the signalling backbone for WebRTC applications. Still running in production. Still doing what it was built to do.
Building Symple is what taught us to think about protocols at the systems level - the same thinking we now apply to MCP and AI orchestration. Different domain, same discipline.
Protocol design
Every message is JSON with a consistent structure: type, from, to, id, data. Types are message, presence, command, event, with an optional subtype for specialization (call:init, call:offer, etc.).
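Concretely, an envelope built from those fields might look like this (the exact payload and id format are illustrative, not a spec excerpt):

```javascript
// Illustrative Symple envelope: type, from, to, id, data,
// with an optional subtype for specialization (e.g. call:offer).
const message = {
  type: 'message',        // message | presence | command | event
  subtype: 'call:offer',  // optional specialization
  id: 'a1b2c3',           // unique message id
  from: 'alice|sock42',   // sender address: user|socket
  to: 'bob',              // bare user: every device bob is signed in on
  data: { sdp: '...' }    // free-form JSON payload
}

// A quick structural check before sending (hypothetical helper,
// not part of the Symple API).
function isValidEnvelope (m) {
  return typeof m === 'object' && m !== null &&
    ['message', 'presence', 'command', 'event'].includes(m.type) &&
    typeof m.from === 'string' && typeof m.id === 'string'
}
```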
Why JSON over binary? Debuggability. When you’re troubleshooting a real-time system at 2am, you want to read messages in your logs without a decoder ring.
Peers are addressed as user|id. The user field alone targets all sockets for that user (multi-device); append the socket id to target a specific connection. One addressing scheme covers broadcast, room-cast, user-cast, and device-specific routing.
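Parsing that scheme on the receiving side is a one-liner; parseAddress here is a hypothetical helper for illustration, not part of the Symple API:

```javascript
// Split a "user|id" address into its parts. A bare user targets
// every socket for that user; user|id pins a single connection.
function parseAddress (to) {
  const sep = to.indexOf('|')
  if (sep === -1) return { user: to, id: null }            // user-cast
  return { user: to.slice(0, sep), id: to.slice(sep + 1) } // device-specific
}
```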
```
CLIENT                           SERVER                         CLIENT
  |                                |                              |
  |-- connect(auth) -------------->| validate token (Redis)       |
  |-- presence probe ------------->|--> broadcast --------------->|
  |<- presence response <----------|<-- broadcast <---------------|
  |                                |                              |
  |  MESSAGING                     |                              |
  |-- message(to: room) ---------->|--> room broadcast ---------->|
  |-- message(to: user|id) ------->|--> direct route ------------>|
  |                                |                              |
  |  SIGNALLING (via messages)     |                              |
  |-- call:init ------------------>|--> route ------------------->|
  |<- call:accept <----------------|<-- route <-------------------|
  |-- call:offer (SDP) ----------->|--> route ------------------->|
  |<- call:answer (SDP) <----------|<-- route <-------------------|
  |<- call:candidate (ICE) ------->|<-- route ------------------->|
  |                                |                              |
  |<========= WebRTC P2P (media bypasses server) ================>|
  |                                |                              |
  |  HORIZONTAL SCALING            |                              |
  |                         Redis pub/sub                         |
  |                          +-----+-----+                        |
  |                      Server 1     Server 2                    |
  |                                |                              |
  |                      Ruby client (direct                      |
  |                       Redis inject, no WS)                    |
```
Presence
Presence isn’t a separate subsystem - it’s just messages with type: 'presence'. Connect broadcasts online: true, disconnect broadcasts online: false. Same routing, same format.
New clients send a presence probe on connect. Remote peers respond with their current state (without the probe flag, preventing loops). This bootstraps the roster without server-side state management - peers learn about each other directly. The client-side Roster tracks peers in memory and fires events on changes (addPeer, removePeer).
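The probe/response bootstrap fits in a few lines. This sketch models the Roster described above; the class shape and the probe flag placement are illustrative, not the package's exact API:

```javascript
// Minimal in-memory roster driven purely by presence messages.
class Roster {
  constructor () { this.peers = new Map() }

  // Handle an incoming presence message. Returns the reply to send
  // (if any), keeping the transport out of this sketch.
  handlePresence (msg, self) {
    if (msg.data.online) this.peers.set(msg.from, msg.data)
    else this.peers.delete(msg.from)
    // Answer probes with our own state, without the probe flag,
    // so the exchange cannot loop.
    if (msg.probe) {
      return { type: 'presence', from: self, to: msg.from,
               data: { online: true } }
    }
    return null
  }
}
```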
WebRTC signalling
The full call lifecycle runs as message subtypes layered on regular messaging:
| Subtype | Direction | Purpose |
|---|---|---|
| call:init | Caller to Callee | Initiate |
| call:accept | Callee to Caller | Accept |
| call:offer | Caller to Callee | SDP offer |
| call:answer | Callee to Caller | SDP answer |
| call:candidate | Both | Trickle ICE |
| call:hangup | Either | End call |
The CallManager wires this together. It maps Symple message events to WebRTCPlayer methods and player events back to Symple messages. The WebRTCPlayer doesn’t know about Symple, the CallManager doesn’t know about ICE internals. The signalling transport is pluggable.
ICE candidates arriving before the remote description is set get buffered and flushed when it lands - handling the race condition most WebRTC implementations get wrong.
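The buffering logic amounts to a few lines. This sketch captures the idea outside any WebRTC types; addIceCandidate stands in for RTCPeerConnection.addIceCandidate, and the class name is ours, not the library's:

```javascript
// Buffer ICE candidates that arrive before setRemoteDescription,
// then flush them in arrival order once the description lands.
class CandidateBuffer {
  constructor (addIceCandidate) {
    this.add = addIceCandidate
    this.remoteSet = false
    this.pending = []
  }

  onCandidate (candidate) {
    if (this.remoteSet) this.add(candidate)
    else this.pending.push(candidate)   // too early: hold it
  }

  onRemoteDescription () {
    this.remoteSet = true
    for (const c of this.pending) this.add(c)
    this.pending = []
  }
}
```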
Horizontal scaling
Multiple server instances behind a load balancer, connected via @socket.io/redis-adapter. A message from a client on Server A to a user on Server B publishes to Redis, the adapter on B picks it up and delivers. Socket IDs are globally unique across instances, so user|id addressing works seamlessly.
Session data lives in Redis at symple:session:<token> with configurable TTL. Token auth works identically regardless of which server the client connects to.
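Wiring the adapter into a Socket.IO v4 server looks roughly like this (standard @socket.io/redis-adapter usage; the port and Redis URL are illustrative):

```javascript
import { createServer } from 'http'
import { Server } from 'socket.io'
import { createClient } from 'redis'
import { createAdapter } from '@socket.io/redis-adapter'

// One Redis connection for publishing, a duplicate for subscribing.
const pubClient = createClient({ url: 'redis://localhost:6379' })
const subClient = pubClient.duplicate()
await Promise.all([pubClient.connect(), subClient.connect()])

// Every instance configured this way shares one logical event bus:
// emits on one server reach sockets connected to any other.
const httpServer = createServer()
const io = new Server(httpServer, {
  adapter: createAdapter(pubClient, subClient)
})
httpServer.listen(4500)
```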
The Ruby client
Production Rails apps need server-side push - background jobs, model callbacks, event triggers. The traditional approach is an HTTP POST to the messaging server. Slow, doesn’t scale.
The Ruby client bypasses HTTP entirely. It encodes Socket.IO-compatible packets and publishes them directly to Redis. No WebSocket connection from Ruby. No round-trip. Rails model callback fires, Redis publish happens, message appears on the client. The emit: true flag tells the adapter to rebroadcast across all server instances.
Also handles session lifecycle - token creation on login, TTL extension on activity, cleanup on logout - with ActiveRecord hooks for automatic sync.
Client packages
Split into two with a clear boundary:
- symple-client - pure messaging: connection management, routing, presence roster, rooms, event bus. 9KB. No media dependencies.
- symple-client-player - media engines on top: WebRTCPlayer (two-way video/audio), MJPEGPlayer, WebcamPlayer, and CallManager. Engines register with a preference score and capability check - the app queries Media.preferredEngine() to pick the best option for the platform and falls back gracefully.
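The registration pattern can be sketched as follows; the function names and scores here are modeled on the description above, not the package's actual API:

```javascript
// Engines register a preference score plus a capability probe;
// the highest-scoring supported engine wins.
const engines = []

function registerEngine (name, preference, isSupported) {
  engines.push({ name, preference, isSupported })
}

function preferredEngine () {
  return engines
    .filter(e => e.isSupported())
    .sort((a, b) => b.preference - a.preference)[0] ?? null
}

// Capability checks run at query time, so the same app code
// picks WebRTC in a modern browser and falls back elsewhere.
registerEngine('WebRTCPlayer', 100,
  () => typeof RTCPeerConnection !== 'undefined')
registerEngine('MJPEGPlayer', 40, () => true)  // near-universal fallback
```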
Routing
Three patterns cover every real-time messaging case:
- No to field - broadcast to all joined rooms
- to is a string ("user|id") - direct to a specific peer or user
- to is an array (["room1", "room2"]) - multicast to named rooms
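Server-side, all three patterns dispatch on the shape of the to field. A hypothetical sketch (the real server delegates delivery to Socket.IO rooms):

```javascript
// Classify a message's routing mode from the shape of its `to` field.
function routeOf (msg, joinedRooms) {
  if (msg.to == null) {
    return { mode: 'broadcast', targets: joinedRooms }  // no `to`
  }
  if (Array.isArray(msg.to)) {
    return { mode: 'multicast', targets: msg.to }       // named rooms
  }
  return { mode: 'direct', targets: [msg.to] }          // "user" or "user|id"
}
```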
Dynamic rooms let clients join and leave on the fly. Socket.IO handles room mechanics, Redis adapter ensures it works across server instances.
Getting started
```
npm install symple-server
node server.js
```

```js
import { SympleClient } from 'symple-client'

const client = new SympleClient({
  url: 'http://localhost:4500',
  token: 'your-auth-token',
  peer: { name: 'My App', group: 'public' }
})

client.on('connect', () => console.log('Connected'))
client.on('message', (m) => console.log('Message:', m))
client.on('presence', (p) => console.log('Presence:', p))
```