
Telegraph Example

note

Open the Maglev Telegraph Sample and start a chat room, then join that chat room from another tab on the same device, or from a different device. Note that you can chain nodes together: B joins A, and C joins B. In this way Telegraph forms a distributed mesh chat room.

The source code is located here. The only files of interest in terms of Maglev are telegraph.proto and rpc_logic.ts; everything else just makes the demo look fancy and work on mobile.

Humble Beginnings: Protobuf#

When an engineer or team begins building something like Telegraph, the first thing that generally needs to be figured out and agreed upon is the RPC layout. Maglev uses Google Protobuf for its IDL, so this layout is formalized in a strongly typed language that gets checked into source control. These files end in .proto and are conventionally stored in a directory called protos at the root of the source tree.

Let's take a look at the one and only Telegraph protobuf file.

protos/telegraph.proto
syntax = "proto3";

package wlabs.examples;

service Telegraph {
  rpc SendMsg(SendMsgRequest) returns (SendMsgResponse);
  rpc Hello(GreetingRequest) returns (GreetingResponse);
  rpc Goodbye(GreetingRequest) returns (GreetingResponse);
}

message SendMsgRequest { Msg msg = 1; }
message SendMsgResponse {}

message GreetingRequest { Trace origin = 1; }
message GreetingResponse {
  Trace origin = 1;
  repeated Msg last_msgs = 2;
}

message Msg {
  string uuid = 1;
  repeated Trace traces = 2;
  string chat_text = 3;
}

message Trace {
  string node_id = 1;
  string human_name = 2;
  uint64 timestamp = 3;
}

As far as Protobuf goes, this is pretty boring: it defines three unary RPCs, SendMsg, Hello, and Goodbye, along with their respective argument and return types.

But... chat rooms let people send messages back and forth, so why aren't these RPCs streaming? This is part of the magic of Maglev: each and every node can host RPC endpoints. Which is to say, there is no concept of a "server" or "client" in Maglev. You could use a browser tab running on your phone as your company's "backend server" if you so chose; it is a node in the network just like any other node, and is reachable by anyone with its ID and the right permissions.

So in the Telegraph example, each node simply hosts all three RPC calls, and when it receives a message, it forwards that message on to all the other nodes it knows about. This implicitly forms a mesh network of nodes! There is no "server" hosting each chat room; it's a distributed, decentralized collection of nodes chatting with each other.
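The forwarding rule is simple enough to sketch in isolation. Below is an illustrative TypeScript distillation of it — the names PeerInfo, firstTimeSeen, and pickForwardTargets are ours, not the sample's: process each message UUID only once to stop cycles, and re-broadcast to every tracked peer except the one the message just arrived from.

```typescript
// Illustrative sketch of Telegraph's flood-forwarding rule; these names
// are our own, not part of the sample's code.
interface PeerInfo {
  broadcastTo: boolean;
}

const seenUuids = new Set<string>();

// A message is processed only the first time its UUID shows up;
// re-deliveries arriving via other mesh paths are dropped.
function firstTimeSeen(uuid: string): boolean {
  if (seenUuids.has(uuid)) return false;
  seenUuids.add(uuid);
  return true;
}

// Re-broadcast to every peer marked broadcastTo, except the peer the
// message just arrived from.
function pickForwardTargets(
  senderId: string,
  peers: Map<string, PeerInfo>
): string[] {
  const targets: string[] = [];
  for (const [id, info] of peers) {
    if (id !== senderId && info.broadcastTo) targets.push(id);
  }
  return targets;
}
```

Because every node applies the same two rules, a message floods the whole mesh exactly once, no matter which node it originated from.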

Implementing The RPC Calls#

Remember that there are no "servers" in Maglev. In our case, all nodes in Telegraph implement the above RPCs. This is pretty simple to do in Maglev TypeScript.

src/rpc_logic.ts
import { action } from 'mobx';
import { wlabs } from './proto_gen';
import { store } from './store';

const seenMsgUuid = new Set<string>();

(async () => {
  const selfNode = await store.maglev.nodes.self();
  wlabs.examples.Telegraph.registerHandler(store.maglev, {
    hello: action((req, ctx) => {
      store.updateKnownNode(ctx.peerAuthInfo.nodeId, {
        broadcastTo: true,
        name: req.origin.humanName,
      });
      return {
        origin: makeTrace(selfNode),
        lastMsgs: store.messages.slice(-100),
      };
    }),
    goodbye: action((_req, ctx) => {
      store.updateKnownNode(ctx.peerAuthInfo.nodeId, { broadcastTo: false });
      return {};
    }),
    sendMsg: async (req, ctx) => {
      const msg = req.msg!;
      // Track and ignore messages we've already seen to prevent cycles.
      if (seenMsgUuid.has(msg.uuid!)) {
        return {};
      }
      seenMsgUuid.add(msg.uuid!);
      // Save the message now, before we append the trace with our own node info.
      store.saveMessage(msg);
      // Add ourselves to the message trace, and broadcast the message to all
      // other peers we are tracking.
      msg.traces!.push(makeTrace(selfNode));
      for (const [id, info] of store.knownNodesById) {
        if (id === ctx.peerAuthInfo.nodeId || !info.broadcastTo) {
          // Don't transmit back to the peer that just sent us this message or
          // to a peer we don't have broadcastTo set to true on.
          continue;
        }
        void info.telegraphClient.sendMsg({ msg });
      }
      for (const trace of msg.traces) {
        store.updateKnownNode(trace.nodeId, { name: trace.humanName });
      }
      return {};
    },
  });
})();

The outermost (async () => { is just an async function that we immediately invoke; it lets us use await. We start by grabbing our own node, then register an RPC handler using wlabs.examples.Telegraph.registerHandler. See the TypeScript documentation if this syntax is unfamiliar to you.

Much of what this code does is update a MobX store, which is purely for UI. That's not terribly important to understanding Maglev and is outside the scope of this example; it's just how we chose to build the example web app. Both hello and goodbye simply update this store and return some info to the caller.

The sendMsg RPC handler appends a trace to the message, then loops over all other Maglev nodes that this node knows about and forwards the message on to them. Then, like hello and goodbye, it updates the store for the UI.
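One helper the excerpt relies on, makeTrace, isn't shown. Judging from the Trace protobuf message, it presumably looks something like the following sketch; the shape of the node argument (its id and humanName fields) is our assumption about Maglev's node API, not confirmed by the sample.

```typescript
// Hypothetical reconstruction of makeTrace (the real helper lives
// elsewhere in rpc_logic.ts). It builds a Trace message from a node
// handle; the `id` and `humanName` fields on the node are assumptions.
function makeTrace(node: { id: string; humanName: string }) {
  return {
    nodeId: node.id,
    humanName: node.humanName,
    timestamp: Date.now(), // ms since epoch; fits the uint64 timestamp field
  };
}
```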

Calling RPCs#

The flip side of implementing RPCs is actually calling them. Let's take a peek at the addOriginNode method, which adds another node as the origin (root) of our own little part of the distributed chat mesh.

src/rpc_logic.ts
export async function addOriginNode(id: string) {
  // Say hello to the origin node, get a reply and add it as an origin.
  const client = wlabs.examples.Telegraph.createClient(
    store.maglev.nodes.get(id)
  );
  const res = await client.hello({
    origin: makeTrace(await store.maglev.nodes.self()),
  });
  store.addOriginNodeInfo(id, res.origin, client);
  for (const msg of res.lastMsgs) {
    store.saveMessage(msg);
  }
}

The first statement creates a client instance for the Telegraph service, bound to the node we look up by ID. The function then invokes the hello RPC before saving some info in the store for the UI.
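The sample doesn't show the Goodbye side of this pattern, but the same client shape would apply. Here is a hedged sketch of a hypothetical removeOriginNode; the TelegraphClient interface below is a minimal stand-in we invented for the generated client, not Maglev's actual types.

```typescript
// Hypothetical counterpart to addOriginNode (not part of the sample):
// tell a peer we're leaving so it flips broadcastTo off for us.
// TelegraphClient is a minimal stand-in for the generated client type.
interface TelegraphClient {
  goodbye(req: { origin: { nodeId: string } }): Promise<object>;
}

async function removeOriginNode(
  peerId: string,
  selfNodeId: string,
  client: TelegraphClient,
  markLeft: (id: string) => void
): Promise<void> {
  // Say goodbye first so the peer stops forwarding messages to us...
  await client.goodbye({ origin: { nodeId: selfNodeId } });
  // ...then record locally that this peer is no longer an origin.
  markLeft(peerId);
}
```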

The sendMessageText function is just as simple.

src/rpc_logic.ts
export async function sendMessageText(chatText: string) {
  const msg = {
    uuid: uuidV4(),
    traces: [makeTrace(await store.maglev.nodes.self())],
    chatText,
  };
  seenMsgUuid.add(msg.uuid);
  for (const [_id, info] of store.knownNodesById) {
    if (info.broadcastTo) {
      void info.telegraphClient.sendMsg({ msg });
    }
  }
  store.saveMessage(msg);
}

We create a message object, which has to match the Msg type defined in our protobuf file. Then that message is sent to every node that we know about. The info.telegraphClient is simply an object returned by calling wlabs.examples.Telegraph.createClient, like in the addOriginNode function.

note

What's up with the void there?

In TypeScript async code, the void operator signals to the compiler (and to readers) that the expression to its right returns a promise, and that we, as the programmer, know that and are purposefully ignoring the promise's result.
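A tiny self-contained illustration (fireAndForget is our own example name, not from the sample): the async work still runs, we just decline to await it.

```typescript
// An async function's body runs synchronously up to its first await,
// so the push below happens immediately even without awaiting.
async function fireAndForget(log: string[]): Promise<void> {
  log.push('sent');
}

const sendLog: string[] = [];
// `void` tells readers (and lint rules like no-floating-promises) that
// the returned promise is being discarded on purpose.
void fireAndForget(sendLog);
```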

That's It#

And that's all there is to it. When we register the RPC handler, Maglev automatically begins connecting to infra in the background. Connecting to another node is done for you as well, lazily. All you as the developer need to do is host an RPC, or invoke one.