# Telegraph Example
> **Note:** Open the Maglev Telegraph Sample and start a chat room, then join that chat room from another tab on the same device, or from a different device. Note that you can chain nodes together: B joins A, and C joins B. In this way Telegraph is a distributed mesh chat room.
The source code is located here.
The only files of interest in terms of Maglev are `telegraph.proto` and `rpc_logic.ts`. Everything else is just to make the demo look fancy and work on mobile.
## Humble Beginnings: Protobuf

When an engineer or team begins making something like Telegraph, the first thing that generally needs to be figured out and agreed upon is the RPC layout. Maglev uses Google Protobuf for its IDL, so this layout is formalized into a strongly typed language that gets checked into source control. These files generally end in `.proto` and are stored in a separate directory at the root of the source tree called `protos`.
Let's take a look at the one and only Telegraph protobuf file.
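The file itself isn't reproduced in this extract, so here is an illustrative sketch of its shape based on the description below. The service and RPC names come from the text; the message field layouts are guesses, and the real `telegraph.proto` in the repo is the source of truth.

```protobuf
syntax = "proto3";

// Package name inferred from the wlabs.examples.Telegraph calls used later.
package wlabs.examples;

service Telegraph {
  // Three unary RPCs, each with its own argument and return type.
  rpc SendMsg(SendMsgRequest) returns (SendMsgResponse);
  rpc Hello(HelloRequest) returns (HelloResponse);
  rpc Goodbye(GoodbyeRequest) returns (GoodbyeResponse);
}

// Field layouts below are purely hypothetical.
message SendMsgRequest {
  string text = 1;            // the chat message body
  repeated string trace = 2;  // node IDs the message has already visited
}
message SendMsgResponse {}

message HelloRequest { string sender_id = 1; }
message HelloResponse { string node_id = 1; }

message GoodbyeRequest { string sender_id = 1; }
message GoodbyeResponse {}
```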
As far as Protobuf goes this is pretty boring: it defines 3 unary RPCs, `SendMsg`, `Hello`, and `Goodbye`, along with their respective argument and return types.
But... chat rooms let people send messages back and forth, so why aren't these RPCs streaming? This is part of the magic of Maglev: each and every node can host RPC endpoints. Which is to say, there is no concept of a "server" or "client" in Maglev. You could use a browser tab running on your phone as your company's "backend server" if you so chose; it is a node in the network just like any other node and is reachable by anyone with its ID and permissions.
So in the Telegraph example, each node will simply host all 3 RPC calls, and when it receives a message, it will forward that message on to all the other nodes it knows about. This implicitly forms a mesh network of nodes! There is no "server" hosting each chat room; it's a distributed, decentralized collection of nodes chatting with each other.
## Implementing The RPC Calls

Remember that there are no "servers" in Maglev. In our case we are going to have all nodes in Telegraph implement the above RPCs. This is done pretty simply in Maglev TypeScript.
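The registration code is missing from this extract; the sketch below reconstructs its shape from the walkthrough that follows. The `wlabs.examples.Telegraph.registerHandler` call is taken from the text, but `wlabs.getOwnNodeId`, the `store` object, and the request/response shapes are assumptions; see `rpc_logic.ts` for the real code.

```typescript
// Illustrative sketch only: getOwnNodeId, store, and the request/response
// shapes are assumptions, not the real Maglev API.
(async () => {
  // Start by grabbing our own node ID.
  const myId: string = await wlabs.getOwnNodeId();

  // Register handlers for all three Telegraph RPCs on this node.
  wlabs.examples.Telegraph.registerHandler({
    async hello(req) {
      store.addNode(req.senderId);  // MobX store update, purely for the UI
      return { nodeId: myId };      // return some info to the caller
    },
    async goodbye(req) {
      store.removeNode(req.senderId);
      return {};
    },
    async sendMsg(req) {
      return handleSendMsg(req);    // forwarding logic shown in a later sketch
    },
  });
})();
```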
The outermost `(async () => {` is just an async function that we immediately invoke; it lets us use `await`. We start by grabbing our own node ID, then register an RPC handler using `wlabs.examples.Telegraph.registerHandler`. See the TypeScript documentation if this syntax is unfamiliar to you.
Much of what this code does is update a MobX store which is purely for UI. That's not terribly important to understanding Maglev and is outside the scope of this example; it's just how we chose to build the example web app. Both `hello` and `goodbye` just update this store and return some info to the caller.
The `sendMsg` RPC handler loops over all other Maglev nodes that this node knows about and forwards the message on to them after appending a trace to it. Then, like `hello` and `goodbye`, it updates the store for the UI.
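Here is a minimal sketch of that forwarding loop, reusing the hypothetical `myId` and `store` names from the sketch above. Using the trace to skip nodes the message has already visited is one plausible loop-prevention scheme, not something the text spells out.

```typescript
// Hypothetical forwarding logic for the sendMsg handler.
async function handleSendMsg(req: SendMsgRequest): Promise<SendMsgResponse> {
  // Append our own hop to the trace before forwarding.
  const forwarded = { ...req, trace: [...req.trace, myId] };

  for (const [nodeId, info] of store.knownNodes) {
    // Skipping nodes already in the trace would prevent forwarding loops.
    if (forwarded.trace.includes(nodeId)) continue;
    void info.telegraphClient.sendMsg(forwarded); // fire and forget
  }

  store.addMessage(forwarded); // update the UI store, like hello/goodbye
  return {};
}
```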
## Calling RPCs

The flip side of implementing RPCs is to actually call them. Let's take a peek at the `addOriginNode` method, which adds another node as the origin (root) of our own little part of the distributed chat mesh.
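The method body isn't reproduced in this extract; this sketch follows the description below, with the `hello` request shape and the store call being assumptions.

```typescript
// Hypothetical reconstruction of addOriginNode; see rpc_logic.ts for the
// real implementation.
async function addOriginNode(nodeId: string): Promise<void> {
  // Create a Telegraph client bound to the other node's ID.
  const telegraphClient = wlabs.examples.Telegraph.createClient(nodeId);

  // Invoke the hello RPC on the origin node...
  const info = await telegraphClient.hello({ senderId: myId });

  // ...then save some info in the store for the UI.
  store.knownNodes.set(nodeId, { telegraphClient, ...info });
}
```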
The first line creates a client instance for the Telegraph service and binds it to the node's ID. The function then invokes the `hello` RPC before saving some info in the store for the UI.
The `sendMessageText` function is just as simple.
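Again reconstructed from the description that follows; the message fields and the shape of `store.knownNodes` are assumptions carried over from the earlier sketches.

```typescript
// Hypothetical sketch of sendMessageText.
function sendMessageText(text: string): void {
  // The message object has to match the message type from telegraph.proto.
  const msg = { text, trace: [myId] };

  // Send it to each and every node we know about.
  for (const info of store.knownNodes.values()) {
    void info.telegraphClient.sendMsg(msg);
  }
}
```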
We create a message object, which has to match the message type defined in our protobuf file. Then that message is sent to each and every node that we know about. The `info.telegraphClient` is simply an object returned by calling `wlabs.examples.Telegraph.createClient` and passing in a node ID, like in the `addOriginNode` function.
> **Note:** What's up with the `void` there? In TS async code, it signals to the compiler that the thing to the right of the `void` keyword returns a promise, and that we as the programmers know that and are choosing to purposefully ignore the promise's result.
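For example, this deliberately discards the promise returned by the (hypothetical) client call from the sketches above:

```typescript
// Without `void`, linters such as @typescript-eslint/no-floating-promises
// would flag this as an unhandled promise.
void info.telegraphClient.sendMsg(msg);
```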
## That's It

And that's all there is to it. When we register the RPC handler, Maglev will automatically begin connecting to infra in the background. Connecting to another node is done for you as well, lazily. All you as the developer need to do is "host" an RPC, or invoke one.