5 min read

An attempt at making geographically distributed services easy to build

Writing an application that runs on many servers in separate locations around the world is no easy task. You have to deal with access to application data, routing, inter-service communication, and many other things. The situation quickly becomes much more complicated than having an application server with a pool of connections to a database. If a team wants to take advantage of the key property of a globally deployed app (namely, lower latency thanks to physical proximity to end users), it has to deal with a lot of pain.

TL;DR I've published v0.0.1 of a Go library called libsdk (pronounced Lipstick) to aid in writing distributed apps. This first release is centred around replicating a local database (starting with SQLite) via a messaging system (starting with NATS), while remaining as interoperable with the ecosystem of net/http as possible. I hope to expand it into a comprehensive toolkit for distributed apps.

Platforms such as Cloudflare Workers, Fly.io, Deno Deploy, and others look to solve the distributed datastore problem in a number of ways. Cloudflare has D1, Fly.io has Postgres with Global Replication, Deno has KV. What do all of these have in common? They're tightly coupled to the platform (and therefore closed source and proprietary), and they're opaque. While they have each been described in detail in various blog posts, there is no way to run any of them on your own infrastructure, and there is no opportunity to fine-tune them or tweak them to any specific use-cases (i.e. they optimize for the "average" use-cases).

That's not to take away from how cool they are! When I first read about D1, I was blown away by the awesomeness of it. As a self-proclaimed distributed systems nerd, I couldn't help but think "wouldn't it be awesome if there was an OSS version of this?" Well, there sort of is? Litestream is a project that backs up SQLite databases to help make SQLite "disaster recoverable", and its sibling LiteFS is a FUSE-based filesystem aimed at replicating SQLite databases across a fleet of running servers. With SQLite emerging as the "edge database du jour", these two projects offer some compelling functionality. What are the trade-offs here? Per the LiteFS documentation, there is a hit to write performance due to the limitations of FUSE (which they aim to mitigate in the future with a bespoke filesystem). Something they don't list as a trade-off, but which could be seen as one, is LiteFS' dependency on Consul. That's quite a heavy system to add on top of a technology as lightweight as SQLite, and it historically doesn't scale well across geographies unless you do it juuust right, which I'm told requires a team of experts.

Enough with the preamble, I decided that the "complexities of building lightweight distributed web services" was a problem I was interested in taking a stab at, so I started looking at alternative approaches. One thing that came to mind was pairing a common architecture for SoA and microservice development (messaging/queues) with a lightweight database to achieve a simple mechanism for writing distributed applications. This is when I git init'd a new project called libsdk. Now you might bemoan the super generic name, but I chose it to pair with another pet project I started last year called Makeup, and so I say libsdk is pronounced "Lipstick" in this context. Makeup and Lipstick. Get it?

Lipstick is starting with one piece of the lightweight distributed app puzzle, and if there's interest in the project I aim to expand it to other aspects as well. To start, the goal is to connect instances of a service via a fabric, which is an interface describing an async communication layer. The first implementation of fabric uses NATS (specifically, JetStream), a super lightweight and battle-tested async messaging system. The fabric is then used to create a store, which is an interface to a local database. The first implementation of store uses SQLite, which is similarly lightweight and battle-tested. When you pair fabric and store, you get a durable, highly available, replicated local database for your application. Cool.

It works by creating an append-only log of the write transactions that take place in your application. store provides an Exec method which in turn executes a "Transaction Handler", like so:

func InsertPersonHandler(tx store.Tx, args ...any) (any, error) {
	q := `
	INSERT INTO people (first_name, last_name, email)
	VALUES ($1, $2, $3);`

	id, err := tx.ReadWrite().Exec(q, args...)
	if err != nil {
		return nil, errors.Wrap(err, "failed to Exec")
	}

	return id, nil
}

This transaction handler does exactly what it seems: it implements a database transaction! Your transaction handler is given a store.Tx object which can execute Read or ReadWrite queries, and under the hood, any write operations are recorded as TxRecord objects and written to the fabric for replication. The store will wait for confirmation from fabric that a record has successfully been added to the log before committing the transaction to the local database. The store is also constantly replaying these records in the background to keep the local copy of the database up to date with write transactions from other instances of the service.

This model has some interesting properties. Firstly, the fabric becomes the logical datastore, with the append-only log becoming the source of truth for your application's data. Secondly, any node with a connection to the fabric can bootstrap from nothing. By connecting to the fabric and replaying the transactions from the beginning of time, an entire database can be recreated from scratch. libsdk starts up an application by first running a set of migrations against a brand new SQLite database, and then replaying the records from the fabric using transaction handlers like the one shown above. This makes for an extremely nimble deployment strategy for any new geography or replica of a service, since it only needs a connection to the fabric to gain access to the entire dataset.

How is this better than having your app connect to a database server like MySQL or Postgres? Well, async message/queue systems like NATS and Redpanda are designed from the ground up to work in a federated manner, allowing distributed instances of their server to replicate, even across regions and clouds. Traditional databases, on the other hand, are usually designed around the 'read replica' model, where a distributed replica can be a source of reading data, but not writing. There are many, many situations where these database servers are the right technology. They power almost everything! I do nonetheless believe there is a niche for a distributed-first application architecture (let's call it the middle ground between edge workers and centralized microservices) that does not lend itself well to that traditional type of database, and with this project I intend to find out if anyone else agrees with me.

As with any system, there are tradeoffs! With libsdk's model, you will have an external dependency on whatever fabric driver you choose (NATS is the only one, for starters). This is similar to what I called out for LiteFS above, but in this case, I hope to make the fabric driver modular so that other systems like Kafka/Redpanda could be plugged in. The plan is to also implement inter-service messaging with the fabric, and so the dependency will get you two major pieces of the puzzle: data replication and async communication. This will allow for not only single-service apps (like LiteFS solves for), but multi-service apps (or "globally distributed microservices", if you insist on calling them that). There is also an inherent eventual consistency to this model that may not be suitable for all applications, but is generally well accepted in highly distributed application design.

As I mentioned in the TL;DR, libsdk is aiming to be entirely compatible with the wide ecosystem of net/http, and so I hope to let anyone bring any web service framework they like, from gorilla/mux to echo, gin, and more. This is an early experiment, so please don't build anything real with it yet, but PLEASE do send me your feedback on the idea via GitHub issues and try out the proof-of-concept v0.0.1 release by exploring the example in the repo. More to come!