Gilbert's Devlogs

Software engineer. Advocate against complexity.


It's a little late, but I just realized I've been solving a bunch of small annoying issues and thought it'd be good to write them down here.


iOS only allows https requests. However, I need to test on a server running locally on http.

NSExceptionAllowsInsecureHTTPLoads is the key you need to set to bypass this restriction. Here's the code you need to add to your Xcode project's Info.plist (beware, some of these keys may already exist in your project):
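The relevant Info.plist structure looks roughly like this – these are Apple's App Transport Security keys, which you'd merge into any NSAppTransportSecurity dictionary already present:

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <key>localhost</key>
    <dict>
      <key>NSExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>
```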


Here it says localhost, but in my case I had to add an extra entry for my laptop's local IP address, since I was testing my app on-device.


My side project uses Firebase, and wow has it come a long way since I first tried it several years ago.

We're building a mobile app. Some of our data is sourced from third-party APIs. Since those APIs have rate limits, we don't want the client to access them directly – that could easily blow through the limits. So instead, we're going to have a Firebase function (basically an AWS Lambda – which came first?) retrieve data from the API, if and only if that data is not present in our database.
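The cache-or-fetch shape of that function can be sketched independently of Firebase. All names below are illustrative, not the real app's code – the store would be Firestore and fetchRemote the third-party API call:

```typescript
// Minimal abstraction over whatever database backs the cache.
interface Store<T> {
  get(id: string): Promise<T | undefined>;
  set(id: string, value: T): Promise<void>;
}

// Only calls the rate-limited API when the data isn't already in our database.
async function getOrFetch<T>(
  store: Store<T>,
  fetchRemote: (id: string) => Promise<T>,
  id: string
): Promise<T> {
  const cached = await store.get(id);
  if (cached !== undefined) return cached; // cache hit: no API call
  const fresh = await fetchRemote(id);     // cache miss: exactly one API call
  await store.set(id, fresh);              // persist so future clients skip the API
  return fresh;
}
```

The client then talks only to our function's URL, and the third-party API sees at most one request per unique id.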

"Serverless", in other words. That's the appealing part about using Firebase – you use their features directly instead of building them into your own server yourself. But how will it play out in practice?


Nice, it works now! And it does pick up changes automatically (I think my TypeScript config was wrong).

It's convenient how the CLI lists the available function URLs in the terminal. Much less guessing in that regard.


Usually you can use http://localhost:3000 to visit your app, but not http://x.localhost:3000. There are ways to configure specific domains to work this way, but that isn't enough when you need arbitrary subdomains to all point to the same server.

This gist explains how to do it on macOS. It was written in 2013, so here are the shortened instructions.

First install dnsmasq with brew:

$ brew update
$ brew install dnsmasq

Then initialize the configuration and start it up:

$ echo "address=/localhost/127.0.0.1" > /usr/local/etc/dnsmasq.conf
$ brew services start dnsmasq

Not only will this start dnsmasq, but brew will also ensure it starts when your computer restarts.
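One more step from the gist worth keeping: macOS only consults dnsmasq for these names if a resolver file exists for the localhost TLD. Something like:

```shell
# Point macOS at the local dnsmasq for any *.localhost lookup
sudo mkdir -p /etc/resolver
echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/localhost
```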

And that's it! URLs such as http://x.localhost:3000 and http://x.y.localhost:3000 will all point to your server listening on port 3000.


When designing a SQL table with polymorphic references, it's preferable to create a composite index with the type column first. For example:

CREATE INDEX my_poly_ref ON upvote (target_type, target_id);

This is preferable because you're more likely to query by a type without an id than by an id without a type. In fact, I personally can't think of a reason to do the latter.
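To see why the column order matters, consider which queries can actually use the index (table and column names as in the snippet above):

```sql
-- Uses the (target_type, target_id) index: the leading column is constrained
SELECT count(*) FROM upvote WHERE target_type = 'post';

-- Also uses it: both columns constrained
SELECT * FROM upvote WHERE target_type = 'post' AND target_id = 42;

-- Likely can't use it efficiently: the leading column is unconstrained
SELECT * FROM upvote WHERE target_id = 42;
```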



My current work task is to dig into the codebase of CodeSandbox. One of the libraries they use for state management is Overmind. At first glance it looks like an alternative to Redux.


Interesting how Overmind has its own functional programming utilities. I guess it's nice that you don't have to install another library to get the basics. But I also hope they get properly tree-shaken when they aren't used.


I just discovered one of our tests has a race condition. It took a while to debug, but I've narrowed it down to the interaction between mocha-parallel-tests and our little stub utility:

const originals: Array<[any, any, any]> = [];

export function stub<T, K extends keyof T>(obj: T, key: K, value: T[K]): void {
  originals.push([obj, key, key in obj ? obj[key] : '#delete#']);
  obj[key] = value;
}

export function resetStubs() {
  for (const [obj, key, originalValue] of originals) {
    if (originalValue === '#delete#') {
      delete obj[key];
    } else {
      obj[key] = originalValue;
    }
  }
  // Clear so old stubs aren't re-applied on the next reset
  originals.length = 0;
}

Example use: stub(MyModule, 'foo', () => 'myValue'). The nice part is how TypeScript ensures your stub has the same type as the original value. In other words, you can't accidentally stub an incorrectly shaped value.

mocha-parallel-tests forks your node process for each test file you run in your repo. Normally this is fine, but it seems we have two tests in separate files that are stubbing the same module and property. One overwrites the other and thereby causes the other to fail.

We definitely don't want to give up the speed benefits of parallel test runs, and I want to keep the elegance of the current stub function. Solving this will take a bit of thought.


Ok, it looks like there's a point at which mocha-parallel-tests decides to fork the process (I'm only guessing it forks, since I don't see any require('child_process') in the source). Before that point, modules are shared and only get required once. After it, modules are no longer shared.

The fix was to add a simple require() in the correct spot (with an explanatory comment) and update our stubs utility to handle concurrent access. The final gist for that is here. It may be incomplete for more complex use cases, but it's working well so far for us – no more non-deterministic test failures!


SolarJS is a fullstack framework that hits a sweet spot between developer ergonomics and statically typed correctness, yielding a pleasant sense of maintainability not often seen in the Node.js world. However, I originally built it for a single project, so there are non-critical-but-still-important features that still need to be built out.

One of those features is production builds. Solar has SSR built in, and bundles the same JS code to hydrate on the browser. However, it was built with productivity as a priority (refresh takes only half a second), and I never got around to optimizing it for production.

Tired of purging Cloudflare's cache by hand, I'm finally making full use of Rollup to create builds that no longer need to be cleared manually.


The fastest code is the code that doesn't need to run.

Now that all assets are generated at build time instead of runtime (as they should be), the solar-dev package can now be moved to your devDependencies. This will speed up server start time by making your runtime app size significantly smaller.

Running in production with no issues. It's nice when things work.


Prolog, like Lisp, is a homoiconic language. Very cool!

This seems important for building extensible systems.


Because Prolog is dynamically typed, it provides predicates to do runtime checking:

  • Shapes (atom, list, etc.)
  • Error-throwing shapes (atom_si, list_si, etc.)
  • must_be for fully instantiated terms, e.g. must_be(atom, abc).
  • can_be for terms that may not be fully instantiated, e.g. can_be(list, [a, Xs]).

"Fully instantiated" means no variable terms.
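A rough sketch of how these behave at the toplevel (atom_si and can_be here are assumed to come from Scryer Prolog's library(si); details may vary by system):

```prolog
?- atom(abc).         % succeeds
?- atom(X).           % fails quietly: X is an unbound variable
?- atom_si(X).        % throws an instantiation error instead of failing
?- must_be(atom, X).  % throws: must_be demands a fully instantiated term
?- can_be(atom, X).   % succeeds: an unbound X could still become an atom
```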

More details here.


Mainstream programming is incredibly tedious. Wheels are reinvented several times over. Pendulums swing back and forth, each swing touted as the next best thing. Incidental complexity extrudes from every nook and cranny. Layers heap upon layers, each riddled with their own special variety of bugs.

Every day it all makes me wonder: there has to be a better way. And there must be – surely?

Many have tried, but maybe not enough have. My current intuition and inspiration is to go higher up the stack. Right now that means learning Prolog – the logic programming language with the most well-beaten path.

Prolog is over 40 years old, and many interesting technologies have been built with it. My guess for why it never "took off" is that using it is so vastly different from C-like languages that the learning curve daunts the average programmer. However, to make the long-overdue future of programming a reality, these self-imposed mental barriers must be disposed of. (And now that programmers call themselves "engineers", the title should come with such a responsibility!)

In any case, perhaps Prolog will provide a path forward. I document my learning experience here.


Digging around, I find Prolog is actually an interesting family of languages and implementations. The most interesting to me is Datalog. It's a subset of Prolog, but with one grand attribute: queries are guaranteed to terminate (on finite sets of data).

Just like Prolog, there are many implementations of Datalog, each with their own set of tradeoffs. But most are geared towards large scale backend development.

I want to bring these ideas to JavaScript and the browser. I can already see Datalog being useful as a datastore.

Think about it – state management is probably the most difficult part of frontend development. For simple apps, we avoid it. But larger apps force us to reach for larger solutions like Redux, PouchDB, minimongo, and so on.

But these all have one thing in common: querying a single source of truth. Is Datalog not the epitome of such concerns? We'll just have to see.
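As a toy sketch of what a datalog-flavored store could feel like in the browser – all names here are made up, and this is just pattern matching over facts, not a real Datalog with rules:

```typescript
// Facts are entity–attribute–value triples, as in most Datalog stores.
type Value = string | number;
type Triple = [entity: number, attribute: string, value: Value];

const facts: Triple[] = [
  [1, "todo/title", "Buy milk"],
  [1, "todo/done", 0],
  [2, "todo/title", "Write devlog"],
  [2, "todo/done", 1],
];

// "?" acts as a wildcard, like an unbound datalog variable.
function query(e: number | "?", a: string | "?", v: Value | "?"): Triple[] {
  return facts.filter(([fe, fa, fv]) =>
    (e === "?" || fe === e) && (a === "?" || fa === a) && (v === "?" || fv === v)
  );
}
```

Every component reads from the same fact set, so the single source of truth falls out for free; a real implementation would add rules, joins, and incremental updates on top.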

The next step is to research how to implement a Datalog, and then pick the right tradeoffs to make it lightweight and fast enough to use in a browser. Cheap to make, cheap to use.

I'm excited.


We use Buildkite for our continuous integration pipeline. They're nice because they let us "bring our own servers"; they don't need access to our source code and instead make requests to our machine (the "agent"). Our machine does work, sends back a response, and Buildkite displays that response on their website.

However, today our agent went offline and Buildkite is no longer detecting it. Currently debugging why.


Turns out the old agent process was still running and writing 401 errors to the log.

Just killed it. Everything seems to be in order now.