Gilbert's Devlogs

Software engineer. Advocate against complexity.


Elixir's API compatibility between eager (Enum) and lazy (Stream) enumeration is quite nice. For example:

item_ids
|> Enum.map(fn id ->
  IO.puts "Getting #{id}"
  id
end)
|> Enum.take(5)

If item_ids has a lot of elements, then Enum.map would run for all of them (similar to JavaScript's Array.prototype.map), and Enum.take would grab the first 5 of those results.

If you want to do this lazily instead, you can just replace Enum.map with Stream.map, which creates a lazy enumerable. Then, when Enum.take does its job, the lazy enumerable only runs for the first 5 items.
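A quick sketch of the lazy version – note the side effect only fires for the items Enum.take actually demands:

```elixir
1..1_000
|> Stream.map(fn id ->
  IO.puts "Getting #{id}"
  id
end)
|> Enum.take(5)
# Prints "Getting 1" through "Getting 5" and returns [1, 2, 3, 4, 5]
```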

Pretty cool.


Figuring out ecto's upsert behavior was a major pain. Take the following example:

%User{}
|> User.changeset(attrs)
|> Repo.insert!(
  on_conflict: :replace_all,
  conflict_target: :github_id
)

Here, :replace_all tells ecto to update the existing record in the db with your data from the changeset (when a conflict occurs).

This is great. However, the unexpected part is that :replace_all also includes the id column in its updates. In other words, when a conflict occurs, ecto updates the existing record with a new incremental id. What??

I don't know when you'd want this. But to stop it, you need to explicitly tell ecto to exclude id:

  on_conflict: {:replace_all_except, [:id]},
  conflict_target: :github_id


Had to figure out where to put custom mix tasks. First, create the folder lib/mix/tasks. Then put a file in there, such as my.hello.ex, that looks like this:

defmodule Mix.Tasks.My.Hello do
  use Mix.Task

  @shortdoc "An example custom mix task"

  def run(_) do
    IO.puts "Hello, task."
  end
end

Then you can run your task from the command line with mix my.hello.


If you use :httpc, write your url as a charlist (single quotes), not a binary (double quotes)!
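For reference, a minimal sketch (example.com is just a stand-in URL, and this does hit the network):

```elixir
:inets.start()

# :httpc is an Erlang module, so it expects charlists ('...'),
# not Elixir binaries ("...")
{:ok, _response} = :httpc.request('http://example.com')
```

If your url lives in a regular double-quoted string, String.to_charlist/1 converts it.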


Trying to get ueberauth to work on my new phoenix application.


Ok, so there are lots of files involved. Apart from what's in the readme, a few pieces of configuration were missing.
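For reference, the provider configuration usually looks roughly like this (a sketch based on ueberauth_github's docs; the env var names here are my own):

```elixir
# config/config.exs
config :ueberauth, Ueberauth,
  providers: [
    github: {Ueberauth.Strategy.Github, [default_scope: "user:email"]}
  ]

config :ueberauth, Ueberauth.Strategy.Github.OAuth,
  client_id: System.get_env("GITHUB_CLIENT_ID"),
  client_secret: System.get_env("GITHUB_CLIENT_SECRET")
```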


It's a little late, but I just realized I've been solving a bunch of small annoying issues and thought it'd be good to write them down here.


iOS only allows https requests by default. However, I need to test against a server running locally over http.

NSExceptionAllowsInsecureHTTPLoads is the key you need to set to bypass this restriction. Here's the code you need to add to your Xcode project's Info.plist (beware, some of these keys may already exist in your project):
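The fragment in question (this is the standard App Transport Security exception-domain structure – merge it with any existing NSAppTransportSecurity dict):

```xml
<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <key>localhost</key>
    <dict>
      <key>NSExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>
```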


Here it says localhost, but in my case I had to add an extra entry for my laptop's local IP address, since I was testing my app on-device.


My side project uses Firebase, and wow has it come a long way since I first tried it several years ago.

We're building a mobile app. Some of our data is sourced from 3rd-party APIs. Since those APIs have rate limits, we don't want the client to access them directly – that could easily blow through those limits. So instead, we're going to have a Firebase function (basically an AWS Lambda – which came first?) retrieve data from the API, if and only if that data is not already present in our database.

"Serverless", in other words. That's the appealing part about using Firebase – you use their features directly instead of building them into your own server yourself. But how will it play out in practice?
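The cache-aside flow itself is easy to sketch without any Firebase-specific APIs – everything below (names included) is a hypothetical stand-in for the real function:

```typescript
// Cache-aside: check our own database first; only call the
// rate-limited third-party API on a miss, then save the result.
type Fetch<T> = (id: string) => Promise<T | null>;
type Save<T> = (id: string, value: T) => Promise<void>;

export async function getThroughCache<T>(
  id: string,
  fromDb: Fetch<T>,   // e.g. a Firestore read
  fromApi: Fetch<T>,  // e.g. the third-party API call
  save: Save<T>,      // e.g. a Firestore write
): Promise<T | null> {
  const cached = await fromDb(id);
  if (cached !== null) {
    return cached; // hit: no API call, no quota spent
  }
  const fresh = await fromApi(id); // miss: exactly one API call
  if (fresh !== null) {
    await save(id, fresh); // subsequent reads come from our db
  }
  return fresh;
}
```

Inside the real Firebase function, fromDb/fromApi/save would be the Firestore SDK and an HTTP client; the shape of the logic stays the same.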


Nice, it works now! And it does pick up changes automatically (I think my TypeScript config was wrong).

It's convenient how the cli lists the available function urls in the terminal. Much less guessing in that regard.


Usually you can use http://localhost:3000 to visit your app, but not http://x.localhost:3000. There are ways to configure specific domains to work this way, but that isn't enough when you need arbitrary subdomains to all point to the same server.

This gist explains how to do it on MacOS. It was written in 2013, so here are the shortened instructions.

First install dnsmasq with brew:

$ brew update
$ brew install dnsmasq

Then initialize the configuration and start it up:

$ echo "address=/localhost/127.0.0.1" > /usr/local/etc/dnsmasq.conf
$ brew services start dnsmasq

Not only will this start dnsmasq, but brew will also ensure it starts when your computer restarts.
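One step from the gist worth calling out: macOS won't actually consult dnsmasq until you add a resolver entry for the localhost TLD. That means creating /etc/resolver/localhost (requires sudo) containing this single line:

```
nameserver 127.0.0.1
```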

And that's it! Urls such as http://x.localhost:3000 and http://x.y.localhost:3000 will all point to your server listening on port 3000.


When designing a SQL table with polymorphic references, it's preferable to create a composite index with the type column first. For example:

CREATE INDEX my_poly_ref ON upvote (target_type, target_id);

This is preferable because you're more likely to query by a type without an id than by an id without a type. In fact, I can't think of a reason to do the latter, personally.
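To see the leading-column rule in action, here's a quick illustration using SQLite (just for demonstration – any database with composite indexes behaves similarly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE upvote (target_type TEXT, target_id INTEGER)")
con.execute("CREATE INDEX my_poly_ref ON upvote (target_type, target_id)")

# Querying by type alone still uses the index, because target_type leads it:
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM upvote WHERE target_type = 'post'"
).fetchall()
print(plan)
```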



My current work task is to dig into the codebase of codesandbox. One of the libraries they use for state management is overmind. At first glance, it looks like an alternative to redux.


Interesting how Overmind has its own functional programming utilities. I guess it's nice that you don't have to install another library to get the basics. But I also hope they get properly tree-shaken when they aren't used.


I just discovered one of our tests has a race condition. It took a while to debug, but I've narrowed it down to the interaction between mocha-parallel-tests and our little stub utility:

const originals: Array<[any, any, any]> = [];

export function stub<T, K extends keyof T>(obj: T, key: K, value: T[K]): void {
  originals.push([obj, key, key in obj ? obj[key] : '#delete#']);
  obj[key] = value;
}

export function resetStubs() {
  for (let [obj, key, originalValue] of originals) {
    if (originalValue === '#delete#') {
      delete obj[key];
    } else {
      obj[key] = originalValue;
    }
  }
  originals.length = 0; // forget restored entries so repeated resets are safe
}
Example use: stub(MyModule, 'foo', () => 'myValue'). The nice part is how TypeScript ensures the return value of your function stub is the same type as the original. In other words, you can't accidentally stub an incorrectly shaped value.

mocha-parallel-tests forks your node process for each test file you run in your repo. Normally this is fine, but it seems we have two tests in separate files that are stubbing the same module and property. One overwrites the other and thereby causes the other to fail.

We definitely don't want to give up the speed benefits of parallel test running, and I want to keep the elegant nature of the current stub function. Solving this will take a bit of thought.


Ok, it looks like there's a point when mocha-parallel-tests decides to fork the process (I'm only guessing fork because I don't see any require('child_process') in the source). Before that point, modules are shared and only get required once. After that, modules are no longer shared.

The fix was to add a simple require() in the correct spot (with an explanatory comment) and update our stubs utility to handle concurrent access. The final gist for that is here. It may be incomplete for more complex use cases, but it's working well so far for us – no more non-deterministic test failures!


SolarJS is a fullstack framework that hits a sweet spot between developer ergonomics and statically typed correctness, yielding a pleasant sense of maintainability not seen in the nodejs world. However, I originally built it for a project, and so there are non-critical-but-still-important features that need to be built out.

One of those features is production builds. Solar has SSR built in, and bundles the same JS code to hydrate on the browser. However, it was built with productivity as a priority (refresh takes only half a second), and I never got around to optimizing it for production.

Tired of purging cloudflare's cache by hand, I've decided it's finally time to make full use of rollup and create builds that no longer need manual cache clearing.


The fastest code is the code that doesn't need to run.

Now that all assets are generated at build time instead of runtime (as they should be), the solar-dev package can now be moved to your devDependencies. This will speed up server start time by making your runtime app size significantly smaller.

Running in production with no issues. It's nice when things work.