Elixir

September 14, 2022 (last updated September 16, 2024)

Other writings I have on Elixir

Elixir and VSCode Explorer sort order

In Elixir projects it is a common naming convention to have a file for a context, e.g., context.ex, and then a folder of the same name, e.g., context/, which contains code that the context encapsulates.

You can change the VSCode Explorer's sort order to group these files and folders. "explorer.sortOrder": "mixed" will 'mix' files and folders together when sorting, as opposed to the default setting of sorting folders first, and then files second.

You can enable this globally or in project settings.
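For per-project use, the setting lives in a `.vscode/settings.json` file at the project root:

```json
{
  "explorer.sortOrder": "mixed"
}
```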

What functionality should live in contexts?

I wish I had started keeping this note a long time ago, because this has been an interesting issue for me ever since I started learning Elixir.

The TLDR is that I use the nested contexts model I first saw articulated by Devon Estes. Here are the general rules:

  1. Schemas are just for schemas and their manipulation (via changesets, etc.). No Repo calls!
  2. Each schema has a "secondary context", which these days I call a "schema context" instead. It "hides" the schema from the rest of the app.
  3. Each secondary context gets called by the "primary context", which is essentially an orchestrator of secondary contexts. You shouldn't be making Repo calls in primary contexts. In some platonic ideal, a primary context is just a chain of calls to various secondary contexts.

Additional recommendations are as follows: in the most general sense this defines a tree-like hierarchy of contexts of arbitrary depth (i.e., there can be more than 'two layers'). Leaf node contexts are hiding something (lol). Intermediary and root node contexts are orchestrating child contexts. Contexts are always hiding one or more things: the schema contexts hide the schemas and the Repo, at the least, and higher-order contexts hide lower-order contexts, etc.
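A minimal sketch of the three rules, with hypothetical module names. `Repo` here is a stand-in stub so the example runs without a database; in a real app it would be your Ecto repo, and `changeset/1` would build an `Ecto.Changeset`.

```elixir
# Stand-in for an Ecto repo so the sketch is runnable; not a real Repo.
defmodule Repo do
  def insert(record), do: {:ok, Map.put(record, :id, 1)}
end

# Rule 1: the schema module only shapes data. No Repo calls!
defmodule Accounts.User do
  defstruct [:id, :name, :email]

  # In a real app this would return an Ecto.Changeset.
  def changeset(attrs), do: struct(__MODULE__, Map.take(attrs, [:name, :email]))
end

# Rule 2: the schema ("secondary") context hides the schema and the Repo.
defmodule Accounts.Users do
  def create(attrs), do: attrs |> Accounts.User.changeset() |> Repo.insert()
end

# Rule 3: the primary context orchestrates secondary contexts. No Repo calls.
defmodule Accounts do
  def register(attrs), do: Accounts.Users.create(attrs)
end
```

Note how callers of `Accounts.register/1` never see the schema, the changeset, or the Repo.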

The Phoenix guide on contexts is open-ended. The nested contexts model is approximately the same advice as the Phoenix guides, but with slightly more explicit rules to help you make decisions about new functionality.

Anonymous functions

https://hexdocs.pm/elixir/main/anonymous-functions.html#clauses-and-guards

Anonymous functions support multiple clauses and guards, much like the case construct, allowing for "cool" anonymous functions. For example, an Enum.reduce where you use clauses to effectively rename keywords as you reduce.

In the example below I have an initial opts param of [size: 0, mime_type: ""] and I want to reduce those opts to [content_length: 0, content_type: ""], so long as the corresponding original keyword is actually passed in.

Enum.reduce(opts, [], fn
  {:size, size}, acc -> [{:content_length, size} | acc]
  {:mime_type, mime_type}, acc -> [{:content_type, mime_type} | acc]
end)
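For completeness, here is the reduce run against those opts. Note that prepending to the accumulator reverses the order relative to the input:

```elixir
opts = [size: 0, mime_type: ""]

renamed =
  Enum.reduce(opts, [], fn
    {:size, size}, acc -> [{:content_length, size} | acc]
    {:mime_type, mime_type}, acc -> [{:content_type, mime_type} | acc]
  end)

# renamed == [content_type: "", content_length: 0]
```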

Errors and Exceptions

https://hexdocs.pm/elixir/main/try-catch-and-rescue.html

When to use ! in function names

Many functions in the standard library follow the pattern of having a counterpart that raises an exception instead of returning tuples to match against. The convention is to create a function (foo) which returns {:ok, result} or {:error, reason} tuples and another function (foo!, same name but with a trailing !) that takes the same arguments as foo but which raises an exception if there's an error. foo! should return the result (not wrapped in a tuple) if everything goes fine. The File module is a good example of this convention.
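A hand-rolled sketch of the convention (Settings is a hypothetical module; File.read/1 and File.read!/1 are the canonical real-world pair):

```elixir
defmodule Settings do
  @data %{theme: "dark"}

  # foo: returns {:ok, result} or {:error, reason} tuples.
  def fetch(key) do
    case Map.fetch(@data, key) do
      {:ok, value} -> {:ok, value}
      :error -> {:error, :not_found}
    end
  end

  # foo!: same arguments, returns the bare result or raises on error.
  def fetch!(key) do
    case fetch(key) do
      {:ok, value} -> value
      {:error, reason} -> raise ArgumentError, "could not fetch #{inspect(key)}: #{inspect(reason)}"
    end
  end
end
```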

Fail fast / let it crash

At the end of the day, "fail fast" / "let it crash" is a way of saying that, when something unexpected happens, it is best to start from scratch within a new process, freshly started by a supervisor, rather than blindly trying to rescue all possible error cases without the full context of when and how they can happen.

Env vars

You often see System.get_env/1 used when reading env vars, but don't forget about System.fetch_env!/1! It will raise if it fails to find the env var. This can save you a lot of headaches when double-checking that the env var is set at the time you need it to be. If you are setting it too late, fetch_env! will raise and you will probably need to read more about #runtime.exs below. This is also a common problem when something should be in config/runtime.exs but is in some other config/ location: the env var could be set when the app is "live" but isn't set when it needs to be read, which is earlier in the release cycle.
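A quick demonstration of the difference (the variable name here is made up):

```elixir
System.put_env("MY_APP_API_KEY", "secret")

# Both return the value when the variable is set.
"secret" = System.get_env("MY_APP_API_KEY")
"secret" = System.fetch_env!("MY_APP_API_KEY")

System.delete_env("MY_APP_API_KEY")

# get_env/1 silently returns nil when unset...
nil = System.get_env("MY_APP_API_KEY")

# ...while fetch_env!/1 raises, surfacing the problem immediately.
raised? =
  try do
    System.fetch_env!("MY_APP_API_KEY")
    false
  rescue
    _ -> true
  end

IO.puts("fetch_env! raised: #{raised?}")
```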

runtime.exs

This is not an easy step to understand the first time you learn about it. Here's a quote from the Elixir docs on Configuration and releases,

Configuration files provide a mechanism for us to configure the environment of any application. Elixir provides two configuration entry points:

  • config/config.exs — this file is read at build time, before we compile our application and before we even load our dependencies. This means we can't access the code in our application nor in our dependencies. However, it means we can control how they are compiled

  • config/runtime.exs — this file is read after our application and dependencies are compiled and therefore it can configure how our application works at runtime. If you want to read system environment variables (via System.get_env/1) or any sort of external configuration, this is the appropriate place to do so

The takeaway here is to avoid calling System.get_env/1 and System.fetch_env!/1 in config/config.exs and to move those calls into config/runtime.exs. The alternative is providing the environment variables everywhere your code ends up being compiled. For example, I had a System.fetch_env! call in config/config.exs and it raised during the build step when deploying to Fly.io. That meant I would have had to add env vars to either the fly.toml file or the Dockerfile the build step executes in. Instead, I call System.fetch_env! in config/runtime.exs and the headaches are solved. This allows for the normal experience of setting env vars on your runtime machines and nowhere else.
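For example, a minimal config/runtime.exs might look like this (the app and env var names are hypothetical):

```elixir
import Config

# Read at boot time, after compilation, so the env var only needs to
# exist on the runtime machine, not in the build environment.
if config_env() == :prod do
  config :my_app, :mailer,
    api_key: System.fetch_env!("MAILER_API_KEY")
end
```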

Hosting on fly.io

Connecting to your running app

https://fly.io/docs/elixir/the-basics/iex-into-running-app/

fly ssh issue --agent

This command will issue a key and add it to your SSH agent (i.e., ssh-agent). (1) I can't tell if this is actually necessary because I am able to connect without doing it. (2) I can't tell if it's creating a public/private keypair on my local client and sharing the public key with the fly machine (this is my guess), or it's instead creating the keys on the fly machine and adding the public key to the agent.

It says next to run,

fly ssh console --pty -C "bin/$APP_NAME remote"

It actually says to type -C app/bin/$APP_NAME remote but that didn't work for me, and when I connected the normal way (fly ssh console) I saw I was already in the /app folder.

--pty is normally on by default, I believe, and is short for pseudo-tty (pseudo-terminal), which is just the normal CLI terminal you'd expect. I think it's specifiable because you might sometimes want to turn it off, e.g., when executing arbitrary commands in a CI/CD script.

-C, --command is the command to execute.

--select will let you choose which machine you connect to.

Lastly, here is a section near the end of the cloud script (cat bin/cloud) explaining what the remote param does,

remote Connects to the running system via a remote shell

Another, easier way to do it is first ssh into the vm itself with the universal,

fly ssh console -a $APP_NAME

Then, find the bin/$APP_NAME script and pass the remote param to it,

bin/$APP_NAME remote

Phoenix

Using Phoenix.json_library/0 in testing

Here is a test I wrote to ensure my encoder logic worked as intended, which demonstrates how you can use Phoenix.json_library/0 to simplify assertions. The decode option keys: :atoms is pretty important for developer ergonomics, and you have to 'just know' that the default Phoenix JSON implementation is Jason in order to pass the right options to Jason.decode/2.

note = %Note{texts: [], audios: [], images: []}

{:ok, json_note} = Phoenix.json_library().encode(note)

{:ok, json_note_map} = Phoenix.json_library().decode(json_note, keys: :atoms)

id = json_note_map.id
first_text_content =
  json_note_map.texts
  |> List.first()
  |> Map.get(:content)

Phoenix common actions and their corresponding routes

https://hexdocs.pm/phoenix/routing.html#resources

File uploads

https://hexdocs.pm/phoenix_live_view/uploads.html

I spent way too long figuring out that I was using the wrong upload_errors. The errors are either general or entry-specific.

For general errors, which seems to only be :too_many_files, use upload_errors/1. For entry-specific errors, including :too_large, :not_accepted, etc, use upload_errors/2.
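A sketch of both arities in a template (not runnable standalone; it assumes Phoenix LiveView and a hypothetical upload named :avatar):

```elixir
~H"""
<%!-- general errors, e.g. :too_many_files: upload_errors/1 --%>
<p :for={err <- upload_errors(@uploads.avatar)}>{inspect(err)}</p>

<%!-- entry-specific errors, e.g. :too_large, :not_accepted: upload_errors/2 --%>
<div :for={entry <- @uploads.avatar.entries}>
  <p :for={err <- upload_errors(@uploads.avatar, entry)}>{inspect(err)}</p>
</div>
"""
```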

Controllers can't find views

I've run into a nasty problem where the controller can't find the views even though they are there and there were no (obvious) git changes that caused the problem. As a hopeless last resort I ran mix compile --force and mix phx.digest.clean. I have read the docs for both commands and still have no real clue what happened, but the problem was resolved.

~p sigil

This is the verified routes sigil. It will warn at compile time when the path you use does not match a route defined in your Phoenix router. You use it like this: ~p"/some/verified/route"

HEEX templates

:if and :for syntactic sugar

There are two syntactic sugars to know: :if and :for

~H"""
<div :if={@someBoolean}>
  <div :for={l <- @someList}>
    <p>{l.text}</p>
  </div>
</div>
"""

It is hard to find documentation for this. The best I can find is this section on heex extensions.

dynamic attributes

You can assign a group of attributes using {} syntax

<.someComponent {[a: 1, b: 2]} />
<.someComponent {%{a: 1, b: 2}} />

This allows you to be creative with how you manipulate a large number of assigns (note: prefer refactoring to smaller assigns over manipulating large assigns).

<.someComponent {Map.drop(assigns, [:unwanted_key])} />
<.someComponent {Map.take(assigns, [:key1, :key2])} />

Sessions, conn.assigns, and socket.assigns

A great primer on this topic is the plug docs on session vs assigns.

HTTP is a stateless protocol. Sessions are the classic way to "maintain state", and are classically stored in cookies, which the browser will by default send back to you.

When handling a single request, you will store information in assigns. conn.assigns lasts for a single request-response lifecycle.

WebSockets are a stateful protocol. This means socket.assigns persists across multiple message exchanges (which are kind of like request-response lifecycles). This is where things get tricky, because a session in a cookie serves a similar purpose to a websocket, in the sense of facilitating a stateful exchange of information. So socket.assigns and cookie sessions fulfill similar roles, though sessions are even more permanent than websockets: on a hard refresh you rebuild your socket.assigns, often from a user token persisted within a session cookie.

Ecto

Extra SQL functions you might not know you have

Ecto, when backed by Ecto SQL and one of its supported adapters (e.g., the Postgres adapter), gives you extra Repo functions. One notable function is query/4, which lets you run arbitrary SQL: Repo.query("SELECT * FROM table"). This is useful in a few scenarios: complex SQL queries for applications that need them, and #Complicated migrations.

Complicated migrations

Here is a classic example of a complicated migration: Adding a non-null column. It is complicated because all pre-existing rows will have to be populated with some value. In the general case there are three steps.

  1. Generate the new column as nullable initially
  2. Execute arbitrary SQL to set the value for pre-existing rows (you could also use a default value when generating the initial column, if your situation allows for it).
  3. Modify the column to remove the default value and add non-nullability

Here is my first attempt at ever trying to do all three in a single migration. It might not be what a more experienced Ecto migrator would do, but it worked.

defmodule Cloud.Repo.Migrations.AddSizeToImageAndAudio do
  use Ecto.Migration

  def up do
    alter table(:images) do
      add :size, :integer
    end

    alter table(:audios) do
      add :size, :integer
    end

    # ensure the new columns exist before modifying rows
    flush()

    repo().update_all("audios", set: [size: 1_000_000])
    repo().update_all("images", set: [size: 1_000_000])

    # ensure the pre-existing rows have non-null values before dropping the
    # default
    flush()

    execute "ALTER TABLE audios ALTER COLUMN size DROP DEFAULT;"
    execute "ALTER TABLE audios ALTER COLUMN size SET NOT NULL;"
    execute "ALTER TABLE images ALTER COLUMN size DROP DEFAULT;"
    execute "ALTER TABLE images ALTER COLUMN size SET NOT NULL;"
  end

  def down do
    alter table(:images) do
      remove :size
    end

    alter table(:audios) do
      remove :size
    end
  end
end

Don't use modify/3 if you are not changing the column value type

From the docs on modify/3,

If you want to modify a column without changing its type, such as adding or dropping a null constraint, consider using the execute/2 command with the relevant SQL command instead of modify/3, if supported by your database. This may avoid redundant type updates and be more efficient, as an unnecessary type update can lock the table, even if the type actually doesn't change.

Working with associations

The best documentation of working with associations in general is in the put_assoc documentation where they describe an example: adding a comment to a post.

Dropping the test database

I was having trouble figuring out how to do this because I couldn't find information on mix environments, and so kept forgetting how to call the following.

MIX_ENV=test mix ecto.drop, etc.

PS: mix ecto.setup and mix ecto.reset are not built-in ecto commands, of which there are only four. The extra commands are present in generated Phoenix projects and can be found in your project's mix.exs file under aliases/0. In Phoenix projects, then, you can run MIX_ENV=test mix ecto.reset.

New changeset per change

I cannot find a direct quote for this anywhere, but I spent a bunch of time trying to update a changeset to "fix it". I think this is wrong; the preferred approach is to create a new changeset per change.

Comments

defmodule Comments do
  # normal comment

  """
  multline comment
  I do not think it is markdown enabled
  without being attached to a module attribute
  """
end

doctests

https://hexdocs.pm/ex_unit/1.12/ExUnit.DocTest.html
https://hexdocs.pm/elixir/1.12/writing-documentation.html#doctests

defmodule Comments do
  @doc """
  This is a module attribute.

  ## More

  It accepts markdown.
  """
  def bye do
    "bye"
  end

  @doc """
  This is a doc with a "doctest".
  Says hello to the given `name`.

  Returns `:ok`.

  ## Examples

      iex> # this is an unnecessary demonstration
      ...> # of a multiline doctest
      ...> Comments.world(:john)
      :ok
  """
  @doc since: "1.3.0"
  def world(name) do
    IO.puts("hello #{name}")
  end
end

IO

https://hexdocs.pm/elixir/1.12/IO.html

IO.inspect

Inspect has many options. Here is a useful one: IO.inspect(x, limit: :infinity)

This might not work for things like Phoenix LiveView sockets (I don't fully understand why), but the short answer is that you can combine it with another option: IO.inspect(x, limit: :infinity, structs: false). In fact, with the Phoenix LiveView socket in particular, the problem is just the structs, not the limit, so for non-massive sockets this works too: IO.inspect(x, structs: false)
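You can see the effect of structs: false with inspect/2 (which takes the same options as IO.inspect/2) on any struct, e.g. a URI:

```elixir
uri = URI.parse("https://example.com/path")

with_structs = inspect(uri)
without_structs = inspect(uri, structs: false)

# With structs: false the underlying map is shown, __struct__ key and all.
IO.puts(with_structs)
IO.puts(without_structs)
```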

IO.write with \r

I can find no reference to this online. If you want to log a value repeatedly without clogging up your output, you can use \r (carriage return) to overwrite the current line. In the example below, you do not get 1 million printouts; you get 1 line that is overwritten a million times.

for x <- 1..1_000_000 do
  IO.write("\rnum: " <> Integer.to_string(x))
end

This is not just an Elixir thing: \r is the carriage-return control character, a sibling of \n (newline), and terminals in general interpret it by moving the cursor back to the start of the line.

Mix

  • Remove old dependencies: mix deps.clean --unlock --unused

mix does not compile modules outside of lib

(I cannot find a definitive reference for this fact, but you can test it for yourself.)

Say you have a user.ex defined within a mix project,

defmodule User do
  defstruct name: "Alice", age: 20
end

If that file is defined in ./lib/user.ex, then after iex -S mix, you can instantiate a User.

> %User{}
%User{name: "Alice", age: 20}

If you move that file to ./user.ex, then after iex -S mix, or running recompile within iex, you cannot instantiate a User.

> %User{}
error: User.__struct__/1 is undefined, cannot expand struct User. Make sure the struct name is correct. If the struct name exists and is correct but it still cannot be found, you likely have cyclic module usage in your code
  iex:10

** (CompileError) cannot compile code (errors have been logged)

Resources