Building Your Personal Toolbox

We each have our own methods that operate behind the scenes while we engage in the process of software development. I like to think of this as the "craft." A woodworker hand-shapes chisels and constructs planes custom-fit to their own hands. A painter thoughtfully composes their palette and designs a color-mixing strategy that suits their visual style.

We as conjurers of software are lucky to count ourselves in this group — we have the power to build our own tools (and far more efficiently than a hand-carved plane, at that!). Exploring this power can be incredibly satisfying, and creatively enriching. It’s a slow process, something that develops over years, and there are no obligations, requirements, deadlines, or endgames — it’s something that happens naturally as we work, and discover new ways of working that click.

The real magic lies in an inevitable fact: we all develop our own unique approaches to the craft, which creates boundless potential for learning from one another.

This post is my first foray into the world of craft-sharing, one lightweight approach that could be utilized or adapted to help in your own process of toolbox construction. Without further ado, here’s how I combine the acts of note-taking and tool-building when I join a company.

Figure 1. A visual representation of my brain on the first day of a new job

I recently began a new engagement with a great startup called Rising Team. They’re building software to help remote creatives like me learn more about each other and the collaborative process, to enrich our relationships and the work we do (they’re also hiring!). I’ve learned from prior experiences that orienting myself within a completely new codebase and technology stack can be overwhelming — a feeling that can only be derailed through copious amounts of note-taking.

However, as much as I love paper notes, over the years I’ve become attached to a medium which works even better for me in the "getting ✨ done" department of software development. That format? And please give it a chance — shell scripts.

A Single-File Toolbox

Specifically, a single shell script, which takes the form of a .bash_<companyname> file in my home directory. I have one of these for every company, project, or codejam I’ve ever been a part of — anything which requires notes and the ability to do various things with the information therein (things relating to a computer, mostly). On my first day at the new gig, I created ~/.bash_risingteam with #!/usr/bin/env bash at the top, and added the following to my ~/.bashrc (anywhere in the file will do):

#!/usr/bin/env bash
source "$HOME/.bash_risingteam"
You could substitute s/bash/zsh/g in this post (or any shell/language of your choosing) and find the same result — a set of "functional notes" which simultaneously document and help automate everything you must do within a given category of work. The language is less important than the concept, and I’m hoping that, whether or not you’re a fan of bash, you can adapt some of the ideas from this post. Though I would count myself an appreciator of bash as a whole (due to the directness of things like file manipulation) even I can admit the language is terrible — YMMV :)

With that small act of creation, I’ve gone a long way towards organizing my thoughts — I now have a one-stop-shop for all information and code I need to jot down for myself while on the job, a couple keystrokes away. To make this even more convenient, the first thing to add is an "edit and reload" command for this shell config file itself:

#!/usr/bin/env bash

function srt() {
  vim "${BASH_SOURCE[0]}"     # BASH_SOURCE refers to the path of the current
  source "${BASH_SOURCE[0]}"  # script file (in this case ~/.bash_risingteam)
}

Now typing srt from anywhere will open this file, and reload it in the current shell upon save. This seems minor, but I’ve found it can work wonders by encouraging me to preserve everything useful, regardless of how small or how hectic current circumstances may be. Replace vim with the editor of your choice — VSCode → code, Atom → atom, etc. (best practice would be to use the $EDITOR env var if you happen to set it; I stuck with vim here for clarity).

I tend to use extremely brief names for the tools in these boxes, but by all means use something more descriptive if you prefer — these are custom-built for your own hands after all! In this case, srt → "source rising team" to me, though I’d refactor in the name of descriptiveness if I were expecting broader use.

If you happen to use the fantastic script linter shellcheck (brew install shellcheck) to help keep shell scripts a little less wild and woolly, you’ll need to add # shellcheck source=.bash_<yourfilename> above the second line of that function to prevent it from complaining about not being able to follow a dynamic source call.


So what else ends up in these files? That’s the exciting part — it depends entirely on your own working style and the unique challenges of the job at hand. Here are the things that quickly revealed themselves as worthwhile tools to hang on to, in the case of Rising Team:

Command-line URL Management

I always have a bunch of URLs to keep track of related to any given job, and browser bookmark management tools are a chaotic nightmare to me. I’d much rather be able to type a quick command at the terminal, be presented with a list of relevant URLs, fuzzy-search the list to select one of them, and go.

My command-line bookmarks tooling is intended to do just that, as simply as possible, while still feeling great to use (to me at least). rto → "rising team open":

function bookmarks() ( set -euo pipefail
  fzf -0 -1 -e --reverse --no-info --height="50%" \
      --tiebreak="begin,length" --query="${1:-}" \
  | awk -F' ' '{print $NF}' \
  | xargs -n 1 open
)

function rto() ( set -euo pipefail
  bookmarks "$@" <<-BOOKMARKS
    dev        http://localhost:8080
    devadmin   http://localhost:8080/admin/
BOOKMARKS
)

# note: urls have been modified from originals for clarity/security purposes

Here’s a demo of rto in action:

This version only supports listing and selecting URLs from a hand-curated list which lives directly in ~/.bash_risingteam (the single-file aspect of all this is very dear to my heart!). In a fuller-featured version, I’ve added the ability to add and edit URLs via CLI commands, without the overhead of opening the whole file, in about 20 extra lines. If you’re interested, email me and I’d be happy to share the full script.

You may be wondering, What is this "fzf" in the "bookmarks" function above? fzf is a wonderful little command line tool purpose-built for displaying lists of items, and letting you effortlessly search and select them. It’s a UI pattern that really feels like a superpower, and works well for a surprisingly large array of tasks — file-finding, git branch selection, interactive grep, and of course, now bookmark management (and many more that I haven’t even thought of yet). You can find install instructions on the fzf git repository page, but the easiest way to install is homebrew: brew install fzf

As for the meaning of some of the stranger parts of that fzf gobbledegook above, it’s actually pretty easy to break down — with a little help from man fzf:

fzf \    # invoke fzf
-0 \     # exit immediately if there's no match for query string
-1 \     # select immediately if there's only one match for query string
-e \     # enable 'exact-match', where quoted terms are matched exactly
--reverse \    # closest match at top of list instead of bottom (preference)
--no-info \    # hide superfluous info about list such as count (preference)
--height="50%" \    # make the fzf render area 50% of the current pane height
--tiebreak="begin,length" \    # favor matches earlier in string as top criteria
--query="${1:-}" \             # if called as `rto str`, use "str" as initial query
| awk -F' ' '{print $NF}' \    # after selection, pare down to last field (the URL)
| xargs -n 1 open              # using xargs, pass URL to `open` (in browser)
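If you want to poke at the non-interactive tail of that pipeline, the awk stage can be exercised on its own. A tiny sketch (the bookmark line here is just an illustration, simulating what fzf emits on selection):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate fzf's output: the bookmark line the user selected.
selected="devadmin   http://localhost:8080/admin/"

# awk splits on runs of whitespace by default; $NF is the last
# field, i.e. the URL (the name and its padding are discarded).
url="$(printf '%s\n' "$selected" | awk '{print $NF}')"
echo "$url"   # http://localhost:8080/admin/
```

This is also why padding the bookmark names with extra spaces for alignment is harmless — only the last field survives.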

Convenient Local Database Access

function rtdb() ( set -euxo pipefail
  psql -h localhost -p 5432 -U evan -d dev-database ${*:+--command} "$*"
)

rtdb is a simple wrapper for connecting to our PostgreSQL database. It encapsulates a single command, which might not seem very useful. But looks can be deceiving — in one stroke this accomplishes numerous things:

  • Convenience — connecting to the database now takes 5 keystrokes (including enter) instead of dozens. Yes, a command like this will quickly be ingrained in your shell history, but it’s just like caching — the times when it’s not there are the ones that matter most.

  • Record of arguments — some of the args provided here match their default values, but by spelling them all out explicitly I can eliminate all bad assumptions I might make down the line (and I’m very bad at making assumptions).

  • Management of arguments — my two most common use cases for this command are:

    • Running with no arguments to drop into an interactive shell → rtdb

    • Running with SQL arguments, like rtdb \\d user_account    ("\d" is Postgres "describe" parlance, and the "\" needs to be escaped here, hence "\\d")

    With the simple (though obtuse-looking) arg manipulation you see in the function above, these two use cases are exactly what gets optimized for. Anything outside of that realm? The underlying command is readily available to modify, thanks to the shell’s built-in command logging functionality via set -x — more on that below.

Here’s the end result:

INPUT >>> rtdb
+ psql -h localhost -p 5432 -U evan -d dev-database
psql (14.2)
Type "help" for help.


INPUT >>> rtdb \\d user_account
# double-backslash escape is necessary here, unless quoting
+ psql -h localhost -p 5432 -U evan -d dev-database --command '\d user_account'
                                        Table "public.user_account"
    Column    |           Type           | Collation | Nullable |
 id           | integer                  |           | not null |
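The ${*:+--command} "$*" trick deserves a closer look: ${*:+word} expands to word only when at least one argument was passed, and "$*" joins all arguments into a single string — so --command and the SQL travel together only when there’s SQL to run. A throwaway function (my own illustration, not part of the toolbox) makes the expansion visible:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print each argument this function receives in brackets, one per
# line, using the same expansion pattern as rtdb.
function show_args() {
  printf '[%s]\n' ${*:+--command} "$*"
}

show_args '\d' user_account
# [--command]
# [\d user_account]
```

Note that with zero arguments, ${*:+--command} vanishes entirely and "$*" collapses to a single empty string.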

You may be wondering what this appalling new style of writing function signatures is all about: function rtdb() ( set -euxo pipefail. Fear not, there is a reason behind each of these inscrutable runes, described in glorious detail here. The most important players are the e and x: the former causes the whole function to exit if any command within it results in error (as you’d expect a function would in any reasonable language — alas, not bash); the latter prints out each command to console as it’s executed, which gives priceless visibility-by-default into the commands you’re running with each alias (in point of fact, this ability is the main reason I prefer functions over standard shell aliases).

As suggested in the linked article above, putting set -euxo pipefail (or some subset depending on context) at the top of your bash scripts is a good idea generally. However, that won’t work in this context, where we’re actually source-ing the script, loading it into our shell’s current environment. This is effectively the same as running set -e directly in a shell — the next command that fails will exit the whole shell! Failing fast is often a good thing, but not when you want your shell to stick around so you can observe the aftermath.
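You can see the hazard safely by confining it to a throwaway subshell (a sketch of my own, purely for demonstration):

```shell
#!/usr/bin/env bash

# Run `set -e` plus a failing command inside a parenthesized
# subshell — the subshell dies at `false`, exactly as a sourced
# file with top-level `set -e` would kill your interactive shell.
(
  set -e
  false                  # first failure...
  echo "never printed"   # ...and nothing after it runs
)
echo "parent shell survived (subshell exit status: $?)"
# parent shell survived (subshell exit status: 1)
```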

So instead, an extra set of parentheses gives us a convenient mechanism for changing this behavior within our functions exclusively, leaving the shell environment around them unaffected. The syntax function name() ( … ) in bash causes the function to spawn a "sub-shell", which has its own environment completely isolated from your main shell. Within this environment, set -euxo pipefail does what we want and doesn’t make the front fall off. The result:

INPUT >>> rtdb
+ psql -h localhost -p 5432 -U evan -d dev-database

It gives me a sense of calm to see the fully-constructed commands spelled out right after invoking the short alias version. It’s a constant reassurance in an otherwise chaotic, complex mental environment — Everything is working as intended, and all this nuance will be easy to reference if you need it, just a few finger-twerks away.

Worth noting that sometimes set -euxo pipefail doesn’t make sense, for instance when source-ing other files with the function, which will cause a large amount of output to be printed. This is why I left it off of the srt function above. Right tool → right job and all that.
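The subshell body isolates more than set flags — cd and variable assignments inside a paren-bodied function evaporate when it returns, which is part of why these toolbox functions can move around freely. A quick sketch (the names here are mine, purely illustrative):

```shell
#!/usr/bin/env bash

place="desk"

# Paren body → runs in a subshell: set flags, cd, and variable
# changes all stay contained inside the function.
function wander() ( set -euo pipefail
  cd /tmp
  place="workshop"
  echo "inside: $place"   # inside: workshop
)

wander
echo "outside: $place"    # outside: desk — the assignment didn't leak
```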

Company-specific $PATH additions

# make `rt` command universally available:
export PATH="$PATH:$HOME/risingteam/scripts"

Rising Team has a toolbox of their own, in the form of an rt script which lives at the root of the main repository. This adds that command to my $PATH, so calling rt from anywhere can provide access to the Rising Team library of commands.
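To see the mechanics in isolation: dropping any executable into a directory listed in $PATH makes it callable from anywhere. Here’s a self-contained sketch using a temp directory and a stand-in rt script (the real one lives in the Rising Team repo):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for ~/risingteam/scripts: a temp dir with a tiny `rt`.
tooldir="$(mktemp -d)"
printf '#!/usr/bin/env bash\necho "rt called with: $*"\n' > "$tooldir/rt"
chmod +x "$tooldir/rt"

# The same one-liner as in the toolbox file, pointed at our dir.
export PATH="$PATH:$tooldir"

cd /            # current directory no longer matters...
rt server dev   # ...the shell finds `rt` via $PATH
```

Because the directory is appended (rather than prepended), anything already on $PATH with the same name would still win — worth remembering if a command ever seems to be the "wrong" one.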

Development Server Management + Persistence

function rts() ( set -euxo pipefail
  local backend="cd ~/risingteam; pipenv run ./scripts/rt server dev"
  local frontend="cd ~/risingteam/frontend; npm start"
  tmex risingteam --reattach -p "${backend}" "${frontend}"
)

Here’s a demo of rts in action:

rts is a little dev-server wrangler, which every single project I work on seems to end up calling for. It starts two servers — backend and frontend — in side-by-side tmux panes. The 2-in-1 aspect isn’t even the main draw here — once this command is run, the server session can be easily closed and the servers will keep running in the background. Combining the two commands into one view just reduces the number of things to think about, things that can go wrong out of sight. Running rts ("rising team server") again at any later point will re-attach to the same session, without restarting anything.

This all works with brevity thanks to a small tool I created a few years back called tmex (npm install -g tmex). All tmex does is transform a series of commands (frontend and backend server commands, in this case) into a much more convoluted tmux incantation, and execute it — the same effect could be accomplished easily through tmux directly (or if that’s not your cup of tea, GNU Screen). However, as the number of commands grows the tmux syntax quickly becomes unwieldy; with tmex (a minimalist but fully-fledged layout manager in and of itself) arranging an entire dashboard of commands becomes trivial.

function rtk() ( set -euxo pipefail
  tmux kill-session -t risingteam
)

And here’s the darker side, rtk ("rising team kill"). This command cleanly shuts down the tmux session started by the one above, so that a subsequent rts may start fresh.

Automate Everything

function rtreset() ( set -euxo pipefail
  rtk &>/dev/null && true
  cd "$HOME/risingteam"
  ./scripts/rt data reset || ./scripts/rt data init
  echo "Enter a password for local risingteam admin user:"
  ./scripts/rt manage rt_createsuperuser --email
  for name in "session_admin" "demo_mode"; do
    rtdb "INSERT INTO waffle_flag (name, superusers, staff, authenticated, testing, rollout, note, languages, created, modified) VALUES ('${name}', TRUE, FALSE, FALSE, FALSE, FALSE, '', '', NOW(), NOW())"
  done
  sleep 10 && open "http://local.rtkit.test:3006/auth/login?next=/get-started/sub/monthly/1" &
  rts
)

For the grand finale, a reset-the-world command. I find that the ability to obliterate my local dev database from orbit and set up a brand-spanking-new one tends to obliterate stress as well — no more worrying about clawing my way out of a badly-applied migration, only to find out some errant app code was up to no good in the meantime, etc. etc. The usual problem is that this is far from a one-button process, so I’ll frequently build a command which automates as much as possible. The above:

  • kills any servers I have running via rtk:

rtk &>/dev/null && true

  • ensures we’re in the proper directory:

cd "$HOME/risingteam"

  • database go 💥, and a fresh one is created:

./scripts/rt data reset || ./scripts/rt data init

  • tells me to enter a password (in a moment):

echo "Enter a password for local risingteam admin user:"

  • runs a pre-existing script which creates a new admin user, accepting said password:

./scripts/rt manage rt_createsuperuser --email

  • inserts a couple of necessary admin feature flag records into the new database:

for name in "session_admin" "demo_mode"; do
  rtdb "INSERT INTO waffle_flag (name, superusers, staff, authenticated, testing, rollout, note, languages, created, modified) VALUES ('${name}', TRUE, FALSE, FALSE, FALSE, FALSE, '', '', NOW(), NOW())"
done
  • in the background (triggered by & at the end), waits 10 seconds and then opens a browser to the local signup page so I can finish setting up the user:

sleep 10 && open "http://local.rtkit.test:3006/auth/login?next=/get-started/sub/monthly/1" &

  • concurrently, starts all servers with rts so the above is possible:

rts


There you have it: the first few contraptions in my own handmade toolbox. I’d love to hear about yours — keep on building ⚒