If you’re setting out to build a highly usable developer tool, a proper CLI to interface with your API is paramount. Zeit and Heroku have set the tone for these types of developer tools by doing extensive research into best practices for the command line “experience”, so we started our quest by digging into their findings.
The Stream CLI is currently in public beta, so the methods and philosophies we found in our research, as well as those we unearthed ourselves, are still fresh in our minds. We wanted to take a few minutes to outline the best practices we observed among other CLI tools and what developers actually need from a proper CLI.
Below is a step-by-step walkthrough of how we would go about building another CLI, along with the reasoning behind the choices we made.
Options
A good number of open-source projects have arisen to help facilitate the scaffolding and overall development of a CLI.
Aside from our backend infrastructure here at Stream, which is written primarily in Go, we use JavaScript for many of our tools — its flexibility between frontend and backend projects, the large number of open-source contributions to it, its global presence, and its overall ease of use all make it an obvious choice for creating a powerful tool with a low barrier to entry.
Likewise, if you’re setting out on an adventure to build a CLI, there are dozens of open-source JavaScript projects available to help you get started. To be fair, when we started looking into building a CLI, Commander and Vorpal were hitting the top of Google and npm on nearly every search, but we wanted something more robust — a battle-tested project that provided everything we needed in one go, rather than a package that simply parsed arguments and passed them along to a command.
That’s when we found Oclif.
Oclif
Oclif is a JavaScript-based CLI framework that was open-sourced by the team behind Heroku. It comes packed with pre-built functionality and even offers extendability through the use of plugins.
At a glance, there were a few major features that stuck out when we were looking into Oclif:
- Multi-command support
- Auto-parsing of command arguments and/or flags
- Configuration support
- Auto documenting codebase
Ultimately, the availability of these features was also the primary reason why we chose to move forward with using Oclif as the base for our CLI tool here at Stream.
Remember, these are just some of the built-in features that Oclif ships with out of the box. For a comprehensive list of options, we recommend taking a look at the official Oclif docs here.
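If you want to try Oclif before committing to it, the project also ships a generator that scaffolds a working CLI for you. At the time of writing, something along these lines bootstraps a multi-command project (the exact generator invocation may differ between Oclif releases, so treat this as a sketch rather than gospel):

$ npx oclif multi my-cli
$ cd my-cli
$ ./bin/run --help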
Multi-Command Support vs. Single-Command Support
It’s important to note that, if you have a single endpoint or method you’re calling, single-command support (e.g. grep) is all that you’ll need. If you’re developing a larger CLI tool, such as the one we created for Stream, you’ll likely want to opt for multi-command support (e.g. npm or git). Here’s a quick breakdown of the difference:
Single:
$ stream --api_key=foo --api_secret=bar --name=baz --email=qux
Multi:
$ stream config:set --api_key=foo --api_secret=bar --name=baz --email=qux
While they may look similar, there is one key difference between the two options: single-command mode does not allow for subcommands, or “scoping” as we like to call it, which means that complicated or nested commands simply aren’t possible with single-command support.
Both types of commands take arguments, regardless of the configuration. Without arguments, it wouldn’t be a CLI. One advantage to multi-command support is that it delimits subcommands with a colon (“:”), allowing you to keep things organized. Better yet, you can organize your directory structure using nested directories as shown in the src code on GitHub.
Sometimes it’s a bit difficult to conceptualize in the beginning; however, once you get your hands dirty creating a CLI for the first time, it’ll all come together and make sense.
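To make the “scoping” idea concrete, here’s a minimal sketch of what a command like stream config:set could look like in a multi-command Oclif project. The file lives at src/commands/config/set.js, and the nested path is what produces the config:set command name (this is illustrative, not the actual Stream CLI source):

// src/commands/config/set.js — the nested path yields the `config:set` command
const { Command, flags } = require('@oclif/command');

class ConfigSet extends Command {
	async run() {
		// Oclif parses argv into structured flags for us
		const { flags: parsed } = this.parse(ConfigSet);
		this.log(`Saving credentials for ${parsed.name} (${parsed.email})`);
	}
}

ConfigSet.description = 'Set your Stream API credentials';

ConfigSet.flags = {
	api_key: flags.string({ description: 'your Stream API key' }),
	api_secret: flags.string({ description: 'your Stream API secret' }),
	name: flags.string({ description: 'your full name' }),
	email: flags.string({ description: 'your email address' }),
};

module.exports = ConfigSet;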
Auto Parsing
Under the hood, Oclif handles parsing command line arguments that are passed in. Generally, with Node.js, you’d have to pull arguments out of the array provided by process.argv. Although this isn’t particularly difficult, it’s definitely error-prone… especially when you toss in requirements for validations or casting to strings/booleans.
If you’re not planning on using Oclif to handle the parsing for you and just need to move forward with a simple setup, we would recommend minimist, a package dedicated to argument parsing in the command line.
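If you do go the minimist route, the gist is that it turns process.argv into a plain object with casting and defaults handled for you. Here’s a minimal sketch (the flag names are ours, purely for illustration):

// parse.js — run with: node parse.js --api_key=foo --verbose
const minimist = require('minimist');

const argv = minimist(process.argv.slice(2), {
	string: ['api_key'],     // always treat api_key as a string
	boolean: ['verbose'],    // cast --verbose to true/false
	default: { verbose: false },
});

console.log(argv.api_key); // 'foo'
console.log(argv.verbose); // true
console.log(argv._);       // any positional arguments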
Configuration Support
With any server-side integration (whether it’s an API or an SDK), you’ll likely have to provide a token of some sort for security and identity reasons.
For our integration, we needed to persist the configuration credentials for a user (e.g. API key & secret, name, and email) in a secure location on the user’s computer. Without persisting this type of data, we would have to make sure that every API call to Stream included the proper credentials and, let’s face it, nobody wants to pass arguments with every command.
To get around this issue, we leverage Oclif’s built-in support for managing configuration files by storing user credentials in a config.js file within the config directory on the user’s machine. Typically, the config directory resides in ~/.config/stream-cli on Unix machines or %LOCALAPPDATA%\stream-cli on Windows machines. With the help of Oclif, we don’t have to worry about detecting the user’s operating system — Oclif takes care of this distinction under the hood and exposes the directory within your command class as this.config.configDir.
Knowing this, we were able to create a small utility to collect and store the necessary credentials using the fs-extra package. Have a look at the code here.
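If you’re building something similar, the utility boils down to reading and writing a small JSON file inside this.config.configDir. Below is a hedged sketch of what such a helper could look like with fs-extra — the function names are ours for illustration, not the exact Stream CLI implementation:

// utils/config.js — persist credentials in the Oclif config directory
const fs = require('fs-extra');
const path = require('path');

// `config` is the Oclif config object available as `this.config` inside a command
async function credentials(config) {
	const file = path.join(config.configDir, 'config.js');

	// make sure the config directory exists before we try to read from it
	await fs.ensureDir(config.configDir);

	if (!(await fs.pathExists(file))) {
		return {};
	}

	return fs.readJson(file);
}

async function saveCredentials(config, data) {
	const file = path.join(config.configDir, 'config.js');
	await fs.outputJson(file, data); // creates parent directories as needed
}

module.exports = { credentials, saveCredentials };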
Docs for configuration options within Oclif can be found here.
Auto Documenting Codebase
We were very happy (and surprised) to find that Oclif supports auto-documenting commands. Without this sort of functionality, we would have to manually change our README and underlying docs every time we made a change such as adding/removing a command argument, changing a command name or modifying the directory structure within our commands subdirectory. You can probably imagine how difficult this would be to maintain within a large CLI project like the Stream CLI.
With the help of the @oclif/dev-cli package, we were able to add a single script to our package.json file that runs during the build process. The command scans the directory structure and magically generates docs, as shown here.
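In practice, this boils down to one entry in package.json. The script name and where you hook it into your build are up to you; something like the following (illustrative, not our exact setup) regenerates the README on demand:

{
	"scripts": {
		"docs": "oclif-dev readme --multi"
	}
}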
Interactive & Raw Argument Support
Sometimes, when calling a command via a CLI tool, the last thing you want taking up space in your brain is every required argument for that command, especially if there are a lot of them. While you can always use the --help flag to print out required arguments, sometimes it’s best to provide an interactive prompt that asks the user for any information that’s missing from the provided flags.
For example, rather than calling:
$ stream config:set --api_key=foo --api_secret=bar --name=baz --email=qux
The user can call (with zero arguments passed):
$ stream config:set
And they will be prompted interactively for each missing value.
There are several options for prompting users and we’ve found Enquirer to be the easiest package to work with. Although this package is similar in functionality to Inquirer, the Enquirer API tends to be a bit more forgiving and easier to work with.
It’s important to apply this prompt-style functionality to all of your multi-argument commands, if possible. However, make sure to check the flags so that you’re not prompting the user for information they’ve already passed. For example:
const { prompt } = require('enquirer');

if (!flags.name || !flags.email || !flags.key || !flags.secret) {
	const res = await prompt([
		{
			type: 'input',
			name: 'name',
			message: 'What is your full name?',
			required: true,
		},
		{
			type: 'input',
			name: 'email',
			message: 'What is your email address associated with Stream?',
			required: true,
		},
		{
			type: 'input',
			name: 'key',
			message: 'What is your Stream API key?',
			required: true,
		},
		{
			type: 'password',
			name: 'secret',
			message: 'What is your Stream API secret?',
			required: true,
		},
	]);

	// copy any prompted values onto the parsed flags
	for (const key in res) {
		if (res.hasOwnProperty(key)) {
			flags[key] = res[key];
		}
	}
}
Note how we check the flags and display the prompt ONLY if the flags do not exist.
Make it Pretty
Command lines are generally thought of as bland green and white text on a black background. News flash: there’s not actually anything stopping you from making your CLI stand out. In fact, developers love when colors are introduced to the command line — colors help differentiate errors vs. success messages, events/timestamps and more.
If you want to make things pretty, Chalk is a great (if not the best) package to use. It provides an extensive API for adding colors to your CLI with little to no overhead.
To integrate Chalk into your CLI:
import chalk from 'chalk';
Then, wrap your string with the chalk method, color, and optional styling (bold, italics, etc.) to add some flair to your output:
this.log(`This is a response and it's ${chalk.blue.bold.italic('bold, blue, and italicized')}`);
Use Tables for Large Responses
Let’s face it, no developer wants to comb through a large response returned by your API. With that being the case, it’s important to always return something meaningful and easy to read. One of our favorite ways to give the user an easily digestible output is to render it in a table.
For the Stream CLI, we chose the cli-table package to display data in tables, as it provides an easy-to-use and flexible API that supports the following (a short sketch follows the list):
- Vertical and horizontal displays
- Text/background color support
- Text alignment (left, center, right) with padding
- Custom column width support
- Auto truncation based on predefined width
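As a rough sketch of the approach, here’s how cli-table renders a header row and a couple of data rows (the data below is made up, not actual Stream CLI output):

const Table = require('cli-table');

// define the header row and column widths up front
const table = new Table({
	head: ['Name', 'Email', 'Role'],
	colWidths: [20, 32, 12],
});

// each push adds a row to the table
table.push(
	['Nick P.', 'nick@example.com', 'admin'],
	['Jane Doe', 'jane@example.com', 'user'],
);

// toString() renders the ASCII table for the terminal
console.log(table.toString());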
Printing JSON for Parsing with Bash & JQ
The beauty of providing a CLI is that it can be called either by a user or by a script. Part of creating a highly approachable and usable tool is defaulting to communication that immediately makes sense to the user. With that said, scripting allows for a hands-off approach, which is especially helpful when the user would like to run a set of commands rather than firing off one-off commands.
While the Stream CLI defaults to returning user-friendly (and human-readable) output (see Make It Pretty and Use Tables for Large Responses), we understand that, when running a script, you will likely want the raw, verbose response instead of a human-readable message. To access the raw response data, we added a --json flag that tells the CLI to output the raw payload as JSON.
Below is a quick example showing how to fetch credentials for a user from the Stream CLI, piping the output directly to JQ, a lightweight and flexible command-line JSON processor:
#!/bin/bash

run=$(stream config:get --json)

name=$(jq --raw-output '.name' <<< "${run}")
email=$(jq --raw-output '.email' <<< "${run}")
apiKey=$(jq --raw-output '.apiKey' <<< "${run}")
apiSecret=$(jq --raw-output '.apiSecret' <<< "${run}")

echo $name
echo $email
echo $apiKey
echo $apiSecret
We found that providing this functionality is especially useful for Stream Chat, should the user want to set up their chat infrastructure, provision users and permissions, etc. in one go without using the underlying REST API.
Publishing
Publishing a CLI may seem daunting; however, it’s no different from publishing any other package on npm. The basic steps are as follows:
- Update the oclif.manifest.json file using the tooling provided by the @oclif/dev-cli package. The tool scans the directory and updates the manifest with the new version of the CLI, along with all of the commands available to the user. The manifest can be regenerated by calling rm -f oclif.manifest.json && oclif-dev manifest from your command line.
- Update the docs to reflect any changes made to the commands. This tooling is also provided by the @oclif/dev-cli package and can be run using oclif-dev readme --multi (or --single if you’re running a single-command CLI).
- Bump the npm version using the version command (e.g. npm version prerelease). The full docs on the npm version command can be found here.
- Publish the release to npm with the npm publish command.
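Put together, a release from your terminal looks roughly like this (swap prerelease for whatever version bump your release calls for):

$ rm -f oclif.manifest.json && oclif-dev manifest
$ oclif-dev readme --multi
$ npm version prerelease
$ npm publish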
A user can then install the CLI globally with npm or yarn:
npm -g install <YOUR_CLI_PACKAGE>
OR
yarn global add <YOUR_CLI_PACKAGE>
If you need to distribute your CLI as a tarball, we recommend looking at the oclif-dev pack command provided by the @oclif/dev-cli package — this command will allow you to deploy packages to Homebrew and other OS-specific package managers, or simply run them independently on the system.
Key Takeaways
If you’d like to dig into the full source code behind the Stream CLI, you can find the open-source GitHub repo here. While the key takeaways in this post are not an exhaustive list of our suggestions for best practices, we do hope that you walk away from this post with some additional knowledge to apply to your CLI. To summarize our main takeaways from this endeavor:
- For inspiration, look at the functionality that Zeit and Heroku provide within their CLIs to create an awesome developer command line “experience”.
- If your API/CLI requires data persistence, store that data in a cache directory that is specific to your CLI. Load this using a util file as we do at Stream. Also, note that the fs-extra package will come in handy for this type of thing (even though support is built into Oclif).
- Oclif is the way to go, especially if you’re building a large CLI, as opposed to a single-command CLI. If you’re building a single-command CLI you can still use Oclif — just make sure to specify that it’s a single-command CLI when you’re scaffolding your project.
- Don’t want to use a framework? That’s okay! The minimist package provides argument parsing on the command line and can easily be used within your project.
- Use prompts, when you can, with Enquirer or another package of your choosing. Users should be walked through the requirements of the command and asked for the data the command needs in order to execute properly. Note that this should never be required (e.g. let the user bypass the prompt if they pass the correct arguments).
- Use colors when possible to make your CLI a little easier on the eye. Chalk serves as a great tool for this.
- If you have response data that is structured well enough, don’t just dump it on the user (unless that’s what they ask for). Instead, drop it in a table using cli-table.
- Always allow the user to specify the output type (e.g. JSON), but default to a message that is human-readable.
- Keep it fast! For time-consuming tasks such as file uploads or commands that require multiple API calls, we recommend showing a loading indicator to let the user know that work is being done in the background. If you’re looking for a package on npm, we recommend checking out ora.
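For the loading indicator mentioned in that last point, ora gets you there in a few lines. Here’s a minimal sketch — uploadAsset is a hypothetical stand-in for whatever slow call your command makes:

const ora = require('ora');

async function upload(file) {
	// start the spinner before kicking off the slow work
	const spinner = ora(`Uploading ${file}...`).start();

	try {
		await uploadAsset(file); // hypothetical long-running API call
		spinner.succeed(`Uploaded ${file}`);
	} catch (err) {
		spinner.fail(`Upload failed: ${err.message}`);
	}
}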
As always, we’d love to hear your thoughts and opinions, as well, so please feel free to drop them in the comments below!
If you’re interested in building a chat product on top of the Stream platform, we recommend running through our interactive tutorial. For the full docs on the Stream Chat API, you can view them here.