One of my favorite parts of my job as a Developer Evangelist at Stream is building sample applications. It is an enthralling way to engage and interact with potential and existing customers, as well as show off the fun technology we use and build with every single day. The applications I build range from small code snippets outlining how to perform basic operations, such as marking an item as “read” in Stream, to large microservice based applications that generally require robust backend architectures, like Winds.
Last year, I created a post on how to build a RESTful API with Restify. Now that Express and Restify are nearly neck and neck in terms of requests per second, I thought it might be interesting to show you how I go about structuring my APIs with Express (just to toss in a little friendly competition / play devil-oper’s advocate 😉).
Structuring Your API
The way you choose to structure your API is one of the most important decisions you’ll make. It must be smart, flexible, and easy to use; if it’s not, other developers won’t understand what you’re building, nor will they be able to figure out how to build on top of it. Think before you build (I know, planning sucks, especially when you’re excited to get going, but it pays off).
│ ├── ...
│ ├── config
│ │ └── index.js
│ ├── controllers
│ │ ├── ...
│ ├── models
│ │ ├── ...
│ ├── package.json
│ ├── routes
│ │ ├── ...
│ ├── server.js
│ ├── utils
│ │ ├── ...
All source code is stored in /src. It compiles down from ES6+ to ES5 into the /dist directory for execution on the server. You’re probably asking yourself why you’d take the extra step of writing in something that’s just going to be compiled down. Good question! ES6+ standards provide some pretty killer additional functionality, such as arrow functions, block scoping with let and const, destructuring, rest/spread parameter handling, and more!
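For a quick taste, here’s what a few of those features look like in practice:

```javascript
// A few of the ES6+ features mentioned above
const users = [
  { name: 'Ada', admin: true },
  { name: 'Linus', admin: false },
];

// arrow function with destructuring in the parameter list
const names = users.map(({ name }) => name);

// rest/spread in action
const [first, ...others] = names;
const everyone = [...others, first];

console.log(names);    // → [ 'Ada', 'Linus' ]
console.log(first);    // → 'Ada'
console.log(everyone); // → [ 'Linus', 'Ada' ]
```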
Let’s have a look at the compilation that takes place in the build.sh file:
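Pieced together from the step-by-step breakdown that follows, the script contains:

```bash
#!/bin/bash

rm -rf dist && mkdir dist

npx babel src --out-dir dist --ignore node_modules

cp src/package.json dist

cd dist && yarn install --production --modules-folder node_modules
```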
That is ALL you need to be able to write in a super awesome language while having it still be supported in all the usual places! That said, the code above may look like gibberish, so let’s break it down 🤓:
- #!/bin/bash
  - Denotes that this is an executable bash file.
- rm -rf dist && mkdir dist
  - Removes the /dist directory if it exists (cleanup).
  - Creates a new /dist directory.
- npx babel src --out-dir dist --ignore node_modules
  - Compiles every file to ES5 and moves the files to the /dist directory, with the exception of node_modules (those are already compiled).
- cp src/package.json dist
  - By design, Babel only transpiles JavaScript files and doesn’t copy JSON over, so we need to copy package.json ourselves using the cp command.
- cd dist && yarn install --production --modules-folder node_modules
  - Moves into the /dist directory and installs the npm modules (production only) using yarn.
Running the build is as simple as running the following command from your terminal:
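Assuming the script is saved as build.sh in the project root:

```bash
./build.sh
```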
Note: You will need to ensure that the build.sh file is executable (chmod +x build.sh).
OR if you are like me and enjoy automating everything, you can create an npm script like so:
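For example (the script name build is an assumption):

```json
{
  "scripts": {
    "build": "./build.sh"
  }
}
```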
Which can be executed by running the following from your terminal:
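Assuming the npm script is named build:

```bash
npm run build
```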
The Main File
The following file, server.js, contains the most important logic and sits at the top level of our codebase. The beginning portion imports all of the necessary npm modules, followed by our config and logger utility.
Next, we take advantage of the Express use method to invoke several of our imported middleware libraries (cors, compression, and our body-parser). Please note that there are several other middleware libraries that we include for additional functionality (e.g. email, logging, jwt authentication, etc.). Last but not least, after a bit of initialization, we dynamically include all routes and pass the API context to the route for binding.
Note: The custom logger module can be used with most logging services (Papertrail, Loggly, etc.). For this demo, as well as other projects, I like to use Papertrail. You may need to adjust the settings and ENV variables if you use something other than Papertrail.
To keep things tidy and organized, all routing logic (e.g. GET /users) is kept in its own route file inside of a /routes directory.
As you can see, the contents of the route file above hold all references to the controllers for GET, POST, PUT, and DELETE operations. This works because we import and reference the User Controller, passing along the necessary parameters and/or data with every API call.
Controllers import the database model associated with the data they handle. They receive data from the routes, make an informed decision about how to process it, and communicate with the database through the models, ultimately returning a status code along with a payload.
If you’re a visual person, a production instance should look a little something like this:
And, the code for an example user controller would look like this:
Mongoose Models (MongoDB)
Mongoose is a wonderful ODM (Object Data Modeling) library for Node.js and MongoDB. If you’re familiar with ORM (Object-Relational Mapping) libraries for Node.js, such as Sequelize and Bookshelf, Mongoose will feel pretty straightforward. The massive benefit of Mongoose is how easy it is to structure MongoDB schemas; there’s no need to fuss around with custom validation and mapping logic.
What’s even more exciting are the many goodies, like middleware, plugins, object population, and schema validation, that are either baked in or just one yarn (I love yarn) or npm install away. It’s truly remarkable how popular the project has become among developers who use MongoDB.
When it comes to Mongoose models, I tend to keep things somewhat flat (nesting objects no more than three levels deep) to avoid confusion. Here’s an example of a user model pulled directly from a project currently under development here at Stream:
Note: When it comes to hosting and running MongoDB, I like to use MongoDB Atlas. It’s a database as a service provided by the makers of MongoDB themselves. If you don’t want to use a free MongoDB Atlas instance, you’re welcome to use a local version. Additionally, if you want to monitor your data, MongoDB Compass is an excellent choice!
Utilities
Custom utilities can be used for just about anything you want. I generally reserve them for separating concerns and keeping my code clean. Some examples include establishing database connections, sending emails, logging to an external service, and even communicating with HTTP-based services here at Stream.
I’m often asked when something should become a utility, and my answer is always the same: when you find yourself 1) reusing code, or 2) jamming third-party services into code where they just don’t feel right.
Here’s an example of a utility I wrote to help call the Stream Personalization REST API. The integration was completed in about a dozen lines of code:
The code above can now be called from any file like so:
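For example (the utility is stubbed inline here, hypothetically, so the snippet runs on its own):

```javascript
// In the real project this would be:
//   const personalization = require('../utils/personalization');
// Stubbed inline here so the snippet is self-contained.
const personalization = async (resource, params) => ({ resource, params });

// e.g. inside a controller method:
const recommendations = async (req, res) => {
  const data = await personalization('engagement', { user_id: req.params.id });
  return res.status(200).json(data);
};
```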
APIs are the building blocks of modern applications, and Stream is your feed API. They govern how one application talks to another, as well as to the database. While there are other flavors of API structure (GraphQL, etc.), RESTful APIs continue to pull their own weight and aren’t going anywhere anytime soon.
If you’re interested in seeing a fully built out skeleton for a REST API built with Node.js, Express, Mongoose, and MongoDB, head over to this GitHub repo.
Looking for a good read on MongoDB vs. Stream? Check out this breakdown: https://getstream.io/activity-feeds/mongodb/
As always, if you have any questions, please don’t hesitate to reach out to me on Twitter or below in the comments. Thank you!