Today, APIs are a must for most apps running on devices that collect or fetch data of any kind, and for websites, whether they run as single page applications or simply read and write data from an internal or external system.
Often, the API is even more important than the application itself, so it makes sense to design and implement the API first.
In this article I am going to describe a simple Node.JS based RESTful API I created for my app “Puckify” to collect some statistical data and the in-app purchases users make. Setting up a simple API is very easy today, so I am going to show how to do that in 30 minutes.
This API, once finished, could be managed by systemd on a Linux based OS to make it start on boot and even restart after crashes.
I will follow a simple MC (Model-Controller) architecture and set up a version based API with some GET and POST functions, connecting to MongoDB Atlas in the cloud (you can also connect to your local MongoDB).
The following figure shows the folder structure of the API we are going to create. At the end of this article I will offer a ZIP file containing the complete API, installable on any computer with Node.JS installed.
As you can see we have two subfolders: “api”, which holds the main logic of our API, and “config”, where we will put our DB connection settings.
server.js is the main script file we will start to run the API; main.js contains the bootstrapping code that sets up everything from the api folder.
This API will be able to support multiple versions. Keep in mind that hardly any API running today serves only a single version. Once you update your running API, clients on iOS or Android devices from the App Stores might update the app weeks or months after your new API version has been published. That means users will still access your API the way the old client on their smartphone expects. So we need to support multiple versions in our API so users don’t get into trouble once you update it.
I basically follow the version string format [MAJOR].[MINOR].[PATCH], like 1.0.0. If you fix a bug or make some simple text changes, you should increase the PATCH number, e.g. from 1.0.0 to 1.0.1. MINOR stands for minor changes in the application which in some cases can affect clients connecting to your API. So when changing your API from 1.0.0 to 1.1.0, version 1.1 is added to your currently running versions, leaving both 1.0 and 1.1 running on your API. MAJOR stands for big changes which also affect your clients, and a new MAJOR version should likewise be added to the currently running versions.
Patching your API from 1.2.3 to 1.2.4 should NOT affect clients connecting to the 1.2 version. This is why for my controller and model files I only keep the MAJOR and MINOR parts of the version, as PATCH is not relevant for the API routes.
It is up to you how you define the version format. You could use a simple revision number like a single integer, or a longer one like 1.2.10.344.23. But make sure to reflect it properly in your API (you might have a website running along with the API using the same version format).
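As a small sketch of this convention, a hypothetical helper could strip the PATCH part so that routes and controller files only ever carry MAJOR.MINOR:

```javascript
// Hypothetical helper (not part of the API below): reduce a full
// MAJOR.MINOR.PATCH version string to the MAJOR.MINOR prefix used
// for routes and controller/model file names.
function apiVersion(fullVersion) {
  var parts = fullVersion.split(".");
  // Keep only MAJOR and MINOR; a PATCH bump must not change the public API.
  return parts.slice(0, 2).join(".");
}

console.log(apiVersion("1.2.3")); // "1.2"
console.log(apiVersion("2.0.7")); // "2.0"
```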
This API will also support running in multiple environments. For now we will define a TEST and a PROD mode. The difference between the two will be the virtual host it listens on: in TEST mode it will listen on api-test.eser-esen.de, while in PROD it will listen on api.eser-esen.de.
We will also secure this API with a simple HMAC-SHA256 scheme. Clients accessing this API will need to send an Authorization header with the request containing a plaintext value (let’s call it username) and a hash generated from that username. The API will generate a hash from the username using its stored secret key and compare the hashes. If they match it will grant access, otherwise it will block the request with a 401 status code.
Our application is Node.JS based, so we will code in JavaScript. The modules we are going to use for our API will be:
- mongodb (MongoDB driver)
- https (the tool for the network part to handle HTTPS)
- express (the web framework handling the requests & responses)
- vhost (to define virtual host like api.domain.com, as we do on usual web servers)
- fs (to handle the certificates as we will run with HTTPS)
- crypto (to handle the authorization part as we will allow access with simple HMAC-SHA256 tokens only)
Additional modules we will use:
- moment (as one of our functions will simply return the current timestamp using momentJS)
- body-parser (to ease parsing request bodies)
server.js
First, we will run in strict mode and load all modules we need for server.js. Using “use strict”; simply makes YOUR code cleaner, for example by not allowing you to use undeclared variables.
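As a quick illustration of what strict mode catches (strictDemo is a hypothetical helper just for this demonstration):

```javascript
// Sketch: in strict mode, assigning to an undeclared variable throws
// a ReferenceError instead of silently creating a global.
function strictDemo() {
  "use strict";
  try {
    undeclaredVariable = 42; // no var/let/const declaration
    return "assigned";
  } catch (e) {
    return e.name;
  }
}

console.log(strictDemo()); // "ReferenceError"
```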
"use strict";

var express = require("express");
var vhost = require("vhost");
var crypto = require("crypto");
var fs = require("fs");
var https = require("https");

var port = 9999;
var params = {};
var current_key = "";
var hash_key = "MySecretPassphrase";
Our port will be 9999, but you can define any port you want. Keep in mind: if you run this API on a server which already has a web application on port 80 or 443, the API will not start if you try to use the same port. Later I will explain how to make it reachable on port 80 or 443 in parallel with your other web applications on the same server, using a web server like Nginx.
params will hold all parameters sent along from the command line when the application is started.
current_key is used while parsing the parameters you pass to this application on startup. As we will support multiple environments, we will check for the “env” parameter, expecting either empty (defaulting to TEST), TEST, or PROD.
hash_key is used for our simple token based authorization logic. We will use the HMAC-SHA256 method, which takes the plain-text username the client sends, hashes it using hash_key, and compares the result with the hash the client sent along with the request, to allow or reject the request. This is a very simple authorization method but still good enough to secure your API. Later I will explain what other steps you can take to make your API even more secure.
Now let’s add the code to parse the arguments we get from the command line and store them:
// iterate through parameters provided to this server and store them
process.argv.forEach(function (val, index) {
    if( index >= 2) {
        if( index % 2 == 0 && val.startsWith("-")) {
            current_key = val.substr(1).toLowerCase();
            params[current_key] = "";
        } else {
            params[current_key] = val;
        }
    }
});
As the first two elements of process.argv contain the path to the node binary and to your server.js file, we need to start parsing from index 2. Calling our API with a parameter will look like this:
node server.js -env PROD
This is why the first IF block checks for the dash: the argument (without the dash) is stored in current_key, and the next argv element is expected to be the parameter value stored under that key in params.
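As a stand-alone sketch, here is the same loop applied to a hand-crafted argv array instead of the real process.argv (the paths are made up for illustration):

```javascript
// Stand-alone sketch of the argument parsing above, using a sample
// argv array: [node binary, script path, -key, value, ...].
var sampleArgv = ["/usr/bin/node", "/srv/api/server.js", "-env", "PROD"];
var params = {};
var current_key = "";

sampleArgv.forEach(function (val, index) {
  if (index >= 2) {
    if (index % 2 === 0 && val.startsWith("-")) {
      current_key = val.substr(1).toLowerCase(); // "-env" -> "env"
      params[current_key] = "";
    } else {
      params[current_key] = val; // "PROD"
    }
  }
});

console.log(params); // { env: 'PROD' }
```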
In the following code we define our hashing function, which generates the hash using HMAC-SHA256 and our hash_key:
// main encrypt function using HMAC-SHA256
function encrypt( text) {
    var crypted = crypto.createHmac("sha256", hash_key).update(text).digest("hex");
    return crypted;
}
Now we check our params object for the env parameter, default to TEST if it is empty, or throw an error if the caller provided neither TEST nor PROD.
if( !params["env"]) {
    params["env"] = "TEST";
}

// env param is mandatory
if( params.env != "TEST" && params.env != "PROD") {
    console.log("Error: env parameter is mandatory. Allowed values are TEST and PROD.");
    process.exit(1);
}
As we will support HTTPS, we need a certificate (and its private key) to secure our connections. You can either use a self-signed certificate (not recommended), buy a certificate for your production environment, or use Let’s Encrypt (recommended): free and very easy to set up using the certbot-auto script.
Check this link to see how you can set up your certificate using certbot-auto for free.
The steps to create your certificate with certbot-auto are easy as described as follows:
- Run certbot-auto certonly
- Choose Nginx if you have a running Nginx server, or webroot if you prepared a web folder where certbot can place its verification files, as it will try to verify the domain you are setting up the certificate for
- Type in the domain name you want to create the certificate for.
- Once finished, you will find your certificate files (privkey.pem and fullchain.pem) at
/etc/letsencrypt/live/xyz.yourdomain.com/
In my case I set up a certificate for api-test.eser-esen.de. So add the following code to server.js (using my paths in this case):
var options = {
    key: fs.readFileSync("/etc/letsencrypt/live/api"+(params["env"] == "TEST" ? "-test" : "")+".eser-esen.de/privkey.pem"),
    cert: fs.readFileSync("/etc/letsencrypt/live/api"+(params["env"] == "TEST" ? "-test" : "")+".eser-esen.de/fullchain.pem")
};
As you can see, I check the “env” parameter to switch between the certificate files; I created one for TEST and one for PROD. This is required as my app “Puckify” runs on iOS, and iOS does NOT allow connecting to unsecured or self-signed URIs. Actually, there is a way to make iOS devices accept self-signed certificates, but I found it way too hard to get working, so instead spend 3 minutes using certbot-auto to create a valid certificate.
Now the main part of server.js comes into play. Using everything from above we now create our express instance which allows us to define rules to handle incoming requests in a sequential order.
First we check for authorization, then we check the domain the client is accessing this application through, and finally we block everything else if none of the above rules match.
var app = express()
    .use(function(req, res, next) {
        // handle authorization first
        var path = req.path && req.path.length > 0 && req.path.startsWith("/") ? req.path.substr(1) : "";
        var tokens = path.split("/");
        var auth = req.get("Authorization");
        if( !auth || !auth.match(/:/)) {
            res.status(401).send({error: "001: Unauthorized"});
        } else {
            tokens = auth.split(":");
            var encrypted = encrypt(tokens[0]);
            if( encrypted != tokens[1]) {
                res.status(401).send({error: "002: Unauthorized"});
            } else {
                // OK
                next();
            }
        }
    })
    .use(vhost("api"+(params.env == "TEST" ? "-test" : "")+".eser-esen.de", require("./main.js").app))
    .use(function(req, res) {
        // if none of the rules above matches, block with 403
        res.status(403).end();
    });
In the first “use” function call we fetch the “Authorization” header which has to look as follows:
Authorization: itsme:3b4852a227a9ffb99cb0e5480ff12cff6de88fd3410859f5c36499af839b4d58
We take the plain text part, which in this case is “itsme”, use the encrypt function to generate the hash and then compare it to the one from the Authorization header. If it matches, we continue; if not, we block the request with code 401 (Unauthorized), returning an error message in the body (just for info). I also added two kinds of error messages: the first (001) comes into effect if something is wrong with the header itself, the second (002) if the hash does not match (wrong password used?).
In the second “use” call we check the virtual host the API is actually being called on; if the defined host matches, we hand the request over to our main.js.
If neither of the first two “use” calls matches, we end up with 403 (Forbidden). This can happen if the authorization is correct but the client somehow connected to this API through a host other than the one defined in server.js. So the last “use” call should never trigger if you set up everything correctly.
The main code for server.js is done; we finally start the server, printing a line to the command line telling us that the API is now running.
https.createServer( options, app).listen(port); // start listening
console.log("test api "+params.env+" started on port " + port);
main.js
main.js contains only a few lines. It connects to MongoDB and, once connected, initializes the routes the API will listen on.
"use strict";

var express = require("express");
const MongoClient = require('mongodb').MongoClient;
var bodyParser = require("body-parser");

var app = express();
var config = require("./config/db");

app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

// init mongo db client, then init routes
const client = new MongoClient(config.url, { useNewUrlParser: true });
client.connect(function(err, database) {
    if (err) return console.log(err);
    const db = client.db(config.dbname); // connect to database with name "test"
    require("./api/routes").default(app, db);
});

exports.app = app;
db.js
db.js simply holds the DB credentials plus the database name. The connection string can be found in your MongoDB Atlas account using the Short SRV string, which works with MongoDB version 3.6+. Depending on how you want to connect, MongoDB Atlas provides the details. More details about how to connect with Node.JS can be found here.
"use strict";

exports.url = "mongodb+srv://[USERNAME]:[PASSWORD]@clusterXXX-XYZ.mongodb.net?retryWrites=true";
exports.dbname = "test";
routes.js
In the routes.js file we initialize the controller for version 1.0 and define the routes our API will listen on. We map each route to a function on the controller using the method functions the express router provides. After all valid routes, we need a wildcard route to catch all invalid routes with a 404 and a basic error message; otherwise your client would wait endlessly until its timeout triggers, as the API would not respond at all without this last route.
"use strict";

exports.default = function(app, db) {
    var v1_0 = require("./controllers/controller.1.0");
    v1_0.model.db = db;

    // Routes
    app.route("/1.0/datetime").get(v1_0.get_datetime);
    app.route("/1.0/book").post(v1_0.post_book);
    app.route("/1.0/book/:id").get(v1_0.get_book);

    // we need this to end all other routes this API does not know with 404
    app.get('*', function(req, res){
        res.status(404).json({error: "Unknown route"});
    });
};
Now imagine you are implementing version 2 of your API, offering more functions. The routes.js file could then look as follows:
"use strict";

exports.default = function(app, db) {
    var v1_0 = require("./controllers/controller.1.0");
    var v1_1 = require("./controllers/controller.1.1");
    var v2_0 = require("./controllers/controller.2.0");

    // assign the db instance to all models behind all controllers
    v1_0.model.db = db;
    v1_1.model.db = db;
    v2_0.model.db = db;

    // Routes
    app.route("/1.0/datetime").get(v1_0.get_datetime);
    app.route("/1.0/book").post(v1_0.post_book);
    app.route("/1.0/book/:id").get(v1_0.get_book);

    // routes for 1.1, added two more functions in this version
    // adding a book for example could work differently here compared to version 1.0
    app.route("/1.1/datetime").get(v1_1.get_datetime);
    app.route("/1.1/book").post(v1_1.post_book);
    app.route("/1.1/book/:id").get(v1_1.get_book);
    app.route("/1.1/article").post(v1_1.post_article);
    app.route("/1.1/articles").get(v1_1.get_articles);

    // routes for 2.0, added two more functions compared to 1.1
    // again, existing functions from earlier versions could work differently here
    // older clients probably won't work against 2.0, so you keep offering 1.0 and 1.1
    app.route("/2.0/datetime").get(v2_0.get_datetime);
    app.route("/2.0/book").post(v2_0.post_book);
    app.route("/2.0/book/:id").get(v2_0.get_book);
    app.route("/2.0/article").post(v2_0.post_article);
    app.route("/2.0/articles").get(v2_0.get_articles);
    app.route("/2.0/video").post(v2_0.post_video);
    app.route("/2.0/video/:id").get(v2_0.get_video);

    // we need this to end all other routes this API does not know with 404
    app.get('*', function(req, res){
        res.status(404).json({error: "Unknown route"});
    });
};
To ease the handling of your growing routes, you could split the routes.js file into version based files like routes.1.0.js and routes.1.1.js etc.
So in main.js you simply add as many
require("./api/routes.X.Y.js").default(app, db);
lines as you have versions deployed. So every time you update or patch a single version, you only need to deploy that single file, and your routes.X.Y.js files will not keep growing like the example above as you deploy more versions.
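A sketch of how main.js could build those require calls — the versions array is an assumption; keep it in sync with the route files you actually deploy:

```javascript
// Sketch: one route file per deployed version. The helper builds the
// file paths; main.js would then require() each and pass (app, db).
var versions = ["1.0", "1.1", "2.0"];

function routeFilesFor(versions) {
  return versions.map(function (v) {
    return "./api/routes." + v + ".js";
  });
}

// In main.js, once the DB is connected:
// routeFilesFor(versions).forEach(function (file) {
//   require(file).default(app, db);
// });

console.log(routeFilesFor(versions));
```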
controller.1.0.js
The controller file defines all functions your API offers for the given version. Here you basically do some validation (some people do it in the model, it is up to you), while the model handles the real read/write work.
For now we have a simple datetime function which returns three types of DateTime strings, then a function to add a book by storing a document into MongoDB, and another one to read a book by its ID.
"use strict";

var model = require("../models/model.1.0");
var moment = require("moment");

exports.model = model;

exports.get_datetime = function(req, res) {
    var date = new Date();
    res.json({datetime: date.getTime(), datetime2: date.toUTCString(), datetime3: date.toISOString()});
};

exports.post_book = function(req, res) {
    var params = req.body;
    params.datetime = moment().format("DD.MM.YYYY HH:mm:ss");
    model.addBook(params, function(ok, details) {
        if(ok) {
            res.status(200).json({details: details});
        } else {
            res.status(400).json({error: "Error storing book", details: details});
        }
    });
};

exports.get_book = function(req, res) {
    console.log(req.params.id);
    model.getBook(req.params.id, function(results) {
        res.json(results);
    });
};
As you remember, the route for reading a single book has a single parameter “:id”. Whatever you put into the URL path right after /book/ will be available in the req variable as req.params.id.
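Under the hood this is plain path matching. As a stand-alone sketch (extractBookId is a hypothetical helper, not part of express — express does this for you via “:id”):

```javascript
// Sketch of what express does with ":id": extract the path segment
// after /1.0/book/ and expose it as req.params.id.
function extractBookId(path) {
  var match = path.match(/^\/1\.0\/book\/([^\/]+)$/);
  return match ? match[1] : null;
}

console.log(extractBookId("/1.0/book/5c9d8f2e1a2b3c4d5e6f7a8b")); // the id
console.log(extractBookId("/1.0/book/"));                          // null (no id given)
```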
model.1.0.js
The model file has the real logic doing the read and write stuff. Here you should do the real work like reading documents from MongoDB (or whatever source) and write into it.
I basically work with callbacks, as all DB related actions are asynchronous. Be careful when you do things synchronously: fetching data with a driver that works asynchronously while calling it in a synchronous way will lead to empty results, ending in hours of headaches wondering why you don’t get any result. Database and network related actions are better handled asynchronously. There are ways to fetch and write data synchronously, but that blocks the current thread in your API for as long as the action runs, which matters when your API handles other work in the same thread at the same time.
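To illustrate the pitfall, here is a small sketch using setTimeout as a stand-in for an asynchronous DB call: returning the result synchronously yields undefined, while the callback style delivers it once the work is done.

```javascript
// WRONG: the function returns before the "query" has finished,
// so the caller always sees undefined.
function getBookWrong(id) {
  var result;
  setTimeout(function () {        // stands in for an async DB call
    result = { _id: id, title: "Some book" };
  }, 10);
  return result;                  // still undefined at this point
}

// RIGHT: hand the result to a callback once the "query" completes.
function getBookRight(id, callback) {
  setTimeout(function () {
    callback({ _id: id, title: "Some book" });
  }, 10);
}

console.log(getBookWrong(1));     // undefined
getBookRight(1, function (book) {
  console.log(book.title);        // "Some book"
});
```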
exports.test = function() {
    console.log("hi");
};

exports.addBook = function(params, callback) {
    exports.db.collection("books")
        .insertOne(params)
        .then(result => {
            console.log("book insert done. inserted: " + result.insertedCount);
            console.log(result);
            callback(result.insertedCount >= 1, result.ops[0]);
        })
        .catch(err => {
            callback(false, err);
        });
};

exports.getBook = function(id, callback) {
    var mongo = require('mongodb');
    return exports.db.collection("books").find({_id: new mongo.ObjectID(id)}).toArray(function(err, results){
        console.log(results);
        callback(results[0]);
    });
};
That’s it! Now run the following commands to start it:
npm install
node server.js -env TEST
You can find the ZIP with all files, ready to install and start, here: testapi.nodejs.tar.gz. I also added certbot-auto from https://certbot.eff.org/docs/install.html
Node.JS App on Port 80 while Webserver runs on Port 80?
Nginx
You need to define a server block which catches the API domain. To do so, add the following configuration (using my API test domain; put yours instead) to your sites-enabled folder (or to sites-available, then symlink it into sites-enabled) and restart Nginx.
server {
    listen 80;
    server_name api-test.eser-esen.de;

    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass "http://127.0.0.1:9999";
    }
}
For port 443 (SSL) you are better off creating a port 80 configuration in Nginx and running certbot-auto with the Nginx option. This will automatically adapt your configuration file, adding all configuration parameters including the certificates to your Nginx configuration.
In this case you don’t need to load the certificates in your Node.JS app, because Nginx handles the SSL part, while the connection between Nginx and Node.JS is unsecured as it runs locally. If your Node.JS app and Nginx run on different servers, keep the certificate options in your Node.JS app and point to https://.. in your proxy_pass instead so the connection stays secure.
Apache
For this you need mod_proxy to be installed and ready. Then prepare a vhost configuration for your API domain and use the following (again using my host as an example):
<VirtualHost *:80>
    ServerName api-test.eser-esen.de
    ProxyPreserveHost on
    ProxyPass / http://localhost:9999/
</VirtualHost>
The same applies here when you run your API on port 443 (SSL): you need to load the certificates there as well. If you need SSL between Node.JS and Apache, replace http with https in the ProxyPass directive.
Additional steps to make your API even more secure
Making your API completely public is the hardest case to secure, as you never know who is actually connecting to it.
There are different questions you need to clarify when you design and implement your API:
- Will the API be completely public?
- Free app fetching data from an API? Think of Google Maps on devices, there is no need to login, but you can use it for free, search for locations, calculate routes all loaded from Google’s API.
- Do users always need login credentials to use my API?
- Think of Facebook, cannot be used without logging in.
- Do I want my API to be public but want to assure it is only accessed by my mobile app?
- Again Google Maps. It is for free, but users can login and use their profile data (stored locations, history of locations, etc.) while using Google Maps.
Depending on your needs, you have to follow some rules and apply patterns to secure your API.
If you think you can fulfill the requirement from the third point (allowing the API to be used only by your app), you are wrong. In fact, it is impossible to keep your API safe from being used by bots, humans, or whatever else instead of your app. What you can do is make it as hard as possible for others to abuse your API or access it in a way you don’t want.
You can never verify the client itself, but you can verify its behaviour. Besides that, the following approaches make your API more secure, or at least make it as hard as possible for those you don’t want to have access.
- Use timestamps, integrated into the request headers.
- Sending a timestamp and comparing it with the current timestamp on your server allows you to block so-called “replay” attacks. Allow a time difference of a few seconds, as you cannot expect every entity on planet earth to be in perfect sync all the time.
- If a bad guy somehow copies a request by sniffing and resends it to your API hoping to get data he is not allowed to see, it will fail, as the timestamp in the request will be in the past (old enough that the time difference is way higher than the few seconds you allow).
- User logins.
- If possible make your API accessible only with user credentials.
- So any stranger won’t get access, as he has to sign up first (which you might only allow via your app).
- Still, if someone steals valid credentials he will get access to your API.
- Throttling mechanism.
- Using the API described in this article, someone could take down your API by simply bombarding it permanently with requests, or run a brute-force-like attack trying to fetch data with different request parameters.
- The API would try to handle each request, ending up overloaded by requests coming in from a few IPs or maybe just a single IP.
- A throttling mechanism can be integrated by defining a frequency of allowed requests per IP.
- Additionally, you could define a per-app usage quota, either by user login or by some unique ID you make your app send to the API; store it and control the traffic by that ID, limiting it down to a proper frequency. Basic delays would be enough, making users wait longer the more requests they make. Be careful: users connecting to your API from the same network share the same IP and will be affected when another user from that network hits the quota or threshold.
- No keys in URI.
- Extending your API with new functions and security related content might tempt you to put keys in the URI. Never do that, as URIs are logged on servers and readable by humans, while body payloads and headers are not.
- So keep your keys in either the payload or the headers.
- Use Apple’s DeviceCheck API and Google’s SafetyNet API.
- Both APIs help you verify a token your app sends along, confirming the app’s integrity and actually proving that the request really comes from your app installed on a real (iOS or Android) device.
- Since both are APIs themselves, there is some chance that someone might spoof or even steal a valid token and get access to your API despite this technique, but as I said: make it as hard as possible.
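As a minimal sketch of the timestamp check from the first point above (the X-Timestamp header name and the 30-second window are assumptions; tune them to your needs):

```javascript
// Sketch: reject requests whose timestamp (e.g. from a hypothetical
// X-Timestamp header, in milliseconds) is too far from the server clock,
// blocking replayed requests while tolerating small clock drift.
var MAX_SKEW_MS = 30 * 1000; // assumed 30-second window

function isTimestampFresh(sentMs, nowMs) {
  return Math.abs(nowMs - sentMs) <= MAX_SKEW_MS;
}

var now = Date.now();
console.log(isTimestampFresh(now - 5000, now));  // true  (5s old, accepted)
console.log(isTimestampFresh(now - 60000, now)); // false (likely a replay)
```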
Always keep in mind: making your API more secure should NOT affect your users. Your API should not require users to take more steps than needed just because you want to triple check whether they are allowed in, nor slow down because of complex, time-consuming security tasks which, in the end, kill your app’s success.
If you publish your iOS or Android app to the world, you basically publish your source code too, since anyone can decompile your app on his device and figure out how you connect to your API and what data you are sending. Passwords stored in plain text in your source code may get exposed this way. So always follow the security rule: “Make it as hard as possible”.