An almost complete checklist for running RESTful APIs in production
Building RESTful APIs using a microservice architecture is a very common practice now. However, if the right ecosystem is not in place to support running these services in production, maintenance becomes a pain. This article outlines most of the necessary decisions to make, tools to deploy and other points to note in order to build, deploy and run RESTful APIs in production.
Choice of language and framework
In most cases, you are free to use a language and framework of your choice; that is one of the main benefits of a microservice architecture. But pay attention to the cold start time of your application if your API is expected to serve a spiky transaction load. Cold start time is the time the application takes to boot and ready itself before it can start accepting connections.
So choosing the right language and framework matters, depending on the kind of traffic the application serves. On a tangent, another way to address sharp changes in traffic is to build the application on serverless infrastructure.
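As an illustration of the serverless option, here is a minimal entry point following the AWS Lambda Python handler convention (the event shape is an assumption for the sketch). The platform scales instances with incoming traffic, so the application itself does not manage capacity for spikes:

```python
import json

def handler(event, context):
    # Minimal handler: the smaller the module-level work, the smaller
    # the cold start penalty when a new instance spins up.
    body = {"status": "ok", "path": event.get("path", "/")}
    return {"statusCode": 200, "body": json.dumps(body)}
```

Keeping heavy imports and client initialisation out of the hot path is the main lever you have over cold start time on such platforms.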
Deployment pipelines with the right strategy
The goal should be to avoid downtime during deployment. Deployments of newer API versions should be automated with a CI/CD pipeline and follow a suitable deployment strategy. A couple of examples are the blue-green strategy and the rolling deployment strategy. Which strategy suits your application depends on various factors, such as whether it can tolerate instances of two different versions running at the same time.
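The rolling strategy can be sketched in a few lines. This is a toy model, not a real orchestrator: `fleet`, `health_check` and the instance dicts are illustrative stand-ins (a real check would hit the instance's health endpoint):

```python
def health_check(instance):
    # Stand-in for probing the new instance's /health endpoint.
    return instance["healthy"]

def rolling_deploy(fleet, new_version):
    # Replace instances one at a time, only proceeding when the
    # replacement passes its health check, so capacity never drops
    # by more than one instance and two versions briefly coexist.
    for i, _ in enumerate(fleet):
        candidate = {"version": new_version, "healthy": True}
        if not health_check(candidate):
            return False  # abort; remaining old instances keep serving
        fleet[i] = candidate
    return True

fleet = [{"version": "v1", "healthy": True} for _ in range(3)]
rolling_deploy(fleet, "v2")
```

Note the mixed-version window in the loop: this is exactly why the strategy only fits applications whose old and new versions can serve side by side.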
Log aggregation and monitoring
Logging is essential for debugging any issue with the application. With containers now generally the preferred mode of deployment, the traditional approach of logging to a file is not ideal. There can be any number of containers and, consequently, just as many log files. Depending on how distributed the deployment is, combing through all of them to trace a single instance of an issue becomes a mammoth task.
A simple solution to this problem is a log aggregation stack. If you are familiar with Kubernetes, you must have heard of the famous EFK/ELK stack. You can find out more about them online, but the general idea is that all applications log to stdout, a tool like Logstash/Fluentd scrapes the log entries written there, and the entries are stored in Elasticsearch. As a follow-up step, these entries can then be queried and monitored using Kibana. On AWS, CloudWatch offers the same set of features as the ELK/EFK stack and should be sufficient if you are deploying your application there.
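On the application side, all this requires is structured logging to stdout. A minimal sketch using Python's standard `logging` module (the field names are illustrative, pick whatever your aggregator expects):

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    # One JSON object per line: easy for Fluentd/Logstash to parse
    # and index, unlike free-form multi-line text.
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler(sys.stdout)  # containers scrape stdout
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("api")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request served")
```

The application never touches a file; where the lines end up (Elasticsearch, CloudWatch) is the platform's concern, not the code's.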
Once log aggregation from the various containers into one common location is in place, alerts can be built into the log monitoring tool to notify the support team whenever an unexpected error appears in the log entries.
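In Kibana or CloudWatch such alerts are configured, not coded, but the underlying rule is simple enough to sketch. A toy stand-in that flags an alert once ERROR-level entries in the aggregated JSON logs reach a threshold:

```python
import json

def should_alert(log_lines, threshold=1):
    # Count ERROR-level entries in newline-delimited JSON logs and
    # flag an alert once the count reaches the threshold. The
    # threshold of 1 is an illustrative default.
    errors = sum(1 for line in log_lines
                 if json.loads(line).get("level") == "ERROR")
    return errors >= threshold

logs = ['{"level": "INFO", "message": "ok"}',
        '{"level": "ERROR", "message": "db timeout"}']
should_alert(logs)  # True
```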
Tracing is an absolute must for an API deployment. It gives end-to-end visibility of each call made to an API endpoint. Depending on how much instrumentation you put in place, you can profile and observe every call that is made. This will help you identify and iron out any performance issues you may be facing with the APIs in production. Tracing also helps you isolate the span, the section of the API call where latency is highest, and thereby quickly debug and identify the issue.
A well-known open-source tracing tool is Jaeger.
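The span idea itself can be illustrated with a toy timer. This is not Jaeger's API, just a stand-in showing what a tracer records for each section of a call:

```python
import time
from contextlib import contextmanager

spans = []  # a real tracer would export these to a collector

@contextmanager
def span(name):
    # Time a named section of an API call; nesting spans reveals
    # which section contributes the latency.
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

with span("handle_request"):
    with span("db_query"):
        time.sleep(0.01)  # simulated slow dependency
```

The inner `db_query` span finishing inside `handle_request` is what lets you pin the latency on the database call rather than the handler itself.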
The performance of the application depends on the resources supplied to it, such as vCPU and memory. A monitoring setup that tracks the usage of these resources across the deployment and provides insights into the underlying utilisation will help you tune the application to its workload. As with log monitoring and alerting, the same setup can also be used to alert on any stress in resource utilisation.
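The alerting logic behind such a setup reduces to comparing scraped readings against thresholds. A toy sketch, with the sample format and the threshold defaults being assumptions for illustration:

```python
def utilisation_alerts(samples, cpu_limit=0.8, mem_limit=0.9):
    # samples: (cpu_fraction, mem_fraction) readings scraped from the
    # deployment. Returns which resources breached their thresholds.
    alerts = []
    for cpu, mem in samples:
        if cpu > cpu_limit:
            alerts.append("cpu")
        if mem > mem_limit:
            alerts.append("memory")
    return alerts

utilisation_alerts([(0.95, 0.5), (0.4, 0.92)])  # ["cpu", "memory"]
```

In practice this comparison lives in your monitoring tool, and the interesting part is choosing thresholds that reflect how the application degrades under load.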
This is not a comprehensive list; as the title says, it is only almost complete. Plenty of other tools can be plugged into the ecosystem, such as dependency scanners, Docker image scanners and static code analysers, each with its own benefits depending on your implementation. But the items above are sensible defaults that should cover what you need to have in place once you deploy code to production.