Michelle Sollicito, Senior Lead Developer, IntegraConnect
A popular method of implementing APIs in .Net Core is through the use of Microservices.
Microservices often make it easier to deploy and maintain APIs because each API runs in its own process (easier to scale up and down, easier to deploy separately) and “owns” a small part of the data model. This guide aims to help you avoid the pitfalls of this approach by briefly detailing some of the Do’s and Don’ts of building APIs as Microservices.
Design Microservices carefully, identifying the exact functionality and data model “owned” by the Microservice.
Access data via the Microservice. Always use the associated Microservice to get to data rather than going directly to the underlying data store.
Assign each Microservice to a particular small team (typically an agile/scrum team that could be fed with two pizzas). That team is totally responsible for everything about that Microservice – development, deployment, bug fixes and, in most cases, hosting/infrastructure.
Always have a QA person on each Microservices team. Having one QA person with intimate knowledge of each Microservice, able to write automated tests, is essential. Testing is also more self-contained: a Microservice often has a very well-defined interface to be tested, which makes it easier to write automated tests to run during the CI/CD process.
Identify Microservice endpoints carefully. A typical Microservice should consist of a number of REST calls involved in a domain – a single POST (create), a single GET (read), a single PATCH (update) and a single DELETE – mapping nicely onto the old data design methodologies based around CRUD. Usually a Microservice maps neatly onto the Controller in an MVC environment, if the Controller is designed correctly.
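As a sketch of that mapping, a minimal ASP.NET Core controller for a hypothetical “products” domain might look like this (the ProductDto type, route names and method bodies are illustrative assumptions, not from the article):

```csharp
using Microsoft.AspNetCore.Mvc;

// One Microservice "owns" one small domain: a single controller,
// one endpoint per CRUD operation.
[ApiController]
[Route("api/products")]
public class ProductsController : ControllerBase
{
    [HttpPost]                  // create
    public IActionResult Create(ProductDto dto) =>
        CreatedAtAction(nameof(Get), new { id = dto.Id }, dto);

    [HttpGet("{id}")]           // read
    public IActionResult Get(string id) =>
        Ok();                   // would fetch from this service's own data store

    [HttpPatch("{id}")]         // update
    public IActionResult Update(string id, ProductDto dto) => NoContent();

    [HttpDelete("{id}")]        // delete
    public IActionResult Delete(string id) => NoContent();
}

public class ProductDto
{
    public string Id { get; set; }
    public string Name { get; set; }
}
```

If the controller stays this small and self-contained, the Microservice boundary and the controller boundary coincide, which is the shape the article recommends.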
Employ Continuous Integration/Continuous Deployment techniques. Each Microservice can be deployed totally independently of all other Microservices, typically as a container (Docker is a great option), without having to wait for the rest of the application to be ready. From a software developer’s perspective, this typically equates to one “master” branch in Git (or TFS or another source code control system), which in turn equates to one Jenkins (or Bamboo or similar) deploy. Often each Microservice can be housed within a single Docker container or an AWS Lambda/Azure function, deployed using a single CloudFormation/Azure Resource Manager template. It is often possible, at least in theory(!), to deploy changes or new versions of Microservices, and to scale up or down, without affecting any other Microservice. This can save a great deal of money on infrastructure costs. In many good CI/CD environments, the deployed Microservice will contain its infrastructure (e.g. Docker containers).
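As one illustration, a .Net Core Microservice is often containerized with a multi-stage Dockerfile along these lines (the project name Products.Api and the .NET Core 3.1 image tags are assumptions for the sketch):

```dockerfile
# Multi-stage build: compile with the SDK image, run on the lighter runtime image.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "Products.Api.dll"]
```

Because the Dockerfile lives alongside the service’s code in its own repository/branch, the CI/CD pipeline can build and ship this one container without touching any other Microservice.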
Use an Orchestration system that meets your needs. Kubernetes has now matured and meets most needs without the overhead of too many bells and whistles. Both AWS and Azure offer a number of Orchestration options catering to those who need extra security etc. However, weigh the advantages of those extra features against a possible lack of portability and an administration overhead.
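For example, under Kubernetes each Microservice typically gets its own Deployment, and scaling it up or down independently is a one-line change (the names and image below are hypothetical):

```yaml
# Hypothetical Deployment for the products Microservice.
# Scaling this service up or down means changing "replicas" only;
# no other Microservice is affected.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: products-api
  template:
    metadata:
      labels:
        app: products-api
    spec:
      containers:
        - name: products-api
          image: myregistry/products-api:1.0   # hypothetical image name
          ports:
            - containerPort: 80
```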
Don’t convert to .Net Core just “because”. There is a perception that Microservices require .Net Core, but that just is not true. A Microservice based upon the .Net Framework can operate as a Docker container and live in an Orchestration environment just the same as a .Net Core one can. It is often very difficult to move code to .Net Core because dependent libraries are not yet fully upgraded, so simply run the code as a Microservice without upgrading to .Net Core. Recognize that the front end and the back end are independent, so they do not both have to be written in the same technology.
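To illustrate the point, a full .Net Framework service can be containerized too – it simply needs a Windows base image rather than a Linux one (the publish path here is illustrative):

```dockerfile
# A .Net Framework ASP.NET app runs in a Windows container;
# note the Windows-based framework image rather than a .Net Core one.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
COPY ./publish /inetpub/wwwroot
```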
Don’t hire the wrong people. The team that supports a Microservice typically has to be highly skilled as well as multi-skilled – which can be very expensive. Even if we assume for a moment that the team will only support the “back-end” Microservice (and therefore does not need UI skills of any kind), the team will probably have to include not only programming skills (such as C# or Java) and source code control skills (GitHub/TFS etc.), but also database skills, DevOps skills and QA/testing skills. The DevOps skills in particular are vital in a Microservices environment – the ability to set up scripts to automatically create the environment and/or machines, configure the container orchestration environment (security, scaling options, environment variables etc.) and deploy the code in a single deploy. Jenkins scripts, CloudFormation templates and Azure Resource Manager templates are complex technologies to learn and understand, even for those who are used to using YAML and JSON files. Unit testing skills are also vital for all developers. Get training for your developers when they need it.
Don’t skip testing. Unit tests must be built into the code and into the deployment process so that issues with the code are found before they reach production. Integration tests, too, must be run before the code is deployed into new environments as part of the CI/CD process.
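A minimal sketch of such a unit test, using xUnit (the PriceCalculator class is a hypothetical piece of Microservice logic invented for the example):

```csharp
using Xunit;

// Hypothetical pure-logic class from inside the Microservice under test.
public class PriceCalculator
{
    public decimal WithTax(decimal net, decimal rate) => net * (1 + rate);
}

public class PriceCalculatorTests
{
    [Fact]
    public void WithTax_AddsTheExpectedAmount()
    {
        var calc = new PriceCalculator();

        // 100 net at a 10% rate should come back as 110.
        Assert.Equal(110m, calc.WithTax(100m, 0.10m));
    }
}
```

Tests like this run in the CI/CD pipeline (e.g. `dotnet test` as a Jenkins build step), so a failing test blocks the deploy before it reaches production.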
Don’t forget the middle layer. Excessive chatter can result when Microservices are designed without a middle layer to “aggregate” results from the other Microservices. A middle-layer Microservice should request information from the other Microservices and return a meaningful amount of information back to the client. The most common middle-layer Microservices are Reporting, PDF-generation and Printing Microservices, where many other Microservices may have to be referenced within a single conversation.
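A middle-layer aggregator can be sketched roughly like this (the service names and URLs are hypothetical; a real implementation would use service discovery/configuration and proper JSON handling):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical aggregating "middle layer" service: one client call fans out
// to two downstream Microservices and returns a single combined result.
public class OrderSummaryService
{
    private readonly HttpClient _http;
    public OrderSummaryService(HttpClient http) => _http = http;

    public async Task<string> GetSummaryAsync(string orderId)
    {
        // Downstream URLs are illustrative; real ones come from config/service discovery.
        var orderTask    = _http.GetStringAsync($"http://orders-api/api/orders/{orderId}");
        var customerTask = _http.GetStringAsync($"http://customers-api/api/customers/for-order/{orderId}");

        // Call the downstream services in parallel, not one by one.
        await Task.WhenAll(orderTask, customerTask);

        // One aggregated payload instead of the client making two chatty calls itself.
        return $"{{ \"order\": {orderTask.Result}, \"customer\": {customerTask.Result} }}";
    }
}
```

The client makes one call and gets one meaningful response; the cross-service chatter stays inside the data center, where it is cheap.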
Don’t allow “fat” calls. Instead of requiring that every call to a Microservice endpoint carries all the security information it needs to be separately authenticated, authorized and checked, have a single Microservice that generates a lightweight token (JWT is a good option); the token (usually sent in the request headers) can then be used for all subsequent Microservice calls to identify which resources the client has access to.
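In ASP.NET Core this pattern is commonly wired up with the Microsoft.AspNetCore.Authentication.JwtBearer package; a sketch of the downstream-service side might look like this (the authority URL and audience name are hypothetical):

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Each downstream Microservice just validates the JWT issued by the
    // (hypothetical) token-issuing service; it does not re-authenticate the caller.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority = "https://auth.example.com"; // hypothetical auth Microservice
                options.Audience  = "products-api";             // this service's identifier
            });

        services.AddControllers();
    }
}
```

The client authenticates once, receives the token, and sends it in the `Authorization: Bearer …` header on every subsequent call; each service validates the signature and claims locally, keeping the calls lightweight.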
Don’t allow diverse technologies for no good reason. Although it is theoretically fine for each Microservice to use its own database engine, its own caching mechanism and so on, only employ different technologies where absolutely necessary. Think about the need for different skillsets and how that could cause issues for the organization. If your team is primarily used to SQL Server, choosing DynamoDB instead requires not only a different skillset but also a different mindset, so consider such technology decisions carefully.
In addition to the above “Do’s and Don’ts” advice that comes from experience, I would suggest that if your team is used to Microsoft technologies, Azure is the cloud environment to use. If your team is used to LAMP (Linux, Apache, MySQL, PHP) or Java, generally AWS is a great environment for your team. Having used both environments, I found Microsoft Azure easier to use for most things because of the GUI and the ease of finding what you need to know. AWS is very powerful and comprehensive, but that power usually requires a lot of scripting. Typically, LAMP/Java programmers are more comfortable with that kind of scripting than Microsoft users, in my experience.