Today we're going to talk about microservices and how to build a fast, secure network system using Nginx. At the end, we'll show you how to quickly and easily build a microservices demo using the Fabric Model.
Before we discuss the Fabric pattern, I want to talk about microservices and what it means from Nginx's point of view.
0:56 - The big shift
Microservices have triggered a big shift in application architecture.
When I first started building applications, they were all built the same way. The monolithic architecture shown on the slide is emblematic of how applications used to be built.

There's a virtual machine (VM), which for me usually meant Java. The application's functional components live inside the VM as objects. They communicate with each other in memory, passing things back and forth and making method calls. Occasionally, you reach out to other systems, using notifications or other mechanisms, to get data or pass information along.
With microservices, the way applications are built is completely different. Your functional components go from communicating with each other in memory, inside a virtual machine on the same host, to being deployed in containers and connecting to one another over HTTP using RESTful API calls.

This is very powerful because it gives you functional isolation. It gives you finer-grained scalability, and you get better resilience in the face of failure. In many cases it is as simple as using HTTP to make calls across the network.

This approach also has some drawbacks, though.
An anecdote
I have a dark secret: I was a Microsoft employee, and I did .NET development for many years. While I was there, I built a video publishing platform called Showcase.

Showcase was a tool for publishing all the videos Microsoft released to the web. People could watch these videos to learn tips and techniques for using Microsoft Word. It was a very popular platform; lots of people used it, and many of them commented on the videos we posted.
Showcase started out as a .NET monolithic application. As it became more and more popular, we decided it should be converted to an SOA architecture. The conversion was relatively easy: Visual Studio essentially gives you the ability to flip a switch, converting your DLL calls into RESTful API calls. With some minor refactoring, we got our code working well. We also used a third-party community service for the comment and community features in the application.
Running into problems
It seemed like our move to SOA was working. In our first tests, everything worked well, until we switched the system over to our staging environment and started working with production data. Then we saw some serious problems, on the pages that had lots of comments.

This was a very popular platform, and some of its pages had as many as 2,000 comments. As we dug into the problems, we realized these pages were taking a minute to render, because the community service had to fill in the usernames first and then launch a network call to the user database for every username to fetch the user's details and fill them into the rendered page. This was very inefficient: pages took a minute or two to render that normally took only 5 to 6 seconds in memory.
The fix
As we went through the process of finding and fixing the problems, we eventually tuned and optimized the system with measures like batching all the requests. We cached some of the data, and finally we optimized the network and really got the performance up.

So, what does this have to do with microservices? Well, with microservices you're basically taking the SOA architecture and strapping it to a faster-than-light engine. Where all your objects used to be contained in a single virtual machine, managed internally and communicating with each other in memory, they now exchange data over HTTP.

When you do this right, you get good performance and linear scalability.
Nginx works well with microservices
Nginx is one of the best tools you can use to transition to microservices.
A bit of history about Nginx and microservices: we've been part of the microservices movement from the beginning, and ours has been among the most downloaded applications on Docker Hub. Our customers and end users run some of the largest microservices installations in the world, and they use Nginx extensively across their infrastructure.
The reason is that Nginx is small, fast and reliable.
The Nginx Microservices Reference Architecture
We've been working with microservices at Nginx for some time now. This is a stylized view of the Nginx Microservices Reference Architecture, which we've built and are currently running on AWS.
We have six core microservices, all running in Docker containers. We decided to build a polyglot application, so each container can run a different language; at present we use Ruby, Python, PHP, Java, and Node.js.
We built the system using the twelve-factor app methodology, modified slightly so that it works better with microservices rather than the Heroku platform. Later, we'll show you the application actually running in a demo.
The value of the MRA
Why build such a reference microservice architecture?
We built this reference architecture because we needed to give customers a blueprint for building microservices. We also wanted to test the features of Nginx and Nginx Plus in a microservices context and figure out how to take better advantage of them. Finally, we wanted to make sure we deeply understood the microservices ecosystem and what it can offer us.
Networking problems
Let's go back to the big shift we discussed.
When you migrate an application's functional components from all running in memory, managed by a virtual machine, to working and communicating with each other across the network, you essentially introduce a series of problems that you need to solve for your application to work effectively.
First, you need service discovery. Second, you need load balancing across all the different instances in the architecture. And third, you need to worry about performance and security.
For better or worse, these problems come as a package: you have to deal with all of them, weighing one against another. Fortunately, we believe we have an approach that solves all of them.
Let's take a deeper look at each of these problems.
Service discovery
Let's talk about service discovery. In a monolithic application, the application engine manages all the object relationships. You never have to worry about where one object lives relative to another: you simply call a method, the virtual machine connects you to the object instance, and it's destroyed after the call.
With microservices, you need to think about where those services live. Unfortunately, there is no single, universal process for this. The various service registries in use, whether ZooKeeper, Consul, etcd, or another, all work in different ways. In this process, you need to register your services, and you need to be able to read back where those services are and how to connect to them.
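As a rough sketch of the "reading back" side, assuming Nginx Plus and a registry that exposes a DNS interface (Consul shown here; the names and addresses are illustrative, not from our actual setup), a resolver plus the resolve parameter lets Nginx pick up service locations from the registry at run time:

    # Illustrative only: a registry's DNS endpoint (Consul's default DNS port shown)
    resolver 10.0.0.2:8600 valid=10s;        # re-query the registry every 10 seconds

    upstream user_manager {
        zone user_manager 64k;               # shared memory, required for run-time resolution
        # "resolve" (Nginx Plus) re-resolves the name as instances register and
        # deregister; "service=http" reads ports from DNS SRV records.
        server user-manager.service.consul service=http resolve;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://user_manager;  # requests go to whatever instances exist right now
        }
    }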
Load balancing
The second issue is load balancing. When you have multiple instances of a service, you want to be able to connect to them easily, distribute your requests among them efficiently, and have them executed in the fastest way possible, so load balancing across the different instances is a very important concern.

Unfortunately, the simplest approaches to load balancing are very inefficient, and as you adopt different, more sophisticated load-balancing schemes, they also become more complicated and harder to manage. Ideally, you want your developers to decide which load-balancing scheme to use based on the needs of the application. For example, if you are connecting to a stateful application, you need persistence to ensure that session information is preserved.
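For instance, here's a minimal sketch (the service names and addresses are hypothetical) of letting different services use different load-balancing methods, including session persistence for a stateful backend:

    # A stateless service: send each request to the least-busy instance.
    upstream pages_service {
        least_conn;
        server 10.0.1.10:8080;
        server 10.0.1.11:8080;
    }

    # A stateful service: the sticky directive (Nginx Plus) pins each client
    # to the instance that holds its session.
    upstream session_service {
        sticky cookie srv_id expires=1h path=/;
        server 10.0.2.10:8080;
        server 10.0.2.11:8080;
    }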
Secure and fast communication
Perhaps the most daunting areas of microservices are performance and security.
When everything runs in memory, it's all fast. Running across the network is an order of magnitude slower.

Information that used to sit safely inside the system in binary format is now transmitted over the network in text format. It becomes much easier to put a sniffer on the network and monitor all the data your application is moving around.

If you want to encrypt the data at the transport layer, that introduces significant overhead in connection rates and CPU utilization. In its full implementation, SSL/TLS requires nine steps just to initialize a request. When your system has to handle thousands, tens of thousands, hundreds of thousands, or millions of requests a day, this becomes a serious obstacle to performance.
A solution
We believe the solutions we've developed at Nginx address all of these problems, giving you robust service discovery, excellent user-configurable load balancing, and secure, fast encryption.
Network architecture
Let's discuss the various ways you can set up and configure your network architecture.

We've come up with three network models. They aren't mutually exclusive; think of them as a spectrum. The three models are the Proxy Model, the Router Mesh Model, and the Fabric Model, which is the most sophisticated and in many respects turns load balancing on its head.
The Proxy Model
The Proxy Model focuses entirely on inbound traffic to your microservices application and effectively ignores communication within it.

You get all the benefits of the HTTP traffic management Nginx provides. You get SSL/TLS termination, traffic shaping, and security, and with the latest versions of Nginx Plus and ModSecurity you get WAF capabilities.

You also get caching: everything Nginx provides for your monolithic applications can be applied to your microservices system, and with Nginx Plus you get service discovery. As your API instances spin up and down, Nginx Plus can dynamically add and remove them from the load-balancing pool.
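A minimal sketch of a Proxy Model front end, assuming Nginx Plus, with illustrative names, addresses, and certificate paths: SSL/TLS termination and caching happen at the edge, and service instances are tracked dynamically, while internal traffic is left alone:

    proxy_cache_path /var/cache/nginx keys_zone=edge_cache:10m;

    resolver 10.0.0.2:8600 valid=10s;        # registry DNS endpoint (illustrative)

    upstream app_service {
        zone app_service 64k;
        server app.service.consul service=http resolve;  # instances added/removed dynamically
    }

    server {
        listen 443 ssl;                      # terminate SSL/TLS at the proxy
        ssl_certificate     /etc/nginx/certs/app.crt;    # illustrative paths
        ssl_certificate_key /etc/nginx/certs/app.key;

        location / {
            proxy_cache edge_cache;          # cache responses, just like a monolith front end
            proxy_pass http://app_service;
        }
    }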
The Router Mesh Model
The Router Mesh Model is similar to the Proxy Model: there's a front-end proxy service to manage inbound traffic, but it also adds centralized load balancing between the services.

Each service connects to a centralized router mesh, which manages the distribution of connections between the different services. The Router Mesh Model also lets you build in a circuit breaker pattern to add resilience to your application, allowing you to monitor failed service instances and take steps to bring them back.
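In configuration terms, the hub might look something like this hedged sketch (the ports, addresses, and service names are hypothetical): each service calls the central mesh, which load balances across the target service's instances and actively health-checks them:

    upstream user_manager {
        zone user_manager 64k;
        server 10.0.3.10:8080;
        server 10.0.3.11:8080;
    }

    # An instance counts as healthy only if it answers the probe with a 200.
    match user_manager_ok {
        status 200;
    }

    server {
        listen 8001;                         # services reach the user manager through this port
        location / {
            health_check match=user_manager_ok interval=5s;  # Nginx Plus active checks
            proxy_pass http://user_manager;
        }
    }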
Unfortunately, because this model adds an extra hop, it actually makes the performance problem worse if you have to use SSL/TLS encryption. That's what led to the Fabric Model.
The Fabric Model
The Fabric Model is the one that turns everything on its head.

As in the other two models, there's a proxy server to manage incoming traffic, but unlike the Router Mesh Model, instead of a centralized router you run Nginx Plus inside each container.

This Nginx Plus instance acts as a reverse and forward proxy for all HTTP traffic. With this system you get service discovery, robust load balancing, and, most importantly, a high-performance encrypted network.

We'll discuss how that happens and how we approach the work. First, let's look at the normal process for how a service connects to another and distributes its requests.
Normal process
In this diagram, you can see that the investment manager needs to talk to the user manager to get information. The investment manager creates an HTTP client, which makes a DNS request to the service registry and gets back an IP address. The client then initializes an SSL/TLS connection to the user manager, which requires a nine-stage negotiation, or "handshake". Once the data transfer is complete, the virtual machine closes the connection and garbage-collects the HTTP client.

That's the whole process. It's simple and easy to understand. When you break it down into these steps, you can see how the model actually completes the request-and-response process.

In the Fabric Model, we've changed this.
The Fabric Model in detail
The first thing you'll notice is that Nginx Plus runs inside every service, and the application code talks to it locally. Because these are local connections, you don't need to worry about encrypting them. It can be a plain HTTP request from the Java or PHP code to the Nginx Plus instance, a local HTTP request inside the container.

You'll also notice that Nginx Plus manages the connection to the service registry. We use a resolver that asynchronously queries the registry's DNS instance to get all the user-manager instances, and it establishes connections ahead of time, so that when the Java service needs to request data from the user manager, it can use a pre-established connection.
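Here's a hedged sketch of what that per-container configuration could look like (the registry address and service names are illustrative): the app talks plain HTTP to a localhost listener, while Nginx Plus asynchronously resolves the user manager's instances through the registry's DNS:

    resolver 10.0.0.2:8600 valid=10s;        # registry DNS, queried asynchronously

    upstream user_manager {
        zone user_manager 64k;
        server user-manager.service.consul service=http resolve;
    }

    server {
        listen 127.0.0.1:8081;               # local-only: the app-to-proxy hop never leaves the container
        location / {
            proxy_pass https://user_manager; # the hop that crosses the network is encrypted
        }
    }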
Persistent SSL/TLS connections
Stateful, persistent, encrypted connections between microservices are where the real benefits come in.

Remember how the service instance in the first diagram went through several steps: creating an HTTP client, negotiating an SSL/TLS connection, making the request, and closing it? Here, Nginx establishes the connections between microservices ahead of time and uses the keepalive feature to maintain persistent connections across calls, so SSL/TLS negotiation doesn't have to happen on every request.

Essentially, we've created a mini VPN connection from service to service. In our initial tests, we found connections were 77% faster.
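A minimal sketch of the connection-persistence side, again with illustrative names: encrypted connections stay open between calls, so the nine-step negotiation isn't repeated for every request:

    resolver 10.0.0.2:8600 valid=10s;

    upstream user_manager {
        zone user_manager 64k;
        server user-manager.service.consul service=http resolve;
        keepalive 32;                        # keep idle upstream connections open for reuse
    }

    server {
        listen 127.0.0.1:8081;
        location / {
            proxy_pass https://user_manager;
            proxy_http_version 1.1;          # keepalive requires HTTP/1.1 ...
            proxy_set_header Connection "";  # ... and an empty Connection header
            proxy_ssl_session_reuse on;      # reuse TLS sessions when reconnecting
        }
    }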
The circuit breaker pattern
In the Fabric Model and the Router Mesh Model, you can also benefit from creating and using the circuit breaker pattern.

In essence, you define an active health check within the service and set up a cache to provide data while the service is unavailable, which gives you full circuit breaker functionality.
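A hedged sketch of those two pieces together, with hypothetical names and addresses: the active health check takes failed instances out of rotation, and stale cache entries answer for the service while it's down:

    proxy_cache_path /var/cache/nginx keys_zone=cb_cache:10m;

    upstream user_manager {
        zone user_manager 64k;
        server 10.0.3.10:8080;
        server 10.0.3.11:8080;
    }

    match healthy {
        status 200;
    }

    server {
        listen 127.0.0.1:8081;
        location / {
            health_check match=healthy interval=5s fails=2;  # Nginx Plus active health check
            proxy_cache cb_cache;
            # Serve cached data if the service errors out or is unreachable.
            proxy_cache_use_stale error timeout http_500 http_502 http_503;
            proxy_pass http://user_manager;
        }
    }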
So, by now I'm sure you're thinking the Fabric Model sounds cool and you want to try it out in a real environment.