As code gets signed off by a developer, it goes to the infrastructure teams that deploy it in the dev/test environment and then validate it via a number of tests. The developer’s skill set usually doesn’t include knowledge of Kubernetes, service mesh parameters, or Ingress gateways. Beyond knowledge, there is usually enterprise-grade separation of roles: the developer shouldn’t have access to the network configuration, to monitoring tools they don’t need, and certainly not to security objects such as certificates.
In traditional IT environments, the process looks like this: the code is completed, a ticket is opened in the change management system, and the ticket is assigned to an IT person, reviewed, and implemented. This can take anywhere from a couple of hours to days or, in some extreme but realistic cases, weeks.
This approach is fundamentally at odds with the microservices world, where applications are improved continuously. The goal is to allow any developer to submit code and see the results instantly.
In this article, we provide a case example of a GitLab pipeline approach using AWS and Tetrate Service Bridge, demonstrating how a DevOps approach and a service mesh management platform can save time and optimize enterprise agility.
Deploying an app in AWS with TSB
With the GitLab pipeline approach, the execution phase starts as soon as the application code is committed to the repository. A developer or DevOps engineer has preconfigured ratio templates available; these templates define rules for how traffic is shifted between the stable and new versions of the application.
A service mesh platform, Tetrate Service Bridge (TSB), handles the rest of the abstraction. There is no longer a need to change, add, or remove multiple objects in your Kubernetes and service mesh environments. A concept called ServiceRoute abstracts and automates the creation and modification of those objects, so the operator only worries about the weights assigned to the endpoints; the rest is configured behind the scenes.
In this case, a developer submits code through the pipeline console and selects the appropriate traffic ratio between application deployments. When the tests complete successfully, it’s just a matter of one click to shift all traffic to the new version and remove the old application. A developer may be given permission to initiate this traffic shift in TSB such that the changes are made under a service account, so the developer doesn’t have to be provisioned with any access to the infrastructure, only to the GitLab console.
The DevOps operator benefits in a similar way. Sometimes they need to perform additional tests on an application handed to them (by a developer or vendor); instead of going through a task list to make sure every setting in the environment is correct, they upload the application to the GitLab repository and the deployment process starts automatically. Granular settings are abstracted and predefined by TSB during the pipeline creation stage and translated into a single-click operation in the GitLab interface.
In summary, the example below models a full application lifecycle in a very efficient way, saving many hours that would otherwise be spent setting, changing, and adjusting environment settings, as well as the time lost spreading the effort across multiple stakeholders. To remain competitive, the modern enterprise can’t afford such inefficiencies.
This excellent CI/CD pipeline example by AWS, https://aws.amazon.com/blogs/containers/ci-cd-with-amazon-eks-using-aws-app-mesh-and-gitlab-ci/, is used here as the foundation and applied to the TSB (Tetrate Service Bridge) infrastructure.
Below is the high-level overview of what is implemented in this example:
Simplifying the mesh
Let’s cover some of the details of how Tetrate creates the baseline for the application development pipeline by simplifying the service mesh.
In this specific deployment, the TSB management and control planes are deployed in an AWS EKS cluster. The benefits of using TSB are (a) an easily configurable environment and (b) greater visibility into workloads and application metrics.
Let’s quickly look at the concepts behind the AWS, Kubernetes, and Istio objects that must be defined for the application to be functional. Having these objects defined allows users to rely on TSB to manage the implementation details for each cluster. Instead, once defined, parameters are distributed across the clusters from a central location and consistently applied to the environment as a whole. Additionally, this allows TSB to collect data about configuration and traffic patterns and present it to the engineering team for analysis.
- TSB Workspace – a grouping of multiple clusters that TSB will manage and monitor
- Cluster – TSB defines this as a Kubernetes cluster; TSB is a platform that can manage and monitor multiple clusters under the workspace umbrella via a single set of policies and controls.
- Gateway and Traffic Groups – logical concepts that define groups of Kubernetes namespaces to be controlled as a single entity
- Tetrate Istio Gateway – an object that allows TSB to dynamically program the Envoy ingress gateway by creating VirtualService, Gateway, and DestinationRule objects as required by TSB’s centralized, enterprise-grade platform.
Note: The only application component that is created ahead of time is a simple Kubernetes Service for the test application. (It’s not a hard requirement to pre-create this object; it can also be moved into the CI/CD pipeline for a unified approach.) This Service connects the public FQDN of the application to the Kubernetes namespace. All the traffic-pattern logic is controlled by the service mesh (not Kubernetes), which allows very granular control over traffic patterns. TSB provides the management platform for the service mesh.
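As a rough sketch, such a pre-created Service might look like the following (the name, namespace, labels, and ports are hypothetical, not taken from the repo):

```yaml
# Hypothetical pre-created Service for the test application.
# Name, namespace, labels, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: demo-app
spec:
  selector:
    app: demo-app        # matches both blue and green pods; the mesh decides the split
  ports:
    - name: http
      port: 80
      targetPort: 8080
```

Because the mesh owns the traffic split, the Service itself can stay deliberately simple.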
Back to the example:
The repo https://github.com/PetrMc/gitlab-aws-tsb.git has only three directories:
a. src directory – application source code containing a Dockerfile and index.html. The pipeline builds the container image by downloading a simple HTTP server, copying the index.html page to the server root, and packaging the result as a Docker image
b. Template directory (has only two templates):
- The deployment template is used to define a Kubernetes Deployment object – the only variable used here specifies the application version and points to the container image created in the previous step
- The ServiceRoute template is really the heart of this deployment – here we define how traffic is shifted between the blue and green deployments and how to reach both versions of the application. Based on this, TSB configures all the necessary components inside the mesh. The ServiceRoute YAML file is applied using tctl, the Tetrate command-line utility.
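For illustration, a ServiceRoute in this spirit might look roughly like the sketch below. The exact schema varies between TSB versions, and the organization, tenant, workspace, group, service, and label values here are hypothetical; consult the Tetrate documentation for the authoritative fields:

```yaml
# Sketch of a TSB ServiceRoute splitting traffic 80/20 between two subsets.
# All names, labels, and weights are illustrative.
apiVersion: traffic.tsb.tetrate.io/v2
kind: ServiceRoute
metadata:
  name: demo-app-route
  organization: tetrate
  tenant: tetrate
  workspace: demo-workspace
  group: demo-traffic-group
spec:
  service: demo-app/demo-app      # <namespace>/<service-name>
  subsets:
    - name: stable
      labels:
        version: blue
      weight: 80
    - name: canary
      labels:
        version: green
      weight: 20
```

Changing only the two weight values and re-applying the file with tctl is what moves traffic between versions; no Istio VirtualService or DestinationRule has to be touched by hand.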
c. The root directory contains two files:
i. gitlab-ci.yml is quite similar to the example from the AWS website. The main differences are:
- In addition to the AWS, Kubernetes, and Docker components, we add and configure tctl, the TSB configuration utility mentioned above, which communicates with the TSB management plane via gRPC and RESTful API calls. The tctl binary is downloaded from Tetrate’s public Bintray repository.
- Only two phases are included in this implementation. The original AWS example defines more granular proportions; it’s a simple copy-paste process to add granularity if needed. The current two phases are <20% new deployment/80% stable release> and <80% latest/20% previous version>.
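The two phases could be expressed as manual jobs in .gitlab-ci.yml along these lines (the job names and the tsb-config.sh argument convention are hypothetical, not taken from the repo):

```yaml
# Illustrative .gitlab-ci.yml fragment: two manual gates for the two traffic phases.
stages:
  - build
  - canary
  - promote

canary-20-80:
  stage: canary
  script:
    - ./tsb-config.sh 20   # 20% new deployment / 80% stable release
  when: manual             # operator clicks to start the canary phase

promote-80-20:
  stage: promote
  script:
    - ./tsb-config.sh 80   # 80% latest / 20% previous version
  when: manual
```

Adding a more granular step (say, 50/50) is just a matter of copying one of these jobs and changing the weight argument.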
ii. The tsb-config.sh shell script again reuses a lot of the AWS material; the main differences here are:
- Only two definitions are configured: the application itself (a Kubernetes deployment) and a single definition file for the traffic pattern
- As mentioned above, the tctl command is used to configure the service mesh. The tctl binary is delivered to the pipeline via curl.
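The delivery step might look like the following job fragment. The download URL here is a placeholder; substitute the actual tctl download location for your TSB release:

```yaml
# Illustrative fragment: fetching tctl inside the pipeline.
# The URL and TCTL_VERSION variable are placeholders, not real endpoints.
before_script:
  - curl -Lo /usr/local/bin/tctl "https://example.com/tetrate/tctl/${TCTL_VERSION}/tctl-linux-amd64"
  - chmod +x /usr/local/bin/tctl
  - tctl version   # sanity check that the binary runs
```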
NOTE: One more detail to mention is that in addition to the AWS variables, several TSB-specific ones are added in this example:
- TSB_PASS – the password for the TSB admin account (e.g., ‘demoblog123’)
- TSB_TENANT – a TSB-specific parameter (the default tenant is ‘tetrate’)
- TSB_URL – the FQDN of the TSB programming endpoint that tctl will access via port 8443 (an example value for this field: gitlab-tsb.example.com)
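Wiring these variables into the pipeline could look like the sketch below. The exact tctl flag names differ between versions, so verify them against tctl’s built-in help before relying on this:

```yaml
# Illustrative fragment: pointing tctl at the TSB management plane.
# TSB_URL, TSB_PASS, and TSB_TENANT come from the GitLab CI/CD variables above;
# flag names are approximate and version-dependent.
configure-tctl:
  script:
    - tctl config clusters set default --bridge-address "${TSB_URL}:8443"
    - tctl login --username admin --password "${TSB_PASS}" --tenant "${TSB_TENANT}"
```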
The current demo pipeline looks as follows:
d. The application is modified (e.g., index.html file is updated with new homepage content).
e. The pipeline starts with deploying required components for AWS, TSB, and Docker.
f. The next step is creating a simple image by combining a publicly available web server with a custom home page and publishing the image into AWS ECR.
g. After that the CI/CD pipeline expects the DevOps operator to manually choose what amount of traffic goes to the newly released version of the application.
h. When testing of the new version of the application is concluded, the last step is to steer all traffic to the new application and remove the old version of the deployment. This is done by applying the modified ServiceRoute object via the tctl command and removing the previous application version by deleting its deployment from Kubernetes.
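As a hypothetical sketch of this final phase (all names are illustrative), the ServiceRoute collapses to a single subset at full weight, after which the old Deployment can be deleted:

```yaml
# Illustrative final-phase ServiceRoute spec: 100% of traffic to the new version.
# Apply with tctl, then remove the old Deployment, e.g.:
#   kubectl delete deployment demo-app-blue -n demo-app
spec:
  service: demo-app/demo-app
  subsets:
    - name: stable
      labels:
        version: green   # the formerly "new" version is now the only subset
      weight: 100
```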
During all these steps, TSB manages the traffic steering rules on the fly and collects RED metrics for both versions of the application and for the ingress gateway.
Here is a video with a live demo of the CI/CD GitLab deployment in AWS with TSB at the center of the picture.
A couple of notes about some obstacles we discovered while building the demo that might help those who are implementing a similar approach in their environment:
- By default, an EKS cluster is only accessible to the user that created it. For the service account used in the CI/CD pipeline to be able to manage Kubernetes settings, the user must be added to the Kubernetes configuration per: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html.
- The DynamoDB table used in this example is Region-dependent – if the table creator’s default AWS Region differs from the pipeline’s Region, the pipeline will not be able to access the database.
- AWS secrets and other variables defined in the GitLab CI/CD pipeline are only readable by protected branches or tags. If no tags are used and the branch is not protected, the pipeline will not be able to access AWS.