The new version of GitLab 11 consumes less memory, works faster and significantly speeds up the development process. It is a convenient tool for running projects based on DevOps best practices, giving teams full, automated control over the entire delivery process.
As I mentioned in the previous article, the new GitLab 11 deserves special attention because of the automation it brings to all DevOps-related functions. This is a complete novelty that will unlock the creative potential of every developer by freeing them from carrying out DevOps procedures manually. The whole idea of the latest version of GitLab is built around automated processes called Auto DevOps.
In the first part of our webinar, you can learn how GitLab helps to generate a fully automated pipeline.
How does Auto DevOps work in practice?
After installing the latest version of GitLab, we can proceed with creating a new project (this applies to the On-Premises version; the Cloud version is, of course, available at GitLab.com). After creating a new repository, GitLab will immediately suggest enabling the new Auto DevOps feature for our project (from version 11 onwards, the Auto DevOps service is enabled by default).
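If you prefer an explicit opt-in over the project-level toggle, the same behavior can be requested from a `.gitlab-ci.yml` file. This is a minimal sketch, assuming a GitLab version that supports template includes; `Auto-DevOps.gitlab-ci.yml` is the template GitLab ships:

```yaml
# .gitlab-ci.yml – explicit opt-in to Auto DevOps.
# With Auto DevOps enabled in the project settings, no CI file is needed
# at all; including the shipped template is an equivalent, explicit form.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```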
First, however, we need to define where our application will be deployed. In this respect, GitLab integrates with Kubernetes.
To quickly start the Kubernetes cluster configuration, we can use one of two integration options:
- The first option is to connect an already existing cluster, where we fill in all the necessary connection details,
- The second option is to use the Google Cloud Platform, which will create a Kubernetes cluster for us.
Of course, the second solution is very convenient and fast. The only thing we need is an active account in the Google ecosystem, e.g. Gmail. At this point, it is worth noting the free trial offered by GCP – it lasts 12 months and includes $300 of credit that can be used on any cloud services.
As soon as we log in, we can create a new Kubernetes cluster in the Google cloud. Directly in GitLab, we choose the zone in which the cluster will be created, the number of nodes it should contain, and their machine type. After accepting the settings, the cluster creation process begins; it takes a minute or so, depending on the resources requested.
When the new cluster is ready for operation, we can easily manage it in GitLab.
- We will start by installing Tiller, the server-side component of the Helm package manager, on the Kubernetes cluster.
- Then we will install Ingress, which will be responsible for routing traffic within the cluster. It will act as the entry point to our infrastructure. Moments after installation, we obtain the IP address assigned to this service.
- Finally, we will also install Prometheus, responsible for monitoring the entire Kubernetes cluster. After a few moments, Prometheus will begin to provide basic information about the infrastructure. At a later stage, the scope of monitoring will be much broader.
- To finish the configuration, we need to take care of the domain that the Auto DevOps service will use for the Auto Review and Auto Deploy processes. At this point, GitLab will immediately suggest using the free nip.io service to generate a temporary domain for our environment.
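The nip.io service needs no configuration because it is a wildcard DNS: any name of the form `<ip>.nip.io` resolves back to that IP. A small sketch of how the temporary domain is derived from the Ingress IP (the address `203.0.113.10` is a made-up placeholder):

```shell
# Placeholder value – in practice this is the IP assigned to Ingress
INGRESS_IP="203.0.113.10"

# nip.io resolves any "<ip>.nip.io" name back to that IP, so every
# environment gets a resolvable hostname with no DNS records to manage
AUTO_DEVOPS_DOMAIN="${INGRESS_IP}.nip.io"

echo "$AUTO_DEVOPS_DOMAIN"   # prints: 203.0.113.10.nip.io
```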
After saving the settings, we can go back to the repository view and configure the Auto DevOps service itself.
Speaking of settings, we can expand our configuration in the Settings panel.
We have three deployment strategies to choose from:
- Continuous deployment to production – every change is deployed straight to production,
- Timed incremental rollout to production – changes are rolled out to production gradually, based on time thresholds,
- Automatic deployment to staging, manual deployment to production – changes are deployed to staging and wait for a manual trigger before reaching production.
Let’s choose the first option and return to the main view of our repository.
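For reference, the strategy selected in the UI corresponds to CI/CD variables read by the Auto DevOps template; this is a hedged sketch of the documented variable names, not something our project needs, since the first option is the default:

```yaml
# Only one of these would be set, depending on the chosen strategy:
variables:
  STAGING_ENABLED: "true"                # option three: staging + manual production
  # INCREMENTAL_ROLLOUT_ENABLED: "true"  # option two: timed incremental rollout
```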
Assuming that we want to use an existing local project, we run the ready-made commands suggested by GitLab, and after a moment our remote repository in GitLab is ready. Whenever we apply changes to the repository (e.g., a push or a merge), GitLab will inform the GitLab Runner that the project needs to be built.
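The ready-made commands boil down to adding the new remote and pushing. The sketch below simulates this end to end with a local bare repository standing in for the GitLab server, so it runs anywhere; with a real project, you would substitute your repository's GitLab URL:

```shell
set -e

# A local bare repository stands in for the remote GitLab server
REMOTE_DIR=$(mktemp -d)
git init -q --bare "$REMOTE_DIR/project.git"

# An existing local project with at least one commit
WORK_DIR=$(mktemp -d)
cd "$WORK_DIR"
git init -q
git -c user.email=dev@example.com -c user.name=Dev \
    commit -q --allow-empty -m "initial commit"

# The two commands GitLab suggests for an existing local project
git remote add origin "$REMOTE_DIR/project.git"
git push -q origin HEAD

git ls-remote --heads origin   # shows the branch now present on the remote
```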
Editing the code
Returning to GitLab, we see that our new code is already visible, along with the first and last name of the user who worked on it most recently.
I would like to point out that our project does not include any automation-related files that could give GitLab any hints. After introducing a small change to our project:
and creating a merge request, GitLab will automatically create and run a CI/CD pipeline for us.
Interestingly, even though we have not configured anything in the CI/CD area, a pipeline will be created automatically thanks to the new Auto DevOps service. The window showing its details gives us insight into all the stages of the automation process that has been prepared and launched for us.
The principle of operation is relatively simple. In the first stage, GitLab uses Heroku buildpacks to detect the language and build the right Docker image. Next, GitLab runs a series of tests – in our case covering code quality and the tests that developers supplied along with the code in the repository. Then a temporary preview environment is created, and our application is deployed to it. That way, we get a live preview of the new version of the application immediately after deployment. At this stage, we can evaluate it and decide whether to continue deploying to the target environment or stop here. As I mentioned before, the Review App is purely demonstrative and temporary, which means that at the end it is torn down and all the resources it used are released.
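Written down as a pipeline definition, the flow described above corresponds roughly to the following stage list (an illustrative subset – the real Auto DevOps template defines more stages, such as security scans):

```yaml
stages:
  - build        # buildpack-based detection and Docker image build
  - test         # default language tests and code quality checks
  - review       # temporary Review App deployment for the merge request
  - production   # deployment to the target environment
  - cleanup      # tearing down temporary resources, e.g. the Review App
```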
Let’s return to the beginning and the build phase. It relies on Heroku – Auto DevOps in GitLab 11 supports all the programming languages and frameworks available through Heroku buildpacks, such as Ruby, Rails, Node, PHP, Python and Java. Thanks to this, GitLab automatically detects the language used by the programmer and runs the default tests for it.
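Conceptually, buildpack detection is a series of marker-file checks. The sketch below is a heavily simplified stand-in (the real Heroku buildpacks probe much more thoroughly; the function name and file list here are illustrative):

```shell
# Simplified sketch of buildpack-style language detection: look for a
# characteristic dependency file in the project directory.
detect_language() {
  dir="$1"
  if   [ -f "$dir/Gemfile" ];          then echo ruby
  elif [ -f "$dir/package.json" ];     then echo node
  elif [ -f "$dir/requirements.txt" ]; then echo python
  elif [ -f "$dir/pom.xml" ];          then echo java
  elif [ -f "$dir/composer.json" ];    then echo php
  else echo unknown
  fi
}

demo=$(mktemp -d)
touch "$demo/package.json"
detect_language "$demo"   # prints: node
```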
Considering that the pipeline was generated automatically, we must remember that it is based on defaults – at this stage, the deployment to Kubernetes uses the default Helm chart settings.
Returning to the merge request view, we see a complete list of completed tasks. We have at our disposal the generated URL with the deployed version of our project (only a preview, of course – not a production version) and a Code Quality report with suggestions for improving the code.
If we have no objections, we can merge.
After completing the merge and switching to the Pipelines view (in the left-hand CI/CD menu), we can see the new pipeline.
The last phases of the pipeline have changed. Instead of a Review App, this time we have the final deployment to the production environment.
In the next step, after selecting Environments (in the left-hand CI/CD menu), we see our code running at the production stage. By default, one instance of our project has been created. What if we want to scale up?
In the Settings category, under the CI/CD tab in the main menu, there is an option called Secret variables. By entering PRODUCTION_REPLICAS in the Key field and the desired number of instances in the Value field, we can easily declare how many copies of our project should run.
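A hedged sketch of how such a variable typically takes effect: the Auto DevOps deploy script reads it, falls back to a single replica when it is unset, and hands the count to Helm (the helper name here is ours; the fallback-to-1 behavior mirrors what the template documents):

```shell
# Value that would arrive from GitLab's Secret variables settings
PRODUCTION_REPLICAS=3

# Fall back to one replica when the variable is unset or empty
replicas() {
  echo "${PRODUCTION_REPLICAS:-1}"
}

# The count then reaches Kubernetes via Helm, roughly:
#   helm upgrade ... --set replicaCount="$(replicas)"
replicas   # prints: 3
```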
Returning to the Environments tab in the CI/CD category, we press the Re-deploy button. That way, we repeat the deployment at the scale we have just defined.
At this stage, GitLab provides us with another interesting option to monitor the deployment process and the performance of our application.
The data is generated automatically and presented in the form of graphical charts which we can use to make a detailed analysis.
This is what Auto DevOps is about – from build, through testing and quality verification, to deployment to the production environment, with monitoring of the entire process.
If you want to learn more about this topic and see other features of GitLab, you can find more information in the second part of the webinar.
The extended functions of GitLab 11 provide excellent support for the entire development process as part of DevOps best practices. Full automation saves time and brings greater optimization. We no longer have to think about procedures and stages, because they appear by themselves at the right moment.
Contact us if you want to learn more about the new GitLab 11.
Software engineer and consultant in the areas of DevOps, automation, cloud and containerization, with extensive experience in implementing and maintaining DevOps practices. Currently passionate about cloud solutions, especially those based on AWS. He is interested in broadly understood automation and in designing microservice and serverless applications.