Because DevOps is a never-ending, iterative process of continuous improvement, the DevOps lifecycle is conventionally drawn as an infinite loop that shows how the phases build on one another.
The DevOps lifecycle consists of eight phases that represent the processes, capabilities, and tools needed for development (on the left side of the loop) and operations (on the right side of the loop). During each phase, teams collaborate and communicate to maintain consistency, speed, and quality.
The planning phase covers everything that precedes the writing of code and is the part where product and project managers have the main say. Requirements and feedback are gathered from customers and stakeholders to create a product roadmap that will guide future development. The roadmap can be recorded and tracked in project management systems such as Jira, Azure DevOps, or Asana, which provide a wide range of tools for tracking progress, tasks, open issues, and milestones.
The roadmap can be further divided into epics, features, and user stories to create a backlog of outstanding tasks that maps directly to customer requirements. The tasks in the backlog can then be used to plan sprints and assign work to the team so that development can begin.
In addition to the standard set of tools for software developers, the team also has a standard set of plug-ins installed in their development environments to aid the development process, help enforce a consistent code style, and avoid common security flaws or erroneous patterns when writing code.
This helps to teach developers proper code writing practices while also aiding collaboration by providing some consistency in source code. These tools also help address issues that can result in test failures, leading to fewer failed builds.
When developers complete a task, they push their code to a shared code repository, typically a Git repository. There are several ways to do this, but usually the developer opens a pull request, a request to merge the new code into the shared source code. Another developer then reviews the changes and, once satisfied that the code contains no bugs or problems, approves the pull request. This manual review is intended to be quick and lightweight, yet it is effective at catching bugs early.
At the same time, the pull request triggers an automated process that compiles the source code and runs a series of unit and integration tests to catch any regressions. If the build or any test fails, the pull request is rejected and the developer is notified to fix the problem. By continuously checking code changes into the shared repository and running builds and tests against them, teams minimize the integration issues that arise when many people work on shared source code, and surface bugs early in the development lifecycle.
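The gate described above can be sketched as a small script. The step names and the pass/fail logic here are purely illustrative placeholders, not any particular CI system's API:

```python
# Minimal sketch of a CI gate: run each pipeline step in order and
# stop at the first failure, so a broken build or failing test blocks
# the merge. Each step is a placeholder for a real build/test command.

def run_pipeline(steps):
    """Run (name, callable) steps; return (passed, failed_step_name)."""
    for name, step in steps:
        if not step():          # a step returns False on failure
            return False, name  # fail fast and report which step broke
    return True, None

if __name__ == "__main__":
    steps = [
        ("compile", lambda: True),
        ("unit tests", lambda: True),
        ("integration tests", lambda: False),  # simulated failure
    ]
    ok, failed = run_pipeline(steps)
    print("merge allowed" if ok else f"merge blocked: {failed} failed")
```

Real CI systems (GitLab CI, GitHub Actions, Jenkins) express the same fail-fast sequence declaratively in pipeline configuration rather than in code.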
Once the build succeeds, it is automatically deployed to a staging environment for deeper testing, outside of both the production and development environments. The staging environment may be an existing hosting service, or it may be a fresh environment provisioned as part of the deployment process. This practice of automatically provisioning a new environment at deployment time is known as Infrastructure as Code (IaC) and is an essential part of many automated DevOps pipelines.
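The core idea behind IaC tools such as Terraform is declare-and-reconcile: the environment is described as data, compared against what already exists, and only the missing pieces are created. A toy Python illustration of that shape (the resource names are invented, and real tools do this through providers and state files):

```python
# Toy illustration of the IaC idea: declare the desired environment
# as data, then reconcile it against the resources that already exist
# so the same definition can rebuild staging or production on demand.

desired = {"network": "staging-net", "vm": "staging-app", "db": "staging-db"}

def reconcile(desired, existing):
    """Return the resources that must be created to reach the desired state."""
    return {kind: name for kind, name in desired.items() if kind not in existing}

if __name__ == "__main__":
    existing = {"network": "staging-net"}   # environment partially provisioned
    for kind, name in reconcile(desired, existing).items():
        print(f"creating {kind} {name!r}")  # a real tool would call a cloud API
```

Because reconciliation only creates what is missing, running the same definition twice is safe, which is what makes provisioning at deployment time repeatable.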
After the application is deployed to the test environment, a series of manual and automated tests are performed. Manual testing can be traditional User Acceptance Testing (UAT), where testers use the application as a customer would use it to highlight any issues or enhancements that should be addressed before deployment to production.
Automated tests can, in parallel, run application security scans, check infrastructure changes for compliance with best practices, measure application performance, or run stress tests. Which tests run in this phase depends on the organization and on what is relevant to the application, but the phase can be thought of as a testing platform that allows new kinds of testing to be added, beyond those mentioned above, without interrupting developers or affecting the production environment.
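One way to picture such a pluggable testing platform is a registry of checks: new kinds of testing are registered alongside the existing ones without modifying them. A minimal sketch, with placeholder checks standing in for real scanners and load tests:

```python
# Sketch of a staging test stage as a pluggable set of checks: a new
# kind of testing (security scan, performance, compliance, ...) is
# added by registering one more function, leaving the others untouched.

CHECKS = []

def check(fn):
    """Decorator that registers a staging check."""
    CHECKS.append(fn)
    return fn

@check
def security_scan():
    return True          # placeholder for a real vulnerability scanner

@check
def performance_test():
    return True          # placeholder for a real load/stress test

def run_checks():
    """Run every registered check; return the names of any that failed."""
    return [fn.__name__ for fn in CHECKS if not fn()]

if __name__ == "__main__":
    failures = run_checks()
    print("stage passed" if not failures else f"stage failed: {failures}")
```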
The release phase is a milestone in the DevOps process – it’s the point at which we can say that the compiled source code is ready to be deployed to the production environment. At this stage, every code change has gone through a series of manual and automated tests, and the operations team can be confident that any noticeable problems or application degradations are unlikely.
Depending on the organization's DevOps maturity, it may decide to automatically deploy any build that reaches this stage of the process. Developers can use feature flags (switches) to keep new features turned off so that customers don't see them until they are ready to go live. Under this model, organizations manage to deploy multiple releases of their products every day.
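The feature-flag technique can be sketched in a few lines. The flag name, the flag store, and the checkout behavior below are all hypothetical; the point is that deploying code and releasing a feature become separate decisions:

```python
# Minimal feature-flag sketch: code for a new feature ships to
# production but stays dark until the flag is turned on, so releasing
# the feature no longer requires a new deployment.

FLAGS = {"new_checkout": False}   # hypothetical flag store

def checkout(cart_total):
    if FLAGS.get("new_checkout"):
        return cart_total * 0.9   # new behavior, hidden behind the flag
    return cart_total             # current behavior customers see

if __name__ == "__main__":
    print(checkout(100.0))        # old path while the flag is off
    FLAGS["new_checkout"] = True  # "release" the feature without redeploying
    print(checkout(100.0))        # new path is now live
```

In production, the flag store would typically live in configuration or a flag service so it can be flipped, and rolled back, without touching the running code.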
Alternatively, an organization may want to control when builds are released to production. They may want to have a regular release schedule or release new features only after a milestone has been met. A manual approval process may be added in the release phase to allow only certain individuals within the organization to authorize a release to production.
Finally, the build is ready and put into the production environment. There are several tools and processes that can automate the release process to make it reliable and free of downtime.
The same Infrastructure as Code (IaC) that created the staging environment can be configured to create the production environment. Since we already know the staging environment was created successfully, we can be confident that the production release will proceed smoothly.
Blue-green deployment allows the transition to the new production version without any downtime. A new environment is built alongside the existing production environment. When the new environment is ready, the hosting service directs all new requests to it. If a problem is found with the new build at any point, requests can simply be routed back to the old environment until the problem is resolved.
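The routing mechanics of blue-green deployment reduce to a single pointer: two identical environments exist at once, and cutover or rollback is just repointing the router. A minimal sketch (the environment names and request handling are illustrative):

```python
# Sketch of blue-green routing: two environments exist side by side,
# the router decides which one receives traffic, and rolling back is
# nothing more than pointing the router at the old environment again.

class Router:
    def __init__(self, live):
        self.live = live                     # "blue" or "green"

    def switch_to(self, env):
        self.live = env                      # cut over, or roll back

    def handle(self, request):
        return f"{self.live} served {request}"

if __name__ == "__main__":
    router = Router(live="blue")             # blue is current production
    print(router.handle("/home"))
    router.switch_to("green")                # new build goes live, no downtime
    print(router.handle("/home"))
    router.switch_to("blue")                 # problem found: instant rollback
    print(router.handle("/home"))
```

In practice the "router" is a load balancer or DNS entry, but the property is the same: both environments stay running, so switching direction takes effect immediately.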
The new release is already active at this stage and customers are using it. The operations team is now making sure everything is running smoothly. Based on the configuration of the hosting service, the environment automatically adjusts for load to handle peaks and troughs in the number of active users.
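The load-based adjustment mentioned above is usually a threshold rule configured on the hosting service. A toy version of such a rule, with invented thresholds and instance bounds, might look like this:

```python
# Toy autoscaling rule of the kind hosting services apply: compare
# average load against thresholds and decide how many instances to run.
# The thresholds and instance bounds here are invented for illustration.

def desired_instances(current, avg_load, low=0.3, high=0.7, min_n=1, max_n=10):
    """Scale out above `high` load, scale in below `low`, otherwise hold."""
    if avg_load > high:
        return min(current + 1, max_n)   # peak: add an instance, capped
    if avg_load < low:
        return max(current - 1, min_n)   # trough: remove one, floored
    return current                       # steady state: do nothing

if __name__ == "__main__":
    print(desired_instances(2, 0.9))   # peak in active users: scale out
    print(desired_instances(2, 0.1))   # trough: scale in
    print(desired_instances(2, 0.5))   # steady load: hold
```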
The organization should also give customers a way to provide feedback on its services, along with tools to collect and triage that feedback to help shape future product development. This feedback loop is important: no one knows what customers want better than the customers themselves, and they are the best test team, spending far more time exercising the application than the DevOps process ever could.
The “final” phase of the DevOps cycle is environment monitoring. This builds on the customer feedback provided in the operations phase by collecting data and providing analytics on customer behavior, performance, bugs, and more.
We can also turn monitoring inward, on the DevOps process itself, watching for bottlenecks that cause frustration or hurt the productivity of the development and operations teams.
All of this information is then fed back to the product manager and the development team to close the loop. It would be easy to say that the loop starts again here, but in reality the process is continuous: there is no beginning or end, just a constant evolution of the product throughout its lifecycle, ending only when the project is no longer maintained or no longer needed by customers.
“DevOps? I don’t know what you are talking about. I can create your infrastructure in AWS, automate it with Terraform that runs in pipeline in GitLab, create Helm charts for your applications, deploy it in cooperation with Helmfile again from pipelines, obviously, and some other stuff. Is it enough? I don’t know. You tell me.”