The industry is still defining the basics of GitOps, but some enterprise DevOps platform teams have started to establish operational best practices to support it in production.
GitOps, according to a paper released this week by a new Cloud Native Computing Foundation GitOps working group, isn’t just about using Git to store application and infrastructure data. Rather, it is a set of workflows that define the desired state of a system in a versioned code repository, and then continuously maintain a running production system in compliance with that desired state.
This is also known as a reconciliation loop, and it is the most crucial principle of GitOps, working group members said.
“What defines GitOps, versus doing something with webhooks or CI triggering something, is this reconciliation loop,” said Christian Hernandez, senior technical marketing director at Red Hat, in a presentation at KubeCon this week. “Something is still checking [for updates].”
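The reconciliation loop Hernandez describes can be sketched in a few lines. This is a minimal illustration with invented state and field names, not any particular controller's code: the desired state (as checked out of Git) is continuously compared against the actual system, and any drift is converged back.

```python
# Minimal sketch of a GitOps reconciliation loop (hypothetical data).
# The desired state lives in a versioned repository; a controller
# repeatedly compares it against the running system and converges.

def diff(desired: dict, actual: dict) -> dict:
    """Return the keys whose desired value differs from the actual one."""
    return {k: v for k, v in desired.items() if actual.get(k) != v}

def reconcile_once(desired: dict, actual: dict) -> dict:
    """Apply every drifted key back to its desired value."""
    for key, value in diff(desired, actual).items():
        actual[key] = value  # stand-in for applying a manifest to the cluster
    return actual

# Desired state as it would be checked out of Git:
desired = {"replicas": 3, "image": "registry.example.com/app:v1.2"}
# Actual cluster state, drifted by a manual change:
actual = {"replicas": 5, "image": "registry.example.com/app:v1.2"}

actual = reconcile_once(desired, actual)
print(actual["replicas"])  # drift repaired: back to 3
```

The point of the loop, as opposed to a webhook-triggered push, is that it also repairs drift introduced outside the pipeline, because "something is still checking."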
While GitOps is still in its infancy in the industry, analysts said there are signs of growth, supported by the rise of cloud-native computing, the ubiquity of Kubernetes and widespread enterprise adoption of DevOps principles, which are prerequisites for GitOps.
“GitOps goes hand in hand with DevOps and also embodies this interdisciplinary collaboration between software developers and IT operators,” said Jay Lyman, analyst at 451 Research, a division of S&P Global. “I’ve heard GitOps described as operations pull requests, and I think that’s an accurate way of describing how a Git-like process… can be of use to IT operations teams.”
The rise of DevOps platforms, and of GitOps as a means of separating developer and platform operator responsibilities within them, motivated a representative from Microsoft Azure to get involved in the GitOps working group.
“Your commit to your repository becomes, basically, your operation as an application developer… and you don’t worry about the actual CD into the cluster itself,” said Chris Sanders, program manager for Azure Automation at Microsoft. “You worry about your CI part, and I think it’s huge, because it takes a lot of the [risk] out of places where [devs] can cause problems if we go in there and try to do the CD part ourselves.”
IT Professionals Share Lessons Learned With GitOps “Day 2”
As the popularity of GitOps grows, enterprise platform teams must adapt observability and IT governance practices to accommodate it.
GitOps at insurance company State Farm, based in Bloomington, Ill., started with a three-person platform team formed in 2019. This team implemented GitOps pipeline templates and a developer integration interface for the company’s 7,000 developers across three distinct IT environments: Amazon EKS, on-premises Kubernetes and Cloud Foundry.
The team went through a trial and error process to establish their workflow as the platform grew, according to presenters at this week’s GitOpsCon.
“We were really overwhelmed – it was really hard for us to manage new tasks and also try to answer support questions all the time,” said Priyanka Ravi, software developer at State Farm and one of the original GitOps platform team members. “So we’ve come up with a system where we have a weekly support rotation, and one of us will be on call, monitoring a GitOps channel in Rocket.Chat.”
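A weekly rotation like the one Ravi describes can be made deterministic with very little code. This is a hypothetical sketch with placeholder names, not State Farm's tooling: the ISO week number selects who watches the support channel that week.

```python
# Hypothetical weekly on-call rotation: the ISO week number picks the
# team member monitoring the GitOps support channel (names invented).
from datetime import date

TEAM = ["alice", "bob", "carol"]  # placeholder team roster

def on_call(team: list[str], day: date) -> str:
    week = day.isocalendar()[1]  # ISO week number, 1..53
    return team[week % len(team)]

print(on_call(TEAM, date(2021, 6, 1)))
```

Because the assignment is a pure function of the calendar, everyone can compute who is on call without a shared schedule document.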
Members of the GitOps team also learned from a platform outage, in which they were not included in monitoring alerts, that they should work more closely with the Kubernetes leadership team, said Mae Large, head of architecture at State Farm.
“It was an aspect of maturity for us, to actually surface a lot of these logs and risky events that emanate from the Flux system… and to be alerted when things don’t behave as expected,” Large said. “Over time, we have improved with the metrics [through] Prometheus.”
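The idea of surfacing risky controller events can be illustrated with a small filter. The event shapes and reason strings here are invented for illustration, not Flux's actual schema: scan the events a GitOps controller emits and flag the ones that indicate failure.

```python
# Sketch of alerting on GitOps controller events (invented event data
# and reason strings; real controllers emit richer Kubernetes events).

FAILURE_REASONS = {"ReconciliationFailed", "HealthCheckFailed"}

def events_to_alert(events: list[dict]) -> list[dict]:
    """Keep only events whose reason indicates a failure."""
    return [e for e in events if e["reason"] in FAILURE_REASONS]

events = [
    {"object": "kustomization/app", "reason": "ReconciliationSucceeded"},
    {"object": "kustomization/db", "reason": "ReconciliationFailed"},
]
for alert in events_to_alert(events):
    print(f"ALERT {alert['object']}: {alert['reason']}")
```

In practice these signals would be exported as Prometheus metrics and routed through an alerting rule rather than a script, as Large describes.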
The observability dashboards created by the GitOps team from this process prompted greater use of the GitOps platform, Large said.
“Managers were really excited about the transparency: ‘I can actually see the files change, the actual lines of code change, to achieve this functionality,’” she said. “It empowers them and… gives them better assurance that it’s good for production.”
State Farm’s GitOps team also designed mechanisms to give risk management and compliance teams visibility into the platform as it matures.
“We have a handful of [scripts] that run on a scheduled pipeline, [and] one of those… we affectionately call the Enforcer,” Ravi said. “That one uses Terraform Enterprise and runs every night to make sure that [deployments] all still meet the compliance standards that have been established.”
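The shape of a nightly "Enforcer"-style check is straightforward, even though State Farm's actual implementation runs through Terraform Enterprise. This hypothetical sketch, with invented rules and field names, walks deployed workloads and reports any that no longer meet an established standard:

```python
# Hypothetical nightly compliance sweep: report workloads that drifted
# out of policy (the required-label rule and fields are invented).

REQUIRED_LABELS = {"owner", "cost-center"}

def violations(deployments: list[dict]) -> list[str]:
    """Return one human-readable line per non-compliant deployment."""
    bad = []
    for d in deployments:
        missing = REQUIRED_LABELS - d.get("labels", {}).keys()
        if missing:
            bad.append(f"{d['name']}: missing labels {sorted(missing)}")
    return bad

deployments = [
    {"name": "payments", "labels": {"owner": "team-a", "cost-center": "42"}},
    {"name": "legacy", "labels": {"owner": "team-b"}},
]
for line in violations(deployments):
    print(line)
```

Running such a check on a schedule, rather than only at deploy time, catches deployments that were compliant when merged but fall out of compliance as standards evolve.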
GitOps security guidelines are emerging
A common problem with the administration of the GitOps platform is IT security, especially the management of secrets. Most GitOps users rely on a secret management system like HashiCorp’s Vault because general security best practices require that secrets – data such as passwords and other system access credentials – should not be exposed in code repositories.
This goes against a purist definition of GitOps, in which the Git repository fully reflects the production environment, Microsoft’s Sanders said.
“[With] a management system like Vault… [you’re] not even storing the encrypted version [of secrets in Git]; you store the reference,” he said. “So what’s the source of truth there? There are a lot of things that I think are still under discussion.”
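The reference pattern Sanders describes can be sketched concretely. The `vault:` prefix, paths and resolver below are invented stand-ins, not Vault's API: the repository stores only a pointer to the secret, and the real value is resolved from the secrets manager at deploy time.

```python
# Sketch of secret references in a GitOps repo (invented scheme):
# Git holds `vault:<path>` pointers; values come from a secrets
# manager at deploy time. The dict below stands in for a real backend.

VAULT = {"db/password": "s3cr3t"}  # stand-in for a Vault-like store

def resolve(manifest: dict, vault: dict) -> dict:
    """Replace `vault:<path>` references with the actual secret value."""
    out = {}
    for key, value in manifest.items():
        if isinstance(value, str) and value.startswith("vault:"):
            out[key] = vault[value[len("vault:"):]]
        else:
            out[key] = value
    return out

# What lives in Git -- no secret material, only the reference:
manifest = {"user": "app", "password": "vault:db/password"}
print(resolve(manifest, VAULT)["password"])
```

This is exactly the tension Sanders raises: the repository is no longer the complete source of truth, because the secret's value lives outside it.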
Still, more experienced GitOps users, such as financial services software maker Intuit, which created Argo CD, were able to share some IT security guidelines with GitOpsCon attendees this week.
As with non-GitOps environments, defense in depth is good GitOps security practice, but should be tailored to the GitOps environment, said Todd Ekenstam, senior software engineer at Intuit, in a GitOpsCon presentation.
For example, the CI/CD pipeline becomes the central point of change management under GitOps, so securing access to that pipeline becomes crucial, Ekenstam said.
“The CI/CD pipeline defines your policies and standards for deployment to production; this is how you enforce your engineering process,” he said. “Within that framework, you really want to protect it. You don’t want this pipeline to be bypassed or compromised.”
This involves maintaining strict access control for the pipeline using short-lived credentials, Ekenstam said. Likewise, GitOps controllers such as Argo CD or Flux should operate on a least-privilege principle, with the same Kubernetes cluster permissions as developers, and be subject to their own audit logging process.
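The short-lived-credential idea reduces the window in which a stolen token is useful. A minimal sketch, with invented token fields rather than any real identity provider's format: every token carries an expiry, and the pipeline rejects anything past it.

```python
# Minimal sketch of short-lived pipeline credentials (invented token
# format): tokens expire, so a leaked one is only briefly usable.
import time

def issue_token(subject: str, ttl_seconds: int, now: float) -> dict:
    return {"sub": subject, "exp": now + ttl_seconds}

def is_valid(token: dict, now: float) -> bool:
    return now < token["exp"]

now = time.time()
token = issue_token("ci-pipeline", ttl_seconds=900, now=now)  # 15 minutes
print(is_valid(token, now))         # a fresh token is accepted
print(is_valid(token, now + 3600))  # an hour later it is rejected
```

Real systems delegate this to an identity provider and signed tokens (for example, OIDC), but the design principle is the same: credentials should expire faster than an attacker can profit from them.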
At the infrastructure level, GitOps administrators must secure container registries with scanning tools like Prisma Cloud or Aqua, and configure registries so that container image tags cannot be changed by a malicious actor, Ekenstam said. The Git repository where the code is stored should use branch protection rules that require code reviews before changes are merged into the main branch and automatically deployed to production. Finally, the production Kubernetes cluster should be hardened using policy-as-code tools such as OPA to prevent the deployment of potentially insecure container images.
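The registry and tag hardening Ekenstam describes can be expressed as a simple admission-style rule. The registry name and the "mutable tag" rule below are hypothetical, and real deployments would enforce this through an admission controller such as OPA rather than application code:

```python
# Sketch of an admission-style image policy (hypothetical registry and
# rules): reject images from unapproved registries or with mutable tags.

APPROVED_REGISTRY = "registry.example.com"
MUTABLE_TAGS = {"latest", ""}  # "" means no tag was given at all

def admit(image: str) -> bool:
    """Allow only pinned tags from the approved registry."""
    registry, _, rest = image.partition("/")
    _, _, tag = rest.partition(":")
    return registry == APPROVED_REGISTRY and tag not in MUTABLE_TAGS

print(admit("registry.example.com/app:v1.2.3"))  # pinned tag: allowed
print(admit("registry.example.com/app:latest"))  # mutable tag: rejected
print(admit("docker.io/evil/app:v1"))            # wrong registry: rejected
```

Pinning tags (or, stricter still, image digests) matters because a tag that can be re-pointed lets an attacker swap image contents without any change appearing in Git.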
This equates to a long list of considerations for GitOps security, but the GitOps approach can also have inherent security benefits, Ekenstam said.
“You have the option of having code reviews and approvals on changes … a second pair of eyes and an audit trail as part of your [deployment process],” he said.
Beth Pariseau, Senior Editor at TechTarget, is an award-winning veteran of computer journalism. She can be reached at [email protected] or on Twitter @PariseauTT.