ACM & ArgoCD for Teams

17 February 2023

Tags : openshift, argocd, acm, gitops

Quickly deploying ArgoCD ApplicationSets using RHACM’s Global ClusterSet

I have written about how we can align our technology and set up GitOps tooling so that it fits with our team structure.

How can we make these patterns real using tools like Red Hat Advanced Cluster Management (ACM) that help us deploy to a fleet of clusters? ACM supports Policy-based deployments, so we can track our clusters' compliance with the expected configuration management policy.

The source code is here - https://github.com/eformat/acm-gitops - git clone it so you can follow along.

Global ClusterSets

When a cluster is managed by ACM, several resources are created out of the box - you can read about them here in the documentation. This includes a namespace called open-cluster-management-global-set. We can quickly deploy ApplicationSets into this global namespace that generate Policy to create our team-based ArgoCD instances.

We can leverage the fact that ApplicationSets can be associated with a Placement - that way we can easily control where our Policy and team ArgoCD instances are deployed across our fleet of OpenShift clusters, for example by using simple label selectors.
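
As a rough sketch of what that association can look like - every name, repo URL and path below is a placeholder rather than taken from the article's repo - an ApplicationSet can consume ACM PlacementDecisions via ArgoCD's cluster decision resource generator:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: example-appset                              # placeholder name
  namespace: open-cluster-management-global-set
spec:
  generators:
    - clusterDecisionResource:
        # ACM publishes PlacementDecisions; the acm-placement ConfigMap tells
        # the generator which resource kind to track
        configMapRef: acm-placement
        labelSelector:
          matchLabels:
            cluster.open-cluster-management.io/placement: example-placement   # placeholder
        requeueAfterSeconds: 180
  template:
    metadata:
      name: 'example-{{name}}'                      # {{name}} is the managed cluster name
    spec:
      project: default
      source:
        repoURL: https://github.com/example/gitops.git   # placeholder
        targetRevision: main
        path: some/path                                  # placeholder
      destination:
        server: '{{server}}'                        # cluster API URL from the PlacementDecision
        namespace: open-cluster-management-global-set
      syncPolicy:
        automated: {}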

Bootstrap a Cluster Scoped ArgoCD for our Policies

We are going to bootstrap a cluster-scoped ArgoCD instance into the open-cluster-management-global-set namespace.

We will deploy our team ArgoCD instances using ACM Policy that is generated with the PolicyGenerator tool, which you can read about here in its reference file.

Make sure to label the clusters where you want to deploy with useglobal=true.
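
For example, assuming you are logged in to the ACM hub, a managed cluster can be labeled like this (the cluster name is a placeholder):

oc label managedcluster my-cluster useglobal=true

Then apply the bootstrap manifests: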

oc apply -f bootstrap-acm-global-gitops/setup.yaml

This deploys the following resources:

  • Subscription Resource - The GitOps operator Subscription, including disabling the default ArgoCD and enabling cluster-scoped connections for our namespaces - see the ARGOCD_CLUSTER_CONFIG_NAMESPACES env var that is part of the Subscription object. If your namespace is not added here, your ArgoCD will get namespace-scoped connections rather than cluster-scoped ones. A sketch of these settings appears after this list.

  • GitOpsCluster Resource - This resource provides the connection between the ArgoCD server and the Placement (exactly where to deploy the Applications).

  • Placement Resource - We use a Placement resource for this global ArgoCD which deploys to a fleet of clusters; the clusters need to be labeled with useglobal=true.

  • ArgoCD Resource - The CR for our global ArgoCD where we will deploy Policy. We configure ArgoCD to download the PolicyGenerator binary, and configure kustomize to run with the setting:

kustomizeBuildOptions: --enable-alpha-plugins
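
As a hedged sketch of those two settings - the manifests in bootstrap-acm-global-gitops/setup.yaml are authoritative, and the metadata below (names, namespace, channel) is illustrative only:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-gitops-operator   # illustrative namespace
spec:
  channel: latest
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
      - name: DISABLE_DEFAULT_ARGOCD_INSTANCE
        value: "true"
      - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES
        value: open-cluster-management-global-set
---
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: argocd                            # illustrative name
  namespace: open-cluster-management-global-set
spec:
  kustomizeBuildOptions: --enable-alpha-plugins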

Deploy the Team Based ArgoCD using Generated Policy

We are now going to deploy ArgoCD for two teams using the ACM PolicyGenerator.

The PolicyGenerator runs using kustomize. We specify the generator-input/ folder - that holds our YAML manifests for each ArgoCD - in this case one for fteam, one for zteam.

You can run the PolicyGenerator from the CLI to test it out before deploying - download it using the instructions here e.g.

kustomize build --enable-alpha-plugins team-gitops-policy/
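
For that build to work, the team-gitops-policy/ folder needs a kustomization.yaml that registers the PolicyGenerator configuration as a kustomize generator. A minimal sketch, assuming the generator configuration shown below is saved as policy-generator-config.yaml:

generators:
  - policy-generator-config.yaml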

We specify the Placement placement-team-argo, where the clusters need to be labeled with teamargo=true.
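
A minimal sketch of what such a Placement might look like - assuming it lives alongside the generated Policies in the global namespace, which is already bound to the global ClusterSet:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-team-argo
  namespace: open-cluster-management-global-set
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            teamargo: "true"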

We add some default compliance and control labels for grouping purposes in ACM Governance.

We also set pruneObjectBehavior: "DeleteAll" so that if we delete the ApplicationSet, the generated Policy is deleted and all of its objects are removed. For this to work, we must also set the remediationAction to enforce for our Policies.

One last configuration is to set the ArgoCD IgnoreExtraneous compare option - since the Policy objects are generated, we do not want ArgoCD to report them as out of sync.

apiVersion: policy.open-cluster-management.io/v1
kind: PolicyGenerator
metadata:
  name: argocd-teams
placementBindingDefaults:
  name: argocd-teams
policyDefaults:
  placement:
    placementName: placement-team-argo
  categories:
    - CM Configuration Management
  complianceType: "musthave"
  controls:
    - CM-2 Baseline Configuration
  consolidateManifests: false
  disabled: false
  namespace: open-cluster-management-global-set
  pruneObjectBehavior: "DeleteAll"
  remediationAction: enforce
  severity: medium
  standards:
    - generic
  policyAnnotations: {"argocd.argoproj.io/compare-options": "IgnoreExtraneous"}
policies:
  - name: team-gitops
    manifests:
      - path: generator-input/

Make sure to label the clusters where you want to deploy with teamargo=true.

To create our team ArgoCD instances, run:

oc apply -f applicationsets/team-argo-appset.yaml

To delete them, remove the AppSet:

oc delete appset team-argo

Summary

You can now take this pattern and deploy it across multiple clusters that are managed by ACM. You can easily scale out the number of team-based ArgoCD instances and have fine-grained control over their individual configuration, including third-party plugins like Vault. ACM offers a single pane of glass to check whether your clusters are compliant with the generated policies and, if not, to take remedial action.

You can see the code in action in this video.

🏅Enjoy !!


Patterns with ArgoCD - Vault

04 November 2022

Tags : argocd, gitops, patterns, vault, security

Team Collaboration with ArgoCD

I have written before about collaborating using GitOps, ArgoCD and the Red Hat GitOps Operator. How can we better align our deployments with our teams?

The array of patterns and the helm chart are described in a fair bit of detail here. I want to talk about using one of these patterns at scale - hundreds of apps across multiple clusters.

Platform Cluster ArgoCD, Tenant Team ArgoCDs

For Product Teams working in large organisations that have a central Platform Team, this pattern is probably the most natural, I think.

Figure - Platform ArgoCD, Namespaced ArgoCD per Team

It allows the platform team to control cluster-level and elevated privileges - activities like controlling namespaces, configuring cluster resources etc. - in their cluster-scoped ArgoCD, whilst Product Teams can control their namespaces independently of them.

  • The Red Hat GitOps Operator (cluster scoped)

  • Platform Team (cluster scoped) ArgoCD instance

  • Team (namespace scoped) ArgoCD instances

When doing multi-cluster, I usually prefer to have ArgoCD "in the cluster" rather than remotely controlling a cluster. This seems better from an availability / single point of failure point of view. Of course, if it's more of an edge use case, remote cluster connections may make sense.

OK, so the Tenant ArgoCD is deployed in namespaced mode and controls multiple namespaces belonging to a team. For each Team, a single ArgoCD instance per cluster normally suffices. You can scale up and shard the Argo controllers, or run in HA - not usually necessary at team scale (~100 apps) - see the ArgoCD docs if you need to do this though. There may be multiple non-production clusters - dev, test, qa etc. - and then you will have multiple production clusters (prod + DR etc.) - each cluster has its own ArgoCD instances per Team.

All of this is controlled via GitOps. A sensible code split is one git repo per team - so one for the Platform Team and one for each Product Team. I normally start with a mono repo and split later based on need or scale.

It is worth pointing out that any elevated cluster RBAC permissions needed by the Product Teams are granted via git PRs into the platform team's gitops repo. Once configured, the Tenant team is in control of their namespaces and can get on with managing their own products and tooling.

Secrets Management with ArgoCD Vault Plugin

To make this work at scale and in production within an organisation, the "batteries" for secrets management must be included! They are table stakes really. It’s fiddly, but worth the effort.

There are many ways to do secrets management beyond k8s secrets - KSOPS, External Secrets Operator etc. The method I want to talk about uses the ArgoCD Vault Plugin, which I will abbreviate to AVP. It supports multiple secret backends, by the way. In this case, I am going to use HashiCorp Vault and the Kubernetes auth method integration. Setting up Vault is dealt with separately, but it can be done on-cluster or off-cluster.

To get AVP working, you basically deploy the ArgoCD repo server with a ServiceAccount and use that service account's token secret to authenticate to HashiCorp Vault using the Kubernetes auth method. This way each ArgoCD instance uses the token associated with that service account to authenticate. Note that in OpenShift 4.11+, a service account token secret is no longer automatically generated when you create a new service account (SA).
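
A minimal sketch of those two pieces, assuming the repo server runs in a team-ci-cd namespace and the token secret name is our own choice:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-repo-vault
  namespace: team-ci-cd
---
# On OpenShift 4.11+ the token secret has to be created explicitly
apiVersion: v1
kind: Secret
metadata:
  name: argocd-repo-vault-token       # assumed name
  namespace: team-ci-cd
  annotations:
    kubernetes.io/service-account.name: argocd-repo-vault
type: kubernetes.io/service-account-token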

Once done, our app secrets can be easily referenced from within source code using either annotations:

kind: Secret
apiVersion: v1
metadata:
  name: example-secret
  annotations:
    avp.kubernetes.io/path: "path/to/app-secret"
type: Opaque
data:
  password: <password-vault-key>

or directly via the full path:

  password: <path:kv/data/path/to/app-secret#password-vault-key>

We can also reference the secrets directly from our ArgoCD Application definitions. Here is an example of using helm (kustomize and plain yaml are also supported).

  source:
    repoURL: https://github.com/eformat/my-gitrepo.git
    path: gitops/my-app/chart
    targetRevision: main
    plugin:
      name: argocd-vault-plugin-helm
      env:
        - name: HELM_VALUES
          value: |
            image:
              repository: image-registry.openshift-image-registry.svc:5000/my-namespace/my-app
              tag: "1.2.3"
            password: <path:kv/data/path/to/app-secret#password-vault-key>

I also use a pattern to pass the Vault annotation path down to the helm chart from the ArgoCD Application. To keep things clean (and you sane!) I normally have one Vault secret per application (containing many KV v2 key:value pairs).

    plugin:
      name: argocd-vault-plugin-helm
      env:
        - name: HELM_VALUES
          value: |
            resources:
              limits:
                cpu: 500m         # a non secret value
            avp:
              secretPath: "kv/data/path/to/app-secret"  # use this in the annotations

This allows you to control the path to your secrets in Vault, which can be configured by convention, e.g. kv/data/cluster/namespace/app.
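
The chart can then surface that value as the AVP annotation on its Secret template. A hypothetical templates/secret.yaml sketch (names are illustrative):

kind: Secret
apiVersion: v1
metadata:
  name: {{ .Release.Name }}-secret
  annotations:
    # the path comes from the avp.secretPath value set in the ArgoCD Application
    avp.kubernetes.io/path: "{{ .Values.avp.secretPath }}"
type: Opaque
data:
  password: <password-vault-key>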

ArgoCD Configuration - The Gory Details

OK, great. But how do I get there with my Team ArgoCD? Let's take an in-depth look at the argocd-values.yaml file you might pass into the gitops-operator helm chart to bootstrap your ArgoCD.

The important bit for AVP integration is to mount the token from a service account that we have created - in this case the service account is called argocd-repo-vault and we set mountsatoken to true.

Next, we use an initContainer to download the AVP go binary and save it to a custom-tools directory. If you are doing this disconnected, the binary needs to be made available offline.

argocd_cr:
  statusBadgeEnabled: true
  repo:
    mountsatoken: true
    serviceaccount: argocd-repo-vault
    volumes:
    - name: custom-tools
      emptyDir: {}
    initContainers:
    - name: download-tools
      image: registry.access.redhat.com/ubi8/ubi-minimal:latest
      command: [sh, -c]
      env:
        - name: AVP_VERSION
          value: "1.11.0"
      args:
        - >-
          curl -Lo /tmp/argocd-vault-plugin https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v${AVP_VERSION}/argocd-vault-plugin_${AVP_VERSION}_linux_amd64 && chmod +x /tmp/argocd-vault-plugin && mv /tmp/argocd-vault-plugin /custom-tools/
      volumeMounts:
      - mountPath: /custom-tools
        name: custom-tools
    volumeMounts:
    - mountPath: /usr/local/bin/argocd-vault-plugin
      name: custom-tools
      subPath: argocd-vault-plugin

We need to create the glue between our ArgoCD Applications and how they call the AVP binary. This is done using the configManagementPlugins stanza. Note we use three methods: one for plain YAML, one for helm charts, and one for kustomize. The plugin name: is what we reference from our ArgoCD Application.

  configManagementPlugins: |
    - name: argocd-vault-plugin
      generate:
        command: ["sh", "-c"]
        args: ["argocd-vault-plugin -s team-ci-cd:team-avp-credentials generate ./"]
    - name: argocd-vault-plugin-helm
      init:
        command: [sh, -c]
        args: ["helm dependency build"]
      generate:
        command: ["bash", "-c"]
        args: ['helm template "$ARGOCD_APP_NAME" -n "$ARGOCD_APP_NAMESPACE" -f <(echo "$ARGOCD_ENV_HELM_VALUES") . | argocd-vault-plugin generate -s team-ci-cd:team-avp-credentials -']
    - name: argocd-vault-plugin-kustomize
      generate:
        command: ["sh", "-c"]
        args: ["kustomize build . | argocd-vault-plugin -s team-ci-cd:team-avp-credentials generate -"]

We make use of environment variables set within the AVP plugin for helm so that the namespace and helm values from the ArgoCD Application are set correctly. See the AVP documentation for full details of usage.

One thing to note is the team-ci-cd:team-avp-credentials secret. This specifies how the AVP binary connects and authenticates to HashiCorp Vault. It is a secret that you need to set up. An example follows for a simple HashiCorp Vault in-cluster deployment:

export AVP_TYPE=vault
export VAULT_ADDR=https://vault-active.hashicorp.svc:8200   # vault url
export AVP_AUTH_TYPE=k8s                              # kubernetes auth
export AVP_K8S_ROLE=argocd-repo-vault                 # vault role (service account name)
export VAULT_SKIP_VERIFY=true
export AVP_MOUNT_PATH=auth/$BASE_DOMAIN-$PROJECT_NAME

cat <<EOF | oc apply -n ${PROJECT_NAME} -f-
---
apiVersion: v1
stringData:
  VAULT_ADDR: "${VAULT_ADDR}"
  VAULT_SKIP_VERIFY: "${VAULT_SKIP_VERIFY}"
  AVP_AUTH_TYPE: "${AVP_AUTH_TYPE}"
  AVP_K8S_ROLE: "${AVP_K8S_ROLE}"
  AVP_TYPE: "${AVP_TYPE}"
  AVP_K8S_MOUNT_PATH: "${AVP_MOUNT_PATH}"
kind: Secret
metadata:
  name: team-avp-credentials
  namespace: ${PROJECT_NAME}
type: Opaque
EOF

I am leaving out the gory details of the Vault/ACL setup, which are documented elsewhere; however, to configure the Kubernetes auth method in Vault with the argocd-repo-vault ServiceAccount token, I use this shell script:

export SA_TOKEN=$(oc -n ${PROJECT_NAME} get sa/${APP_NAME} -o yaml | grep ${APP_NAME}-token | awk '{print $3}')
export SA_JWT_TOKEN=$(oc -n ${PROJECT_NAME} get secret $SA_TOKEN -o jsonpath="{.data.token}" | base64 --decode; echo)
export SA_CA_CRT=$(oc -n ${PROJECT_NAME} get secret $SA_TOKEN -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)

vault write auth/$BASE_DOMAIN-${PROJECT_NAME}/config \
  token_reviewer_jwt="$SA_JWT_TOKEN" \
  kubernetes_host="$(oc whoami --show-server)" \
  kubernetes_ca_cert="$SA_CA_CRT"
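
For completeness, a minimal sketch of the pieces left out above - enabling the dedicated Kubernetes auth mount (run before the config write above) and creating the role that AVP_K8S_ROLE points at. The team-kv-read policy name is hypothetical:

vault auth enable -path=$BASE_DOMAIN-${PROJECT_NAME} kubernetes

vault write auth/$BASE_DOMAIN-${PROJECT_NAME}/role/argocd-repo-vault \
  bound_service_account_names=argocd-repo-vault \
  bound_service_account_namespaces=${PROJECT_NAME} \
  policies=team-kv-read \
  ttl=1h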

Why Do All of This?

The benefits of all this gory configuration:

  • we can now store secrets safely in a backend vault at enterprise scale

  • we have all of our ArgoCD instances use these secrets consistently with GitOps in a multi-tenanted manner

  • we keep secrets values out of our source code

  • we can control all of this with gitops

It also means that the platform and product teams can manage secrets in a safe, consistent manner - but separately, i.e. each team manages their own secrets and space in Vault. This method also works if you are using enterprise HashiCorp Vault, which supports namespaces - you can just set the env var in your ArgoCD Application like so:

    plugin:
      name: argocd-vault-plugin-kustomize
      env:
        - name: VAULT_NAMESPACE
          value: "my-team-apps"

Tenant teams are now fully in control of their namespaces and secrets and can get on with managing their own applications, products and tools!
