Building a Harbor Operator with Crossplane and Terraform: Unlocking Kubernetes Potential Without Programming
- Mohamed Rafraf
- Dec 15, 2024
- 20 min read
Introduction
What if you could manage complex systems on top of Kubernetes, like a Harbor registry, using only Kubernetes objects—without having to build operators from scratch? No coding, no hassle—just simple, declarative configurations.
With Crossplane and Terraform, that’s exactly what you can do. By combining these powerful tools, you can automate and orchestrate complex infrastructure on Kubernetes without writing a single line of operator code.
In this post, I’ll walk you through creating a Harbor operator using Crossplane and Terraform, showing you how to manage and scale your infrastructure effortlessly, all through Kubernetes-native resources.
The Power of Crossplane and Terraform Together
What is Crossplane?
Crossplane extends Kubernetes, allowing you to manage not just containerized apps but any external resources—databases, cloud services, and beyond—using Kubernetes’ declarative model. It uses providers to interact with different systems, meaning you can manage anything from cloud infrastructure to on-prem services as Kubernetes resources, all without custom code.
What is Terraform?
Terraform is an Infrastructure-as-Code tool that lets you define and manage resources using configuration files. Like Crossplane, it uses providers to interact with external systems, allowing you to manage anything—cloud, on-prem, or even SaaS platforms. It tracks the state of your infrastructure, ensuring changes are safely applied and automatically managed.
Why Combine Them?
Crossplane and Terraform together create a powerful combination for managing infrastructure on Kubernetes. By merging Crossplane’s Kubernetes-native CRDs with Terraform’s extensive provider ecosystem and state management, you unlock the ability to manage even the most complex external resources through Kubernetes, without writing custom operators.
Here’s why they complement each other:
Kubernetes-Native Management with Crossplane: Crossplane allows you to define external resources (like databases, storage, or cloud services) as Kubernetes objects via custom CRDs. This provides a seamless way to control everything from your Kubernetes cluster.
Terraform’s Vast Provider Ecosystem: Terraform offers thousands of providers, far more than Crossplane currently supports. When Crossplane doesn’t have a provider for a specific resource, an existing Terraform provider can step in to manage external systems that don’t yet have native Crossplane support.
Advanced Use Cases Beyond Operators: Sometimes existing operators are not flexible enough for advanced use cases. Combining Terraform’s provider support with Crossplane’s declarative model gives you full control to automate and manage resources that would otherwise require custom operators.
This means you can:
Fill Gaps in Operator Support: When Crossplane lacks a specific provider for a technology, Terraform’s huge provider base can bridge that gap.
Automate Infrastructure Without Custom Code: Use Kubernetes-native tools to manage even unsupported external resources without having to code operators yourself.
Leverage the Best of Both Worlds: Crossplane’s Kubernetes-native automation with Terraform’s robust provider ecosystem and proven state management gives you unmatched flexibility.
By combining these tools, you truly harness the potential of Kubernetes as a control plane for both applications and infrastructure, regardless of the underlying technology or cloud provider.
Harbor and Why We Need a Custom Operator
What is Harbor?
Harbor is an open-source container registry that helps secure and manage container images, charts, and artifacts with features like image scanning, vulnerability assessments, and RBAC. It’s essential for managing containerized workloads, but scaling and automating Harbor across environments manually is complex.
Why Existing Operators Fall Short
The official Harbor operator hasn’t been maintained for a while, with outdated versions and limited features. Additionally, the Mittwald operator, while useful, doesn’t support key resources like user management, replication rules, or project configurations—leaving critical gaps in automation.
Why Combine Crossplane and Terraform for a Harbor Operator?
By combining Crossplane and Terraform, you can create a custom Harbor operator that overcomes these limitations. Here’s why it works:
Full Resource Management: Manage Harbor’s users, projects, registries, and replication rules through Kubernetes CRDs.
Terraform’s Extensive Provider Support: Use Terraform’s mature Harbor provider to handle complex Harbor configurations and infrastructure, which Crossplane alone can’t fully cover.
Kubernetes-Native Automation: Define and manage Harbor entirely with Kubernetes objects, integrating seamlessly with GitOps and IaC workflows.
This approach gives you a cloud-agnostic, scalable Harbor solution where all configurations—users, projects, policies—are managed declaratively, making automation, version control, and continuous delivery simple and powerful.
What We Want to Achieve from the Harbor Operator
With the Harbor operator, our goal is to manage all aspects of Harbor—users, projects, registries, replication rules, and robot accounts—entirely through Kubernetes objects. This will allow us to seamlessly integrate Harbor management with Infrastructure-as-Code (IaC) and GitOps practices, making it easy to automate, version control, and continuously manage Harbor resources in a declarative way.
By using Crossplane and Terraform, we aim to:
Automate User Management: Create and manage Harbor users through Kubernetes manifests.
Control Registries and Projects: Define Harbor projects and configure external registries using CRDs.
Set Up Replication Rules: Automatically manage replication rules for syncing images between multiple registries.
Manage Robot Accounts: Use Kubernetes objects to handle Harbor robot accounts for CI/CD automation.
Examples of Kubernetes Manifests for Harbor Resources:
User Manifest
apiVersion: harbor.kubenoops.com/v1alpha1
kind: HarborUser
metadata:
name: my-harbor-user
spec:
username: omar
admin: true
email: "omar@gmail.com"
fullname: "omar sanchez"
password: "Password12345"
Registry Manifest
apiVersion: harbor.kubenoops.com/v1alpha1
kind: HarborRegistry
metadata:
name: example-harbor-registry
spec:
providerName: docker-hub
name: example-registry
endpointURL: https://registry.hub.docker.com
insecure: false
Project Manifest
apiVersion: harbor.kubenoops.com/v1alpha1
kind: HarborProject
metadata:
name: my-harbor-project1
spec:
projectName: crossplane1
public: true
vulnerabilityScanning: true
enableContentTrust: true
enableContentTrustCosign: false
autoSbomGeneration: true
Replication Rule Manifest
apiVersion: harbor.kubenoops.com/v1alpha1
kind: HarborReplication
metadata:
name: example-hrp
namespace: default
spec:
registryName: my-destination-registry
action: push
schedule: "0 0 * * *" # Runs daily at midnight
name: my-replication-rule
Robot Account Manifest
apiVersion: harbor.kubenoops.com/v1alpha1
kind: HarborRobotAccount
metadata:
name: harbor-robot
spec:
level: system
secret: <secret>
name: my-robot
permission:
kind: system
namespace: /
access:
action: push
resource: repository
These examples demonstrate how we can declaratively manage every aspect of Harbor using Kubernetes objects. Whether it's creating a new user, setting up a registry, or automating image replication, everything is handled with simple YAML manifests—perfect for GitOps and IaC workflows. This approach ensures that Harbor resources are easy to automate, version, and scale.
Terraform Provider in Crossplane
The Terraform Provider in Crossplane lets you manage Terraform resources through Kubernetes, combining the strengths of both tools. Here’s how it works:
ProviderConfig
The ProviderConfig stores the credentials and backend configuration needed to run Terraform. It specifies where Terraform stores its state, like using a Kubernetes secret to securely store state files. This ensures seamless integration with your Kubernetes cluster.
Example of a Harbor ProviderConfig:
apiVersion: tf.upbound.io/v1beta1
kind: ProviderConfig
metadata:
name: harbor-tf
spec:
configuration: |
terraform {
backend "kubernetes" {
secret_suffix = "providerconfig-default"
namespace = "consultants-rafraf"
in_cluster_config = true
}
required_providers {
harbor = {
source = "goharbor/harbor"
version = "3.10.15"
}
}
}
provider "harbor" {
url      = "https://harbor.kubenoops.com"
username = "admin"
password = "xxxxxxxxxxxxxxxxxxx"
}
Storing Terraform State
With the Kubernetes backend, Terraform state files are stored in Kubernetes secrets, allowing state to be managed securely inside your cluster.
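For example, after a Workspace has run, you can inspect the state secret that the Kubernetes backend creates. With the default Terraform workspace and the secret_suffix configured above, the secret name typically looks like the one below (adjust the namespace to match your ProviderConfig):
kubectl get secrets -n consultants-rafraf | grep tfstate
kubectl get secret tfstate-default-providerconfig-default -n consultants-rafraf -o yaml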
Workspace: Executing Terraform
A Workspace resource defines the Terraform module or inline configuration and triggers Terraform to execute the resource creation. You can store the results of Terraform executions in Kubernetes secrets for easy integration with other resources.
Example of a Workspace creating a Harbor Project:
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
name: harbor-project
spec:
providerConfigRef:
name: harbor-tf
forProvider:
env:
- name: TF_VAR_projectName
value: crossplane-0
module: |
variable "projectName" {
description = "Project Name"
type = string
}
resource "harbor_project" "main" {
name = var.projectName
public = true
vulnerability_scanning = true
enable_content_trust = true
enable_content_trust_cosign = false
auto_sbom_generation = true
}
source: Inline
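After applying the Workspace, you can watch Crossplane reconcile it and inspect any Terraform errors in its conditions:
kubectl get workspace harbor-project
kubectl describe workspace harbor-project
If you want the Terraform outputs available to other resources, the Workspace can also publish them as a connection secret. A minimal sketch of the extra field, which sits alongside providerConfigRef and forProvider in the Workspace spec (the secret name and namespace here are illustrative):
  writeConnectionSecretToRef:
    name: harbor-project-outputs
    namespace: crossplane-system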
Key Benefits
Declarative Management: Manage Terraform resources as Kubernetes objects.
State in Kubernetes: Securely store Terraform state within the cluster.
Seamless GitOps Integration: Use CRDs to manage infrastructure and easily integrate with GitOps workflows.
By using the Terraform provider in Crossplane, you can automate complex infrastructure management across clouds, all while staying within the Kubernetes ecosystem.
How Can We Benefit from the Terraform Provider?
By combining Crossplane Compositions and Composite Resource Claims with the Terraform provider, we can create powerful, high-level CRDs to manage Harbor resources like projects, users, and replication rules directly through Kubernetes.
Composition allows us to bundle multiple resources (such as Harbor projects, users, and replication rules) into a single, reusable abstraction. This means users can manage complex Harbor setups without needing to interact with Terraform or Harbor’s API directly.
A Composite Resource Claim further simplifies this process by allowing users to request Harbor resources via simple Kubernetes manifests. Behind the scenes, Crossplane handles the infrastructure management using Terraform, ensuring resources are provisioned and reconciled automatically.
This approach provides:
Declarative Harbor management through Kubernetes-native objects.
Seamless integration with GitOps and IaC, enabling automated, version-controlled Harbor resource management.
Simplified user experience, abstracting away Terraform complexities while leveraging its power.
Setting the Stage – Your Kubernetes Cluster
In this section, we'll assume you already have a Kubernetes cluster running, complete with an Ingress controller (such as NGINX or Traefik) for routing traffic to your services. With that in place, we’ll walk through the steps to install Harbor and Crossplane along with its Terraform provider to prepare your environment for managing Harbor resources declaratively.
1. Ingress Controller Setup
Ensure that your Ingress controller is running in your Kubernetes cluster, as it will be used to expose Harbor and other services to the outside world. If not already installed, you can set it up using Helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace
Verify the Ingress controller is up and running:
kubectl get pods -n ingress-nginx
2. Install Harbor
When deploying Harbor on your Kubernetes cluster, you can customize the deployment to fit your specific needs by modifying the values.yaml file. This allows you to control how Harbor is exposed, configure TLS, set up admin credentials, and more.
Download the values.yaml for Customization
To get started, you need to download the default values.yaml file that comes with the Harbor Helm chart. This file contains all the default configurations, which you can modify based on your requirements.
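If you have not added the Harbor chart repository yet, add it first (the official chart repository is hosted at https://helm.goharbor.io):
helm repo add harbor https://helm.goharbor.io
helm repo update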
To download the file:
helm show values harbor/harbor > values.yaml
This command fetches the default configuration from the Harbor Helm repository and saves it locally as values.yaml.
Modify the values.yaml for Ingress and Exposing Harbor
Open the values.yaml file in your preferred editor and modify the following sections to expose Harbor using Ingress and configure your domain:
expose:
type: ingress # Use Ingress to expose Harbor
tls:
enabled: false # Disable TLS for now (can be enabled if using certificates)
certSource: auto # Automatically generate certificates (can be set to 'secret' if you have your own)
ingress:
hosts:
core: harbor-demo.kubenoops.com # Your Harbor domain
controller: default # Use the default Ingress controller (e.g., NGINX)
className: "nginx" # Ensure this matches the Ingress class of your controller
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
cert-manager.io/cluster-issuer: letsencrypt-prod # If using Cert Manager for TLS certificates
harborAdminPassword: "Harbor12345" # Set the Harbor admin password
Since Harbor will be exposed via Ingress, you need to ensure the DNS for your domain (harbor-demo.kubenoops.com) points to the external IP of your Ingress controller. This can be done by creating an A or CNAME record with your domain registrar or DNS provider.
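To find that external IP, check the Service created by the Ingress controller (the service name below assumes the NGINX Helm install from step 1):
kubectl get svc -n ingress-nginx ingress-nginx-controller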
Deploy Harbor
Once you've modified values.yaml, deploy Harbor using Helm with the updated configuration:
helm install harbor harbor/harbor -f values.yaml --namespace harbor --create-namespace
This command installs Harbor with your customized configuration, including the Ingress setup and admin credentials.
By default, the Harbor Helm chart deploys several additional components within your Kubernetes cluster, such as:
PostgreSQL: The database for storing Harbor metadata.
Redis: Used for caching and session management.
These components are managed within the same Kubernetes namespace (harbor) as Harbor itself. If you prefer to use external database or cache services, you can modify the values.yaml to point to those external resources instead.
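Once the chart is installed, confirm that the Harbor components (core, portal, registry, jobservice, database, redis) are running, then log in at https://harbor-demo.kubenoops.com with the admin credentials from values.yaml:
kubectl get pods -n harbor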
3. Install Crossplane
To install Crossplane in your Kubernetes cluster using Helm, follow these steps:
Add the Crossplane Helm repository:
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
Install Crossplane:
helm install crossplane --namespace crossplane-system --create-namespace crossplane-stable/crossplane
Verify installation:
kubectl get pods -n crossplane-system
For more detailed instructions, visit Crossplane Installation Documentation.
4. Install the Terraform Provider for Crossplane
To install the official Terraform provider for Crossplane, follow these steps:
Install the Terraform Provider: Create the following YAML configuration to install the provider and save it as provider-terraform.yaml:
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-terraform
spec:
  package: xpkg.upbound.io/upbound/provider-terraform:v0.18.0
Apply this configuration with:
kubectl apply -f provider-terraform.yaml
Verify the installation: Check the provider status with:
kubectl get providers
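To see more detail on the provider’s health (its Installed and Healthy conditions), describe it:
kubectl describe providers.pkg.crossplane.io provider-terraform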
For more details, refer to Upbound Terraform Provider Documentation.
5. Configure the ProviderConfig for the Harbor Terraform Provider
To configure credentials for managing Harbor resources via Terraform in Crossplane, you need to define a ProviderConfig for the Harbor Terraform provider. This configuration sets up the backend for Terraform state storage within Kubernetes and authenticates with Harbor.
Here’s the configuration for the Harbor provider:
apiVersion: tf.upbound.io/v1beta1
kind: ProviderConfig
metadata:
name: harbor-tf
spec:
configuration: |
terraform {
backend "kubernetes" {
secret_suffix = "providerconfig-default"
namespace = "harbor"
in_cluster_config = true
}
required_providers {
harbor = {
source = "goharbor/harbor"
version = "3.10.15"
}
}
}
provider "harbor" {
url = "http://harbor-demo.kubenoops.com"
username = "admin"
password = "Harbor12345"
}
Explanation:
Terraform Backend: This stores the Terraform state within Kubernetes, in the harbor namespace.
Harbor Provider: The provider is configured with the necessary credentials (username, password, and URL) to authenticate with Harbor.
Required Provider: Specifies the version of the Harbor provider to use for Terraform.
This setup will allow you to manage Harbor resources directly through Kubernetes using Terraform as the underlying provider.
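Save the manifest (for example as harbor-providerconfig.yaml, an illustrative filename) and apply it, then confirm it was created:
kubectl apply -f harbor-providerconfig.yaml
kubectl get providerconfigs.tf.upbound.io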
Demo: Creating Harbor CRDs with Crossplane and Terraform
This demo shows how to manage Harbor projects using Crossplane and Terraform. By creating Custom Resource Definitions (CRDs) and using Crossplane’s Composition, we automate Harbor resource management.
1. Harbor Project
The Harbor Project is a fundamental building block in Harbor, where you manage and store container images and other artifacts. Using Crossplane, we can create a HarborProject CRD that integrates with Terraform to automate the creation of projects in Harbor.
1.1 Composite Resource Definition (XRD)
The XRD defines the structure of the HarborProject CRD, allowing users to configure:
Project Name
Public Status
Vulnerability Scanning
Content Trust Options
SBOM Generation
Here is the Composite Resource Definition for HarborProject:
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: compositeharborprojects.kubenoops.com
spec:
group: kubenoops.com
names:
kind: CompositeHarborProject
plural: compositeharborprojects
claimNames:
kind: HarborProject
plural: harborprojects
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
projectName:
type: string
public:
type: boolean
vulnerabilityScanning:
type: boolean
enableContentTrust:
type: boolean
enableContentTrustCosign:
type: boolean
autoSbomGeneration:
type: boolean
required:
- projectName
- public
- vulnerabilityScanning
- enableContentTrust
- enableContentTrustCosign
- autoSbomGeneration
The HarborProject XRD defines the custom resource in Kubernetes, giving users a Kubernetes-native way to configure Harbor projects through the HarborProject claim.
1.2 Composition for Terraform Automation
The Composition translates user input into a Terraform Workspace, which handles project creation in Harbor using the Harbor provider.
Example claim for a Harbor project:
apiVersion: kubenoops.com/v1alpha1
kind: HarborProject
metadata:
name: my-harbor-project
spec:
projectName: "example-project"
public: true
vulnerabilityScanning: true
enableContentTrust: true
enableContentTrustCosign: true
autoSbomGeneration: true
Here is the Composition manifest responsible for managing HarborProject resources:
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: harborproject
spec:
compositeTypeRef:
apiVersion: kubenoops.com/v1alpha1
kind: CompositeHarborProject
resources:
- name: harborProject
base:
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
spec:
providerConfigRef:
name: harbor-tf
forProvider:
source: Inline
env:
- name: TF_VAR_projectName
value: "spec.projectName"
- name: TF_VAR_public
value: "spec.public"
- name: TF_VAR_vulnerabilityScanning
value: "spec.vulnerabilityScanning"
- name: TF_VAR_enableContentTrust
value: "spec.enableContentTrust"
- name: TF_VAR_enableContentTrustCosign
value: "spec.enableContentTrustCosign"
- name: TF_VAR_autoSbomGeneration
value: "spec.autoSbomGeneration"
module: |
variable "projectName" {
description = "Project Name"
type = string
}
variable "public" {
description = "Public Project"
type = bool
default = true
}
variable "vulnerabilityScanning" {
description = "Vulnerability Scanning"
type = bool
default = false
}
variable "enableContentTrust" {
description = "Enable Content Trust"
type = bool
}
variable "enableContentTrustCosign" {
description = "Enable Content Trust Cosign"
type = bool
}
variable "autoSbomGeneration" {
description = "Auto SBOM Generation"
type = bool
}
resource "harbor_project" "main" {
name = var.projectName
public = var.public
vulnerability_scanning = var.vulnerabilityScanning
enable_content_trust = var.enableContentTrust
enable_content_trust_cosign = var.enableContentTrustCosign
auto_sbom_generation = var.autoSbomGeneration
}
output "project_id" {
value = harbor_project.main.id
sensitive = false
}
patches:
- type: FromCompositeFieldPath
fromFieldPath: "spec.projectName"
toFieldPath: "spec.forProvider.env[0].value"
# Boolean fields are converted to strings so they can be written into the env values
- type: FromCompositeFieldPath
fromFieldPath: "spec.public"
toFieldPath: "spec.forProvider.env[1].value"
transforms:
- type: convert
convert:
toType: "string"
- type: FromCompositeFieldPath
fromFieldPath: "spec.vulnerabilityScanning"
toFieldPath: "spec.forProvider.env[2].value"
transforms:
- type: convert
convert:
toType: "string"
- type: FromCompositeFieldPath
fromFieldPath: "spec.enableContentTrust"
toFieldPath: "spec.forProvider.env[3].value"
transforms:
- type: convert
convert:
toType: "string"
- type: FromCompositeFieldPath
fromFieldPath: "spec.enableContentTrustCosign"
toFieldPath: "spec.forProvider.env[4].value"
transforms:
- type: convert
convert:
toType: "string"
- type: FromCompositeFieldPath
fromFieldPath: "spec.autoSbomGeneration"
toFieldPath: "spec.forProvider.env[5].value"
transforms:
- type: convert
convert:
toType: "string"
- type: ToCompositeFieldPath
fromFieldPath: "status.atProvider.outputs.project_id"
toFieldPath: "metadata.annotations['project_id']"
The Composition maps the CRD fields to a Terraform Workspace, which uses the Harbor provider to create or modify the actual Harbor project. The Composition includes:
Environment Variables: Maps CRD spec fields (e.g., projectName, public) to Terraform variables (TF_VAR_projectName), allowing Terraform to create projects based on the user's input.
Patches: Ensures that the fields provided in the CRD are transformed and passed to Terraform correctly. The status of the project, such as its project_id, is also patched back to the CRD, allowing for a complete feedback loop between Crossplane and Terraform.
This setup automates Harbor project creation and management, enabling users to define their project resources declaratively through Kubernetes while Crossplane and Terraform handle the underlying infrastructure.
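To try this end to end, apply the XRD, the Composition, and then the claim, and watch both the claim and its backing Terraform Workspace become ready (the filenames here are illustrative):
kubectl apply -f harborproject-xrd.yaml
kubectl apply -f harborproject-composition.yaml
kubectl apply -f my-harbor-project.yaml
kubectl get harborprojects
kubectl get workspaces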
2. Harbor User
This section demonstrates how to manage Harbor Users using Crossplane and Terraform. We'll define a HarborUser CRD and automate its creation through Crossplane Composition.
The Harbor User resource manages user accounts in Harbor. We will create a HarborUser CRD that allows users to define the following:
Username
Password
Email
Full Name
Admin Status
2.1 Composite Resource Definition (XRD)
This XRD defines the structure of the HarborUser CRD, allowing users to create user accounts in Harbor with custom configurations.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: compositeharborusers.kubenoops.com
spec:
group: kubenoops.com
names:
kind: CompositeHarborUser
plural: compositeharborusers
shortNames: ["hu"]
claimNames:
kind: HarborUser
plural: harborusers
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
username:
type: string
password:
type: string
email:
type: string
fullname:
type: string
admin:
type: boolean
required:
- username
- password
- email
- fullname
- admin
The HarborUser XRD defines the custom resource in Kubernetes, enabling users to create and manage Harbor user accounts.
2.2 Composition for Terraform Automation
The Composition for Harbor Users translates the CRD into a Terraform Workspace. This handles the actual creation of the user in Harbor using the Harbor provider.
Example claim for a Harbor user:
apiVersion: kubenoops.com/v1alpha1
kind: HarborUser
metadata:
name: my-harbor-user
spec:
username: "johndoe"
password: "strongpassword"
email: "johndoe@kubenoops.com"
fullname: "John Doe"
admin: false
This is the Composition manifest for managing Harbor Users:
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: harboruser-composition
spec:
compositeTypeRef:
apiVersion: kubenoops.com/v1alpha1
kind: CompositeHarborUser
resources:
- name: harboruser
base:
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
spec:
providerConfigRef:
name: harbor-tf
forProvider:
source: Inline
env:
- name: TF_VAR_username
value: "spec.username"
- name: TF_VAR_password
value: "spec.password"
- name: TF_VAR_fullname
value: "spec.fullname"
- name: TF_VAR_email
value: "spec.email"
- name: TF_VAR_admin
value: "spec.admin"
module: |
variable "username" {
description = "Username"
type = string
}
variable "password" {
description = "Password"
type = string
}
variable "fullname" {
description = "Full Name"
type = string
}
variable "email" {
description = "Email"
type = string
}
variable "admin" {
description = "Admin User"
type = bool
}
resource "harbor_user" "main" {
username = var.username
password = var.password
full_name = var.fullname
email = var.email
admin = var.admin
}
patches:
- type: FromCompositeFieldPath
fromFieldPath: "spec.username"
toFieldPath: "spec.forProvider.env[0].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.password"
toFieldPath: "spec.forProvider.env[1].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.fullname"
toFieldPath: "spec.forProvider.env[2].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.email"
toFieldPath: "spec.forProvider.env[3].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.admin"
toFieldPath: "spec.forProvider.env[4].value"
transforms:
- type: convert
convert:
toType: "string"
Summary
The HarborUser XRD defines the schema for user management.
The Composition links the XRD to a Terraform Workspace to automate the creation of Harbor users.
The user claim defines key properties like username, password, email, and admin status, while Terraform handles the infrastructure management.
This approach allows you to manage Harbor users declaratively using Kubernetes while leveraging Terraform's automation capabilities.
3. Harbor Registry
In this section, we will automate the management of Harbor Registries using Crossplane and Terraform. A HarborRegistry represents external registries that Harbor interacts with for pulling and pushing images.
The Harbor Registry CRD will allow users to configure:
Provider Name: Registry provider (e.g., AWS, DockerHub, GCP).
Registry Name
Endpoint URL
Authentication: Access ID and secret.
Insecure Flag: Allow insecure connections.
3.1 Composite Resource Definition (XRD)
The XRD defines the structure for managing Harbor Registries.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: compositeharborregistries.kubenoops.com
spec:
group: kubenoops.com
names:
kind: CompositeHarborRegistry
plural: compositeharborregistries
shortNames: ["hr"]
claimNames:
kind: HarborRegistry
plural: harborregistries
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
providerName:
type: string
description: "The name of the provider for the Harbor registry."
enum:
- "alibaba"
- "aws"
- "azure"
- "docker-hub"
- "google"
- "harbor"
name:
type: string
description: "The name of the Harbor registry."
endpointURL:
type: string
description: "The endpoint URL for the Harbor registry."
accessId:
type: string
description: "The access ID for the Harbor registry, used for authentication."
accessSecret:
type: string
description: "The secret access key for the Harbor registry, used for authentication."
insecure:
type: boolean
description: "Allow insecure connections to the Harbor registry."
default: false
required:
- providerName
- name
- endpointURL
- accessId
- accessSecret
This XRD defines the custom resource for managing registries in Harbor, including details like the provider name, endpoint, and authentication.
3.2 Composition for Terraform Automation
The Composition translates the CRD into a Terraform Workspace for managing Harbor registries.
Example claim for a Harbor registry:
apiVersion: kubenoops.com/v1alpha1
kind: HarborRegistry
metadata:
name: my-harbor-registry
spec:
providerName: "docker-hub"
name: "dockerhub-registry"
endpointURL: "https://hub.docker.com"
accessId: "my-access-id"
accessSecret: "my-secret-key"
insecure: false
Here’s the Composition manifest for managing Harbor Registries:
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: harborregistry-composition
spec:
compositeTypeRef:
apiVersion: kubenoops.com/v1alpha1
kind: CompositeHarborRegistry
resources:
- name: harborregistry
base:
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
spec:
providerConfigRef:
name: harbor-tf
forProvider:
source: Inline
env:
- name: TF_VAR_providerName
value: "spec.providerName"
- name: TF_VAR_name
value: "spec.name"
- name: TF_VAR_endpointURL
value: "spec.endpointURL"
- name: TF_VAR_insecure
value: "spec.insecure"
- name: TF_VAR_access
value: "spec.accessId"
- name: TF_VAR_secret
value: "spec.accessSecret"
module: |
variable "providerName" {
description = "The name of the provider for the Harbor registry."
type = string
}
variable "name" {
description = "The name of the Harbor registry."
type = string
}
variable "endpointURL" {
description = "The endpoint URL for the Harbor registry."
type = string
}
variable "insecure" {
description = "Allow insecure connections to the Harbor registry."
type = bool
default = false
}
variable "access" {
description = "The access ID for the Harbor registry."
type = string
}
variable "secret" {
description = "The secret access key for the Harbor registry."
type = string
}
resource "harbor_registry" "main" {
name = var.name
provider_name = var.providerName
endpoint_url = var.endpointURL
insecure = var.insecure
access_id = var.access
access_secret = var.secret
}
patches:
- type: FromCompositeFieldPath
fromFieldPath: "spec.providerName"
toFieldPath: "spec.forProvider.env[0].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.name"
toFieldPath: "spec.forProvider.env[1].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.endpointURL"
toFieldPath: "spec.forProvider.env[2].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.insecure"
toFieldPath: "spec.forProvider.env[3].value"
transforms:
- type: convert
convert:
toType: "string"
- type: FromCompositeFieldPath
fromFieldPath: "spec.accessId"
toFieldPath: "spec.forProvider.env[4].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.accessSecret"
toFieldPath: "spec.forProvider.env[5].value"
Summary
The HarborRegistry XRD defines the structure for registry management, including provider information and authentication details.
The Composition links the CRD to a Terraform Workspace, allowing automated management of external Harbor registries.
Users can manage registries declaratively by specifying fields like the provider, endpoint, and access credentials, while Terraform handles the resource creation.
4. Harbor Robot Account
This section demonstrates how to manage Harbor Robot Accounts using Crossplane and Terraform. A HarborRobotAccount allows you to create automation accounts for Harbor, with specific access permissions.
The Harbor Robot Account CRD allows you to define:
Name: The robot account’s name.
Level: Account scope (system or project).
Permissions: Access control specifying allowed actions on resources (pull/push/read on repositories/labels).
Secret: The secret key used for authentication.
4.1 Composite Resource Definition (XRD)
This XRD defines the structure for the HarborRobotAccount, allowing users to manage robot accounts in Harbor.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: compositeharborrobotaccounts.kubenoops.com
spec:
group: kubenoops.com
names:
kind: CompositeHarborRobotAccount
plural: compositeharborrobotaccounts
shortNames: ["hba"]
claimNames:
kind: HarborRobotAccount
plural: harborrobotaccounts
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
name:
type: string
level:
type: string
enum:
- "system"
- "project"
permission:
type: object
properties:
kind:
type: string
enum:
- "system"
- "project"
namespace:
type: string
access:
type: object
properties:
action:
type: string
enum:
- "pull"
- "push"
- "read"
resource:
type: string
enum:
- "repository"
- "labels"
secret:
type: string
required:
- name
- level
- permission
- secret
This XRD defines the custom resource for managing robot accounts in Harbor, including details like account level and permissions.
4.2 Composition for Terraform Automation
The Composition translates the CRD into a Terraform Workspace for managing Harbor robot accounts.
Example claim for a Harbor robot account:
apiVersion: kubenoops.com/v1alpha1
kind: HarborRobotAccount
metadata:
name: my-harbor-robot-account
spec:
name: "my-robot"
level: "project"
secret: "supersecret"
permission:
kind: "project"
namespace: "my-namespace"
access:
action: "pull"
resource: "repository"
Here’s the Composition manifest for managing Harbor Robot Accounts:
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: harbor-robot-account-composition
spec:
compositeTypeRef:
apiVersion: kubenoops.com/v1alpha1
kind: CompositeHarborRobotAccount
resources:
- name: harborrobotaccount
base:
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
spec:
providerConfigRef:
name: harbor-tf
forProvider:
source: Inline
env:
- name: TF_VAR_name
value: "spec.name"
- name: TF_VAR_level
value: "spec.level"
- name: TF_VAR_kind
value: "spec.permission.kind"
- name: TF_VAR_namespace
value: "spec.permission.namespace"
- name: TF_VAR_action
value: "spec.permission.access.action"
- name: TF_VAR_resource
value: "spec.permission.access.resource"
- name: TF_VAR_secret
value: "spec.secret"
module: |
variable "name" {
description = "Robot Account Name"
type = string
}
variable "level" {
description = "Level of the Robot Account (system/project)"
type = string
}
variable "kind" {
description = "Permission kind (system/project)"
type = string
}
variable "namespace" {
description = "Namespace for the permission"
type = string
}
variable "action" {
description = "Action allowed by the permission (pull/push/read)"
type = string
}
variable "resource" {
description = "Resource for the permission (repository/labels)"
type = string
}
variable "secret" {
description = "Secret for the Robot Account"
type = string
}
resource "harbor_robot_account" "project" {
name = var.name
secret = var.secret
level = var.level
permissions {
access {
action = var.action
resource = var.resource
}
kind = var.kind
namespace = var.namespace
}
}
patches:
- type: FromCompositeFieldPath
fromFieldPath: "spec.name"
toFieldPath: "spec.forProvider.env[0].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.level"
toFieldPath: "spec.forProvider.env[1].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.permission.kind"
toFieldPath: "spec.forProvider.env[2].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.permission.namespace"
toFieldPath: "spec.forProvider.env[3].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.permission.access.action"
toFieldPath: "spec.forProvider.env[4].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.permission.access.resource"
toFieldPath: "spec.forProvider.env[5].value"
- type: FromCompositeFieldPath
fromFieldPath: "spec.secret"
toFieldPath: "spec.forProvider.env[6].value"
Summary
The HarborRobotAccount XRD defines the structure for creating and managing robot accounts with configurable permissions in Harbor.
The Composition links the CRD to a Terraform Workspace, enabling automation for creating robot accounts with specific access controls.
Users can manage robot accounts declaratively by specifying fields like name, level, and permission, while Terraform handles resource creation in Harbor.
This approach simplifies the management of robot accounts and integrates seamlessly with Crossplane’s declarative management and Terraform’s infrastructure automation.
5. Harbor Replication
This section demonstrates how to manage Harbor Replications using Crossplane and Terraform. A HarborReplication allows you to replicate images across registries, either by pulling or pushing.
The Harbor Replication CRD will allow users to configure:
Registry Name
Replication Action: Pull or push.
Filters: Optionally filter by name, tag, labels, or resource.
Schedule: Manual or scheduled replication.
Project Destination: The target project in the destination registry.
5.1 Composite Resource Definition (XRD)
This XRD defines the structure for managing Harbor Replications.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: compositeharborreplications.kubenoops.com
spec:
group: kubenoops.com
names:
kind: CompositeHarborReplication
plural: compositeharborreplications
shortNames: ["hrp"]
claimNames:
kind: HarborReplication
plural: harborreplications
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
registryName:
type: string
action:
type: string
enum:
- pull
- push
schedule:
type: string
default: "manual"
filters:
type: object
properties:
name:
type: string
tag:
type: string
labels:
type: string
resource:
type: string
name:
type: string
projectDestination:
type: string
required:
- projectDestination
- registryName
- action
- name
This Composite Resource Definition allows users to configure Harbor Replication jobs declaratively using Kubernetes.
5.2 Composition for Terraform Automation
The Composition translates the CRD into a Terraform Workspace for creating or updating replication jobs in Harbor.
Example claim for a Harbor replication:
apiVersion: kubenoops.com/v1alpha1
kind: HarborReplication
metadata:
name: prometheus-neuvector-pull
spec:
registryName: neuvector
action: pull
name: prometheus-replication
projectDestination: neuvector
filters:
name: "prom/**"
tag: "latest"
labels: ""
resource: "image"
Here’s the Composition manifest for managing Harbor Replications:
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: harbor-replication-composition
spec:
compositeTypeRef:
apiVersion: kubenoops.com/v1alpha1
kind: CompositeHarborReplication
resources:
- name: harborreplication
patches:
- type: FromCompositeFieldPath
fromFieldPath: spec.registryName
toFieldPath: spec.forProvider.env[0].value
- type: FromCompositeFieldPath
fromFieldPath: spec.name
toFieldPath: spec.forProvider.env[1].value
- type: FromCompositeFieldPath
fromFieldPath: spec.schedule
toFieldPath: spec.forProvider.env[2].value
- type: FromCompositeFieldPath
fromFieldPath: spec.action
toFieldPath: spec.forProvider.env[3].value
- type: FromCompositeFieldPath
fromFieldPath: spec.projectDestination
toFieldPath: spec.forProvider.env[4].value
- type: FromCompositeFieldPath
fromFieldPath: "spec.filters.name"
toFieldPath: "spec.forProvider.env[5].value"
policy:
fromFieldPath: Optional
- type: FromCompositeFieldPath
fromFieldPath: "spec.filters.tag"
toFieldPath: "spec.forProvider.env[6].value"
policy:
fromFieldPath: Optional
- type: FromCompositeFieldPath
fromFieldPath: "spec.filters.labels"
toFieldPath: "spec.forProvider.env[7].value"
policy:
fromFieldPath: Optional
- type: FromCompositeFieldPath
fromFieldPath: "spec.filters.resource"
toFieldPath: "spec.forProvider.env[8].value"
policy:
fromFieldPath: Optional
base:
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
spec:
providerConfigRef:
name: harbor-tf
forProvider:
source: Inline
env:
- name: TF_VAR_registryName
value: "spec.registryName"
- name: TF_VAR_name
value: "spec.name"
- name: TF_VAR_schedule
value: "spec.schedule"
- name: TF_VAR_action
value: "spec.action"
- name: TF_VAR_destination
value: "spec.projectDestination"
- name: TF_VAR_filter_name
value: "spec.filters.name"
- name: TF_VAR_filter_tag
value: "spec.filters.tag"
- name: TF_VAR_filter_labels
value: "spec.filters.labels"
- name: TF_VAR_filter_resource
value: "spec.filters.resource"
module: |
variable "registryName" {
description = "Registry Name"
type = string
}
variable "name" {
description = "Replication Name"
type = string
}
variable "schedule" {
description = "Schedule"
type = string
default = "manual"
}
variable "action" {
description = "Action (pull/push)"
type = string
}
variable "destination" {
description = "Destination Project"
type = string
}
variable "filter_name" {
description = "Filter Name"
type = string
default = null
}
variable "filter_tag" {
description = "Filter Tag"
type = string
default = null
}
variable "filter_labels" {
description = "Filter Labels"
type = string
default = null
}
variable "filter_resource" {
description = "Filter Resource"
type = string
default = null
}
data "harbor_registry" "main" {
name = var.registryName
}
resource "harbor_replication" "main" {
name = var.name
action = var.action
registry_id = regex("[0-9]+", data.harbor_registry.main.id)
schedule = var.schedule
dest_namespace = var.destination
# Use dynamic block to include each filter type only if they are not null
dynamic "filters" {
for_each = var.filter_name != null ? [var.filter_name] : []
content {
name = filters.value
}
}
dynamic "filters" {
for_each = var.filter_tag != null ? [var.filter_tag] : []
content {
tag = filters.value
}
}
dynamic "filters" {
for_each = var.filter_labels != null ? [var.filter_labels] : []
content {
labels = [filters.value]
}
}
dynamic "filters" {
for_each = var.filter_resource != null ? [var.filter_resource] : []
content {
resource = filters.value
}
}
}
Summary
The HarborReplication XRD defines the structure for replication jobs, including details like registry name, filters, schedule, and destination project.
The Composition links the CRD to a Terraform Workspace, automating the creation of replication jobs in Harbor.
Users can manage replication jobs declaratively, specifying filters, schedules, and actions, while Terraform handles the actual resource management.
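With all five claim types applied, a quick way to confirm that everything reconciled is to list the claims together with their backing Terraform Workspaces:
kubectl get harborprojects,harborusers,harborregistries,harborrobotaccounts,harborreplications -A
kubectl get workspaces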
Conclusion
This blog post demonstrated how combining Crossplane and Terraform empowers you to manage Harbor resources like Projects, Users, Registries, Robot Accounts, and Replication Rules directly through Kubernetes CRDs. By leveraging Crossplane's Kubernetes-native management and Terraform's extensive provider ecosystem, we can automate complex infrastructure tasks without manually coding operators. This approach allows for seamless integration with GitOps, Infrastructure-as-Code (IaC), and a declarative management model, making it an ideal solution for managing cloud-native platforms like Harbor in a scalable and efficient manner.