Quarkiverse Java Operator SDK

This extension integrates the Java Operator SDK project (JOSDK) with Quarkus, making it even easier to use both.

Features

  • Automatically generates a main class, so that the only thing that’s required is to write Reconciler implementation(s)

  • Automatically makes a Kubernetes/OpenShift client available for CDI injection

  • Automatically sets up an Operator instance, also available for CDI injection

  • Automatically processes the reconcilers' configuration at build time, exposing all the available configuration of JOSDK via application properties

  • Automatically registers reconcilers with the Operator and starts them

  • Automatically generates CRDs for all CustomResource implementations used by reconcilers

  • Automatically generates Kubernetes descriptors

  • Automatically generates the bundle manifests for all reconcilers (using the quarkus-operator-sdk-bundle-generator extension) [Preview]

  • Integrates with the Dev mode:

    • Watches your code for changes and automatically reloads your operator if needed, without requiring you to hit an endpoint

    • Only re-generates the CRDs if a change impacting their generation is detected

    • Only re-processes a reconciler’s configuration if needed

    • Automatically applies the CRD to the cluster when it has changed

  • Supports micrometer registry extensions (adding a Quarkus-supported micrometer registry extension will automatically inject said registry into the operator)

  • Automatically adds a SmallRye health check

  • Can easily generate a Docker image via JIB

  • Sets up reflection for native binary generation

  • Allows customizing the JSON serialization that the Fabric8 client relies on by providing an ObjectMapperCustomizer implementation, qualified with the @KubernetesClientSerializationCustomizer annotation
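For illustration, a minimal Reconciler might look like the following sketch. All class and resource names here are hypothetical, and the exact JOSDK interface and package names may differ depending on the JOSDK version your extension version targets:

```java
import io.fabric8.kubernetes.api.model.Namespaced;
import io.fabric8.kubernetes.client.CustomResource;
import io.fabric8.kubernetes.model.annotation.Group;
import io.fabric8.kubernetes.model.annotation.Version;
import io.javaoperatorsdk.operator.api.reconciler.Context;
import io.javaoperatorsdk.operator.api.reconciler.Reconciler;
import io.javaoperatorsdk.operator.api.reconciler.UpdateControl;

// Hypothetical custom resource: the extension generates the CRD from this class.
@Group("example.com")
@Version("v1")
class MyResource extends CustomResource<Void, Void> implements Namespaced {
}

// The extension discovers this Reconciler at build time, registers it with the
// Operator and starts it: no main class needs to be written.
public class MyReconciler implements Reconciler<MyResource> {

    @Override
    public UpdateControl<MyResource> reconcile(MyResource resource, Context context) {
        // Reconciliation logic goes here.
        return UpdateControl.noUpdate();
    }
}
```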

Installation

If you want to use this extension, you need to add the quarkus-operator-sdk extension first.

At a minimum, you need to add the following to your pom.xml file:

<dependency>
    <groupId>io.quarkiverse.operatorsdk</groupId>
    <artifactId>quarkus-operator-sdk</artifactId>
    <version>{extension version}</version>
</dependency>

However, it might be more convenient to use the quarkus-operator-sdk-bom dependency to ensure that all dependency versions are properly aligned:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkiverse.operatorsdk</groupId>
      <artifactId>quarkus-operator-sdk-bom</artifactId>
      <version>{extension version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>

    <!-- other dependencies as needed by your project -->

  </dependencies>
</dependencyManagement>

If you do use the BOM, make sure to use the same Quarkus version as the one it defines when configuring the Quarkus plugin. Otherwise, Quarkus Dev Mode will not work properly, failing with the following error:

Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalStateException: Hot deployment of the application is not supported when updating the Quarkus version. The application needs to be stopped and dev mode started up again
        at io.quarkus.deployment.dev.DevModeMain.start(DevModeMain.java:138)
        at io.quarkus.deployment.dev.DevModeMain.main(DevModeMain.java:62)
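One way to keep the plugin and BOM aligned is to drive both from a single Maven property. This is only a sketch: the property name is a common convention, not mandated by the extension, and x.y.z stands for the Quarkus version actually defined by the BOM you import:

```xml
<properties>
  <!-- Must match the Quarkus version defined by the quarkus-operator-sdk-bom -->
  <quarkus.version>x.y.z</quarkus.version>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-maven-plugin</artifactId>
      <version>${quarkus.version}</version>
    </plugin>
  </plugins>
</build>
```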

If you want to use the Bundle generator, you will need Quarkus 2.3.0.Final or above and must first add the quarkus-operator-sdk-bundle-generator extension:

<dependency>
    <groupId>io.quarkiverse.operatorsdk</groupId>
    <artifactId>quarkus-operator-sdk-bundle-generator</artifactId>
    <version>{extension version}</version>
</dependency>

Deployment

This section explains how to deploy your operator using the Operator Lifecycle Manager (OLM), following these steps:

  1. Requirements

Make sure you have installed the opm command tool and are connected to a Kubernetes cluster on which OLM is installed.

  2. Generate the Operator image and bundle manifests

Quarkus provides several extensions to build the container image. For example, the Joke sample uses the Quarkus Jib container image extension to build the image. First, configure whichever of these extensions you prefer. Then, add the quarkus-operator-sdk-bundle-generator extension:

<dependency>
    <groupId>io.quarkiverse.operatorsdk</groupId>
    <artifactId>quarkus-operator-sdk-bundle-generator</artifactId>
</dependency>

This extension generates the Operator bundle manifests in the target/bundle directory.

Finally, to generate the operator image and the bundle manifests at once, run the following Maven command:

mvn clean package -Dquarkus.container-image.build=true \
    -Dquarkus.container-image.push=true \
    -Dquarkus.container-image.registry=<your container registry. Example: quay.io> \
    -Dquarkus.container-image.group=<your container registry namespace> \
    -Dquarkus.kubernetes.namespace=<the kubernetes namespace where you will deploy the operator> \
    -Dquarkus.operator-sdk.bundle.package-name=<the name of the package that bundle image belongs to> \
    -Dquarkus.operator-sdk.bundle.channels=<the list of channels that bundle image belongs to>

For example, if we want to name the package my-operator and use the alpha channels, we would need to append the properties -Dquarkus.operator-sdk.bundle.package-name=my-operator -Dquarkus.operator-sdk.bundle.channels=alpha.
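Putting those example values together, the full command might look like this (the registry, group, and namespace values are placeholders to substitute with your own):

```shell
mvn clean package -Dquarkus.container-image.build=true \
    -Dquarkus.container-image.push=true \
    -Dquarkus.container-image.registry=quay.io \
    -Dquarkus.container-image.group=myuser \
    -Dquarkus.kubernetes.namespace=operators \
    -Dquarkus.operator-sdk.bundle.package-name=my-operator \
    -Dquarkus.operator-sdk.bundle.channels=alpha
```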

Find more information about channels and packages here.

If you’re using an insecure container registry, you’ll also need to append the following property to the Maven command: -Dquarkus.container-image.insecure=true.

  3. Build the Operator Bundle image

An Operator Bundle is a container image that stores Kubernetes manifests and metadata associated with an operator. You can find more information about this here. In the previous step, we generated the bundle manifests in target/bundle, which includes a ready-to-use Dockerfile (target/bundle/bundle.Dockerfile) that you will use to build and push the final Operator Bundle image to your container registry:

MY_BUNDLE_IMAGE=<your container registry>/<your container registry namespace>/<bundle image name>:<tag>
docker build -t $MY_BUNDLE_IMAGE -f target/bundle/bundle.Dockerfile target/bundle
docker push $MY_BUNDLE_IMAGE

For example, if we want to name our bundle image as my-manifest-bundle, our container registry is quay.io, our Quay user is myuser and the tag we’re releasing is 1.0, the final MY_BUNDLE_IMAGE property would be quay.io/myuser/my-manifest-bundle:1.0.

  4. Make your operator available within a Catalog

OLM uses catalogs to discover and install Operators and their dependencies. A catalog is thus similar to a repository of operators and their associated versions that can be installed on a cluster. Moreover, the catalog is itself a container image that contains a collection of bundles and channels. Therefore, we need to create a new catalog (or update an existing one if you already have one), build and push the catalog image, and then install it on our cluster.

So far, we have built the Operator bundle image at $MY_BUNDLE_IMAGE (see above); next, we need to add this Operator bundle image to our catalog. To do this, we’ll use the opm tool as follows:

CATALOG_IMAGE=<catalog container registry>/<catalog container registry namespace>/<catalog name>:<tag>
opm index add \
    --bundles $MY_BUNDLE_IMAGE \
    --tag $CATALOG_IMAGE \
    --build-tool docker
docker push $CATALOG_IMAGE

For example, if our catalog name is my-catalog, our container registry for the catalog is quay.io, our Quay user is myuser and the container tag we’re releasing is 59.0, the final CATALOG_IMAGE property would be quay.io/myuser/my-catalog:59.0.

If you’re using an insecure registry, you’d need to append the argument --skip-tls to the opm index command.

Once our catalog image is built and pushed at $CATALOG_IMAGE, we need to install it on our cluster, in the same namespace where OLM is running (by default, OLM runs in the operators namespace; we will use the OLM_NAMESPACE property to represent this namespace), using a CatalogSource resource:

cat <<EOF | kubectl apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: my-catalog-source
  namespace: $OLM_NAMESPACE
spec:
  sourceType: grpc
  image: $CATALOG_IMAGE
EOF

Once the catalog is installed, you should see the catalog pod up and running:

kubectl get pods -n $OLM_NAMESPACE --selector=olm.catalogSource=my-catalog-source

  5. Install your operator via OLM

OLM deploys operators via subscriptions; creating a Subscription triggers the operator deployment. Create a Subscription resource that specifies the operator name and channel to install by running the following command:

cat <<EOF | kubectl create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-subscription
  namespace: <Kubernetes namespace where your operator will be installed>
spec:
  channel: alpha
  name: my-operator-name
  source: my-catalog-source
  sourceNamespace: $OLM_NAMESPACE
EOF

We’ll install the operator in the target namespace defined in the metadata object. The sourceNamespace value is the Kubernetes namespace where the catalog was installed.

Once the subscription is created, you can verify that the operator was installed by checking its ClusterServiceVersion:

kubectl get csv -n $OLM_NAMESPACE my-operator-name

Extension Configuration Reference

Configuration property fixed at build time - All other configuration properties are overridable at runtime

  • Whether the extension should generate a ClusterServiceVersion manifest for controllers. (boolean, default: false)

  • Whether the operator should check that the CRD is properly deployed and that the associated CustomResource implementation matches its information before registering the associated controller. (boolean, default: true)

  • Whether the extension should automatically generate the CRD based on CustomResource implementations. (boolean, default: true)

  • Whether the extension should automatically apply updated CRDs when they change. (boolean, default: false)

  • Comma-separated list of which CRD versions should be generated. (list of string, default: v1)

  • The directory where the CRDs will be generated, relative to the project’s output directory. (string, default: kubernetes)

  • Whether controllers should only process events if the associated resource generation has increased since the last reconciliation; otherwise, all events are processed. Sets the default value for all controllers. (boolean, default: true)

  • The optional fully qualified name of a CDI event class that controllers will wait for before registering with the Operator. Sets the default value for all controllers. (string)

  • The maximum number of concurrent dispatches of reconciliation requests to controllers. (int)

  • The number of seconds the SDK waits for reconciliation threads to terminate before shutting down. (int)

  • An optional comma-separated list of namespace names that all controllers will watch unless individually configured. If this property is left empty, controllers will watch all namespaces by default. Sets the default value for all controllers. (list of string)

  • The optional name of the finalizer to use for controllers. If none is provided, one will be automatically generated. Note that having several controllers use the same finalizer might create issues; this configuration item is mostly useful when you don’t want to use finalizers at all by default (using the io.javaoperatorsdk.operator.api.Controller#NO_FINALIZER value). Sets the default value for all controllers. (string)

  • Whether the controller should only process events if the associated resource generation has increased since the last reconciliation; otherwise, all events are processed. (boolean, default: true)

  • An optional comma-separated list of namespace names the controller should watch. If this property is left empty, the controller will watch all namespaces. (list of string)

  • The optional name of the finalizer for the controller. If none is provided, one will be automatically generated. (string)

  • How many times an operation should be retried before giving up. (int)

  • The initial interval that the controller waits for before attempting the first retry. (long, default: 2000)

  • The value by which the initial interval is multiplied for each retry. (double, default: 1.5)

  • The maximum interval that the controller will wait for before attempting a retry, regardless of all other configuration. (long)

  • An optional comma-separated list of label selectors that Custom Resources must match to trigger the controller. See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more details on selectors. (string)
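In practice, these options are set through standard Quarkus configuration, typically in application.properties. The following is only an illustrative sketch: the property names assume the quarkus.operator-sdk.* namespace and may differ between extension versions, so verify them against the reference for the version you use:

```properties
# Illustrative property names; verify against your extension version
quarkus.operator-sdk.crd.generate=true
quarkus.operator-sdk.crd.apply=true
quarkus.operator-sdk.crd.versions=v1
quarkus.operator-sdk.namespaces=my-namespace
```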