Building Crossplane Composition Functions to Empower Your Control Plane
Since Crossplane Composition Functions were promoted to beta in Crossplane 1.14, the community has been adopting the new system. Some community members are leaning into open-source Composition Functions like Patch and Transform or Go Templates. At Imagine Learning, however, we have been building our own Composition Functions because of the trouble we had using patch and transform for complex compositions.
Imagine Learning empowers educators to inspire breakthrough moments in every student’s unique learning journey with digital-first, K–12 education solutions. We use Crossplane in our internal developer platform (IDP) to deploy resources into Amazon Web Services (AWS). We commonly built out abstractions through compositions with native patch and transform. Composition files for resources like AWS RDS Clusters regularly contained more than 1,000 lines of repetitive code. We also needed several compositions just to support different instance counts, because native patch and transform cannot render resources conditionally. This repetition, and the sheer size of the files, made changes to the compositions nearly impossible to test and new compositions difficult to author.
Implementing Composition Functions
After the release of Crossplane 1.14, Imagine Learning moved its existing patch and transform compositions to Functions. As part of that effort, we had to decide between a single monolithic Composition Function serving all compositions and a separate Function for each composition. We chose the monolithic approach: it lets us route dynamically to the rendering logic for a specific composition, results in significantly fewer idle workloads running in our cluster, and leaves us with only one code base to manage.
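To illustrate the routing model, here is a simplified, standard-library-only sketch of how a monolithic Function can dispatch to per-composition rendering logic. The names here (renderFn, registry, route, and the render functions) are hypothetical stand-ins; the real implementation works with the request and response types from function-sdk-go:

```go
package main

import "fmt"

// renderFn renders the desired resources for one composition. In the real
// Function this would take a RunFunctionRequest; here a map stands in.
type renderFn func(in map[string]string) (map[string]string, error)

// registry maps a composition identifier (for example, the composite kind)
// to the rendering logic for that composition.
var registry = map[string]renderFn{
	"XS3Bucket":   renderS3Bucket,
	"XRDSCluster": renderRDSCluster,
}

// renderS3Bucket and renderRDSCluster are placeholder renderers.
func renderS3Bucket(in map[string]string) (map[string]string, error) {
	return map[string]string{"bucket": "rendered"}, nil
}

func renderRDSCluster(in map[string]string) (map[string]string, error) {
	return map[string]string{"cluster": "rendered"}, nil
}

// route dispatches one request to the renderer registered for its kind,
// failing loudly when no renderer is registered.
func route(kind string, in map[string]string) (map[string]string, error) {
	fn, ok := registry[kind]
	if !ok {
		return nil, fmt.Errorf("no renderer registered for %q", kind)
	}
	return fn(in)
}

func main() {
	out, err := route("XS3Bucket", nil)
	fmt.Println(out, err)
}
```

Because every composition registers itself in one map, adding a new composition means adding one entry and one renderer, and a single Function deployment serves them all.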
Composition Functions have opened a lot of doors: they make our control plane easier to test and to run locally, and they have improved our ability to manage complex compositions. For example, the composition YAML file for an RDS Cluster composition in the patch and transform framework was about 1,000 lines for a single-instance cluster and 1,250 lines for a two-instance cluster. Two nearly identical compositions were required just to support provisioning a different number of database instances. With Composition Functions we reduced that to a simple for loop, cutting both complexity and repetition.
Here is a snippet from our RDS Cluster Composition showing how an ordinary loop keeps the code clean and simple:
// Read the requested instance count from the observed composite resource (XR).
instanceCount, _ := oxr.Resource.GetInteger("spec.resourceConfig.instanceCount")

// Render one desired composed resource per requested instance.
for i := int64(0); i < instanceCount; i++ {
	name := resource.Name(fmt.Sprintf("instance-%v", i))
	desired[name] = resource.NewDesiredComposed()
	GenerateInstanceResource(desired[name], oxr, observed, tags, i)
}
Running Crossplane Functions locally with the crossplane beta render CLI command allows us to validate that resources render correctly before deploying to a Kubernetes cluster. The ability to unit test, and eventually integration test, our compositions saves us from errors caused by contracts breaking between what is expected in the Composite Resource (XR) or in EnvironmentConfigs and what we have deployed.
Here is the output from the crossplane beta render command for an S3 Bucket Composition:
apiVersion: imaginelearning.engineering/v1alpha1
kind: XS3Bucket
metadata:
  labels:
    cluster: platform-production
    domain: platform
    product: sandbox
  managedFields: null
  name: sandbox-dev-sample-python-bucket-12easda
spec:
  claimRef:
    apiVersion: imaginelearning.engineering/v1alpha1
    kind: S3Bucket
    name: sandbox-dev-sample-python-bucket
    namespace: sandbox-dev
  resourceConfig: {}
status:
  bucketArn: arn:aws:s3:::sandbox-dev-sample-python-bucket
  bucketName: sandbox-dev-sample-python-bucket
---
apiVersion: s3.aws.upbound.io/v1beta1
kind: BucketServerSideEncryptionConfiguration
metadata:
  annotations:
    crossplane.io/composition-resource-name: bucket-ses
  generateName: sandbox-dev-sample-python-bucket-12easda-
  labels:
    crossplane.io/claim-name: sandbox-dev-sample-python-bucket
    crossplane.io/claim-namespace: sandbox-dev
    crossplane.io/composite: sandbox-dev-sample-python-bucket-12easda
  managedFields: null
  name: sandbox-dev-sample-python-bucket
  ownerReferences:
  - apiVersion: imaginelearning.engineering/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: XS3Bucket
    name: sandbox-dev-sample-python-bucket-12easda
    uid: ""
spec:
  deletionPolicy: Orphan
  forProvider:
    bucketSelector:
      matchControllerRef: true
    region: us-east-2
    rule:
    - applyServerSideEncryptionByDefault:
      - sseAlgorithm: AES256
  providerConfigRef:
    name: aws-provider
---
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  annotations:
    crossplane.io/composition-resource-name: bucket
  generateName: sandbox-dev-sample-python-bucket-12easda-
  labels:
    crossplane.io/claim-name: sandbox-dev-sample-python-bucket
    crossplane.io/claim-namespace: sandbox-dev
    crossplane.io/composite: sandbox-dev-sample-python-bucket-12easda
  managedFields: null
  name: sandbox-dev-sample-python-bucket
  ownerReferences:
  - apiVersion: imaginelearning.engineering/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: XS3Bucket
    name: sandbox-dev-sample-python-bucket-12easda
    uid: ""
spec:
  deletionPolicy: Orphan
  forProvider:
    region: us-east-2
    tags:
      cluster: platform-production
      domain: platform
      env: dev
      name: sandbox-dev-sample-python-bucket
      namespace: sandbox-dev
      product: sandbox
  providerConfigRef:
    name: aws-provider
Conclusion
Building your internal developer platform (IDP) control plane with Crossplane Composition Functions allows you to build with confidence. Imagine Learning has leveraged Composition Functions to reduce bugs and the toil of maintaining thousands of lines of YAML configuration. Our next steps to bring our internal developer platform to the next level are extending Composition Function coverage to more resources, evaluating how to automate day-two operations, and building a more thorough end-to-end test suite against a local Kubernetes cluster.