Provisioning and Consuming Multi-Cloud Infrastructure with Crossplane and Dapr
In this blog post, we will walk through a detailed example that combines the capabilities of Crossplane and Dapr to provision and then consume a set of cloud resources. Through this practical example, we will answer some key questions in this workflow: how can an application developer create and customize their own cloud resources, and how do they discover what is available to connect to from their application code?
Crossplane is a CNCF project and a powerful tool for provisioning cloud resources by leveraging the declarative nature of the Kubernetes APIs. By installing Crossplane Providers (AWS, GCP, Azure, Alibaba Cloud, Helm, Kubernetes, among others), you can provision resources in different providers by defining and managing nothing but Kubernetes resources. Crossplane is also well known for its Composition capabilities (Composite Resource Definitions, or XRDs), which allow teams to create and manage groups of resources (provisioned and configured together) behind a simplified Kubernetes Custom Resource Definition (CRD).
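For example, installing a Provider is itself just a matter of applying a Kubernetes resource. Here is a minimal sketch of installing the Helm provider (the package reference and version are illustrative; check the provider's documentation for the exact values):

# Sketch: installing the Crossplane Helm provider as a Kubernetes resource.
# The package reference and version are illustrative.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-helm
spec:
  package: xpkg.upbound.io/crossplane-contrib/provider-helm:v0.15.0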
We will then look into how the Dapr project (also a CNCF project) enables developers to build and connect Cloud-Native applications using a set of common APIs. Dapr also gives platform engineering teams the tools to simplify the creation of development environments, allowing developers to focus on building features and fixing bugs. We will look into how Dapr and Crossplane can be combined to reduce the cognitive load on developers when connecting to and interacting with resources that may even be hosted in different cloud providers.
To be able to consume cloud resources, we first need to create them and make sure that they are configured correctly so application developers can access them. We will start by creating a Crossplane Composition that enables teams to create databases on demand. Once we have a database for applications to store data in, we will look at connecting to it by extending the composition to use Dapr components, giving developers an easy way to find and interact with the database instance.
Provisioning Infrastructure, everywhere
Crossplane requires you to install and configure Crossplane Providers on a Kubernetes Cluster, depending on which Cloud Provider you want to create resources on. For this example, we will use the Crossplane Helm Provider to provision a Redis database in the same cluster where Crossplane is installed. You can extend this example to support creating resources in the cloud provider of your choice.
You can follow a step-by-step tutorial on how to install and configure your local KinD Cluster to get this working on your laptop by looking at the following URL:
https://github.com/salaboy/from-monolith-to-k8s/tree/main/platform/crossplane-dapr
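To give an idea of what the tutorial sets up, the Helm provider needs a ProviderConfig that tells it which credentials to use; when the target is the same cluster where Crossplane runs, the provider's own identity is enough. A minimal sketch, assuming the tutorial's in-cluster setup (the provider's service account also needs RBAC permissions to install charts):

# Sketch: let the Helm provider install charts into the same cluster
# by reusing the identity of the provider pod (no external credentials).
apiVersion: helm.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: InjectedIdentity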
Let’s keep it simple for now and create just a Redis database for our application to connect to.
Usually, whether it is a NoSQL or a relational database, you will need to know some details (username, password, endpoint and port, whether TLS is enabled) to ensure that your application can connect to it once the resource is provisioned.
But let’s wait a second; life is not that simple. If you want to provision a database, you must decide how that database will be configured. Some common questions to answer at this stage are:
- Do we want a highly available database?
- Where is the database supposed to be running? In which region of the world?
- Which version of the database do we want to use?
- How much storage capacity will the database have, and where is that storage?
Here is where a Platform Team can use Crossplane compositions to abstract away all these complex questions that development teams shouldn’t worry about when they just need a database for their applications. Let’s see how this is done.
Once you have Crossplane installed and configured as instructed in the step-by-step tutorial, you will find the Crossplane Composition in a file called app-database-redis.yaml.
This Composition implements the following CompositeResourceDefinition, which we will use to create new database instances: https://github.com/salaboy/from-monolith-to-k8s/blob/main/platform/crossplane-dapr/app-database-resource.yaml
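To get a feel for what that definition contains, here is a trimmed-down sketch of an XRD that exposes a Database claim with a size parameter (names and schema are simplified; the linked app-database-resource.yaml is the authoritative version):

# Sketch: an XRD exposing a namespaced Database claim with a size parameter.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xdatabases.salaboy.com
spec:
  group: salaboy.com
  names:
    kind: XDatabase
    plural: xdatabases
  claimNames:
    kind: Database
    plural: databases
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  size:
                    type: string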
Once the Composition and the CompositeResourceDefinition are installed in the cluster, a new Database instance can be requested by creating a Database resource that looks like this:
apiVersion: salaboy.com/v1alpha1
kind: Database
metadata:
  name: my-db
spec:
  compositionSelector:
    matchLabels:
      provider: local
      type: dev
  parameters:
    size: small
By creating new instances of the Database resource, teams request a new database to be provisioned by the Crossplane composition. As discussed before, a Redis instance will be created by the Crossplane Helm Provider using the Bitnami Redis Helm chart. By defining a composition, the platform team can encapsulate all the parameters required by the Redis Helm chart behind a simpler resource aimed at the application development (AppDev) teams.
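Inside the composition, the piece doing the heavy lifting is a Helm Release managed resource. Roughly, it could look like the following sketch (chart version and values are illustrative; the real composition in the repository also patches values based on the requested size):

# Sketch: the Helm Release resource a composition can template to install
# the Bitnami Redis chart. Chart version and values are illustrative.
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: my-db-redis
spec:
  forProvider:
    chart:
      name: redis
      repository: https://charts.bitnami.com/bitnami
      version: "17.8.0"
    namespace: default
    values:
      architecture: standalone
  providerConfigRef:
    name: default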
Because Crossplane relies on extending the Kubernetes APIs, you can use kubectl to list all the available databases:
> kubectl get dbs
NAME    SIZE    SYNCED   READY   COMPOSITION            AGE
my-db   small   True     True    db.local.salaboy.com   3h28m
The composition selected by this Database resource automatically provisions a Redis instance:
> kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
my-db-redis-master-0   1/1     Running   0          3h35m
The Redis Helm chart also creates a new Kubernetes Secret that contains the connection details needed to connect to the instance we just created.
> kubectl get secret
NAME          TYPE     DATA   AGE
my-db-redis   Opaque   1      3h37m
Applications can now use this secret to connect to the newly provisioned database.
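For example, a workload that talks to Redis directly could surface the password from that secret as an environment variable. A hedged sketch of the relevant pod spec fragment (the redis-password key and the my-db-redis-master service name follow the Bitnami chart's conventions and may differ in your setup):

# Sketch: fragment of a pod spec exposing the generated Redis credentials.
containers:
- name: my-app
  image: my-app:latest
  env:
  - name: REDIS_HOST
    value: my-db-redis-master:6379
  - name: REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: my-db-redis
        key: redis-password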
Finally, all this wouldn’t be worth the hassle if we couldn’t use the same interface to provision resources on multiple cloud providers. This is where the labels in the Composition and the Database resources come into play. For example, if we wanted to provision an in-memory store (Redis) on Google Cloud Platform, we would only need to provide a new composition that uses the GCP Crossplane Provider and encapsulates all the details needed to create that database behind the same Database resource.
By creating a new Composition and using different labels, our developers can reuse the same resource to provision a database in GCP or any other cloud provider.
apiVersion: salaboy.com/v1alpha1
kind: Database
metadata:
  name: my-db-on-the-cloud
spec:
  compositionSelector:
    matchLabels:
      provider: cloud
      type: dev
  parameters:
    size: small
We have used the provider: cloud label to choose the composition that creates an in-memory store in GCP using the Crossplane GCP provider.
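For the selector to work, the compositions themselves carry matching labels. As a sketch, the metadata of the two compositions could look like this (the GCP composition name and the XDatabase composite kind are illustrative, and the provider-specific resources are omitted):

# Sketch: the labels on each Composition are what the compositionSelector matches.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: db.local.salaboy.com
  labels:
    provider: local
    type: dev
spec:
  compositeTypeRef:
    apiVersion: salaboy.com/v1alpha1
    kind: XDatabase
  # resources: Helm provider resources omitted for brevity
---
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: db.gcp.salaboy.com
  labels:
    provider: cloud
    type: dev
spec:
  compositeTypeRef:
    apiVersion: salaboy.com/v1alpha1
    kind: XDatabase
  # resources: GCP provider resources omitted for brevity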
We have databases!!! What’s next?
Consuming Infrastructure, from everywhere
As an application developer, I would love to store and read data from a database whenever my application needs it. Still, figuring out where the database is, what kind of database it is, which libraries I need, and which connection details to use becomes hard when the database can be different in each environment.
In this section, we will look at Dapr, which provides a common set of APIs to interact with infrastructure no matter where that infrastructure is, enabling developers to focus on writing new features or fixing bugs, instead of worrying about boilerplate code and complicated setups.
Using Dapr, the platform team that defines where the infrastructure will be created can wire up Dapr Components so that developers can connect to the infrastructure they need just by knowing a component name.
Let’s see this in action. Going back to our example application, you now want to connect to the database that was provisioned in GCP or locally using Helm.
By using Dapr, we can configure a Dapr Component that abstracts away all the database details; the only thing developers need to know is the name of the Dapr Statestore component in order to connect to and use the database.
The Statestore Dapr Component holds all the credentials and configuration needed to connect to our Redis database. The application can interact with the Statestore component using HTTP, gRPC, or one of the Dapr SDKs, and it doesn’t need any Redis dependency in the application code! This reduces the number of dependencies, the size, and the attack surface of your services.
This is how the Statestore Dapr Component looks when using a declarative configuration approach for Kubernetes:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: <REDIS_HOST:REDIS_PORT>
  - name: redisPassword
    value: <REDIS_PASSWORD>
Now, how does this affect our Crossplane composition? Initially, we can let developers set up their own Dapr components, but we can take things one step further to simplify their experience on top of Kubernetes.
Let’s add our Dapr Statestore component configuration to our Crossplane composition using the Kubernetes Crossplane Provider. You can take a look at the composition that includes the Statestore Dapr component here: https://github.com/salaboy/from-monolith-to-k8s/blob/main/platform/crossplane-dapr/app-database-redis-dapr.yaml#L119
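The interesting part is the resource that the Kubernetes provider manages on our behalf: it wraps a Dapr Component manifest inside an Object resource, and the composition patches in the Redis host and the name of the connection secret. A simplified, hedged sketch of that shape (field values are illustrative; the linked composition is the real source):

# Sketch: a provider-kubernetes Object wrapping a Dapr Component.
# In the real composition, the host and secret name are patched in from
# the provisioned Redis release; values here are illustrative.
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: Object
metadata:
  name: my-db-dapr-statestore
spec:
  forProvider:
    manifest:
      apiVersion: dapr.io/v1alpha1
      kind: Component
      metadata:
        name: my-db-dapr-statestore
        namespace: my-db-dapr
      spec:
        type: state.redis
        version: v1
        metadata:
        - name: redisHost
          value: my-db-dapr-redis-master:6379
        - name: redisPassword
          secretKeyRef:
            name: my-db-dapr-redis
            key: redis-password
      auth:
        secretStore: kubernetes
  providerConfigRef:
    name: default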
This new Composition has a different name (dapr.db.local.salaboy.com) and is labelled with type: dapr-dev.
Now, when we request a new Database resource, the Helm Provider will install the Redis Helm chart, and the Kubernetes Provider will create a new Object resource (as defined by Crossplane) that wires up a Dapr Component with the credentials coming from the secret created when the Redis chart was installed.
After installing the new composition, you can create a new Database resource with the same API and schema as before, but now using a different label to select the new Dapr-aware composition:
apiVersion: salaboy.com/v1alpha1
kind: Database
metadata:
  name: my-db-dapr
spec:
  compositionSelector:
    matchLabels:
      provider: local
      type: dapr-dev
  parameters:
    size: small
After applying this resource, you can check that the Dapr Statestore component was created and that it is linked to the Redis instance by running:
> kubectl get components -n my-db-dapr
NAME                    AGE
my-db-dapr-statestore   16m
Finally, developers can now use the dapr CLI to query the components available in their target cluster.
> dapr components -k
NAMESPACE    NAME                    TYPE          VERSION   SCOPES   CREATED    AGE
my-db-dapr   my-db-dapr-statestore   state.redis   v1                 10:17.35   13m
As an application developer, you have two different but equivalent options: you can include one of the Dapr SDKs in your service (available for most programming languages), or use plain HTTP/gRPC requests to interact with the Dapr Components APIs. In the step-by-step tutorial, you deploy two applications, one that stores data into Redis (Java) and another that reads from it (Go).
For example, here is how a Java application would store data in the Statestore component that we just provisioned:
import java.util.ArrayList;

import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;
import io.dapr.client.domain.State;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Name of the Dapr Statestore component created by the Crossplane composition
private static final String STATE_STORE_NAME = "my-db-dapr-statestore";

private final DaprClient client = new DaprClientBuilder().build();

@PostMapping("/")
public MyValues storeValues(@RequestParam("value") String value) {
  // Fetch the current list of values stored under the "values" key
  State<MyValues> results = client.getState(STATE_STORE_NAME, "values", MyValues.class).block();
  MyValues valuesList = results.getValue();
  if (valuesList == null) {
    valuesList = new MyValues(new ArrayList<String>());
  }
  valuesList.values().add(value);
  // Persist the updated list back into the Statestore component
  client.saveState(STATE_STORE_NAME, "values", valuesList).block();
  return valuesList;
}
Here is how our Go application will read the data stored by the Java application:
dapr "github.com/dapr/go-sdk/client"
var (
STATE_STORE_NAME = "my-db-dapr-statestore"
daprClient dapr.Client
)
func readValues(w http.ResponseWriter, r *http.Request) {
ctx := context.Background()
daprClient, daprErr := dapr.NewClient()
if daprErr != nil {
panic(daprErr)
}
result, err := daprClient.GetState(ctx, STATE_STORE_NAME,
"values", nil)
if err != nil {
panic(err)
}
myValues := MyValues{}
json.Unmarshal(result.Value, &myValues)
respondWithJSON(w, http.StatusOK, myValues)
}
If you are not a Java or Go developer, you can check out the other available Dapr SDKs for Python, Rust, JavaScript, and more.
Summing up
In this blog post, we explored how Dapr and Crossplane can be combined to provision cloud resources that can be consumed using Dapr Components, without requiring developers to know where these cloud resources are or how to connect to them. More importantly, we have enabled the platform team(s) to express, behind a simple interface, how all these resources and components are configured together.
Once the basics are working, you can continue exploring other Dapr components. For example, suppose you want to emit and consume async messages between different applications. In that case, you can create a new Dapr Pub/Sub component and use Redis, Kafka, RabbitMQ, or a cloud provider implementation to once again let developers focus on building features rather than worrying about the transport used to move messages around.
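A Pub/Sub component follows exactly the same shape as the Statestore one; only the type and metadata change. For instance, a Redis-backed Pub/Sub component could look roughly like this sketch (swap the type and metadata entries for Kafka, RabbitMQ, or a cloud messaging service):

# Sketch: a Redis-backed Dapr Pub/Sub component. The same Component kind
# works for Kafka, RabbitMQ, or cloud messaging services by changing the type.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: <REDIS_HOST:REDIS_PORT>
  - name: redisPassword
    value: <REDIS_PASSWORD>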
The latest Dapr 1.10 release contains many new features and improvements, such as Workflow, Pluggable Component SDKs, and Multi-App Run; read all about them in this blog post. Check out the GitHub repository for more examples, and feel free to reach out with questions via Twitter @Salaboy or my blog https://salaboy.com. You are welcome to join the Dapr Discord and the Crossplane Slack to share your experience with both Dapr and Crossplane.