# OpenStack integrator charm
This charm acts as a proxy to OpenStack and provides an interface that supplies a set of credentials for a limited-privilege project user to the applications related to this charm.
## Usage
When on OpenStack, this charm can be deployed, granted trust via Juju to access OpenStack, and then related to an application that supports the interface.
For example, Charmed Kubernetes has support for this, and can be deployed with the following bundle overlay (download it here):
```yaml
applications:
  openstack-integrator:
    charm: cs:~containers/openstack-integrator
    num_units: 1
    trust: true
relations:
  - ['openstack-integrator', 'kubernetes-master:openstack']
  - ['openstack-integrator', 'kubernetes-worker:openstack']
```
Using Juju 2.4 or later:
```shell
juju deploy cs:charmed-kubernetes --overlay ./k8s-openstack-overlay.yaml --trust
```
To deploy with earlier versions of Juju, you will need to provide the cloud credentials via the `credentials` charm config option.
## Resource Usage Note
By relating to this charm, other charms can directly allocate resources, such as Cinder volumes and load balancers, which could lead to cloud charges and count against quotas. Because these resources are not managed by Juju, they will not be automatically deleted when the models or applications are destroyed, nor will they show up in Juju's status or GUI. It is therefore up to the operator to manually delete these resources when they are no longer needed, using the OpenStack console or API.
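If resources are orphaned this way, the OpenStack CLI can help track them down. A minimal sketch, assuming python-openstackclient (and, for load balancers, the Octavia client plugin) is installed and your `OS_*` environment variables are set:

```shell
# List volumes left behind by deleted PersistentVolumeClaims; cross-check
# the names/IDs against what Kubernetes reported before teardown
openstack volume list

# List load balancers and floating IPs created for LoadBalancer services
openstack loadbalancer list
openstack floating ip list

# Delete a specific orphaned resource once you have confirmed it is unused
# (the IDs below are placeholders)
# openstack volume delete <volume-id>
# openstack loadbalancer delete --cascade <lb-id>
```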
## Examples
The following are some examples of using the OpenStack integration with Charmed Kubernetes.
### Creating a pod with a Cinder-backed volume
This script creates a busybox pod with a persistent volume claim backed by an OpenStack Cinder volume.
```shell
#!/bin/bash

# create a persistent volume claim using the StorageClass which is
# automatically created by Charmed Kubernetes when it is related to
# the openstack-integrator
kubectl create -f - <<EOY
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: testclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: cdk-cinder
EOY

# create the busybox pod with a volume using that PVC:
kubectl create -f - <<EOY
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      imagePullPolicy: IfNotPresent
      name: busybox
      volumeMounts:
        - mountPath: "/pv"
          name: testvolume
  restartPolicy: Always
  volumes:
    - name: testvolume
      persistentVolumeClaim:
        claimName: testclaim
EOY
```
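Once the script has run, you can check that the claim bound and that the volume is writable from inside the pod. A sketch, using the resource names from the manifests above:

```shell
# The PVC should report STATUS "Bound" once Cinder provisions the volume
kubectl get pvc testclaim

# Wait for the pod to come up, then write through the mount to confirm it
kubectl wait --for=condition=Ready pod/busybox --timeout=120s
kubectl exec busybox -- sh -c 'echo hello > /pv/test && cat /pv/test'
```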
### Creating a service with an OpenStack load balancer
The following script starts the hello-world pod behind an OpenStack-backed load balancer.
```shell
kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl scale deployment hello-world --replicas=5
kubectl expose deployment hello-world --type=LoadBalancer --name=hello --port=8080
watch kubectl get svc hello -o wide
```
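Once provisioning finishes, the service's EXTERNAL-IP column will show the load balancer address. As a sketch (the IP below is a placeholder), you can exercise the service and then tear it down; deleting the Service is what prompts the cloud provider to remove the OpenStack load balancer it created:

```shell
# Try the service once EXTERNAL-IP is populated
curl http://<external-ip>:8080

# Deleting the Service releases the cloud load balancer behind it
kubectl delete service hello
kubectl delete deployment hello-world
```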
## Configuration
| name | type | Default | Description |
|---|---|---|---|
| auth-url | string | | The URL of the keystone API used to authenticate. On OpenStack control panels, this can be found at Access and Security > API Access > Credentials. |
| bs-version | string | None | See notes |
| credentials | string | | See notes |
| endpoint-tls-ca | string | | See notes |
| floating-network-id | string | | See notes |
| ignore-volume-az | boolean | None | See notes |
| lb-floating-network | string | | If set, this charm will assign a floating IP in this network (name or ID) for load balancers created for other charms related on the loadbalancer endpoint. |
| lb-method | string | ROUND_ROBIN | See notes |
| lb-port | int | 443 | Port to use for load balancers created by this charm for other charms related on the loadbalancer endpoint. |
| lb-subnet | string | | See notes |
| manage-security-groups | boolean | False | See notes |
| password | string | | Password of a valid user set in keystone. |
| project-domain-name | string | | Name of the project domain where you want to create your resources. |
| project-name | string | | Name of the project where you want to create your resources. |
| region | string | | Name of the region where you want to create your resources. |
| snap_proxy | string | | DEPRECATED. Use snap-http-proxy and snap-https-proxy model configuration settings. HTTP/HTTPS web proxy for Snappy to use when accessing the snap store. |
| snap_proxy_url | string | | DEPRECATED. Use snap-store-proxy model configuration setting. The address of a Snap Store Proxy to use for snaps, e.g. http://snap-proxy.example.com |
| snapd_refresh | string | | See notes |
| subnet-id | string | | See notes |
| trust-device-path | boolean | None | See notes |
| user-domain-name | string | | Name of the user domain where you want to create your resources. |
| username | string | | Username of a valid user set in keystone. |
### bs-version
Used to override automatic version detection for block storage usage. Valid values are `v1`, `v2`, `v3`, and `auto`. When `auto` is specified, automatic detection will select the highest supported version exposed by the underlying OpenStack cloud. If not set, the upstream default is used.
### credentials
The base64-encoded contents of a JSON file containing OpenStack credentials.

The credentials must contain the following keys: auth-url, username, password, project-name, user-domain-name, and project-domain-name.

It may also contain a base64-encoded CA certificate under the endpoint-tls-ca key.

This can be used from bundles with `include-base64://` (see https://discourse.charmhub.io/t/bundle-reference/1158), or from the command line with `juju config openstack-integrator credentials="$(base64 /path/to/file)"`.

It is strongly recommended that you use `juju trust` instead, if available.
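As a sketch, the credentials file might be assembled like this; every value below is a placeholder to be replaced with details for your cloud:

```shell
# Write the credentials JSON with the required keys (placeholder values)
cat > os-creds.json <<'EOF'
{
  "auth-url": "https://keystone.example.com:5000/v3",
  "username": "demo-user",
  "password": "demo-password",
  "project-name": "demo-project",
  "user-domain-name": "Default",
  "project-domain-name": "Default"
}
EOF

# Base64-encode the file for the charm config option
CREDS="$(base64 < os-creds.json | tr -d '\n')"

# Apply it (requires a deployed openstack-integrator application):
# juju config openstack-integrator credentials="$CREDS"
```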
### endpoint-tls-ca
A CA certificate that can be used to verify the target cloud API endpoints. Use `include-base64://` in a bundle to include a certificate. Otherwise, pass a base64-encoded certificate (base64 of "-----BEGIN" to "-----END") as a config option in a Juju CLI invocation.
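For example, from the command line (the certificate path is a placeholder):

```shell
# Base64-encode the CA certificate and set it on the charm
juju config openstack-integrator endpoint-tls-ca="$(base64 /path/to/ca.crt | tr -d '\n')"
```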
### floating-network-id
If set, it will be passed to integrated workloads to indicate that floating IPs should be created in the given network for load balancers that those workloads manage. For example, this will determine whether and where FIPs will be created by Kubernetes for LoadBalancer type services in the cluster.
### ignore-volume-az
Used to influence availability zone use when attaching Cinder volumes. When Nova and Cinder have different availability zones, this should be set to true. This is most commonly the case where there are many Nova availability zones but only one Cinder availability zone. If not set, the upstream default is used.
### lb-method
Algorithm that will be used by load balancers, which must be one of: ROUND_ROBIN, LEAST_CONNECTIONS, SOURCE_IP. This applies both to load balancers managed by this charm for applications related via the loadbalancer endpoint, as well as to load balancers managed by integrated workloads, such as Kubernetes.
### lb-subnet
Override the subnet (name or ID) in which this charm will create load balancers for other charms related on the loadbalancer endpoint. If not set, the subnet over which the requesting application is related will be used.
### manage-security-groups
Whether each load balancer should have its own security group, or all load balancers should use the default security group for the project. This applies both to load balancers managed by this charm for applications related via the loadbalancer endpoint, and to load balancers managed by integrated workloads, such as Kubernetes.
### snapd_refresh
How often snapd handles updates for installed snaps. The default (an empty string) is 4x per day. Set to "max" to check once per month, based on the charm deployment date. You may also set a custom string as described in the 'refresh.timer' section here: https://forum.snapcraft.io/t/system-options/87
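For example (the custom timer string below is illustrative; see the snapd 'refresh.timer' documentation linked above for the full syntax):

```shell
# Check for snap updates only once per month
juju config openstack-integrator snapd_refresh="max"

# Or restrict refreshes to a custom window, e.g. Fridays between 23:00 and 01:00
juju config openstack-integrator snapd_refresh="fri,23:00-01:00"
```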
### subnet-id
If set, it will be passed to integrated workloads to indicate in what subnet load balancers should be created. For example, this will determine what subnet Kubernetes uses for LoadBalancer type services in the cluster.
### trust-device-path
In most scenarios the block device names provided by Cinder (e.g. /dev/vda) cannot be trusted. This boolean toggles that behavior: setting it to true trusts the block device names provided by Cinder, while false (the recommended approach) discovers the device path based on its serial number and the /dev/disk/by-id mapping. If not set, the upstream default is used.