Keystone Operator Deploy/Upgrade on OpenShift
My last post was about installing an Operator for Keystone. In this post I go over how to use the same Keystone Operator to deploy and then upgrade a working Keystone API on OpenShift using containers from the RDO project.
Creating a route to access Keystone API
The first thing is to create a route that will be used to access the Keystone API from outside the OpenShift/Kubernetes cluster. Create a YAML file called route.yaml that looks something like this (swapping in values for your local project and domain):
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: keystone
  namespace: test
spec:
  host: keystone-test.apps.test.dprince
  path: /
  to:
    kind: Service
    name: keystone
  port:
    targetPort: api
Once you have created the file, create the route with the following command:
oc create -f route.yaml
NOTE: We create the route first so that we can pass the DNS hostname as a parameter when creating the Keystone API. This is used to "bootstrap" the Keystone service endpoints. Also note that OpenShift routes support a variety of TLS options (including edge and end-to-end termination). For simplicity we are sticking with plain HTTP for this demo.
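For reference, switching the route to edge-terminated TLS would only require adding a tls stanza to the spec. This is a sketch of what that stanza might look like; the rest of the route is unchanged from the example above:

```yaml
spec:
  host: keystone-test.apps.test.dprince
  tls:
    termination: edge                        # TLS terminates at the OpenShift router
    insecureEdgeTerminationPolicy: Redirect  # optional: redirect http -> https
  to:
    kind: Service
    name: keystone
```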
Once you have completed these steps you should have an OpenShift route created that points to http://keystone-test.apps.test.dprince. This will route traffic through the OpenShift load balancer to the internal keystone service running on OpenShift.
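To sanity-check the route, you can ask OpenShift to show it (the route and namespace names below match the example; swap in your own):

```shell
# Show the route and the hostname the router will serve
oc get route keystone -n test

# Or pull out just the hostname
oc get route keystone -n test -o jsonpath='{.spec.host}'
```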
Deploying Keystone API
The Keystone Operator gave us a Custom Resource Definition (CRD) which can be used to create Keystone objects within the cluster. Create a YAML file representing the Keystone API we want to deploy, like this:
apiVersion: keystone.openstack.org/v1
kind: KeystoneApi
metadata:
  name: keystone
spec:
  adminPassword: foobar123
  containerImage: docker.io/tripleostein/centos-binary-keystone:current-tripleo
  replicas: 1
  databasePassword: foobar123
  databaseHostname: openstack-db-mariadb
  # used for keystone-manage bootstrap endpoints
  apiEndpoint: http://keystone-test.apps.test.dprince/
  # used to create the DB schema
  databaseAdminUsername: root
  databaseAdminPassword: foobar123
  mysqlContainerImage: docker.io/tripleomaster/centos-binary-mariadb:current-tripleo
Save this file as keystone.yaml and then create the resource with the following command:
oc create -f keystone.yaml
NOTE: The assumption here is that you are already running a MariaDB instance in your cluster (reachable at the databaseHostname above). Eventually maybe we'll have an Operator for that too, and once that happens we could even abstract away some of the DB parameters above.
Once the command completes, the Keystone Operator will start the deployment. It goes through several phases: creating the Keystone database, creating a Kubernetes Deployment resource within the cluster, and then bootstrapping the Keystone installation. You can watch the stdout of your Keystone Operator pod if you want to see it happen live. The whole process should take about a minute, and once it finishes you should have a live, working Keystone installation.
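One way to follow along from another terminal (the namespace and operator deployment names here are assumptions; adjust for your cluster):

```shell
# Watch the Keystone pods come up as the Operator works through its phases
oc get pods -n test -w

# Tail the Operator's stdout to see the DB create/bootstrap steps live
oc logs -n test -f deployment/keystone-operator
```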
Test it
Now it's time to actually use and test the installation. Create a file called stackrc that looks like this (again, swap in values for your own environment if you are trying this):
export OS_AUTH_URL=http://keystone-test.apps.test.dprince/
export OS_PASSWORD=foobar123
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export COMPUTE_API_VERSION=1.1
export OS_NO_CACHE=True
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_AUTH_VERSION=3
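Source the file so the openstack client picks up the credentials (this assumes stackrc is in your current directory and that python-openstackclient is installed):

```shell
# Load the Keystone credentials into the current shell
source stackrc

# Confirm the variables are set
env | grep '^OS_'
```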
Now let's run a few sample commands to test the installation and show the version of the service:
openstack token issue
openstack versions show
If you used the containerImage from above (the OpenStack Stein release), the 'versions show' command should display the identity service at version 3.12.
Upgrade it
Next we'll upgrade the Keystone deployment to the OpenStack Train release. This will be a live rolling upgrade; while I'm not sure Keystone officially supports this, it demonstrates the capability and seems to work fine.
To fire off the upgrade run the following OpenShift command:
oc patch keystoneapi keystone --type='json' -p '[{"op": "replace", "path": "/spec/containerImage", "value":"docker.io/tripleotrain/centos-binary-keystone:current-tripleo"}]'
This updates the keystoneapi resource we initially deployed to use a new containerImage for the Train release. The Operator watches for changes to keystoneapi resources and reacts immediately, taking the appropriate actions: running the DB sync for the upgrade and doing a rolling update to the newer API container. Again, you can watch the stdout of the Keystone Operator container if you want to see it happen live. It should take less than a minute to finish.
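To confirm the rolling update completed and the new image is in place (the deployment name and namespace are assumptions based on the examples above):

```shell
# Wait for the rolling update to finish
oc rollout status deployment/keystone -n test

# Verify the running container image is now the Train one
oc get deployment keystone -n test \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```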
Prove that it works
Once the upgrade finishes we'll run another versions command to see what is returned.
openstack versions show
If everything ran correctly you should see the identity service at version 3.13, matching the OpenStack Train release.
Some final thoughts
Hopefully this gives you an idea of what it is like to deploy and manage an application with a Kubernetes/OpenShift Operator. There is still a lot to implement before the Keystone Operator is feature complete, but I think it already demos quite nicely.