Mu in the cloud
by Daniel Ochoa
- August 20, 2019
- mu, kubernetes, gcloud
- 12 minutes to read

This is the fifth article in an ongoing series about the Mu open source library for building purely functional microservices.
In this post, we’ll cover how to integrate a Mu application with Kubernetes and the cloud, concretely by deploying it into Google Cloud Platform.
We are going to simulate the behavior of a smart home. We will have a device that will send our geolocation to our house using Mu. The house will then perform various functions like starting/stopping the robot vacuum cleaner, turning the lights on/off, or enabling/disabling the security cameras.
Also, our smart home server will stream all this geolocation data to pubsub in Google Cloud. Then, this data will be stored in a BigQuery table and, finally, we will be able to visualize this data in BigQuery Geo Viz. You can check the code in the Mu smart home repository.
The smart home server will be located in a Kubernetes cluster.
Although we are not providing details in this blog post, these are some of the tools that are needed along the way:
- sbt
- docker
- the gcloud SDK
- kubectl
Let’s start by creating a fat-jar and then dockerizing it. Instead of doing this in two steps, we will use the multi-stage build feature of docker, and we will package our app inside of docker.
But first, we need to create the fat-jar. A Mu application works like any other Scala app, so we can add the sbt-native-packager plugin and then modify our build.sbt, enabling fat-jar packaging for our server submodule:
lazy val server = project
  .in(file("server"))
  .enablePlugins(UniversalPlugin, JavaAppPackaging)
  .settings(moduleName := "mu-smart-home-server")
  .settings(serverSettings)
  .dependsOn(protocol)
  .dependsOn(shared)
You can learn more about this plugin here.
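If the plugin is not yet on the build classpath, it can be added in project/plugins.sbt. A minimal sketch (the version shown is an illustrative 2019-era release, not necessarily the one the repository pins):

```scala
// project/plugins.sbt
// sbt-native-packager provides the UniversalPlugin and JavaAppPackaging
// plugins enabled on the server submodule above.
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.3.25")
```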
Now we are able to create a zip file that will contain the fat-jar and a run script, using a new sbt task provided by sbt-native-packager called universal:packageBin. Using this, we can create our Dockerfile:
FROM hseeberger/scala-sbt as builder
ARG PROJECT
WORKDIR /build
COPY project project
COPY build.sbt .
RUN sbt update
COPY . .
RUN sbt $PROJECT/universal:packageBin
FROM openjdk:8u181-jre-slim
ARG VERSION
ARG PROJECT
ENV version=$VERSION
COPY --from=builder /build/$PROJECT/target/universal/. .
RUN unzip -o ./mu-smart-home-$VERSION.zip
RUN chmod +x mu-smart-home-$VERSION/bin/mu-smart-home
ENTRYPOINT mu-smart-home-$version/bin/mu-smart-home
You can test it locally with:
docker build --build-arg PROJECT=server --build-arg VERSION=0.1.0 -t mu-smart-home/server-test:0.1.0 .
Once you have your image built, you can start it with:
docker run -p 9111:9111 mu-smart-home/server-test:0.1.0
Now it is time to create the infrastructure of our application in Google Cloud. To do this, we are going to use Deployment Manager and jinja templates.
For our smart-home project, we will need the following services:
- Service account that we will use to connect our cluster to pubsub.
- Pubsub topic where our geolocation data will be streamed.
- Kubernetes cluster where our smart home server will be deployed.
- A Kubernetes deployment and a Kubernetes service of our smart home server. The type of the service will be LoadBalancer.
- BigQuery table where the geolocation data is stored.
- Cloud Build to build and publish our docker image in the registry of our project.
All the resources above are described in the jinja files inside of the deployment folder in the smart-home repository. All of them are referenced by the cloudbuild.yaml when we run the following command:
gcloud deployment-manager deployments create mu-example --config cloudbuild.yaml
This creates a deployment called mu-example, described by the file cloudbuild.yaml. This file contains a collection of resources (listed above) that can be created or managed. In this case, the deployment contains seven resources, and all of them are managed by the Deployment Manager tool from Google Cloud.
This single configuration file is quite useful for our example because it provides the real values to our jinja templates.
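As a sketch of what that wiring looks like (the resource names and file paths here are illustrative, not the exact contents of the repository’s cloudbuild.yaml), a Deployment Manager config imports the jinja templates and supplies the concrete values that their placeholders expect:

```yaml
# Illustrative Deployment Manager config: imports a jinja template and
# fills in the values that its {{ ... }} placeholders reference.
imports:
  - path: pubsub.jinja

resources:
  - name: mu-example-topic
    type: pubsub.jinja
    properties:
      topic: mu-example-topic
      service_account: mu-example-service-account@<project>.iam.gserviceaccount.com
```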
From all the resources described above, let’s focus on two of them: the service account and the deployment of our Mu application.
A service-account is an identity that an instance or an application can use to run API requests on your behalf. This identity will be named mu-example-service-account, and it will be used in the deployment of our pubsub:
resources:
  - name: {{ topic }}
    type: pubsub.v1.topic
    properties:
      topic: {{ topic }}
    accessControl:
      gcpIamPolicy:
        bindings:
          - role: roles/pubsub.publisher
            members:
              - "serviceAccount:{{ service_account }}"
We are binding the publisher role to the service-account created in our deployment. This role allows any app using this service-account to publish, and only publish, events to this pubsub. Now, the question is: how can we tell our Mu smart-home app to use this service-account in order to publish the received events on a topic in our pubsub?
Well, the answer to this question is quite simple. From a service-account, you can create a private-key file in JSON format. The path to this file must be assigned to an environment variable called GOOGLE_APPLICATION_CREDENTIALS. Then, our Mu app will use this just-created variable to connect to pubsub with the right permissions. But now the problem is: how can we translate this into an automated deployment in gcloud and kubernetes?
The first step is to extract the private-key of the service-account when it is created:
- name: {{ service_account_key }}
  type: iam.v1.serviceAccounts.key
  properties:
    name: {{ name }}
    parent: $(ref.{{ service_account }}.name)
    privateKeyType: TYPE_GOOGLE_CREDENTIALS_FILE
    keyAlgorithm: KEY_ALG_RSA_2048
outputs:
  - name: privateKeyData
    value: $(ref.{{ service_account_key }}.privateKeyData)
As you can see under the outputs field, we are creating a variable called privateKeyData with the JSON private-key of the service-account. This value can then be used by the cloudbuild.yaml as a parameter for the deployment of other resources.
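One detail worth knowing: privateKeyData is returned base64-encoded, which is conveniently the same encoding a Kubernetes secret expects in its data field. If you ever need to inspect such a key locally, you can decode it yourself. A minimal sketch, with a dummy payload standing in for a real service-account key:

```shell
# Simulate a privateKeyData value: service-account keys arrive base64-encoded.
# (Dummy JSON payload here; a real one is the full service-account key file.)
PRIVATE_KEY_DATA="$(printf '%s' '{"type":"service_account"}' | base64)"

# Decode it back into a usable key.json, as the cluster will do via the secret.
printf '%s' "$PRIVATE_KEY_DATA" | base64 --decode > key.json
cat key.json   # → {"type":"service_account"}
```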
The second step will be the creation of a Kubernetes secret, so the value of the privateKeyData will be available inside of the cluster:
resources:
  - name: secret-deploy
    type: {{ cluster_type }}:{{ secret_collection }}
    metadata:
      dependsOn:
        - {{ properties['cluster-type-apps'] }}
    properties:
      apiVersion: v1
      kind: Secret
      namespace: {{ properties['namespace'] | default('default') }}
      metadata:
        name: pubsub-key
        deployment: {{ env['deployment'] }}
      type: Opaque
      data:
        key.json: {{ properties['key'] }}
We call our secret pubsub-key, and we store the privateKeyData in a file called key.json. Now we can assign this to an environment variable in the deployment of our Mu application:
spec:
  volumes:
    - name: google-cloud-key
      secret:
        secretName: pubsub-key
  containers:
    - image: {{ image }}
      name: mu-smart-home-server
      ports:
        - containerPort: 9111
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
What we are saying above is that we are using a volume called google-cloud-key that contains the secret pubsub-key. Then, we have a container called mu-smart-home-server, which exposes port 9111 and mounts the volume google-cloud-key at a specific path. Finally, we are assigning the needed environment variable to the file stored in the secret, which is the private key from our service-account. With this, our Mu application has the right permissions to publish to pubsub!
Now we can continue with the result of our deployment. As soon as it is finished, you can get the credentials for your Kubernetes cluster with:
gcloud container clusters get-credentials mu-cluster
With these credentials, you are able to connect to your cluster from your local machine using kubectl. For example, now you can check the pods that you have in your cluster with:
kubectl get pods
You should see something like this in your terminal:
NAME                                    READY   STATUS    RESTARTS   AGE
mu-example-deployment-5f7c78d9b-fww5r   1/1     Running   0          12m
mu-example-deployment-5f7c78d9b-m85g2   1/1     Running   0          12m
Those are the two replicas of our mu-smart-home server. Since we created a LoadBalancer service for this deployment, we can also run kubectl get services and extract the external IP and the port exposed. We can use these values to configure our mu-smart-home client locally. But before starting our client, let’s create our Dataflow job so the data streamed to our pubsub topic by the server in Kubernetes is also stored in a BigQuery table.
From the Dataflow menu, we can create a new job from a template. The template that we will use is the one called Cloud Pub/Sub Topic to BigQuery, as shown in the following image:
Once we have everything configured, we can run the job. Now, when we start our client and it begins to send data to the server running in the cluster, our data will also be stored in the table, so we can query it.
But we want this data as geographic coordinates so we can visualize it on a map. To do so, we can execute the following query:
SELECT
  ST_GeogPoint(long, lat) AS location,
  timestamp
FROM
  `<project>.mu_example_dataset.mu_example_table`
LIMIT 1000
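A small gotcha worth noting: ST_GeogPoint takes longitude first, then latitude, and the resulting GEOGRAPHY value renders in WKT with the same ordering. A quick sketch of what a row’s location ends up looking like (the coordinates are illustrative):

```shell
# ST_GeogPoint(long, lat): longitude comes FIRST. The resulting GEOGRAPHY
# value renders in WKT with the same ordering: POINT(long lat).
long="-3.7038"; lat="40.4168"   # illustrative coordinates
wkt="POINT($long $lat)"
echo "$wkt"   # → POINT(-3.7038 40.4168)
```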
In BigQuery Geo Viz we can execute this query and we will have the following map:
Finally, do not forget to remove all resources that you have created. You have to stop the job in Dataflow and remove the deployment with:
gcloud deployment-manager deployments delete mu-example
You have seen in this post how easy it is to deploy a Mu application into a Kubernetes cluster hosted in GCloud. Since Kubernetes is cloud-agnostic, you can do the same in any other provider, like Azure or AWS, using alternatives to PubSub such as Event Hub or Kinesis.
Have questions or comments? Join us in the Mu Gitter channel. Mu is possible thanks to an awesome group of contributors. As with all of the open source projects under the 47 Degrees umbrella, we’re an inclusive environment and always looking for new contributors, and we’re happy to provide guidance and mentorship on contributing to, or using, the library.