Orchestrating microservices with Cloud Workflows

The trend toward splitting a monolith into fine-grained, loosely-coupled microservices has its merits. It allows us to scale parts of an application more easily, and teams become more effective on their focused perimeter. However, in a chain or graph of services interacting with each other via message buses or other eventing mechanisms, it becomes difficult to understand when things start to break. Your business processes spanning those services are in limbo. Then begins the detective work to figure out how to get back on track.

Choreography: like a bunch of dancers on the floor composing a ballet. Loosely-coupled microservices compose business processes without really being aware of each other, casually interacting by receiving and sending messages or events.

Orchestration: more like the conductor of an orchestra, who directs musicians and their instruments to play each part. Using a higher-level solution that purposefully invokes and tracks each individual service enables developers to know the current state of a business process.

Both approaches have their pros and cons. The loosely-coupled aspects of choreography certainly enable agility, but business processes are harder to follow. Although orchestration adds a single point of failure, with its orchestrator tying all the pieces together, it brings clarity to the spaghetti of myriad microservices.

In addition to GCP’s existing messaging (Cloud Pub/Sub) and eventing solutions (Eventarc) for your service choreography, the newly launched product Cloud Workflows is tackling the orchestration approach. 

Cloud Workflows is a scalable fully-managed serverless system that automates and coordinates services, takes care of error handling and retries on failure, and tells you if the overall process has finished.

In this short video, during the “demo derby” at Google Cloud Next OnAir, I had the chance to present a demo of Cloud Workflows, with some concrete examples:

In this video, I started with the proverbial Hello World, using the YAML syntax for defining workflows:

- hello:
    return: "Hello from Cloud Workflows!"

I defined a hello step, whose sole purpose is to return a string as the result of its execution.

Next, I showed that workflow definitions can take arguments, and also return values thanks to more complex expressions:

main:
    params: [args]
    steps:
        - returnGreeting:
            return: ${"Hello " + args.first + " " + args.last}
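To deploy that definition and pass the arguments at execution time with the gcloud command-line SDK, the commands look along these lines (the workflow and file names are hypothetical, and at the time of writing the workflows commands may still live under the gcloud beta surface):

```shell
gcloud workflows deploy greeting --source=greeting.yaml
gcloud workflows run greeting --data='{"first":"Ada","last":"Lovelace"}'
```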

Cloud Workflows is able to invoke any HTTP-based service (and supports OAuth2 and OIDC), whether in Google Cloud or outside (on premises, or other servers). Here, I invoke 2 Cloud Functions:

- getRandomNumber:
    call: http.get
    args:
        url: https://us-central1-myprj.cloudfunctions.net/randomNumber
    result: randomNumber
- getNthPoemVerse:
    call: http.get
    args:
        url: https://us-central1-myprj.cloudfunctions.net/theCatPoem
        query:
            nth: ${randomNumber.body.number}
    result: randomVerse
- returnOutput:
    return: ${randomVerse.body}

The getRandomNumber step calls a function that returns a random number with an HTTP GET, and stores the result of that invocation in the randomNumber variable.
The getNthPoemVerse step calls another function that takes a query parameter, whose value comes from the randomNumber variable holding the result of the previous function invocation.
The returnOutput step then returns the resulting value.
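As a side note on the authentication support mentioned earlier: to call a service that requires authentication, you can add an auth block to the call arguments. Here's a sketch (the Cloud Run URL is hypothetical) of invoking a private service with an OIDC identity token:

```yaml
- callPrivateService:
    call: http.get
    args:
        url: https://my-private-service-abcdefghij-uc.a.run.app/verse
        auth:
            type: OIDC
    result: svcResponse
```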

My fourth example shows variable assignment and conditional switches in action:

- getRandomNumber:
    call: http.get
    args:
        url: https://us-central1-myprj.cloudfunctions.net/randomNumber
    result: randomNumber
- assign_vars:
    assign:
        - number: ${int(randomNumber.body.number)}
- conditionalSwitch:
    switch:
        - condition: ${number < 33}
          next: low
        - condition: ${number < 66}
          next: medium
    next: high
- low:
    return: ${"That's pretty small! " + string(number)}
- medium:
    return: ${"Hmm, okay, an average number. " + string(number)}
- high:
    return: ${"It's a big number! " + string(number)}

Reusing the random number function from the previous example, notice how variables are assigned, how to create a switch with multiple conditions, and how to redirect the execution of the workflow to different steps, depending on the outcome of the switch.

But there’s really more to this! You can double-check the syntax reference to see all the constructs you can use in your workflow definitions.


Cloud Workflows:
  • Orchestrate Google Cloud and HTTP-based API services into serverless workflows
  • Automate complex processes
  • Fully managed service requires no infrastructure or capacity planning
  • Fast scalability supports scaling down to zero and pay-per-use pricing model
In terms of features:
  • Reliable workflow execution
  • Built-in error handling
  • Passing variable values between workflow steps
  • Built-in authentication for Google Cloud products
  • Low latency of execution
  • Support for external API calls
  • Built-in decisions and conditional step executions
  • Cloud Logging
If you want to get started with Cloud Workflows, you can head over to this hands-on codelab from my colleague Mete Atamel. Learn more by watching this longer video by Product Manager Filip Knapik, who dives into Cloud Workflows. In upcoming articles, we’ll come back to Workflows in more detail, diving into some more advanced features, or showing how to migrate a choreographed example into an orchestrated one. So, stay tuned!

The Developer Advocacy Feedback Loop

For one of the closing keynotes of DevRelCon Earth 2020, I spoke about what I call the Developer Advocacy Feedback Loop. People often think about developer relations and advocacy as just being about external outreach. However, there’s more to it! Developer Advocates are here to represent users, developers, technical practitioners, to influence the roadmap and development of the services and products to suit their needs. That’s the internal advocacy that loops back into improving the products.

Without further ado, let me share with you the slide deck here:

And the video:

Let me paraphrase what I presented in this talk.

For the past 4 years, I’ve been a Developer Advocate for Google, focusing on Google Cloud, and especially our serverless solutions (like App Engine, Cloud Functions, Cloud Run). I fell into the magic potion of advocacy inadvertently, a long time ago, while working on an open source project. This project is the Apache Groovy programming language. I was leading the project, but at the same time, I was also evangelising it at events and through articles, and was trying to incorporate the feedback I was getting in the field back into the project. I was doing advocacy without really realizing it, like Mr Jourdain in Molière’s play who was speaking in prose without knowing it. But I really saw a loop, a feedback loop, in the process: in how you spread the word about technologies, but also in how you can listen to the feedback and improve your product.

If you’ve studied electronics, you might have seen such diagrams about the feedback loop. There’s something in input, something in output, but there’s a loop back that brings some output back into the input channel. To make the parallel with advocacy... Advocacy is not just a one-way monologue, it’s a conversation: you’re here to tell a story to your kids for example, but you listen to feedback from the audience on how to make your story even better. Not just how you tell the story (better intonation, pauses), but really improving the plot, the characters, the setting, everything, perhaps making up a totally different story in the end!

Let me start with a short disclaimer. If you ask this room to give a definition of developer relations, or developer advocacy, or evangelism (a term I avoid because of its connotations), you’ll get as many answers as there are attendees. I don’t claim I have THE ultimate definitions for these concepts and approaches. I don’t claim those things are the same, or are different. And anyway, there’s not just one way to do it, there’s a multitude of ways. There are many things we do the same way, but I’m sure there are many incredible things you do that I’m not even aware of, but that I’d like to learn more about! But I’ll tell you how I am doing developer advocacy, and where this feedback loop comes into play.

So, who are we? DevRel is not always the same thing everywhere, in every company. And there’s not just one way to do DevRel. 

Are we salespeople? Not necessarily, I don’t get any bucks when I indirectly help sign a new customer deal. My metrics are more about the number of developers reached, or views on my articles or videos, or Twitter impressions. 

So are we marketing people? Well, I have some similar metrics for sure, I advertise the products or company I represent, but my goal is that my audience (the technical practitioners) be successful, even if they end up not using my technology. I want my audience to even advocate themselves for those products if possible (if the product is good and makes sense for them).

Are we engineers? In my case, yes I am, I’m even in the Engineering org chart, and to show more empathy towards our engineer users, it’s easier if we’re engineers ourselves. We speak the same language. We’re part of the same community. We have the same tool belt. Also, as an engineer, I can even sometimes contribute to the products I talk about. But not being an engineer doesn’t mean you can’t succeed and be a good advocate! Empathy is really key in this role, probably more so than engineering chops.

Or are we PMs? In a previous life, in a small startup, I was actually wearing 2 hats: PM & DA. But it's tough to do two jobs like these at the same time. As a DA (without being a PM), with my contributions, my feedback from the field, from the community I advocate for, I do influence the roadmap of our products, for sure. But I’m only a part of the equation. However providing critical product feedback is super important in my job. That’s the key aspect of the developer advocacy feedback loop! 

Perhaps we’re just international travelers? We’re measured by the number of visa stamps in our passports? Ah well, maybe. Or maybe not: we try to be greener, and with COVID-19, things have changed recently! The pandemic redefines our job, our duties, our ways to communicate. There’s lots we can do in the comfort of our home office too.

Ultimately, we’re all different, but we all have myriads of ways to contribute and reach our goal. Some of us may be focusing more on awesome video tutorials, some on organizing hours-long hackathons, and others will be awesome beta-testers for our products, write crystal-clear code samples or SDKs, etc. There’s not just one way to be a great Developer Advocate. You don’t need to do it all. And we’re a team. So we complement each other with our respective strengths. And we work with others too, like marketing, sales, consulting, tech writers, leadership.

What do we do, what’s our goal? We are empowering our users to reach their goals. We want to make them successful. We’re enabling customer success. We’re driving mindshare in the field, in our communities. We are making our users happy! 

How do we make our community, our users, our customers be successful? There are many tools for that. Some of the most well-known tools that we can use are outward facing: it’s about external outreach (talks, articles, videos, etc.) But to make our communities more successful, we also need to get our products improved. That’s where we create the feedback loop, with our internal influence, thanks to some tools I’ll enumerate, we can help make the products better, by bringing our users’ feedback up the chain to the PMs, Product Leads, etc. Let me show you. 

Let me introduce you to our personas of my story, of my feedback loop.

At the top, you have the product leadership, the PMs, CxOs, the SWEs. At the bottom, that’s our users, our customers, our technical practitioners. And in the middle, in between, there’s you, the Developer Advocate.

But in a way, there are two teams. Here, in the white cloud, at the top, that’s your company.

But at the bottom, that’s your community, with your users. You’re not just part of the company, you’re also part of the community. You are the advocate for your users, representing them to the product leadership, so that their voice is being heard! 

That’s the external outreach. What some call evangelism, the outward part of the loop. You’re the voice of the company. You spread the word on your cool technology. You’re creating great demos, code samples, polished videos. You’re writing helpful articles, useful tutorials, readable documentation. You’re attending and presenting at events to talk about the products. You’re helping users succeed by answering questions on social media, StackOverflow, or other forums.

What makes it a feedback loop is this part. It’s not just a by-product of the external outreach. It’s an integral part of the advocacy work. There’s the obvious stuff like filing bugs, or being a customer zero by testing the product before it launches. But things like writing trip reports, friction logs, customer empathy sessions may be new to you. If you can, make it a habit to produce such artifacts. And you can list, and track, and report about all those feedback elements that you bring upstream, and check how it’s being enacted or not. 

Often people think about us mostly for the outreach part, the arrow going downward toward our community. They can think we’re just kind of marketing puppets. And I’ve seen conference organisers complaining they only got “evangelists” at their show, when they wanted “real engineers“ instead, working on the products or projects. But frankly, they are not necessarily always the best at explaining their own projects! Folks often forget that we’re here to make them successful, and report their feedback, their needs, to advocate for them, and to influence the decision makers to make better products that fill the needs of those users. Both parts are critical! And please pay attention to that feedback loop, to that arrow going back to the top of the slide, to the leadership. 

So let’s see some concrete examples of the things you can put in place to provide feedback, and show that DevRel is important and has a strong impact. 

To make developers happy, you need to remove as much friction as possible. You want the developer experience to be as smooth as possible. You might need to work with UX designers and PMs directly for that. But you can also report about your findings, where you saw friction by writing a friction log. Last week, my colleague Emma spoke about this at DevRelCon Earth, and another great colleague, Aja, wrote about friction logging on DevRel.net a while ago. Great resources to check out!

I’m going to show you a real friction log. There’s some metadata about the environment, date, user name, scenario title, etc. You’re reporting about some concrete use case you were trying to implement (an app you were building, a new API you were trying to use, etc.) You document all the steps you followed, and tell what worked or not, how you expected things to work out. This document will be shared broadly via an alias which pings most PMs, tech leads, etc. So it’s very visible. 

But the key thing here is the color coding aspect. You show where there’s friction, where there’s frustration, where you’d quit if you were a real user. But also, it’s super important to highlight what worked well, what surprised you, what delighted you. It’s not just about the negative things. 

And the last trick to make it effective: add comments, tagging key stakeholders (PMs, Tech Leads, etc), so they really acknowledge the problem. Create associated bug requests, and track them to check if progress is made.

My colleague Zack even developed an application for creating friction vlogs. Video friction logs! With a video, you can show concretely your frustration (but perhaps don’t swear too much). A video shows where you struggle, where you lose time. You can navigate to various sections in the video, and annotate those sections with the green / orange / red color coding scheme. The tool also creates a classical written friction log document as well. I found that application pretty neat, to be honest.

You can apply the same approach to other kinds of reporting activities. We often write reports for our trips, events, meetups, customer engagements. In particular, although we’re not sales people, we’re trying to show that we also have an impact on sales. Customers love having DevRel people come and show cool stuff! And we can collect and show the feedback coming from the field to the leadership. It’s not just us sharing our own impressions and ideas, it’s really coming from someone else’s mouth, so it has more weight in the conversation. I’d like to highlight our internal advocacy reporting: we have someone on the team who collects all our bug reports (and includes them in bug hotlists), all our friction logs, our trip reports, and who actively tracks how this feedback is taken into account. It’s a very effective way of showing the leadership that we do have impact, beyond the usual metrics. And by the way, even those DevRel product feedback reports make use of the color coding we have in friction logs. So it’s a very familiar thing for all our engineering team.

Another interesting thing we’re running is what we call customer empathy sessions, a concept invented by my colleague Kim. Gather various PMs, SWEs, DevRel people, potentially customers but not mandatorily, in the same room (or virtually nowadays) and craft some concrete scenarios of something you’d like them to build in small groups (but where you know there’s gonna be lots of friction). With teams of 3 or more, each one has a role: a driver, a scribe, and a searcher. Have them do the task. Then compare notes at the end. It’s a bit like creating new Critical User Journeys that have not been addressed, that exhibit a lot of friction. But this time the engineers, the PMs, will really feel the very same frustration our customers can potentially feel when they can’t accomplish their tasks. The various teams often work in silos, on a particular aspect, and avoid certain paths (when you know you shouldn’t click somewhere, you won’t do it, you’ll use the other path you know works). But customer empathy sessions are here to show what our users have to go through in real scenarios, beyond a handful of critical journeys. In summary, feel the pain, and show empathy toward your customers! Really, I can’t stress this enough: empathy is key here, and a real driver for positive change.

We can do scalable advocacy, by creating things like videos that are broadcast to many thousands of watchers, and that have a long shelf life, less ephemeral than a conference talk. But sometimes, it’s also good to do things that actually don’t scale. Helping one single person can make a big difference: at a conference after my talk, I had a long conversation with an attendee who had a particular need. I onboarded them on our early access program for a new product, which seemed to be what they needed. They could provide key feedback to our PMs and engineers. And they helped us get that new product ready with a real use case. And the next year, the attendee was a key customer, who even came on stage to talk about the product. So I both won a new customer and a new advocate for that product. So the hallway track at events is very important. And that’s the kind of feedback signal I’m missing in these times of pandemic.

Another approach is office hours: you set up some time slots in your calendar, and anyone can book time with you. That’s a great way to get feedback, and see what problems users are facing. I haven’t tried that myself, as I’m a bit shy, and afraid someone would ask questions on topics I don’t know much about! But that’s very effective, and I have several colleagues doing that, and who are learning along the way. 

Sometimes, your community, your users, will highlight a gap in your product portfolio. And it might give you ideas for a product that would delight those people, who could become customers if you had that product. That’s actually how some of my colleagues went on to create totally new products, for example for gaming companies, or for secret management. On another occasion, as I had strong ideas on how a new product runtime should look, I went on to design and prototype an API that our users would use. Somehow, it’s a bit like being the change you want to see in the world! And the API I designed, further improved with the engineering team, is now an API our customers are using today.

Time to wrap up. Often, the proof is in the pudding. It’s not just about our intuitions or own personal experience. You need to gather feedback, in particular concrete customer feedback, to prove that you’re right. And when it’s a customer with some money to spend, usually product leadership listens.

Sometimes, it’s all roses and bloom! Our feedback, ideas, features are implemented! Woohoo! Success!

There’s the ideal world where we indeed influence products, but sometimes, we also hit a brick wall, a dead end: we’re not at the helm, and our feedback is not taken into account. Our companies can be big, work in silos, and it’s sometimes a struggle to find the right people who are able to listen to us, and to get change enacted. Be resilient, don’t let it affect you personally, but return to the charge if you really think it’s important for your community!

Remember: we’re in it together! It’s a team effort. Let’s make our users happy! And how do we make them happy? By making great products, with great user and developer experience. By showing empathy toward our users, putting ourselves in their shoes, listening to their feedback, and letting that feedback be heard up above to improve our products, by advocating for our users. That’s where the feedback loop closes. Thanks for your attention.

Running Micronaut serverlessly on Google Cloud Platform

Last week, I had the pleasure of presenting Micronaut in action on Google Cloud Platform, via a webinar organized by OCI. Particularly, I focused on the serverless compute options available: Cloud Functions, App Engine, and Cloud Run.

Here are the slides I presented. However, the real meat is in the demos which are not displayed on this deck! So let’s have a closer look at them, until the video is published online.

On Google Cloud Platform, you have three solutions when you want to deploy your code in a serverless fashion (i.e. hassle-free infrastructure, automatic scaling, pay-as-you-go):

  • For event-oriented logic that reacts to cloud events (a new file in Cloud Storage, a change in a database document, a Pub/Sub message), you can go with a function. 

  • For a web frontend, a REST API, a mobile API backend, also for serving static assets for single-page apps, App Engine is going to do wonders. 

  • But you can also decide to containerize your applications and run them as containers on Cloud Run, for all kinds of needs. 

Both Cloud Functions and App Engine provide a Java 11 runtime (the latest LTS version of Java at the time of writing), but with Cloud Run, in a container, you can of course package whichever Java runtime environment you want. 

And the good news is that you can run Micronaut easily on all those three environments!

Micronaut on Cloud Functions

HTTP functions

Of those three solutions, Cloud Functions is the one that received a special treatment, as the Micronaut team worked on a dedicated integration with the Functions Framework API for Java. Micronaut supports both types of functions: HTTP and background functions.

For HTTP functions, you can use a plain Micronaut controller. Your usual controllers can be turned into an HTTP function. 

package com.example;

import io.micronaut.http.annotation.*;

@Controller("/")
public class HelloController {
    @Get(uri = "/", produces = "text/plain")
    public String index() {
        return "Micronaut on Cloud Functions";
    }
}

The Micronaut Launch tool even allows you to create a dedicated scaffolded project with the right configuration (i.e. the right Micronaut integration JAR and the Gradle configuration, including for running functions locally on your machine). Pick the Application type in the Launch configuration, and add the google-cloud-function module.

In build.gradle, Launch will add the Functions Framework’s invoker dependency, which allows you to run your functions locally on your machine (it’s also the framework that is used in the cloud to invoke your functions, i.e. the same portable and open source code):
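The generated snippet isn't reproduced here, but it should look roughly like the following in build.gradle (the version number is illustrative; check the build generated by Launch for the exact one):

```groovy
dependencies {
    // dedicated configuration used by the runFunction task to launch the function locally
    invoker("com.google.cloud.functions.invoker:java-function-invoker:1.0.0-beta1")
}
```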


It adds the Java API of the Functions Framework, as compileOnly, as it’s provided by the platform when running in the cloud:
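Again as an illustrative sketch (the version number may differ in your generated build):

```groovy
dependencies {
    // provided by the Cloud Functions platform at runtime, hence compileOnly
    compileOnly("com.google.cloud.functions:functions-framework-api:1.0.1")
}
```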


And Micronaut’s own GCP Functions integration dependency:
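Something along these lines (these are Micronaut's GCP HTTP function integration coordinates; in a Launch-generated build the version is typically managed for you):

```groovy
dependencies {
    // Micronaut's integration layer for Google Cloud HTTP functions
    implementation("io.micronaut.gcp:micronaut-gcp-function-http")
}
```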


And there’s also a new task called runFunction, which allows you to run your function locally:

./gradlew runFunction

If you decide to use Maven, the same dependencies are applied to your project, but there’s a dedicated Maven plugin that is provided to run functions locally.

./mvnw function:run

Then, to deploy your HTTP function (you can learn more about the topic in the documentation), with the gcloud command-line SDK you will use a command similar to the following one (depending on the region, or the size of the instance you want to use):

gcloud functions deploy hello \
    --region europe-west1 \
    --trigger-http --allow-unauthenticated \
    --runtime java11 --memory 512MB \
    --entry-point io.micronaut.gcp.function.http.HttpFunction

Note that Cloud Functions can build your functions from sources when you deploy, or it can deploy a pre-built shadow JAR (as configured by Launch).

Background functions

For background functions, in Launch, select the Micronaut serverless function type. Launch will create a class implementing the BackgroundFunction interface from the Functions Framework API, but extending the GoogleFunctionInitializer class from Micronaut’s function integration, which takes care of all the usual wiring (like dependency injection). This function by default receives a Pub/Sub message, but there are other types of events that you can receive, like when a new file is uploaded in Cloud Storage, a new or changed document in the Firestore NoSQL document database, etc.

package com.example;

import com.google.cloud.functions.*;
import io.micronaut.gcp.function.GoogleFunctionInitializer;
import javax.inject.*;
import java.util.*;

public class PubSubFunction extends GoogleFunctionInitializer
        implements BackgroundFunction<PubSubMessage> {

    @Inject LoggingService loggingService;

    @Override
    public void accept(PubSubMessage pubsubMsg, Context context) {
        // Pub/Sub message payloads are base64-encoded
        String textMessage = new String(Base64.getDecoder().decode(pubsubMsg.data));
        loggingService.logMessage(textMessage);
    }
}

class PubSubMessage {
    String data;
    Map<String, String> attributes;
    String messageId;
    String publishTime;
}

@Singleton
class LoggingService {
    void logMessage(String txtMessage) {
        // log the received message, for example to standard output
        System.out.println(txtMessage);
    }
}

When deploying, you’ll define a different trigger: here, it’s a Pub/Sub message, so you’ll use the --trigger-topic TOPIC_NAME flag to tell the platform you want to receive messages on that topic.

For deployment, the gcloud command would look as follows:

gcloud functions deploy pubsubFn \
    --region europe-west1 \
    --trigger-topic TOPIC_NAME \
    --runtime java11 --memory 512MB \
    --entry-point com.example.PubSubFunction

Micronaut on App Engine

Micronaut deploys fine as well on App Engine. I wrote about it in the past already. If you’re using Micronaut Launch, just select the Application type. App Engine allows you to deploy the standalone runnable JARs generated by the configured shadow JAR plugin. But if you want to easily stage your application deliverable, to run the application locally, to deploy, you can also use the Gradle App Engine plugin.

For that purpose, you should add the following build script section in build.gradle: 

buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath 'com.google.cloud.tools:appengine-gradle-plugin:2.3.0'
    }
}

And then apply the plugin with:

apply plugin: 'com.google.cloud.tools.appengine'

Before packaging the application, there’s one extra step you need to go through, which is to add the special App Engine configuration file: app.yaml. You only need to add one line, unless you want to further configure the instance types, specify some JVM flags, point at static assets, etc. But otherwise, you only need this line in src/main/appengine/app.yaml:

runtime: java11
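For instance, a slightly more elaborate app.yaml (with hypothetical values) tweaking the instance class and the JVM flags could look like:

```yaml
runtime: java11
instance_class: F2
entrypoint: 'java -XX:MaxRAMPercentage=80 -jar app.jar'
```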

Then, stage your application deliverable with:

./gradlew appengineStage

Change into the staging directory, and you can deploy with the plugin or with the gcloud SDK:

cd build/staged-app/
gcloud app deploy

During the demonstration, I showed a controller that was accessing some data from the Cloud Firestore NoSQL database, listing some pet names:

package com.example;

import java.util.*;
import com.google.api.core.*;
import com.google.cloud.firestore.*;
import io.micronaut.http.annotation.*;

@Controller("/")
public class WelcomeController {
    @Get(uri = "/", produces = "text/html")
    public String index() {
        return "<h1>Hello Google Cloud!</h1>";
    }

    @Get(uri = "/pets", produces = "application/json")
    public String pets() throws Exception {
        StringBuilder petNames = new StringBuilder().append("[");
        FirestoreOptions opts = FirestoreOptions.getDefaultInstance();
        Firestore db = opts.getService();
        ApiFuture<QuerySnapshot> query = db.collection("pets").get();
        QuerySnapshot querySnapshot = query.get();
        List<QueryDocumentSnapshot> documents = querySnapshot.getDocuments();
        for (QueryDocumentSnapshot document : documents) {
            // assuming each pet document has a "name" field
            petNames.append("\"").append(document.getString("name")).append("\", ");
        }
        return petNames.append("]").toString();
    }
}

Micronaut on Cloud Run

Building a Micronaut container image with Jib

In a previous article, I talked about how to try Micronaut with Java 14 on Google Cloud. I was explaining how to craft your own Dockerfile, instead of the one then generated by default by Micronaut Launch (which now uses openjdk:14-alpine). But instead of fiddling with Docker, in my demos, I thought it was cleaner to use Jib. Jib is a tool to create cleanly layered container images for your Java applications, without requiring a Docker daemon. There are plugins available for Gradle and Maven; I used the Gradle one by configuring my build.gradle with:

plugins {
    id "com.google.cloud.tools.jib" version "2.4.0"
}

And by configuring the jib task with:

jib {
    to {
        image = "gcr.io/serverless-micronaut/micronaut-news"
    }
    from {
        image = "openjdk:14-alpine"
    }
}

The from/image line defines the base image to use, and the to/image points at the location in Google Cloud Container Registry where the built image will be pushed. We can then point Cloud Run at this image for deployment:

gcloud config set run/region europe-west1
gcloud config set run/platform managed
./gradlew jib
gcloud run deploy news --image gcr.io/serverless-micronaut/micronaut-news --allow-unauthenticated

Bonus points: Server-Sent Events

In the demo, I showed the usage of Server-Sent Events. Neither Cloud Functions nor App Engine support any kind of streaming, as there’s a global frontend server in the Google Cloud infrastructure that buffers requests and responses. But Cloud Run is coming up with streaming support (HTTP/2 streaming, gRPC streaming, and server-sent events, but not yet WebSocket streaming). The feature is in alpha, but is coming to beta soon; if you want to get access to it, feel free to fill in this form.

So that was a great excuse to play with Micronaut’s SSE support. I went with a slightly modified example from the documentation, to emit a few string messages a second apart:

package com.example;

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.*;
import io.micronaut.http.sse.Event;
import io.micronaut.scheduling.TaskExecutors;
import io.micronaut.scheduling.annotation.ExecuteOn;
import io.reactivex.Flowable;
import org.reactivestreams.Publisher;

@Controller("/news")
public class NewsController {

    @ExecuteOn(TaskExecutors.IO)
    @Get(produces = MediaType.TEXT_EVENT_STREAM)
    public Publisher<Event<String>> index() {
        String[] ids = new String[] { "1", "2", "3", "4", "5" };
        return Flowable.generate(() -> 0, (i, emitter) -> {
            if (i < ids.length) {
                emitter.onNext(Event.of("Event #" + i));
                try { Thread.sleep(1000); } catch (Throwable t) {}
            } else {
                emitter.onComplete();
            }
            return ++i;
        });
    }
}

Then I accessed the /news controller and was happy to see that the response was not buffered and that the events were showing up every second.

Apart from getting on board with this alpha feature of Cloud Run (via the form mentioned above, to get my GCP project whitelisted), I didn’t have to do anything special to my Micronaut setup from the previous section. No further configuration required: it just worked out of the box.


The great benefit of using Micronaut on Google Cloud Platform’s serverless solutions is that, thanks to Micronaut’s ahead-of-time compilation techniques, it starts and runs super fast, and consumes much less memory than other Java frameworks. Further down the road, you can also take advantage of GraalVM for even faster startup and lower memory usage. Although my examples were in Java, you can also use Kotlin or Groovy if you prefer.

Video: getting started with Java on Google Cloud Functions

For the 24 hours of talks by Google Cloud DevRel, I recorded my talk about the new Java 11 runtime for Google Cloud Functions. I wrote about this runtime in this article, showing for example how to run Apache Groovy functions, and I also covered it on the GCP blog and the Google Developers blog.

In this video, I give a quick explanation of the serverless approach and the various serverless options provided by Google Cloud, and then I dive into the various shapes Java functions can take (HTTP and background functions) and the interfaces you have to implement when authoring a function. I also do various demonstrations, deploying Java functions, Groovy functions, and Micronaut functions!

Deploying serverless functions in Groovy on the new Java 11 runtime for Google Cloud Functions

Java celebrates its 25th anniversary! Earlier this year, the Apache Groovy team released the big 3.0 version of the programming language. GMavenPlus, the Maven plugin for compiling Groovy code, was published in version 1.9, which works with Java 14. And today, Google Cloud opens up the beta of the Java 11 runtime for Cloud Functions. What about combining them all?

I’ve been working for a bit on the Java 11 runtime for Google Cloud Functions (Google Cloud’s Function-as-a-Service platform: pay-as-you-go, with hassle-free, transparent scaling), and in this article, I’d like to highlight that you can also write and deploy functions with alternative JVM languages like Apache Groovy.

So today, you’re going to:
  • Write a simple Groovy 3.0 function,
  • Compile it with Maven 3.6 and the GMavenPlus 1.9 plugin, 
  • Deploy and run the function on the Cloud Functions Java 11 runtime!
Note: if you want to try this at (work from?) home, you will need an account on Google Cloud. You can easily create a free account and benefit from $300 of cloud credits to get started (including free quotas for many products). You will also need to create a billing account, but for the purpose of this tutorial, you should stay within the free quota (so your credit card shouldn’t be billed). Then, head over to the console.cloud.google.com cloud console to create a new project, and navigate to the Cloud Functions section to enable the service for your project.

Let’s get started! So what do we need? A pom.xml file, and a Groovy class! 

Let’s start with the pom.xml file, and what you should add to your build file. First of all, since I’m using Groovy as my function implementation language, I’m going to use GMavenPlus for compilation. So in the build/plugins section, I configure the plugin as follows:
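The original snippet didn’t survive conversion, so here is a sketch of what that GMavenPlus plugin declaration typically looks like (the version number is an assumption on my part; check the latest release):

```xml
<plugin>
  <groupId>org.codehaus.gmavenplus</groupId>
  <artifactId>gmavenplus-plugin</artifactId>
  <version>1.9.0</version>
  <executions>
    <execution>
      <goals>
        <!-- add src/main/groovy and src/test/groovy as source roots -->
        <goal>addSources</goal>
        <goal>addTestSources</goal>
        <!-- compile the Groovy sources during the Maven lifecycle -->
        <goal>compile</goal>
        <goal>compileTests</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```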


That way, when I do an mvn compile, my Groovy sources are compiled as part of the compilation lifecycle of Maven.

But I’m adding a second plugin: the Functions Framework plugin! That’s a Maven plugin to run functions locally on your machine, before deploying to the cloud, so that you can have a local developer experience that’s easy and fast. The Functions Framework is actually an open source project on GitHub. It’s a lightweight API to write your functions with, and it’s also a function runner / invoker. What’s interesting is that this also means you are not locked into the Cloud Functions platform: you can run your function locally, or anywhere else you can run a JAR file on a JVM! Great portability!

So let’s configure the Functions Framework Maven plugin:
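The configuration block was lost from this post; a sketch of the Functions Framework Maven plugin setup looks something like this (the version is an assumption; functionTarget is the configuration flag pointing at the function class):

```xml
<plugin>
  <groupId>com.google.cloud.functions</groupId>
  <artifactId>function-maven-plugin</artifactId>
  <version>0.9.1</version>
  <configuration>
    <!-- fully qualified name of the function class to run locally -->
    <functionTarget>mypackage.HelloWorldFunction</functionTarget>
  </configuration>
</plugin>
```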


I specify a configuration flag to point at the function I want to run. But we’ll come back in a moment on how to run this function locally. We need to write it first!

We need two more things in our pom.xml, a dependency on Groovy, but also on the Functions Framework Java API.
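The dependency snippets were also lost in conversion; in the dependencies section of the pom.xml, they would look something like this (version numbers are assumptions, so adjust to current releases — note the Functions Framework API is scoped provided, since the runtime supplies it in the cloud):

```xml
<dependency>
  <groupId>org.codehaus.groovy</groupId>
  <artifactId>groovy-all</artifactId>
  <version>3.0.4</version>
  <type>pom</type>
</dependency>
<dependency>
  <groupId>com.google.cloud.functions</groupId>
  <artifactId>functions-framework-api</artifactId>
  <version>1.0.1</version>
  <scope>provided</scope>
</dependency>
```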



So you’re all set for the build, let’s now create our function in src/main/groovy/mypackage/HelloWorldFunction.groovy.

There are two flavors of functions: HTTP functions and background functions. Background functions react to cloud events, like a new file stored in Cloud Storage, a new data update in the Firestore database, etc., whereas HTTP functions directly expose a URL that can be invoked via an HTTP call. That’s the kind I want to create, to write a symbolic “Hello Groovy World” message in your browser window.

package mypackage

import com.google.cloud.functions.*

class HelloWorldFunction implements HttpFunction {
    void service(HttpRequest request, HttpResponse response) {
        response.writer.write "Hello Groovy World!"
    }
}
Yes, that’s all there is to it! You implement a Functions Framework interface and its service() method. You work in a request / response mode: request and response parameters are passed to your method, and you can access the response’s writer to write back to the browser or client that invoked the function.

Now it’s time to run the function locally to see if it’s working. Just type the following command in your terminal:

mvn function:run

After a moment, and some build logs further, you should see something like:

INFO: Serving function...
INFO: Function: mypackage.HelloWorldFunction
INFO: URL: http://localhost:8080/

With your browser (or curl), you can browse this local URL, and you will see the hello world message appearing. Yay!

You can also deploy with the Maven plugin, but here we’ll use the gcloud command-line tool to deploy the function:

gcloud functions deploy helloFunction \
--region europe-west1 \
--trigger-http --allow-unauthenticated \
--runtime java11 \
--entry-point mypackage.HelloWorldFunction \
--memory 512MB

After a little moment, the function is deployed, and you’ll notice a URL created for your function, looking something like this:
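The exact URL didn’t survive conversion. Cloud Functions HTTP triggers follow the pattern https://REGION-PROJECT_ID.cloudfunctions.net/FUNCTION_NAME, so with the region used above, and a hypothetical project ID, it would resemble:

```
https://europe-west1-my-project-id.cloudfunctions.net/helloFunction
```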


The very same function now runs in the cloud! A pretty Groovy function! This function is portable: you can invoke it with the Functions Framework invoker, anywhere you can run a JVM.

Going further, I encourage you to have a look at the Functions Framework documentation on GitHub to learn more about it. Here, you deployed the function source and the pom.xml file, as the function is built directly in the cloud. But it’s also possible to compile and create a JAR locally, and deploy that instead. That’s interesting, for example, if you want to use another build tool, like Gradle. And this will be the purpose of another upcoming article!

© 2012 Guillaume Laforge | The views and opinions expressed here are mine and don't reflect the ones from my employer.