Deploying serverless functions in Groovy on the new Java 11 runtime for Google Cloud Functions

Java is celebrating its 25th anniversary! Earlier this year, the Apache Groovy team released the big 3.0 version of the programming language, and GMavenPlus (the Maven plugin for compiling Groovy code) was published in version 1.9, with support for Java 14. And today, Google Cloud opens up the beta of the Java 11 runtime for Cloud Functions. What about combining them all?

I’ve been working for a bit on the Java 11 runtime for Google Cloud Functions (the Function-as-a-Service platform of Google Cloud: pay-as-you-go, hassle-free, transparent scaling), and in this article, I’d like to highlight that you can also write and deploy functions with alternative JVM languages like Apache Groovy.

So today, you’re going to:
  • Write a simple Groovy 3.0 function,
  • Compile it with Maven 3.6 and the GMavenPlus 1.9 plugin, 
  • Deploy and run the function on the Cloud Functions Java 11 runtime!
Note: If you want to try this at home (while working from home?), you will need an account on Google Cloud. You can easily create a free account and benefit from $300 of cloud credits to get started (including free quotas for many products). You will also need to create a billing account, but for the purpose of this tutorial, you should stay within the free quota (so your credit card shouldn’t be billed). Then, head over to the cloud console at console.cloud.google.com to create a new project, and navigate to the Cloud Functions section to enable the service for your project.

Let’s get started! So what do we need? A pom.xml file, and a Groovy class! 

Let’s start with the pom.xml file, and what you should add to your build. First of all, since I’m using Groovy as my function implementation language, I’m going to use GMavenPlus for compilation. So in the build/plugins section, I configure the plugin as follows:

      <plugin>
        <groupId>org.codehaus.gmavenplus</groupId>
        <artifactId>gmavenplus-plugin</artifactId>
        <version>1.9.0</version>
        <executions>
          <execution>
            <id>groovy-compile</id>
            <phase>process-resources</phase>
            <goals>
              <goal>addSources</goal>
              <goal>compile</goal>
            </goals>
          </execution>
        </executions>
        <dependencies>
          <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>3.0.4</version>
            <scope>runtime</scope>
            <type>pom</type>
          </dependency>
        </dependencies>
      </plugin>


That way, when I run mvn compile, my Groovy sources are compiled as part of the compilation lifecycle of Maven.

But I’m adding a second plugin: the Functions Framework plugin! That’s a Maven plugin to run functions locally on your machine before deploying them to the cloud, so that you can have a local developer experience that’s easy and fast. The Functions Framework is actually an open source project on GitHub. It’s a lightweight API to write your functions with, and it’s also a function runner / invoker. What’s interesting is that this also means you are not locked into the Cloud Functions platform: you can run your function locally, or anywhere else you can run a JAR file on a JVM. Great portability!

So let’s configure the Functions Framework Maven plugin:

      <plugin>
        <groupId>com.google.cloud.functions</groupId>
        <artifactId>function-maven-plugin</artifactId>
        <version>0.9.2</version>
        <configuration>
          <functionTarget>mypackage.HelloWorldFunction</functionTarget>
        </configuration>
      </plugin>

I specify a configuration element to point at the function I want to run. But we’ll come back in a moment to how to run this function locally. We need to write it first!

We need two more things in our pom.xml: a dependency on Groovy, and one on the Functions Framework Java API.

    <dependency>
      <groupId>com.google.cloud.functions</groupId>
      <artifactId>functions-framework-api</artifactId>
      <version>1.0.1</version>
      <scope>provided</scope>
    </dependency>

    <dependency>
      <groupId>org.codehaus.groovy</groupId>
      <artifactId>groovy-all</artifactId>
      <version>3.0.4</version>
      <type>pom</type>
    </dependency>

So you’re all set for the build. Let’s now create our function in src/main/groovy/mypackage/HelloWorldFunction.groovy.

There are two flavors of functions: HTTP functions and background functions. The latter react to cloud events, like a new file stored in Cloud Storage, a new data update in the Firestore database, etc., whereas the former directly expose a URL that can be invoked via an HTTP call. That’s the kind I want to create, to write a symbolic “Hello Groovy World” message in your browser window.

package mypackage

import com.google.cloud.functions.*

class HelloWorldFunction implements HttpFunction {
    void service(HttpRequest request, HttpResponse response) {
        response.writer.write "Hello Groovy World!"
    }
}

Yes, that’s all there is to it! You implement a Functions Framework interface and its service() method. You have a request / response model (request and response parameters are passed to your method), and you can access the response writer to write back to the browser or client that invoked the function.
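As a quick aside, here’s my own variation (not part of the original sample): the HttpRequest interface also gives you access to query parameters, with getFirstQueryParameter() returning an Optional, which is handy for providing a default value:

package mypackage

import com.google.cloud.functions.*

class HelloNameFunction implements HttpFunction {
    void service(HttpRequest request, HttpResponse response) {
        // getFirstQueryParameter() returns an Optional<String>,
        // so we can fall back to a default when ?name= is absent
        def name = request.getFirstQueryParameter("name").orElse("Groovy World")
        response.writer.write "Hello ${name}!"
    }
}

Calling this function with ?name=Alice would then answer “Hello Alice!”.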

Now it’s time to run the function locally to see if it’s working. Just type the following command in your terminal:

mvn function:run

After a moment, and a few build logs later, you should see something like:

INFO: Serving function...
INFO: Function: mypackage.HelloWorldFunction
INFO: URL: http://localhost:8080/

With your browser (or curl), you can access this local URL, and you will see the hello world message appear. Yay!
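For example, from another terminal:

curl http://localhost:8080/
Hello Groovy World!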

You can also deploy with the Maven plugin, but here I’ll use the gcloud command-line tool to deploy the function:

gcloud functions deploy helloFunction \
--region europe-west1 \
--trigger-http --allow-unauthenticated \
--runtime java11 \
--entry-point mypackage.HelloWorldFunction \
--memory 512MB

After a little moment, the function is deployed, and you’ll notice that a URL has been created for your function, looking something like this:

https://europe-west1-myprojectname.cloudfunctions.net/helloFunction

The very same function now runs in the cloud! A pretty Groovy function! And this function is portable: you can invoke it with the Functions Framework invoker anywhere you can run a JVM.
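As a sketch of what that looks like (the invoker JAR version and the artifact name below are hypothetical; check the Functions Framework documentation for the exact download and flags):

java -jar java-function-invoker-1.0.2.jar \
  --classpath target/hello-function-1.0.jar \
  --target mypackage.HelloWorldFunction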

Going further, I encourage you to have a look at the Functions Framework documentation on GitHub to learn more about it. Here, you deployed the function sources and the pom.xml file, as the function is built directly in the cloud. But it’s also possible to compile and create a JAR locally, and deploy that instead. That’s interesting, for example, if you want to use another build tool, like Gradle. And that will be the purpose of an upcoming article!

The Pic-a-Daily serverless workshop, now in video

With my partner in crime, Mete Atamel, we ran two editions of our “Pic-a-Daily” serverless workshop. It’s an online, hands-on workshop, where developers get their hands on the serverless products provided by Google Cloud Platform:
  • Cloud Functions — to develop and run functions, small units of glue logic, reacting to events from your cloud projects and services
  • App Engine — to deploy web apps, for web frontends, or API backends
  • Cloud Run — to deploy and scale containerised services

The theme of the workshop is to build a simple photo-sharing application (hence the play on words, with a picture a day) with those serverless products, but along the way, developers also get to use other services like:
  • Pub/Sub — as a messaging fabric to let events flow between your services
  • Firestore — for storing picture metadata in the scalable document database
  • Cloud Storage — to store the image blobs
  • Cloud Scheduler — to run a service on a schedule (i.e. cron as a service)
  • Cloud Vision API — a machine learning API to make sense of what's in your pictures

The workshop is freely accessible on our codelabs platform: “Pic-a-Daily” serverless workshop. You can follow this hands-on workshop on your own, at your own pace. There are 4 codelabs:
  • The first one lets you build a function that responds to events as new pictures are uploaded into Cloud Storage, invoking the Vision API to understand what is in the picture, and storing some picture metadata information in Firestore.
  • The second lab uses a Cloud Run service that also reacts to new files stored in Cloud Storage, but creates thumbnails of the pictures.
  • A third lab also takes advantage of Cloud Run, running on a schedule thanks to Cloud Scheduler, to create a collage of the most recent pictures.
  • Last but not least, the fourth lab will let you build a web frontend and backend API on Google App Engine.

We have a dedicated GitHub repository where you can check out the code of the various functions, apps and containers, and you can have a look at the slide deck introducing the workshop and the technologies used.

And now, the videos of the first edition are also available on YouTube!

The first part covers Cloud Functions and Cloud Run with the first two labs:


The second part covers Cloud Run and App Engine:



Covid learning: Machine Learning applied to music generation with Magenta

I missed this talk from Alexandre Dubreuil when attending Devoxx Belgium 2019, but I had the chance to watch it while doing my elliptical bike run, confined at home. It’s about applying machine learning to music generation, thanks to the Magenta project, which is based on TensorFlow.


I like playing music (a bit of piano & guitar) once in a while, so as a geek, I’ve also always been interested in computer-generated music. And it’s hard to generate music that actually sounds pleasant to the ear! Alexandre explains that it’s hard to encode the rules a computer could follow to play music, but that machine learning is pretty interesting, as it’s able to learn complex functions, and thus to understand what sounds good.

He then covers the various types of music representations: MIDI scores, which are quite light in terms of data, and audio waves, which are on the heavy end, as there are thousands of data points representing the position of the wave along the time axis. While MIDI represents notes of music, audio waves really represent the sound physically, as a wave of data points.

A note on the following part of the article: I’m not an ML / AI expert, so I’m just trying to explain what I actually understood :-)

For MIDI, Recurrent Neural Networks (RNNs) make sense, as they work on sequences for input and output, and have the ability to remember past information. That’s great, since you find recurring patterns in music (series of chords, main song lines, etc.)

RNNs tend to progressively forget those past events, though, so these networks often use Long Short-Term Memory (LSTM) cells to keep some of their memory fresh.

Variational Auto-Encoders (VAEs) are a pair of networks: one reduces the input down to a representation with fewer dimensions, and the other re-expands that representation back to the original size. So VAEs try to generate back something that’s close to what was initially given as input, and in doing so learn to reproduce similar patterns.

For audio waves, Magenta comes with a Convolutional Neural Network (CNN) called WaveNet, used for example for voice generation on devices like Google Home. There are also WaveNet Auto-Encoders that generate audio waves: they can learn to generate the actual sound of instruments, create totally new instruments, or mix sounds. Alexandre shows some cool demos of weird instruments made of cat sounds and musical instruments.

Magenta comes with various RNNs for drums, melody, polyphony and performance, along with auto-encoders for WaveNet and MIDI too. There’s also a Generative Adversarial Network (GAN) for audio waves; GANs are often used for generating things like pictures, for example.

The demos in this presentation are quite cool, whether creating new instruments (cat + musical instrument) or generating sequences of notes (drum scores, melody scores).

Alexandre ends the presentation with pointers to things like music data sets, as neural networks further need to learn about style and performance, and need plenty of time to learn from existing music and instrument sounds, so as to create something nice to hear! He also briefly shows some other cool demos using TensorFlow.js, which runs in the browser, so you can more easily experiment with music generation.

Also, Alexandre wrote the book "Hands-On Music Generation with Magenta", so if you want to dive deeper, there's much to read and experiment with!

Covid learning: HTML semantic tags

We all know about HTML 5, right? Well, I knew about some of the new semantic tags, like header / nav / main / article / aside / footer, but I still fall back on tons of divs and spans instead. So, as I want to refresh this blog at some point, it was time I revised those semantic tags. Let’s take the little time we have during confinement to learn something!

There are likely plenty of videos on the topic, but this one was in my top results, so I watched: HTML & CSS Crash Course Tutorial #6 - HTML 5 Semantics. It’s part of a series of videos on HTML & CSS by the Net Ninja, and this particular episode covers the semantic tags:

 

So you have a main tag that wraps the meaty content of your page (i.e. not things like the header / footer / navigation). Inside, you put articles, which wrap each piece of content (a blog post, a news article, etc.). Sections tend to be for grouping other information, like a list of resources or some contact info. Asides can be related content, like similar articles, or something somewhat related to your current article (perhaps a short bio of a character you’re mentioning in your article?). In the header, you’d put the title of your site and the navigation; the footer will contain your contact info.

Here's a basic structure of how those tags are organised:
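Something like this minimal skeleton (my own sketch, reconstructed from the description above):

<body>
  <header>
    <h1>Site title</h1>
    <nav>...</nav>
  </header>
  <main>
    <article>
      <h2>A blog post</h2>
      <p>The meaty content...</p>
    </article>
    <section>A list of resources...</section>
    <aside>Related articles...</aside>
  </main>
  <footer>Contact info...</footer>
</body>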


After explaining those tags, the author does a live demo, building up a web page with all of them. So it was a good refresher on how to use those tags, rather than nesting div after div!

Covid learning: Modern Web Game Development

Next in my series of videos watched while doing sports at home: this talk from my colleague Tom Greenaway! It’s about modern web game development, and was recorded last year at Google I/O.


There are big gaming platforms, like Sony’s PlayStation, Microsoft’s Xbox, and the Nintendo Switch, as well as plenty of mobile games on Android and iOS. But the web itself, within your browser, is also a great platform for developing and publishing games! It has everything that’s needed for good games!

Tom explains that you need a functioning game (one that runs well on device, looks good, and sounds good). And today, most of the game engines you can use for developing games actually provide an HTML5 target. You need users, and you need a good monetisation strategy. The web already provides all the right APIs for nice graphics, sound mixing, etc., and it’s a very open platform for spreading virally.

It was pretty interesting to hear about one of the key advantages of the web: URLs! You can be pretty creative with URLs: a game can create a URL for a given game session, for a particular state in a game, or for inviting others to join.

In addition to game engines with a web target, Tom also mentions that it’s possible to port games from C/C++, for example, to JavaScript in the browser, with a tool like Emscripten. Even things like OpenGL 3D rendering can be translated into WebGL. But he also advises looking at WebAssembly, as it’s really become the new approach to native performance in the browser. He mentioned Construct, which is basically the Box2D game engine, but optimised for WebAssembly.

For 3D graphics on the web, the future lies in WebGPU, a more modern take on WebGL and OpenGL. For audio, there’s the Web Audio API and audio worklets, which even allow you to create effects in JavaScript or WebAssembly. And there are other useful APIs for game development, like the Gamepad API, the Gyroscope API, etc.

For getting users, ensure that your game is fun, of course, but also make it fast, in particular fast to load, to avoid losing users before you’ve even got them into the game! You also need to think about the user acquisition loop: make the game load and start fast, so players enter the action right away, really get pulled into the game, and then have a good reason to share this cool new game with others. Of course, being featured on game sites & libraries helps (it gives a big boost), but it’s not necessarily what will make you earn the most in the long run. Tom also shares various examples of games that were successful and worked well.


 