Testing Java 8 snippets on the new App Engine Java 8 runtime

A new Java 8 runtime for Google App Engine standard is coming soon and is currently in alpha testing. You can request to join the alpha program if you want to try it out for yourself. But I wanted to let anyone play with it easily, to see how well the Java 8 APIs work, and also to try some Java 8 syntax. So here's a web console where you can do just that!

To be precise, it's actually my good old Groovy Web Console, where people can write, execute and save Apache Groovy snippets. It's a special version, in fact, as it's built on Java 8, uses the invoke dynamic flavor, and... drum roll... it's using the upcoming "Parrot" parser which adds the Java 8 syntax constructs to the Groovy grammar. So not only can you try Java snippets, but it's also a great opportunity to try the future Groovy parser that's going to be released in Apache Groovy 2.5 or 3.0 (still to be decided).

A meetup about Java 8 on Google App Engine standard

Also, for those who live in or around Paris, we have the chance of welcoming Ludovic Champenois, an engineer working on App Engine, who will be in France and will be speaking at this GDG Cloud meetup hosted by Xebia, which takes place on Tuesday, April 4th, just on the eve of Devoxx France!

So if you want to learn more about Java 8 on App Engine, please sign up!

I will also be presenting on Google Home, the Google Assistant, API.AI, and Google Cloud Functions to host the logic of your very own bots and agents. It's based on the presentation I gave at Cloud Next 2017 in San Francisco. If you want to learn more about…

Happy Pi Day! Google Home helps you learn the digits of Pi

You know what? It's Pi Day today! Well, if you follow the American date format, today is 3.14, a nice approximation of Pi. Last year, in a past life, I had already played with Pi, but this year my awesome colleagues (Ray, Sandeep, Francesc, Ian) have been working on some very cool demos around Pi, with "Pi delivery", at https://pi.delivery/

You can transform the digits of Pi into a nice melody, show a D3.js-based visualisation of the transitions between digits, stream the Pi digits, and more. And you can learn how it's all been built on the Google Cloud Platform.

Ray pinged me to see if we could also create an assistant you can invoke on Google Home, to ask for digits of Pi, as I had recently played with Google Home, API.AI and Cloud Functions! So I played with the idea: I created a new Cloud Function that invokes Pi delivery's Web API, designed an assistant in API.AI, and submitted this assistant to the Google Assistant.

You'll be able to ask your Google Home:
Ok Google, talk to Pi Digit Agent.
What is the 34th digit of Pi?
And it will tell you that it's 2.
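Just to double-check that answer, you can count positions in a textual prefix of Pi (the digits below are hard-coded for the check, not fetched from the API):

```javascript
// A quick sanity check of that answer against a hard-coded prefix of Pi.
// Counting the way the agent does, the leading '3' and the decimal point
// each occupy one position, so the "34th digit" is simply the character
// at index 34 of the textual constant.
const piPrefix = '3.14159265358979323846264338327950288419';
const answer = piPrefix[34];
console.log(answer); // '2'
```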

How did I do that? Let's first have a look at the Cloud Function, implemented in JavaScript / Node.js, starting with its package.json:

{
  "name": "pi-assistant",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node index.js",
    "deploy": "rm -rf node_modules; gcloud alpha functions deploy digit --project digit-of-pi-2017-assistant --trigger-http --stage-bucket gs://digit-of-pi-2017-assistant/"
  },
  "description": "Ask for the n-th digit of Pi!",
  "main": "index.js",
  "repository": "",
  "author": "Guillaume Laforge",
  "dependencies": {
    "actions-on-google": "^1.0.7",
    "node-fetch": "^1.6.3"
  }
}
The key things here are the dependencies: I'm using the actions-on-google Node module to interact more easily with API.AI and the Assistant, and I'm using node-fetch to interact with the Pi Delivery's REST API. 

Let's now have a look at the code of our exported digit function in index.js:

const ApiAiAssistant = require('actions-on-google').ApiAiAssistant;
const fetch = require('node-fetch');

function nthDigit(assistant) {
    let rank = parseInt(assistant.getArgument('rank').replace(/,/g, ''));
    console.log(`Asked for digit ${rank}`);

    // 0 -> 3, 1 -> ., 2 -> 1, 3 -> 4, 4 -> 1, ...
    // let's return 3 for 0th / 1st, and the digit otherwise,
    // to follow natural human numbering and the fact the dot is accounted for

    let start = rank < 2 ? 0 : rank;

    // call the Pi delivery REST API for a single digit at that position
    fetch(`https://api.pi.delivery/v1/pi?start=${start}&numberOfDigits=1`)
        .then(response => response.json())
        .then(data => {
            assistant.ask(`Digit ${rank} of Pi is ${data.content}. Do you want to know a different digit of Pi? Or say cancel to exit.`);
        }).catch(err => {
            assistant.ask('The ways of Pi are mysterious... Try again, or with another digit? Or say cancel to exit.');
        });
}

exports.digit = function (request, response) {
    let assistant = new ApiAiAssistant({request, response});
    let actionMap = new Map();
    actionMap.set('nth-digit-intent', nthDigit);
    assistant.handleRequest(actionMap);
};
It's pretty straightforward: we export a digit function that creates an API.AI assistant, to which we feed an action map pointing at our main intent for asking for digits. I extract the parameter (i.e. the rank of the digit I'm interested in), call the REST API with a fetch() call, and then return the result with the assistant.ask() call.
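Two small details of that handler can be sketched in isolation (the helper names below are mine, for illustration, not part of the deployed function): the spoken rank may come back from API.AI with thousands separators, and ranks 0 and 1 both map back to the leading 3.

```javascript
// Spoken numbers can come back from API.AI with thousands separators ("1,234"),
// so strip the commas before parsing.
function parseRank(raw) {
  return parseInt(raw.replace(/,/g, ''), 10);
}

// Pi as text is "3.1415...": index 0 is '3', index 1 is '.', index 2 is '1'.
// Ranks 0 and 1 both return the leading 3; beyond that, the rank maps
// directly to the index because the decimal point takes up one position.
function startOffset(rank) {
  return rank < 2 ? 0 : rank;
}

console.log(parseRank('1,234'));  // 1234
console.log(startOffset(1));      // 0
console.log(startOffset(34));     // 34
```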

In a nutshell, on API.AI's side, my welcome intent greets you, telling you how to use the assistant:
And then the main intent, whose webhook points at my Cloud Function, does the heavy lifting:

You can try it in the emulator:

After that, once the webhook is properly configured, I published my action through the integrations pane and the Cloud API console. I'll skip the details here, but you can read more on how to distribute your actions.

So again, Happy Pi Day! And hopefully, if you have a Google Home device and when my assistant is officially published, you'll be able to learn more about the digits of Pi!

And let's finish with a video of the assistant running live on my Google Home!

Extending the Google Assistant with Actions on Google

Last week, the Google Cloud Next 2017 conference took place in San Francisco, and I had the pleasure of co-presenting a session on "Extending the Google Assistant with Actions on Google" with Brad Abrams, product manager on the Assistant technology at Google.

The Google Assistant is the conversational user interface that helps you get things done in your world. Actions on Google let you build on this assistance, while your integrations can help you engage users through Google Home on Pixel, Android and many other devices that connect with Google Assistant. In this session, we'll share the latest innovations behind the Google Assistant and how you can leverage those technologies and best practices for Voice User Interface design to build your own custom extensions to Google Assistant.

In this presentation, for our demonstration, we used API.AI and Google Cloud Functions (announced as beta during the keynote) to implement our assistant, whose job was to help attendees learn more about the conference schedule and see which talks they'd be interested in attending.

You can watch the video of the talk on YouTube already:

And you can have a closer look at the slides below:

Google Cloud Endpoints in General Availability

Today, the general availability of Google Cloud Endpoints was announced!

Endpoints is the Google Cloud Platform solution for Web API management: it lets you easily protect and secure your API and monitor it, without overhead, and even allows you to implement your API with any language or framework you want.

I've spoken about Endpoints a few times already, at Devoxx Belgium, Nordic APIs summit, and APIDays Paris. And you can see the recording of my Nordic APIs appearance, if you want to learn more about Cloud Endpoints:

A tight develop/test loop for developing bots with API.AI, the Google Cloud Function emulator, Node.js and ngrok

For Google Cloud Next and Devoxx France, I’m working on a new talk showing how to build a conference assistant, to whom you’ll be able to ask questions like “what is the next talk about Java”, “when is Guillaume Laforge speaking”, “what is the topic of the ongoing keynote”, etc.

For that purpose, I'm developing the assistant using API.AI. It's a "conversational user experience platform" recently acquired by Google, which allows you to define various "intents" corresponding to the kinds of questions / sentences that a user can say, and various "entities" relating to the concepts dealt with (in my example, I have entities like "talk" or "speaker"). API.AI lets you define sentences pretty much in free form, derives what the various entities in those sentences must be, and is able to actually understand more sentences than you've given it. Pretty clever machine learning and natural language processing at play. In addition to that, you also get support for several spoken languages (English, French, Italian, Chinese and more), and integrations with key messaging platforms like Slack, Facebook Messenger, Twilio, or Google Home. It also offers various SDKs so you can integrate it easily in your website, mobile application, or backend code (Java, Android, Node, C#...).

When implementing your assistant, you'll need to implement some business logic. You need to retrieve the list of speakers and the list of talks from a backend or REST API. You also need to translate the search for a talk on a given topic into the proper query to that backend. In order to implement such logic, API.AI offers a webhook interface: you instruct API.AI to point at your own URL that will take care of the request and reply adequately with the right data. To facilitate the development, you can take advantage of the SDKs I mentioned above, or you can also just parse and produce the right JSON payloads yourself. To implement my logic, I decided to use Google Cloud Functions, Google's recent serverless, function-based offering. Cloud Functions is currently in alpha, and supports JavaScript through Node.js.
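If you'd rather skip the SDKs, the "parse and produce the right JSON payloads" route can be sketched like this (the field names follow API.AI's v1 webhook format: result.parameters in the request, speech and displayText in the response; the handler logic itself is just a placeholder):

```javascript
// SDK-less webhook logic: take the parsed API.AI request body,
// read the entity API.AI extracted, and build the JSON reply.
function handleApiAiRequest(apiAiRequest) {
  const city = apiAiRequest.result.parameters.city;
  const answer = `You asked about ${city}.`;
  return { speech: answer, displayText: answer };
}

// A request shaped like API.AI's webhook payload:
const reply = handleApiAiRequest({
  result: { action: 'ask-for-the-time', parameters: { city: 'Paris' } }
});
console.log(reply.speech);
```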

For brevity's sake, I'll focus on a simple example today. I'm going to create a small agent that replies to queries like "what time is it in Paris" (or some other city).

In API.AI, we're going to create a "city" entity with a few city names:

Next, we're creating the "ask-for-the-time" intent, with a sentence like "what time is it in Paris?":

Quick remark: when creating my intent, I didn't use the built-in @sys.geo-city data type, I just created my own city entity, but I was pleasantly surprised that it recognized the city name as a potential @sys.geo-city type. Neat!

With our intent and entity ready, we enable the “fulfillment”, so that API.AI knows it should call our own business logic for replying to that query:

And that’s in the URL field that we’ll be able to point at our business logic developed as a Cloud Function. But first, we’ll need to implement our function.

After having created a project in the Google Cloud console (you might need to request being whitelisted, as at the time of this writing the product is still in alpha), I create a new function, which I'm simply calling 'agent'. I define the function as being triggered by an HTTP call, with its source code inline.

For the source of my function, I’m using the “actions-on-google” NPM module, that I’m defining in the package.json file:

"name": "what-time-is-it",
"version": "0.0.1",
"private": true,
"scripts": {
"start": "node server.js",
"deploy": "gcloud alpha functions deploy agent --project what-time-is-it-157614 --trigger-http --stage-bucket gs://what-time-is-it-157614/"
"description": "An agent to know the time in various cities around the world.",
"main": "index.js",
"repository": "",
"author": "Guillaume Laforge",
"dependencies": {
"actions-on-google": "^1.0.5"

And the implementation looks like the following:

var ApiAiAssistant = require('actions-on-google').ApiAiAssistant;

const ASK_TIME_INTENT = 'ask-for-the-time';
const CITY = 'city';

function whatTimeIsIt(assistant) {
  var city = assistant.getArgument(CITY);
  if (city === 'Paris')
    assistant.ask("It's noon in Paris.");
  else if (city === 'London')
    assistant.ask("It's 11 a.m. in London.");
  else
    assistant.ask("It's way to early or way too late in " + city);
}

exports.agent = function(request, response) {
  var assistant = new ApiAiAssistant({request: request, response: response});
  var actionMap = new Map();
  actionMap.set(ASK_TIME_INTENT, whatTimeIsIt);
  assistant.handleRequest(actionMap);
};
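Before wiring anything to API.AI, the branching of that handler can be checked locally with a stub assistant object. Here is a self-contained sketch: the handler body is repeated, and the stub mimics only the two methods the handler actually calls.

```javascript
// The handler from the article, repeated so this snippet is self-contained.
function whatTimeIsIt(assistant) {
  var city = assistant.getArgument('city');
  if (city === 'Paris')
    assistant.ask("It's noon in Paris.");
  else if (city === 'London')
    assistant.ask("It's 11 a.m. in London.");
  else
    assistant.ask("It's way to early or way too late in " + city);
}

// A stub standing in for ApiAiAssistant: it only mimics getArgument()
// and ask(), capturing the answer instead of replying over HTTP.
let lastAnswer;
const stubAssistant = {
  getArgument: (name) => 'London',
  ask: (message) => { lastAnswer = message; }
};

whatTimeIsIt(stubAssistant);
console.log(lastAnswer); // "It's 11 a.m. in London."
```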

Once my function is created, after 30 seconds or so, the function is actually deployed and ready to serve its first requests. I update the fulfillment details to point at the URL of my newly created cloud function. Then I can use the API.AI console to make a first call to my agent:

You can see that my function replied it was noon in Paris. When clicking the “SHOW JSON” button, you can also see the JSON being exchanged:

"id": "20ef54be-ee01-4fbe-9e6e-e73305046601",
"timestamp": "2017-02-03T22:22:08.822Z",
"result": {
"source": "agent",
"resolvedQuery": "what time is it in paris?",
"action": "ask-for-the-time",
"actionIncomplete": false,
"parameters": {
"city": "Paris"
"contexts": [
"name": "_actions_on_google_",
"parameters": {
"city": "Paris",
"city.original": "Paris"
"lifespan": 100
"metadata": {
"intentId": "b98aaae0-838a-4d55-9c8d-6adef4a4d798",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "true",
"intentName": "ask-for-the-time"
"fulfillment": {
"speech": "It's noon in Paris.",
"messages": [
"type": 0,
"speech": "It's noon in Paris."
"data": {
"google": {
"expect_user_response": true,
"is_ssml": false,
"no_input_prompts": []
"score": 1
"status": {
"code": 200,
"errorType": "success"
"sessionId": "4ba74fa2-e462-4992-9587-2439b32aad3d"

So far so good, it worked. But as you flesh out your agent, you're going to keep testing manually, then updating your code and redeploying the function, several times over. Although the deployment time of a Cloud Function is pretty fast (30 seconds or so), as you make even simple tweaks to your function's source code, those 30-second waits add up, and you'll quickly feel like you're wasting time waiting for deployments.

What if you could run your function locally on your machine, point API.AI at your local machine through its fulfillment configuration, make changes to your code live, and test them right away without any redeployment? We can! We're going to do so by using the Cloud Functions emulator, as well as the very nice ngrok tool, which allows you to expose your local host to the internet.

Let's install the Cloud Functions emulator, as shown in its documentation:

npm install -g @google-cloud/functions-emulator
Earlier, we entered the code of our function (index.js and package.json) directly in the Google Cloud Platform web console, but we will now retrieve them locally, to run them from our own machine. We will also need to install the actions-on-google npm module for our project to run:
npm install actions-on-google
Once the emulator is installed (you’ll need at least Node version 6.9), you can define your project ID with something like the following (update to your actual project ID):
functions config set projectId what-time-is-it-157614
And then we can start the emulator, as a daemon, with:
functions start
We deploy the function locally with the command:
functions deploy agent --trigger-http
If the function deployed successfully on your machine, you should see the following:

Notice that your function is running on localhost at:

We want this function to be accessible from the web. That's where our ngrok magic bullet will help us. Once you've signed up for the service and installed it on your machine, you can run ngrok with:
ngrok http 8010
The command will expose your service on the web, and allow you to have a public, accessible https endpoint:

In the API.AI interface, you must update the fulfillment webhook endpoint to point to that https URL: https://acc0889e.ngrok.io. But you must also append the path shown when running on localhost: what-time-is-it-157614/us-central1/agent, so the full path to indicate in the fulfillment URL will be: https://acc0889e.ngrok.io/what-time-is-it-157614/us-central1/agent
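The resulting URL can be composed mechanically from its parts (a trivial sketch, using the example values from this post):

```javascript
// The emulator exposes functions at /<projectId>/<region>/<functionName>,
// so the public fulfillment URL is just the ngrok host plus that path.
function fulfillmentUrl(ngrokHost, projectId, region, functionName) {
  return `https://${ngrokHost}/${projectId}/${region}/${functionName}`;
}

const url = fulfillmentUrl('acc0889e.ngrok.io', 'what-time-is-it-157614',
                           'us-central1', 'agent');
console.log(url);
```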

Then I use the API.AI console to send another test request, for instance "what is the time in San Francisco". And it calls my local function:

And in the ngrok local console, you can indeed see that it’s my local function that has been called in the emulator:

Nice, it worked! We used the Cloud Functions emulator, in combination with ngrok, to route fulfillment requests to our local machine. However, the astute reader might have noticed that my bot's answer contained a typo: I wrote "to early" instead of "too early". Damn! I'll want to fix that locally, in a tight feedback loop, rather than having to redeploy my function every time. How do I go about it? I just open my IDE or text editor, fix the typo, and there you go: nothing to redeploy locally, the change is already applied and live. If I make a call in the API.AI console, the typo is fixed:

Thanks to the Cloud Functions emulator and ngrok, I can develop locally on my machine, with a tight develop / test loop, without having to deploy my functions all the time. The changes are taken into account live: no need to restart the emulator, or deploy the function locally. Once I’m happy with the result, I can deploy for real. Then, I’ll have to remember to change the webhook fulfillment URL to the real live cloud function.