Google Cloud Endpoints in General Availability

Today, the general availability of Google Cloud Endpoints was announced.

Endpoints is the Google Cloud Platform solution for Web API management: it lets you easily protect and secure your API, monitor it without overhead, and implement it with any language or framework you want.

I've spoken about Endpoints a few times already, at Devoxx Belgium, the Nordic APIs summit, and APIDays Paris. You can watch the recording of my Nordic APIs talk if you want to learn more about Cloud Endpoints:

A tight develop/test loop for developing bots with API.AI, the Google Cloud Functions emulator, Node.js and ngrok

For Google Cloud Next and Devoxx France, I’m working on a new talk showing how to build a conference assistant, to whom you’ll be able to ask questions like “what is the next talk about Java”, “when is Guillaume Laforge speaking”, “what is the topic of the ongoing keynote”, etc.


For that purpose, I’m developing the assistant using API.AI. It’s a “conversational user experience platform” recently acquired by Google, which allows you to define various “intents” corresponding to the kinds of questions or sentences a user can say, and various “entities” representing the concepts dealt with (in my example, I have entities like “talk” or “speaker”). API.AI lets you define sentences pretty much in free form, derives which parts of those sentences correspond to which entities, and is actually able to understand more sentences than the ones you’ve given it. Pretty clever machine learning and natural language processing at play. In addition to that, it supports several spoken languages (English, French, Italian, Chinese and more) and offers integrations with key messaging platforms like Slack, Facebook Messenger, Twilio, or Google Home. It also offers various SDKs so you can integrate it easily in your website, mobile application, or backend code (Java, Android, Node, C#...).
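As an example of the latter, here’s a minimal Node.js sketch using the apiai npm module to send a text query to an agent (the client access token and session id below are just placeholders for your own values):

var apiai = require('apiai');

// the client access token of your API.AI agent (placeholder)
var app = apiai('YOUR_CLIENT_ACCESS_TOKEN');

// send a text query to the agent, within an arbitrary conversation session
var request = app.textRequest('what time is it in Paris?', {
    sessionId: 'my-session-id'
});

request.on('response', function(response) {
    // the agent's answer, as defined in the intent or returned by the webhook
    console.log(response.result.fulfillment.speech);
});

request.on('error', function(error) {
    console.log(error);
});

request.end();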


When implementing your assistant, you’ll need to implement some business logic. You need to retrieve the list of speakers and the list of talks from a backend or REST API, and you need to translate the search for a talk on a given topic into the proper query to that backend. To implement such logic, API.AI offers a webhook interface: you instruct API.AI to point at your own URL, which takes care of handling the request and replies with the right data. To facilitate development, you can take advantage of the SDKs I mentioned above, or you can simply parse and produce the right JSON payloads yourself. To implement my logic, I decided to use Google Cloud Functions, Google’s recent serverless, function-based offering. Cloud Functions is currently in alpha, and supports JavaScript through Node.js.
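If you go the “raw JSON” route rather than use an SDK, a minimal HTTP-triggered function could look roughly like this sketch (the field names follow the API.AI v1 webhook format, and the “city” parameter is just the one from my upcoming example):

exports.rawWebhook = function(request, response) {
    // the intent's action name and the parameters extracted by API.AI
    var action = request.body.result.action;
    var city = request.body.result.parameters['city'];
    console.log('Handling action: ' + action);

    // reply with the speech / displayText fields API.AI expects from a webhook
    response.json({
        speech: 'You asked for the time in ' + city,
        displayText: 'You asked for the time in ' + city,
        source: 'what-time-is-it-webhook'
    });
};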


For brevity’s sake, I’ll focus on a simple example today. I’m going to create a small agent that replies to queries like “what time is it in Paris” (or some other city).


In API.AI, we’re going to create a “city” entity with a few city names:

Next, we’re creating the “ask-for-the-time” intent, with a sentence like “what time is it in Paris?”:

Quick remark: when creating my intent, I didn’t use the built-in @sys.geo-city data type, I just created my own “city” entity, but I was pleasantly surprised that API.AI recognized the city name as a potential @sys.geo-city type. Neat!


With our intent and entity ready, we enable the “fulfillment”, so that API.AI knows it should call our own business logic for replying to that query:

It’s in that URL field that we’ll point at our business logic, developed as a Cloud Function. But first, we need to implement the function.


After creating a project in the Google Cloud console (you might need to request being whitelisted, as the product is still in alpha at the time of this writing), I create a new function that I simply call ‘agent’. I define the function as being triggered by an HTTP call, with its source code provided inline.


For the source of my function, I’m using the “actions-on-google” NPM module, which I declare as a dependency in the package.json file:


{
  "name": "what-time-is-it",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node server.js",
    "deploy": "gcloud alpha functions deploy agent --project what-time-is-it-157614 --trigger-http --stage-bucket gs://what-time-is-it-157614/"
  },
  "description": "An agent to know the time in various cities around the world.",
  "main": "index.js",
  "repository": "",
  "author": "Guillaume Laforge",
  "dependencies": {
    "actions-on-google": "^1.0.5"
  }
}

And the implementation looks like the following:

var ApiAiAssistant = require('actions-on-google').ApiAiAssistant;

// intent and parameter names, as defined in the API.AI console
const ASK_TIME_INTENT = 'ask-for-the-time';
const CITY = 'city';

// handler for the 'ask-for-the-time' intent
function whatTimeIsIt(assistant) {
  // retrieve the 'city' parameter extracted by API.AI
  var city = assistant.getArgument(CITY);
  if (city === 'Paris')
    assistant.ask("It's noon in Paris.");
  else if (city === 'London')
    assistant.ask("It's 11 a.m. in London.");
  else
    assistant.ask("It’s way to early or way too late in " + city);
}

// entry point of the HTTP-triggered Cloud Function
exports.agent = function(request, response) {
  var assistant = new ApiAiAssistant({request: request, response: response});
  // map each intent to its handler function
  var actionMap = new Map();
  actionMap.set(ASK_TIME_INTENT, whatTimeIsIt);
  assistant.handleRequest(actionMap);
};

Once my function is created, after 30 seconds or so it is actually deployed and ready to serve its first requests. I update the fulfillment details to point at the URL of my newly created Cloud Function, and I can then use the API.AI console to make a first call to my agent:


You can see that my function replied it was noon in Paris. When clicking the “SHOW JSON” button, you can also see the JSON being exchanged:


{
  "id": "20ef54be-ee01-4fbe-9e6e-e73305046601",
  "timestamp": "2017-02-03T22:22:08.822Z",
  "result": {
    "source": "agent",
    "resolvedQuery": "what time is it in paris?",
    "action": "ask-for-the-time",
    "actionIncomplete": false,
    "parameters": {
      "city": "Paris"
    },
    "contexts": [
      {
        "name": "_actions_on_google_",
        "parameters": {
          "city": "Paris",
          "city.original": "Paris"
        },
        "lifespan": 100
      }
    ],
    "metadata": {
      "intentId": "b98aaae0-838a-4d55-9c8d-6adef4a4d798",
      "webhookUsed": "true",
      "webhookForSlotFillingUsed": "true",
      "intentName": "ask-for-the-time"
    },
    "fulfillment": {
      "speech": "It's noon in Paris.",
      "messages": [
        {
          "type": 0,
          "speech": "It's noon in Paris."
        }
      ],
      "data": {
        "google": {
          "expect_user_response": true,
          "is_ssml": false,
          "no_input_prompts": []
        }
      }
    },
    "score": 1
  },
  "status": {
    "code": 200,
    "errorType": "success"
  },
  "sessionId": "4ba74fa2-e462-4992-9587-2439b32aad3d"
}


So far so good, it worked. But as you flesh out your agent, you’re going to keep testing manually, updating your code, and redeploying the function, over and over. Although deploying a Cloud Function is pretty fast (30 seconds or so), those waits add up as you make even simple tweaks to your function’s source code, and you’ll quickly feel like you’re wasting time waiting for deployments. What if you could run your function locally on your machine, point API.AI at your local machine through its fulfillment configuration, make changes to your code live, and test them right away without any redeployment? We can! We’re going to do so by using the Cloud Functions emulator, as well as the very nice ngrok tool, which allows you to expose your local host to the internet. Let’s install the Cloud Functions emulator, as shown in its documentation:

npm install -g @google-cloud/functions-emulator
Earlier, we entered the code of our function (index.js and package.json) directly in the Google Cloud Platform web console, but we will now retrieve those files locally, to run them from our own machine. We will also need to install the actions-on-google npm module for our project to run:
npm install actions-on-google
Once the emulator is installed (you’ll need at least Node version 6.9), you can define your project ID with something like the following (update to your actual project ID):
functions config set projectId what-time-is-it-157614
And then we can start the emulator, as a daemon, with:
functions start
We deploy the function locally with the command:
functions deploy agent --trigger-http
If the function deployed successfully on your machine, you should see the following:



Notice that your function is running on localhost at:

http://localhost:8010/what-time-is-it-157614/us-central1/agent
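At this point, you can already exercise the local function by sending it a request shaped like the JSON API.AI posts to a webhook, for instance with a trimmed-down payload such as this one (a hypothetical test payload; the real requests sent by API.AI contain more fields, as seen in the JSON shown earlier):

curl -X POST "http://localhost:8010/what-time-is-it-157614/us-central1/agent" \
  -H "Content-Type: application/json" \
  -d '{"sessionId": "test-session", "result": {"action": "ask-for-the-time", "resolvedQuery": "what time is it in paris?", "parameters": {"city": "Paris"}, "contexts": []}}'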
We want this function to be accessible from the web. That’s where our ngrok magic bullet will help us. Once you’ve signed up for the service and installed it on your machine, you can run ngrok with:
ngrok http 8010
The command exposes your service on the web, giving you a publicly accessible https endpoint:



In the API.AI interface, you must update the fulfillment webhook endpoint to point to that https URL: https://acc0889e.ngrok.io. But you must also append the path shown when running on localhost: what-time-is-it-157614/us-central1/agent, so the full path to indicate in the fulfillment URL will be: https://acc0889e.ngrok.io/what-time-is-it-157614/us-central1/agent



Then I use the API.AI console to send another test request, for instance “what is the time in San Francisco”. This time, it’s calling my local function:



And in the ngrok local console, you can indeed see that it’s my local function that has been called in the emulator:

Nice, it worked! We used the Cloud Functions emulator, in combination with ngrok, to route fulfillment requests to our local machine. However, the astute reader might have noticed that my bot’s answer contained a typo: I wrote “to early” instead of “too early”. Damn! I’d like to fix that locally, in a tight feedback loop, rather than having to redeploy my function every time. How do I go about it? I just open my IDE or text editor and fix the typo, and that’s it: there is nothing to redeploy locally, the change is already applied and live. If I make a call in the API.AI console, the typo is fixed:

Thanks to the Cloud Functions emulator and ngrok, I can develop locally on my machine, with a tight develop / test loop, without having to deploy my function all the time. The changes are taken into account live: no need to restart the emulator or redeploy the function locally. Once I’m happy with the result, I can deploy for real, and I’ll just have to remember to change the webhook fulfillment URL back to the real live Cloud Function.
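For that real deployment, I can reuse the deploy script declared earlier in the package.json:

npm run deploy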

My favorite Cloud Next sessions

The schedule for Google Cloud Next was unveiled this week, and there are lots of interesting sessions to attend. With the many parallel tracks, it's difficult to make a choice, but I wanted to highlight some of the talks I'd like to watch!

Google Cloud Platform is a pretty rich platform, with many options for your compute needs. How do you choose which one is best for your use case? Brian Dorsey covers this in detail in this session:

To explore some of the compute options a bit further, I'd recommend looking at Container Engine with ABCs of Google Container Engine: tips and best practices by Piotr Szczesniak, and Go beyond PaaS with App Engine Flexible Environment by Justin Beckwith.

The serverless trend is strong these days, and in this area I spotted two sessions covering Firebase and Cloud Functions: Live coding a serverless app with Firebase and Google Cloud Platform by Mike McDonald, Jen Tong, and Frank van Puffelen, and Serverless computing options with Google Cloud Platform by Bret McGowen.

I've blogged before about Cloud Endpoints, as I'm interested in the world of Web APIs, and there are two talks I'd like to attend in this area: Google Cloud Endpoints: serving your API to the world by Francesc Campoy Flores and Authorizing service-to-service calls with Google Cloud Endpoints by Dan Ciruli and Sep Ebrahimzadeh.

And in my misc. category, I'd like to highlight one on the APIs for G Suite: Developing new apps built for your organization with Google Docs, Slides, Sheets and Sites APIs by Ritcha Ranjan, as well as a talk on big parallel data processing: Using Apache Beam for parallel data processing by Frances Perry.

And to finish, I have to mention my own talk, which I'll be presenting with Brad Abrams: Talking to your users: Build conversational actions for Google Assistant. It should be fun!

What talks are you going to attend?





Deploy a Ratpack app on Google App Engine Flex

The purpose of this article is to show how to deploy a Ratpack web application on Google App Engine Flex.

For my demos at conferences, I often use frameworks like Ratpack, Grails or Gaelyk, which are based on the Apache Groovy programming language. In a previous article, I already used Ratpack for a slightly more complex use case, but this time I want to share a quick Ratpack hello world and deploy it on Flex.

I started with a hello world template generated by Lazybones (a simple project creation tool that uses packaged project templates), which I had installed with SDKman (a tool for managing parallel versions of multiple Software Development Kits). But you can obviously go ahead with your own Ratpack app. Feel free to skip the next section if you already have an app.

Create a Ratpack project
# install SDKman
curl -s "https://get.sdkman.io" | bash
# install lazybones with sdkman
sdk install lazybones
# create your hello world Ratpack app from a template
lazybones create ratpack flex-test-1
You can then quickly run your app with:
cd flex-test-1
./gradlew run
And point your browser at http://localhost:5050 to see your app running.

We'll use the distTar task to create a distribution of our app, so build it with:
./gradlew distTar

Get ready for Flex

To run our app on App Engine Flex, we'll need to do two things: 1) containerize it as a Docker image, and 2) create an app.yaml app descriptor. Let's start with Docker. Create a Dockerfile, adapting the path names appropriately (replace "flex-test-1" with the name of the directory in which you created your project):
FROM gcr.io/google_appengine/openjdk8
VOLUME /tmp
ADD build/distributions/flex-test-1.tar /
ENV JAVA_OPTS='-Dratpack.port=8080 -Djava.security.egd=file:/dev/./urandom'
ENTRYPOINT ["/flex-test-1/bin/flex-test-1"]
I'm using OpenJDK 8 for my custom runtime. I add my tarred project, specify port 8080 for running (as required by Flex), and define the entry point to be the generated startup script.
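If you want to sanity-check the container locally before deploying (assuming you have Docker installed), you could build and run it with something like:

docker build -t flex-test-1 .
docker run -p 8080:8080 flex-test-1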

My app.yaml file, for App Engine Flex, is pretty short, and expresses that I'm using the Flexible environment:
runtime: custom
env: flex
threadsafe: true
Create and deploy your project on Google Cloud Platform

Create an App Engine project in the Google Cloud Platform console, and note the project name. You should also install the gcloud SDK to be able to deploy your Ratpack app from the command line. Once done, you'll be able to go through the deployment with:
gcloud app deploy
After a little while, your Ratpack app should be up and running!
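If your gcloud SDK version offers it, you can then open the deployed app in your browser with:

gcloud app browse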

A poor man's assistant with speech recognition and natural language processing

All sorts of voice-powered assistants are available today, and chat bots are the new black! In order to illustrate how such tools are made, I decided to create my own little basic conference assistant, using Google's Cloud Speech API and Cloud Natural Language API. This is a demo I actually created for the Devoxx 2016 keynote, when Stephan Janssen invited me on stage to speak about Machine Learning. And to make this demo more fun, I implemented it with a shell script, some curl calls, plus some other handy command-line tools.

So what is this "conference assistant" all about? Thanks for asking. The idea is to ask questions to this assistant about topics you'd like to see during the conference. For example: "Is there a talk about the Google Cloud Vision API?". You send that voice request to the Speech API, which gives you back the transcript of the question. You can then use the Natural Language API to process that text to extract the relevant topic in that question. Then you query the conference schedule to see if there's a talk matching the topic.

Let's see this demo in action, before diving into the details:

So how did I create this little command-line conference assistant? Let's start with a quick diagram showing the whole process and its steps:


  • First, I record the audio using the sox command-line tool.
  • The audio file is saved locally, and I upload it to Google Cloud Storage (GCS).
  • I then call the Speech API, pointing it at my recorded audio file in GCS, so that it returns the text it recognized from the audio.
  • I use the jq command-line tool to extract the words from the returned JSON payload, and only the words I'm interested in (basically what appears after the "about" part of my query, i.e. "a talk *about* machine learning").
  • Lastly, I'm calling a custom search engine that points at the conference website schedule, to find the relevant talks that match my search query.
Let's have a look at the script in more detail (this is the simplified script, without all the shiny terminal colors and logging output). You should create a project in the Google Cloud Console, and note its project ID, as we'll reuse it for storing our audio file.

#!/bin/bash

# create an API key to access the Speech and NL APIs
# https://support.google.com/cloud/answer/6158862?hl=en
export API_KEY=YOUR_API_KEY_HERE

# create a Google Custom Search and retrieve its id
export CS_ID=THE_ID_OF_YOUR_GOOGLE_CUSTOM_SEARCH

# to use sox for recording audio, you can install it with:
# brew install sox --with-lame --with-flac --with-libvorbis
sox -d -r 16k -c 1 query.flac
# once the recording is over, hit CTRL-C to stop

# upload the audio file to Google Cloud Storage with the gsutil command
# see the documentation for installing it, as well as the gcloud CLI
# https://cloud.google.com/storage/docs/gsutil_install
# https://cloud.google.com/sdk/docs/
gsutil copy -a public-read query.flac gs://devoxx-ml-demo.appspot.com/query.flac

# call the Speech API with the template request saved in speech-request.json:
# {
#   "config": {
#     "encoding":"FLAC",
#     "sample_rate": 16000,
#     "language_code": "en-US"
#   },
#   "audio": {
#     "uri":"gs://YOUR-PROJECT-ID-HERE.appspot.com/query.flac"
#   }
# }
curl -s -X POST -H "Content-Type: application/json" --data-binary @speech-request.json "https://speech.googleapis.com/v1beta1/speech:syncrecognize?key=${API_KEY}" > speech-output.json

# retrieve the text recognized by the Speech API,
# using jq to extract just the text part
cat speech-output.json | jq -r .results[0].alternatives[0].transcript > text.txt

# prepare a query for the Natural Language API,
# replacing the @TEXT@ placeholder with the text we got from the Speech API;
# the JSON query template looks like this:
# {
#   "document": {
#     "type": "PLAIN_TEXT",
#     "content": "@TEXT@"
#   },
#   "features": {
#     "extractSyntax": true,
#     "extractEntities": false,
#     "extractDocumentSentiment": false
#   }
# }
sed "s/@TEXT@/`cat text.txt`/g" nl-request-template.json > nl-request.json

# call the Natural Language API with our template
curl -s -X POST -H "Content-Type: application/json" --data-binary @nl-request.json "https://language.googleapis.com/v1beta1/documents:annotateText?key=${API_KEY}" > nl-output.json

# retrieve all the analyzed words from the NL call results
cat nl-output.json | jq -r .tokens[].lemma > lemmas.txt

# only keep the words after the "about" word, which refer to the topic searched for
sed -n '/about/,$p' lemmas.txt | tail -n +2 > keywords.txt

# join the words together to pass them to the search engine
cat keywords.txt | tr '\n' '+' > encoded-keywords.txt

# call the Google Custom Search engine with the topic search query,
# and use jq again to filter only the title of the first search result
# (the page covering the talk usually comes first)
curl -s "https://www.googleapis.com/customsearch/v1?key=$API_KEY&cx=$CS_ID&q=`cat encoded-keywords.txt`" | jq .items[0].title
And voilà, we have our conference assistant on the command line! We used the Speech API to recognize the voice and extract the text corresponding to the audio query, we analyzed this text with the Natural Language API, and we used a few handy command-line tools to do the glue.

 