The JDK built-in web server with Apache Groovy

In my timeline, I saw a tweet from Joe Walnes about the built-in HTTP server available in the JDK since Java 6. It's super convenient: it starts fast, it's easy to use, but I often forget about it. I probably wouldn't use it to serve planet-wide load, but it's very useful when you need to create a quick service, or a little mock for testing a web app or microservice.

Here's a little hello world for the fun.

I'm taking advantage of Apache Groovy's closure-to-functional-interface coercion support, as well as the with {} method to call two methods on the same HttpServer instance (I could've used it for the http variable as well, actually).


import com.sun.net.httpserver.HttpServer

HttpServer.create(new InetSocketAddress(8080), 0).with {
    createContext("/hello") { http ->
        http.responseHeaders.add("Content-type", "text/plain")
        http.sendResponseHeaders(200, 0)
        http.responseBody.withWriter { out ->
            out << "Hello ${http.remoteAddress.hostName}!"
        }
    }
    start()
}

More voice control for Actions on Google

Today, there were some interesting announcements for Actions on Google, the platform for building conversational interfaces for the Google Assistant. Among the great news, one item particularly caught my attention: the improved SSML support:

Better SSML
We recently rolled out an update to the web simulator which includes a new SSML audio design experience. We now give you more options for creating natural, quality dialog using newly supported SSML tags, including <prosody>, <emphasis>, <audio> and others. The new tag <par> is coming soon and lets you add mood and richness, so you can play background music and ambient sounds while a user is having a conversation with your app. To help you get started, we've added over 1,000 sounds to the sound library. Listen to a brief SSML audio experiment that shows off some of the new features here.

SSML stands for Speech Synthesis Markup Language. It's a W3C standard whose goal is to provide better support for more natural-sounding speech generation.

So far, Actions on Google had limited SSML support, but today, there's a bit more you can do with SSML to enhance your apps' voice!

At the Devoxx Belgium conference last week, in a couple of talks showing Dialogflow, Actions on Google, and Cloud Functions, I showed some quick examples of SSML.

For example, I made an attendee do some squats on stage! (Unfortunately, the camera didn't catch that.) To mimic a countdown, I created a loop over a tick-tock sound: I repeated the tick-tock x times, with x audio elements. But we can do better now, by using the repeatCount attribute instead:
<audio src="gs://my-bucket-sounds/tick-tock-1s.wav" repeatCount="10" />

It's much better than repeating my audio tag 10 times!

If you want to make your interactions even more lively, you could already use the Actions on Google sound library, or use a free sound library like Freesound.

But there's a promising upcoming tag that's going to be supported soon: <par/>
If you will, par is a bit like a multi-track audio mixer. You'll be able to play different sounds in parallel, or have the voice speak over them. So you could very well have a background sound or music playing while your app is speaking.
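As a sketch of what that could look like, mixing speech with a quieter background sound (the <par> syntax wasn't final at the time of writing, so treat the media element, its attributes, and the example URL as assumptions):

```xml
<speak>
  <par>
    <!-- the synthesized voice... -->
    <media>
      <speak>Welcome to the haunted mansion. Do you dare enter?</speak>
    </media>
    <!-- ...while an ambient loop plays in parallel, a bit quieter -->
    <media soundLevel="-6dB">
      <audio src="https://example.com/sounds/spooky-ambiance.ogg"/>
    </media>
  </par>
</speak>
```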

Speaking of voice, the human voice goes up and down in pitch. With the prosody element, you can define the rate, pitch, and volume attributes. For instance, I made my voice sing some notes using semitones (although, to be honest, it doesn't quite sound like a real singer yet!):

  <prosody rate="slow" pitch="-7st">C</prosody>
  <prosody rate="slow" pitch="-5st">D</prosody>
  <prosody rate="slow" pitch="-3st">E</prosody>
  <prosody rate="slow" pitch="-2st">F</prosody>
  <prosody rate="slow" pitch="0st">G</prosody>
  <prosody rate="slow" pitch="+2st">A</prosody>
  <prosody rate="slow" pitch="+4st">B</prosody>
  <prosody rate="slow" pitch="+6st">C</prosody>
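Note that, to actually try a snippet like the scale above in the simulator, the prosody elements need to live inside a root <speak> element, along these lines (the spoken intro text is just an illustration of mine):

```xml
<speak>
  Let me try to sing:
  <prosody rate="slow" pitch="-7st">C</prosody>
  <prosody rate="slow" pitch="-5st">D</prosody>
  <!-- ...and so on for the remaining notes -->
</speak>
```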

You can also play with different levels of emphasis:

  This is 
  <emphasis level="strong">really</emphasis>

Learn more about all the supported SSML tags in the Actions on Google documentation. It's going to be even more fun to create lively voice interactions with all those improvements!

JavaOne — How Languages Influence Each Other: Reflections on 14 Years of Apache Groovy

Last week, I was in San Francisco for my tenth JavaOne! I had two sessions: one on the past / present / future of Java Platform-as-a-Service offerings, and one on programming language influences, in particular how Apache Groovy was influenced, and how it in turn inspired other languages.

Here's the abstract:

Languages have been influencing one another since the dawn of computer programming. There are families of languages: from Algol descendants with begin/end code blocks to those with curly braces such as C. Languages are not invented in a vacuum but are inspired by their predecessors. This session’s speaker, who has been working on Apache Groovy for the past 14 years, reflects on the influences that have driven the design of programming languages. In particular, Groovy’s base syntax was directly derived from Java’s but quickly developed its own flavor, adding closures, type inference, and operators from Ruby. Groovy also inspired other languages: C#, Swift, and JavaScript adopted Groovy’s null-safe navigation operator and the famous Elvis operator.

And you can have a look at the slides below:

Apache Groovy is a multi-faceted language for the Java platform, allowing developers to code in a Java-friendly syntax, with great integration with the Java ecosystem, and powerful scripting and Domain-Specific Language capabilities, while at the same time being able to offer you type safety and static compilation.

In this presentation, I revisited some of the influences from other languages: the C family and its older brother Java, the Python-inspired strings, the Smalltalk and Ruby heritage for named parameters and closures, and a type system à la Java. I also showed some of the innovations Groovy came up with that were later borrowed by others (Swift, C#, Kotlin, Ceylon, PHP, Ruby, CoffeeScript...): things like Groovy's trailing closures, builders, null-safe navigation, the Elvis operator, ranges, the spaceship operator, and more.
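To make a few of those idioms concrete, here's a small snippet you can paste in the Groovy console (my own illustrative example, not taken from the talk):

```groovy
def user = null

// null-safe navigation: yields null instead of throwing a NullPointerException
assert user?.name == null

// Elvis operator: provides a default when the left-hand side is null or "falsy"
assert (user?.name ?: 'anonymous') == 'anonymous'

// a range, with a trailing closure passed to the collect() method
assert (1..4).collect { it * it } == [1, 4, 9, 16]

// spaceship operator: returns -1, 0, or 1, handy for writing comparators
assert (3 <=> 7) == -1
```

These are exactly the kinds of constructs that later surfaced, in one form or another, in Swift, C#, and Kotlin.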

Ultimately, inspiration is really a two-way street, as languages don't come from nowhere: they inherit from their older brothers and sisters. No language is perfect, but each of them somehow helps the next ones get better, by borrowing here and there some nice features that make developers more productive and their code more readable and maintainable.

DevFest Toulouse — Building your own chatbots with API.AI and Cloud Functions

A few weeks ago, my buddy Wassim and I had the chance to present again on the topic of chatbots, with API.AI and Cloud Functions, at the DevFest Toulouse conference.

Here's the latest update to our slide deck:

Chatbots, per se, are not really new, in the sense that we've been developing bots for things like IRC for a long time. But back in the day, it was simply a regular-expression labor of love, rather than the natural language we use today. Progress in machine learning, in both speech recognition (for when you use devices like Google Home) and natural language understanding (NLU), is what led us to being able to speak and chat naturally with the chatbots we encounter now.

In this presentation, we're covering the key concepts that underpin the NLU aspects:
  • Intents — the various kinds of sentences or actions that are recognized (ex: "I-want-to-eat-something")
  • Entities — the concepts and values that we manipulate, or that parameterize intents (ex: the kind of food associated with the "I-want-to-eat-something" intent)
  • Context — a conversation is not just a single request-reply exchange: the discussion between you and the chatbot can span longer back-and-forth exchanges, and the chatbot needs to remember what was previously said to be useful and avoid frustrating the user
We're also clarifying some of the terminology used when working with the Google Assistant and its Actions on Google developer platform:
  • Google Assistant — a conversation between you and Google to help you get things done
  • Google Home — voice activated speaker powered by the Google Assistant
  • Google Assistant SDK — kit to embed the Google Assistant in your devices
  • Agent / chatbot / action — an actual app serving a particular purpose
  • Actions on Google — developer platform to build apps for the Assistant
  • Apps for the Google Assistant — 3rd party apps integrated to the Assistant
  • Actions SDK — a software SDK for creating apps
  • API.AI — a platform for creating conversational interfaces
It's important that your chatbot has a consistent persona, one that corresponds to the core values or attributes of your brand: the spirit of your bot. A bot for children will likely be more friendly and use easy-to-understand vocabulary, versus a more formal tone for, say, a bank chatbot.

There are some great resources available for checking whether your chatbot and its conversation are ready for prime time.

Our tool of choice for our demo is API.AI, for implementing the voice interactions. It's clearly one of the best platforms on the market: it makes it simple to create intents and entities, handle contexts, and take advantage of many predefined entity types, and it also provides various pre-built conversations that you can peruse.

For the business logic, we went with Google Cloud Functions, which lets us define our logic in JavaScript on Node.js. We also took advantage of the local Cloud Functions emulator, to run our logic on our local machine, and of ngrok, to create a tunnel between that local machine and API.AI. In API.AI's fulfillment webhook, you put the temporary URL given by ngrok, which then points at your local machine via ngrok's tunnel. That way, you can see changes immediately, thanks to the live reloading supported by the emulator, making it easy to evolve your code.
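For reference, the fulfillment webhook is just an HTTP endpoint that receives a JSON request from API.AI and replies with JSON. A minimal response could look like the following sketch (based on the v1 response format of the time; the field values are made up for illustration):

```json
{
  "speech": "There are three talks about chatbots today!",
  "displayText": "There are three talks about chatbots today!",
  "source": "devoxx-agenda-webhook"
}
```

The speech field is what the Assistant says out loud, while displayText is what gets shown on chat surfaces.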

Cloud Functions is Google's function-as-a-service offering: a serverless service, tailored for event-oriented systems as well as for direct HTTP invocation, where you pay only as you go, as requests are made or events are sent to your function. It's a cost-effective solution that scales automatically with your load.

To finish, we also say a few words about how to submit your bot to the Actions on Google developer platform, to extend the Google Assistant with your own ideas.

Cloud Functions and API.AI at Devoxx Belgium, for your conversational interfaces

For Devoxx France, I had developed the embryo of a little chatbot for exploring the conference agenda. But it was more a proof of concept than a truly finished project. I'll soon be able to pick up that draft and flesh it out, though, as I'll have the pleasure of going deeper into the topic at Devoxx Belgium!

The video (in French) of the presentation I gave on the subject at Devoxx France was published on YouTube a while ago, and you can watch it below:

For Devoxx Belgium, with my partner in crime Wassim, we're going to build a complete chatbot, with API.AI and Cloud Functions, which I hope will be integrated into the Devoxx agenda application, and perhaps into the mobile app as well. Attendees will be able to ask this agent questions: to find out which talks are on right now, to discover topics that interest them, to learn more about the speakers, or to know when they can eat fries or watch the movie! Wassim and I will present all of this during a BOF.

I'll also have the opportunity to explore conversational interfaces further with Benjamin Fuentes from IBM and Tara Walker from Amazon, for an overview of the current tooling that lets developers create their own chatbots, of how to plug in the required business logic, how to integrate their creations on the web or on mobile, and more.

Antwerp, see you very soon!
© 2012 Guillaume Laforge | The views and opinions expressed here are mine and don't reflect the ones from my employer.