Cloud Functions and API.AI at Devoxx Belgium for your conversational interfaces

For Devoxx France, I had developed the embryo of a little chatbot for exploring the conference agenda. It was more a proof of concept than a truly finished project, but I'll soon be able to pick that draft back up and flesh it out, as I'll have the pleasure of digging deeper into the subject for Devoxx Belgium!

The video (in French) of the presentation I gave on the subject at Devoxx France was published on YouTube a little while ago, and you can watch it below:


For Devoxx Belgium, with my partner in crime Wassim, we'll be developing a full-blown chatbot, with API.AI and Cloud Functions, which I hope will be integrated into the Devoxx agenda application, and perhaps into the mobile app as well. Attendees will be able to ask this agent questions: to find out which talks are on right now, to discover sessions on topics that interest them, to learn more about the speakers, or to know when they'll be able to eat fries or watch the movie! Wassim and I will present all of this during a BOF session.

I'll also have the opportunity to dig deeper into conversational interfaces with Benjamin Fuentes from IBM and Tara Walker from Amazon, to get a panorama of the tooling available to let developers create their own chatbots, how to plug in the necessary business logic, how to integrate their creations on the web or on mobile, and more.

Antwerp, see you very soon!

Cloud Shell and its Orion-based text editor to develop in the cloud

After deploying in the cloud, there's a new trend towards programming in the cloud. Although I'm not sure we're quite there yet, there are a couple of handy tools I've been enjoying when working on the Google Cloud Platform.

I had been using the built-in Cloud Shell, in the Google Cloud console, to get a terminal pre-configured for my Google Cloud project. It gives you easy access to your whole environment, to run commands, etc., just like you would from your own computer. The fact that all the command-line tools you can imagine (Gradle, Maven, the gcloud SDK, etc.) are already there is helpful, as is the fact that you are already configured for using the other cloud services.

To launch the shell, look no further than the top right-hand corner, and click on the little [>_] button. It will launch the terminal in the bottom part of your cloud console.

You will see the console popping up below, and you'll be ready to access your project's environment:

But have you noticed the little pen icon above? If you click it, you'll get your terminal full screen in another window, but more interestingly, it will launch a proper file editor! It's an editor based on Eclipse Orion's web editor. You have the usual file browsing pane, to navigate to and select the files you want to edit, and you also get things like syntax highlighting to better understand the code at hand.


The friendlier these built-in web editors become, the sooner we'll truly be able to develop in the cloud. I believe I'll keep working on my local computer for a long time still, but there are already times when I prefer running some operations directly in the cloud: tasks that are really network hungry benefit directly from the wonderful network Cloud Shell has access to, which is much snappier than the DSL connection I have at home. Running a Docker build, or fetching tons of dependencies for Node or Maven/Gradle, is really much nicer and faster within Cloud Shell. So the added capability of editing files in my project makes things pretty snappy.

A recent article on the Google Cloud blog outlined the beta launch of Cloud Shell's code editor, which is why I wanted to play with this new built-in editor.

Apache Groovy and Google App Engine at JavaOne

I'll be back at JavaOne in San Francisco in October to speak about Apache Groovy and Google App Engine.

Apache Groovy

I've been involved with the Apache Groovy project for 14 years now. That's a long time, and it's interesting to see how the language has evolved over time, how it was influenced by other languages, but also how it influenced those other languages in return! We'll see which operators and syntax constructs evolved and moved from one language to another.
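To give a taste, here's a tiny Groovy sketch of two of those travelling constructs, the null-safe navigation operator and the Elvis operator (a minimal, made-up example):

    // null-safe navigation: evaluates to null instead of throwing a NullPointerException
    def user = null
    assert user?.name == null

    // Elvis operator: a shorthand for providing a default value
    def name = user?.name ?: 'anonymous'
    assert name == 'anonymous'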

Google App Engine

These days, the hype is all about containers, containers everywhere! We tend to relegate Platform-as-a-Service solutions to the sidelines, but PaaS is still one of the most convenient ways to deploy and scale an application today. After all, Snapchat and others are able to take advantage of a PaaS like App Engine, so why couldn't you too? (You don't need to scale to their level anyway, but you'd still get the convenience of easy development and deployment.)

Anyhow, I've invited my friends from Heroku and Oracle to join me for a panel discussion on the theme of Java PaaS-es. We'll see how Java PaaS-es are more relevant today than ever.

The abstracts

So if you want to learn more about those two talks, here are their abstracts.

[CON5034] How Languages Influence Each Other: Reflections on 14 Years of Apache Groovy 

Languages have been influencing one another since the dawn of computer programming. There are families of languages: from Algol descendants with begin/end code blocks to those with curly braces such as C. Languages are not invented in a vacuum but are inspired by their predecessors. This session’s speaker, who has been working on Apache Groovy for the past 14 years, reflects on the influences that have driven the design of programming languages. In particular, Groovy’s base syntax was directly derived from Java’s but quickly developed its own flavor, adding closures, type inference, and operators from Ruby. Groovy also inspired other languages: C#, Swift, and JavaScript adopted Groovy’s null-safe navigation operator and the famous Elvis operator.

[CON5945] Java PaaS -- Then, Now and Next 

Java developers want to deploy their apps easily. Fortunately, there are great solutions for them in the form of Platform-as-a-Service for Java. In this discussion panel, we will share the views of Oracle, Heroku and Google engineers about their respective Java PaaS-es, how this space has evolved over the past few years, and what makes a great developer experience for users today. We'll discuss the future of PaaS in light of new technologies like microservices, containerization, and serverless architectures. Finally, we'll open up the space for an interactive discussion with the audience.

(with Joe Kutner from Heroku, Shaun Smith from Oracle, Ludovic Champenois from Google, and Frank Greco from NY JavaSIG as moderator)

Scale an Open API based web API with Cloud Endpoints

InfoQ recently released a video from the APIDays conference that took place in Paris last year. I talked about scaling an Open API based web API using Cloud Endpoints, on the Google Cloud Platform.

I've spoken about this topic a few times, as web APIs are a subject I enjoy: at Nordic APIs, at APIDays, and at Devoxx. But it's great to see the video online. So let me share the slide deck along with the video:



In a nutshell, the API contract is the source of truth. Whether you're the one implementing the API backend, or you're the consumer calling the API, there's this central contract that each party can rely on, to be certain of what the API should look like: which endpoints to expect, what payloads will be exchanged, or which status codes are used.
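To make this concrete, here's a minimal sketch of what such a contract could look like, written as an Open API (Swagger 2.0) descriptor in YAML; the greeting API, its host, and the operation name are all hypothetical:

    swagger: "2.0"
    info:
      title: Greeting API
      version: "1.0.0"
    host: greetings.endpoints.my-project.cloud.goog
    schemes:
      - https
    paths:
      /hello/{name}:
        get:
          operationId: greetName
          description: Returns a friendly greeting for the given name.
          parameters:
            - name: name
              in: path
              required: true
              type: string
          responses:
            "200":
              description: The greeting message.

Both the implementer and the consumers can derive their expectations (paths, parameters, status codes) from this single document.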

With a central contract, team communication and collaboration are facilitated: I've seen customers where a central architecture team would define a contract, which was implemented by a third party (an outsourcing consulting company), while the API was consumed by different teams, both internally and externally. The central contract was there to facilitate the work between those teams, and to ensure the contract would be fulfilled.

In addition, having such a computer-friendly contract is really useful for tooling. Out of the contract, you can generate various useful artifacts, such as: 

  • static & live mocks — that consumers can use when the API is not finalized, 
  • test stubs — for facilitating integration tests, 
  • server skeletons — to get started implementing the business logic of the API with a ready-made project template,
  • client SDKs — offering kits consumers can use, using various languages, to call your API more easily,
  • sandbox & live playground — a visual environment for testing and calling the API, for developers to discover how the API actually works,
  • an API portal with provisioning — a website offering the API reference documentation and allowing developers to get credentials to get access to the API,
  • static documentation — perhaps with just the API reference documentation, or a bundle of useful associated user guides, etc.
However, be careful with artifact generation: as soon as you start customizing what the tools generated, you run the risk of overwriting those changes the next time you regenerate the artifacts! So pay attention to how customizations can be made and integrated with those generated artifacts.

In my presentation and demo, I decided to use Cloud Endpoints to manage my API, and to host the business logic of my API implementation on the Google Cloud Platform. GCP (for short) provides various "compute" solutions for your projects:

  • Google App Engine (Platform-as-a-Service): you deploy your code, and all the scaling is done transparently for you by the platform,
  • Google Container Engine (Container-as-a-Service): it's a Kubernetes-based container orchestrator where you deploy your apps in the form of containers,
  • Google Compute Engine (Infrastructure-as-a-Service): this time, it's full VMs, with even more control on the environment, that you deploy and scale.
In my case, I went with a containerized Ratpack implementation for my API, written in the Apache Groovy programming language (what else? :-). So I deployed my application on Container Engine.
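To give you an idea of what that looks like, a Ratpack application can be written as a plain Groovy script; here's a minimal sketch (the endpoint mirrors the hypothetical greeting contract above, and the exact Ratpack version may vary):

    @Grab('io.ratpack:ratpack-groovy:1.4.6')
    import static ratpack.groovy.Groovy.ratpack
    import groovy.json.JsonOutput

    ratpack {
        handlers {
            // GET /hello/:name, matching the contract's /hello/{name} operation
            get('hello/:name') {
                render JsonOutput.toJson([greeting: "Hello ${pathTokens.name}!"])
            }
        }
    }

Running the script with the groovy command starts an embedded server, which you can then package into a container image for deployment.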

I described my web API via an Open API descriptor, and managed it via Cloud Endpoints. Cloud Endpoints is actually the underlying infrastructure used by Google themselves to host all the APIs developers can use today (think Google Maps API, etc.). This architecture already serves literally hundreds of billions of requests every day... so you can assume it's certainly quite scalable in itself. You can manage APIs described with Open API regardless of how they were implemented (it's totally agnostic to the underlying implementation), and it can manage both HTTP-based JSON web APIs and gRPC-based ones.

There are three interesting key aspects to know about Cloud Endpoints, regardless of whether you're using the platform for public / private / mobile / micro-services APIs:

  • Cloud Endpoints takes care of security, to control access to the API, to authenticate consumers (taking advantage of API keys, Firebase auth, Auth0, JSON Web Tokens)
  • Cloud Endpoints offers logging and monitoring capabilities of key API related metrics
  • Cloud Endpoints is super snappy and scales nicely as already mentioned (we'll come back to this in a minute)
Cloud Endpoints actually offers an open source "sidecar" container proxy: your containerized application goes hand in hand with the Extensible Service Proxy, and is actually wrapped by that proxy. All the calls go through the proxy before hitting your own application. Interestingly, there isn't one single central proxy: each instance of your app has its own proxy, which reduces the latency between the call to the proxy and the actual code execution in your app (there's no network hop to a somewhat distant central proxy, as the two containers live together). For the record, this proxy is based on Nginx, and the proxy container can also be run elsewhere, even on your own infrastructure.
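In a Kubernetes deployment, that pairing could look roughly like the following pod template sketch, with ESP listening on the public port and forwarding to the application over localhost (the names, ports, and the service and version values are hypothetical; check the Cloud Endpoints documentation for the exact flags):

    spec:
      containers:
        # the Extensible Service Proxy receives the traffic first
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port", "8080",
            "--backend", "127.0.0.1:8081",
            "--service", "greetings.endpoints.my-project.cloud.goog",
            "--version", "2017-06-01r0"
          ]
          ports:
            - containerPort: 8080
        # the actual API implementation, reached only through the proxy
        - name: greeting-api
          image: gcr.io/my-project/greeting-api:1.0
          ports:
            - containerPort: 8081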

In summary, Cloud Endpoints takes care of securing, monitoring, and scaling your web API. Developing, deploying, and managing your API on Google Cloud Platform gives you choices: in terms of protocol, with JSON/HTTP-based APIs or gRPC; and in terms of implementation technology, as you can choose any language or framework supported by the platform's various compute options, going from PaaS, to CaaS, to IaaS. Last but not least, this solution is open: it's based on open standards like Open API and gRPC, and its proxy is implemented on top of Nginx.





Scale Jenkins with Kubernetes on Google Container Engine

Last week, I had the pleasure to speak at the Jenkins Community Day conference, in Paris, organized by my friends from JFrog, provider of awesome tools for software management and distribution. I covered how to scale Jenkins with Kubernetes on Google Container Engine.


For the impatient, here are the slides of the presentation I’ve given:



But let’s step back a little. In this article, I’d like to share with you why you would want to run Jenkins in the cloud, as well as give you some pointers to interesting resources on the topic.


Why run Jenkins in the cloud?


So why run Jenkins in the cloud? First of all, imagine your small team, working on a single project. You have your own little server, running under a desk somewhere, happily building your application on each commit, a few times a day. So far so good: your build machine running Jenkins isn't too busy, and stays idle most of the day.


Let's do some back-of-the-napkin calculations. Say you have a team of 3 developers, committing roughly 4 times a day, on one single project, and the build takes roughly 10 minutes to run.


3 developers * 4 commits / day / developer * 10 minutes build time * 1 project = 2 hours


So far so good, your server indeed stays idle most of the day. Usually, at most, your developers will wait just 10 minutes to see the result of their work.


But your team grows to 10 people. The team is still as productive, but with the project becoming bigger, the build time goes up to 15 minutes:


10 developers * 4 commits / day / developer * 15 minutes build time * 1 project = 10 hours


You're already at 10 hours of build time, so your server is busy the whole day, and at times you might have several builds going on at the same time, using several CPU cores in parallel. Instead of taking 15 minutes, a build might sometimes take longer, or sit in the queue. So in theory it's 15 minutes, but in practice it could be half an hour, because of the length of the queue or the extra time needed when builds run in parallel.


Now, the company is successful, and has two projects instead of one (think a backend and a mobile app). Your teams grow further, up to 20 developers per project. The developers are a little less productive because of the size of the codebase and project, so they only commit 3 times a day. The build takes more time too, at 20 minutes (in the ideal case). Let's do some math again:


20 developers * 3 commits / day / developer * 20 minutes build time * 2 projects = 40 hours


Whoa, that's already 40 hours of total build time, if all the builds run serially. Fortunately, our server is multi-core, but still, many builds certainly end up enqueued, even if 2, 3, or perhaps even 4 of them can run in parallel. And as we said, as the build queue grows, the real effective build time is certainly longer than 30 minutes. At times, developers won't see the result of their work for at least an hour, if not more.


One last calculation? With team sizes of 30 developers, productivity decreased to 2 commits a day, a 25-minute build time, and 3 projects, you get 75 hours of total build time. You may start creating a little build farm, with a master and several build agents, but you also increase the burden of server management. Also, if you move towards a full Continuous Delivery or Continuous Deployment approach, you may further increase your build times to cover deployment, make more but smaller commits, etc. You could think of running builds less often, or even nightly, to cope with the demand, but then your company is less agile, the time-to-market for fixes or new features increases, and your developers may become more frustrated because they are developing blind, not knowing until the next day whether their work was successful.
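These napkin figures are easy to sanity-check; here's a tiny Groovy sketch recomputing each scenario:

    // total daily build time in minutes, assuming builds run serially
    def buildLoad = { devs, commitsPerDay, buildMinutes, projects ->
        devs * commitsPerDay * buildMinutes * projects
    }

    assert buildLoad( 3, 4, 10, 1) == 120    //  2 hours
    assert buildLoad(10, 4, 15, 1) == 600    // 10 hours
    assert buildLoad(20, 3, 20, 2) == 2400   // 40 hours
    assert buildLoad(30, 2, 25, 3) == 4500   // 75 hours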


With my calculations, you might think the cloud makes more sense for big companies, with tons of projects and developers. That's quite true, but when you're a startup, you also want to avoid dealing with local server management, provisioning, etc. You want to be agile, and use only the compute resources you need, for the time you need them. So even if you're a small startup, with a small team, it can still make sense to take advantage of the cloud: you pay only for the actual time taken by your builds, as the build agent containers are automatically provisioned and decommissioned, and the builds can scale up via Kubernetes as you need more (or less) CPU time for building everything.


And this is why I was happy to dive into scaling Jenkins in the cloud. For that purpose, I decided to build with containers, with Kubernetes, as my app was containerized as well. Google Cloud offers Container Engine, which is basically Kubernetes in the cloud.


Useful pointers


I based my presentation and demo on some great solutions that are published on the Google Cloud documentation portal. Let me give you some pointers.


Overview of Jenkins on Container Engine

https://cloud.google.com/solutions/jenkins-on-container-engine


Setting up Jenkins on Container Engine

https://cloud.google.com/solutions/jenkins-on-container-engine-tutorial


Configuring Jenkins for Container Engine

https://cloud.google.com/solutions/configuring-jenkins-container-engine


Continuous Deployment to Container Engine using Jenkins

https://cloud.google.com/solutions/continuous-delivery-jenkins-container-engine


Lab: Build a Continuous Deployment Pipeline with Jenkins and Kubernetes

https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes


That last one is the tutorial I actually followed for the demo I presented during the conference. It's a simple Go application, with a frontend and a backend. It's continuously built on each commit (well, Jenkins polls every minute to check whether there's a new commit), and deployed automatically to different environments: dev, canary, production. The sources of the project are stored in Cloud Source Repositories (which can mirror a GitHub repository, for example). The containers are stored in Container Registry. And both the Jenkins master and agents, as well as the application itself, run inside Kubernetes clusters on Container Engine.
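Incidentally, Jenkins pipelines are themselves defined in Groovy. A deployment flow in the spirit of that tutorial could be sketched along these lines (the project ID, image name, deployment name, and manifest paths are hypothetical placeholders; the real tutorial's pipeline is more complete):

    // a minimal declarative Jenkinsfile sketch, not the tutorial's actual pipeline
    pipeline {
        agent any
        environment {
            // hypothetical project and image coordinates
            IMAGE = "gcr.io/my-project/sample-app:${env.BUILD_NUMBER}"
        }
        stages {
            stage('Build and push the container image') {
                steps {
                    sh "docker build -t ${env.IMAGE} ."
                    sh "gcloud docker -- push ${env.IMAGE}"
                }
            }
            stage('Deploy') {
                steps {
                    script {
                        // assuming a multibranch pipeline: master goes to production,
                        // every other branch goes to the dev environment
                        def ns = env.BRANCH_NAME == 'master' ? 'production' : 'dev'
                        sh "kubectl --namespace=${ns} apply -f k8s/${ns}/"
                        sh "kubectl --namespace=${ns} set image deployment/sample-app sample-app=${env.IMAGE}"
                    }
                }
            }
        }
    }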


Summary and perspective


Don't bother with managing build servers yourself: you'll quickly run out of CPU cycles! Scale your builds in the cloud instead, and you'll have happier developers, with builds that are super snappy!


And for the record, at Google, dev teams are also running Jenkins! There was a presentation (video and slides available) given last year by David Hoover at Jenkins World talking about how developers inside Google are running hundreds of build agents to build projects on various platforms.


© 2012 Guillaume Laforge | The views and opinions expressed here are mine and don't reflect the ones from my employer.