Apache Groovy and Google App Engine at JavaOne

I'll be back at JavaOne in San Francisco in October to speak about Apache Groovy and Google App Engine

Apache Groovy

I've been involved with the Apache Groovy project for 14 years now. That's a long time, and it's interesting to see how the language has evolved over time, how it was influenced by other languages, but also how it influenced those other languages in return! Let's see which operators or syntax constructs evolved and moved from one language to the other.

Google App Engine

These days, the hype is all about containers, containers everywhere! We tend to relegate Platform-as-a-Service solutions to the side, but PaaS is still one of the most convenient ways to deploy and scale an application today. After all, Snapchat and others are able to take advantage of a PaaS like App Engine, so why couldn't you? (You don't need to scale to their level anyway, but you'd still get the convenience of easy development and deployment.)

Anyhow, I've invited my friends from Heroku and Oracle to join me for a panel discussion on the theme of Java PaaS-es. We'll see how Java PaaS-es are more relevant today than ever.

The abstracts

So if you want to learn more about those two talks, here are their abstracts.

[CON5034] How Languages Influence Each Other: Reflections on 14 Years of Apache Groovy 

Languages have been influencing one another since the dawn of computer programming. There are families of languages: from Algol descendants with begin/end code blocks to those with curly braces such as C. Languages are not invented in a vacuum but are inspired by their predecessors. This session’s speaker, who has been working on Apache Groovy for the past 14 years, reflects on the influences that have driven the design of programming languages. In particular, Groovy’s base syntax was directly derived from Java’s but quickly developed its own flavor, adding closures, type inference, and operators from Ruby. Groovy also inspired other languages: C#, Swift, and JavaScript adopted Groovy’s null-safe navigation operator and the famous Elvis operator.

[CON5945] Java PaaS -- Then, Now and Next 

Java developers want to deploy their apps easily. Fortunately, there are great solutions for them in the form of Platform-as-a-Service for Java. In this panel discussion, we will share the views of Oracle, Heroku and Google engineers about their respective Java PaaS-es, how this space has evolved over the past few years, and what makes a great developer experience for users today. We'll discuss the future of PaaS in light of new technologies like microservices, containerization, and serverless architectures. Finally, we'll open the floor for an interactive discussion with the audience.

(with Joe Kutner from Heroku, Shaun Smith from Oracle, Ludovic Champenois from Google, and Frank Greco from NY JavaSIG as moderator)

Scale an Open API based web API with Cloud Endpoints

InfoQ recently released a video from the APIDays conference that took place in Paris last year. I talked about scaling an Open API based web API using Cloud Endpoints, on Google Cloud Platform.

I've spoken about this topic a few times, as web APIs are a subject I enjoy: at Nordic APIs, at APIDays, and at Devoxx. But it's great to see the video online. So let me share the slide deck along with the video:



In a nutshell, the API contract is the source of truth. Whether you're the one implementing the API backend, or you're the consumer calling the API, there's this central contract that each party can rely on, to be certain of what the API should look like, which endpoints to expect, what payloads will be exchanged, and which status codes are used.

With a central contract, team communication and collaboration are facilitated. I've seen customers where a central architecture team would define a contract, which was implemented by a third party (an outsourcing consulting company), and the API was consumed by different teams, both internally and externally. The central contract was there to facilitate the work between those teams, and to ensure the contract would be fulfilled.

In addition, having such a computer-friendly contract is really useful for tooling. Out of the contract, you can generate various useful artifacts, such as: 

  • static & live mocks — that consumers can use when the API is not finalized, 
  • test stubs — for facilitating integration tests, 
  • server skeletons — to get started implementing the business logic of the API with a ready-made project template,
  • client SDKs — offering kits consumers can use, using various languages, to call your API more easily,
  • sandbox & live playground — a visual environment for testing and calling the API, for developers to discover how the API actually works,
  • an API portal with provisioning — a website offering the API reference documentation and allowing developers to get credentials to get access to the API,
  • static documentation — perhaps with just the API reference documentation, or a bundle of useful associated user guides, etc.
However, be careful with artifact generation. As soon as you start making customizations to what's been generated by the tools, you run the risk of seeing those changes overwritten the next time you regenerate the artifacts! So pay attention to how customizations can be made and integrated with those generated artifacts.
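One common way to stay on the safe side is to never edit the generated sources directly, and to keep your customizations in separate classes that extend or delegate to the generated ones. Here's a minimal Groovy sketch of the idea, where GeneratedUserApi is a hypothetical class standing in for whatever your generator produces:

    // hypothetical generated class: regenerated on each run, never edited by hand
    class GeneratedUserApi {
        String fetchUser(String id) { "user-${id}" } // generated plumbing
    }

    // customizations live in your own class, in a separate source folder,
    // so regenerating the artifacts never overwrites them
    class CustomUserApi extends GeneratedUserApi {
        @Override
        String fetchUser(String id) {
            super.fetchUser(id).toUpperCase() // custom logic on top
        }
    }

    assert new CustomUserApi().fetchUser('42') == 'USER-42'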

In my presentation and demo, I decided to use Cloud Endpoints to manage my API, and to host the business logic of my API implementation on the Google Cloud Platform. GCP (for short) provides various "compute" solutions for your projects:

  • Google App Engine (Platform-as-a-Service): you deploy your code, and all the scaling is done transparently for you by the platform,
  • Google Container Engine (Container-as-a-Service): it's a Kubernetes-based container orchestrator where you deploy your apps in the form of containers,
  • Google Compute Engine (Infrastructure-as-a-Service): this time, it's full VMs that you deploy and scale, with even more control over the environment.
In my case, I went with a containerized Ratpack implementation for my API, implemented using the Apache Groovy programming language (what else? :-). So I deployed my application on Container Engine.
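To give you an idea of what such an implementation looks like, a Ratpack application in Groovy can fit in a single script. This is just a minimal sketch with a made-up endpoint, not the actual demo code (the Ratpack version in the @Grab is one from that period, adjust as needed):

    @Grab('io.ratpack:ratpack-groovy:1.4.6')
    import static ratpack.groovy.Groovy.ratpack
    import groovy.json.JsonOutput

    ratpack {
        handlers {
            // a simple JSON endpoint: GET /hello
            get('hello') {
                render JsonOutput.toJson([message: 'Hello from Ratpack!'])
            }
        }
    }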

I described my web API via an Open API descriptor, and managed it via Cloud Endpoints. Cloud Endpoints is actually the underlying infrastructure used by Google themselves to host all the APIs developers can use today (think Google Maps API, etc.). This architecture already serves literally hundreds of billions of requests every day... so you can assume it's quite scalable in itself. You can manage APIs described with Open API, regardless of how they were implemented (it's totally agnostic to the underlying implementation), and it can manage both HTTP-based JSON web APIs and gRPC-based ones.

There are three key aspects to know about Cloud Endpoints, regardless of whether you're using the platform for public, private, mobile, or microservice APIs:

  • Cloud Endpoints takes care of security, controlling access to the API and authenticating consumers (taking advantage of API keys, Firebase Auth, Auth0, or JSON Web Tokens); see the sketch after this list,
  • Cloud Endpoints offers logging and monitoring capabilities for key API-related metrics,
  • Cloud Endpoints is super snappy and scales nicely, as already mentioned (we'll come back to this in a minute).
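For instance, calling an API-key-protected endpoint managed by Cloud Endpoints boils down to passing the key along with the request. Here's a minimal Groovy sketch; the hostname, path, and ENDPOINTS_API_KEY environment variable are hypothetical placeholders, not values from my demo:

    import groovy.json.JsonSlurper

    // hypothetical API key, provisioned beforehand in the GCP console
    def apiKey = System.getenv('ENDPOINTS_API_KEY')

    // hypothetical Endpoints-managed API host and resource
    def url = "https://my-api.endpoints.my-project.cloud.goog/v1/sessions?key=${apiKey}"

    // the Extensible Service Proxy validates the key before the request
    // ever reaches the backend implementation
    def sessions = new JsonSlurper().parse(new URL(url))
    println sessions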
Cloud Endpoints actually offers an open source "sidecar" container proxy. Your containerized application goes hand in hand with the Extensible Service Proxy, and is wrapped by that proxy: all calls go through the proxy before hitting your own application. Interestingly, there's not one single proxy; each instance of your app has its own proxy, which diminishes the latency between the call to the proxy and the actual code execution in your app (there's no network hop to some distant central proxy, as the two containers live side by side). For the record, this proxy is based on Nginx, and the proxy container can also be run elsewhere, even on your own infrastructure.

In summary, Cloud Endpoints takes care of securing, monitoring, and scaling your web API. Developing, deploying, and managing your API on Google Cloud Platform gives you choices: in terms of protocol, with JSON/HTTP based APIs or gRPC; in terms of implementation technology, as you can choose any language or framework supported by the various compute options of the platform; and in terms of compute, letting you go from PaaS, to CaaS, to IaaS. Last but not least, this solution is open: based on open standards like Open API and gRPC, with its proxy implemented on top of Nginx.





Scale Jenkins with Kubernetes on Google Container Engine

Last week, I had the pleasure to speak at the Jenkins Community Day conference, in Paris, organized by my friends from JFrog, provider of awesome tools for software management and distribution. I covered how to scale Jenkins with Kubernetes on Google Container Engine.


For the impatient, here are the slides of the presentation I gave:



But let’s step back a little. In this article, I’d like to share with you why you would want to run Jenkins in the cloud, as well as give you some pointers to interesting resources on the topic.


Why run Jenkins in the cloud?


So why run Jenkins in the cloud? First of all, imagine a small team, working on a single project. You have your own little server, running under a desk somewhere, happily building your application on each commit, a few times a day. So far so good: your build machine running Jenkins isn't too busy, and stays idle most of the day.


Let's do some back-of-the-napkin calculations. Let's say you have a team of 3 developers, committing roughly 4 times a day, on one single project, and the build takes roughly 10 minutes to run.


3 developers * 4 commits / day / developer * 10 minutes build time * 1 project = 2 hours


So far so good, your server indeed stays idle most of the day. Usually, at most, your developers will wait just 10 minutes to see the result of their work.


But your team grows to 10 people. The team is still as productive, but with the project becoming bigger, the build time goes up to 15 minutes:


10 developers * 4 commits / day / developer * 15 minutes build time * 1 project = 10 hours


You're already at 10 hours of build time, so your server is busy the whole day, and at times you might have several builds going on simultaneously, using several CPU cores in parallel. And instead of building in 15 minutes, the build might sometimes take longer, or might be queued. So in theory it's 15 minutes, but in practice it could be half an hour, because of the length of the queue or the extra time needed to build projects in parallel.


Now, the company is successful, and has two projects instead of one (think a backend and a mobile app). Your teams grow further, up to 20 developers per project. The developers are a little less productive because of the size of the codebase and project, so they only commit 3 times a day. The build takes more time too, at 20 minutes (in the ideal case). Let's do some math again:


20 developers * 3 commits / day / developer * 20 minutes build time * 2 projects = 40 hours


Whoa, that's already 40 hours of total build time, if all the builds are run serially. Fortunately, our server is multi-core, so perhaps 2, 3, or even 4 builds can run in parallel, but there are certainly still many builds sitting in the queue. And as the build queue grows further, the real effective build time is certainly longer than 30 minutes. At times, developers won't see the result of their work for at least an hour, if not more.


One last calculation? With team sizes of 30 developers, productivity decreased to 2 commits a day, 25-minute build times, and 3 projects, you get 75 hours of total build time. You may start creating a little build farm, with a master and several build agents, but that also increases the burden of server management. And if you move towards a full Continuous Delivery or Continuous Deployment approach, you may further increase your build times to go all the way up to deployment, make more but smaller commits, etc. You could think of running builds less often, or even nightly, to cope with the demand, but then your company is less agile, the time-to-market for fixes or new features increases, and your developers may become more frustrated, because they are developing in the dark, not knowing before the next day whether their work was successful or not.
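To recap the napkin math in executable form, here's a throwaway Groovy script applying the same formula to the four scenarios above:

    // total daily build time = developers * commits/day * build minutes * projects
    def buildMinutes = { devs, commits, minutes, projects ->
        devs * commits * minutes * projects
    }

    assert buildMinutes(3, 4, 10, 1) == 120    // 2 hours
    assert buildMinutes(10, 4, 15, 1) == 600   // 10 hours
    assert buildMinutes(20, 3, 20, 2) == 2400  // 40 hours
    assert buildMinutes(30, 2, 25, 3) == 4500  // 75 hours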


With these calculations, you might think the cloud makes more sense for big companies, with tons of projects and developers. That's quite true, but when you're a startup, you also want to avoid taking care of local server management, provisioning, etc. You want to be agile, and use only the compute resources you need, for the time you need them. So even if you're a small startup, a small team, it can still make sense to take advantage of the cloud: you pay only for the actual time taken by your builds, as the build agent containers are automatically provisioned and decommissioned, and the builds can scale up via Kubernetes as you need more (or less) CPU time for building everything.


And this is why I was happy to dive into scaling Jenkins in the cloud. For that purpose, I decided to build with containers, on Kubernetes, as my app was containerized as well. Google Cloud offers Container Engine, which is basically Kubernetes in the cloud.


Useful pointers


I based my presentation and demo on some great solutions that are published on the Google Cloud documentation portal. Let me give you some pointers.


Overview of Jenkins on Container Engine

https://cloud.google.com/solutions/jenkins-on-container-engine


Setting up Jenkins on Container Engine

https://cloud.google.com/solutions/jenkins-on-container-engine-tutorial


Configuring Jenkins for Container Engine

https://cloud.google.com/solutions/configuring-jenkins-container-engine


Continuous Deployment to Container Engine using Jenkins

https://cloud.google.com/solutions/continuous-delivery-jenkins-container-engine


Lab: Build a Continuous Deployment Pipeline with Jenkins and Kubernetes

https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes


That last one is the tutorial I actually followed for the demo I presented during the conference. It's a simple Go application, with a frontend and a backend. It's continuously built on each commit (well, Jenkins checks every minute whether there's a new commit), and deployed automatically to different environments: dev, canary, production. The sources of the project are stored in Cloud Source Repositories (which can be mirrored from GitHub, for example). The containers are stored in Container Registry. And both the Jenkins master and agents, as well as the application itself, run inside Kubernetes clusters on Container Engine.
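As a taste of what such a pipeline looks like, note that Jenkins pipelines are themselves written in a Groovy DSL. Here's a minimal declarative Jenkinsfile sketch, assuming the Jenkins Kubernetes plugin is installed; the agent label and the stage commands are made up for illustration, and the real tutorial's pipeline is more complete:

    // Jenkinsfile: each build runs in a fresh pod on the Kubernetes cluster
    pipeline {
        agent {
            kubernetes {
                label 'build-pod' // pod template configured in Jenkins
            }
        }
        stages {
            stage('Build & Test') {
                steps {
                    sh 'go test ./...'
                }
            }
            stage('Deploy to dev') {
                steps {
                    sh 'kubectl apply -f k8s/dev/'
                }
            }
        }
    }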


Summary and perspective


Don't bother with managing servers: you'll quickly run out of CPU cycles if you do! In the cloud, you'll have happier developers, with builds that are super snappy!


And for the record, at Google, dev teams are also running Jenkins! There was a presentation (video and slides available) given last year by David Hoover at Jenkins World, about how developers inside Google run hundreds of build agents to build projects on various platforms.


Serverless, Chatbots, and Machine Learning APIs at the TechBreak Meetup

This week, I had the pleasure of inaugurating the TechBreak meetup launched by Talan! Thanks to my friend Jérôme Bernard for the invitation. On top of a welcome with a nice buffet of champagne and petits fours, a beer tap had even been brought in for the occasion! About fifty people came to talk about chatbots, serverless, and machine learning, which are the hot topics of the moment.

Today, I'm publishing the slides, and we should also have the videos on YouTube soon.

A bot to manage your conference agenda

What if you took advantage of a bot to prepare and adjust your agenda for the conference? Are there any talks about Machine Learning, about Docker, or about your favorite programming language? Who is presenting that session? In this session, we'll look at how to build our own assistant, with the speech recognition and natural language analysis APIs from Google Cloud, with the API.AI services for building smart bots, and with Google Cloud Functions to implement the required business logic.


Machine Learning APIs

Speech or image recognition? Natural language analysis? Video understanding? There's an API for that at Google Cloud. In this presentation, we'll take a tour of the various available APIs that you can integrate into your own applications. We'll also briefly mention TensorFlow, the open source Deep Learning framework launched by Google, as well as the Cloud ML Engine platform, which lets you train your neural networks and run your predictions in the cloud.

Jenkins Community Day: scaling Jenkins with Kubernetes on Google Cloud

In Paris, on July 11th, Jenkins, our favorite open source character, will be there to welcome well-known speakers such as Kohsuke Kawaguchi (the father of Jenkins), Julien Dubois (the j-hipster bro), Quentin Adam (of the clever cloud), Nicolas De Loof (the zany cloud bee disguised as a duck), and many others... including... (drum roll)... me!!!


I'll have the pleasure of flying through the clouds with Jenkins: how many machines or build agents do you need to run your builds quickly? Well, you always need one more! We'll discover how to scale your use of Jenkins by taking advantage of Google Cloud Platform, and how to chain your agents, virtual machines, and containers to make them obey your every build wish! For that, we'll use Kubernetes and Container Engine.

I can't wait to dive into this topic with you, and I hope to see many of you at my session! Many thanks to JFrog, our froggy friends, for inviting me and hosting me as a speaker at this key event for anyone interested in continuous integration, continuous deployment, and DevOps in general. Thanks, JFrog! JFrog is one of those few companies that really understand the problems developers and engineers face, and that very actively support the open source world. Many open source projects, such as Apache Groovy and plenty of others, benefit from JFrog's infrastructure for their deployments, in particular through products like Artifactory and Bintray.

To sign up for the conference, head this way!

 