Turning a website into a desktop application

Probably like most of you, my dear readers, I have too many browser windows open, with tons of tabs for each window. But there are always apps I come back to very often, like my email (professional & personal), my calendar, my chat app, or even social media sites like Mastodon or Twitter. You can switch from window to window with CTRL/CMD-Tab, but you also have to move between tabs potentially. But for the most common webapps or websites I’m using, I wanted to have a dedicated desktop application.

Initially, I was on the lookout for a Mac-specific approach, as I’ve been a macOS user for many years. I found some Mac-specific apps that can handle that. This website mentions 5 approaches for macOS, including free, freemium, and non-free apps, like Fluid, Applicationize (which creates a Chrome extension), Web2Desk, or Unite. However, some of them create big, hundred-megabyte wrappers. Another approach on Macs was using Automator to create a pop-up window, but that’s just a pop-up, not a real app. There are also promising open source projects like Tauri and Nativefier.

Fortunately, there’s a cool feature in Chrome that should work across all OSes, not just macOS. So if you’re on Linux or Windows, please read on. The websites you’ll turn into applications don’t even need to be PWAs (Progressive Web Apps).

Here’s how to proceed:

First, navigate to the website you want to transform into an application with your Chrome browser.

Click on the triple dots in the top right corner, then More Tools, and finally Create Shortcut:

It will then let you customise the name of the application, and it’ll reuse the favicon of the website as the icon for the application. Be sure to check “Open as window” to create a standalone application:

Then you’ll be able to open the website as a standalone application:

I was curious whether a similar feature existed in other browsers like Firefox. For the little fox, the only thing I could find was the ability to open Firefox in kiosk mode, in full-screen. But I wanted a window I could resize however I wanted, not necessarily full-screen. I hope Firefox will add that capability at some point. But for now, I’m happy to have this solution with Chrome!
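By the way, on Linux you can get the same standalone window from the command line, thanks to Chrome’s --app flag. Here’s a minimal sketch of a .desktop launcher wrapping it (the app name and URL are just examples, and the Chrome binary name varies per distribution):

```shell
# Create a minimal .desktop launcher that opens a site in its own
# Chrome app window (app name and URL are examples).
APP_NAME="Mastodon"
APP_URL="https://mastodon.social"

cat > "${APP_NAME}.desktop" <<EOF
[Desktop Entry]
Type=Application
Name=${APP_NAME}
Exec=google-chrome --app=${APP_URL}
EOF
```

Drop the resulting file into ~/.local/share/applications to make it show up in your launcher.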

APIs, we have a Problem JSON

When designing a web API, not only do you have to think about the happy path when everything is alright, but you also have to handle all the error cases: Is the payload received correct? Is there a typo in a field? Do you need more context about the problem that occurred?

There’s only a limited set of status codes that can convey the kind of error you’re getting, but sometimes you need to explain more clearly what the error is about.

In the past, the APIs I was designing used to follow a common JSON structure for my error messages: a simple JSON object, usually with a message field, and sometimes with extra info like a custom error code, or a details field that contained a longer explanation in plain English. However, it was my own convention, and it’s not necessarily one that is used by others, or understood by tools that interact with my API. 

So that’s why today, for reporting problems with my web APIs, I tend to use Problem JSON. This is actually an RFC (RFC 7807) whose title is “Problem Details for HTTP APIs”. Exactly what I needed, a specification for my error messages!

First of all, it’s a JSON content-type. Your API should specify the content-type with:

Content-Type: application/problem+json

Content-types that end with +json are basically treated as application/json.

Now, an example payload from the specification looks like:

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc",
  "balance": 30,
  "accounts": ["/account/12345", "/account/67890"]
}
There are some standard fields like:

  • type: a URI reference that uniquely identifies the problem type

  • title: a short readable error statement

  • status: the original HTTP status code from the origin server

  • detail: a longer text explanation of the issue

  • instance: a URI that points to the resource that has issues

Then, in the example above, you also have custom fields, balance and accounts, which are specific to your application and not part of the specification. This means you can extend the Problem JSON payload with details that are specific to your application.
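To make that distinction concrete, here’s a small sketch in Python (not part of the spec or of any particular library) showing how a client could separate the standard members from the application-specific extensions:

```python
import json

def parse_problem(body: str) -> dict:
    """Parse an RFC 7807 problem+json payload, separating the standard
    members from application-specific extension members."""
    standard = {"type", "title", "status", "detail", "instance"}
    problem = json.loads(body)
    extensions = {k: v for k, v in problem.items() if k not in standard}
    return {"problem": problem, "extensions": extensions}

payload = '''{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc",
  "balance": 30,
  "accounts": ["/account/12345", "/account/67890"]
}'''

result = parse_problem(payload)
print(sorted(result["extensions"]))  # → ['accounts', 'balance']
```

A real client would of course first check that the response’s Content-Type is application/problem+json before parsing it this way.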

Note: Although I’m only covering JSON APIs, the RFC also suggests an application/problem+xml alternative for the XML APIs.

Icing on the cake: built-in support in Micronaut

My framework of choice these days for all my apps is Micronaut, as it’s super fast and memory efficient. And it’s only recently that I realized there was actually a Micronaut extension for Problem JSON! So instead of returning a custom JSON payload manually, I can use the built-in integration.

Here’s an example from the Problem JSON Micronaut extension:

@Controller("/product")
public class ProductController {

  @Get
  public void index() {
    throw Problem.builder()
            .withTitle("Out of Stock")
            .withStatus(new HttpStatusType(HttpStatus.BAD_REQUEST))
            .withDetail("Item B00027Y5QG is no longer available")
            .with("product", "B00027Y5QG")
            .build();
  }
}

Which will return a JSON error as follows:

{
    "status": 400,
    "title": "Out of Stock",
    "detail": "Item B00027Y5QG is no longer available",
    "type": "https://example.org/out-of-stock",
    "parameters": {"product": "B00027Y5QG"}
}

Now, I’m happy that I can use some official standard for giving more details about the errors returned by my APIs!

Workflows tips’n tricks

Here are some general tips and tricks that we found useful as we used Google Cloud Workflows:

Avoid hard-coding URLs

Since Workflows is all about calling APIs and service URLs, it’s important to have some clean way to handle those URLs. You can hard-code them in your workflow definition, but the problem is that your workflow can become harder to maintain. In particular, what happens when you work with multiple environments? You have to duplicate your YAML definitions and use different URLs for the prod vs staging vs dev environments. It is error-prone and quickly becomes painful to make modifications to essentially the same workflow in multiple files. To avoid hard-coding those URLs, there are a few approaches. 

The first one is to externalize those URLs and pass them as workflow execution arguments. This is great for workflow executions launched via the CLI, the various client libraries, or the REST & gRPC APIs. However, there’s a limitation to this first approach in the case of event-triggered workflows, where the invoker is Eventarc: it’s Eventarc that decides which arguments to pass (i.e. the event payload). There’s no way to pass extra arguments in that case.

A safer approach is then to use some placeholder replacement techniques. Just use a tool that replaces some specific string tokens in your definition file, before deploying that updated definition. We explored that approach using some Cloud Build steps that do some string replacement. You still have one single workflow definition file, but you deploy variants for the different environments. If you’re using Terraform for provisioning your infrastructure, we’ve got you covered, you can also employ a similar technique with Terraform.
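As a sketch of that token-replacement technique, here’s what a simple sed-based substitution could look like (the placeholder token, file names, and service URL are all hypothetical):

```shell
# Template with a token instead of a hard-coded URL (hypothetical names).
cat > workflow.yaml.tpl <<'EOF'
main:
    steps:
    - callService:
        call: http.get
        args:
            url: SERVICE_URL_PLACEHOLDER
        result: serviceResult
EOF

# Replace the token with the environment-specific URL before deploying.
SERVICE_URL="https://my-service-staging-abc123.a.run.app"
sed "s|SERVICE_URL_PLACEHOLDER|${SERVICE_URL}|g" workflow.yaml.tpl > workflow.yaml

# Then deploy the resolved definition, e.g.:
# gcloud workflows deploy my-workflow --source=workflow.yaml
```

In Cloud Build, each of those commands would simply become a build step, with the URL coming from a substitution variable per environment.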

There are also other possible approaches, like taking advantage of Secret Manager and the dedicated workflow connector, to store those URLs, and retrieve them. Or you can also read some JSON file in a cloud storage bucket, within which you would store those environment specific details.

Take advantage of sub-steps

Apart from branching or looping, defining your steps is a pretty sequential process: one step happens after another, and each step is its own atomic operation. However, some steps really go hand-in-hand, like making an API call, logging its outcome, and retrieving and assigning parts of the payload into variables. You can actually group related steps into sub-steps. This becomes handy when you branch from one set of steps to another, without having to point at the right atomic step.

main:
    params: [input]
    steps:
    - callWikipedia:
        steps:
        - checkSearchTermInInput:
            switch:
                - condition: ${"searchTerm" in input}
                  assign:
                    - searchTerm: ${input.searchTerm}
                  next: readWikipedia
        - getCurrentTime:
            call: http.get
            args:
                url: https://us-central1-workflowsample.cloudfunctions.net/datetime
            result: currentDateTime
        - setFromCallResult:
            assign:
                - searchTerm: ${currentDateTime.body.dayOfTheWeek}
        - readWikipedia:
            call: http.get
            args:
                url: https://en.wikipedia.org/w/api.php
                query:
                    action: opensearch
                    search: ${searchTerm}
            result: wikiResult
    - returnOutput:
        return: ${wikiResult.body[1]}

Wrap expressions

The dollar/curly brace ${} expressions are not part of the YAML specification, so what you put inside sometimes doesn’t play well with YAML’s expectations. For example, putting a colon inside a string inside an expression can be problematic, as the YAML parser believes the colon is the end of the YAML key and the start of the right-hand side. So to be safe, you can wrap your expressions within quotes, like: '${...}'
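For example, in a hypothetical assignment step, the surrounding single quotes keep the colon inside the string safe from the YAML parser:

```yaml
- assignMessage:
    assign:
        # Without the surrounding single quotes, the colon after "Error"
        # could be mistaken for a YAML key/value separator.
        - message: '${"Error: " + errorMessage}'
```

The step and variable names here are just illustrative.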

Expressions can span several lines, as can the strings within an expression. That’s handy for SQL queries for BigQuery, like in our example:

query: ${
    "SELECT TITLE, SUM(views)
    FROM `bigquery-samples.wikipedia_pageviews." + table + "`
    GROUP BY TITLE
    LIMIT 100"
}

Replace logic-less services with declarative API calls

In our serverless workshop, in lab 1, we had a function service that was making a call to the Cloud Vision API, checking a boolean attribute, then writing the result in Firestore. But the Vision API can be called declaratively from Workflows. The boolean check can be done with a switch conditional expression, and even writing to Firestore can be done via a declarative API call. When rewriting our application in lab 6 to use the orchestrated approach, we moved those logic-less calls into declarative API calls. 

There are times when Workflows lacks some built-in function you would need, so you have no choice but to fork into a function to do the job. But when you have pretty logic-less code that just makes some API calls, you’d better write it declaratively using Workflows syntax.
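For illustration, assuming the Cloud Vision connector (the call name and fields below are from memory, so double-check them against the connectors reference), such a declarative call could look like:

```yaml
- analyzeImage:
    # Declarative call to the Vision API via its Workflows connector,
    # replacing a logic-less wrapper function.
    call: googleapis.vision.v1.images.annotate
    args:
        body:
            requests:
                - image:
                    source:
                        imageUri: ${"gs://" + bucket + "/" + file}
                  features:
                    - type: SAFE_SEARCH_DETECTION
    result: visionResult
```

The bucket and file variables are assumed to have been assigned in earlier steps.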

It doesn’t mean that everything, or as much as possible, should be done declaratively in a Workflow either. Workflows is not a hammer, and it’s definitely not a programming language. So when there’s real logic, you definitely need to call some service that represents that business logic.

Store what you need, free what you can

Workflows keeps on granting more memory to workflow executions, but there are times, with big API response payloads, when you’d be happy to have even more. That’s when sane memory management is a good thing. You can be selective in what you store in variables: don’t store too much, just the part of the payload you really need. Once you know you won’t need the content of a variable anymore, you can reassign it to null, which should free that memory. Also, in the first place, if an API allows you to filter the result more aggressively, you should do that too. Last but not least, if you’re calling a service that returns a gigantic payload that can’t fit in Workflows memory, you can always delegate that call to your own function, which takes care of making the call on your behalf and returns just the parts you’re really interested in.
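As a sketch, with hypothetical step and variable names, that store-then-free pattern looks like:

```yaml
- extractWhatWeNeed:
    assign:
        # Keep only the field we actually need from the large response.
        - titles: ${bigApiResponse.body.items}
- freeMemory:
    assign:
        # Release the large payload once it's no longer needed.
        - bigApiResponse: null
```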

Don’t forget to check the documentation on quotas and limits to know more about what’s possible.

Take advantage of sub-workflows and the ability to call external workflows

In your workflows, sometimes there are steps that you might need to repeat. That’s when sub-workflows become handy. Sub-workflows are like sub-routines, procedures, or methods. They are a way to make a set of steps reusable in several places of your workflow, potentially parameterized with different arguments. Perhaps the sole downside is that sub-workflows are local to your workflow definition, so they can’t be reused in other workflows. In that case, you can create a dedicated reusable workflow, because you can also call workflows from other workflows! The workflows connector for workflows is there to help.
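Here’s a minimal sketch of a sub-workflow (the names are illustrative): a greet sub-workflow defined next to main, called like a function with arguments:

```yaml
main:
    steps:
    - greetings:
        call: greet          # invoke the sub-workflow defined below
        args:
            name: "Workflows"
        result: message
    - returnMessage:
        return: ${message}

# Sub-workflow: reusable, parameterized set of steps
greet:
    params: [name]
    steps:
    - buildMessage:
        return: ${"Hello, " + name + "!"}
```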


We’ve covered a few tips and tricks, and we’ve reviewed some useful advice on how to make the best use of Workflows. There are certainly others we’re forgetting about. So feel free to share them with @meteatamel and @glaforge over Twitter. 

And don’t forget to double check what’s in the Workflows documentation. In particular, have a look at the built-in functions of the standard library, at the list of connectors that you can use, and perhaps even print the syntax cheat sheet.

Lastly, check out all the samples in the documentation portal, and all the workflow demos Mete and I have built and open sourced over time.

Retrieve YouTube views count with youtube-dl, jq, and a docker container

I wanted to track the number of views, and also likes, of some YouTube videos I was featured in. For example, when I present a talk at a conference, the video often becomes available at a later time, and I’m not the owner of the channel or video. At first, I wanted to use the YouTube Data API, but I had the impression that I could only see the stats of videos or channels I own. However, I think I might be wrong about that, and should probably revisit this approach later on.

My first intuition was to just scrape the web page of the video, but it’s a gobbledygook of JavaScript, and I couldn’t really find an easy way to consistently get the numbers in that sea of compressed JavaScript. That’s when I remembered the youtube-dl project. Some people think of this project as a way to download videos to watch offline, but it’s also a useful tool that offers lots of metadata about the videos. You can even use the project without downloading videos at all, just fetching the metadata.

For example, if I want to get the video metadata, without downloading, I can launch the following command, after having installed the tool locally:

youtube-dl -j -s https://www.youtube.com/watch?v=xJi6pldZnsw

The -s flag is equivalent to --simulate which doesn’t download anything on disk.

And the -j flag is the short version of --dump-json which returns a big JSON file with lots of metadata, including the view count, but also things like links to transcripts in various languages, chapters, creator, duration, episode number, and so on and so forth.

Now, I’m only interested in view counts, likes, dislikes. So I’m using jq to filter the big JSON payload, and create a resulting JSON document with just the fields I want.

jq '{"id":.id,"title":.title,"views":.view_count,"likes":(.like_count // 0), "dislikes":(.dislike_count // 0)}'

This long command is creating a JSON structure as follows:

{
    "id": "xJi6pldZnsw",
    "title": "Reuse old smartphones to monitor 3D prints, with WebRTC, WebSockets and Serverless by G. Laforge",
    "views": 172,
    "likes": 6,
    "dislikes": 0
}

The .id, .title, .view_count, etc., select those particular keys in the big JSON document. The // 0 notation avoids null values, returning 0 if the key is missing or its value is null. So I always get a number, although I noticed that sometimes the likes are not properly accounted for, and I haven’t figured out why.
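To illustrate what that jq filter does, here’s a hedged Python re-implementation of the same logic (only a sketch, not what runs in the container):

```python
def extract_stats(video: dict) -> dict:
    """Python equivalent of the jq filter: pick a few fields, and
    default likes/dislikes to 0 when absent or null (jq's `// 0`)."""
    return {
        "id": video.get("id"),
        "title": video.get("title"),
        "views": video.get("view_count"),
        "likes": video.get("like_count") or 0,
        "dislikes": video.get("dislike_count") or 0,
    }

# A trimmed-down stand-in for youtube-dl's metadata output,
# with a null like_count and no dislike_count key at all:
metadata = {"id": "xJi6pldZnsw", "title": "...", "view_count": 172, "like_count": None}
print(extract_stats(metadata))
```

Both the missing key and the explicit null end up as 0, just like with jq’s alternative operator.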

So far so good… but if you pass a URL of a video with a playlist, or if you pass a playlist URL, it will fetch the metadata for all the videos. This is actually useful: you can even create your own playlists for the videos you want to track. There’s one odd thing happening when using youtube-dl with such URLs, though: it outputs one JSON document per line for each video, rather than returning an array of those documents. So I found a nice trick with jq to always put the results within an array, whether you pass a URL for a single video, or a video with a playlist:

jq -n '[inputs]'

So I pipe together the youtube-dl command and the two jq commands.

Rather than installing those tools locally, I decided to containerize my magic commands.

Let me first show you the whole Dockerfile:

FROM ubuntu:latest
RUN apt-get update && apt-get -y install wget \
    && wget https://yt-dl.org/latest/youtube-dl -O /usr/local/bin/youtube-dl \
    && chmod a+rx /usr/local/bin/youtube-dl \
    && apt-get -y install python3-pip jq \
    && pip3 install --upgrade youtube-dl
COPY ./launch-yt-dl.sh /
RUN chmod +x /launch-yt-dl.sh
ENTRYPOINT ["./launch-yt-dl.sh"]

And also this bash script mentioned in the Dockerfile:

#!/bin/bash
youtube-dl -j -s -- "$@" | jq '{"id":.id,"title":.title,"views":.view_count,"likes":(.like_count // 0), "dislikes":(.dislike_count // 0)}' | jq -n '[inputs]'

I went with the latest Ubuntu image. I run some apt-get commands to install wget, use it to download the latest youtube-dl release, and install Python 3’s pip to upgrade youtube-dl. There’s no recent apt package for youtube-dl, hence those steps together.

What’s more interesting is why I don’t have the youtube-dl and jq commands in the Dockerfile directly, but instead in a dedicated bash script. Initially I had an ENTRYPOINT that pointed at youtube-dl, so that arguments passed to the docker run command would be passed as arguments of that entrypoint. However, after those commands, I still have to pipe with my jq commands, and I couldn’t find how to do so with ENTRYPOINT and CMD. When raising the problem on Twitter, my friends Guillaume Lours and Christophe Furmaniak pointed me in the right direction with this idea of passing through a script.

So I use the "$@" bash shortcut, which expands to the arguments $1 $2 $3, etc., in case several videos are passed as arguments, and I have the jq pipes after that shortcut. My ENTRYPOINT stays simple: the args are passed directly to it, and it’s that intermediary script that weaves the args into my longer command.
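A toy stand-in for that wrapper script shows what "$@" does: it forwards every argument intact, even those containing spaces or special characters.

```shell
# Toy stand-in for launch-yt-dl.sh: "$@" forwards all arguments
# intact, one per loop iteration.
forward_args() {
    for arg in "$@"; do
        printf 'arg: %s\n' "$arg"
    done
}

forward_args "https://youtube.com/watch?v=aaa" "https://youtube.com/watch?v=bbb" > args.txt
```

Using unquoted $@ (or $*) instead would re-split arguments on whitespace, which is exactly what you don’t want with URLs.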

Next, I just need to build my Docker container:

docker build . -t yt-video-stat

And then run it:

docker run --rm -it yt-video-stat "https://www.youtube.com/watch?v=xJi6pldZnsw"

And voila, I have the stats for the YouTube videos I’m interested in!

Building and deploying Java 17 apps on Cloud Run with cloud native buildpacks on Temurin

In this article, let’s revisit the topic of deploying Java apps on Cloud Run. In particular, I’ll deploy a Micronaut app, written with Java 17, and built with Gradle.

With a custom Dockerfile

On Cloud Run, you deploy containerised applications, so you have to decide the way you want to build a container for your application. In a previous article, I showed an example of using your own Dockerfile, which would look as follows with an OpenJDK 17, and enabling preview features of the language:

FROM openjdk:17
COPY ./ ./
RUN ./gradlew shadowJar
CMD ["java", "--enable-preview", "-jar", "build/libs/app-0.1-all.jar"]

To further improve on that Dockerfile, you could use a multistage Docker build to first build the app in one step with Gradle, and then run it in a second step. Also you might want to parameterize the command as the JAR file name is hard-coded.
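For instance, a multi-stage variant could look like the sketch below. The base image tags and the Gradle output path are assumptions to adapt to your own build:

```dockerfile
# Stage 1: build the shadow JAR with Gradle (image tag assumed)
FROM gradle:jdk17 AS build
COPY ./ ./
RUN gradle shadowJar

# Stage 2: run it on a plain JDK image, without the Gradle toolchain
FROM openjdk:17
COPY --from=build /home/gradle/build/libs/app-0.1-all.jar app.jar
CMD ["java", "--enable-preview", "-jar", "app.jar"]
```

The final image only carries the JDK and the application JAR, not the whole Gradle distribution and build cache.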

To build the image, you can build it locally with Docker, and then push it to Container Registry, and then deploy it:

# gcloud auth configure-docker
# gcloud components install docker-credential-gcr

docker build . --tag gcr.io/YOUR_PROJECT_ID/IMAGE_NAME
docker push gcr.io/YOUR_PROJECT_ID/IMAGE_NAME

gcloud run deploy weekend-service \
    --image gcr.io/YOUR_PROJECT_ID/IMAGE_NAME

Instead of building locally with Docker, you could also let Cloud Build do it for you:

gcloud builds submit . --tag gcr.io/YOUR_PROJECT_ID/SERVICE_NAME

With JIB

Instead of messing around with Dockerfiles, you can also let JIB create the container for you, as I wrote in another article. You configure Gradle to use the JIB plugin:

plugins {
    id "com.google.cloud.tools.jib" version "2.8.0"
}

tasks {
    jib {
        from {
            image = "gcr.io/distroless/java17-debian11"
        }
        to {
            image = "gcr.io/YOUR_PROJECT_ID/SERVICE_NAME"
        }
    }
}
You specify the version of the plugin, but you also indicate that you want to use Java 17 by choosing a base image with that same version. Be sure to change the placeholders for your project ID and service name. Feel free to look up the documentation about the JIB Gradle plugin. You can then let Gradle build the container with ./gradlew jib, or with ./gradlew jibDockerBuild if you want to use your local Docker daemon.

With Cloud Native Buildpacks

Now that we covered the other approaches, let’s zoom in on using Cloud Native Buildpacks instead, in particular, the Google Cloud Native Buildpacks. With buildpacks, you don’t have to bother with Dockerfiles or with building the container before deploying the service. You let Cloud Run use buildpacks to build, containerize, and deploy your application from sources.

Out of the box, the buildpack actually targets Java 8 or Java 11. But I’m interested in running the latest LTS version, Java 17, to take advantage of recent language additions like records, sealed classes, and switch expressions, as well as preview features.

In my Gradle build, I specify that I’m using Java 17, but also enable preview features:

java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(17)
    }
}

Like in Cédric Champeau’s blog post, to enable preview features, you should also tell Gradle you want to enable them for compilation, test, and execution tasks:

tasks.withType(JavaCompile).configureEach {
    options.compilerArgs.add("--enable-preview")
}

tasks.withType(Test).configureEach {
    jvmArgs("--enable-preview")
}

tasks.withType(JavaExec).configureEach {
    jvmArgs("--enable-preview")
}

So far so good, but as I said, the default native buildpack isn’t using Java 17, and I want to specify that I use preview features. So when I tried to deploy my Cloud Run app from sources with the buildpack, simply by running the gcloud deploy command, I would get an error.

gcloud beta run deploy SERVICE_NAME

To circumvent this problem, I had to add a configuration file, to instruct the buildpack to use Java 17. I created a project.toml file at the root of my project:

[[build.env]]
name = "GOOGLE_RUNTIME_VERSION"
value = "17"

[[build.env]]
name = "GOOGLE_ENTRYPOINT"
value = "java --enable-preview -jar /workspace/build/libs/app-0.1-all.jar"

I specify that the runtime version must use Java 17. But I also add the --enable-preview flag to enable the preview features at runtime.

Adoptium Temurin OpenJDK 17

The icing on the cake is that the build is using Adoptium’s Temurin build of OpenJDK 17, as we recently announced! If you look at the build logs in Cloud Build, you should see some output mentioning it, like:

{
    "link": "https://github.com/adoptium/temurin17-binaries/releases/download/jdk-",
    "name": "OpenJDK17U-jdk-sources_17.0.4.1_1.tar.gz",
    "size": 105784017
}

Way to go! Java 17 Micronaut app, deployed on Temurin on Cloud Run thanks to cloud native buildpacks! I win at buzzword bingo 🙂

© 2012 Guillaume Laforge | The views and opinions expressed here are mine and don't reflect the ones from my employer.