Calculating your potential reach on Mastodon with Google Cloud Workflows orchestrating the Mastodon APIs

With the turmoil around Twitter, like many, I’ve decided to look into Mastodon. My friend Geert is running his own Mastodon server, and welcomed me on his instance.

With Twitter, you can access your analytics to know how your tweets are doing, and how many views you’re getting. Working in developer relations, it’s always useful to get some insight into those numbers, to figure out whether what you’re sharing resonates with your community. But for various (actually good) reasons, Mastodon doesn’t offer such detailed analytics. However, I wanted to see what the Mastodon APIs offered.

How to calculate your potential reach

Your “toots” (i.e. your posts on Mastodon) can be “boosted” (the equivalent of a retweet on Twitter). Also, each actor on Mastodon has a certain number of followers. So potentially, one of your toots can reach all your followers, as well as all the followers of the actors who reshare your toot.

So the maximum potential reach of one of your posts would correspond to the following equation:

potential_reach =
    me.followers_count +
    ∑ ( boosters[i].followers_count )
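The equation can be sketched in a few lines of Python (the followers_count field name mirrors the Mastodon API payloads shown later; the helper itself is my own illustration):

```python
def potential_reach(my_followers_count, boosters):
    """Maximum audience of a toot: your own followers, plus the
    followers of every account that boosted (reshared) it."""
    return my_followers_count + sum(b["followers_count"] for b in boosters)

# Hypothetical sample: 878 followers of my own, plus two boosters
boosters = [{"followers_count": 120}, {"followers_count": 45}]
print(potential_reach(878, boosters))  # 878 + 120 + 45 = 1043
```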

Let’s play with the Mastodon APIs to compute your reach

Fortunately, the Mastodon APIs allow you to get those numbers, albeit not with a single API call. Let’s have a look at the interesting endpoints to get the potential reach of my most recent posts.

First of all, I’ll look up my account on the Mastodon instance that hosts me:


I pass my account name as a query parameter to the /accounts/lookup endpoint.

In return, I get a JSON document that contains various details about my account and me (I’ll just show some of the interesting fields, not the whole payload):

    id: "109314675907601286",
    username: "glaforge",
    acct: "glaforge",
    display_name: "Guillaume Laforge",
    note: "...",
    url: "",
    followers_count: 878,
    fields: [...]

I get two important pieces of information here: the followers_count gives me, you guessed it, the number of followers my account has, and thus the potential number of people who can see my toots. I also get the id of my account, which I’ll need for the API calls further down.
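As a sketch, that lookup URL can be assembled with Python’s standard library (the server name below is just a placeholder):

```python
from urllib.parse import urlencode

def lookup_url(server, account):
    """Build the /accounts/lookup URL, with the account name passed
    as the `acct` query parameter."""
    return f"https://{server}/api/v1/accounts/lookup?" + urlencode({"acct": account})

print(lookup_url("example.social", "glaforge"))
# https://example.social/api/v1/accounts/lookup?acct=glaforge
```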

To get the most recent statuses I’ve posted, I’ll indeed need that account id for crafting the new URL I’ll call:


This call will return a list of statuses (again, I snipped the less interesting parts of the payload):


        id: "109620174916140649",
        created_at: "2023-01-02T14:52:06.044Z",

        replies_count: 2,
        reblogs_count: 6,
        favourites_count: 6,

        edited_at: null,
        content: "...",
        reblog: null,




In each status object, you can see the number of replies, the number of times the post was reshared or favorited, or whether it’s a reshared toot itself. So what’s interesting here is the reblogs_count number. 

However, you don’t get more details about who reshared your toot. So we’ll need some extra calls to figure that out!

So for each of your posts, you’ll have to call the following endpoint to know more about those “reblogs”:


This time, you’ll get a list of all the persons who reshared your post:

        id: "123456789",
        username: "...",
        acct: "...",
        display_name: "...",
        followers_count: 7,

And as you can see, the details of those persons also include the followers_count field, which tells the number of people that follow them.

So now, we have all the numbers we need to calculate the potential reach of our toots: your own number of followers, and the number of followers of all those who reshared! It doesn’t mean that your toots will actually be viewed that many times, as one doesn’t necessarily read each and every toot on their timeline, but at least it’s an approximation of the maximum reach you can get.
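Putting the three calls together, the whole computation can be sketched in Python, with the HTTP layer abstracted behind a fetch_json callable so the logic is visible without real network traffic (the wiring and the fake data are mine; the field names mirror the payloads above):

```python
def reach_per_status(fetch_json, server, account):
    """For each recent status, sum our followers_count with the
    followers_count of every account that boosted that status."""
    base = f"https://{server}/api/v1"
    me = fetch_json(f"{base}/accounts/lookup?acct={account}")
    statuses = fetch_json(f"{base}/accounts/{me['id']}/statuses?limit=100&exclude_reblogs=true")
    impact = {}
    for status in statuses:
        boosters = fetch_json(f"{base}/statuses/{status['id']}/reblogged_by")
        impact[status["url"]] = me["followers_count"] + sum(
            b["followers_count"] for b in boosters)
    return impact

# Tiny in-memory stand-in for the Mastodon API:
canned = {
    "lookup": {"id": "42", "followers_count": 878},
    "statuses": [{"id": "1", "url": "https://example.social/@g/1"}],
    "boosters": [{"followers_count": 100}, {"followers_count": 22}],
}
def fake_fetch(url):
    if "lookup" in url:
        return canned["lookup"]
    if "reblogged_by" in url:
        return canned["boosters"]
    return canned["statuses"]

print(reach_per_status(fake_fetch, "example.social", "glaforge"))
# {'https://example.social/@g/1': 1000}
```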

Automating the potential reach calculation with Web API orchestration

Initially I played with both cURL and a little Apache Groovy script to better understand the Mastodon APIs to figure out how to chain them to get to the expected result. Then I decided to automate that series of Web API calls using an API orchestrator: Google Cloud Workflows.

To recap, we need to:

  • Get the details of your account

  • Get the recent posts for that account

  • Get the followers count for each person who reshared each post

Let’s have a look at this piece by piece:

main:
    params: [input]
    steps:
        - account_server_vars:
            assign:
                - account: ${input.account}
                - server: ${input.server}
                - prefix: ${"https://" + server + "/api/v1"}
                - impact_map: {}

First, the workflow takes account and server arguments; in my case, that’s glaforge and the server that hosts my account. I’m also defining a variable with the base path of the Mastodon API, and a dictionary to hold the data for each toot.

- find_account_id:
    call: http.get
    args:
        url: ${prefix + "/accounts/lookup"}
        query:
            acct: ${account}
    result: account_id_lookup
- account_id_var:
    assign:
        - account_id: ${account_id_lookup.body.id}
        - followers_count: ${account_id_lookup.body.followers_count}

Above, I’m doing an account lookup, to get the id of the account, but also the followers count.

- get_statuses:
    call: http.get
    args:
        url: ${prefix + "/accounts/" + account_id + "/statuses"}
        query:
            limit: 100
            exclude_reblogs: true
    result: statuses

We get the list of most recent toots. 

Now things get more interesting, as we need to iterate over all the statuses. We’ll do so in parallel, to save some time:

- iterate_statuses:
    parallel:
        shared: [impact_map]
        for:
            value: status
            in: ${statuses.body}

To parallelize the per-status calls, we just need to state it’s parallel, and that the variable we’ll keep our data in is a shared variable that needs to be accessed in parallel. Next, we define the steps for each parallel iteration:

- counter_var:
    assign:
        - impact: ${followers_count}
- fetch_reblogs:
    call: http.get
    args:
        url: ${prefix + "/statuses/" + status.id + "/reblogged_by"}
    result: reblogs

Above, we get the list of people who reshared our post. And for each of these accounts, we’re incrementing our impact counter with the number of their followers. It’s another loop, but that doesn’t need to be done in parallel, as we’re not calling any API:

- iterate_reblogs:
    for:
        value: reblog
        in: ${reblogs.body}
        steps:
            - increment_reblog:
                assign:
                    - impact: ${impact + reblog.followers_count}
- update_impact_map:
    assign:
        - impact_map[status.url]: ${impact}

And we finish the workflow by returning the data:

- returnOutput:
    return:
        id: ${account_id}
        account: ${account}
        server: ${server}
        followers: ${followers_count}
        impact: ${impact_map}

This will return an output similar to this:

{
  "account": "glaforge",
  "followers": 878,
  "id": "109314675907601286",
  "impact": {
    "": 945,
    "": 1523,
    "": 121385,
    "": 878,
    "": 1002,
    "": 878,
    "": 896,
    "": 1662,
    "": 1523
  },
  "server": ""
}

With this little workflow, I can check how my toots are doing on this new social media! As next steps, you might want to check out how to get started with API orchestration with Google Cloud Workflows, in the cloud console, or from the command-line. And to go further, potentially, it might be interesting to schedule a workflow execution with Cloud Scheduler. We could also imagine storing those stats in a database (perhaps BigQuery for some analytics, or simply Firestore or CloudSQL), to see how your impact evolves over time.

Turning a website into a desktop application

Probably like most of you, my dear readers, I have too many browser windows open, with tons of tabs for each window. But there are always apps I come back to very often, like my email (professional & personal), my calendar, my chat app, or even social media sites like Mastodon or Twitter. You can switch from window to window with CTRL/CMD-Tab, but you also have to move between tabs potentially. But for the most common webapps or websites I’m using, I wanted to have a dedicated desktop application.

Initially, I was on the lookout for a Mac-specific approach, as I’ve been a macOS user for many years. I found some Mac-specific apps that can handle that. This website mentions 5 approaches for macOS, including free, freemium, and paid apps, like Fluid, Applicationize (creating a Chrome extension), Web2Desk, or Unite. However, some of them create big hundred-megabyte wrappers. Another approach on Macs was using Automator to create a pop-up window, but that’s just a pop-up, not a real app. There are also open source projects like Tauri and Nativefier which seem promising.

Fortunately, there’s a cool feature in Chrome that should work across all OSes, not just macOS. So if you’re on Linux or Windows, please read on. The websites you’ll turn into applications don’t even need to be PWAs (Progressive Web Apps).

Here’s how to proceed:

First, navigate to your website you want to transform into an application with your Chrome browser.

Click on the triple dots in the top right corner, then More Tools, and finally Create Shortcut:

It will then let you customise the name of the application. It’ll reuse the favicon of the website as the icon for the application. But be sure to check “Open as window” to create a standalone application:

Then you’ll be able to open the website as a standalone application:

I was curious whether a similar feature existed in other browsers like Firefox. For the little fox, the only thing I could find was the ability to open Firefox in kiosk mode, in full-screen. But I wanted a window I could size however I wanted, not necessarily full-screen. I hope that Firefox will add that capability at some point. But for now, I’m happy to have this solution with Chrome!

APIs, we have a Problem JSON

When designing a web API, not only do you have to think about the happy path when everything is alright, but you also have to handle all the error cases: Is the payload received correct? Is there a typo in a field? Do you need more context about the problem that occurred?

There’s only a limited set of status codes that can convey the kind of error you’re getting, but sometimes you need to explain more clearly what the error is about.

In the past, the APIs I was designing used to follow a common JSON structure for my error messages: a simple JSON object, usually with a message field, and sometimes with extra info like a custom error code, or a details field that contained a longer explanation in plain English. However, it was my own convention, and it’s not necessarily one that is used by others, or understood by tools that interact with my API. 

So that’s why today, for reporting problems with my web APIs, I tend to use Problem JSON. This is actually an RFC (RFC-7807) whose title is “Problem Details for HTTP APIs”. Exactly what I needed, a specification for my error messages!

First of all, it’s a JSON content-type. Your API should specify the content-type with:

Content-Type: application/problem+json

Content-types that end with +json are basically treated as application/json.
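That suffix convention (the structured syntax suffixes of RFC 6839) is easy to honor on the client side; a minimal sketch:

```python
def is_json_media_type(content_type):
    """True for application/json and for any media type using the
    +json structured syntax suffix, like application/problem+json."""
    mime = content_type.split(";")[0].strip().lower()
    return mime == "application/json" or mime.endswith("+json")

assert is_json_media_type("application/problem+json")
assert is_json_media_type("application/json; charset=utf-8")
assert not is_json_media_type("text/html")
```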

Now, an example payload from the specification looks like:

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc",
  "balance": 30,
  "accounts": ["/account/12345", "/account/67890"]
}

There are some standard fields like:

  • type: a URI reference that uniquely identifies the problem type

  • title: a short readable error statement

  • status: the original HTTP status code from the origin server

  • detail: a longer text explanation of the issue

  • instance: a URI that points to the resource that has issues

Then, in the example above, you also have custom fields: balance and accounts, which are specific to your application, and not part of the specification. Which means you can expand the Problem JSON payload to include details that are specific to your application.
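If you’re assembling such a response by hand, a minimal Python sketch could look like this (the helper function and its parameters are my own illustration of the RFC’s fields, not part of the spec):

```python
import json

def problem_response(type_uri, title, status, detail=None, instance=None, **extensions):
    """Assemble an RFC 7807 problem document; extra keyword arguments
    become application-specific extension members."""
    body = {"type": type_uri, "title": title, "status": status}
    if detail:
        body["detail"] = detail
    if instance:
        body["instance"] = instance
    body.update(extensions)  # custom fields like balance, accounts...
    return "application/problem+json", json.dumps(body)

content_type, payload = problem_response(
    "https://example.com/probs/out-of-credit",
    "You do not have enough credit.", 403,
    detail="Your current balance is 30, but that costs 50.",
    balance=30)
```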

Note: Although I’m only covering JSON APIs, the RFC also suggests an application/problem+xml alternative for the XML APIs.

Icing on the cake: built-in support in Micronaut

My framework of choice these days for all my apps is Micronaut, as it’s super fast and memory efficient. And it’s only recently that I realized there was actually a Micronaut extension for Problem JSON! So instead of returning a custom JSON payload manually, I can use the built-in integration.

Here’s an example from the Problem JSON Micronaut extension:

@Controller("/product")
public class ProductController {

    @Get
    public void index() {
        throw Problem.builder()
                .withTitle("Out of Stock")
                .withStatus(new HttpStatusType(HttpStatus.BAD_REQUEST))
                .withDetail("Item B00027Y5QG is no longer available")
                .with("product", "B00027Y5QG")
                .build();
    }
}

Which will return a JSON error as follows:

{
    "status": 400,
    "title": "Out of Stock",
    "detail": "Item B00027Y5QG is no longer available",
    "type": "",
    "parameters": {"product": "B00027Y5QG"}
}

Now, I’m happy that I can use some official standard for giving more details about the errors returned by my APIs!

Workflows tips’n tricks

Here are some general tips and tricks that we found useful as we used Google Cloud Workflows:

Avoid hard-coding URLs

Since Workflows is all about calling APIs and service URLs, it’s important to have some clean way to handle those URLs. You can hard-code them in your workflow definition, but the problem is that your workflow can become harder to maintain. In particular, what happens when you work with multiple environments? You have to duplicate your YAML definitions and use different URLs for the prod vs staging vs dev environments. It is error-prone and quickly becomes painful to make modifications to essentially the same workflow in multiple files. To avoid hard-coding those URLs, there are a few approaches. 

The first one is to externalize those URLs and pass them as workflow execution arguments. This is great for workflow executions that are launched via the CLI, via the various client libraries, or the REST & gRPC APIs. However, there’s a limitation to this first approach in the case of event-triggered workflows, where the invoker is Eventarc. In that case, it’s Eventarc that decides which arguments to pass (i.e. the event payload); there’s no way to pass extra arguments.

A safer approach is then to use some placeholder replacement techniques. Just use a tool that replaces some specific string tokens in your definition file, before deploying that updated definition. We explored that approach using some Cloud Build steps that do some string replacement. You still have one single workflow definition file, but you deploy variants for the different environments. If you’re using Terraform for provisioning your infrastructure, we’ve got you covered, you can also employ a similar technique with Terraform.

There are also other possible approaches, like taking advantage of Secret Manager and the dedicated workflow connector, to store those URLs, and retrieve them. Or you can also read some JSON file in a cloud storage bucket, within which you would store those environment specific details.

Take advantage of sub-steps

Apart from branching or looping, defining your steps is a pretty sequential process. One step happens after another. Steps are their own atomic operation. However, often, some steps really go hand-in-hand, like making an API call, logging its outcome, retrieving and assigning parts of the payload into some variables. You can actually regroup common steps into substeps. This becomes handy when you are branching from a set of steps to another set of steps, without having to point at the right atomic step.

main:
    params: [input]
    steps:
        - callWikipedia:
            steps:
                - checkSearchTermInInput:
                    switch:
                        - condition: ${"searchTerm" in input}
                          assign:
                              - searchTerm: ${input.searchTerm}
                          next: readWikipedia
                - getCurrentTime:
                    call: http.get
                    result: currentDateTime
                - setFromCallResult:
                    assign:
                        - searchTerm: ${currentDateTime.body.dayOfTheWeek}
                - readWikipedia:
                    call: http.get
                    args:
                        query:
                            action: opensearch
                            search: ${searchTerm}
                    result: wikiResult
        - returnOutput:
            return: ${wikiResult.body[1]}

Wrap expressions

The dollar/curly brace ${} expressions are not part of the YAML specification, so what you put inside sometimes doesn’t play well with YAML’s expectations. For example, putting a colon inside a string inside an expression can be problematic, as the YAML parser believes the colon is the end of the YAML key, and the start of the right-hand-side. So to be safe, you can actually wrap your expressions within quotes, like: '${...}' 

Expressions can span several lines, as well as the strings within that expression. That’s handy for SQL queries for BigQuery, like in our example:

query: ${
    "SELECT TITLE, SUM(views)
    FROM `bigquery-samples.wikipedia_pageviews." + table + "`
    LIMIT 100"
}
Replace logic-less services with declarative API calls

In our serverless workshop, in lab 1, we had a function service that was making a call to the Cloud Vision API, checking a boolean attribute, then writing the result in Firestore. But the Vision API can be called declaratively from Workflows. The boolean check can be done with a switch conditional expression, and even writing to Firestore can be done via a declarative API call. When rewriting our application in lab 6 to use the orchestrated approach, we moved those logic-less calls into declarative API calls. 

There are times where Workflows lacks some built-in function that you would need, so you have no choice but to call out to a function to do the job. But when you have pretty logic-less code that just makes some API calls, you’d better write it declaratively using the Workflows syntax.

It doesn’t mean that everything, or as much as possible, should be done declaratively in a Workflow either. Workflows is not a hammer, and it’s definitely not a programming language. So when there’s real logic, you definitely need to call some service that represents that business logic.

Store what you need, free what you can

Workflows keeps on granting more memory to workflow executions, but there are times, with big API response payloads, where you’d be happy to have even more memory. That’s when sane memory management can be a good thing to do. You can be selective in what you store into variables: don’t store too much, but store just the right payload part you really need. Once you know you won’t need the content of one of your variables, you can also reassign that variable to null, that should also free that memory. Also, in the first place, if the APIs allow you to filter the result more aggressively, you should also do that. Last but not least, if you’re calling a service that returns a gigantic payload that can’t fit in Workflows memory, you could always delegate that call to your own function that would take care of making the call on your behalf, and returning to you just the parts you’re really interested in.

Don’t forget to check the documentation on quotas and limits to know more about what’s possible.

Take advantage of sub-workflows and the ability to call external workflows

In your workflows, sometimes there are some steps that you might need to repeat. That’s when subworkflows become handy. Sub-workflows are like sub-routines, procedures, or methods. They are a way to make a set of steps reusable in several places of your workflow, potentially parameterized with different arguments. Perhaps the sole downside is that subworkflows are local to your workflow definition, so they can’t be reused in other workflows. In that case, you could actually create a dedicated reusable workflow, because you can also call workflows from other workflows! The workflows connector for workflows is there to help.


We’ve covered a few tips and tricks, and reviewed some useful advice on how to make the best use of Workflows. There are certainly others we’re forgetting about, so feel free to share them with @meteatamel and @glaforge on Twitter.

And don’t forget to double check what’s in the Workflows documentation. In particular, have a look at the built-in functions of the standard library, at the list of connectors that you can use, and perhaps even print the syntax cheat sheet.

Lastly, check out all the samples in the documentation portal, and all the workflow demos Mete and I have built and open sourced over time.

Retrieve YouTube views count with youtube-dl, jq, and a docker container

I wanted to track the number of views, and also likes, of some YouTube videos I was featured in. For example, when I present a talk at a conference, the video often becomes available at a later time, and I’m not the owner of the channel or video. At first, I wanted to use the YouTube Data API, but I had the impression that I could only see the stats of videos or channels I own. However, I think I might be wrong, and should probably revisit this approach later on.

My first intuition was to just scrape the web page of the video, but it’s a gobbledygook of JavaScript, and I couldn't really find an easy way to consistently get the numbers in that sea of compressed JavaScript. That’s when I remembered about the youtube-dl project. Some people think of this project as a way to download videos to watch offline, but it’s also a useful tool that offers lots of metadata about the videos. You can actually even use the project without downloading videos at all, but just fetching the metadata.

For example, if I want to get the video metadata, without downloading, I can launch the following command, after having installed the tool locally:

youtube-dl -j -s

The -s flag is equivalent to --simulate which doesn’t download anything on disk.

And the -j flag is the short version of --dump-json which returns a big JSON file with lots of metadata, including the view count, but also things like links to transcripts in various languages, chapters, creator, duration, episode number, and so on and so forth.

Now, I’m only interested in view counts, likes, dislikes. So I’m using jq to filter the big JSON payload, and create a resulting JSON document with just the fields I want.

jq '{"id","title":.title,"views":.view_count,"likes":(.like_count // 0), "dislikes":(.dislike_count // 0)}'

This long command is creating a JSON structure as follows:

{
    "id": "xJi6pldZnsw",
    "title": "Reuse old smartphones to monitor 3D prints, with WebRTC, WebSockets and Serverless by G. Laforge",
    "views": 172,
    "likes": 6,
    "dislikes": 0
}

The .id, .title, .view_count, etc., are searching for those particular keys in the big JSON document. The // 0 notation avoids null values, returning 0 if the key is missing or if the value associated with the key is null. So I always get a number, although I noticed that sometimes the likes are not properly accounted for, and I haven’t figured out why.
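The same filtering, including the // 0 fallback for missing or null counts, can be mimicked in Python over the metadata dictionary that youtube-dl returns; a sketch:

```python
def video_stats(meta):
    """Keep only the fields we care about, defaulting missing or null
    like/dislike counts to 0 (the equivalent of jq's `// 0`)."""
    return {
        "id": meta.get("id"),
        "title": meta.get("title"),
        "views": meta.get("view_count"),
        "likes": meta.get("like_count") or 0,
        "dislikes": meta.get("dislike_count") or 0,
    }

meta = {"id": "xJi6pldZnsw", "title": "Reuse old smartphones...",
        "view_count": 172, "like_count": None}
print(video_stats(meta))
# {'id': 'xJi6pldZnsw', 'title': 'Reuse old smartphones...', 'views': 172, 'likes': 0, 'dislikes': 0}
```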

So far so good… but if you pass a URL of a video with a playlist, or if you pass a playlist URL, it will fetch all the metadata for all the videos. This is actually useful: you can even create your own playlists for the videos you want to track. There’s one odd thing happening though when using youtube-dl with such URLs: it will output a JSON document per line for each video. It’s not returning an array of those documents. So I found a nice trick with jq to always put the results within an array, whether you pass a URL for a single video, or a video with a playlist:

jq -n '[inputs]'

So I’m piping the youtube-dl command into the first and second jq commands.

Rather than installing those tools locally, I decided to containerize my magic commands.

Let me first show you the whole Dockerfile:

FROM ubuntu:latest
RUN apt-get update && apt-get -y install wget \
    && wget -O /usr/local/bin/youtube-dl \
    && chmod a+rx /usr/local/bin/youtube-dl \
    && apt-get -y install python3-pip jq \
    && pip3 install --upgrade youtube-dl
COPY ./ /
RUN chmod +x /

And also this bash script mentioned in the Dockerfile:

youtube-dl -j -s -- "$@" | jq '{"id","title":.title,"views":.view_count,"likes":(.like_count // 0), "dislikes":(.dislike_count // 0)}' | jq -n '[inputs]'

I went with the latest ubuntu image. I run some apt-get commands to install wget to download the latest youtube-dl release, and Python 3’s pip to upgrade youtube-dl. There’s no recent apt package for youtube-dl, which is why those steps are combined.

What’s more interesting is why I don’t have the youtube-dl and jq commands in the Dockerfile directly, but in a dedicated bash script instead. Initially, I had an ENTRYPOINT that pointed at youtube-dl, so that arguments passed to the docker run command would be passed as arguments of that entrypoint. However, after those commands, I still have to pipe with my jq commands, and I couldn’t find how to do so with ENTRYPOINT and CMD. When I raised the problem on Twitter, my friends Guillaume Lours and Christophe Furmaniak pointed me in the right direction with the idea of passing through a script.

So I use the $@ bash shortcut, which expands as arguments $1 $2 $3, etc., in case several videos are passed as arguments, and the jq pipes come after that shortcut. The ENTRYPOINT receives the args directly, and it’s that intermediary script that weaves the args into my longer command.

Next, I just need to build my Docker container:

docker build . -t yt-video-stat

And then run it:

docker run --rm -it yt-video-stat ""

And voila, I have the stats for the YouTube videos I’m interested in!

© 2012 Guillaume Laforge | The views and opinions expressed here are mine and don't reflect the ones from my employer.