
Igalia and WebKit: status update and plans (2024)

Published / by mario

It’s been more than 2 years since the last time I wrote something here, and in that time a lot of things happened. Among those, one of the main highlights was me moving back to Igalia’s WebKit team, but this time I moved as part of Igalia’s support infrastructure to help with other types of tasks such as general coordination, team facilitation and project management, among other things.

On top of those things, I’ve also been presenting our work around WebKit in different venues, such as the Embedded Open Source Summit or the Embedded Recipes conference, for instance. Of course, that included presenting our work in the WebKit community as part of the WebKit Contributors Meeting, a small and technically focused event that happens every year, normally around the Bay Area (California). That’s often a pretty dense presentation where, over the course of 30-40 minutes, we go through all the main areas that we at Igalia contribute to in WebKit, trying to summarize our main contributions in the previous 12 months. This includes work not just from the WebKit team, but also from other ones such as our Web Platform, Compilers or Multimedia teams.

So far I have done that only a couple of times: last year on October 24th, and again this year, just a couple of weeks ago at the latest instance of the WebKit Contributors Meeting. I believe the session was interesting and informative, but unfortunately it does not get recorded, so this time I thought I’d write a blog post to make it more widely accessible to people not attending that event.

This is a long read, so maybe grab a cup of your favorite beverage first…

Igalia and WebKit

So first of all, what is the relationship between Igalia and the WebKit project?


In a nutshell, we are the lead developers and the maintainers of the two Linux-based WebKit ports, known as WebKitGTK and WPE. These ports share a common baseline (e.g. GLib, GStreamer, libsoup) and also some goals (e.g. performance, security), but other than that their purpose is different, with WebKitGTK being aimed at the Linux desktop, while WPE is mainly focused on embedded devices.


This means that, while WebKitGTK is the go-to solution to embed Web content in GTK applications (e.g. GNOME Web/Epiphany, Evolution) and therefore integrates well with that graphical toolkit, WPE does not provide a graphical toolkit at all: its main goal is to run well on embedded devices that often don’t have a lot of memory or processing power, or even the usual I/O mechanisms we are used to on desktop computers. This is why WPE is designed with flexibility in mind around a backends-based architecture, why it aims to use as few resources as possible, and why it tries to depend on as few libraries as possible, so that you can integrate it in virtually any kind of embedded Linux platform.

Besides that port-specific work, which is where our WebKit and Multimedia teams focus a lot of their effort, we also contribute at a different level to the port-agnostic parts of WebKit, mostly around Web standards (e.g. contributing to Web specifications and implementing them) and the JavaScript engine. This work is carried out by our Web Platform and Compilers teams, which tirelessly contribute to the different parts of WebCore and JavaScriptCore that affect not just the WebKitGTK and WPE ports, but also the rest of them to a greater or lesser degree.

Last but not least, we also devote a considerable amount of our time to other topics such as accessibility, performance, bug fixing, QA... and also to make sure WebKit works well on 32-bit devices, which is an important thing for a lot of WPE users out there.

Who are our users?

At Igalia we distinguish 4 main types of users of the WebKitGTK and WPE ports of WebKit:

Port users: this category would include anyone that writes a product directly against the port’s API, that is, apps such as a desktop Web browser or embedded systems that rely on a fullscreen Web view to render their Web-based content (e.g. digital signage systems).

Platform providers: in this category we would have developers that build frameworks with one of the Linux ports at its core, so that people relying on such frameworks can leverage the power of the Web without having to directly interface with the port’s API. RDK could be a good example of this use case, with WPE at the core of the so-called Thunder plugin (previously known as WPEFramework).

Web developers: of course, Web developers willing to develop and test their applications against our ports need to be considered here too, as they come with their own set of needs beyond just rendering their Web content (e.g. using the Web Inspector).

End users: and finally, the end user is the last piece of the puzzle we need to pay attention to, as that’s what makes all this effort a task worth undertaking, even if most of them most likely don’t even know what WebKit is, which is perfectly fine :-)

We like to make this distinction of 4 possible types of users explicit because we think it’s important to understand the variety of use cases and the diversity of potential users and customers we need to serve, since that is what drives our decisions and the way we prioritize our work.

Strategic goals

Our main goal is for our product, the WebKit web engine, to be useful for more and more people in different situations. Because of this, it is important that the Web platform is homogeneous and can be used reliably with all the engines available nowadays, which is why compatibility and interoperability are a must, and why we work with the standards bodies to help with the design and implementation of several Web specifications.

With WPE, it is very important to be able to run the engine on small embedded devices, which requires good performance and efficiency on multiple hardware architectures, as well as great flexibility to adapt to specific hardware; this is why we gave WPE a backend-based architecture and reduced its dependencies to a minimum.

Then, it is also important that the QA infrastructure is good enough to keep releases working with good quality, which is why I regularly maintain, evolve and keep an eye on the EWS and post-commit bots that keep WebKitGTK and WPE building, running and passing the tens of thousands of tests that we check continuously, to ensure we don’t regress (or that we catch issues soon enough when there’s a problem). Of course, it’s also important to keep doing security releases, making sure that we publish stable versions with fixes for the different reported CVEs as soon as possible.

Finally, we also make sure that we keep evolving our tooling as much as possible (see for instance the release of the new SDK earlier this year), as well as improving the documentation for both ports.

Last, all this effort would not be possible if we didn’t also consider it one of our goals to maintain an efficient collaboration with the rest of the WebKit community in different ways, from making sure we re-use and contribute as much code as possible to other ports, to making sure we communicate well in all the available forums (e.g. Slack, mailing list, the annual meeting).

Contributions to WebKit in numbers

Well, first of all the usual disclaimer: number of commits is for sure not the best possible metric, and therefore should be taken with a grain of salt. However, the point here is not to focus too much on the actual numbers but on the more general conclusions that can be extracted from them, and from that point of view I believe it’s interesting to take a look at this data at least once a year.

Igalia contributions to WebKit (2024)

With that out of the way, it’s interesting to confirm that once again we are still the 2nd biggest contributor to WebKit after Apple, with ~13% of the commits landed in this past 12-month period. More specifically, we landed 2027 patches out of the 15617 ones that took place during the past year, only surpassed by Apple and their 12456 commits. The remaining 1134 patches were landed mostly by Sony, followed by RedHat and several other contributors.

Now, if we remove Apple from the picture, we can observe how this year our contributions represented ~64% of all the non-Apple commits, a figure that grew about ~11% compared to the previous year. This confirms once again our commitment to WebKit, a project we started contributing to about 14 years ago, and where we have systematically been the 2nd top contributor for a while now.

Main areas of work

The 10 main areas we have contributed to in WebKit in the past 12 months are the following:

  • Web platform
  • Graphics
  • Multimedia
  • JavaScriptCore
  • New WPE API
  • WebKit on Android
  • Quality assurance
  • Security
  • Tooling
  • Documentation

In the next sections I’ll talk a bit about what we’ve done and what we’re planning to do next for each of them.

Web Platform

content-visibility: auto

This feature allows skipping the painting and rendering of off-screen sections, which is particularly useful to avoid the browser spending time rendering parts of large pages that are not visible, as content outside of the viewport doesn’t get rendered until it becomes visible.

We completed the implementation and it’s now enabled by default.
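To give an idea of how this is used from a page (just a minimal sketch; the .post selector and the size hint are made up for the example), it’s enough to set the property on the sections that may fall off-screen:

```ts
// Minimal sketch: let the engine skip rendering work for off-screen sections.
// The ".post" selector and the size hint are assumptions for this example.
document.querySelectorAll<HTMLElement>('.post').forEach((section) => {
  section.style.setProperty('content-visibility', 'auto');
  // Reserve an estimated size so the scrollbar stays stable while the
  // skipped content has no rendered dimensions yet.
  section.style.setProperty('contain-intrinsic-size', 'auto 500px');
});
```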

Navigation API

This is a new API to manage browser navigation actions and examine history, which we started working on in the past cycle. There’s been a lot of work happening here and, while it’s not finished yet, the current plan is that Apple will continue working on that in the next months.
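To give a rough idea of what the API enables from a page’s perspective, here is a minimal sketch based on the specification (the navigation global may not be available everywhere yet, and the /articles/ route is an assumption for the example) that intercepts same-origin navigations and handles them in place:

```ts
// Sketch: intercept same-origin navigations with the Navigation API and
// handle them as single-page-app style updates.
const navigation = (window as any).navigation; // may not be in TS DOM typings yet

navigation?.addEventListener('navigate', (event: any) => {
  // Only intercept navigations the page can actually handle itself.
  if (!event.canIntercept || event.hashChange || event.downloadRequest) {
    return;
  }
  const url = new URL(event.destination.url);
  if (url.pathname.startsWith('/articles/')) {
    event.intercept({
      // The navigation stays "ongoing" until this promise settles.
      async handler() {
        const html = await (await fetch(url.pathname)).text();
        document.querySelector('main')!.innerHTML = html;
      },
    });
  }
});
```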

hasUAVisualTransition

This is an attribute of the NavigateEvent interface, which is meant to be true if the User Agent has performed a visual transition before a navigation event. This is something we have also finished implementing, and it is now enabled by default as well.
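In practice this lets a page avoid playing its own transition on top of the one the browser already showed; a small sketch (the CSS class name is made up for the example):

```ts
// Sketch: if the UA already performed a visual transition (e.g. a swipe-back
// gesture preview), tell the page's CSS to skip its own custom animation.
(window as any).navigation?.addEventListener('navigate', (event: any) => {
  document.documentElement.classList.toggle(
    'skip-custom-transition',              // hypothetical class consumed by CSS
    event.hasUAVisualTransition === true,
  );
});
```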

Secure Curves in the Web Cryptography API

In this case, we worked on fixing several interoperability issues, as well as on increasing test coverage within the Web Platform Tests (WPT) test suites.

On top of that we also moved the X25519 feature to the “prepare to ship” stage.
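In other words, X25519 key agreement becomes reachable through the standard SubtleCrypto interface; a minimal sketch of what that looks like (run inside a module so top-level await works, with availability depending on the browser and flags):

```ts
// Sketch: Diffie-Hellman style key agreement with X25519 via Web Crypto.
// Each side derives the same shared secret from its private key and the
// peer's public key.
const alice = (await crypto.subtle.generateKey(
  { name: 'X25519' }, /* extractable */ false, ['deriveBits'],
)) as CryptoKeyPair;
const bob = (await crypto.subtle.generateKey(
  { name: 'X25519' }, false, ['deriveBits'],
)) as CryptoKeyPair;

// Both calls should produce the same 32 bytes (256 bits) of shared secret.
const aliceShared = await crypto.subtle.deriveBits(
  { name: 'X25519', public: bob.publicKey }, alice.privateKey, 256,
);
const bobShared = await crypto.subtle.deriveBits(
  { name: 'X25519', public: alice.publicKey }, bob.privateKey, 256,
);
```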

Trusted Types

This work is related to reducing DOM-based XSS attacks. Here we finished the implementation, and it is now pending to be enabled by default.
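As a reminder of what the feature does: once a page opts in via CSP (require-trusted-types-for 'script'), risky sinks such as innerHTML stop accepting plain strings and only take values produced by a policy. A rough sketch (the policy name and the trivial “sanitizer” are made up for the example):

```ts
// Sketch: create a Trusted Types policy and use it to feed innerHTML.
// With "require-trusted-types-for 'script'" in the CSP, assigning a raw
// string to innerHTML would throw instead.
const trustedTypes = (window as any).trustedTypes; // not yet in all TS typings

const policy = trustedTypes?.createPolicy('blog-sanitizer', {
  // A real policy would run a proper sanitizer; escaping is enough here.
  createHTML: (input: string) =>
    input.replace(/</g, '&lt;').replace(/>/g, '&gt;'),
});

const untrusted = location.hash.slice(1); // attacker-influenced input
const target = document.querySelector('#comments')!;
target.innerHTML = policy ? policy.createHTML(untrusted) : untrusted;
```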

MathML

We continued working on the MathML specification by adding support for padding, border and margin, as well as by increasing the WPT score by ~5%.

The plan for next year is to continue working on core features and improve the interaction with CSS.

Cross-root ARIA

Web components have accessibility-related issues with native Shadow DOM, as you cannot reference elements with ARIA attributes across shadow root boundaries. We haven’t worked on this during this period, but the plan is to work in the next months on implementing the Reference Target proposal to solve those issues.

Canvas Formatted Text

Canvas does not have a solution for adding formatted and multi-line text, so we would also like to work on exploring and prototyping the Canvas Place Element proposal in WebKit, which allows better text in canvas and more extended features.

Graphics

Completed migration from Cairo to Skia for the Linux ports

If you have followed the latest developments, you probably already know that the Linux WebKit ports (i.e. WebKitGTK and WPE) have moved from Cairo to Skia for their 2D rendering library, which was a pretty big and important decision taken after a long time trying different approaches and experiments (including developing our own HW-accelerated 2D rendering library!), as well as running several tests and measuring results in different benchmarks.

The results in the end were pretty overwhelming and we decided to give Skia a go, and we are happy to say that, as of today, the migration is complete: we covered all the use cases we had with Cairo, achieving feature parity, and we are now working on implementing new features and improvements built on top of Skia (e.g. GPU-based 2D rendering).

On top of that, Skia has been the default backend for WebKitGTK and WPE since 2.46.0, released on September 17th, so if you’re building a recent version of those ports you’ll already be using Skia as their 2D rendering backend. Note that Skia uses its GPU-based backend only on desktop environments; on embedded devices the situation is trickier, and for now the default is the CPU-based Skia backend, but we are actively working to narrow the gap and to enable GPU-based rendering on embedded as well.

Architecture changes with buffer sharing APIs (DMABuf)

We did a lot of work here, such as a big refactoring of the fencing system to control the access to the buffers, or the continued work towards integrating with Apple’s DisplayLink infrastructure.

On top of that, we also enabled more efficient composition using damage information, so that the compositor only needs to process the regions of the buffer that actually changed instead of the full buffer, which reduces CPU usage.

Enablement of the GPUProcess

On this front, we enabled by default the compilation for WebGL rendering using the GPU process, and we are currently working on performance review and on enabling it for other types of rendering.

New SVG engine (LBSE: Layer-Based SVG Engine)

If you are not familiar with this, the idea here is to reuse the graphics pipeline used for HTML and CSS rendering for SVG as well, instead of SVG having its own separate pipeline. This means, among other things, that SVG layers will be supported as first-class citizens in the engine, enabling HW-accelerated animations, as well as support for 3D transformations on individual SVG elements.


On this front, during this cycle we added support for the missing features in the LBSE, namely:

  • Implemented support for gradients & patterns (applicable to both fill and stroke)
  • Implemented support for clipping & masking (for all shapes/text)
  • Implemented support for markers
  • Helped review implementation of SVG filters (done by Apple)

Besides all this, we also improved the performance of the new layer-based engine by reducing repaints and re-layouts as much as possible (further optimizations are still possible), narrowing the performance gap with the current engine in MotionMark. While we are still not at the same level of performance as the current SVG engine, we are confident that there are several key places where, with the right funding, we should be able to improve performance to at least match the current engine, and therefore push the new engine through the finish line.

General overhaul of the graphics pipeline, touching different areas (WIP):

On top of everything else commented above, we also worked on a general refactor and simplification of the graphics pipeline. For instance, we have been working on the removal of the Nicosia layer now that we are not planning to have multiple rendering implementations, among other things.

Multimedia

DMABuf-based sink for HW-accelerated video

We merged the DMABuf-based sink for HW-accelerated video into the GL-based GStreamer sink.

WebCodecs backend

We completed the implementation of audio/video encoding and decoding, and this is now enabled by default in 2.46. As for the next steps, we plan to keep working on the integration of WebCodecs with WebGL and WebAudio.
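To illustrate what this unlocks for Web applications, here is a small sketch (the codec string, resolution and bitrate are just example values; run in a module so top-level await works) that encodes frames grabbed from a canvas without going through a video element:

```ts
// Sketch: encode canvas frames with the WebCodecs VideoEncoder.
const encoder = new VideoEncoder({
  output: (chunk, metadata) => {
    // Each EncodedVideoChunk could now be muxed or sent over the network.
    console.log('encoded', chunk.type, chunk.byteLength, metadata);
  },
  error: (e) => console.error('encoding error', e),
});
encoder.configure({ codec: 'vp8', width: 640, height: 480, bitrate: 1_000_000 });

const canvas = document.querySelector('canvas')!;
const frame = new VideoFrame(canvas, { timestamp: 0 /* microseconds */ });
encoder.encode(frame, { keyFrame: true });
frame.close();

await encoder.flush(); // wait for all pending outputs
encoder.close();
```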

GStreamer-based WebRTC backends

We continued working on GstWebRTC, bringing it to a point where it can be used in production in some specific use cases, and we will still be working on this in the next months.

Other

Besides the points above, we also added an optional text-to-speech backend based on libspiel to the development branch, and worked on general maintenance around the support for Media Source Extensions (MSE) and Encrypted Media Extensions (EME), which are crucial for the use case of WPE running on set-top boxes; this is a permanent task we will continue to work on in the next months.

JavaScriptCore

ARMv7/32-bit support:

A lot of work happened around 32-bit support in JavaScriptCore, especially around WebAssembly (WASM): we ported the WASM BBQJIT and ported/enabled concurrent JIT support, and we also completed 80% of the implementation of the OMG optimization level of WASM, which we plan to finish in the next months. If you are unfamiliar with what the OMG and BBQ optimization tiers in WASM are, I’d recommend taking a look at this article on webkit.org: Assembling WebAssembly.

We also contributed to JIT-less WASM, which is very useful for embedded systems that can’t support JIT due to security or memory-related constraints, and also did some work on the In-Place Interpreter (IPInt), which is a new version of the WASM low-level interpreter (LLInt) that uses less memory and executes WASM bytecode directly without translating it to LLInt bytecode (and should therefore be faster to execute).

Last, we also contributed most of the implementation for the WASM GC, with the exception of some Kotlin tests.

As for the next few months, we plan to investigate and optimize heap/JIT memory usage in 32-bit, as well as to finish several other improvements on ARMv7 (e.g. IPInt).

New WPE API

The new WPE API aims at making it easier to use WPE on embedded devices, by removing the hassle of having to handle several libraries in tandem (i.e. WPEWebKit, libWPE and WPEBackend-FDO, for instance, all available from WPE’s releases page), and by providing a more modern API in general, better aimed at the most common use cases of WPE.

A lot of effort went into this during the year, including the fact that we finally upstreamed and shipped its initial implementation with WPE 2.44, back in the first half of the year. Now, while we recommend users give it a try and report as much feedback as possible, this new API is still not set in stone, with regular development still ongoing, so if you have the chance to try it out and share your experience, comments are welcome!

Besides shipping its initial implementation, we also added support for external platforms, so that other platforms can be loaded beyond the Wayland, DRM and “headless” ones, which are the default platforms already included with WPE itself. This means, for instance, that a GTK4 platform, or another one for RDK, could easily be used with WPE.

Then of course a lot of API additions were included in the new API in the latest months:

  • Screens management API: API to handle different screens and ask the display for the list of screens, with their device scale factor, refresh rate, geometry…
  • Top level management API: This API allows a greater degree of control, for instance by allowing more than one WebView for the same top level, as well as allowing to retrieve properties such as size, scale or state (i.e. full screen, maximized…).
  • Maximized and minimized windows API: API to maximize/minimize a top level and monitor its state, mainly used by WebDriver.
  • Preferred DMA-BUF formats API: enables asking the platform (compositor or DRM) for the list of preferred formats and their intended use (scanout/rendering).
  • Input methods API: allows platforms to provide an implementation to handle input events (e.g. virtual keyboard, autocompletion, auto correction…).
  • Gestures API: API to handle gestures (e.g. tap, drag).
  • Buffer damaging: WebKit generates information about the areas of the buffer that actually changed and we pass that to DRM or the compositor to optimize painting.
  • Pointer lock API: allows the WebView to lock the pointer so that the movement of the pointing device (e.g. mouse) can be used for a different purpose (e.g. first-person shooters). See the small web-facing sketch right after this list.
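For context, the web-facing side of that last item is the standard Pointer Lock API; a minimal sketch of how a page typically uses it (the rotateCamera helper is hypothetical):

```ts
// Sketch: standard Pointer Lock API usage. While the pointer is locked,
// pages read relative movement deltas instead of absolute coordinates.
const canvas = document.querySelector('canvas')!;

canvas.addEventListener('click', () => {
  canvas.requestPointerLock();
});

document.addEventListener('mousemove', (event) => {
  if (document.pointerLockElement === canvas) {
    rotateCamera(event.movementX, event.movementY); // hypothetical helper
  }
});

function rotateCamera(dx: number, dy: number): void { /* e.g. update a 3D view */ }
```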

Last, we also added support for testing automation, and the new API now supports WebDriver.

With all this done so far, the plan now is to complete the new WPE API, with a focus on the Settings API and accessibility support, write API tests and documentation, and then also add an external platform to support GTK4. This is done on a best-effort basis, so there’s no specific release date.

WebKit on Android

This year was also a good year for WebKit on Android, also known as WPE Android, a project that sits on top of WPE and its public API (instead of being a fully-fledged WebKit port of its own).

In case you’re not familiar with this, the idea here is to provide a WebKit-based alternative to the Chromium-based Web view on Android devices, in a way that leverages HW acceleration when possible and that integrates natively (and nicely) with the several Android subsystems, and of course with Android’s native mainloop. Note that this is an experimental project for now, so don’t expect production-ready quality quite yet, but hopefully something that can be used to start experimenting with selected use cases.

If you’re adventurous enough, you can already try the APKs yourself from the releases page in GitHub at https://github.com/Igalia/wpe-android/releases.

Anyway, as for the changes that happened in the past 12 months, here is a summary:

  • Updated WPE Android to WPE 2.46 and NDK 27 LTS
  • Added support for WebDriver and included WPT test suites
  • Added support for instrumentation tests, and integrated with the GitHub CI
  • Added support for the remote Web inspector, very useful for debugging
  • Enabled the Skia backend, bringing HW-accelerated 2D rendering to WebKit on Android
  • Implemented prompt delegates, allowing implementing things such as alert dialogs
  • Implemented WPEView client interfaces, allowing responding to things such as HTTP errors
  • Packaged a WPE-based Android WebView in its own library and published it to Maven Central. This is a massive improvement, as apps can now use WPE Android by simply referencing the library from their Gradle files, with no need to build everything on their own.
  • Other changes: enabled HTTP/2 support (via the migration to libsoup3), added support for the device scale factor, improved the virtual on-screen keyboard, general bug fixing…

On top of that, we published 3 different blog posts covering different topics, from a general intro to a more deep-dive explanation of the internals, and showing some demos. You can check them out on Jani’s blog at https://blogs.igalia.com/jani.

As for the future, we’ll focus on stabilization and regular maintenance for now, and then we’d like to work towards achieving production-ready quality for specific cases if possible.

Quality Assurance

On the QA front we had a busy year, but in general we could highlight the following topics:

  • Fixed a lot of API test failures in the bots that were limiting our test coverage.
  • Fixed lots of assertion-related crashes in the bots, which were slowing the bots down as well as causing other types of issues, such as bots exiting early due to too many failures.
  • Enabled assertions in the release bots, which will help prevent crashes in the future, as well as help keep our debug bots healthier.
  • Moved all the WebKitGTK and WPE bots to building now with Skia instead of Cairo. This means that all the bots running tests are now using Skia, and there’s only one bot still using Cairo to make sure that the compilation is not broken, but that bot does not run tests.
  • Moved all the WebKitGTK bots to use GTK4 by default. As with the move to Skia, all the WebKit bots running tests now use GTK4, and the only one still building with GTK3 does not run tests; it only makes sure we don’t break the GTK3 compilation for now.
  • Working on moving all the bots to use the new SDK. This is still work in progress and will likely be completed during 2025, as it requires several changes in the infrastructure that will take some time to implement.
  • General gardening and bot maintenance

In the next months, our main focus would be a revamp of the QA infrastructure to make sure that we can get all the bots (including the debug ones) to a healthier state, finish the migration of all the bots to the new SDK and, ideally, be able to bring back the ready-to-use WPE images that we used to have available in wpewebkit.org.

Security

The current release cadence has been working well, so we continue issuing major releases every 6 months (March, September), and then minor and unstable development releases happening on-demand when needed.

As usual, we kept aligning releases for WebKitGTK and WPE, with both of them happening at the same time (see https://webkitgtk.org/releases and https://wpewebkit.org/release), and then also publishing WebKit Security Advisories (WSA) when necessary, both for WebKitGTK and for WPE.

Last, we also shortened the time before including security fixes in stable releases this year, and we have removed support for libsoup2 from WPE, as that library is no longer maintained.

Tooling & Documentation

On tooling, the main piece of news is that this year we released the initial version of the new SDK, which is developed on top of OCI-based containers. This new SDK fixes the issues with the existing approaches based on JHBuild and flatpak, where one of them was great for development but poor for testing and QA, and the other one was great for testing and QA but not very convenient for development.

This new SDK is regularly maintained and currently runs on Ubuntu 24.04 LTS with GCC 14 & Clang 18. It has been made public on GitHub and announced to the public in May 2024 in Patrick’s blog, and is now the officially recommended way of building WebKitGTK and WPE.

As for documentation, we didn’t do as much as we would have liked here, but we still landed a few contributions to docs.webkit.org, mostly related to WebKitGTK (e.g. Releases and Versioning, Security Updates, Multimedia). We plan to do more in this regard in the next months, though, mostly by writing/publishing more documentation and perhaps also some tutorials.

Final thoughts

This has been a fairly long blog post but, as you can see, it’s been quite a year for WebKit here at Igalia, with many exciting changes happening on several fronts, and so there was quite a lot to comment on here. That said, you can always check the slides of the presentation from the WebKit Contributors Meeting here if you prefer a more concise version of the same content.

In any case, what’s clear is that the next months are probably going to be quite interesting as well, with all the work that’s already going on in WebKit and its Linux ports, so it’s possible that 12 months from now I might be writing an equally long essay. We’ll see.

Thanks for reading!

Chromium now migrated to the new C++ Mojo types

Published / by mario

At the end of last year I wrote a long blog post summarizing the main work I was involved with as part of Igalia’s Chromium team. In it I mentioned that a big chunk of my time was spent working on the migration to the new C++ Mojo types across the entire codebase of Chromium, in the context of the Onion Soup 2.0 project.

For those of you who don’t know what Mojo is about, there is extensive information about it in Chromium’s documentation, but for the sake of this post, let’s simplify things and say that Mojo is a modern replacement to Chromium’s legacy IPC APIs which enables a better, simpler and more direct way of communication among all of Chromium’s different processes.

One interesting thing about this conversion is that, even though Mojo was already “the new thing” compared to Chromium’s legacy IPC APIs, the original Mojo API presented a few problems that could only be fixed with a newer API. This is the main reason that motivated this migration, since the new Mojo API fixed those issues by providing less confusing and less error-prone types, as well as additional checks that would force your code to be safer than before, and all this done in a binary compatible way. Please check out the Mojo Bindings Conversion Cheatsheet for more details on what exactly those conversions would be about.

Another interesting aspect of this conversion is that, unfortunately, it wouldn’t be as easy as running a “search & replace” operation, since in most cases deeper changes were needed to make sure that the migration would break neither existing tests nor production code. This is the reason why we often had to write bigger refactorings than one would have anticipated for some of those migrations, or why some patches took a bit longer to land, as they would span too many directories, making the merging process extra challenging.

Now combine all this with the fact that we were confronted with about 5000 instances of the old types in the Chromium codebase when we started, spanning nearly every single subdirectory of the project, and you’ll probably understand why this was a massive feat that would take quite some time to tackle.

Turns out, though, that after just 6 months since we started working on this, and with more than 1100 patches landed upstream, our team managed to have nearly all the existing uses of the old APIs migrated to the new ones, reaching a point where, by the end of December 2019, we had completed 99.21% of the entire migration! That is, we basically had almost everything migrated back then, and the only part we were missing was the migration of //components/arc, as I had already announced in this blog back in December and in the chromium-mojo mailing list.


Progress of migrations to the new Mojo syntax by December 2019

This was good news indeed. But the fact that we didn’t manage to reach 100% was still a bit of a pain point because, as Kentaro Hara mentioned in the chromium-mojo mailing list yesterday, “finishing 100% is very important because refactoring projects that started but didn’t finish leave a lot of tech debt in the code base”. And surely we didn’t want to leave the project unfinished, so we kept collaborating with the Chromium community in order to finish the job.

The main problem with //components/arc was that, as explained in the bug where we tracked that particular subtask, we couldn’t migrate it yet because the external libchrome repository was still relying on the old types! Thus, even though almost nothing else in Chromium was using them at that point, migrating those .mojom files under //components/arc to the new types would basically break libchrome, which wouldn’t have a recent enough version of Mojo to understand them (and no, according to the people collaborating with us on this effort at that particular moment, getting Mojo updated to a new version in libchrome was not really a possibility).

So, in order to fix this situation, we collaborated closely with the people maintaining the libchrome repository (which is external to Chromium’s repository and still relies on the old Mojo types) to get the remaining migration, inside //components/arc, unblocked. And after a few months of doing some small changes here and there to provide the libchrome folks with the tools they needed to proceed with the migration, they could finally integrate the necessary changes that would ultimately allow us to complete the task.

Once this important piece of the puzzle was in place, all that was left was for my colleague Abhijeet to land the CL that would migrate most of //components/arc to the new types (a CL which had been put on hold for about 6 months!), and then to land a few more CLs on top to make sure we got rid of any trace of the old types that might still be in the codebase (special kudos to my colleague Gyuyoung, who wrote most of those final CLs).


Progress of migrations to the new Mojo syntax by July 2020

After all this effort, which would sit on top of all the amazing work that my team had already done in the second half of 2019, we finally reached the point where we are today, when we can proudly and loudly announce that the migration of the old C++ Mojo types to the new ones is finally complete! Please feel free to check out the details on the spreadsheet tracking this effort.

So please join me in celebrating this important milestone for the Chromium project and enjoy the new codebase free of the old Mojo types. It’s been difficult but it definitely pays off to see it completed, something which wouldn’t have been possible without all the people who contributed along the way with comments, patches, reviews and any other type of feedback. Thank you all! 👌 🍻

Last, while the main topic of this post is to celebrate the unblocking of these last migrations we had left since December 2019, I’d like to finish by acknowledging the work of all my colleagues from Igalia who worked along with me on this task since we started, one year ago. That is: Abhijeet, Antonio, Gyuyoung, Henrique, Julie and Shin.

Now if you’ll excuse me, we need to get back to working on the Onion Soup 2.0 project because we’re not done yet: at the moment we’re mostly focused on converting remote calls using Chromium’s legacy IPC to Mojo (see the status report by Dave Tapuska) and helping finish Onion Soup’ing the remaining directories under //content/renderer (see the status report by Kentaro Hara), so there’s no time to waste. But those migrations will be material for another post, of course.

The Web Platform Tests project

Published / by mario

Web Browsers and Test Driven Development

Working on Web browser development is not an easy feat, but if there’s something I’m personally very grateful for when it comes to collaborating with these kinds of software projects, it is their testing infrastructure and the peace of mind it provides me with when making changes on a daily basis.

To help you understand the size of these projects, they involve millions of lines of code (Chromium is ~25 million lines of code, followed closely by Firefox and WebKit) and around 200-300 new patches landing every day. Try to imagine, for one second, how we could make changes if we didn’t have such testing infrastructure. It would basically be utter and complete chaos and, more importantly, it would mean extremely buggy Web browsers, broken implementations of the Web Platform and tens (hundreds?) of new bugs and crashes piling up every day… not a good thing at all for Web browsers, which are these days some of the most widely used applications (and not just ‘the thing you use to browse the Web’).

The Chromium Trybots in action

Now, there are all different types of tests that Web engines run automatically on a regular basis: unit tests for checking that APIs work as expected, platform-specific tests to make sure that your software runs correctly in different environments, performance tests to help browsers stay fast without increasing their memory footprint too much… and then, of course, there are the tests to make sure that the Web engines at the core of these projects implement the Web Platform correctly according to the numerous standards and specifications available.

And it’s here where I would like to bring your attention with this post because, when it comes to this last kind of tests (what we call “Web tests” or “layout tests”), each Web engine used to rely entirely on its own set of Web tests to make sure that it implemented the many different specifications correctly.

Clearly, there was some room for improvement here. It would be wonderful if we could have an engine-independent set of tests to check that a given implementation of the Web Platform works as expected, wouldn’t it? We could use it across different engines to make sure not only that they work as expected, but also that they behave in exactly the same way, and therefore give Web developers confidence that they can rely on the different specifications without having to implement engine-specific quirks.

Enter the Web Platform Tests project

The good news is that just such an ideal thing exists. It’s called the Web Platform Tests project. As it is concisely described on its official site:

“The web-platform-tests project is a cross-browser test suite for the Web-platform stack. Writing tests in a way that allows them to be run in all browsers gives browser projects confidence that they are shipping software which is compatible with other implementations, and that later implementations will be compatible with their implementations.”

I’d recommend visiting its website if you’re interested in the topic, watching the “Introduction to the web-platform-tests” video or even glancing at the git repository containing all the tests, where you can also find specific information such as how to run WPTs or how to write them. Also, you can have a look at the wpt.fyi dashboard to get a sense of what tests exist and how some of the main browsers are doing.
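To give an idea of what these tests look like, most of them are small pages built on the testharness.js library; the following is a hypothetical minimal example (the declare lines are only there so the snippet type-checks on its own, since in a real test those globals come from including /resources/testharness.js):

```ts
// Sketch of a testharness.js-style test, as used throughout WPT.
declare function test(fn: () => void, name: string): void;
declare function promise_test(fn: () => Promise<void>, name: string): void;
declare function assert_true(condition: boolean, description?: string): void;
declare function assert_equals(actual: unknown, expected: unknown, description?: string): void;

test(() => {
  const el = document.createElement('progress');
  assert_true(el instanceof HTMLProgressElement,
              'progress elements should use the HTMLProgressElement interface');
}, 'progress element is supported');

promise_test(async () => {
  const response = await fetch('/common/blank.html'); // helper file in the WPT repo
  assert_equals(response.status, 200, 'same-origin fetch should succeed');
}, 'basic fetch smoke test');
```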

In short: I think it would be safe to say that this project is critical to the health of the whole Web Platform, and ultimately to Web developers. What’s very, very surprising is how long it took to get to where it is, since it came into being only about halfway into the history of the Web (there were earlier testing efforts at the W3C, but none that focused on automated & shared testing). But regardless of that, this is an interesting challenge: Filling in all of the missing unified tests, while new things are being added all the time!

Luckily, this was a challenge that did indeed take off, and all the major Web engines can now proudly say that they regularly run about 36500 of these Web engine-independent tests (providing ~1.7 million sub-tests in total), with all the engines showing a pass rate between 91% and 98%. See the numbers below, as extracted from today’s WPT data:

Browser              Pass       Total      Pass rate
Chrome 84            1680105    1714711    97.98%
Edge 84              1669977    1714195    97.42%
Firefox 78           1640985    1698418    96.62%
Safari 105 preview   1543625    1695743    91.03%

And here at Igalia, we’ve recently had the opportunity to work on this for a little while and so I’d like to write a bit about that…

Upstreaming Chromium’s tests during the Coronavirus Outbreak

As you all know, we’re in the middle of an unprecedented world-wide crisis that is affecting everyone in one way or another. One particular consequence of it in the context of the Chromium project is that Chromium releases were paused for a while. On top of this, some constraints on what could be landed upstream were put in place to guarantee quality and stability of the Chromium platform during this strange period we’re going through these days.

These particular constraints impacted my team in that we couldn’t really keep working, in the context of the Chromium project, on the tasks we had been working on up to that point. Our involvement with the Blink Onion Soup 2.0 project usually requires landing relatively large refactors, and those kinds of changes were forbidden for the time being.

Fortunately, we found an opportunity to collaborate in the meantime with the Web Platform Tests project by analyzing and trying to upstream many of the existing Chromium-specific tests that had not yet been unified. This is important because tests exist for widely used specifications, but if they aren’t in Web Platform Tests, their utility and benefits are limited to Chromium. If done well, this would mean that all of the tests we managed to upstream would be immediately available for everyone else too. Firefox and WebKit-based browsers would not only be able to identify missing features and bugs, but would also be provided with an extra set of tests to check that they were implementing these features correctly, and interoperably.

The WPT Dashboard

It was an interesting challenge considering that we had to switch very quickly from writing C++ code around the IPC layers of Chromium to analyzing, migrating and upstreaming Web tests from the huge pool of Chromium tests. We focused mainly on CSS Grid Layout, Flexbox, Masking and Filters related tests… but I think the results were quite good in the end:

As of today, I’m happy to report that, during the ~4 weeks we worked on this, my team migrated 240 Chromium-specific Web tests to the Web Platform Tests’ upstream repository, helping increase test coverage in other Web engines and thus helping improve interoperability among browsers:

  • CSS Flexbox: 89 tests migrated
  • CSS Filters: 44 tests migrated
  • CSS Masking: 13 tests migrated
  • CSS Grid Layout: 94 tests migrated

But there is more to this than just numbers. Ultimately, as I said before, these migrations should help identify missing features and bugs in other Web engines, and that was precisely the case here. You can easily see this by checking the list of automatically created bugs in Firefox’s bugzilla, as well as some of the bugs filed in WebKit’s bugzilla during the time we worked on this.

…and note that this doesn’t even include the additional 96 Chromium-specific tests that we analyzed but determined were not yet eligible for migrating to WPT (normally because they relied on some internal Chromium API or non-standard behaviour), which would require further work to get them upstreamed. But that was a bit out of scope for those few weeks we could work on this, so we decided to focus on upstreaming the rest of the tests instead.

Personally, I think this was a big win for the Web Platform and I’m very proud and happy to have had an opportunity to contribute to it during these dark times we’re living through, as part of my job at Igalia. Now I’m back to working on the Blink Onion Soup 2.0 project, which I think I should write about too, but that’s a topic for a different blog post.

Credit where credit is due

I wouldn’t want to finish off this blog post without acknowledging all the different contributors who tirelessly worked on this effort to help improve the Web Platform by providing the WPT project with this many more tests, so here it is:

From the Igalia side, my whole team was the one which took on this challenge, that is: Abhijeet, Antonio, Gyuyoung, Henrique, Julie, Shin and myself. Kudos everyone!

And from the reviewing side, many people chimed in but I’d like to thank in particular the following persons, who were deeply involved with the whole effort from beginning to end regardless of their affiliation: Christian Biesinger, David Grogan, Robert Ma, Stephen Chenney, Fredrik Söderquist, Manuel Rego Casasnovas and Javier Fernandez. Many thanks to all of you!

Take care and stay safe!

End of the year Update: 2019 edition

Published / by mario

It’s the end of December and it seems that yet another year has gone by, so I figured that I’d write an EOY update to summarize my main work at Igalia as part of our Chromium team, as my humble attempt to make up for the lack of posts in this blog during this year.

I did quite a few things this year, but for the purpose of this blog post I’ll focus on what I consider the most relevant ones: work on the Servicification and the Blink Onion Soup projects, the migration to the new Mojo APIs and the BrowserInterfaceBroker, as well as a summary of the conferences I attended, both as a regular attendee and as a speaker.

But enough of an introduction, let’s dive now into the gory details…

Servicification: migration to the Identity service

As explained in my previous post from January, I started this year working on the Chromium Servicification (s13n) Project. More specifically, I joined my team mates in helping with the migration to the Identity service by updating consumers of several classes from the sign-in component to ensure they now use the new IdentityManager API instead of directly accessing other, lower-level APIs.

This was important because at some point the Identity service will run in a separate process, and a precondition for that to happen is that all access to sign-in related functionality goes through the IdentityManager, so that other processes can communicate with it directly via the Mojo interfaces exposed by the Identity service.

I’ve already talked long enough in my previous post, so please take a look in there if you want to know more details on what that work was exactly about.

The Blink Onion Soup project

Interestingly enough, a bit after finishing up working on the Identity service, our team dived deep into helping with another Chromium project that shared at least one of the goals of the s13n project: to improve the health of Chromium’s massive codebase. The project is code-named Blink Onion Soup and its main goal is, as described in the original design document from 2015, to “simplify the codebase, empower developers to implement features that run faster, and remove hurdles for developers interfacing with the rest of the Chromium”. There’s also a nice slide deck from 2016’s BlinkOn 6 that explains the idea in a more visual way, if you’re interested.


“Layers”, by Robert Couse-Baker (CC BY 2.0)

In a nutshell, the main idea is to simplify the codebase by removing/reducing the several layers located between Chromium and Blink that were necessary back in the day, before Blink was forked out of WebKit, to support different embedders with their particular needs (e.g. Epiphany, Chromium, Safari…). Those layers made sense back then, but these days Blink’s only embedder is Chromium’s content module, which is the module that Chrome and other Chromium-based browsers embed to leverage Chromium’s implementation of the Web Platform, and also where the multi-process and sandboxing architecture is implemented.

And in order to implement the multi-process model, the content module is split in two main parts running in separate processes, which communicate among each other over IPC mechanisms: //content/browser, which represents the “browser process” that you embed in your application via the Content API, and //content/renderer, which represents the “renderer process” that internally runs the web engine’s logic, that is, Blink.

With this in mind, the initial version of the Blink Onion Soup project (aka “Onion Soup 1.0”) was born about 4 years ago, and the folks spearheading this proposal started working on a 3-way plan to implement their vision, which can be summarized as follows:

  1. Migrate usage of Chromium’s legacy IPC to the new IPC mechanism called Mojo.
  2. Move as much functionality as possible from //content/renderer down into Blink itself.
  3. Slim down Blink’s public APIs by removing classes/enums unused outside of Blink.

Three clear steps, but definitely not easy ones as you can imagine. First of all, if we were to remove levels of indirection between //content/renderer and Blink as well as to slim down Blink’s public APIs as much as possible, a precondition for that would be to allow direct communication between the browser process and Blink itself, right?

In other words, if you need your browser process to communicate with Blink for some specific purpose (e.g. reacting in a visual way to a Push Notification), it would certainly be sub-optimal to have something like this:

…and yet that is what would happen if we kept using Chromium’s legacy IPC which, unlike Mojo, doesn’t allow us to communicate with Blink directly from //content/browser, meaning that we’d need to go first through //content/renderer and then navigate through different layers to move between there and Blink itself.

In contrast, using Mojo would allow us to have Blink implement those remote services internally and then publicly declare the relevant Mojo interfaces so that other processes can interact with them without going through extra layers. Thus, doing that kind of migration would ultimately allow us to end up with something like this:

…which looks nicer indeed, since now it is possible to communicate directly with Blink, where the remote service would be implemented (either in its core or in a module). Besides, it would no longer be necessary to consume Blink’s public API from //content/renderer, nor the other way around, enabling us to remove some code.

However, we can’t simply ignore some stuff that lives in //content/renderer implementing part of the original logic so, before we can get to the lovely simplification shown above, we would likely need to move some logic from //content/renderer right into Blink, which is what the second bullet point of the list above is about. Unfortunately, this is not always possible but, whenever it is an option, the job here would be to figure out what of that logic in //content/renderer is really needed and then figure out how to move it into Blink, likely removing some code along the way.

This particular step is what we commonly call “Onion Soup’ing” //content/renderer/<feature> (not entirely sure “Onion Soup” is a verb in English, though…), and this is for instance how things looked before (left) and after (right) Onion Soup’ing a feature I worked on myself: Chromium’s implementation of the Push API:


Onion Soup’ing //content/renderer/push_messaging

Notice how the whole design got quite simplified moving from the left to the right side? Well, that’s because some abstract classes declared in Blink’s public API and implemented in //content/renderer (e.g. WebPushProvider, WebPushMessagingClient) are no longer needed now that those implementations got moved into Blink (i.e. PushProvider and PushMessagingClient), meaning that we can now finally remove them.

Of course, there were also cases where we found some public APIs in Blink that were not used anywhere, as well as cases where they were only being used inside of Blink itself, perhaps because nobody noticed when that happened at some point in the past due to some other refactoring. In those cases the task was easier, as we would just remove them from the public API, if completely unused, or move them into Blink if still needed there, so that they are no longer exposed to a content module that no longer cares about that.

Now, trying to provide a high-level overview of what our team “Onion Soup’ed” this year, I think I can say with confidence that we migrated (or helped migrate) more than 10 different modules like the one I mentioned above, such as android/, appcache/, media/stream/, media/webrtc, push_messaging/ and webdatabase/, among others. You can see the full list with all the modules migrated during the lifetime of this project in the spreadsheet tracking the Onion Soup efforts.

In my particular case, I “Onion Soup’ed” the PushMessaging, WebDatabase and SurroundingText features, which was a fairly complete exercise as it involved working on all 3 bullet points: migrating to Mojo, moving logic from //content/renderer to Blink and removing unused classes from Blink’s public API.

And as for slimming down Blink’s public API, I can tell you that we helped get to a point where more than 125 classes/enums were removed from Blink’s public APIs, simplifying and reducing the Chromium codebase along the way, as you can check in this other spreadsheet that tracked that particular piece of work.

But we’re not done yet! While overall progress for the Onion Soup 1.0 project is around 90% right now, there are still a few more modules that require “Onion Soup’ing”, among which we’ll be tackling media/ (already WIP) and accessibility/ (starting in 2020), so there’s quite some more work to be done on that regard.

Also, there is a newer design document for the so-called Onion Soup 2.0 project that contains some tasks that we have already been working on for a while, such as “Finish Onion Soup 1.0”, “Slim down Blink public APIs”, “Switch Mojo to new syntax” and “Convert legacy IPC in //content to Mojo”, so we are definitely not done yet. Good news here, though: some of those tasks are already quite advanced, and in the particular case of the migration to the new Mojo syntax it’s nearly done by now, which is precisely what I’m talking about next…

Migration to the new Mojo APIs and the BrowserInterfaceBroker

Along with working on “Onion Soup’ing” some features, a big chunk of my time this year went also into this other task from the Onion Soup 2.0 project, where I was lucky enough again not to be alone, but accompanied by several of my team mates from Igalia‘s Chromium team.

This was a massive task where we worked hard to migrate all of Chromium’s codebase to the new Mojo APIs that were introduced a few months back, with the idea of getting Blink updated first and then having everything else migrated by the end of the year.


Progress of migrations to the new Mojo syntax: June 1st – Dec 23rd, 2019

But first things first: you might be wondering what was wrong with the “old” Mojo APIs since, after all, Mojo is the new thing we were migrating to from Chromium’s legacy API, right?

Well, as it turns out, the previous APIs had a few problems that were causing some confusion, due to not providing the most intuitive type names (e.g. what is an InterfacePtrInfo anyway?), as well as being quite error-prone, since the old types were not as strict as the new ones in enforcing certain conditions that should not happen (e.g. trying to bind an already-bound endpoint shouldn’t be allowed). In the Mojo Bindings Conversion Cheatsheet you can find an exhaustive list of cases that needed to be considered, in case you want to know more details about this type of migration.

Now, as a consequence of this additional complexity, the task wouldn’t be as simple as a “search & replace” operation because, while moving from old to new code, it would often be necessary to fix situations where the old code was working fine just because it was relying on some constraints not being checked. And if you top that up with the fact that there were, literally, thousands of lines in the Chromium codebase using the old types, then you’ll see why this was a massive task to take on.

Fortunately, after a few months of hard work done by our Chromium team, we can proudly say that we have nearly finished this task, which involved more than 1100 patches landed upstream after combining the patches that migrated the types inside Blink (see bug 978694) with those that tackled the rest of the Chromium repository (see bug 955171).

And by “nearly finished” I mean an overall progress of 99.21% according to the Migration to new mojo types spreadsheet where we track this effort, where Blink and //content have been fully migrated, and all the other directories, aggregated together, are at 98.64%, not bad!

In this regard, I’ve also been sending a bi-weekly status report mail to the chromium-mojo and platform-architecture-dev mailing lists for a while (see the latest report here), so make sure to subscribe there if you’re interested, even though those reports might not last much longer!

Now, back with our feet on the ground, the main roadblock at the moment preventing us from reaching 100% is //components/arc, whose migration needs to be agreed with the folks maintaining a copy of Chromium’s ARC mojo files for Android and ChromeOS. This is currently under discussion (see chromium-mojo ML and bug 1035484) and so I’m confident it will be something we’ll hopefully be able to achieve early next year.

Finally, and still related to these Mojo migrations, my colleague Shin and I took a “little detour” while working on this migration and focused for a while on the more specific task of migrating uses of Chromium’s InterfaceProvider to the new BrowserInterfaceBroker class. And while this was not a task as massive as the other migration, it was also very important because, besides fixing some problems inherent to the old InterfaceProvider API, it also blocked the migration to the new Mojo types, as InterfaceProvider usually relied on the old types!


Architecture of the BrowserInterfaceBroker

Good news here as well, though: after having the two of us working on this task for a few weeks, we can proudly say that, today, we have finished all the 132 migrations that were needed and are now in the process of doing some after-the-job cleanup operations that will remove even more code from the repository! \o/

Attendance to conferences

This year was particularly busy for me in terms of conferences, as I did travel to a few events both as an attendee and a speaker. So, here’s a summary about that as well:

As usual, I started the year attending one of my favourite conferences of the year by going to FOSDEM 2019 in Brussels. And even though I didn’t have any talk to present in there, I did enjoy my visit like every year I go there. Being able to meet so many people and being able to attend such an impressive amount of interesting talks over the weekend while having some beers and chocolate is always great!

Next stop was Toronto, Canada, where I attended BlinkOn 10 on April 9th & 10th. I was honoured to have a chance to present a summary of the contributions that Igalia made to the Chromium Open Source project in the 12 months before the event, which was a rewarding experience but also quite an intense one, because it was a lightning talk and I had to go through all the ~10 slides in a bit under 3 minutes! Slides are here and there is also a video of the talk, in case you want to check how crazy that was.

Took a bit of a rest from conferences over the summer and then attended, also as usual, the Web Engines Hackfest that we at Igalia have been organising every single year since 2009. Didn’t have a presentation this time, but still it was a blast to attend it once again as an Igalian and celebrate the hackfest’s 10th anniversary sharing knowledge and experiences with the people who attended this year’s edition.

Finally, I attended two conferences in the Bay Area by mid November: the first one was the Chrome Dev Summit 2019 in San Francisco on Nov 11-12, and the second one was BlinkOn 11 in Sunnyvale on Nov 14-15. It was my first time at the Chrome Dev Summit and I have to say I was fairly impressed by the event, how it was organised and the quality of the talks there. It was also great for me, as a browser developer, to see first-hand what things web developers are more and less excited about, what’s coming next… and to get to meet people I would never have had a chance to meet at other events.

As for BlinkOn 11, I presented a 30 min talk about our work on the Onion Soup project, the Mojo migrations and improving Chromium’s code health in general, along with my colleague Antonio Gomes. It was basically an “extended” version of this post, where we went not only through the tasks I was personally involved with, but also through other tasks that other members of our team worked on during this year, which include many other things! Feel free to check out the slides here, as well as the video of the talk.

Wrapping Up

As you might have guessed, 2019 has been a pretty exciting and busy year for me work-wise, but the most interesting bit in my opinion is that what I mentioned here was just the tip of the iceberg… many other things happened on the personal side of things, starting with the fact that this was the year we consolidated our return to Spain after 6 years living abroad, for instance.

Also, getting back to work-related stuff again, this year I was also accepted back into Igalia‘s Assembly after having re-joined this amazing company in September 2018, following a 6-year “gap” living and working in the UK. Besides being something I was very excited and happy about, it also brought some more responsibilities onto my plate, as is natural.

Last, I can’t finish this post without being explicitly grateful to all the people I got to interact with during this year, both at work and outside, who made my life easier and nicer at so many different levels. To all of you, cheers!

And to everyone else reading this… happy holidays and happy new year in advance!

Working on the Chromium Servicification Project

Published / by mario

Igalia & Chromium

It’s been a few months already since I (re)joined Igalia as part of its Chromium team and I couldn’t be happier about it: right from the very first day, I felt perfectly integrated into the team and quickly started making my way through the (fully upstream) project that would keep me busy during the following months: the Chromium Servicification Project.

But what is this “Chromium servicification project“? Well, according to the Wiktionary the word “servicification” means, applied to computing, “the migration from monolithic legacy applications to service-based components and solutions”, which is exactly what this project is about: as described in the Chromium servicification project’s website, the whole purpose behind this idea is “to migrate the code base to a more modular, service-oriented architecture”, in order to “produce reusable and decoupled components while also reducing duplication”.

Doing so would not only make Chromium a more manageable project from a source code point of view and create better and more stable interfaces to embed Chromium in different projects, but it should also enable teams to experiment with new features by combining these services in different ways, as well as to ship different products based on Chromium without having to bundle the whole world just to provide a particular set of features.

For instance, as Camille Lamy put it in the talk delivered (slides here) during the latest Web Engines Hackfest, “it might be interesting long term that the user only downloads the bits of the app they need so, for instance, if you have a very low-end phone, support for VR is probably not very useful for you”. This is of course not the current status of things yet (right now everything is bundled into a big executable), but it’s still a good way to visualise where this idea of moving to a service-oriented architecture should take us in the long run.

Chromium Servicification Layers

With this in mind, the idea behind this project is to work on the migration of the different parts of Chromium that depend on those components being converted into services, which would become part of a “foundation” base layer providing the core services that any application, framework or runtime built on top of Chromium would need.

As you can imagine, the whole idea of refactoring such an enormous code base as Chromium’s is daunting and a lot of work, especially considering that currently ongoing efforts can’t simply be stopped just to perform this migration, and that is where our focus currently lies: we integrate with the different teams from the Chromium project working on converting those components into services, and we make sure that the clients of their old APIs move away from them and use the new services’ APIs instead, while keeping everything running normally in the meantime.

At the beginning, we started working on the migration to the Network Service (which allows running Chromium’s network stack even without a browser) and managed to get it shipped in Chromium Beta by early October already, which was a pretty big deal as far as I understand. In my particular case, that stage was a very short ride since the migration was nearly done by the time I joined Igalia, but it’s still something worth mentioning due to the impact it had on the project, for extra context.

After that, our team started working on the migration of the Identity service, where the main idea is to encapsulate the functionality of accessing the user’s identities right through this service, so that one day this logic can be run outside of the browser process. One interesting bit about this migration is that this particular functionality (largely implemented inside the sign-in component) has historically been located quite high up in the stack, and yet it’s now being pushed all the way down into that “foundation” base layer, as a core service. That’s probably one of the factors contributing to making this migration quite complicated, but everyone involved is being very dedicated and has been very helpful so far, so I’m confident we’ll get there in a reasonable time frame.

If you’re curious enough, though, you can check this status report for the Identity service, where you can see the evolution of this particular migration, along with the impact our team has had since we started working on this part back in early October. There are more reports and more information in the mailing list for the Identity service, so feel free to check it out and/or subscribe there if you like.

One clarification is needed, though: for now, the scope of these migrations is focused on using the public C++ APIs that such services expose (see //services/<service_name>/public/cpp), but in the long run the idea is that those services will also provide Mojo interfaces. That will enable using their functionality regardless of whether those services run as part of the browser’s process or inside their own separate processes, which will then allow the flexibility that Chromium needs to run smoothly and safely in different kinds of environments, from the least constrained ones to others with a less favourable set of resources at their disposal.
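Here’s a rough sketch of what that difference in scope means for a client, using an entirely made-up “example” service: names like QuoteProvider, GetQuoteOfTheDay and the mojom below are invented for illustration and are not actual Chromium APIs.

    #include <string>

    #include "base/bind.h"
    #include "base/logging.h"
    #include "services/example/public/cpp/quote_provider.h"   // made up
    #include "services/example/public/mojom/quote.mojom.h"    // made up
    #include "services/service_manager/public/cpp/connector.h"

    // Current scope of the migrations: clients use the service's public C++
    // API, which hides where (and in which process) the logic actually runs.
    void UseCppApi(example::QuoteProvider* provider) {
      provider->GetQuoteOfTheDay(
          base::BindOnce([](const std::string& quote) { LOG(INFO) << quote; }));
    }

    // Longer term: the same functionality exposed over Mojo, reached through
    // the Service Manager regardless of which process hosts the service.
    void UseMojoApi(service_manager::Connector* connector) {
      example::mojom::QuotePtr quote_service;
      connector->BindInterface("example", &quote_service);
      quote_service->GetQuoteOfTheDay(
          base::BindOnce([](const std::string& quote) { LOG(INFO) << quote; }));
    }

Either way, the call sites stop poking directly at code that lives deep inside the browser, which is precisely the kind of decoupling the project is after.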

And this is it for now, I think. I was really looking forward to writing a status update about what I’ve been up to in the past months and here it is, even though it’s not the shortest of all reports.

FOSDEM 2019

One last thing, though: as usual, I’m going to FOSDEM this year as well, along with a bunch of colleagues & friends from Igalia, so please feel free to drop me/us a line if you want to chat and/or hang out, either to talk about work-related matters or anything else really.

And, of course, I’d also be more than happy to talk about any of the open job positions at Igalia, should you consider applying. There are quite a few of them available at the moment for all kinds of things (most of them open to remote work): from more technical roles such as graphics, compilers, multimedia, JavaScript engines, browsers (WebKit, Chromium, Web Platform) or systems administration (this one not available for remote work, though), to other less “hands-on” types of roles like developer advocate, sales engineer or project manager, so it’s possible there’s something interesting for you if you’re considering joining such a special company as this one.

See you in FOSDEM!

Frogr 1.5 released

Published / by mario

It’s almost one year later and, despite the acquisition of Flickr by SmugMug a few months ago and the predictions from some people that it would mean me no longer using Flickr or maintaining frogr, here comes the new release of frogr 1.5.

Frogr 1.5 screenshot

Not many changes this time, but some of them hopefully still useful for some people, such as the empty initial state that is now shown when you don’t have any pictures, as requested a while ago already by Nick Richards (thanks Nick!), or the removal of the applications menu from the shell’s top panel (now integrated in the hamburger menu), in line with the “App Menu Retirement” initiative.

Then there were some fixes here and there as usual, and quite a lot of updates to the translations this time, including a brand new translation into Icelandic! (thanks Sveinn).

So this is it this time, I’m afraid. Sorry there’s not much to report, and sorry as well for the long time it took me to do this release, but this past year has been pretty busy: between hectic work at Endless during the first part of the year, a whole international relocation with my family to move back to Spain during the summer, and me getting back to work at Igalia as part of the Chromium team, where I’m currently pretty busy working on the Chromium Servicification project (which is material for a completely different blog post, of course).

Anyway, last but not least, feel free to grab frogr from the usual places as outlined in its main website, among which I’d recommend the Flatpak method, either via GNOME Software  or from the command line by just doing this:

flatpak install --from \
    https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

For more information just check the main website, which I also updated to this latest release, and don’t hesitate to reach out if you have any questions or comments.

Hope you enjoy it. Thanks!

On Moving

Published / by mario / 7 Comments on On Moving

Winds of Change. One of my favourite songs ever and one that comes to my mind now that me and my family are going through quite some important changes, once again. But let’s start from the beginning…

A few years ago, back in January 2013, my family and me moved to the UK as the result of my decision to leave Igalia after almost 7 years in the company, to embark on the “adventure” of living abroad. This was an idea we had been thinking about for a while already at that time, and our situation back then suggested that it could be the right moment to try it out… so we did.

It was kind of a long process though: I first arrived alone in January to make sure I would have time to figure things out and find a permanent place for us to live in, and then my family joined me later in May, once everything was ready. Not great, if you ask me, to be living separated from your loved ones for 4 full months, not to mention the juggling my wife had to do during that time to combine her job with looking after the kids mostly on her own… but we managed to see each other every 2-3 weekends thanks to the London – Coruña direct flights in the meantime, so at least it was bearable from that point of view.

But despite those not so great (yet expected) beginnings, I have to say that these past 5+ years have been an incredible experience overall, and we don’t have a single regret about making the decision to move; maybe just a few minor, specific things if I’m completely honest, but that’s about it. For instance, it’s been just beyond incredible and satisfying to see my kids develop their English skills “from zero to hero”, settle at their school, make new friends and, in one word, evolve during these past years. And that alone would have been a good enough reason to justify the move already, but it turns out we also have plenty of other reasons, as we all have evolved and enjoyed the ride quite a lot as well, made many new friends, got to know many new places, worked on different things… a truly enriching experience indeed!

In a way, I confess that this could easily be one of those things we’d probably never have done had we known in advance all the things we’d have to do and go through along the way, so I’m very grateful for that naive ignorance, since that’s probably how we found the courage, energy and time to do it. And looking back, it seems clear to me that it was the right time to do it.

But now it’s 2018 and, even though we had such a great time here both from personal and work-related perspectives, we have decided that it’s time for us to come back to Galicia (Spain), and try to continue our vital journey right from there, in our homeland.

And before you ask… no, this is not because of Brexit. I recognize that the result of the referendum has been a “contributing factor” (we surely didn’t think as much about returning to Spain before that 23rd of June, that’s true), but there were more factors contributing to that decision, which somehow have aligned all together to tell us, very clearly, that Now It’s The Time…

For instance, we always knew that we would eventually move back for my wife to take over the family business, and also that we’d rather make the move in a way that it would be not too bad for our kids when it happened. And having a 6yo and a 9yo already it feels to us like now it’s the perfect time, since they’re already native English speakers (achievement unlocked!) and we believe that staying any longer would only make it harder for them, especially for my 9yo, because it’s never easy to leave your school, friends and place you call home behind when you’re a kid (and I know that very well, as I went through that painful experience precisely when I was 9).

Besides that, I’ve also recently decided to leave Endless after 4 years in the company, and so it looks like, once again, moving back home would fit nicely with that work-related change, for several reasons. Now, I don’t want to go into much detail on why exactly I decided to leave Endless, so I think I’ll summarize it as me needing a change and a rest after these past years working on Endless OS, which has been an equally awesome and intense experience, as you can imagine. If anything, I’d just want to be clear that contributing to such a meaningful project surrounded by such a team of great human beings was an experience I couldn’t be happier and prouder about, so you can be certain it was not an easy decision to make.

Actually, quite the opposite: a pretty hard one I’d say… but a nice “side effect” of that decision, though, is that leaving at this precise moment would allow me to focus on the relocation in a more organized way as well as to spend some quality time with my family before leaving the UK. Besides, it will hopefully be also useful for us to have enough time, once in Spain, to re-organize our lives there, settle properly and even have some extra weeks of true holidays before the kids start school and we start working again in September.

Now, taking a few weeks off and moving back home is very nice and all that, but we still need to have jobs, and this is where our relocation gets extra interesting as it seems that we’re moving home in multiple ways at once…

For one, my wife will start taking over the family business with the help of her dad in her home town of Lalín (Pontevedra), where we plan to be living for the foreseeable future. This is the place where she grew up and where her family and many friends live, but also a place she hasn’t lived in for the last 15 years, so the fact that we’ll be relocating there is already quite a thing in the “moving back home” department for her…

Second, for my kids this will mean going back to having their relatives nearby once again, as well as friends they could only see and play with during holidays until now, which I think is a very good thing for them. Of course, this doesn’t feel as much like moving home for them as it does for us, since they obviously consider the UK their home for now, but our hope is that it will be ok in the medium-long term, even though it will likely be a bit challenging for them at the beginning.

Last, I’ll be moving back to work at Igalia, almost 6 years after I left, which, as you might imagine, feels to me very much like “moving back home” too: I’ll be going back to working in a place I’ve always loved so much for multiple reasons, surrounded by people I know and who I consider friends already (I would even call some of them “best friends”) and with its foundations set on important principles and values that still matter very much to me, both from technical (e.g. Open Source, Free Software) and not so technical (e.g. flat structure, independence) points of view.

Those who know me better might very well think that I’ve never really moved on, as I hinted in the title of the blog post I wrote years ago, and in some way that’s perhaps not entirely wrong, since it’s no secret I always kept in touch throughout these past years at many levels and that I always felt enormously proud of my time as an Igalian. Emmanuele even told me that I sometimes enter what he seems to call an “Igalia mode” when I speak of my past time there, as if I was still there… Of course, I haven’t seen any formal evidence of such a thing happening yet, but it certainly does sound like a possibility, as it’s true I easily get carried away when Igalia comes to my mind, maybe as a mix of nostalgia, pride, good memories… those sorts of things. I suppose he’s got a point after all…

So, I guess it’s only natural that I finally decided to apply again since, even though both the company and I have evolved quite a bit during these years, the core foundations and principles it’s based upon remain the same, and I still very much align with them. But applying was only one part, so I couldn’t finish this blog post without stating how grateful I am for having been granted this second opportunity to join Igalia once again because, being honest, more often than not I was worried about whether I would be “good enough” for the Igalia of 2018. And the truth is that I won’t know for real until I actually start working and stay in the company for a while, but knowing that both my former colleagues and the newer Igalians who joined since I left trust me enough to join is all I need for now, and I couldn’t be more excited or happier about it.

Anyway, this post is already too long and I think I’ve covered everything I wanted to mention On Moving (pun intended with my post from 2012, thanks Will Thompson for the idea!), so I think I’ll stop right here and re-focus on the latest bits related to the relocation before we effectively leave the UK for good, now that we finally left our rented house and put all our stuff in a removals van. After that, I expect a few days of crazy unpacking and bureaucracy to properly settle in Galicia and then hopefully a few weeks to rest and get our batteries recharged for our new adventure, starting soon in September (yet not too soon!).

As usual, we have no clue how the future will turn out, but we have a good feeling about this thing of moving back home in multiple ways, so I believe we’ll be fine as long as we stick together as a family, as we always have so far.

But in any case, please wish us good luck. That’s always welcome! :-)

Updating Endless OS to GNOME Shell 3.26 (Video)

Published / by mario

It’s been a pretty hectic time during the past months for me here at Endless, busy with updating our desktop to the latest stable version of GNOME Shell (3.26, at the time the process started), among other things. And in all this excitement, it seems like I forgot to blog so I think this time I’ll keep it short for once, and simply link to a video I made a couple of months ago, right when I was about to finish the first phase of the process (which ended up taking a bit longer than expected).

Note that the production of this video is far from high quality (unsurprisingly), but the feedback I got so far is that it has apparently been very useful to explain to less technically inclined people what doing a rebase of these characteristics means, and with that in mind I woke up this morning realizing that it might be good to give it its own entry in my personal blog, so here it is.


(Pro-tip: Enable video subtitles to see contextual info)

Granted, this hasn’t been a task as daunting as The Great Rebase I was working on one year ago, but still pretty challenging for a different set of reasons that I might leave for a future, and more detailed, post.

Hope you enjoy watching the video as much as I did making it.

Frogr 1.4 released

Published / by mario

Another year goes by and, again, I feel the call to make one more release just before 2017 is over, so here we are: frogr 1.4 is out!

Screenshot of frogr 1.4

Yes, I know what you’re thinking: “Who uses Flickr in 2017 anyway?”. Well, as shocking as this might seem to you, it is apparently not just me who is using this small app, but also another 8,935 users out there issuing an average of 0.22 Queries Per Second every day (19008 queries a day) for the past year, according to the stats provided by Flickr for the API key.

Granted, it may not be a huge number compared to what other online services might be experiencing these days, but for me this is enough motivation to keep the little green frog working and running, and thus worth updating it one more time. Also, I’d argue that these numbers for a niche app like this one (aimed at users of the Linux desktop who still use Flickr to upload pictures in 2017) don’t even look too bad, although without more specific data backing this comment it is, of course, just my personal and highly-biased opinion.

So, what’s new? Some small changes and fixes, along with other less visible modifications, but still relevant and necessary IMHO:

  • Fixed integration with GNOME Software (fixed a bug regarding appstream data).
  • Fixed errors loading images from certain cameras & phones, such as the OnePlus 5.
  • Cleaned the code by finally migrating to using g_auto, g_autoptr and g_autofree.
  • Migrated to the meson build system, and removed all the autotools files.
  • Big update to translations, now with more than 22 languages 90% – 100% translated.

Also, this is the first release that happens after having a fully operational centralized place for Flatpak applications (aka Flathub), so I’ve updated the manifest and I’m happy to say that frogr 1.4 is already available for i386, arm, aarch64 and x86_64. You can install it either from GNOME Software (details on how to do it at https://flathub.org), or from the command line by just doing this:

flatpak install --from https://flathub.org/repo/appstream/org.gnome.frogr.flatpakref

Also worth mentioning that, starting with Frogr 1.4, I will no longer be updating my PPA at Launchpad. I did that in the past to make it possible for Ubuntu users to have access to the latest release ASAP, but now we have Flatpak that’s a much better way to install and run the latest stable release in any supported distro (not just Ubuntu). Thus, I’m dropping the extra work required to deal with the PPA and flat-out recommending users to use Flatpak or wait until their distro of choice packages the latest release.

And I think this is everything. As usual, feel free to check the main website for extra information on how to get frogr and/or how to contribute to it. Feedback and/or help is more than welcome.

Happy new year everyone!

Back from GUADEC

Published / by mario

After spending a few days in Manchester with other fellow GNOME hackers and colleagues from Endless, I’m finally back at my place in the sunny land of Surrey (England) and I thought it would be nice to write some sort of recap, so here it is:

The Conference

Getting ready for GUADEC

I arrived in Manchester on Thursday the 27th just in time to go to the pre-registration event, where I met the rest of the gang and had some dinner, and that was already a great start. Let’s forget about the fact that I lost my badge even before leaving the place, which has to be some type of record (losing the badge before the conference starts, really?), but all in all it was great to meet old friends, as well as some new faces, that evening already.

Then the 3 core days of GUADEC started. My first impression was that everything (including the accommodation at the university, which was awesome) was very well organized in general, and the venue made for a perfect place to hold this type of event, so I was already impressed even before things started.

I attended many talks and all of them were great, but if I had to pick my 5 favourite ones I think those would be the following ones, in no particular order:

  • The GNOME Way, by Allan: A very insightful and inspiring talk, it made me think of why we do the things we do, and why it matters. It also kicked off an interesting pub conversation with Allan later on and I learned a new word in English (“principled“), so believe me it was great.
  • Keynote: The Battle Over Our Technology, by Karen: I have no words to express how much I enjoyed this talk. Karen was very powerful on stage and the way she shared her experiences and connected them to why Free Software is important did leave a mark.
  • Mutter/gnome-shell state of the union, by Florian and Carlos: As a person who is getting increasingly involved with Endless’s fork of GNOME Shell, I found this one particularly interesting. Also, I found it rather funny at points, especially during “the NVIDIA slide”.
  • Continuous: Past, Present, and Future, by Emmanuele: Sometimes I talk to friends and it strikes me how quickly they dismiss things like CI/CD as “boring” or “not interesting”, which I couldn’t disagree with more. This is very important work and Emmanuele is kicking ass as the build sheriff, so his talk was very interesting to me too. Also, he’s got a nice cat.
  • The History of GNOME, by Jonathan: Truth be told, Jonathan had already given a rather similar talk internally at Endless a while ago, so it was not entirely new to me, but I enjoyed it a lot too because it brought so many memories to my head: starting with when I started with Linux (RedHat 5.2 + GNOME pre-release!), when I used GNOME 1.x at the University and then moved to GNOME 2.x later on… not to mention the funny anecdotes he shared (I never imagined the phone ringing while sleeping could be a good thing). Perfectly timed for the 20th anniversary of GNOME indeed!

As I said, I attended other talks as well and all of them were great too, so I’d encourage you to check the schedule and watch the recordings once they are available online, you won’t regret it.

Closing ceremony

And the next GUADEC will be in… Almería!

One thing that surprised me this time was that I didn’t do as much hacking during the conference as on other occasions. Rather than seeing it as a bad thing, I believe that’s a clear indicator of how interesting and engaging the talks were this year, which made for a perfect return after missing 3 editions (yes, my last GUADEC was in 2013).

All in all it was a wonderful experience, and I can’t thank and congratulate the local team and the volunteers who ran the conference this year well enough, so here is a picture I took where you can see all the people standing up and clapping during the closing ceremony.

Many thanks and congratulations for all the work done. Seriously.

The Unconference

After 3 days of conference, the second part started: “2 days and a bit” (I was leaving on Wednesday morning) of meeting people and hacking in a different venue, where we gathered to work on different topics, plus the occasional high-bandwidth meeting in person.

GUADEC unconference

As you might expect, my main interest this time was around GNOME Shell, which is my main duty at Endless right now. This means that, besides trying to be present in the relevant BoFs, I spent quite some time participating in discussions that gathered both upstream contributors and people from different companies (e.g. Endless, Red Hat, Canonical).

This was extremely helpful and useful for me since, now that we have rebased our fork onto GNOME Shell 3.22, we’re in a much better position to converge and contribute back to upstream in a more reasonable fashion, as well as to collaborate on implementing new features that we already have in Endless but that didn’t make it upstream yet.

And talking about those features, I’d like to highlight two things:

First, the discussion we held with both developers and designers to talk about the new improvements that are being considered for both the window picker and the apps view, where one of the ideas is to improve the apps view by (maybe) adding a new grid of favourite applications that the users could customize, change the order… and so forth.

According to the designers this proposal was partially inspired by what we have in Endless, so you can imagine I would be quite happy to see such a plan move forward, as we could help with the coding side of things upstream while reducing our diff for future rebases. Thing is, this is just a proposal for now so nothing is set in stone yet, but I will definitely be interested in following and participating in the relevant discussions regarding this.

Second, as my colleague Georges already vaguely mentioned in his blog post, we had an improvised meeting on Wednesday with one of the designers from Red Hat (Jakub Steiner), where we discussed a very particular feature upstream has wanted to have for a while and which Endless implemented downstream: management of folders using DnD, right from the apps view.

This is something that Endless has had in its desktop since the beginning of time, but the implementation relied on a downstream-specific version of folders that Endless OS implemented even before folders were available in the upstream GNOME Shell, so contributing that back would have been… “interesting”. Fortunately, we have now dropped that custom implementation of folders and embraced the upstream solution during the last rebase to 3.22, and we’re in a much better position now to contribute our solution upstream. Once this lands, you should be able to create, modify, remove and use folders without having to open GNOME Software at all, just by dragging and dropping apps on top of other apps and folders, pretty much in a similar fashion to how you would do it on a mobile OS these days.

We’re still at an early stage for this, though. Our current solution in Endless is based on some assumptions and tools that will simply not be the case upstream, so we will have to work with both the designers and the upstream maintainers to make this happen over the next months. Thus, don’t expect anything to land for the next stable release yet, but simply know that we’ll be working on it and that it should hopefully arrive not too far in the future.

The Rest

This GUADEC has been a blast for me, and probably the best and my favourite edition ever among all those I’ve attended since 2008. The reasons for such a strong statement are diverse, but I think I can mention a few that are clear to me:

From a personal point of view, I never felt so engaged and part of the community as this time. I don’t know if that has something to do with my recent duties in Endless (e.g. flatpak, GNOME Shell) or with something less “tangible” but that’s the truth. Can’t state it well enough.

From the perspective of Endless, the fact that 17 of us were there is something to be very excited and happy about, especially considering that I work remotely and only see 4 of my colleagues from the London area on a regular basis (i.e. one day a week). Being able to meet people I don’t regularly see as well as some new faces in person is always great, but having them all together “under the same ceilings” for 6 days was simply outstanding.

GNOME 20th anniversary dinner

Also, as it happened, this year was the celebration of the 20th anniversary of the GNOME project and so the whole thing was quite emotional too. Not to mention that Federico’s birthday happened during GUADEC, which was a more than nice… coincidence? :-) Ah! And we also had an incredible dinner on Saturday to celebrate that, so there certainly couldn’t be a better opportunity for me to attend this conference!

Last, a nearly impossible thing happened: despite the demanding schedule that an event like this imposes (and I’m including our daily visit to the pubs here too), I managed to go running between 5km and 10km every single day, which I believe is the first time that has happened in my life. I definitely took my running gear with me to other conferences, but this was the only time I took it that seriously, and also the first time that I joined other fellow GNOME runners in the process, which was quite fun as well.

Final words

I couldn’t finish this extremely long post without a brief note to acknowledge and thank all the many people who made this possible this year: the GNOME Foundation and the amazing group of volunteers who helped organize it, the local team who did an outstanding job at all levels (venue, accommodation, events…), my employer Endless for sponsoring my attendance and, of course, all the people who attended the event and made it such a special GUADEC this year.

Thank you all, and see you next year in Almería!

Credit to Georges Stavracas

Endless OS 3.2 released!

Published / by mario

We just released Endless OS 3.2 to the world after a lot of really hard work from everyone here at Endless, including many important changes and fixes that spread pretty much across the whole OS: from the guts and less visible parts of the core system (e.g. a newer Linux kernel, OSTree and Flatpak improvements, updated libraries…) to other more visible parts including a whole rebase of the GNOME components and applications (e.g. mutter, gnome-settings-daemon, nautilus…), newer and improved “Endless apps” and a completely revamped desktop environment.

By the way, before I dive deeper into the rest of this post, I’d like to remind you that Endless OS is an Operating System that you can download for free from our website, so please don’t hesitate to check it out if you want to try it by yourself. But now, even though I’d love to talk in detail about ALL the changes in this release, I’d like to talk specifically about what has kept me busy most of the time since around March: the full revamp of our desktop environment, that is, our particular version of GNOME Shell.

Endless OS 3.2 as it looks in my laptop right now

If you’re already familiar with what Endless OS is and/or with the GNOME project, you might already know that Endless’s desktop is a forked and heavily modified version of GNOME Shell, but what you might not know is that it was specifically based on GNOME Shell 3.8.

Yes, you read that right, no kidding: a now 4-year-old version of GNOME Shell was alive and kicking underneath the thousands of downstream changes that we added on top of it during all that time to implement the desired user experience for our target users, as we iterated based on the tons of user testing sessions, research and design visions that this company has been working on right since its inception. That includes porting very visible things such as the “Endless button”, the user menu, the apps grid right on top of the desktop, the ability to drag’n’drop icons around to re-organize that grid and easily manage folders (by just dragging apps into/out of folders), the integrated desktop search (+ additional search providers), the window picker mode… and many other things that are not visible at all, but that are required to deliver a tight and consistent experience to our users.

Endless button showcasing the new “show desktop” functionality

Aggregated system indicators and the user menu

Of course, this situation was not optimal and finally we decided we had found the right moment to tackle this situation in line with the 3.2 release, so I was tasked with leading the mission of “rebasing” our downstream changes on top of a newer shell (more specifically on top of GNOME Shell 3.22), which looked to me like a “hell of a task” when I started, but still I didn’t really hesitate much and gladly picked it up right away because I really did want to make our desktop experience even better, and this looked to me like a pretty good opportunity to do so.

By the way, note that I say “rebasing” in quotes, and the reason is that the usual approach of taking your downstream patches on top of a certain version of an Open Source project and applying them on top of whatever newer version you want to update to didn’t really work here: the vast amount of changes, combined with the fact that the code base changed quite a bit between 3.8 and 3.22, made that strategy fairly complicated, so in the end we had to opt for a combination of rebasing some patches (when they were clean enough and still made sense) and re-implementing the desired functionality on top of the newer base.

Integrated desktop search

The integrated desktop search in action

New implementation for folders in Endless OS (based on upstream’s)

As you can imagine, and especially considering my fairly limited previous experience with things like mutter, clutter and the shell’s code, this proved to be a pretty difficult thing for me to take on if I’m truly honest. However, maybe it’s precisely because of all those things that, now that it’s released, I look at the result of all these months of hard work and I can’t help but feel very proud of what we achieved in this, pretty tight, time frame: we have a refreshed Endless OS desktop now with new functionality, better animations, better panels, better notifications, better folders (we ditched our own in favour of upstream’s), better infrastructure… better everything!.

Sure, it’s not perfect yet (no such thing as “finished software”, right?) and we will keep working hard on the next releases to fix known issues and make it even better, but what we have released today is IMHO a pretty solid 3.2 release that I feel very proud of, and one that is out there now already for everyone to see, use and enjoy, and that is quite an achievement.

Removing an app by dragging and dropping it into the trash bin

Now, you might have noticed I used “we” most of the time in this post when referring to the hard work that we did, and that’s because this was not something I did all by myself, not at all. While it’s still true that I started working on this mostly on my own and that I probably took on most of the biggest tasks myself, the truth is that several other people jumped in to help with this monumental task, tackling a fair amount of important tasks in parallel, and I’m pretty sure we couldn’t have released this by now if not for the team effort we managed to pull off here.

I’m a bit afraid of forgetting to mention some people, but I’ll try anyway: many thanks to Cosimo Cecchi, Joaquim Rocha, Roddy Shuler, Georges Stavracas, Sam Spilsbury, Will Thomson, Simon Schampijer, Michael Catanzaro and of course the entire design team, who all joined me in this massive quest by taking some time alongside their other responsibilities to help by tackling several tasks each, resulting in the shell being released on time.

The window picker as activated from the hot corner (bottom – right)

Last, before I finish this post, I’d just like to pre-answer a couple of questions that I guess some of you might have already:

Will you be proposing some of these changes upstream?

Our intention is to reduce the diff with upstream as much as possible, which is the reason we have left many things from upstream untouched in Endless OS 3.2 (e.g. the date/menu panel) and the reason why we already did some fairly big changes for 3.2 to get closer in other places we previously had our very own thing (e.g. folders), so be sure we will upstream everything we can as far as it’s possible and makes sense for upstream.

Actually, we have already pushed many patches to the shell and related projects since Endless moved to GNOME Shell a few years ago, and I don’t see any reason why that would change.

When will Endless OS desktop be rebased again on top of a newer GNOME Shell?

If there’s anything we learned from this “rebasing” experience, it is that we don’t want to go through it ever again, seriously :-). It made sense to be based on an old shell for some time while we were prototyping and developing our desktop based on our research, user testing sessions and so on, but we now have a fairly mature system and the current plan is to move on from this situation, where we had changes on top of a 4-year-old codebase, to a point where we’ll keep closer to upstream, with more frequent rebases from now on.

Thus, the short answer to that question is that we plan to rebase the shell more frequently after this release, ideally two times a year so that we are never too far away from the latest GNOME Shell codebase.


And I think that’s all. I’ve already written too much, so if you excuse me I’ll get back to my Emacs (yes, I’m still using Emacs!) and let you enjoy this video of a recent development snapshot of Endless OS 3.2, as created by my colleague Michael Hall a few days ago:


(Feel free to visit our YouTube channel to check out for more videos like this one)

Also, quick shameless plug just to remind you that we have an Endless Community website which you can join and use to provide feedback, ask questions or simply to keep informed about Endless. And if real time communication is your thing, we’re also on IRC (#endless on Freenode) and Slack, so I very much encourage you to join us via any of these channels as well if you want.

Ah! And before I forget, just a quick note to mention that this year I’m going to GUADEC again after a big break (my last one was in Brno, in 2013) thanks to my company, which is sponsoring my attendance in several ways, so feel free to say “hi!” if you want to talk to me about Endless, the shell, life or anything else.

Frogr 1.3 released

Published / by mario

Quick post to let you know that I just released frogr 1.3.

This is mostly a small update to incorporate a bunch of updates in translations, a few changes aimed at improving the flatpak version of it (the desktop icon has been broken for a while until a few weeks ago) and to remove some deprecated calls in recent versions of GTK+.

Ah! I’ve also officially dropped support for OS X via gtk-osx, as I had been systematically failing to update and use it (I only use frogr from GNOME these days) for a loooong time, and so it did not make sense for me to keep pretending that the mac version is something usable and maintained anymore.

As usual, you can go to the main website for extra information on how to get frogr and/or how to contribute to it. Any feedback or help is more than welcome!

 

Going to FOSDEM!

Published / by mario / 1 Comment on Going to FOSDEM!

It’s been two years since the last time I went to FOSDEM, but it seems that this year I’m going to be there again and, after having traveled to Brussels a few times already by plane and train, this year I’m going by car: from home to the Eurotunnel and then all the way up to Brussels. Let’s see how it goes.

FOSDEM 2017

As for the conference, I don’t have any particular plan other than going to some keynotes and probably spending most of my time in the Distributions and the Desktops devrooms. Well, and of course joining other GNOME people at A La Bécasse, on Saturday night.

As you might expect, I will have my Endless laptop with me while in the conference, so feel free to come and say “hi” in case you’re curious or want to talk about that if you see me around.

At the moment, I’m mainly focused on developing and improving our flatpak story, how we deliver apps to our users via this wonderful piece of technology and how the overall user experience ends up being, so I’d be more than happy to chat/hack around this topic and/or about how we integrate flatpak in EndlessOS, the challenges we found, the solutions we implemented… and so forth.

That said, flatpak is one of my many development hats in Endless, so be sure I’m open to talk about many other things, including not work-related ones, of course.

Now, if you excuse me, I have a bag to prepare, an English car to “adapt” for the journey ahead and, more importantly, quite some hours to sleep. Tomorrow it will be a long day, but it will be worth it.

See you at FOSDEM!

Frogr 1.2 released

Published / by mario

Of course, just a few hours after releasing frogr 1.1, I’ve noticed that there was actually no good reason to depend on gettext 0.19.8 for the purposes of removing the intltool dependency only, since 0.19.7 would be enough.

So, as raising that requirement up to 0.19.8 was causing trouble for packaging frogr in some distros still on 0.19.7 (e.g. Ubuntu 16.04 LTS), I’ve decided to do a quick new release, and frogr 1.2 is now out with that only change.

One direct consequence is that you can now install the packages for Ubuntu from my PPA if you have Ubuntu Xenial 16.04 LTS or newer, instead of having to wait for Ubuntu Yakkety Yak (yet to be released). Other than that, 1.2 is exactly the same as 1.1, so you probably don’t want to package it for your distro if you already did it for 1.1 without trouble. Sorry for the noise.

 

Frogr 1.1 released

Published / by mario / 2 Comments on Frogr 1.1 released

After almost one year, I’ve finally released another small iteration of frogr with a few updates and improvements.

Screenshot of frogr 1.1

Not many things, to be honest, but just a few as I said:

  • Added support for flatpak: it’s now possible to authenticate frogr from inside the sandbox, as well as open pictures/videos in the appropriate viewer, thanks to the OpenURI portal.
  • Updated translations: as it was noted in the past when I released 1.0, several translations were left out incomplete back then. Hopefully the new version will be much better in that regard.
  • Dropped the build dependency on intltool (requires gettext >= 0.19.8).
  • A few bugfixes too and other maintenance tasks, as usual.

Besides, another significant difference compared to previous releases is related to the way I’m distributing it: in the past, if you used Ubuntu, you could configure my PPA and install it from there even in fairly old versions of the distro. However, this time that’s only possible if you have Ubuntu 16.10 “Yakkety Yak”, as that’s the one that ships gettext >= 0.19.8, which is required now that I removed all trace of intltool (more info in this post).

However, this is also the first time I’m using flatpak to distribute frogr so, regardless of which distribution you have, you can now install and run it as long as you have the org.gnome.Platform/x86_64/3.22 stable runtime installed locally. Not too bad! :-). See more detailed instructions in its web site.

That said, it’s worth noting that you also need the portal frontend service and a backend implementation, so that you can authorize your Flickr account using the browser outside the sandbox, via the OpenURI portal. If you don’t have that at hand, you can still use the sandboxed version of frogr, but you’d need to copy your configuration files from a non-sandboxed frogr (under ~/.config/frogr) first, right into ~/.var/app/org.gnome.Frogr/config, and then it should be usable again (opening files in external viewers would not work yet, though!).

So this is all, hope it works well and it’s helpful to you. I just finished uploading a few hundred pictures a couple of days ago and it seemed to work fine, but you never know… the devil is in the details!

 

Cross-compiling WebKit2GTK+ for ARM

Published / by mario / 2 Comments on Cross-compiling WebKit2GTK+ for ARM

I haven’t blogged in a while -mostly due to lack of time, as usual- but I thought I’d write something today to let the world know about one of the things I’ve worked on a bit during this week, while remotely attending the Web Engines Hackfest from home:

Setting up an environment for cross-compiling WebKit2GTK+ for ARM

I know this is not new, nor ground-breaking news, but the truth is that I could not find any up-to-date documentation on the topic in any public forum (the only one I found was this pretty old post from the time WebKitGTK+ used autotools), so I thought I would devote some time to it now, so that I could save more in the future.

Of course, I know for a fact that many people use local recipes to cross-compile WebKit2GTK+ for ARM (or simply build on the target machine, which usually takes a looong time), but those are usually ad-hoc things and hard-to-reproduce environments (or at least hard for me) and, even worse, often bound to downstream projects, so I thought it would be nice to try to have something tested with upstream WebKit2GTK+ and publish it on trac.webkit.org.

So I spent some time working on this with the idea of producing some step-by-step instructions including how to create a reproducible environment from scratch and, after some inefficient flirting with a VM-based approach (which turned out to be insanely slow), I finally settled on creating a chroot + provisioning it with a simple bootstrap script + using a simple CMake Toolchain file, and that worked quite well for me.

On my fast desktop machine I can now get a full build of WebKit2GTK+ 2.14 (or trunk) in less than 1 hour, which is quite a productivity bump if you compare it to the approximately 18h it takes to build it natively on the target ARM device I have :-)

Of course, I’ve referenced this documentation in trac.webkit.org, but if you want to skip that and go directly to it, I’m hosting it in a git repository here: github.com/mariospr/webkit2gtk-ARM.

Note that I’m not a CMake expert (nor even close) so the toolchain file is far from perfect, but it definitely does the job with both the 2.12.x and 2.14.x releases as well as with the trunk, so hopefully it will be useful as well for someone else out there.

Last, I want to thank the organizers of this event for making it possible once again (and congrats to Igalia, which just turned 15 years old!) as well as my employer for supporting me attending the hackfest, even if I could not make it in person this time.

Endless Logo

Chromium Browser on xdg-app

Published / by mario / 7 Comments on Chromium Browser on xdg-app

Last week I had the chance to attend for 3 days the GNOME Software Hackfest, organized by Richard Hughes and hosted at the brand new Red Hat’s London office.

And besides meeting new people and some old friends (which I admit is one of my favourite aspects of attending these kinds of events), and discovering what is now my new favourite place for fast food near London Bridge, I happened to learn quite a few new things while working on my particular personal quest: getting the Chromium browser to run as an xdg-app.

While this might not seem to be an immediate need for Endless right now (we currently ship a Chromium-based browser as part of our OSTree based system), this was definitely something worth exploring as we are now implementing the next version of our App Center (which will be based on GNOME Software and xdg-app). Chromium updates very frequently with fixes and new features, and so being able to update it separately and more quickly than the OS is very valuable.

Endless OS App Center
Screenshot of Endless OS’s current App Center

So, while Joaquim and Rob were working on the GNOME Software related bits and discussing aspects related to Continuous Integration with the rest of the crowd, I spent some time learning about xdg-app and trying to get Chromium to build that way which, unsurprisingly, was not an easy task.

Fortunately, the base documentation about xdg-app together with Alex Larsson’s blog post series about this topic (which I wholeheartedly recommend reading) and some experimentation from my side was enough to get started with the whole thing, and I was quickly on my way to fixing build issues, adding missing deps and the like.

Note that my goal at this time was not to get a fully featured Chromium browser running, but to get something running based on the version that we use in Endless (Chromium 48.0.2564.82), with a couple of things disabled for now (e.g. chromium’s own sandbox, udev integration…) and putting, of course, some holes in the xdg-app configuration so that Chromium can access the system’s parts that are needed for it to function (e.g. network, X11, shared memory, pulseaudio…).

Of course, the long term goal is to close as many of those holes as possible using Portals instead, as well as not giving up on Chromium’s own sandbox right away (some work will be needed here, since `setuid` binaries are a no-go in xdg-app’s world), but for the time being I’m pretty satisfied (and kind of surprised, even) that I managed to get the whole beast built and running after 4 days of work since I started :-).

But, as Alberto usually says… “screencast or it didn’t happen!”, so I recorded a video yesterday to properly share my excitement with the world. Here you have it:


[VIDEO: Chromium Browser running as an xdg-app]

As mentioned above, this is work-in-progress stuff, so please hold your horses and manage your expectations wisely. It’s not quite there yet in terms of what I’d like to see, but it’s definitely a step forward in the right direction, and something I hope will be useful not only for us, but for the entire Linux community as a whole. Should you be curious about the current status of the whole thing, feel free to check the relevant files in its git repository here.

Last, I would like to finish this blog post by saying thanks especially to Richard Hughes for organizing this event, as well as the GNOME Foundation and Red Hat for their support in the development of GNOME Software and xdg-app. Finally, I’d also like to thank my employer Endless for supporting me attending this hackfest. It’s been a terrific week indeed… thank you all!

Credit to Georges Stavracas

Frogr 1.0 released

Published / by mario / 7 Comments on Frogr 1.0 released

I’ve just released frogr 1.0. I can’t believe it took me 6 years to move from the 0.x series to the 1.0 release, but here it is finally. For good or bad.

Screenshot of frogr 1.0

This release is again a small increment on top of the previous one that fixes a few bugs, should make the UI look a bit more consistent and “modern”, and includes some cleanups at the code level that I’ve been wanting to do for some time, like using G_DECLARE_FINAL_TYPE, which helped me get rid of ~1.7K LoC.

Last, I’ve created a few packages for Ubuntu in my PPA that you can use right away if you’re on Vivid or later, while it does not get packaged by the distro itself, although I’d expect it to eventually be available via the usual means in different distros, hopefully soon. For extra information, just take a look at frogr’s website at live.gnome.org.

Now remember to take lots of pictures so that you can upload them with frogr :)

Happy new year!

Attending the Web Engines Hackfest

Published / by mario / 1 Comment on Attending the Web Engines Hackfest

webkitgtk-hackfest-banner

It’s certainly been a while since I attended this event for the last time, 2 years ago, when it was a WebKitGTK+-only oriented hackfest, so I guess it was a matter of time before it happened again…

It will be different for me this time, though, as now my main focus won’t be on accessibility (yet I’m happy to help with that, too), but on fixing a few issues related to the WebKit2GTK+ API layer that I found while working on our platform (Endless OS), mostly related to its implementation of accelerated compositing.

Besides that, I’m particularly curious to see what the hackfest looks like now that it has broadened its scope to include other web engines, and I’m also quite happy that I’ll be visiting my home town and meeting my old colleagues and friends from Igalia for a few days, once again.

[Endless Mobile logo]

Last, I’d like to thank my employer for sponsoring this trip, as well as Igalia for organizing this event, one more time.

See you in Coruña!

Importing include paths in Eclipse

Published / by mario

First of all, let me be clear: no, I’m not trying to leave Emacs again; I already got over that stage. Emacs is and will be my main editor for the foreseeable future, as it’s clear to me that there’s no other editor I feel more comfortable with, which is why I spent some time cleaning up my .emacs.d and making it more “manageable”.

But as much as I like Emacs as my main “weapon”, I sometimes appreciate the advantages of using a different kind of beast for specific purposes. And, believe it or not, in the past 2 years I learned to love Eclipse/CDT as the best work-mate I know of when I need some extra help to dig deep into the two monster C++ projects that WebKit and Chromium are. And yes, I know Eclipse is resource hungry, slow, bloated… and whatnot; but I’m lucky enough to have fast SSDs and lots of RAM in my laptop & desktop machines, so that’s not really a big concern for me anymore (even though I reckon that indexing Chromium on the laptop takes “quite some time”), so let’s move on :-)

However, there’s one little thing about Eclipse that still bothers me quite a lot: you need to manually set up the include paths for any external dependencies of a C/C++ project that are not in a standard location, so that you can get certain features working properly, such as code auto-completion, automatic error checking, call hierarchies… and so forth.

And yes, I know there is an Eclipse plugin that adds support for pkg-config, which should do the job quite well. But for some reason I can’t get it to work with Eclipse Mars, even though others apparently can (and I remember using it with Eclipse Juno, so it’s definitely not a myth).

Anyway, I did not feel like fighting with that (broken?) plugin, and on the other hand I was actually quite inclined to play a bit with Python, so… my quick and dirty solution to get over this problem was to write a small script that takes a list of package names (as you would pass them to pkg-config) and generates the XML content that you can then import into Eclipse. And surprisingly, that worked quite well for me, so I’m sharing it here in case someone else finds it useful.

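The core of the idea is roughly the following (a simplified sketch, not the actual script, with helper names made up for illustration; the XML section and language names are a best guess at what CDT’s own settings export uses, so you may need to adjust them for your Eclipse version). It asks pkg-config for the -I flags of each package and wraps the resulting directories in the layout that the “Paths and Symbols” import dialog understands:

  #!/usr/bin/env python
  # Simplified sketch: ask pkg-config for each package's include directories
  # and wrap them in the XML layout that CDT's settings import understands.
  import subprocess
  import sys

  # Section/language names as (presumably) used by CDT's settings export;
  # adjust them if your Eclipse version expects something slightly different.
  SECTION = "org.eclipse.cdt.internal.ui.wizards.settingswizards.IncludePaths"
  LANGUAGES = ("C Source File", "C++ Source File")

  def include_dirs(packages):
      # 'pkg-config --cflags-only-I pkg1 pkg2 ...' prints only the -I flags;
      # strip the '-I' prefix to get the bare directories.
      output = subprocess.check_output(["pkg-config", "--cflags-only-I"] + packages)
      return [flag[2:] for flag in output.decode().split() if flag.startswith("-I")]

  def to_eclipse_xml(dirs):
      lines = ['<?xml version="1.0" encoding="UTF-8"?>',
               '<cdtprojectproperties>',
               '<section name="%s">' % SECTION]
      for language in LANGUAGES:
          lines.append('<language name="%s">' % language)
          lines.extend('<includepath>%s</includepath>' % d for d in dirs)
          lines.append('</language>')
      lines.extend(['</section>', '</cdtprojectproperties>'])
      return '\n'.join(lines)

  if __name__ == "__main__":
      print(to_eclipse_xml(include_dirs(sys.argv[1:])))
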
Using frogr as an example, I generate the XML file for Eclipse by running this:

  $ pkg-config-to-eclipse glib-2.0 libsoup-2.4 libexif libxml-2.0 \
        json-glib-1.0 gtk+-3.0 gstreamer-1.0 > frogr-eclipse.xml

…and then I simply import frogr-eclipse.xml from the project’s properties, inside the C/C++ General > Paths and Symbols section.

After doing that, all the brokenness caused by so many missing symbols and header files goes away, code auto-completion works nicely again, and I get back all those perks you would expect from this little big IDE. And all that without having to go through the pain of defining every path one by one in the settings dialog, thank goodness!

Now you can quickly see how it works in the video below:


[VIDEO: Setting up a C/C++ project in Eclipse with pkg-config-to-eclipse]

This has been very helpful for me, hope it will be helpful to someone else too!