Smart Colorization in GIMP

As part of the Image team at the GREYC lab (CNRS, ENSICAEN, University of Caen), I implemented the “fill by line art” algorithm in GIMP, also known as “Smart Colorization”. You may know this algorithm from G’Mic (developed by the same team), so when they proposed that I work with them, I wanted to implement this algorithm in GIMP core. It thus became my first assignment.

The Problem

The concept of filling by line art is simple: you draw a shape with a black pen, say an approximate circle, and you want to fill the inside with a color of your choice. You could already do this, more or less, with the bucket fill when filling by color similarity. Unfortunately that approach has 2 major issues:

  1. If the line art is not properly closed (“holes” in the lines), the color leaks outside. Sometimes this is just a small painting mistake, yet if you don’t spot the hole (it can be just 1 pixel in the worst case), wasting time hunting for it is not fun. Other times it may even be an artistic choice (a “rougher” line style for instance, or any of the billion reasons why you’d want non-closed lines).
  2. Usually you get very ugly non-colored pixels next to the line borders because of interpolation, aliasing or whatever (unless you draw with very hard lines), and this is not an acceptable result.
2 main problems of Bucket Fill by color similarities

As a consequence, probably no digital colorist ever uses the bucket fill directly. There are various other methods, often with the fuzzy select tool (or other selection tools), growing or shrinking the selection, then bucket-filling it. Sometimes even painting directly is the best option. Attending one of Aryeom’s workshops on colorization is absolutely amazing and enlightening, as she can teach you a dozen different methods, and she herself does not always use the same one (she would say it depends on the situation). On the ZeMarmot project, I also made some custom quick-colorization Python scripts which Aryeom has used for years now and which do a pretty decent job of optimizing this tedious task (though it’s still tedious!).

The algorithm

The research paper is called “A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts”. I worked from C++ code by Sébastien Fourey, with input from both him and David Tschumperlé, the two being co-authors of the paper.

For our needs, it basically has 2 main steps:

  1. Closing the line art, which is done by finding “key-points”, i.e. points on line edges with extreme local curvature (we estimate these are likely the ends of unfinished lines), then closing the lines by joining the key-points based on “quality” criteria such as roughly opposite angles or maximum distances. Lines can be closed either with splines (i.e. curves) or with straight segments.
  2. Actually colorizing the created closed regions, “eating” a bit under the line art pixels to ensure there are no uncolorized pixels near the borders.

You can see that the algorithm actually deals with both issues previously raised (which I conveniently numbered in the same order!). Nevertheless I only implemented the first step of the algorithm and went with my own solution for the second step (still based on similar concepts though), because of usability issues. A minimal sketch of the kind of curvature test used in the first step follows below.
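To make the first step a bit more concrete, here is a small sketch of the kind of curvature test involved, under my own simplified assumptions (three points sampled a few pixels apart along a stroke edge). It is only an illustration, not the actual gimplineart.c code, which works on edgels and, as explained further down, scales its decision relative to the local line width.

```c
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Illustrative sketch, not the gimplineart.c implementation: given three
 * points sampled a few pixels apart along a stroke edge, flag the middle
 * one as a key-point candidate when the edge turns sharply there. */
typedef struct { double x, y; } Point;

static int
is_keypoint_candidate (Point prev, Point cur, Point next,
                       double min_turn_radians)
{
  double a1   = atan2 (cur.y - prev.y, cur.x - prev.x);
  double a2   = atan2 (next.y - cur.y, next.x - cur.x);
  double turn = fabs (a2 - a1);

  if (turn > M_PI)                 /* wrap the difference into [0, pi] */
    turn = 2.0 * M_PI - turn;

  return turn >= min_turn_radians; /* a sharp turn likely marks a stroke end */
}
```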

Here is what bucket-filling by line art looks like in GIMP:

Bucket Fill by line art detection

I am not going to re-explain the algorithm in full. If you are interested in the technical details, I suggest that you just read the research paper, which is quite clear and has nice self-explanatory images too. If, like me, you are more “fluent” in code than in math equations, you may also look at my implementation in GIMP, which is mostly self-contained in gimplineart.c; in particular, start with gimp_line_art_close().

Below I will focus more on the improvements we made to the algorithm, which you won’t even find in the paper. As a reminder, we also worked with the animator/painter Aryeom Han (the director of the ZeMarmot project) as artist advisor so that the implementation is actually useful for real work.

Note: this article is mostly about the more technical side of things. If you are just interested in how to use the tool efficiently, wait a few days or weeks. We will make a short yet (hopefully) exhaustive video showing the usage of every option.

Step 1: line art closure

To get a basic idea of what’s going on under the hood, here is an example (using an unfinished sketch by Aryeom; sketches are good test cases for this algorithm since they are full of holes!). The leftmost image is the scribble as-is, the middle one is how the algorithm transforms it internally (you never see this version of the image!), and the rightmost one is a quick attempt by myself to flat-colorize it (using the bucket fill tool only, no brush, nothing!) in less than 1 minute (no joke, I timed it!).

From line arts to colorized sketch, with internal closure representation in the middle

Note: of course, do not look at this result from a “finished work” point of view. Colorization work is often done in several steps, the first step being flat colors. This tool only addresses this first step, and the image may still need additional tweaks (even more so with this example, as I ran it on a very rough sketch).

Estimating the line width locally (improved from the paper)

I very quickly noticed that a part of the algorithm seemed a bit suboptimal. As said above, we need to detect key-points based on line curvature. This is a problem with big brushes (either because you paint very fat lines on a small image, or because you paint very high-resolution artwork, where even your thin lines may be dozens of pixels wide). The ends of such lines may have very low curvature and go undetected. In the original paper, the proposed solution was:

For the sake of independence with respect to the image resolution, a second step allows to reduce the width of the strokes to a few pixels, if necessary, using a morphological erosion. The radius to be used for this erosion is set automatically by estimating the width of the strokes found in the drawing.

Section ‘3. Pre-Processing of the anti-aliased image’

Unfortunately, computing a single line width estimate for the whole drawing assumes that all the lines in a drawing have the same thickness! Just ask a calligrapher how ridiculous that sounds! 😉

As a first consequence, even though it still worked most of the time, it could add previously non-existent holes (when eroding a line thinner than the average line width). The research paper acknowledged this issue but dismissed it, assuming that the closure step (which necessarily follows) would close the new holes anyway:

One should note that disconnections that may result from the morphological erosion step applied here, which remain rare, will be corrected anyway by the closing step to be applied thereafter.

Section ‘3. Pre-Processing of the anti-aliased image’

Yet in very early tests with the first version of my implementation, Aryeom quite quickly encountered cases where the closing step did not close the holes created this way. In at least one instance, we even had a perfectly closed line art which still leaked the color fill ⇒ the exact opposite of why the algorithm was created! Quite paradoxical! To make things worse, while finding micro holes in poorly closed lines can be hard, finding holes that do not really exist (because they were created only in an internal, non-visible representation) is close to impossible.

The ironic part is that the erosion did not even achieve its goal that well, as it was still very easy to create fat lines where no key-points got found despite the erosion. In the end, the global width estimation + erosion idea added more problems than it solved.

Conclusion: this was bad.

After much discussion with David Tschumperlé and Sébastien Fourey, we came up with an evolution of the algorithm: computing an approximate local line width for every line art pixel (instead of one single line width for the whole drawing), simply based on a distance map of the line art. Then the test deciding whether an edgel’s curvature is key-point material is made relative to the local line width (instead of eroding and assuming a theoretical global line width).
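To illustrate the idea (a simplified sketch with my own names, not the actual GIMP code), such a distance map can be computed cheaply with a two-pass chamfer transform: every stroke pixel receives an approximate distance to the nearest background pixel, and twice the value found along the middle of a stroke approximates its local thickness, against which the curvature threshold can then be scaled.

```c
/* Simplified sketch, not the gimplineart.c code: 3-4 chamfer distance
 * transform of a binary line-art mask. mask[y*w + x] != 0 means "stroke
 * pixel". On return, dist[y*w + x] holds roughly 3 times the distance in
 * pixels from that stroke pixel to the nearest background pixel
 * (background pixels get 0). */
#define MIN(a, b) ((a) < (b) ? (a) : (b))

static void
line_art_distance_map (const unsigned char *mask, int *dist, int w, int h)
{
  const int INF = 1 << 29;
  int x, y;

  for (y = 0; y < h; y++)
    for (x = 0; x < w; x++)
      dist[y * w + x] = mask[y * w + x] ? INF : 0;

  /* Forward pass: propagate distances from the top-left. */
  for (y = 0; y < h; y++)
    for (x = 0; x < w; x++)
      {
        int d = dist[y * w + x];
        if (x > 0)              d = MIN (d, dist[y * w + x - 1] + 3);
        if (y > 0)              d = MIN (d, dist[(y - 1) * w + x] + 3);
        if (x > 0 && y > 0)     d = MIN (d, dist[(y - 1) * w + x - 1] + 4);
        if (x < w - 1 && y > 0) d = MIN (d, dist[(y - 1) * w + x + 1] + 4);
        dist[y * w + x] = d;
      }

  /* Backward pass: propagate distances from the bottom-right. */
  for (y = h - 1; y >= 0; y--)
    for (x = w - 1; x >= 0; x--)
      {
        int d = dist[y * w + x];
        if (x < w - 1)              d = MIN (d, dist[y * w + x + 1] + 3);
        if (y < h - 1)              d = MIN (d, dist[(y + 1) * w + x] + 3);
        if (x < w - 1 && y < h - 1) d = MIN (d, dist[(y + 1) * w + x + 1] + 4);
        if (x > 0 && y < h - 1)     d = MIN (d, dist[(y + 1) * w + x - 1] + 4);
        dist[y * w + x] = d;
      }
}
```

In this sketch, the key-point test would then compare an edge point’s curvature against a threshold scaled by 2 * dist / 3 at that point, rather than against a single global width.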

Not only did it perform a lot better (it no longer created unfillable holes in perfectly closed areas, detected end points better on very fat strokes, and allowed variable stroke widths within a single drawing, whether as a stylistic choice or simply because perfection is hard), it even ended up faster!

As an example, the original version of the algorithm would have failed to detect end-points for, and therefore to close, this shape with such fat lines. The updated algorithm has no such problem:

» For those interested, see the commit «

Parallelization for fast processing

Line art closure is clearly the most time-expensive step. Though it performs quite well on Full HD or even 4K images, it can still take half a second on my laptop. And half a second for an interactive tool is huge! And if we run this on very big images (which are not uncommon at all nowadays), it may even take several seconds.

So I parallelized this whole process and run it as early as possible (as soon as the Bucket Fill tool is selected). Since people are not robots, the closure is usually ready by the time you actually click, which makes the interaction quite seamless: you may not even notice the processing.
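Conceptually it boils down to something like the following sketch, written here with plain GLib threads and hypothetical names (LineArt, compute_line_art_closure() and the two callbacks are stand-ins, not the actual tool code, which also handles invalidation when the image or the options change):

```c
#include <glib.h>

typedef struct _LineArt LineArt;
extern LineArt *compute_line_art_closure (gpointer image); /* hypothetical */

static GThread *worker  = NULL;
static LineArt *closure = NULL;

static gpointer
closure_thread_func (gpointer image)
{
  return compute_line_art_closure (image);   /* the expensive part */
}

/* Called when the Bucket Fill tool is selected: start computing early. */
void
on_tool_selected (gpointer image)
{
  worker = g_thread_new ("line-art-closure", closure_thread_func, image);
}

/* Called on the first click: by then the worker has usually finished,
 * so joining it rarely blocks for a noticeable time. */
LineArt *
get_line_art_closure (void)
{
  if (worker)
    {
      closure = g_thread_join (worker);
      worker  = NULL;
    }
  return closure;
}
```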

Available line art “Source” in the tool options

Partially for the same reason, you may notice that the “Source” option offers you much more than in all the other tools (which only have “Sample merged” versus the active layer). Here you can also choose the layer above or below the active one. There is both a logical reason (you do not necessarily want colors to count as line art) and a performance one (the software does not have to continuously recompute the closure at every edit).

Step 2: color filling

Making the algorithm interactive and error-safe

The original algorithm filled all the zones at once, by running a watershed algorithm over the whole image.

I made the choice to just drop this part of the algorithm. It was mostly a usability choice. Basically, the first time I saw the images demonstrating the algorithm in G’Mic, I found the result cool, but the GUI seemed completely impractical. Still, I am not the painter in the team, so I showed it to Aryeom. Her first words after seeing how it worked were “I will never use this” (or something along those lines!). Note well that we are not talking about the end result, but about the human-computer interaction. As we said, colorizing is tedious, but if smart colorization is even more tedious, then why use it?

So what’s tedious? There are several interaction variants in G’Mic: you can for instance let the algorithm color every zone randomly, then select each flat-color zone for recolorization; there is also the possibility of guiding the algorithm with color spots, hoping it performs as you want. I also suggest this interesting article by David Revoy, who contributed to the original version of the algorithm.

The « Colorize lineart [smart coloring] » filter
Smart colorization in G’Mic, a bit overwhelming…

Maybe these are interesting workflows at times, yet they are not always what you want for your drawings.

For one, it takes a lot of steps to color a single drawing. For an animation (i.e. the ZeMarmot project), this is even worse, as we do it for dozens or hundreds of layers.
A second reason is that such workflows assume that the algorithm is never wrong. And we know that is a very bold assumption! Undesired results may happen. This is not necessarily a problem; what you ask of such an algorithm is to work well most of the time, as long as you can always go back to more down-to-earth methods in the rare edge cases. But if you have to undo the colorization of the whole image, change the color spots while trying to guess what went wrong, and try again, then using a “smart” algorithm is only a waste of time.

Instead we needed a colorization process which is interactive and progressive, so that the algorithm’s errors can easily be bypassed by temporarily reverting to traditional techniques (just for some zones). This is why I based my interaction on the existing bucket fill tool. Colorization (by line art) should work as it always has: you click in a zone and see it being filled in front of your eyes… one zone at a time!

It is straightforward and very error-safe. If the algorithm doesn’t detect the zone you are working on well, just undo and fix this zone only.
Moreover, I really wanted to avoid creating a new tool if possible (there are so many already). This is basically the same use case as the Bucket Fill tool, right? It’s just a different algorithm to decide what to fill, that’s all. So it makes perfect sense as an option of this same tool.

Therefore my solution to replace watershedding the whole image at once is to reuse a distance-transform map of the lines (both the original line art and the invisible closure lines). As a reminder, this is a decent estimate of the local line thickness. Then, when the bucket fill occurs, I flood the color a few pixels further (up to half the width of the line art), thus ensuring we don’t leave ugly non-colorized holes near the borders. Simple, fast and efficient.
This is somewhat like a local watershed, but simpler and much faster, and it also allows a “Max flooding” option to keep the color under control.
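Here is a small illustration of that growth step, under my own simplified assumptions (it is a sketch of the idea, not the exact GIMP code): starting from the pixels of the freshly filled zone, the color is pushed breadth-first into neighboring stroke pixels, as long as the number of steps taken stays within the pixel’s own depth inside the stroke (taken from the distance map), which amounts to eating roughly the near half of the line, and never beyond a user cap in the spirit of “Max flooding”.

```c
#include <stdlib.h>

/* Sketch (simplified, not the exact GIMP code): after the clicked zone has
 * been filled, grow the color into the surrounding stroke pixels. `filled`
 * marks the freshly filled zone, `stroke` the (closed) line art, and `dist`
 * the per-pixel distance in pixels from a stroke pixel to the nearest
 * non-stroke pixel (e.g. the chamfer map above divided by 3). A stroke pixel
 * gets painted when it can be reached from the fill in no more steps than
 * its own depth in the stroke, i.e. roughly up to half the local stroke
 * width, and never in more than `max_grow` steps. */
static void
grow_fill_under_lines (unsigned char *filled, const unsigned char *stroke,
                       const int *dist, int w, int h, int max_grow)
{
  int  n     = w * h;
  int *queue = malloc (n * sizeof (int));
  int *level = calloc (n, sizeof (int));
  int  head  = 0, tail = 0, i;

  /* Seed the queue with every pixel of the filled zone (level 0). */
  for (i = 0; i < n; i++)
    if (filled[i])
      queue[tail++] = i;

  while (head < tail)
    {
      int p  = queue[head++];
      int px = p % w, py = p / w;
      int dx, dy;

      for (dy = -1; dy <= 1; dy++)
        for (dx = -1; dx <= 1; dx++)
          {
            int nx = px + dx, ny = py + dy, q;

            if ((dx == 0 && dy == 0) ||
                nx < 0 || ny < 0 || nx >= w || ny >= h)
              continue;

            q = ny * w + nx;
            if (filled[q] || ! stroke[q])
              continue;                    /* already painted, or not a line pixel */

            if (level[p] + 1 > dist[q] ||  /* past the near half of the stroke */
                level[p] + 1 > max_grow)   /* user cap ("Max flooding") */
              continue;

            filled[q]     = 1;             /* paint under the line */
            level[q]      = level[p] + 1;
            queue[tail++] = q;
          }
    }

  free (queue);
  free (level);
}
```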

Using smart colorization without the “smart” part!

One very cool use of this new algorithm is to bypass the first step, i.e. not compute the line art closure at all! This can be very useful when you use a line style without any holes (a simpler design with strong lines), so you don’t need any algorithmic help for line closure. This way, you can fill colors in a single click without caring about over-segmentation or processing time.

To do so, just set the “Maximum gap length” option to 0. Here is an example of a very simple design (left), filled by similar colors (middle) vs. by line art detection (right), each in a single click:

Left: AstroGNU by Aryeom – Middle: Fill similar colors – Right: Fill by line art detection

See this kind of “white halo” between the red color and the black lines in the middle image? The quality difference with the right one is striking, right? While the historical “Fill similar colors” is absolutely unusable (by itself) for quality colorization work, the new line art detection mode is perfectly usable.

Bucket Fill enhanced for all cases!

As a bonus, I also improved the general interaction of the Bucket Fill tool for all fill algorithms. It is mostly similar to before, but a bit better. What changed?

Click and hold

A main issue was to take care of the problem of over-segmentation. You may call an algorithm “smart” or “intelligent”, but this won’t make it “human-smart”. In particular, here it creates artificial lines based on geometry, not on actual recognition of shapes or meaning, and even less by reading the painter’s thoughts! So it will often over-segment, i.e. create more artificial zones than desired (if you paint with a crenellated brush, it can be really bad). Let’s take the previous image as an example again:

Closure is “over-segmented”

You can clearly see the issue. The dog for instance is most likely meant to be a single color, but it ends up as about 20 zones! In such a case, with the former bucket fill interaction, you would end up clicking 20 times (instead of ideally once). Counter-productive, right? So I updated the Bucket Fill tool to work more like a brush: you can click, hold and move the cursor over various regions. This change works both with the line art algorithm and when filling similar colors (it is only meaningless for selection fill). It makes filling an over-segmented area much less of a problem, since you can still do it in a single stroke. This is not as fast as a single click, yet quite quick.
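In pseudo-GIMP terms, the change amounts to something like the sketch below (all names are made-up stand-ins, not the actual tool code): the fill is triggered from the motion handler while the button is held, once per zone the pointer enters.

```c
/* Illustrative sketch only; these names are stand-ins for the real tool code. */
typedef struct
{
  int last_zone;   /* label of the zone filled most recently, -1 for none */
} FillToolState;

extern int  zone_label_at   (int x, int y);  /* hypothetical: precomputed zone labels */
extern void fill_zone_label (int label);     /* hypothetical: run the actual fill */

/* Button press: fill the zone under the pointer and remember it. */
void
on_button_press (FillToolState *state, int x, int y)
{
  state->last_zone = zone_label_at (x, y);
  fill_zone_label (state->last_zone);
}

/* Motion while the button is held: whenever the pointer enters a different
 * zone, fill that one too, so a single stroke can cover an over-segmented
 * area. */
void
on_motion (FillToolState *state, int x, int y)
{
  int zone = zone_label_at (x, y);

  if (zone >= 0 && zone != state->last_zone)
    {
      fill_zone_label (zone);
      state->last_zone = zone;
    }
}
```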

Color picking

Another nice change was to allow easy color picking with Ctrl-click (no need to switch to the Color Picker tool). All paint tools could already do this, but not the Bucket Fill, even though it also works with colors. Being able to quickly change color (by picking a nearby pixel, which is very common practice among professional digital painters) makes the bucket fill tool very productive.

With these few changes, the Bucket Fill is now a first class citizen (even if you don’t use the smart colorization).

Limits and possible future works

Algorithm targeted at digital painters

Smart colorization works on “line art”. This algorithm won’t perform well on arbitrary non-line-art images, and in particular it is not made to work on photographs. Though I’m sure some people may find interesting ways to use it elsewhere, as far as I can see, this is designed for digital painters only.

More than the black-lines-on-white-background use case?

Lines are detected in the most basic possible way, with a threshold either on the alpha channel (if you work with transparent layers) or on the grayscale value (particularly useful when working on scans of white paper, for instance).
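As a rough sketch of this detection (my own simplification, not the exact GIMP code), it is nothing more than a per-pixel comparison:

```c
/* Sketch: build a binary line-art mask from an 8-bit RGBA buffer. If the
 * layer's transparency is meaningful, a pixel is a line pixel when its
 * alpha is above a threshold; otherwise (e.g. a scan of white paper) it
 * is a line pixel when it is dark enough in grayscale. */
static void
build_line_art_mask (const unsigned char *rgba, unsigned char *mask,
                     int n_pixels, int use_alpha,
                     unsigned char alpha_threshold,
                     unsigned char gray_threshold)
{
  int i;

  for (i = 0; i < n_pixels; i++)
    {
      const unsigned char *p = rgba + 4 * i;

      if (use_alpha)
        {
          mask[i] = (p[3] > alpha_threshold);
        }
      else
        {
          /* Integer approximation of Rec. 709 luma. */
          int luma = (54 * p[0] + 183 * p[1] + 19 * p[2]) >> 8;

          mask[i] = (luma < gray_threshold);  /* dark pixel = line */
        }
    }
}
```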

Therefore the current version of the algorithm may have difficulty detecting the line art, for instance if you scan a drawing done on paper that is not completely white. Or say you simply want to draw with light lines on a dark background (everyone is free to!). I am not sure how often this occurs, nor whether we should really pile more color options onto the tool. We’ll see.

More optimization

Though I said it is very usable on many images of reasonable size, I still consider this new algorithm a bit slow when working on very big images (which is not so uncommon in the printing industry, where you often need higher resolutions than for screens), even despite all the multi-threaded code. So I would not be surprised if, in some cases, you come back to your old-style colorization techniques.

I hope to come back to this code soon and optimize it further.

Not so nice color borders

The color at the borders does not have the nice “texture” you get when colorizing with brushes. For instance, let’s look closer at our original example here.

The parts where the border of the color shows will likely need editing. I added an “antialiasing” option, but this is clearly not the real solution in most cases either, and manual editing with a brush after the color fill may be necessary.

The worst case is when you want to remove the lines afterwards. Aryeom sometimes uses a drawing style where the lines are only there for the first steps, then removed for the final render (actually a whole dreamy scene of ZeMarmot is done like this); in that case you want perfect control over the quality of the colorization borders. Here is one such example, where the final painting is made only of colors, with no border lines:

An image cut of ZeMarmot movie in production, drawn by Aryeom

No API yet

We are still missing API functions to run line art detection from scripts. This is on purpose, as I am waiting a bit longer to make sure I don’t miss any important usage, since an API has to stay stable (unlike the GUI, which can be updated more easily thanks to our new release policy).

Actually, you may have noticed that even in the graphical interface I haven’t made all the options you can find in G’Mic visible (even though they are all implemented). This is because I am trying not to make this tool over-confusing, as many of these options require an intimate understanding of the algorithm. So that nobody has to constantly play with sliders at random and without understanding, I am testing the best way not to over-expose options.

Fuzzy selection tool

This algorithm is currently only implemented for the bucket fill tool, but it would be perfectly suited as an alternative method in the Fuzzy Select tool as well!

Improving the segmentation issue

With very clean lines and simple shapes, the algorithm is very neat. But as soon as you make a complicated drawing, and in particular use very crenellated brushes (for instance the default “Acrylic” series of GIMP brushes), it starts detecting a few too many false-positive end points, hence over-segmenting. We encountered such issues, so I tried various improvements, but so far none works in all cases. One such improvement was to apply a median blur to the line art before the detection of end points, as it definitely smooths out dents in the lines. And it did work quite nicely on one such example:

Middle: over-segmentation with current algorithm
Right: still segmented, yet much better, when median-blurring first

Unfortunately, it works very badly on very thin lines, as it creates holes (a problem we had gotten rid of when we replaced the erosion step with local line widths!).

Middle: very acceptable result with current algorithm
Right: bad result as color would leak and we lost many details

So I’m still working on this. Hopefully it’s an issue for which we can find a solution, though it is not an easy topic. We’ll see!

As a general rule, over-segmenting (false positives) is a problem, but for such a tool it is better than failing to close holes (false negatives), especially thanks to the new click-and-hold interaction. I have already fixed several issues in this area (for instance micro-zones, which the paper calls “not significant regions” yet which are very significant for digital painters as they are annoying to fill; and recently an issue with the algorithm chosen to approximate region areas very quickly, which could be wrong on open areas).

Conclusion

To conclude, this project has already been a very interesting one for confronting research algorithms with the reality of real-life, day-to-day work. It was even more interesting as it brought together 3 worlds: research (an algorithm devised by brilliant minds at CNRS/ENSICAEN), engineering (myself) and artists (Aryeom/ZeMarmot). By the way, many of my GUI improvements were ideas and proposals from Aryeom, who tested our work on real projects in the field. This joint CNRS/ZeMarmot cooperation went so well that Aryeom was invited to present her work and her drawing/animation challenges in a seminar at ENSICAEN.

It is still a work in progress in my opinion. As you can see, several aspects deserve to be improved. It is not on my main track anymore, but I will definitely improve it as I see fit. Yet it is already definitely in a releasable state, and I do hope that many people will use and appreciate it. Tell us what you think if you try it!

A last comment is that the ideas behind the best algorithms are not necessarily the most technically incredible. This Smart Colorization algorithm is based on many simple transformations, yet it performs well, is quite fast (within some limits, as I said above), and does not take all your memory nor make GIMP hang for 10 minutes… To me, this is much more impressive than perhaps more brilliant algorithms which are unusable on everyone’s desktop. This is what you need in desktop graphics software when you actually want to get some work done. And that’s very cool. 🙂

Have fun everyone!

The dream of LILA and ZeMarmot

Imagine a movie studio, with many good artists and technicians working on cool movies or series… and releasing them under Libre Licenses for anyone to see, share and reuse. These movies could be watched freely on the web, or screened in cinemas or on TV, wherever.

Imagine now that this studio fully uses Free Software (and Open Hardware, when available!), and while it produces movies, it debugs and fixes the upstream software it uses, and also improves it as needed (new features, better interaction and design…). This includes end-user software (such as GIMP, Blender, Inkscape…), as well as the desktop (currently GNOME), the operating system (GNU/Linux) and more.

Now you know my dream for the non-profit association LILA and the ZeMarmot project (our first movie project, an animation film). This is what I am aiming for. Actually this is what I have been hoping for since the start, but maybe it was not clear enough, so I decided to spell it out.

If you like this dream, I would like to encourage you to help us make it real by donating either through Patreon, Tipeee, Liberapay, or other means (such as direct donation by wire transfer, Paypal, etc.).

If you want to read more first, I am going to add details below!

My current job

I hinted that there was some cool stuff going on lately on a personal level, and now here it is: I have been hired by CNRS for a year to develop things related to GIMP and G’Mic.

This is the first time in years that I have a sustainable income, and it is to continue working on GIMP (something I had already been doing for 6 years before this job!). How cool is that?
For the full story, I was first approached by the G’Mic team to work on a Photoshop plug-in, which I politely declined. I have nothing against Photoshop, but it is not my dream job. The project was then reworked so that I would continue working on GIMP instead. In particular we identified 2 main projects:

  1. The extension management in GIMP I already talked about (back then, I had no idea I would be hired for it), since it will help G’Mic spread a lot. I will also use the opportunity to improve plug-in support in GIMP.
  2. Implementing their smart colorization algorithm within GIMP. This was actually my own idea when they proposed working with me, as it fits very well with my own plans and would finally make their algorithm “useful” for real work (the interaction within G’Mic is a bit too painful!). I will talk a bit more about this soon in a dedicated post, but here is a teaser:

Where does ZeMarmot project stand?

ZeMarmot is my pet project (together with Aryeom). I love it, I cherish it, and as I said, this is where I see a future (not necessarily just ZeMarmot, but what it will lead to). Even though I now have another temporary source of income, I really want to stress that if you like what I do in GIMP, you should really fund ZeMarmot to ensure that I can continue once this contract ends.

This year with CNRS is an opportunity to give the project a chance to bloom. Because let’s be honest, it has not bloomed yet! Every year it is the same story of asking for your help. And when I see that other foundations and non-profits started asking for help a month earlier, I know we are very bad at it. We are technical people here (developers, animators…), who suck at marketing and ask at the last second.

Right now, we are crowdfunded at barely above 1000 € a month, which cannot even pay someone full time at the legal minimum wage in the country we live in. Therefore in 2018, LILA has been able to hire Aryeom (direction/animation work) and myself (development) for 6 days a month each, on average. It’s not much, right? Yet this is what we have been living off.
We need much more funding. To be clear, a full-time position at minimum wage in France costs about 2100 €, and we estimate that we’d need 5000 € per person to fund a real salary for such jobs (the same estimate as the Blender Foundation makes), even though that is still below average market value. So LILA is a factor of 4 away from affording 2 salaries at minimum wage, and a factor of 10 away from paying reasonable salaries (and hence from hiring more people). How sad is that?

What is LILA exactly?

LILA is officially registered in France as a non-profit association. It also has an activity number classifying it as a movie production company, which makes it a very rare kind of non-profit organization, allowed to hire people to produce movies, which it has now been doing for nearly 3 years.

The goal of this production is not to enrich shareholders (there is no such thing here). We want to create our art, spread it, then move on to the next project, because we love doing this. This is why ZeMarmot will be released under a Creative Commons BY-SA license, which will allow you to download the movie, share it with your friends and family, even sell or modify it. No kidding! We will even upload every source image, with layers and all!

Still, LILA intends to pay an appropriate salary to every person working on the project, because we don’t believe that Libre Art means “crappy work” or “amateur”. Is it for fun? Yes. But it’s also professional.

So if it had crazy funding, LILA would not give us decadent salaries; it would hire more people to help us make the art/entertainment world a better place! That’s also what being a non-profit means.

And Free Software in all that?

There is another aspect to our studio: we use Free Software! Not only that, we also develop Free Software! When I say we develop Free Software, I don’t just mean releasing once in a while some weird internal script used by 2 people in the world. Mostly, we are part of the GIMP team. In the last few years, we have accounted for about a fourth of GIMP’s commits (you can just check the commits, in particular those by myself, “Jehan”, as well as the ones by Aryeom and Lionel N.). I have also pushed for years to improve the release policy so we can get new features out more often (which finally happened with GIMP 2.10.0!). I believe we are providing positive and meaningful contributions.

GIMP is our main project these days, but over the years I have also had a few patches in many important pieces of software! And we regularly report bugs when we don’t have time to fix them ourselves… We are early graphics tablet adopters, so we are in contact with some developers from Wacom or Red Hat (and we are sorry to them, because we know I can sometimes be annoying about some bugs!). And so on. The only thing preventing me from doing more is time. I know I just need more hands, which will only happen if we have enough funding to start hiring other Free Software developers.

And let it be known: this is not a temporary thing because we don’t want to pay for some proprietary license or whatever. No, we just believe in Free Software. We believe this is the right thing to do, because everyone should have access to the best software. But also, we are making much better software this way. I said we do about 1/4 of GIMP commits; this still means that 3/4 are made by other people, and that is not even counting GEGL (GIMP’s graphics engine), which is awesome too. Basically we would not be able to do this well just by ourselves. The other GIMP developers are some of the sharpest minds I have had the chance to work with in software, and they are really cool and agreeable people as well. What more could we ask for? This is what Free Software is.
So yeah: using and contributing to Free Software is actually in our non-profit studio’s bylaws, our “contract as a non-profit”, and we won’t drop it.

2018 in review

Just a very quick review of things I brought to GIMP in 2018:

  • 633 commits, hence nearly 2 commits a day on average, in the master branch of GIMP (and more on in-progress feature branches) + patches on various projects we use (GEGL, GLib, GTK+, libwebp, AppStream…)
  • Helping MyPaint so that they can soon release a new libmypaint v1 (hopefully early 2019), and creating the data package mypaint-brushes (now an upstream MyPaint package!).
  • Creating and maintaining GIMP flatpak on flathub (according to what we were told, the most downloaded software on flathub!)
  • Automatic image backup on a crash of GIMP
  • Debug tools for automatic gathering of debug data (stack traces, platform info…)
  • HiDPI basic support on GIMP 2.10 (and more work on HiDPI on future GIMP 3)
  • Work-in-progress for extension management in GIMP
  • Maintenance of some data (icons, brushes, appdata, etc.) in GIMP
  • Tablet and input debugging
  • Mentored a FSF intern (improved JPEG 2000 support)
  • Fixed most cases of DLL hell of plug-ins in Windows (used to be the cause for a huge number of bug reports!)
  • Reviewed and improved many features (auto-straighten in Measure tool, libheif and libwebp support, screenshot plug-in, vertical text in text tool, and so much more that I can’t list them all!)
  • Smart colorization option in bucket fill

And everything I have probably forgotten about. I also help a lot with maintaining the website and writing news on gimp.org (63 commits this year). And all this doesn’t count the non-GIMP related patches I sometimes write (for instance on Korean input) or the many reports we file and help to fix (notably, since we were probably the first to install Linux on a Wacom MobileStudio, or at least to talk about it, several bugs were fixed because we reported them and helped debug, even down to the kernel or Wayland).

And then there is what Aryeom did in 2018, which is of course a lot of work on ZeMarmot (working on animation is a loooot of work; maybe we could do a blog post about it sometime), so we are getting closer to the pilot release (by the way, we recently created an Instagram account where Aryeom posts some images and short videos of her work in progress!). She also took on some side projects in order to get by (remember that LILA could only pay her an average of 6 days a month officially!), such as an internal board game for a big French non-profit (“Petits Frères des Pauvres”) helping people in need, a marketing video for the PeerTube free software, and pin designs for the Free Software Foundation. She also gave some courses on digital painting and retouching with GIMP at a university.

Note that she would also prefer to work only on ZeMarmot full time, but once again… we need your help for this!

The Future

This is how I see our hopeful future: in a few years, we have enough to pay several artists (director, animators, background artists, musicians…) and developers. LILA will then be a small, but finally a bit more productive, studio.

And of course it also means more developers, hence more control over our Free Software pipeline. I have so many dreams: finally a better non-linear video editor (be it by contributing to the Blender VSE, Kdenlive or whichever other one we decide to use), stable, powerful yet not convoluted? Finally a dedicated Free Software compositing and effects tool for professionals (2018 was a bit sad there, right)? Finally more communication between all the tools, so that we can just edit our XCF in GIMP and see the changes live in Blender? So many hopes! So many things I wish we had the funds to do!

Help the dream come true

So what can you do? Well, you can help the studio increase its funding so that it can first simply survive. The fact that I got hired by CNRS is very cool, but at the same time a bit sad, because it means that our project was not self-sufficient enough. In a way, I had to accept the CNRS contract to save our project. Let it not be the end!

What if we reached 5000 € a month in 2019? This would be a huge milestone for us and proof that our dream is viable.

Will you help us create a non-profit Libre Animation Studio? Professional 2D graphics with Free Software is right there, at our door. It only takes a little help from everyone interested to get it through the entrance! 🙂

» Fund in Patreon (USD $) «
» Fund in Tipeee (EUR €) «
» Fund in Liberapay «
» Other donation methods (including wire transfer or Paypal) «

Have fun during the end-of-year holidays everyone, and a happy new year!

New header for the new year!

New year 2018

Happy new year everyone!

Aryeom started the first day of the year by live-drawing a new header illustration welcoming this brand new year. Well, it was about time, since we still had quite a summer-themed header until yesterday. 🙂

This new image also happens to be in 16:9 format, so it can be used as a background image on most screens. Just click the thumbnail on the right to download it full-size.
It is licensed Creative Commons BY 4.0 by Aryeom Han, ZeMarmot director.

The drawing session was also streamed live (as many of Aryeom’s GIMPing sessions are now, as we explained in the “Live Streaming while GIMPing” section of our 2017 report). If you missed it, you can have a look at the recording. As usual, it was not edited afterwards, nor was it sped up or anything; oh, and we certainly don’t add any music to make it look cooler or whatever. 😛
This was a real, focused live session, which explains why it is nearly a one-hour video. Just skip through it if you get bored. 😉
Enjoy!

This drawing and this live stream are made possible thanks to our many donors!

Reminder: Aryeom's Libre Art creation can be funded on
Liberapay, Patreon or Tipeee through ZeMarmot project.

ZeMarmot: GIMP 2.9.8 and end-of-2017 report

Here it is: GIMP 2.9.8, the latest development version of GIMP, was released a few days ago! As is customary now, let’s list our involvement in this version so that our supporters on crowdfunding platforms know what they funded. 🙂

Since it also happens to be the end of the year, I am completing this post with our end-of-year report, as we also did in 2016.

What we did for GIMP 2.9.8!

During this release cycle, I focused most of my efforts on bug fixing. I finished a few features here and there, but actually restrained myself from coding too much new stuff! Why? Because I believe we have enough, and at some point we should just release GIMP 2.10. Of course, GIMP 2.10 could be even twice as awesome if we pushed it back by a few more months, and 5 times more awesome with even more time. But in the end, if you never see it, what’s the point, right? Actually, I even plan on doing just this (fixing bugs and finishing what was started) until we get 2.10 out. Let’s stop the feature craziness!

Apart from a lot of bug fixes, I did a lot of bug triaging these last months (it looks like I participated in 122 bug reports between 2.9.6 and 2.9.8, i.e. 3 and a half months). And this month, still for the same reason (pushing the GIMP 2.10 release forward), I also reorganized our bug tracker by reviewing the 50+ bugs we had in the GIMP 2.10 milestone and setting as blockers only the ones we should really look into. Right now that’s down to 25 such bugs!

I also put some effort into our stable flatpak release, which is how, since October 16, GIMP has officially had its flatpak package on Flathub! It is of course visible in the GNU/Linux section of GIMP’s download page, with a nice orange “Install GIMP flatpak” button (notice also the cool drawing on this page? That’s Aryeom’s!).

Right now, you can only install the stable release, i.e. GIMP 2.8.x (Flathub only accepts stable builds), but if you get it there, you will automatically receive an official update when GIMP 2.10 comes out!
In any case, this flatpak work (in particular keeping our development flatpak manifest up-to-date with the git code and testing the builds) is taking a lot of maintenance time!

All in all, that was 122 commits authored by me in the GIMP repository between GIMP 2.9.6 and 2.9.8, out of 474 commits (so ~25%), and I pushed a few more third-party commits after reviewing them…
We also had again 2 guest commits by Lionel N., board member of the LILA association, the non-profit managing the ZeMarmot project.

Of course, though I said I focused on bug fixes, there are still a bunch of cool features I participated in during this release:

  • Support for importing password-protected PDFs (the 2-commit feature implemented by Lionel from LILA!) and a new `file-pdf-load2()` procedure in the API for plug-ins and scripts to open password-protected PDFs, but also multi-page PDFs (loading a multi-page PDF was already possible through the GUI but not from scripts and plug-ins).
  • Help system improvements: upon detection of locally installed manuals in several languages, GIMP now allows selecting the preferred manual language in the Preferences dialog (Interface > Help System). I felt this was an important feature because we regularly had people not understanding why the manual they had installed was not seen by GIMP. And they were right, especially since we don’t have as many manual languages as GUI languages. For example, we have 3 Chinese translations (zh_CN|TW|HK) but only a zh_CN manual. I can definitely imagine someone with a zh_HK GUI going for the zh_CN manual as a fallback.
  • Verbose version (command line: gimp -v) now displays C compiler information (useful for debugging).
  • Canvas rotation and flipping information are now visible in the status bar, and this information is interactive (clicking the flip icons unflips the canvas; clicking the rotation angle opens the “Select Rotation Angle” dialog). Some people indeed noted that, with the ability to flip/rotate the canvas, you may sometimes end up “lost” about whether it is currently rotated, flipped or whatnot. After all, the status bar already has zoom information, and flip/rotation is quite a similar feature. 🙂
  • Screenshot implementation for KDE/Wayland.
  • Color picker implementation for KDE/Wayland.
  • Improve delay handling for screenshots.
  • Reviewed HGT support patches and improved a bunch of things, including auto-detection of the format variants (SRTM-1 and SRTM-3), plus a `file-hgt-load()` API for scripts and plug-ins.

But really, as I said, I think my bug fixes and maintenance of existing code were actually much more important than the list above, even though they are far less fancy (and I am a bit sad I cannot list bug fixes in a non-boring way!). And I will just focus more and more on fixes and stability to get GIMP 2.10 out as soon as possible.

ZeMarmot in 2017: our report!

As you know, ZeMarmot is not only about GIMP, even though this software is a huge part of it! ZeMarmot is about making a 2D animation film: traditional animation, yet with digital means (i.e. drawing on a computer, not on paper). We draw with GIMP. Well, Aryeom Han, animator and animation film director, does (not me). And we crowdfund this project.

Financial status

This year was a bit tough mentally, and we really started to wonder whether this project was a good idea for our lives. Project finances increased continuously yet very slowly, and were still extremely low throughout 2017 (under 400 € a month).

In October, I finally sent out a cry for help after my computer broke, and we are so happy that many people heard it! Funding roughly doubled.

Now let’s be clear! Our funding has been more or less 1000 € a month since October. This is a lot better than what we had before, and it gave us a lot of hope. Yet it still does not pay full-time salaries for 2 people (faaar from it; it obviously cannot even pay a single full-time salary). So we still hope you will not forget us, and if you appreciate our project and what we do, both on GIMP development and/or on the ZeMarmot movie, we will be very thankful if you can donate to the project.

ZeMarmot project donations can happen on:

» Liberapay «
(weekly funding, USD and EUR possible, lowest fees)

» Patreon «
(monthly funding, USD ($) only)

» Tipeee «
(monthly funding, EUR (€) only)

Live Streaming while GIMPing

We were aware that the lack of news on the animation side was not ideal. On the other hand, animation just takes time. That’s the way it is.

Depending on the complexity and details of the animation (as chosen by the team), a minute of animation can take a month of work or more (just search the web, all links say the same).

Of course, it depends on your artistic choices. If you do vector animation or limited animation (The Simpsons and the like), you can animate a lot faster. Basically, you don’t take the same time to animate South Park as a Disney movie (which is not a problem, it’s a choice; I appreciate The Simpsons and South Park too). For ZeMarmot, as you know, we chose a detailed style with full traditional animation. At times we have regretted this choice a bit, but that’s the way things are.

That’s how Aryeom decided to live-stream herself working! She took a few days to look for software (she found the Free Software OBS) and understand how things work (well, she also managed to break Fedora once by reinstalling NVidia drivers while following tutorials! :p), ran many tests throughout December, and since December 25 the public livestreams have started.

The work is regularly live-streamed at this address:

» https://www.youtube.com/c/LibreArtInfo/live «

Unfortunately, we have not yet found the right organization to plan and publish a schedule of future livestreams. So for the time being, the best is either to follow us on Twitter, subscribe to the YouTube channel, or just check the above link regularly.

Previous livestreams are recorded and listed automatically on the channel once the streaming ends, so you can also watch older (no longer live) streams later. Be aware though: this is a real live view of someone actually working. That means it is real time, not sped up 20 times (like all the speed paintings you can see everywhere); errors may happen and are not edited out afterwards. There is no sound, and Aryeom doesn’t interact with viewers. She is focused and doesn’t look at what is being streamed (we do sometimes look at the chat and may answer questions, but don’t consider that a guaranteed “feature”; this is a peek at an animator’s work, not a service). The artist sometimes takes a break, and so on. It can even be boring at times. Also, the longest recorded stream in the list is more than 7 hours straight! Fair warning. 😉

Still, I think that’s an awesome experiment, and we have already had some very cool comments, like people thanking us (some in English, many in French) because it is a bit like being allowed to sneak into an animation studio to observe the animators working.

 

Art+Code symbiosis

We regularly get the question: “Why don’t you have separate crowdfunding campaigns for the development and animation sides?”

Answer: because this project is a whole. It is symbiotic: I do Free Software because I use it; if I didn’t have ZeMarmot (or another project where we use GIMP), then I would likely not contribute to GIMP. It is that simple. Likewise, Aryeom would likely not use GIMP if she didn’t have a developer by her side.

As a reminder, that is how I started my first patches: because we had crashes and many issues with GIMP, and it was not good enough for us as a professional tool at the time. We are very happy to tell you that now, it is. Not only because of us, far from it, let’s be clear! We are so happy to hack on GIMP together with several very talented developers (among them Mitch, the GIMP maintainer, who is still here after 20 years!). But we were allowed to do our part, and this is the reason we stuck around.

To illustrate: just a few hours ago, thanks to Aryeom streaming her work, we got an unexpected live demo of how well we work together. During her live stream, GIMP crashed! Ouch! Within a few minutes, she was able to find reproduction steps during the live stream. Less than two hours later, I had fixed the crash and then improved my fix in the master repository of GIMP (it actually took just a few minutes to reproduce and fix the crash, but I also had other priorities which I could not drop immediately!).

That’s how well we work together and what you pay for when you donate to the ZeMarmot project, and ultimately that’s the reason why it is one 2-person project, not 2 separate projects. 🙂

So if you have been holding back your donation because you only want to pay for a movie, or on the contrary only want to pay for GIMP development, I hope you will reconsider and see why you reach your goal even better by paying for both!

GIMP Motion

GIMP Motion is our plug-in for animation in GIMP (we talked about it earlier, first for simple and then for complex animations). By the way, you can also see it in action in Aryeom’s live streams, nearly daily now.

Unfortunately, I have somewhat neglected it lately, mostly for the reason I gave earlier: I am really focusing on getting core GIMP 2.10 out. So I hope this won’t just drag on forever (seriously, I want to be done with GIMP 2 and move forward to GIMP 3!).
That means GIMP Motion will likely not be part of GIMP 2.10. Yet it will be part of a later GIMP 2.10.x release, since we decided earlier to relax the no-feature policy for minor releases; that is also how I could decide that GIMP Motion was not ready to be part of a stable release. That’s exactly why I pushed for relaxing this no-feature policy for years (ever since 2014, cf. the “GIMP Meeting(s)” section of our LGM 2014 report!): so that we neither have to rush half-done features nor push back important releases forever.

We do still use it internally, but it is still very, very rough and has many bugs. Be warned if you try it!

Documenting the process of animation

This year, we have been a bit light on documenting the process. Well, we had a post on animatics, key-framing, etc., and one on background design. We also gave a few talks during the NUMOK festival in Paris, as well as at the JM2L meeting in the South of France, where we shared some interesting details about the process which are not written up here yet (but should be soon).

I am the first to admit that’s not enough, since Aryeom and I really want to document the techniques behind giving life to still images. But as I said, we were a bit down, overworked and penniless this year, so this fell behind a little. Hopefully we’ll do better next year.

ZeMarmot in the news

This was also cool: we got on local TV twice this year thanks to JM2L! In all the videos, GIMP is clearly mentioned and shown on screen, along with Aryeom hacking on animations in GIMP on GNOME. 🙂

We have been featured on France 3 (also in writing):

Then on PleinSud TV (at 2:32):

And a mention in a newspaper, Nice Matin:

New material

Thanks to the increased funding at the end of the year, we were able to renew our equipment a bit. In particular, we bought a Wacom MobileStudio Pro (basically a laptop-tablet from Wacom), on which the first thing we did was erase Windows and install GNU/Linux (Fedora 27) and GIMP. And that worked well. We still opened more than a dozen bug reports here and there, so don’t expect things to be perfect yet. But we are working on it!

We documented our process a bit in a Twitter moment, but unfortunately had to take a break because of hardware issues. We indeed had to send the tablet back to after-sales service, which returned it after more than 3 weeks (2 days ago)!

Now we will have to start all over again. Be prepared: we might soon publish a very complete guide on how to get a very cool Wacom laptop running Linux and GIMP. 🙂

Conclusion

This year was hard but eventful, and the end of the year gave us more hope after funding increased and we were able to upgrade our equipment.

Streaming Aryeom GIMPing was also a very cool idea, and we are happy to see that people seem to like it. To this day we have only received positive feedback. We should have started this sooner!

We do hope that things will continue to improve. We love what we do and our project, and we really hope we will soon be able to proudly say that we make a living by hacking on Free Software and Libre Art.

When that day comes, it will be a very, very happy day. 🙂
Happy New Year 2018, everyone!