Smart Colorization in GIMP

As part of the Image team at the GREYC lab (CNRS, ENSICAEN, University of Caen), I implemented a “fill by line art” algorithm in GIMP, also known as “Smart Colorization”. You may know this algorithm from G’Mic (developed by the same team), so when they offered me the position, this algorithm quickly caught my interest. It became my first assignment!

The Problem

Conceptually, filling line art is simple: you draw a shape with a black pen, say an approximate circle, and you want to fill the inside with a color of your choice. You could already do this, more or less, with the Bucket Fill tool when filling similar colors. Unfortunately this has two major issues:

  1. If the line art is not properly closed (“holes” in the lines), the color leaks outside. A hole may be a small painting mistake, yet it can be hard to spot (down to a single pixel in the worst case), and wasting time hunting for it is no fun. Other times, unclosed lines may even be a conscious, artistic choice (a “rougher” line style, for instance, or any of the billion reasons why you’d want non-closed lines).
  2. Usually you get ugly non-colored pixels next to the line borders because of interpolation, anti-aliasing or other reasons (unless you only draw with hard pixel edges, “pixel art” style), which is not an acceptable result.
2 main problems of Bucket Fill by color similarities

As a consequence, probably no digital colorist ever uses the Bucket Fill directly. There are various other methods, often involving the Fuzzy Select tool (or other selection tools), growing or shrinking the selection, and only then bucket-filling it. Sometimes just painting directly with a brush is the best option. Attending one of Aryeom’s workshops on colorization is absolutely amazing and enlightening, as she can teach you a dozen different methods, and she herself does not always use the same one (it depends on the situation, she would say). On the ZeMarmot project, I also made some custom quick-colorization Python scripts which Aryeom has used for years now, and which do a pretty decent job of speeding up this tedious task (though it remains tedious!).

The algorithm

The research paper is called “A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts” (note that a more detailed French version exists as well: “Un algorithme semi-guidé performant de colorisation en aplats pour le dessin au trait”). I worked from C++ code by Sébastien Fourey, with input from both him and David Tschumperlé, the two co-authors of the paper.

For our needs, the algorithm basically has two main steps:

  1. Closing the line art: find “key-points”, i.e. points on line edges with extreme local curvature (likely to be the ends of unfinished lines), then join these key-points based on “quality” criteria such as roughly opposite normals or a maximum distance (a small sketch follows this list). Lines can be closed either with splines (i.e. curves) or straight segments.
  2. Actually colorizing the resulting closed regions, “eating” a bit under the line art pixels to ensure there are no uncolorized pixels near the borders.
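
To make the “quality” criteria of step 1 more concrete, here is a minimal illustrative sketch (my own, with made-up thresholds; this is neither the paper’s exact scoring nor the GIMP code) of how a candidate pair of detected line ends could be accepted for joining:

#include <math.h>
#include <stdbool.h>

/* A detected key-point: position plus the outward normal at the line end. */
typedef struct
{
  double x, y;   /* position in pixels          */
  double nx, ny; /* unit normal at the line end */
} KeyPoint;

/* Accept joining two line ends when they are close enough and their
 * normals roughly face each other. Both thresholds are illustrative. */
static bool
join_candidates (const KeyPoint *a, const KeyPoint *b, double max_gap)
{
  double dx   = b->x - a->x;
  double dy   = b->y - a->y;
  double dist = sqrt (dx * dx + dy * dy);

  if (dist > max_gap)
    return false;

  /* Dot product of unit normals: -1 means perfectly opposite. */
  double facing = a->nx * b->nx + a->ny * b->ny;

  return facing < -0.5;
}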

You can see that the algorithm actually deals with both issues previously raised (which I conveniently numbered in the same order 😉)! Nevertheless I only implemented the first step of the algorithm and went with my own solution for the second step (still based on similar concepts though), because of usability issues.

Here is what bucket-filling by line art looks like in GIMP:

Bucket Fill by line art detection

I am not going to re-explain the algorithm in full. If you are interested in the technical details, I suggest you read the research paper (10 pages), which is quite clear and has nice self-explanatory images. If, like me, you are more “fluent” in code than in math equations, you can also look at my implementation in GIMP, mostly self-contained in gimplineart.c; start in particular with gimp_line_art_close().

Below I will focus on the improvements we made to the algorithm, which you won’t find in the paper. As a reminder, we also worked with the animator/painter Aryeom Han (the director of the ZeMarmot project) as artist advisor, so that the implementation is actually useful for real work, not just in theory.

Note: this article is mostly about the technical side of things. If you are just interested in how to use the tool efficiently, wait a few days or weeks: we will make a short yet (hopefully) exhaustive video showing the usage and meaning of every option.

Step 1: line art closure

To get a basic idea of what’s going on under the hood, here is an example, using an unfinished sketch by Aryeom (a good example for the purpose of the algorithm, as sketches are full of holes!). The leftmost image is the sketch as-is, the middle one is how the algorithm sees it after processing (you never see this version of the image!), and the rightmost one is my quick attempt to flat-colorize it (using the Bucket Fill tool only, no brush, nothing!) in less than one minute (no joke, I timed it!).

From line arts to colorized sketch, with internal closure representation in the middle

Note: of course, do not look at this result as “finished work”. Colorization is usually done in several steps, the first one being flat colors. This tool only helps with this first step, and the image may still need additional work (all the more with this example, as I ran it on a very rough sketch).

Estimate a local rather than global line width (improved from the paper)

One part of the algorithm quickly appeared suboptimal. As said above, we detect key-points based on line curvature. This is a problem with big brushes (either because you paint in a “fat lines” style, or simply because you paint at very high resolution, so even your “thin” lines are dozens of pixels wide). The ends of such lines may have very low curvature and go undetected. In the original paper, the proposed solution was:

For the sake of independence with respect to the image resolution, a second step allows to reduce the width of the strokes to a few pixels, if necessary, using a morphological erosion. The radius to be used for this erosion is set automatically by estimating the width of the strokes found in the drawing.

Section ‘3. Pre-processing of the anti-aliased image’

Unfortunately, computing a single line width estimation for a whole drawing assumes that all strokes in the drawing have the same thickness! Just ask a calligrapher how ridiculous that sounds! 😉

As a first consequence, even though it worked most of the time, the erosion could create previously non-existing holes (when eroding a line thinner than the average width). The research paper acknowledged this issue but dismissed it, assuming the closure step (which necessarily follows) would close any new hole anyway:

One should note that disconnections that may result from the morphological erosion step applied here, which remain rare, will be corrected anyway by the closing step to be applied thereafter.

Section ‘3. Pre-processing of the anti-aliased image’

Yet in the very early tests Aryeom did with the first version of my implementation, she quickly ran into cases where the closing step did not close the created holes. In at least one instance, we even had a perfectly closed line art which leaked the fill color ⇒ the exact opposite of what the algorithm was created for! Quite paradoxical! To make things worse, while finding micro holes in badly closed lines is hard, finding holes which do not really exist (because they are only created in an internal, invisible representation of the drawing) is close to impossible.

The ironic part is that the erosion did not even achieve its goal that well: it was still easy to draw with fat brushes and get no key-points at all despite the erosion. In the end, the global width estimation + erosion step added more problems than it solved.

Conclusion: this was bad.

After much discussion with David Tschumperlé and Sébastien Fourey, we came up with an evolution of the algorithm: computing an approximate local line width for every line art pixel (instead of a single line width for the whole drawing), simply based on a distance map of the line art. The test deciding whether an edgel’s curvature makes it a key-point is then made relative to the local line width (instead of eroding and assuming a theoretical global width).
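
Here is a minimal sketch of the local width idea (my own illustration, not the gimplineart.c code): a classic two-pass chamfer distance transform over the line art mask, where the value at each line pixel approximates its distance to the background, i.e. the local half-width of the stroke.

#include <math.h>
#include <stdlib.h>

/* line[y * w + x] != 0 marks a line art pixel. Returns a map where each
 * line pixel holds its approximate distance to the nearest background
 * pixel, i.e. the stroke’s local half-width. Caller frees the result. */
static float *
local_half_width_map (const unsigned char *line, int w, int h)
{
  float      *d    = malloc (sizeof (float) * w * h);
  const float BIG  = 1e9f;
  const float DIAG = 1.4142f;

  for (int i = 0; i < w * h; i++)
    d[i] = line[i] ? BIG : 0.0f;

  /* Forward pass: propagate distances from the top-left. */
  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      {
        float v = d[y * w + x];

        if (x > 0)              v = fminf (v, d[y * w + (x - 1)] + 1.0f);
        if (y > 0)              v = fminf (v, d[(y - 1) * w + x] + 1.0f);
        if (x > 0 && y > 0)     v = fminf (v, d[(y - 1) * w + (x - 1)] + DIAG);
        if (x < w - 1 && y > 0) v = fminf (v, d[(y - 1) * w + (x + 1)] + DIAG);

        d[y * w + x] = v;
      }

  /* Backward pass: propagate distances from the bottom-right. */
  for (int y = h - 1; y >= 0; y--)
    for (int x = w - 1; x >= 0; x--)
      {
        float v = d[y * w + x];

        if (x < w - 1)              v = fminf (v, d[y * w + (x + 1)] + 1.0f);
        if (y < h - 1)              v = fminf (v, d[(y + 1) * w + x] + 1.0f);
        if (x < w - 1 && y < h - 1) v = fminf (v, d[(y + 1) * w + (x + 1)] + DIAG);
        if (x > 0 && y < h - 1)     v = fminf (v, d[(y + 1) * w + (x - 1)] + DIAG);

        d[y * w + x] = v;
      }

  return d; /* 2 * d[i] then approximates the full stroke width at i */
}

An edgel’s curvature can then be judged against the width of its own stroke, so thick and thin lines can coexist in the same drawing.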

Not only did it perform a lot better (no more invisible holes in perfectly closed areas, better end-point detection on very fat strokes, and support for variable stroke widths within a single drawing, whether as a stylistic choice or simply because perfection is hard), it even ended up faster!

As an example, the original version of the algorithm would have failed to detect end-points for this shape with such fat strokes, hence to close it. The updated algorithm has no such problem:

» For those interested, see the commit «

Parallelization for fast processing

Line art closure is clearly the most expensive step. Though it performs quite well on Full HD or even 4K images, it could still take half a second on my laptop, and half a second for an interactive tool is an eternity! And if we run this on very big images (not uncommon at all nowadays), it may even take several seconds.

So I run this whole computation in parallel, as early as possible (as soon as the Bucket Fill tool is selected). Since people are not robots, this makes the interaction feel much faster; in many cases you may not even notice there was any processing time.
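
The general pattern looks like this sketch (illustrative pthread code, not the actual GIMP implementation; the ClosureJob fields are assumed to be initialized with pthread_mutex_init() and pthread_cond_init() beforehand):

#include <pthread.h>

typedef struct
{
  /* input image and tool parameters would go here */
  void           *line_art; /* computed closure, filled by the worker */
  int             ready;    /* protected by the mutex below           */
  pthread_mutex_t lock;
  pthread_cond_t  cond;
} ClosureJob;

static void *
closure_worker (void *data)
{
  ClosureJob *job    = data;
  void       *result = NULL;

  /* ... run the expensive line art closure here ... */

  pthread_mutex_lock (&job->lock);
  job->line_art = result;
  job->ready    = 1;
  pthread_cond_signal (&job->cond);
  pthread_mutex_unlock (&job->lock);

  return NULL;
}

/* Called as soon as the Bucket Fill tool is selected. */
static void
start_precompute (ClosureJob *job)
{
  pthread_t thread;

  pthread_create (&thread, NULL, closure_worker, job);
  pthread_detach (&thread);
}

/* Called on the first click: only blocks if the user was faster than us. */
static void *
wait_for_line_art (ClosureJob *job)
{
  pthread_mutex_lock (&job->lock);
  while (! job->ready)
    pthread_cond_wait (&job->cond, &job->lock);
  pthread_mutex_unlock (&job->lock);

  return job->line_art;
}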

Available line art “Source” in the tool options

Partly for the same reason, you may notice that the “Source” option offers more choices than in other tools (usually “Sample merged” versus the active layer only): here you can also choose the layer above or below the active one. This is both a logical decision (applied colors are, by definition, not line art) and a performance one (the software does not have to recompute the closure after every color edit).

Step 2: color filling

Making the algorithm interactive and error-safe

The original algorithm fills all the zones at once, using a watershed algorithm over the whole image.

I chose to drop this part of the algorithm, mostly for usability reasons. The first time I saw the images demonstrating the algorithm in G’Mic, I found the result very promising, but the GUI seemed quite impractical. Still, I am not the painter of the team, so I showed it to Aryeom. Her first words after the demo were, in substance, “I will never use this”. Note well that we are not talking about the end result (which is not bad at all), but about the human-computer interaction. Colorizing is tedious, but if smart colorization is even more tedious, why use it?

So what’s tedious? G’Mic proposes several interaction variants: you can, for instance, let the algorithm color every zone randomly, then select each flat-color zone for recolorization; or you can guide the algorithm with color spots, hoping it floods the right zones with the right colors. I also suggest this interesting article by David Revoy, who contributed to the original version of the algorithm.

The « Colorize lineart [smart coloring] » filter
Smart colorization in G’Mic, a bit overwhelming…

These are interesting workflows, surely usable and even efficient in some cases, but not a generic method you would want for every drawing.

For one, it takes a lot of steps to color a single drawing. For animation (e.g. the ZeMarmot project), it is even worse, as we must colorize dozens or hundreds of layers.
A second reason is that such workflows assume the algorithm is never wrong, and we know that is a very bold assumption! Undesired results will happen, which is not necessarily a problem: what you ask of such an algorithm is to work well most of the time, as long as you can always fall back to more down-to-earth methods in the rare cases where it fails. But if you have to undo the colorization of the whole image (since it is done in a single step), tweak the color spots while guessing what went wrong, and try again, somewhat blindly, then using a “smart” algorithm is just a waste of time.

Instead, we needed an interactive and progressive colorization process, so that the algorithm’s errors can easily be bypassed by temporarily reverting to traditional techniques (just for a few zones). This is why I based the interaction on the existing Bucket Fill tool: colorization (by line art detection) works as filling always has. You click in a zone and see it being filled in front of your eyes… one zone at a time!

It is straightforward and very error-safe: if the algorithm doesn’t properly detect the zone you are working on, just undo and fix that zone only.
Moreover, I really wanted to avoid creating a new tool if possible (there are so many already). This is basically the same use case the Bucket Fill tool has always handled, right? It’s just a different algorithm to decide how the filling is done. So it makes perfect sense for it to be just an option of the same tool.

Therefore, my solution to replace watershedding the whole image at once is, once again, to use a distance map of the lines (both the original line art and the invisible closure lines). As we saw, this gives a decent estimation of the local line thickness (half-thickness, to be precise). When the bucket fill occurs, I flood the color a few pixels further (up to roughly the middle of the line art), thus ensuring we don’t leave ugly, non-colorized pixels near the borders. Simple, fast and efficient.
This is a kind of local watershedding, simpler and much faster, and it also allowed me to add a “Max flooding” option to keep the color flooding under control.
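
Here is a minimal sketch of that flooding step (my own illustration with a naive fixed-point loop; a real implementation would use a queue). It grows an already-filled zone into the surrounding line pixels along paths of non-decreasing distance values, i.e. from the stroke border toward its center, capped by the “Max flooding” parameter:

/* filled[i] == 1 marks colorized pixels, line[i] == 1 marks line art
 * pixels, dist is the distance map of the lines (0 on the background). */
static void
flood_under_lines (unsigned char       *filled,
                   const unsigned char *line,
                   const float         *dist,
                   int w, int h, float max_flooding)
{
  int changed = 1;

  while (changed)
    {
      changed = 0;

      for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
          {
            int i = y * w + x;

            if (filled[i] || ! line[i] || dist[i] > max_flooding)
              continue;

            /* Fill this line pixel if a 4-neighbor is filled and not
             * deeper into the stroke than we are: we only ever walk
             * from the stroke border toward its center. */
            int neighbor[4] = { i - 1, i + 1, i - w, i + w };
            int valid[4]    = { x > 0, x < w - 1, y > 0, y < h - 1 };

            for (int k = 0; k < 4; k++)
              if (valid[k] && filled[neighbor[k]] && dist[neighbor[k]] <= dist[i])
                {
                  filled[i] = 1;
                  changed   = 1;
                  break;
                }
          }
    }
}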

Using smart colorization without the “smart” part!

One very cool usage of this new algorithm is to bypass the first step, i.e. not compute the line art closure at all! This can be very useful when your line style has no gaps (a simple design with solid lines, for instance), so you don’t need any algorithmic help to close them. This way, you can fill zones in a single click without worrying about over-segmentation or processing time.

To do so, just set the “Maximum gap length” option to 0. Here is an example of a very simple design (left), filled by similar colors (middle) versus by line art detection (right), in a single click:

Left: AstroGNU by Aryeom – Middle: Fill similar colors – Right: Fill by line art detection

See this kind of “white halo” between the red color and the black lines in the middle image? The quality difference with the right one is striking, and explains why the historical “Fill similar colors” algorithm is not usable (directly) for quality colorization work, whereas the new line art detection mode is.

Bucket Fill enhanced for all cases!

As a bonus, I also had to improve the interaction of the Bucket Fill tool generically, for all fill algorithms. It remains mostly similar, but with the kind of details that make all the difference:

Click and hold

A main issue to take care of was the over-segmentation problem. You may call an algorithm “smart” or “intelligent”; this won’t make it “human-smart”. In particular, it creates artificial closure lines based on geometric rules, not on actual recognition of shapes or meaning, and even less by reading the painter’s thoughts! So it will often over-segment, i.e. create more artificial zones than ideally desired (and if you paint with a somewhat crenellated brush, it can get really bad). Let’s take the previous image as an example again:

Closure is “over-segmented”

You can clearly see the issue: the dog, for instance, would most likely get a single flat color, yet it ended up split into about 20 zones! In such a case, with the former Bucket Fill interaction, you would end up clicking 20 times (instead of the ideal 1). Counter-productive, right? So I updated the Bucket Fill tool to work more like a paint tool: click, hold, and drag the cursor over the zones to fill. This works both with the line art algorithm and when filling similar colors (it is meaningless for filling the whole selection, of course). It makes filling an over-segmented drawing much less of a problem, since you can still do it in a single stroke. Not as fast as the ideal single click, yet quite tolerable.

Color picking

Another nice change is easy color picking with ctrl-click (no need to select the Color Picker tool). All paint tools could already do this, but not the Bucket Fill, even though it clearly works with colors too. Being able to quickly change colors by picking nearby pixels (a very common practice among professional digital painters) makes the Bucket Fill tool very productive.

With these few changes, the Bucket Fill is now a first-class citizen (even if you don’t need the smart colorization).

Limits and possible future work

Algorithm targeted at digital painters

Smart colorization is made for line art. It won’t perform well on other kinds of images, and in particular it is not made to work on photographs. Of course, adventurous artists always invent unplanned usages, and maybe that will happen here too; but as far as I can see, this is really a tool for digital painters.

More than the black-lines-on-white-background use case?

Lines are detected in the most basic way possible: a threshold either on the alpha channel (if you work on transparent layers) or on the grayscale value (particularly useful when working on scans of paper drawings, for instance).
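
In code, the detection criterion is essentially just this (the thresholds are illustrative, not GIMP’s actual defaults):

/* A pixel belongs to the line art if it is opaque enough (transparent
 * layers) or dark enough (flattened scans). Thresholds are examples. */
static int
is_line_art_pixel (float alpha, float gray, int use_alpha_channel)
{
  if (use_alpha_channel)
    return alpha > 0.25f; /* threshold on the alpha channel   */
  else
    return gray < 0.35f;  /* threshold on the grayscale value */
}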

Therefore, the current version of the algorithm may have difficulties detecting the line art if, for instance, you scan a drawing made on non-white paper. Or say you simply want to draw with light lines on a dark background (everyone is free!). I am not sure these use cases are common enough to justify piling ever more options onto the tool. We’ll see.

More optimization

Though I said it is very usable on images of reasonable size, I still consider this new algorithm a bit slow on very big images (not so uncommon either, notably in the printing industry, where you often need higher resolutions than for screens), despite the multi-threaded code. So I would not be surprised if, in some cases, you simply went back to your old colorization techniques.

I hope to come back to this code soon and optimize it further.

Not so nice color borders

The border of the fill is clearly not as “nice” as it could be when colorizing with a brush (where the border would show a bit of “texture”). For instance, let’s take a closer look at our original example.

The parts where the fill border is visible will likely need some retouching. I added an “Antialiasing” option, but it is clearly not a real solution in most cases; it does not replace manual editing with a brush after filling.

The worst case is when you plan to remove the lines after colorization. Aryeom sometimes uses a drawing style where the lines only serve as a working support before being removed for the final render (a whole, somewhat dreamy, scene of ZeMarmot is actually made this way). In that case, you need perfect control over the quality of the color borders. Here is an example where the final painting is made of colors only, with no outlines:

An image cut of ZeMarmot movie in production, drawn by Aryeom

No API yet

We are still missing API functions for scripts and plug-ins to run fills by line art detection. This is on purpose: I am waiting a bit after the first release to make sure we don’t need more or better parameters, since an API has to stay stable once released (unlike the GUI, which can be updated more easily under our new release policy).

Actually, you may have noticed that even in the graphical interface, not all the options you can find in G’Mic are visible (even though they are all implemented). This is because I am trying not to make the tool confusing, as many of these options require an intimate understanding of the algorithm. Rather than having everyone constantly fiddle with sliders at random, I am looking for an interface which needs as few options as possible.

Fuzzy Select tool

This algorithm is currently only implemented for the Bucket Fill tool, but it would be perfectly suited as an alternative method in the Fuzzy Select tool as well! To be continued…

Segmenting less

With very clean lines and simple shapes, the algorithm works really well. But as soon as your drawing gets more complex, and in particular when you use strokes with some character (for instance the “Acrylic” series of brushes shipped with GIMP by default), it starts detecting a few too many false-positive key-points, hence over-segmenting. We ran into such cases, so I tried various improvements, but so far none works in all cases. One attempt was to apply a median blur to the line art before key-point detection, the idea being to smooth out the dents of crenellated strokes (a sketch of such a filter follows the captions below). It worked quite nicely on my base example:

Middle: over-segmentation with current algorithm
Right: still segmented, yet much better, when median-blurring first
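
For reference, the smoothing I experimented with is conceptually just a median filter; here is a minimal illustrative 3×3 version (not the exact code I tested):

#include <stdlib.h>

static int
cmp_uchar (const void *a, const void *b)
{
  return *(const unsigned char *) a - *(const unsigned char *) b;
}

/* 3x3 median filter over a grayscale or binary line art mask,
 * clamping coordinates at the image borders. */
static void
median_blur_3x3 (const unsigned char *src, unsigned char *dst, int w, int h)
{
  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      {
        unsigned char win[9];
        int           n = 0;

        for (int dy = -1; dy <= 1; dy++)
          for (int dx = -1; dx <= 1; dx++)
            {
              int xx = x + dx;
              int yy = y + dy;

              if (xx < 0) xx = 0; else if (xx >= w) xx = w - 1;
              if (yy < 0) yy = 0; else if (yy >= h) yy = h - 1;

              win[n++] = src[yy * w + xx];
            }

        qsort (win, 9, 1, cmp_uchar);
        dst[y * w + x] = win[4]; /* median of the 9 samples */
      }
}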

Unfortunately, it made the result on other images very bad, notably by creating holes in thin lines (a problem we had just gotten rid of when we replaced the erosion step with local line widths!).

Middle: very acceptable result with current algorithm
Right: bad result as color would leak and we lost many details

So I am still searching. I don’t know whether we will find a real solution; it is clearly not an easy topic. We’ll see!

As a general rule, over-segmentation (false positives) is a problem, but a lesser one than failing to close holes (false negatives), especially thanks to the new click-and-drag interaction. I have already fixed several issues in this area: for instance, micro-zones which the paper calls “not significant regions”, yet which are very significant to digital painters, as nothing is more annoying to fill than tiny holes of a few pixels; and recently, an issue with the fast approximation of region areas, which could be wrong for open regions.

Conclusion

This project was really interesting, as it confronted research algorithms with the reality of day-to-day drawing work. It was all the more interesting as it brought together three worlds: research (a very cool algorithm thought up by brilliant minds at CNRS/ENSICAEN), engineering (myself!) and art (Aryeom/ZeMarmot). To give credit where it is due, many of the interface improvements were ideas and proposals from Aryeom, who tested this work in the field, i.e. on real projects. This joint CNRS/ZeMarmot cooperation went so well that Aryeom was invited at the end of January to present her work and her drawing/animation challenges in a seminar at ENSICAEN.

Of course, I still consider this a work in progress. As you could see, several aspects deserve improvement. It is not my main project anymore, but I will definitely come back regularly to improve what needs it. Still, it is already in a perfectly usable state, and I do hope many people will use and enjoy it. Tell us what you think if you try it!

A last comment: the ideas behind the best algorithms are not necessarily the most incredible technically. This smart colorization algorithm is based on very simple transformations, yet it performs well, is quite fast (within some limits, as said above), and does not eat all your memory nor hang GIMP for 10 minutes… To me, this is far more impressive than algorithms which may be more brilliant yet are unusable on everyday desktops. This is what you need in desktop graphics software when you actually want work done. And that’s very cool; I am all the happier to work with the talented G’Mic/CNRS team. 🙂

Have fun everyone!

Crowdfunding for extension management in GIMP (and other improvements)

One of my long-brooded projects for GIMP has been an extension management system. I have had it in mind for years (the oldest message from me I could find about it dates back to 2013, just a year after I started contributing).
I finally started working on it a few weeks ago!

I actually have a looot of projects regarding extending GIMP: a better plug-in API, bidirectional communication with GIMP’s core, even GUI hacking through plug-ins, why not! But the first issue is: installing plug-ins and resources sucks right now!
Let me explain the plan of things to come further below, but first a bit of a reminder:

You can crowdfund ZeMarmot/GIMP!

Everything done so far, and still to be done, happens thanks to the ZeMarmot project, which pays for this GIMP development, since it is all done to improve our production pipeline.

Right now, we don’t even have enough monthly funding to pay a single person minimum wage. Yet the two of us have managed to survive on it for years, with times of slight depression and burnout (actually, we kind of always have these lately).

We hope to at least crowdfund enough to pay two minimum-wage salaries! That would require about four times our current monthly funding. This is why, for once, I decided to detail my development plans in advance (rather than at the end, once the work is done; as a reminder, we are major GIMP developers), and why I start with the crowdfunding call rather than the technicals: we really need you!

If you like how GIMP has improved lately and want it to continue this way, can you help us? If so, our project is being funded on the two following platforms:

» Patreon crowdfunding (USD) «

» Tipeee crowdfunding (EUR) «

What is an extension?

What I call an extension is anything which can be installed in GIMP. In my current development code, an extension can already pack:

  • plug-ins
  • brushes (core and MyPaint ones)
  • dynamics
  • patterns
  • gradients
  • palettes
  • tool presets

In the future, it could also be:

  • scripts
  • templates
  • themes
  • icons
  • … and whatever other data I may have forgotten!

What is management?

What I call “management” is basically being able to:

  • Search for new extensions (made available in remote repositories);
  • install new extensions;
  • uninstall extensions;
  • disable extensions (make them inactive without full removal);
  • update extensions: when the extension creator makes a new version available, GIMP should tell you and give you the possibility to update in a single click.

You probably all have an idea of how this would work. Your web browser (e.g. Firefox, Chrome, etc.) probably has extensions as well, and you may have searched for and installed some. Well, basically, I want the same thing in GIMP.
Until now, “installing” an extension in GIMP implied doing a web search, finding some compressed archive (sometimes on shady websites) with weird instructions asking you to unzip files into some hidden folder on your disk. Sometimes it worked, sometimes not, and many people barely understood what they were being asked to do. Well, it sucked.

Extension creator point of view

If you create resources for GIMP, be they brushes, splash images or plug-ins, here is what you’ll get: we are going to set up a website where you can upload your extensions. We don’t know yet if we will recycle the old registry.gimp.org domain (now down) or set up a new one (extensions.gimp.org?). We’ll see.
The code for the website will be on gitlab.gnome.org, though right now the repository is empty (I am working on the GIMP core side first), as I only requested its creation a few days ago.

Technically, an extension is mostly about adding metadata to your data: an extension name, a description, even screenshots if relevant. Your website URL is also welcome, as well as your bug tracker URL, so that we can redirect people to you when your extension has problems (making your development more efficient). In my current implementation, I chose the AppStream format for the metadata, which is well known to Free Software developers, quite featureful, and easy to write (we still have some details to figure out for Windows, and probably non-Linux support in general, but I am confident this will go well). Metadata is the key component for searching and managing extensions correctly!

We are extending it a bit with a few tags so that you can describe what kind of data your extension provides. For instance, here is the skeleton for metadata of an extension which ships a set of brushes:

<?xml version="1.0" encoding="UTF-8"?>
<component type="addon">
 <id>info.libreart.brushset</id>
 <extends>org.gimp.GIMP</extends>
 <name>My cool brush set</name>
 <summary>A collection of brushes for GIMP</summary>
 <url type="homepage">https://libreart.info</url>
 <metadata_license>CC0-1.0</metadata_license>
 <project_license>GPL-3.0+</project_license>
 <releases>
  <release version="0.1" date="2018-06-07" />
 </releases>
 <requires>
  <id version="2.10" compare="ge">org.gimp.GIMP</id>
 </requires>
 <metadata>
  <value key="GIMP::brush-path">brushes</value>
 </metadata>
</component>

That’s it. You put the brushes inside a brushes/ subdirectory (this is what is described in the “GIMP::brush-path” key) and you are done.
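
For instance, a brush-set extension matching the metadata above could be laid out like this (a hypothetical example; the file names are made up and the exact conventions may still change):

info.libreart.brushset/
  info.libreart.brushset.metainfo.xml
  brushes/
    ink-round.gbr
    ink-flat.gbr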

GIMP point of view

One of the cool things which could happen on GIMP’s side is that we may turn some of our core data and plug-ins into core extensions as well. This would have two direct consequences:

  1. It will be possible to easily disable features or data you don’t care about. I read that some people even went as far as deleting files to declutter their GIMP menus, removing features they never use. Well, that should now become unnecessary.
  2. Updates could happen outside of releases: a GIMP release takes a lot of work and preparation. You have noticed how we do them faster now? Well, this won’t stop. Nevertheless, if we can also update a core extension through the extension repository, it would make our job easier: we could do fewer core GIMP releases, yet you would get even faster updates for all the small features.

What about security?

Well, that’s the big question! Let’s be clear: the security of plug-ins in GIMP currently sucks.

So the first thing is that our upload website should make basic file type checks and compare them against the metadata. If your metadata announces that you ship brushes, and we find executables in there, we will block the upload.

Also, all executables (i.e. plug-ins or scripts) will be held for manual review. That also means we’ll need to find people in the community to do the reviews. I predict that it will take some time for things to settle smoothly, and the road may be bumpy at first.

Finally, we won’t accept pre-built files immediately. If code is to be compiled, we would need to compile it ourselves on our servers. This is obviously a whole new layer of complexity (all the more so because GIMP runs on Linux, Windows, macOS, BSDs…). So at first, we will probably not allow C and C++ extensions in our repository. But WAIT! I know that some very famous and well-maintained extensions exist and are compiled. We all think of G’Mic of course! We may make exceptions for trustworthy plug-in creators (with a well-known track record), allowing them to upload their compiled plug-ins as extensions. But these will be truly exceptional.

Obviously this will be a difficult path. We all know security is a big deal, and GIMP is not so good here yet. At some point, we should even run every extension in a sandbox, for instance. Well, as some say: the trip is long, but the way is clear.

Current code

As said, I am already working on it. The current code can already load extensions, read the metadata, and enable and disable extensions. These are the basics, and I still have a lot to do on them (for instance plug-ins need some special-casing for enabling/disabling).

Then I will need to work on getting the extension list from repositories, and on the web side (Patrick David should help me there; as you may recall, he also made our new website and Pixls.us, the trendy community site for photographers using FLOSS).

I cannot say when this will be finished, as I am doing other things at the same time (lately I have worked on improving GIMP code for HiDPI, and of course I need to work more on our animation plug-in; and much, much more!). But I hope to reach a releasable state by the end of the year. 🙂

Thanks all!
And don’t forget we are really in need of more funding on Tipeee or Patreon (see also ZeMarmot’s generic donation page) to help us move forward!

New format in GIMP: HGT

Lately a regular contributor to the GIMP project (Massimo Valentini) contributed a patch to support HGT files. Starting from this initial commit, and since I found this data quite cool, I improved the support a bit (auto-detection of the variants and special-casing in particular, as well as an API for scripts).

So what is HGT? It is basically topography data containing the elevation in meters of various landscapes (HGT stands for “height”), gathered by the Shuttle Radar Topography Mission (SRTM) run by various space agencies (NASA, the National Geospatial-Intelligence Agency, the German and Italian space agencies…). To learn more, you can read here and there.
HGT download source: https://dds.cr.usgs.gov/srtm/version2_1/
(go inside the SRTM1/ and SRTM3/ directories for 1 arc-second and 3 arc-second sampled data respectively)
You probably won’t find many other sources of interest, since not everyone can produce such data (not everyone has satellites!).

Here is what it can look like after processing: on the left is an image obtained from a NASA PDF, and on the right is the same data imported in GIMP, followed by a gradient mapping.

The support is not perfect yet, because getting a nice-looking result takes several steps and likely a bunch of tweaking. My output above is not that good (the colors look a bit radioactive compared to the NASA one!) but that’s mostly because I didn’t take the time to tweak more.

And that’s why I am writing this blog post. Someone trying to import HGT files in GIMP may be a bit disconcerted at first (so I’m hoping this blog post will help you go further). You would likely get a nearly uniform-looking grey image at first and may think that HGT import is broken. It is not.

What’s happening? Why is the imported HGT all uniform grey?

GIMP by default converts the HGT data into greyscale. That is not a problem by itself, since greys can be very well contrasted. But that doesn’t happen with HGT import. Why?

HGT contains elevation data as signed 16-bit integers representing meters. In other words, it represents elevations from -32767 m to 32767 m (with an exception for -32768, which means “void”, i.e. invalid data; since this is raw data with minimal processing, it can indeed contain errors). Therefore, once mapped to the [0; 1] range, the value 0 (pure black) is invalid, ]0; 0.5[ is anything below sea level, and [0.5; 1] is at or above sea level.

Considering that the highest point on Earth is Mount Everest at 8848 m, when mapped to our [0; 1] range, it has the value 0.635. So you can see the problem: most things on Earth will be represented with greys really close to 0.5, and that’s why there is no contrast.
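To make this concrete, here is a minimal standalone sketch (my own illustration using numpy, not GIMP’s actual import code; the file name is just an example) showing how raw HGT values end up crowded around 0.5 once naively mapped to [0; 1]:

import numpy as np

# HGT tiles are raw big-endian signed 16-bit integers, row-major.
data = np.fromfile("N46E006.hgt", dtype=">i2")
side = int(np.sqrt(data.size))           # 1201 (SRTM3) or 3601 (SRTM1)
elevation = data.reshape(side, side).astype(np.float64)

valid = elevation != -32768              # -32768 marks "void" samples
grey = (elevation + 32768) / 65536.0     # the naive mapping to [0; 1]
print(grey[valid].min(), grey[valid].max())
# For most tiles this prints values like 0.50 and 0.53: barely any contrast.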

How to get nice colors and contrast?

There are several solutions, but the one proposed by the contributor was to use the “Gradient Map” plug-in. That’s a good idea: basically you remap your greys from 0 to 1 onto a color gradient.
Now you could try to create a gradient by setting random stops through the GUI, but that would most likely be quite a challenge. A better idea is to do it a bit more “scientifically” (i.e. with numbers, which you can also do through the GUI by using the new blend tool, though not as accurately as I’d like with only 2 decimal places). This is what Massimo did here, creating a gradient file which maps “magenta for invalid data, blue below zero, green to 1000 m, yellow to 2000 m, and gray to white above“. From this base, I added a bit of random tweaking because I was trying to get an output similar to the NASA document (just for the sake of it), so you can have a look at what my own gradient file looks like. But if you are looking to, say, create a relief map with an accurate elevation/color mapping, you would rather stick to the numbers-only approach.

Then once you have your gradient “code”, copy it into a file with the .ggr extension inside the gradients/ folder of your GIMP config, and just use it when running the “Gradient Map” filter.

To explain the format a bit: on each line, you get the startpoint, midpoint and endpoint coordinates (in the [0; 1] range), followed by 4 values for the startpoint RGBA (also in the [0; 1] range), then again 4 RGBA values for the endpoint color. Then you get an integer for the blending mode (you likely want to keep it linear, i.e. 0, for a relief map), then the coloring type (leave it at 0 as well, which is RGB). Finally, the last 2 integers say whether the startpoint and endpoint are fixed colors, or correspond to foreground, background, etc. You will likely want to keep them as fixed colors (0).

So basically a line like this:

0.500000 0.507633 0.515267 0.000000 1.000000 0.000000 1.000000 0.000000 0.500000 0.000000 1.000000 0 0 0 0

means: the gradient from 0 m (0.5) to 1000 m ((0.515267 − 0.5) × 2^16 ≈ 1000) is a linear gradient from RGBA 0-1-0-1 (green) to RGBA 0-0.5-0-1. That is:

start mid end Rs Gs Bs As Re Ge Be Ae 0 0 0 0

where start is the start elevation and end the end elevation in the [0; 1] range, and RsGsBsAs and ReGeBeAe are respectively the start and end gradient colors.
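If you want to build your own relief gradient, a small helper avoids recomputing stop positions by hand. This is just a sketch of mine following the mapping described earlier; whether the divisor is 65536 (2^16) or 65535 only shifts the result by a fraction of a meter:

# Convert between elevations in meters and [0; 1] gradient stops,
# following the signed 16-bit elevation mapping described above.
def meters_to_stop(meters):
    return (meters + 32768) / 65536.0

def stop_to_meters(stop):
    return stop * 65536.0 - 32768

print(meters_to_stop(1000))   # ~0.5153, the endpoint of the line above
print(meters_to_stop(8848))   # ~0.635, Mount Everest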

That’s how you can easily map the elevation into colors! I hope that’s clear! 🙂

Can’t we have nicer support with a GUI?

Yes, of course. It was fun and cool to review then improve this feature, and we should not let quality patches rot in our bug tracker, but this is not my priority (as you know), so I stopped improving it (if I don’t keep myself from all the fun stuff out there, when would I work on ZeMarmot?!).
I gladly accept new patches to improve the support, and I have left myself 2 bug reports with ideas on how to improve the current tools:

  • Improve the “Gradient Map” filter to provide on-canvas preview and editing, similarly to the blend tool, because I realize this filter is powerful but a bit of a pain to use right now (iterations of editing the gradient, running the filter to test, cancelling, again and again).
  • Map gradients directly from HGT import, with a preview and the [0; 1] range remapped to elevation in meters in the dialog, so that we don’t have to constantly recompute values back and forth and edit .ggr files by hand.

In the meantime, I leave this blog post so that the format is at least understandable and HGT import usable by moderately technical people. 🙂

That’s it! Hopefully this post will be useful to someone needing to process HGT files with GIMP and willing to understand how this works, until we get more intuitive support.

Reminder: my Free Software coding can be funded on
Liberapay, Patreon or Tipeee through the ZeMarmot project.

GIMP 2.9.6 & ZeMarmot

Note: this is a copy of a post initially posted on Patreon and Tipeee.

Splash of GIMP 2.9.6 by Aryeom

Last month, we released the third development version of GIMP, version 2.9.6, in preparation for the next stable version, GIMP 2.10.

As with previous versions, the ZeMarmot project was one of the major contributors, with 274 commits (out of 1885 total for this release) by Jehan, 4 by Aryeom (some icons, a new “Pressure Size” paint dynamics very useful for flat coloring, and the splash image for this development version), and even, for the first time, 3 commits by Lionel, a board member of the LILA association. Hence about 15% of GIMP 2.9.6 was brought to you by ZeMarmot! 🙂

To get some more insight, you can have a look at the official announcement. And if you want the full and accurate list of Jehan’s contributions in particular, it is available in the source repository.

Brought to you in 2.9.6 by ZeMarmot

  • made libgimp thread-safe, which basically simplifies plug-in developers’ work when making plug-ins use several cores (nowadays all desktop computers are multi-core);
  • display angles when drawing lines;
  • code review for WebP image support, as well as some improvements and fixes (and even a patch upstream on the libwebp library);
  • capability to switch the exclusive visibility of layers inside layer groups only, with shift-click (a feature requested and tested/used by Aryeom for a few months before it was added to GIMP);
  • contributing to the Darktable and RawTherapee developers’ efforts for our new “raw” plug-in, allowing RAW files to be imported into GIMP through these third-party programs (the GIMP project advocates cooperation with other Free Software);
  • contribution allowing GIMP to follow the GEGL multi-thread limit (once again to make better use of modern processors, this time in GIMP core in particular);
  • various improvements to PDF support, in particular multi-page PDF export from layers (this is where Lionel from LILA made his first steps as a developer, with Jehan’s help!);
  • code review and fixes for improved import and export of PCX images;
  • capacity for plug-ins to be installed in their own subdirectory, which should in the long run allow getting rid of the “DLL hell”, a very common issue, in particular on Windows systems, where some plug-ins embed libraries breaking other plug-ins;
  • changed various default values to match up-to-date standards (bigger default font size, Full HD as the new default image dimension, 300 PPI default resolution instead of 72…);
  • intelligent adaptation of the physical dimension precision based on the printing resolution, allowing better precision in various parts of the software (measure tool, status bar, etc.);
  • capacity to choose the icon size, allowing GIMP to adapt to smaller or bigger screens, in particular high density screens, etc.;
  • auto-detection of your screen’s native resolution to choose a better default icon size (this default choice can still be changed, cf. the previous point; but at least you should get better defaults);
  • vector icons by default, to support the various sizes;
  • welcomed new code contributors by adding a vim coding style file and integrating contributed emacs and kate coding style files;
  • Flatpak package for GIMP;
  • and much more! Bug fixes and minor features by the dozens!

Flatpak for creators on Linux?

For creators who use GIMP on a GNU/Linux operating system: you may have heard of Flatpak, the generic application package system. Since we also exclusively use Linux, it felt important that GIMP be available in a timely manner (with distribution package systems, it is not unheard of to have to wait months after the actual release to get a new version!). We took the opportunity of the 2.9.6 release to test a first public Flatpak package. Since we don’t have a stable server, we made it available to our Patreon and Tipeee contributors only for the time being, and will try to make it available for everyone very soon!

For information, Windows already has a GIMP 2.9.6 installer available, and a macOS package should hopefully be uploaded soon (it will depend on the package maintainer, who has some family priorities right now). These are not maintained by us. » See the download page! « 🙂

Thanks and “en route to GIMP 2.10”!

I hope you appreciate our contributions to GIMP! Know that these are all thanks to our contributors, be they on Patreon or Tipeee, in previous crowdfundings, or those who made direct donations.

It is not easy every day because we seriously lack funding, and we have had the blues more than once. ;-(
Yet the many of you who never failed us and continue to support us give us courage.
Thanks to you!

We will continue in order to bring you an awesome stable GIMP 2.10. 🙂

Have fun with GIMP!

GIMP Motion: part 2 — complex animations

This is the second video presenting GIMP Motion, our plug-in to create animations of professional quality in GIMP. As previously written, the code is very much work-in-progress, has its share of bugs and issues, and I am regularly reviewing some of the concepts as we experiment with them on ZeMarmot. You are still welcome to play with the code, available in GIMP’s official source code repository under the same Free Software license (GPL v3 or over). Hopefully it will, at some point not too far away, be released with GIMP itself, once I deem it stable and good enough. The more funding we get (see the end of the article for our crowdfunding links), the faster it will happen.

Whereas the previous video introduced “simple animations”, which are mostly animations where each layer is used as a different final frame, this second video shows how the plug-in handles animations where every frame can be composited from any number of layers: for instance, a single layer for the background used throughout the whole animation, separate layers for one character, other layers for a second character, and layers for other effects or objects (for instance the snow tracks in the example at the end of the video).

It also shows how we can “play” with the camera, for instance with a full cut larger than the scene, where you “pan” while following the characters. In the end, we should be able to animate any effect (GEGL operations) as well. This could be to blur the background or foreground, add light effects (lens flares for instance), or just artistic effects, even motion graphics…
All this is still very much work-in-progress.

One of the most difficult parts is finding how to get the smoothest experience. Rendering dozens of frames, each composited from several high resolution images and complex mathematical effects, takes time; yet one does not want to freeze the GUI, and the animation preview needs to be as smooth as possible as well. These are topics I worked on and experimented with a lot, because they are some of the most painful aspects of working with Blender, where we constantly had to render pieces of animation to see the real thing (the preview is terribly slow, and we never found the right settings even with a good graphics card, 32GB of memory, a good processor, and SSD hard drives).
One of the results of my work in GIMP core should be to finally make libgimp thread-safe (my patch is still pending review, yet it already works very well for us, as you can see if you check out our branch). This should be a very good step for all plug-ins, not only animation ones.
It allowed me to work more easily with multi-threading in my plug-in, and I am pretty happy with the result so far (though I still plan a lot more work).

Another big field of work is having a GUI as easy to use, yet as powerful, as possible. We have so many issues with other software where the powerful options are just so complicated to use that we end up using them badly. That’s obviously a very difficult problem (which is why it is handled so badly in so much software; I am not saying it is because they are badly done: the solution is just never as easy as one thinks at first), and hopefully we will get something not too bad in the end. Aryeom is constantly reminding me of, and complaining about, the bugs and GUI or experience issues in my software, so I have no other choice than to do my best. 😉


You’ll also note that we work on very short animations. We actually only draw a single cut at a time in a given XCF file. From GIMP Motion, we then export images and work on cut/scene transitions and other forms of compositing in other software (usually the Blender VSE, though we hear a lot more good about Kdenlive lately, so we may give it a shot again; actually these 2 introduction videos were made in Kdenlive as a test). Since 2 cuts are totally different viewpoints (by definition), there is not much interest in drawing them in the same file anyway. The other reason is that GIMP is not made to work with thousands of high-definition layers. Even though GEGL in theory allows GIMP to work on images bigger than the memory size, this may not be the best idea in practice, in particular if you want fast renders (some people tried and were not too happy, so I tested for debugging’s sake: that’s definitely not day-to-day workable). As long as GIMP core is made to work on images, it can be argued that this is acceptable. Maybe if animations were to make it to core one day, we could start thinking about how to be smarter on memory usage.
On the other hand, cuts are usually just a few seconds long, which makes the data for a single cut pretty reasonable in memory. Also note that working on and drawing animation films one cut at a time is a pretty standard workflow and makes complete sense (this is of course a whole different deal with live-action or 3D animation; I am really discussing the pure drawn animation style here), so this is actually not that big of a deal for the time being.

To conclude, maybe you are wondering a bit about the term “cel animation”. Someday I guess I should explain more about what cel animation was, also often simply called “traditional animation”, and how our workflow is inspired by it. For now, just check Wikipedia, and you’ll already see how well animation cels fit the concept of “layers” in GIMP. 🙂

Have fun viewing!

ZeMarmot team

Reminder: my Free Software coding can be supported in
USD on Patreon or in EUR on Tipeee. The more
funding we get, the faster we will be able to bring
animation capabilities to GIMP, along with a lot of
other nice features I work on at the same time. :-)

Wilber week 2017: our report

Wilber Week 2017: the hacking place

ZeMarmot reached Barcelona airport!

Last week, the core GIMP team met for Wilber Week, a week-long gathering to work on the GIMP 2.10 release and discuss the future of GIMP. The meeting place was an art residency in the countryside, ~50 km from Barcelona, Spain, with pretty much nothing but internet access and a fireplace for heating. Of course, both Aryeom and I were part of this hacking week. I personally think it has been a very exciting and productive time. Here is our personal report (it does not include the full results for everyone, only the parts we have been involved in).

Software Hacking, by Jehan

GIMP on Flatpak

I had wanted to work on an official Flatpak build for at least 6 months, and had done some early tests back in September already, but could only find the time to do it fully this week. The build is feature-complete (this was not the case for the original nightly builds of GIMP, used as tests by Flatpak’s main developer back when it was still called xdg-app; also, these incomplete builds seem to have been unavailable for a few months now), or nearly so (since some features are still missing in Flatpak itself).

I’ll talk more on this later in a dedicated post, detailing what is there or not, and why, with feedback on the Flatpak project.
Bottom line: GIMP will have an official Flatpak, at least starting with GIMP 2.10!

“Heavy coding and arting going on at #WilberWeek” (photo by Mitch, GIMP maintainer)

Working on the help system, Windows build, and more…

I also worked on some other topics in parallel. For instance, I made a new Windows build of GIMP to test a few bugs (with my cross-build tool, crossroad, which I hadn’t used for a few months!), fixed a few bugs here and there, and also spent a good amount of time improving language detection for the help system (in particular some broken cases when your interface language does not exactly match the help you downloaded, since we don’t have documentation in as many languages as we have GUI translations). This part is mostly not merged into our code yet because it is unfinished. But it should be soon.
All in all, that was 26 commits in GIMP (and 1 minor commit in babl) last week, and a lot more things started.

Art hacking, by Aryeom

Aryeom, ZeMarmot’s director, contributed a lot of smiles (as always), art and design. Since Mitch forgot our usual “Wilber flag”, she quickly scribbled one on a big sheet of paper (see the video).

Apart from playing with the Wilber stamps created by Antenne Springborn, Aryeom also spent many hours discussing t-shirt and patch designs with Simon Budig. Here is one of her nice attempts at a very classy outlined-Wilber design:

Outlined-Wilber design by Aryeom

Funny story: she chose as a base a font called Montserrat, without realizing that the region we were in at the time was called Montserrat as well. Total coincidence!

She also worked on some missing icons in GIMP, for instance the Import/Export preferences icon.

And, time permitting, she scribbled various drawings on paper, because digital painting doesn’t mean you should forget analog techniques, right?

Social hacking: interviews and merchandise

Developer interviews

I have been wanting to bring a little more life to our communication ever since we got a new website for GIMP. We already produce more regular news; I wish we had even more. I also think we should extend to community news. So if you know of cool events around the world involving GIMP, do not hesitate to tell us about them. We may be able to turn them into gimp.org news when time permits.

Something else I wanted is to show the people behind GIMP: developers and contributors, but also the artists, designers and other creators using GIMP as a tool in their daily creative process. I have talked about these interviews for a few months now, and Wilber Week was my first attempt at making them a reality. I interviewed Mitch, GIMP maintainer; Pippin, GEGL maintainer; Schumaml, GIMP administrator; Simon, a very early GIMP developer; and Rishi, GNOME Photos maintainer and GEGL contributor.
All these interviews are soon to be featured on gimp.org!

And that’s only a start! I am planning to interview even more contributors (developers and non-developers) and also artists. 🙂

Merchandising

We regularly get requests about t-shirts or other merchandise featuring Wilber/GIMP. So we sat down and discussed what exactly GIMP’s official position on this topic should be. As you know, I am personally all for Libre Art, so this was my stance. And I am happy that we are currently willing to be quite liberal.

Yet we have a lot of values, and that was our main concern: how nice is your design? Is your merchandise made of good material? Is it produced with ecologically-conscious techniques? Do you give back to the community?… So many questions, and this is why Simon Budig will work on a ruleset for what will be acceptable GIMP merchandise that we will “endorse”. Endorsement from the GIMP project will mean that we feature your selling page link on gimp.org, and also that you are allowed to display some “endorsed by GIMP” text or logo on your own page. I have been quite inspired by the system which Nina Paley uses for the Sita Sings the Blues movie.

Well, that’s the current status, but don’t take it as an official position; wait for an official news item or page on gimp.org (as a general rule, nothing I write is in any way an official GIMP statement unless confirmed on the main website by text validated by peers).

Release hacking!

The one you’ve all been waiting for, so I kept it for the end, or close to it: what about the GIMP 2.10 release? We finally decided that it is time to get 2.10 going. We still have a few things that we absolutely need to fix before the release, but the main decision is that we should stop being blocked by unfinished cool features.

We have many very awesome features which are “nearly there”, yet mostly untouched for years. Usually it means a feature globally works but is either extremely slow (like the Seamless Clone or N-Point Deformation tools), or much too unstable (up to crashing), often also with an unfinished GUI…
Well, we will have to do a pass through our feature list and simply disable whatever is deemed non-releasable. The code will still be there for anyone to fix, but we just can’t release half-finished unstable features. Sorry.
The good news is that this suddenly divides our blocker list by 10 or so! And that should make GIMP 2.10 come along pretty soon.

But then, what of all these cool features? Will we have to wait until GIMP 3 now? Not necessarily! We decided to relax the release rules, which came from a time when all free software released major versions with new features and minor versions with bug fixes only (some kind of semantic versioning applied to end-user software). So now, if a cool new feature comes along, or if the currently deactivated features get finished, we are willing to make minor releases with them! Yes, you read that right. This makes things much more exciting for developers, since it means you won’t have to wait years to see your changes in GIMP. It also makes our contribution process much more robust against the unfinished-patch-dropping issue. Of course the libgimp API (used by plug-ins) still stays stable. Change does not mean breaking stability!
This was also summed up in an official gimp.org news item recently.

I am so happy about this because I have been pushing for this change in our release process for years. Actually, the first time I proposed it was at Libre Graphics Meeting 2014 in Leipzig (as I explained in my report back then). I call it a rolling release, where we regularly release new stuff, even if just a little. This time though, the topic was brought up by Mitch himself.

People hacking

The conclusion of this week is that it was very nice. As Simon Budig put it in his interview: I mostly stay for the people. I think it is the same for us, and this kind of social event is the proof of it. The GIMP project is, above all, made of people, and not just any people: nice people! Such an event is a good occasion to meet physically from time to time, and not just through pixels and bits exchanged over the internet.
We also spent a few hours visiting Barcelona, in particular the Sagrada Família, and did a few hikes in Montserrat.


Panorama shot featuring several members of GIMP and GEGL (photo by Aryeom)

Financial hacking: ZeMarmot

As a conclusion, we remind you that ZeMarmot is the way for me to work full-time on GIMP software development! We could do nearly as much every week if our project had the funding to sustain ourselves while hacking Free Software. So if you wish to see GIMP released faster with many cool features, don’t hesitate to click our Patreon link (for USD funding) or the Tipeee one (EUR funding).

See you soon!

ZeMarmot end-of-year report

Hi everyone!

How are your last days of 2016 going so far? It’s been a strange year, hasn’t it? Well, let’s not digress, and focus on ZeMarmot instead, shall we? First, be aware that our dear director, Aryeom Han, is getting a lot better. She was also really happy to get a few “get well” messages and says thanks. Her hand still aches sometimes, in particular during straining or long activities, but on the whole, she says she can draw fine now.

A reminder about the project

I will discuss below what was done in the last months, but first, because it is customary to do so at the end of the year, let me remind you that ZeMarmot is a project relying on funding by willing individuals and companies, with 2 sides: art and software.

I am a GIMP developer, the second biggest contributor in terms of number of commits over the last 4 years, and I also develop a plug-in for digital 2D animation with GIMP, which Aryeom is using on ZeMarmot. I want to get my plug-in to a releasable state by GIMP 2.10.

Aryeom is using the software to fully animate, draw and paint a movie, based on an original story which I wrote a few years ago, about a marmot who travels the world for reasons you will learn when the film is released. 🙂 Oh, and the movie will be Creative Commons BY-SA of course!

Up to now, our initial crowdfunding (~14 000 €) has allowed us to pay several months of salary to Aryeom. I have chosen not to earn anything for the time being (not because I don’t like being paid, but because we cannot afford it with the current funding). Some of it remains, but it is kept to pay the musicians.

Now we mostly rely on monthly crowdfunding through the Patreon (USD funding) and Tipeee (EUR funding) platforms. But all combined, that’s about 180 € a month, which amounts to barely more than a day of salary (and with non-wage labour costs, not all of it goes to Aryeom). 1 day per month to make a movie: that’s far from enough, right?

My dream? I wish we could someday consider ourselves a real studio, with many paid artists, producing cool Libre Art movies shown in cinemas (yes, in my crazy dream, Creative Commons BY-SA films are on the big screen!), and developers paid to improve Free Software so that our media-making ecosystem gets even better, for everybody to use!
But right now, that’s no more than an experiment mostly done voluntarily.

Do you like my dream? Do you want to help us make it real? You can, by helping the project financially! Whether a symbolic coin or a bigger donation, any push actually helps us make things happen!

Click here to fund ZeMarmot in USD on Patreon »
Click here to fund ZeMarmot in EUR on Tipeee »

Not sure yet? Feel free to read more below and pitch in at any time later on!

Note that not only the money but also the number of supporters is of great help, since it shows support to bigger funders; and for us, that’s good for morale too! A good monthly crowdfunding can also help us find producers without having to abandon any of the social and idealistic aspects of the project (note that we have already been contacted by a production company who were interested in the film after the crowdfunding, but we refuse to compromise too much on the ideals).

The animation

We illustrated Aryeom’s work with 2 videos presenting extracts of her work in progress. In this first video, she shows different steps in animating a few cuts of the main character:

In this second video, we examine some cuts of another character, the golden eagle, the marmot’s main predator:

There is a lot that can be said, from the few minutes shown, about the work of an “animator”. Many pages of books on the art of animating life could be filled with such examples! We will probably detail these steps in longer blog posts, but I will still explain the basics here.

Animating = giving life

Aryeom says it in the first video, and you can see it in several examples in both videos: when your character moves from A to B, you are not just “moving” it. You have to give the impression that the character is acting on its own, that it is alive, inhabited, in other words: animated.

It is no surprise that one of the most famous books on animation is called “The Illusion of Life” (by Frank Thomas and Ollie Johnston), also Aryeom’s bedside book. Going this way has a lot of ramifications for the animator’s job.

Believable, not realistic

Before we continue, I have to make sure I am understood. Even though realistic animation is also a thing (Disney comes to mind), making a good animation is not necessarily about making it “realistic”, but rather about making it “believable”.

It is very common to exaggerate some movements for various reasons (often because it is funnier, but also because an exaggerated movement may sometimes look even more believable than the realistic version!), or to do the opposite (bypassing anatomically-correct movements). There are no bad reasons, only choices to achieve what you want.

Now that this is clear, let’s continue.

You can’t just “move an arm”

The very classical example given to beginners is often: “lift your right arm up”. That’s it? Did you only move your arm while the rest of the body stayed unchanged? Of course not. To stay in balance, your body shifted to the left as a counterweight; the right shoulder lifted whereas the left shoulder lowered; and so on.
A lot of things change in your body with this simple action. Even your feet and legs may move to compensate for the shift of the center of gravity. As a consequence, you don’t “move your arm”, you “move your whole body” (into a configuration where your arm is up).

This is one of the first reasons why, to move a single part of a body, you cannot reuse previous drawings and change just this part. No, you properly redraw the whole body, because if you are to fake life, you may as well do it well.

Note: when you say “animation” to computer people, their brains usually immediately wire to “interpolation”, the mathematics of computing (among other things) intermediate positions. Because of what I said above, this mathematical technique is in reality barely used in traditional (even when digital) animation. It is used a lot more in vector and 3D animation, but its role should definitely be minimized compared to the animator’s work even in these fields. In vector/3D, I would say that interpolation only replaced the inbetweener role (some kind of “assistant” who draws the non-keyframe images) from the traditional animation world.
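For the curious, here is all there is to basic interpolation, as a toy sketch (obviously not how any actual animation package implements it):

# Two keyframes: the x position of a character at frame 0 and frame 12.
def lerp(a, b, t):
    # Linear interpolation: t = 0 gives a, t = 1 gives b.
    return a + (b - a) * t

key_start, key_end = 10.0, 58.0
for frame in range(13):
    print(frame, lerp(key_start, key_end, frame / 12.0))

The inbetweener’s art is precisely everything this naive computation misses: arcs, acceleration, the whole body reacting to the movement.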

Timing, silence and acceleration

You often hear it from actors, poets, writers, singers, anyone who gives some kind of life: silence is as important as noise for their art. Well, I would also add acceleration and its symmetrical counterpart, deceleration.

You can see this well in the first example of video 1 (at 0’41). Aryeom was unhappy with her running marmot, whose speed was nearly linear: the marmot arrived too fast at the flower, and while he slowed down, he barely did. In her final version, the marmot arrives much faster, with a much more visible slowdown, making the movement more “believable” (back to the basics!).

The eagle flight in video 2 (at 1’09) is another good example of difficult timing, as Aryeom went through 2 stages before finding the right movements. With the wrong timing, her flying eagle feels heavy, as if it has difficulty lifting itself into the air (what she called her “sick” eagle in the video); then she got the opposite, an eagle she felt was more sparrow-like, too light and easily lifted. She was quite happy with the last version (obtained after 8 attempts) though, and in particular with the very last bit of the cut, when the eagle goes into glider mode. Can you spot it? This is the kind of difference which lasts just a few hundredths of a second, barely noticed, yet on which an animator can spend a significant amount of time.

Living still images (aka “line boil”)

A common and interesting effect you find in a lot of animation is the shaking still image. You can see it in the second video (at 0’33), in the first cut presenting the proud eagle standing still on his mountain. Sometimes you want to show a non-moving situation, but just sticking to a still image feels too weird, because in real life there is no perfect stillness. Even if you make every effort to stay still for a few seconds, you will imperceptibly move, right? So how do you reproduce this, the attempt to stay perfectly still while that is impossible? Well, commonly animators just redraw the same image several times, because just as you can’t stay still, you can’t draw perfectly identical images twice either (you can get very close by trying hard, though), and loop them.

You usually don’t do this for everything. Typically, you accept much more easily that elements of the background stay still. But it is common for your living characters, or sometimes to make the main elements you want to highlight stick out of the background.

Avoiding cycles

Now, loops are very common in animation. But the higher the quality you aim for, the fewer loops you use. Just as stillness does not exist in life, you never repeat exactly the same movement twice. So even though loops seem to be the first thing many animators are taught (the famous “walking cycles”), you don’t actually use them in your most beautiful animations. When your main character walks, you will likely re-animate every step.

Of course, it is up to you to decide where to stop. Maybe for a flock of birds in the background, far away, just looping (and even copy-pasting the birds to multiply them!) may be enough. This is all a matter of taste, time, and the money one is ready to spend on animator-time, obviously.

Camera work

This part has not really started yet, even though it has already been planned (from the storyboard step). But since Aryeom has started on it (first video at 1’06), let’s give some more info.

Panning and tilting

In animation, where the movement is by essence 2D as well, these refer respectively to a horizontal and a vertical camera movement. Why do I need to say “in 2D animation”? Because in more traditional cinema, these would rather correspond to a tracking shot done on rails, whereas panning and tilting refer to angle movements of a static camera. Different definitions for different references. Note that even though 3D animation could use either, it mostly kept the animation vocabulary.

This gives you a good hint at how characters and backgrounds are managed separately. If you have a character walking, you will usually create a single image of the background, much bigger than the screen size, and your camera will move over it, along with the character layers. With fully digital animation, this usually means working on image files much larger than the expected display size; in traditional physically-drawn animation, it means using very large sheets of paper (or often even sticking sheets together). As an example, at a Ghibli exhibition, the background of a flying cut from “Kiki’s Delivery Service” was on display, and it took up a full wall of a very large room.

Animation is a lot of drawing

I will conclude the section on animation by saying: that’s a bloody lot of drawing!

As you can see, Aryeom spends so much time redrawing the same cuts to get the perfect movement that sometimes she goes a bit crazy and thinks she is drawing the wrong animal. The story about the pigeon is a true story, and I am the one who told her to add it to the video because it was so funny. One day, she came to me and showed me the cut she had been working on for days. Then she asked me: “doesn’t it look like a pigeon?”
Had I not stopped her, she was ready to start over.

This is an art where you redraw even when you want to show stillness, and where you forbid yourself from using too many shortcuts like loops. So what do you expect: you probably have to be a little crazy from the start, no? 😉

There are actually several “schools”, and some of them go for simplicity, shortcuts and reuse. Japan is well known for Studio Ghibli, which goes the hard way as we do, but this is quite a contradiction within the country’s industry: the whole rest of Japan’s animation industry is based on animating as little as possible. Haven’t they proved so many times that it is possible to show a single still image for 30 seconds, add sounds and voices, then call it an animation?

Sometimes it is just a choice or a focus. Some animation films focus on design rather than believable movements, or on the scenario rather than wonderful images. For instance, I don’t think you can say that The Simpsons has wonderful graphic appeal and realistic animation (they even regularly make meta-jokes inside episodes about the quality of their animation!), but they have the most fantastic scripts, and that’s what makes their success.
So in the end, there is no right choice. Everyone should just go the way they wish for a given project.

And this is the way we are going for ZeMarmot!

Music

Just a very short note on music. We have started working with the musicians, remotely and at a physical meeting on December 1st. We have a few extracts of “first ideas”, but they wouldn’t do justice to the quality of the work.

I think this will have to wait for much later.

Software

I went on so long about animation that I hope I have not lost half of the readers already! If you are still reading, I’ll tell you what I worked on these last months.

GIMP

I am trying to do my share on GIMP, to improve it globally, to speed up the release of 2.10, and because I love GIMP. I count 259 commits authored in 2016 (60 in the last 3 months), plus 48 as committer only (i.e. I am not the author, but the main reviewer of a patch which I pushed into our codebase). I commented on 352 bug reports in 2016, making it a habit to review patches when possible.

I have a lot of projects for GIMP, one of the grandest being a plug-in management system (to install, uninstall and update plug-ins easily from within GIMP, with a backend side for plug-in developers to propose extensions), but also a lot of ideas about the evolution of the GUI (these should be discussed topic by topic in later blog posts).

I have also started to experiment with Flatpak so that we can provide an official GNU/Linux package for GIMP. For years, our official stance has been to provide a Windows installer, an OSX package, and for GNU/Linux… yeah, grab the source and compile, or use the outdated version from your package manager! I think this situation can be considerably improved with Flatpak and the similar technologies born in recent years.

Animation in GIMP

As explained already, I took the path of writing it as a plug-in rather than a core feature. Anyway, GIMP is only missing a single feature that would make a plug-in nearly as powerful: bi-directional notification (basically, plug-ins currently don’t get notified when pixels are updated, layers are renamed, moved or deleted, images closed…). That’s actually something I’d like to work on (I already have a stash somewhere with WIP code for this).

The animation plugin currently has 2 views:

Storyboard view

GIMP’s animation plug-in: storyboard view

This actually corresponds to the very basic animation logic of 1 layer = 1 frame, which is very common among people making animated GIFs (or MNG/WebP now), except with a nice UI to set each image’s duration (instead of tagging the layer names, a very nasty user experience, hidden and found only on some forums or in old tutorials), do basic compositing, and even comment on vignettes if need be. All this with a nice real-time preview!
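For those who never ran into that layer-name convention: when exporting a GIF, the frame delay (and optionally the disposal mode) is read from parentheses in each layer name, something like:

Background (2000ms) (replace)
Blink (150ms) (combine)
Smile (150ms) (combine)

Functional, but hardly discoverable; hence the dedicated UI.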

Cel-Animation view

GIMP’s Animation plug-in: cel-animation view

This is the more powerful view, where you can compose a frame from several images, often at least a background and a character. In the above example, the cut is made from 3 elements composited together: the background, the eagle and the marmot.

You may be more familiar with the “timeline” style of view, which is basically the same thing except that frames are displayed as horizontal tracks. I tried this too, but quickly shifted to this much more traditional view from the animation world, usually called an x-sheet (eXposure sheet). I find it much more practical: it allows easier commenting and scrolling, and is especially more organized. There is a lot you don’t see in this screenshot, but this view really targets a professional and organized workflow. In particular, with properly named layers, you can create animation loops and line tests of dozens of images, with various timings, in a few clicks.

I am also working on keyframing for effects (using animated GEGL operations) and camera movements.

There is a lot done, but definitely a lot more planned, which takes time. I will post more detailed blog posts and push the code to a branch very soon (probably before this year’s Libre Graphics Meeting).

That’s all, folks!

And so that’s it for this end-of-year report from the ZeMarmot team! I hope you appreciate the project. If you do, and can spare a dime (and haven’t done so yet), I remind you that the project accepts any amount at the links given above. Some people give 1 Euro, others 15 Euros per month. In the end, you are all giving life to ZeMarmot!

Thanks and have a great year 2017!

Don’t be a stranger to GIMP, be GIMP…

I can try to do more coding, more code reviewing, revive design discussions… that’s cool, yet never enough. GIMP needs more people: developers, designers, community people, writers for the website or the documentation, tutorial makers… everyone is welcome in my grand scheme!

Many of my actions lately have been aimed at gathering more people, so when I heard about the GNOME newcomers initiative during GUADEC, I thought it could be a good fit. Thus, a few days ago, I had GIMP added to the list of newcomer-friendly GNOME projects, with me as the newcomers mentor. I’ll take this occasion to remind you of all the ways you can contribute to GIMP, and not necessarily as a developer.

Coding for GIMP

GIMP is not your random small project. It is a huge project, with too much code for any sane person to know it all. It is used by dozens of thousands of people: Linux users of course, but also on Windows, OSX, BSDs… A flagship of Free Software, some would say. So clearly, coding for GIMP can be scary and exciting at the same time. It won’t be the same as contributing to most smaller programs. But we are lucky: GIMP has very sane, good quality code. Now let’s be clear: we have a lot of crappy pieces of code here and there, some untouched for years, some we hate to touch but have to sometimes. That happens with any project this size. But overall, I really enjoy the quality of the code, and it makes coding in GIMP a lot more enjoyable than in some less-cared-for projects I have had to hack on in my life. This is also thanks to the maintainer, Mitch, who will bore you with syntax, spaces and tabs, but also with his deep knowledge of GIMP’s architecture. And I love this.

On the other hand, it also means that getting your patch into GIMP can be a little more complicated than in some other projects. I have seen a lot of projects which would accept patches in any state, as long as they do more or less what they say they do. But nope, not in GIMP. It has to work, of course, but it also has to follow strict code quality, syntax-wise but also architecture-wise. Also, if your code touches the public API or the GUI, be ready for some lengthy discussions. But it is all worth it. Whether you are looking to improve an already awesome software, add lines to your resume, improve your knowledge or experience of programming, or simply learn, you will get something meaningful out of it. GIMP is not your random project, and you will have reasons to be proud of being part of it.

How to choose a first bug?

Interested already? Have a look at the bugs that we think are a good fit for newcomers! Now, don’t feel obligated to start there. If you use GIMP and are annoyed by specific bugs or issues, these may well be a much better entrance. Personally, I never fixed a random bug as a first patch. Every single first patch I made for Free Software was for an issue I had experienced. And that’s even more rewarding!

Oh, and if you happen to be a Windows or OSX developer, you will have an even bigger collection of bugs to look into. We are in even more need of developers on non-Linux platforms, which means we have a lot more bugs there, but also that a good half of them are probably easy to handle even for new developers.

Finally, crashes and bugs which output warnings are often pretty easy, since you can usually investigate them directly in a debugger (gdb for instance), which is also a good tool to learn if you have never used one. Bugs related to a graphical element, especially one with text, are a good fit for new developers too, since you can easily grep the text to search through the code.

Infrastructure

Now there are whole other areas where you could contribute. These are unloved and less visible areas, which is sad, and I wish to change this. One of them is infrastructure! GIMP, like many big projects, has a website, build and continuous integration servers, wikis, mailing lists… These are time-consuming and have few contributors.

So we definitely welcome administrators. Our continuous integration regularly runs into issues. As we speak, the build fails, not because of GIMP, but because the minimum requirements for our dev environment are not met. At times, we have had a failing continuous integration for months. The problem is simple: we need more contributors to share the workload. Currently Sam Gleske is our only server administrator, and as a volunteer, he has only limited time. We want to step up to the next level with new people to co-administrate the servers!

Writers

While we got a new website recently (thanks especially to Patrick David!) and more frequent news (here I feel we have to cite Alexandre Prokoudine too), we’d still welcome new hands. They could be yours!

We need documentation for the upcoming GIMP 2.10 release, but also really good tutorials under Free/Libre licenses. The state of our tutorials on gimp.org was pretty sad before the new website, to say the least. Well, now it’s pretty empty.

Of course, translations are a constant need too. GIMP is not doing too badly here, but if that’s what you like, we could do even better! For this, you will want to contact the GNOME translation team for your target language directly.

Designers

And finally, my pet project: I repeat this often, but I think a lot of GIMP’s workflow would benefit from a designer’s view. If you are a UX designer and interested, be welcome to the team too!

So here it is: all the things you could do with us. Don’t be scared. Don’t be a stranger. Instead of being this awesome project you use, it could be your awesome project. Make GIMP! 🙂