Smart Colorization in GIMP

As part of the Image team at the GREYC lab (CNRS, ENSICAEN, University of Caen), I implemented the "fill by line art" algorithm in GIMP, also known as "Smart Colorization". You may know this algorithm from G'Mic (developed by the same team), so when they offered me the chance to work with them, I wanted to implement it in GIMP core. Thus it became my first assignment.

The Problem

The concept of filling by line art is simple: you draw a shape with a black pen, say an approximate circle, and you want to fill the inside with a color of your choice. You could already do this, more or less, with the bucket fill when filling by color similarities. Unfortunately that has 2 major issues:

  1. If ever the line art is not properly closed ("holes" in the lines), the color leaks outside. Sometimes this is a small painting mistake, yet if you don't see the holes (in the worst case, a single pixel), wasting time hunting for them is no fun. Other times it may even be an artistic choice (a "rougher" line style for instance, or any of the billion reasons why you'd want non-closed lines).
  2. You usually get very ugly non-colored pixels next to the line borders because of interpolation, aliasing or whatever (unless you draw with very hard lines), and this is not an acceptable result.
2 main problems of Bucket Fill by color similarities

As a consequence, probably no digital colorist ever uses the bucket fill directly. There are various other methods, often involving the fuzzy select tool (or other selection tools), growing/shrinking the selection, then bucket-filling it. Sometimes just painting directly is best. Attending one of Aryeom's workshops on colorization is absolutely amazing and enlightening, as she can teach you a dozen different methods, and she herself does not always use the same one (she would say it depends on the situation). On the ZeMarmot project, I also made some custom quick-colorization Python scripts which Aryeom has used for years now and which do a pretty decent job of optimizing this tedious work (though it's still tedious!).

The algorithm

The research paper is called "A Fast and Efficient Semi-guided Algorithm for Flat Coloring Line-arts". I worked from C++ code by Sébastien Fourey, with input from both him and David Tschumperlé, the two co-authors of the paper.

For our needs, it basically has 2 main steps:

  1. Closing the line art, which is done by finding "key-points", line edges with extreme local curvature (we estimate these are likely the ends of unfinished lines), then closing the lines by joining key-points according to "quality" criteria such as nearly opposite angles or maximum distances. Lines can be closed either with splines (i.e. curves) or with straight segments (a small sketch of the curvature idea follows this list).
  2. Actually colorizing the created closed regions, "eating" a bit under the line art pixels to ensure there are no uncolorized pixels near the borders.
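To make the key-point idea a bit more concrete, here is a minimal Python sketch of the curvature test (my own illustration, not the paper's nor GIMP's actual code; the window and angle_threshold parameters are invented for the example). Walking along an ordered contour of the line art, a point where the direction turns sharply within a small window is a candidate stroke end, since the contour doubles back on itself at the tip of an unfinished line:

    import numpy as np

    def keypoint_candidates(contour, window=5, angle_threshold=2.0):
        # contour: ordered list of (x, y) points along a line art outline.
        pts = np.asarray(contour, dtype=float)
        n = len(pts)
        candidates = []
        for i in range(n):
            v_in = pts[i] - pts[(i - window) % n]    # direction arriving at point i
            v_out = pts[(i + window) % n] - pts[i]   # direction leaving point i
            turn = np.arctan2(v_out[1], v_out[0]) - np.arctan2(v_in[1], v_in[0])
            turn = (turn + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
            if abs(turn) > angle_threshold:          # ~115 degrees by default
                candidates.append((pts[i][0], pts[i][1]))
        return candidates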

You can see that the algorithm deals with both issues raised earlier (which I conveniently numbered in the same order 😉)! Nevertheless I implemented only the first step of the algorithm and went with my own solution for the second step (still based on similar concepts though), because of usability issues.

Here is what bucket-filling by line art looks like in GIMP:

Bucket Fill by line art detection

I am not going to re-explain the algorithm in full, so if you are interested in the technical details, I suggest you just read the research paper, which is quite clear, with nice self-explanatory images too. If, like me, you are more "fluent" in code than in math equations, you may also look at my implementation in GIMP, which is mostly self-contained in gimplineart.c; in particular, start with gimp_line_art_close().

Below I will focus more on the improvements we made to the algorithm, which you won't even find in the paper. Remember that we also worked with the animator/painter Aryeom Han (the director of the ZeMarmot project) as artist advisor, so that the implementation is actually useful for real work.

Note: this article is mostly about the more technical side of things. If you are just interested in how to use the tool efficiently, wait a few days or weeks: we will make a short yet (hopefully) exhaustive video showing the usage of every option.

Step 1: line art closure

To get a basic idea of what's going on under the hood, here is an example (using an unfinished sketch by Aryeom; sketches are good examples for this algorithm, as they are full of holes!). The leftmost image is the scribble as-is, the middle one is how the algorithm transforms it (you never see this version of the image!), and the rightmost one is a quick attempt of mine to flat-colorize it (using the bucket fill tool only, no brush, nothing!) in less than 1 minute (no joke, I timed it!).

From line arts to colorized sketch, with internal closure representation in the middle

Note: of course, do not look at this result from a "finished work" point of view. Colorization is often done in several steps, the first step being flat colors. This tool only works on this first step, and the image may still need additional tweaks (all the more with this example, as I ran it on a very rough sketch).

Estimating a local line width (improved from the paper)

I very quickly noticed that a part of the algorithm seemed a bit suboptimal. As said above, we need to detect key-points based on line curvature. This was a problem with big brushes (either because you paint very fat lines on a small image, or because you paint very high resolution artwork, where even your thin lines may be dozens of pixels wide). The ends of such lines may have very low curvature and go undetected. In the original paper, the proposed solution was:

For the sake of independence with respect to the image resolution, a second step allows to reduce the width of the strokes to a few pixels, if necessary, using a morphological erosion. The radius to be used for this erosion is set automatically by estimating the width of the strokes found in the drawing.

Section ‘3. Pre-Processing of the anti-aliased image’

Unfortunately, computing a single estimation of the line width for the whole drawing assumes all the lines in a drawing have the same thickness! Just ask a calligrapher how ridiculous that sounds! 😉

As a first consequence, even though it still worked many times, it could add previously non-existing holes (when eroding a line thinner than the average line width). The research paper acknowledged this issue but discarded it, assuming the closure step (which necessarily follows) would close the new hole again anyway:

One should note that disconnections that may result from the morphological erosion step applied here, which remain rare, will be corrected anyway by the closing step to be applied thereafter.

Section ‘3. Pre-Processing of the anti-aliased image’

Yet in very early tests which Aryeom did with the first version of my implementation, she quite quickly encountered cases where the closing step did not close the created holes. In at least one instance, we even had a perfectly closed line art which leaked the color fill: the exact reverse of why the algorithm was created! Quite paradoxical! To make things worse, while finding micro holes in poorly closed lines can be hard, finding non-existing holes (created only in an internal, invisible representation) is close to impossible.

The ironic part is that the erosion did not even perform its goal that well, as it was very easy to create fat lines where no key-points got found despite the erosion. In the end, the global width estimation + erosion idea added more problems than it solved.

Conclusion: this was bad.

After much discussion with David Tschumperlé and Sébastien Fourey, we came up with an evolution of the algorithm: computing an approximate local line width for every line art pixel (instead of one single line width for the whole drawing), simply based on a distance map of the line art. The test deciding whether an edgel's curvature makes it key-point material is then made relative to the local line width (instead of eroding and assuming a theoretical global line width).
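As an illustration of the idea, here is a sketch of mine with NumPy/SciPy (not the actual GIMP code, which works on GEGL buffers in gimplineart.c): the Euclidean distance transform of the line art mask gives, at every line pixel, the distance to the nearest background pixel, i.e. roughly half the local stroke width, and the key-point test can then be scaled by this value instead of a global estimate.

    import numpy as np
    from scipy import ndimage

    def local_line_width(line_mask):
        # Distance from each line pixel to the nearest background pixel
        # approximates half the local stroke width.
        return 2.0 * ndimage.distance_transform_edt(line_mask)

    # Toy example: a 3-pixel-wide horizontal stroke.
    mask = np.zeros((7, 20), dtype=bool)
    mask[2:5, :] = True
    print(local_line_width(mask)[3, 10])   # 4.0 at the stroke center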

Not only did it perform a lot better (it did not create unfillable holes in perfectly closed areas, it detected end points on very fat strokes better, and it allowed for variable stroke widths in a single drawing, whether as a stylistic choice or simply because perfection is hard), it even ended up faster!

As an example, the original version of the algorithm would have failed to detect end-points for, and therefore to close, this shape with such fat lines. The updated algorithm has no such problem:

» For those interested, see the commit «

Parallelization for fast processing

Line art closure was clearly the most time-expensive step. Though it performs quite well on Full HD or even 4K images, it can still take half a second on my laptop. And half a second for an interactive tool is huge! On very big images (which are not uncommon at all nowadays), it may even take several seconds.

So I parallelized this whole process in order to run it as early as possible (as soon as the Bucket Fill tool is selected). Since people are not robots, this makes the interaction quite seamless: you may not even notice the processing.
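The pattern is classic background precomputation; a toy sketch (with hypothetical names, and a sleep standing in for the real closure computation):

    from concurrent.futures import ThreadPoolExecutor
    import time

    def close_line_art(drawable):
        time.sleep(0.5)                  # stand-in for the expensive closure step
        return f"closure of {drawable}"

    executor = ThreadPoolExecutor(max_workers=1)

    # Kick off the work as soon as the Bucket Fill tool is selected...
    future = executor.submit(close_line_art, "active layer")

    # ...the user then aims and picks a color, so that by the time of the
    # actual click the result is usually already computed:
    print(future.result())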

Available line art “Source” in the tool options

Partially for the same reason, you may notice that the "Source" option proposes much more than in all other tools ("Sample merged" versus the active layer): here you can also choose the layer above or below the active one. There was both a logical reason (you do not necessarily want color layers to count as line art) and a performance one (the software does not have to continuously recompute the closure at every edit).

Step 2: color filling

Making the algorithm interactive and error-safe

The original algorithm filled all zones at once, using a watershed algorithm over the whole image.

I chose to just drop this part of the algorithm. It was mostly a usability choice. The first time I saw the images demonstrating the algorithm in G'Mic, I found the result cool, but the GUI seemed completely impractical. Still, I am not the painter in the team, so I showed it to Aryeom. Her first words after seeing how it worked were "I will never use this" (or something along those lines!). Note well that we are not talking about the end result, but about the human-computer interaction. As we said, colorizing is tedious, but if smart colorization is even more tedious, then why use it?

So what's tedious? There were several interaction variants in G'Mic: you can for instance let the algorithm color every zone randomly, then select each flat color zone for recolorization; there is also the possibility of guiding the algorithm with color spots, hoping it performs as you want. I also suggest this interesting article by David Revoy, who contributed to the original version of the algorithm.

Smart colorization in G'Mic (the "Colorize lineart [smart coloring]" filter), a bit overwhelming…

Maybe these are interesting workflows at times, yet this is not always what you want to do with your drawings.

For one, it takes a lot of steps to color a single drawing. For an animation (i.e. the ZeMarmot project), this is even worse, as we do it for dozens or hundreds of layers.
A second reason is that such workflows assume that the algorithm is never wrong. And we know that is a very bold assumption! Undesired results may happen. This is not necessarily a problem: what you ask of such an algorithm is to work well most of the time, as long as you can always fall back to more down-to-earth methods in the rare edge cases. But if you have to undo the colorization of the whole image, change the color spots while trying to guess what went wrong, then try again, using a "smart" algorithm is only a waste of time.

Instead we needed a colorization process which is interactive and progressive, so that errors of the algorithm can easily be bypassed by temporarily reverting to traditional techniques (just for some zones). This is why I based my interaction on the existing bucket fill tool. Colorization (by line art) works as it always has: you click in a zone and see it being filled in front of your eyes… one zone at a time!

It is straightforward and very error-safe: if the algorithm doesn't detect the zone you are currently working on well, just undo and fix this zone only.
Moreover, I really wanted to avoid creating a new tool if possible (there are so many already). This is basically the same use case as the Bucket Fill tool, right? It's just a different algorithm deciding how to fill, that's all. So it makes perfect sense as an option of this same tool.

Therefore my solution to replace watershedding the whole image at once is to again use a distance-transform map of the lines (both the original line art and the invisible closure lines). As a reminder, this is a decent estimation of the local line thickness. When the bucket fill occurs, I flood the color a few pixels further (up to half the local width of the line art), thus ensuring we don't leave ugly, uncolorized-looking holes near the borders. Simple, fast and efficient.
This is somewhat a local watershed, but simpler and much faster; it also allows a "Max flooding" option to keep the color under control.
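Here is a rough sketch of this local flooding with NumPy/SciPy (function and parameter names are mine, not GIMP's), using boolean masks for both the line art and the region just filled:

    import numpy as np
    from scipy import ndimage

    def grow_fill_under_lines(filled, line_mask, max_flood=None):
        # Half the local stroke width at every line pixel.
        half_width = ndimage.distance_transform_edt(line_mask)
        # Distance from every pixel to the freshly filled region.
        dist_to_fill = ndimage.distance_transform_edt(~filled)
        # Colorize line pixels which are closer to the fill than half
        # their local line width, optionally capped by "Max flooding".
        grow = line_mask & (dist_to_fill <= half_width)
        if max_flood is not None:
            grow &= dist_to_fill <= max_flood
        return filled | grow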

Using smart colorization without the “smart” part!

One very cool usage of this new algorithm is to bypass the first step, i.e. not compute the line art closure at all! This can be very useful when your line style has no holes (a simpler design with strong lines), so you don't need any algorithmic help for line closure. This way, you can fill colors in a single click without caring about over-segmentation or processing time.

To do so, just set the "Maximum gap length" option to 0. Here is an example of a very simple design (left), filled by similar colors (middle) vs. by line art detection (right), in a single click:

Left: AstroGNU by Aryeom – Middle: Fill similar colors – Right: Fill by line art detection

See this kind of "white halo" between the red color and the black lines in the middle image? The quality difference with the right one is striking, right? While the historical "Fill similar colors" is absolutely unusable (by itself) for quality colorization work, the new line art detection mode is perfectly usable.

Bucket Fill enhanced for all cases!

As a bonus, I also improved the interaction of the Bucket Fill tool for all algorithms. It is mostly similar, just a bit better. What changed?

Click and hold

A main issue to take care of was over-segmentation. You may call an algorithm "smart" or "intelligent", but this won't make it "human-smart". In particular, here it creates artificial lines based on geometry, not on actual recognition of shapes or meaning, and even less by reading the painter's thoughts! So it will often over-segment, i.e. create more artificial zones than desired (if you paint with a crenellated brush, it can be really bad). Let's take the previous image as an example:

Closure is “over-segmented”

You can clearly see the issue. The dog, for instance, is most likely meant to be a single color, but it ends up as about 20 zones! In such a case, with the former bucket fill interaction, you would end up clicking 20 times (instead of ideally once). Counter-productive, right? So I updated the Bucket Fill tool to work more like a brush: you can click, hold, and move the cursor over various regions. This change works both with the line art algorithm and when filling similar colors (only for selection fill is it meaningless). This makes filling over-segmented areas much less of a problem, as you can still do it in a single stroke. Not as fast as a single click, yet quite quick.
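Conceptually, the click-and-hold interaction boils down to something like this sketch (a hypothetical structure, not GIMP's actual event handling): while the button is held, every new zone the pointer crosses is filled exactly once.

    class DragFill:
        def __init__(self, labels, fill_cb):
            self.labels = labels      # per-pixel zone label, 0 = line pixels
            self.fill_cb = fill_cb    # callback filling one zone
            self.filled = set()       # zones already filled in this stroke

        def on_motion(self, x, y):
            zone = self.labels[y][x]
            if zone != 0 and zone not in self.filled:
                self.fill_cb(zone)
                self.filled.add(zone)

    # Dragging across two zones separated by a line fills both in one stroke:
    drag = DragFill([[1, 0, 2]], fill_cb=lambda z: print("fill zone", z))
    for x in range(3):
        drag.on_motion(x, 0)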

Color picking

Another nice change was to allow easy color picking with Ctrl-click (no need to switch to the Color Picker tool). All paint tools could already do this, but not the Bucket Fill, even though it also works with colors. Being able to quickly change color (by picking a nearby pixel, a very common practice among professional digital painters) makes the Bucket Fill tool very productive.

With these few changes, the Bucket Fill is now a first class citizen (even if you don’t use the smart colorization).

Limits and possible future works

Algorithm targeted at digital painters

Smart colorization works on "line art". This algorithm won't perform well on random "non line art" images, and in particular it is not made to work on photography. Though I'm sure some people will find interesting ways to use it elsewhere, as far as I can see, it is designed for digital painters only.

More than the black-lines-on-white-background use case?

Lines are detected in the most basic way possible, with a threshold either on the alpha channel (if you work with transparent layers) or on the grayscale value (particularly useful when working on white paper scans, for instance).
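In NumPy terms, the detection amounts to something like this (a sketch of mine; the threshold values are invented for the example):

    import numpy as np

    def line_art_mask(rgba, alpha_threshold=0.25, value_threshold=0.35):
        # rgba: float array of shape (height, width, 4), values in [0, 1].
        alpha = rgba[..., 3]
        if np.any(alpha < 1.0):
            # Layer uses transparency: opaque-enough pixels are lines.
            return alpha > alpha_threshold
        # Fully opaque layer (e.g. a paper scan): dark pixels are lines.
        gray = rgba[..., :3].mean(axis=-1)
        return gray < value_threshold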

Therefore the current version of the algorithm may have difficulties detecting the line art, for instance if you scan a drawing done on paper that is not completely white. Or say you simply want to draw with light lines on a dark background (everyone is free!). I am not sure how often this occurs, nor whether we should really pile the tool up with color options. We'll see.

More optimization

Though I said it is very usable on many images of reasonable size, I consider this new algorithm still a bit slow on very big images (not so uncommon in the graphics printing industry, where you often need higher resolutions than in the screen industry), even despite all the multi-threaded code. So I would not be surprised if, in some cases, you go back to your old-style colorization techniques.

I hope to come back to my code soon and optimize it further.

Not so nice color borders

The color borders do not have the nice "texture" you get when colorizing with brushes. For instance, let's look closer at our original example here.

The parts where the border of the color shows will likely need editing. I added an "Antialiasing" option, but this is clearly not the real solution in most cases either, and a manual edit with a brush after color filling may be necessary.

The worst case is when you want to remove the lines afterwards. Aryeom sometimes uses a drawing style where the lines are only there for the first steps, then removed for the final render (a whole dreamy scene of ZeMarmot is actually done like this); then you'd want perfect control of the colorization border quality. Here is one such example, where the final painting is made only of colors, no border lines:

An image cut of ZeMarmot movie in production, drawn by Aryeom

No API yet

We are still missing API functions to run line art detection from scripts. This is on purpose: I am waiting a bit longer to make sure I don't miss any important usage, since an API has to be stable (unlike the GUI, which can be updated more easily under our new release policy).

Actually, you may have noticed that even in the graphical interface, I haven't made all the options you can find in G'Mic visible (even though they are all implemented). This is because I am trying not to make this tool over-confusing, as many of these options require an intimate understanding of the algorithm. So that no one has to constantly play with sliders at random and without understanding, I am testing the best way to not over-expose options.

Fuzzy selection tool

This algorithm is currently only implemented for the bucket fill tool, but it would be perfectly suited as an alternative method in the Fuzzy Select tool as well!

Improving the segmentation issue

With very clean lines and simple shapes, the algorithm is very neat. But as soon as you make complicated drawings, and in particular use very crenellated brushes (for instance the default "Acrylic" series of GIMP brushes), it starts to detect a few too many false-positive end points, hence over-segmenting. We encountered such issues, so I tried various improvements, but so far none works in all cases. One such improvement was to apply a median blur on the line art before the detection of end points, as it definitely smooths dents in the lines. And it did work quite nicely on one such example:

Middle: over-segmentation with current algorithm
Right: still segmented, yet much better, when median-blurring first
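The pre-filter experiment itself is tiny; a sketch with SciPy, assuming a boolean line art mask:

    import numpy as np
    from scipy import ndimage

    def smooth_line_mask(line_mask, size=3):
        # A median filter rounds off the dents left by crenellated brushes
        # before end-point detection. Caveat: on strokes thinner than
        # `size` pixels it erases the line itself, which is exactly the
        # hole-creating failure mode described below.
        filtered = ndimage.median_filter(line_mask.astype(np.uint8), size=size)
        return filtered.astype(bool)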

Unfortunately it works very badly on very thin lines, as it creates holes (the very problem we got rid of when we replaced the erosion step with local line widths!).

Middle: very acceptable result with current algorithm
Right: bad result, as the color leaks and many details are lost

So I'm still working on this. Hopefully this is an issue for which we can find a solution, though it is not an easy topic. We'll see!

As a general rule, over-segmenting (false positives) is a problem, but for such a tool it is better than failing to close holes (false negatives), especially thanks to the new click-and-hold interaction. I have already fixed several issues in this area (for instance micro-zones, which the paper calls "not significant regions" yet which are very significant for digital painters, as they are annoying to fill; and recently an issue with the algorithm chosen to quickly approximate region areas, which could be wrong on open areas).

Conclusion

I will conclude by saying that this project has been a very interesting confrontation of research algorithms with real-life, day-to-day work. It was all the more interesting as we confronted 3 worlds: research (an algorithm thought up by brilliant minds at CNRS/ENSICAEN), engineering (myself) and art (Aryeom/ZeMarmot). By the way, many of my GUI improvements were ideas and proposals from Aryeom, who tested our work on real projects in the field. This joint CNRS/ZeMarmot cooperation went so well that Aryeom was invited to present her work and all her drawing/animation problematics in a seminar at ENSICAEN.

It is still a work-in-progress in my opinion. As you could see, several aspects deserve to be improved. It is not on my main track anymore, but I will definitely improve it as I see fit. Yet it is already in a releasable state, and I do hope that many people will use and appreciate it. Tell us what you think if you try it!

A last comment: the ideas behind the best algorithms are not necessarily the most incredible technically. This Smart Colorization algorithm is based on many simple transformations, yet it performs well, is quite fast (within some limits, as I said above), and does not take all your memory nor make GIMP hang for 10 minutes… To me, this is much more impressive than more brilliant algorithms which are unusable on everyone's desktop. This is what you need in graphics desktop software when you actually want to get some work done. And that's very cool. 🙂

Have fun everyone!

New header for the new year!

New year 2018

Happy new year everyone!

Aryeom started the first day of the year with the live drawing of a new header illustration welcoming this brand new year. Well, it was about time, since we still had a quite summer-themed header until yesterday. 🙂

This new image also happens to be in 16:9 format, so it can be used as a background image on most screens. Just click the thumbnail on the right to download it at full size.
It is licensed Creative Commons BY 4.0 by Aryeom Han, ZeMarmot director.

Also, the drawing session was streamed live (as many of Aryeom's GIMPing sessions are now, as we explained in the "Live Streaming while GIMPing" section of our 2017 report). If you missed it, you can have a look at the recording. As usual, it was not edited afterwards, nor sped up or anything; oh, and we certainly don't add music to make it look cooler or whatever. 😛
This was a real, focused live session, which explains why it is nearly a one-hour video. Just skip through it if you get bored. 😉
Enjoy!

This drawing and this live session were made possible thanks to our many donors!

Reminder: Aryeom's Libre Art creation can be funded on
Liberapay, Patreon or Tipeee through ZeMarmot project.

GIMP Motion: part 2 — complex animations

This is the second video presenting GIMP Motion, our plug-in to create animations of professional quality in GIMP. As previously written, the code is very much work-in-progress, has its share of bugs and issues, and I regularly review some of the concepts as we experiment with them on ZeMarmot. You are still welcome to play with the code, available in GIMP's official source code repository under the same Free Software license (GPL v3 or over). Hopefully it will, at some point not too far away, be released with GIMP itself, once I deem it stable and good enough. The more funding we get (see the end of the article for our crowdfunding links), the faster it will happen.

Whereas the previous video introduced "simple animations", mostly animations where each layer is used as a different final frame, this second video shows how the plug-in handles animations where every frame can be composited from any number of layers: for instance a single background layer used throughout the whole animation, separate layers for a character, other layers for a second character, and layers for other effects or objects (for instance the snow tracks at the end of the video).

It also shows how we can "play" with the camera, for instance with a full cut larger than the scene, where you "pan" while following the characters. Eventually, we should be able to animate any effect (GEGL operations) as well: blurring the background or foreground, adding light effects (lens flares for instance), or just artistic effects, even motion graphics…
All this is still very much work-in-progress.

One of the most difficult parts is finding how to get the smoothest experience. Rendering dozens of frames, each composited from several high-resolution images and complex mathematical effects, takes time; yet one does not want to freeze the GUI, and the animation preview needs to be as smooth as possible too. These are topics I worked on and experimented with a lot, because they are some of the most painful aspects of working with Blender, where we constantly had to render pieces of the animation to see the real thing (the preview is terribly slow, and we never found the right settings even with a good graphics card, 32GB of memory, a good processor, and SSDs).
One of the results of my work in GIMP core should be to finally make libgimp thread-safe (my patch is still awaiting review, yet it already works very well for us, as you can see if you check out our branch). This should be a very good step for all plug-ins, not just animation.
It allowed me to work more easily with multi-threading in my plug-in, and I am pretty happy with the result so far (though I still plan a lot more work).

Another big field of work is a GUI that is as easy to use, yet as powerful, as possible. We have so many issues with other software where the powerful options are so complicated to use that we end up using them badly. That's obviously a very difficult part (which is why it is so bad in so much software; I am not saying it's because they are badly done: the solution is just never as easy as one thinks at first), and hopefully we will get something not too bad in the end. Aryeom constantly reminds me of, and complains about, the bugs and GUI or experience issues in my software, so I have no choice but to do my best. 😉

 

You'll also note that we work on very short animations. We actually only draw a single cut at a time in a given XCF file. From GIMP Motion, we then export images and work on cut/scene transitions and other forms of compositing in other software (usually the Blender VSE, but we hear a lot of good things about Kdenlive lately, so we may give it a shot again; these 2 introduction videos were actually made in Kdenlive as a test). Since 2 cuts are, by definition, totally different viewpoints, there is not much point in drawing them in the same file anyway. The other reason is that GIMP is not made to work with thousands of high-definition layers. Even though GEGL in theory allows GIMP to work on images bigger than the memory size, this may not be the best idea in practice, in particular if you want fast renders (some people tried and were not too happy, so I tested it for debugging's sake: it is definitely not workable day-to-day). As long as GIMP core is made to work on images, it could be argued that this is acceptable. Maybe if animations make it to core one day, we can start thinking about smarter memory usage.
On the other hand, cuts are usually just a few seconds long, which keeps a single cut's data pretty reasonable in memory. Also note that working and drawing animation films one cut at a time is a pretty standard workflow and makes complete sense (it is of course a whole different deal with live-action or 3D animation; I am really discussing the pure drawn animation style here), so this is actually not that big of a deal for the time being.

To conclude, maybe you are wondering a bit about the term "cel animation". Someday I should explain more about what cel animation was (it is also often simply called "traditional animation") and how our workflow is inspired by it. For now, just check Wikipedia, and you'll already see how well animation cels fit the concept of "layers" in GIMP. 🙂

Have a fun viewing!

ZeMarmot team

Reminder: my Free Software coding can be supported in
USD on Patreon or in EUR on Tipeee. The more funding
we get, the faster we will be able to bring animation
capabilities to GIMP, along with a lot of other nice
features I am working on at the same time. :-)

GIMP Motion: part 1 — basic animations

In mid-July, we finally published the code of GIMP Motion, our software for animations in GIMP. It is available in GIMP's official source code repository under the same Free Software license (GPL v3 or over).

We don't have a public GIMP release containing this plugin yet. Hopefully it will happen soon, but the code is still much too experimental and incomplete. We are using it daily internally, and you are welcome to do so as well, but the released version will be much better. 🙂
So for the time being, if you want to play with it, you will have to build it yourself from source, or wait for someone to make a build (we may provide one at some point).

The video above describes some of the base features for simple animations, such as storyboards/animatics and the most common needs for animated images (GIF, WebP…). What we call "simple animations" is when you mostly have several images which you want to play one after another; no complex compositing with background and character layers, for instance. New features will still come, for instance panning/tilting/zooming on bigger panels (very common in storyboards as well), and various effects (a keyframed blur, for instance, would be a common movie effect).

We will soon publish a second video describing the more advanced features for complex animations (the ones with layered backgrounds/foregrounds/characters), because we have just scratched the surface of what this plugin will be able to do. 🙂

Have a fun viewing!

ZeMarmot team

Reminder: my Free Software coding can be supported in
USD on Patreon or in EUR on Tipeee.

Background design: ZeMarmot’s home

Background design: ZeMarmot home (title)

In an animation film, design obviously does not refer only to characters. There can be prop design when applicable, and of course background design. As an example, the most interesting case is how we designed ZeMarmot's home! At least the outside part of the burrow, since we never see the inside (unlike in the initial comics attempt).

Remember the first research trip? Back then, we found a nice hill next to the Saint-Véran village, with just a single tree in the middle.

The tree on the hill: ZeMarmot movie reference

And obviously, at the bottom of this tree, there was a marmot burrow hole.

Burrow hole in tree roots: ZeMarmot reference

We thought that was just too cool. Most burrow holes are just in the middle of the landmass, but this felt like a "special hole". Our main character is not a special marmot; it's not a Hollywood leader, chief of the marmot clan or anything, but still… it's our hero, right? It's not just any marmot, it's ZeMarmot! So we wanted to give him a special burrow, and ZeMarmot now lives under a cool tree. The only difference is that we didn't set it on a hill but on a plain, since plains are also very common settings for marmots in the Alps.

Here is how the burrow entrance looks in our storyboard:

Storyboard: burrow entrance

Then with clean lines:

Drawing: burrow entrance

Finally adding some colors:

Colored ZeMarmot’s burrow entrance (WIP)

Note that this last image is still work-in-progress: Aryeom said she is not fully happy with it yet. I thought it was still nice to show you the progression from our research photos to storyboard sketches, drawing and coloring, with all the thinking we did about why and what.

Hope you enjoyed this little insight in our work! 🙂

 

Reminder: if you want to support our animation film,
ZeMarmot, made with Free Software, for which we also
contribute back a lot of code, and released under
Creative Commons by-sa 4.0 international, you can
support it in USD on Patreon or in EUR on Tipeee.

 

ZeMarmot work in progress: from animatics to animation

While production of the animation is still going full-steam, we thought we could show what exactly this is about. How do you go from static images to animated ones? Well, it all progresses in layers, one step after another.

The Storyboard, then the Animatics

We have already talked about these at length, so we won't do it again; feel free to check our previous blog posts on the topic. These are the first 2 layers: comics-like static images (the storyboard), and static images displayed as video (the animatics).

Key Framing

In the digital world, "keyframes" is used with different meanings. In video formats, it usually distinguishes a standalone image in the stream from partial images which cannot be displayed by themselves. In 3D or vector animation software nowadays, it usually means the extreme points of a smooth transition computed by an algorithm (interpolation). This is more or less the definition given by Wikipedia: "A key frame in animation and filmmaking is a drawing that defines the starting and ending points of any smooth transition."

This definition is a little too "mechanical" and tied to the modern way of animating with vectors or 3D (actually it is not entirely true even in 3D and vector animation, but this is what one might think when discovering the magic of interpolation). Key frames are actually simply "important images", determined by the animator in a purely judgemental way. Keyframing is part of the art of the animator, more than a science. It is true that they are often the starting/ending points of movements, but this is not a necessity. Also sometimes called "key poses", these are what the animator feels, in one's artist guts, makes the movement good or not.

Pose to Pose vs Straight Ahead animation

When animating, there are 2 main techniques. The first method is to decompose the movement into key poses (keyframes) as a first step. Later, when it looks good, you complete it with intermediate frames (inbetweens). This is the pose-to-pose method, demonstrated a bit in the video above.

In a big studio, keyframes are usually drawn by the main animators, and the inbetweens are left to the assistants (less experienced animators). This allows the work to be shared, with more multitasking. In ZeMarmot's case, unfortunately, Aryeom does everything, since we don't have the funds to hire more artists as of yet.

The other method is called "straight ahead" and consists of doing all the frames one after another, without prior decomposition. Timing is much harder to plan with such a technique, and you may end up wasting more drawings. On the other hand, some animators prefer the freedom it gives, and by making movements less perfect, you can also avoid them being too mechanical (in other words, perfect movements are not always what you are looking for when you want to represent living beings in their whole perfect imperfection).

Observing Aryeom, she uses both methods depending on the cut, as many animators do.

Conclusion

Hopefully you appreciated this insight into the work behind animating life, and this small video where we display the same pieces of a scene at different steps of the work-in-progress, first one after another, then side by side.

You will notice that we mostly show pieces of the same scene, since we really want to avoid spoilers as much as possible. 🙂

Have fun!

ZeMarmot team

Reminder: if you want to support our animation film, made with Free
Software, for which we also contribute back a lot of code, and
released under Creative Commons by-sa 4.0 international, you can
support it in USD on Patreon or in EUR on Tipeee.

ZeMarmot end-of-year report

Hi everyone!

How are your last days of 2016 going so far? It's been a strange year, hasn't it? Well, let's not digress; let's focus on ZeMarmot, shall we? First, be aware that our dear director, Aryeom Han, is getting a lot better. She was really happy to get a few "get well" messages and says thanks. Her hand still aches sometimes, in particular during straining or long activities, but on the whole, she says she can draw fine now.

Reminding the project

I will discuss below what was done in the last months, but first (because it is customary to do so at the end of the year) I remind you that ZeMarmot is a project relying on funding by willing individuals and companies, with 2 sides: art and software.

I am a GIMP developer, the second biggest contributor in terms of number of commits over the last 4 years, and I also develop a plugin for digital 2D animation in GIMP, which Aryeom is using on ZeMarmot. I want to get my plugin to a releasable state by GIMP 2.10.

Aryeom is using the software to fully animate, draw and paint a movie, based on an original story which I wrote a few years ago, about a marmot who travels the world for reasons you will learn when the film is released. 🙂 Oh, and the movie will of course be Creative Commons BY-SA!

Up to now, our initial crowdfunding (~14,000 €) has allowed us to pay several months of salary to Aryeom. I have chosen not to earn anything for the time being (not because I don't like being paid, but because we cannot afford it with the current funding). Some of it remains, but it is kept to pay the musicians.

Now we mostly rely on the monthly crowdfunding through the Patreon (USD) and Tipeee (EUR) platforms. But all combined, that's about 180 € a month, which amounts to barely more than a day of salary (and with non-wage labour costs, not all of it goes to Aryeom). One day per month to make a movie: that's far from enough, right?

My dream? I wish we could someday consider ourselves a real studio, with many paid artists, producing cool Libre Art movies that go to the cinema (yes, in my crazy dream, Creative Commons BY-SA films are on the big screen!), and developers paid to improve Free Software so that our media-making ecosystem gets even better, for everybody to use!
But right now, it is no more than an experiment, mostly done voluntarily.

Do you like my dream? Do you want to help us make it real? You can, by helping the project financially! From the symbolic coin to the bigger donation, any push actually helps us make things happen!

Click here to fund ZeMarmot in USD on Patreon »
Click here to fund ZeMarmot in EUR on Tipeee »

Not sure yet? Feel free to read more below and to pitch in at any time later on!

Note that not only the money but also the number of supporters is of great help, since it shows support to bigger funders; and for us, that's good for morale too! Good monthly crowdfunding can also help us find producers without having to abandon any of the social and idealistic aspects of the project (note that we have already been contacted by a production company interested in the film after the crowdfunding, but we refuse to compromise too much on the ideals).

The animation

We illustrated Aryeom's work with 2 videos presenting extracts of her work-in-progress. In this first video, she shows different steps in animating a few cuts of the main character:

In this second video, we examine some cuts of another character, the golden eagle, the marmot's main predator:

There is a lot that can be said about the work of an "animator" from these few minutes. Many pages of books on the art of animating life could be filled with such examples! We will probably detail these steps in longer blog posts, but I will still explain the basics here.

Animating = giving life

Aryeom says it in the first video, and you can see it in several examples in both videos: when your character moves from A to B, you are not just "moving" it. You have to give the impression that the character is acting on its own, that it is alive, inhabited; in other words: animated.

It is no surprise that one of the most famous books on animation is called "The Illusion of Life" (by Frank Thomas and Ollie Johnston), also Aryeom's bedside book. Going this way has a lot of ramifications for the animator's job.

Believable, not realistic

Before we continue, I have to make sure I am understood. Even though realistic animation is also a thing (Disney comes to mind), making a good animation is otherwise not necessarily about making it "realistic", but instead about making it "believable".

It is very common to exaggerate some movements for various reasons (often because it is funnier, but also because exaggeration may sometimes look even more believable than the realistic version!), or the opposite (bypassing anatomically-correct movements). There are no bad reasons, only choices to achieve what you want.

Now that this thing is clear, let’s continue.

You can’t just “move an arm”

The very classical example given to beginners is often: "lift your right arm up". That's it? Did you only move your arm, while the rest of the body stayed unchanged? Of course not. To stay in balance, your body shifted to the left as a counterweight; the right shoulder lifted whereas the left shoulder lowered; and so on.
A lot of things change in your body with this simple action. Even your feet and legs may move to compensate for the shift of the center of gravity. As a consequence, you don't "move your arm", you "move your whole body" (into a configuration where your arm is up).

This is one of the first reasons why, to move a single part of a body, you cannot just reuse previous drawings and change that part alone. No, you properly redraw the whole body, because if you are going to fake life, you may as well do it well.

Note: when you say "animation" to computer people, their brain usually immediately wires to "interpolation", the mathematics of computing (among other things) intermediate positions. Because of what I said above, in reality this mathematical technique is barely used in traditional (even digital) animation. It is used a lot more in vector and 3D animation, but its role should definitely be minimized compared to the animator's work even in those fields. In vector/3D, I would say that interpolation merely replaced the inbetweener role (a kind of "assistant" who draws the non-keyframe images) from the traditional animation world.
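For the curious, the interpolation in question is usually nothing more exotic than this:

    def lerp(a, b, t):
        # Position at time t (0 = first keyframe, 1 = second keyframe).
        return a + (b - a) * t

    # An inbetween exactly halfway between two key poses at x=10 and x=30:
    print(lerp(10.0, 30.0, 0.5))   # 20.0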

Timing, silence and acceleration

You often hear it from actors, poets, writers, singers, anyone who gives some kind of life: silence is as important as noise for their art. Well, I would also add acceleration and its symmetrical deceleration.

You can see this well in the first example of video 1 (at 0'41). Aryeom was unhappy with her running marmot, whose speed was nearly linear: the marmot arrived too fast at the flower. Well, he slowed down, but barely. In her final version, the marmot arrives much faster, with a much more visible slowdown, making the movement more "believable" (back to the basics!).

The eagle flight in video 2 (at 1'09) is another good example of difficult timing, as Aryeom went through 2 stages before finding the right movements. With the wrong timing, her flying eagle feels heavy, as if it had difficulty lifting itself into the air (what she calls her "sick" eagle in the video); then she got the opposite, an eagle she felt was more sparrow-like, too light and easily lifted. She was quite happy with the last version (obtained after 8 attempts) though, and in particular with the very last bit of the cut, when the eagle switches to glider mode. Can you spot it? This is the kind of difference which lasts just a few hundredths of a second and is barely noticed, yet on which an animator can spend a significant amount of time.

Living still images (aka “line boil”)

A common and interesting effect you find in a lot of animation is the shaking still image. You can see it in the second video (at 0'33), in the first cut presenting the proud eagle, still, on his mountain. Sometimes you want to show a non-moving situation, but just sticking to a still image feels too weird, because in real life there is no perfect stillness. Even if you make every effort to stay still for a few seconds, you will move imperceptibly, right? So how do you reproduce this, the attempt to stay perfectly still while it is impossible? Commonly, animators redraw the same image several times (because just as you can't stay still, you can't draw perfectly identical images twice either, even though you can get very close by trying hard) and loop them.

You usually don't do this for everything. Typically, you much more easily accept elements of the background being still. But it is common for your living characters, or sometimes to make the main elements you want to highlight stand out from the background.

Avoiding cycles

Now, loops are very common in animation. But the higher the quality you aim for, the fewer loops you have. Just as stillness does not exist in life, you never repeat exactly the same movement twice. So even though loops seem to be among the first things animators are taught (the famous "walk cycles"), you don't actually use them in your most beautiful animations. When your main character walks, you will likely re-animate every step.

Of course, it is up to you to decide where to stop. Maybe for that flock of birds in the background, far away, just looping (and even copy-pasting the birds to multiply them!) may be enough. This is all a matter of taste, time, and the money you are ready to spend on animator-time, obviously.

Camera work

This part has not really started yet, even though it has already been planned (since the storyboard step). But since Aryeom has started on it (first video at 1'06), let's give some more info.

Panning and tilting

In animation, where movement is by essence 2D as well, these refer respectively to a horizontal and a vertical camera movement. Why do I need to say "in 2D animation"? Because in more traditional cinema, these would rather correspond to a tracking shot done on rails, whereas panning and tilting refer to angular movements of a static camera. Different definitions for different references. Note that even though 3D animation could use either, it mostly kept the animation vocabulary.

This gives you a good hint of how characters and backgrounds are managed separately. If you have a character walking, you will usually create a single image of the background, much bigger than the screen size, and your camera will move over it, along with the character layers. With fully digital animation, this usually means working on image files much bigger than the expected display size; in traditional physically-drawn animation, it means using very large paper (or often even sticking sheets together). As an example, at a Ghibli exhibition, the background for a flying cut of "Kiki's Delivery Service" was displayed, and it took up a full wall of a very large room.

Animation is a lot of drawing

I will conclude the section on animation by saying: that’s a bloody lot of drawing!

As you can see, Aryeom spends so much time redrawing the same cuts to get the perfect movement that sometimes she goes crazy and thinks she is just drawing the wrong animal. The story about the pigeon is a true story, and I am the one who told her to add it to the video, because it was so funny. One day, she came to me to show me the cut she had been working on for days, and asked: "doesn't it look like a pigeon?"
Had I not stopped her, she was ready to start over.

This is an art where you draw again even when you want to show stillness, and where you forbid yourself from using too many shortcuts like loops. So what do you expect: you probably have to be a little crazy from the start, no? 😉

There are actually several "schools", and some of them go for simplicity, shortcuts and reuse. Japan is well known for Studio Ghibli, which goes the hard way as we do, but this is quite a contradiction within the country's industry: the whole rest of Japan's animation industry is based on animating as little as possible. Haven't they proven so many times that it is possible to show a single still image for 30 seconds, add sounds and voices, then call it an animation?

Sometimes it is just a choice or a focus. Some animation films focus on design rather than believable movement, or on scenario rather than wonderful images. For instance, I don't think you can say that The Simpsons has wonderful graphic appeal and realistic animation (they even regularly make meta-jokes about the quality of their animation inside episodes!), but they have the most fantastic scripts, and that's what makes their success.
So in the end, there is no right choice. Everyone should just go the way they wish for a given project.

And this is the way we are going for ZeMarmot!

Music

Just a very short note on music. We have started working with the musicians, remotely and at a physical meeting on December 1st. We have a few extracts of "first ideas", but they wouldn't do justice to the quality of the work.

I think this will have to wait for much later.

Software

I went on so long about animation that I hope I have not lost half of the readers already! If you are still reading, here is what I worked on these last months.

GIMP

I am trying to do my share on GIMP, to improve it globally, to speed up the release of 2.10, and because I love GIMP. I count 259 commits authored in 2016 (60 in the last 3 months), plus 48 as committer only (i.e. I am not the author, but the main reviewer of a patch which I pushed into our codebase). I commented on 352 bug reports in 2016, making a habit of reviewing patches when possible.

I have a lot of projects for GIMP, one of the grander ones being a plugin management system (to install, uninstall and update plugins easily from within GIMP, with a backend side for plugin developers to propose extensions), but also a lot of ideas about the evolution of the GUI (these should be discussed topic-by-topic in later blog posts).

I have also started to experiment with Flatpak, so that we can provide an official GNU/Linux release of GIMP. For years, our official stance has been to provide a Windows installer, an OSX package, and for GNU/Linux… yeah, grab the source and compile, or use the outdated version from your package manager! I think this situation can be considerably improved with Flatpak and the similar technologies born these last years.

Animation in GIMP

As explained already, I took the path of writing it as a plugin rather than a core feature. GIMP is only missing a single feature which would make a plugin nearly as powerful: bi-directional notifications (basically, plugins currently don't get notified when pixels are updated, layers are renamed, moved or deleted, images closed…). That's actually something I'd like to work on (I already have a stash somewhere with WIP code for this).

The animation plugin currently has 2 views:

Storyboard view

GIMP’s animation plug-in: storyboard view

This corresponds to the very basic animation logic of 1 layer = 1 frame, very common among people making animated GIFs (or MNG/WebP now), except with a nice UI to set each image's duration (instead of tagging the layer names, a very nasty user experience and a hidden feature found only on some forums or in old tutorials), do basic compositing, and even add comments on vignettes if need be. All this with a nice real-time preview!

Cel-Animation view

GIMP’s Animation plug-in: cel-animation view

This is the more powerful view, where you can compose a frame from several images, often at least a background and a character. In the above example, the cut is made from 3 elements composited together: the background, the eagle and the marmot.

You may be more familiar with the "timeline" style of view, which is basically the same thing except that frames are displayed as horizontal tracks. I tried this too, but quickly shifted to this much more traditional view from the animation world, usually called an x-sheet (eXposure sheet). I found it much more practical: it allows easier commenting and easy scrolling, and it is especially more organized. There is a lot you don't see in this screenshot, but this view really targets a professional and organized workflow. In particular, with properly named layers, you can create animation loops and line tests of dozens of images, with various timings, in a few clicks.

I am also working on keyframing for effects (using animated GEGL operations) and camera movements.

There is a lot done, but definitely a lot more that I am planning to do, which takes time. I will post more detailed blog posts and push the code to a branch very soon (probably before the Libre Graphics Meeting this year).

That’s all, folks!

And that's it for this end-of-year report from the ZeMarmot team! I hope you appreciate the project. If so, and if you can spare a dime (and haven't done so yet), I remind you that the project accepts any amount at the links given above. Some people give 1 Euro, others 15 Euros per month. In the end, you are all giving life to ZeMarmot!

Thanks and have a great year 2017!

Timing your movie…

A big question when you write a scenario is: how do you time your movie?

CIMA museum’s clock, by Rama (CC by-sa 2.0).

From the scenario

You can already do so from your written script. It is usually accepted that 1 page is roughly equivalent to 1 minute of movie. Of course, to reach such a standard, you have to format your file appropriately. I searched the web for these format rules. What I gathered:

Format

  • Pages are A4.
  • Font is 12-point Courier.
  • Margins are 2.5 cm on every side but the left margin which is 3.5 cm.
  • Add 5.5 cm of margin before speaker names in dialogues.
  • Add 2.5 cm of margin before actual dialogue.
  • No justification (left-align).
  • No line indentation at start of paragraphs.

I won't list more, because there are dozens of resources out there covering it in detail, sometimes even with examples. For instance, this page was helpful, and for French-speaking readers, this one too (it uses the international metric system rather than imperial units), or even Wikipedia.
It would seem that the whole point of all these rules is to produce a script with as little randomness as possible. A movie script is not meant to be beautiful as an object, but to be as square as possible. Out goes any kind of justification (which stretches or compresses spaces), as well as any line indentation (which does not happen on every line), because their behavior is not set in stone: they exist only to make your document "look nice", which a script-writer cares less about, in the end, than being able to tell how long the movie will last just by counting pages.

Free Fonts

Some people may have noted that 12-point Courier is a Microsoft font. GNU/Linux users can get it with a package called msttcorefonts. On Debian or Ubuntu, the real package is "ttf-mscorefonts-installer", and it does not look like it is in the Fedora repositories. That's OK, because I really don't care: I personally use Liberation Mono (Liberation is a font family created by Red Hat in 2007, under a Free license). FreeMono is another alternative, but the Liberation fonts work well for me.

You may have noticed that these are all monospace fonts, which means that every character occupies the same horizontal space: an ‘i’ and a ‘W’, for instance, take up the same width (with padding added around the ‘i’), as opposed to proportional fonts (more common on the web). Once again, proportional fonts are meant to be pretty, whereas monospace fonts are meant to be consistent. It all comes back to a consistent text-to-timing conversion.
I am not sure why Courier ever became the standard in scriptwriting, but I don’t think any other font would be much of a problem. Just use any metrically compatible monospace font.

Side note: I read 3 scenarios this past year (other than mine), and none of them used Courier, nor indeed most of the rules listed here. So I am really not sure how strictly these rules are enforced, at least in France. Maybe in other countries they are more of a hard-and-fast rule?

Writing with LibreOffice

Right now, I simply write with LibreOffice. I am not going to make a tutorial about using LibreOffice, because that would diverge too much, but my one piece of advice is: use styles! Do not “hardcode” text formatting: don’t increase indents manually, and don’t bold or underline your titles by hand…
Instead, create styles for “Text body” (default text), “Dialogue speaker”, “Dialogue”, “Scene title”… Then save a template and reuse it every time you write a new scenario.

While writing this post and looking for references, I read weird claims like “use dedicated software because you don’t want scene titles ending a page”. Seriously? Of course, if you make scene titles by just bolding text, that happens. But if you use styles, it won’t (see the “Keep with next paragraph” option in the “Text flow” tab, which is the default for any Heading style). So once again: use styles.

Note: dedicated software is about much more than this basic issue, and it has many more features to make a scriptwriter’s life easier. I was even planning on developing such software myself, so clearly I’m not telling you not to use one! I’m just saying that for now, if you can’t afford dedicated software, LibreOffice is just fine, and styling issues like “scene titles should not end a page” merely reflect a lack of knowledge of how to properly use a word processor.
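
For the curious, such styles can even be generated programmatically. Below is a hedged sketch using the odfpy Python library to build a document with a couple of screenplay paragraph styles; the style names and measurements are my own illustrative choices, not any standard.

    from odf.opendocument import OpenDocumentText
    from odf.style import Style, ParagraphProperties, TextProperties

    doc = OpenDocumentText()

    # Scene titles must never be left alone at the bottom of a page:
    scene = Style(name="Scene Title", family="paragraph")
    scene.addElement(ParagraphProperties(keepwithnext="always"))
    scene.addElement(TextProperties(fontweight="bold"))
    doc.styles.addElement(scene)

    # Dialogue gets its extra left margin from the style, never hardcoded:
    dialogue = Style(name="Dialogue", family="paragraph")
    dialogue.addElement(ParagraphProperties(marginleft="2.5cm"))
    doc.styles.addElement(dialogue)

    doc.save("screenplay-template.odt")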

So that’s it? I just follow these rules and I get my timing?

Of course, real life hits back. First of all, languages are more or less verbose. For instance, German and French are more verbose than English, which in turn is more verbose than Japanese. So with the same formatting, a page in French would be less than a minute on screen, whereas a page in Japanese would be more than a minute.

There is also the writer’s style. Not everyone writes equally concisely, and you might write the same scenario with a different timing than a colleague would.

As a consequence, writers evaluate their scripts. You can act them out, for instance, to see how long your text really lasts. Then you can either create a custom text-to-length conversion, or adapt the text formatting until you land back on the “1 page = 1 minute” approximation. If your scripts usually run faster than that, you need more text on one page: smaller margins or a smaller font, maybe?
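
As a toy illustration of such a custom conversion, here is a small sketch; the 0.8 minute-per-page calibration and the 95-page count are invented numbers, not real data.

    def estimated_runtime(pages, minutes_per_page=1.0):
        """Estimate screen time in minutes from a formatted page count.

        The default is the classic "1 page = 1 minute" rule;
        minutes_per_page is your personal calibration, obtained
        e.g. by acting out a few scenes and timing them.
        """
        return pages * minutes_per_page

    # Say your (verbose) French pages turn out to run ~0.8 minute each;
    # a 95-page script would then last roughly:
    print(estimated_runtime(95, minutes_per_page=0.8))  # -> 76.0 minutes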

Of course, it may also be that your style is much too verbose. A scenario is not a novel: you should not try to craft a beautiful text with carefully worked metaphors and imagery. You are writing a text for actors to read and understand (and in our case, for painters and animators to draw).

ZeMarmot’s case

Moreover, the “1 page = 1 minute” rule is not even consistent within a single script: a page with no dialogue could last several minutes (descriptions and actions are much more condensed than dialogue), whereas a page of pure dialogue could be worth only a few seconds of screen time. But that’s OK, since this is all about averages. The timing from the scenario is not meant to be perfect; it gives us an approximation.

Yet ZeMarmot is peculiar, since we have no dialogue at all. So are we going to have only 5-minute pages? That was a big question, especially since this is my first scenario. Aryeom helped a lot with her animation experience, and we tried to time several scenes by imagining them or acting them out. This is a good example of how no rule is ever universal; in our case, it took longer to accurately calibrate our own page-to-time rule.

Animatics

This one is more animation-specific: the next step after storyboarding (or before more accurate storyboarding starts) is creating an animatic, which basically means compiling all the storyboard images into a single video. From there we have a full video, and we can try to time each “image”: should this action be faster, or last longer? This requires some imagination, since some images may stay on screen for several seconds and we have to imagine all the in-between drawings to get the full idea. But in the end, this is the ultimate timing: once we agree on an animatic, we can tell quite accurately how long the movie will last.
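
In code terms, timing an animatic boils down to assigning a duration to each storyboard image and summing them up. A trivial sketch, with shot names and durations invented for the example:

    # Hypothetical durations (in seconds) for each storyboard image:
    shots = {
        "S01-opening-pan": 6.5,
        "S02-marmot-wakes": 4.0,
        "S03-burrow-exit": 9.0,
    }
    total = sum(shots.values())
    print(f"{total:.1f} s ~= {total / 60:.2f} min")  # -> 19.5 s ~= 0.33 min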

Should timing lead the writer?

The big question: should the timing lead us? You may end up with a different timing than you expected, and there are 2 cases: longer or shorter.

The shorter case is easy. Unless you fall really short (no longer qualifying as feature-length, for instance), I don’t think a shorter-than-average movie is a problem. I would take a short but well-timed, interesting movie over a long, boring one a hundred times over.

The longer case is more difficult, because the trend nowadays seems to be longer and longer movies. Now 2h30, sometimes up to 3h, seems to be the standard for big movies (and they manage to stretch them even further in the “director’s cut” edition!). I have seen several movies in recent years that were long and boring. I am not even talking about contemplative art movies, but about action-packed blockbusters. No, superheroes battling for 3 hours is just too much.
So my advice, if your movie is longer than expected: ask yourself, is it really necessary? Won’t it be boring? Of course, I am not the one making the rules. If you work in Hollywood, well, first you probably don’t read me, and second you don’t care what I say. You will make a 2h30 movie and people will go and watch it anyway. Why not. I’m just saying this as a viewer, and since I find that really not enjoyable, I don’t want our own viewers’ experience to be boring (at least not because of the movie’s length!).

And that’s it for my small insight into timing a movie. Of course, as I already said, I am mostly a beginner on the topic. Everything here is a mix of my research over these last months, my own experiments, and Aryeom’s experience… So don’t take my word as gospel, and don’t hesitate to react in the comments if you have better knowledge or simply ideas on the topic.

By the way: ZeMarmot’s pilot (not the final movie) has been timed at about 8 minutes. 🙂

Reminder: if you want to support our animation film, made with Free Software, for which we also contribute back a lot of code, and released under Creative Commons by-sa 4.0 international, you can support it in USD on Patreon or in EUR on Tipeee.

Excerpt of ZeMarmot’s Animatic

From the start of the ZeMarmot animation film, we chose not to show too much of the production process, because we don’t want to totally spoil the fun. In the past, I have seen a movie’s animatic before the final movie, and it really took some of the fun out of the final artwork.

But since we realize you would all like to see some of what happens “behind the scenes”, here is a small excerpt of our animatic for the pilot (about 30 seconds from the first scene). As usual, whatever is digitally drawn (some parts were pencil-drawn) was done with GIMP.

What is an animatic?

I think I may have already vaguely mentioned it. The animatic is basically the next step after the storyboard: you take all the storyboard images and chain them with the right rhythm, plus basic effects (pans, zooms…), to get a “feeling” of what the final movie will look like. It still requires some imagination, because it is basically a few static images every second or so. Not enough to be called proper animation yet!

But with the right mindset, this is enough to get an idea and also time your movie!

In animation-making, this is also the moment when you freeze the story, the script, the scenes, and many direction choices. Whereas live-action films allow last-second camera or script changes, these would be much more costly in animation, unless you feel OK asking your painter to redraw the same scenes several times from various perspectives and to throw away days, if not weeks, of work. This is why, at some point, you need to make choices and stick to them as much as possible (last-minute changes may still happen, but you always have to weigh the pros and cons).

ZeMarmot’s animatic

We finished ours around April, and we have 2 versions: one without sound, and one heavily commented by me (in French) so the musicians can get an idea of what they have to work on. We are currently working with them on a few songs. I hope we can soon give you more news about this particular part of the production too.

Currently the full animatic for the pilot is about 7 minutes long, but this is likely to change slightly.

Reminder: if you want to support our animation film, made with Free Software, for which we also contribute back a lot of code, and released under Creative Commons by-sa 4.0 international, you can support it in USD on Patreon or in EUR on Tipeee.
The animatic excerpt above was drawn and edited by Aryeom and is also released under CC by-sa 4.0.

Character design (2): clay models

Another aspect of character design we worked on was making the characters actually 3D. No, I am not talking about modeling them in 3D software like Blender. I really mean “real-world physical 3D”: you can touch them with your fingers and feel bumps here and there. I know, this is incredible technology! 😉

Making clay (or other suitable material) representations of characters, objects, or even places is a pretty common tool in animation and filmmaking, before actually filming, drawing, or 3D-modeling (whatever technology your film uses) the final images. This technique is probably taught in every animation school and used in every animation studio. Note that these are not props used in the movie, but props made for the movie (i.e. not used on camera, and never seen in the actual film). They are a design tool, a reference for 3D modelers, painters, animators, actors, or directors.

As a side note, we saw some cool examples of this while visiting the Weta studio in New Zealand (a famous studio which makes props used in many big Hollywood movies). Among other cool things, they had a huge fake gorilla (actual size, more than 2 meters high) in the visitable part of their workshop, which had only ever been used as a reference (I don’t remember for which movie, or even whether they told us).

You remember me saying in an earlier post that character sheets are used as references, right? What if, instead of a turnaround character sheet, you had a real physical character you could actually turn around? Well, we don’t have a real marmot, but we can make clay models.

That’s really cool, right? 🙂

These were actually made back in November, and at the time I only thought to post a small message on the @ZeMarmot Twitter account in December. Their first use was helping design the current version of the main character, by experimenting physically with various shapes. Later they may be used again, as mentioned above, for perspective drawing, positioning (or 3D modeling, if we ever need a 3D marmot), and many other purposes.

You may also have spotted a few of these statues on Aryeom’s desk in her video interview. Still, I thought they deserved a blog post of their own, don’t you think? 🙂

Note: this was originally a private post, whose link was only given to Patreon contributors over $10 on February 27, 2016; it was made publicly visible on March 3. If you like it, consider supporting the ZeMarmot project on its Patreon page too!

Reuse: all photographs in this blog post are works by Jehan, portraying Aryeom, under Creative Commons by-sa 4.0 international.