
Thursday, October 29, 2015

Thoughts on Blender Conference 2015

I had a chance to visit this year's Blender conference after a hiatus of a few years. The conference isn't a particularly big one (~120 people), but it's a nice experience, especially if you are into computer graphics. The size of the conference has remained quite static, yet there were a lot of new faces. Perhaps that reflects the growth and evolution of the community.

I used Blender for 3D modeling very actively for a few years (2005-2010) and was involved in its development. Blender is an interesting example of an open source success story. Initially the software was closed source, and since becoming an open source project it has been growing steadily.

I joined the conference on its third day, so I missed a large part of the content. It was a nice experience regardless. You can find the conference sessions on YouTube. Read on to see what I thought about some of the sessions I participated in. Before that, I want to give you some background, as you might not know Blender that well.

On Blender Foundation's Animation Projects 

Blender Foundation is known for its animation projects. They are funded from various sources, including public and community support. This model allows them to push the software forward in a meaningful manner. Each project has managed to make the software better in its own way. That said, some of the features implemented are project-specific hacks that aren't useful beyond their original purpose. But sometimes you have to do what it takes.

Cosmos Laundromat - The Feature Film?

Initially Blender Foundation's newest project, Cosmos Laundromat, was meant to become a feature film. That would have required a heavy amount of funding (in the 2-3 million euro range). This goal was not met. Jason van Gumster has dug deeper into the topic.

As a result, the scope was scaled back to match the earlier Blender Foundation short film projects. It is possible, however, that development will continue in an episodic manner. That depends entirely on how well they are able to resolve the funding situation.

Rendered Using Cycles

What makes Cosmos Laundromat particularly impressive compared to the earlier efforts is the fact that it has been rendered using Cycles. Cycles is an unbiased, physically based path tracing render engine designed for animation. Even though it is slower than traditional renderers, its progressive approach means you can simply resume a render if you want less noise. Stack Overflow goes into great detail on what this means.
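To make the "resume" idea concrete, here is a minimal sketch of driving Cycles from Blender's Python API (bpy). The property names assume a reasonably recent Blender build, and the sample count is just a placeholder.

    import bpy

    scene = bpy.context.scene

    # Switch the scene over to the Cycles path tracer.
    scene.render.engine = 'CYCLES'

    # More samples per pixel means less noise but a longer render.
    # 256 is only a placeholder; production shots typically need more.
    scene.cycles.samples = 256

    # Render the current frame to the output path configured in the scene.
    bpy.ops.render.render(write_still=True)

Resuming a noisy render essentially means continuing to accumulate samples on top of an earlier result instead of starting over from zero.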

Cosmos Controversy

For some reason Blender Foundation's projects have a tendency to cause some level of controversy. That applies to the PG-13 rated Cosmos Laundromat, which includes an F word and begins in a rather grim way. Technically it's an excellent piece, though, and easily the best film they have done so far. See it below.



Cosmos Laundromat - Art and Pipeline

In the first session of the third day, several key members of the Cosmos Laundromat project discussed their experiences. As you can imagine, developing new capabilities while trying to produce a short film can be somewhat challenging. Rendering in particular was a great hurdle. For a short film like this they needed 17455 frames (25 fps) at 2048x858 resolution. That might not sound too bad. Unfortunately, individual frames could be computationally expensive due to the amount of special effects used. Rendering realistic grass, for instance, is a hard problem.
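Some back-of-the-envelope math shows why this adds up. The per-frame render time below is a purely hypothetical figure I picked for illustration; actual times varied with scene complexity.

    # Rough workload estimate for a short film of this size.
    frames = 17455
    fps = 25

    film_minutes = frames / fps / 60        # ~11.6 minutes of film
    minutes_per_frame = 30                  # hypothetical average render cost
    machine_hours = frames * minutes_per_frame / 60

    print(f"{film_minutes:.1f} min of film, "
          f"~{machine_hours:.0f} machine-hours at {minutes_per_frame} min/frame")

Even at half an hour per frame that is close to a year of wall-clock time on a single machine, which is why the render farm mattered so much.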

Rendering with Qarnot and ANSELM

As far as I understand, a significant part of the rendering effort was offloaded to a company known as Qarnot Computing. The University of Ostrava also gave access to its computing cluster (ANSELM) to help further.

Qarnot has managed to combine the idea of radiators with computing. As computing can produce a large amount of heat, this makes perfect sense. I won't describe their system in detail here, but I recommend looking up their technology. Perhaps we can replace our heaters with something smarter in the future.

Problems During Production

Besides Qarnot, Blender Foundation has a little computing cluster of its own for test renders. They encountered a number of rendering-related problems during the project. I've tried to list them below:
  • Their rendering nodes could run out of disk space. I'm not exactly sure how this could happen, though. It feels like a technical issue (logrotate for renders?).
  • Blender wasn't always up to date on their nodes. This could be problematic, especially if some particular fix had been made to get the scene being rendered to work correctly. This feels like a technical issue as well. I feel that performing a check against the version stored within the file before rendering would have mitigated it (see the sketch after this list). At least you would avoid wasting some effort that way.
  • Rendering on different operating systems could lead to different results. I don't know if there's an easy solution for this. A strong test suite would likely help in this regard. Ideally you would have a continuous integration system in place, rendering various scenarios under different setups. It's not trivial to set up, but I believe it would have helped to spot these problems earlier.
  • Sometimes render times could be unpredictable. This applied especially to frames that had a lot of grass in them. Assuming render times are comparable across resolutions, I expect it would have been possible to predict this problem by performing preview renders at a smaller scale first and then analyzing the results to see where possible problems might arise. You can always tweak the worst spots if you are aware of them.
  • Due to the nature of the rendering used, noise could be an issue. Of course the solution is simple: just resume the rendering until it's smooth enough. They implemented resuming specifically for this project, and it will likely make it into a stable release sometime in the future.
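Regarding the version mismatch above, here is a rough sketch of the kind of pre-flight check I have in mind. It compares only Blender versions, not individual patches, and is purely an illustration rather than anything the Blender Foundation actually used.

    import bpy

    # Version of Blender that saved the open .blend file
    # versus the version running on this render node.
    saved_version = bpy.data.version
    running_version = bpy.app.version

    if running_version < saved_version:
        raise RuntimeError(
            f"File saved with Blender {saved_version}, "
            f"but this node runs {running_version}; refusing to render."
        )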
They tackled these problems by implementing extensive logging, increasing the amount of available computing power, and reducing scene complexity where it made sense. They likely applied some technical solutions as well. I imagine implementing features such as LOD (level of detail) checks based on the distance to the camera could lead to nice improvements. Computer graphics is all about cheating after all. If it looks good, it is good.
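As an illustration of the LOD idea, a small bpy sketch could pick a detail level per object based on its distance to the active camera. The thresholds and level names are made up; swapping geometry or particle counts based on the result would be the real work.

    import bpy

    def lod_level(obj, camera, thresholds=(10.0, 30.0)):
        """Pick a coarse level of detail from the distance to the camera."""
        distance = (obj.location - camera.location).length
        if distance < thresholds[0]:
            return "high"
        if distance < thresholds[1]:
            return "medium"
        return "low"

    camera = bpy.context.scene.camera
    for obj in bpy.context.scene.objects:
        if obj.type == 'MESH':
            print(obj.name, lod_level(obj, camera))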

To keep their computing power manageable, they implemented a system known as Flamenco. It supports only Blender for now and is in its early stages development-wise. That said, it's nice to see projects like this grow out of Blender Foundation productions. Hopefully more people will find it.

UI Team - Report on an Ongoing Journey

As Blender is notorious for its difficult-to-learn user interface, it was nice to take part in a session dedicated to it. Even though Blender is hard to pick up initially, it is amazingly productive software. Blender's UI design has been heavily inspired by Jef Raskin's The Humane Interface. It is the same book that has inspired Apple's design decisions.

Blender Has to Serve Both Beginners and Pros

Likely the biggest single challenge Blender faces in the future is how to serve beginning users to grow the user base while keeping the existing power users happy. This is the reason why Blender has a UI team these days. It makes all those decisions nobody else wants to make. Without some authority to make the decisions, you easily end up bikeshedding. As time goes by and no concrete decisions are made, the situation only gets worse.

The problem with a big program like Blender is that decisions made in the past have a huge inertia. It can be difficult to change things one way or the other, as you need to be careful not to lose something valuable in the process. This is the same problem many other software suites face. When you try to cater to many different groups of users, it is difficult to keep everyone happy. I don't believe that's even a good goal.

This topic was touched on by Gianluca Vita in his session about Blender for architects. The challenge is that traditionally architects are taught to think in terms of 2D plans. They have a very specific set of requirements. It is not surprising that solutions such as SketchUp are popular amongst architects. Being easy to pick up, SketchUp can be amazing software. It's nowhere near as powerful as Blender, but it doesn't have to be.

Blender as a Kernel?

I believe Blender should aim to become more like a kernel. The current Blender would be just one shell on top of that. If you wanted something more specific, like a Blender for architects, you would build a shell for that purpose. The software has already taken important steps towards a future like this. They may have to be more intentional to make it happen, though. In this future, Blender would become the Linux of 3D suites.

Moving in this direction would make Blender accessible to a far larger number of users. There's an amazing amount of technology below the current shell. The big challenge is exposing it in a way that makes sense to specific groups of users.

It is not possible to please everyone with a single offering. If you can be more opinionated, however, you have better chances. I'm aware this direction would splinter the community based on focus. I feel it's something worth pursuing nevertheless, as it would increase the size of the overall community and serve everyone involved better.

UI Team Efforts so Far

The UI team has done some valuable work already. They've introduced UI features such as tabs and pie menus. They've also put effort into improving the graphics of the application, so Blender might have a better-looking theme in the future. Of course part of this work is cosmetic. The team has also faced certain distinct problems.

Given it's largely a volunteer effort, these problems include communication, time management, and keeping in sync with development. I feel many open source projects face the same issues. In part it's a leadership problem. It is easy to start working on features. The hard part is actually finishing them and merging them into the trunk. A certain amount of decisiveness is needed, as otherwise things just remain hanging and never get finished.

I think it's great that Blender has a UI team these days. Earlier, the UI-related efforts were too fragmented and ad hoc. It is always easier to add something to the UI than to make fundamental changes that improve the user experience. The kernel idea goes back to this: then you can actually be opinionated and optimize the user journeys for specific groups of users, not just an amorphous mass.



TV commercials - Packshots in Cycles

As I know nothing about producing TV commercials, it was nice to get some insider insight into the topic from Bartek Skorupa. It appears there's a lot in common with software development. Clients like to change their minds, and this can happen quite late in the process. As Bartek put it, it can be smart to try to anticipate the changes. This allows you to provide better service at a more affordable price.

2D vs. 3D

In TV commercial production a large part of the work is preparation. Therefore it is important to get that phase right. Depending on the commercial, there might be a varying mix of 2D and 3D content. If you can produce the whole commercial in 3D, it is easier to deal with the changes required. If you composite 3D content on top of 2D footage, it becomes more difficult. You lose control over lighting, object placement, and so on.

A mixed approach can make sense, as going full 3D is expensive, especially if you want to reach high-grade results. It is easier just to film certain sequences. 3D, in turn, allows more versatility and physically impossible shots.

Saving a 2D Project

Even though changes are more difficult when you are dealing with 2D, they can still be possible. Bartek showed us how to achieve this using a feature known as tracking. Adobe Premiere comes with rudimentary tracking features, but you can also do the tracking outside of the editing application itself, say in Blender. Tracking simply allows you to follow a point or a shape attached to a feature across time. As it happens, this is extremely useful, as you can then animate using the resulting data.

You can, for example, tie a text element to a tracked point's location. This ties the text into the scene better and is one of the most basic uses of tracking. Tracking can also be used to fix things. You can use the tracking information to mask out objects: you would use a clone tool on various frames of your track to eliminate the objects you don't need, and the application then interpolates based on your cloned frames and the tracking data. It is just a classic image manipulation technique applied to video.
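To give a feel for what the tracking data looks like, here is a small bpy sketch that reads the per-frame position of a single track. The clip and track names are assumptions, and the coordinates come back normalized to the clip's dimensions, so you would still need to map them into your scene before animating with them.

    import bpy

    # Assumes a movie clip and a track with these names already exist
    # in Blender's movie clip editor.
    clip = bpy.data.movieclips["shot.mov"]
    track = clip.tracking.tracks["Track"]

    scene = bpy.context.scene
    for frame in range(scene.frame_start, scene.frame_end + 1):
        marker = track.markers.find_frame(frame)
        if marker is not None:
            # marker.co holds normalized (0..1) clip coordinates.
            print(frame, marker.co[0], marker.co[1])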

It is not possible to fix every project using this technique. You still cannot fix the lighting or perform heavy changes. Bartek's session showed me that you can nevertheless get quite neat things done in post-production. Of course it would be better to sort out the problems before you even start to film the footage.



From Photographer to 3D Artist, a Personal Journey

Interestingly, I've been moving from 3D towards photography. It was cool to participate in a session where Piotr ZgodziƄski showed how to go in the opposite direction. Now that I think back to my 3D days, I believe a basic understanding of photography would have helped a lot. This is what Piotr's presentation was about.

If you can afford it, 3D provides significant benefits over traditional photography. In fact, a large part of Ikea's product photos are 3D graphics. Renderers have certainly evolved to a high level. The question is how to reach results in 3D that rival, or even surpass, traditional photographs. The answer is simply to apply traditional techniques in 3D.

The biggest insight for me is that there's actually a lot to learn from old magazines (think pre-2000s) and books. Modern ones are saturated with work that has gone through Photoshop, so it is better to learn from sources that haven't. You can pick up subtle ideas related to lighting, for instance. We can apply these techniques even more effectively in 3D, as we don't have to worry about objects obscuring our view. We can place lighting however we want while keeping it invisible to the camera.
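As a rough illustration, a classic studio-style three-point rig can be scripted directly in bpy. The names, positions, and energies below are placeholder guesses, and the API calls assume Blender 2.80 or newer. The light objects themselves never show up in the camera view, so they can be placed freely.

    import bpy
    from math import radians

    def add_area_light(name, location, rotation_deg, energy):
        """Create an area light; the values used below are placeholders."""
        data = bpy.data.lights.new(name=name, type='AREA')
        data.energy = energy
        light = bpy.data.objects.new(name, data)
        light.location = location
        light.rotation_euler = [radians(a) for a in rotation_deg]
        bpy.context.collection.objects.link(light)
        return light

    # Key, fill, and rim lights around a subject at the origin.
    add_area_light("Key",  (3.0, -3.0, 3.0),  (45, 0, 45),   1000)
    add_area_light("Fill", (-3.0, -3.0, 2.0), (60, 0, -45),  300)
    add_area_light("Rim",  (0.0, 4.0, 3.0),   (-60, 0, 180), 500)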

Instead of relying on something very technical, such as HDR, Piotr suggests it's more valuable to learn to light scenes yourself. This gives you optimal control over the results; you can put the highlights exactly where you want them.

The problem with optimizing for a great single shot is that the result may not be ideal for animation use. Dealing with that takes an entirely different set of skills. Perhaps learning from the cinematographers of the past would yield solutions there.


Conclusion

I would say the conference was worth a visit overall. It is very reasonably priced (150 euros for three days), and you can get a day ticket for less. You always pick up some new ideas and get to see where the project stands at the moment.

I'm fairly confident the project will be around for quite a while, although there are some definite challenges in sight. I'm most curious to see how the UI develops. Even though the project is great, there's room for improvement when it comes to getting it into the hands of more people.

I would love to see the second part of Cosmos Laundromat happen. Only time will tell how that goes. Combining business with open source is always a daunting proposition.