
Friday, October 30, 2015

Thoughts on OSCON 2015 (Amsterdam)

The conference venue - RAI
I had the privilege to attend OSCON in Amsterdam this year. Organized by O'Reilly, it's perhaps one of the better known conferences focusing on open source. This trip was partially sponsored by jsDelivr. I ran a little stand at OSCON representing the project. Thanks for the opportunity Dmitriy!

jsDelivr is a free CDN (Content Delivery Network) that allows you to consume popular open source JavaScript libraries, fonts, and CSS frameworks easily. It leverages the power of multiple commercial CDN providers. This yields excellent performance as you can see through CDNperf, a service we developed to keep track of the performance of jsDelivr and various alternatives.
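
To make that concrete, consuming a library through jsDelivr is just a matter of pointing at a CDN URL. Here is a minimal sketch of loading one at runtime; the exact URL format is illustrative, so check jsdelivr.com for the current scheme:

    // Minimal sketch: load a library from jsDelivr instead of hosting it yourself.
    // The URL format below is illustrative; see jsdelivr.com for the current one.
    var script = document.createElement('script');
    script.src = 'https://cdn.jsdelivr.net/jquery/2.1.4/jquery.min.js';
    script.onload = function () {
      console.log('jQuery ' + window.jQuery.fn.jquery + ' served from the CDN');
    };
    document.head.appendChild(script);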

As I haven't run a stand before, it was quite an experience. I stayed at OSCON for two full days while skipping the tutorial day. Besides running a stand, I managed to attend some of the sessions. In this post I'll highlight my conference experiences. Read on.

Booth Person - Level 0

My awesome booth
I'll never look at booth staff the same way again. It is actually hard work, especially if you aren't that extroverted to begin with. That said, it can be quite rewarding to have little conversations with people and see the light bulb go on when they understand what you are actually offering.

Given the scope of the conference, there were a lot of people with different backgrounds. You could see this in the conference program. There were five(!) tracks. The content was split between nine topics. To say there was a lot of content would be an understatement. Not that I'm complaining.

It's About Education

From a booth person's point of view this meant I could not expect much background knowledge. In part my work was about educating people about how CDNs work and why they are valuable for front-end developers. Only after you understand that can you start to see the value that jsDelivr provides. To make it easier for myself, I prepared a couple of slides to show (only Chrome is guaranteed to render them well). As I didn't want to waste people's time, I tried to keep it simple.

What Would I Do Differently?

In retrospect I should have brought a stand banner with me. As it was, the stand was somewhat barebones. It would have been even more so without a laptop and a minimal stand to make it pop a little. We decided to bring some t-shirts, brochures, and stickers to share.

I feel the t-shirts offered by our sponsors were very nice. The brochures were quite nice as well, although instead of 235g paper I would now go with 300g as it's more substantial. I have mixed feelings about stickers. My biggest gripe with conference stickers is that they aren't useful. If I run a booth again, I'll try to be more original and perhaps bring some items tied to my culture. Originality doesn't hurt.


The first day of the conference was more active at the booth as more people were browsing around. The second was obviously quieter given people had already seen the booths. I must have had tens of conversations at the booth. Knowing the second day would be quieter, I took a different strategy: instead of waiting for people to come to the booth, I went to other booths to prompt discussions.

Overall it was a nice experience to run a booth. It definitely leaves you less time to socialize and attend sessions. On the other hand you get to engage with people you wouldn't otherwise meet. I hope this helped more people to find jsDelivr and see the value it provides.

Thoughts on Day 1

Morning tea
I think the highlight of the conference for me were the keynotes held at the beginning of each day. They were short and sweet, meaning they got straight to the point. I simply love this format.

I wish more conferences would adopt it given it's so effective. You could even run a single track. Given the talks are so appropriately timed, it wouldn't hurt even if not every talk was a "home run" from your point of view. You always learn something.

I recommend watching the keynotes through YouTube. I can guarantee there's something for you to learn there. The keynote speakers were simply top notch and I have nothing to complain about.

Keynotes of Day 1

I've tried to gather some of the main points of the keynotes of the first day below:

Rebecca Parsons - The Evolution of Evolutionary Architectures

  • Be ready for the future, don't predict it. - I think this is particularly true given the technology landscape changes so fast. Architecting to allow change is valuable.
  • Continuous delivery allows us to focus on new features instead of avoiding screw-ups. - I feel lowering the cost of changes is an important goal.

Douglas Crockford - The Seif Project

You cannot unsee Crockford
  • Originally the web was meant as a document retrieval system. Now we are using it to deliver applications. As a result workarounds are needed, and those workarounds take time away from delivering value. - I share his view that there's a clear impedance mismatch.
  • Standards only increase complexity. - Amen.
  • Web is insecure by design (XSS, XSRF, clickjacking, passwords) - Quoted for truth.
  • Security shouldn't be an afterthought. Something else is needed: Seif. - If I understood correctly, the point Douglas made is that we should rethink what we're doing from the ground up. They're starting at the Node.js level (better crypto) and working their way up from there. This means using better ways to generate entropy for RNGs (Random Number Generators), and eventually even new ways to implement user interfaces; the sketch below illustrates the basic entropy point. I'm a little puzzled why he mentioned Qt in this context but perhaps we'll know more in the future.
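
To illustrate just the entropy point with plain Node.js (my own sketch, nothing to do with Seif's actual code):

    // Math.random() is fine for games but not cryptographically secure.
    // crypto.randomBytes() draws from the operating system's entropy pool.
    var crypto = require('crypto');

    var weakToken = Math.random().toString(36).slice(2);      // predictable PRNG
    var strongToken = crypto.randomBytes(32).toString('hex'); // CSPRNG-backed

    console.log(weakToken, strongToken);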

Nils Magnus - Docker Security

Trust but verify
  • Containers are exciting. Despite excitement people are still skeptical about them in production. - I think this was a fair point. It's just a part of the normal hype cycle.
  • It is easy to overlook container security. Trust but verify. - This means we should try to keep our infrastructure as simple as possible to keep it understandable. Security-wise, having fewer possible attack vectors is always better. Just trusting something without understanding it can have dangerous effects.

Simon Phipps - Enough Foundations Already!

I don't even remember what this was about
  • There are a lot of open source foundations already. It's not a good idea to set up one blindly. - I agree with his assessment. A foundation can be useful but it's a big step for a project.
  • Licenses are constitutions for communities. - I couldn't agree more. Essentially open source licensing allows competing entities to collaborate while serving their own interests. Of course this leads to open source politics.
  • Companies will try to game the system the best they can. That's just the nature of the beast. - It's not the fault of the companies that they operate for profit. Managing the games they play is a challenge.

Sam Aaron - Programming as Performance: Live Coding with Sonic Pi

Sam in action. You should see him once
  • Sonic Pi == Turtle/Logo on Raspberry Pi but for audio - It is an amazingly affordable little machine that allows you to achieve a lot.
  • Perfect for education as it's easy to get started with. - It takes just a single line of code to make a sound and go from there. Better yet, Sonic Pi provides a platform for live coding.
I discussed the project with Sam in detail. The great thing is that he sees an intersection between teaching programming and professional audio. Sonic Pi is simple enough for students to pick up while allowing very advanced musicianship. Making the platform easy to access seems to serve both parties. It also comes with impressive documentation. It's not an afterthought like in many open source projects you see out there.

Chaos Patterns - Architecting for Future in Distributed Systems

Learning about chaos engineering
Chaos Patterns by Jos Boumans and Bruce Wong was an amazing talk, especially for someone who hasn't had a lot of exposure to chaos engineering. It is a practice introduced by companies such as Netflix to make sure their systems operate at scale.

Even though it provides great value to them, the practices are useful even for smaller companies. I believe chaos engineering is one of those ways you can stand out from the competition while producing more resilient software.

The simple fact is that the world is a chaotic place. Essentially programming is about managing this chaos somehow. Even so, we're hit by chaos every once in a while. When AWS goes down, half the internet stops working. Heartbleed hit the internet on an even larger scale. Accepting chaos is one way towards more stable software. This discipline is known as chaos engineering.

The great insight of chaos engineering is that instead of waiting for chaos to hit us, we can face it on our own terms. This means exposing our systems to chaotic situations on purpose. Instead of waiting for a service to go down, we might want to take a part of it down ourselves and see if things still work. Doing this voluntarily allows us to build resiliency into our systems. So when the actual chaos hits, we'll be prepared.
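
As a trivial illustration of the principle (a sketch of my own, not from the talk), you can wrap a service call so that it fails on purpose some fraction of the time, which forces the fallback path to be exercised:

    // Chaos sketch: wrap a call so it fails on purpose some of the time.
    function chaotic(fn, failureRate) {
      return function () {
        if (Math.random() < failureRate) {
          return Promise.reject(new Error('chaos: injected failure'));
        }
        return fn.apply(null, arguments);
      };
    }

    var fetchUser = chaotic(function (id) {
      return fetch('/api/users/' + id).then(function (res) { return res.json(); });
    }, 0.1); // fail roughly one call in ten

    fetchUser(42).catch(function (err) {
      console.log('fallback path taken:', err.message);
    });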

By acknowledging chaos, we can improve the user experience. Consider something like Twitter. It prioritizes availability over strict accuracy. Not everything that is shown to the user has to be absolutely accurate all the time. Sometimes we're better off with partial data to keep the user experience good. Chaos may hit us at any time.

As a result we might want to have fallback systems in place. We can, for example, use various levels of caching (localStorage, different levels at the API). Due to the way the CAP theorem works, we'll have to make some compromises. For normal applications eventual consistency is often enough.
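
A minimal sketch of the localStorage level of such a fallback could look like this (the function name is mine):

    // localStorage fallback sketch: serve the last known good response when
    // the API is unreachable, accepting some staleness in return.
    function fetchWithFallback(url) {
      return fetch(url)
        .then(function (res) { return res.json(); })
        .then(function (data) {
          localStorage.setItem(url, JSON.stringify(data)); // refresh the cache
          return data;
        })
        .catch(function () {
          var cached = localStorage.getItem(url);
          if (cached) {
            return JSON.parse(cached); // stale but better than nothing
          }
          throw new Error('no network and no cached copy for ' + url);
        });
    }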

Having all your eggs in a single basket is never a good plan. The problem is that if you use every service through a single provider such as Amazon and its whole infrastructure goes down, your service will go with it. To deal with this, it was suggested that you should split your control plane across vendors. Splitting the control plane avoids this unfortunate possibility to a large extent. It's about managing the risk.

Chaos engineering provides us means to be prepared for chaos. It won't protect us completely. It is still far better to inflict a little chaos on ourselves than let it hit us. This allows us to develop more resilient systems and fix potential issues earlier.

Modern Web Accessibility

Accessibility or a11y for true nerds
Accessibility is one of those aspects that is often an afterthought, if it is given any thought at all. Patrick Fox's talk delved into this. There are entire specifications, such as WAI-ARIA and WCAG 2.0, about the topic. Instead of going too deep into these, Patrick provided several actionable tips on how to approach accessibility.

Just being WCAG compliant isn't enough. We need to be smart about it. Most importantly we need to have empathy towards our disabled users. I've tried to summarize Patrick's main points below:
  • Minimize the use of ARIA. It's better to default to semantic code and add ARIA only if it's really needed.
  • Aim for the widest reader/browser support possible.
  • Test often, don't wait until the end. Include developers/QA in the effort.
  • Consider accessibility in the context of the project lifecycle.
  • Dynamic user interfaces (Single Page Applications in particular) come with challenges of their own. The systems in use have been designed to work with static content!
  • We can use ARIA to announce content correctly. That way the screen reader can let the user know that a specific page has been reached, for example. The aria-live attribute is useful here, and there should be only one live region per project to keep it simple (see the sketch after this list).
  • patrickfox/a11y_kit provides a good starting point.
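
As a minimal sketch of the announcement idea (my own example, not Patrick's code):

    // A single shared live region; screen readers announce its text changes.
    var region = document.createElement('div');
    region.setAttribute('aria-live', 'polite');
    region.setAttribute('role', 'status');
    document.body.appendChild(region);

    function announce(message) {
      region.textContent = message; // assistive technology picks up the change
    }

    // For example, after a client-side route change:
    announce('Search results page loaded');
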
I am not particularly strong when it comes to accessibility. Patrick's talk gave me some idea of the issues related to it. This is one of those topics that doesn't come up in the community that often. It definitely deserves some further thought and research at a later time.

ES6 Metaprogramming Unleashed

Getting meta with Javier
Javier Arias Losada dug into ES6 metaprogramming. The idea itself was somewhat familiar to me. The approach is useful for developing small DSLs (think Chai assertions for example). ES6 provides enough power for that.

Even better, we can finally throw proper errors for invalid object access if we want. You should check out the slides for details. These are good techniques to know and Javier explains the ideas well.
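
The error-throwing trick builds on ES6 Proxies. Here is a minimal example of my own:

    // Use an ES6 Proxy to throw on access to missing properties instead of
    // silently returning undefined.
    function strict(target) {
      return new Proxy(target, {
        get: function (obj, key) {
          if (!(key in obj)) {
            throw new TypeError('No such property: ' + String(key));
          }
          return obj[key];
        }
      });
    }

    var config = strict({ port: 8080 });
    console.log(config.port); // 8080
    try {
      console.log(config.prot); // typo!
    } catch (e) {
      console.log(e.message); // No such property: prot
    }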

Building a Mobile Location Aware System with Beacons

I know Bluetooth beacons have been a relevant topic for a while especially now that Apple has gotten involved with iBeacon. We've had global positioning for a long time. Now we're solving the remainder, local positioning. Beacons allow us to achieve this and the technology is starting to become affordable for mass consumption. Tim Messerschmidt went quite deep into this topic and it was nice to see the state of the art.

Especially the development of Bluetooth Smart has helped a lot. The original implementations of Bluetooth consumed a lot of power. The newer implementations are far better. I believe this is one of the main factors driving the adoption of beacons. The basic ideas behind local positioning are simple. We can either use triangulation based on beacon locations or trilaterate based on distances to the object being tracked. It's all very simple math, as the sketch below shows.
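
Here is a rough 2D trilateration sketch of my own (not from the talk): subtracting the circle equations of three beacons leaves two linear equations to solve.

    // 2D trilateration: beacons at known positions, distances estimated from
    // signal strength. Subtracting circle equations yields two linear equations.
    function trilaterate(b1, b2, b3) {
      var A = 2 * (b2.x - b1.x), B = 2 * (b2.y - b1.y);
      var C = b1.d * b1.d - b2.d * b2.d - b1.x * b1.x + b2.x * b2.x - b1.y * b1.y + b2.y * b2.y;
      var D = 2 * (b3.x - b2.x), E = 2 * (b3.y - b2.y);
      var F = b2.d * b2.d - b3.d * b3.d - b2.x * b2.x + b3.x * b3.x - b2.y * b2.y + b3.y * b3.y;
      return {
        x: (C * E - F * B) / (E * A - B * D),
        y: (C * D - A * F) / (B * D - A * E)
      };
    }

    // Expect roughly { x: 5, y: 3 }.
    console.log(trilaterate(
      { x: 0, y: 0, d: 5.83 },
      { x: 10, y: 0, d: 5.83 },
      { x: 5, y: 10, d: 7 }
    ));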

Just deploying beacons and starting to push content to consumers isn't enough. It has to be done in a tactful manner and actually deliver value. Otherwise you are just building infrastructure for nothing.

Beacons come with certain limitations it's good to be aware of. They can receive interference from various sources including microwaves, satellite connections, electrical sources, monitors and LCD displays, and anything in the 2.4/5 GHz range (Wi-Fi), for example. Incidentally, materials and even people (mostly water, after all) can cause interference and hurt results.

Writing Code that Lasts

Red again. Coincidence?
Rafael Dohms dug into practices that help in writing code that lasts. As his slides go into great detail about the topic, I won't regurgitate those here. Let's just say that a little bit of discipline goes a long way in programming. Certain principles can help us to avoid traps.

In my experience you develop an intuition for good code but sometimes you need a little advice to push you to the right direction. Rafael's talk was full of these little tips.

The tips are quite technical in nature, though, and won't help with larger scale issues like how to know what to do and when. Developing the code is only a small part of it all. Learning to develop robust code is a good starting point of course.

Thoughts on Day 2

Dutch design. Reminds me of Finland.
As the first day was quite intense, I spent more of the second day at the booth and socializing in general. That is not to say the topics weren't interesting. It always comes down to making some hard choices. Just like on the first day, the keynotes of the second day were top notch. In fact, I managed to pick up several actionable ideas from them.

Keynotes of Day 2

Just as for day 1, I've tried to gather some of the interesting points below:

Stuart Frisby - AB testing: Test your own hypotheses, and prepare to be wrong

Especially for a big company like Stuart's, it is very important to understand what you are doing and why. This is where AB testing comes in. It allows them to prove themselves wrong. Simply asking the question "this or that" enough times can provide the confidence you need to choose between the options.

It's not that hard to set up an AB test. First you need to decide what you want to test, figure out how to measure it, and then run the test and analyze the results. This is where statistics come in, and you'll need to understand whether the results are statistically significant (see the sketch below). I've done some AB testing myself but Stuart's talk made me realize I should test more to make better decisions.
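
For instance, a two-proportion z-test is one common way to check whether a difference is more than noise. A small sketch of my own (not from the talk):

    // Two-proportion z-test sketch: is variant B really better than control A?
    function zScore(convA, visA, convB, visB) {
      var pA = convA / visA;
      var pB = convB / visB;
      var pooled = (convA + convB) / (visA + visB);
      var se = Math.sqrt(pooled * (1 - pooled) * (1 / visA + 1 / visB));
      return (pB - pA) / se;
    }

    // |z| > 1.96 corresponds to p < 0.05 on a two-sided test.
    var z = zScore(200, 10000, 260, 10000);
    console.log(z.toFixed(2), Math.abs(z) > 1.96 ? 'significant' : 'not significant');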

The problem is that often our intuition is wrong. AB testing provides us means to eliminate that bias. Instead we just need to become really good at designing and executing tests. For that to be possible you are going to need enough traffic, though, and you need to be very specific about the goals of your testing. I believe this ties back down to business metrics.

Stuart mentioned several anti-patterns. You shouldn't test too much at once. Keep it simple and run one experiment at a time. The test results depend on the context and it is not a good idea to generalize too much. The great hamburger menu debate is a good example of this. Using one makes sense in certain contexts. In other contexts something else works better.

AB testing ties back to the company culture. You need to be prepared to be wrong and let the data guide you. This isn't trivial. I believe this is a good way to grow an online business as instead of relying on hunches, you actually have something more solid.

Ari Gesher - Privacy: The next frontier

To be honest, I didn't get a lot out of Ari Gesher's talk as I didn't have the right background to appreciate it. I understood that privacy is a different problem than security. In the future we are going to need more granular ways to deal with aspects related to privacy. This potentially means stricter technological controls. I can only imagine how difficult a problem it is in large scale systems where you want to make sure only the right people have access to the right data.

Mandy Waite - Kubernetes: Changing the way we think and talk about computing

If I understood correctly, Kubernetes is something that allows you to orchestrate containers. It provides the semantics you need to control them in a large scale environment and split tasks between them so that the hardware gets utilized in a smart manner. This is a major problem for companies such as Google and it's likely the reason why they developed the system.

Ninh Bui, Hongli Lai - Bootstrapping a business around open source

Yes, that's Shia
Ninh's and Hongli's talk was one of the highlights of the conference for me. Given I'm essentially bootstrapping a little business of my own on top of open source, the talk couldn't have been any more relevant. There were several very valuable lessons in the talk:
  1. Beware of doing too much consulting on the side. Multi-tasking comes with a significant overhead. It will take attention from your own product.
  2. Charge money for your products. Open source doesn't mean you cannot make money. After all that's what enables you to work on it in the first place!
  3. Market actively. I concur with this point completely. I feel Linus' "Build it and they'll come" doesn't work for everyone, especially these days.
  4. Focus and commitment is vital. You have to show you are willing to push your offering to new levels.
On the business model side there were a couple of cool points as well:
  1. Selling support contracts can be problematic as it requires sales persons and a large upfront investment. It is a business that's difficult to scale.
  2. Ideally you should be able to produce passive income. It is even better if it is in the form of recurring revenue as that keeps the boat from sinking.
  3. Have an open core and build value on top of that. Consider selling subscription based licensing (see 2.).
  4. Develop paid products, avoid relying on donations. Be sure not to sell premium without any extra features.
  5. Charging money for your software enables a future for your business.
They also made a few interesting points on marketing:
  1. Word of mouth can work well with open source due to its nature and lead to organic growth.
  2. Marketing is the art of repetition.
  3. Engineering can be seen as a marketing resource. I.e., engineering something around the current offering can lead to good results and help with the adoption of the main product.

David Arnoux - Growth Hacking: Data and Product Driven Marketing

Getting into growth hacking with David
David Arnoux's talk was another highlight of the conference for me. There was an amazing amount of little golden nuggets in it as listed below:
  • A small army can beat a bigger one when using subversive tactics. In fact, the statistics work greatly in favor of the smaller one in this case (63.6% victory rate). I don't know where the figure comes from but the point is that tactics matter. This reminds me of the Winter War.
  • People don't like to remain still. When revamping the Houston airport, just reducing the queuing time wasn't enough. They actually needed to make sure people remained in motion to make complaints go away. Solve the right problem.
  • Airbnb bootstrapped itself based on Craigslist data. In this case working out a creative approach led to amazing success. That wouldn't have been legal in Finland.
  • As AdWords wasn't working out for Dropbox and was costing them money, they built a referral program within their product. It worked out amazingly well.
  • Growth hacking tactics rely on using other people's networks (OPN) somehow. The problem is that a channel might grow for a while but will get saturated fast. The challenge is finding undervalued channels in which to grow.
  • Growth hacking is perfect when you have limited resources and you need good return on investment. It allows small companies to reach significant market share by being more agile than the incumbents.
David listed several growth hacking principles:
  • Build it, they don't come. - I might call this the inverse Linus' law.
  • Only testing shows what's successful. - This ties back to the AB testing talk.
  • Scale working, kill failing. - Don't get emotionally attached to ideas. Instead, work through a large amount of ideas and find those that work.
  • Speed. - Speed is of the essence.

Overall the talk was just great and I recommend watching it if you want to get a better grasp of the topic. It's no surprise there's a growing demand for growth hackers in the industry. You will have to master many separate disciplines to be truly effective at growth hacking. Given it's so hard for one person to have a good understanding of each, companies have begun to form growth hacking teams to push their products forward in a market driven way.

Death to passwords - Tim Messerschmidt

Even though I was aware that passwords are problematic, Tim Messerschmidt's second talk of the conference made me even more aware of that fact. The main issue is that passwords are weak by definition. There are both technical and psychological reasons for this. If we enforced safe passwords, that would mean compromising user experience to some extent. There are some tweaks we can implement (i.e., show the password on mobile, generate strong passwords, gamify password creation) but in the end they are just crutches.

It is possible to screw up the situation on the technical side as well. Every once in a while we see the results of that in the news. At the very least we should hash passwords with proper salting using a safe algorithm, such as bcrypt. Just don't use weak algorithms like SHA-1, MD5, and such.
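
With Node.js and the bcryptjs package, for example, this boils down to a couple of calls. A minimal sketch (tune the cost factor to your hardware):

    // Sketch using the bcryptjs npm package. The cost factor of 10 is a
    // reasonable default; raise it as hardware improves.
    var bcrypt = require('bcryptjs');

    var hash = bcrypt.hashSync('correct horse battery staple', 10); // salt is generated and embedded
    console.log(hash);

    // At login time, compare against the stored hash; never store the raw password.
    console.log(bcrypt.compareSync('correct horse battery staple', hash)); // true
    console.log(bcrypt.compareSync('wrong guess', hash));                  // false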

Tim brought up the topic of two factor authentication. I've been using Google Authenticator for a long time and lately I've been experimenting with a Yubikey. You should pick up Google Authenticator at least and hook it up with services that support it (Google and GitHub come to mind). It's cheaper than suffering a security breach.

Other means of secondary authentication include biometrics and trusted devices. For instance, if it's possible to ascertain you are in the possession of a certain device, that's something that could be useful.

In mobile usage it may be enough just to ask for the user's email on login and send an authentication link there. This works very well and avoids the problem of passwords altogether. Of course your email can get compromised, but then you are sort of in trouble anyway.

There was also discussion about a trust score assigned to a user. That can be understood as something that consists of no id, a social id (think social security number), and a concrete id within the system. This ties back to the topics of authentication (are you who you say you are?) and authorization (do you have permission to do something?). You might be able to perform certain operations in a system through a weaker level of authentication. Heavier operations could require heavier authentication.

Of course OAuth and OpenID Connect were mentioned as well. Given it's an important and broad topic, standards have emerged to help with the situation. I have a feeling we still might have a way to go before we can ditch passwords. At the very least I recommend using a sane password manager to push the problem out of your head if nothing else.

Chris Chabot - Technology isn't interesting until it's technologically boring

Chris Chabot's talk was the perfect way to end the conference (for me anyway). It's amazing how much technology we take for granted. It is just the nature of technology. Now we have generations that haven't lived a day without the internet. These so-called digital natives have a completely different view of the world than those who lived before. I can only imagine what the world will look like in a decade or two.

I can only tell that the pace of technological progress has been increasing. You can see this from the speed at which technological diffusion accelerates. Each revolution happens faster than the previous one and we're able to accept new technology faster and faster. Just consider something like Uber and the way it revolutionized the taxi industry. There are new Ubers on the way.

As technology progresses and diffuses through the layers of society, it becomes something that can be built on top of. When most people have mobile phones, you can start selling mobile applications to them. At some point mobile phones will become obsolete and be replaced with some newer technology. Rinse and repeat.

According to Chris, innovation happens at the edges. I think the real question is figuring out where that edge is. Once you know where it is, you can push progress forward. Perhaps the accelerating pace of development tells us that people are getting better at this. As the technology landscape grows, there are more niches to exploit. This is innovation at scale.


Open source camera from the hardware pavilion
OSCON was easily the most amazing conference I've participated in. If I get a chance, I'll gladly participate again. There's an immense amount to learn and the conference gives you a nice cross section of what's going on at the moment.

It's not the cheapest conference but I believe you might get some good value out of it depending on how close to the edge you prefer to be. It's definitely worth it for companies to send their people there. It gets harder to justify if you have to pay for it all yourself. You could make a worse investment, though.

Thursday, October 29, 2015

Thoughts on Blender Conference 2015

I had a chance to visit this year's Blender Conference after a hiatus of a few years. The conference isn't a particularly big one (~120 people) but it's a nice experience, especially if you are into computer graphics. The size of the conference has remained quite static, yet there were a lot of new faces. Perhaps that reflects the growth and evolution of the community.

I used Blender for 3D modeling very actively for a few years (2005-2010) and was involved in development. Blender is an interesting example of open source project success. Initially the software was closed and since becoming an open source project it has been growing solidly.

I joined the conference on its third day. As a result I missed a large part of the content. It was a nice experience regardless. You can find the conference sessions through YouTube. Read on to see what I thought about some of the sessions I participated in. Before that, I want to give you some background, as you might not know Blender that well.

On Blender Foundation's Animation Projects 

Blender Foundation is known for its animation projects. They are funded using various sources, including public and community support. This model allows them to push the software forward in a meaningful manner. Each project has managed to make the software better in its own way. That said, sometimes the features implemented are project-specific hacks that aren't useful beyond some specific purpose. But sometimes you have to do what it takes.

Cosmos Laundromat - The Feature Film?

Initially Blender Foundation's newest project, Cosmos Laundromat, was meant to become a feature film. That would have required a heavy amount of funding (in the 2-3 million euro range). This goal was not met. Jason van Gumster has dug deeper into the topic.

As a result the scope was reduced to match earlier Blender Foundation short film projects. It is possible, however, that development will continue in an episodic manner. This depends entirely on how well they are able to resolve the funding situation.

Rendered Using Cycles

What makes Cosmos Laundromat particularly impressive compared to the earlier efforts is the fact that it has been rendered using Cycles. Cycles is an unbiased, physically based, path tracing render engine designed for animations. Even though it is slower than traditional renderers, the progressive approach means you can just resume a render if you want less noise. Stack Overflow goes into great detail on what this means.

Cosmos Controversy

For some reason Blender Foundation's projects have a tendency to cause some level of controversy. That applies to the PG-13 rated Cosmos Laundromat, which includes an F word and begins in a rather grim way. Technically it's an excellent piece, though, and easily the best film they have done so far. See it below.

Cosmos Laundromat - Art and Pipeline

In the first session of the third day several key members of the Cosmos Laundromat project discussed their experiences. As you can imagine, developing new capabilities while trying to produce a short film can be somewhat challenging. Rendering especially was a great hurdle. For a short film like this they needed 17455 frames (25 fps) at 2048x858 resolution. That might not sound too bad. Unfortunately the frames could be computationally expensive due to the amount of special effects used. Rendering realistic grass in particular is a hard problem.

Rendering with Qarnot and ANSELM

As far as I understand, a significant part of the rendering effort was pushed to a company known as Qarnot Computing. The University of Ostrava gave access to its computing cluster (ANSELM) to help further.

Qarnot has managed to combine the idea of radiators with compute. As computing can produce a large amount of heat, this makes perfect sense. I won't describe their system in detail here but I recommend looking up their technology. Perhaps we can replace our heaters with something smarter in the future.

Problems During Production

Besides Qarnot, Blender Foundation has a little computing cluster of its own for test renders. They encountered particular rendering related problems during the projects. I've tried to list them below:
  • Their rendering nodes could run out of space. I'm not exactly sure how this could happen, though. It feels like a technical issue (logrotate for rendering?).
  • Blender wasn't always up to date on their nodes. This could be problematic, especially if some particular fix had been made to make the scene being rendered work correctly. This feels like a technical issue as well. I feel checking against a version stored within the file before rendering would have mitigated this. At least you would avoid wasting some effort that way.
  • Rendering on different operating systems could lead to different results. I don't know if there's an easy solution for this. Likely having a strong test suite would help in this regard. Ideally you would have a continuous integration system in place, rendering different scenarios under different setups. It's not trivial to set up but I believe it would have helped to spot these problems earlier.
  • Sometimes render times could be unpredictable. This applied especially to frames that had a lot of grass in them. Assuming render times are comparable between different resolutions, I expect it would have been possible to predict this problem by performing preview renders in smaller scale first and then analyzing the results to see where possible problems might arise. You can always try to tweak the worst spots if you are aware of them.
  • Due to the nature of the rendering used, noise could be an issue. Of course the solution is simple: just resume the rendering until it's smooth enough. They implemented resuming particularly for this project and it will likely make it into a stable release sometime in the future.
They tackled these problems by implementing extensive logging, increasing the amount of available computing power, and reducing scene complexity where it made sense. They likely applied some technical solutions as well. I imagine implementing features, such as LOD (level of detail) checks based on the distance to camera, could lead to nice improvements. Computer graphics is all about cheating after all. If it looks good, it is good.

To keep their computing power manageable, they implemented a system known as Flamenco. It supports only Blender for now and is in its early stages development-wise. That said, it's nice to see projects like this grow out of Blender Foundation projects. Hopefully more people will find it.

UI Team - Report on an Ongoing Journey

As Blender is notorious for its difficult-to-learn user interface, it was nice to take part in a session dedicated to it. Even though Blender is hard to pick up initially, it is amazingly productive software. Blender's UI design has been heavily inspired by Jef Raskin's The Humane Interface. It is the same book that has inspired Apple's design decisions.

Blender Has to Serve Both Beginners and Pros

Likely the biggest single challenge Blender faces in the future is how to serve beginning users to grow the user base while keeping the existing power users happy. This is the reason why Blender has a UI team these days. It makes all those decisions nobody else wants to make. Without some authority to make the decisions, you easily end up bikeshedding. As time goes by and no concrete decisions are made, the situation only gets worse.

The problem with a big program like Blender is that decisions made in the past have a huge inertia. It can be difficult to change things one way or the other as you need to be careful not to lose something valuable in the process. This is the same problem many other software suites face. When you try to cater to many different groups of users, it is difficult to keep everyone happy. I don't believe that's even a good goal.

This topic was touched on by Gianluca Vita in his session about Blender for architects. The challenge is that traditionally architects are taught to think in terms of 2D plans. They have a very specific set of requirements. It is not surprising that solutions such as SketchUp are therefore popular amongst architects. Being easy to pick up, SketchUp can be an amazing piece of software. It's nowhere near as popular as Blender but it doesn't have to be.

Blender as a Kernel?

I believe Blender should aim to become more like a kernel. The current Blender would be just one shell on top of that. If you wanted something more specific, like a Blender for architects, you could design it so. The software has taken important steps towards a future like this. They may have to be more intentional to make it happen, though. In this future Blender would become the Linux of 3D suites.

Moving in this direction would make Blender accessible to a larger number of users. There's an amazing amount of technology beneath the current shell. The big challenge is exposing it in a way that makes sense to specific groups of users.

It is not possible to please everyone with a single offering. If you can be more opinionated, however, you have better chances. I'm aware this direction would splinter the community based on focus. I feel it's worth pursuing, however, as you increase the size of the overall community and serve everyone included better.

UI Team Efforts so Far

The UI team has done some valuable work already. They've introduced UI features, such as tabs and pie menus. They've also put effort towards improving the graphics of the application. Blender might have a better looking theme in the future. Of course a part of this work is cosmetic. The team has faced certain distinct problems.

Given it's a volunteer effort in large part, these problems include communication, time management, and keeping in sync with development. I feel many open source projects face the same issues. In part it's a leadership problem. It is easy to start working on features. The hard part is actually finishing them and merging them to the trunk. A certain amount of decisiveness is needed, as otherwise things will just remain hanging and never get finished.

I think it's great that Blender has a UI team these days. Earlier the UI related efforts were too fragmented and ad hoc. It is always easier to add something to the UI than to make fundamental changes that improve the user experience. The kernel idea goes back to that, as then you can actually be opinionated and optimize the user journeys for specific users, not just an amorphous mass.

TV commercials - Packshots in Cycles

As I know nothing about producing TV commercials, it was nice to get some inside insight into the topic from Bartek Skorupa. It appears there's a lot in common with software development. The clients like to change their minds, and this can happen quite late in the process. As Bartek put it, it can be a smart idea to try to anticipate the changes. This will allow you to provide better service at a more affordable price.

2D vs. 3D

In TV commercial production a large part of the work is preparation. Therefore it is important to get that phase right. Depending on the commercial there might be a varying amount of 2D and 3D content. If you can produce the whole commercial in 3D, it is easier to deal with the changes required. In case you composite 3D content on top of 2D footage, it can become more difficult. You lose control over lighting, object placement, and so on.

A mixed approach can make sense as going full 3D is expensive, especially if you want to reach high grade results. It is easier just to film certain sequences. 3D allows more versatility and physically impossible things.

Saving a 2D Project

Even though changes are more difficult when you are dealing with 2D, they can still be possible. Bartek showed us how to achieve this using a feature known as tracking. Adobe Premiere comes with rudimentary tracking features. You can even deal with it outside of the application itself, say in Blender. Tracking simply allows you to follow a point or a shape attached to a feature across time. As it happens, this is extremely useful as you can then animate using the data.

You can, for example, tie text to a tracked point location. This will tie it to the scene better. It is one of the most basic usages of tracking. It can also be used to fix things. You can use tracking information to mask out objects. In this case you would use a clone tool in various frames of your track to eliminate the objects you don't need. The application is then able to interpolate based on your cloned frames and tracking data. It is just the classic image manipulation technique applied to video.

It is not possible to fix every project using this technique. You still cannot fix project lighting or perform heavy changes. Bartek's session showed me that you can still get some quite neat things done in post production. Of course it would be better to sort out the problems even before you start to film the footage required.

From Photographer to 3D Artist, a Personal Journey

Interestingly, I've been moving from 3D towards photography. It was cool to participate in a session where Piotr Zgodziński showed how to go in the opposite direction. Now that I think of my 3D days, I believe a basic understanding of photography would have helped a lot. This is what Piotr's presentation was about.

If you can afford it, 3D provides significant benefits over traditional photography. In fact, a large part of Ikea's product photos are 3D graphics. The renderers have certainly evolved to a high level. The question is how to reach results in 3D that rival, or even surpass, more traditional results? The answer is simply to use traditional techniques in 3D.

The biggest insight for me was that there's actually a lot to learn from old magazines (think pre-2000s) and books. Modern ones are saturated with work that has gone through Photoshop. It is better to learn from sources that haven't. You can pick up subtle ideas related to lighting, for instance. We can implement these techniques effectively in 3D as we don't have to worry about objects obscuring our view. We can place lighting however we want while keeping it invisible to the camera.

Instead of relying on something very technical, such as HDR, Piotr suggests it's more valuable to learn to light yourself. This gives you optimal control over the results. You can put those highlights where you want this way.

The problem in optimizing for great results for a single shot is that the results may not be ideal for animation usage. Dealing with that would take an entirely different set of skills. Perhaps learning from the cinematographers of the past would yield solutions to that.


I would say the conference was worth a visit overall. It is very reasonably priced (3 days, 150 euros) and you can get a day ticket for cheaper. You always pick up some new ideas and get to see where the project is at the moment.

I'm fairly confident the project will be around for quite a while. There are some definite challenges in sight. I'm most curious to see how the UI develops. Even though the project is great, there's room for improvement to get it into the hands of more people.

I would love to see the second part of Cosmos Laundromat happen. Only time will tell how it goes. Combining business with open source is always a daunting proposition.

Sunday, October 18, 2015

Afterthoughts - Tampere Goes Agile '15

Tampere Goes Agile is one of those events I visit each year. It's free to attend and you get to see some old acquaintances while at it. And of course you get exposed to some new ideas and meet new people.

We had a Lean Coffee session before the talks
Just like last year, the event was held at Sokos Hotel Ilves. There were roughly 140 attendees. Unlike last year, there were no workshops. The theme of the conference this year was "Inspired beyond agile".

I think that was a good pick given agile is getting a little worn out as a topic in itself. Agile practices are widely in use. Perhaps the main challenge is in getting from "doing agile" to "being agile". This requires organization-level acknowledgment of the ideas and means the entire mindset has to change.

Bob Marshall - After Agile

The day started with a keynote by Bob Marshall. He has introduced the concept of rightshifting to the community. The core idea is that a large amount of organizations are underperforming. There are some exceptions to the rule of course. That said, there's a lot we can do to improve the situation.

Prisoners of Existing Ideas

The problem is that we're always more or less prisoners of our mindset and existing ways. Replacing each person of an organization while keeping the underlying ideas the same wouldn't mean a thing. Instead of replacing people, we'll need to operate on a more fundamental level. In order to improve the effectiveness and efficiency of our organizations, we'll need to be able to imagine better ones.

What Does an Ideal Organization Look Like?

I think the greatest challenge is in figuring out what these better organizations might look like. We have certain models that we've inherited from the industrial era. Even though they sort of work, the problem is that knowledge work has unique qualities of its own. Beyond this, each organization operates within contexts of its own. All involved parties have needs of their own. The challenge lies in responding to these needs in an adequate manner.

Going Beyond Agile

As Bob put it, agile thinking gets us only up to a certain point. In order to improve our organizations, we'll need to look at the whole. Simply optimizing software development is suboptimal. Real improvements can be achieved only by taking a holistic view. As a solution Bob suggests organizational psychotherapy. That can be seen as a way to understand organizational health and change the mindset of the organization to one that's more conducive to high performance.

Forming Better Organizations

It is interesting to ask similar questions about forming organizations. Instead of conducting therapy, you actually might have a chance to build a performing culture to begin with. This might be an area that's easy to neglect when growing a company. If we understood how to perform better from the start, that would allow us to build better companies faster. Bob's ideas have merit beyond existing organizations.

Jaakko Kuosmanen - Is Agile everything or is it the only thing?

Jaakko's presentation provided us multiple different views on agile. I've tried to summarize some of the main points below:
  • Agility can yield business value. Liquid assets (renting vs. owning) can yield competitive advantage when the market is volatile. The problem is that fixed assets can become expensive if the situation changes in a way you cannot predict accurately.
  • When you are dealing with a fickle market (i.e., game development), it can be worth your while to experiment a lot and fail fast with ideas. Focus only on those that work. Kill your darlings.
  • Projects tend to happen in a process context and there's interaction between them. Depending on the view, you might have a different focus. You can for instance think of projects as assembly line work and discard aspects such as maintainability. Focusing on the process might yield entirely different results.
  • Software is just one part of the equation. It is concrete services that yield actual value.
  • Especially in big organizations different business units might have conflicting views of the world. Sticking to too rigid plans without synchronization can lead to disasters.
  • There's a rift between the physical and virtual world. Physical material costs whereas bits are free. This leads to different economics.
  • Services and infrastructure have needs of their own. Services can be seen as something flexible whereas infrastructure is something rigid. 
  • It is important to understand your core competencies. Consider outsourcing parts that aren't.
  • Solving the right problem is more important than solving the problem right.
  • There can be a conflict between the roles of a customer and a user. Customer pays and drives development while the user might have to endure. In self-service these can be the same, however.
  • Understanding what to document can be valuable in the future. A small amount of work beforehand can help to avoid a huge amount of work later on.

Markus Päivinen: How to make learning a lifestyle

Markus described how they have managed to make learning a lifestyle at Ericsson. They've acknowledged that in order to remain in business, they'll need to retain their edge.

As a result they've put a significant effort in nurturing a company culture that enables people to learn. The problem is that if you aren't learning, you are regressing compared to the competition.

They've implemented various ways in which they push towards a learning culture. I've listed the ways covered below:
  • Ericsson Academy - Online courses for onboarding and learning those skills you are going to need for your job.
  • Code School for kids - By teaching children you can actually learn new ways to approach problems. They have a different view on the world. That's something adults tend to lose as they grow up.
  • Game Jams and hackathons - Allowing people to develop in more freeform manner might uncover hidden talent.
  • Breakfast Club - Sharing a breakfast can work.
  • Learnathons - Ericsson cross-trains people inside the company. They look up topics people are interested in picking up, figure out who understands them well enough to train others, and then do it. This can be scaled globally.
The most important learning here is that you can change company culture through actions. It is the people that create the culture. Encouraging people to learn leads to improvements as the newly gained skills and knowledge are put into action. Even though learning takes time away from work, is there really an alternative?

Andy Edmunds: Disciplined Agile Delivery for Critical System Development

Andy's lightning talk showed how to take some of the ideas from agile into an academic context. The problem is that formal methods and high integrity systems are rigid by definition. 

Bringing agile processes into the equation may make the techniques palatable to more people.

Juha Vuolle: Modern Companies

Juha's talk highlighted some of the key challenges modern companies face. I've summarized the key ideas below:
  • The nature of available data has changed. It is becoming more structured.
  • There's more data available than an individual, or even an organization, can manage.
  • There's a relation between the amount of structure in data and organization performance. Too much structure can hurt performance especially in the context of software business.
  • 52% of F500 companies from the year 2000 have vanished. It is the age of disruption.
  • There are large differences between old and new business practices. Whereas older models are more reactive by nature, modern are rather proactive.
  • There are major differences in the way we structure business (pyramid vs. self-organizing). Older companies split themselves by function and title. Modern companies organize by teams, advisory forces, and roles within these.
  • While older companies favor centralized decision making, modern ones favor decentralization. Let those who have the best information make the decisions.
  • Company DNA can be modeled within an operating system, a set of rules defining its limits and its approach.
  • Decision making should be pushed to the people, but this can be challenging. It is better to make a bad decision fast than a good decision late. There should be means to track the quality of decisions so better ones can be made in the future.
  • A modern company should have a conflict resolution process. If something goes wrong, there should be clear ways to deal with it.
  • A modern company is limited by the development of its leaders. It can be challenging to match compensation with contribution.
  • There's no single recipe for how to build a modern company. Principles transfer, practices do not.
  • When scaling a company up, rephrase the problems encountered.

Timo Stordell: DevOps. Boosting the agile way of working

DevOps is one of those terms that crops up every once in a while. As I was curious, I went to see Timo's presentation. DevOps isn't anything revolutionary.

Instead, it should be seen as an incremental way to improve our development practices.

Small Bangs over a Big Bang

Instead of performing big bang releases, it is more beneficial to release smaller bits faster. Besides decreasing the possibility of disaster, this allows you to receive feedback sooner. This can also work as a competitive advantage and allow you to capture market share.

Requirements Management Meet Acceptance Testing

In an ideal situation your requirements management and acceptance testing are closely connected. Acceptance testing gives you a degree of security and allows you to catch potential issues before deployment. Of course there's more to testing than that, but the basic idea is solid. Timo demonstrated how they perform acceptance testing using physical devices and automation. Building a rig of your own can be absolutely worth it.

Standardize Development Environments

The DevOps practice of standardized development environments is beneficial as it makes it easy to reproduce potential problems. Thanks to virtualization we can get new people aboard with little effort.

Monitor to Understand What to Develop

Monitoring can give us a good idea of how well a product is performing. I expect this could be tied to business indicators. Having data available allows you to make better decisions faster and focus your effort on the right things.

The book The Phoenix Project is a good starting point for digging deeper into the topic.

Allan Kelly: Beyond Projects

The second keynote of the conference was held by Allan Kelly. His main thesis is that projects don't make sense anymore. I couldn't agree with him more. Interestingly he started out by making a book analogy.

He wrote his first books in a traditional manner. His newest one took an agile approach and completely changed the way he thinks about writing. Incidentally I've used a similar approach with mine as I couldn't get a publishing deal.

Digital Environment is Ideal for Agile

The reason why agile approaches work with book writing has to do with the fact that writing has become digital. This enables fast iteration and easy distribution. Instead of having to plan everything carefully, we can learn as we go and make decisions based on that. I would say writing a book this way is almost instinctive. You listen to the people and do the right thing.

Projects - on Schedule, on Budget, with Sufficient Quality

Projects go against all of this. Generally project success is determined by staying on schedule, staying on budget, and maintaining sufficient quality. This says nothing about the value delivered. Projects don't put value in flexibility. The basic problem is that requirements change. This goes against the fixed nature of projects.

Projects Emphasize the Wrong Things

The key problem is that projects put focus on the wrong things. Especially in product development we should focus on the benefit delivered. The most important question during development should be "when can I deliver value next?" Putting emphasis on iteration speed and value delivery instead of sticking to a plan makes sense.

While projects are temporary, software is forever. There's a clear conflict here. As projects put emphasis on getting things done by a certain date, this may lead to cutting corners and reduced quality. This sacrifices the long term view. This is not to say deadlines aren't a good thing. Checkpoints can bring focus to development. Then it becomes a problem of figuring out how to provide most value by the deadline. Focus on flow and value.

Projects put emphasis on the idea of temporary organizations. Essentially you form a task force, finish the project, and move on. The great tragedy is that you have to break up a functioning team and start all over again. Putting emphasis on teams instead would be more beneficial. Treat a team as a unit and push work through it.

Software Development Doesn't Have Economies of Scale

The project model has been optimized for big. Unfortunately software development doesn't have economies of scale. It is, in fact, the converse situation! It is cheaper to produce software in smaller quantities as you decrease the amount of possible risk. It is cheaper to make small mistakes than big ones. Smaller batches work better for software as you can deliver sooner and de-risk your work as you go.

We're Still Stuck with Project Nomenclature

The biggest problem of them all is the fact that we're still stuck with project related nomenclature. This all ties back to Bob's presentation and mindset. It is difficult to see the world in any other way if you are stuck in a project way of thinking. After all, that's how the majority of the industry operates. A new language is needed to break down old habits. We'll need to put emphasis on producing value. As Allan put it, it's time for Waterfall 2.0 - continuous flow.


I ended the day with a Pepsi
Overall the conference was quite nice. The keynotes alone made the trip worth it. I probably didn't get that much from all the talks, but then I've been to quite a few conferences already and it's inevitable some of the ideas begin to repeat at some point.

If it were up to me, I would probably double the number of normal presentations while cutting their time in half to twenty minutes. That's enough to get a couple of ideas across while forcing speakers to keep it simple. It would also give the audience exposure to more ideas. That's the point, after all.

An interesting alternative would be to keep the same number of talks while running everything in a single track. As the talks would be so short, having a couple you don't care about that much wouldn't hurt. As a bonus you would see talks you would otherwise skip.

I preferred the after-party location of the previous year. That might have something to do with the fact that nerds tend to like cellars for some strange reason.

Monday, October 12, 2015

How Would I Make GitHub Better?

GitHub is an amazing service. It revolutionized social coding. As it has matured, some warts have become apparent. It still gets the job done, but there are parts where it could do better. The fact that it's so popular will keep it popular for a long time to come unless something radical happens. That's the way it goes in the web business.

There are alternatives, such as Bitbucket, GitLab, and Kiln. It's hard to see any of these usurping GitHub anytime soon, though. Of course, thanks to the nature of Git, moving between the systems isn't that hard. You can, for example, benefit from the free private repositories of Bitbucket. GitHub's pricing starts to hurt pretty fast, especially if you have a lot of private work. GitLab is nice if you want something self-hosted. And Kiln has Spolsky behind it, so that goes some way.

I have several gripes with GitHub. Some are minor and likely quick for GitHub to resolve, should they want to. There are also some structural issues I'm not particularly happy about. I'll illustrate my problems next and point out existing workarounds where they exist.

Starring is Too Limited

I like to mark interesting repositories with a star (1.7k stars given so far). Obviously, once you have starred enough projects, the feature becomes counter-productive. Ideally you should be able to filter based on project metadata, but GitHub only allows you to attach a project site and a description to a repository. No tags are supported. Of course you could argue it's better to deal with this within the repository itself (i.e., package.json). That feels too low-level, though, and cannot be queried trivially.

There are solutions, such as Astral, that address this problem. I do wish GitHub allowed people to attach more meta information to their projects, though. This brings me to my next gripe.
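In the meantime you can approximate some of this yourself on top of the public GitHub API. Here's a minimal sketch that filters starred repositories by the language GitHub already tracks; the endpoint is real, but the username is a made-up placeholder and a real tool would paginate:

```js
// Filter a user's starred repositories by language using the public
// GitHub API (GET /users/:user/starred). Only the first page is
// fetched; a real tool would follow the pagination links.
const user = 'someuser'; // hypothetical username

fetch(`https://api.github.com/users/${user}/starred?per_page=100`)
  .then(res => res.json())
  .then(repos => {
    repos
      .filter(repo => repo.language === 'JavaScript')
      .forEach(repo => console.log(`${repo.full_name} - ${repo.description}`));
  })
  .catch(err => console.error(err));
```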

Limited Control Over Your Portfolio

Once you hit a certain number of projects under your personal account, it starts to work against you. Just look at my listing of repositories. How can you tell which repositories have good stuff in them? If more metadata were available, it would at least be possible to filter better. I know GitHub tracks repository language automatically; having a language filter would be a start.

A good next step would be to allow more control over the way you display your repositories to the public. Perhaps you might want to hide some of them altogether. At the very least it would be nice to mark a select few so that they stand out more.

Given your GitHub account often works as your code portfolio, having more control over it seems like a no-brainer to me. Obviously you could develop something custom on top of the GitHub API, as sketched below. Perhaps there's something like Astral but for portfolios.
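As a rough illustration of what such a custom portfolio script could look like, this sketch lists a user's public repositories and keeps only the most starred ones; the endpoint is GitHub's, while the username and star cutoff are arbitrary assumptions:

```js
// Build a simple portfolio listing from a user's public repositories
// (GET /users/:user/repos), keeping only the most popular ones.
const user = 'someuser'; // hypothetical username
const cutoff = 50;       // arbitrary star threshold

fetch(`https://api.github.com/users/${user}/repos?per_page=100`)
  .then(res => res.json())
  .then(repos => {
    repos
      .filter(repo => repo.stargazers_count >= cutoff)
      .sort((a, b) => b.stargazers_count - a.stargazers_count)
      .forEach(repo => console.log(repo.stargazers_count, repo.full_name));
  });
```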

Another way to deal with the problem is to push bigger projects under organizations and manage them there. This won't always work, but when it does, it's a nice way to split up your work and collaborate with other people.

No Pull Requests for Wikis

It is nice that GitHub provides free wikis for projects. In fact, I built jswiki on top of this idea. To allow people to contribute, I made it open for modifications. Of course this means some joker might mess up the wiki content, given there are no controls against this. Ideally you should be able to accept pull requests against wikis.

It is possible to work around this issue by maintaining your wiki content in your project repository, keeping the wiki itself closed, and setting up a script that copies the content over. It's not ideal, but it works.
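One possible shape for such a sync script, relying on the fact that GitHub wikis are plain Git repositories behind the conventional <repo>.wiki.git address; the repository name and the docs/ directory are made-up examples:

```js
// Copy wiki pages from the main repository to the wiki repository.
// GitHub wikis are ordinary Git repositories at <repo>.wiki.git.
const { execSync } = require('child_process');

const run = cmd => execSync(cmd, { stdio: 'inherit' });

run('git clone https://github.com/someuser/project.wiki.git /tmp/wiki');
run('cp docs/*.md /tmp/wiki/');
// Note: git commit exits non-zero (and this throws) if nothing changed.
run('cd /tmp/wiki && git add . && git commit -m "Sync wiki" && git push');
```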

No Revision History for Issues

Even though GitHub's issue tracker works admirably well for many purposes, it has a couple of major flaws. Most importantly, it doesn't keep track of changes made to the issues. Sometimes it is important to see what an issue description looked like in the past. You can of course dig this information out of your email, provided you have subscribed to the issue, but that's not ideal.

The ideal solution for me would be to have the issues available as a Git repository. This is the way the wiki works after all. The repository would track issues and associated comments in some simple format (JSON/YAML?). Besides providing revision history, this would make it possible to maintain the issues through a CLI without having to jump through hoops.
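To illustrate the idea, an issue tracked this way might be stored as a file looking something like this (an entirely made-up format, just to show the shape):

```json
{
  "id": 42,
  "title": "Starring is too limited",
  "state": "open",
  "labels": ["enhancement"],
  "body": "Allow attaching tags to repositories...",
  "comments": [
    { "author": "someuser", "body": "+1, I'd use this daily." }
  ]
}
```

With something like this, revision history would come for free through Git, and editing an issue from the CLI would be a matter of editing a file and committing.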

Issues Can Be Removed from a Project Without a Warning

Project issues sit behind a checkbox in the project settings. Simply unchecking it denies access to your project's issues. I don't agree with this design decision, as it can be quite perilous for open source projects. At worst you lose hundreds of hours of effort just like that. Only the parent project retains references to the issues, and once they are gone, they are gone.

There's no clear solution for this. I suppose someone could whip up a service that backs up issues over GitHub's API. Of course, I would rather see this resolved on GitHub's side somehow.
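Such a backup could be surprisingly little code. A minimal sketch against GitHub's real issues endpoint, with pagination, comments, and rate limiting left out for brevity (the repository name is a placeholder):

```js
// Back up all issues of a repository into a JSON file using the public
// GitHub API (GET /repos/:owner/:repo/issues).
const fs = require('fs');
const fetch = require('node-fetch'); // or any fetch implementation

const repo = 'someuser/project'; // hypothetical repository

fetch(`https://api.github.com/repos/${repo}/issues?state=all&per_page=100`)
  .then(res => res.json())
  .then(issues => {
    fs.writeFileSync('issues-backup.json', JSON.stringify(issues, null, 2));
  })
  .catch(err => console.error(err));
```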

Issues Can Devolve Into +1's

Often people like to drop by the issues of popular projects and leave a little "me too" kind of reply. As a result you end up with quite long threads, sometimes with a low signal-to-noise ratio, and you annoy the project maintainers. Ideally GitHub would allow people to +1 an issue simply by hitting a button somewhere. That would let the maintainers see more clearly where to put their efforts.

The GitHub +1s Chrome plugin partially solves this issue. It collapses the +1 comments and shows them at the top of an issue. The issue subscribers will still receive mail when someone +1's an issue, though.

No Clear Way to Find the Living Fork

As projects develop, sometimes the originating repository simply stops moving and a fork picks up the torch. Currently there's no easy way to find it. Ideally GitHub would show the most notable forks on the root project page. I know the network graph exists, but it feels a little hidden.


I still like GitHub as a service and it's hard to imagine developing without it. I believe it could become significantly better with a little TLC here and there. Not all of these gripes are easy to fix. I do hope GitHub will continue improving as that would be in everyone's interest.

Friday, July 31, 2015

SurviveJS - Webpack and React v1.5 is Out!

The book project keeps on progressing. It managed to attract an editor, and you could say that sped things up considerably. It is very useful to have another pair of eyes to push you further, and I think it shows in this release. We just reached an important milestone with the v1.5 release. The book is structurally much better and easier to approach. Get started by checking out the introduction.

Monday, July 13, 2015

On The Economics of Ebook Publishing

Authoring SurviveJS - Webpack and React has taught me quite a few things. As a first-time author, mistakes have been inevitable. But in some ways I've gotten really lucky. For instance, I've gained awesome contacts and received numerous external contributions that have helped boost the quality of the book. On the flip side, as the content is freely available, it has been hard to capture value and actually make this financially viable for me. I go into more detail in a little post I wrote titled SurviveJS - The Story So Far. See also Balancing between open and closed publishing.

Publishing is Changing

The world is changing in the sense that it's very easy to publish something now. You can even skip traditional publishers altogether. Publishers such as Leanpub provide a hefty royalty; Leanpub takes 10% + $0.50 per transaction and leaves the rest to you. With a big traditional publisher you may expect a 15% royalty. If you do the math, you can see you would have to sell a lot of books the traditional way to reach the same income.
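To make the gap concrete, here's a rough back-of-the-envelope comparison using my book's $15 minimum price and the rates above. Treat it as a sketch: traditional royalties are often computed on net receipts rather than list price, which would widen the gap further.

```js
// Earnings per copy: Leanpub pays 90% minus $0.50 per sale;
// a traditional publisher might pay a 15% royalty on a $15 book.
const price = 15;

const leanpubPerCopy = 0.9 * price - 0.5; // $13.00
const traditionalPerCopy = 0.15 * price;  // $2.25

// Copies a traditional deal needs to match one Leanpub sale: ~5.8
console.log(leanpubPerCopy / traditionalPerCopy);
```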

The downside of doing it all by yourself is that you'll have to take care of marketing, sales, and editing. Leanpub just takes care of the annoying VAT bit. Given the EU VAT changes made at the beginning of this year, things just became more complicated if you want to sell the book yourself: in effect you have to figure out where the book was bought and apply VAT based on that. It's better to let someone else handle the bureaucracy, at least when you are a small player.

It is important to note that Leanpub allows you to publish through other channels. You could sell the book through Amazon or iBooks, for instance. Leanpub should probably be thought of as an experimentation platform. It allows you to publish a work-in-progress book, gauge the interest, and develop your book based on the feedback. There have been cases where a traditional publisher has picked up the finished book once significant demand has been demonstrated. It is an excellent value proposition for them, after all - they just pour in money.

Closed vs. Open Content

One of the biggest questions when it comes to publishing is how to deal with the content and pricing. You could go the traditional way, keep it all closed, and put it behind a paywall. This model has been proven to work. You have to be strong at marketing and get the right message to the right people, but it is doable.

Another way, which I chose for my book, is to keep the content freely available. I did this through GitHub. The surprising benefit has been the influx of external contributions. You can definitely receive errata in a closed model as well, but an open model feels more conducive to collaboration. At times I have felt more like a shepherd than an author, but I suppose that's a good thing.

As the content is freely available, more people have been exposed to it. It is always heartwarming to see a positive mention of the book. Unfortunately this hasn't translated into sales, but at least I know I have made a difference for some.

To encourage people to actually buy the book, I decided to play on laziness. The digital version available through Leanpub has a minimum price set; I am not giving it away for free. You can definitely compile a digital version of your own, but that's always a hassle. I'm afraid this little hurdle isn't quite enough, though. The sales have been mediocre at best, and if things continue this way, it's simply not financially feasible to keep this up.

Setting the price of the Leanpub version to zero might not make much of a difference. Perhaps more people would get it through Leanpub then, but I'm not seeing the point at the moment. The minimum price of $15 feels fair for a solid book.

As a Finn I cannot ask for donations directly due to legislation, so there has to be some intermediary in between. Leanpub allows me to avoid breaking the law.


Given the content is free to begin with, the big question is: why would you pay for something that's free? There are reasons why Kickstarter, Patreon, and the like work. Going the inverse way doesn't feel like a feasible approach, at least based on my experience so far.

This is something I have to find a good answer for. I could start developing commercial content on top of the free content, for instance, or go completely closed. If you have any insight on the topic, I'm all ears. There are likely models that can work, but right now it's looking a little grim.