Friday, March 27, 2015

Why Pick ESLint over JSLint, JSHint and Co.?

Given it's so easy to screw things up with JavaScript, a little discipline goes a long way. Linting is one of those techniques that simplifies your life a lot at minimal cost. It is possible to integrate the linting process into your editor/IDE, which allows you to fix potential problems before they become actual issues. It won't replace testing but it will make your life simpler and more boring. Boring is good.

Why ESLint then? It allows you to develop custom rules. Better yet, there is a nice set of rules available for React! That alone is a good reason to give ESLint a serious look if you develop using React.
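To give a concrete idea, configuration lives in an .eslintrc file at the project root. The sketch below assumes eslint-plugin-react has been installed separately; the exact rule selection is up to you:

```json
{
  "env": {
    "browser": true,
    "node": true
  },
  "plugins": [
    "react"
  ],
  "rules": {
    "no-unused-vars": 2,
    "react/jsx-uses-vars": 2
  }
}
```

Setting a rule to 2 makes it an error, 1 a warning and 0 disables it entirely.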

I've got a basic setup at react-component-boilerplate. It is a little boilerplate I designed to make it easier to develop React components for public consumption. I have tried to integrate what I consider best practices into it. The boilerplate relies heavily on Webpack and provides goodies such as hot reloading and a starting point for Jest tests. These things alone made it worth developing for me.

Even though it is a young project, ESLint already shows a lot of promise. As people have written a lot about the topic already, I won't do the same. Consider the following starting points if you are interested:

I cannot think of a good reason not to lint your code. It's just one of those things you should set up, as it will help you avoid a massive amount of headache over the longer term.

Thursday, February 26, 2015

Linkdump 24 - Business, Personal Development, Computer Graphics...

Long time no link dump. This is going to be the first one of the year given the previous one was in November. Enjoy?


Personal Development

Computer Graphics

  • yaronn/blessed-contrib - Build terminal dashboards using ascii/ansi art and javascript
  • jq - Manipulate JSON over cli easily
  • pgcli - Proper cli for Postgres

Software Development

Web Development

Mobile Web

Development Tools

  • crapify - Simulate slow, spotty HTTP connections

Wednesday, February 25, 2015

How to Publish and Maintain NPM Packages?

It is almost amazing how popular NPM is these days. At the time of writing it has a whopping 127k packages! It is useful beyond Node.js and people increasingly use it on the frontend side as well. Tools such as browserify and webpack can hook into packages hosted on NPM. You can even build a package manager on top of NPM but that goes sort of meta.

As I have been publishing and maintaining NPM packages for a few years I thought it might be a good idea to document some of my practices. It is a simple system to use as long as you are aware of a couple of tricks.

Initializing an NPM Project

Technical Difficulties by Wonderlane
NPM relies on a configuration file, package.json, placed in the project root. The NPM cli provides a handy utility for generating it, npm init. Simply answer the questions and you should end up with a package.json. Don't worry about getting all the answers right; you can tweak the file later.
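The result is plain JSON. A minimal file might look roughly like this; all the values here are made up for illustration:

```json
{
  "name": "demo-package",
  "version": "0.1.0",
  "description": "A little demo utility",
  "main": "./lib/index.js",
  "scripts": {
    "test": "node test.js"
  },
  "author": "Your Name",
  "license": "MIT"
}
```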

Introduction to Mankees

When I'm starting a new project I like to cheat a bit. Years ago I developed mankees, a small tool that makes it easier to script on top of Node. It comes with a little package manager that has been developed on top of NPM. It just does its best to hide this fact.

One of the first scripts I wrote for the environment was known as init. It is a simple tool that can generate project scaffolding for you. It parses ~/.mankees/config.json and then injects those values into the Handlebars templates of the project you want to create. It is simple enough to create your own templates.

My basic Node template sets up package.json, LICENSE, .travis.yml and .gitignore. It can be tedious to set each up by hand so this saves some effort. All I need to do is hit mankees init node {project_name}. It will create a directory for me with basic details set up. After that I just need to code, set up GitHub and publish to NPM.

Other Scaffolding Tools

I know there are more powerful scaffolding tools such as Yeoman. For a simple Node package they seem like a bit too much. I would rather take something simple and add to it than take something complex and strip it down. Less effort.

Set Up Version Control

After you have set up your basic project you should hook up Git. The basic steps include git init, committing your work as an initial commit and pushing the work to some repository (i.e. set something up at GitHub or Bitbucket).
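As a rough sketch the sequence could look like the following; the remote URL is just a placeholder for whatever repository you set up:

```shell
mkdir demo-package && cd demo-package
git init
# A real project would add files before committing; --allow-empty
# keeps this sketch self-contained
git -c user.name="Your Name" -c user.email="you@example.com" \
    commit --allow-empty -m "Initial commit"
# Then wire up the remote and push:
# git remote add origin git@github.com:you/demo-package.git
# git push -u origin master
```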

Publishing to NPM

Node packages come in all shapes and sizes by Diana Schnuth
package.json actually contains quite a lot of information. Nodejitsu's interactive guide gives you a good idea of what each field does. You should aim to fill in most of them. It is particularly important that you make the main field point at the entry point of your package.

If there's a cli script included, you should set bin. If the cli command name matches your package name, you can enter a string there directly; otherwise you should use an object.
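For instance, with the string form NPM creates a command named after the package. The object form below maps command names to scripts explicitly; the names here are illustrative:

```json
{
  "name": "demo-tool",
  "bin": {
    "demo-tool": "./bin/demo-tool.js",
    "demo-tool-init": "./bin/init.js"
  }
}
```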

As NPM won't allow multiple packages with the same name, you should check whether the name is still free at the registry before settling on one. Sometimes this can be the most difficult part of the project as many common names have already been taken.

At times people like to name their Git repository with a node- prefix. This is less ambiguous than just sticking to the package name. The package will still retain its short, Node-specific name.

Testing Configuration

You can test your main and bin configuration by using npm link. This will make your package available through your Node environment and you should be able to access it anywhere. Just require('yourproject') within a Node console or try to hit the cli command(s) if you set them up.

Before publishing anything it can be a good idea to tag a release and update package.json. As doing these steps manually is utterly boring, there's a little utility for this. Simply hit npm version {version} (example: npm version 0.1.0). This will perform the steps for you and create a git commit with the version.

If you haven't registered at NPM yet, you should set up an account at the NPM site. To make the cli aware of this, use npm adduser. In case you want to share authorship of a package with someone else, use npm owner.

Publishing a Package

The next step is the one you have been waiting for. Hit npm publish and your package should appear in the registry. You can verify this by looking the package up at the registry. Besides this you should remember to hit git push and git push --tags.

Maintaining an NPM Package

There are a couple of simple things to keep in mind when maintaining an NPM package. It will make things a lot easier for you if you respect semver. This will make it simpler and safer to consume the package.

Publishing Something to Test

In case you want to publish something for the public to test, you can do this in two simple steps. First hit npm version {version}-beta. After that publish like this: npm publish --tag beta. npm install will still point at the stable release. To install the beta you would hit npm install {your package name}@beta. You can of course vary the naming and be more specific but this should give you the basic idea.

Types of Dependencies

Dependencies by Linux Screenshots
NPM packages come with three kinds of dependencies: direct dependencies, devDependencies and peerDependencies. Direct ones get installed with your package.

Development dependencies are something you are meant to use only when developing the package itself (i.e. testing utilities and such). NPM will install both by default. You can avoid fetching development dependencies by hitting npm install --production.

Peer dependencies are the most lenient of these. Suppose you have a plugin but you would rather not depend on the environment directly. In the worst case you could end up with a project that has multiple different versions of the host environment due to dependency declarations. That's definitely not good.

peerDependencies solve this problem. By setting up a peer dependency you defer the problem to a higher level. If you are developing for instance a React component, this would be the right way to go.

Dependencies come with some further complexity, namely dependency version declarations. NPM defaults to caret (^). In addition it is possible to use tilde (~). Given this topic can get rather complicated fast, you should study node-semver with care. That's where it all stems from.

Sometimes it may be handy to point at some dependency directly (say it's under development, not at NPM etc.). In the case of GitHub, you can simply state a dependency version like this: {github user}/{project}#{reference}. The reference is optional and may be a commit hash, tag or branch.
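To illustrate, a dependencies section mixing these declaration styles might look like this; the package names and versions are made up:

```json
{
  "dependencies": {
    "some-lib": "^1.2.0",
    "another-lib": "~0.4.1",
    "work-in-progress": "someuser/work-in-progress#master"
  }
}
```

Here ^1.2.0 accepts any 1.x release starting from 1.2.0, ~0.4.1 accepts 0.4.x releases starting from 0.4.1 and the last entry points straight at the master branch of a GitHub repository.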

If you want to exclude certain files from your distribution version, set up .npmignore. You may also find it useful to utilize NPM hooks. Those allow you to do things at various steps (i.e. before publishing, after installing and so on).

Dealing with Scripts

Note that if you install some testing tool through devDependencies, you can point at it directly within the scripts section. I.e. in case of webpack you would do something like this:
  1. npm i webpack --save-dev (--save works too. You'll have to add peer dependencies by hand!)
  2. "scripts": {"build": "webpack"} at package.json
  3. npm run build - This works because NPM will temporarily add the webpack binary to the PATH when you hit npm run.

Updating Project Dependencies

As part of maintaining a package is worrying about its dependencies, I have set up a mankees script for that purpose. mankees update_deps bumps up project dependencies for me. It is a bit of a nuclear option but it has been a great timesaver for me. npm-check-updates seems like another good alternative.

Services Helping with Maintenance

There is a lot to worry about when developing packages. It only gets worse when you get to the frontend side since then you may have to support multiple different environments. I won't dig into that, however, as Alexey Migutsky has done so in detail. Instead I'll link you to various services to check out and apply as you feel necessary:
  • SauceLabs - SauceLabs runs your frontend tests (Selenium etc.) against multiple browsers. Free for open source. In addition they have a badge available you can include at project README.
  • Travis - Travis is able to run your tests against multiple Node environments. This can reveal issues with specific versions. Again, there is a badge available and Travis will be able to check GitHub PRs automatically.
  • David, VersionEye, Gemnasium - These services are able to check your package dependencies and give warnings accordingly. There are badges available. I have been using Gemnasium myself. It gives me a weekly digest. In addition it warns about possible security issues.


I hope this post gave you some idea of how to deal with NPM packages. In the end it's not complicated once you learn the basic commands. The hard part is figuring all of this out.

It would be interesting to hear what sort of workflow and tooling you use to make it easier to develop and maintain Node packages.

Monday, February 23, 2015

Canva - a Light Design App for Non-Designers

A little poster I did for my frontend loving friends
Design is one of those things that looks deceptively simple but that can be hard to get right. It can be daunting to get started. As I happen to be interested in the topic, I tried out a little iPad app known as Canva. They provide the same app as a web service and the app and the service are kept in sync.

I know there are tons of design apps out there. What makes Canva interesting from my point of view are the free tutorials they provide. You simply walk through these tutorials by using the app. They highlight some basic concepts of design and give a certain confidence to a beginning designer. Even better, they have whole workshops available.

The app provides the features of a casual vector editor, makes it easy to manipulate photos on the go and contains an easy-to-search photo gallery. The gallery seems to be where they make their money: non-watermarked photos seem to cost a dollar a piece. Fortunately you may upload your own imagery to the service so I don't find that restricting.

I am sure you will start hitting the limitations of the app at some point, especially if you are a professional designer. For an enthusiast it is a good option as it combines many of the common tasks into a single app and allows you to get something decent looking done fast. Even if you don't end up using the app, just going through those tutorials is probably worth the time investment.

Tuesday, February 10, 2015

Introducing Reactabular, react-pagify and react-ghfork

I've been doing React component development for a while now. This has been a good chance to update my skillset. In the process I've become an even greater fan of Webpack.

One sign of this is that I've collaborated a bit with Christian Alfoni on a little cookbook about React and Webpack. You should check it out if either topic sounds interesting to you. Feedback on the content would be welcome.

Reactabular - Spectabular tables for React

I spent most of the time working on Reactabular. I know people have written table libraries for React already. I also came by a monster known as jQuery DataTables. You could describe my library as an antithesis of it. I aimed for a simple API that allows extension without having to hack the library itself. This is something I felt was lacking in many available solutions. I took some inspiration from them of course.

You can build a full-featured table with pagination, search, filtering and editing (inline, modal) using Reactabular. It will take a little bit of work but on the plus side, you get flexibility. If you are not happy with the default solutions, you can just swap them with something that suits your needs better.

The nice thing about Reactabular is that given it's so flexible, you can hook it into a Flux architecture with little effort. I don't have an example of this available but it should be just a matter of replacing some setStates with something else.

react-pagify and react-ghfork

As it usually happens when you develop a component, you might end up doing a couple of others as a byproduct. In this case react-pagify and react-ghfork are such components.

react-pagify provides simple semantics for building a pagination control. I designed it to work with cases where you have a large amount of pages. Instead of generating a link for each I made it possible to define how many links to show at the beginning, the end and around the current page. There is some additional logic to deal with possible overlaps. Overall it's not the most complex project out there but then a pagifier doesn't have to be.

react-ghfork allows you to build those "Fork me at GitHub" banners without too much hassle. I built it on top of Simon Whitaker's definitions and simply provided a sane API for it.

Lessons Learned

Even if the projects don't sound earth-shattering, doing them was a good learning experience. I picked up a good gh-pages flow (see the project package.json) and learned to set up more complicated configurations using Webpack. It seems like a good pick for packaging React modules as it gives UMD output without too much hassle.

Other Releases - grunt-umd, libumd and pypandoc

Speaking of UMD, I pushed out new versions of grunt-umd and libumd. Especially the latter is handy if you need to deal with UMD transformation on the tooling level. Both projects have reached a stable phase.

The same applies to pypandoc, a pandoc wrapper I maintain for Python. If you need to deal with document conversions, check it out.

Tuesday, January 6, 2015

Thoughts on the Future of Web Development

Since my previous post about React.js during the Summer, I've had time to level up my development skills and reflect. You could say frontend development tools move forward quite fast. The same goes for backend of course.

On Build Tools

During the past few years I've moved from Grunt to Gulp and Browserify and then onto Webpack. The smart guys have stuck with Make. You can even write simple configuration directly into the package.json scripts field but I tend to do that only in panic. My sweet spot is a combination of Gulp and Webpack.

Grunt - It's Magic

The problem with Grunt is that it's filled with magic. That's never good. You don't want to have to maintain a three hundred line Gruntfile. It's doable but not particularly fun.

Gulp is a step towards something better. After all it's just about piping. You can pick it up in ten minutes. Gulp isn't without its issues but it's a significant step ahead. Given it's just JavaScript, you can always hack it if the going gets too tough.

Browserify - a Step Ahead

Compared to RequireJS, which I blogged about years ago, Browserify is a step ahead. The primary benefit over RequireJS is the fact that you can continue writing code in CommonJS module format. You can also directly hook into NPM infrastructure which is a massive bonus.

Webpack - The Holy Grail?

Webpack can be considered the next logical step. What if instead of bundling just JavaScript you had a tool that could bundle pretty much anything including CSS, LESS, SASS, CoffeeScript, Jade, whatnot? Well, this is exactly what Webpack is meant for. Instead of having to build configuration in your Gruntfile or Gulpfile you can just let Webpack deal with it.

You still get the goodies Browserify gives you (NPM, bundling) and then some!

It can even create bundles per page for you. No longer are you forced to download everything on the first load. Instead it will split up the source appropriately and provide partial loads. This can improve site performance massively, especially if you have a lot of dependencies split over multiple views. Pete Hunt's guide to Webpack covers the basic approach. I'm aware you might be able to achieve something similar using other tooling but so far this seems like a very novel approach to me.

Webpack works very well with Dan Abramov's react-hot-loader and takes your React development to the next level. Developing using good tooling makes you work so much faster it's almost unbelievable.

As I'm not an absolute guru with Webpack yet, I prefer to use some other tooling, such as Gulp, to copy distribution files around. No doubt there are ways to manage with just Webpack, but the right tool for the right task and all that.

The Future

It is difficult to say where build tools might be moving. Perhaps yet another tool comes out and kicks Gulp in the shins in turn? Too early to say. Webpack seems to solve the biggest issues for me and will no doubt become more popular as people discover it.

Hot loading might become more than just a development goodie. At some point people will start doing hot loading in production and people will receive updates to JavaScript as they use the app without having to reload. No doubt that opens new cans of worms but in theory that sounds very fun!

On Libraries and Frameworks

On library/framework side the route has been from jQuery to Angular and finally React and friends. You could say jQuery is the PHP of JavaScript. It gets the job done and it's everywhere. Unfortunately it doesn't scale that well for larger scale development. If you need to spruce things up a bit on a static site it's a good pick but I wouldn't develop a JavaScript driven site using it if I can avoid that.

Angular over jQuery

Angular can be considered the next logical step over jQuery and you can even use them together. Sometimes you might want to avoid jQuery altogether. In fact it's often quite trivial to implement something in Angular that would require yet another plugin in jQuery. I would say Angular is a very good fit for small projects and prototypes. I have my doubts about scaling.

Given Angular is a framework it provides tons of functionality. The problems begin once you hit the boundaries. What if instead of loading each and every dependency the way Angular expects you want to start loading them dynamically per page? Let's just say you have just found a world of pain.

This is a recurring theme during explorations to the Angular world. It works just fine until you hit some sharp edge and hurt yourself. Particularly directives hide a lot of complexity and it's easy to get them wrong. Performance-wise watchers can bite you and you will need to be very careful with them. It may be better to avoid them altogether and consider some alternative approaches.

In fact people are experimenting with ways to simplify Angular development by borrowing ideas from the world of React. Two-way binding isn't always your friend. Some might even go as far as to say it's an anti-pattern and I agree. If you can get something done with one-way binding, prefer that to two-way. It's just less headache for everyone.

React.js over Angular

The lack of two-way binding (without helpers at least) is one of the strong points of React. Given the flow goes in one direction, it is easy to reason about. As vanilla React deals with just the view portion of an application, you will eventually run into questions such as how to deal with models, how to share data between components and so on.

To make React shine, you will need to complement it with a couple of libraries. I've stuck with react-router, axios (an http client) and Reflux. In a future stack I might replace axios with a Swagger client that generates the frontend API for me automatically. I'll get back to Swagger in a bit.

Flux Architecture and Reflux

The Flux architecture starts where vanilla React stops. In short, its implementations and derivatives allow you to scale up from mere components. I have found Reflux a particularly light and smart implementation. It answers the problems I highlighted very effectively. The idea is quite simple. Your components may trigger Actions. Actions in turn modify Stores in some way. The state of the stores gets propagated to your components and the cycle is complete.

Let's say we're modeling selection. I would define a SelectionAction and a SelectionStore. SelectionAction would contain actions select/deselect (accepts item to select/deselect). SelectionStore would maintain the state and on change let components know it changed. In vanilla React the state would be within components themselves. Here we have effectively extracted it out.
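The cycle is easy to sketch in plain JavaScript. Note that this is not the actual Reflux API, just the idea behind it: actions notify stores, stores notify their listeners (which would call setState in a real component):

```javascript
// Minimal action: a callable that forwards its arguments to listeners
function createAction() {
  var listeners = [];
  var action = function () {
    var args = arguments;
    listeners.forEach(function (cb) { cb.apply(null, args); });
  };
  action.listen = function (cb) { listeners.push(cb); };
  return action;
}

var SelectionActions = {
  select: createAction(),
  deselect: createAction()
};

// The store owns the state and lets its own listeners know on change
var SelectionStore = {
  selected: [],
  listeners: [],
  listen: function (cb) { this.listeners.push(cb); },
  emit: function () {
    var selected = this.selected;
    this.listeners.forEach(function (cb) { cb(selected); });
  }
};

SelectionActions.select.listen(function (item) {
  if (SelectionStore.selected.indexOf(item) === -1) {
    SelectionStore.selected.push(item);
    SelectionStore.emit();
  }
});

SelectionActions.deselect.listen(function (item) {
  var i = SelectionStore.selected.indexOf(item);
  if (i >= 0) {
    SelectionStore.selected.splice(i, 1);
    SelectionStore.emit();
  }
});

// A component would subscribe here and call setState instead
SelectionStore.listen(function (selected) {
  console.log('selection is now', selected);
});

SelectionActions.select('a');
SelectionActions.select('b');
SelectionActions.deselect('a'); // selection is now ['b']
```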

By default we are dealing with Singletons, so to avoid sharing state you would have to create separate instances, but this is more of a special case. Another thing to keep in mind is that if you want to deal with asynchronous operations (say a backend query), you should implement them in a particular way.

Reflux provides a preEmit hook for this purpose. In case we implement a basic operation like fetch to initialize our Store, we would define three actions: fetch, fetchComplete, fetchError. fetchComplete and fetchError would then get triggered within the preEmit hook of fetch depending on the result. This in turn would cause our Store to either populate itself or deal with the error somehow.
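Sticking to plain JavaScript rather than the real Reflux preEmit API, the shape of the idea is roughly the following. The names (fakeBackend, DataStore) are illustrative and fakeBackend stands in for an actual asynchronous query:

```javascript
// Action that runs a preEmit hook before notifying its listeners
function createAction(preEmit) {
  var listeners = [];
  var action = function () {
    var args = arguments;
    if (preEmit) { preEmit.apply(null, args); }
    listeners.forEach(function (cb) { cb.apply(null, args); });
  };
  action.listen = function (cb) { listeners.push(cb); };
  return action;
}

var FetchActions = {};
FetchActions.fetchComplete = createAction();
FetchActions.fetchError = createAction();
// fetch kicks off the (fake) backend query within its preEmit hook
// and triggers fetchComplete or fetchError based on the result
FetchActions.fetch = createAction(function (id) {
  fakeBackend(id, function (err, data) {
    if (err) { FetchActions.fetchError(err); }
    else { FetchActions.fetchComplete(data); }
  });
});

// The store populates itself or records the error
var DataStore = { data: null, error: null };
FetchActions.fetchComplete.listen(function (data) { DataStore.data = data; });
FetchActions.fetchError.listen(function (err) { DataStore.error = err; });

function fakeBackend(id, cb) {
  if (id === 'missing') { cb(new Error('not found')); }
  else { cb(null, { id: id }); }
}

FetchActions.fetch('123');
console.log(DataStore.data); // { id: '123' }
```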

Error in turn could be passed to some other place, say ErrorStore. You could then listen to that and show the errors to the user, log them and so on. The approach has definite power in it.

Of course you would have to play around with React and Reflux to appreciate the approach. Initially it might feel that you are writing a lot of code for nothing but that's not the point. The goal here is not to minimize the amount of code written but rather to make it easy to follow and reason about. This is something that can get obscured in the Angular and jQuery world unless you are careful.

The Future

It feels like React and Reflux are steps towards a better future. So far I haven't had to worry about performance when dealing with React. There have been gotchas of course and the way you need to think is quite different from what you might be used to. The approach forces you to keep your entities small and pushes you towards components. The cognitive load for creating new components is lower than in Angular as there are fewer concepts to worry about.

One of the main benefits of React is that it allows you to develop isomorphic JavaScript. Initial attempts have been made to allow Reflux to support isomorphism as well. Instead of throwing a bit of HTML and JS at the client and expecting the client to construct the UI using JavaScript, in the isomorphic approach we let the backend render the HTML and perform the initial queries needed. The frontend will then continue from this.

It's back to the same old but this time we are better prepared and gain benefits from both worlds. Performance is better and SEO is improved. In a world where latency and poor SEO mean lost sales and poorer visibility, what is there not to like?

If you want to take a peek at post-React world, you should study Cycle. There is still room for improvement and perhaps React was just a start. It would not surprise me a lot if it started feeling obsolete within a year or so.

On Backend

As frontend development has become more prominent during the past few years, the purpose of the backend has changed. Now it's more about providing a sane API for the frontend. RESTful patterns, HATEOAS and whatnot have appeared. You still have to deal with some basic concerns here such as authentication, authorization, business logic, validation and databases. In addition it would be awesome if there was decent documentation available.

Swagger - Definition for Your API

Lately I have been benchmarking Swagger. It is a definition format that builds on top of JSON Schema. In short, Swagger can be used to describe your API. Various tools can then be developed on top of this description. For instance you get interactive API documentation and a frontend API client for free.

Depending on the tooling you choose there is of course actual work to do. You will still need to deal with plenty of concerns but using a definition such as Swagger has potential to simplify work. Using a tool like this avoids the pain of having to maintain documentation that is separate from your API. You could of course generate one based on an existing API but that's still extra work that can be avoided.

Furthermore the approach has the potential to simplify validation a lot. Given each data model is described in JSON Schema, you can validate against the same schema in both the frontend and the backend. If the schema changes, the code doesn't necessarily have to change. In an ideal world migrations could be generated based on schema changes (JSON Diff?) and propagated to the database automatically. In simple cases even databases could be generated without having to maintain duplicate definitions.
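As a toy illustration of sharing a schema, here is a hand-rolled check rather than a real JSON Schema validator (such as tv4); both sides could import the same schema object and the same validate helper:

```javascript
// One schema object, shared by frontend and backend code
var userSchema = {
  required: ['name', 'email'],
  properties: {
    name: { type: 'string' },
    email: { type: 'string' }
  }
};

// Minimal validator covering just required fields and primitive
// types -- a real JSON Schema validator handles far more
function validate(schema, data) {
  var errors = [];
  schema.required.forEach(function (field) {
    if (!(field in data)) { errors.push(field + ' is required'); }
  });
  Object.keys(schema.properties).forEach(function (field) {
    if (field in data && typeof data[field] !== schema.properties[field].type) {
      errors.push(field + ' should be a ' + schema.properties[field].type);
    }
  });
  return errors;
}

console.log(validate(userSchema, { name: 'John' }));         // ['email is required']
console.log(validate(userSchema, { name: 42, email: 'x' })); // ['name should be a string']
```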

The Future

It would not surprise me a lot if the usage of definitions like Swagger became more common. Especially when you are working alone or in a small team, you will want to avoid waste. Tools like this have great potential to do that and allow you to be more agile and responsive towards changes.

From the frontend point of view, having an API definition simplifies things as it means you don't need to maintain a separate API client. You just generate one based on the definition. Furthermore the definition gives you something to fuzz with. This in turn can be used to improve API quality and security.


Web development moves forward fast. It doesn't take long for new technologies to appear and old ones to stagnate. In a couple of short years we've gone through a couple of build tools and there is no end in sight. I do wonder what on earth could replace Webpack and how?

Just when it looked like Angular had "won", backwards incompatibility of Angular 2.0 was announced. I have a feeling that might have stolen their thunder especially given the release date is still about a year away. In the meanwhile library based approaches will have time to evolve. I would bet on React and friends.

On the backend side approaches like Swagger seem very promising. They take away some complexity while providing a lot, if you have the patience and time to write out a formal definition for your API. You will have to do that eventually so why not start with it? This doesn't answer the problem of API evolution but it's a starting point and much better than nothing!