The conference venue - RAI
jsDelivr is a free CDN (Content Delivery Network) that allows you to consume popular open source JavaScript libraries, fonts, and CSS frameworks easily. It leverages the power of multiple commercial CDN providers. This yields excellent performance as you can see through CDNperf, a service we developed to keep track of the performance of jsDelivr and various alternatives.
As I hadn't run a stand before, it was quite an experience. I stayed at OSCON for two full days while skipping the tutorial day. Besides running a stand, I managed to attend some of the sessions. In this post I'll highlight my conference experiences. Read on.
Booth Person - Level 0
My awesome booth
Given the scope of the conference, there were a lot of people with different backgrounds. You could actually see this in the conference program. There were five(!) tracks and the content was split between nine topics. To say there was a lot of content would be an understatement. Not that I'm complaining.
It's About Education
From a booth person's point of view this meant I couldn't expect a lot of background knowledge. In part my work was about educating people about how CDNs work and why they are valuable for front-end developers. Only after you understand that can you start to see the value that jsDelivr provides. To make it easier for myself, I prepared a couple of slides to show (only Chrome is guaranteed to render them well). As I didn't want to waste people's time, I tried to keep things simple.
What Would I Do Differently?
In retrospect I should have brought a stand banner with me. As it was, the stand was rather barebones. It would have been even more so without a laptop and a minimal stand to make it pop a little. We decided to bring some t-shirts, brochures, and stickers to share.
I feel the t-shirts offered to our sponsors were very nice. The brochures were quite nice as well, though instead of 235g paper I would now go with 300g as it feels more substantial. I have mixed feelings about stickers. My biggest gripe with conference stickers is that they aren't useful. If I run a booth again, I'll try to be more original and perhaps bring some items tied to my culture. Originality doesn't hurt.
Conclusion
The first day of the conference was busier at the booth as more people were browsing around. The second one was obviously quieter given people had already seen the booths. I must have had tens of conversations at the booth. Knowing the second day would be quieter, I took a different strategy: instead of waiting for people to come to the booth, I went to other booths to prompt discussions.
Overall it was a nice experience to run a booth. It definitely leaves you less time to socialize and attend sessions. On the other hand you get to engage with people you wouldn't otherwise meet. I hope this helped more people find jsDelivr and see the value it provides.
Thoughts on Day 1
Morning tea
I wish more conferences would adopt the format given how effective it is. You could even run a single track. Given the talks are so appropriately timed, it wouldn't hurt even if not every talk was a "home run" from your point of view. You always learn something.
I recommend watching the keynotes on YouTube. I can guarantee there's something for you to learn there. The keynote speakers were simply top notch and I have nothing to complain about.
Keynotes of Day 1
I've tried to gather some of the main points of the keynotes of the first day below:
Rebecca Parsons - The Evolution of Evolutionary Architectures
- Be ready for the future, don't predict it. - I think this is particularly true given the technology landscape changes so fast. Architecting to allow change is valuable.
- Continuous delivery allows us to focus on new features instead of avoiding screw-ups. - I feel lowering the cost of changes is an important goal.
Douglas Crockford - The Seif Project
You cannot unsee Crockford
- Originally the web was meant as a document retrieval system. Now we are using it to deliver applications. As a result workarounds are needed, and those workarounds take time away from delivering value. - I share his view that there's a clear impedance mismatch.
- Standards only increase complexity. - Amen.
- Web is insecure by design (XSS, XSRF, clickjacking, passwords) - Quoted for truth.
- Security shouldn't be an afterthought. Something else is needed: Seif - If I understood correctly, the point Douglas made is that we should rethink what we're doing from the ground up. They're starting at the Node.js level (better crypto) and working their way up from there. This means using better ways to generate entropy for RNGs (Random Number Generators), and eventually even new ways to implement user interfaces. I'm a little puzzled why he mentioned Qt in this context but perhaps we'll know better in the future.
Nils Magnus - Docker Security
Trust but verify
- Containers are exciting. Despite excitement people are still skeptical about them in production. - I think this was a fair point. It's just a part of the normal hype cycle.
- It is easy to overlook container security. Trust but verify. - This means we should keep our infrastructure as simple as possible so it stays understandable. Security-wise, having fewer possible attack vectors is always better. Just trusting something without understanding it can have dangerous effects.
Simon Phipps - Enough Foundations Already!
I don't even remember what this was about
- There are a lot of open source foundations already. It's not a good idea to set up one blindly. - I agree with his assessment. A foundation can be useful but it's a big step for a project.
- Licenses are constitutions for communities. - I couldn't agree more. Essentially open source licensing allows competing entities to collaborate while serving their own interests. Of course this leads to open source politics.
- Companies will try to game the system the best they can. That's just the nature of the beast. - It's not the fault of the companies that they operate for profit. Managing the games they play is a challenge.
Sam Aaron - Programming as Performance: Live Coding with Sonic Pi
Sam in action. You should see him once
- Sonic Pi == Turtle/Logo on Raspberry Pi but for audio - It is an amazingly affordable little machine that allows you to achieve a lot.
- Perfect for education as it's easy to get started with. - It takes just a single line of code to make a sound and go from there. Better yet, Sonic Pi provides a platform for live coding.
I discussed the project with Sam in detail. The great thing is that he sees an intersection between teaching programming and professional audio. Sonic Pi is simple enough for students to pick up while allowing very advanced musicianship. Making the platform easy to access seems to serve both parties. It also comes with impressive documentation. It's not an afterthought as in many open source projects you see out there.
Chaos Patterns - Architecting for Future in Distributed Systems
Learning about chaos engineering
Even though these practices provide the most obvious value to large companies, they are useful for smaller ones as well. I believe chaos engineering is one of those ways you can stand out from the competition while producing more resilient software.
The simple fact is that the world is a chaotic place. Essentially programming is about managing this chaos somehow. Even so, we're hit by chaos every once in a while. When AWS goes down, half the internet stops working. Heartbleed hit the internet on an even larger scale. Accepting chaos is one way towards more stable software. This discipline is known as chaos engineering.
The great insight of chaos engineering is that instead of waiting for chaos to hit us, we can face it on our own terms. This means exposing our systems to chaotic situations on purpose. Instead of waiting for a service to go down, we might take a part of it down ourselves and see if things still work. Doing this voluntarily allows us to build resilience into our systems, so when the actual chaos hits, we'll be prepared.
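The failure-injection idea can be sketched in a few lines. Below is a minimal, hypothetical Python sketch (the names `chaotic`, `fetch_profile`, and `get_profile` are mine, not from the talk) where a call is made to fail on purpose with a given probability so the fallback path gets exercised:

```python
import random

def chaotic(failure_rate):
    """Decorator that makes a call fail on purpose with the given probability."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("injected failure")
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaotic(failure_rate=0.5)
def fetch_profile(user_id):
    # Stand-in for a real network call.
    return {"id": user_id, "name": "Alice"}

def get_profile(user_id):
    """Caller that survives injected chaos by falling back to cached data."""
    try:
        return fetch_profile(user_id)
    except ConnectionError:
        return {"id": user_id, "name": "(cached)"}
```

Running `get_profile` in a loop shows that the caller keeps working whether or not the injected failure fires, which is exactly the property you want to verify before real chaos arrives.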
By acknowledging chaos, we can improve the user experience. Consider something like Twitter. It prioritizes availability over accuracy: not everything shown to the user has to be absolutely accurate at all times. Sometimes we're better off serving partial data to keep the user experience good. Chaos may hit us at any time.
As a result we might want to have fallback systems in place. We can, for example, use various levels of caching (localStorage, different levels at the API). Due to the way the CAP theorem works, we'll have to make some compromises. For normal applications eventual consistency is enough.
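Such layered fallbacks boil down to walking a chain of caches and accepting the first hit, even a stale one. A minimal sketch (the layer names and data are made up for illustration):

```python
def read_with_fallback(key, layers):
    """Try each (name, store) cache layer in order; return the first hit."""
    for name, layer in layers:
        value = layer.get(key)
        if value is not None:
            return value, name
    return None, "miss"

# Hypothetical layers: an in-memory cache, an API-side cache, and the origin.
local = {"feed": ["cached post"]}
api_cache = {}
origin = {"feed": ["fresh post"], "profile": {"name": "Alice"}}

value, source = read_with_fallback(
    "profile", [("local", local), ("api", api_cache), ("origin", origin)]
)
```

Here a `"feed"` lookup is satisfied by the local cache with slightly stale data, while `"profile"` falls through to the origin; if the origin is down, the user still sees something.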
Having all your eggs in a single basket is never a good plan. If you use every service through a single provider such as Amazon and that provider's whole infrastructure goes down, your service will go with it. To deal with this, it was suggested that you split your control plane and spread the risk across multiple vendors. Splitting the control plane avoids this unfortunate possibility to a large extent. It's about managing the risk.
Chaos engineering provides us means to be prepared for chaos. It won't protect us completely. It is still far better to inflict a little chaos on ourselves than let it hit us. This allows us to develop more resilient systems and fix potential issues earlier.
Modern Web Accessibility
Accessibility or a11y for true nerds
Just being WCAG compliant isn't enough. We need to be smart about it. Most importantly we need to have empathy towards our disabled users. I've tried to summarize Patrick's main points below:
- Minimize the use of ARIA. It's better to default to semantic code and add ARIA only if it's really needed.
- Aim for the widest reader/browser support possible.
- Test often, don't wait until the end. Include developers/QA in the effort.
- Consider accessibility in the context of the project lifecycle.
- Dynamic user interfaces (Single Page Applications in particular) come with particular challenges. The systems in use have been designed to work with static content!
- We can use ARIA to announce content correctly. That way the screen reader can let the user know, for example, that a specific page has been reached. aria-live is useful here, and there should be only one live region per page to keep things simple.
- patrickfox/a11y_kit provides a good starting point.
I am not particularly strong when it comes to accessibility. Patrick's talk gave me some idea of the issues related to it. This is one of those topics that doesn't come up in the community that often. It definitely deserves some further thought and research at a later time.
ES6 Metaprogramming Unleashed
Getting meta with Javier |
Even better, with proxies we can finally throw proper errors for invalid object access if we want. You should check out the slides for details. These are good techniques to know and Javier explains the ideas well.
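The talk itself used ES6 proxies, but the underlying idea translates to other languages too. Here's a rough Python analogue of a Proxy `get` trap (the class and field names are mine): reads of unknown fields raise a descriptive error instead of silently returning something:

```python
class StrictRecord:
    """Raise a descriptive error when an unknown field is read."""
    def __init__(self, **fields):
        self._fields = fields

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self._fields[name]
        except KeyError:
            raise AttributeError(
                f"StrictRecord has no field {name!r}; "
                f"known fields: {sorted(self._fields)}"
            ) from None

user = StrictRecord(name="Alice", email="alice@example.com")
```

Accessing `user.name` works, while a typo such as `user.nme` fails loudly with the list of valid fields, which is exactly the kind of invalid-access error the ES6 Proxy trick enables.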
Building a Mobile Location Aware System with Beacons
I know Bluetooth beacons have been a relevant topic for a while especially now that Apple has gotten involved with iBeacon. We've had global positioning for a long time. Now we're solving the remainder, local positioning. Beacons allow us to achieve this and the technology is starting to become affordable for mass consumption. Tim Messerschmidt went quite deep into this topic and it was nice to see the state of the art.
The development of Bluetooth Smart in particular has helped a lot. The original implementations of Bluetooth consumed a lot of power; the newer ones are far better. I believe this is one of the main factors driving the adoption of beacons. The basic ideas behind local positioning are simple. We can either triangulate based on beacon locations or trilaterate based on distances to the object being tracked. It's all very simple math.
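To show just how simple, here is a minimal 2D trilateration sketch (the beacon coordinates and distances in the example are invented): given three beacons and the measured distances to each, subtracting the circle equations pairwise leaves a linear system for the position.

```python
def trilaterate(b1, b2, b3):
    """Solve a 2D position from three (x, y, distance) beacon readings."""
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = b1, b2, b3
    # Subtracting the circle equations pairwise yields two linear equations.
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2)
    e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y
```

In practice the measured distances are noisy, so real systems solve this in a least-squares sense over more than three beacons, but the core math is no harder than this.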
Just deploying beacons and starting to push content to consumers isn't enough. It has to be done in a tactful manner and actually deliver value. Otherwise you are just building infrastructure for nothing.
Beacons come with certain limitations that it's good to be aware of. They can receive interference from various sources including microwaves, satellite connections, electrical sources, monitors and LCD displays, and anything in 2.4/5GHz (Wifi) for example. Incidentally materials and even people (mostly water after all) can cause interference and hurt results.
Writing Code that Lasts
Red again. Coincidence?
In my experience you develop an intuition for good code but sometimes you need a little advice to push you to the right direction. Rafael's talk was full of these little tips.
The tips are quite technical in nature, though, and won't help with larger scale issues like how to know what to do and when. Developing the code is only a small part of it all. Learning to develop robust code is a good starting point of course.
Thoughts on Day 2
Dutch design. Reminds me of Finland.
Keynotes of Day 2
Just as for day 1, I've tried to gather some of the interesting points below:
Stuart Frisby - AB testing: Test your own hypotheses, and prepare to be wrong
Especially for a big company like booking.com, it is very important they understand what they are doing and why. This is where AB testing comes in. It allows them to prove themselves wrong. Simply asking the question "this or that" enough times can provide you the confidence you need to choose between the options.
It's not that hard to set up an AB test. First you need to decide what you want to test, figure out how to measure it, and then run the test and analyze the results. This is where statistics come in and you'll need to understand whether the results are statistically significant. I've done some AB testing myself but Stuart's talk made me realize I should test more to make better decisions.
The problem is that our intuition is often wrong. AB testing provides us means to eliminate that bias. Instead we just need to become really good at designing and executing tests. For that to be possible you are going to need enough traffic, though, and you need to be very specific about the goals of your testing. I believe this ties back to business metrics.
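To make "statistically significant" concrete, here's a minimal two-proportion z-test sketch (the conversion numbers in the usage example are invented, and real tools add refinements like continuity corrections and sequential-testing guards):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: did variant B convert differently from variant A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 100 conversions out of 1000 visitors against 130 out of 1000 yields a p-value below 0.05, so at the usual threshold you'd call the difference significant rather than noise.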
Stuart mentioned several anti-patterns. You shouldn't test too much at once. Keep it simple and run one experiment at a time. The test results depend on the context and it is not a good idea to generalize too much. The great hamburger menu debate is a good example of this. Using one makes sense in certain contexts. In other contexts something else works better.
AB testing ties back to the company culture. You need to be prepared to be wrong and let the data guide you. This isn't trivial. I believe this is a good way to grow an online business as instead of relying on hunches, you actually have something more solid.
Ari Gesher - Privacy: The next frontier
To be honest, I didn't get a lot out of Ari Gesher's talk as I didn't have the right background to appreciate it. I understood that privacy is a different problem than security. In the future we are going to need more granular ways to deal with privacy. This potentially means stricter technological controls. I can only imagine how difficult a problem it is in large-scale systems where you want to make sure only the right people have access to the right data.
Mandy Waite - Kubernetes: Changing the way we think and talk about computing
If I understood correctly, Kubernetes is something that allows you to orchestrate containers. It provides the semantics you need to control them in a large scale environment and split tasks between them so that the hardware gets utilized in a smart manner. This is a major problem for companies such as Google and it's likely the reason why they developed the system.
Ninh Bui, Hongli Lai - Bootstrapping a business around open source
Yes, that's Shia
- Beware of doing too much consulting on the side. Multi-tasking comes with a significant overhead. It will take attention from your own product.
- Charge money for your products. Open source doesn't mean you cannot make money. After all that's what enables you to work on it in the first place!
- Market actively. I concur with this point completely. I feel Linus' "Build it and they'll come" doesn't work for everyone, especially these days.
- Focus and commitment are vital. You have to show you are willing to push your offering to new levels.
On the business model side there were a couple of cool points as well:
- Selling support contracts can be problematic as it requires sales persons and a large upfront investment. It is a business that's difficult to scale.
- Ideally you should be able to produce passive income. It is even better if it is in the form of recurring revenue as that keeps the boat from sinking.
- Have an open core and build value on top of that. Consider selling subscription-based licensing (see the previous point about recurring revenue).
- Develop paid products, avoid relying on donations. Be sure not to sell premium without any extra features.
- Charging money for your software enables a future for your business.
They also made a few interesting points on marketing:
- Word of mouth can work well with open source due to its nature and lead to organic growth.
- Marketing is the art of repetition.
- Engineering can be seen as a marketing resource. I.e., engineering something around the current offering can lead to good results and help with the adoption of the main product.
David Arnoux - Growth Hacking: Data and Product Driven Marketing
Getting into growth hacking with David
- A small army can beat a bigger one when using subversive tactics. In fact, the statistics work greatly in favor of the small one in this case (63.6% victory rate). I don't know where the figure comes from but the point is that tactics matter. This reminds me of the Winter War.
- People don't like to remain still. When revamping the Houston airport, just reducing the queuing time wasn't enough. They actually needed to make sure people remained in motion to make complaints go away. Solve the right problem.
- Airbnb bootstrapped itself based on Craigslist data. In this case working out a creative approach led to amazing success. That wouldn't have been legal in Finland.
- As AdWords wasn't working out for Dropbox and was costing them money, they built a referral program into their product. It worked out amazingly well.
- Growth hacking tactics rely on using other people's network (OPN) somehow. The problem is that a channel that might grow for a while will get saturated fast. The challenge is finding undervalued channels in which to grow.
- Growth hacking is perfect when you have limited resources and you need good return on investment. It allows small companies to reach significant market share by being more agile than the incumbents.
David listed several growth hacking principles:
- Build it, they don't come. - I might call this the inverse Linus' law.
- Only testing shows what's successful. - This ties back to the AB testing talk.
- Scale working, kill failing. - Don't get emotionally attached to ideas. Instead, work through a large amount of ideas and find those that work.
- Speed. - Speed is of the essence.
Overall the talk was just great and I recommend watching it if you want to get a better grasp of the topic. It's no surprise there's a growing demand for growth hackers in the industry. You will have to master many separate disciplines to be truly effective at growth hacking. Given it's so hard for one person to have a good understanding of each, companies have begun to form growth hacking teams to push their products forward in a market driven way.
Death to passwords - Tim Messerschmidt
Even though I was aware that passwords are problematic, Tim Messerschmidt's second talk of the conference made me even more aware of that fact. The main issue is that passwords are weak by definition. There are both technical and psychological reasons for this. If we enforced safe passwords, that would mean compromising the user experience to some extent. There are some tweaks we can implement (i.e., show the password on mobile, generate strong passwords, gamify password creation) but in the end they are just crutches.

It is possible to screw up the situation on the technical side as well. Every once in a while we see the results of that in the news. At the very least we should hash passwords with a proper salt using a safe algorithm, such as bcrypt. Just don't use weak algorithms like SHA-1 or MD5.
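The talk's recommendation, bcrypt, lives in a third-party package, so as a standard-library-only illustration of the salted-hashing idea here is a sketch using `hashlib.pbkdf2_hmac` (the function names and parameters are mine; treat this as a shape, not a security review):

```python
import hashlib
import hmac
import secrets

def hash_password(password, iterations=200_000):
    """Derive a salted hash; store salt, iterations, and digest together."""
    salt = secrets.token_bytes(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

The per-password salt defeats precomputed rainbow tables and the high iteration count makes brute force expensive, which is the whole point of using a slow, salted scheme instead of a bare SHA-1 or MD5 hash.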
Tim brought up the topic of two factor authentication. I've been using Google Authenticator for a long time and lately I've been experimenting with a Yubikey. You should pick up Google Authenticator at least and hook it up with services that support it (Google and GitHub come to mind). It's cheaper than suffering a security breach.
Other means of secondary authentication include biometrics and trusted devices. For instance, if it's possible to ascertain you are in the possession of a certain device, that's something that could be useful.
In mobile usage it may be enough to just ask for the user's email on login and send an authentication link there. This works very well and avoids the problem of passwords altogether. Of course your email can get compromised, but then you are in trouble anyway.
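Such a login link boils down to a signed, expiring token the server can verify without storing any state. A hypothetical standard-library sketch (the token format, names, and TTL are mine, not from the talk):

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # in practice, loaded from configuration

def make_login_token(email, now=None, ttl=900):
    """Token the server emails to the user; valid for ttl seconds."""
    expires = int((now or time.time()) + ttl)
    payload = f"{email}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_login_token(token, now=None):
    """Return the email if the token is authentic and unexpired, else None."""
    payload, _, sig = token.rpartition(":")
    email, _, expires = payload.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    if (now or time.time()) > int(expires):
        return None  # expired token
    return email
```

Because the HMAC covers both the email and the expiry time, a tampered or stale link simply fails verification; the only secret to protect is the server-side key.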
There was also discussion about a trust score assigned to a user. That can be understood as something that consists of no id, social id (think social security number), and concrete id within the system. This ties back to the topic of authentication (are you who you say you are?) and authorization (do you have permission to do something?). You might be able to perform certain operations in a system through a weaker level of authentication. Heavier operations could require heavier authentication.
Of course OAuth and OpenID Connect were mentioned as well. Given it's an important and broad topic, standards have emerged to help with the situation. I have a feeling we might still have a way to go until we can ditch passwords. At the very least I recommend using a sane password manager to push the problem out of your head if nothing else.
Chris Chabot - Technology isn't interesting until it's technologically boring
Chris Chabot's talk was the perfect way to end the conference (for me anyway). It's amazing how much technology we take for granted. That is just the nature of technology. Now we have generations that haven't lived a day without the internet. These so-called digital natives have a completely different view of the world than those that lived before. I can only imagine what the world will look like in a decade or two.

What I can say is that the pace of technological progress has been increasing. You can see this in how technological diffusion accelerates. Each revolution happens faster than the previous one and we're able to accept new technology faster and faster. Just consider something like Uber and the way it revolutionized the taxi industry. There are new Ubers on the way.
As technology progresses and diffuses through the layers of society, it becomes something that can be built upon. When most people have mobile phones, you can start selling mobile applications to them. At some point mobile phones will become obsolete and be replaced by some newer technology. Rinse and repeat.
According to Chris, innovation happens at the edges. I think the real question is figuring out where that edge is. Once you know where it is, you can push progress forward. Perhaps the accelerating pace of development tells us that people are getting better at this. As the technology landscape grows, there are more niches to exploit. This is innovation at scale.
Conclusion
Open source camera from the hardware pavilion
OSCON was easily the most amazing conference I've participated in. If I get a chance, I'll gladly participate again. There's an immense amount to learn and the conference gives you a nice cross-section of what's going on at the moment.
It's not the cheapest conference, but I believe you can get good value out of it depending on how close to the cutting edge you prefer to be. It's definitely worth it for companies to send their people there. It gets harder to justify if you have to pay for it all yourself. You could make a worse investment, though.