Wednesday, 21 May 2014

Browsers Should Shame ISPs for not Providing IPv6

tl;dr:
I propose something like the image below, so that browsers put some pressure on ISPs and web hosts to adopt IPv6.




For years we have heard that the Internet is about to break because the world is running out of IPv4 addresses. Sad, but not true. In my career as a programmer I've seen dirty hacks grow into extremely elaborate systems, despite the fact that they were originally set up to solve only moderately difficult problems. NATting is certainly a dirty hack, and now that IPv4 addresses are running out we're going to see lots of multi-level NAT gateways, which are certainly elaborate systems if they need to provide high availability.

The problem is that these elaborate systems are difficult to maintain, and with each additional change you sink deeper into the quicksand. So even though I believe engineers will be able to work around the problems in ever dirtier ways, it would be really good for the Internet if we could throw away the old system and switch to IPv6 en masse.

Who can help us escape from the quicksand that is IPv4? The Government? A plane? Superman? No. Browsers.

The Internet is in a catch-22 situation concerning the IPv6 switch. Consumers don't feel the need to switch as long as they can still use IPv4, and producers don't want to invest in IPv6 while all consumers still support IPv4. We should keep in mind that the general public is completely unaware of this issue. When I tell my fellow programmers that I've set up an IPv6 tunnel at home, they don't say "wow, that's amazing, can you help me set one up too?". No, they say "cool story bro", shrug and walk away. In a way that's logical: the end user gains nothing from the switch; everything worked just fine in the dirty-hack system. The only winners are the Internet engineers. But if this is how programmers who know about the issue respond, then what does the average consumer know about IPv6? Nothing.

Now what does everyone, including the average consumer, use to connect to the Internet? That's right: browsers.

What if your browser displayed a green endorsement like this? (I know, I'm not a designer.)



And if your ISP (or the server) is stupid:

The information block should also contain a link to a site explaining why IPv6 is good for the Internet. I know that if I were a CEO, I would not like customers to see this on my company's website.

Good idea? Bad idea? Discuss on HN.

Tuesday, 20 August 2013

I moved a 4000-line CoffeeScript project to TypeScript and I liked it

tl;dr: jump straight to the TypeScript section
About 8 months ago I started a new complex web app in JavaScript, and it quickly grew out of hand.
It had:
  • a server with routes
  • a singleton object with state, logic and helper functions
  • a bunch of similar plugins that extend functionality
  • the singleton object lives both on the server and on the client
Very soon I decided that JavaScript allowed too many patterns. I wanted modules, classes and easy binding of the this keyword.
Someone recommended CoffeeScript and I went with it.
The codebase expanded to about 4000 LOC in a matter of weeks.


So CoffeeScript, hm, what about it?

These are my experiences after maintaining a non-trivial CoffeeScript application for a couple of months.

Pros:
  • Programming was quicker, stuff you want is already in the language (classes, inheritance, array comprehensions, filters).
  • Less verbose.
  • for k, v of object
  • the fat arrow
  • "string interpolation #{yes.please}"
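A few of those features in one tiny snippet (the names are invented for illustration):

# key/value iteration plus string interpolation
printAll = (obj) ->
  for k, v of obj
    console.log "#{k}: #{v}"

printAll name: "jouke", editor: "vim"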
Cons:
  • the fat arrow is very similar to the thin arrow; in a git diff this sucks
  • syntax. The attempt at avoiding braces is horrible. Function calling is a mess.
  • It smells like ruby. I dislike ruby with a vengeance.
  • no more var keyword? This is disturbing and error-prone, given its significant subtleties in JavaScript.
  • everything is an expression? I like to be explicit about return values, kthnxbye.
The result: a buggy codebase that feels scary, lots of unsafe monkey patching, and CoffeeScript that seems to disagree with the idea of CoffeeScript.


TypeScript

When I started this codebase, TypeScript had just launched. I deemed it a bit too experimental to work with, but last weekend I decided to give it a go. On Sunday I did git checkout -b typescript-conversion, installed the TypeScript Syntastic plugin and started up vim. Fourteen straight hours of refactoring later it was done: 4238 lines of CoffeeScript had turned into 6145 lines of TypeScript.

I compiled all the .coffee files to .js files, removed all the .coffee files from the repo, and renamed all the .js files to .ts. Technically I was already done, as JavaScript is a strict subset of TypeScript, but doing everything TypeScript-style was a bit more work.
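In shell terms, the mechanical part of the conversion looked roughly like this (a sketch, assuming the coffee CLI and a flat source directory):

coffee -c .                                    # compile every .coffee file to a .js file in place
rm *.coffee                                    # the coffee sources remain in git history
for f in *.js; do mv "$f" "${f%.js}.ts"; done  # valid js is valid ts, so renaming is enough
git add -A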

Here are my experiences.

Pros:
  • the fat arrow: removed almost all uses of self = this
  • static type and function signature checking: I immediately fixed about ten hidden bugs thanks to this
  • classes and modules have never been easier
  • linking files using /// <reference path="..." /> tags
  • compiling linked files into one concatenated file out of the box with tsc --out
  • aims at ECMAScript 6 forward compatibility
Cons:
  • slower tooling (vim Syntastic takes about 3-5 seconds after each buffer save)
  • no way of doing stuff like for k, v of object
  • no string interpolation
  • no automatic ECMAScript 3 compatibility layer (monkey patching Array with indexOf etc.)
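To make the biggest pro concrete, here is a minimal sketch in the style the codebase ended up in (all names are invented for illustration):

// An internal module groups related code into one namespace.
module Inventory {
    export class Store {
        private items: string[] = [];

        add(item: string): void {
            this.items.push(item);
        }

        // The fat arrow keeps `this` bound to the Store instance,
        // so the old `var self = this;` dance goes away.
        addLater(item: string): void {
            setTimeout(() => this.add(item), 0);
        }
    }
}

var store = new Inventory.Store();
store.add("widget");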

Conclusion


I really, really, really like TypeScript. My project feels really clean, and I see lots of room for improvement (and this time I know where to start). For larger codebases TypeScript will greatly improve maintainability.

If you work on a large codebase you can either automate testing, enforce developer discipline, or move to static typing and a compile step. I think the last option is greatly preferable.

Wednesday, 10 April 2013

Storing branch names in git (so not Only the Gods will know which branches you merged)

tl;dr: a better git branching workflow under the bold sentence below.

So yesterday I enjoyed some of the Git Koans by Steve Losh (http://stevelosh.com/blog/2013/04/git-koans/). While I believe that the criticism is valid, I think it also misses the point about git.


Forget what you know about svn, mercurial or other version control systems when thinking about git. The fact that most people use it to version source code is irrelevant. Today I thought of a way to use git to store, merge and distribute the nginx configurations across our front-facing webservers. We use git to version /etc, etc. So the big question here is: when will we see a perfect version control system for source code built on top of the git data structures and algorithms?

I've been using git for almost two years, but only since reading the Pro Git book 6 months ago did I really understand any of it. The thing about git is:

a git repo is a data structure and the git commands are algorithms, and that's it.

But anyway, I wanted to offer a solution to the problem posed in "Only the Gods".

It is quite simple: when you create a branch, always start by creating an empty commit in that branch, like so:

git checkout -b BRANCH_NAME
git commit --allow-empty -m "created branch BRANCH_NAME in which we will ...."

That way, after merging, you will always have a reference to the branch names in which the commits were made. Seems like a good way to use the git system for versioning source code :)
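An end-to-end sketch (branch name and messages invented for illustration):

git checkout -b add-login
git commit --allow-empty -m "created branch add-login in which we will build the login page"
# ...hack, commit, hack, commit...
git checkout master
git merge add-login
git log --oneline   # the empty marker commit keeps the branch name in the history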


Discuss this on HN.

Some more explanation if you need it:


In the git data structure, branches are pointers to commits. Much like a variable in most programming languages, a branch's name is irrelevant to the rest of the system. (I can hear you thinking: "But I work with variable names all the time, to me they are important!" Yes, you're right, but like I said: forget about that and focus on what the algorithm does, at least for now.) When examining a commit history using tig, we can trace the two commits involved in a merge operation. The text of the merge commit already holds the branch names, but this is not guaranteed when you're merging remote branches or when you want a custom commit message.
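As an aside, you can see how thin a branch pointer really is straight from the shell (the second command assumes the ref hasn't been packed yet):

git rev-parse master          # prints the commit that master points at
cat .git/refs/heads/master    # the same hash, read directly from the data structure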

Tuesday, 15 January 2013

WTF Google, you stole my $5 - Update

A couple of weeks ago I posted about this issue I had with my Google Chrome Developers account. After a minor public outrage (#1 on Hacker News for an hour or two) I thought: this can go two ways, either they fix the problem perfectly right now and send me apologies and a free phone or something, or they keep ignoring it. Either way: I should tell the world how it went.

Well, the actual outcome was somewhere in the middle. After three days of silence I got a reply on the Chromium Apps discussion board from Google's Developer Advocate Joe Marini. He apologized, said that the issue had been fixed, and so it had. No big deal and certainly no free phone ;) (Bummer, my old HTC Vision just broke down.)

I published my extension straight away. The status turned to "published", but it did not show up in search and neither was I able to install it. I waited, and after two days hit publish again. Now the status turned to "pending review". I waited two weeks. I posted this new issue on the Chromium Apps board. After half a day Joe replied once more, and what do you know, after a couple of hours the issue got fixed.

This morning my extension finally got published! Not exactly a smooth process though.

Two things struck me:
A) This Joe guy responds to almost all the questions on the Chromium forums. Apart from his replies, Google is dead silent.

B) Google handled this in a rather offhand manner. Does that mean they're not really working on improving this process? Would it have been an indicator of improvement if they had made a bigger deal out of it? There is so much silence from their side that it is really hard to make out what is actually happening.

Well, at any rate, you now know that my app finally got published after 2.5 months. And you've learned that complaining works every now and then, however much it goes against your personality. It certainly does not come naturally to me.

You can discuss this on HN

Sunday, 30 December 2012

WTF Google

tl;dr: Google charged me $5 for a Chrome Developers account two months ago and I still can't get into my account, although I reported the problem right away. Apparently going public is the only way to get their attention.

Two months ago I created my first Chrome Extension. I had an itch using a crappy website. With a Chrome Extension you can run your own JavaScript on other websites, which can greatly improve their user experience. That doesn't really matter. What does matter is that I could not publish the extension.

I have an Android app for another itch that had to be scratched. When I created that, I had to pay $25 to sign up for a Developer Account. This was for administration purposes and as a measure to prevent fraudulent applications or whatever. OK, I get that. Besides, $25 is peanuts. So I paid.

But now Google wants even more peanuts. I thought having a Google Account meant being able to use it on all Google Services. Silly me. Of course I only had my *Android* Developer account verified! Now I needed to pay $5 to have a *Chrome Extensions* Developer account.

Nevertheless, I clicked the "Pay now for your Chrome Developers account" button, because well, $5 would not really break the bank. I entered my credit card code. After about five seconds I got a popup saying: "Uh oh. There was a problem." - "We couldn't complete your purchase because of a technical issue."

Fair enough. Being a programmer I can be very patient. I tried again, same error. Now what? More patience. Days later the problem persisted. In the meantime I had filed a ticket with Google support. Here is the entire transcript of the e-mail exchange.


-----  Oct 21, 2012
Question regarding order #xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:
I got a "There was an error processing this order" error. I've tried again but with the same results. I would like to publish my chrome extensions, please let me know what the problem is.


----- Oct 22, 2012
Hello
Thanks for contacting the Chrome Web Store fee team. It looks like this order has been successfully charged.
Let us know if you have any other questions about this specific order.
Regards,
[name],
The Chrome Web Store Team


----- Oct 23, 2012
Hello,
14 hours ago I added new comments to the order, stating that I still had a problem. I've not received any reply and I'm not happy.
I can't proceed in the developer dashboard. I still get the Developer Registration page whenever I try to publish something. When I try paying I see: "Uh oh. There was a problem." - "We couldn't complete your purchase because of a technical issue."
I am getting somewhat annoyed, could you please activate my account?
By the way, I have already paid an Android Developers fee that's linked to my Google Account almost two years ago. Seems kind of strange that I have to do the same for a Chrome Developers account.
--
Jouke


----- Oct 23, 2012
Hello Jouke,
Apologies for the delayed response. I have forwarded your concern to our specialists. I'll update you know once I hear from them. We appreciate your patience as we work to resolve your concern.
Sincerely,
[name],
The Chrome Web Store Team


----- Nov 3, 2012
I AM STILL WAITING!


-----  Nov 6, 2012
Hello Jouke,
Apologies for the delayed response. I'm still waiting for to hear back from the relevant department. . I'll update you once I hear from them. We appreciate your patience and understanding as we work to resolve your concern

So now it's almost 2013, and I'm still waiting.
The amount has been charged to my card by the way, so technically, they stole my $5. The extension I developed would be useful to thousands of people, and it'd be a great example of why Chrome Extensions are a good idea. So I'd say it'd be very good for Google if I could publish it. Instead, Google is creating frustration among developers.

Oh, and it has been discussed by others on the Chromium apps Google Group as well.

Hopefully this will generate a bit of noise and Google will do something. You can discuss this on Hacker News

Sunday, 14 October 2012

A secure home gateway on the Raspberry Pi in four parts. Part four, proxying to your devices


I have some very nifty devices lying around in my home:
  • A couple of computers
  • A very smart router with the Tomato firmware
  • A Raspberry Pi model B (the only one you can get right now)
  • A Popcorn Hour A200
Besides that, I have full control over a domain name (waleson.com.).

The amount of cool things you can do with this is enormous. However, until yesterday morning, these devices were working with most of their default settings (BOOOORING). Here's how I made it awesome in one evening.

Part four, proxying to your devices.

Objective: You want to access the devices inside your network from the outside, over the internet, securely.

So far we have used all of the devices above except for the Popcorn Hour. This device offers, amongst other things, a torrent client over the web. If you go to http://192.168.1.133:8077, or whatever IP your Popcorn Hour has, you'll get redirected to http://192.168.1.133:8077/transmission/web, and you will see the Transmission web UI. But that's only accessible on the internal network. It'd be nice if we could control our torrents over the web. (Torrents are great for downloading large files like open source Linux distros!)

So in our nginx configuration file, we add a location directive:

server {
    server_name home.waleson.com;
    listen 443 ssl;

    error_log /var/log/nginx/home.error;
    access_log /var/log/nginx/home.access;

    ssl on;
    ssl_certificate /usr/local/nginx/conf/home.waleson.com.crt;
    ssl_certificate_key /usr/local/nginx/conf/home.waleson.com.key;

    root /srv/www;
    index index.html /index.html;

    location /transmission {
        proxy_pass http://192.168.1.133:8077;
    }
}
Now restart nginx:
/etc/init.d/nginx restart

And voilà, we can access the Transmission interface securely over the web at https://home.waleson.com/transmission/web. Great!

I said securely, but it's not really secure yet. No one can eavesdrop on the connection itself, but anyone will be able to access our torrent server! Not good, not good!

We need to password protect everything under /transmission.

To do that, we add two lines to the location directive:

server {
    ....
    location /transmission {
        auth_basic            "Are you l33t enough to torrent?";
        auth_basic_user_file  htpasswd;
        proxy_pass http://192.168.1.133:8077;
    }
}
The auth_basic_user_file is a list of usernames and crypt()-hashed passwords. It is important to realize that the path is relative to the main nginx.conf file in /etc/nginx.

You can easily create a login entry from bash like so:
printf "USER:$(openssl passwd -crypt PASSWORD)\n" >> /etc/nginx/htpasswd

To see what this does, run
printf "USER:$(openssl passwd -crypt PASSWORD)\n"
to display the output in the terminal itself. It will be:

USER:CRYPTEDPASS
Instead of displaying it directly, we want to append that line to a file, so we use >> /etc/nginx/htpasswd. If the file does not exist, it will be created.

Restart nginx, and now when you go to https://home.waleson.com/transmission/web, you'll be prompted for a password.

We're not done yet.

Torrenting is fun, but what about accessing the router settings? As said earlier, this is something that would be cool to do, but you need security. We have HTTPS now, so if we work with passwords, they can't be eavesdropped on. Let's make it so.

We could simply add another location like this:

server {
    ....
    location /router {
        auth_basic            "Are you l33t enough to access the router?";
        auth_basic_user_file  htpasswd;
        proxy_pass http://192.168.1.1;
    }
}

But if you try this, you will get a 404. The request you made is sent directly to the router, but the router's web server has no idea what /router means: the admin interface is available under /, not under /router/. So instead, we'll have to use a location like this (notice the trailing slashes after /router and after the IP):

server {
    ....
    location /router/ {
        auth_basic            "Are you l33t enough to access the router?";
        auth_basic_user_file  htpasswd;
        proxy_pass http://192.168.1.1/;
    }
}

This will strip the /router prefix from the requests: a request for, say, /router/status.asp arrives at the router as /status.asp.

Another problem arises, unless you've been careless. Your router's admin interface will prompt you for a password, but nginx has already prompted you for one, and you can only specify one username/password pair for the entire connection. If you chose the exact same username/password combination, nginx will probably pass the credentials along with the requests. That could be what you want, but it is an implicit contract, which makes things hard to debug when they go awry. Furthermore, I'm not sure that nginx doesn't strip the basic auth attributes from the request. Fortunately, we have two options to make this work anyway.
  1. No nginx authentication for /router
  2. Let nginx fill in the router credentials for you
If we omit the auth_basic settings for the /router/ location, we are prompted for credentials by the router. It has security, so that's all good. Unfortunately, we then have to remember multiple passwords within our nice HTTPS portal.

I chose the second option, by putting the router's credentials in the nginx directive:
server {
    ....
    location /router/ {
        auth_basic            "Are you l33t enough to access the router?";
        auth_basic_user_file  htpasswd;
        proxy_pass http://192.168.1.1/;
        proxy_set_header Authorization "Basic XXXXX";
    }
}
Of course you shouldn't literally put XXXXX there; you should base64-encode the string "USER:PASS" and put that in its place. Something like this:


jt@augustine:~$ python
Python 2.7.3 (default, Aug  1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import base64
>>> base64.b64encode("USER:PASS")
'XXXXX'
>>>
Put that XXXXX value in the nginx proxy_set_header and you're all set!
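If you'd rather stay in the shell, the coreutils base64 tool produces the same value (the -n keeps echo from encoding a trailing newline):

echo -n "USER:PASS" | base64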

So there you have it: I can now safely manage my home devices over the internet! Thank you for reading, and please be thankful for the Raspberry Pi Foundation and all of the open source packages I used.

A secure home gateway on the Raspberry Pi in four parts. Part three, free HTTPS to the rescue

I have some very nifty devices lying around in my home:
  • A couple of computers
  • A very smart router with the Tomato firmware
  • A Raspberry Pi model B (the only one you can get right now)
  • A Popcorn Hour A200
Besides that, I have full control over a domain name (waleson.com.).

The amount of cool things you can do with this is enormous. However, until yesterday morning, these devices were working with most of their default settings (BOOOORING). Here's how I made it awesome in one evening.

Part three, free HTTPS to the rescue

  1. Part one - Dynamic DNS
  2. Part two - nginx on the Raspberry Pi

Objective: You want to wear protection before we take this to the next level

Congratulations to you all, you now have a working HTTP server on the Raspberry Pi in your home, accessible from a nice URL. Sure, it's not fast, and it only serves static pages, but that doesn't matter.

I said earlier that opening up your router's web interface over plain HTTP to the entire internet was a bad idea. Eavesdroppers are able to see your credentials and do all kinds of nasty stuff to your router. However, having access to your router from the outside would be very useful! And this is just one of the many cool things you want to do that require authentication and security. We need an encrypted connection.

As many of you will know, StartSSL offers free SSL certificates to natural persons. I happen to be just that. This is absolutely fantastic. I went through the express lane on their site, filled out my info, and selected to create a certificate for waleson.com. I already had a paid SSL certificate for that domain, but you can select one additional subdomain in the class 1 certificate that you get for free. For my paid certificate I chose waleson.com and the subdomain www.waleson.com.

For this new certificate I chose waleson.com and home.waleson.com. (I'm not going to use it for waleson.com, I just want the subdomain here).

Somewhere in the process you create your own private key, for which you need a passphrase. I created a random passphrase and put it in my password safe. Out of the StartSSL express lane you end up with two files: the certificate (ssl.crt) and the key (ssl.key). You need to download two more, ca.pem and sub.class1.server.ca.pem, from https://ca.startssl.com/certs/. Move all of these files to a location on the Raspberry Pi, say /usr/local/etc/ssl. Then do this:

cd /usr/local/etc/ssl
cat ssl.crt sub.class1.server.ca.pem ca.pem > crt.pem

This will create a certificate chain, from our own certificate up through StartCom's intermediate certificate to their root. (Note that nginx expects our server certificate to come first in the file, so that it matches the private key.) We'll present this entire chain to the clients, so we conCATenate all these certificates into one large certificate chain file.
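Before pointing nginx at the chain, you can sanity-check that the three pieces actually fit together (standard openssl, using the file names above):

openssl verify -CAfile ca.pem -untrusted sub.class1.server.ca.pem ssl.crt
# should print: ssl.crt: OK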

Now, in nginx, let's configure the server section (/etc/nginx/sites-enabled/home.waleson.com.conf) once more.

server {
    listen 80;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    server_name home.waleson.com;
    listen 443 ssl;

    error_log /var/log/nginx/home.error;
    access_log /var/log/nginx/home.access;

    ssl on;
    ssl_certificate /usr/local/etc/ssl/crt.pem;
    ssl_certificate_key /usr/local/etc/ssl/ssl.key;

    root /srv/www;
    index index.html /index.html;
}
To everyone connecting on port 80 we say: "No way man. Be secure. Connect on port 443 and you'd better remember it, forever!"
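Once the server is up (we'll get there in a second), you can check the redirect without a browser; the permanent flag makes nginx answer with a 301:

curl -I http://home.waleson.com
# HTTP/1.1 301 Moved Permanently
# Location: https://home.waleson.com/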

Now when we try to restart the server:
/etc/init.d/nginx restart
we'll be asked for the passphrase, because the server needs to access the private key.

You could type it in, but I don't recommend it: we don't want to do this every time we restart the server. So instead we go back to the certificate directory and store a decrypted version of the private key.

cd /usr/local/etc/ssl
mv ssl.key ssl.key.secure
openssl rsa -in ssl.key.secure -out ssl.key    # you'll be asked for the passphrase once
/etc/init.d/nginx restart
Try opening the site again! If everything went alright, you will now see a secure version of the index.html file.
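If nginx refuses to start instead, a classic cause is a key that doesn't match the certificate. You can compare their moduli with openssl; the two digests should be identical:

openssl x509 -noout -modulus -in ssl.crt | openssl md5
openssl rsa -noout -modulus -in ssl.key | openssl md5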

Objective three accomplished, we now have a protected connection.

Read on: part four - proxying to your devices