Hey, I’m Parker.

Creator of music, photography, and (mostly open) software.

Don't Like Being Tracked?

I don't like being tracked by Web giants when I'm not on their websites. As more sites integrate Twitter, Facebook, and Google support, I can't help but be tracked on almost every site I visit.

Luckily, the integrations for these three aforementioned companies are quite simple to subvert (at least partially). If you're running a Unix-based machine, you can add just a few lines to your /etc/hosts file and be well on your way to Web privacy.

Open up a new tab, and open the Developer Console. Open the Network tab. Now navigate to your favourite blog, news site, etc. You'll see each individual network request that is made from that page listed in the Network pane of the Developer Console. Scroll through requests and make note of the domain names you wish to block.

Once you have the list of domain names, simply use your hosts file to reroute those domains to your local server (127.0.0.1). Here's an example:

127.0.0.1 connect.facebook.net
127.0.0.1 google-analytics.com www.google-analytics.com
127.0.0.1 platform.twitter.com
127.0.0.1 adroll.com a.adroll.com d.adroll.com
127.0.0.1 ib.adnxs.com
127.0.0.1 googleadservices.com www.googleadservices.com

In this example, I've blocked Facebook, Google Analytics, Twitter, AdRoll, Google Ad Services, and the unknown "adnxs" service.

Preface each domain name with the address of your local server, 127.0.0.1, and group each line based on the second-level domain (e.g. adroll.com). Add each of these lines to your /etc/hosts file (note: this will require root privileges). Don't forget to save it when you're done.
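Here's a sketch of that flow from the command line (assuming a Unix-like shell; connect.facebook.net stands in for whichever domain you're blocking):

# append an entry to the hosts file (requires root)
$ sudo sh -c 'echo "127.0.0.1 connect.facebook.net" >> /etc/hosts'
# verify the domain now resolves to your machine
$ ping -c 1 connect.facebook.net
PING connect.facebook.net (127.0.0.1): 56 data bytes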

Now navigate to that same site again, with the Network pane still open. You should now get 404s or 500s (or outright connection failures) when the page tries to reach the domains you 'blocked'.

For a solution that doesn't require halting access to these hosts, check out the Tor project.

Clearing Up Confusion Around baseurl

TL;DR: Don't use baseurl. It'll drive you crazy.

Hey, so there's been a bit of confusion about what the Jekyll configuration option baseurl is for. Part of the beauty of open source, and of Jekyll, is that there's a lot of flexibility. Unfortunately, much of that flexibility doesn't apply to baseurl. Here's a quick distillation of its intent and how to use it.

Mimic GitHub Pages

baseurl was originally added back in 2010 to allow "the user to test the website with the internal webserver under the same base url it will be deployed to on a production server".[1]

Example

So let's say I come up with a cool new project. I want to make documentation for a project I'm working on called "ubiquity", and I'll be deploying it to GitHub Pages as a repo under my username, "@parkr". Its documentation will be available at the URL http://parkr.github.io/ubiquity.

In this example, the term "base URL" refers to ubiquity. When I go to develop my website, I can set the baseurl key to equal ubiquity and navigate my website from http://localhost:4000/ubiquity/, as though it were hosted on GitHub Pages. Notice that the only difference here between development and production is the host: parkr.github.io vs. localhost:4000.[2]
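In _config.yml, that's a single line. Here's a sketch (some setups write it with a leading slash, as /ubiquity; whichever you choose, be consistent about how you join it to paths in your templates):

baseurl: ubiquity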

Configuring Your Site Properly

  1. Set baseurl in your _config.yml to match the production URL without the host (e.g. ubiquity, not http://parkr.github.io/ubiquity).
  2. Run jekyll serve -w and go to http://localhost:4000/your_baseurl/, replacing your_baseurl with whatever you set baseurl to in your _config.yml, and not forgetting the trailing slash.
  3. Make sure everything works. Feel free to prepend your URLs with site.baseurl (see the template sketch after this list).
  4. Push up to your host and see that everything works there, too!
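As promised in step 3, here's a sketch of what prepending site.baseurl looks like in a Liquid template (the stylesheet and page paths are hypothetical):

<link rel="stylesheet" href="{{ site.baseurl }}/css/main.css">
<a href="{{ site.baseurl }}/about/">About</a>

With baseurl set, these resolve under /ubiquity/ both on localhost:4000 and on GitHub Pages.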

Success!

You've done it! You've successfully used baseurl and the world is wonderful.


  1. https://github.com/jekyll/jekyll/pull/235 

  2. The port also differs, but that's not what's important here. The point is that everything after the / after the host is the same. 

Always Moving Forward

Emeryville, California

Today, I'm proud to announce that I have accepted a job offer from Visual Supply Co, a small and happenin' start-up based in the Bay Area. Visual Supply Co, also known as VSCO, produces the well-loved VSCO Cam iPhone and Android apps, as well as VSCO Film, VSCO Grid, and VSCO Keys. I'll be working on all things web, including the Grid service and the API.

Super stoked to begin this next chapter in my journey through life working on a slick series of products with a stellar team. Follow my adventure on my grid!

Fixing Common Mistakes When Working With EC2

I'm lucky enough this semester to be taking CS 5300 at Cornell, a class entitled "The Architecture of Large-Scale Information Systems." For this class, we will need to know our way around Amazon's Web Services. I learned a lot about AWS when I worked at 6Wunderkinder last year, so I was feeling up to the challenge. Little did I know that the tooling 6W had created around its ops was far superior to anything else out there.

I already had an AWS account, so my first step was to find a good CLI. I did the logical thing, and asked Google. Turns out, Amazon ships its own aws client, written in Python and distributed via pip. Marvelous! I ran pip install awscli, then aws configure, and presto, I was in business.
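For reference, aws configure just prompts for credentials and a couple of defaults, roughly like this (the values here are placeholders):

$ aws configure
AWS Access Key ID [None]: AKIA...
AWS Secret Access Key [None]: ...
Default region name [None]: us-east-1
Default output format [None]: json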

Our first question asked us to launch an instance of the AMI ami-bba18dd2, a simple Fedora distribution. After asking for the man pages for aws in 6 different ways, I got them. I discovered I would need to specify the instance type and security group as well. So I created a security group and went ahead:

$ aws ec2 run-instances \
--image-id ami-bba18dd2 \
--instance-type t1.micro \
--security-group-ids sg-sgsga888

Yay, it worked! Ok, now I need to ssh into this bad boy.

$ ssh ec2-la-la-la.amazonaws.com
Access denied (publickey).

Huh? What's that all about? I had created a key pair from previous messing around with EC2. Hm... After a few minutes of puzzlement, I realized I needed to pass another option to my aws ec2 command. Let's try this again:

# don't forget to terminate old instances!
$ aws ec2 terminate-instances --instance-ids i-1111111
# now, create the new one
$ aws ec2 run-instances \
--image-id ami-bba18dd2 \
--instance-type t1.micro \
--security-group-ids sg-sgsga888 \
--key-name parker

Booted! Now, let's try to ssh again:

$ ssh ec2-li-la-le.amazonaws.com
Access denied (publickey).

Bollocks! Looks like I am still missing something...

Ah! After taking a look at a tutorial, I realized I needed to log in as ec2-user. Let's give this one more try:

$ ssh ec2-user@ec2-li-la-le.amazonaws.com
Last login: Tue Feb 4 05:13:24 2014 from cpe-88-88-88-88.twcny.res.rr.com
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|
https://aws.amazon.com/amazon-linux-ami/2013.09-release-notes/
7 package(s) needed for security, out of 25 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-10-9-162-71 ~]$

YES! I did it. Ok, so lessons:

  1. Log in as ec2-user, not as albie or any other name.
  2. Make sure you specify the key-name for the instance(s) you want to launch (see the sketch after this list).
  3. Always err on the side of being more specific. Defaults can be bad.
  4. Always terminate once you're done using the box.
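On the key-name point: if you don't have a key pair yet, creating one with the same CLI looks roughly like this (a sketch; "parker" is just the key name from the example above):

# create a key pair and save the private key locally
$ aws ec2 create-key-pair --key-name parker \
--query 'KeyMaterial' --output text > ~/.ssh/parker.pem
$ chmod 400 ~/.ssh/parker.pem
# then point ssh at it explicitly
$ ssh -i ~/.ssh/parker.pem ec2-user@ec2-li-la-le.amazonaws.com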

Simple fixes for problems that seem so intractable.

Installing Command-T With OS X Mavericks' Built-In Vim

I was fortunate enough to pick up a new computer just today. It's my first new hardware in over four years; I had been holding off. But when my trusty MacBook Pro bit the dust last night and I found out the repair cost was extraordinary, I bit the bullet.

So, you're probably in a similar place. You relatively recently got a shiny new Macintosh and you're excited to start writing code and making a difference with those skillz of yours. Except one thing is missing: Command-T.

Lucky for you, sir, I am here to help. OS X Mavericks' built-in vim distribution already comes with Ruby support (which it needs for Command-T), so you're good there. Now you need to download and compile Command-T. Should be easy, right? Well, not quite.

Mavericks was notable for Ruby users because it ships with Ruby 2.0. Every previous version I had used shipped with 1.8.7, so this was a huge bonus. The problem is that the pre-installed vim wasn't compiled against 2.0.0; it was compiled against 1.8.7.

To check this, run the following in vim in NORMAL mode:

:ruby puts "#{RUBY_VERSION}-p#{RUBY_PATCHLEVEL}"

For me, that output 1.8.7-p358. That means the Ruby version vim is using is 1.8.7-p358, and we need to compile Command-T with that same version. To do so, install it:

$ rbenv install 1.8.7-p358

Boom! Now download and install Command-T:

$ git clone https://github.com/wincent/Command-T.git ~/.vim/bundle/Command-T
$ cd ~/.vim/bundle/Command-T # for tpope's Pathogen
$ rbenv local 1.8.7-p358
$ rbenv rehash
$ gem install bundler
$ bundle install
$ bundle exec rake make

Aaaaand boom, you're done. Open up vim and type your leader key then t (for me, that's ,t) to launch the prompt.
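If you're not sure what your leader key is, it's backslash by default; mine is remapped in my ~/.vimrc with a line like this:

let mapleader = ","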

If you get a weird SIGTERM error when you launch vim, then you compiled Command-T with the wrong Ruby version. Remove ruby/command-t/ext.bundle and try again.
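In that case, the rebuild is quick (a sketch, assuming the Pathogen layout from above):

$ cd ~/.vim/bundle/Command-T
$ rm ruby/command-t/ext.bundle
$ bundle exec rake make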

Fixing Memory Issues in Dokku Hosted on DigitalOcean 512MB Droplet

Like many other Heroku customers, I rejoiced when Dokku was released. To have complete freedom to host as many apps with whatever databases or other plugins I wanted -- all for the cost of a small VM -- was wonderful news.

I decided, for cost's sake, to boot up one of those famed $5 DigitalOcean Droplets. I didn't see that DigitalOcean provides an "app" image for Dokku, so I created a vanilla Ubuntu 13.04 box and pressed on. I got everything up and running and went to deploy my first app, only to see this when I ran git push:

runtime: panic before malloc heap initialized
fatal error: runtime: cannot allocate heap metadata

Well, golly, that sure is unhelpful. Looks like 512MB doesn't cut it. Luckily, we can avoid paying the extra $5/mo for the 1GB droplet by adding a 512MB swap file. Run the following as root to create and format it:

dd if=/dev/zero of=/extraswap bs=1M count=512
mkswap /extraswap

Then add the following to your /etc/fstab file so the swap persists between reboots:

/extraswap none swap sw 0 0

Then run this to enable /extraswap for swapping:

swapon -a

Boom. Now re-run git push and you're in business. Magic!
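If you want to double-check that the swap actually came online first, the standard tools will show it:

# /extraswap should appear in the list of active swap areas
swapon -s
# or check the Swap row (sizes in MB)
free -m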

Credit goes to the brilliant @dhassler for the idea and the code. Just thought I'd share and preserve for my own benefit in the future.

Taking Over Someone Else's Open-Source Project

Last December (2012), Tom Preston-Werner granted me push & pull access to mojombo/jekyll. I had written a letter to Tom about the future of Jekyll and after a bit of persistence on my part, he relented:

mojombo added parkr to mojombo/jekyll

After a somewhat lengthy (and amusing) conversation via Skype, I knew what Tom's priorities were for the project and where I could help. I started off trimming down the number of open issues. At the time, they numbered just over 300. After a quick visit to tackle some issues with Nick Quaranto at an OpenHack meetup at CoworkBuffalo in Buffalo, we were down to fewer than 200 open issues. I continued to work through them (much to my parents' dismay) all through Winter Break. I was making good progress in the triage process: I knew what many of the problems were with v0.12.1 and was formulating ideas about how we might go about fixing them.

jekyll shall inherit the earth

In January, I flew out to San Francisco to visit my sister for a week. I set up a meeting with Tom to talk more about Jekyll and a plan for moving forward to a v1.0.0 release. After about an hour at GitHub's office in SoMa, talking through various PRs, Tom said I could start merging pull requests.

Consultation with Tom was challenging (he runs a multi-million-dollar company, after all), as it was increasingly difficult for him to find time for Jekyll. Despite announcing that he would commit some of his "20% time" to Jekyll, he eventually abandoned that as impractical. As time went on, I received fewer replies to my emails and eventually stopped sending anything that didn't absolutely require Tom's input.

In March, I knew I needed more help. Tom wasn't able to give much time and I was trying to tackle this project alone. I had noticed that Matt Rogers shared my vision for Jekyll, so I asked Tom to add him on as a contributor to the repo. By v1.0.3, he had started merging pull requests like a BAW$ and was the sanity to my ridiculously-obsessed, inexperienced mind. Since Matt joined, I haven't really heard much from Tom. I write the occasional email and get the occasional reply but he's essentially phased out of the normal development of Jekyll. Matt & I have essentially taken over the project altogether.

By May, we were ready to ship v1.0.0. With the help of all the amazing contributors, Matt, and my many mentors along the way, I finally ran rake release for 1.0. The project was ours.

Since realizing that this project has few constraints beyond those I've constructed for it in my mind (no one is going to tell me, "No, you may not implement that feature or add that enhancement," without a valid argument against my solution), I've been thinking about what it means to take over someone else's project. Jekyll is 5 years old. I wasn't there in 2008 when it was first born as autoblog. Jekyll has changed a lot, as has its vision and purpose. It's used by lots of people now, whether via GitHub Pages or locally.

Taking over someone else's project takes a deep understanding of what the project is and what it should be. As much as I wish Jekyll could make me kräuterquark, that's outside the scope of the project. Developing that lens is crucial when taking over someone else's project.

Mentoring by the original owner or previous maintainer(s) is exceedingly important in developing this critical lens. I am so grateful that I was able to meet up with Tom back in January and to have some amount of input from him as I came to understand what the project should be.

It would be ill-advised to take over someone else's project without a partner. Once Matt joined the team, Tom was able to step back into a much lesser role. If I had tried to do what I did without any guidance (or at least sanity checks), Jekyll v1.0.0 would have been a complete shit-show.

Trust in the community must also be earned. If people highly doubted my ability to handle Jekyll, Tom would have likely removed me and found a replacement. I argued a lot and got into some pretty heated debates at the beginning. There were a couple people who didn't share the vision that I had inherited from Tom and stopped contributing to the project altogether. I feel like we now have a pretty great community around Jekyll and are able to help each other out and share cool plugins and sites that we made for/with Jekyll.

Taking over someone else's open-source project was new terrain for me in December, and still is today. It can succeed with the right amount of mentoring from the previous maintainer(s), gradually withdrawn as trust builds.

With some old projects that are brought back to life by the passing of the torch, the new maintainers are able to experience the added bonus of feeling like Frankenstein, which, I will say, is a pretty cool feeling.

igor

Fix the Government: Open-Source Legislation

The following post was first written for a class at Cornell University taught by Phoebe Sengers called "Designing Technologies for Social Impact".


We have a rather significant problem in the U.S. Indeed, our government seems to be malfunctioning in an obvious and irresponsible way. The question of the day is simply: can it be fixed? If so, how?

One must first examine the problem and its root cause. The malfunction is, at bottom, a conflict of multiple competing interests without a proper ideological or procedural basis for dialogue. The root cause is that the existing platforms for dialogue (meetings, letters, rallies, news spots) are severely outdated. What worked pre-internet is not passing the test of time.

In particular, the legislative process is slow, dilapidated, and closed. This leads to laws with earmarks and loopholes, confusing language, and unnecessary provisions. What if the law were made out in the open, for everyone to see, the way open-source software is made? Version control and online access to bills in the works, as well as to enacted laws, would certainly improve transparency. The Library of Congress runs a system called THOMAS which is updated roughly daily, but the bills aren't organized logically and the user interface is fair at best.

The open-source legislation system envisioned here is based on an open-source software system called GitHub. GitHub is set up like so: users and organizations own repositories of code, text, or binary documents. Each repository contains the entire history of a particular set of documents as well as the "master" version, or the latest repository-owner(s)-approved version of the document(s). Each repository uses a version control system called git to keep track of the information about the documents it contains: current versions as well as previous versions. What is unique about this system, and the incredible strength it offers for open legislation, is the concept of a "diff". A "diff" is simply the difference between a document at one point in time (referred to as a "revision") and the same document at a different point in time. Each repository can be "forked" (cloned to a user's own profile for editing) by any user; this offers the most significant freedom of open source: anyone may modify a copy of the source and suggest changes to the main project based on those derivative works. In GitHub parlance, the suggestion for a change is called a "pull request," but this term need not be adopted by the new system.
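To make the "diff" idea concrete, here is a hypothetical one-line amendment to a bill section, exactly as git would present it (the file name and text are invented for illustration):

$ git diff
--- a/section01.txt
+++ b/section01.txt
@@ -1 +1 @@
-The tax credit shall not exceed five hundred dollars.
+The tax credit shall not exceed one thousand dollars.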

To illustrate the design of the ideal system:

  1. Use a git derivative for prose editing, rather than (line-by-line) code editing
  2. Be online, accessible by anyone
  3. Accept input from anyone in any form (suggested changes or just comments)
  4. Do not differentiate between users beyond repository owner & non-owner (those with direct access to the repository vs. those who can fork and suggest changes, but can't incorporate those changes themselves)
  5. Each repository is a proposed bill owned by the deciding body at the given time during the legislative process as dictated by the U.S. Constitution. Each repository contains a text file for each sub-segment of the proposed bill as well as a rationale or goal, e.g.:

     .
     |-- README.txt (rationale, goal(s) of bill, and other meta info)
     |-- section01.txt
     |-- section02.txt
     | ...

  6. The entire process of the creation and formulation of the law is public and open insofar as the culture can push for this

  7. Moderator of some sort to ensure comments are productive (usually repo owner). All of the comments that are hidden still exist inline and can be shown by any reader of the comment thread

  8. Membership of individual lawmakers in committees and other larger bodies is reflected in their membership of organization accounts on the platform, which gives them the ability to directly change the legislation they have access to through those organizational memberships

  9. Offer "points of interest" as a means of discussion about a particular aspect of a bill without the need to suggest changes. Can ask anything from "why is this section worded this way?" to "what are the implications of this on agricultural development in Upstate New York?" No question is too dumb and all of them require an answer insofar as they are productive and relevant.

  10. Easy citation of preexisting laws and other bills to allow for discussion surrounding conflict or other interference

  11. Access to signing letters written by President (usually interpretations of pieces of the law for purposes of execution of the law)

  12. Easy viewing of votes on final versions of bills (who, when, did it pass?, etc)

  13. Means of mentioning lawmakers in a comment if comment is directed at them, or question is asked directly of them

  14. Links to more information about execution by Executive & other related policy/law

  15. Comments and changes can only be created from user accounts, e.g. David Skorton can suggest a change, but Cornell University cannot.

  16. Users can subscribe to updates from a bill repository and receive all information about changes that are made and discussions that occur.

One negative aspect of this approach is volume and the issues that follow from it (see: student's answer on the Piazza page linked below). As the number of collaborators grows, the ability of those with direct access to the bills to handle suggestions for modification is greatly diminished. How are the interests of the few weighed against those of the many (e.g. an individual's comment vs. a comment from an organization such as the NAACP)? Given the way government already works, aides of the legislators who have access to the repositories would have mirrored access to accept and discuss changes and questions from citizens. And since each bill is its own repository, the load is distributed over the many hundreds of bill repositories under consideration. Additionally, Nissenbaum & Benkler discuss the idea of self-selection and volunteerism as an element of open source. This applies very much to the scalability of open-source legislation and responses to citizens. If a citizen is not interested in a particular bill, that citizen will not involve himself or herself in the creation of that bill. This self-selection greatly reduces the number of individuals with whom the maintainers of the bills will have to contend in discussion about a bill and suggestions for change.

Ultimately, this idea is predicated on the belief that citizens, when given the right tools, can reach compromise and have productive discussions to "get the job done" efficiently and accurately.

It's Just Semantics: App.net Alpha

This post was originally written as an assignment for INFO 2450: Communication & Technology taught by Professors Jeff Hancock and Drew Margolin in Fall 2013. The course focuses heavily on design of technologies. The assignment asked for an evaluation of an online application or product based on the principles outlined in Donald Norman's "The Design of Everyday Things". It's a very basic look at the technology, but I thought I'd share it anyhow.


App.net is a social platform designed from the beginning to be nothing more than a foundation for developers to build great social apps upon. This blog post is about the very first application built on App.net, called Alpha. App.net will henceforth be referred to as "ADN."

The main purpose of Alpha is to demonstrate the ADN API in a somewhat familiar way. Alpha behaves much the way Twitter does: a user creates posts, users can follow other users and interact with them in the familiar way. The divergence from Twitter (and other "microblogging" social applications) is twofold: a user pays for his or her account, and the posts may be longer.

By asking the user to pay for an ADN account (which Alpha uses as the poster's identity), the normal business incentives shift from building an application (or platform) that optimizes for advertisement revenue to one that optimizes for the happiness of the user (if users like the service, they will continue to pay the $5/month or $32/year). Twitter, Facebook, and Google are well known for accepting detriment to users (in the design of their products) in exchange for greater advertisement revenue. This is not the case with ADN (and, by extension, with Alpha).

On Alpha, posts may be up to 256 characters long, and users can choose to attach media to their posts. The "@" symbol is used to mention another user (who receives a notification when this happens) and the "#" is used to denote topics. With the longer character limit, one could easily surmise that Alpha is attempting to make asynchronous, open conversations online a bit easier. The openness of being able to write to anyone (a divergence from Facebook), coupled with the 256-character limit (a physical constraint, and an extension of Twitter's idea that a post should be "bite-sized"), produces a fantastic application for having chats with anyone about anything in a way conducive to both conciseness and expressiveness. One common frustration with Twitter is that conversations on it are very challenging, as 140 characters is far too few to say anything substantive. ADN hopes to remedy this problem, and in my experience, arguments (and general discussions, for that matter) have been incredibly easy and productive.

Screenshot of alpha.app.net

Above, you see the home screen of Alpha (namely, mine). This shows the user's stream. The search box and post box (based on their physical attributes) afford writing. The change of the "POST" button from grey to a nice deep red-orange signals a change in mapping from a button that may not be pressed to one that may be pressed (I would consider the change itself feedback). As the user types a post, the number (shown here as "256") decreases by one for each character typed into the box above it, providing very valuable feedback about the user's actions. This number's presence also provides visibility for the physical constraint placed on the size of posts allowed on the service. The buttons to the right of each post afford clicking and change color (feedback) when clicked. All of the aforementioned design elements, in addition to the placement of the post box at the top, suggest that the application is trying to get the user to submit content to his or her stream. The feed below seems to indicate that the application wishes the logged-in user to peruse and interact with other users' posts.

Screenshot of a singular post

As you saw above, each post is given its own box. There are 5 interactions which only show up when the user mouses over the post: "Discussion," "Reply," "via (app)," "Mute User," and "Report". To better facilitate the app's goal of interaction, it would make sense for the "Discussion" and "Reply" buttons to always be shown. This would improve the visibility of the interaction paradigms and afford interaction without requiring the user to make the first move (mousing over).


That's it.

Launching a Rails Console With Capistrano

If you're using the popular Capistrano web deployment framework, you've probably wished for an easy way to perform a quick task in the production rails console on one of your servers. Many thanks to @colszowka for this solution:

NOTE: This is for Capistrano v2. Things are different for v3.

namespace :rails do
  desc "Remote console"
  task :console, :roles => :app do
    run_interactively "bundle exec rails console #{rails_env}"
  end

  desc "Remote dbconsole"
  task :dbconsole, :roles => :app do
    run_interactively "bundle exec rails dbconsole #{rails_env}"
  end
end

def run_interactively(command, server = nil)
  server ||= find_servers_for_task(current_task).first
  exec %Q(ssh #{server.host} -t 'cd #{current_path} && #{command}')
end

And, voilà! Run cap rails:console and you're in business.