Format Toolbar

by Ben Ubois

The format toolbar has some options for styling the look of articles. There are a couple of nice font choices, including Whitney and Sentinel, both from Hoefler & Frere-Jones. You can also change the font size and make the text full width.

Todd has started posting some Feedbin design work on Dribbble, so go check out his page to see what’s in the works.

Feedbin is Open Source

by Ben Ubois

I spend a lot of time thinking about how to compete with free. I believe the answer is to change the meaning of the word, so starting today Feedbin is free as in freedom.

I think there are many great reasons to make Feedbin open source, but my main reasons are:

  • I want your help.
  • I like transparency and there’s nothing more transparent than being able to view source.
  • It makes it so Feedbin cannot pull a Google Reader.

Tom Preston-Werner of GitHub wrote a great article outlining some of the pros of open sourcing software and I’m hoping to reap those benefits as well.

Mostly I’m just excited to see what happens. It should be a cool experiment.

I wanted to thank Karl Fogel for his help making this happen. He literally wrote a book on producing open source software and is in the process of revising it so it was great to have him as an advisor. Also thanks to Alex Kessinger and Samuel Clay for their encouragement and help.

Discuss this on Hacker News.

Graphing Feedbin

by Ben Ubois

Feedbin uses Librato Metrics for graphing. It even works well for monitoring systems. For example, this shell script generates a set of graphs for one of the web servers:

For monitoring various other services, Feedbin uses a Ruby script that runs once per minute via Sidekiq.

This produces some great metrics about Redis, Postgres and Memcached:
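The script itself isn’t shown here, but a minimal sketch of the idea is easy to imagine: turn the raw stats a service reports into numeric gauges that can be submitted to Librato. Method names are hypothetical and the Librato submission step is omitted; this just shows parsing Redis `INFO` output into numbers.

```ruby
# Hypothetical sketch: flatten Redis INFO output into numeric gauges.
# In the real per-minute job these values would be queued up and
# submitted to Librato Metrics.
def parse_redis_info(info_text)
  info_text.each_line.with_object({}) do |line, metrics|
    next if line.start_with?("#") || !line.include?(":")
    key, value = line.strip.split(":", 2)
    # Keep only numeric stats; gauges want numbers
    metrics["redis.#{key}"] = Float(value) rescue nil
  end.compact
end

sample = <<~INFO
  # Memory
  used_memory:1024000
  used_memory_human:1.02M
  connected_clients:42
INFO

metrics = parse_redis_info(sample)
# Non-numeric stats like used_memory_human are dropped
```

The same pattern works for Postgres (`pg_stat_database`) and Memcached (`stats`) output, which is presumably where the graphs below come from.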

Speed

by Ben Ubois

Feedbin now preloads content which makes it pretty much instant to load articles.

This is the biggest improvement in perceived performance yet and brings Feedbin close to my personal goal of native app responsiveness.

Try it out now.

Tag Management

by Ben Ubois

You can now rename and delete tags in Feedbin.

Videos

by Ben Ubois

Embedded videos from YouTube and Vimeo will now show up in Feedbin.

Why didn’t Feedbin always show videos?

I’m glad you asked. Feedbin sanitizes all feed content for security reasons. Feed content is passed through a few different filters using the excellent html-pipeline by @jch and others. This library does some great stuff like sanitizing markup, rewriting image sources to go through an SSL proxy and turning relative links and image sources into fully qualified URLs.

Because of the sanitization, all iframes, CSS and JavaScript are removed. One side effect of this is that no video content would show up, since most videos on the web are embedded through iframes. Today’s change whitelists iframes that load content from YouTube or Vimeo.
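The core of a check like this is just matching the iframe’s `src` host against a whitelist. Here is a standalone sketch of that idea (the constant and method names are made up; in Feedbin the real filtering happens inside html-pipeline’s sanitization filter):

```ruby
require "uri"

# Hypothetical whitelist check for iframe src attributes.
ALLOWED_VIDEO_HOSTS = [
  /\A(www\.)?youtube\.com\z/,
  /\Aplayer\.vimeo\.com\z/
].freeze

def allowed_iframe_src?(src)
  host = URI.parse(src).host
  return false unless host
  ALLOWED_VIDEO_HOSTS.any? { |pattern| host =~ pattern }
end

allowed_iframe_src?("https://www.youtube.com/embed/abc123") # => true
allowed_iframe_src?("https://evil.example.com/embed/abc123") # => false
```

Anchoring the patterns with `\A` and `\z` matters; a naive substring match would let `youtube.com.evil.example.com` through.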

If there are any other video hosts you’d like to see whitelisted please let us know.

Feedbin Updates

by Ben Ubois

Reeder Update

Reeder 3.2 is out. This update brings support for the faster and more accurate version 2 of the Feedbin API. Between the API change and what I can only assume is some sort of magic added by Silvio, sync is fast. Very fast.

I encourage everyone to update because:

API V1 Deprecation

With the release of Reeder 3.2, all major clients are using version 2 of the API. I’d like to deprecate version 1 as soon as possible because it uses a ton of database resources. The index for the records that track read/unread information for V1 of the API has grown to about 30GB, most of it sitting in RAM at all times. Getting rid of this will be a huge relief for the database server.

If you are still using V1 of the API for anything please let me know. Version 2 is very similar except for the way it tracks read/unread information so it should be easy enough to switch over. Check out the documentation for more details.

Pricing

I announced this on Twitter yesterday, but the price for new customers has gone up to $3/month or $30/year. The plan for the extra money is to be able to invest more in Feedbin. The types of things I’d like to spend the money on are paying for top-of-the-line hardware, building expensive features like search and, if I’m really lucky, hiring some help. It’s hard to say whether it will pay off, because I definitely expect fewer people to sign up.

The rates for all existing customers will remain the same.

I was surprised and touched when existing customers started saying they would pay more, given the option, so this is now possible in your billing settings.

The Future

Now that the server move is out of the way and v1 of the API is on the way out, some real work can begin. First up is a long list of issues and feature requests that have been ignored for far too long. After that Todd and I have a bit of a redesign in the works. Finally there is one surprise that could have a big impact on Feedbin, but I’m not quite ready to talk about that yet :)

Full Metal Hosting

by Ben Ubois

Feedbin is now running in its new home at SoftLayer WDC01.

I wanted to talk a bit about the architecture and hardware that’s being used.

The Data Center

The data center is located in Washington D.C. This was chosen because it seemed to have the best latency of all the options between the US and Europe. Most of Feedbin’s customers are in the US, but Europe is close behind, specifically the U.K. and Germany.

The Bare Metal

Feedbin is heavily I/O bound and cloud servers just were not cutting it for much of the functionality, so I selected physical machines for the primary servers.

Database Server

Postgres 9.2.4 on Ubuntu 12.04.2 LTS 64bit (with a 3.9 kernel upgrade)

Big thanks to Josh Drake, Andrew Nierman and the rest of the team at Command Prompt for setting this up. These guys do excellent (and fast) work. I wanted a top-notch, reliable and fast Postgres setup and they nailed it.

  • Motherboard: SuperMicro X9DRI-LN4F+ Intel Xeon DualProc
  • CPU: 2 x Intel Xeon Sandy Bridge E5-2620 Hex Core 2GHz
  • RAM: 64GB
  • Storage: 4 x Intel S3700 Series SSDSC2BA800G301 800GB in RAID 10 for about 1.6TB of usable space
  • Alternate storage: 2 x Seagate Cheetah ST3600057SS (600GB) used for WAL.

The database server is in a 6U enclosure. Check out pictures of these things, they’re monsters.

The database has a similarly configured hot standby (using Command Prompt’s PITRtools) and ships write-ahead log files to S3 constantly as well as taking a nightly base backup using Heroku’s wal-e.
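The archiving side of a setup like this can be sketched in `postgresql.conf`, following wal-e’s documented usage (the paths and the `envdir`-based credential handling here are assumptions about this particular install):

```
# postgresql.conf (sketch)
wal_level = hot_standby
archive_mode = on
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
```

The nightly base backup would then be a cron job along the lines of `envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.2/main`, with the WAL segments shipped to S3 continuously by the `archive_command`.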

Web Servers x 3

Nginx in front of Unicorn running Rails 4.0, Ruby 2.0 (using rbenv) on Ubuntu 12.04.2 LTS 64bit

  • Motherboard: SuperMicro X9SCI-LN4F Intel Xeon SingleProc
  • CPU: Intel Xeon Ivy Bridge E3-1270 V2 Quadcore 3.5GHz
  • RAM: 8GB

Background Workers x 2

Sidekiq Pro, Ruby 2.0 (using rbenv) on Ubuntu 12.04.2 LTS 64bit

  • Motherboard: SuperMicro X8SIE-LN4F Intel Xeon SingleProc
  • CPU: Intel Xeon Lynnfield 3470 Quadcore 2.93GHz
  • RAM: 8GB

The Cloud

At SoftLayer there are a handful of cloud servers as well. There are two load balancers running nginx that handle SSL termination and load balancing. Both of these get used because DNS is hosted at Route 53, which offers Active-Active DNS Failover.
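The load balancer role boils down to a fairly small nginx configuration. Roughly something like the following, where the IPs, ports and certificate paths are all made up for illustration:

```nginx
# Sketch of an SSL-terminating load balancer (hypothetical values)
upstream feedbin_web {
  server 10.0.0.11:8080;
  server 10.0.0.12:8080;
  server 10.0.0.13:8080;
}

server {
  listen 443 ssl;
  ssl_certificate     /etc/nginx/ssl/feedbin.crt;
  ssl_certificate_key /etc/nginx/ssl/feedbin.key;

  location / {
    proxy_pass http://feedbin_web;
    # Let Rails know the original request was HTTPS
    proxy_set_header X-Forwarded-Proto https;
  }
}
```

With Route 53 health-checking both balancers, either one can serve all traffic if the other fails.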

There is also a 4GB memcached instance here.

DigitalOcean

All feed refreshing is done in another datacenter, in this case at DigitalOcean. They provide low-cost AND high-performance virtual private servers, and it’s fast to turn them on or off. The refresh job gets scheduled on a 16GB instance running Redis 2.6.14. There are ten 2GB, 2-core instances running Sidekiq Pro that pick jobs off the queue. The job is simple but needs to run as quickly and in parallel as possible. It makes an HTTP request with If-Modified-Since and If-None-Match headers to take advantage of HTTP caching. Then the feed is parsed using feedzirra. A unique id is generated for every entry in the feed, and the existence of this id is checked against the Redis database.
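The conditional-request part of the job can be sketched with the standard library. The header values here are placeholders; in practice they would be whatever `Last-Modified` and `ETag` values were stored after the last successful fetch.

```ruby
require "net/http"

# Sketch of the conditional GET described above. If the server supports
# HTTP caching it answers 304 Not Modified and the feed can be skipped
# without downloading or parsing anything.
def conditional_get_request(url, last_modified: nil, etag: nil)
  request = Net::HTTP::Get.new(URI.parse(url))
  request["If-Modified-Since"] = last_modified if last_modified
  request["If-None-Match"] = etag if etag
  request
end

request = conditional_get_request(
  "http://example.com/feed.xml",
  last_modified: "Tue, 27 Aug 2013 07:00:00 GMT",
  etag: '"abc123"'
)
```

Across hundreds of thousands of feeds, the 304 responses are what make it feasible to refresh everything this often.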

The data structure for the ids in redis was inspired by this Instagram blog post about efficiently storing millions of key value pairs.

A unique id for an entry is a SHA1 of a few different attributes of the entry. After an entry is imported, a key is added to Redis like:

HSET "entry:public_ids:e349a" "e349a4ec0bd033d81724cf9113f09b94267fe984" "1"

Using the first 5 characters of the id as the hash key creates a nice distribution of ids per hash. For 20,000,000 ids, only about 1,000,000 Redis keys are created, which means it gets to take advantage of Redis’s hash memory efficiency. I tried this with a few other data structures and found this to be the most efficient. For example, storing the ids as a Redis set took more than twice the memory.
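The scheme above can be sketched in a few lines of Ruby. A plain Hash of Hashes stands in for Redis here (each outer key maps to an `HSET` bucket), and the exact attributes that feed into the SHA1 are an assumption:

```ruby
require "digest/sha1"

# Sketch of the bucketed id scheme. A Hash of Hashes stands in for
# Redis; fake_redis[bucket][id] corresponds to HSET bucket id "1".
def entry_public_id(entry)
  # Which attributes are hashed is an assumption for illustration
  Digest::SHA1.hexdigest([entry[:url], entry[:title], entry[:published]].join)
end

def bucket_key(public_id)
  # First 5 hex characters -> ~1M buckets for 20M ids
  "entry:public_ids:#{public_id[0, 5]}"
end

fake_redis = Hash.new { |hash, key| hash[key] = {} }

entry = { url: "http://example.com/post", title: "Hello", published: "2013-06-25" }
id = entry_public_id(entry)

already_seen = fake_redis[bucket_key(id)].key?(id)  # false on first sight
fake_redis[bucket_key(id)][id] = "1"                # HSET equivalent
```

The memory win comes from Redis encoding small hashes as ziplists, so a million small hashes cost far less than twenty million top-level keys.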

If an entry is determined to be new, the full entry is converted to JSON and inserted back into Redis as a Sidekiq job, where it gets imported by a background worker running at SoftLayer.

I’d love to hear from you if you have any questions or suggestions about the architecture.

Feedbin Server Move

by Ben Ubois

Tomorrow Feedbin will be moving to a new home at SoftLayer. Feedbin will be down from 9:00AM PDT - 3:00PM PDT.

Over the past week the traffic has been going up and performance has been going down. It’s been crazy trying to keep up over the last 3.5 months and this move will bring some much needed power and flexibility.

The new setup is going to be great. I’ll have more details about how I’ve been spending your money in another post, but there will be a total of 21 servers powering the new architecture (6 bare metal, 15 cloud) with some really high performance hardware.

The new setup has been running beautifully as a staging server for the past few days. The downtime tomorrow is to do the final database transfer from the current host to the new host. Running both systems at once could cause some issues with data loss so I’m taking the safe route by turning the site off completely.

Feedbin is built on paying customers and I really appreciate your support and patience. Thanks to you, Feedbin has been profitable since its third week and will be able to stick around for a while.

Please contact me by email or on Twitter if you have any questions.

Starred Import

by Ben Ubois

You can now import your starred items from Google Reader.

First export your Google Reader data. Then visit the Feedbin Import/Export page and upload the starred.json file from your Google Reader export.

Importing is done in the background, so there will be some delay before everything shows up. File size is limited to 75MB, so if you have a larger file please get in touch.