The experience of using Feedbin in Mobile Safari has been improved too. You can now swipe horizontally to navigate between the panels.
Feedbin used to support Add to Home Screen, but an update to home screen web apps prevented navigating between pages. This meant the feature only really worked for single-page sites, because any attempt to log in would kick you out to Safari.
Articles often link to other websites and blogs. I’ll usually open these links in a new tab as I go, to read what the links contain. However, I like to do all my reading in Feedbin because it’s a pleasant and consistent reading environment.
This feature adds the ability to view the contents of a link, all without leaving Feedbin. Only the article contents are displayed, so anything loaded this way is optimized for reading.
JSON Feed is an alternative to the RSS/Atom formats. The great thing about JSON Feed is that it encodes the content as JSON instead of XML. This is good because parsing and writing XML feeds is hard.
The specification has a small surface area and is a great piece of technical writing. You should check it out. If you publish a website, consider offering a JSON Feed alongside your RSS feed.
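To give a sense of how small that surface area is, here is a minimal feed with the required fields; the titles and URLs are made up for illustration:

```json
{
  "version": "https://jsonfeed.org/version/1",
  "title": "Example Blog",
  "home_page_url": "https://example.org/",
  "feed_url": "https://example.org/feed.json",
  "items": [
    {
      "id": "https://example.org/posts/1",
      "url": "https://example.org/posts/1",
      "content_html": "<p>No XML entities to escape here.</p>",
      "date_published": "2017-06-01T10:00:00-00:00"
    }
  ]
}
```

Compare that to producing well-formed XML, where a single unescaped ampersand in `content_html` can break the whole feed.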
One of the criticisms I’ve seen of JSON Feed is that feed readers have no incentive to support it. This is not true. One of the most common support questions I get is some variation of “Why does this random feed not work?” And 95% of the time, it’s because the feed is broken in some subtle way. JSON Feed will help alleviate these problems because it’s easier to get right.
I also want JSON Feed to succeed because I remember how daunting RSS/Atom parsing was when building Feedbin. If JSON Feed had been the dominant format back then, it would have been a non-issue.
This command pushes the database to S3. The result serves as a base backup that, combined with the continuously uploaded WAL archives, can be restored to any point in time after the base backup started.
WAL-E offers a counterpart command, backup-fetch, to restore the data from a backup-push. To test the backups, I needed to build an automated way to restore the database.
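For context, the base backup comes from a scheduled backup-push. A sketch of what that cron entry might look like (the schedule and data directory path are assumptions for illustration, not Feedbin’s actual configuration):

```shell
# /etc/cron.d/wal-e-backup (sketch): take a fresh base backup nightly.
# backup-fetch, used in the restore script below, reads the same S3 prefix.
0 2 * * * postgres envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-push /var/lib/postgresql/9.2/main
```

Between base backups, WAL-E’s wal-push continuously archives WAL segments, which is what makes point-in-time restores possible.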
Feedbin already uses Digital Ocean for a few things, so my first thought was to use Digital Ocean for this. I wrote a script to provision a Digital Ocean server, restore the backup to it, and then delete the server once the restore completed.
This turned out to be too expensive. Sending data to S3 is free, but reading it back out will cost you. For Feedbin’s database this worked out to be about $40 every time I restored the database. I wanted to test backups daily but the data transfer cost would quickly add up to about $1,200/month.
While I was looking at S3 pricing, I found out that reading from S3 is free when the data is read by an EC2 instance in the same region as your S3 bucket. It was also possible to save money on the EC2 instance itself by using Spot Instances instead of on-demand. With Spot Instances you bid on your instance and AWS tells you if you can have it at that price or not.
Critically, no matter what you bid, you never pay more than the spot price, which is “The current market price of a Spot instance per hour.” With this in mind, you don’t have to guess what to bid: bid the on-demand price, and your instance will never be terminated early because the spot price exceeded your bid.
The instance I want costs $0.78/hr so I bid $0.78/hr, but only end up paying the spot price of ~$0.18/hr.
I was new to the AWS CLI, but once I figured out the right data to send, it turned out to be a fairly simple script.
This puts in a request to launch a c4.4xlarge instance with 800GB of storage. It also specifies UserData, a script the instance executes after it boots, which is a perfect fit for configuring the machine to run Postgres and restore the database backup.
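A sketch of what that request looks like with the AWS CLI; the instance type, volume size, and bid match the numbers above, while the AMI ID, key name, and security group are placeholders:

```shell
# Request a one-time spot instance; launch details live in a JSON file.
aws ec2 request-spot-instances \
  --spot-price "0.78" \
  --instance-count 1 \
  --launch-specification file://pg-restore-spec.json

# pg-restore-spec.json (abridged):
# {
#   "ImageId": "ami-xxxxxxxx",
#   "InstanceType": "c4.4xlarge",
#   "KeyName": "my-key",
#   "SecurityGroupIds": ["sg-xxxxxxxx"],
#   "UserData": "<base64-encoded copy of the script below>",
#   "BlockDeviceMappings": [
#     {"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 800}}
#   ]
# }
```

Note that when the launch specification is supplied as JSON, `UserData` must be base64-encoded.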
#!/bin/bash

# pg_restore
# Add postgresql.org official releases as an apt source
sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
apt-get install -y wget ca-certificates
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Install dependencies
apt-get -y update
apt-get install -y postgresql-9.2 postgresql-contrib-9.2 postgresql-server-dev-9.2 \
build-essential python3-dev python3-pip libevent-dev daemontools lzop pv ssl-cert
# Make the postgres user a sudoer so it can shut down the machine later
echo "postgres ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/postgres
chmod 440 /etc/sudoers.d/postgres
# Install and configure WAL-E
python3 -m pip install wal-e[aws]
mkdir -p /etc/wal-e.d/env
echo "secret-key" > /etc/wal-e.d/env/AWS_SECRET_ACCESS_KEY
echo "access-key" > /etc/wal-e.d/env/AWS_ACCESS_KEY_ID
echo 's3://bucket/' > /etc/wal-e.d/env/WALE_S3_PREFIX
chown -R root:postgres /etc/wal-e.d
# Download the latest backup
service postgresql stop
rm -Rf /var/lib/postgresql/9.2/main
envdir /etc/wal-e.d/env /usr/local/bin/wal-e backup-fetch --pool-size=16 /var/lib/postgresql/9.2/main LATEST
# set the postgres recovery settings
sudo -u postgres bash -c "cat > /var/lib/postgresql/9.2/main/recovery.conf <<- _EOF_
restore_command = 'envdir /etc/wal-e.d/env /usr/local/bin/wal-e wal-fetch --prefetch=16 \"%f\" \"%p\"'
recovery_end_command = 'echo | mail -s \"Database Restore Complete\" firstname.lastname@example.org && sudo shutdown -h now'
_EOF_"

service postgresql start
This is all that is needed to stand up a fully functioning PostgreSQL server and restore the database. No Chef, Ansible, or any other provisioning tools required.
The important part here is that Postgres lets you specify a command to run once recovery is complete: the recovery_end_command.
Here I have it send me an email and shut down the server, which terminates the EC2 instance so it’s no longer incurring cost.
If the email never arrives, I know the restore did not complete and I can go figure out what went wrong. AWS helps you out here too: the output of the UserData script is automatically logged to /var/log/cloud-init-output.log, so you can see exactly where the restore failed.
I would be interested in hearing any questions or comments about this.
Readability is shutting down and that means making a few changes to Feedbin.
Readability offered two services, both named Readability:

1. A read-it-later/Instapaper-type service. Feedbin offered an integration to let you easily add links to your Readability account. This has been removed from Feedbin.
2. A parser API. Feedbin used this service to provide the full content of partial-content feeds. This functionality will continue to exist in Feedbin, but powered by Diffbot.
For now I’ve chosen Diffbot to fill in for the functionality that Readability’s parser API provided. Diffbot’s data is great. The whole company is focused on offering a suite of products that extract useful information from webpages, and it has a simple subscription business model, so I’m optimistic that it will only improve.
I looked at a few alternatives to Diffbot, including some open-source projects and Mercury. Ultimately, Diffbot’s solid data and the presence of a business model made it the easy choice.
I’m planning on leaving the Readability icon in-place. Readability’s parser functionality is tough to convey in an icon, and I think that taking advantage of the brand recognition of Readability makes sense for now. Also, I like to think of it as a small homage.
Readability was a great product that Feedbin relied on for years. It will be missed!
It turned out that Diffbot did not offer the performance necessary for this feature. It has been updated to use Mercury Web Parser. The icon has also been updated to: