I found an interesting article in this week’s Ruby Weekly newsletter—a post from Martin Fowler about the circuit breaker concept in Ruby.
The idea is pretty simple, but pretty slick: wrap calls to external services that can fail in a ‘circuit breaker’, which detects when the call is failing (or getting particularly slow) and short-circuits further calls. In the simplest case, this helps avoid slow-downs when a non-critical remote service fails. For example, if you normally make an inline call to send a welcome email to new signups, you might fall back to just enqueuing the task if the mailserver call slows down — or perhaps just send them to a webpage with the same content.
In the best case, this can prevent cascading failures. Suppose your webpages make a blocking call to an external service, and that service goes down: requests pile up waiting on it, tying up all of your available application servers, and one service’s failure becomes a site-wide outage.
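To make the idea concrete, here’s a minimal sketch of the pattern — not Fowler’s implementation, and the class and error names are mine. After `threshold` consecutive failures the breaker “opens” and short-circuits calls for `timeout` seconds, then allows a single trial call through:

```ruby
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 3, timeout: 30)
    @threshold = threshold   # consecutive failures before opening
    @timeout   = timeout     # seconds to stay open
    @failures  = 0
    @opened_at = nil
  end

  def call
    raise OpenError, "circuit open" if open?
    result = yield
    @failures = 0            # a success closes the circuit
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  private

  def open?
    return false unless @opened_at
    if Time.now - @opened_at > @timeout
      @opened_at = nil                 # half-open: allow one trial call
      @failures  = @threshold - 1      # one more failure re-opens it
      false
    else
      true
    end
  end
end

# Hypothetical usage for the welcome-email case above:
#
#   breaker = CircuitBreaker.new(threshold: 2, timeout: 60)
#   begin
#     breaker.call { Mailer.send_welcome(user) }
#   rescue CircuitBreaker::OpenError
#     WelcomeEmailJob.enqueue(user)   # fall back to the queue
#   end
```

A real implementation would also want per-service breakers and some logging when the circuit trips, but the state machine above is the whole trick.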
In building an interactive site that, well, isn’t horribly slow, you frequently want to schedule background jobs: asynchronous tasks that should be run, but not right away. For example, if a user signs up and you want to send them a welcome email, you don’t necessarily want to do it while the page is rendering, because if the mail call takes a moment, it slows the whole page down. Spam checks, updating a search index, and so on fall into the same category.
I’m starting to tinker with a half-baked idea for a WordPress plugin, and spent a while searching for how to do this. As best as I can tell, WordPress doesn’t provide a background worker per se. But it has a related concept that may suit your needs — the wp-cron system, particularly wp_schedule_event. (But check out all the wp-cron functions.)
Unlike a background worker that’s always running to pick new tasks off the queue, wp-cron is a bit different. It appears that it can literally be invoked by cron on a *nix system, or it can be woken up by page visits if it’s “time” for a scheduled event. I haven’t delved into it enough yet to be sure, but I suspect it wouldn’t be good for things where you assume that “queuing” them will almost-instantly run them. (Though that’s not really a safe assumption to make even for always-on background workers.) But if time isn’t quite of the essence, this may work very well.
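As a sketch of how that might look in a plugin — the `my_plugin_*` hook names are made up for illustration, though the WordPress functions themselves are real:

```php
<?php
// Schedule a recurring event once, at plugin activation.
register_activation_hook( __FILE__, function () {
    if ( ! wp_next_scheduled( 'my_plugin_hourly_task' ) ) {
        wp_schedule_event( time(), 'hourly', 'my_plugin_hourly_task' );
    }
} );

// Attach the actual work to the scheduled hook.
add_action( 'my_plugin_hourly_task', function () {
    // e.g. drain a queue of pending work
} );

// Clean up on deactivation, or the event lingers.
register_deactivation_hook( __FILE__, function () {
    wp_clear_scheduled_hook( 'my_plugin_hourly_task' );
} );
```

Note that `wp_schedule_event` only registers the event; whether it fires promptly depends on wp-cron being triggered, by page traffic or by a real cron job hitting `wp-cron.php`.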
A bunch o’ interesting links I’ve happened across recently:
- Git: Twelve Curated Tips And Workflows From The Trenches presents a blend of interesting tips. Who hasn’t wanted to “Search for a string in all revisions of entire git history” before? It turns out to be really easy!
- NewRelic has a recent post, Deploying a Scalable Application with AWS Elastic Beanstalk and New Relic. It’s a good introduction to Elastic Beanstalk even if you don’t use NewRelic.
- High Scalability has some highlights in 5 Ways to Make Cloud Failure Not an Option, though it’s light on details. #5, incidentally, is exactly what Aeolus seeks to do.
- High Scalability also looks at SSDs and databases, concluding “If you’re not using SSD, you are doing it wrong.” (Well, they do go on to add, “Not quite true, but close.”)
- Remarkable Labs is doing a Rails 4 post a day until the new year, looking at what’s new and different in Rails 4.
- Ruby Inside posts a video with Pat Shaughnessy, A Simple Tour of the Ruby MRI Source Code for people who really want to know the innards of Ruby.
- redis-throttle provides easy rate-limiting to Rails apps using Redis — with graceful fallback in the event of a Redis failure.
- On the subject of rate-limited APIs, last night I began tinkering with ericboeh’s nest_thermostat gem, to control the Nest thermostats I’ve installed. A quick typo while extending it accidentally landed me in an infinite loop hammering their (unofficial, undocumented) API. It seems that Nests have saved a whole lot of energy for their owners, though.
- Thanks to my coworker Petr for introducing me to the amazing better_errors gem, which certainly lives up to its name.
- Ever wanted to automatically reload code in irb? At least with Rails, you can make it happen.
- Amazon has been tight-lipped about the earnings of AWS, lumping it into its “Other” category of revenue. But since AWS was launched in 2006, “Other” has grown from about $50 million to $600 million in annual revenue.
While you should always use the appropriately-sized images on websites (rather than serving 10-megapixel images on your pages — people actually do that!), when dealing with dynamic content, sometimes it’s nice to ensure that images don’t expand beyond their container.
That’s easily solved by setting a max-width value in the CSS, which I’d done on my other blog — but the problem was that this seemed to adjust only the width. I wanted to scale images proportionally, so that if an image that was too big made it through, it wouldn’t look awful.
It turned out to have a really easy fix — one that was astonishingly hard to find information on. Here’s the fix:
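Something along these lines — the bare `img` selector is an assumption; you might scope it to your content container instead:

```css
img {
  max-width: 100%;  /* never wider than the containing element */
  height: auto;     /* scale the height down proportionally */
}
```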
The “height: auto” is the key, making the height proportional when the width is constrained. (The “100%” in max-width is interpreted as 100% of the containing element, not 100% of the image itself.)