I recently needed to run some geographical analysis and PostGIS seemed like a perfect fit for this. So off I went to install it, here’s how.
Here’s an idea: you enter a Twitter hashtag (or any search, I guess), and have the resulting tweets replayed in real time as they would have appeared had you been following the hashtag when the event occurred. That way you get the full Twitter-augmented experience when watching anything after the fact.
A feature to speed up and slow down is probably a good idea as syncing is going to be a challenge, but overall it doesn’t have to be synced with millisecond precision.
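The replay-with-a-speed-factor idea can be sketched in a few lines of Ruby (everything here is hypothetical — a real version would fetch tweets from the search API and be far more careful about clock drift): wait out the original gap between consecutive tweets, divided by the speed factor.

```ruby
# Hypothetical sketch of timeshifted replay: sleep through the
# original gaps between tweets, scaled by a speed factor.
require "time"

def replay(tweets, speed: 1.0)
  previous = nil
  tweets.sort_by { |t| t[:at] }.each do |tweet|
    sleep((tweet[:at] - previous[:at]) / speed) if previous
    yield tweet
    previous = tweet
  end
end

tweets = [
  { at: Time.parse("2012-06-01 20:00:00"), text: "Kickoff!" },
  { at: Time.parse("2012-06-01 20:00:02"), text: "GOAL!" },
]

# At speed: 2.0 the two-second gap becomes one second.
replay(tweets, speed: 2.0) { |tweet| puts tweet[:text] }
```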
It would also be interesting if you could add your own tweets to the timeshifted stream after the fact, but I am not entirely sure how that’d work. Perhaps adding a machine tag indicating a timeshift factor or timestamp could handle it. It would be pretty confusing for your followers to see your out-of-context tweets appear, though.
It is also easy to have different stubs for different arguments:
colors.stub(:favorite).with("Person A").and_return("Mauve")
colors.stub(:favorite).with("Person B").and_return("Olive")
However, the other day I needed to stub out the same method many times for a fair number of different arguments, and repeating the
foo.stub(...).with(...).and_return() part gets tedious really fast.
Thankfully, RSpec stubs have a way around this.
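One way around the repetition (a sketch of the idea, not necessarily the post’s exact trick) is to drive a single stub from a hash of argument-to-return-value pairs, using RSpec’s block form. To keep the snippet runnable without the rspec gem, `define_singleton_method` stands in for the stub here:

```ruby
# Sketch only: with RSpec the block form would be something like
#   colors.stub(:favorite) { |person| FAVORITES.fetch(person) }
# define_singleton_method plays the stub's role so this runs standalone.
FAVORITES = {
  "Person A" => "Mauve",
  "Person B" => "Olive",
  "Person C" => "Teal",
}

colors = Object.new
colors.define_singleton_method(:favorite) do |person|
  FAVORITES.fetch(person)
end

puts colors.favorite("Person B") # => Olive
```

Adding another argument/return pair is now a one-line change to the hash instead of another full stub declaration.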
While the Danish Ruby on Rails community isn’t as big as it should be, we’re far from an inactive bunch.
I know you’ve all been waiting for this, and this is in absolutely no shape or form an attempt at bolstering my Google juice, no sir, so here it is, the definitive list of the most popular content on mentalized.net in 2012.
One of my client projects has grown a fairly large test suite over the years, and it has become seriously unwieldy. Our original CI server took more than 30 painstakingly slow minutes to complete a test run.
While we are currently writing most of our new tests in the fast test style, rewriting 20,000 lines of slow tests isn’t something that’ll be done in the near future.
So we needed to do something differently that would work here and now.
I seem to often find myself munging datasets from various sources in CSV format. While grep, awk, and sort go a long way, command line tools for processing CSV data are surprisingly lackluster.
Sure, I could import the data into a database and run SQL queries on it, but that gets tedious after a few runs - unless, of course, you automate it. Which I did, and I have decided to share the tool as CSV Query.
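I don’t know CSV Query’s actual interface, but the kind of query it automates can be sketched with Ruby’s standard csv library (all data below is made up): parse with headers, filter like a WHERE clause, project like a SELECT.

```ruby
# Sketch of the idea behind a CSV query tool, using only Ruby's
# standard library. CSV Query's real interface may differ.
require "csv"

data = <<~CSV
  name,city,visits
  Alice,Copenhagen,42
  Bob,Aarhus,7
  Carol,Copenhagen,19
CSV

rows = CSV.parse(data, headers: true)

# Roughly: SELECT name FROM data WHERE city = 'Copenhagen'
names = rows.select { |row| row["city"] == "Copenhagen" }
            .map { |row| row["name"] }

puts names.inspect # => ["Alice", "Carol"]
```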
Let’s assume I write software for a living.
Let’s assume I built something and the total cost of development is a nice round number like $10,000.
Let’s assume I expect to sell a certain number of copies, say 1,000.
Let’s assume I like to turn a profit (I do) and not give away my work (I don’t). How much should I charge?
$1? Don’t be ridiculous.
$10? Assuming every copy is sold, that’d put me at break-even. But at $0 net I might as well not have built anything. And if I don’t sell 1,000 copies, I’d have lost money.
$20? Now we’re talking. I’d be looking at recouping my costs and making a decent profit - I might even be able to pay for food and housing for a few months.
$100? If 200 customers find my something worth that much, why not?
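The arithmetic behind those four options, spelled out (the 200 customers belong to the $100 scenario above; the other prices assume all 1,000 copies sell):

```ruby
# Net profit at each price point: price * copies sold - development cost.
cost = 10_000
scenarios = { 1 => 1_000, 10 => 1_000, 20 => 1_000, 100 => 200 }

scenarios.each do |price, copies|
  profit = price * copies - cost
  puts format("$%d x %d copies => net $%d", price, copies, profit)
end
# $1 x 1000 copies => net $-9000
# $10 x 1000 copies => net $0
# $20 x 1000 copies => net $10000
# $100 x 200 copies => net $10000
```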
Now, let’s assume that, for whatever reason, the number of copies sold cannot possibly exceed 1,000.
How much would you charge?