Dreaming on my back

August 2nd, 2011 5 comments

Over the past couple of years, I have become increasingly aware of a curious phenomenon: when I sleep on my back, I tend to have more vivid dreams than when I sleep on my side or chest.  I don’t mean vivid as in “nightmare” — it’s been a long time since I’ve had a nightmare — I just mean that they seem more colorful, have better plots, have better sound, and generally are more intense.

Ordinarily, I find my sides and my chest to be more comfortable than my back for sleeping, but every once in a while some good back sleep fits the bill.  Sometimes I’ll drift off to sleep while on my back.  Other times I’ll go to sleep not on my back but wake up mid-dream and find myself on my back.

There are a variety of potential issues with my observed back-sleep–vivid-dreams correlation:

  1. There could be a strong reporting bias — if I sleep on my back and don’t have a vivid dream, the event passes unnoticed.
  2. The occurrence of vivid dreams on the back could be no higher than in any other position, but they might seem more notable because I tend not to sleep on my back.
  3. Dreams are highly personal experiences, so external observation and objective measurement are impossible.
  4. Correlation is not causation — perhaps having vivid dreams causes me to sleep on my back, not the other way around.
  5. The back dreams might not actually be any more vivid than dreams in other positions, as quantifying dream vividness is fraught with challenges.
  6. There might be something about sleeping on my back that simply makes me more likely to remember my dreams.

The final item in the list could be crucial: what if something about me sleeping on my back simply makes me more likely to remember a dream?  That implies that something is causing me to wake up during the dream.  As it turns out, I am aware of just such a thing: snoring.

I snore.  I’ve never heard myself snore, but I know from the accounts of others (and midnight jabs in the side) that I snore.  (If I snore like my grandpa snored, then — well, I apologize.  I hope it isn’t that bad.)  Anyway, I’ve noticed that I’m more likely to snore when I sleep on my back.  Assuming that I really am having more vivid dreams while sleeping on my back, perhaps they are related to my snoring.

Maybe the snoring noise is seeding the dreams with information on which to operate. Perhaps my snoring is waking me up, causing me to remember the dreams.  Perhaps my snoring wakes other people up, which causes them to jab me and wake me up, which would also lead me to remember the dreams.  Or maybe something about the snoring is affecting oxygen levels in my brain and thus its behavior.

I post this not because I have answers but because I have questions.  A search of the literature produced no promising leads, so the next step is to find out: am I alone?  Have others experienced predictable dreaming changes based on their bodies’ positions during sleep?

Sticky: Trip updates

June 21st, 2011 Comments off

As a reminder, all updates related to my road trip to every US state and Canadian province will be exclusively on stoppingineverystate.com.  I have syndicated several posts from there to here, but I will not be doing so from now on.  Just trying to reduce clutter.

Hope to see you over there!

Update December 20, 2012: The trip was completed successfully! Now resuming regular blog service…

Objective experience

June 2nd, 2011 2 comments

A while back, there was a paper by Ericsson et al. claiming that about 10 years of focused practice and experience were necessary to become an expert in something.  Gladwell converted that figure to 10,000 hours, and some guy is testing the theory by investing that much time in learning golf.  (Should be interesting to see if he gets more than fodder for a book out of the experience.)  I’ve been thinking about how much time I’ve put into my own pursuits, and the numbers turned out to be surprisingly small.

Take hockey goaltending.  I started playing goalie when I was 22 after graduating from Rose.  I started from zero experience.  I knew how to skate (though not well), and I had played some inline hockey, but I had never played organized ice hockey, and I certainly had never played goalie.  That was seven years ago.  I played for three seasons in Minnesota, didn’t play much for the two seasons I was in California, played off-and-on the first year I was back in Minnesota, and played a lot this past season.

I tallied up the various organized games, pick-up games, practices, coaching sessions, and so on, and I figured out that I have spent just 295 hours playing goalie.  In my entire life.

Hours of hockey per year

Hours spent playing ice hockey as a goalie by calendar year. The road trip will add about 90 hours to 2011’s total.

That doesn’t seem like much.  Consider that two months of full-time work at your office job will bring you to about 300 hours: would you consider yourself an expert in your profession after just two months?  Or, looking at it a bit differently, what level of expertise does a summer intern have several weeks before the completion of his internship?

One might reasonably ask if that time, on rink or in office, was spent in an active, conscious attempt to improve.  When I was an intern, I wasn’t even sure what I should be improving, let alone actually improving it.  With goaltending, I’ve been taking lessons over the past year, and I think that the deliberate practice and professional feedback have significantly improved my game.  Unfortunately, that dedicated practice time has amounted to only a small fraction of my already limited experience on the ice.

Getting some pointers from my goalie coach this spring

Another example is backpacking.  I think of myself as a capable backpacker.  I’m comfortable in the woods, and I’ve spent lots of nights on solo trips in the wilderness.  But am I an expert?

The numbers would suggest not. I’ve been doing regular backpacking trips only since 2007 (plus one trip to Philmont as a Scout in 1998).  Excluding day-hiking, I figure I’ve covered about 280 miles while backpacking, over about 25 nights.  Of those, about half were solo adventures, the most hazardous being my off-trail Badlands loop.

So, about a month, give or take, and well under 1000 miles.  I feel more qualified than the numbers show, but I don’t think I’m anywhere near mastery.  Maybe my confidence is due to the fact that I’m prepared when I go out; the 10 essentials have a permanent home in my pack.  Perhaps it’s the amount of reading I’ve done on the subject, which has precipitated a significant evolution in my backpacking technique over the brief period I’ve been active.  Still, an expert I am not.

Fortunately, I don’t rely on hockey or backpacking for my livelihood.  My efforts to improve, particularly with hockey, are motivated simply by my desire to enjoy the activities more.

The thousands of hours spent studying, playing with, and working on computer technology, particularly software, are what have made me, if not a world-renowned expert, at least somebody competent in the field.

SiES: Pre-trip Q&A

May 24th, 2011 Comments off

New on my road trip blog, “Stopping in Every State”:

As the trip has grown closer, people have been talking to me more about it.  I’ve begun to notice some common questions emerge, so I thought I’d try addressing them here.

Q: How are you going to find hockey games?

A: Finding hockey games is

Continue reading on SiES →

Concurrency and diminishing returns

May 1st, 2011 Comments off

For a while now, I’ve been working on making Blurity faster.  It’s come a long way since the first public release.  What once took 6 minutes now takes about 10 seconds.  That increase in speed has come from a variety of factors, including a rewrite of all the image processing code in multithreaded C++ and several ground-up algorithm redesigns.  Unfortunately, the newest version of the blur modeling code suffered from poor concurrency.  I could throw about three worker threads at it, but beyond that there were sharply diminishing returns.  As in no returns.  I’d throw more threads at the problem, and I wouldn’t see any improvement in the completion time.

I had implemented a basic producer-consumer pattern in the blur modeling code.  I wanted to be able to support server instances with a variable number of cores, and since I knew it was bad to have more running threads than cores, it seemed like a reasonable choice to have a variable number of consumer threads servicing the work queue.  That work queue had a constant number of items in it when filled: five virtually identical items and one item that would require slightly less processing.  The boss would be called and fill the queue with six items, then the workers would consume the items, and the boss would wait for the queue to be empty and all of the workers to be done before returning.

When I first noticed that having more than three worker threads seemed to bring no increase in performance, I came up with some crazy hypotheses about the cause.  For a while I was even thinking that calls to malloc() and its equivalents were somehow causing the worker threads to block one another, but that, of course, was not happening.  Then I got out Valgrind, a powerful profiling tool, and gathered data about where the app was spending its time.

In spite of overwhelming evidence that the program really was dominated by arithmetic related to FFTs, I somehow convinced myself that I was dealing with a cache problem.  I wasted several days trying to eliminate the handful of spots in the parallel code that were experiencing ~1% L2 cache misses (that was as bad as it got, fortunately).  At one point, I was even thinking that the scheduler was doing a bad job of keeping threads on the same core, thus supposedly causing the cache to be unnecessarily invalidated, so I started looking into sched_setaffinity() and other such nonsense.

That mucking around didn’t help matters from a cache-hit perspective (and in many cases made things worse — thank you, version control!), but it did lead me to discover I had been using FFTW in an inefficient way, so the time wasn’t totally wasted. In fact, fixing my FFTW gaffe led to a 40% improvement in the blur modeling algorithm execution time — but there was still no improvement when using more than three worker threads.

The actual cause of the problem struck me today when I was, of all things, calculating the incremental price per bagel for a potential multi-bagel purchase at Panera.  I realized that it was simply a matter of factorization, and I immediately felt stupid.

Remember that there are always six work items that start in the queue.  If the number of consumer threads is a factor of the number of work items, and the work items all take roughly the same amount of time to process, then the queue will be emptied with little to no wasted time.

Going from 2 to 3 worker threads decreases the total time (t_done)

However, if the number of consumer threads is not a factor of the number of work items, the queue will not be emptied efficiently, and time will be wasted waiting for one or more of the threads to finish after the others are done.

Going from 3 to 4 worker threads does not change the completion time

All of this is blindingly obvious in hindsight.  Since 1, 2, and 3 are all factors of 6, it comes as no surprise that adding threads to have 2 or 3 total workers leads to speed improvements.  Likewise, since 4 and 5 are not factors of 6, it should have been no surprise that having those numbers of worker threads led to no speed improvements.

That meant that the next major speed boost should happen at the next higher factor of 6: namely, 6 worker threads.  Indeed, that’s what we see in the execution times.

Execution times as function of number of worker threads (lower is better)

Three things to note on that chart:

First, there are diminishing returns.  Going from 1 to 2 worker threads leads to a 31% improvement, but going from 2 to 3 worker threads gives only an additional 14% improvement.  Going from 3 to 6 worker threads improves times only 10% more.  In addition, this suggests that other parts of the code — outside of the threaded producer-consumer block — are potential targets for optimization.  That is because the gains from each thread are not as high as we’d expect if the work items dominated the overall execution time.

Second, there is a very slight improvement in execution time with 5 threads, about 3%.  This is because of the previously noted “special” work item, which takes slightly less time to process than all of the other work items.  In the 5-thread case, the thread that gets two work items will always have the special item and a normal item, whereas in the 4-thread case one of the threads would have two normal items, thus the time difference.

Third, the execution time actually gets worse as the number of threads increases beyond 6.  There are never more than 6 work items in the queue, so those extra worker threads are just added overhead.

(All of these measurements were on the same idle 8-core machine.)

In conclusion, I should have recognized this now-obvious situation earlier.  I guess I simply overlooked the slight dip with six threads.  Now that I do see the high-level cause, I can tell that throwing more hardware at the problem is not a good use of money; a 10% speed improvement for double the cores is a bad return on investment.  The other parts of the code, or the algorithm itself, will need to be improved in order to see significant gains.

Oh, and the rule still applies.