April 2010


Over the years there has been a lot of discussion about how to improve Bugzilla’s powerful-but-intimidating interface. Again and again, these explorations seem to run aground on the jagged rocks of “Well, if we change anything, we’ll break it for all the existing users who depend on it for their daily work.”

Atul Varma has a promising approach: Leave Bugzilla itself right where it is, but use its APIs to build a new, simpler, high-level interface that abstracts away a lot of the sordid details.
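To give a flavor of what “abstracting away the sordid details” could look like, here is a rough sketch (not Atul’s actual code) that pulls a handful of bugs from Bugzilla’s JSON API and boils each one down to the few fields a simple dashboard would show. The URL, query parameters, and field names are assumptions based on bugzilla.mozilla.org’s public REST interface, used purely for illustration:

```python
import json
import urllib.request

# Illustrative query: ten new Firefox bugs. The endpoint and parameters are
# assumptions based on Bugzilla's public REST API, not the dashboard's code.
URL = ("https://bugzilla.mozilla.org/rest/bug"
       "?product=Firefox&status=NEW&limit=10")

with urllib.request.urlopen(URL) as response:
    data = json.load(response)

# The "abstraction" step: reduce each raw bug record to the handful of
# fields a simple dashboard card actually needs.
for bug in data["bugs"]:
    print(f"#{bug['id']} [{bug['status']}] {bug['summary']}")
```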

And that’s the topic of this week’s Design Lunch. Atul will be presenting his “Bugzilla Dashboard” and looking for feedback on the design.

The Design Lunch will be, as usual, at 12:30 PM Pacific time this Thursday, at Mozilla HQ in Mountain View. It will be recorded and broadcast on Air Mozilla; instructions for watching or calling in are here.

I wrote a whole rant about how anti-competitive Apple’s App Store model is, and I was all set to post it here. But when I was fact-checking myself, I found out that Apple had actually approved Opera Mini for iPhone on April 12. So Apple is in fact willing to approve apps that compete with their built-in functionality.

That weakens the argument I was going to make. Hmmm. I’ll have to think about how much this changes my position. Just because Apple decided to allow competition this time doesn’t change the fact that they have the power to block any competition they don’t want. I suppose it comes down to a question of whether you think Apple is an “enlightened despot” or just a despot.

Congrats to Opera, though!

While we were working on Test Pilot studies, Patrick Dubroy was doing his own research on Firefox tab usage patterns. He presented his findings in a paper at CHI 2010 last week. Now he’s put up an excellent blog post summarizing what he found out. Go read it right now!

A couple of months ago, the Test Pilot team sat down with six volunteer users, one at a time, and asked them to go through the steps of installing Test Pilot and submitting test results.

(Yes, that’s right: we were testing Test Pilot — feel free to make infinite recursion jokes.)

What we found was extremely valuable. The same problems happened again and again. Six users may not seem like enough to give you useful information, but believe me, after you’ve seen the fourth or fifth user in a row trip over the exact same usability problem, you’ll have a pretty good idea of how high a priority it is. (Statistics rule of thumb: if you have a problem that affects 1/3 of users, then you only need to interview 5 users to have an 85% chance of seeing it).
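For the curious, the arithmetic behind that rule of thumb is easy to check. A minimal sketch, with the one-in-three figure taken as the assumption from the sentence above:

```python
# Probability of observing a usability problem at least once in n interviews,
# assuming each user independently hits it with probability p.
p = 1 / 3   # problem affects roughly one in three users (the assumption above)
n = 5       # number of users interviewed

p_seen = 1 - (1 - p) ** n
print(f"{p_seen:.0%}")   # prints 87%, comfortably above the ~85% quoted above
```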


Alexander Limi recently started a Reddit thread to ask the Reddit hive mind about their pet peeves with Firefox:

What I’m after is more the “one hundred paper cuts”, the stuff that annoys you on a daily basis, and that I could help with getting prioritized as a User Experience / User Interface person.

Over 2,300 comments later, Limi has his answers, and those are what he will be sharing with us at this week’s Design Lunch. It will be Thursday at 12:30 PM Pacific time, and will be recorded and broadcast on Air Mozilla. Call-in info is here.

The next Labs Night will be April 29, which is next Thursday, here at the Mountain View Mozilla HQ. More details as we get closer to the event.

Computer scientists have been working on artificial intelligence for over 50 years now. The results have been disappointing for anyone who was looking forward to a world of intelligent robots. AI research has produced some useful algorithms, which, combined with Moore’s Law, have allowed us to brute-force certain narrow problems (like winning at chess). Not to belittle the cleverness and hard work of AI researchers, but everything we’ve seen is just cleverer ways of solving problems by rote calculation. We have yet to see a computer program do anything remotely like what we’d think of as “true” Artificial Intelligence: independent reasoning, original thought, self-awareness, comprehending human language, etc.

So the thought I had is that maybe strong AI is just not something that Turing machines are capable of. No matter how fast or powerful our computers are, they’re still Turing machines, and Turing machines have been mathematically proven to be incapable of solving certain types of problems, like the halting problem (given a program and its input, correctly determine whether the program eventually halts or loops forever). This is just a hunch, not something I could back up with any solid evidence, but it seems to me that the halting problem is much easier than strong AI. Intuitively, a computer that has what we think of as true intelligence ought to be smart enough, and capable of enough introspection, to avoid getting caught in the self-referential loops that make the halting problem uncomputable. Therefore, a true intelligence could not be a Turing machine.
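To make that notion of a self-referential loop concrete, here is a minimal sketch of the classic diagonalization argument behind the halting problem. The halts function is purely hypothetical; the whole point of the argument is that no such decider can exist:

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: True iff the given program halts on the given input.
    The construction below shows no correct, always-terminating version can exist."""
    raise NotImplementedError

def troublemaker(program_source: str) -> None:
    # Feed the program its own source code, then do the opposite of whatever
    # the decider predicts.
    if halts(program_source, program_source):
        while True:   # decider said "halts", so loop forever
            pass
    else:
        return        # decider said "loops forever", so halt immediately

# Now ask whether troublemaker halts when run on its own source code.
# Whatever answer halts() gives, troublemaker does the opposite, so no decider
# can be both total and correct; hence the problem is uncomputable.
```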

If this is true, then trying to build strong AI on a Turing machine is like trying to build a web browser out of gears and springs: with extreme cleverness you might solve some very specialized sub-problems, but it’s the wrong tool for solving the main problem. Not just the wrong tool, but the wrong type of tool.

Strong AI might still be possible, but if my hunch is true, then it would have to wait for some future type of hardware which is not simply a faster Turing machine, but a completely different computing paradigm.