Appreciation for the builders of the web

Valerie's Happy Willy Wednesday!

I just spent a couple of days playing around with ideas, words to describe those ideas, and code to make those ideas come to life. It had been quite a few months since I’d spent the time to sharpen my tools, and while the challenges to the web are greater than ever, it’s also a more impressive toolkit than ever.

I’d like to call out a few heroes whose generous work as open source and easily accessible services have made the last couple of days particularly fun:

  • The folks at Heroku
  • The amazing perseverance of @automattic and the WordPress team, whose recent moves to both redo the front-end and open-source the .com site are one of a long string of bold and ambitious moves.
  • The energy and diligence of the Ghost team
  • The JS module hackers who make GitHub and npm useful as opposed to just necessary
  • Brock and the rest of the team for a product that “just works”
  • The Firebase team, for not shutting down one of the tools I think has managed to bridge the ergonomics of a Heroku with the developer power that AWS-style services provide
  • AWS, for paving the way (and because even S3 is still remarkable)
  • The React team at Facebook & beyond, who are boldly moving the client side forward
  • The Material Design team at Google, for taking design seriously and then implementing it in an accessible way.
  • The LetsEncrypt team, one of the projects I’m proud to say Mozilla had a big hand in.

Thank you all.

Crowdsourcing thoughts

On Wednesday, I’m attending Remixology 2, an event put together by Fresh Media, on the topic of crowdsourcing.  In particular, I’ll somehow be the representative of the entire open web perspective on crowdsourcing (!), Alfred Hermida will be talking about the journalist’s perspective, and Leigh Christie will be there representing artists.  I’m hoping that the audience doesn’t expect any one of us to speak authoritatively on any topic, and that we can instead have a conversation.  Since talking to Hanna Cho about the event, I’ve had a couple of thoughts on the topic that I’m hoping I’ll be able to fit into that conversation.

Crowdsourcing, like most buzzwords, is loaded with too many meanings, and I rarely use it.  I’m more interested in figuring out how to leverage the internet to enable collaboration on a grand scale.  Everyone has experience with 1-1 collaboration, whether through email, shared writing spaces, voice calls, etc.  The internet has provided the technologies to make such collaboration radically cheaper and faster than before, and the biggest challenges it has brought have been widely discussed: we’re always connected, for better or worse; we’re always interrupted; the world is smaller; nobody knows you’re a dog.  All of which is old hat to anyone who’s spent any time online in the last couple of decades.

The advent of mass instant collaboration and mass participation is made possible by the same technologies, but I think we’re still in the earliest stages of figuring out both how to do it well, and what the societal impact will be.  I’m hoping we can talk about that a bit.

It’s easier than ever to spread a meme, and to recruit a population the size of a small army who are all interested or even passionate about your meme.  With ubiquitous communication systems (phones, laptops, cheap broadband, internet cafes, etc.), social “viral” media (twitter, facebook, chain letters, etc.), and rich media production models (video on phones and youtube), it seems that viral messages spread like wildfire (of course there’s a massive selection bias: deliberately starting a wildfire is incredibly hard in practice).  Let’s grant that getting the word out is easy.  Depending on the topic, one can get the attention of a cohort of like-minded folks fairly easily (that’s 500 soldiers, if the Roman army is a guide).  If any one of them has an hour or two to contribute, pretty soon we’re talking a person-year or more of effort, which can be a potent resource if focused!
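That back-of-envelope estimate is easy to sanity-check; the figures below are purely illustrative assumptions:

```python
# Rough sanity check of the "small army" estimate.
# All figures are illustrative assumptions, not data.
cohort = 500                    # a Roman-century-scale group of supporters
hours_each = 2                  # "an hour or two" contributed per person
total_hours = cohort * hours_each

# A full-time working year is commonly approximated as ~2000 hours.
person_years = total_hours / 2000
print(f"{total_hours} volunteer hours ≈ {person_years} person-years")
```

With a somewhat larger or more committed cohort, the total quickly crosses a full person-year of effort.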

The cost of building and running web sites has also plummeted, and the number of people capable of doing so is skyrocketing, which makes it easy (in theory) for anyone to create a place for these people to gather, discuss, coordinate, work, agitate, whatever.  Some will build dedicated websites, others will use shared tools like Facebook groups, mailing lists, etc.  In most countries, such gatherings are undetected, let alone regulated.  We now have mechanisms for coordination of group action.  The potential is seemingly unbounded.

Many online activities are virtually free.  Interestingly, even when there are real (or forecasted) costs to a project, the last few years have seen the maturation of many interesting micropayment systems.  The trendiest is Kickstarter, which has somehow gathered the mindshare in the “let’s get together and fund X” world; its most famous success is Diaspora, which raised $200k, 20 times what they asked for, just because they said they’d take on Facebook.  So even in the treacherous arena of cash, there are now funding models which seem to work (at least for small-scale efforts).  Thus, to the sheer hours of invested time, we can now add a few thousand dollars.

So now we have a few hundred people, excited about some idea.  There’s a website, and even a modest bank balance. To use the techy jargon, we’ve got scalable models for meme propagation, recruitment, coordination & communication, advocacy, marketing & PR, and fundraising.  Awesome.

Now it’s time to actually do stuff.  In particular, it’s time to plan, schedule, prioritize, make decisions, commit some code, commit to something.  In my experience, that’s the part that we still don’t know how to scale.  Everyone in the army of volunteers has ideas about what should be done (but only a small percentage will actually have relevant skills or experience).  Everyone will have opinions about what words should be used, but only a small number will actually really listen to others’ opinions.  If we’re not careful, we now have a large group of people who think they share a goal, but who are not organized.  And that can be really hard to deal with, especially given that we’ve made it really easy for them to shout at each other.

Which leads to my main point, which is that the next challenge for mass collaboration and coordination over the internet isn’t going to be technological, but human.  Specifically, what will differentiate important projects from the rest are the people who can help groups of people achieve common goals.  That’s not a new task, but the cybernetic setting will require adapting old skills and creating new cultural norms.  Three skills at least are needed to facilitate that kind of coordination:

The first is some form of leadership.  Quite often, the initiator of the meme didn’t really intend to start a micro-movement.  She just tweeted something, or uploaded a ranty video, or wrote a scathing blog post.  And all of a sudden she is the center of attention from a bunch of strangers who “agree” and want to “do something about it”.  In that kind of situation, converting emotional energy into effective action will (I claim) depend on the emergence of a leader of some kind.  Which doesn’t mean a spokesperson, or a dictator (benevolent or not).  It just means someone who, using whatever means are appropriate for that group, can get the group focused, moving in a roughly consistent direction towards some vague approximation of a common goal.  Different groups of people will respond to different types of leadership, but I’m pretty sure all large groups need at least one individual they can anchor to.

The second is organizing.  The style of organization needed will vary wildly depending on the group, from simply taking notes to gardening a wiki to tweeting a lot, nagging, proofreading, testing.  But there is a yin to the leadership yang, and the people who are good at getting people excited are rarely the same people who can remind them to uphold their commitments.

The third is what my friend David Eaves refers to as negotiation, or the process of seeking common interests among a set of potential collaborators, and building commitments and mutual trust along the way.  This skill is rarely explicitly discussed in many organizations, because most organizations have built-in power structures which have well understood tie-breakers (“the senior person decides”, “the client decides”) as well as clear consequences to disagreement (“you’re fired/demoted/etc.”, “this contract isn’t renewed”, “you’re not invited next time”, etc.).  Neither of these are as clear in a setting where peering and fraternity are assumed over hierarchy and management.  If I show up at your virtual event expecting to be treated like a peer, but it so happens that I misunderstood what your goal was, the odds are pretty good that one of us will frustrate or disappoint the other.  If we both care about our own visions, the odds of a flame war are high.  To avoid that, we need to clarify the goals up front and review them often.  We need to really explore everyone’s interests and both detect overlap and explore differences.  And we need to keep in mind everyone’s BATNA.  It’s work, but it’s the only way to actually draw from everyone’s strengths.  I think the open source / open web world is still a beginner in this arena, but I’m glad that we’re working on those muscles.

Of course, the technologist and UX thinker in me is keen to figure out whether we can design systems that help with these all-too-human (and all-too-fragile) tasks, build digital prostheses of a sort.  You can see baby steps emerging among the more “social” web apps of the day: the indicators of mood on support forums, for example, let people emote quietly, and provide non-verbal cues to emotional state, which are all too often lost in textual communications.  Building interfaces that surface the people behind the comments leads, I think, to more humane conversations (one of Facebook’s brilliant early moves was to encourage/require “real names, real photos”).  There are also simple tricks: at Mozilla, we’ve found that if one detects conflict, it’s usually a good idea to try and resolve it using private voice calls rather than prolonged, public, painful email discussions.

I’m sure that by Wednesday I’ll have other thoughts in my head which will push these out of the way, but I’m curious to see whether these thoughts resonate with people in other disciplines, or whether different cultures lead to radically different world views.

Outlook PST importer anyone?

This week, Microsoft published an open source (Apache 2) SDK to read PST files. From what I heard, it works with Unicode PST files as generated by Outlook 2003 or later.

It’s a healthy move on Microsoft’s part, as it releases their users from feeling like their data is locked in to their relationship with Outlook. I hope the code is easy to use, etc.

I’d naturally be very interested to hear of anyone experimenting with using this code in an add-on to streamline the process of importing all one’s data from Outlook into Thunderbird. If you know of such an effort, let me know!

Tim O’Reilly on the future web wars

I’ve tended to limit my link referrals to my Twitter feed over the last year, but I wanted to advertise Tim O’Reilly’s latest post on this channel as well (it also feels great to have more than 100 characters to express myself!).  Tim explains well what the new battlegrounds for the future of the web are.  It’s a war that’s currently being fought with shiny discounted hardware, free access to proprietary data, and competing “privileged” interfaces to the web.  The stakes are huge, but oh-so-hard for people to grasp, as much of the mechanics of who wins what depend on economics which are far removed from the battleground:

  • People don’t pay transparently for mobile services or devices
  • People don’t pay for online news (although some surveys indicate many would)
  • People often end up “subscribing” to brands (Apple, Google, Facebook) and becoming brand consumers rather than active participants in their own digital life.  That delegation of trust is often pragmatic, but it’s worrisome if unchecked by alternatives.
  • The heterogeneity of the original internet can lead to an appearance of chaos, and many people prefer simpler, more uniform experiences.  Both technical and psychological factors encourage centralization of services with single providers.  Financially as well, “small, independent startups” have huge incentives to become part of one of the big centers of mass.

Finally, the huge psychological distance between the value of free services and the costs that fund them is one of the big topics that puzzle me.  It applies to “how come I can get free map directions from Google but I have to pay to get them from TomTom?” as well as “how can I convince my neighbors that electing so-and-so to office will mean more tax revenue overall, which in turn will mean better schools?”.  In both cases, the number of steps between cost and service is huge, and coupling them tighter would destroy the huge advantages that centralization and scale offer.  (If I knew more about the derivatives crash I could make some pithy reference here).

I agree with Tim that “If you don’t want a repeat of the PC era, place your bets now on open systems. Don’t wait till it’s too late.”  I think he’d also agree that we need to think beyond code and copyright.  That’s like going to war with trucks but no tanks.  For the open, distributed, heterogeneous web to thrive, we need to incorporate thinking from a host of other fields, such as contract law, design, psychology, consumer behavior, brand marketing, and more.  Figuring out how to engage thinkers and leaders in those fields is likely one of the critical, still missing steps.

Design tools for the open web: reflections on the fixoutlook campaign

The twittersphere is abuzz with the current twitterstorm about Microsoft’s plan to use the “Word HTML engine” in the next version of Outlook.  The campaign is run by an organization which represents people whose living depends on their ability to make compelling HTML pages in email, so it’s not surprising that they have a beautiful site which is getting a lot of people to retweet.

There are lots of campaigns that sweep the social networks on a regular basis, and this one is somewhat noteworthy because it’s about plans for a very commonly used piece of software, coordinated by marketers, and because the twittersphere is very receptive to anti-Microsoft sentiments.  None of that is what I want to talk about.

What I want to dig into a bit is how Microsoft got there, and the implications for the Open Web.  I’m not an expert on Microsoft’s history, or Outlook.  But I can make a few guesses, based on how I’ve seen similar things evolve.

Outlook became the dominant enterprise email client during a phase of Microsoft’s life where embracing the web sometimes meant making stuff up and pretending it was a standard, or equivalent shenanigans.  This was clear in Internet Explorer’s explorations outside of the normative specs, but it seems that some of the same “we can just do our own version of HTML” affected the Word team.  This makes sense — if you’re a company with market dominance and the web is not central to your value proposition, but office productivity software is, then you’re going to do what you can to make the best user experience possible for your users, even if it means that messages sent to non-customers can’t be read with as much fidelity as those sent to customers. In fact, in a very basic way, that’s standard marketing — make using your product look better, so people want to use it.

Microsoft, again logically, invested lots and lots of millions of dollars into making design tools for Word, and HTML was thought of as an export format, where low fidelity was almost a commercial virtue (“you don’t really want that”).  The poor folks in charge of Outlook, who are mail experts, not HTML rendering wizards, had to deal with the use case of “I want to send rich documents by email”, which blended office concepts (rich documents) and network concepts (email).  They had to choose between a moribund IE6 engine and the maintained, evolving HTML engine designed for use in Word.  Given that most emails read in Outlook are probably written in Outlook, and that Outlook users know the Word authoring tools, it was a rational choice.  It made life hard for email marketers, and for a few people who like to use HTML to express their creative side and who do care that all their correspondents can see what they intended to send.  But compromises are inevitable in a gigantic, complicated company like Microsoft.  Had I been the manager in charge, given their constraints, I might well have made the same choice.

Now, it’s 2010 (or almost).  Outlook is due for a new revision (gotta get the upgrade revenue).  The choice is stark: adopting a more standards-compliant engine like IE8’s makes sense in the framing of “HTML email messages going out on the net”, but deploying it in the reality of Outlook (mostly internal emails, lots of document ping-pong, etc.) would require that Microsoft have a stack of design tools to offer that could realistically replace their existing stacks.  There’s the rub — good HTML engines aren’t useful in a user context like Outlook’s if the authoring tools weren’t built with real HTML/CSS in mind.  And neither Word’s venerable composition tools nor Silverlight’s new-fangled ones were.  So the Outlook team is stuck with a product that needs an upgrade, and a need for both composition tools and a rendering engine, neither of which it controls.  It’s not going to end well for at least some people.

[As a side note: the pragmatist in me wonders whether Outlook could use the Word HTML engine to render emails from Outlook users, and the IE8 engine for emails not from Outlook users.  As long as no one ever edits forwarded emails it’d work!]
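The side note’s per-message dispatch idea can be sketched in a few lines; the `X-Mailer` check and the engine names are assumptions for illustration, not anything Outlook actually does:

```python
# Toy sketch of the side note's idea: pick a rendering engine per message,
# based on whether the sender's client was Outlook. The header check and
# engine names are illustrative assumptions, not real Outlook behavior.
def pick_engine(headers: dict) -> str:
    mailer = headers.get("X-Mailer", "")
    if "Outlook" in mailer:
        return "word-html"   # mail authored in Outlook: use the Word HTML engine
    return "ie8"             # everything else: use the standards-compliant engine

print(pick_engine({"X-Mailer": "Microsoft Office Outlook 12.0"}))  # word-html
print(pick_engine({"X-Mailer": "Thunderbird 3.0b1"}))              # ie8
```

The scheme breaks down exactly where the side note says it does: a forwarded-and-edited message mixes content from both worlds.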

Now, it’s awful easy to make fun of Microsoft.  The story on the side of the Open Web is better in part, but there are areas needing improvement.  On the rendering engine side (displaying beautiful documents with fidelity and speed), the world is looking better than it has in years, with several rendering engines competing in healthy ways like standards compliance, leading-edge-but-not-stupid innovation, performance, and the like.  Life is good.  For email marketers, getting email clients to render real web content is all that matters — they pay professional designers to author their HTML content using professional web page composition tools, and the revenue associated with a successful email marketing campaign makes those investments worthwhile.  Email is just a delivery vehicle to them, and it’s a perfectly valid perspective.  They like Thunderbird a lot, because we’re really good at rendering the web, thanks to Gecko.

However, for regular folks, life is not rosy yet in the Open Web world.  Authoring beautiful HTML is, even with design and graphics talent, still way, way too hard.  I’m writing this using WordPress 2.8, which probably has some of the best user experience for simple HTML authoring.  As Matt Mullenweg (the founder of WordPress) says, it’s still not good enough.  As far as I can tell, there are currently no truly modern, easy to use, open source HTML composition tools that we could use in Thunderbird, for example, to serve people who want to design wholly original email messages.  That’s a minor problem in the world of email, which is primarily about function, not form, and I think we’ll be able to go pretty far with templates, but it’s a big problem for making design on the web more approachable.

There are some valiant efforts to clean up the old, crufty, scary composer codebase that Mozilla has relied on for years.  There are simple blog-style editors like FCKEditor and its successor CKEditor.  There are in-the-browser composition tools like Google Pages or Google Docs, but those are only for use by Google apps, and only work well when they limit the scope of the design space substantially (again, a rational choice).  None of these can provide the flexibility that Ventura Publisher or PageMaker had in the dark ages; none of them can compete from a learnability point of view with the authoring tools that rely on closed stacks; none of them allow the essential polish that hand-crafted code can yield.  That’s a gap, and an opportunity.

I think radical reinvention is needed.  Something with the chutzpah of Bespin, which simply threw away most of the stack that we all assumed was needed, but this time, aimed at the creative class (and the creative side in all of us), rather than the geeks. I know that lots of folks at Mozilla would love to help work on this, but we know we’re too small to do it alone.  We know what modern CSS can do, we just don’t know how to make it invisible to authors.

This is a hard task, because it’s about designing design tools, which combines psychological, social, product design, usability, and technical challenges. It’s a worthy task, though, and one that I’d love to see someone tackle, especially if we can get non-geeks involved.  There are tens of thousands of web designers who know the magic triad of 1) design, 2) HTML/CSS, 3) what aspects of existing tools make them productive, and what aspects fail.  If we could get them to work productively with the tens of thousands of open source developers who currently build the applications that power the net (web, email, and others), we could throw away the broken metaphors of the 20th century and come up with new ways of designing using web technologies that everyone could use.  Or maybe we just need one brilliant idea.  I’ll take either.

Open Source, Open Standards, Open Data, Open Vancouver

Exciting Vancouver news!  Mayor Robertson has put forth a motion for city council to vote on next week which is chock full of amazing words, and which, if passed, will direct the city to have a bias towards openness — open source software, open standards, and open data.

That’s pretty impressive!  If the motion passes (which it should, riding on a global wave of sentiment towards openness, and fitting in with the platform that got seven of the councilors elected), this could mean great things for Vancouver, especially at the intersection of software, business, and the public.

On the issue of open source, I would love to show that local governments are able to recognize the strategic and control advantages inherent in software that they can influence and modify, and help push back the fear-driven campaigns which bias towards monopolies at taxpayer expense.  Similarly, promoting the use of open standards is a no-brainer that the best technocrats realize can give them the power that befits them as customers.  These ideas have been well articulated globally over the last few years, and I would hope that all high-level government staff and officials are briefed on the topics by now.  (If any local officials want to discuss this in greater detail, there are many qualified experts in Vancouver; don’t be afraid to ask for names or opinions!)

Open data is a more recent concept, the implications of which are likely as important as the rise of the web.  With open data, governments have a unique opportunity to create economic growth, reduce operating costs, and enrich the life of their constituencies, simply by making a policy decision such as the one in Tuesday’s motion, and following through.

As Sir Tim Berners-Lee (the creator of the web) discusses in this 15-minute TED talk, the simple act of releasing public data enables others to create value.  Of course, as the motion indicates, personal privacy rights trump, and we don’t want to release data on individual citizens — luckily that’s not needed in order to enable value creation.  As an example, this impressive screencast of Wolfram Alpha demonstrates the power of new computational platforms leveraging public data. Vancouver’s data belongs there.

Most government data is public data by definition.  What’s compelling about open data in the age of the web isn’t the fact that citizens have access to such data — they typically have the legal right to obtain it through administrative requests, even though those are inconvenient (and very expensive for the city).  What’s compelling is that by making what belongs to the public available via the web, the city can accomplish many laudable goals at once:

  • In many cases, simply enabling self-service on the web will reduce costs for the city and provide better service to its citizens.
  • By making data that it doesn’t have time to process and analyze available, the city allows others with time and expertise to do such analysis with no cost to the city.  This will sound unbelievable to bureaucrats unused to open source, but this kind of thing really happens.  You can’t predict who will do what with what data, but you can be sure that it can’t happen unless and until the data is available.
  • Some of those activities will just be interesting. But some will create new businesses, or allow existing businesses to become more efficient.  What if local retailers could access demographic trend data for free on the web, today?  What if companies outside of Vancouver could get a deeper understanding of Vancouver simply by looking at the data?  Everyone knows that Vancouver is a great place to live.  The city’s economic strengths are not as well advertised.  Enabling an ecosystem of people who turn data into interesting, insightful, and useful applications and sites can only help.  Think of open data as the infrastructure of a chamber of commerce 2.0.
  • The city is there to serve the citizenry.  To the extent that it is the caretaker of public data, and that the public has good ideas for using it, its job should be to get out of the way.  Part of being a transparent government is to be invisible — to not get in the way of experimentation and innovation.  Promoting open data while preserving privacy feels like a great goal for the city’s IT staff.

There are also intangible benefits that come from these kinds of attitudinal shifts in how the city relates to the internet and the software economy.  From a recruitment point of view in the software industry in particular, a city which embraced openness and the internet would be that much more attractive to the kinds of technical, creative, and public-spirited individuals that I seek.

Finally, local technology leaders are that much more likely to engage with the city and provide their help.  I know that the notion of an “Open Vancouver” makes me much more keen to engage with the city, as it would put the city on the short but growing list of governments who understand how they can leverage the web and openness to improve life for their constituencies.

Positive Energy for Change, for a Change

Change is hard. I spend a lot of time trying to enable, encourage, foster, stimulate, and provoke change in software.

Part of that is because it feels like it’s that most plastic of human endeavors. That, of course, is only true to the extent that the people involved in the creation of software _are_ plastic.

One of the fascinating things in the last few months is that it feels that, with the Obama administration, people are thinking big about societal change, in a variety of contexts.

The latest one to cross my twitter stream is Carl Malamud’s bid for the government printing office, which is full of great, big ideas. So cool.

Read up about it, and follow the links to the proposals & the videos. They’re quite compelling, optimistic, ambitious, and, I’m sure, threatening to the status quo. At the very least, it’s a great conversation.

Godspeed, Carl.

It’s Friday: Goofy but fascinating Thunderbird Add-ons day

Two different and equally goofy but interesting add-ons are in my personal news today:

Kent James released ToneQuilla, which I like to call “BiffTones!”, which allows you to set custom notification tones based on Thunderbird rules. Emails from the spouse make one sound, emails from the grandmother make another, etc. Neat!

Andrew Sutherland, on somewhat of a dare that I put in front of him (nothing like waving a red flag of visualization at a canvas bull like him) responded to the pretty but mostly useless Wordle meme which has been going around Mozilla circles, and built a wordle-like visualization of the database-driven queries that I blogged about a couple of days ago. If one can build an add-on to that in a day (well, a night without internet access), what couldn’t one do?
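The core of any wordle-style view is just word-frequency weighting, which makes a one-night add-on plausible; a minimal sketch (the short-word filter is a crude stop-word heuristic, an assumption on my part):

```python
from collections import Counter

def word_weights(text: str, top: int = 5) -> list:
    """Return the most frequent words: the raw input a wordle-style
    visualization would scale into font sizes."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    # Crude stop-word heuristic: ignore very short words like "the".
    return Counter(w for w in words if len(w) > 3).most_common(top)

sample = "search the mail, search the folders, search everything"
print(word_weights(sample))
```

A real add-on would feed in subject lines or query terms and map each count to a font size, but the weighting step is the whole trick.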

Both of these add-ons have somewhat of a goofy aspect to them, and both could evolve into something really useful. Notification overload is a huge problem in communication clients – it’s useful to know when something important happens, but useless to know merely that “a message was received” — tools like ToneQuilla can help. Similarly, visualizations can provide insight into one’s messaging history. See Themail for interesting research on the topic.

Thunderbird 3 beta 1 – a platform for innovation shapes up

Today, we’re announcing our first beta-quality release since the Thunderbird project was re-energized about a year ago. It’s exciting to see the first in what will be a series of releases aimed at a broader set of testers make it out the door.

In some ways, this is a typical beta — we’ve changed a lot of code since Thunderbird 2, and we need a lot of people to tell us if we’ve made any boo-boos when fixing bugs. It’s also a good beta in that we’ve moved the product forward, in part thanks to new capabilities in the underlying Mozilla platform, which gives us faster performance all around and an add-on manager which will be even more useful for Thunderbird users than for Firefox users. We also have important new mail-specific capabilities, including a new “autosync” system that gets Thunderbird to download IMAP message bodies early, so they’re already there when you need them, and a much faster implementation of deleting and moving IMAP messages, which I can’t imagine living without at this point. The one-click add-to-addressbook is also an elegant and shameless ripoff of the Firefox bookmarking model, which our alpha users love.

As a result, I feel that even for a first beta, Thunderbird 3 is much better than Thunderbird 2, thanks to a lot of hard work by a motley crew of great contributors worldwide, to whom I’m very grateful. All that and more is described in the release notes, which I encourage beta testers to read.

However, in some other ways it’s far from a typical beta. In particular, unlike the traditional definition of a beta release, we’re definitely not done making feature changes, including some pretty significant feature work that we expect will be integrated in Thunderbird 3 in later beta releases, some features that will live as optional add-ons, and some experiments which may end up in later releases of Thunderbird or not, depending on the result of the experiments.

I’ll talk a bit about some of these upcoming attractions, as I’m quite excited about them (and some more that will have to wait for another post).

First, the autoconfig work, which refers to a complete rethink of the account configuration process in Thunderbird. The account “wizard” in Thunderbird made sense in the early days, but over the years it has acquired complexity and lost relevance, as email systems have gotten more complex. Unfortunately, if you’re lucky enough to have a secure email server, the current Thunderbird user interface unjustly punishes you by making you go through 8 pages of questions, only to end up with an account which requires manual tweaks before you can check mail. That’s not good. To deal with this, we have rethought account configuration completely, and come up with a dialog which, when it lands (becomes available by default), should make account configuration really, really easy. It’s been hard to come up with an elegant minimal user interface that hides all of the complexities of email configuration, but it’s worth doing it right.
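To give a flavor of the problem, here is a toy sketch of the guessing half of such autoconfiguration; the hostname patterns are common provider conventions I’m assuming for illustration, not Thunderbird’s actual lookup logic:

```python
def guess_mail_config(address: str) -> dict:
    """Guess plausible server settings from an email address alone.

    The candidate hostnames are common conventions (assumptions for
    illustration), not Thunderbird's actual lookup order; a real
    implementation would probe each candidate and check for SSL/TLS.
    """
    domain = address.rsplit("@", 1)[-1]
    return {
        "imap_candidates": [f"imap.{domain}", f"mail.{domain}"],
        "smtp_candidates": [f"smtp.{domain}", f"mail.{domain}"],
        "username": address,  # many providers use the full address as login
    }

print(guess_mail_config("alice@example.org")["imap_candidates"])
# ['imap.example.org', 'mail.example.org']
```

The hard part the paragraph alludes to isn’t the guessing, it’s hiding the probing, the fallbacks, and the security decisions behind a one-field dialog.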

Next up is tabs. Thunderbird 3 has a great opportunity here: it is adopting a tabbed interface at a point when we’ve learned a lot about what makes tabs work well or poorly. In Thunderbird 3 beta 1, it’s a fair bit easier to work with tabs than it was in Thunderbird 2 (and many more improvements are planned before the final release). For example, it’s much easier to create new kinds of tabs (the Lightning calendar add-on makes great use of this, as I show below). One simple example is Bryan Clark’s “glodabook” add-on, a starting point for exploring new ways of navigating the address book.

Addressbook prototype

Next up is conversations. By default, Thunderbird saves emails you send in a “Sent messages” folder and files emails you receive in other folders, typically decided on a per-message basis by the user (more on that below). This is a fine default strategy, but it can make it hard to find related messages when they’re not in the same place (e.g. replies to emails you sent, or messages in a long conversation, some of which is in your archive folders and some in your inbox). Thunderbird 3 includes a powerful search engine (“Gloda”) designed to let us efficiently find related messages, no matter where they are. In particular, it makes it quick to take a message and show it in its conversation context. This lets you view the messages you sent interspersed with the messages you received, as well as earlier messages in the conversation which you may have archived. This is still experimental, and not enabled by default in 3.0b1, but early results are very promising:

Conversation view
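The essence of cross-folder conversation grouping can be sketched by walking reply chains: each message points at its parent via an In-Reply-To reference, and every message in a chain collapses onto the chain’s root, regardless of which folder it lives in. The data shapes here are hypothetical and much simpler than Gloda’s real schema.

```javascript
// Minimal sketch: group messages into conversations by following
// In-Reply-To links back to each chain's root message.
function groupConversations(messages) {
  const byId = new Map(messages.map(m => [m.id, m]));
  // Walk up the reply chain until a message with no known parent.
  const rootOf = m => {
    while (m.inReplyTo && byId.has(m.inReplyTo)) m = byId.get(m.inReplyTo);
    return m.id;
  };
  const threads = new Map();
  for (const m of messages) {
    const root = rootOf(m);
    if (!threads.has(root)) threads.set(root, []);
    threads.get(root).push(m.id);
  }
  return threads; // root id -> all message ids in that conversation
}
```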

Next up, search. Part of the Gloda engine mentioned above is a powerful full-text search engine, which seems to be working quite well so far. Thunderbird search is already better in 3.0b1 than in 2.0 because we download emails more aggressively and do a better job of finding the downloaded copies. With the new search engine, we’ll be able to efficiently run searches like “show me all messages from bryan mentioning ‘conversation’ in the body or the subject”. And we think we can make that easy for users to discover as well:

First we do autocomplete on existing contacts:

autocompleting contacts

and then encapsulate them in graphical objects to simplify the display:

experimental search results view

On that topic, one of the design questions we’re exploring is how to make it easier for users to be smarter about search. Thunderbird has always had very powerful search capabilities, but to use them people have had to think like database programmers, which most of us aren’t. We have plans to help people build smart searches: start from the simple searches people are used to from the web, then offer suggested sub-searches based on analyzing their results. Now that the search engine is in place, we can experiment with many different search models and see what works best.
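The first step of that bridge can be sketched as translating a simple web-style query string into structured criteria a search engine can execute. The `from:` prefix syntax below is an assumption for illustration, not the shipping Thunderbird query language.

```javascript
// Hypothetical sketch: turn a web-style query like "from:bryan conversation"
// into a sender filter plus full-text terms.
function parseQuery(query) {
  const criteria = { from: [], terms: [] };
  for (const token of query.trim().split(/\s+/)) {
    if (token.startsWith("from:")) criteria.from.push(token.slice(5));
    else if (token.length) criteria.terms.push(token);
  }
  return criteria;
}
```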

The last two screenshots are particularly exciting to me because they demonstrate that we can leverage the foundational bits of Thunderbird and experiment with new ways of working with messages, without disrupting the Thunderbird 2 user experience that many users are comfortable with. What’s equally exciting is that these new views can themselves be platforms for experimentation, whether by us or by others. One such topic is conversation visualization and interaction models. Andrew Sutherland implemented an add-on that shows thread arcs (here using a view that is out of date by a whole week):

Thunderbird has always been an interesting experimental playground, because of its open-source nature and its add-on model. The technology platform in Thunderbird 3 will make it even more so: 1) we have better technology that allows new ways to slice the data; 2) because we’re exploring new features through add-ons ourselves, we find out early what changes are needed to make the platform more extensible; and 3) we’re fully leveraging web technologies, which is a bit new for Thunderbird. In particular, all of the views above build on some of the most compelling advances in web technology, from the canvas widget to JavaScript toolkit-based animations (jQuery for now) and modern CSS features.

Finally, last but not least, the Lightning calendaring add-on is moving along nicely. The Thunderbird and Calendar teams have made a lot of progress tackling the stack of issues that made it hard to integrate into the new Thunderbird codebase. We’re not done yet, but it’s looking great:

calendar tab

There are other add-ons that contributors are working on, which I’ll talk about as they get polished and ready for screenshots.

As always, we love to get ideas for interesting new capabilities we can bring to the platform. We’re focusing on some of the basic capabilities we think are crucial to solving today’s mail problems, such as search and message management, but it’s a huge field, and email users are desperate for innovative ideas.

We’re identifying way more topics of interest than we have time to tackle, so we’re hoping to reach out to designers to get a broader set of participants helping us with some of the design challenges of a modern approach to messaging, within the context of Mozilla Labs. More on that soon.

Whether you’re a designer or an implementor, if you want to build new features on top of the views we’re building, add new kinds of data to our database (Twitter, Facebook, RSS, etc.), or create new visualizations, do get in touch.

If you’re interested in the extensions above, and aren’t afraid to try out code that changes daily, my recommendation is to use an IMAP server, Shredder (the nightly builds of Thunderbird, which are already different from the beta 1 build), and the extensions at the following locations: