Retiring the 'Save' button

March 21, 2008

Earlier this week I attended a presentation by David Platt, the noted software pundit and author of Why Software Sucks. He made some compelling arguments in favor of more user interaction design in our applications and was emphatic in his demand that software "should just work". This echoes the sentiments of several noted experience designers like Donald Norman of The Design of Everyday Things fame and Steve Krug, author of Don't Make Me Think, and indeed the sentiments of most low-tech consumers out there (hi mom, grandma).

What resonated with me was a comment he made about getting rid of the 'Save' button in our applications. He pointed out several applications that have already done away with it, and the user experience has noticeably improved. Two examples were Microsoft Money and Microsoft OneNote. Money is a personal money management application with a local database that stores your financial transactions. It uses a checkbook register metaphor where you enter a transaction and it is immediately posted (saved) to your account records. OneNote is an "always on" note taking application that sits in your task tray and, when opened, lets you manage a hierarchical collection of notes through a simple notebook metaphor. It saves your notes automatically as certain events occur - switching notes, closing/minimizing the application, background timeout, etc. Both of these applications store data to the user's local folder on the machine they are running on.
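To make that idea concrete, here's a minimal sketch of event-driven autosave in TypeScript. It is not OneNote's actual implementation - the persist callback, the event names, and the 30-second idle timeout are all assumptions for illustration.

```typescript
// Hypothetical note model; the names here are illustrative, not OneNote's design.
interface Note {
  id: string;
  content: string;
}

class AutoSavingNotebook {
  private dirty = false;
  private idleTimer?: ReturnType<typeof setTimeout>;

  constructor(
    private activeNote: Note,
    private persist: (note: Note) => void, // writes to the local store
    private idleMs = 30_000                // assumed "background timeout"
  ) {}

  edit(content: string): void {
    this.activeNote.content = content;
    this.dirty = true;
    // Restart the idle timer: save after a quiet period, no 'Save' button involved.
    if (this.idleTimer) clearTimeout(this.idleTimer);
    this.idleTimer = setTimeout(() => this.flush(), this.idleMs);
  }

  switchTo(next: Note): void {
    this.flush();            // saving is triggered by switching notes...
    this.activeNote = next;
  }

  close(): void {
    this.flush();            // ...and by closing/minimizing the application
  }

  private flush(): void {
    if (!this.dirty) return;
    this.persist(this.activeNote);
    this.dirty = false;
  }
}
```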

Later this week, I began prepping for a talk on occasionally disconnected applications, which shows off the capabilities of the new Microsoft Sync Framework for .NET. This is a new library we're introducing that makes it VERY easy to create a local data store for your applications and have the data automatically synchronize with external network data sources - file systems, databases, or web services. There's a new generation of online file and database services emerging in the market. Using frameworks like Sync, new applications will be built to access our data in the cloud as we move from device to device. These applications, written specifically for the form factor of the device we're currently using, will download and cache our data locally to provide the optimal user and computing experience, but also keep the data synced with our master copies in the cloud. It's already starting to happen - ask anyone who uses a Windows Mobile phone to connect to Exchange - it works wonderfully and they can't live without it.
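To illustrate the shape of that local-cache-plus-sync pattern - this is emphatically NOT the Sync Framework's actual API, every name below is made up - here's a rough TypeScript sketch of a cache that accepts writes immediately and reconciles with a remote master copy when it syncs:

```typescript
// Conceptual sketch only; a real framework handles change tracking,
// batching, and conflict resolution far more robustly.
interface SyncRecord {
  id: string;
  payload: string;
  modifiedAt: number; // timestamp used as a naive change tracker
}

interface RemoteStore {
  pull(since: number): Promise<SyncRecord[]>; // changes made in the cloud
  push(changes: SyncRecord[]): Promise<void>; // changes made locally
}

class LocalCache {
  private records = new Map<string, SyncRecord>();
  private lastSync = 0;

  // Reads and writes hit the local store immediately - no 'Save' button.
  write(rec: SyncRecord): void {
    this.records.set(rec.id, { ...rec, modifiedAt: Date.now() });
  }

  read(id: string): SyncRecord | undefined {
    return this.records.get(id);
  }

  // Periodically, or when connectivity returns, reconcile with the master copy.
  async sync(remote: RemoteStore): Promise<void> {
    const localChanges = [...this.records.values()]
      .filter(r => r.modifiedAt > this.lastSync);
    await remote.push(localChanges);

    const remoteChanges = await remote.pull(this.lastSync);
    for (const rec of remoteChanges) {
      // Last-writer-wins conflict policy, purely for illustration.
      const existing = this.records.get(rec.id);
      if (!existing || rec.modifiedAt > existing.modifiedAt) {
        this.records.set(rec.id, rec);
      }
    }
    this.lastSync = Date.now();
  }
}
```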

Reflecting on David's talk, my initial thought was that the 'Save' button is going to slowly be replaced by the 'Sync' button as applications become more Internet aware. Makes sense, right? Heck, the floppy icon is antiquated anyway - I've heard more than one user ask why there's a Honda logo on their toolbar. But it's going to go farther than that. OneNote shows us that users shouldn't have to worry about saving at all. The computer can take care of that for us. We can build applications that are resilient to connectivity, hardware, and even user errors, can't we? And...they should be able to heal themselves by restoring to the last known good state in the case of catastrophic failure, with minimal hassle to the user.
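As a rough sketch of that "last known good" idea - the class and method names are hypothetical, not any particular product's design - an application could checkpoint every valid state and quietly roll back when something goes wrong:

```typescript
// Minimal last-known-good recovery sketch, assuming the state is JSON-serializable.
class ResilientDocument<T> {
  private lastGood?: T;

  constructor(private state: T, private validate: (s: T) => boolean) {
    this.checkpoint();
  }

  apply(change: (s: T) => T): void {
    const next = change(this.state);
    if (this.validate(next)) {
      this.state = next;
      this.checkpoint();   // quietly persist each good state
    } else {
      this.restore();      // heal by rolling back, no dialog for the user
    }
  }

  private checkpoint(): void {
    this.lastGood = JSON.parse(JSON.stringify(this.state)) as T;
  }

  private restore(): void {
    if (this.lastGood !== undefined) {
      this.state = JSON.parse(JSON.stringify(this.lastGood)) as T;
    }
  }

  current(): T {
    return this.state;
  }
}
```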

So how is this relevant to an architect? I believe this evolution is inevitable, and we as architects need to anticipate it and plan for it. Just as the page and hyperlink metaphors set the standard for user interaction in the 90's, the new breed of resilient, Internet-aware applications will set the standard for the decade to come. And they need to be smarter. A whole lot smarter, if we're going to make them "just work" in a wild-west distributed environment like the web. They need to match our needs and lifestyles with minimal impedance. Our current crop of designers is learning an awful lot about user experience from the current generation of applications we're building. They're going to be asking for more sophistication from us in the near future. We need to be ready for what they come up with.

Can our existing patterns evolve to keep pace? Take the Command pattern, for instance. This is a well-established pattern for document editor style applications. You can very quickly put together an extensible framework that allows you to take input from the user, trigger the appropriate custom-coded command handler for that input, and store it in a history buffer in case the user wants to undo a command that was triggered by mistake. It's a great pattern for a single-user application that edits a document on the local machine. It can easily be adapted to work with a single-user application that leverages the cloud. But can we make it work for multiple users who need to collaborate on a single document in a live setting? (Apps like MindMeister are attempting this as we speak.) We could give each user their own undo buffer, use a shared buffer, or, as we most often do on the web, no buffer at all. Do we abandon the pattern if it doesn't fit and push back on the designers to try another interaction model? How about the SharePoint or VSTS models - check-in/out and maybe shelving? Hmm...probably too complicated for the low-tech user. It might fail the "should just work" test.
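For reference, here's a bare-bones version of that single-user setup in TypeScript. The document model and the command are invented for illustration, but the execute/undo/history structure is the classic pattern:

```typescript
// Classic Command pattern with a single-user undo history.
interface Command {
  execute(): void;
  undo(): void;
}

class TextDocument {
  constructor(public content = "") {}
}

class AppendTextCommand implements Command {
  constructor(private doc: TextDocument, private text: string) {}

  execute(): void {
    this.doc.content += this.text;
  }

  undo(): void {
    this.doc.content = this.doc.content.slice(
      0, this.doc.content.length - this.text.length);
  }
}

class CommandInvoker {
  private history: Command[] = [];  // the single-user undo buffer

  run(cmd: Command): void {
    cmd.execute();
    this.history.push(cmd);
  }

  undo(): void {
    const cmd = this.history.pop(); // step back through the buffer
    if (cmd) cmd.undo();
  }
}

// Usage: the invoker could persist after every run() instead of waiting for an
// explicit 'Save' - but how to share this history across live collaborators is
// exactly the open question raised above.
const doc = new TextDocument();
const invoker = new CommandInvoker();
invoker.run(new AppendTextCommand(doc, "Hello, "));
invoker.run(new AppendTextCommand(doc, "world"));
invoker.undo(); // doc.content is back to "Hello, "
```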

It's a challenging time to be an architect. Designers are coming up with some wild stuff. I don't think the Gang of Four book will become obsolete any time soon, but we'll definitely need to find new patterns and paradigms to help us make our applications smarter. What do you think? Will the 'Save' button go away? Should it?