Search is almost never the task; it is always a step in a larger process. The task may be fixing an old DVD player or getting directions to your polling place.
One large problem with today’s web search engines is context switching. When users need to perform a web search, they must stop what they are doing and transfer some portion of their current mental model to the search engine. This is like the game of telephone, except each application has a different interface and requires a different piece of information. And just like the game of telephone, you never really know what is going to come out at the other end.
We are training people to think in fragmented terms in order to support antiquated input requirements. This must evolve: web search engines must figure out how to plug into the user’s flow and leverage context. The first company to figure this out will change history and become the next Google or Microsoft.
A simple example of applications working together to automate the flow is the mobile phone. If I am looking at an email showing voting locations in my district, the phone numbers and addresses are represented as links. Clicking on either launches the appropriate application and sets its context. For example, if I select the map link on my iPhone, it launches Google Maps, highlights the voting location on a map, and provides a way to get directions from my current location. This is a seamless context switch that integrates search into the process.
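The link-detection step in that flow can be sketched in a few lines. This is a hypothetical illustration, not how the iPhone actually implements it: the phone-number pattern, the `tel:` anchor output, and the Google Maps query URL are all my own assumptions.

```python
import re
import urllib.parse

# Illustrative pattern for North American phone numbers; real data
# detectors are far more sophisticated.
PHONE_RE = re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}")

def linkify_phones(text: str) -> str:
    """Wrap phone numbers in tel: anchors, as a mail client might."""
    def to_link(m: re.Match) -> str:
        digits = re.sub(r"\D", "", m.group(0))  # strip punctuation
        return f'<a href="tel:{digits}">{m.group(0)}</a>'
    return PHONE_RE.sub(to_link, text)

def maps_link(address: str) -> str:
    """Build a maps query URL for a street address (assumed URL scheme)."""
    return "https://maps.google.com/?q=" + urllib.parse.quote(address)
```

Tapping the generated link is what carries the context (the number or the address) into the next application, so the user never has to retype it.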
As we begin thinking about tasks instead of applications, we will change how we develop software. I personally think this change will be as fundamental to the future of software development as writing multithreaded applications.
Processor speeds are peaking, and the trend toward multi-core is here to stay. We need new ways of thinking about writing computer programs if we want to change the world. Integrating search into the user flow is a logical step. Who wants in?
Wow! In the past week, two new companies that want to leverage a sensor started making noise in the tech blog press. Very cool, and welcome.
The History of Me.dium
I thought this would be a good time for a history lesson on Me.dium. When we were brainstorming the concepts behind Me.dium, we were coming from a very different space than most would have expected: enterprise publishing.
We built a technology focused on sharing information in large work groups. Our experience told us that when people in large work groups created information, they usually started from existing documents or templates, not from scratch. We developed an application that made it easier for people to reuse information.
This was initially accomplished by adding some code to our proprietary application. The code monitored and stored the actions people took, like copy, paste, drag, drop, and save-as.
The monitoring application, or sensor, also captured the offsets into the documents or the location on disk, the user name, and the date and time the action occurred. We stored the information externally from the document and added additional metadata. This allowed us to update content from one document to another based on business rules or explicit user actions.
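A minimal record for one of these captured actions might look like the following sketch. The field names and the dictionary-backed store are my own illustration of the metadata described above, not the actual system.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative record of one user action, stored externally from the
# document itself so the document is never modified.
@dataclass
class ActionRecord:
    user: str
    action: str              # e.g. "copy", "paste", "save_as"
    document_path: str       # location on disk
    offset_start: int        # offset into the document
    offset_end: int
    timestamp: datetime = field(default_factory=datetime.now)

# External store keyed by document path, so content can later be traced
# from one document to another via business rules or user actions.
store: dict[str, list[ActionRecord]] = {}

def record(event: ActionRecord) -> None:
    store.setdefault(event.document_path, []).append(event)
```

Because the records live outside the documents, any tool can query them without understanding the documents' proprietary formats.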
We built two graphical tools to interact with this new metadata. The first was a visual differencing engine that understood structure, content, and the new metadata. The second was a hyperbolic tree that allowed the user to crawl around the metadata and see all the local and global relationships. Both offered tremendous value to the end user, but the hyperbolic tree provided an aggregate picture of the system we were not expecting. This became the original idea for Me.dium.
We started putting monitors in all types of applications: Microsoft Word, Adobe Acrobat, Internet Explorer, an open-source XSLT parser, and the Windows OS. We quickly learned that most applications were great at being automated and terrible at describing what they were doing internally. I called this new type of information activity context.
Activity context was a new piece of metadata that could be stored and leveraged by any user, not just the original creator. Once we were aware of it, we were able to do things we could never do before, and the most impressive was connecting people in real time based on their current interests.
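Connecting people by current interest reduces, at its simplest, to finding users whose activity contexts overlap. Here is a hedged sketch of that idea; the function names and the set-intersection matching rule are my own illustration, not Me.dium's actual Matching Engine.

```python
from collections import defaultdict

# Each user's current activity context: the set of resources
# (documents, URLs, etc.) they are acting on right now.
current_context: dict[str, set[str]] = defaultdict(set)

def report_activity(user: str, resource: str) -> None:
    """A sensor reports that a user is acting on a resource."""
    current_context[user].add(resource)

def matches_for(user: str) -> list[str]:
    """Return other users whose activity context overlaps this user's."""
    mine = current_context[user]
    return sorted(
        other for other, ctx in current_context.items()
        if other != user and mine & ctx
    )
```

A real matching engine would weight resources, decay stale activity, and scale beyond a single process, but the core signal is the same: shared context implies shared current interest.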
We envisioned sensors everywhere, all creating activity context, with Me.dium’s Matching Engine gluing it all together. Everything from consumer software, internet applications, proprietary enterprise applications, handheld GPS devices, mobile phones, home appliances, computer games, and automobiles could operate more efficiently with access to activity context.
Part 2 of Me.dium’s history coming soon.