Why are gestures starting to appear in web browsers?

According to Wikipedia, a gesture is a form of non-verbal communication made with a part of the body. A simple example would be waving hello or goodbye. When gestures are applied to a computer program, they provide a method to execute common commands. You can think of a gesture as a quick way to invoke application functionality.

Why are gestures starting to appear in browsers?

The browser has been morphing over the past 10 years from a rendering engine into a composite application framework. Robust APIs, a universal install base, and developer-friendly content controls make it an obvious choice for most software projects, large and small.

As the browser continues to grow in popularity, it also grows in functionality. Making this functionality available without complicating the user experience is challenging. Gestures are one method to accomplish this goal.  

One of the new features introduced with IE8 is called Activities. Activities move entire websites into the user's right-click menu, removing multiple steps from the user's process. This has the opportunity to change how people interact with the web; things like searching, looking up word definitions, and exploring addresses can now be accomplished in context with a single gesture inside IE8.

At Me.dium we rolled out an Activity as part of the IE8 Beta 1 launch. Try it, and let us know what you think.

Where is this going next?

When I think of Apple's iPhone or Microsoft's Surface technology, multi-touch takes the concept of gesturing to a whole new level. A pinch or stretch in these UI paradigms visually changes the experience. The next wave in UI design might be completely gesture-based. What do you think?

Why are sensor-based applications popping up everywhere?

Wow! In the past week, two new companies that want to leverage a sensor started making noise in the tech blog press. Very cool and welcome.

  1. socialbrowse.com
  2. Kiobo.com

The History of Me.dium

I thought this would be a good time for a history lesson on Me.dium. When we were brainstorming the concepts behind Me.dium, we were coming from a very different space than most would have expected: enterprise publishing.

We built a technology that focused on sharing information in large work groups. Our experience told us that when people in large work groups created information they usually started from existing documents or templates and not from scratch. We developed an application that made it easier for people to reuse information.

This was initially accomplished by adding some code to our proprietary application. The code monitored and stored the actions people took, like copy, paste, drag, drop and save as.

The monitoring application, or sensor, also captured the offsets into the documents, the location on disk, the user name, and the date and time the action occurred. We stored this information externally from the document and added additional metadata. This allowed us to update the content from one document to another based on business rules or explicit user actions.
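To make the idea concrete, a captured action record like the one described above could be modeled roughly as follows. This is only an illustrative sketch; the field names and structure are assumptions, not Me.dium's actual (proprietary) schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ActivityRecord:
    """One user action captured by a sensor, stored outside the document.

    Field names are hypothetical, chosen to mirror the description in
    the post: the action taken, where it happened, who did it, and when.
    """
    action: str           # e.g. "copy", "paste", "drag", "drop", "save_as"
    document_path: str    # location of the document on disk
    offset: int           # offset into the document where the action occurred
    user_name: str
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A sensor embedded in an application would emit records like this one:
record = ActivityRecord(
    action="copy",
    document_path="C:/docs/q3-report.doc",
    offset=1024,
    user_name="alice",
)
```

Because each record lives outside the document it describes, any tool with access to the store can reason about the activity, which is what made the later aggregation possible.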

We built two graphical tools to interact with this new metadata. The first was a visual differencing engine that understood structure, content, and the new metadata. The second was a hyperbolic tree that allowed the user to crawl around the metadata and see all the local and global relationships. They both offered tremendous value to the end user, but the hyperbolic tree provided an aggregate picture of the system we were not expecting. This became the original idea for Me.dium.

We started putting monitors in all types of applications: Microsoft Word, Adobe Acrobat, Internet Explorer, an open-source XSLT parser, and the Windows OS. We quickly learned that most applications were great at being automated and terrible at describing what they were doing internally. I called this new type of information activity context.

The activity context was a new piece of metadata that could be stored and leveraged by any user, not just the original creator. Once we were aware of activity context, we were able to do things we could never do before, and the most impressive was connecting people in real time based on their current interests.

We envisioned sensors everywhere, all creating activity context, and Me.dium's Matching Engine gluing it all together. Everything from consumer software, internet applications, proprietary enterprise applications, handheld GPS devices, mobile phones, home appliances, computer games, and automobiles could operate more efficiently with access to activity context.

Part 2 of Me.dium's history coming soon.
