This is long overdue, my bad, I should’ve written this months ago. Anyway, I had the chance to speak at GNOME.Asia 2015, a regional-level conference on GNOME and open source software in general. In case you didn’t know, GNOME is one of the available desktop environments for Linux-based OSes. If you’ve used (or still use) Linux in the past few years, chances are your application windows (among other things) were managed by GNOME. That’s how important GNOME is. Therefore, it was such an honour to be able to speak here, even though I didn’t register until the very last day of abstract submission.
The event ran on 7–9 May 2015 at Universitas Indonesia, Depok. I had my session on the final day, where I talked about “Prototyping User Interface for Gestural Input Devices”. In short, this is my ongoing research on how to propose a model, or paradigm, of UI design based on the use of air gestures from devices such as the Microsoft Kinect or Leap Motion.
Because of the lengthy technical details, I’m not gonna write everything about it here. But, just to conclude: I suggest an implementation of the zero-UI philosophy, with gestures acting as a tool to choose and confirm choices when used as a complementary input alongside mouse and keyboard. I think this is the best choice today, especially for productive, non-entertainment use cases. Here are the slides from the presentation session.
So, why is this important? Because I see the use of air gestures as an input method growing day by day. However, its adoption still revolves around games and entertainment, which are not really mission critical. I want to push the development further by challenging its use in something productive, even if only as a complementary device. Sure, someday (if this proves important enough), big corporations will adopt it into their platforms and OSes. But just as I don’t see these devices replacing the mouse, I also don’t think today’s OS designs can accommodate air gestures, simply because they rely so heavily on icons.
This is where I think I drew a lot of influence from GNOME. They released a desktop environment that relies on keyboard shortcuts (gestures, in a sense) that replace, or at least lessen, icon-based interaction. A perfect model to replicate for air gestures.
Therefore, we decided to do research around the question: “What if we had a desktop fully optimized for air gesture input, but one that still respects mouse and keyboard?” The result is a prototype we’ve built on web technologies.
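To make the idea concrete, here is a minimal sketch of how gestures can act as a “choose and confirm” complement to mouse and keyboard, as described above. This is an illustrative assumption, not the prototype’s actual code: the gesture names (“swipe”, “push”), the dispatcher class, and the event shapes are all hypothetical, standing in for whatever events a Kinect or Leap Motion driver would emit.

```javascript
// Hypothetical sketch: route recognized air gestures onto "choose" and
// "confirm" actions, complementing (not replacing) mouse and keyboard.
// Gesture names and event shapes are assumptions for illustration only.

class GestureDispatcher {
  constructor() {
    this.handlers = {};
  }

  // Register a handler for a named gesture (e.g. "swipe", "push").
  on(gesture, handler) {
    this.handlers[gesture] = handler;
  }

  // Feed in a recognized gesture event (as a device driver might emit).
  // Returns the handler's result, or null for unrecognized gestures.
  dispatch(event) {
    const handler = this.handlers[event.gesture];
    return handler ? handler(event) : null;
  }
}

// Example wiring: a horizontal swipe cycles the selection, and a forward
// "push" confirms it -- mirroring keyboard Tab/Enter semantics.
const items = ["Files", "Terminal", "Browser"];
let selected = 0;

const dispatcher = new GestureDispatcher();

dispatcher.on("swipe", (e) => {
  const step = e.direction === "right" ? 1 : -1;
  selected = (selected + step + items.length) % items.length;
  return items[selected]; // the newly "chosen" item
});

dispatcher.on("push", () => `launch:${items[selected]}`); // "confirm"

console.log(dispatcher.dispatch({ gesture: "swipe", direction: "right" })); // "Terminal"
console.log(dispatcher.dispatch({ gesture: "push" }));                      // "launch:Terminal"
```

The point of the sketch is the separation of concerns: the dispatcher only maps gestures to the same select/confirm actions that keyboard shortcuts already trigger, so mouse and keyboard keep working unchanged.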
Stay tuned for the gory details. For now, enjoy the slides, and I hope we can have discussions on this exciting topic.