Gestural Robot – Project Log pt. 1

Last week, in spare time between preparing material for coding classes at Framework, I made a small prototype of a finger-gesture-controlled robot. Gestural control has fascinated me for quite a while now, and since I had a robot kit sitting idle, I thought this would be a good time to use gestures as input to something other than pixel-based work.

I use a Leap Motion as the sensor, with Processing reading the finger-count data and transmitting it via serial to an Arduino, which in turn orchestrates which wheel to move and in which direction. The robot is a 2-wheel platform driven by an L298 motor shield.
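The actual prototype is written in Processing and Arduino; as a rough illustration of the idea, the finger-count-to-wheel logic can be sketched like this in Python (the mapping and names are my own invention, not the real sketch):

```python
# Hypothetical mapping from detected finger count to wheel commands,
# mirroring the idea of the prototype (not the actual Processing/Arduino code).

def wheel_command(finger_count):
    """Map a finger count (0-5) to (left_wheel, right_wheel) directions.

    1 means forward, -1 backward, 0 stop. The mapping below is an
    assumption for illustration only.
    """
    commands = {
        0: (0, 0),    # no fingers: stop
        1: (1, 1),    # one finger: both wheels forward
        2: (-1, -1),  # two fingers: both wheels backward
        3: (1, 0),    # three: turn using only the left wheel
        4: (0, 1),    # four: turn using only the right wheel
    }
    return commands.get(finger_count, (0, 0))

def serial_byte(finger_count):
    """Encode the command as a single byte for the serial link to the Arduino."""
    left, right = wheel_command(finger_count)
    # pack each wheel direction (-1, 0, 1) into two bits, offset by 1
    return ((left + 1) << 2) | (right + 1)
```

On the Arduino side, the reverse of `serial_byte` would unpack the two directions and drive the L298 accordingly.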


99 Names

99 Names is a web VR experience that exhibits the 99 Names of Allah in an immersive 3D space. This is my first foray into both virtual reality applications and WebGL programming through ThreeJS. It's a simple application: users can access the web page from their phone or desktop/laptop browser and instantly get the experience of being surrounded by rotating circles showing the 99 names of Allah.

99 Names

The backbone of the project is built on the Web VR Boilerplate, which ensures that everyone can get a grip on the experience, whether they're on a desktop or a smartphone, with or without a virtual reality head-mounted display such as the Oculus Rift or Google Cardboard. All within the web browser, with no application to install. I think this is a good way to evangelize VR, since at this point the VR world really needs a whole lotta applications to expose its features and shortcomings.

I had so much fun making it. The boilerplate makes it really easy to develop a VR experience, so I could focus on the main content, which was all made using ThreeJS. Though I've read about it a lot in the past (it's been on my to-learn list for about 3 years now, haha), this is actually the first time I've learned it thoroughly. I can say that the best way to learn a programming language or library is by making things with it. So far, I've learned a lot about the 3D pipeline. Which makes me wonder: why didn't I do this years ago?
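The scene itself is built with ThreeJS, but the layout idea, placing the names on circles that surround the viewer, boils down to simple polar-coordinate math. A Python sketch with made-up radii and heights (the real scene's parameters differ):

```python
import math

def ring_positions(count, radius, y):
    """Place `count` items evenly on a horizontal circle of `radius`
    at height `y`, centered on the viewer at the origin.

    Returns a list of (x, y, z) positions; in the actual scene each
    name would then be rotated to face the center.
    """
    positions = []
    for i in range(count):
        angle = 2 * math.pi * i / count
        positions.append((radius * math.cos(angle), y, radius * math.sin(angle)))
    return positions

# e.g. spread 99 names over 3 stacked rings of 33 (an illustrative split)
rings = [ring_positions(33, 10.0, h) for h in (-3.0, 0.0, 3.0)]
```

Rotating the whole group slowly around the y axis gives the surrounding-circles effect.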

However, from an interaction-design point of view, I realize that catering a VR experience to both kinds of platform (desktop and smartphone) is tricky. For example, in smartphone-based VR, user input is limited. Not every phone can use the magnetic input of Google Cardboard, something that will hopefully be rectified by Google Cardboard 2. I'm not sure about other HMDs; maybe you, dear reader, have more data?

On the other hand, the desktop version can accept a plethora of inputs, since the user can use a keyboard, mouse, joystick, or other device, something that obviously won't map precisely to the smartphone counterpart. I did run into the vreticle library, which should help me build a gaze-input system for VR, but I still had some trouble implementing it.

Therefore, at this point, the experience is a passive one: no user input is involved. But I do hope to add some at some point.

99 Names is accessible on the web. Play with it and let me know what you think.

Meanwhile, here are some screenshots of the steps I took in making it:




Leap Motion Synth

I helped a friend develop a simple tone generator as a medium for experiential music learning for kids. He wanted to use the Leap Motion so kids can use finger gestures to generate tones while learning the pitch of the notes.

Leap Synth

This was a good experience for me, as I wanted to learn more about designing UI for gestural input devices such as the Leap Motion. This time, I proposed this scenario:

  1. Use the right-hand index finger to choose which note to trigger
  2. Use the left hand to start and stop the note: when the palm closes, a note is triggered; when the hand opens, the note stops playing
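The two-hand scenario reduces to a quantizer plus a gate. A minimal Python sketch of that logic (the real program is in Processing; the position ranges and note count here are assumptions):

```python
# Right index finger's x position picks a note; the left palm acts as a
# gate (closed = play, open = stop). Not the actual Processing code.

def pick_note(finger_x, x_min=-150.0, x_max=150.0, notes=8):
    """Quantize the index finger's x position into one of `notes` slots.

    The Leap Motion reports positions in mm; the active range here is
    an assumption for illustration.
    """
    t = (finger_x - x_min) / (x_max - x_min)
    t = min(max(t, 0.0), 1.0)            # clamp to the active range
    return min(int(t * notes), notes - 1)

def note_frequency(slot, base_midi=60):
    """Equal-temperament pitch for the chosen slot, starting at middle C."""
    midi = base_midi + slot
    return 440.0 * 2 ** ((midi - 69) / 12)

def gate(palm_is_closed):
    """Closed left palm plays the chosen note; open palm stops it."""
    return palm_is_closed
```

Splitting "choose" and "confirm" across two hands is what keeps the pitch from jumping while a note is sounding.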

As with previous projects, I used Processing for development, as I can easily export a Windows application that he could deploy without much hassle. The main challenge was getting Processing to detect which hand is right and which is left. In the end, I decided to detect hand position relative to the Leap Motion; after that, the finger detection and tracking was done. Mind that this was done in May 2014; several months later, Leap Motion released a new API that provides an easier way to detect the left/right hand. Ha!
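The position-based left/right decision I used boils down to something like this (a simplified Python sketch, not the original Processing code):

```python
def classify_hands(hand_positions):
    """Label hands 'left'/'right' by their x position relative to the
    Leap Motion, which sits at x = 0 in its own coordinate space.

    `hand_positions` is a list of (x, y, z) tuples, one per detected
    hand. This is the heuristic I relied on before the later API
    offered left/right detection directly.
    """
    hands = sorted(hand_positions, key=lambda p: p[0])
    if len(hands) == 2:
        return {'left': hands[0], 'right': hands[1]}
    if len(hands) == 1:
        side = 'left' if hands[0][0] < 0 else 'right'
        return {side: hands[0]}
    return {}
```

It obviously fails if the user crosses their hands, which is part of why the official API support was welcome.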



I went through several iterations, including using threads to ensure a smooth experience. In the end, however, I settled for a thread-less solution, since it didn't require hand-position detection at the start. It was a good learning experience, especially for UI design. I could see that this solution wasn't really ideal, since the hands get very busy, though it was accurate enough to implement the choose-and-confirm paradigm employed by the mouse.

I know that further development of the UI paradigm is required to improve applications of the Leap Motion.


Client: Personal Project
Year: 2014
Location: web; access it using Chrome or Firefox
Tools: headtrackr.js
OS: multi-platform


As part of my contribution towards promoting voting during the general election, I created this piece called "cARpres". The name is a play on the words "capres" (contender for the presidency) and AR (Augmented Reality).

In this piece, you can swap your face with the face of one of several capres (more to come, I promise). You can also read tweets regarding the chosen capres, so you can see the general perception of that figure. Looking at this, imagine you are the capres you chose, seeing how people perceive you.

Technically speaking, this piece taught me to implement AR in the browser by means of the HTML5 Canvas, the headtrackr.js library, and WebRTC. I've been wanting to play with this technology for some years now, and luckily WebRTC has been growing in a very good way, to the point that it can now be used for various purposes. Playing with it was quite intuitive for somebody who usually develops in Processing or openFrameworks, and it has served me well as an introduction to playing with it more in the future. I made it open source too, so people can develop it further.
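headtrackr.js reports a tracked face rectangle each frame; swapping in a capres face is then mostly a matter of scaling the overlay image onto that rectangle before drawing it to the canvas. A Python sketch of the placement math (parameter names and the enlargement factor are my own):

```python
def overlay_rect(face_x, face_y, face_w, face_h, img_w, img_h, scale=1.2):
    """Fit an overlay image over a tracked face rectangle.

    (face_x, face_y) is the face center as reported by the tracker.
    The overlay keeps its own aspect ratio and is enlarged slightly
    (`scale`) so it covers the whole face. Returns (x, y, w, h) with
    (x, y) the top-left corner to draw at.
    """
    w = face_w * scale
    h = w * img_h / img_w          # preserve the overlay's aspect ratio
    return (face_x - w / 2, face_y - h / 2, w, h)
```

In the browser version the equivalent happens per frame in the canvas draw loop.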

Again, I hope this piece can bring a lighter feeling to the general election. I see it as my own way of countering the heated situation on the internet, where everybody (for better or for worse) starts defending his or her own choice, which sometimes leads to bitter arguments, all there to read on Facebook or Twitter. I do think we should all keep our heads cool and properly vote for the next Indonesian president.

Welcome to the democracy party, where we, the people, have the power to choose who will lead us for the next 5 years. Please vote. Don't waste this opportunity.

More Work for Nike

Here's a quick recap of 3 projects I've done with Nike in the past month, arranged by date.

Interactive display for Nike Malaysia Booth at Stadion Bukit Jalil, Kuala Lumpur, Malaysia.
Date: 23 July 2012

Nike Malaysia wanted the exact same content we had previously developed for the Nike Senayan City store, for their booth at the Arsenal vs Malaysia friendly match. So we flew there with our content, had the Nike Malaysia guys set up the required hardware, and after 2 days of work had it all set up properly.

Picture 1-5.

New Interactive Display Content for Nike Senayan City Store.
Date: 5 August 2012
A content update for the Nike Senayan City store. This time we wanted to display not only the triggered video, but also an image of the actual people playing in front of it. Keeping with the triangle theme of the triggered video, we decided to show the person in a triangulated form; in addition, the person can also create triangulated shapes using his or her hands.
This was made using vvvv on Windows 7.

Picture 6-7.

Treadmill Visualization for Nike Run Event at Grand Indonesia
Date: 15 August 2012
For this event, Nike wanted us to deliver 2 things: a displayed output of their Nike Run mobile app, showing how far the runner on the treadmill has gone, and a reactive visualizer that reacts to the runner's speed. I took charge of the former and used a Kinect to do frame differencing, which in turn dictates the speed of the displayed grid and particles to create a sort of sci-fi warping effect. This was made using Processing on Mac OS X 10.6.
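Frame differencing here just means comparing consecutive camera frames: the average pixel change becomes the "speed" that drives the grid and particles. A simplified Python version of the idea (the project itself did this in Processing on the Kinect image; the gain and cap are invented):

```python
def frame_difference(prev, curr):
    """Mean absolute per-pixel difference between two grayscale frames,
    given as equal-length lists of 0-255 values. A runner moving fast
    changes many pixels, so this value roughly tracks speed.
    """
    total = sum(abs(a - b) for a, b in zip(prev, curr))
    return total / len(curr)

def warp_speed(diff, gain=0.1, max_speed=20.0):
    """Map the raw difference to the scroll speed of the grid/particles,
    capped so noisy frames can't make the visuals explode."""
    return min(diff * gain, max_speed)
```

Each frame, the previous frame is kept around, the difference is computed, and the resulting speed feeds the warp animation.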

Picture 8-14.

Gestural Automated Phone System

Tools: Kinect, Android Phone
Software: custom-made using Processing for both the MacBook and the Android
OS: Mac OS X 10.6
Year: 2012

Now, here's something I choose not to participate in: fanboyism. You know, these days the internet is filled with fanboys. Each freely voices his or her opinion, mostly directed towards disparaging companies that are not their preference. Long story short, you have Apple fanboys, Microsoft fanboys, Google fanboys, and Linux fanboys, among others. The funny thing is, not all of them actually know the capabilities of the devices they regard so majestically (which ultimately turned them into fanboys in the first place). Very rarely are they able to code (Linux fanboys excluded, as per the previous sentence). So I find it very funny to worship a company for releasing a device you've never dived into. Anyway, that's my 2 cents.

So, pardon the long intro. 3 days ago, I acquired an Android phone, a Sony Ericsson Xperia Live. Cheap, but with quite a good spec. It's my first Android phone. That night I quickly did some research and found that I can actually create a program using Processing and run it straight on my Android phone. My mind was boggled; my imagination ran wild. After a quick hello world, I decided to combine my previous Kinect knowledge to control this wonderful phone. As a response to the previous paragraph's rambling, I aimed to combine products from Apple (MacBook), Microsoft (Kinect), and Google (Android) into one system. As I've said before, I'm no fanboy. I admire every good piece of technology, no matter the vendor.

So, in general, what I have here is a gestural automated phone system. I named it myself. Sounds horrible. It's a system that lets me make a phone call without touching the phone, entering the number (or choosing from the address book), or pressing the call button. The action is triggered by a gesture detected by the Kinect. In short, I'm making a system where my hand movement makes my phone call another phone, without me touching it. Sound clear?

Under the hood, I have 2 pieces of software running at the same time. The first runs on the MacBook and does the Kinect gesture detection. The second runs on the phone; it receives a command from the MacBook and then makes a phone call. So if I move my hand towards the Kinect, my hand is detected, and by moving it to the right corner of the screen, the MacBook sends a command to the phone. The command is sent using the OSC (Open Sound Control) protocol, which requires both the phone and the MacBook to be on the same network. Upon receiving the command, the phone makes a phone call. Here's the demo (turn the volume UP!):
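The gesture side reduces to a hot-zone test: once the tracked hand enters the right corner, send the OSC message once, not on every frame. A Python sketch of that logic (the zone bounds and OSC address are illustrative, and `send` stands in for the actual OSC library call):

```python
class CallTrigger:
    """Fire a single 'call' command when the hand enters a corner zone.

    Coordinates are normalized screen space (0..1). The `armed` flag
    prevents re-triggering while the hand lingers in the zone.
    """
    def __init__(self, send, zone_x=0.8, zone_y=0.8):
        self.send = send          # e.g. a function wrapping an OSC client
        self.zone_x = zone_x
        self.zone_y = zone_y
        self.armed = True

    def update(self, hand_x, hand_y):
        in_zone = hand_x >= self.zone_x and hand_y >= self.zone_y
        if in_zone and self.armed:
            self.send('/phone/call')  # hypothetical OSC address
            self.armed = False
        elif not in_zone:
            self.armed = True         # re-arm once the hand leaves
```

Without the re-arm logic, a hand held in the corner would flood the phone with call commands at frame rate.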

That video serves as a proof of concept, a crude demo achieved after a night of rapid prototyping. And yes, this is why I love Processing: it's a perfect platform for prototyping a rough concept. Of course, I can see many improvements to be made, but for now, here's what I have.

I can see this system deployed with the phone attached to a user who has a hands-free device available. Imagine waving your hand and making a call without having to reach into your pocket first. Hmm. Sounds like a scene from Iron Man. 🙂

Apple, Microsoft, Google living in harmony.

Augmented Reality Demo

Every now and then, I get a question like this: "Hey, can you do Augmented Reality stuff?" Whatever I feel about Augmented Reality (AR), it's without a doubt one of the most underutilized-but-famous interactive technologies these days. So, in order to get my hands dirty with what today's AR technology has to offer, I created two demos using different approaches and tools.

The first demo is marker-based AR made using Quartz Composer. In this video, I create a piece of software that detects a marker and displays a cube on top of it. To make it a bit more interesting, I set up an audio analyzer so that the cube's size is determined by the volume of the music coming into my laptop. While the tracking system keeps working, the cube dances to the music. This is meant to show that the animation in an AR application doesn't have to be static; I believe an interactive animation adds depth to the AR app itself. Here's the demo:
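The audio-reactive sizing is just a mapping from measured volume to cube scale, with some smoothing so the cube doesn't jitter. A Python sketch of the idea (the demo does this inside Quartz Composer, and all the constants here are illustrative):

```python
def smooth_scale(prev_scale, volume, base=1.0, gain=3.0, smoothing=0.8):
    """Ease the cube's scale toward a volume-driven target.

    `volume` is assumed normalized to 0..1. `smoothing` close to 1
    keeps more of the previous value, giving the cube a softer 'dance'.
    """
    target = base + gain * volume
    return smoothing * prev_scale + (1 - smoothing) * target
```

Called once per frame with the analyzer's current volume, this yields the pulsing-but-smooth motion seen in the video.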

The second demo is marker-less AR using the Kinect. This time, I wanted to make an AR app without using a marker. I decided to use the Kinect as the input, because it can accurately detect my body parts without my having to set up things like a static background. In this app, particles are created on top of my detected body parts, like a wizard waving his hands to emit his magic power. A simple demo, but it shows that a Kinect can be used to build an AR system without relying on a marker, resulting in a more natural way to interact. Let's face it, nobody walks around carrying a marker 🙂 Here's the demo:

Interactive Wall for Hero

Client: Hero
Year: 2012
Location: Hero Award Night at Sampoerna Strategic Square
Tools: custom software made in C++ and Adobe Flash Builder
Hardware: Kinect
OS: Snow Leopard

An interactive wall made for Hero, a local retail chain. When someone walks in front of it, a picture appears depicting the long history of the company; different pictures appear as the person walks along the wall display.

This work utilises 2 Kinects, as I had to detect a person along a 6 m screen in quite a narrow space, probably 2 m between the screen and the opposite wall. With that, I had no choice but to use 2 Kinects, placed high up and separated so they'd cover as much space as possible.
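With two Kinects side by side, each camera sees its own slice of the 6 m wall, so a detected person's position has to be mapped from camera-local coordinates into one shared wall coordinate. Roughly, in Python (the view width and overlap are assumptions, not the measured setup):

```python
def wall_position(kinect_index, local_x, view_width=3.2, overlap=0.2):
    """Convert a position from one Kinect's view into a wall coordinate.

    `local_x` is 0..1 across that Kinect's image; each Kinect covers
    `view_width` meters of wall, and adjacent views overlap a little
    so a person isn't lost at the seam.
    """
    offset = kinect_index * (view_width - overlap)
    return offset + local_x * view_width
```

The resulting wall coordinate is what selects which history picture to show.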

I only had a short time to complete this project; luckily the client was kind enough to pair me with Erin, another developer, who it turned out had already created the content for the display. So my job was clear: get input from the Kinects and feed it to the content. Erin used Adobe Flash Builder, so I opted to create a program that detects people and sends each detected person's position using TUIO, which I knew already has a ready-made library for Flash.

The result is as you can see in the video above.

Interactive Displays for Nike

Client: Nike
Year: 2012
Location: Nike Store, Senayan City Mall, Jakarta, Indonesia
Tools: Arduino and Processing (Interactive Product Display). custom software and TouchDesigner (Interactive Wall Video Display)
Hardware: Arduino and proximity sensor (Interactive Product Display) 2 Kinects (Interactive Wall Video Display)
OS: Windows 7

Here are two new installations I did for Nike to promote their 4 new football shoes, coinciding with the Euro 2012 football tournament. One is an interactive product display and the other is an interactive wall video display. Generally speaking, each triggers videos in its own way. Both are deployed in the Nike Store in Senayan City Mall in Jakarta.

For the interactive product display, a customer can pick up a shoe from its display box, which triggers an informational video about that product. This happens for all 4 shoes. Each shoe is placed on top of a computer (an integrated monitor + CPU Lenovo model). A proximity sensor connected to an Arduino detects the shoe's position to determine whether it's lifted or not; upon lifting, a video is triggered. This was programmed using the new Processing 2.0a6 in order to achieve smooth 720p video playback using its built-in GStreamer video back end.
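The lift detection is a threshold on the proximity reading; with a noisy sensor, a bit of hysteresis keeps the video from re-triggering while the shoe hovers near the threshold. A Python sketch of that logic (the thresholds are invented; the real version lives in the Processing sketch reading the Arduino over serial):

```python
class LiftDetector:
    """Detect a shoe being lifted from its box via a proximity reading.

    Readings are distances from sensor to shoe; a larger value means
    the shoe is farther away (lifted). Two thresholds give hysteresis
    so noise around a single threshold can't flicker the state.
    """
    def __init__(self, lift_at=10.0, rest_at=5.0):
        self.lift_at = lift_at
        self.rest_at = rest_at
        self.lifted = False

    def update(self, distance):
        """Return True exactly once, on the reading where a lift is detected."""
        if not self.lifted and distance > self.lift_at:
            self.lifted = True
            return True                # trigger the product video here
        if self.lifted and distance < self.rest_at:
            self.lifted = False        # shoe placed back on the box
        return False
```

The single `True` per lift is what starts the video once, instead of restarting it every frame the shoe is up.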






The interactive wall video display triggers videos based on the audience's position in front of the store's window display. Again, the 4 shoes are placed in front of the 3 m x 1 m video display, so if somebody stands in front of a shoe, the corresponding video is triggered and played on the display. 2 Kinects are used to detect people over quite a wide space. A custom piece of software stitches the images from both Kinects and does blob tracking, sending each blob's position to TouchDesigner to trigger the different videos. TouchDesigner was chosen for its amazing ability to play back hi-res video without burdening the computer's CPU, since it works on the GPU.
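After stitching, deciding which of the 4 videos to trigger is just quantizing a blob's x position into 4 zones along the display. A Python sketch of that last step (the 3 m width is from the setup above; equal-width zones are my assumption):

```python
def video_for_blob(blob_x, display_width=3.0, zones=4):
    """Map a blob's x position (meters along the display) to one of
    `zones` video slots, one per shoe. Assumes equal-width zones."""
    zone = int(blob_x / display_width * zones)
    return min(max(zone, 0), zones - 1)  # clamp people at the edges
```

The chosen slot index is what gets sent on to TouchDesigner to switch videos.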

It’s still on display until the end of June, so if you’re in Jakarta, hop in for a ride and grab a Nike product while you’re at it.