Gestural Robot – Project Log pt. 1

Last week, in spare time between preparing material for coding classes at Framework, I made a small prototype of a finger-gesture-controlled robot. Gestural control has fascinated me for quite a while now, and since I had a robot kit sitting idle, I thought this would be a good time to use gestures as input to something other than pixel-based material.

I use a Leap Motion as the sensor, have Processing read the finger-count data and transmit it over serial to an Arduino, which in turn orchestrates which wheel to move and its respective direction. The robot is a two-wheel kit driven by an L298 motor shield.
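To make the pipeline concrete, here is a minimal sketch of the finger-count-to-motor mapping, redone in Python with pySerial instead of Processing. The single-character command protocol and the port name are illustrative assumptions, not the actual protocol used in the prototype.

```python
def finger_count_to_command(fingers):
    """Map a detected finger count to a drive command character.

    These commands are hypothetical; the Arduino side would decode them
    and drive the L298 channels accordingly.
    """
    commands = {
        0: 'S',  # no fingers: stop both wheels
        1: 'F',  # one finger: both wheels forward
        2: 'L',  # two fingers: right wheel only, so the robot turns left
        3: 'R',  # three fingers: left wheel only, so the robot turns right
        4: 'B',  # four fingers: both wheels backward
    }
    return commands.get(fingers, 'S')  # anything unexpected defaults to stop


def send_for_fingers(port, fingers):
    """Encode and write the command for the given finger count.

    `port` is any object with a write() method, e.g. a pySerial
    serial.Serial("/dev/ttyUSB0", 9600) instance (port name hypothetical).
    """
    port.write(finger_count_to_command(fingers).encode())
```

On the Arduino side, a matching switch statement over the received byte would set the L298 input pins for each wheel.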


Working with Mixed Reality, Mixing AR and VR

Mixed Reality is a term for technology that mixes our everyday physical reality with something virtual. Lately, the term has mostly been realized in the form of Augmented Reality (AR), which overlays virtual content on top of real objects, viewed through an external display. On the other hand, over the past two years the development of VR has been accelerating swiftly, mainly because of the rise of the Oculus Rift, a Head-Mounted Display (HMD) that lets its user experience a Virtual Reality (VR) world.


Speaking At GNOME.Asia 2015

This is long overdue; my bad, I should’ve written this months ago. Haha. Anyway, I had the chance to speak at GNOME.Asia 2015, a regional-level conference on GNOME and open source software in general. In case you didn’t know, GNOME is one of the available desktop environments for Linux-based OSes. If you’ve used (or still use) Linux over the past few years, chances are your application windows (among other things) are managed by GNOME. That’s how important GNOME is. Therefore, it was such an honour to be able to speak here, even though I didn’t register until the very last day of abstract submission.


99 Names

99 Names is a web VR experience that exhibits the 99 Names of Allah in an immersive 3D space. This is my first foray into both Virtual Reality applications and WebGL programming through ThreeJS. It’s a simple application: users open the web page in their phone or desktop/laptop browser and are instantly immersed in the experience, surrounded by rotating circles displaying the 99 names of Allah.
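The core layout of the scene, names distributed on rings circling the viewer, is simple geometry. Here is a sketch of it in Python (the actual piece computes this in ThreeJS); the ring count, radius and spacing are illustrative guesses, not the values used in the piece.

```python
import math

def ring_positions(n_names=99, rings=3, radius=5.0, ring_height=1.5):
    """Distribute n_names evenly across stacked rings around the viewer.

    Returns a list of (x, y, z) positions; the viewer stands at the origin.
    Each name would then be rendered as a text mesh facing inward.
    """
    per_ring = math.ceil(n_names / rings)
    positions = []
    for i in range(n_names):
        ring = i // per_ring
        index_in_ring = i % per_ring
        angle = 2 * math.pi * index_in_ring / per_ring
        x = radius * math.cos(angle)
        z = radius * math.sin(angle)
        y = (ring - (rings - 1) / 2) * ring_height  # center rings vertically
        positions.append((x, y, z))
    return positions
```

Rotating the whole group around the y axis each frame gives the circling effect.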

99 Names

The skeleton of the project was built using the Web VR Boilerplate, which ensures that everyone can get a grip on the experience, whether they’re on a desktop or a smartphone, with or without a virtual-reality head-mounted display such as the Oculus Rift or Google Cardboard. All within the web browser, no application to install. I think this is a good way to evangelize VR, since at this point the VR world really needs a whole lot of applications to expose its features and shortcomings.

I had so much fun making it. The boilerplate makes it really easy to develop a VR experience, so I could focus on the main content, which was all made using ThreeJS. Though I had read a lot about it in the past (it has been on my to-learn list for about three years now, haha), this is actually the first time I’ve learned it thoroughly. I can say that the best way to learn a programming language or library is by making things with it. So far, I’ve learned a lot about the 3D pipeline, which makes me wonder: why didn’t I do this years ago?

However, from the interaction design point of view, I realize that catering a VR experience to both kinds of platforms (desktop and smartphone) is tricky. For example, in smartphone-based VR, user input is limited. Not all phones can use the magnetic input from Google Cardboard, something that will hopefully be rectified by Google Cardboard 2. I’m not sure about other HMDs; maybe you, dear reader, have more data?

On the other hand, I can offer a plethora of inputs in the desktop version, since the user can use a keyboard, mouse, joystick, or other devices to give input to the application, something that obviously won’t map precisely to the smartphone counterpart. I did run into the vreticle library, which should help me build a gaze-input system for VR, but I still found some trouble implementing it.

Therefore, at this point the experience is a passive one: no user input is involved. But I do hope to add some at some point.

99 Names can be accessed on the web. Play with it and let me know what you think.

Meanwhile, here are some screenshots of the steps I took in making it.




Leap Motion Synth

I helped a friend develop a simple tone generator as a medium for experiential music learning for kids. He wanted to use a Leap Motion so kids could use finger gestures to generate tones while learning the pitch of the notes.

Leap Synth

This was a good experience for me, as I wanted to learn more about designing UI for gestural input devices such as the Leap Motion. This time, I proposed this scenario:

  1. Use the right-hand index finger to choose which note to trigger.
  2. Use the left hand to start and stop notes: when the palm closes, a note is triggered; when the hand opens, the note stops playing.
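The two-hand scenario above can be sketched as a small state machine. This is an illustrative Python model, not the Processing code I shipped; it assumes the Leap frame gives a normalized grab strength (0.0 = open palm, 1.0 = fist) for the left hand and an index-finger x position for the right, which matches how the Leap API reports hands.

```python
NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def pick_note(index_x, x_min=-150.0, x_max=150.0):
    """Map the right index finger's x position (mm) to one of the notes."""
    t = (index_x - x_min) / (x_max - x_min)
    slot = int(t * len(NOTES))
    return NOTES[max(0, min(slot, len(NOTES) - 1))]

class NoteTrigger:
    """Turn left-hand open/close transitions into note on/off events."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold  # grab strength above this counts as "closed"
        self.playing = None

    def update(self, grab_strength, note):
        """Call once per Leap frame; returns an event tuple or None."""
        closed = grab_strength >= self.threshold
        if closed and self.playing is None:
            self.playing = note
            return ("note_on", note)
        if not closed and self.playing is not None:
            stopped = self.playing
            self.playing = None
            return ("note_off", stopped)
        return None  # no transition this frame
```

Each `note_on`/`note_off` event would then drive the tone generator.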

As with previous projects, I used Processing for development, as I can easily export it as a Windows application so he could deploy it without much hassle. The main challenge was getting Processing to detect which hand is right or left. In the end, I decided to detect hand position relative to the Leap Motion. After that, the finger detection and tracking were done. Mind that this was done in May 2014; several months later, Leap Motion released a new API which provides an easier way to detect the left/right hand. Ha!
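The position-based workaround can be sketched like this (illustrative Python, not the original Processing code): classify palms by their x position relative to the controller's center. It breaks down when both hands cross the midline, which is one reason the official handedness detection in the later API was so welcome.

```python
def classify_hands(palm_xs):
    """Label each palm x position (mm, Leap coordinates, device center at 0).

    With two hands, the leftmost palm is 'left' and the rightmost 'right';
    with one hand, fall back to the sign of x.
    """
    if len(palm_xs) == 2:
        left, right = sorted(palm_xs)
        return {left: "left", right: "right"}
    return {x: ("left" if x < 0 else "right") for x in palm_xs}
```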



I went through several iterations, including using threads to ensure a smooth experience. In the end, however, I settled for a thread-less solution, since it didn’t require hand-position detection at the start. It was a good learning experience, especially for designing UI. I saw that this solution wasn’t really ideal, since the hands became very busy, though it was accurate enough to implement the choose-and-confirm paradigm employed by the mouse.

I know that further development of the UI paradigm is required to improve applications of the Leap Motion.


Kayubot is a desk pencil holder that reminds you to open your email. It lights up whenever a new email arrives, so its user doesn’t have to keep checking email back and forth.

Why not just check email yourself? And what’s wrong with email notifications on a computer or smartphone screen? Nothing, except that when we decide to check email that way, we open a wider door to distraction and automatically reduce our productivity. How many times has a Facebook tab been opened right after opening the inbox? Sound familiar?

Kayubot tries to offer a solution to that problem. Its user can simply focus on work and check email only when they know for sure that an important email has arrived. It can be configured to check email at certain intervals or hours, and to light up only when certain incoming-email criteria are met.
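A minimal sketch of how such a criteria check could work, assuming a simple sender/keyword filter; the filter rules here are illustrative, not Kayubot's actual implementation.

```python
def matches_criteria(sender, subject, important_senders, keywords):
    """Return True when an incoming email should light the lamp."""
    if any(s in sender for s in important_senders):
        return True
    return any(k.lower() in subject.lower() for k in keywords)

def check_inbox(messages, important_senders, keywords):
    """messages: list of (sender, subject) tuples fetched from the mailbox."""
    return any(
        matches_criteria(sender, subject, important_senders, keywords)
        for sender, subject in messages
    )
```

In the device, a loop would fetch unseen messages (e.g. with Python's imaplib) every few minutes and send a byte over serial to the Arduino whenever `check_inbox` returns True.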

Need something to satisfy your nerd cravings? Well, the Kayubot prototype was built with an Arduino (you guessed it). Its control-panel application was built with Python, using Kivy as the UI framework.

Kayubot is currently still in the prototype phase, with production for the general public hopefully starting in mid-2015. Wish us luck.

Meanwhile, here is the Kayubot prototype.











Client: Personal Project
Year: 2014
Exhibited at: Typotopia Exhibition – Lotte Shopping Avenue, Jakarta (2014)
Tools: Processing, Arduino
OS: Mac OS X

“Peradaban” (English: “Civilization”) is an interactive typography installation that I made for Typotopia, which, as the name suggests, is an art exhibition focusing on typography across various media, not just graphic design. The exhibition, held as part of the Indonesia-Korea Festival, ran from 8 to 31 October 2014 and featured artwork from Indonesian and Korean artists, some of whose names I was quite familiar with, so it was an honour to share the space with them.

In “Peradaban”, I visualize the word “Peradaban” as a set of particles. Visitors could interact with the word by pouring water into a container in front of the screen. The word then reacted to that action by moving algorithmically, so that in the end the letters, and the word, would take on a different shape altogether. Since the movement was programmed to be quite random, no two actions would result in the same shape.

This artwork represents how I view civilization: as a combination of human actions and their consequences, together with the biological condition of the surrounding nature. That’s why, say, Indonesia has a different civilization from Russia’s, partly because of the different climate, among other things. That’s the point people tend to forget. We have become so selfish that we believe our own bare hands can change the course of history, without caring much about nature anymore. I hope this installation makes viewers rethink the position of nature and other biological entities.

I programmed the artwork in Processing, as usual, using several libraries to grab points from the font and animate them, and the serial library to interface with the Arduino. On the hardware side, I used a water-flow sensor, read the change in the current flowing through it, and sent that value to Processing to change the parameters of the particles’ movement. Pretty simple, and I could accomplish it in a matter of days.
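The per-particle logic can be sketched like this (illustrative Python; the real piece was written in Processing and the constants are invented): each particle is pulled back toward its sampled point on the letterform, while the flow-sensor reading scales a random jitter that pushes it away.

```python
import random

def update_particle(pos, target, flow, spring=0.05, jitter_scale=2.0, rng=random):
    """Move a particle one step toward its target, perturbed by water flow.

    pos, target: (x, y) tuples; target is the particle's point on the glyph.
    flow: the water-flow sensor reading, 0 meaning no water poured.
    """
    x, y = pos
    tx, ty = target
    # Spring force pulls the particle back to its point on the letter outline.
    x += (tx - x) * spring
    y += (ty - y) * spring
    # The flow reading scales a random displacement, scattering the word.
    x += rng.uniform(-1, 1) * flow * jitter_scale
    y += rng.uniform(-1, 1) * flow * jitter_scale
    return (x, y)
```

With `flow` at zero the word slowly reassembles; pouring water scatters it, and because the jitter is random no two pours produce the same shape.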

Here are pretty pictures from the exhibition. Happy sightseeing 🙂


Client: Personal Project
Year: 2014
Location: web; access it using Chrome or Firefox
Tools: headtrackr.js
OS: multi-platform


As part of my contribution towards promoting voting during the general election, I created this piece called “cARpres”. The name is a play on the word “capres” (a contender for the presidency) and AR (Augmented Reality).

In this piece, you can swap your face with the faces of several capres (more to come, I promise). Plus, you can read tweets about the chosen capres, so you can see the general perception of that figure. Looking at this, imagine you are the capres you chose, seeing how people perceive you.

Technically speaking, this piece taught me to implement AR in the browser by means of the HTML5 Canvas, the headtrackr.js library and WebRTC. I’ve been wanting to play with this technology for some years now, and luckily WebRTC has been maturing so well that it can now be used for various purposes. Playing with it was quite intuitive for somebody who usually develops in Processing or openFrameworks, and it has served as a good introduction for playing with it more in the future. I made it open source too, so people can develop it further.
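The piece itself runs in the browser, but the core of the face swap, fitting the capres image onto the tracked head, boils down to mapping the tracker's head rectangle to a draw call on the canvas. A sketch of that mapping in Python (the function and parameter names are illustrative):

```python
def overlay_rect(head_x, head_y, head_w, head_h, img_aspect, scale=1.2):
    """Compute the (x, y, w, h) at which to draw the replacement face.

    head_x, head_y: center of the face rectangle reported by the tracker.
    head_w, head_h: its width and height in pixels.
    img_aspect: width/height ratio of the replacement face image.
    scale: enlarge slightly so the overlay fully covers the real face.
    """
    h = head_h * scale
    w = h * img_aspect  # preserve the image's aspect ratio
    return (head_x - w / 2, head_y - h / 2, w, h)
```

In the browser version the same numbers feed the canvas `drawImage` call each frame.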

Again, I hope this piece can bring a lighter feeling to the general election. I see it as my own way of countering the heated situation on the internet, where everybody (for better or for worse) starts defending his or her own choice, which sometimes leads to bitter arguments, all available to read on Facebook or Twitter. I do think we should all keep our heads cool and properly vote for the next Indonesian president.

Welcome to the democracy party, where we, the people, have the power to choose who will lead us for the next five years. Please vote. Don’t waste this opportunity.


Client: Abdul Dermawan Habir & Co.
Year: 2013
Location: A Study of Light Exhibition. Exhibited in Dia.lo.gue Artspace, Kemang, Jakarta
Tools: custom software made in C++ (openFrameworks and ofxTimeline addon)
Hardware: Arduino, Arduino Expansion Shield, Relay Module, Transistor, LED lamps
OS: Snow Leopard

1903 is a light installation that combines elements of audio, sculpture, light and fashion into a single installation. The creator, Abdul Dermawan Habir, contacted me several months ago through this very website. He was looking for somebody who could program synchronized lamps and sound. Of course, I was up for it.

The installation tells a murder story through a series of voice-overs and mannequins. Its technical scenario is quite simple: every time a voice-over plays, some lamps are turned on to highlight the mannequin representing the speaking character. In addition, one lamp is dimmed at certain points in the storyline for extra dramatic effect. The installation is set up in a closed room; to see it, the audience has to peep through a small hole, press a doorbell and listen through headphones.

As you may expect, this installation uses an Arduino and relay modules to switch a total of nine 220 V lamps, plus an NPN transistor and a resistor to dim a 5 V LED lamp. To control the Arduino, I used openFrameworks and its Firmata library, plus the very helpful ofxTimeline addon, which provides the GUI. With this, I can easily dictate which Arduino pin should be turned on at which point in the dialogue. Mind you, this isn’t audio-reactive, so I had to be very precise about where to put the pin on/off commands.
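Stripped of the GUI, the timeline reduces to a cue list: at a given second of the voice-over, a given pin changes state. A simplified model in Python (the installation used openFrameworks and ofxTimeline; the pin numbers and cue times here are invented):

```python
# Each cue is (time_in_seconds, arduino_pin, value): 1 = relay on, 0 = off.
CUES = [
    (0.0, 2, 1),   # story starts: lamp on pin 2 lights the first mannequin
    (4.5, 2, 0),   # first character finishes speaking
    (4.5, 3, 1),   # next character's lamp comes on at the same moment
    (9.0, 3, 0),
]

def pin_states_at(t, cues):
    """Replay all cues up to time t and return each pin's current state.

    A playback loop would call this (or step through the cues) against the
    audio clock and write the states out via Firmata.
    """
    states = {}
    for cue_t, pin, value in sorted(cues):
        if cue_t <= t:
            states[pin] = value
    return states
```

Because nothing here is audio-reactive, syncing is entirely a matter of placing these cue times precisely against the recorded dialogue, exactly the fiddly part ofxTimeline's GUI makes bearable.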

There is a more detailed story about why I went with this solution, but I’ll leave that for a later post. Meanwhile, you can see pictures from the installation below, including behind-the-scenes (literally) images. It was exhibited at Dia.lo.gue artspace in Kemang, Jakarta, from 12 to 23 January 2013.