Leap Motion Synth

I helped a friend develop a simple tone generator as a medium for experiential music learning for kids. He wanted to use the Leap Motion so kids could use finger gestures to generate tones while learning the pitch of the notes.

Leap Synth

This was a good experience for me, as I wanted to learn more about designing UI for a gestural input device such as the Leap Motion. This time, I proposed the following scenario:

  1. Use the right hand's index finger to choose which note to trigger
  2. Use the left hand to start and stop the note: when the palm is closed, the note is triggered; when the hand is opened, the note stops playing

As with previous projects, I used Processing for development, since I could easily export the result as a Windows application that he could deploy without much hassle. The main challenge was getting Processing to detect which hand was left and which was right. In the end, I decided to detect each hand's position relative to the Leap Motion device. After that, finger detection and tracking were straightforward. Mind you, this was done in May 2014; several months later, Leap Motion released a new API that provides an easier way to detect the left/right hand. Ha!
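The workaround can be sketched as follows: the Leap Motion's origin sits at the centre of the device, so a palm's x coordinate tells you which side of the device it is on, and a fist can be approximated as a hand with no extended fingers. This is a minimal sketch of the logic only; the `Hand` record and helper names here are hypothetical, not the Leap Motion API.

```java
// Sketch of the left/right workaround used before the API gained a
// handedness flag. The Hand record and helpers are hypothetical; in the
// real sketch these values came from the Leap Motion tracking frame.
public class HandSide {
    record Hand(float palmX, int extendedFingers) {}

    // Negative x = left of the device, positive x = right of the device.
    static boolean isLeft(Hand h)  { return h.palmX < 0; }
    static boolean isRight(Hand h) { return h.palmX >= 0; }

    // Closed left palm (no extended fingers) triggers the note;
    // opening the hand again would stop it.
    static boolean shouldPlay(Hand leftHand) {
        return leftHand.extendedFingers == 0;
    }

    public static void main(String[] args) {
        Hand left  = new Hand(-80.0f, 0);  // fist held on the left side
        Hand right = new Hand(120.0f, 1);  // index finger on the right side
        System.out.println(isLeft(left) && isRight(right)); // true
        System.out.println(shouldPlay(left));               // true
    }
}
```

With the 2014 v2 API this whole check collapses into the SDK's own handedness query, which is exactly why the release felt ironic.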



I went through several iterations, including one using threads to ensure a smooth experience. In the end, however, I settled on a thread-less solution, since it didn't require hand-position detection at the start. It was a good learning experience, especially for designing the UI. I could see that this solution wasn't really ideal, since the hands became very busy, though it was accurate enough to implement the choose-and-confirm paradigm employed by the mouse.

I know that further development of the UI paradigm is required to improve applications of the Leap Motion.


Meet Robobrain. He's the head of a robot. He doesn't listen; he talks through his eyes. He talks with noises, but you are free to interpret what you want to hear. You can see through his brain, but you may not be able to perceive what you see with your naked eyes and ears. He tries to communicate with you through common objects, yet what you'll get is something completely uncommon. That is Robobrain.

This is Robobrain

The Concept
Robobrain is my interactive sound art installation, made for the sound art exhibition "Derau" held by Bandung's new media art community, Common Room. In this installation, I try to de-construct and construct how communication works today. With so much noise happening, disinformation is bound to happen. Today's society tends to see with what it hears, and that plays a major role in how people perceive something. They hear something bad about a thing and, even before they see it, they already have a bad conception of it. That's why I use a robot as a form: I can then freely construct and de-construct these metaphoric elements through its appearance while it remains strangely familiar to the audience.

I also like to play with how it looks versus how it sounds. At its essence, Robobrain is just a noise generator, but I tried to make it look less complicated and a little bit childish by using common objects such as cardboard, a plastic food case and leaves as its main construction materials. In the end, visually, it looks like nothing dangerous, but once you hear it, it becomes something quite alien. I also employ a small guitar amplifier as its sound output, left clearly visible, because I want to give the impression that this alien robot is trying to communicate through something people are used to seeing.

This installation is created using only an Arduino board and some electronic components. No laptop or sound file playback is involved.

Inside Robobrain

The Sound and Interaction
The sound in this installation is generated by changing the input voltage coming into the Arduino. Some of these changes are created by rotating the three potentiometers. Others are achieved by sensing the sound of the plastic and leaves moving in the robot's brain. That movement is created by a PC fan, so it is always random, and that randomness creates further variety in the resulting sound. Thus, even with the same knob positions, the generated sound is never the same. The sound plays continuously; it's not a loop, because it never returns to the original sound. This is an analogy to how information is transmitted: there is a degree of noise that can alter the original information. Even when one person talks to several people at the same time, the information those listeners receive may differ.
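The core of this voltage-to-sound idea is a simple map from the Arduino's 10-bit ADC range to an oscillator frequency. The sketch below illustrates that mapping in Java; the 100–2000 Hz range is a hypothetical choice for illustration, not the installation's actual tuning.

```java
// Sketch of mapping a 10-bit ADC reading (0-1023), as produced by a
// potentiometer or the fan-driven sensor, onto an oscillator frequency.
// The frequency range passed in main() is hypothetical.
public class KnobToFreq {
    static final int ADC_MAX = 1023;

    // Linear map from ADC counts to a [fMin, fMax] frequency range (Hz).
    static double toFrequency(int adc, double fMin, double fMax) {
        adc = Math.max(0, Math.min(ADC_MAX, adc)); // clamp noisy readings
        return fMin + (fMax - fMin) * adc / (double) ADC_MAX;
    }

    public static void main(String[] args) {
        System.out.println(toFrequency(0, 100.0, 2000.0));    // 100.0
        System.out.println(toFrequency(1023, 100.0, 2000.0)); // 2000.0
    }
}
```

Because the fan-driven input is random, the ADC value, and therefore the frequency, drifts even when the knobs are untouched, which is what keeps the sound from ever repeating exactly.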

The audience can interact with Robobrain through the knobs on Robobrain's face. These three knobs let the audience change the oscillator frequency, pitch and tempo of the generated noise. The resulting sound ranges from something quite futuristic (imagine the swoosh of a spaceship) to downright abstract noise. There's also an LED that turns on and off with the frequency of the oscillator, as if pulsing to the tempo of the sound. These three elements (knobs, interactive sound and LED) are what I set up as the interaction objects of this installation.

Interacting with Robobrain

As an interface, knobs are something people are familiar with, so people can easily afford to use them and can expect something to happen when they turn them. The sound is the result of this interaction through the knobs, and with its broad range, I expect people to keep playing with the knobs to find a sound they like. This is again a metaphor for how disinformation happens: words are twisted to somebody's preference. The LED, meanwhile, is a visual element that acts as a reward for whoever plays with the knobs. Its pulsing can attract people into acknowledging that there's something there that wants to communicate, and that they can actually communicate with it.

The Exhibition
I got some positive feedback from people who played with Robobrain. Some commented on how cute and funny they found the installation, because of how it looks and how they could play with the knobs. Some people with a music background even commented on how good the generated sound is; note that these people are serious synthesizer fans, so it was flattering to hear that from them. I omitted labelling the knobs because I wanted the audience to play with them, and from my observation, people turned the knobs trying to figure out which one does what, and they had fun interacting with it. I guess this proves how a simple, familiar interface can be engaging to people.

To Conclude
In the end, I'm very happy with this installation. I've always wanted to create a sound installation, and from a technical point of view I always challenge myself to build a simple system on a microcontroller without using a laptop. This installation also taught me how powerful an Arduino is and how it can be exploited to create a synthesizer/sound generator without any external sound input. On the other hand, having studied interaction design for the past year, I had a chance to apply what I've learned to this installation. Even though I had ambitions to employ a cutting-edge interface, I held back and decided to deploy a familiar one, just to see how people would react to it. It proved fruitful: people could use it. I can take this into account for my next installation, where a more sophisticated tangible user interface is expected. In short, this has been an invaluable experience for the future.

Depriver | Pest Lives – Split

A split album of my drone metal project Pest Lives and Adythia Utama's shoegaze/black metal project Depriver. The album is released by Drowning, a drone doom netlabel from Denmark. Go download it for free at http://drowning.cc/drown06

Here's what the label has to say:

Two very upfront Indonesian acts meet on this rare transmission from the Southern Hemisphere. Depriver is a shoegazing black metal band from Jakarta with a heavy sludgy sound. Pest Lives is a debuting but rock solid drone doom act with a personal sound and a fine-tuned sense of time and tension.

Enjoy \m/

Lite: An Interactive Art Installation by Adityo Pratomo

So finally, after all the hassle, the limited time, and all kinds of obstacles, I've managed to finish my interactive artwork and exhibit it in an exhibition named Resonance. Resonance is an exhibition of interactive art and video art, and it is part of Helarfest 2009. In total, 11 works are displayed in the exhibition. You can read more about Resonance, and Helarfest 2009 in general, by clicking here.

Lite in action


So, now let me introduce you to my work; I named it "Lite". In brief, in this work I create an environment for generative audiovisual art to grow. Here, light is the main character: light is used as the argument for the algorithm that generates the audiovisuals. Basically, by moving a light source in front of the computer, people can generate music and visualization.

Technically speaking, this work uses Processing and Pure Data. Processing tracks the movement of a light source in front of the camera; its output is the position of the light source as x and y coordinates. The x and y values are then used as arguments in Pure Data to generate frequencies in a simple synthesizer/16-step sequencer. In Pure Data, there are three synthesizers that I made: one FM and one AM are used as the lead parts, while one simple sine oscillator is heavily low-passed and used as the low-frequency part of the sequence. How the two values (x and y) are mapped onto the 32 frequencies of the synthesizers is purely algorithmic: I used simple math on x and y to make them generate specific frequencies that depend on their respective values. So, different light source positions generate different parts from Pure Data. Processing and Pure Data are connected through the OSC protocol. The Processing code is based on the Brightness Tracking code from the Processing examples, while the Pure Data sequencer is based on a lesson in the Pure Data FLOSS Manual.
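The "simple math" between x and y can be illustrated as below. This is a hypothetical reconstruction for illustration only (I no longer have the 2009 formula): the normalised light position is collapsed into one of 32 slots, and each slot is mapped linearly onto a frequency range.

```java
// Hypothetical reconstruction of the x/y -> frequency mapping in Lite.
// The light position (x, y), in camera pixels, is scaled into an index
// over 32 sequencer frequencies; the 110-880 Hz range is illustrative.
public class LiteMapping {
    static final int STEPS = 32;

    // Map (x, y) in a width-by-height camera frame to one of 32 slots.
    static int toStep(int x, int y, int width, int height) {
        double nx = x / (double) width;   // normalise to 0..1
        double ny = y / (double) height;
        int step = (int) ((nx + ny) / 2.0 * STEPS);
        return Math.min(STEPS - 1, Math.max(0, step));
    }

    // Turn a step index into a frequency (Hz) on a simple linear scale.
    static double stepToFreq(int step, double fMin, double fMax) {
        return fMin + (fMax - fMin) * step / (double) (STEPS - 1);
    }

    public static void main(String[] args) {
        int step = toStep(320, 240, 640, 480); // light at the frame centre
        System.out.println(step);                     // 16
        System.out.println(stepToFreq(step, 110.0, 880.0));
    }
}
```

In the actual piece, the resulting frequency values were sent over OSC to the Pure Data sequencer rather than computed in one place like this.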

Personally speaking, this is quite an achievement for me. I've managed to finish my second interactive artwork (albeit a simple one), and this is my first ever exhibited work. Spiritually, I also think that finishing an artwork is a step toward finding myself as a complete human. Having said that, I'm now ready to create another installation. So, if anyone out there is looking for a collaborator, I'm more than ready to contribute.

This work will be exhibited until 30 October 2009 at CCF, Jalan Purnawarman 32, Bandung. So, if you happen to be in Bandung, make sure you visit this marvelous exhibition.