Gestural Robot – Project Log pt. 1

Last week, in the spare time between preparing material for coding classes at Framework, I built a small prototype of a finger-gesture-controlled robot. Gestural control has fascinated me for quite a while now, and since I had a robot kit sitting idle, I thought this was a good time to use gestures as an input to something other than pixel-based material.

I use a Leap Motion as the sensor: Processing reads the finger count from it and transmits the value over serial to an Arduino, which in turn decides which wheel to move and in which direction. The robot itself is a two-wheel platform driven by an L298 motor shield.
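To give an idea of the Arduino end of that pipeline, here is a minimal sketch, not the actual project code: the pin assignments and the gesture-to-movement mapping are my own assumptions for illustration.

// Hypothetical sketch: receive a finger count ('0'..'5') from Processing over serial
// and map it to simple movements on an L298-driven two-wheel robot.
// Pin numbers and the gesture mapping are assumptions, not the original project code.

const int IN1 = 8;   // left motor direction pins
const int IN2 = 9;
const int IN3 = 10;  // right motor direction pins
const int IN4 = 11;
const int ENA = 5;   // left motor enable (PWM)
const int ENB = 6;   // right motor enable (PWM)

void setMotors(int left, int right) {
  // per wheel: positive = forward, 0 = stop
  digitalWrite(IN1, left > 0 ? HIGH : LOW);
  digitalWrite(IN2, LOW);
  digitalWrite(IN3, right > 0 ? HIGH : LOW);
  digitalWrite(IN4, LOW);
  analogWrite(ENA, left > 0 ? 200 : 0);
  analogWrite(ENB, right > 0 ? 200 : 0);
}

void setup() {
  Serial.begin(9600);
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
  pinMode(ENA, OUTPUT); pinMode(ENB, OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    char fingers = Serial.read();        // Processing sends the finger count as a character
    switch (fingers) {
      case '2': setMotors(1, 1); break;  // two fingers: drive forward
      case '1': setMotors(1, 0); break;  // one finger: turn by driving the left wheel only
      default:  setMotors(0, 0); break;  // anything else: stop
    }
  }
}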


Arduino for Beginners Part 2: The Magic of Programming

Do you still have the circuit from the previous tutorial? If so, great. If not, take your time reading that one and come back once the circuit is ready 🙂

Once you are ready, we can start the second part of this tutorial. This time, I want to show the flexibility of Arduino programming. As we know, the Arduino is essentially a microcontroller that can be programmed to turn a given input into an output. Just as a computer whose hardware stays the same can be programmed to do many different things, the Arduino can be programmed to behave differently even though the circuit consists of exactly the same components.


This tutorial focuses on programming the Arduino. The previous tutorial produced a circuit consisting of a push button and an LED, where the LED stays on for as long as the button is held down. Very basic and not very interesting. This time, we will make the LED blink when the button is pressed, and turn off when the button is pressed a second time, much like flipping a light switch. We will do this with the exact same circuit.

The code we will use is an extension of the previous code. There are only small additions at the beginning and inside the loop() function.

If you have never programmed before, here are a few things worth noting:

1. Any line that starts with // is a comment, containing notes or explanations. Comments are not compiled or uploaded to the Arduino; they are only there for whoever reads the program's source code.
2. A program uses what are called variables. These hold values that can change as the program runs. Each variable has a certain type and is initialized by assigning it a value. For example, to create a variable named x with the integer type (a whole number) and the value 10, we write

int x = 10;

3. Each line of the program that performs an action is called a statement. A statement ends with a semicolon ;
4. Some parts of the program group one or more statements into a block. These are known as functions, and each one performs a specific task. In our program, for example, there are two functions: setup() and loop().

Right, here is the code:

// set pin 2 as the button pin and pin 13 as the LED pin
const int buttonPin = 2;
const int ledPin = 13;

// variables to store the button state, its previous state, and the LED state
int buttonState = LOW;
int prevButtonState = LOW;
int ledState = LOW;

void setup() {
    // configure the LED pin as an output
    pinMode(ledPin, OUTPUT);
    // configure the button pin as an input
    pinMode(buttonPin, INPUT);
}

void loop() {
    // read the state of the button pin
    buttonState = digitalRead(buttonPin);

    // this is the part we change:
    // check whether the button state differs from its previous state
    if (buttonState != prevButtonState) {
        // if so, the previous state becomes the current state
        prevButtonState = buttonState;
        // if that state is LOW, the LED state flips from what it was before
        if (prevButtonState == LOW) {
            ledState = !ledState;
        }
    }
    // if ledState is HIGH the LED blinks, otherwise it stays off
    if (ledState == HIGH) {
        blink();
    } else {
        digitalWrite(ledPin, ledState);
    }
}

// this function makes the LED blink
void blink() {
    digitalWrite(ledPin, HIGH);   // turn the LED on
    delay(1000);                  // wait a moment
    digitalWrite(ledPin, LOW);    // turn the LED off
    delay(1000);                  // wait again
}

The code we changed inside loop() says that this time, what we watch for is a change in the button's state. Whenever the state changes, which happens every time the button is pressed, the ledState variable flips as well, like saying ON or OFF. This differs from before, where all we cared about was the button's current state.

When ledState is HIGH, the program calls the blink() function, which makes the LED blink; otherwise the LED stays off. A function can be thought of as a part of the program that performs, well, a specific function. In Arduino there are at least two functions that must exist: setup(), which runs once at the start of the program, and loop(), which is executed over and over for as long as the program runs. Splitting the work into separate functions like this is good practice for procedural, step-by-step thinking, which helps beginners solve problems in a structured way.

That's it for this tutorial; I hope it improves your experience with the Arduino. The next tutorial will cover other kinds of input. If you have ideas, feel free to leave a comment on this post.

Happy tinkering 🙂

1903

Client: Abdul Dermawan Habir & Co.
Year: 2013
Location: A Study of Light Exhibition. Exhibited in Dia.lo.gue Artspace, Kemang, Jakarta
Tools: custom software made in C++ (openFrameworks and ofxTimeline addon)
Hardware: Arduino, Arduino Expansion Shield, Relay Module, Transistor, LED lamps
OS: Snow Leopard

1903 is a light installation that combines elements of audio, sculpture, light, and fashion into a single piece. Its creator, Abdul Dermawan Habir, contacted me several months ago through this very website. He was looking for somebody who could program synchronized lamps and sound. Of course, I was up for it.

The installation tells a murder story through a series of voice-overs and mannequins. The technical scenario is quite simple: every time a voice-over plays, certain lamps turn on to highlight the mannequin representing the speaking character. In addition, one lamp is dimmed at certain points in the storyline for extra dramatic effect. The installation is set up in a closed room; to see it, the audience has to peep through a small hole, press a doorbell, and listen through headphones.

As you may expect, the installation uses an Arduino and relay modules to switch a total of nine 220 V lamps, plus an NPN transistor and a resistor to dim a 5 V LED lamp. To control the Arduino, I used openFrameworks with its Firmata support, plus the very helpful ofxTimeline addon, which provides the GUI. With this, I can easily dictate which Arduino pin should be turned on at which point in the dialogue. Mind you, this isn't audio-reactive, so I have to be very precise about where I put each pin on/off command.
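For a feel of what the Firmata side looks like, here is a minimal openFrameworks sketch, assuming a board running StandardFirmata with a relay channel on pin 8 and the dimmable LED on pin 9; the pin numbers and serial device name are my assumptions, and mouse input stands in for the ofxTimeline tracks that drove the real piece.

#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofArduino ard;                  // talks Firmata to the board over serial
    bool pinsConfigured = false;

    void setup() {
        ard.connect("/dev/tty.usbmodemfa131", 57600);   // assumed device name
    }

    void update() {
        ard.update();
        if (ard.isArduinoReady() && !pinsConfigured) {
            ard.sendDigitalPinMode(8, ARD_OUTPUT);      // relay channel for one 220 V lamp
            ard.sendDigitalPinMode(9, ARD_PWM);         // transistor base for the dimmable LED
            pinsConfigured = true;
        }
    }

    void draw() {
        if (!pinsConfigured) return;
        // stand-ins for timeline tracks: lamp follows the mouse button,
        // LED brightness follows the mouse x position
        ard.sendDigital(8, ofGetMousePressed() ? ARD_HIGH : ARD_LOW);
        ard.sendPwm(9, (int)ofMap(mouseX, 0, ofGetWidth(), 0, 255, true));
    }
};

int main() {
    ofSetupOpenGL(400, 300, OF_WINDOW);
    ofRunApp(new ofApp());
}

In the installation, the on/off and dim commands came from timeline tracks laid against the dialogue rather than the mouse, but the calls to the board are the same idea.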

There is a more detailed story about why I went with this solution, but I'll leave that for a later post. Meanwhile, you can see pictures from the installation below, including some behind-the-scenes (literally) images. It was exhibited at Dia.lo.gue Artspace in Kemang, Jakarta, from 12-23 January 2013.

Raspberry Pi Review

Raspberry Pi is all the rage as we speak, and I can see why. Here we have a cheap, small computer, perfectly suited for your next embedded-system experiment. I remember how excited I was when it was publicly announced but not yet available. Still, I didn't know how I would use it for developing interactive installations (which is still pretty much how I earn my income).
I still didn't know until I got a Raspberry Pi in my hands a couple of weeks ago.

Before I go into the bloody details of the Raspberry Pi, I want to introduce the platforms I currently use for my work. No matter what software or programming environment I use, the base platform always boils down to one of two extremes: a computer (in any form, PC, MacBook, you name it) or a microcontroller (most of the time this means Arduino). With the computer, I can create advanced graphics that react to different kinds of input, such as movement or sound, using sensors like a webcam, a Kinect, or a microphone. The output, however, is constrained to either a big screen (I count projection as a screen too) or sound through a speaker. Whenever I want a more physical output through everyday objects, I turn to the Arduino. With it, I can create blinking lights or rotating objects driven by input from the computer. So I've relied on the combination of computer -> Arduino -> output, or Arduino -> computer -> output, for years now. Obviously, that configuration demands a lot of space (and cost), even for projects that could have been simpler. That's where I thought the RPi could kick in and play a part.

Are you still with me? Good. Sorry for the long intro, but the rest of the article would seem pointless without it.

So, long story short, I ordered a Raspberry Pi via Ngooprek, an Indonesia-based online distributor of electronic components. I think it's the only place to get an RPi here in Indonesia, CMIIW. When it arrived, I was amazed by how small it is; I thought that was cool. However, it took me a while to gather the accessories needed to get started. In the end I got myself an SD card, an HDMI cable, and a card reader. That was enough to start playing, since I could use my LED TV as the RPi's video output via HDMI and my Android phone charger as its power supply. All set.

First, I downloaded the Debian image for the OS. I chose Debian instead of Raspbian because I figured there would be more software available for Debian. I wrote the image to my SD card on Windows using the Win32ImageWriter application, plugged the card into the RPi, connected the power supply and HDMI cable, and voila, it was all running with no hassle. I honestly can't remember the last time I had a Linux machine up and running that quickly. I tested a couple of the built-in apps in the OS and everything ran smoothly; I thought this would be fine for daily activities such as browsing the internet. Hey, for 700 MHz and 256 MB of RAM, I had a lower-spec PC back in the day and I could still play games and browse, so this didn't really surprise me. That's not the end of the story, though, because it isn't why I bought the RPi in the first place. It's the programming possibilities I'm after.

With that in mind, I tested how I would develop on the RPi by installing two of my favorite environments for building interactive installations, Pure Data and Processing. I ruled out C++ libraries such as openFrameworks or Cinder, because they already take a while to compile on my MacBook; I can't imagine how long they would take on the RPi. Installing Pure Data was easy breezy. It's right there in the Debian repo (see, choosing Debian wasn't a bad idea), so the routine apt-get did the job. I opened Pure Data and, surprisingly, it feels pretty light, which is odd because it's quite slow on my MacBook. I did some patching and it feels acceptable, though I haven't made any complex patches yet. By the way, patching is what programming is called in Pure Data, since you basically connect lines between different boxes to create something.

What was actually tricky was getting sound to work. Pure Data's bread and butter is generating sound, so it's pointless to have it installed on a platform that can't play any. In theory, the RPi can play sound through its headphone jack or through its HDMI port, in which case it is played back on the TV. The thing is, the output from the headphone jack is nowhere near acceptable: I got horrible noise when playing Pure Data's test-sound patch. Even that is better than the HDMI route, which couldn't play any sound at all. I've looked into ways to make this work acceptably, but so far I've failed. I suspect I either have to configure the RPi and my TV to play sound over HDMI, or get a USB sound card for the same purpose. Either way, I'm still intrigued and I'll keep you posted.

Installing Processing, on the other hand, involved a bit more work. Processing runs on top of Java, so obviously I needed to install a Java VM one way or another. I used OpenJDK 6 because I read that it was supported on ARM, the RPi's processor architecture. I then removed Processing's bundled Java and linked OpenJDK in its place. Voila, Processing was running. At startup it did display a message saying it didn't like the Java VM I had, but that was the only peculiar thing; otherwise Processing runs normally. What isn't normal is its speed. Processing feels heavy and slow while starting up and preparing to run sketches. The memory indicator shows full utilization, and it took something like 2-3 minutes between pressing the play button and having the sketch running. In that condition it's clearly not efficient to work in a code-test-code-test loop, since every run takes that long to compile. I guess I have to go old school and write everything carefully so I don't have to run the sketch too often. Below is the Pi running a simple Processing sketch: a rectangle moving, nothing fancy.

However, it's not all bad news. I realized the RPi can be used as a more powerful, more feature-rich Arduino. A little research on the internet turned up plenty of information on using the RPi's GPIO pins much like the Arduino's input/output pins. Companies like Adafruit and Element14 even produce their own RPi accessories to make electronic prototyping and development on this board easier. Better still, the RPi has its own Ethernet port and can use WiFi, so you get the capability of an Arduino plus an Ethernet/WiFi shield (and more) at half or even a third of the price of that combination.
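To illustrate how Arduino-like this can feel, here is a minimal sketch using the wiringPi C library (an assumption on my part; it's just one of several options for GPIO on the Pi), blinking an LED much like the Arduino "Blink" example.

/* Minimal sketch of Arduino-style GPIO on the Pi using wiringPi.
   Blinks an LED wired to wiringPi pin 0 (GPIO 17 on the header). */
#include <wiringPi.h>

int main(void) {
    if (wiringPiSetup() == -1)       /* initialise wiringPi with its own pin numbering */
        return 1;

    pinMode(0, OUTPUT);              /* wiringPi pin 0 = BCM GPIO 17 */

    for (;;) {
        digitalWrite(0, HIGH);       /* LED on */
        delay(500);                  /* wait 500 ms */
        digitalWrite(0, LOW);        /* LED off */
        delay(500);
    }
    return 0;
}

The pinMode / digitalWrite / delay vocabulary is deliberately the same as Arduino's, which is exactly why the "RPi as a beefier Arduino" idea is so tempting.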

With all that said, I can see the RPi being used in many more cases, either as a nostalgic standalone computer for tasks that don't need much processing power (no pun intended), or as a more powerful version of the Arduino. I just have to make peace with the fact that, for the time being, this board isn't suitable for the full-fledged DIY VJ box I dreamed of in the first place. Maybe I should adapt my visual style to this tiny machine.

Interactive Displays for Nike

Client: Nike
Year: 2012
Location: Nike Store, Senayan City Mall, Jakarta, Indonesia
Tools: Arduino and Processing (Interactive Product Display); custom software and TouchDesigner (Interactive Wall Video Display)
Hardware: Arduino and proximity sensor (Interactive Product Display); 2 Kinects (Interactive Wall Video Display)
OS: Windows 7

Here are two new installations I did for Nike to promote their four new football shoes, coinciding with the Euro 2012 football tournament. One is an interactive product display and the other is an interactive wall video display. Generally speaking, each triggers videos in its own way. Both are deployed in the Nike Store in Senayan City Mall in Jakarta.

For the interactive product display, a customer can pick a shoe up from its display box, which triggers an informational video about that product. This works for all four shoes. Each shoe is placed on top of a computer (an all-in-one Lenovo model with integrated monitor and CPU). A proximity sensor connected to an Arduino detects the shoe's position to determine whether it has been lifted; upon lifting, a video is triggered. This was programmed with the then-new Processing 2.0a6 in order to achieve smooth 720p video playback using its built-in GStreamer video back end.
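The sensing side can be sketched roughly like this; the analog pin, threshold, and the 'L'/'D' message characters are assumptions for illustration, not the installation code.

// Hypothetical version of the product-display sensing: read an analog proximity
// sensor, decide whether the shoe is on its stand, and notify the Processing
// sketch over serial only when that state changes.

const int sensorPin = A0;
const int threshold = 300;      // tune against real readings with the shoe in place
bool shoeLifted = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(sensorPin);
  bool liftedNow = (reading < threshold);   // nothing close to the sensor => shoe lifted

  if (liftedNow != shoeLifted) {
    shoeLifted = liftedNow;
    Serial.write(shoeLifted ? 'L' : 'D');   // 'L' = lifted (play video), 'D' = back down
  }
  delay(50);                                // simple rate limiting
}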

The interactive wall video display triggers videos based on the audience's position in front of the store's window display. Again, the four shoes are placed in front of a 3 m x 1 m video display, so when somebody stands in front of a shoe, the corresponding video is triggered and played on the display. Two Kinects are used to detect people over a fairly wide space. A piece of custom software stitches the images from both Kinects and does blob tracking, sending each blob's position to TouchDesigner to trigger the different videos. TouchDesigner was chosen for its amazing ability to play back hi-res video without burdening the computer's CPU, because it does the work on the GPU.
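The zone logic in the custom tracker amounts to something like the small sketch below: map a blob's x position across the stitched Kinect image to one of the four shoe/video slots before the index goes to TouchDesigner. The widths and zone count are assumptions.

// Rough sketch of mapping a blob's x position to one of four video zones.
#include <iostream>

int zoneForBlob(float blobX, float stitchedWidth, int numZones = 4) {
    if (blobX < 0) blobX = 0;
    if (blobX >= stitchedWidth) blobX = stitchedWidth - 1;
    return static_cast<int>(blobX / (stitchedWidth / numZones));   // 0..numZones-1
}

int main() {
    // two 640 px Kinect images stitched side by side => 1280 px wide tracking space
    std::cout << zoneForBlob(900.0f, 1280.0f) << std::endl;   // prints 2 (third video)
    return 0;
}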

It’s still on display until the end of June, so if you’re in Jakarta, hop in for a ride and grab a Nike product while you’re at it.

Reefs on The Edge Interactive Multitouch Table Prototype Demo at Web Directions South 2011

On October 13-14, 2011, at the Web Directions South 2011 conference, my project collaborator Phil Gough and I had the chance to exhibit our interactive multitouch table. This work is meant to be part of a larger installation called Reefs on The Edge, which aims to communicate the effect of rising sea temperature on coral lifespans. The multitouch table is our research project for our final semester of the University of Sydney's Master of Interaction Design and Electronic Art course.


RotE Prototype Demo

From the start, we aimed to communicate the intended message in a more engaging fashion. We then worked out how that information could be presented, how the scientific data could be communicated effectively, what means were available to do it, and so forth. In the end, we decided to create a multitouch table with tangible objects that the audience can use to control an abstract visual simulating how coral grows at various sea surface temperatures.

We then got the chance to show the first prototype at Web Directions South 2011, a perfect place to present the work to a larger audience and do some impromptu user testing. The result was fantastic. We had positive feedback from the people who played with it. Not only that, without even having to promote the table by playing with it ourselves, people came over on their own and actually interacted with the work. I saw people walk up and say to their friends, "I wanna check that one!" What's more, some people came back more than once.

Sometimes Phil and I would happily explain to the audience what the work is about, but at other times we preferred to stay back, watch how people played with it, and see what conclusions they drew about what they were touching and seeing. It's a fascinating experience, seeing people play with it without even having an idea of what it is.

One of the main things we wanted to observe in this prototype was how people used the object. We went back and forth in our design process before we came up with this small, beautiful object that affords touching and handling. And people did use it: they would come up and just touch it, move it, and rotate it to see how it affected the animation. In conclusion: the object actually works.

Of course, we still have some work to do on the software side, notably the accuracy of the fiducial-marker detection and the color-changing animation. But as an interactive work that was meant to engage people, I can happily call it a massive success.

Here are some more pictures from that exhibition:

Note: by the end of the two-day conference, three people had asked me, "Hey, is this Microsoft Surface?" I take that as a compliment, considering we built this from scratch in Phil's garage.

Have a great weekend everybody 🙂


Controlling RGB LED Color with Arduino and Processing

There are times when RGB LEDs look fascinating: the ability to mix colors, to create pixel colors in the real world, amazing. But there are also times when you want one specific color from your RGB LED and, for some reason, you just can't nail it, or you have to go through a tedious process to find the exact voltages to feed it. I always assumed the process was as simple as using Processing's color picker, but I was wrong.

I've just had the latter problem. For some reason, the different luminance of each color in my RGB LED makes getting the color I'm aiming for difficult. Even driving the blue and red pins fully high doesn't produce magenta. Annoying.

So I decided to use Arduino and Processing to control the voltage going into the RGB LED, so I can mix exactly the right amount of red, green, and blue to get the color I want. Since this is a simple project, I decided to control it from the keyboard, which gives me precise control over the input voltage (or rather, the output voltage coming out of the Arduino).

The schematic is dead simple. Connect each pin of the RGB LED to a PWM output of the Arduino (the PWM pins are 3, 5, 6, 9, 10, 11). Make sure you put a resistor between each LED pin and the Arduino pin; refer to your LED's datasheet for the proper resistor values. For my project, I use 180 ohms on the red pin and 100 ohms on the green and blue pins.

After that, you can upload this Arduino code to your board:

int incomingByte = 0;
int g;
int b;
int r;

void setup() {
  Serial.begin(9600);
}

void loop() {
  analogWrite (3, g);
  analogWrite (5, b);
  analogWrite (6, r);
  if (Serial.available() > 0) {
    incomingByte = Serial.read();
    if (incomingByte == 'R') {
      r += 1;
    }
    if (incomingByte == 'S') {
      r -= 1;
    }
    if (incomingByte == 'G') {
      g += 1;
    }
    if (incomingByte == 'H') {
      g -= 1;
    }
    if (incomingByte == 'B') {
      b += 1;
    }
    if (incomingByte == 'C') {
      b -= 1;
    }
  }
}

Then run this Processing sketch to control the color mix for your LED. I use the up and down arrows for red, the right and left arrows for green, and the letters b and c for blue. Not intuitive at all, but I just wanted a quick solution. Change those keys if you'd rather do it differently.

Here’s the Processing code:

import processing.serial.*; 

int r;
int g;
int b;

Serial port; 

PFont font; 

void setup() {
  size(200, 180);
  background (140);
  //opens the serial port connection
  println(Serial.list()); 
  port = new Serial(this, Serial.list()[0], 9600);
  
  //load the font for the text
  font = loadFont("Arial-Black-24.vlw");
  textFont(font);
  smooth();
}

void draw() {
  fill (255, 0, 0); 
  text (r, 30, 90);
  fill (0, 255, 0); 
  text (g, 90, 90);
  fill (0, 0, 255); 
  text (b, 150, 90);
  println ("red " + r  + "green " + g  + "blue " + b);
}

void keyPressed() {
  //for the blue color
  if (key == 'b') {
    port.write('B');
    b += 1;
    background(140);
  }
  if (key == 'c') {
    port.write('C');
    b -= 1;
    background(140);
  }
  if (key == CODED) {
    //for the red color
    if (keyCode == UP) {
      port.write('R');
      r += 1;
      background(140);
    }
    if (keyCode == DOWN) {
      port.write('S');
      r -= 1;
      background(140);
    }
    //for the green color
    if (keyCode == RIGHT) {
      port.write('G');
      g += 1;
      background(140);
    }
    if (keyCode == LEFT) {
      port.write('H');
      g -= 1;
      background(140);
    }
  }
}

This sketch will help you find the exact values you have to mix to get the color you're aiming for. In fact, it shows the precise values to pass to the RGB LED's pins, so the next time you want that color, you just write those values to the Arduino's output pins.
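For example, once you've read the values off the Processing window, a tiny sketch like this is all you need; the numbers below are placeholders, so substitute the r/g/b values you found.

// Write the mix you found straight to the PWM pins; no serial link needed anymore.
void setup() {
  // same wiring as above: green on pin 3, blue on pin 5, red on pin 6
  analogWrite(3, 40);    // green (placeholder value)
  analogWrite(5, 255);   // blue  (placeholder value)
  analogWrite(6, 255);   // red   (placeholder value)
}

void loop() {
}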

Hope this helps. Feel free to comment if you need more help or if you find the code isn't working.

Ciao.

Skin – The Technical Aspects

This post describes how I technically built the whole system for our interactive art installation "Skin". At this stage we had pretty much decided what the artwork would be: a plastic-bag texture on a screen that the audience can interact with, plus a grid of physical plastic bags representing the dynamic surface of the on-screen bag. The plastic bags carry LEDs that react both to what happens on the screen and to the audience's position in the space. We are aiming to give the audience the sensation of touching skin, which is why we offer the chance to do so not only virtually but also physically; that is why the installation needs a physical component at all.

From that design, I formulated what each part of the artwork should be and how the parts connect, as shown in the picture below:

I will now describe how each part was built.

The Processing Part

This is the heart of the artwork, since the whole interaction is based on what happens on the screen. For this, a plastic-bag texture is applied across the whole screen, and that surface reacts to the audience's movement in front of the screen.

The plastic-bag texture was achieved in two steps. First, we did a cloth simulation using the Traer Physics library for Processing. We created a grid of particles connected by springs, so the particles' positions and velocities can be manipulated dynamically with a degree of bounce, and they return to their default positions (i.e. flat on the screen). To make it feel more like plastic, I applied heavy damping to the particles, so the surface bounces only once, in slow motion, which is what you expect from a plastic surface.

Once the particle part was finished, I applied the plastic-bag texture on top of the particles. To do this, I created a triangle-strip polygon over the particle grid and laid the plastic-bag texture on top of it. Johnny created the texture from a photo he took, manipulated further in Photoshop. While this may seem like the easy part, in truth it proved tricky, as we had to find the right balance between a clearly readable plastic-bag image and the maximum detail the SmartSlab can display clearly. In practice we went back and forth several times before we nailed the final form of the texture.

The next step was to manipulate the plastic-bag texture using the image from the Kinect. Initially, I wanted to use coordinates from the Kinect's depth image to dictate which particles' positions could be manipulated. That method proved too heavy, resulting in a lagging display. Rob then came to help and suggested that we simply use the Kinect's depth image as a parameter to offset the particles, without extracting the x and y coordinates of each depth pixel. The result was magical: the plastic-bag surface reacted to movement seen by the Kinect with no delay on the display and faster performance overall. We then had to tweak a few more parameters to make the system suitable for the exhibition, even though at that point the interactive screen was already quite impressive.

In the end, we arrived at a better mapping between the Kinect's depth image and the particles. I also applied lighting to the surface to bring out the shape of the person on screen and give the interaction a better effect. At this point Johnny, who has more experience with 3D, suggested that to get a more detailed result we could either make the polygons smaller so the lighting is finer-grained, or use a shader instead of Processing's light method. Due to lack of time we skipped this and noted it as a suggestion for the future.

The Arduino Part
Now, if the screen is the heart, then for me the physical element of this installation would either make or break the whole thing. The lights emphasize the message we are trying to convey, which is why I put a lot of pressure on myself to make sure the lights would actually work. Luckily, technically speaking this is quite a simple job, and we already had a working prototype from our previous concept.

The idea was to turn lights on depending on where the action happens on the screen, a simple mapping. We also wanted to reward people who actually try to interact with the physical plastic bags. That's why we added LEDs that activate depending on a person's distance from the plastic bags: the closer the audience gets, the more LEDs turn on.

To achieve that, I opted for two separate Arduino boards: one connected to the Processing sketch over an XBee wireless link, so the laptop and that Arduino can sit in different places, and another receiving input from an IR sensor that measures the audience's distance. The two Arduinos are completely independent. This may seem inefficient, but for me it provides a failover: if one Arduino stops working, we still have LEDs lighting up in the plastic bags. Plus, since I had two Arduinos lying around my room, why not use them.

For the Arduino that receives input from the IR sensor, the workflow is quite streamlined. After wiring up the IR sensor and the Arduino, I programmed the Arduino to read the sensor's input. I used a lookup table to interpolate the readings, because the raw data from this analog sensor is hard to read and use directly. This way I created a mapping from the sensor's voltage to the distance of an object from it. I then map that distance to the number of LEDs that light up: again, the closer the audience is to the sensor, the more LEDs turn on. Quite a simple process altogether.
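A condensed sketch of that lookup-table approach is below. The table values, pin numbers, and LED count are assumptions rather than the exhibition code; a real table would be filled in from measurements of the specific sensor.

// Interpolate the IR sensor's analog reading into a distance, then light more LEDs
// the closer the audience stands.

const int irPin = A0;
const int ledPins[] = {3, 4, 5, 6, 7};
const int numLeds = 5;

// analog reading -> approximate distance in cm (higher reading = closer object)
const int readings[]  = {600, 500, 400, 300, 200, 100};
const int distances[] = { 10,  15,  20,  30,  45,  80};
const int tableSize = 6;

float readingToDistance(int value) {
  if (value >= readings[0]) return distances[0];
  if (value <= readings[tableSize - 1]) return distances[tableSize - 1];
  for (int i = 0; i < tableSize - 1; i++) {
    if (value <= readings[i] && value >= readings[i + 1]) {
      // linear interpolation between the two surrounding table entries
      float t = float(readings[i] - value) / float(readings[i] - readings[i + 1]);
      return distances[i] + t * (distances[i + 1] - distances[i]);
    }
  }
  return distances[tableSize - 1];
}

void setup() {
  for (int i = 0; i < numLeds; i++) pinMode(ledPins[i], OUTPUT);
}

void loop() {
  float cm = readingToDistance(analogRead(irPin));
  // closer than 80 cm starts lighting LEDs; every 15 cm closer adds one more
  int lit = constrain((int)((80 - cm) / 15), 0, numLeds);
  for (int i = 0; i < numLeds; i++) {
    digitalWrite(ledPins[i], i < lit ? HIGH : LOW);
  }
  delay(50);
}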

Connecting Processing and Arduino
The other Arduino, the one that talks to the Processing sketch, took more work. The first question was which part of the on-screen interaction should be passed to the Arduino as a message, and how. After some tinkering, I decided that the position of the depth image could be the parameter that turns the LEDs on. For this I used one of the examples from Dan Shiffman's openkinect library, which tracks the depth image's position. That process proved light enough that it didn't affect the overall performance of the system.

Once that milestone was reached, the next step was connecting those parameters to the Arduino to turn on the LEDs. I already knew the communication would go over a serial port. At first I had Processing do a whole series of serial writes, based on the parameters from the Kinect's depth image, to produce the different LED effects. That caused a massive lag in the system and on the display. I then tried a different approach: do only a couple of serial writes and let the Arduino interpret the data, in short, load balancing. That solved the problem.
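The Arduino side of that "let the board do the work" idea looks roughly like the sketch below: Processing sends just a byte or two per update (say, a zone index derived from the Kinect depth image), and the Arduino expands it into an LED pattern, instead of the laptop issuing one serial write per LED. Pins and the pattern itself are assumptions.

const int ledPins[] = {2, 3, 4, 5, 6, 7, 8, 9};
const int numLeds = 8;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < numLeds; i++) pinMode(ledPins[i], OUTPUT);
}

void loop() {
  if (Serial.available() > 0) {
    int zone = Serial.read();                 // compact message from the Processing sketch
    for (int i = 0; i < numLeds; i++) {
      // light the LED for the active zone and its neighbours; the Arduino, not the
      // laptop, decides what a "zone" means in terms of individual pins
      digitalWrite(ledPins[i], abs(i - zone) <= 1 ? HIGH : LOW);
    }
  }
}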

That step was done over the USB port on my laptop. Then came the real part: connecting the laptop and the Arduino wirelessly over XBee. For this I used two XBee Series 1 modules, one for sending and one for receiving. This was probably the trickiest bit of the whole process, given that I had never used XBee before. After some reading I found that a few steps are required to get XBee communication going. First, I had to program both XBee modules so they are on the same network. I did this with the CoolTerm application on my Mac, configuring both modules with XBee commands. After some testing to make sure they could talk to each other, I connected the sending XBee to my laptop and used it as the Processing sketch's serial port to send data to the receiving XBee. Voila, it worked. Not as hard as I imagined.

Conclusion
In the end we got every technical part of the system working. It was quite a lengthy process, but we enjoyed every bit of it. Of course, with so many building blocks there is always a bigger risk of failure; it was then up to us to cope whenever a failure occurred.

Skin

This is an interactive art installation that my group, Fy, made last semester. The installation is entitled "Skin". The group consisted of me as the artist/programmer, Susanne Chan as the designer, and Johnny Campos as the 3D artist. Here's a video that briefly describes the installation.

The Concept
Skin is a mixed-media screen interaction that incorporates a physical spatial installation. In the words of Juhani Pallasmaa, "As we look, the eye touches, and before we even see an object, we have already touched it and judged its weight, temperature and surface texture." Skin plays with the sense of touch, bringing attention to the unconscious element of touch in vision and to the hidden tactile experience that defines the sensuous qualities of the perceived.

The Experience
As the viewer interacts with the screen, what can be seen is a skin-like surface, layered and generated from the viewer's movement. The skin fluidly reveals edges and corners, blending the texture of a plastic bag to form the temporality and continuity of the viewer's presence. The interaction with the screen can be experienced as both mental and physical; it allows exploration of the hapticity of the skin-like screen surface (haptic meaning relating to, or based on, the sense of touch). For this reason, the physical spatial installation was critical in activating all the senses within a new environment. The concept, encouraging viewers to touch with their eyes and sense with their whole being, brings the experience to life.

To create this intended user experience, we played with the technologies introduced to us over the course of the semester and ended up with a mix of motion sensing using a Kinect and Processing, and physical feedback through lights using an Arduino and IR sensors.