Gestural Robot – Project Log pt. 1

Last week, in spare time between preparing material for coding classes at Framework, I made a small prototype of a finger-gesture-controlled robot. Gestural control has fascinated me for quite a while now, and since I had a robot kit sitting idle, I thought this would be a good time to use gestures as input to something other than pixel-based material.

I use a Leap Motion as the sensor, have Processing read the finger count and transmit it via serial to an Arduino, which in turn decides which wheel to move and in which direction. The robot is a 2-wheel platform driven by an L298 motor shield.
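As a rough illustration of the pipeline (not the exact code I used), here is a minimal Processing sketch for the first stage. It assumes the Leap Motion Java SDK is available to the sketch (e.g. in its "code" folder), that the Arduino sits on the first serial port, and that one command byte (the finger count) is sent per frame:

import processing.serial.*;
import com.leapmotion.leap.*;

Controller leap;
Serial arduino;

void setup() {
  size(200, 200);
  leap = new Controller();
  arduino = new Serial(this, Serial.list()[0], 9600); // adjust the port index to your setup
}

void draw() {
  background(0);
  Frame frame = leap.frame();
  int fingers = 0;
  if (!frame.hands().isEmpty()) {
    fingers = frame.hands().get(0).fingers().count(); // number of tracked fingers
  }
  arduino.write(fingers); // the Arduino maps 0-5 to wheel/direction commands
  text("fingers: " + fingers, 20, height/2);
}

On the Arduino side, the received byte then simply selects which L298 channel to drive and in which direction.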


Processing and Coding Literacy

A few months ago, I had the chance to give a workshop on making data visualizations using Processing. The course was designed for people who don't come from a technical background but want to hit the ground running making something with programming, or specifically, with Processing. On that day, several designers and architects attended.

For several hours, I sat there, talking, explaining, running examples and giving short exercises, guiding them into the magical world of creative coding, and they did grasp it. The best thing about working with these creative people is that I could plant a seed in their minds and instantly watch as they bent code to their will. One attendee made his own brand identity with Processing code during that workshop. Another one, I know, is still exploring and happily tells me about new things he learns.


Leap Motion Synth

I helped a friend develop a simple tone generator as a medium for experiential music learning for kids. He wanted to use a Leap Motion so kids could use finger gestures to generate tones while learning the pitch of the notes.

Leap Synth

This was a good experience for me, as I wanted to learn more about designing UI for a gestural input device such as the Leap Motion. This time, I proposed this scenario:

  1. Use the right hand's index finger to choose which note to trigger
  2. Use the left hand to start and stop the note: when the palm is closed, the note is triggered; when the hand is opened, the note stops playing

As with previous projects, I used Processing for development, as I can easily export it as a Windows application so he could deploy it without much hassle. The main challenge was getting Processing to detect which hand is the right one and which is the left. In the end, I decided to determine it from each hand's position relative to the Leap Motion. After that, the finger detection and tracking were straightforward. Mind you, this was done in May 2014; several months later, Leap Motion released a new API which provides an easier way to detect the left/right hand. Ha!
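For the curious, the heuristic was roughly this: in the Leap Motion's coordinate system, x = 0 is the centre of the device, so a palm on the negative-x side is treated as the left hand. A minimal sketch of the idea, assuming the pre-2.0 Leap Java SDK is available to the sketch, with the synth logic reduced to comments:

import com.leapmotion.leap.*;

Controller leap;

void setup() {
  size(200, 200);
  leap = new Controller();
}

void draw() {
  Frame frame = leap.frame();
  for (Hand hand : frame.hands()) {
    float palmX = hand.palmPosition().getX(); // x = 0 is the centre of the device
    if (palmX < 0) {
      // left hand: closed palm (few tracked fingers) triggers the note, open palm stops it
      boolean noteOn = hand.fingers().count() <= 1;
      // ... start/stop the tone here
    } else {
      // right hand: the index finger's position selects which note to play
      // ... map the finger position to a pitch here
    }
  }
}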


I went through several iterations, including using threads to ensure a smooth experience. In the end, however, I settled for a thread-less solution, since it didn't require hand-position detection at the start. It was a good learning experience, especially for UI design. I could see that this solution wasn't really ideal, since both hands became very busy, though it was accurate enough to implement the choose-and-confirm paradigm used with a mouse.

I know that further work on the UI paradigm is required to improve applications of the Leap Motion.

Installing OpenCV in Processing for Windows 7

The good thing about having Java OpenCV is that we can use Processing, with all its conveniences, to develop computer vision applications. Sure, there are libraries that wrap Java OpenCV to make development even easier, like OpenCV for Processing. However, working with vanilla OpenCV lets you learn about the insides of the library, and I figure that's a good route, teaching-wise. For something more practical and faster to develop, please use the aforementioned library. It really is that good.

Now, installing OpenCV in Processing could not be easier. Again, I use the pre-built OpenCV 2.4.11. Here are the steps:

1. Download and install Processing
2. Make a new sketch (File – New)
3. Give it a name and save it, even before you type anything
4. Now, in Windows Explorer, go to your Processing sketch folder; by default it's at "My Documents – Processing"
5. Go to where you saved your sketch
6. Make a folder named "code" inside it and copy both opencv-2411.jar and opencv_java2411.dll from your opencv_directory/build/java into that folder
7. That's it, now you can use OpenCV inside your Processing sketch.

To test it, let's copy and paste the code from the previous tutorial. It was in Java, so it should work without any hiccups.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Scalar;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

import java.nio.*;
import java.util.List;
import java.awt.*;            
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import javax.swing.*;        

PImage img;
Mat mat;
Mat alpha;

void setup() {
  size(640, 480);
  background(0);
  println(Core.VERSION);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  noLoop(); // so draw() only runs once
}

void draw() {
  mat = Highgui.imread(dataPath("fifa.png")); //put the file in data directory of your sketch
  Image imgToShow = Mat2BufferedImage(mat);
  displayImage(imgToShow);
}

BufferedImage Mat2BufferedImage(Mat m)
{
  //source: http://answers.opencv.org/question/10344/opencv-java-load-image-to-gui/
  //Fastest code
  //The output can be assigned either to a BufferedImage or to an Image

  int type = BufferedImage.TYPE_BYTE_GRAY;
  if ( m.channels() > 1 ) {
    type = BufferedImage.TYPE_3BYTE_BGR;
  }
  int bufferSize = m.channels()*m.cols()*m.rows();
  byte [] b = new byte[bufferSize];
  m.get(0, 0, b); // get all the pixels
  BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
  final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
  System.arraycopy(b, 0, targetPixels, 0, b.length);  
  return image;
}

void displayImage(Image img2)
{   
  ImageIcon icon=new ImageIcon(img2);
  JFrame frame=new JFrame();
  frame.setLayout(new FlowLayout());        
  frame.setSize(img2.getWidth(null)+50, img2.getHeight(null)+50);     
  JLabel lbl=new JLabel();
  lbl.setIcon(icon);
  frame.add(lbl);
  frame.setVisible(true);
  frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}

Interestingly, the amazing Bryan Chung gives a different and simpler way to do it; he posted it here. The image's pixel data is read, split into its channels, and merged back in ARGB order into a byte buffer, which is then copied into the PImage's pixel array, so the image is shown properly inside the sketch window. I adapted his code here:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Scalar;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
 
import java.nio.*;
import java.util.List;
import java.awt.*;            // for ImageIcon type
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import javax.swing.*;           // for ImageIcon type
 
PImage img;
Mat mat;
Mat alpha;
 
void setup() {
  size(640, 480);
  background(0);
  println(Core.VERSION);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  //noLoop();
}
 
void draw() {
  mat = Highgui.imread(dataPath("fifa.png"));
  Mat out = new Mat(mat.rows(), mat.cols(), CvType.CV_8UC4);
  alpha = new Mat(mat.rows(), mat.cols(), CvType.CV_8UC1, Scalar.all(255));
  byte [] bArray = new byte[mat.rows()*mat.cols()*4];
  img = createImage(mat.cols(), mat.rows(), ARGB);
  ArrayList<Mat> ch1 = new ArrayList<Mat>();
  ArrayList<Mat> ch2 = new ArrayList<Mat>();
 
  Core.split(mat, ch1);
 
  ch2.add(alpha);
  ch2.add(ch1.get(2));
  ch2.add(ch1.get(1));
  ch2.add(ch1.get(0));
 
  Core.merge(ch2, out);
 
  out.get(0, 0, bArray);
  ByteBuffer.wrap(bArray).asIntBuffer().get(img.pixels);
  img.updatePixels();
  image(img, 0, 0);
  out.release();
}

There you have it. In short, you just need to make a folder named "code" inside your Processing sketch folder, and copy both opencv-2411.jar and opencv_java2411.dll there.

Installing OpenCV in Windows 7

I'll be teaching Computer Vision next semester using OpenCV as the programming tool. Though I've used it a couple of times in the past, I think it would be better if I switch to Windows for the sake of teaching, since most of my students use that OS. That way, they can focus on the main thing: the theory and practice of computer vision.

Now, since they've used C++ and Java (through Processing) in the past, I have several options for setting up the dev machine:

  1. OpenCV in C++ using Visual Studio 2012
  2. OpenCV in Java using Eclipse
  3. OpenCV in Java using Processing IDE

I will cover the installation process for these three options, mainly because I need a single place of reference for similar activity in the future (i.e. less Googling).

Initial Steps
Some notes to read regarding the environment

  1. This guide uses OpenCV 2.4.11. Make sure you download it from the OpenCV website. I may use 3.0.0 in the future, but for now, this is enough.
  2. For the sake of getting the environment up and running quickly, I use the pre-built OpenCV. You're free to build it from source; in the end, you'll end up using the same files.
  3. I use Windows 7 64-bit, but I think this should work for Windows 8 too.

Main Menu
Generally speaking, our installation involves these steps:

  1. Getting the pre-built OpenCV
  2. Importing the OpenCV components (libraries) to the IDE
  3. Testing it by building a simple OpenCV example

To make things easier to read, I’ll separate the scenarios into 3 different blog posts. Happy reading 🙂

Raspberry Pi Review

Raspberry Pi is all the rage as we speak, and I can see why. Here we have a cheap and small computer, perfectly suited for your next embedded system experiment. I remember how excited I was when it was publicly announced but not yet available. However, I didn't know how I would use it to suit my needs in developing interactive installations (which is still pretty much how I supply myself with monetary income).
I still didn't know until I got a Raspberry Pi in my hands a couple of weeks ago.

Before I go into the bloody details of the Raspberry Pi, I want to introduce you to the platforms currently available for my work. No matter what software or programming environment I use, the base platform always boils down to two extremes: one is a computer (any form, PC, MacBook, you name it) and the other is a microcontroller (most of the time this means Arduino). With the computer, I can create advanced graphics that interact with different kinds of input, such as movement or sound, using sensors such as a webcam, Kinect or microphone. The output, however, is constrained to either a big screen (I count projection as a screen too) or sound, via speakers. Whenever I want to create a more physical output via everyday objects, I resort to Arduino to do the job. With it, I can create blinking lights or rotating objects using input from the computer. Having said that, I've relied on the combination computer -> Arduino -> output, or Arduino -> computer -> output, for years now. Obviously, that configuration requires a lot of space (and cost), even for a project that could've been simpler. That's where I thought the RPi could kick in and take part.

Are you still with me? Good. Sorry for the long intro; the rest of the article seems pointless without it.

So, long story short, I ordered a Raspberry Pi via Ngooprek, an Indonesia-based online electronic components distributor. I think it's the only place to get an RPi here in Indonesia, CMIIW. After I got it, I was amazed by how small it is; I thought it was really cool. However, it took me a while to supply myself with the accessories needed to get started with the RPi. At the end of the day, I got myself an SD card, an HDMI cable and a card reader. Enough to start playing with the RPi, since I can use my LED TV for the RPi's video output via HDMI and my Android phone charger as its power supply. All set.

First, I downloaded the Debian image for the OS. I chose Debian instead of Raspbian since I thought there would be more software available for Debian. I burned the image to my SD card in Windows using the Win32ImageWriter application, plugged the card into the RPi, connected the power supply and the HDMI cable and voila, I had it all running with no hassle. I can't remember the last time I had a Linux machine running in such a short time, really. I tested a couple of the built-in apps inside the OS and everything ran smoothly; I thought this is good for daily computing activities such as internet surfing. Hey, at 700 MHz and 256 MB of RAM, I had a PC with a lower spec back in the day and I could still play games and browse, so this didn't really surprise me. Not the end of the story though, as this isn't why I bought the RPi in the first place. It's for the programming galore.

Having said that, I tried to test how I would develop on the RPi by installing two of my favorite programming environments for developing interactive installations, Pure Data and Processing. I counted C++ libraries such as openFrameworks or Cinder out because it took me a while to compile them on my MacBook; I can't imagine how long it would take to do so on the RPi. Installing Pure Data was easy breezy. It's there in the Debian repo (see, choosing Debian wasn't a bad idea), so some routine apt-get did the job. I opened Pure Data and, surprisingly, it feels pretty light. Weird, because it's quite slow on my MacBook. Did some patching and it feels acceptable. Haven't made any complex patches though. Anyway, patching is the name of programming in Pure Data, since you basically patch together lines between different boxes in order to create something.

What was tricky was actually getting the sound to work. Pure Data is, bread and butter, a sound generator, so it's pointless having it installed on a platform that can't play sound. Theoretically, the RPi can play sound via its headphone jack output or from its HDMI port, to be played back on the TV. The thing is, the output from the headphone port is nowhere near acceptable. I had horrible noises coming out when playing the Pure Data test sound patch. That's still better than the HDMI counterpart, which couldn't play any sound at all. I've investigated ways to make this work acceptably, but so far I've failed. I suspect I have to either configure the RPi and my TV to play sound via HDMI, or get a USB sound card for the same purpose. Either way, I'm still intrigued and I'll keep you guys updated.

On the other hand, installing Processing involved a bit more work. Processing runs on top of Java, so obviously I needed to install a Java VM one way or another. I used OpenJDK 6 because I read that it was supported on ARM, the RPi's processor architecture. I then removed Processing's built-in Java and linked OpenJDK in its place. Voila, I had Processing running. At startup, Processing displayed a message saying that it didn't like the Java VM I have; that's the only peculiar thing that happened, and Processing runs like normal anyway. What's abnormal is the speed of it. Processing feels pretty heavy and slow during startup and while preparing to run sketches. The memory indicator shows full utilization, and it took me something like 2-3 minutes between pressing the play button and having the sketch running. Certainly, under these conditions it's not efficient to do the code-test-code-test routine, as it takes a long time to compile the program. I guess I have to go old school and code everything properly so I don't have to run the sketch frequently. This is the Pi running a simple Processing sketch: a moving rectangle, nothing fancy.
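For reference, the kind of sketch I'm talking about is nothing more than this:

float x = 0;

void setup() {
  size(320, 240);
}

void draw() {
  background(0);
  rect(x, height/2 - 20, 40, 40);
  x = (x + 2) % width; // wrap around once the rectangle leaves the screen
}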

However, it's not all bad news. I realized that the RPi can be used as a more powerful and feature-rich Arduino. A little research on the internet provided me with plenty of information on using the RPi's GPIO pins much like an Arduino's input/output pins. Some companies like Adafruit and Element14 even produce their own RPi accessories to ease electronic prototyping and development on this board. Even better is the fact that the RPi has its own Ethernet port and the ability to use WiFi, so you get Arduino + Ethernet/WiFi shield capability (and more) at half or even a third of the price of that combination.

Having said that, I can see the RPi being used in many more use cases, either as a nostalgic standalone computer that doesn't require much processing power (no pun intended) or as a more powerful version of the Arduino. I just have to make peace with the fact that, for the time being, this board isn't suitable for the full-fledged DIY VJ box I dreamed of in the first place. Maybe I should change my visual style for this tiny machine.

More Work for Nike

Here's a quick recap of 3 projects I've done with Nike in the past month, arranged by date.

Interactive display for Nike Malaysia Booth at Stadion Bukit Jalil, Kuala Lumpur, Malaysia.
Date: 23 July 2012

Nike Malaysia wanted the exact same content that we'd previously developed for the Nike Senayan City store, this time for their booth at the Arsenal – Malaysia friendly match. So we flew there with our content, had the Nike Malaysia guys set up the required hardware, and after 2 days of work we had it all set up properly.

Pictures 1-5.

New Interactive Display Content for Nike Senayan City Store.
Date: 5 August 2012
A content update for the Nike Senayan City store. This time we wanted to display not only the triggered video, but also the image of the actual people playing in front of it. Keeping with the triangle theme of the triggered video, we decided to show the person in a triangulated form; in addition, that person can also create triangulated shapes using his/her hands.
This was made using vvvv on Windows 7.

Pictures 6-7.

Treadmill Visualization for Nike Run Event at Grand Indonesia
Date: 15 August 2012
For this event, Nike wanted us to deliver 2 things: a display of their Nike Run mobile app output, showing how far the runner on the treadmill has gone, and a visualizer that reacts to the speed of the runner. I took charge of the former and used a Kinect to do frame differencing, which in turn dictates the speed of a displayed grid and particles to create a sort of sci-fi warping effect. This was made using Processing on Mac OS X 10.6.

Pictures 8-14.
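The frame-differencing idea itself is simple: compare the current camera frame with the previous one and turn the amount of change into a speed value. Here's a stripped-down sketch of that idea, using a plain webcam via the Processing video library instead of the Kinect; the smoothing factor and the reduction to a single number are assumptions for the example.

import processing.video.*;

Capture cam;
int[] previous;   // pixels of the previous frame
float speed = 0;  // smoothed motion amount driving the visuals

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    if (previous == null) previous = new int[cam.pixels.length];
    float diff = 0;
    for (int i = 0; i < cam.pixels.length; i++) {
      diff += abs(brightness(cam.pixels[i]) - brightness(previous[i]));
      previous[i] = cam.pixels[i];
    }
    speed = lerp(speed, diff / cam.pixels.length, 0.1); // average change per pixel, smoothed
  }
  background(0);
  text("speed: " + nf(speed, 1, 2), 20, 20);
  // ... advance the grid and particles by 'speed' here
}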

Gestural Automated Phone System

Tools: Kinect, Android Phone
Software: custom-made using Processing for both the MacBook and the Android
OS: Mac OS X 10.6
Year: 2012

Now, here's something that I choose not to participate in: fanboy-ism. You know, these days the internet is filled with fanboys. Everyone freely shares their opinion, mostly directed at disregarding companies that are not their preference. Long story short, you have Apple fanboys, Microsoft fanboys, Google fanboys and Linux fanboys, among others. The funny thing is, not all of them actually know the capabilities of the devices they regard so majestically (which ultimately turned them into fanboys in the first place). Very rarely are they able to code (Linux fanboys excluded, as per the previous sentence). So I find it very funny to worship a company over a device you've never dived into. Anyway, that's my 2 cents.

So, pardon the long intro. Three days ago, I acquired an Android phone, a Sony Ericsson Xperia Live. Cheap, but with quite a good spec. It's my first Android phone. That night, I quickly did some research and found that I could actually create a program using Processing and run it straight on my Android phone. My mind was boggled. Imagination ran wild. And after a quick hello world, I decided to apply my previous Kinect knowledge to control this wonderful phone. In response to the rambling in the previous paragraph, I aimed to combine products from Apple (MacBook), Microsoft (Kinect) and Google (Android) into one system. As I've said before, I'm no fanboy. I admire every good piece of technology, no matter who the vendor is.

So, in general, what I have here is a gestural automated phone system. I named it myself. Sounds horrible. It's a system that lets me make a phone call without touching the phone, entering the number (or choosing it from the address book), or pressing the call button. That action is triggered by a gesture, detected by the Kinect. In short, I'm making a system where my hand movement makes my phone call another phone, without me touching it. Sound clear?

Under the hood, I have 2 pieces of software running at the same time. The first runs on the MacBook and does the Kinect gesture detection. The second runs on the phone; it receives a command from the MacBook and then makes a phone call. So, if I move my hand towards the Kinect, my hand is detected, and by moving it to the right corner of the screen, the MacBook sends a command to the phone. This command is sent using the OSC (Open Sound Control) protocol, which requires both the phone and the MacBook to be on the same network. Upon receiving the command, the phone makes the call. Here's the demo (turn the volume UP!):

That video serves as a proof of concept, a crude demo achieved after a night of rapid prototyping. And yes, this is why I love Processing. It's a perfect platform for prototyping a rough concept. Of course, I can see many improvements required, but for now this is what I have.
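To give an idea of the OSC glue underneath, here's roughly what both ends look like using the oscP5 library for Processing, shown together for brevity; in reality they are two separate sketches, and the address pattern, IP, port and phone number are placeholders, with the Kinect detection and the Android call intent reduced to comments:

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress phone;

void setup() {
  size(200, 200);
  osc = new OscP5(this, 12000);                  // local listening port
  phone = new NetAddress("192.168.1.10", 12000); // the Android phone, same network
}

void draw() {
}

// Called when the Kinect gesture detection decides to fire (stub).
void gestureDetected() {
  OscMessage msg = new OscMessage("/call");
  msg.add("0123456789");                         // the number to dial (placeholder)
  osc.send(msg, phone);
}

// On the phone, a separate Android-mode sketch receives the message like this:
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/call")) {
    String number = msg.get(0).stringValue();
    // ... fire the Android call intent with 'number' here
  }
}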

I can see this system being used with the phone attached to a user who has a hands-free device available. Imagine waving your hand and making a call without having to reach into your pocket first. Hmm. Sounds like a part of Iron Man. 🙂

Apple, Microsoft, Google living in harmony.

Interactive Displays for Nike

Client: Nike
Year: 2012
Location: Nike Store, Senayan City Mall, Jakarta, Indonesia
Tools: Arduino and Processing (Interactive Product Display); custom software and TouchDesigner (Interactive Wall Video Display)
Hardware: Arduino and proximity sensor (Interactive Product Display); 2 Kinects (Interactive Wall Video Display)
OS: Windows 7

Here are two new installations I did for Nike to promote their 4 new football shoes, coinciding with the Euro 2012 football tournament. One is an interactive product display and the other is an interactive wall video display. Generally speaking, both trigger videos in their own way. Both are deployed at the Nike Store in Senayan City Mall in Jakarta.

For the interactive product display, customers can pick up a shoe from its display box, which triggers an informational video about that product. This happens for all 4 shoes. Each shoe is placed on top of a computer (an integrated monitor + CPU Lenovo model). A proximity sensor connected to an Arduino detects the shoe's position to determine whether it has been lifted. Upon lifting, a video is triggered. This is programmed using the new Processing 2.0a6 in order to achieve smooth 720p video playback using its built-in GStreamer video back end.
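A simplified version of the per-shoe logic looks roughly like this, assuming the Arduino prints one distance reading (in cm) per line over serial; the threshold, port index and video file name are placeholders for the example:

import processing.serial.*;
import processing.video.*;

Serial arduino;
Movie productVideo;
boolean lifted = false;

void setup() {
  size(1280, 720);
  arduino = new Serial(this, Serial.list()[0], 9600); // the Arduino reading the proximity sensor
  arduino.bufferUntil('\n');                           // one reading per line
  productVideo = new Movie(this, "product.mp4");       // 720p clip in the sketch's data folder
}

void serialEvent(Serial s) {
  int distance = int(trim(s.readString()));  // distance reported by the sensor, in cm
  boolean nowLifted = distance > 10;         // shoe lifted: the sensor sees past the box
  if (nowLifted && !lifted) productVideo.loop();
  if (!nowLifted && lifted) productVideo.stop();
  lifted = nowLifted;
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  if (lifted) image(productVideo, 0, 0, width, height);
}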

The interactive wall video display triggers videos based on an audience member's position in front of the store's window display. Again, 4 shoes are placed in front of the 3 m x 1 m video display, so if somebody stands in front of a shoe, the corresponding video is triggered and played on the display. 2 Kinects are used to detect people over a fairly wide space. Custom software stitches the images from both Kinects and does blob tracking, sending the blobs' positions to TouchDesigner to trigger the different videos. TouchDesigner was chosen for its amazing ability to play back hi-res video without burdening the computer's CPU, since it does the work on the GPU.
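The trigger logic on top of the blob tracking is essentially a zone lookup. A small sketch of the idea, with the Kinect stitching, the tracking itself and the hand-off to TouchDesigner reduced to stubs:

// Map a blob's x position (normalised 0..1 across the stitched Kinect image)
// to one of the 4 shoe zones, and react only when the active zone changes.
int currentZone = -1;

int zoneFor(float normalisedX) {
  return constrain(int(normalisedX * 4), 0, 3); // 4 equal-width zones across the display
}

// Called by the blob tracker with the position of the nearest person (stub).
void blobMoved(float normalisedX) {
  int zone = zoneFor(normalisedX);
  if (zone != currentZone) {
    currentZone = zone;
    // ... tell TouchDesigner to switch to the video for this shoe
  }
}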

It's still on display until the end of June, so if you're in Jakarta, hop in for a ride and grab a Nike product while you're at it.