99 Names

99 Names is a web VR experience that exhibits the 99 Names of Allah in an immersive 3D space. This is my first foray into both Virtual Reality applications and WebGL programming through Three.js. It’s a simple application: users can access the web page from their phone or desktop/laptop browser and instantly get the experience, surrounded by rotating circles showing the 99 names of Allah.


The skeleton of the project was built using the Web VR Boilerplate, which ensures that everyone can get a grip on the experience, whether they’re on a desktop or a smartphone, with or without a virtual reality head-mounted display such as the Oculus Rift or Google Cardboard. All within the web browser, no application to install. I think this is a good way to evangelize VR, since at this point the VR world really needs a whole lot of applications to expose its features and shortcomings.

I had so much fun making it. The boilerplate makes it really easy to develop a VR experience, so I could focus on the main content, which was all made using Three.js. Though I’ve read about it a lot in the past (it’s been on my to-learn list for about 3 years now, haha), this is actually the first time I’ve thoroughly learned it. I can say that the best way to learn a programming language/library is by making things with it. So far, I’ve learned a lot about the 3D pipeline. Which makes me wonder: why didn’t I do this years ago?

However, from the interaction design point of view, I realized that catering a VR experience to both kinds of platforms (desktop and smartphone) is tricky. For example, in smartphone-based VR, user input is limited. Not all phones can use the magnetic input from Google Cardboard, something that will hopefully be rectified by Google Cardboard 2. I’m not sure about the other HMDs; maybe you, dear reader, have other data?

On the other hand, I can offer a plethora of inputs in the desktop version, since the user can use a keyboard, mouse, joystick, or other devices to give input to the application, something that obviously won’t map precisely to the smartphone counterpart. I did run into the vreticle library, which can help me make a gaze input system for VR, but I still found some trouble implementing it.
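For context, gaze input of this kind usually boils down to a dwell timer: a selection fires once the reticle has rested on a target for long enough. A minimal model of that logic in plain Java (all names here are hypothetical, not the vreticle API):

```java
// Minimal dwell-timer model for gaze selection (hypothetical names,
// not the actual vreticle API): a target counts as "clicked" once the
// gaze has rested on it for a full dwell period.
class GazeDwell {
    private final long dwellMillis;     // how long the gaze must rest to select
    private String currentTarget = null;
    private long gazeStart = 0;

    GazeDwell(long dwellMillis) {
        this.dwellMillis = dwellMillis;
    }

    // Call every frame with the object under the reticle (null if none).
    // Returns the selected target once per completed dwell, else null.
    String update(String target, long nowMillis) {
        if (target == null || !target.equals(currentTarget)) {
            currentTarget = target;     // gaze moved: restart the timer
            gazeStart = nowMillis;
            return null;
        }
        if (nowMillis - gazeStart >= dwellMillis) {
            gazeStart = nowMillis;      // reset so we don't fire every frame
            return currentTarget;
        }
        return null;
    }
}
```

In a real scene the `target` argument would come from a raycast out of the camera; the timer itself is the whole trick.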

Therefore, at this point, this experience is a passive one; no user input is involved. But I do hope to add one at some point.

99 Names can be accessed at adityo.net/99names. Play with it and let me know what you think.

Meanwhile, here are some screenshots of the steps I took in making it:

99-names-1st-step

99-names-2nd-step

99-names-3rd-step

Leap Motion Synth

I helped a friend develop a simple tone generator as a medium for musical experiential learning for kids. He wanted to use the Leap Motion so kids could use finger gestures to generate tones while learning the pitch of the notes.

Leap Synth

This was a good experience for me, as I wanted to learn more about designing UIs for gestural input devices such as the Leap Motion. This time, I proposed this scenario:

  1. Use the right hand’s index finger to choose which note to trigger
  2. Use the left hand to trigger playing and stopping a note. When the palm is closed, a note is triggered; when the hand is opened, the note stops playing
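The scenario above can be modeled as a tiny state machine, independent of the Leap Motion itself. A minimal sketch in plain Java (all names hypothetical; the actual Leap tracking plumbing is omitted), assuming the right index finger’s x position is normalized to 0..1 across seven notes:

```java
// Sketch of the two-hand scenario (hypothetical names, not the Leap API):
// the right index finger's x position picks a note, and the left palm's
// open/closed state starts and stops it.
class TwoHandSynth {
    static final String[] NOTES = {"C", "D", "E", "F", "G", "A", "B"};
    private String selected = NOTES[0];
    private boolean playing = false;

    // Map the right index finger's normalized x (0..1) to a note.
    void pointAt(float normalizedX) {
        int i = (int) (normalizedX * NOTES.length);
        selected = NOTES[Math.min(Math.max(i, 0), NOTES.length - 1)];
    }

    // Closed left palm triggers the note; open palm stops it.
    void leftPalm(boolean closed) {
        playing = closed;
    }

    // The note currently sounding, or null if silent.
    String sounding() {
        return playing ? selected : null;
    }
}
```

Keeping selection and triggering on separate hands is what makes this a choose-and-confirm interaction rather than an accidental-trigger one.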

As with previous projects, I used Processing for development, as I can easily export it as a Windows application so he could deploy it without much hassle. The main challenge was getting Processing to detect which hand is right or left. In the end, I decided to detect the hand’s position relative to the Leap Motion. After that, the finger detection and tracking were done. Mind that this was done in May 2014; several months later, Leap Motion released a new API which provides an easier way to detect the left/right hand. Ha!
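That position-based workaround can be sketched as follows (plain Java, hypothetical names, not the Leap Motion API): since the device reports palm positions with itself at the origin, the sign of a palm’s x coordinate hints at which hand it is.

```java
// Classify hands from palm x positions relative to the Leap Motion,
// which sits at x = 0 (a workaround from before the API exposed
// handedness; names here are hypothetical).
class HandSide {
    // Single hand: negative x means left of the device.
    static String classify(float palmX) {
        return palmX < 0 ? "left" : "right";
    }

    // Two hands: whichever palm is further left is the left hand.
    static String[] classifyPair(float xA, float xB) {
        return xA < xB ? new String[]{"left", "right"}
                       : new String[]{"right", "left"};
    }
}
```

It misclassifies crossed hands, which is one reason the later API-level handedness was welcome.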


I went through several iterations, including using threads to ensure a smooth experience. In the end, however, I settled on a thread-less solution, since it didn’t require hand position detection at the start. It was a good learning experience, especially for designing UIs. I saw that this solution wasn’t really ideal, since the hands became very busy, though it was accurate enough to implement the choose-and-confirm paradigm employed by the mouse.

I know that further development of the UI paradigm is required to improve applications of the Leap Motion.

Installing OpenCV in Processing for Windows 7

The good thing about having Java OpenCV is that we can use Processing, with all its conveniences, to develop computer vision applications. Sure, there are additional libraries that wrap Java OpenCV to make development even easier, like OpenCV for Processing. However, using vanilla OpenCV will let you learn about the internals of the library. I figure this is a good route, teaching-wise. But for something more practical and faster to develop, please use the aforementioned library. It really is that good.

Now, installing OpenCV in Processing could not be easier. Again, I use the pre-built OpenCV 2.4.11. Here are the steps:

1. Download and install Processing
2. Make a new sketch (File – New)
3. Give it a name and save it, even before you type anything
4. Now, in Windows Explorer, go to the Processing sketch folder; by default it’s at “My Documents – Processing”
5. Go to where you saved your sketch
6. Now, make a folder named “code” and copy both opencv-2411.jar and opencv_java2411.dll into it from your opencv_directory/build/java
7. That’s it, now you can use OpenCV inside your Processing sketch.

To test it, let’s copy and paste the code from the previous tutorial. It was in Java, so it should work without any hiccups.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Scalar;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

import java.nio.*;
import java.util.List;
import java.awt.*;            
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import javax.swing.*;        

PImage img;
Mat mat;
Mat alpha;

void setup() {
  size(640, 480);
  background(0);
  println(Core.VERSION);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  noLoop(); //so the code only runs once
}

void draw() {
  mat = Highgui.imread(dataPath("fifa.png")); //put the file in data directory of your sketch
  Image imgToShow = Mat2BufferedImage(mat);
  displayImage(imgToShow);
}

BufferedImage Mat2BufferedImage(Mat m)
{
  //source: http://answers.opencv.org/question/10344/opencv-java-load-image-to-gui/
  //Fastest code
  //The output can be assigned either to a BufferedImage or to an Image

  int type = BufferedImage.TYPE_BYTE_GRAY;
  if ( m.channels() > 1 ) {
    type = BufferedImage.TYPE_3BYTE_BGR;
  }
  int bufferSize = m.channels()*m.cols()*m.rows();
  byte [] b = new byte[bufferSize];
  m.get(0, 0, b); // get all the pixels
  BufferedImage image = new BufferedImage(m.cols(), m.rows(), type);
  final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
  System.arraycopy(b, 0, targetPixels, 0, b.length);  
  return image;
}

void displayImage(Image img2)
{   
  ImageIcon icon=new ImageIcon(img2);
  JFrame frame=new JFrame();
  frame.setLayout(new FlowLayout());        
  frame.setSize(img2.getWidth(null)+50, img2.getHeight(null)+50);     
  JLabel lbl=new JLabel();
  lbl.setIcon(icon);
  frame.add(lbl);
  frame.setVisible(true);
  frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}

Interestingly, the amazing Bryan Chung gives a different, simpler way to do it. He posted it here: the image’s pixel data is read and copied into a buffer array where the colors are rearranged properly, before being moved to a different array; hence, the image is displayed properly. I adapted his code here:

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Scalar;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
 
import java.nio.*;
import java.util.List;
import java.awt.*;            // for ImageIcon type
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import javax.swing.*;           // for ImageIcon type
 
PImage img;
Mat mat;
Mat alpha;
 
void setup() {
  size(640, 480);
  background(0);
  println(Core.VERSION);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  //noLoop();
}
 
void draw() {
  mat = Highgui.imread(dataPath("fifa.png"));
  Mat out = new Mat(mat.rows(), mat.cols(), CvType.CV_8UC4);
  alpha = new Mat(mat.rows(), mat.cols(), CvType.CV_8UC1, Scalar.all(255));
  byte [] bArray = new byte[mat.rows()*mat.cols()*4];
  img = createImage(mat.cols(), mat.rows(), ARGB);
  ArrayList<Mat> ch1 = new ArrayList<Mat>();
  ArrayList<Mat> ch2 = new ArrayList<Mat>();
 
  Core.split(mat, ch1);
 
  ch2.add(alpha);
  ch2.add(ch1.get(2));
  ch2.add(ch1.get(1));
  ch2.add(ch1.get(0));
 
  Core.merge(ch2, out);
 
  out.get(0, 0, bArray);
  ByteBuffer.wrap(bArray).asIntBuffer().get(img.pixels);
  img.updatePixels();
  image(img, 0, 0);
  out.release();
}

There you have it. In short, you just need to make a folder named “code” inside your Processing sketch folder, and copy both opencv-2411.jar and opencv_java2411.dll there.

Installing Java OpenCV in Eclipse for Windows 7

This one is a bit easier, since the official guide from OpenCV is accurate, and I’ve followed it without any trouble. So, you just need to do the following:

  1. Download and install JDK
  2. Download and install Eclipse
  3. Follow this guide from OpenCV’s website

As for testing, you can use the following code, which does exactly the same thing as the C++ version. You may notice that this code is longer than its C++ counterpart. That’s because Java OpenCV has no equivalent of C++’s imshow(), so most of the code here converts the image from the Mat data type to something that Java can display.

import java.awt.*;						// for ImageIcon type
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import javax.swing.*; 					// for ImageIcon type
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

// This class displays image 
// Its first function converts from Mat to BufferedImage
// Second function displays the converted image
class ImageDisplayer
{
   public BufferedImage Mat2BufferedImage(Mat m)
   {
		//source: http://answers.opencv.org/question/10344/opencv-java-load-image-to-gui/
		//Fastest code
		//The output can be assigned either to a BufferedImage or to an Image
	
	   int type = BufferedImage.TYPE_BYTE_GRAY;
	   if ( m.channels() > 1 ) {
	       type = BufferedImage.TYPE_3BYTE_BGR;
	   }
	   int bufferSize = m.channels()*m.cols()*m.rows();
	   byte [] b = new byte[bufferSize];
	   m.get(0,0,b); // get all the pixels
	   BufferedImage image = new BufferedImage(m.cols(),m.rows(), type);
	   final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
	   System.arraycopy(b, 0, targetPixels, 0, b.length);  
	   return image;	
   }
   
   public void displayImage(Image img2)
   {   
	   ImageIcon icon=new ImageIcon(img2);
	   JFrame frame=new JFrame();
	   frame.setLayout(new FlowLayout());        
	   frame.setSize(img2.getWidth(null)+50, img2.getHeight(null)+50);     
	   JLabel lbl=new JLabel();
	   lbl.setIcon(icon);
	   frame.add(lbl);
	   frame.setVisible(true);
	   frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
	}
}

// This is the main class
// We load image using OpenCV as a Mat
public class hello
{
	public static void main( String[] args )
	{
	    System.loadLibrary( Core.NATIVE_LIBRARY_NAME );
		Mat img = Highgui.imread("C:\\Users\\Didit\\Pictures\\fifa.png"); //Change to any file you want
	    ImageDisplayer displayer = new ImageDisplayer();
		Image imgToShow = displayer.Mat2BufferedImage(img);
	    displayer.displayImage(imgToShow);
	}
}

I didn’t expect there would be this much code in the beginning, but Java is pretty easy to read anyway. So I don’t think this trade-off should scare you away from using OpenCV in Java.

Installing C++ OpenCV in Visual Studio 2012 for Windows 7

Let’s make it quick; here are the steps required. I use the pre-built OpenCV 2.4.11 on a 64-bit Windows 7 machine.

Getting The OS Ready
First of all, let’s download OpenCV and do some OS-level configuration:

  1. Download OpenCV
  2. Extract OpenCV to your desired folder. It’s a good idea to use the shortest path to it and keeping the version name too. Example: “D:\OpenCV-2.4.11”
  3. Open Command Prompt and type
    setx -m OPENCV_DIR D:\OpenCV-2.4.11\build\x64\vc11

    Note that this is for 64-bit Visual Studio 2012

  4. Right click on My Computer, click Properties and choose Advanced System Settings
  5. Click Environment Variables button
  6. In System variables, choose the Path Variable and click Edit button
  7. Now, type
    ;%OPENCV_DIR%\bin;

    At this point, the system knows where OpenCV is. If you need to change the directory, you just need to redo step 3 and change accordingly

Adding OpenCV to Visual Studio Project
Now, we’ll look into how to add OpenCV to a Visual Studio project. It involves adding the appropriate libraries, header files and DLLs.

Note that the official tutorial from OpenCV also suggests global mode of adding libraries. I prefer local method. For the sake of completion, I’d suggest that you go and visit that page

  1. Open Visual Studio, choose File – New Project
  2. Choose Win32 Console Application, give it a name and choose where to save it. Click Next
  3. In the next wizard window, choose Empty Project under Additional Options, just to make things cleaner. This is optional though.
  4. Now you have a basic project ready to be used. Let’s add OpenCV to it.
  5. Go to Project-$Project_Name Properties or press Alt+F7
  6. Because I want to use 64-bit, I go to Configuration Manager and change the platform to x64
  7. Next, in the Property Pages window, choose Configuration Properties – C/C++, and under Additional Include Directories insert:
    $(OPENCV_DIR)\..\..\include
    $(OPENCV_DIR)\..\..\include\opencv
    $(OPENCV_DIR)\..\..\include\opencv2
    
  8. Then, in Linker – Additional Library Directories insert
    $(OPENCV_DIR)\lib
  9. Now, check your Configuration at the top of the Property Pages window. You can set it to Debug or Release. Make sure that you add these additional libraries for both configurations.
  10. For Debug configuration, in Linker – Input – Additional Dependencies insert these following items:
    opencv_calib3d2411d.lib
    opencv_core2411d.lib
    opencv_features2d2411d.lib
    opencv_flann2411d.lib
    opencv_highgui2411d.lib
    opencv_imgproc2411d.lib
    opencv_ml2411d.lib
    opencv_objdetect2411d.lib
    opencv_photo2411d.lib
    opencv_stitching2411d.lib
    opencv_superres2411d.lib
    opencv_ts2411d.lib
    opencv_video2411d.lib
    opencv_videostab2411d.lib
    
  11. And finally, for Release, repeat the last step and insert the aforementioned items, but remove the letter “d” before “.lib”, so you have:
    opencv_calib3d2411.lib
    opencv_core2411.lib
    opencv_features2d2411.lib
    opencv_flann2411.lib
    opencv_highgui2411.lib
    opencv_imgproc2411.lib
    opencv_ml2411.lib
    opencv_objdetect2411.lib
    opencv_photo2411.lib
    opencv_stitching2411.lib
    opencv_superres2411.lib
    opencv_ts2411.lib
    opencv_video2411.lib
    opencv_videostab2411.lib
    

That’s it, you now have working OpenCV libraries in Visual Studio. You may want to save this as a project template, so you don’t have to repeat it all over again.

Testing The Installation
Now for the fun part: let’s add some source code and compile it. We’ll make a simple image viewer using OpenCV. Add a new C++ file and name it anything. Type in the following code:

// Simple image display using OpenCV

// Include opencv
#include <opencv2/opencv.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
	cv::Mat img = cv::imread("your image file here"); // for example C:\\pictures\\image.jpg
	cv::imshow("image", img);
	cv::waitKey(); // so the program doesn't close immediately
	return 0;
}

Now build and run it. If everything’s fine, you should see the image displayed.
Congratulations! You now have a working OpenCV installation in Visual Studio 2012. Now go and make something exciting.

Installing OpenCV in Windows 7

I’ll be teaching Computer Vision next semester using OpenCV as the programming tool. Though I’ve used it a couple of times in the past, I think it would be better to switch to Windows for the sake of teaching, since most of my students use that OS. That way, they can focus on the main thing: the theory and practice of computer vision.

Now, since they’ve used C++ and Java (through Processing) in the past, I have several options for setting up the dev machine:

  1. OpenCV in C++ using Visual Studio 2012
  2. OpenCV in Java using Eclipse
  3. OpenCV in Java using Processing IDE

I will cover the installation process for these three options, mainly because I need a single place of reference for similar activities in the future (i.e. less Googling).

Initial Steps
Some notes to read regarding the environment

  1. This guide uses OpenCV 2.4.11. Make sure you download it from the OpenCV website. I may use 3.0.0 in the future, but for now, this is enough.
  2. For the sake of getting the environment up and running quickly, I use the pre-built OpenCV. You’re free to build from source; in the end, you’ll end up using the same files.
  3. I use Windows 7 64-bit, but I think this should work for Windows 8 too.

Main Menu
Generally speaking, our installation involves these steps:

  1. Getting the pre-built OpenCV
  2. Importing the OpenCV components (libraries) to the IDE
  3. Testing it by building a simple OpenCV example

To make things easier to read, I’ll separate the scenarios into 3 different blog posts. Happy reading 🙂

    It’s A Good Time to Develop VR Content

    Ever since the head of my department bought two Oculus Rift HMDs and lent one of them to me, I’ve started to dabble a lot in this field. Some months before that, I managed to get a Google Cardboard, which was a good introduction to VR, but I wasn’t really able to develop further interest, partially because I got quite massive simulation sickness after playing with it for a short time (though this is probably a mistake on my side, since I have acute vertigo to begin with).

    Now, the Oculus Rift DK2 is actually a very good piece of hardware. I can use it comfortably for 20 minutes with my 2012 non-retina MBP. It also comes with a rapidly developing software suite for developers, meaning it’s actually pretty fast and easy to start making content for it. The ecosystem is lively, and there are a lot of options to pick from should you want to start your adventure here.

    Indeed, generally speaking, it’s a good time to develop for VR. As I looked around for game and app demos for the Rift, I found that there’s not much released by big studios/companies, but there is plenty to download or buy from small indie developers. This is more proof that the developers who bought it aren’t afraid of showing whatever they have, even if it’s just a single-level, wandering-around type of demo. And this is very important.

    This new wave of VR devices still has plenty of problems to crack before it’s ready for public use. Several issues, including the haunting simulation sickness, are still there. I haven’t come across any research paper that tries to answer it, but everyone who makes content for VR has a different approach to reducing it. This is why the more demos that are available, the more solutions appear.

    Also, think about the UI of the content. For starters, this isn’t a flat 2D monitor; it’s a full-blown 3D experience right from the beginning. Sticking widgets and buttons in the top corner just won’t cut it. How would you answer that? By showing your solution via a demo app, of course.

    Those are just two of the many issues that I find interesting in VR content. Obviously, many technical issues will spring up too, such as the efficient use of polygons to reduce judder (which will also reduce simulation sickness), or how to make content less resource-intensive, so people don’t have to own a high-end gaming PC such as the one suggested by Oculus recently, etc.

    These issues will be around for a long time. Meanwhile, you can choose your own path in development. Hardware-wise, Oculus isn’t the only player in town. For PC-based VR, we will have:

    1. Open Source VR (OSVR), with hardware made by Razer (coming July ’15)
    2. Valve and HTC’s Vive, which many claim is better than the Oculus Rift
    3. The upcoming FOVE, which is still running its crowdfunding campaign on Kickstarter
    4. Last but not least, the Oculus Rift consumer model, coming in Q1 2016

    Also, Sony has its own Morpheus, which will run exclusively on the PS4. This is a good contender, since it can offer a VR experience to millions of existing PS4 players who aren’t inclined to get a new PC.

    On the other hand, for mobile-based VR, there is already a handful of hardware to choose from:

    1. Zeiss VR One
    2. Avegant Glyph
    3. Google Cardboard
    4. Oculus Gear VR
    5. Durovis Dive

    With that in mind, you would suspect that, software-wise, developing content for these devices should give you many options as well. You’re right. For developing games, both Unreal Engine and Unity (both popular game engines) offer support for the Oculus Rift, and I can’t see why they won’t do the same for the others when they become available. I tried both to make quick VR prototypes, and yes, they’re very friendly and provide you with tools to rapidly develop VR content and polish it along the way.

    If the web is your thing, then fear not, as both Firefox and Chrome are aiming for the VR platform as well. They have the capability to deliver VR content, and for programming it, you have the bulletproof Three.js. Also, there’s a work-in-progress JS API for VR named WebVR, which sounds and looks very promising.

    I sense this VR era will end up quite a lot like the mobile smartphone boom of several years ago. The availability of apps really ushered in that era, partially because of the democratization of developer tools. Everyone can make one, and everyone has the same chance to prove themselves while providing solutions for the platform.

    Now I just need to make one myself.

    I’m on The Job Market. Hire Me.



    To keep things short: I’m now looking for new opportunities. I’m an interaction designer, with a mission to craft human-friendly interactions with digital devices and technologies. My field of work focuses on natural user interfaces, which involve technologies such as the Microsoft Kinect, Leap Motion, and Oculus Rift, among others.

    Here’s a short list of my capabilities:

    – Processing, openFrameworks and Cinder for interactive installation/application development
    – Unity for game, Augmented Reality and Virtual Reality development
    – Arduino for Internet of Things and other hardware-based projects
    – Analysis of UI and UX
    – Front-end web development

    Do check out my previous works here. Or, feel free to explore my Infographic CV.

    Never hesitate to contact me through the form on the left.

    I’ll see you soon!

    Arduino for Beginners Part 2: The Magic of Programming

    Still have the circuit from the previous tutorial? If so, great. If not, please read through it at your own pace and come back once the circuit is ready 🙂

    Once you’re ready, we can start the second part of this tutorial. This time, I’ll show the flexibility of Arduino programming. As we know, the Arduino is essentially a microcontroller that can be programmed to turn a given input into an output. Just as a computer whose hardware doesn’t change can be programmed to do all sorts of different things, an Arduino can also be programmed to do different things, even though it consists of the same components.


    This tutorial focuses on Arduino programming. The previous tutorial produced a circuit consisting of a button and an LED, where the LED lights up as long as the button is held down. Very basic and not very interesting. This time, we’ll make the LED blink when the button is pressed, and turn off when the button is pressed a second time, like flipping a switch. We’ll do this using the same circuit.

    The code we’ll use builds on the previous code. There are only small additions at the beginning and in the loop() function.

    For those who have never programmed before, here are a few things to note:

    1. Any line that starts with // is a comment, containing notes or explanations. This part is not compiled and uploaded to the Arduino; it’s only an explanation for whoever reads the program’s source code
    2. In a program, there are things called variables. These hold values that can change as the program runs. Variables have a certain type and are initialized by assigning a value. For example, to create a variable named x of type integer (whole number) with the value 10, we can write
    int x = 10;

    3. Every line in a program that does something is called a statement. A statement ends with a semicolon ;
    4. Some parts of a program collect one or more statements into a block. These are known as functions, and each performs a specific task. For example, our program has 2 functions: setup() and loop()

    Alright, here’s the code:

    // set pin 2 as the button pin and pin 13 as the LED pin
    const int buttonPin = 2;
    const int ledPin = 13;
    
    // variables to store the button state, its previous state, and the LED state
    int buttonState = LOW;
    int prevButtonState = LOW;
    int ledState = LOW;
    
    void setup() {
        // configure the LED pin as an output
        pinMode(ledPin, OUTPUT);
        // configure the button pin as an input
        pinMode(buttonPin, INPUT);
    }
    
    void loop(){
        // read the state of the button pin
        buttonState = digitalRead(buttonPin);
    
        // this is the part we change:
        // check whether the button state differs from its previous state
        if (buttonState != prevButtonState) {
            // if so, the previous button state becomes the current one
            prevButtonState = buttonState;
            // if the previous button state is LOW, the LED state flips
            if (prevButtonState == LOW) {
               ledState = !ledState;
            }
        }
        // if ledState == HIGH, the LED blinks; otherwise, the LED stays off
        if (ledState == HIGH) {
            blink();
        } else { 
            digitalWrite(ledPin, ledState);
        }
    }
    
    // this function makes the LED blink
    void blink(){
      digitalWrite(ledPin, HIGH);   // turn the LED on
      delay(1000);                  // wait a moment
      digitalWrite(ledPin, LOW);    // turn the LED off
      delay(1000);                  // wait again
    }
    

    The code we changed in the loop() function means that this time, what we watch for is a change in the button’s state. Whenever there’s a change in state, which happens every time the button is pressed, the ledState variable changes as well, like saying ON or OFF. This is different from before, where what we watched was the button’s state itself.

    When the LED state is HIGH, the program calls the blink() function, which makes the LED blink, and turns it off otherwise. A function can be described as a part of the program that performs, well, a specific function. In Arduino, there are at least 2 mandatory functions: setup(), which runs once at the start of the program, and loop(), which executes repeatedly while the program runs. Using separate functions like this is good practice for procedural, or sequential, thinking, which can help beginners solve problems in a structured way.
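    The toggle logic can also be modeled apart from the hardware, which makes it easier to reason about. A minimal sketch in plain Java (hypothetical names, mirroring the Arduino sketch’s variables, with the digitalWrite/delay calls stripped out):

```java
// Model of the toggle logic above: the LED state flips only on the
// HIGH -> LOW transition, i.e. when the button is released.
class ToggleButton {
    static final int LOW = 0, HIGH = 1;
    private int prevButtonState = LOW;
    private int ledState = LOW;

    // Feed in each new reading from digitalRead(buttonPin).
    void read(int buttonState) {
        if (buttonState != prevButtonState) {
            prevButtonState = buttonState;
            if (prevButtonState == LOW) {   // button just released
                ledState = (ledState == LOW) ? HIGH : LOW;
            }
        }
    }

    boolean blinking() {
        return ledState == HIGH;
    }
}
```

    Tracing it by hand: a press (LOW to HIGH) changes nothing yet; the following release (HIGH to LOW) flips the LED state, and the next press-and-release flips it back.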

    That’s it for this tutorial; I hope it helps your Arduino experience. The next tutorial will cover other kinds of input. If you have ideas, feel free to write a comment on this post as well.

    Happy tinkering 🙂