Proj 1: Simple Project

glegrady

Proj 1: Simple Project

Post by glegrady » Fri Apr 01, 2016 1:48 pm

Simple Project Assignment due April 5, 2016

The purpose of this assignment is to acquire basic skills in how the Kinect and SimpleOpenNI work together.

To begin, review the following:
Dan Shiffman's "Getting Started with Kinect and Processing": http://shiffman.net/p5/kinect/

Making Things See, Borenstein, Chapter 2 – Working with the Depth Image, pages 43-107

Study the examples:
Project 5: Tracking the Nearest Object, p. 76
Project 6: Invisible Pencil, p. 86
Project 7: Minority Report Photos, p. 96
Multiple Image & Scale, p. 100

Do a simple project that uses any of the techniques learned.
Examples (a minimal closest-point starter sketch follows this list):
. Paint on the screen using basic forms: lines, rectangles, etc.
. Erase or transform part of a photo covering the screen
. Create a shape based on your gestures’ history
. Put effort into the design and aesthetic quality
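
For orientation, here is a minimal closest-pixel sketch in the spirit of Borenstein's Project 5, assuming SimpleOpenNI is installed; this is a sketch, not his exact code:

import SimpleOpenNI.*;
SimpleOpenNI kinect;
int closestValue, closestX, closestY;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  closestValue = 8000;  // larger than any depth the sensor reports
  kinect.update();
  int[] depthValues = kinect.depthMap();
  for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
      int d = depthValues[x + y * 640];
      if (d > 0 && d < closestValue) {  // depth 0 means "no reading"
        closestValue = d;
        closestX = x;
        closestY = y;
      }
    }
  }
  image(kinect.depthImage(), 0, 0);
  fill(255, 0, 0);
  ellipse(closestX, closestY, 20, 20);  // mark the nearest object
}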

Post your work by clicking on Post Reply and submit the following:
. A description of your simple project
. A screen shot
. Add the code so others can try it
. List references: which code you expanded on, or any publications or other artists' or researchers' works
George Legrady
legrady@mat.ucsb.edu

ihwang

Intae's Proj 1: Simple Project

Post by ihwang » Sun Apr 03, 2016 10:56 pm

Topic
I was interested in converting body gestures into another type of expression, and the Kinect is one of the best devices for capturing body motion. Using one of the KinectV2 libraries, I created a body-painting program.

Description
Basically, the KinectPV2 library (http://codigogenerativo.com) has a "Skeleton 3D" example that presents a reference point for each joint of a tracked body. I selected two of the joints, the right and left hands, and these two selected "Joints" draw circles on the screen. The Skeleton 3D program provides x, y, z coordinates in virtual space; I stored only the x and y values, and the program uses them to draw multiple circles over the background.

Drawback
The major problem I couldn't fix was turning the linear arrangement of circles into a single curved line. The current solution draws a circle every frame, so if the user moves their hands fast, you can see gaps between consecutive circles. Instead of drawing individual circles, I'll try the line() function so the motion actually creates a continuous "drawing".
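
A minimal sketch of that fix, assuming the same globals as the code below plus two new variables (prevX/prevY, my additions) to remember the previous frame's hand position:

float prevX, prevY;  // previous frame's stored hand position (new variables)

void drawFunction() {
  canvas.beginDraw();
  canvas.stroke(255, 0, 0);
  canvas.strokeWeight(20);
  // connect the previous and current positions so fast hand movements
  // leave a continuous stroke instead of separated dots
  canvas.line(prevX*zVal + width/2, -prevY*zVal + height/2,
              storeX*zVal + width/2, -storeY*zVal + height/2);
  canvas.endDraw();
  prevX = storeX;
  prevY = storeY;
}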

Reference
I borrowed code from two sources:
1. Thomas Sanchez Lengeling's KinectPV2 library
2. amnon.owed's answer on the Processing forum (https://forum.processing.org/one/topic/eraser.html)

Screenshot and video
0194.png
https://www.youtube.com/watch?v=r7eBzlFH6fk

Source file
I attached the source code below, but I used Windows, which requires the Kinect SDK; it won't work on Mac OS.

Code:

/*
 MAT 265 class project No.1 20160403
 This code is a modified version of one of the examples in Thomas Sanchez Lengeling's KinectPV2 library
 and amnon.owed's answer (https://forum.processing.org/one/topic/eraser.html)
  
 Copyright (C) 2014  Thomas Sanchez Lengeling.
 KinectPV2, Kinect for Windows v2 library for processing
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
 in the Software without restriction, including without limitation the rights
 to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 copies of the Software, and to permit persons to whom the Software is
 furnished to do so, subject to the following conditions:
 
 The above copyright notice and this permission notice shall be included in
 all copies or substantial portions of the Software.
 
 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 THE SOFTWARE.
 
 https://forum.processing.org/one/topic/eraser.html
 */

import KinectPV2.KJoint;
import KinectPV2.*;
PGraphics canvas;

KinectPV2 kinect;

Skeleton [] skeleton;

float storeX;
float storeY;

float zVal = 400;
float rotX = PI;

void setup() {
  size(1024, 768, P3D);

  kinect = new KinectPV2(this);

  kinect.enableColorImg(true);

  kinect.enableSkeleton(true );
  //enable 3d Skeleton with (x,y,z) position
  kinect.enableSkeleton3dMap(true);

  kinect.init();
  smooth();
  canvas = createGraphics(width,height,P3D);
  canvas.beginDraw();
  canvas.smooth();
  canvas.endDraw();
}

void draw() {
  background(255);

  image(kinect.getColorImage(), 0, 0, 320, 240);

  skeleton =  kinect.getSkeleton3d();

  //translate the scene to the center 
  pushMatrix();
  translate(width/2, height/2, 0);
  scale(zVal);
  rotateX(rotX);

  for (int i = 0; i < skeleton.length; i++) {
    if (skeleton[i].isTracked()) {
      KJoint[] joints = skeleton[i].getJoints();

      //draw different color for each hand state
      drawHandState(joints[KinectPV2.JointType_HandRight]);
      //drawHandState(joints[KinectPV2.JointType_HandLeft]); 
    }
  }
  popMatrix();
  
  fill(255, 0, 0);
  text(frameRate, 50, 50);
  drawFunction();
  image(canvas,0,0);
  //println(storeX,storeY);
  //saveFrame("frames/####.png");
 
}

void drawFunction() {
  canvas.beginDraw();
  canvas.noStroke();
  canvas.fill(255,0,0);

  //canvas.ellipse(mouseX, mouseY,50,50);
  canvas.ellipse(storeX*zVal+width/2, -storeY*zVal+height/2,20,20);
  canvas.endDraw();
}
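
// Note: the helper functions below (getIndexColor, drawJoint, drawBone, drawBody)
// are kept from the original library example but are never called in this sketch.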


//use different color for each skeleton tracked
color getIndexColor(int index) {
  color col = color(255);
  if (index == 0)
    col = color(255, 0, 0);
  if (index == 1)
    col = color(0, 255, 0);
  if (index == 2)
    col = color(0, 0, 255);
  if (index == 3)
    col = color(255, 255, 0);
  if (index == 4)
    col = color(0, 255, 255);
  if (index == 5)
    col = color(255, 0, 255);

  return col;
}


void drawJoint(KJoint[] joints, int jointType) {
  strokeWeight(2.0f + joints[jointType].getZ()*8);
  point(joints[jointType].getX(), joints[jointType].getY(), joints[jointType].getZ());
}

void drawBone(KJoint[] joints, int jointType1, int jointType2) {
  strokeWeight(2.0f + joints[jointType1].getZ()*8);
  point(joints[jointType2].getX(), joints[jointType2].getY(), joints[jointType2].getZ());
}

void drawHandState(KJoint joint) {
  handState(joint.getState());
  strokeWeight(5.0f + joint.getZ()*8);
  point(joint.getX(), joint.getY(), joint.getZ());
  storeX = joint.getX();
  storeY = joint.getY();
  //println(joint.getX(), joint.getY());
}


void drawBody(KJoint[] joints) {
  drawBone(joints, KinectPV2.JointType_Head, KinectPV2.JointType_Neck);
  drawBone(joints, KinectPV2.JointType_Neck, KinectPV2.JointType_SpineShoulder);
  drawBone(joints, KinectPV2.JointType_SpineShoulder, KinectPV2.JointType_SpineMid);

  drawBone(joints, KinectPV2.JointType_SpineMid, KinectPV2.JointType_SpineBase);
  drawBone(joints, KinectPV2.JointType_SpineShoulder, KinectPV2.JointType_ShoulderRight);
  drawBone(joints, KinectPV2.JointType_SpineShoulder, KinectPV2.JointType_ShoulderLeft);
  drawBone(joints, KinectPV2.JointType_SpineBase, KinectPV2.JointType_HipRight);
  drawBone(joints, KinectPV2.JointType_SpineBase, KinectPV2.JointType_HipLeft);

  // Right Arm    
  drawBone(joints, KinectPV2.JointType_ShoulderRight, KinectPV2.JointType_ElbowRight);
  drawBone(joints, KinectPV2.JointType_ElbowRight, KinectPV2.JointType_WristRight);
  drawBone(joints, KinectPV2.JointType_WristRight, KinectPV2.JointType_HandRight);
  drawBone(joints, KinectPV2.JointType_HandRight, KinectPV2.JointType_HandTipRight);
  drawBone(joints, KinectPV2.JointType_WristRight, KinectPV2.JointType_ThumbRight);

  // Left Arm
  drawBone(joints, KinectPV2.JointType_ShoulderLeft, KinectPV2.JointType_ElbowLeft);
  drawBone(joints, KinectPV2.JointType_ElbowLeft, KinectPV2.JointType_WristLeft);
  drawBone(joints, KinectPV2.JointType_WristLeft, KinectPV2.JointType_HandLeft);
  drawBone(joints, KinectPV2.JointType_HandLeft, KinectPV2.JointType_HandTipLeft);
  drawBone(joints, KinectPV2.JointType_WristLeft, KinectPV2.JointType_ThumbLeft);

  // Right Leg
  drawBone(joints, KinectPV2.JointType_HipRight, KinectPV2.JointType_KneeRight);
  drawBone(joints, KinectPV2.JointType_KneeRight, KinectPV2.JointType_AnkleRight);
  drawBone(joints, KinectPV2.JointType_AnkleRight, KinectPV2.JointType_FootRight);

  // Left Leg
  drawBone(joints, KinectPV2.JointType_HipLeft, KinectPV2.JointType_KneeLeft);
  drawBone(joints, KinectPV2.JointType_KneeLeft, KinectPV2.JointType_AnkleLeft);
  drawBone(joints, KinectPV2.JointType_AnkleLeft, KinectPV2.JointType_FootLeft);

  drawJoint(joints, KinectPV2.JointType_HandTipLeft);
  drawJoint(joints, KinectPV2.JointType_HandTipRight);
  drawJoint(joints, KinectPV2.JointType_FootLeft);
  drawJoint(joints, KinectPV2.JointType_FootRight);

  drawJoint(joints, KinectPV2.JointType_ThumbLeft);
  drawJoint(joints, KinectPV2.JointType_ThumbRight);

  drawJoint(joints, KinectPV2.JointType_Head);
}
void handState(int handState) {
  switch(handState) {
  case KinectPV2.HandState_Open:
    stroke(0, 255, 0);
    break;
  case KinectPV2.HandState_Closed:
    stroke(255, 0, 0);
    break;
  case KinectPV2.HandState_Lasso:
    stroke(0, 0, 255);
    break;
  case KinectPV2.HandState_NotTracked:
    stroke(100, 100, 100);
    break;
  }
}

List references
Kinect V2 library = http://codigogenerativo.com/code/kinect ... g-library/
Drawing code source = https://forum.processing.org/one/topic/eraser.html
Attachments: Skeleton3d_160402.zip (source code)

qiu0717

Re: Proj 1: Simple Project

Post by qiu0717 » Mon Apr 04, 2016 1:20 pm

Shape of your hand trails
Weihao

Theme
Tracking hands and visualizing their trails is fantastic. It gives the user the feeling of having a magic wand or an invisible pen to draw things on screen. This simple project therefore mainly deals with hand trails, but visualizes them in a different way: it generates random shapes based on the hands' trails.

Related point:
. Paint on the screen using basic forms: lines, rectangles, etc.


Processing Code:


/* MAT 265 Project 1: Simple Project 
 * Author: Weihao Qiu
 * Start code: SimpleOpenNI Hands3d Test
 * Date : 2016/04/04
 * Description: Based on the hand capture result by the start code, I draw 
 * a randomlized form based on the trails of the hand. 
 * --------------------------------------------------------------------------
 * SimpleOpenNI Hands3d Test
 * --------------------------------------------------------------------------
 * Processing Wrapper for the OpenNI/Kinect 2 library
 * http://code.google.com/p/simple-openni
 * --------------------------------------------------------------------------
 * prog:  Max Rheiner / Interaction Design / Zhdk / http://iad.zhdk.ch/
 * date:  12/12/2012 (m/d/y)
 * ----------------------------------------------------------------------------
 * This demos shows how to use the gesture/hand generator.
 * It's not the most reliable yet, a two hands example will follow
 * ----------------------------------------------------------------------------
 */

import java.util.Map;
import java.util.Iterator;

import SimpleOpenNI.*;

SimpleOpenNI context;
int handVecListSize = 40;
Map<Integer, ArrayList<PVector>>  handPathList = new HashMap<Integer, ArrayList<PVector>>();
color[]       userClr = new color[] { 
  color(255, 0, 0, 150), 
  color(0, 255, 0, 150), 
  color(0, 0, 255, 150), 
  color(255, 255, 0, 150), 
  color(255, 0, 255, 150), 
  color(0, 255, 255, 150)
};
void setup()
{
  size(640, 480); 
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!"); 
    exit();
    return;
  }   

  // enable depthMap generation 
  context.enableDepth();

  // disable mirror
  context.setMirror(true);

  // enable hands + gesture generation
  //context.enableGesture();
  context.enableHand();
  context.startGesture(SimpleOpenNI.GESTURE_WAVE);

  // set how smooth the hand capturing should be
  //context.setSmoothingHands(.5);
  blendMode(BLEND);
}

void draw()
{
  // update the cam
  context.update();
  //image(context.depthImage(), 0, 0); // disabled: the white background below would cover it anyway
  background(255);  
  // draw the tracked hands
  if (handPathList.size() > 0)  
  {    
    Iterator itr = handPathList.entrySet().iterator();     
    while (itr.hasNext ())
    {
      Map.Entry mapEntry = (Map.Entry)itr.next(); 
      int handId =  (Integer)mapEntry.getKey();
      ArrayList<PVector> vecList = (ArrayList<PVector>)mapEntry.getValue();
      PVector p;
      PVector p2d = new PVector();

      stroke(userClr[ (handId - 1) % userClr.length ]);
      noFill(); 
      strokeWeight(1);        
      Iterator itrVec = vecList.iterator(); 


      if (vecList.size() >= handVecListSize-1) { 
        beginShape(TRIANGLE_STRIP);
        IntList vecIndex = new IntList();
        for (int i = 0; i<handVecListSize-2; i++) {
          vecIndex.append(i);
        }
        vecIndex.shuffle();

        for (int i = 0; i < vecIndex.size(); i++) { // bound by the index list to avoid out-of-range access
          p = vecList.get(vecIndex.get(i));
          context.convertRealWorldToProjective(p, p2d);
          stroke(userClr[ (vecIndex.get(i)) % userClr.length]);
          strokeWeight(0.1*(handVecListSize-i));
          vertex(p2d.x, p2d.y);
        }

        endShape();
      }


      stroke(userClr[ (handId - 1) % userClr.length ]);
      strokeWeight(4);
      p = vecList.get(0);
      context.convertRealWorldToProjective(p, p2d);
      point(p2d.x, p2d.y);
    }
  }
}


// -----------------------------------------------------------------
// hand events

void onNewHand(SimpleOpenNI curContext, int handId, PVector pos)
{
  println("onNewHand - handId: " + handId + ", pos: " + pos);

  ArrayList<PVector> vecList = new ArrayList<PVector>();
  vecList.add(pos);

  handPathList.put(handId, vecList);
}

void onTrackedHand(SimpleOpenNI curContext, int handId, PVector pos)
{
  //println("onTrackedHand - handId: " + handId + ", pos: " + pos );

  ArrayList<PVector> vecList = handPathList.get(handId);
  if (vecList != null)
  {
    vecList.add(0, pos);
    if (vecList.size() >= handVecListSize)
      // remove the last point 
      vecList.remove(vecList.size()-1);
  }
}

void onLostHand(SimpleOpenNI curContext, int handId)
{
  println("onLostHand - handId: " + handId);
  handPathList.remove(handId);
}

// -----------------------------------------------------------------
// gesture events

void onCompletedGesture(SimpleOpenNI curContext, int gestureType, PVector pos)
{
  println("onCompletedGesture - gestureType: " + gestureType + ", pos: " + pos);

  int handId = context.startTrackingHand(pos);
  println("hand stracked: " + handId);
}

// -----------------------------------------------------------------
// Keyboard event
void keyPressed()
{

  switch(key)
  {
  case ' ':
    context.setMirror(!context.mirror());
    break;
  case '1':
    context.setMirror(true);
    break;
  case '2':
    context.setMirror(false);
    break;
  case 's':
    saveFrame("####.jpg");
    break;
  }  
}

 
Screenshots:
0288.jpg
0349.jpg
0653.jpg
0907.jpg

jing_yan

Re: Proj 1: Simple Project

Post by jing_yan » Mon Apr 04, 2016 8:17 pm

Among of green stiff old bright broken branch
come white sweet May again
: : Jing Yan

Concept

It is a one-sentence poem by William Carlos Williams. Since there is only one sentence, every single word is important and stands out as an individual. Combining these single words, we get a whole picture of the awakening world of spring. For this project, I was first interested in the fragmentation of a poem: what is the relationship among those words, and what is the relationship between each single word and the whole poem?

Besides that, I am interested in the idea of visualizing a poem with its own characters, to create an image of it. It becomes more engaging when interaction adds randomness, so I intend this project as an installation in a public space where many people walk in front of it. Motion tracking creates a feeling of words stacking up along the trajectory of the users: the movement of the words is controlled by the user in front, and the size of each character is related to the distance between the user and the screen.

I also added a very realistic walking sound that pans according to the movement of the user, in contrast with the somewhat abstract image, and to build an atmosphere of early spring. A small, crisp sound is also triggered every time a word comes out.

Screenshots:
4.png
5.png
6.png
7.png
8.png
9.png
10.png
References
Code expanded on the "advanced drawing" examples from Greg Borenstein, Making Things See (2012).

Code:

/* 2016-4-2 (Processing 3)
 
 M265 Optical-Computational Processes: Simple Project 
 
 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 :::::::: W H I T E / S W E E T / M A Y / A G A I N::::::::::::::::
 ::::::::::::::::::::::::::::::::::::::::::: code: Jing Yan :::::::
 ::::::::::::::::::: theuniqueeye@gmail.com :::::::::::::::::::::::
 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 ::::::: [VERSION 3] ::::::::::::::::::::::::::::::::::::::::::::*/
 
 
// version 3: 
// 1. Add black background to words. Red for MAY.
// 2. Use for loop and function to rebuild the redundant structure.
// 3. Further: Try to do some rotate. 

// reference: Greg.Borenstein <Making Things See> (2012.01)

import SimpleOpenNI.*;
SimpleOpenNI kinect;

import processing.sound.*;
SoundFile walk, bell1, bell2, bell3, bell4, bell5;


String[] words= {
  "A M O N G", "O F", "G R E E N", "S T I F F", "O L D", "B R I G H T", "B R O K E N", "B R A N C H", "C O M E", "W H I T E", "S W E E T", "M A Y", "A G A I N"
};
PFont font,font2;
float wordX, wordY, rectW, rectH;
int transparency=127;
float wordScale;
float[] stayX = new float[13];
float[] stayY = new float[13];
float[] stayScale = new float[13];
float[] stayRectW = new float[13];
float[] stayRectH = new float[13];

int minDistance=200;
int maxDistance=3000;
int closestValue, closestX, closestY;
float lastX;
float lastY;

int totalPhase = 40*18;
int timestamp = 40;
float loudness = 0.1;
boolean titleOn=true;
//int counter=0;


void setup() {
  size(640, 480);
  frameRate(20);
  background(255); 

  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();

  font = loadFont("KohinoorDevanagari-Book-48.vlw");
  font2 = loadFont("OratorStd-48.vlw");
  textAlign(CENTER, CENTER);
  println("number of words: " + words.length);

  // Load a soundfile from the /data folder of the sketch and play it back
  walk = new SoundFile(this, "walk_mono.mp3");
  walk.play();
  walk.loop();
  bell1 = new SoundFile(this, "bell1.wav");
  bell2 = new SoundFile(this, "bell2.wav");
  bell3 = new SoundFile(this, "bell3.wav");
  bell4 = new SoundFile(this, "bell4.wav");
  bell5 = new SoundFile(this, "bell5.wav");
  bell1.amp(0.5);
  bell2.amp(0.5);
  bell3.amp(0.5);
  bell4.amp(0.5);
  bell5.amp(0.5);
}


void draw() {
  closestValue = 8000;

  kinect.update();
  int[] depthValues = kinect.depthMap();

  // find out the closest point on screen
  for (int y=0; y<480; y++) {
    for (int x=0; x<640; x++) {
      int reversedX = 640-x-1;
      int i = reversedX+y*640;
      int currentDepthValue = depthValues[i];

      if (currentDepthValue>minDistance && currentDepthValue < maxDistance && currentDepthValue<closestValue) {
        closestValue = currentDepthValue;
        closestX=x;
        closestY=y;
      }
    }
  }
  // make the movement less active and more stable
  float interpolatedX = lerp(lastX, closestX, 0.05f);   
  float interpolatedY = lerp(lastY, closestY, 0.05f);

  // position of words
  wordX = interpolatedX; 
  wordY = interpolatedY;

  // scale of words, boxs; loudness of sounds
  if (closestValue<maxDistance&&closestValue>minDistance) { // avoid bug at edges
    wordScale = map(closestValue, minDistance, maxDistance, 50, 6);
    loudness = map(closestValue, minDistance, maxDistance, 0.1, 0.9);
    rectW = wordScale * 5 +10;
    rectH = wordScale * 1;
  }

  background(255); // refresh canvas
  textFont(font, wordScale);
  noStroke();
  //image(kinect.depthImage(), 0, 0); // see the real scene for examine
  //ellipse(wordX, wordY, 25, 25); // see the actual pick up spot for examine

    // stereolize the sound according to the movement
  walk.amp(loudness);
  walk.pan(map(interpolatedX, 0, width, -1.0, 1.0)); // pan left-to-right with the user's horizontal position

 

  // Using frameCount to do animation
  if (frameCount % totalPhase <timestamp) {
    transparency = 127; 
    fill(0);
    rect(wordX-rectW/2, wordY-rectH/2, rectW, rectH);

    fill(255);
    text(words[0], wordX, wordY);

    stayX[0]=wordX;
    stayY[0]=wordY;
    stayScale[0]=wordScale;
    stayRectW[0]=rectW;
    stayRectH[0]=rectH;
    if (frameCount%totalPhase == 1) bell1.play();
  }

  if (frameCount % totalPhase >=timestamp && frameCount % totalPhase <timestamp*2) {
    drawTextAndBox(1);
    if (frameCount%totalPhase == 41) bell2.play();
  }

  if (frameCount % totalPhase >=timestamp*2 && frameCount % totalPhase <timestamp*3) {
    drawTextAndBox(2);
    if (frameCount%totalPhase == 81) bell3.play();
  }

  if (frameCount % totalPhase >=timestamp*3 && frameCount % totalPhase <timestamp*4) {
    drawTextAndBox(3);
    if (frameCount%totalPhase == 121) bell4.play();
  }

  if (frameCount % totalPhase >=timestamp*4 && frameCount % totalPhase <timestamp*5) {
    drawTextAndBox(4);
    if (frameCount%totalPhase == 161) bell5.play();
  }

  if (frameCount % totalPhase >=timestamp*5 && frameCount % totalPhase <timestamp*6) {
    drawTextAndBox(5);
    if (frameCount%totalPhase == 201) bell1.play();
  }

  if (frameCount % totalPhase >=timestamp*6 && frameCount % totalPhase <timestamp*7) {
    drawTextAndBox(6);
    if (frameCount%totalPhase == 241) bell2.play();
  }

  if (frameCount % totalPhase >=timestamp*7 && frameCount % totalPhase <timestamp*8) {
    drawTextAndBox(7);
    if (frameCount%totalPhase == 281) bell3.play();
  }

  if (frameCount % totalPhase >=timestamp*8 && frameCount % totalPhase <timestamp*9) {
    drawTextAndBox(8);
    if (frameCount%totalPhase == 321) bell4.play();
  }
  if (frameCount % totalPhase >=timestamp*9 && frameCount % totalPhase <timestamp*10) {
    drawTextAndBox(9);
    if (frameCount%totalPhase == 361) bell5.play();
  }

  if (frameCount % totalPhase >=timestamp*10 && frameCount % totalPhase <timestamp*11) {
    drawTextAndBox(10);
    if (frameCount%totalPhase == 401) bell1.play();
  }

  if (frameCount % totalPhase >=timestamp*11 && frameCount % totalPhase <timestamp*12) {

    // want to make the MAY red and exceptional

    // draw all the previous texts and boxs
    for (int i=0; i<11; i++) {
      fill(0, 127);
      rect(stayX[i]-stayRectW[i]/2, stayY[i]-stayRectH[i]/2, stayRectW[i], stayRectH[i]);
      fill(255);
      textFont(font, stayScale[i]);
      text(words[i], stayX[i], stayY[i]);
    }

    // draw the current moving text and it's box
    fill(220, 20, 60); // red [220,20,60]
    rect(wordX-rectW/2, wordY-rectH/2, rectW, rectH);
    fill(255);
    textFont(font, wordScale);
    text(words[11], wordX, wordY);

    // store the current data into array
    stayX[11]=wordX;
    stayY[11]=wordY;
    stayScale[11]=wordScale;
    stayRectW[11]=rectW;
    stayRectH[11]=rectH;
    if (frameCount%totalPhase == 441) bell2.play();
  }

  if (frameCount % totalPhase >=timestamp*12 && frameCount % totalPhase <timestamp*13) {
    // want to make the MAY red and exceptional

    // draw all the previous texts and boxs
    for (int i=0; i<11; i++) {
      fill(0, 127);
      rect(stayX[i]-stayRectW[i]/2, stayY[i]-stayRectH[i]/2, stayRectW[i], stayRectH[i]);
      fill(255);
      textFont(font, stayScale[i]);
      text(words[i], stayX[i], stayY[i]);
    }

    // red MAY
    fill(220, 20, 60, 127); // red [220,20,60]
    rect(stayX[11]-stayRectW[11]/2, stayY[11]-stayRectH[11]/2, stayRectW[11], stayRectH[11]);
    fill(255);
    textFont(font, stayScale[11]);
    text(words[11], stayX[11], stayY[11]);

    // draw the current moving text and it's box
    fill(0);
    rect(wordX-rectW/2, wordY-rectH/2, rectW, rectH);
    fill(255);
    textFont(font, wordScale);
    text(words[12], wordX, wordY);

    // store the current data into array
    stayX[12]=wordX;
    stayY[12]=wordY;
    stayScale[12]=wordScale;
    stayRectW[12]=rectW;
    stayRectH[12]=rectH;
    if (frameCount%totalPhase == 481) bell3.play();
  }

  if (frameCount % totalPhase >=timestamp*13 && frameCount % totalPhase <timestamp*17) {
    //println("i = "+frameCount % totalPhase+"   transparency  = "+transparency);
    if (transparency>3)
      transparency = transparency-3;

    for (int i=0; i<11; i++) {
      fill(0, transparency);
      rect(stayX[i]-stayRectW[i]/2, stayY[i]-stayRectH[i]/2, stayRectW[i], stayRectH[i]);
      fill(255, transparency);
      textFont(font, stayScale[i]);
      text(words[i], stayX[i], stayY[i]);
    }

    // red MAY
    fill(220, 20, 60, transparency); // red [220,20,60]
    rect(stayX[11]-stayRectW[11]/2, stayY[11]-stayRectH[11]/2, stayRectW[11], stayRectH[11]);
    fill(255, transparency);
    textFont(font, stayScale[11]);
    text(words[11], stayX[11], stayY[11]);

    // last word
    fill(0, transparency);
    rect(stayX[12]-stayRectW[12]/2, stayY[12]-stayRectH[12]/2, stayRectW[12], stayRectH[12]);
    fill(255, transparency);
    textFont(font, stayScale[12]);
    text(words[12], stayX[12], stayY[12]);
  }
  
  if (frameCount % totalPhase >=timestamp*17 && frameCount % totalPhase <timestamp*18) {
    drawTitle();
  }

  lastX = interpolatedX;
  lastY = interpolatedY;
  
  // saveFrame to make an animation
  //saveFrame("poem-######.png");
}
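
The posted excerpt ends before the two helper functions it calls. A minimal sketch of what they might look like: drawTextAndBox() is reconstructed from the repeated pattern in the MAY and AGAIN phases above, while drawTitle()'s body (showing the poem as a title card, using the otherwise-unused font2) is my assumption.

Code:

void drawTextAndBox(int n) {
  // draw all previously placed words, fixed where they were left
  for (int i = 0; i < n; i++) {
    fill(0, 127);
    rect(stayX[i]-stayRectW[i]/2, stayY[i]-stayRectH[i]/2, stayRectW[i], stayRectH[i]);
    fill(255);
    textFont(font, stayScale[i]);
    text(words[i], stayX[i], stayY[i]);
  }

  // draw the current word moving with the user
  fill(0);
  rect(wordX-rectW/2, wordY-rectH/2, rectW, rectH);
  fill(255);
  textFont(font, wordScale);
  text(words[n], wordX, wordY);

  // store the current data so the word stays in later phases
  stayX[n] = wordX;
  stayY[n] = wordY;
  stayScale[n] = wordScale;
  stayRectW[n] = rectW;
  stayRectH[n] = rectH;
}

void drawTitle() {
  // assumed: show the poem as a title card between cycles
  fill(0);
  textFont(font2, 20);
  text("Among of green stiff old bright broken branch", width/2, height/2 - 20);
  text("come white sweet May again", width/2, height/2 + 20);
}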


junxiangyao

Re: Proj 1: Simple Project

Post by junxiangyao » Tue Apr 05, 2016 10:03 am

Description
In this project, I tracked the closest point (usually my hand) detected by the Kinect to pick up letters distributed randomly on the canvas. During pickup, I used dist() to calculate the distance between a letter's current position and my hand; if the distance is small enough, the letter is picked up. Since I had already drawn short horizontal lines on the canvas to mark the letters' destinations, the next goal is simply to move the picked-up letter to the right position. Again, I used dist() to check whether the letter's current position is close enough to its goal position. If the distance falls below a threshold I defined, the letter is fixed in its proper place, and no matter how I move my hand afterwards, the fixed letter cannot be picked up again.
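
Distilled from the code below, these are the two dist() checks at the heart of the interaction (handX/handY stand in for the interpolated hand position used in the full sketch):

Code:

// pick up: the hand comes close enough to a free letter at (x, y)
if (dist(handX, handY, x, y) < 8) {
  show = true;            // the letter now follows the hand
}

// snap: the carried letter reaches its marked home position
if (show && dist(handX, handY, homex, homey) < 8) {
  x = homex;
  y = homey;
  show = false;           // fixed in place; it cannot be picked up again
}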

Screenshots
屏幕快照 2016-04-03 下午11.15.45.png
屏幕快照 2016-04-03 下午11.16.10.png
屏幕快照 2016-04-03 下午11.17.22.png
屏幕快照 2016-04-03 下午11.20.29.png
Code


/********************************************************
 * MAT265 PROJ.1   Simple Project                       *
 *                                                      *
 * Junxiang Yao                                         * 
 *                                                      *
 *                                                      *
 *                                                      *
 * Press G to show / hide the grid system.              *
 *                                                      *
 * Press D to show / hide the depth image.              *
 *                                                      *
 * Press A to turn on / off the auto mode.              *
 *                                                      *
 * Press R to restart.                                  *
 *                                                      *
 ********************************************************/


import SimpleOpenNI.*;
SimpleOpenNI kinect;
int closestX, closestY;
float lastX, lastY;
int closestValue;

PFont f;
String message = "KINECT";
Letter [] letters;

boolean grid = false;
boolean auto = false;
//boolean reStart = false;
boolean depthImage = false;
boolean [] move = new boolean[message.length()];


void setup() {
  size(640, 480, P3D);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  f = createFont("Arial", 40);
  textFont(f);
  textAlign(LEFT);
  letters = new Letter[message.length()];
  int w = 175;
  for (int i = 0; i < message.length (); i++) {
    letters[i] = new Letter(w, 180, message.charAt(i));
    w += textWidth(message.charAt(i))+30;
  }
  for (int i = 0; i < message.length (); i++) {
    move[i] = false;
  }
}

void draw() {
  background(0);
  closestValue = 8000;
  kinect.update();
  int[] depthValues = kinect.depthMap();
  for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
      int reversedX = 640 - x - 1;
      int index = reversedX + y * 640;
      int currentDepthValue = depthValues[index];
      if (currentDepthValue > 500 && currentDepthValue < 1525 && 
        currentDepthValue < closestValue) {
        closestValue = currentDepthValue;
        closestX = x;
        closestY = y;
      }
    }
  }
  float interpolatedX = lerp(lastX, closestX, 0.8);
  float interpolatedY = lerp(lastY, closestY, 0.8);
  lastX = interpolatedX;
  lastY = interpolatedY;

  if (depthImage) {
    PImage depthImage = kinect.depthImage();
    depthImage.loadPixels();
    loadPixels();
    for (int x = 0; x < 640; x++) {
      for (int y = 0; y < 480; y++) {
        int loc = x + y * 640;
        pixels[640-1-x+y*640]=depthImage.pixels[loc];
      }
    }
    updatePixels();
  }

  //  image(kinect.depthImage(), 0, 0);
  fill(255, 0, 0);
  ellipse(interpolatedX, interpolatedY, 10, 10);

  for (int i = 0; i < message.length (); i++) {
    pushMatrix();
    letters[i].home(interpolatedX, interpolatedY, move, i);
    letters[i].display(interpolatedX, interpolatedY);
    popMatrix();
  }

  int w = 175;
  for (int i = 0; i < message.length (); i++) {
    noStroke();
    fill(255, 200);
    rect(w-5, 183, textWidth(message.charAt(i))+10, 1);
    w += textWidth(message.charAt(i))+30;
  }
  if (grid) {
    stroke(100);
    line(0, (1-0.618)*480, 640, (1-0.618)*480);
  }
}


class Letter {
  float x, y, homex, homey;
  char letter;
  boolean show = false;
  Letter(float x_, float y_, char letter_) {
    homex =  x_;
    homey =  y_;
    x = random(20, width-40);
    y = random(80, height-60);
    letter = letter_;
  }



  void home(float ix, float iy, boolean[] move, int ind) {
    if (x != homex && y != homey) {
      if (move[0]==false&&move[1]==false&&move[2]==false&&move[3]==false&&
        move[4]==false&&move[5]==false) {
        if (dist(ix, iy, x, y)<8) {
          show = true;
        }
      }
    }
  }


  void display(float ix, float iy) {
    if (show == true) {
      x = ix;
      y = iy;
    }
    if (!auto) {
      if (show == true && dist(ix, iy, homex, homey)<8) {
        x = homex;
        y = homey;
        show = false;
      }
    }
    if (auto) {
      if (dist(ix, iy, homex, homey)<8) {
        x = homex;
        y = homey;
        show = false;
      }
    }
    fill(255);
    pushMatrix();
    translate(x, y);
    text(letter, 0, 0);
    popMatrix();
  }
}

void keyPressed() {
  if (key == 'g' || key == 'G') {
    grid = !grid;
  }
  if (key == 'd' || key == 'D') {
    depthImage = !depthImage;
  }
  if (key == 'a' || key == 'A') {
    auto = !auto;
  }
  if (key == 'r'|| key == 'R') {
    int w = 175;
    for (int i = 0; i < message.length (); i++) {
      letters[i] = new Letter(w, 180, message.charAt(i));
      w += textWidth(message.charAt(i))+30;
    }
    for (int i = 0; i < message.length (); i++) {
      move[i] = false;
    }
  }
}
References
Making Things See, Borenstein, Chapter 2
Project 7: Minority Report Photos, P96
Multiple Image & Scale, p100

lliu

Re: Proj 1: Simple Project

Post by lliu » Tue Apr 05, 2016 12:45 pm

SELF PORTRAIT

Abstract Point Clouds of People
__ ___ __ ___ __ ___ __ ___ __ ___ __ ___ __ ___ __ ___ __ ___ __ ___ __ ___ __ ___

Concept:

For this project, I use the point cloud to draw a figure of a person.
I eliminate stray points and the background by keeping only points within a certain depth range.
In color mode, I use SimpleOpenNI's alternativeViewPointDepthToImage() function so that every pixel of the RGB image matches the corresponding depth point. I also replace the points with boxes to give the audience a more abstract feeling of the portrait.

Operation:
Press "C" to switch Black-white Mode to Color Mode
Press "R" to Rotate whole Figure

Processing Code:


import processing.opengl.*;
import SimpleOpenNI.*;
SimpleOpenNI kinect;
boolean Rotate = false;
boolean Color = false;
float rotation=0;

void setup() {
  size(1024, 768, OPENGL);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
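  // align the RGB image with the depth data so rgbImage.pixels[i] matches the depth point at the same index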
  kinect.alternativeViewPointDepthToImage();
}
void draw() {
  background(0);
  kinect.update();
  // prepare to draw centered in x-y
  // pull it 1000 pixels closer on z

  PImage rgbImage= kinect.rgbImage();
  translate(width/2, height/2, -1000); 
  rotateX(radians(180)); // flip y-axis from "realWorld" 
  stroke(255); 
  if (Rotate) {
    translate(0, 0, 1000);
    rotateY(radians(rotation));
    rotation++;
  }

  // get the depth data as 3D points
  PVector[] depthPoints = kinect.depthMapRealWorld(); 
  for (int i = 0; i < depthPoints.length; i+=10) {
    // get the current point from the point array
    PVector currentPoint = depthPoints[i];
    // draw the current point
    //    point(currentPoint.x, currentPoint.y, currentPoint.z);
    if (currentPoint.z > 610 && currentPoint.z < 1525) {
      pushMatrix();
      translate(currentPoint.x, currentPoint.y, currentPoint.z);
      if (Color) { 
        stroke(rgbImage.pixels[i]);
      } else {
        stroke(map(currentPoint.z, 610, 1525, 255, 0));
      }
      box(map(currentPoint.z, 610, 1525, 20, 2));
      popMatrix();
    }
  }
}

void keyPressed() {
  if (key == 'r' || key== 'R') {
    Rotate = !Rotate;
  }
  if (key == 'c' || key== 'C') {
    Color = !Color;
  }
  if (key == 's' || key == 'S') {  

    // Save the current frame as demo3.png
    saveFrame("demo3.png");
  }
}
Results:
In black-and-white mode, the closer you stand to the camera, the bigger and brighter the boxes you'll see.

Black-White Version
demo1.png
Color Version
demo2.png
Rotate Mode
demo3.png
demo4.png

Reference:
"Making Things See" by Greg Borenstein

xindi

Simple Project: the Chasing Game

Post by xindi » Tue Apr 05, 2016 7:59 pm

This is a really, really simple game where you move your hand to chase a target. It may even help you get some arm exercise.
Screen Shot 2016-04-05 at 8.56.05 PM.png
The smaller dot represents your hand and the larger dot is your target. Move your hand to reach the target.
Your goal is to get the highest score within the given time, shown as a countdown in the background. The countdown starts from 100 (50 seconds).

Here is the Processing code for this simple game:

import SimpleOpenNI.*;
SimpleOpenNI kinect;

int closestValue; 
int closestX;
int closestY;
float targetX = random(640);
float targetY = random(480);

int score = 0; 
int time = 1000;

boolean Timer = false;

void setup()
{
  size(640, 495);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
}
void draw() {
  closestValue = 8000;

  kinect.update();

  int[] depthValues = kinect.depthMap();
  for (int y=0; y<480; y++) {
    for (int x=0; x<640; x++) {


      int reversedX = 640-x-1;
      int i = reversedX+y*640;
      int currentDepthValues = depthValues[i];
      // distance limit is 2f -- 5f
      if (currentDepthValues > 610 && currentDepthValues < 1525 && currentDepthValues < closestValue) {
        closestValue = currentDepthValues;
        closestX = x;
        closestY = y;
      }
    }
  }
  //PImage depthImage = kinect.depthImage();
  //  PImage rgbImage = kinect.rgbImage();

  //image(kinect.depthImage(), 0, 0);

  background(0);
  //image(kinect.rgbImage(), 640, 0); // drawn outside the 640x495 canvas; disabled


  fill(255, 255, 255);
  textSize(14);
  text("MOVE YOUR HAND TO REACH THE TARGET", width/6, height/8);
  fill(255, 0, 0);
  ellipse(closestX, closestY, 25, 25);

  ellipse(targetX, targetY, 50, 50);
  if (closestX > targetX-50 && closestX < targetX+50 && closestY > targetY-50 && closestY < targetY+50) {
    //if((closestX-targetX)*(closestX-targetX)<=75 && abs(closestY-targetY)<=75 ){  
    targetX = random(640);
    targetY = random(480);
    ellipse(targetX, targetY, 50, 50);
    score++; 
    textSize(40);
    text("WELL DONE", 40, height/2);
  }
  counter();
  rect(0, 480, width, 15);
  textSize(20);
  text("start", width/2-10, 493);
  timer();
  // if (closestY>=460) {
  //   if (!Timer) {
  //     Timer = true;
  //   } else if (Timer) {
  //     Timer = false;
  //   }
  //   if (Timer)timer();
  // }
}
void counter() {
  fill(255, 255, 255, 80);
  textSize(50);
  text("Your Score:" + score, width/5, height-40);
}

void timer() {
  time--;
  textSize(300);
  text(time/10, width/6, height/1.5);
}

zhenyuyang

Re: Proj 1: Simple Project

Post by zhenyuyang » Tue Apr 05, 2016 9:37 pm

White Mist - Zhenyu Yang


Concept
I was inspired by several film posters that show blurred figures. Details like facial expressions are removed; the remaining information (such as the outline of a person and his or her movement) leaves viewers to imagine the missing details. The following is one of the posters that inspired me.

poster2.jpg


Description
This project is based on the example "ex06_closest_pixel" provided by the professor. Depth data is extracted from a Kinect device in real time and passed through a filter so that only objects within a certain depth range are displayed. At the same time, the colors of the extracted objects are removed to form outlines. The outlines are then blurred to create a sense of mist, with the blurriness mapped from the distance between the user and the Kinect. To create a better experience, sound effects are added to the environment. Three kinds of sounds are used in this project: a main background sound, played in a loop to create a narrative environment; environmental sounds, played at random to enhance the sense of place; and heartbeat sounds, whose frequency is mapped from the distance between the user and the Kinect (the shorter the distance, the more intense the heartbeat).


Drawback
A problem was noticed by a classmate during the presentation: The blur effect is applied on the entire screen so objects at different depth are not well differentiated.


Reference
1."ex06_closest_pixel" example.
2.Super Fast Blur Filter (Based on Mario Klingemann's work)
3.Greg.Borenstein <Making Things See> (2012.01)



Screenshots
Screenshots (Distance from far to near):
1.png
2.png
3.png
4.png
5.png
6.png
7.png




Code


/* M265 Optical-Computational Processes: Simple Project - whitemist
* Author: Zhenyu Yang
* Date : 2016-4-1
*/

import ddf.minim.spi.*;
import ddf.minim.signals.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;


Minim minim;
AudioPlayer player;
AudioInput input;


import SimpleOpenNI.*;
SimpleOpenNI  kinect;
PShader blur;

int closestValue;
int closestX;
int closestY;

int canvasWidth  = 640;
int canvasHeight = 480;
int kinectWidth  = 640;
int kinectHeight = 480;

int heartBeatPeriod1 = 80;
int heartBeatPeriod2 = 40;
int heartBeatPeriod3 = 20;
int heartBeatPeriod4 = 15;
int bgmPeriod = 500;
int tempbgmPeriod = 0;

double edgeTransistionRate = 0.15; //transition on the max/min distance edges

PImage maskImage;

// DISTANCE RANGE IN MILLIMETERS (FOR THE FILTER)
int minDistance  = 500;  // 50cm
int maxDistance  = 1500; // 1.5m

int value = 30;
void setup()
{
  
  //Initialization
  minim = new Minim(this);
  size(kinectWidth, kinectHeight, P2D);
  maskImage  = createImage(kinectWidth, kinectHeight, RGB);
  blur = loadShader("blur.glsl");
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();  
  kinect.enableRGB();
  tempbgmPeriod = bgmPeriod+(int)(Math.random() * 200);  //Initializing temporary backgroundsound period
  
  //Play main background sound in a loop
  AudioPlayer playerloop = minim.loadFile("loop.mp3"); 
  playerloop.play();
  playerloop.loop();
  playerloop.setGain(-4.0); //Adjust volume
}

void draw()
{
  background(255);//Set background as white color.

  //Randomly play temporary background sound effects.
  if(frameCount%tempbgmPeriod==0){
     thread("backgroundSounds");
     tempbgmPeriod = bgmPeriod+(int)(Math.random() * 200);
     println("new tempbgmPeriod = "+tempbgmPeriod);
     }
  closestValue = 8000;
  kinect.update();

  int[] depthValues = kinect.depthMap();

  maskImage.loadPixels();

  //Extract the figure of a user from the depth map. Thresholds are applied.
  // for each row in the depthMap
  for (int y = 0; y < 480; y++) {
    // look at each pixel in the row
    for (int x = 0; x < 640; x++) {
      // pull out the corresponding value from the depth array
      int i = x + y * 640;
      int currentDepthValue = depthValues[i];
      // if that pixel is the closest one we've seen so far
      if (currentDepthValue > 0 && currentDepthValue < closestValue) {
        // save its value
        closestValue = currentDepthValue;
        // and save its position (both X and Y coordinates)
        closestX = x;
        closestY = y;
      }
      if (depthValues[i] > minDistance && depthValues[i] < minDistance*(1+edgeTransistionRate)){
        // IN RANGE: WHITE PIXEL
        maskImage.pixels[i] = color(255-(int)(255*(depthValues[i]-minDistance)/(minDistance*edgeTransistionRate)));
      }
      else if (depthValues[i] < maxDistance*(1-edgeTransistionRate) && depthValues[i] >= minDistance*(1+edgeTransistionRate)){
        maskImage.pixels[i] = color(0);
      }
      else if (depthValues[i] >= maxDistance*(1-edgeTransistionRate) && depthValues[i] < maxDistance){
        maskImage.pixels[i] = color(255-(int)(255*(maxDistance - depthValues[i])/(maxDistance*edgeTransistionRate)));
      }
      else
      maskImage.pixels[i] = color(255);
    }
  }
  maskImage.updatePixels();
  image(maskImage, 0, 0);
  distance2BlurWithFastSpeed(closestValue); //Processing the maskImage
  thread("heartBeat");  //Open a new thread to play heart beat sounds
}

//Image blur processing
void distance2BlurOnGPU(int distance) {
  //println("closestValue = "+closestValue);

  if (distance<=maxDistance&&distance>minDistance)
    multiBlur((distance-minDistance)/20);
  else if (distance<=minDistance)
    multiBlur(0);
  else
    multiBlur((maxDistance-minDistance)/20);
}

void distance2BlurWithFastSpeed(int distance) {
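  // superFastBlur() is Mario Klingemann's Super Fast Blur filter (see References); its definition is not included in this excerpt.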
  //println("closestValue = "+closestValue);

  if (distance<=maxDistance&&distance>minDistance)
    superFastBlur((distance-minDistance)/25);
  else if (distance<=minDistance)
    superFastBlur(0);
  else
    superFastBlur((maxDistance-minDistance)/25);
}


void multiBlur(int n) { //a=src, b=dest img
  for (int i = 0; i<n; i++)
    filter(blur);
}



//Sound effects
void heartBeat() {
int distance = closestValue;
  if (minDistance<distance&&distance<=minDistance+(maxDistance-minDistance)/4){
     if(frameCount%heartBeatPeriod4==0){
     player = minim.loadFile("heart3.wav");
     player.play();
     println("heart3.wav");
     }
  }
  else if (minDistance+(maxDistance-minDistance)/4<distance&&distance<=minDistance+((maxDistance-minDistance)*2)/4){
     if(frameCount%heartBeatPeriod3==0){
     player = minim.loadFile("heart3.wav");
     player.play();
     println("heart3.wav");
     }
  }
  else if (minDistance+((maxDistance-minDistance)*2)/4<distance&&distance<=minDistance+((maxDistance-minDistance)*3)/4){
     if(frameCount%heartBeatPeriod2==0){
     player = minim.loadFile("heart2.wav");
     player.play();
          println("heart2.wav");
     }
  }
  else if (minDistance+((maxDistance-minDistance)*3)/4<distance&&distance<=maxDistance){
     if(frameCount%heartBeatPeriod1==0){
     player = minim.loadFile("heart1.wav");
     player.play();
     player.setGain(+6.0); //volume
     println("heart1.wav");
     }
  }
}

void backgroundSounds(){
  player = minim.loadFile("bgm"+(1+(int)(Math.random() * 3))+".mp3");
  
  player.play();
  player.setGain(-10.0); //volume
  println("backgroundSounds");
  
}


//Keyboard interation
void keyPressed() {
  if (key == 's') {
    value ++;
    println("s");
    println("value = "+value);
  } else if (key == 'd') {
    if(value>0)
    value --;
    println("value = "+value);
    println("d");
  }
}


//Mouse interaction
void mousePressed() {
  color c = get(mouseX, mouseY);
  println("r: " + red(c) + " g: " + green(c) + " b: " + blue(c));
}

YouTube links:
https://youtu.be/Hedig9Wzjdo
https://youtu.be/NO3hf38oHHY
Attachments: whitemist_Zhenyu_Yang.zip

davidaleman

Re: Proj 1: Simple Project

Post by davidaleman » Tue Apr 05, 2016 11:03 pm

Drawing is one of my favorite mediums, so my idea was to combine my body with the Kinect and create an interactive drawing app driven by my hand.

This program detects the closest thing to the Kinect's IR camera, so the drawing tool can be your hand, a piece of furniture, or any other object.

I added another layer of interactivity with key presses: while drawing, you can hit the keys R, G, B, Y, O, and P to change the stroke color to red, green, blue, yellow, orange, or purple, respectively.

If you hit the key S, drawing stops so you can move your hand somewhere else, then hit a color key to begin drawing again with the color you choose.

Code:

import SimpleOpenNI.*;
SimpleOpenNI kinect;

int closestValue; 
int closestX;
int closestY;

//declare global variables for the 
//previous x and y coordinates
//int previousX;
//int previousY;
float lastX;
float lastY;

boolean colorR = false;
boolean colorG = false;
boolean colorB = false;
boolean colorY = false;
boolean colorO = false;
boolean colorP = false;
boolean colorNone = false;

void setup(){

  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  
  //start with a black background
  background(0);
}

void draw(){
  closestValue = 8000; 
  kinect.update();
  
  int[] depthValues = kinect.depthMap();
  
   // for each row in the depth image
  for(int y = 0; y < 480; y++){ 
  // look at each pixel in the row
    for(int x = 0; x < 640; x++){
      
     //reverse x for mirror effect
      int reversedX = 640-x-1;
     // pull out the corresponding value from the depth array using reversedX
      int i = reversedX + y * 640; 
      int currentDepthValue = depthValues[i];
      
      //closest value within range 610 or 2 ft and 1525 or 5ft
      if(currentDepthValue > 610 && currentDepthValue < 1525 
          && currentDepthValue < closestValue){ 
        // save its value
        closestValue = currentDepthValue;
        // and save its position (both X and Y coordinates)
        closestX = x;
        closestY = y;
      }
    }
  }
  
  //linear interpolation, smooth transition between last point and new closest point
  float interpolatedX = lerp(lastX, closestX, 0.3f);
  float interpolatedY = lerp(lastY, closestY, 0.3f);
  
  //draw the depth image on the screen
  //for this project comment out to have a nicer drawing interface 
  //image(kinect.depthImage(),0,0);
  
  //apply the stroke color currently selected via the keyboard
  colorStart();
  strokeWeight(3);
  //draw a line from the previous point to the new closest one
  line(lastX, lastY, interpolatedX, interpolatedY);
  lastX = interpolatedX;
  lastY = interpolatedY;
}

void mousePressed(){
  //save image to file
  save("drawing.png");
  background(0);
}
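
The posted sketch calls colorStart() and describes the key commands, but those functions are not included above. A minimal sketch of what they might look like, based on the description (the exact color values and the default white stroke are my assumptions):

Code:

void colorStart() {
  if (colorNone) {
    noStroke();           // 'S' pressed: pen lifted, the line() call draws nothing
  } else if (colorR) {
    stroke(255, 0, 0);    // red
  } else if (colorG) {
    stroke(0, 255, 0);    // green
  } else if (colorB) {
    stroke(0, 0, 255);    // blue
  } else if (colorY) {
    stroke(255, 255, 0);  // yellow
  } else if (colorO) {
    stroke(255, 165, 0);  // orange
  } else if (colorP) {
    stroke(128, 0, 128);  // purple
  } else {
    stroke(255);          // default before any key is pressed: white
  }
}

void keyPressed() {
  // reset all flags, then set the one matching the pressed key
  colorR = colorG = colorB = colorY = colorO = colorP = colorNone = false;
  switch (Character.toLowerCase(key)) {
  case 'r': colorR = true; break;
  case 'g': colorG = true; break;
  case 'b': colorB = true; break;
  case 'y': colorY = true; break;
  case 'o': colorO = true; break;
  case 'p': colorP = true; break;
  case 's': colorNone = true; break;
  }
}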
drawing.png
Attachments: kinectCanvas.zip

ambikayadav

Re: Proj 1: Simple Project

Post by ambikayadav » Wed Apr 06, 2016 10:57 am

FARTHER :: BETTER
by Ambika Yadav

In this project, I wanted to experiment with the human mind. I placed a piece of art in front of the user that can be completely understood only from far away.
As the user moves closer to the piece, it fades and its context can no longer be fully understood.
The user needs to stand at the right distance from the piece for the work to be disclosed completely.
To achieve this, I first split the artwork into small blocks and vary the opacity of the blocks, decreasing it overall as the user approaches.

The piece of art I have used here is American Gothic by Grant Wood.

Code:

import SimpleOpenNI.*;
SimpleOpenNI kinect;
int opacityrand[];
PImage image[];
int closestValue;
void setup()
{
 size(1000,1000);
 background(255,255,255);
 
 kinect = new SimpleOpenNI(this);
 kinect.enableDepth();
 
 image = new PImage[1024];
 opacityrand = new int[1024];
 for (int i = 0 ; i <1024 ;i++)
 {
   opacityrand[i] = int(random(10,30)); 
 }
 
 for (int i=0 ;i<32;i++)
 {
   for (int j=0 ;j<32;j++)
 {
   String finalname = "AmericanGothic [www.imagesplitter.net]-"+ str(i) + "-" +str(j) + ".jpeg";
   image[i+32*j]= loadImage(finalname);
 }
 }
}
void draw()
{
 closestValue = 8000;
  kinect.update();
  int[] depthValues = kinect.depthMap();
    for(int y = 0; y < 480; y++)
    {
      for(int x = 0; x < 640; x++)
      {
        int i = x + y * 640;
        int currentDepthValue = depthValues[i];
        if(currentDepthValue > 0 && currentDepthValue < closestValue)
        {
          closestValue = currentDepthValue;
        }
      }
    }
  
 background(255,255,255);
 for (int i=0 ;i<32;i++)
 {
   for (int j=0 ;j<32;j++)
   {
    if (closestValue > 2500)
    {
    tint(255, 255);
    }
    else if ( closestValue < 2500)
    {
    int z = int(map( closestValue, 0, 2500, 0,255));
    int alpha = z - opacityrand[i+32*j]*3;  // index the opacity array the same way as the image array
    tint(255,alpha);
    }
    image(image[i+32*j],j*32.0625,i*38.5);  
   }
 }
}
Screen Shot 2016-04-06 at 11.46.57 AM.png
Closest to the Kinect
Screen Shot 2016-04-06 at 11.45.45 AM.png
Screen Shot 2016-04-06 at 11.47.07 AM.png
Screen Shot 2016-04-06 at 11.47.23 AM.png
Screen Shot 2016-04-06 at 11.49.13 AM.png
