Proj 5: Final Project II: Actual Documentation

glegrady
Posts: 203
Joined: Wed Sep 22, 2010 12:26 pm

Proj 5: Final Project II: Actual Documentation

Post by glegrady » Fri Apr 01, 2016 6:23 pm

Final Project II: Actual Documentation

Topic: Students will create a project using the Kinect and Processing. The project can be an extension and further development of Student Project I, or it can be a new project.

Criteria: The project will have conceptual, aesthetic, and computational components. It will be based on ideas, technical solutions, and aesthetic methods learned from the research presentations and from other sources as found through individual research.

Schedule:
May 24: Classroom discussion of the basic concept, defined with a sketch. If Project II is a continuation of Project I, the new components to be added should be described precisely. (also posted at student forum)
May 31: Lab work, individual meetings
June 7: Final presentations in classroom
George Legrady
legrady@mat.ucsb.edu

junxiangyao
Posts: 10
Joined: Wed Jan 06, 2016 1:38 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by junxiangyao » Tue Jun 07, 2016 1:42 am

Particles Mirror

Junxiang Yao

A particle system is a useful technique that uses a large number of small objects to simulate phenomena such as water, fog, falling leaves, and rocks, which makes particles a flexible way to build shapes. In this final project, I planned to use both the Kinect and a particle system to create a piece that lets users interact with virtual particles that follow some of the physical laws of the real world.
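As a minimal sketch of the general idea (illustrative only, not code from this project), a particle system in Processing just needs a list of objects that each store a position and a velocity and update themselves every frame:

Code: Select all

// Minimal particle-system sketch (illustrative only, not project code)
ArrayList<PVector> positions  = new ArrayList<PVector>();
ArrayList<PVector> velocities = new ArrayList<PVector>();

void setup() {
  size(640, 480);
  for (int i = 0; i < 150; i++) {
    positions.add(new PVector(random(width), random(height)));
    velocities.add(PVector.random2D());
  }
}

void draw() {
  background(30);
  noStroke();
  fill(200);
  for (int i = 0; i < positions.size(); i++) {
    PVector p = positions.get(i);
    p.add(velocities.get(i));                                 // move
    if (p.x < 0 || p.x > width)  velocities.get(i).x *= -1;   // bounce off walls
    if (p.y < 0 || p.y > height) velocities.get(i).y *= -1;
    ellipse(p.x, p.y, 6, 6);
  }
}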

The Initial Version

In my first plan, I used toxiclibs, a physics library that can assign physical behaviors to objects in Processing, to create a liquid-like 3D structure. To make the particles look more like a liquid, I covered them with a mesh. The particles respond to gravity and to several attractors created by the gestures and movement of the user, and there are also repulsions among all the particles. Here are two screenshots of the demo:
屏幕快照 2016-06-07 上午12.07.53.png
屏幕快照 2016-06-07 上午12.08.28.png
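For reference, a minimal toxiclibs setup along these lines might look like the sketch below. This is my assumption of typical usage (2D for brevity, with toxiclibs-0020 class names as used in The Nature of Code), not the actual demo code: particles live in a VerletPhysics2D world with gravity, and each particle carries a negative AttractionBehavior so that the particles repel one another.

Code: Select all

// Hedged sketch of a toxiclibs particle setup (toxiclibs-0020 naming,
// as used in The Nature of Code; class names differ in later versions)
import toxi.geom.*;
import toxi.physics2d.*;
import toxi.physics2d.behaviors.*;

VerletPhysics2D physics;

void setup() {
  size(640, 480);
  physics = new VerletPhysics2D();
  physics.addBehavior(new GravityBehavior(new Vec2D(0, 0.5)));
  for (int i = 0; i < 100; i++) {
    VerletParticle2D p = new VerletParticle2D(random(width), random(height/2));
    physics.addParticle(p);
    // negative strength makes the particle repel its neighbors
    physics.addBehavior(new AttractionBehavior(p, 20, -1.2f));
  }
}

void draw() {
  background(0);
  physics.update();
  fill(255);
  noStroke();
  for (VerletParticle2D p : physics.particles) {
    ellipse(p.x, p.y, 5, 5);
  }
}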

The Final Version

Because I had just started learning these physics libraries and ran into problems with the initial plan, I was not sure the result would work out. I therefore decided to try the physics libraries in 2D space first, and, referencing the work of Daniel Rozin, I planned to build a new virtual mirror using the Kinect and a particle system.

In this new plan, the particles move toward the contour of the user, which the computer generates from the raw depth image captured by the Kinect. Once they move inside the contour, they do not leave unless the contour disappears. At the beginning, my method for keeping the particles inside the contour while still letting them wander everywhere within it was to change the attractive force: once a particle moved inside the contour, the attraction would be removed. But after discussing this with Weihao, I realized that this idea was not viable, since an attraction cannot be selective about which particles it affects. So I changed my plan again. Instead of building attractors inside the contour, I let each particle move toward a goal location of its own. If the Kinect captures a contour blob, the goal location is placed inside the contour; when the distance between a particle's goal location and its current location falls below a threshold, the goal moves to a new place inside the contour, and this repeats until no contour is detected. With this method, even though there is no actual attractor inside the contour, it looks as if some kind of attraction is pulling the particles in.
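Stripped of the Kinect and physics details, the retargeting rule can be sketched as follows. This is a simplified illustration: insideContour() is a hypothetical stand-in for the project's polygon containment test, and a lerp replaces the steering force used in the project.

Code: Select all

// Simplified retargeting rule (illustrative only; insideContour() is a
// hypothetical stand-in for the project's polygon containment test)
PVector pos, goal;

void setup() {
  size(640, 480);
  pos  = new PVector(random(width), random(height));
  goal = pickGoalInsideContour();
}

void draw() {
  background(30);
  // when the particle gets within the threshold of its goal,
  // pick a new goal location inside the contour
  if (PVector.dist(pos, goal) < 2) {
    goal = pickGoalInsideContour();
  }
  pos.lerp(goal, 0.05);  // ease toward the goal (the project uses a steering force)
  fill(255);
  noStroke();
  ellipse(pos.x, pos.y, 8, 8);
}

PVector pickGoalInsideContour() {
  // re-roll random positions until one lands inside the contour
  PVector g = new PVector(random(width), random(height));
  while (!insideContour(g.x, g.y)) {
    g.set(random(width), random(height));
  }
  return g;
}

// stand-in contour: a fixed circle in the middle of the screen
boolean insideContour(float x, float y) {
  return dist(x, y, width/2, height/2) < 150;
}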

Demo

In the demo, I used the raw depth image captured by the Kinect. For the repulsive force, I calculated a vector for each particle and let the particles affect one another so that they would not overlap. But because the algorithm was not mature enough, two colliding particles could keep shaking for a little while. Here is a screenshot of the demo.
屏幕快照 2016-06-07 上午12.11.40.png
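The separation step described above is essentially the classic pairwise repulsion from The Nature of Code. A minimal version (my reconstruction, not the demo code) is shown below; it also suggests why the particles shake: the steering correction is applied at full strength every frame, with nothing to damp the oscillation.

Code: Select all

// Naive pairwise separation (a reconstruction of the idea, not the demo code)
// Each particle steers away from neighbors closer than a desired separation.
PVector separate(PVector location, PVector velocity, ArrayList<PVector> others,
                 float desiredSeparation, float maxspeed) {
  PVector sum = new PVector();
  int count = 0;
  for (PVector other : others) {
    float d = PVector.dist(location, other);
    if (d > 0 && d < desiredSeparation) {
      PVector diff = PVector.sub(location, other);
      diff.normalize();
      diff.div(d);      // weight by distance: closer neighbors push harder
      sum.add(diff);
      count++;
    }
  }
  if (count > 0) {
    sum.setMag(maxspeed);
    // steering = desired - velocity; applied undamped, this can oscillate,
    // which is the "shaking" described above
    return PVector.sub(sum, velocity);
  }
  return new PVector(0, 0);
}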
Version 1

Following Professor Legrady's advice, I made several changes. I colored the pixels of the raw depth image with only two shades of gray. Since the areas of the hands, arms, neck, and head are quite a bit smaller than the area of the main body, the particles are less likely to move into those body parts, so the particle cluster alone does not read clearly; I needed the background as a reference. In addition to the change to the background, I added another kind of particle that always keeps away from the contour, and used a color scheme to differentiate the two kinds of particles. Because the separation algorithm I used in the demo made the particles shake, I switched to the Box2D library to handle the collisions between particles, which makes the effect smoother. I also wanted the particles' movement to be affected by collisions. Here are several screenshots of this version:
屏幕快照 2016-06-07 上午12.14.47.png
屏幕快照 2016-06-07 上午12.53.08.png
屏幕快照 2016-06-07 上午12.53.46.png

Code: Select all

/********************************************************
 * MAT265 PROJ.5   Final Project  Version 1             *
 *                                                      *
 * Junxiang Yao                                         * 
 *                                                      *
 *                                                      *
 *                                                      *
 * Reference:                                           *
 *                                                      *
 * Making Things See by Greg Borinstein.                *
 *                                                      *
 * The Nature of Code by Daniel Shiffman.               *
 *                                                      *
 * Kinect Flow Example by Amnon Owed.                   *
 *                                                      *
 ********************************************************/
 


// import libraries
import processing.opengl.*; // opengl
import SimpleOpenNI.*; // kinect
import blobDetection.*; // blobs
// java.util.Collections is used by PolygonBlob (Collections.reverse)
import java.util.Collections;

// this is a regular java import for PolygonBlob
import java.awt.Polygon;


// box2d
import shiffman.box2d.*;
import org.jbox2d.collision.shapes.*;
import org.jbox2d.common.*;
import org.jbox2d.dynamics.*;

Box2DProcessing box2d;
ArrayList<Boundary> boundaries;

// declare SimpleOpenNI object
SimpleOpenNI context;
// declare BlobDetection object
BlobDetection theBlobDetection;
// declare custom PolygonBlob object
PolygonBlob poly = new PolygonBlob();

// PImage to hold incoming imagery and smaller one for blob detection
PImage cam, blobs;
// the kinect's dimensions for later calculations
int kinectWidth = 640;
int kinectHeight = 480;
// to center and rescale from 640x480 to higher custom resolutions
float reScale;

//vehicles
Vehicle[] vehicles = new Vehicle[150];
bgVehicle[] bgVehicles = new bgVehicle[240];
boolean skeleton = true;
PVector com = new PVector();                                   
PVector com2d = new PVector();
boolean start = false;
PImage bg;
PImage colorBar;
int furthest = 2500;

void setup() {
  // it's possible to customize this, for example 1920x1080
  size(1080, 720, OPENGL);
  smooth();
  box2d = new Box2DProcessing(this);
  box2d.createWorld();
  // We are setting a custom gravity
  box2d.setGravity(0, 0);
  colorBar = loadImage("colorBar.jpg");

  // initialize SimpleOpenNI object
  context = new SimpleOpenNI(this);
  // mirror the image to be more intuitive
  context.setMirror(true);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!"); 
    exit();
    return;
  }

  // enable depthMap generation 
  context.enableDepth();

  // enable skeleton generation for all joints
  context.enableUser();


  if (!context.enableDepth() || !context.enableUser()) { 
    // if context.enableScene() returns false
    // then the Kinect is not working correctly
    // make sure the green light is blinking
    println("Kinect not connected!"); 
    exit();
  } else {
    // calculate the reScale value
    // currently it's rescaled to fill the complete width (cuts off top/bottom)
    // it's also possible to fill the complete height (leaves empty sides)
    reScale = (float) width / kinectWidth;
    // create a smaller blob image for speed and efficiency
    blobs = createImage(kinectWidth/3, kinectHeight/3, RGB);
    // initialize blob detection object to the blob image dimensions
    theBlobDetection = new BlobDetection(blobs.width, blobs.height);
    theBlobDetection.setThreshold(0.7);
  }
  for (int i = 0; i < vehicles.length; i++) {
    vehicles[i] = new Vehicle(random(40, kinectWidth-40), random(40, kinectHeight-40));
  }
  for (int i = 0; i < bgVehicles.length; i++) {
    bgVehicles[i] = new bgVehicle(random(40, kinectWidth-40), random(40, kinectHeight-40));
  }
    boundaries = new ArrayList();
    boundaries.add(new Boundary(kinectWidth/2, kinectHeight-1, kinectWidth, 2, 0));
    boundaries.add(new Boundary(kinectWidth/2, 1, kinectWidth, 2, 0));
    boundaries.add(new Boundary(kinectWidth-1, kinectHeight/2, 2, kinectHeight, 0));
    boundaries.add(new Boundary(1, kinectHeight/2, 2, kinectHeight, 0));
}

void draw() {
  box2d.step();
  // fading background
  //  noStroke();
  //  fill(0, 65);
  //  rect(0, 0, width, height);
  // update the SimpleOpenNI object
  context.update();

  //  // put the image into a PImage
  //  cam = context.depthImage();
  //  // copy the image into the smaller blob image
  //  blobs.copy(cam, 0, 0, cam.width, cam.height, 0, 0, blobs.width, blobs.height);

  int[] depthValues = context.depthMap();
  bg = context.depthImage();
  bg.loadPixels();
  for (int i = 0; i < kinectWidth; i++) {
    for (int j = 0; j < kinectHeight; j++) {
      int index = i + j * kinectWidth;
      if (depthValues[index] > 500 && depthValues[index] < furthest) {
        bg.pixels[index]=color(255);
      } else {
        bg.pixels[index]=color(0);
      }
    }
  }
  // draw depthImageMap
  bg.updatePixels();


  // copy the image into the smaller blob image
  blobs.copy(bg, 0, 0, bg.width, bg.height, 0, 0, blobs.width, blobs.height);
  // blur the blob image
  blobs.filter(BLUR);
  // detect the blobs
  theBlobDetection.computeBlobs(blobs.pixels);
  // clear the polygon (original functionality)
  poly.reset();
  // create the polygon from the blobs (custom functionality, see class)
  poly.createPolygon();
  //  translate(0, (height-kinectHeight*reScale)/2);
  translate(0, 0);
  scale(reScale);
  bg.loadPixels();
  for (int i = 0; i < kinectWidth; i++) {
    for (int j = 0; j < kinectHeight; j++) {
      int index = i + j * kinectWidth;
      if (depthValues[index] > 500 && depthValues[index] < furthest) {
        bg.pixels[index]=color(60);
      } else {
        bg.pixels[index]=color(30);
      }
    }
  }
  // draw depthImageMap
  bg.updatePixels();

  image(bg, 0, 0);
  tint(255, 126); 
  //  filter(BLUR, 4);
  //  for (Boundary wall : boundaries) {
  //    wall.display();
  //  }

  //Particle-------------------------------------------------------------------------------------  
  for (int i = 0; i < vehicles.length; i++) {
    vehicles[i].newTarget();
    //    vehicles[i].targetDisplay();
    //    vehicles[i].separate(vehicles);
    vehicles[i].seek();
    vehicles[i].update();
    vehicles[i].display();
  }
  for (int i = 0; i < bgVehicles.length; i++) {
    bgVehicles[i].newTarget();
//    //    vehicles[i].targetDisplay();
//    //    vehicles[i].separate(vehicles);
    bgVehicles[i].seek();
    bgVehicles[i].update();
    bgVehicles[i].display();
  }

}


class Boundary {

  // A boundary is a simple rectangle with x,y,width,and height
  float x;
  float y;
  float w;
  float h;
  // But we also have to make a body for box2d to know about it
  Body b;

 Boundary(float x_,float y_, float w_, float h_, float a) {
    x = x_;
    y = y_;
    w = w_;
    h = h_;

    // Define the polygon
    PolygonShape sd = new PolygonShape();
    // Figure out the box2d coordinates
    float box2dW = box2d.scalarPixelsToWorld(w/2);
    float box2dH = box2d.scalarPixelsToWorld(h/2);
    // We're just a box
    sd.setAsBox(box2dW, box2dH);


    // Create the body
    BodyDef bd = new BodyDef();
    bd.type = BodyType.STATIC;
    bd.angle = a;
    bd.position.set(box2d.coordPixelsToWorld(x,y));
    b = box2d.createBody(bd);
    
    // Attach the shape to the body using a Fixture
    b.createFixture(sd,1);
  }

  // Draw the boundary, if it were at an angle we'd have to do something fancier
  void display() {
    noFill();
    stroke(255);
    strokeWeight(1);
    rectMode(CENTER);

    float a = b.getAngle();

    pushMatrix();
    translate(x,y);
    rotate(-a);
    rect(0,0,w,h);
    popMatrix();
  }

}

class PolygonBlob extends Polygon {


  void createPolygon() {
   
    ArrayList<ArrayList<PVector>> contours = new ArrayList<ArrayList<PVector>>();
    // helpful variables to keep track of the selected contour and point (start/end point)
    int selectedContour = 0;
    int selectedPoint = 0;

    // create contours from blobs
    // go over all the detected blobs
    for (int n=0; n<theBlobDetection.getBlobNb (); n++) {
      Blob b = theBlobDetection.getBlob(n);
      // for each substantial blob...
      if (b != null && b.getEdgeNb() > 100) {
        // create a new contour arrayList of PVectors
        ArrayList<PVector> contour = new ArrayList<PVector>();
        // go over all the edges in the blob
        for (int m=0; m<b.getEdgeNb (); m++) {
          // get the edgeVertices of the edge
          EdgeVertex eA = b.getEdgeVertexA(m);
          EdgeVertex eB = b.getEdgeVertexB(m);
          // if both ain't null...
          if (eA != null && eB != null) {
            // get next and previous edgeVertexA
            EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
            EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));
            // calculate distance between vertexA and next and previous edgeVertexA respectively
            // positions are multiplied by kinect dimensions because the blob library returns normalized values
            float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
            float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);
            // if either distance is bigger than 15
            if (dn > 15 || dp > 15) {
              // if the current contour size is bigger than zero
              if (contour.size() > 0) {
                // add final point
                contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
                // add current contour to the arrayList
                contours.add(contour);
                // start a new contour arrayList
                contour = new ArrayList<PVector>();
                // if the current contour size is 0 (aka it's a new list)
              } else {
                // add the point to the list
                contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
              }
              // if both distances are smaller than 15 (aka the points are close)
            } else {
              // add the point to the list
              contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
            }
          }
        }
      }
    }


    while (contours.size () > 0) {

      // find next contour
      float distance = 999999999;
      // if there are already points in the polygon
      if (npoints > 0) {
        // use the polygon's last point as a starting point
        PVector lastPoint = new PVector(xpoints[npoints-1], ypoints[npoints-1]);
        // go over all contours
        for (int i=0; i<contours.size (); i++) {
          ArrayList<PVector> c = contours.get(i);
          // get the contour's first point
          PVector fp = c.get(0);
          // get the contour's last point
          PVector lp = c.get(c.size()-1);
          // if the distance between the current contour's first point and the polygon's last point is smaller than distance
          if (fp.dist(lastPoint) < distance) {
            // set distance to this distance
            distance = fp.dist(lastPoint);
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 0 (which signals first point)
            selectedPoint = 0;
          }
          // if the distance between the current contour's last point and the polygon's last point is smaller than distance
          if (lp.dist(lastPoint) < distance) {
            // set distance to this distance
            distance = lp.dist(lastPoint);
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 1 (which signals last point)
            selectedPoint = 1;
          }
        }
        // if the polygon is still empty
      } else {
        // use a starting point in the lower-right
        PVector closestPoint = new PVector(width, height);
        // go over all contours
        for (int i=0; i<contours.size (); i++) {
          ArrayList<PVector> c = contours.get(i);
          // get the contour's first point
          PVector fp = c.get(0);
          // get the contour's last point
          PVector lp = c.get(c.size()-1);
          // if the first point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
          if (fp.y > kinectHeight-5 && fp.x < closestPoint.x) {
            // set closestPoint to first point
            closestPoint = fp;
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 0 (which signals first point)
            selectedPoint = 0;
          }
          // if the last point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
          if (lp.y > kinectHeight-5 && lp.x < closestPoint.x) {
            // set closestPoint to last point
            closestPoint = lp;
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 1 (which signals last point)
            selectedPoint = 1;
          }
        }
      }

      // add contour to polygon
      ArrayList<PVector> contour = contours.get(selectedContour);
      // if selectedPoint is bigger than zero (aka last point) then reverse the arrayList of points
      if (selectedPoint > 0) { 
        Collections.reverse(contour);
      }
      // add all the points in the contour to the polygon
      for (PVector p : contour) {
        addPoint(int(p.x), int(p.y));
      }    
      // remove this contour from the list of contours
      contours.remove(selectedContour);
      // the while loop above makes all of this code loop until the number of contours is zero
      // at that time all the points in all the contours have been added to the polygon... in the correct order (hopefully)      
    }
  }
}

class Vehicle {
  ArrayList<PVector> history = new ArrayList<PVector>();
  Body body;

  float r;
  float maxforce;
  float maxspeed;
  Vec2 target;
  Vec2 preTarget;
  color c;
  float w;
  float h;


  Vehicle(float x, float y) {

    r = random(3, 6);
    maxspeed = 20;
    maxforce = 20;
    target =  box2d.coordPixelsToWorld(random(30, kinectWidth-30), random(30, kinectHeight-30));
    preTarget =  new Vec2(random(30, kinectWidth-30), random(30, kinectHeight-30));
    //    new Vec2(random(30, width-30), random(30, height-30));
    //    c = color(random(255), random(255), random(255));
    //    c = color(255);
    c = colorBar.get((int)random(colorBar.width/2-200), colorBar.height/2);
    makeBody(new Vec2(x, y));
  }



  void newTarget() {
    Vec2 loc = body.getWorldCenter();
    
    Vec2 targetWorld = box2d.coordWorldToPixels(target.x, target.y);
    if (poly.npoints > 0 && !poly.contains(targetWorld.x, targetWorld.y)) {
      preTarget = new Vec2(random(30, kinectWidth-30), random(30, kinectHeight-30)); 
      if (!poly.contains(preTarget.x, preTarget.y)) {
        // while it is outside the polygon
        while (!poly.contains (preTarget.x, preTarget.y)) {
          // randomize x and y
          preTarget.x = random(30, kinectWidth-30);
          preTarget.y = random(30, kinectHeight-30);
        }
      }
    }
    if (dist(loc.x, loc.y, target.x, target.y)<2) {  
      preTarget = new Vec2(targetWorld.x+random(-6, 6), targetWorld.y+random(-6, 6)); 
      if (poly.npoints > 0) {
        if (!poly.contains(preTarget.x, preTarget.y)) {
          // while it is outside the polygon
          while (!poly.contains (preTarget.x, preTarget.y)) {
            // randomize x and y
            preTarget.x = random(30, kinectWidth-30);
            preTarget.y = random(30, kinectHeight-30);
          }
        }
      }
    }
    target = box2d.coordPixelsToWorld(preTarget.x, preTarget.y);
  }

  void update() {
    Vec2 seek = seek();
    seek.mulLocal(1.5);
    Vec2 loc = body.getWorldCenter();
    body.applyForce(seek, loc);
  }



  Vec2 seek() {
    Vec2 loc = body.getWorldCenter();
    Vec2 desired = target.sub(loc);  // A vector pointing from the location to the target

    // If the magnitude of desired equals 0, skip out of here
    // (We could optimize this to check if x and y are 0 to avoid the mag() square root)
    if (desired.length() == 0) return new Vec2(0, 0);

    // Normalize desired and scale to maximum speed
    desired.normalize();
    desired.mulLocal(maxspeed);
    // Steering = Desired minus Velocity

    Vec2 vel = body.getLinearVelocity();
    Vec2 steer = desired.sub(vel);

    float len = steer.length();
    if (len > maxforce) {
      steer.normalize();
      steer.mulLocal(maxforce);
    }
    return steer;
  }

  void targetDisplay() {
    fill(200);
    stroke(0);
    strokeWeight(2);
    Vec2 targetWorld = box2d.coordWorldToPixels(target.x, target.y);
    ellipse(targetWorld.x, targetWorld.y, 16, 16);
  }

  //  void separate(Vehicle[] v) {
  //    float desiredseparation = r;
  //    PVector sum = new PVector();
  //    int count = 0;
  //    for (Vehicle other : vehicles) {
  //      float d = PVector.dist(location, other.location);
  //      if (d>0&&d<desiredseparation) {
  //        PVector diff = PVector.sub(location, other.location);
  //        diff.normalize();
  //        diff.div(d);
  //        sum.add(diff);
  //        count++;
  //      }
  //      if (start&&poly.npoints > 0&&d>250&&poly.contains(location.x, location.y)&&poly.contains(other.location.x, other.location.y)) {
  //        stroke(255, 100);
  //        strokeWeight(0.3);
  //        line(location.x, location.y, other.location.x, other.location.y);
  //      }
  //    }
  //    if (count>0) {
  //      sum.setMag(maxspeed);
  //      PVector steer = PVector.sub(sum, velocity);
  //      steer.mult(40);
  //      applyforce(steer);
  //    }
  //  }








  void display() {
    // We look at each body and get its screen position
    Vec2 pos = box2d.getBodyPixelCoord(body);
    // Get its angle of rotation
    float a = body.getAngle();

    rectMode(CENTER);
    fill(c);
    noStroke();
    //    stroke(0);
    //    strokeWeight(1);
    pushMatrix();
    translate(pos.x, pos.y);
    rotate(-a);

    ellipse(0, 0, r*2, r*2);
    popMatrix();
  }

  void makeBody(Vec2 center) {

    // Define the body and make it from the shape
    BodyDef bd = new BodyDef();
    bd.type = BodyType.DYNAMIC;
    bd.position.set(box2d.coordPixelsToWorld(center));
    body = box2d.createBody(bd);

    CircleShape circle = new CircleShape();
    circle.m_radius = box2d.scalarPixelsToWorld(r);

    circle.m_p.set(0, 0);

    body.createFixture(circle, 1.0);

    // Give it some initial random velocity
    body.setLinearVelocity(new Vec2(0, 0));
    body.setAngularVelocity(0);
  }
}

class bgVehicle extends Vehicle {
  color c;
  bgVehicle(float x, float y) {
    super(x, y);
    c = colorBar.get((int)random(colorBar.width/2, colorBar.width), colorBar.height/2);
  }

  void newTarget() {
    Vec2 loc = body.getWorldCenter();
    Vec2 vel = body.getLinearVelocity();
    vel.normalize();
    vel.mulLocal(10);
    Vec2 predictLoc = loc.add(vel);
    Vec2 predictLocPixels = box2d.coordWorldToPixels(predictLoc.x, predictLoc.y);
    Vec2 locPixels = box2d.coordWorldToPixels(loc.x, loc.y);

    Vec2 targetPixels = box2d.coordWorldToPixels(target.x, target.y);
    if (poly.npoints > 0 && poly.contains(targetPixels.x, targetPixels.y)) {
      preTarget = new Vec2(random(30, kinectWidth-30), random(30, kinectHeight-30)); 
      if (poly.contains(preTarget.x, preTarget.y)) {
        // while it is outside the polygon
        while (poly.contains (preTarget.x, preTarget.y)) {
          // randomize x and y
          preTarget.x = random(30, kinectWidth-30);
          preTarget.y = random(30, kinectHeight-30);
        }
      }
    }
    if (dist(loc.x, loc.y, target.x, target.y)<2) {  
      preTarget = new Vec2(targetPixels.x+random(-3, 3), targetPixels.y+random(-3, 3)); 
      if (poly.npoints > 0) {
        if (poly.contains(preTarget.x, preTarget.y)) {
          // while it is outside the polygon
          while (poly.contains (preTarget.x, preTarget.y)) {
            // randomize x and y
            preTarget.x = random(30, kinectWidth-30);
            preTarget.y = random(30, kinectHeight-30);
          }
        }
      }
    }
    target = box2d.coordPixelsToWorld(preTarget.x, preTarget.y);
  }

  void update() {    
    Vec2 loc = body.getWorldCenter();
    Vec2 seek = seek();
    seek.mulLocal(1.5);

    body.applyForce(seek, loc);
  }



  Vec2 seek() {
    Vec2 loc = body.getWorldCenter();
    Vec2 desired = target.sub(loc);  // A vector pointing from the location to the target

    // If the magnitude of desired equals 0, skip out of here
    // (We could optimize this to check if x and y are 0 to avoid the mag() square root)
    if (desired.length() == 0) return new Vec2(0, 0);

    // Normalize desired and scale to maximum speed
    desired.normalize();
    desired.mulLocal(maxspeed);
    // Steering = Desired minus Velocity

    Vec2 vel = body.getLinearVelocity();
    Vec2 steer = desired.sub(vel);
    //    line(target.x, target.y, loc.x, loc.y);
    float len = steer.length();
    if (len > maxforce) {
      steer.normalize();
      steer.mulLocal(maxforce);
    }
    return steer;
  }

  void targetDisplay() {
    fill(200);
    stroke(0);
    strokeWeight(2);
    Vec2 targetPixels = box2d.coordWorldToPixels(target.x, target.y);
    ellipse(targetPixels.x, targetPixels.y, 16, 16);
  }

  //  void separate(Vehicle[] v) {
  //    float desiredseparation = r;
  //    PVector sum = new PVector();
  //    int count = 0;
  //    for (Vehicle other : vehicles) {
  //      float d = PVector.dist(location, other.location);
  //      if (d>0&&d<desiredseparation) {
  //        PVector diff = PVector.sub(location, other.location);
  //        diff.normalize();
  //        diff.div(d);
  //        sum.add(diff);
  //        count++;
  //      }
  //      if (start&&poly.npoints > 0&&d>250&&poly.contains(location.x, location.y)&&poly.contains(other.location.x, other.location.y)) {
  //        stroke(255, 100);
  //        strokeWeight(0.3);
  //        line(location.x, location.y, other.location.x, other.location.y);
  //      }
  //    }
  //    if (count>0) {
  //      sum.setMag(maxspeed);
  //      PVector steer = PVector.sub(sum, velocity);
  //      steer.mult(40);
  //      applyforce(steer);
  //    }
  //  }








  void display() {
    // We look at each body and get its screen position
    Vec2 pos = box2d.getBodyPixelCoord(body);
    // Get its angle of rotation
    float a = body.getAngle();
    //    float theta = velocity.heading2D() + PI/2;
    rectMode(CENTER);
    fill(c);
    noStroke();
    //    stroke(0);
    //    strokeWeight(1);
    pushMatrix();
    translate(pos.x, pos.y);
    rotate(-a);

    ellipse(0, 0, r*2, r*2);
    popMatrix();
  }

  void makeBody(Vec2 center) {

    // Define the body and make it from the shape
    BodyDef bd = new BodyDef();
    bd.type = BodyType.DYNAMIC;
    bd.position.set(box2d.coordPixelsToWorld(center));
    body = box2d.createBody(bd);

    CircleShape circle = new CircleShape();
    circle.m_radius = box2d.scalarPixelsToWorld(r);
    circle.m_p.set(0, 0);

    body.createFixture(circle, 1.0);

    // Give it some initial random velocity
    body.setLinearVelocity(new Vec2(0, 0));
    body.setAngularVelocity(0);
  }
}

// SimpleOpenNI user events

void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");

  context.startTrackingSkeleton(userId);
  start = true;
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
  println("onLostUser - userId: " + userId);
  start = false;
}

void onVisibleUser(SimpleOpenNI curContext, int userId)
{
  //println("onVisibleUser - userId: " + userId);
}
Version 2

Because my goal was to create a virtual mirror and the cluster was not showing the contour clearly, I wrote a second version. In this version, I enlarged the maximum radius of the particles and increased their number so that they stick together. The cluster follows the movement of the user, and each particle is colored brighter or darker depending on its relationship to the contour.
屏幕快照 2016-06-07 上午12.55.28.png
屏幕快照 2016-06-07 上午12.55.48.png
屏幕快照 2016-06-07 上午12.56.40.png
屏幕快照 2016-06-07 上午1.00.31.png

Code: Select all

/********************************************************
 * MAT265 PROJ.5   Final Project  Version 2             *
 *                                                      *
 * Junxiang Yao                                         * 
 *                                                      *
 *                                                      *
 *                                                      *
 * Reference:                                           *
 *                                                      *
 * Making Things See by Greg Borinstein.                *
 *                                                      *
 * The Nature of Code by Daniel Shiffman.               *
 *                                                      *
 * Kinect Flow Example by Amnon Owed.                   *
 *                                                      *
 ********************************************************/



// import libraries
import processing.opengl.*; // opengl
import SimpleOpenNI.*; // kinect
import blobDetection.*; // blobs

// java.util.Collections is used by PolygonBlob (Collections.reverse)
import java.util.Collections;

// this is a regular java import for polygon class (PolygonBlob)
import java.awt.Polygon;


// box2d libraries
import shiffman.box2d.*;
import org.jbox2d.collision.shapes.*;
import org.jbox2d.common.*;
import org.jbox2d.dynamics.*;

Box2DProcessing box2d;
ArrayList<Boundary> boundaries;

SimpleOpenNI context;
// declare BlobDetection object
BlobDetection theBlobDetection;
// declare custom PolygonBlob object 
PolygonBlob poly = new PolygonBlob();

// PImage to hold incoming imagery and smaller one for blob detection
PImage cam, blobs;

// the kinect's dimensions for later calculations
int kinectWidth = 640;
int kinectHeight = 480;

// to center and rescale from 640x480 to higher custom resolutions
float reScale;

//Vehicles
Vehicle[] vehicles = new Vehicle[500];
boolean skeleton = true;
PVector com = new PVector();                                   
PVector com2d = new PVector();
boolean start = false;
PImage bg;
PImage colorBar;
int furthest = 2500;

void setup() {
  size(1080, 720, OPENGL);
  smooth();
  box2d = new Box2DProcessing(this);
  box2d.createWorld();
  box2d.setGravity(0, 0);
  colorBar = loadImage("colorBar.jpg");

  // initialize SimpleOpenNI object
  context = new SimpleOpenNI(this);
  // mirror the image to be more intuitive
  context.setMirror(true);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!"); 
    exit();
    return;
  }

  // enable depthMap generation 
  context.enableDepth();

  // enable skeleton generation for all joints
  context.enableUser();


  if (!context.enableDepth() || !context.enableUser()) { 
    println("Kinect not connected!"); 
    exit();
  } else {
    // calculate the reScale value
    // currently it's rescaled to fill the complete width (cuts off top/bottom)
    // it's also possible to fill the complete height (leaves empty sides)
    reScale = (float) width / kinectWidth;
    // create a smaller blob image for speed and efficiency
    blobs = createImage(kinectWidth/3, kinectHeight/3, RGB);
    // initialize blob detection object to the blob image dimensions
    theBlobDetection = new BlobDetection(blobs.width, blobs.height);
    theBlobDetection.setThreshold(0.7);
  }
  for (int i = 0; i < vehicles.length; i++) {
    vehicles[i] = new Vehicle(random(40, kinectWidth-40), random(40, kinectHeight-40));
  }
  //  boundaries = new ArrayList();
  //  boundaries.add(new Boundary(kinectWidth/2, kinectHeight-1, kinectWidth, 2, 0));
  //  boundaries.add(new Boundary(kinectWidth/2, 1, kinectWidth, 2, 0));
  //  boundaries.add(new Boundary(kinectWidth-1, kinectHeight/2, 2, kinectHeight, 0));
  //  boundaries.add(new Boundary(1, kinectHeight/2, 2, kinectHeight, 0));
}

void draw() {
  box2d.step();
  context.update();

  int[] depthValues = context.depthMap();
  // put the image into a PImage
  bg = context.depthImage();
  bg.loadPixels();
  for (int i = 0; i < kinectWidth; i++) {
    for (int j = 0; j < kinectHeight; j++) {
      int index = i + j * kinectWidth;
      if (depthValues[index] > 500 && depthValues[index] <  furthest) {
        bg.pixels[index]=color(255);
      } else {
        bg.pixels[index]=color(0);
      }
    }
  }
  // draw depthImageMap
  bg.updatePixels();

  // copy the image into the smaller blob image
  blobs.copy(bg, 0, 0, bg.width, bg.height, 0, 0, blobs.width, blobs.height);
  // blur the blob image
  blobs.filter(BLUR);
  // detect the blobs
  theBlobDetection.computeBlobs(blobs.pixels);
  // clear the polygon (original functionality)
  poly.reset();
  // create the polygon from the blobs (custom functionality, see class)
  poly.createPolygon();
  //  translate(0, (height-kinectHeight*reScale)/2);
  translate(0, 0);
  scale(reScale);
  bg.loadPixels();
  for (int i = 0; i < kinectWidth; i++) {
    for (int j = 0; j < kinectHeight; j++) {
      int index = i + j * kinectWidth;
      if (depthValues[index] > 500 && depthValues[index] < furthest) {
        bg.pixels[index]=color(40);
      } else {
        bg.pixels[index]=color(30);
      }
    }
  }
  // draw depthImageMap
  bg.updatePixels();

  image(bg, 0, 0);
  tint(255, 126); 

  //  filter(BLUR, 4);
  
  
  //  for (Boundary wall : boundaries) {
  //    wall.display();
  //  }



  //Particle-------------------------------------------------------------------------------------  
  for (int i = 0; i < vehicles.length; i++) {
    vehicles[i].newTarget();
    vehicles[i].seek();
    vehicles[i].update();
    vehicles[i].display();
  }
}

class Boundary {
  float x;
  float y;
  float w;
  float h;

  Body b;

 Boundary(float x_,float y_, float w_, float h_, float a) {
    x = x_;
    y = y_;
    w = w_;
    h = h_;

    // Define the polygon
    PolygonShape sd = new PolygonShape();
    
    // Figure out the box2d coordinates
    float box2dW = box2d.scalarPixelsToWorld(w/2);
    float box2dH = box2d.scalarPixelsToWorld(h/2);

    sd.setAsBox(box2dW, box2dH);


    // Create the body
    BodyDef bd = new BodyDef();
    bd.type = BodyType.STATIC;
    bd.angle = a;
    bd.position.set(box2d.coordPixelsToWorld(x,y));
    b = box2d.createBody(bd);
    
    // Attach the shape to the body using a Fixture
    b.createFixture(sd,1);
  }

  void display() {
    noFill();
    stroke(255);
    strokeWeight(1);
    rectMode(CENTER);

    float a = b.getAngle();

    pushMatrix();
    translate(x,y);
    rotate(-a);
    rect(0,0,w,h);
    popMatrix();
  }

}

class PolygonBlob extends Polygon {

  void createPolygon() {
   
    ArrayList<ArrayList<PVector>> contours = new ArrayList<ArrayList<PVector>>();
    // helpful variables to keep track of the selected contour and point (start/end point)
    int selectedContour = 0;
    int selectedPoint = 0;

    // create contours from blobs
    // go over all the detected blobs
    for (int n=0; n<theBlobDetection.getBlobNb (); n++) {
      Blob b = theBlobDetection.getBlob(n);
      // for each substantial blob...
      if (b != null && b.getEdgeNb() > 100) {
        // create a new contour arrayList of PVectors
        ArrayList<PVector> contour = new ArrayList<PVector>();
        // go over all the edges in the blob
        for (int m=0; m<b.getEdgeNb (); m++) {
          // get the edgeVertices of the edge
          EdgeVertex eA = b.getEdgeVertexA(m);
          EdgeVertex eB = b.getEdgeVertexB(m);
          // if both ain't null...
          if (eA != null && eB != null) {
            // get next and previous edgeVertexA
            EdgeVertex fn = b.getEdgeVertexA((m+1) % b.getEdgeNb());
            EdgeVertex fp = b.getEdgeVertexA((max(0, m-1)));
            // calculate distance between vertexA and next and previous edgeVertexA respectively
            // positions are multiplied by kinect dimensions because the blob library returns normalized values
            float dn = dist(eA.x*kinectWidth, eA.y*kinectHeight, fn.x*kinectWidth, fn.y*kinectHeight);
            float dp = dist(eA.x*kinectWidth, eA.y*kinectHeight, fp.x*kinectWidth, fp.y*kinectHeight);
            // if either distance is bigger than 15
            if (dn > 15 || dp > 15) {
              // if the current contour size is bigger than zero
              if (contour.size() > 0) {
                // add final point
                contour.add(new PVector(eB.x*kinectWidth, eB.y*kinectHeight));
                // add current contour to the arrayList
                contours.add(contour);
                // start a new contour arrayList
                contour = new ArrayList<PVector>();
                // if the current contour size is 0 (aka it's a new list)
              } else {
                // add the point to the list
                contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
              }
              // if both distances are smaller than 15 (aka the points are close)
            } else {
              // add the point to the list
              contour.add(new PVector(eA.x*kinectWidth, eA.y*kinectHeight));
            }
          }
        }
      }
    }

   
    while (contours.size () > 0) {

      // find next contour
      float distance = 999999999;
      // if there are already points in the polygon
      if (npoints > 0) {
        // use the polygon's last point as a starting point
        PVector lastPoint = new PVector(xpoints[npoints-1], ypoints[npoints-1]);
        // go over all contours
        for (int i=0; i<contours.size (); i++) {
          ArrayList<PVector> c = contours.get(i);
          // get the contour's first point
          PVector fp = c.get(0);
          // get the contour's last point
          PVector lp = c.get(c.size()-1);
          // if the distance between the current contour's first point and the polygon's last point is smaller than distance
          if (fp.dist(lastPoint) < distance) {
            // set distance to this distance
            distance = fp.dist(lastPoint);
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 0 (which signals first point)
            selectedPoint = 0;
          }
          // if the distance between the current contour's last point and the polygon's last point is smaller than distance
          if (lp.dist(lastPoint) < distance) {
            // set distance to this distance
            distance = lp.dist(lastPoint);
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 1 (which signals last point)
            selectedPoint = 1;
          }
        }
        // if the polygon is still empty
      } else {
        // use a starting point in the lower-right
        PVector closestPoint = new PVector(width, height);
        // go over all contours
        for (int i=0; i<contours.size (); i++) {
          ArrayList<PVector> c = contours.get(i);
          // get the contour's first point
          PVector fp = c.get(0);
          // get the contour's last point
          PVector lp = c.get(c.size()-1);
          // if the first point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
          if (fp.y > kinectHeight-5 && fp.x < closestPoint.x) {
            // set closestPoint to first point
            closestPoint = fp;
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 0 (which signals first point)
            selectedPoint = 0;
          }
          // if the last point is in the lowest 5 pixels of the (kinect) screen and more to the left than the current closestPoint
          if (lp.y > kinectHeight-5 && lp.x < closestPoint.x) {
            // set closestPoint to last point
            closestPoint = lp;
            // set this as the selected contour
            selectedContour = i;
            // set selectedPoint to 1 (which signals last point)
            selectedPoint = 1;
          }
        }
      }

      // add contour to polygon
      ArrayList<PVector> contour = contours.get(selectedContour);
      // if selectedPoint is bigger than zero (aka last point) then reverse the arrayList of points
      if (selectedPoint > 0) { 
        Collections.reverse(contour);
      }
      // add all the points in the contour to the polygon
      for (PVector p : contour) {
        addPoint(int(p.x), int(p.y));
      }    
      // remove this contour from the list of contours
      contours.remove(selectedContour);
      // the while loop above makes all of this code loop until the number of contours is zero
      // at that time all the points in all the contours have been added to the polygon... in the correct order (hopefully)      
    }
  }
}

class Vehicle {
  ArrayList<PVector> history = new ArrayList<PVector>();
  Body body;

  float r;
  float maxforce;
  float maxspeed;
  Vec2 target;
  Vec2 preTarget;
  color c1;
  color c2;
  float w;
  float h;


  Vehicle(float x, float y) {
    r = random(3, 15);
    maxspeed = 30;
    maxforce = 20;
    target =  box2d.coordPixelsToWorld(random(30, kinectWidth-30), random(30, kinectHeight-30));
    preTarget =  new Vec2(random(30, kinectWidth-30), random(30, kinectHeight-30));
    //    new Vec2(random(30, width-30), random(30, height-30));
    //    c = color(random(255), random(255), random(255));
    //    c = color(255);
    c1 = colorBar.get((int)random(colorBar.width/2-360), colorBar.height/2);
    c2 = colorBar.get((int)random(colorBar.width/2-100, colorBar.width-200), colorBar.height/2);
    makeBody(new Vec2(x, y));
  }



  void newTarget() {
    Vec2 loc = body.getWorldCenter();

    Vec2 targetWorld = box2d.coordWorldToPixels(target.x, target.y);
    if (poly.npoints > 0 && !poly.contains(targetWorld.x, targetWorld.y)) {
      preTarget = new Vec2(random(30, kinectWidth-30), random(30, kinectHeight-30)); 
      if (!poly.contains(preTarget.x, preTarget.y)) {
        // while it is outside the polygon
        while (!poly.contains (preTarget.x, preTarget.y)) {
          // randomize x and y
          preTarget.x = random(30, kinectWidth-30);
          preTarget.y = random(30, kinectHeight-30);
        }
      }
    }
    if (dist(loc.x, loc.y, target.x, target.y)<2) {  
      preTarget = new Vec2(targetWorld.x+random(-6, 6), targetWorld.y+random(-6, 6)); 
      if (poly.npoints > 0) {
        if (!poly.contains(preTarget.x, preTarget.y)) {
          // while it is outside the polygon
          while (!poly.contains (preTarget.x, preTarget.y)) {
            // randomize x and y
            preTarget.x = random(30, kinectWidth-30);
            preTarget.y = random(30, kinectHeight-30);
          }
        }
      }
    }
    target = box2d.coordPixelsToWorld(preTarget.x, preTarget.y);
  }

  void update() {
    Vec2 seek = seek();
    seek.mulLocal(1.5);
    Vec2 loc = body.getWorldCenter();
    body.applyForce(seek, loc);
  }

  Vec2 seek() {
    Vec2 loc = body.getWorldCenter();
    Vec2 desired = target.sub(loc);  // A vector pointing from the location to the target

    // If the magnitude of desired equals 0, skip out of here
    // (We could optimize this to check if x and y are 0 to avoid the mag() square root)
    if (desired.length() == 0) return new Vec2(0, 0);

    // Normalize desired and scale to maximum speed
    desired.normalize();
    desired.mulLocal(maxspeed);
    // Steering = Desired minus Velocity

    Vec2 vel = body.getLinearVelocity();
    Vec2 steer = desired.sub(vel);
    float len = steer.length();
    if (len > maxforce) {
      steer.normalize();
      steer.mulLocal(maxforce);
    }
    return steer;
  }

  void targetDisplay() {
    fill(200);
    stroke(0);
    strokeWeight(2);
    Vec2 targetWorld = box2d.coordWorldToPixels(target.x, target.y);
    ellipse(targetWorld.x, targetWorld.y, 16, 16);
  }

  //  void separate(Vehicle[] v) {
  //    float desiredseparation = r;
  //    PVector sum = new PVector();
  //    int count = 0;
  //    for (Vehicle other : vehicles) {
  //      float d = PVector.dist(location, other.location);
  //      if (d>0&&d<desiredseparation) {
  //        PVector diff = PVector.sub(location, other.location);
  //        diff.normalize();
  //        diff.div(d);
  //        sum.add(diff);
  //        count++;
  //      }
  //      if (start&&poly.npoints > 0&&d>250&&poly.contains(location.x, location.y)&&poly.contains(other.location.x, other.location.y)) {
  //        stroke(255, 100);
  //        strokeWeight(0.3);
  //        line(location.x, location.y, other.location.x, other.location.y);
  //      }
  //    }
  //    if (count>0) {
  //      sum.setMag(maxspeed);
  //      PVector steer = PVector.sub(sum, velocity);
  //      steer.mult(40);
  //      applyforce(steer);
  //    }
  //  }


  void display() {
    // We look at each body and get its screen position
    Vec2 pos = box2d.getBodyPixelCoord(body);
    // Get its angle of rotation
    float a = body.getAngle();

    rectMode(CENTER);
    Vec2 loc = body.getWorldCenter();
    Vec2 locPixels = box2d.coordWorldToPixels(loc.x, loc.y);
    if ( poly.contains(locPixels.x, locPixels.y)) {
      fill(c1);
    } else
    {
      fill(c2);
    }

    noStroke();
    //    stroke(0);
    //    strokeWeight(1);
    pushMatrix();
    translate(pos.x, pos.y);
    rotate(-a);
    
    ellipse(0, 0, r*2, r*2);
    popMatrix();
  }

  void makeBody(Vec2 center) {

    // Define the body and make it from the shape
    BodyDef bd = new BodyDef();
    bd.type = BodyType.DYNAMIC;
    bd.position.set(box2d.coordPixelsToWorld(center));
    body = box2d.createBody(bd);

    CircleShape circle = new CircleShape();
    circle.m_radius = box2d.scalarPixelsToWorld(r);

    circle.m_p.set(0, 0);

    body.createFixture(circle, 1.0);

    // Give it some initial random velocity
    body.setLinearVelocity(new Vec2(0, 0));
    body.setAngularVelocity(0);
  }
}

// SimpleOpenNI user events

void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");

  context.startTrackingSkeleton(userId);
  start = true;
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
  println("onLostUser - userId: " + userId);
  start = false;
}

void onVisibleUser(SimpleOpenNI curContext, int userId)
{
  //println("onVisibleUser - userId: " + userId);
}
References
Making Things See, Greg Borinstein
The Nature of Code, Daniel Shiffman
Kinect Flow Example, Amnon Owed (https://vimeo.com/49516871)
Daniel Rozin (http://www.smoothware.com/danny/)

ihwang
Posts: 5
Joined: Fri Apr 01, 2016 2:35 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by ihwang » Tue Jun 07, 2016 1:50 am

Visualizing Energy Flow in 3D with Kinect V2 and a Head-up Display in Unity 5

Intae Hwang

This project started from the traditional Chinese martial art Tai Chi, where the goal of training is to strengthen the body and the mind at the same time. Tai Chi focuses on a state of mind that controls the flow of energy in the body through slow movement, which allows the practitioner to optimize internal energy with meditation.
tai-chi-sunrise-300x174.jpg
Source: http://www.tiptoplifestyle.com/tai-chi-benefits/

Calligraphy is another example of presenting the invisible energy of a body. As the performer moves a huge brush, the stroke reflects the body's movement. To draw good letters, the performer must practice for a long time; a fine drawing is the result of demanding training.
11230037.jpg
Source: http://koreajoongangdaily.joins.com/new ... id=2949706

To see this body movement in 3D, I used the Kinect V2, the Oculus DK2, and Unity 5.

Kinect V2
The Kinect V2 skeleton library for Unity is developed by Microsoft. If you install the Kinect SDK for Windows, it includes a complete scene with four different modes: BodyView (skeleton), DepthView, ColorView, and InfraView. However, using the package in Unity 5 did not go smoothly, because a user needs to modify the C# code inside the package. What you can make with this good resource therefore also depends on how well you understand the structure of the code.

In my case, the most challenging part was setting the proper orientation between the two devices. Most Kinect skeleton libraries always show a mirrored image to the user, and I had to change the orientation in order to place the Oculus display on the skeleton. Also, because of the competition between the two major computer companies, the Kinect V2 only works on Windows machines, not on macOS.
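As a toy illustration of the mirroring issue (written in Processing-style Java rather than Unity C#, with a hypothetical joints array standing in for real tracking data), un-mirroring a skeleton amounts to reflecting each joint's x-coordinate about the sensor's center line:

Code: Select all

// Toy illustration of un-mirroring skeleton joints (Processing, not Unity C#;
// the joints[] array here is hypothetical stand-in data)
int kinectWidth = 512;  // Kinect V2 depth-image width in pixels

PVector[] joints = {
  new PVector(100, 200), new PVector(150, 180), new PVector(200, 220)
};

PVector unmirror(PVector joint) {
  // reflect x about the vertical center line; y and z are unchanged
  return new PVector(kinectWidth - joint.x, joint.y, joint.z);
}

void setup() {
  for (PVector j : joints) {
    println(j + " -> " + unmirror(j));
  }
}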

Oculus DK2
camera_dk2.jpg
Source: https://www.oculus.com/en-us/blog/annou ... kit-2-dk2/

Since the consumer Oculus Rift was released a couple of months ago, the DK2, the previous version, is no longer supported on Windows 10. You need to install the old Oculus SDK (0.4.3) and the Oculus Unity package to run the device inside the engine. The DK2 works as a camera inside Unity: simply put the camera on a character, and the user sees the virtual world from the character's perspective.

To visualize body movement, I used the trail renderer in Unity, a tool that automatically draws the flow of a movement. What kind of experiment would be interesting to the user? My idea was to make people believe that they are in the virtual environment and that their actions actually trigger the movement of virtual objects. That is the difference between using the Kinect alone and combining the Kinect with the Oculus: the experience of the illusion is far more effective.

To achieve this goal, I created three scenes. The first scene is virtual calligraphy: a brush is attached to the user's right hand, and as the brush moves, a line is drawn on the ground.
Final Project  (1).jpg
2.jpg
The second scene is virtual physics. There is a simple set of multiple boxes; if the user pushes the box in front of him or her, the dominoes fall down. My intention was to test how well the Kinect skeleton library handles collision events. Beyond the simple column of boxes, the domino scene is an interesting situation that can only be experienced in a virtual environment.
Final Project  (2).jpg
3.jpg
The last scene is the energy flow. A flame is constantly created on both of the user's hands, and if the user swings the right hand, that hand shoots fire. The projectile moves based on the vector and speed of the right hand: the program calculates the right hand's movement and uses it to determine where the fireball flies. The cones are an indicator that tells the user how far the ball was thrown, and they create an eccentric mood in the space.
Final Project  (3).jpg
1.jpg
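The throw itself reduces to simple vector math. The sketch below (Processing rather than Unity, with fixed stand-in hand positions instead of live tracking) estimates the hand's velocity from two successive positions and uses it as the projectile's initial velocity:

Code: Select all

// Estimating a throw vector from two successive hand positions
// (Processing sketch with stand-in hand data, not the Unity implementation)
PVector prevHand = new PVector(300, 400);
PVector hand     = new PVector(340, 360);
PVector fireball, fireballVel;

void setup() {
  size(640, 480);
  // velocity ~ displacement per frame, scaled to taste
  fireballVel = PVector.sub(hand, prevHand);
  fireballVel.mult(1.5);
  fireball = new PVector(hand.x, hand.y);
}

void draw() {
  background(20);
  fireball.add(fireballVel);   // the fireball flies along the throw direction
  fill(255, 120, 0);
  noStroke();
  ellipse(fireball.x, fireball.y, 20, 20);
}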
Future Work

I found a lot of possibilities for this project. The library currently only shows joint positions by attaching a basic shape, such as a cube or a sphere, to each joint. My next step is to create a human or monster body and attach that 3D model to the skeleton, so that the user can see their own body through the Oculus.

This is the video.
https://www.youtube.com/watch?v=zztpa-IHkCM

Presentation File
Final Project .pdf
(433.22 KiB) Downloaded 401 times
Kinect Oculus Game
Kinect_Oculus_Game.zip
(14.42 MiB) Downloaded 311 times
To run this game, you need:
1. Windows 10 64-bit or Windows 8 64-bit
2. Microsoft Kinect SDK: https://developer.microsoft.com/en-us/windows/kinect
3. Oculus SDK for Windows (v0.4.3): https://developer.oculus.com/downloads/

jing_yan
Posts: 5
Joined: Fri Apr 01, 2016 2:33 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by jing_yan » Tue Jun 07, 2016 2:28 am


the BLUE GUITAR

Jing Yan

They said, "You have a blue guitar, You do not play things as they are." --- Wallace Stevens
Inspired by David Hockney, who was inspired by Wallace Stevens, who was in turn inspired by Pablo Picasso. The blue guitar is used as a metaphor not only for an instrument that you can play, but also for a distorted, unrealistic acoustic world that you might encounter.

The “BLUE GUITAR” is a motion-based spatial sound environment. The intention is to create a potential audio environment that users can interact with, and that gradually reveals and evolves itself through interaction. In this project, I also attempt to balance the designed narrative of the sound scenes against audience-triggered interaction: while the story is told during certain time sections, users can still participate in the performance of the sound.

The initial concept for the sound structure and sound materials:
sketch-01.jpg
Interaction process:
The sound is triggered by the entrance of the audience. The amplitude and panning are related to the number of users, their distance from the Kinect device, the positions of their body parts, and their position in space.
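A hedged sketch of this kind of mapping is shown below, using Minim's setGain() and setPan() controls on an AudioPlayer. The mouse position stands in for a Kinect-tracked user, and sea.mp3 is a hypothetical sound file:

Code: Select all

// Hedged sketch: mapping a tracked position to gain and pan with Minim
// (mouse-driven "user position" stands in for Kinect tracking;
// "sea.mp3" is a hypothetical sound file in the sketch's data folder)
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  player = minim.loadFile("sea.mp3");
  player.loop();
}

void draw() {
  background(0);
  // nearer (lower on screen) = louder; left/right position = pan
  float gain = map(mouseY, 0, height, -30, 0);   // dB-style gain
  float pan  = map(mouseX, 0, width, -1, 1);
  player.setGain(gain);
  player.setPan(pan);
  ellipse(mouseX, mouseY, 10, 10);
}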

Sound Scene:

^ Opening
Imagine a dark, empty room in which something is whispering, alluring you to come closer.
And the story begins.

^ Piano - instrument
The piano chords are triggered by the appearance of the users. The closer the user comes and the longer it plays, the more complicated the sound becomes. Users interact with body gestures.

^ Sea - space
Now you come closer to the sea. When you rise, your body grows into the sky and you hear the gulls. When you sink lower, you dive deep into the sea.

^ Memory - time
There are invisible doors on the plane; whichever one you step into leads you to a brand-new space and time.


Visualization:
In contrast to the complexity and direct control of the sound, the visualization is simplified into a wave-like form made of lines that move up and down according to the sound volume. Verlet integration is used to give the whole body a creature-like movement, so the visual object behaves like a deep-sea creature, gradually changing its shape and colors with the motion of its entire body.
visual.png
visual2.png
https://vimeo.com/169680830
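The core of Verlet integration, which the VerletBall and VerletStick classes build on, is that each point stores its current and previous positions, so velocity is implicit in their difference. A minimal row-of-points sketch (my own illustration, not the project's classes) looks like this:

Code: Select all

// Minimal Verlet integration of a row of points (my illustration;
// the project uses VerletBall/VerletStick classes after Ira Greenberg)
int n = 60;
float[] x = new float[n], y = new float[n];
float[] px = new float[n], py = new float[n];  // previous positions

void setup() {
  size(640, 480);
  for (int i = 0; i < n; i++) {
    x[i] = px[i] = map(i, 0, n-1, 40, width-40);
    y[i] = py[i] = height/2;
  }
}

void draw() {
  background(10, 20, 40);
  // drive one end up and down, as the sound volume does in the project
  y[0] = height/2 + 100 * sin(frameCount * 0.05);
  for (int i = 1; i < n; i++) {
    // Verlet step: velocity is implicit in (current - previous) position
    float vx = (x[i] - px[i]) * 0.98;  // 0.98 = damping
    float vy = (y[i] - py[i]) * 0.98;
    px[i] = x[i];  py[i] = y[i];
    x[i] += vx;    y[i] += vy;
    // one relaxation pass: keep each point near its left neighbor
    float dx = x[i] - x[i-1], dy = y[i] - y[i-1];
    float d = sqrt(dx*dx + dy*dy);
    float rest = (width - 80.0) / (n - 1);
    if (d > 0.0001) {
      float push = (d - rest) / d * 0.5;
      x[i] -= dx * push;
      y[i] -= dy * push;
    }
  }
  stroke(120, 200, 255);
  noFill();
  beginShape();
  for (int i = 0; i < n; i++) vertex(x[i], y[i]);
  endShape();
}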

Technique:
Sound materials are controlled with the Minim library.
Motion detection is achieved through the Kinect.
All parts are connected in Processing.

Future: I intended to build more of the sound with real-time synthesis in SuperCollider; however, due to some technical difficulties, the real-time bridge from Processing to SuperCollider was never built properly. Also, the sound environment is too realistic, and the sound still needs some improvement.
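For what it's worth, the usual route for such a bridge is OSC. A hedged sketch with the oscP5/netP5 libraries is shown below; it assumes SuperCollider's default language port 57120 and a hypothetical /blueguitar/amp address for which a matching responder would have to be defined on the SuperCollider side:

Code: Select all

// Hedged sketch of a Processing -> SuperCollider OSC bridge using oscP5.
// The address pattern /blueguitar/amp is hypothetical; a matching responder
// must be defined on the SuperCollider side (default language port 57120).
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress supercollider;

void setup() {
  size(200, 200);
  osc = new OscP5(this, 12000);                      // local listening port
  supercollider = new NetAddress("127.0.0.1", 57120);
}

void draw() {
  // send a normalized amplitude value every frame
  OscMessage msg = new OscMessage("/blueguitar/amp");
  msg.add(map(mouseY, 0, height, 0.0, 1.0));
  osc.send(msg, supercollider);
}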

Reference:
Minim library examples, Hilda's demo, SuperCollider examples.
The VerletBall & VerletStick classes are based on Ira Greenberg's VerletStick example
(https://github.com/irajgreenberg/worksh ... tStick.pde)

Main tab

Code: Select all

/* 2016-6-7 (Processing 3)
 
 M265 Optical-Computational Processes:  
 
 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 ::::::::::::::::::: the BLUE guitar ::::::::::::::::::::::::::::::
 :::::::::::::::::::::::::::::::::::::::::::  Jing YAN ::::::::::::
 ::::::::::::::::::: theuniqueeye@gmail.com :::::::::::::::::::::::
 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
 ::::::: [VERSION 9] ::::::::::::::::::::::::::::::::::::::::::::*/

/*
-------------------------------------------------------------------------------
 They said, "You have a blue guitar, You do not play things as they are."
 
 The “BLUE GUITAR” is a motion based spatial sound environment project. 
 The intention is to create a potential audio environment that allows user 
 to interact with and gradually reveals and evolves itself through interaction.
 During the project, I also attempt to balance the designed narrative of sound 
 scene and the audience trigger interaction. 
 -------------------------------------------------------------------------------
 Version History >>
 [version8]
 connect Kinect with skeleton detection
 [version7] key interaction version
 refined version6
 piano chord refined 
 color and color transition refined
 strokeweight refined
 sound cycle refined
 [version6]
 piano chord delay fixed
 visual color transition built
 total sound add to the posY
 [version5]
 scene setting
 [version4]
 add the whisper the walking step and the opening scene
 [version3]
 1st scene - Piano sound built
 [version 2]
 set up with minim sound
 try different component of minim library
 [version 1]
 build verlet wave form with verlet integration
 -------------------------------------------------------------------------------
 Reference >>
 minim library
 http://code.compartmental.net/minim/ugen_class_ugen.html
 https://forum.processing.org/one/topic/counter.html
 the VerletBall&VerletStick class is based on Ira Greenberg's VerletStick example
 https://github.com/irajgreenberg/workshopExamples/blob/master/apression/VerletStick.pde
 Hilda's demo about user detection
 tried SuperCollider's examples to build synthesized sound 
 -------------------------------------------------------------------------------
 Special thanks to Zhenyu and Donghao for rescuing me whenever I got stuck.
 Guide by Prof. George Legrady. 
 -------------------------------------------------------------------------------
 */


import peasy.*;
import peasy.org.apache.commons.math.*;
import peasy.org.apache.commons.math.geometry.*;
import peasy.test.*;
PeasyCam cam;

import SimpleOpenNI.*;
SimpleOpenNI kinect;
import processing.opengl.*;

import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.spi.*; 

import java.util.Iterator;

import supercollider.*;
import oscP5.*;


// sound 
Minim minim;
AudioOutput out;
LiveInput in;
Oscil deepSea;
Noise theNoise;
Summer synth; // sum the outputs of multiple UGens
//Wavetable table;
//Midi2Hz midi;

//supercollider
Group group;
Synth synth0, synth1, synth2, synth3;

// sound materials
AudioPlayer whisper, step, tic; // Scene 1
AudioPlayer chord1_0, chord1_1, chord1_2, chord1_3, chord2_0, chord2_1, chord2_2, // Scene 2
chord2_3, chord3_0, chord3_1, chord3_2, chord3_3, chord4_0, chord4_1, chord4_2, chord4_3;

AudioPlayer sea, seaWave, gull_1, gull_2, wind, wave; // Scene 3
AudioPlayer bell, road, park_1, park_2, choir, walk; // Scene 4

// timer
int timerSet = 0;
float targetTime1 = 0, targetTime2 = 0, targetTime3 = 0, targetTime4 = 0;
float time_to_play_sound = 1e20;
int phase = 130;
int timestamp_1 = 5; 
int timestamp_2 = 5+5; 
int timestamp_3 = 5+5+10; 
int timestamp_4 = 5+10+10; 
int timestamp_5 = 5+10+10+5; 
int timestamp_6 = 5+10+10+25; 
int timestamp_7 = 5+10+10+30; 
int timestamp_8 = 5+10+10+30+10; 
int timestamp_9 = 5+10+10+30+25; 
int timestamp_10 = 5+10+10+30+20; 
int timestamp_11 = 5+10+10+30+25+40; 
int timestamp_12 = 5+10+10+30+25+40+10; 

// interaction switches
boolean playStep=false, playChord=false, playNoise=false, playSea=false, playMemo=false, playSynth=false;
boolean user1In, user1Hug, user2In, user2Hug, user3In, user3Hug, user4In, user4Hug;
boolean saveFrame=false, rotateX=false;

// visual 
int particles = 100;
int layers = 30;
VerletBall[][] balls = new VerletBall[particles][layers];
int bonds = particles + particles/2;
VerletStick[][] sticks = new VerletStick[bonds][layers];
float tension;
float posY=0;
float trans=100, transP=100;

// kinect
PVector rightHand, leftHand, rightElbow, leftElbow, head;
boolean skeleton = true;
int[] user_list;
boolean[] handsUp = new boolean[10];
int userId;
//User location
PVector[] com = new PVector[10];
float[] peak = new float[10];
float[] rightHandy = new float[10];
//HashMap<Integer, UserTrackingInfo> user_tracking_info = new HashMap<Integer, UserTrackingInfo>();

// -------------------------------------------------------------------------------

HashMap<AudioPlayer, Float> next_play_times = new HashMap<AudioPlayer, Float>();

void setup() {
  //size(1960, 1080, OPENGL);
  size(1080, 720, OPENGL);
  cam = new PeasyCam(this, 1000); 
  float theta = PI/4.0; 
  float jump; 
  tension = 0.1;


  //////////////  kinect setup ///////////////
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.setMirror(true);
  kinect.enableUser();
  rightHand = new PVector(0, 0, 0);
  leftHand = new PVector(0, 0, 0);
  com[0] = new PVector(); // user location
  com[1] = new PVector(); // user location
  //coms = new PVector();
  handsUp[0]=false; 

  ///////////// visual setup /////////////////
  //colorMode(HSB, 255); 
  colorMode(RGB, 255); 

  // balls + add interaction 
  for (int j = 0; j<layers; j++) {
    jump = -400.0; 
    for (int i=0; i<particles; i++) {
      PVector push = new PVector(0, 0, 0); 
      PVector p = new PVector(jump, 200, 25*j); 
      balls[i][j]= new VerletBall(p, push, 10);
      theta += TWO_PI/particles;
      jump += 8;
    }
  }

  // sticks external
  for (int j = 0; j<layers; j++) {
    for (int i=0; i<particles; i++) {
      if (i<particles-1) { 
        sticks[i][j] = new VerletStick(balls[i][j], balls[i+1][j], tension);
      }
    }
  }

  /////////////  sound setup /////////////////

  minim = new Minim(this);
  out = minim.getLineOut(); // use the getLineOut method of the Minim object to get an AudioOutput object
  synth = new Summer();

  deepSea = new Oscil( 700, 0.05f, Waves.SINE );
  //deepSea.patch( synth ); // patch to the output
  synth.patch( out ); 

  //// load sound materials //// 
  whisper = minim.loadFile("openning_3.mp3");
  step = minim.loadFile("step.mp3");
  tic = minim.loadFile("tic.mp3");
  chord1_0 = minim.loadFile("chord1_0.mp3");
  chord1_1 = minim.loadFile("chord1_1.mp3");
  chord1_2 = minim.loadFile("chord1_2.mp3");
  chord1_3 = minim.loadFile("chord1_3.mp3");
  chord2_0 = minim.loadFile("chord2_0.mp3");
  chord2_1 = minim.loadFile("chord2_1.mp3");
  chord2_2 = minim.loadFile("chord2_2.mp3");
  chord2_3 = minim.loadFile("chord2_3.mp3");
  chord3_0 = minim.loadFile("chord3_0.mp3");
  chord3_1 = minim.loadFile("chord3_1.mp3");
  chord3_2 = minim.loadFile("chord3_2.mp3");
  chord3_3 = minim.loadFile("chord3_3.mp3");
  chord4_0 = minim.loadFile("chord4_0.mp3");
  chord4_1 = minim.loadFile("chord4_1.mp3");
  chord4_2 = minim.loadFile("chord4_2.mp3");
  chord4_3 = minim.loadFile("chord4_3.mp3");

  sea = minim.loadFile("sea.wav");
  seaWave = minim.loadFile("seaWave.mp3");
  wave = minim.loadFile("wave2.mp3");
  gull_1 = minim.loadFile("gull.mp3");
  gull_2 = minim.loadFile("gull_2.mp3");
  wind = minim.loadFile("wind_2.wav");
  bell = minim.loadFile("bell.mp3");
  road = minim.loadFile("road.mp3");
  park_1 = minim.loadFile("park_1.mp3");
  park_2 = minim.loadFile("park_2.wav");
  choir = minim.loadFile("choir.wav");
  walk = minim.loadFile("walk.mp3");

  /* try connect to supercollider
   group = new Group();
   group.create();
   
   // uses default sc server at 127.0.0.1:57110    
   // does NOT create synth!
   synth0 = new Synth("sine");
   synth1 = new Synth("DecaySin");
   synth2 = new Synth("DecayPink");
   synth3 = new Synth("Reverb");
   
   // set initial arguments
   synth0.set("amp", 0.5);
   synth0.set("freq", 80);
   synth1.set("amp", 0.5);
   synth1.set("freq", 80);
   //  synth2.set("amp", 0.5);
   //  synth2.set("freq", 80);
   synth3.set("amp", 0.5);
   synth3.set("freq", 80);
   */
}


// -------------------------------------------------------------------------------

void draw() {
  // set the stage
  background(0);

  ///////////// kinect detect /////////////////
  kinect.update();
  user_list = kinect.getUsers();

  //user__________________________________________

  if (user_list.length > 0) {

    for (int numbOfUsers = 0; numbOfUsers<user_list.length; numbOfUsers++) {
      userId = user_list[numbOfUsers];

      try {
        kinect.getCoM(user_list[numbOfUsers], com[numbOfUsers]);
      } 
      catch(Exception e) {
      }

      // if we’re successfully calibrated
      if ( kinect.isTrackingSkeleton(userId)) {
        if (skeleton) { // draw user
          pushMatrix(); 
          translate(0, 0, -2500);
          rotateX(radians(180));
          //drawSkeleton(userId);
          popMatrix();
        }

        rightHand = new PVector();
        kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);
        leftHand = new PVector();
        kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);
        rightElbow = new PVector();
        kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, rightElbow);
        leftElbow = new PVector();
        kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, leftElbow);
        head = new PVector();
        kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head);

        if ((rightHand.y > rightElbow.y && rightHand.x > rightElbow.x) && (leftHand.y > leftElbow.y 
          && leftHand.x > leftElbow.x)) handsUp[numbOfUsers]=true;
        else handsUp[numbOfUsers]=false;

        peak[numbOfUsers]=max(rightHand.y, leftHand.y, head.y);
        rightHandy[numbOfUsers]=rightHand.y;
      }
    }
  }


  ///////////// timer set /////////////////

  timerSet = -11+int(millis()/1000)%(130); // full cycle: 130 s
  textSize(40);
  //text (timerSet, 0, -500, 0);


  ///////////// visual display /////////////////

  for (int j = 0; j<layers; j++) {
    for (int i=0; i<particles-1; i++) {
      sticks[i][j].constrainLen();
    }
    for (int i=0; i<particles/2; i++) {    
      sticks[i][j].render(0.4+map(i, 0, particles/2-1, 0, 2));
    }
    for (int i=particles/2; i<particles-1; i++) {
      sticks[i][j].render(0.4+map(i, particles/2, particles-2, 2, 0));
    }
  }

  for (int j = 0; j<layers; j++) {
    strokeWeight(4);
    stroke(255, trans);
    balls[particles/2][j].render();
    stroke(#68EBE3, trans);
    strokeWeight(3.5);
    balls[particles/2-1][j].render();
    stroke(#C3B4D3, trans);
    balls[particles/2+1][j].render();

    for (int i=0; i<particles; i++) {
      if (i!=0&&i!=particles-1)
        balls[i][j].verlet();
      noFill();

      // draw balls
      if (i!=particles/2) { //&&i!=(particles/2+1)&&i!=(particles/2-1)
        //transP=lerp(transP, trans, 0.1);
        //stroke(#1F6B6F, transP);
        //pushMatrix();
        //translate(0, -100, 0 );
        //balls[i][j].render();
        //popMatrix();
      }
    }
  }

  for (int l = 0; l<layers; l++) {
    for (int i = 0; i < layers; i++)
    {

      posY =whisper.mix.get(i);
      posY +=step.mix.get(i);
      posY +=tic.mix.get(i);
      posY += chord1_0.mix.get(i);
      posY += chord1_1.mix.get(i);
      posY +=chord1_2.mix.get(i);
      posY +=chord1_3.mix.get(i);
      posY +=chord2_0.mix.get(i);
      posY +=chord2_1.mix.get(i);
      posY +=chord2_2.mix.get(i);
      posY +=chord2_3.mix.get(i);
      posY +=chord3_0.mix.get(i);
      posY +=chord3_1.mix.get(i);
      posY +=chord3_2.mix.get(i);
      posY +=chord3_3.mix.get(i);
      posY +=chord4_0.mix.get(i);
      posY +=chord4_1.mix.get(i);
      posY +=chord4_2.mix.get(i);
      posY +=chord4_3.mix.get(i);
      posY +=sea.mix.get(i);
      posY +=seaWave.mix.get(i);
      posY +=gull_1.mix.get(i);
      posY +=gull_2.mix.get(i);
      posY +=bell.mix.get(i);
      posY +=road.mix.get(i);
      posY +=park_1.mix.get(i);
      posY +=park_2.mix.get(i);
      posY +=choir.mix.get(i);
      posY +=walk.mix.get(i);
      //if(handsUp[0])
      posY += lerp(0, -rightHandy[0]/700, -2);
      trans = abs((int)map(posY*10000, -80, 80, 0, 255));
      balls[particles/2][l].update(balls[particles/2][l].pos.x, posY*100+50);
    }
  }

  ///////////// interaction /////////////////
  if (saveFrame)  
    saveFrame("frame/######.png");
  if (rotateX) cam.rotateY((PI/180)*.05);
  translate(0, -com[0].z/100, 0);

  theNoise = new Noise( 0.01f, Noise.Tint.RED );
  scene();

  if (playChord) pianoChord();
  if (playMemo) memo();
  //   if (playNoise) theNoise.patch(synth);
  //  else theNoise.unpatch(synth);
}


// -------------------------------------------------------------------------------

void keyPressed() {
  if (key == 'p') {
    //println("userList: "+userList);
    //println("CamX "+cam.getPosition()[0]+" CamY "+cam.getPosition()[1]+" CamZ "+cam.getPosition()[2]);
  }
  if (key == 's')  
    saveFrame=!saveFrame;
  if (key == ' ' )  
    rotateX=!rotateX;
}
kinect tab

Code: Select all

void drawSkeleton(int userId)
{
  strokeWeight(3);

  // to get the 3d joint data
  drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
  drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);

  drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);

  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);

  drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);

  drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT); 

  stroke(255, 200, 200);

  strokeWeight(1);
}

void drawLimb(int userId, int jointType1, int jointType2)
{
  PVector jointPos1 = new PVector();
  PVector jointPos2 = new PVector();
  float  confidence;

  // draw the joint position
  confidence = kinect.getJointPositionSkeleton(userId, jointType1, jointPos1);
  confidence = kinect.getJointPositionSkeleton(userId, jointType2, jointPos2);

  stroke(255, 0, 0, confidence * 200 + 55);
  line(jointPos1.x, jointPos1.y, jointPos1.z, 
  jointPos2.x, jointPos2.y, jointPos2.z);

  drawJointOrientation(userId, jointType1, jointPos1, 50);
}

void drawJointOrientation(int userId, int jointType, PVector pos, float length)
{
  // draw the joint orientation  
  PMatrix3D  orientation = new PMatrix3D();
  float confidence = kinect.getJointOrientationSkeleton(userId, jointType, orientation);
  if (confidence < 0.001f) 
    // nothing to draw, orientation data is useless
    return;

  pushMatrix();
  translate(pos.x, pos.y, pos.z);

  // set the local coordsys
  applyMatrix(orientation);

  // coordsys lines are 100mm long
  // x - r
  stroke(255, 0, 0, confidence * 200 + 55);
  line(0, 0, 0, 
  length, 0, 0);
  // y - g
  stroke(0, 255, 0, confidence * 200 + 55);
  line(0, 0, 0, 
  0, length, 0);
  // z - b    
  stroke(0, 0, 255, confidence * 200 + 55);
  line(0, 0, 0, 
  0, 0, length);
  popMatrix();
}

// SimpleOpenNI user events

void onNewUser(SimpleOpenNI curContext, int userId)
{

    println("onNewUser - userId: " + userId);
    println("\tstart tracking skeleton");
    kinect.startTrackingSkeleton(userId);
//    user_tracking_info.put(userId, new UserTrackingInfo());
  
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
  
    println("onLostUser - userId: " + userId);
//    UserTrackingInfo info = user_tracking_info.get(userId);
//    historyInfo.addUserData(info.positions, info.colour, time);
//    user_tracking_info.remove(userId);
  
}

void onVisibleUser(SimpleOpenNI curContext, int userId)
{
  println("onVisibleUser - userId: " + userId);
}

//Update user position 
//void updateUserPoses() {
//  if (kinect != null) {
//    int[] user_list = kinect.getUsers();
//    for (int i = 0; i < user_list.length; i++) {
//      //println("test for distance use " + i +":  "+ user_list[i]);
//      if (kinect.getCoM(user_list[i], com)) {
//        println("user com: " + com);
//        int user_id = user_list[i];
//        //println("Current :" + com);
//        if (com != null && user_tracking_info.get(user_id) != null) {
//          user_tracking_info.get(user_id).update(new PVector(com.x, com.y, com.z));
//        }
//      }
//    }
//  }
//}
scene tab

Code: Select all

void stopAll() {
  whisper.pause();
  step.pause();
  tic.pause();
  chord1_0.pause();
  chord1_1.pause();
  chord1_2.pause();
  chord1_3.pause();
  chord2_0.pause();
  chord2_1.pause();
  chord2_2.pause();
  chord2_3.pause();
  chord3_0.pause();
  chord3_1.pause();
  chord3_2.pause();
  chord3_3.pause();
  chord4_0.pause();
  chord4_1.pause();
  chord4_2.pause();
  chord4_3.pause();

  sea.pause();
  seaWave.pause();
  gull_1.pause();
  gull_2.pause();
  wind.pause();
  wave.pause();
  bell.pause();
  road.pause();
  park_1.pause();
  park_2.pause();
  choir.pause();
  walk.pause();
  whisper.rewind();
  step.rewind();
  tic.rewind();
  chord1_0.rewind();
  chord1_1.rewind();
  chord1_2.rewind();
  chord1_3.rewind();
  chord2_0.rewind();
  chord2_1.rewind();
  chord2_2.rewind();
  chord2_3.rewind();
  chord3_0.rewind();
  chord3_1.rewind();
  chord3_2.rewind();
  chord3_3.rewind();
  chord4_0.rewind();
  chord4_1.rewind();
  chord4_2.rewind();
  chord4_3.rewind();

  sea.rewind();
  seaWave.rewind();
  wave.rewind();
  gull_1.rewind();
  gull_2.rewind();
  wind.rewind();
  bell.rewind();
  road.rewind();
  park_1.rewind();
  park_2.rewind();
  choir.rewind();
  walk.rewind();
}


void scene() {

  // &&&&&&&&&&&&&&&&&&   start over   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase == 1) { 
    stopAll();
    playMemo = false;
  }

  // &&&&&&&&&&&&&&&&&&   whisper   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase == timestamp_1) { 
    whisper.play(); // 11s 
    whisper.setPan(0);
  }

  // &&&&&&&&&&&&&&&&&&   step   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase == timestamp_2) {
    tic.play();
    tic.loop();
  }
  if (timerSet % phase == timestamp_3) {
    playNoise = true; // noisy gradually>>>
  }
  if (timerSet % phase <= timestamp_4 && timerSet % phase > timestamp_2) { 
    // multi users, control times and panning >>>> some problems !!!!
    //    for (int numbOfUsers = 0; numbOfUsers<user_list.length; numbOfUsers++) {
    //      step.play(); // 17s
    //      step.setPan(map(mouseX, 0, 1080, -1, 1));
    //      //      com[i]
    //    } 

    play2(step, millis());
    float dis = dist(com[0].x, com[0].y, com[0].z, 0, 0, 0);
    step.setGain(5-(dis/1000)*(dis/1000));
    step.setPan(map(com[0].x, 0, 2000, -1, 1));

    playStep = true;
    whisper.shiftGain(0, -20, 2*1000);
  }

  // &&&&&&&&&&&&&&&&&&   piano   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase == timestamp_4) {
    tic.pause();
  }
  if (timerSet % phase == timestamp_5) {
    playNoise = false;
  }
  if (timerSet % phase <= timestamp_7 && timerSet % phase > timestamp_4) {
    step.pause();
    step.shiftGain(0, -50, 2*1000);
    playChord = true;
  
  }
  if (timerSet % phase == timestamp_6) {
    playNoise = true;
    
  }

  // &&&&&&&&&&&&&&&&&&   sea   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase == timestamp_8) {
    playNoise = false;
    
  }
  if (timerSet % phase <=timestamp_9  && timerSet % phase >timestamp_7 ) { 
    playChord = false;
    playSynth=true;
    //theNoise.unpatch(out);
    chord1_0.pause();
    chord1_1.pause();
    chord1_2.pause();
    chord1_3.pause();
    chord2_0.pause();
    chord2_1.pause();
    chord2_2.pause();
    chord2_3.pause();
    chord3_0.pause();
    chord3_1.pause();
    chord3_2.pause();
    chord3_3.pause(); 
    chord4_0.pause();
    chord4_1.pause();
    chord4_2.pause();
    chord4_3.pause();
 
    playSea = true;

    //sea.setGain(5);
    play2(sea, millis());

    /*
    // try create synth in supercollider
     if (playSynth) {
     synth0.addToTail(group);
     synth1.addToTail(group);
     synth2.addToTail(group);
     synth3.addToTail(group);
     playSynth=false;
     }
     
     synth0.set("freq", 40 + (peak[0] * 2)); 
     synth0.set("amp", min(0, -0.5*peak[0]+150));
     synth1.set("freq", 40 + (peak[0] * 2)); 
     synth1.set("amp", min(0, -0.5*peak[0]+150));
     synth2.set("freq", 40 + (peak[0] * 4)); 
     synth2.set("amp", min(0, -0.5*peak[0]+150));
     synth3.set("freq", 40 + (peak[0] * 3)); 
     synth3.set("amp", min(0, -0.5*peak[0]+150));
     println("freq " + 40 + (mouseX * 4)+" "+"amp", (float)(0.05+mouseX/700) );
     */


    wind.setGain(min(0, -0.5*peak[0]+150));
    gull_1.setGain(min(0, 0.5*peak[0]-250));
    wave.setGain(min(0, -0.5*peak[0]+150));
    play2(wind, millis());
    play2(wave, millis());
    play2(gull_1, millis());
    if (dist(com[0].x, com[0].z, com[1].x, com[1].z) < 1000)
      play2(seaWave, millis()); // >>> to-do: shorten the wave file
  }

  // &&&&&&&&&&&&&&&&&&   memo   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase == timestamp_10) {
    tic.rewind();
    tic.play();
    tic.loop();
  }
  if (timerSet % phase <=timestamp_11  && timerSet % phase >timestamp_9 ) { 
    playSea = false;
    sea.pause();
    seaWave.pause();
    gull_1.pause();
    gull_2.pause();
    wind.pause();
    playMemo = true;
  }

  // &&&&&&&&&&&&&&&&&&   ending   &&&&&&&&&&&&&&&&&&&&
  if (timerSet % phase <=timestamp_12  && timerSet % phase >timestamp_11 ) { 
    playMemo = false;
  }
}


// &&&&&&&&&&&&&&&&&&   piano chord set  &&&&&&&&&&&&&&&&&&&&

// Play one chord group: the closer the user (smaller com[0].z), the more
// notes of the chord are layered in; raising both hands adds the extra
// voice (c0). Each note's gain fades with distance from the sensor.
void playChordGroup(AudioPlayer c0, AudioPlayer c1, AudioPlayer c2, AudioPlayer c3, 
  float gain, float duration, float maxDis) {
  if (com[0].z<2*maxDis/4) {
    play2(c1, millis()); 
    c1.setGain(gain);
    play2(c2, millis()+duration); 
    c2.setGain(gain);
    play2(c3, millis()+duration*2); 
    c3.setGain(gain);
  } else if (com[0].z<3*maxDis/4) {
    play2(c1, millis()); 
    c1.setGain(gain);
    play2(c2, millis()+duration); 
    c2.setGain(gain);
  } else {
    play2(c1, millis()); 
    c1.setGain(gain);
  } 
  if (handsUp[0]) {
    play2(c0, millis());
    c0.setGain(gain);
  }
}

void pianoChord() {

  if (user_list.length==1) { // single user
    float dis = dist(com[0].x, com[0].y, com[0].z, 0, 0, 0);
    float gain=-(dis/1000)*(dis/1000); // quieter when farther from the sensor
    float duration = 250;
    float maxDis=3000;

    // the longer the piano scene runs, the more chord groups are layered
    if ((timerSet-timestamp_4)<15/2) {
      playChordGroup(chord1_0, chord1_1, chord1_2, chord1_3, gain, duration, maxDis);
    } else if ((timerSet-timestamp_4)<30/2) {
      playChordGroup(chord2_0, chord2_1, chord2_2, chord2_3, gain, duration, maxDis);
      playChordGroup(chord1_0, chord1_1, chord1_2, chord1_3, gain, duration, maxDis);
    } else if ((timerSet-timestamp_4)<45/2) {
      playChordGroup(chord3_0, chord3_1, chord3_2, chord3_3, gain, duration, maxDis);
      playChordGroup(chord2_0, chord2_1, chord2_2, chord2_3, gain, duration, maxDis);
      playChordGroup(chord1_0, chord1_1, chord1_2, chord1_3, gain, duration, maxDis);
    } else {
      playChordGroup(chord4_0, chord4_1, chord4_2, chord4_3, gain, duration, maxDis);
      playChordGroup(chord3_0, chord3_1, chord3_2, chord3_3, gain, duration, maxDis);
      playChordGroup(chord2_0, chord2_1, chord2_2, chord2_3, gain, duration, maxDis);
      playChordGroup(chord1_0, chord1_1, chord1_2, chord1_3, gain, duration, maxDis);
    }
  }

  /*
  if (user_list.length==2) { // two users
   
   play2(chord2_1, millis()+500);
   
   if (handsUp[0]) {
   println("hand0");
   play2(chord1_1, millis());
   play2(chord1_2, millis());
   play2(chord1_2, millis());
   }
   if (handsUp[1]) {
   println("hand1");
   play2(chord2_1, millis());
   play2(chord2_2, millis());
   play2(chord2_2, millis());
   }
   }
   */
}

float getGainFromDistance(float distance) {
  return min(0, (100-distance/1000*200));
}

boolean bellTrigger = false;

void memo() {

  if (user_list.length==1) {
    float radius = 500;
    float dis_walk = dist(com[0].x, com[0].z, 0, 1500);
    float dis_park_1 = dist(com[0].x, com[0].z, 500, 1000);
    float dis_park_2 = dist(com[0].x, com[0].z, 500, 2000);
    float dis_choir = dist(com[0].x, com[0].z, -500, 1000);
    float dis_road = dist(com[0].x, com[0].z, -500, 2000);

    if (dis_walk<500&&(dis_park_1<500||dis_park_2<500||dis_choir<500||dis_road<500)&&bellTrigger==false) {
      play2(bell, millis());
      bellTrigger=true;
    } else if (!(dis_walk<500&&(dis_park_1<500||dis_park_2<500||dis_choir<500||dis_road<500))) {
      bellTrigger=false;
    }

    walk.setGain(getGainFromDistance(dis_walk));
    road.setGain(getGainFromDistance(dis_road));
    park_1.setGain(getGainFromDistance(dis_park_1));
    park_2.setGain(getGainFromDistance(dis_park_2));
    choir.setGain(getGainFromDistance(dis_choir));

    play2(road, millis());
    play2(park_1, millis());
    play2(park_2, millis());
    play2(choir, millis());
    play2(walk, millis());
  }
}
stuff tab

Code: Select all

// Deferred-play helper: schedules a sound for time2BPlayed (in ms) and
// starts it once that time has passed; while the sound is playing, its
// schedule is cleared (1e20 acts as "not scheduled").
void play2(AudioPlayer tempPlayer, float time2BPlayed) {
  if (!next_play_times.containsKey(tempPlayer)) {
    next_play_times.put(tempPlayer, 1e20);
  }

  boolean playingState = tempPlayer.isPlaying();

  if (playingState) {
    next_play_times.put(tempPlayer, 1e20); // clear the schedule while playing
  } else {
    if (millis() >= next_play_times.get(tempPlayer)) {
      tempPlayer.rewind();
      tempPlayer.play();
    } else if (next_play_times.get(tempPlayer) == 1e20) {
      next_play_times.put(tempPlayer, time2BPlayed); // schedule the next play
    }
  }
}

// -------------------------------------------------------------------------------
///////////////    Here is my garage     /////////////////
////////////  for things that might be useful  /////////////
// -------------------------------------------------------------------------------
/*
 // set a volume variable
 float vol = 0.45;
 
 // set the tempo for here
 out.setTempo( 100.0f );
 // set a percentage for the actual duration
 out.setDurationFactor( 0.95f );
 // use pauseNotes to add a bunch of notes at once without time moving forward 
 out.pauseNotes();
 
 // specify the waveform for this group of notes
 Waveform disWave = Waves.sawh( 4 );
 // add these notes with disWave
 out.playNote( 0.0, 1.0, new ToneInstrument( "E4 ", vol, disWave, out ) );
 out.playNote( 1.0, 1.0, new ToneInstrument( "E4 ", vol, disWave, out ) );
 out.playNote( 2.0, 1.0, new ToneInstrument( "E4 ", vol, disWave, out ) );
 out.playNote( 3.0, 0.75, new ToneInstrument( "C4 ", vol, disWave, out ) );
 out.playNote( 3.75, 0.25, new ToneInstrument( "G4 ", vol, disWave, out ) );
 out.playNote( 4.0, 1.0, new ToneInstrument( "E4 ", vol, disWave, out ) );
 out.playNote( 5.0, 0.75, new ToneInstrument( "C4 ", vol, disWave, out ) );
 out.playNote( 5.75, 0.25, new ToneInstrument( "G4 ", vol, disWave, out ) );
 out.playNote( 6.0, 2.0, new ToneInstrument( "E4 ", vol, disWave, out ) );
 
 // specify the waveform for this group of notes
 disWave = Waves.triangleh( 9 );
 // add these notes with disWave
 out.playNote( 8.0, 1.0, new ToneInstrument( "B4 ", vol, disWave, out ) );
 out.playNote( 9.0, 1.0, new ToneInstrument( "B4 ", vol, disWave, out ) );
 out.playNote(10.0, 1.0, new ToneInstrument( "B4 ", vol, disWave, out ) );
 out.playNote(11.0, 0.75, new ToneInstrument( "C5 ", vol, disWave, out ) );
 out.playNote(11.75, 0.25, new ToneInstrument( "G4 ", vol, disWave, out ) );
 out.playNote(12.0, 1.0, new ToneInstrument( "Eb4 ", vol, disWave, out ) );
 out.playNote(13.0, 0.75, new ToneInstrument( "C4 ", vol, disWave, out ) );
 out.playNote(13.75, 0.25, new ToneInstrument( "G4 ", vol, disWave, out ) );
 out.playNote(14.0, 2.0, new ToneInstrument( "E4 ", vol, disWave, out ) );
 
 // specify the waveform for this group of notes
 disWave = Waves.randomNOddHarms( 3 );
 //add these notes with disWave
 out.playNote( 0.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 out.playNote( 2.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 out.playNote( 4.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 out.playNote( 6.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 
 // specify the waveform for this group of notes
 disWave = Waves.TRIANGLE;
 // add these notes with disWave
 out.playNote( 8.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 out.playNote(10.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 out.playNote(12.0, 1.9, new ToneInstrument( "C3 ", vol, disWave, out ) );
 out.playNote(14.0, 1.9, new ToneInstrument( "E3 ", vol, disWave, out ) );
 
 ////////////
 table = Waves.randomNOddHarms(9); 
 table = randomNoise();
 table.smooth( 64 );
 table.addNoise( 0.1f );
 wave  = new Oscil( 440, 0.05f, table );
 wave.patch( out );
 ////////////
 
 // use resumeNotes at the end of the section which needs guaranteed timing
 out.resumeNotes();
 
 */

///////////////////
/*
  Summer sum = new Summer();
 Oscil wave = new Oscil( Frequency.ofPitch("A4"), 0.3f, Waves.SINE );
 wave.patch( sum );
 
 wave = new Oscil( Frequency.ofPitch("C#5"), 0.3f, Waves.SINE );
 wave.patch( sum );
 
 wave = new Oscil( Frequency.ofPitch("E5"), 0.3f, Waves.SINE );
 wave.patch( sum );
 
 // and the Summer to the output and you should hear a major chord
 sum.patch( out );
 */
///////////////////

/*
// make our midi converter
 midi = new Midi2Hz( 50 );
 
 midi.patch( wave.frequency );
 wave.patch( out );
 */

//////////////////////////////////////////////////

/*
void keyPressed()
 {
 if ( key == 'a' ) midi.setMidiNoteIn( 50 );
 if ( key == 's' ) midi.setMidiNoteIn( 60 );
 if ( key == 'd' ) midi.setMidiNoteIn( 70 );
 if ( key == 'f' ) midi.setMidiNoteIn( 80 );
 if ( key == 'g' ) midi.setMidiNoteIn( 90 );
 if ( key == 'h' ) midi.setMidiNoteIn( 100 );
 if ( key == 'j' ) midi.setMidiNoteIn( 110 );
 if ( key == 'k' ) midi.setMidiNoteIn( 120 );
 if ( key == 'l' ) midi.setMidiNoteIn( 130 );
 if ( key == ';' ) midi.setMidiNoteIn( 140 );
 if ( key == '\'') midi.setMidiNoteIn( 150 );
 }
 */


///////////////////////////////////////////////////////

//out.playNote( 2.0, 2.9, new SineInstrument( Frequency.ofPitch( "C3" ).asHz() ) );
//out.playNote( 3.0, 1.9, new SineInstrument( Frequency.ofPitch( "E3" ).asHz() ) );
//out.playNote( 4.0, 0.9, new SineInstrument( Frequency.ofPitch( "G3" ).asHz() ) );

//////////////////////////////////////////////////

/*
  //// microphone input transform //// 
 
 AudioStream inputStream = minim.getInputStream( Minim.STEREO, 
 out.bufferSize(), 
 out.sampleRate(), 
 out.getFormat().getSampleSizeInBits()
 );
 in = new LiveInput( inputStream );
 
 Vocoder vocode = new Vocoder( 1024, 16 );
 in.patch( vocode.modulator );
 //synth.patch( vocode ).patch( out ); 
 */
verlet ball tab

Code: Select all

// Reference: the VerletBall&VerletStick class is based on Ira Greenberg's VerletStick example
// (https://github.com/irajgreenberg/workshopExamples/blob/master/apression/VerletStick.pde)
// -------------------------------------------------------------------------------

class VerletBall {

  PVector pos, posOld;
  PVector push;
  float radius;

  VerletBall(PVector pos, PVector push, float radius) { 
    this.pos = pos;
    this.push = push;
    this.radius = radius;
    this.posOld  = new PVector(pos.x, pos.y, pos.z);

    // start motion
    pos.add(push); 
  }

  void verlet() { 
    PVector posTemp = new PVector(pos.x, pos.y, pos.z);
    pos.x += (pos.x-posOld.x);
    pos.y += (pos.y-posOld.y);
    pos.z += (pos.z-posOld.z);
    posOld.set(posTemp);
  }

  void render() {
    //ellipse(pos.x, pos.y, radius*2, radius*2);
    pushMatrix();
    translate(pos.x, pos.y, pos.z);
    point(0, 0, 0);
    strokeWeight(3);
    stroke(127, 110);
    popMatrix();
  }

  void boundsCollision() {
    if (pos.x>width/2-radius) {
      pos.x = width/2-radius;
      posOld.x = pos.x;
      pos.x -= push.x;
    } else if (pos.x< -width/2+radius) {
      pos.x = -width/2+radius;
      posOld.x = pos.x;
      pos.x += push.x;
    }

    if (pos.y>height/2-radius) {
      pos.y = height/2-radius;
      posOld.y = pos.y;
      pos.y -= push.y;
    } else if (pos.y<-height+radius) {
      pos.y = -height+radius;
      posOld.y = pos.y;
      pos.y += push.y;
    }
  }

  void update(float x, float y) {
    pos.x = x;
    pos.y = y;
    //pos.x = mouseX-width/2;
    //pos.y = mouseY-height/2;
  }
}
verlet stick tab

Code: Select all

// Reference: the VerletBall&VerletStick class is based on Ira Greenberg's VerletStick example
// (https://github.com/irajgreenberg/workshopExamples/blob/master/apression/VerletStick.pde)
// -------------------------------------------------------------------------------

class VerletStick {
  VerletBall b1, b2;
  float stiffness;
  boolean isVisible=true;
  PVector vecOrig;
  float len;

  VerletStick(VerletBall b1, VerletBall b2, float stiffness) {
    this.b1 = b1;
    this.b2 = b2;
    this.stiffness = stiffness;
    vecOrig  = new PVector(b2.pos.x-b1.pos.x, b2.pos.y-b1.pos.y, b2.pos.z-b1.pos.z);
    len = PVector.dist(b1.pos, b2.pos);
  }

  void render(float stroke) {
    if (isVisible) { 
      line(b1.pos.x, b1.pos.y, b1.pos.z, b2.pos.x, b2.pos.y, b2.pos.z);
      strokeWeight(stroke);
      //stroke((224-0.01*(b2.pos.y*b2.pos.y)), (82-0.01*(b2.pos.y*b2.pos.y)), (141+b2.pos.y), 255); // pink-blue
      stroke((220-b1.pos.y*b2.pos.y/100)/2-b2.pos.y/9, (220-b1.pos.y*b2.pos.y/100-b2.pos.y)-b2.pos.y/7, 
        (250-b1.pos.y*b2.pos.y/100-b2.pos.y)-b2.pos.y/4, transP*transP/100);
    }
  }

  void constrainLen() {
    PVector delta = new PVector(b2.pos.x-b1.pos.x, b2.pos.y-b1.pos.y, b2.pos.z-b1.pos.z);
    float deltaLength = delta.mag();
    float difference = ((deltaLength - len) / deltaLength); 
    b1.pos.x += delta.x * (0.5f * stiffness * difference); 
    b1.pos.y += delta.y * (0.5f * stiffness * difference);
    b1.pos.z += delta.z * (0.5f * stiffness * difference);

    b2.pos.x -= delta.x * (0.5f * stiffness * difference);
    b2.pos.y -= delta.y * (0.5f * stiffness * difference);
    b2.pos.z -= delta.z * (0.5f * stiffness * difference);
  }
}
Because the sound files make the package larger than 50 MB, I uploaded the whole package here.
Remember to connect a Kinect before trying it out.
https://drive.google.com/open?id=0B_3lC ... ExVMkhUdVE

Here is the link for the class demo video.
https://youtu.be/vRNfPeQHIPI
Last edited by jing_yan on Tue Jun 07, 2016 9:35 pm, edited 5 times in total.

ambikayadav
Posts: 4
Joined: Fri Apr 01, 2016 2:32 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by ambikayadav » Tue Jun 07, 2016 6:38 am

The final project builds on and combines the concepts discussed in the previous documentation. I am attempting to combine the concepts of illusion and mirroring and present them as one piece.

Concept 1: Illusion and the varied perception of size and distance

Inspiration: The main inspiration was to understand how we humans perceive the world. A study revealed how humans can become handicapped if their visual world is reversed, that is, if left is made right and right is made left. A similar state of confusion is observed when the blind are given sight at a late stage of life.
I wanted to develop a piece in which this aspect of the conditioned mind is touched upon.
Inversion.jpg
Inversion Concept Sketch
What is happening here: This piece would be best experienced as a virtual reality setup, though I have tried to put the point forward in a very simple fashion here.
The user stands in a blue room with a white box suspended in the air. Ideally, as the user walks toward the wall of the room, the wall should appear to grow comparatively bigger. What happens here instead is that as the user approaches the wall, the wall moves farther away and becomes smaller. At the same time, the box behaves in the opposite, expected fashion, confusing the user about how to approach the various parts of the setup (a minimal sketch of this inverted mapping follows the screenshots below).
Screen Shot 2016-06-06 at 10.16.07 PM.png
Illusion -1
Screen Shot 2016-06-06 at 10.16.59 PM.png
Illusion - 2
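
A minimal sketch of the inverted size/distance mapping (illustrative values only, not the attached code; the mouse stands in for the user's distance):

Code: Select all

// Inverting the size/distance cue: the nearer the user, the SMALLER the
// wall appears, while the box keeps the normal relationship.
float userZ; // stand-in for the user's distance from the sensor, in mm

void setup() {
  size(640, 480);
  rectMode(CENTER);
  noStroke();
}

void draw() {
  background(0, 0, 120); // blue room
  userZ = map(mouseX, 0, width, 500, 4000); // mouse as a stand-in for walking
  // the wall: nearer user -> smaller wall, the reverse of reality
  float wallSize = map(userZ, 500, 4000, 60, 400);
  fill(200);
  rect(width/2, height/2, wallSize, wallSize);
  // the box behaves normally: nearer user -> bigger box
  float boxSize = map(userZ, 500, 4000, 150, 30);
  fill(255);
  rect(width/2, height/4, boxSize, boxSize);
}
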
Concept 2: Mirroring

Inspiration: The inspiration for this piece comes from the various concepts of mirroring showcased in Daniel Rozin's mirror series. It enables users to interact with themselves and become more self-aware.
DR_Ref.jpg
Mirror by Daniel Rozin
Mirror.jpg
Mirroring Concept Sketch
What is happening here: The screen is imagined as an array of circular light sources, each able to display various colors. Depending on the presence of a user in front of it, the system changes the colors of the light sources to mirror the presence and posture of the user.
Screen Shot 2016-06-06 at 10.18.45 PM.png
Mirroring - 1
Screen Shot 2016-06-06 at 10.18.34 PM.png
Mirroring - 2
FINAL: AMES ROOM MIRROR SIMULATION

Inspiration: I finally wanted to combine the concepts of illusion (varied perception) and mirrors. I have been reading about the Ames room for a while and wanted to incorporate an implementation of it here. The final piece is an attempt to combine the Ames room effect with the concept of light-source mirroring.
ames-room.jpg
Ames Room Illusion
What is happening here: To implement the Ames room, the user's data is extracted and scaled according to its location in space to simulate the Ames room effect. At the same time, the user's data is rendered as mirrored light sources rather than as a straightforward user image (a sketch of the depth-based scaling follows the screenshots below).
Screen Shot 2016-06-06 at 10.20.38 PM.png
Ames Mirroring - 1
Screen Shot 2016-06-06 at 10.20.18 PM.png
Ames Mirroring - 2
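
A minimal sketch of the depth-based scaling idea, assuming SimpleOpenNI's user map and depth map (the grid step and depth range are illustrative, not taken from the attached code):

Code: Select all

// Render the user as a grid of "light sources" whose size grows with
// distance, approximating the Ames-room inversion of the size cue.
import SimpleOpenNI.*;
SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();
  noStroke();
}

void draw() {
  background(0);
  kinect.update();
  int[] userMap = kinect.userMap(); // nonzero where a user is detected
  int[] depth = kinect.depthMap();  // depth in mm per pixel
  int step = 10;                    // one light source per 10x10 cell
  for (int y = 0; y < 480; y += step) {
    for (int x = 0; x < 640; x += step) {
      int i = x + y * 640;
      if (userMap[i] != 0) {
        // the farther the user, the LARGER the dot, inverting the usual cue
        float r = map(depth[i], 500, 4000, 2, step);
        fill(255);
        ellipse(x, y, r, r);
      }
    }
  }
}
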
Attachments
Final_Project.zip
Final_Codes
(3.92 MiB) Downloaded 220 times
Last edited by ambikayadav on Tue Jun 07, 2016 1:10 pm, edited 1 time in total.

lliu
Posts: 9
Joined: Wed Jan 06, 2016 1:41 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by lliu » Tue Jun 07, 2016 10:19 am

Intrinsic Variations
A human body filled with 3D Voronoi cells
LU LIU spring M265 2016
voronoiske1.jpg
Concept:
Cells constitute the human body. What do cells look like?
Intrinsic Variations is a visualization of the motion structure inside the human body, and also a virtual sculpture. The main focus is the fluidity of the cells and the transformation of the cells themselves. It is based on the Voronoi algorithm and realized in 3D space.

Introduction:
Because my goal was to find interesting shapes with which to reconstruct the human body, I chose the Voronoi algorithm, which divides space into cells around a set of seed points, to explore the cells. The flow speed and the number of cells can be controlled through ControlP5.
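
A minimal 2D illustration of the Voronoi idea (not the project's HE_Mesh-based 3D version): each pixel is colored by its nearest seed, which is exactly how Voronoi cells divide the space.

Code: Select all

// Brute-force 2D Voronoi diagram: color each pixel by its nearest seed.
int numSeeds = 20;
PVector[] seeds = new PVector[numSeeds];
color[] cellColors = new color[numSeeds];

void setup() {
  size(400, 400);
  for (int i = 0; i < numSeeds; i++) {
    seeds[i] = new PVector(random(width), random(height));
    cellColors[i] = color(random(255), random(255), random(255));
  }
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int nearest = 0;
      float best = Float.MAX_VALUE;
      for (int i = 0; i < numSeeds; i++) {
        float d = dist(x, y, seeds[i].x, seeds[i].y);
        if (d < best) { best = d; nearest = i; }
      }
      pixels[x + y * width] = cellColors[nearest];
    }
  }
  updatePixels();
  noLoop(); // static diagram; remove this to animate moving seeds
}
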
There are three main modes in this project: Lines, Blue, and Colorful. The user can also add a delay effect to the motion and change the background color of the screen.

The background sound is a sample very similar to "the sound of cells talking" found by the British biologist Brian Ford: the sound of real neurons communicating with one another.
http://www.bbc.co.uk/blogs/today/tomfei ... s_wha.html

For future work, I want to 3D-print these virtual structures as real sculptures.

Work-in-Progress:
IMG_2500.JPG
demo2.png
Final Version:
demoo0.png
QQ20160607-7@2x.png
QQ20160607-5@2x.png
QQ20160607-3@2x.png
Reference:
// 1."Making Things See" by Greg Borenstein
// 2. HE_Mesh* is a Java library for creating and manipulating polygonal meshes. It could be downloaded at
http://www.wblut.com/#hemesh
//3. Sounds Effect http://soundbible.com/suggest.php?q=nail&x=0&y=0
Attachments
voronoi_.zip
(4.99 MiB) Downloaded 216 times
Last edited by lliu on Fri Jun 10, 2016 10:04 am, edited 4 times in total.

davidaleman
Posts: 4
Joined: Fri Apr 01, 2016 2:33 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by davidaleman » Tue Jun 07, 2016 11:46 am

Trending Today

Trending Today is a project that uses the Kinect sensor to detect user movement and lets users interact with text. The text is drawn from two media outlets: Twitter and The New York Times.

In our world today, many people rely on their devices for news and other information of interest, and one way that information is retrieved is through social media. According to the Pew Research Center, 63% of Twitter and Facebook users depend on these sites as a source of news. These stories, however, can differ from the top stories of newspapers like The New York Times. According to Statista, there are 65 million active Twitter users in the US alone, while The New York Times reached its one-millionth subscriber in October 2015.

There are more US citizens acquiring news and information through social media sites than through a known credible news outlet. The objective of this project is to create a juxtaposition of trending topics from both Twitter and New York Times and spark dialogue about news that matters most.

I used the Twitter and NYT APIs to retrieve the daily US trending topics and the top stories, respectively.
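
For the NYT side, a hedged sketch of how such a fetch can look in Processing (the post does not include its code; fetchTopStories is a hypothetical helper, "YOUR_KEY" is a placeholder, and the endpoint shape is from the public NYT Top Stories v2 API):

Code: Select all

// Fetch NYT top-story headlines with Processing's built-in JSON loader.
// Call as: String[] titles = fetchTopStories("YOUR_KEY");
String[] fetchTopStories(String apiKey) {
  JSONObject resp = loadJSONObject(
    "https://api.nytimes.com/svc/topstories/v2/home.json?api-key=" + apiKey);
  JSONArray results = resp.getJSONArray("results");
  String[] titles = new String[results.size()];
  for (int i = 0; i < results.size(); i++) {
    titles[i] = results.getJSONObject(i).getString("title");
  }
  return titles;
}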

The Kinect detects the orientation of the user's torso. If the torso is turned to the left, trending topics from Twitter appear; if turned to the right, top stories from the NYT appear. If users would like both texts to appear simultaneously, they spread their arms open, and can then observe, read, and compare the news stories from both sites. I also added the sound of a ticking clock to allude to time passing and to remind users that there are breaking news stories every passing second, so this project is never the same and always up to date.
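
A hedged sketch of one way to read those gestures with SimpleOpenNI (the project's actual joint logic is not shown in the post; torsoState and its thresholds are illustrative): compare the depth of the two shoulders to decide which way the torso is turned, and the horizontal hand spread for the "both feeds" pose.

Code: Select all

// Returns 0 = turned left (Twitter), 1 = turned right (NYT),
// 2 = arms spread (both feeds), -1 = facing forward.
// Intended to be called from draw() with a tracked userId.
import SimpleOpenNI.*;

PVector leftShoulder = new PVector(), rightShoulder = new PVector();
PVector leftHand = new PVector(), rightHand = new PVector();

int torsoState(SimpleOpenNI kinect, int userId) {
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, leftShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, rightShoulder);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_LEFT_HAND, leftHand);
  kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_RIGHT_HAND, rightHand);

  if (abs(leftHand.x - rightHand.x) > 1000) return 2; // hands ~1 m apart
  float dz = leftShoulder.z - rightShoulder.z;        // shoulder depth gap, mm
  if (dz > 100) return 0;   // right shoulder forward -> turned left
  if (dz < -100) return 1;  // left shoulder forward -> turned right
  return -1;
}
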
0110.png
0178.png

Image of Twitter Trends (06/07/2016 | 12:51 pm):
0132.png
Image of New York Times Top Stories: (06/07/2016 | 12:51 pm):
0152.png
Image of What is Trending Today (06/07/2016 | 12:51 pm):
0176.png
Attachments
trendingToday.zip
(11.15 MiB) Downloaded 222 times
Last edited by davidaleman on Tue Jun 07, 2016 1:03 pm, edited 1 time in total.

qiu0717
Posts: 9
Joined: Wed Jan 06, 2016 1:44 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by qiu0717 » Tue Jun 07, 2016 12:21 pm

"Bubbles"
Weihao Qiu


Concept:

"Bubbles" is a Kinect project that represent users movements in form of a bubbles. When a person moves, a bubble will be enlarged as the outline of his body be the bubble wand. The start and end of making bubbles is controlled by the person's moving speed. As time goes by, the shape of bubble should become more organic. Moreover, people can interact with the bubble they make, by pushing it to float away or piercing it to break it apart.
img_2880-2.jpg
Inspiration

Demos

Version 1: 2D Bubble

To create realistic bubbles, a physically-based simulation engine is needed. I found and used a Processing library, toxiclibs, to simulate the movement of the particles and to draw an isocontour based on the particle positions.

For the interaction part, I designed three ways for the particles to respond to the human figure's movement, thereby changing the shape of the bubble; one possible form is sketched after the figure below.
屏幕快照 2016-06-10 上午9.40.33.png
Interaction_1
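
One concrete form such a response can take (illustrative values; repelFrom is a hypothetical helper, but toxiclibs' AttractionBehavior with negative strength acts as a repulsor, the same call the keyPressed() handler in the code below uses):

Code: Select all

// Push bubble particles away from a point on the figure's contour by
// adding a repulsive (negative-strength) AttractionBehavior.
import toxi.geom.*;
import toxi.physics.*;
import toxi.physics.behaviors.*;

VerletPhysics physics = new VerletPhysics();

void repelFrom(Vec3D contourPoint) {
  // radius 250, strength -10 (repulsion), jitter 0.015
  physics.addBehavior(new AttractionBehavior(contourPoint, 250, -10.f, 0.015f));
}
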
I also have parameters to set the threshold of the isocontour, which determines the size of the bubbles, and to switch the display of the particles on and off.

Demo Video:



Version 2: 3D colored Bubble
To enhance the visual interest, I added color to the bubbles. I chose to render the bubbles in 3D space, so that the color can change as the simulated light changes.
I also added three modes of interaction: breaking the bubble, slapping the bubble, and exploding the bubble.
屏幕快照 2016-06-10 下午12.31.03.png
3 new modes
Demo Videos:


Screenshots:
000279.jpg
Bubble_3
000228.jpg
Bubble_2
000141.jpg
Bubble_1


Code :
sketch_3D_Bubble.pde

Code: Select all

/**
 * <p>BoxFluid demo combining 3D physics particles with the IsoSurface class to
 * create an animated mesh with a fluid behaviour. The mesh is optionally created
 * within a boundary sphere, but other forms can be created using a custom
 * ParticleConstraint class.</p>
 * 
 * <p>Dependencies:</p>
 * <ul>
 * <li>toxiclibscore-0015 or newer package from <a href="http://toxiclibs.org">toxiclibs.org</a></li>
 * <li>verletphysics-0004 or newer package from <a href="http://toxiclibs.org">toxiclibs.org</a></li>
 * <li>volumeutils-0002 or newer package from <a href="http://toxiclibs.org">toxiclibs.org</a></li>
 * <li>controlP5 GUI library from <a href="http://sojamo.de">sojamo.de</a></li>
 * </ul>
 * 
 * <p>Key controls:</p>
 * <ul>
 * <li>w : wireframe on/off</li>
 * <li>c : close sides on/off</li>
 * <li>p : show particles only on/off</li>
 * <li>b : turn bounding sphere on/off</li>
 * <li>r : reset particles</li>
 * <li>s : save current mesh as OBJ & STL format</li>
 * <li>- / = : decrease/increase surface threshold/tightness</li>
 * </ul>
 */

/* 
 * Copyright (c) 2009 Karsten Schmidt
 * 
 * This demo & library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 * 
 * http://creativecommons.org/licenses/LGPL/2.1/
 * 
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 * 
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA
 */

import processing.opengl.*;
import java.util.*;

import toxi.physics.*;
import toxi.physics.behaviors.*;
import toxi.physics.constraints.*;
import toxi.geom.*;
import toxi.geom.mesh.*;
import toxi.math.*;
import toxi.volume.*;
import toxi.processing.*;
import toxi.physics2d.*;

import controlP5.*;
import peasy.*;
import gab.opencv.*;
import KinectProjectorToolkit.*;
import SimpleOpenNI.*;


int NUM_PARTICLES = 300;
float REST_LENGTH=375;
int DIM=600;

//int GRID= 35;
int GRID = 10;
float VS=2*DIM/GRID;
Vec3D SCALE=new Vec3D(DIM, DIM, DIM).scale(2);
float isoThreshold= 6;
float alpha = 128;

int numP;
VerletPhysics physics;
ParticleConstraint boundingSphere;
GravityBehavior gravity;
AttractionBehavior keyAttractor, centerAttractor;

VolumetricSpaceArray volume;
IsoSurface surface;

TriangleMesh mesh=new TriangleMesh("fluid");

boolean showPhysics=false;
boolean isWireFrame=false;
boolean isClosed=true;
boolean useBoundary=false;

Vec3D colAmp=new Vec3D(400, 200, 200);


PeasyCam cam;
ControlP5 ui;
OpenCV opencv;
KinectProjectorToolkit kpc;
ToxiclibsSupport gfx;
SimpleOpenNI kinect;
Polygon2D body;


ArrayList<ProjectedContour> projectedContours;
ArrayList<VerletParticle2D> bodyPoints;
ArrayList<VerletSpring2D> bodySprings;
ArrayList<VerletParticle> bubblePoints;

boolean hideGUI, showFigure;
int mode;
color[] colors;

void setup() {
  size(displayWidth, displayHeight, OPENGL);
  smooth();
  kinect = new SimpleOpenNI(this);
  if (kinect.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!"); 
    exit();
    return;
  } else {
    // mirroring disabled
    kinect.setMirror(false);
    kinect.enableDepth();
    kinect.enableUser();
    kinect.alternativeViewPointDepthToImage();

    opencv = new OpenCV(this, kinect.depthWidth(), kinect.depthHeight());
    kpc = new KinectProjectorToolkit(this, kinect.depthWidth(), kinect.depthHeight());
    kpc.loadCalibration("calibration.txt");
    kpc.setContourSmoothness(2);

    gfx = new ToxiclibsSupport(this);
  }

  initPhysics();
  initGUI(); 
  volume=new VolumetricSpaceArray(SCALE, GRID, GRID, GRID);
  surface=new ArrayIsoSurface(volume);
  textFont(createFont("SansSerif", 12));

  cam = new PeasyCam(this, 1200);
  cam.setWheelScale(0.2);
  cam.rotateY(PI);
  bodyPoints = new ArrayList<VerletParticle2D>();
  bodySprings = new ArrayList<VerletSpring2D>();
  bubblePoints = new ArrayList<VerletParticle>();


  colors = new color[10];
  colors[0] = #0A86D0;
  colors[1] = #134ED5;
  colors[2] = #6D5CAF;
  colors[3] = #D2000D;
  colors[4] = #EB6002;
  colors[5] = #FFBC00;
  colors[6] = #019C69;
  colors[7] = #D42020;

  hideGUI = false;
  showFigure = false;
}

void draw() {
  background(230);

  computeVolume();
  updateContours();

  drawContours();
  updateAllBubbles();

  updateParticles();


  pushMatrix();
  //  translate(width*0.5, height*0.5, 0);

  //  rotateX(mouseY*0.01);
  //  rotateY(mouseX*0.01);
  noFill();
  stroke(255, 192);
  strokeWeight(1);
  //  box(physics.getWorldBounds().getExtent().x*2);
  if (showPhysics) {
    strokeWeight(4);
    stroke(50);
    for (Iterator i=physics.particles.iterator (); i.hasNext(); ) {
      VerletParticle p=(VerletParticle)i.next();
      Vec3D col=p.add(colAmp).scaleSelf(0.5);
      stroke(col.x, col.y, col.z);
      point(p.x, p.y, p.z);
    }
  } else {
    ambientLight(216, 216, 216);
    directionalLight(255, 255, 255, 0, 1, 0);
    directionalLight(96, 96, 96, 1, 1, -1);

    noStroke();
    fill(224, 0, 51);
    for (int i = 0; i< bubblePoints.size (); i++) {
      VerletParticle pos3d = bubblePoints.get(i);
      pushMatrix();
      translate(pos3d.x, pos3d.y, pos3d.z);
      color col = getColorByPosition(pos3d);
      fill(col, alpha);
      sphere(10);
      popMatrix();
    }

    if (isWireFrame) {
      stroke(255);
      noFill();
    } else {
      noStroke();
      fill(224, 0, 51);
    }


    beginShape(TRIANGLES);
    if (!isWireFrame) {
      drawFilledMesh();
    } else {
      drawWireMesh();
    }
    endShape();
  }



  popMatrix();
  noLights();
  fill(0);

  if (!hideGUI) {
    cam.beginHUD();
    text("faces: "+mesh.getNumFaces(), 20, 600);
    text("vertices: "+mesh.getNumVertices(), 20, 615);
    text("particles: "+physics.particles.size(), 20, 630);
    text("springs: "+physics.springs.size(), 20, 645);
    text("alpha: "+alpha, 20, 660);

    text("fps: "+frameRate, 20, 690);
    ui.draw();

    cam.endHUD();
  }

  bodyPoints.clear();
}

void keyPressed() {
  if (key=='r') initPhysics();
  if (key=='w') isWireFrame=!isWireFrame;
  if (key=='p') showPhysics=!showPhysics;
  if (key=='c') isClosed=!isClosed;
  if (key=='b') {
    toggleBoundary();
  }

  if (key == 'h') {
    hideGUI = !hideGUI;
  }
  if (key == 'f') {
    showFigure = !showFigure;
  }
  if (key == '1') {
    mode = 1;
  }
  if (key == '2') {
    mode = 2;
  }
  if (key == '3') {
    mode = 3;
  }
  if (key=='-' || key=='_') {
    alpha-=10;
  }
  if (key=='=' || key=='+') {
    alpha+=10;
  }

  if (key==',' || key=='<') {
    float strengh = centerAttractor.getStrength()-0.5;
    if (strengh>0.2)
      centerAttractor.setStrength(strengh);
  }
  if (key=='.' || key=='>') {
    float strengh = centerAttractor.getStrength()+0.5;
    if (strengh<5)
      centerAttractor.setStrength(strengh);
  }
  if (key=='s') {
    //    mesh.saveAsOBJ(sketchPath(mesh.name+".obj"));
    //    mesh.saveAsSTL(sketchPath(mesh.name+".stl"));
    saveFrame("######.jpg");
  }

  if (key == ' ') {
    // space bar: temporary repulsor at the origin (negative strength repels)
    keyAttractor = new AttractionBehavior(new Vec3D(0, 0, 0), 250, -10.f, 0.015);
    physics.addBehavior(keyAttractor);
  }
}

void keyReleased() {
  if (key == ' ') {
    physics.removeBehavior(keyAttractor);
  }
}

void toggleBoundary() {
  useBoundary=!useBoundary;
  initPhysics();
}

void updateContours() {
  kinect.update();  
  kpc.setDepthMapRealWorld(kinect.depthMapRealWorld()); 
  kpc.setKinectUserImage(kinect.userImage());
  opencv.loadImage(kpc.getImage());
  //  opencv.loadImage(createImage(640,480,RGB));
  // get projected contours
  projectedContours = new ArrayList<ProjectedContour>();
  ArrayList<Contour> contours = opencv.findContours();
  for (Contour contour : contours) {
    if (contour.area() > 2000) {
      ArrayList<PVector> cvContour = contour.getPoints();
      ProjectedContour projectedContour = kpc.getProjectedContour(cvContour, 1.0);
      projectedContours.add(projectedContour);
      //      println(contour.area());
    }
  }
}

void drawContours() {
  for (int i=0; i < projectedContours.size (); i++) {
    ProjectedContour projectedContour = projectedContours.get(i);
    int k = 0;
    float sum1=0, sum2=0;
    for (PVector p : projectedContour.getProjectedContours ()) {
      k++;
      //      println("k:"+k);
      sum1 = sum1 + p.x;
      sum2 = sum2 + p.y;
      if (k%2 == 0) {
        PVector t = projectedContour.getTextureCoordinate(p);
        stroke(0);
        strokeWeight(3);
        if ((random(1)<0.01) && (physics.particles.size()<NUM_PARTICLES)) {
          VerletParticle newBubble = new VerletParticle(p.x, p.y, 0);
          bubblePoints.add(newBubble);
          physics.addParticle(newBubble);
          //          physics.addBehavior(new AttractionBehavior(newBubble, 7, -0.2f, 0.05f));
        }
        bodyPoints.add(new VerletParticle2D(p.x, p.y));
      }
    }
    //    if (frameCount%5==0) {
    //      PVector p = projectedContour.getProjectedContours().get((frameCount*5)%k);
    //      //      println(frameCount%k+"   "+k);
    //      if (bubblePoints.size()<100) 
    //        addParticle(new Vec2D(p.x, p.y));
    //    }
  }

  body = new Polygon2D((List)bodyPoints);  // rebuild the contour polygon from this frame's points
  noStroke();
  fill(255);
  pushMatrix();
  translate(-width*0.5, -height*0.5, 0);
  if (showFigure)
    gfx.polygon2D(body);

  popMatrix();
}


void updateAllBubbles() {
  for (int i = 0; i< bubblePoints.size (); i++) {
    VerletParticle pos3d = bubblePoints.get(i);

    //    pushMatrix();
    //    translate(pos3d.x, pos3d.y, pos3d.z);
    //    sphere(10);
    //    popMatrix();

    VerletParticle2D pos = new VerletParticle2D(pos3d.x+width*0.5, pos3d.y+height*0.5);
    Vec3D prepos = pos3d.getPreviousPosition();
    VerletParticle2D pre = new VerletParticle2D(prepos.x+width*0.5, prepos.y+height*0.5);
    if ((body.containsPoint(pos) && !body.containsPoint(pre))) {
      //          pos.scaleVelocity(-2);
      pos3d.x = prepos.x;
      pos3d.y = prepos.y;
      pos3d.z = prepos.z;
    }
    if (!body.containsPoint(pos) && body.containsPoint(pre)) {
      Vec2D center = body.getCentroid();
      Vec3D velocity = new Vec3D(pos3d.x-center.x, pos3d.y-center.y, pos3d.z);
      pos3d.clearVelocity();
      pos3d.addVelocity(velocity.getNormalizedTo(2));
    }  
    if (body.containsPoint(pos) && body.containsPoint(pre)) {

      switch(mode) {
      case 1:
        pos3d.x = random(width);
        pos3d.y = random(height);
        break;
      case 2:
        float closestD = 1000;
        int pointIdx = 0;
        for (int ii = 0; ii <bodyPoints.size (); ii+=1) {
          VerletParticle2D tempBodyPoint = bodyPoints.get(ii);
          float distance = pos.distanceTo(new VerletParticle2D(tempBodyPoint.x - width*0.5, tempBodyPoint.y - height*0.5));
          if (distance < closestD) {
            closestD = distance;
            pointIdx = ii;
          }
        }
        Vec2D center = bodyPoints.get(pointIdx);
        //        pushMatrix();
        //        translate(center.x - width*0.5, center.y - height*0.5, 0);
        //        sphere(20);
        //        popMatrix();

        Vec3D velocity = new Vec3D(pos3d.x-center.x, pos3d.y-center.y, pos3d.z);
        pos3d.clearVelocity();
        pos3d.addVelocity(velocity.getNormalizedTo(-60));
        break;

      case 3:
        center = new Vec2D(0, 0);
        //        pushMatrix();
        //        translate(center.x - width*0.5, center.y - height*0.5, 0);
        //        sphere(20);
        //        popMatrix();

        velocity = new Vec3D(pos3d.x-center.x, pos3d.y-center.y, pos3d.z);
        pos3d.clearVelocity();
        pos3d.addVelocity(velocity.getNormalizedTo(60));
        break;
        //      pos.z = ran
      }

      pos3d.z = 0;
    }
  }
}

color getColorByPosition(VerletParticle pos3d) {
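  // map the particle's angle around the origin (atan, so one half-turn) to a
  // fractional palette index and lerp between the two adjacent colors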

  color col;
  if (abs(pos3d.x) > 0.1) {
    float angle = (atan(pos3d.y/pos3d.x)/(PI/2)+1)/2*7;
    int clridx = int(angle)%7;
    //    println(clridx);  // debug output, very noisy when left on
    col = lerpColor(colors[clridx], colors[(clridx+1)%7], angle - int(angle));
  } else {
    // near-vertical positions: substitute x = 0.1 to avoid dividing by
    // values close to zero
    float angle = (atan(pos3d.y/0.1)/(PI/2)+1)/2*7;
    //    println(angle);  // debug output
    int clridx = int(angle)%7;
    col = lerpColor(colors[clridx+1], colors[(clridx+1)%7+1], angle - int(angle));
  }
  return col;
}
GUI.pde

Code: Select all

void initGUI() {
  ui = new ControlP5(this);
  ui.addSlider("isoThreshold", 1, 12, isoThreshold, 20, 20, 100, 14).setLabel("iso threshold");
  ui.addSlider("alpha", 0, 255, alpha, 20, 220, 100, 14).setLabel("color alpha");

  ui.addToggle("showPhysics", showPhysics, 20, 60, 14, 14).setLabel("show particles");
  ui.addToggle("isWireFrame", isWireFrame, 20, 100, 14, 14).setLabel("wireframe");
  ui.addToggle("isClosed", isClosed, 20, 140, 14, 14).setLabel("closed mesh");
  ui.addToggle("toggleBoundary", useBoundary, 20, 180, 14, 14).setLabel("use boundary");

  ui.addBang("initPhysics", 20, 260, 28, 28).setLabel("restart");

  ui.setAutoDraw(false);
}
Mesh.pde

Code: Select all

void computeVolume() {
  float cellSize=(float)DIM*2/GRID;
  Vec3D pos=new Vec3D();
  Vec3D offset=physics.getWorldBounds().getMin();
  float[] volumeData=volume.getData();
  for (int z=0, index=0; z<GRID; z++) {
    pos.z=z*cellSize+offset.z;
    for (int y=0; y<GRID; y++) {
      pos.y=y*cellSize+offset.y;
      for (int x=0; x<GRID; x++) {
        pos.x=x*cellSize+offset.x;
        float val=0;
        for (int i=0; i<numP; i++) {
          Vec3D p=(Vec3D)physics.particles.get(i);
          float mag=pos.distanceToSquared(p)+0.00001;
          val+=1/mag;
        }
        volumeData[index++]=val;
      }
    }
  }
  if (isClosed) {
    volume.closeSides();
  }
  surface.reset();
  surface.computeSurfaceMesh(mesh, isoThreshold*0.001);
  // xxx
}

void drawFilledMesh() {
  int num=mesh.getNumFaces();
  mesh.computeVertexNormals();
  // xxx
  for (int i=0; i<num; i++) {
    Face f=mesh.faces.get(i);

    color col = getColorByPosition(new VerletParticle(f.a.x,f.a.y,f.a.z));
//    Vec3D col= new Vec3D(colc.r,colc.g,colc.b);
    fill(col, alpha+50);
    normal(f.a.normal);
    vertex(f.a);
    
    col = getColorByPosition(new VerletParticle(f.b.x,f.b.y,f.b.z));
//    col= new Vec3D(colc.r,colc.g,colc.b);
    fill(col, alpha+50);
    normal(f.b.normal);
    vertex(f.b);
    
    col = getColorByPosition(new VerletParticle(f.c.x,f.c.y,f.c.z));
//    col= new Vec3D(colc.r,colc.g,colc.b);
    fill(col, alpha+50);
    normal(f.c.normal);
    vertex(f.c);
  }
}

void drawWireMesh() {
  noFill();
  int num=mesh.getNumFaces();
  Vec3D col = new Vec3D();
  Face f = mesh.faces.get(0);
  for (int i=0; i<num; i++) {
    f=mesh.faces.get(i);
    col=f.a.add(colAmp).scaleSelf(0.5);
    stroke(col.x, col.y, col.z);
    vertex(f.a);
    col=f.b.add(colAmp).scaleSelf(0.5);
    stroke(col.x, col.y, col.z);
    vertex(f.b);
    col=f.c.add(colAmp).scaleSelf(0.5);
    stroke(col.x, col.y, col.z);
    vertex(f.c);
  }
  fill(col.x, col.y, col.z, alpha);
  normal(f.c.normal);
}

void normal(Vec3D v) {
  normal(v.x, v.y, v.z);
}

void vertex(Vec3D v) {
  vertex(v.x, v.y, v.z);
}
Physics.pde

Code: Select all

void initPhysics() {
  physics=new VerletPhysics();
  physics.setDrag(0.1f);
  physics.setWorldBounds(new AABB(new Vec3D(), new Vec3D(DIM, DIM, DIM)));
  if (surface!=null) {
    surface.reset();
    mesh.clear();
  }
  boundingSphere=new SphereConstraint(new Sphere(new Vec3D(), DIM), SphereConstraint.INSIDE);
  gravity=new GravityBehavior(new Vec3D(0, 1, 0));
  centerAttractor = new AttractionBehavior(new Vec3D(0, 0, 0), 10000, 1.f, 0.1f);
  physics.addBehavior(centerAttractor);

  //  physics.addBehavior(gravity);
}

void updateParticles() {
  Vec3D grav=Vec3D.Y_AXIS.copy();
  //  grav.rotateX(mouseY*0.01);
  //  grav.rotateY(mouseX*0.01);
  gravity.setForce(grav.scaleSelf(0));
  numP=physics.particles.size();
  // new bubbles are spawned from contour points in drawContours(); this
  // random spawn block is kept for reference but currently disabled
  if (random(1)<0.8 && numP<NUM_PARTICLES) {
    //    VerletParticle p=new VerletParticle(new Vec3D(random(-1, 1)*10, 0 , random(-1, 1)*10));
    //    if (useBoundary) p.addConstraint(boundingSphere);
    //    physics.addParticle(p);
    //    bubblePoints.add(p);
  }
  if (numP>10 && physics.springs.size()<1400) {
    for (int i=0; i<60; i++) {
      if (random(1)<0.04) {
        VerletParticle q=physics.particles.get((int)random(numP));
        VerletParticle r=q;
        while (q==r) {
          r=physics.particles.get((int)random(numP));
        }
        //        physics.addSpring(new VerletSpring(q, r, REST_LENGTH, 0.0002));
      }
    }
  }
  float len=(float)numP/NUM_PARTICLES*REST_LENGTH;
  for (Iterator i=physics.springs.iterator (); i.hasNext(); ) {
    VerletSpring s=(VerletSpring)i.next();
    s.setRestLength(random(0.9, 1.1)*len);
  }
  physics.update();
} 
sketch_3D_Bubble.zip
Source Code
(963.49 KiB) Downloaded 219 times
屏幕快照 2016-06-10 下午12.32.49.png
Demo Video 1
Attachments
屏幕快照 2016-06-10 下午12.32.23.png
Demo Video 2
Last edited by qiu0717 on Fri Jun 10, 2016 11:48 am, edited 6 times in total.

xindi
Posts: 8
Joined: Wed Jan 06, 2016 1:39 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by xindi » Tue Jun 07, 2016 12:31 pm

Acoustic Aura

Xindi Kang

It’s astonishing how, in the contemporary art world, the shape of anything can produce an artistic image and sound of any kind can be music. This suggests that we could be living in an art world as long as we can perceive it that way. I personally feel really glad to have been given this gift of perceiving so many things in a sensitive and artistic way, and I’ve always been looking for ways to visually express and share what I perceive. Now that technology has enabled us to work in the third dimension dynamically, unlike painting in oil on canvas or sculpting with plaster, we as artists can literally visualize the imagination and emotions wavering in our minds.
MeditationAuraLady.jpg
This project will open your eyes to the world of music. See yourself in a new world where the fluctuation of your aura, following the sound you hear and produce, is virtually visualized. As for the aura itself, it is a paranormal phenomenon that not everyone believes exists.

Here’s our old friend Wikipedia’s definition of “Aura”:
“In parapsychology and spiritual practice, an aura is a field of subtle, luminous radiation surrounding a person or object like the halo or aureola in religious art. In Buddhism theories, it is believed that all objects and all living things manifest such an aura.”

“Attempts to prove the existence of auras scientifically have repeatedly met with failure; for example people are unable to see auras in the dark, and auras have never been successfully used to identify people when their identifying features are otherwise obscured in controlled tests.”

Even though the existence of auras has not been scientifically proven, people’s belief in them has not been shaken. The important thing here is not finding proof. Instead, especially when we are crossing the border between art and science, the crucial motivation for development in both the humanities and technology is imagination. Visualization in a virtual space therefore becomes a necessary and functional medium.

There are two modes for the visualization:
1. Intrinsic Mode
In this mode a built-in music track plays, and a “cube cloud” model of the person or people in range is detected. The shapes of the cubes, meaning their height, width, and depth, and their colors change in response to the music. Each value of the waveform analysis drives a different changing factor, and each part of the body responds to the sound differently (body parts are identified using skeleton data). A rough sketch of this audio-to-shape mapping is given after this list.

2. Input Mode
In this mode the person or people in range use a microphone or some other acoustic input to produce sound that changes the 3D “cube cloud” of their own image.
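To make the Intrinsic Mode mapping concrete, here is a minimal sketch of the idea. It assumes the Minim library and a placeholder audio file called track.mp3 in the sketch’s data folder (both assumptions, not part of the actual project), and it scales a simple row of boxes by FFT band energy; the real project drives the Kinect cube cloud per body part instead.

Code: Select all

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer track;   // placeholder track, see note above
FFT fft;
int bands = 16;      // number of boxes / frequency bands

void setup() {
  size(640, 480, P3D);
  minim = new Minim(this);
  track = minim.loadFile("track.mp3");
  track.loop();
  fft = new FFT(track.bufferSize(), track.sampleRate());
}

void draw() {
  background(0);
  lights();
  fft.forward(track.mix);   // analyze the current audio buffer
  for (int i = 0; i < bands; i++) {
    // pick one FFT bin per box; the actual project would route bands
    // to body parts identified from the Kinect skeleton instead
    float energy = fft.getBand(i * fft.specSize() / bands);
    pushMatrix();
    translate(40 + i * 36, height * 0.5, 0);
    fill(lerpColor(color(0, 120, 255), color(255, 60, 60), i / float(bands)));
    noStroke();
    box(20, 20 + energy * 8, 20);   // box height follows the band energy
    popMatrix();
  }
}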

Screen Shot 2016-06-07 at 11.48.03 AM.png
Screen Shot 2016-06-07 at 11.49.02 AM.png
Screen Shot 2016-06-07 at 11.47.16 AM.png
Project File Attached Below:
AnalyzeSoundC1 copy.zip
(3.17 MiB) Downloaded 223 times
Last edited by xindi on Tue Jun 07, 2016 1:11 pm, edited 5 times in total.

changhe
Posts: 6
Joined: Wed Jan 06, 2016 1:39 pm

Re: Proj 5: Final Project II: Actual Documentation

Post by changhe » Tue Jun 07, 2016 12:32 pm

【HGCI】
Hand Gesture Control Interface

Hilda He
2016 Spring
Inspired by Tim Wood's Phase Space and Jonathan Feinberg's PeasyCam

Concept:

I made a simple hand gesture recognition and control interaction sub-system for artists to embed and use in their existing 3D projects. Like PeasyCam, where you just import the library and it makes your program react to the mouse, HGCI is meant to be dropped into a sketch with minimal setup. My goal is to make HGCI an independent and useful library.

The way to use HGCI is really simple. Plug the Kinect in and let it recognize you by waving your hands, one at a time. You will see a small hand icon on the screen, which you can then use to present your project and interact with it.

The idea behind it is very concise: free your hands and let them control the things you made.

This is an experimental library that has not yet been published to the public.


Documentation:
  • Methods:
    setupHGCI(): sets up HGCI with default settings. Run it in the setup stage of your program. No parameters. No return value.

    HGCI(): gets information from the Kinect; it detects and recognizes you. No parameters. No return value.

    showHands(boolean icon): shows the hands in the 3D environment so you can find the corresponding positions of your hands. One parameter: true draws the hand icon in black, false in white. No return value.

    beginHGCI(): starts hand interaction for 3D shapes. Put it ahead of the code you use to draw any shape; it must be closed with endHGCI(). No parameters. No return value.

    endHGCI(): ends the hand interaction. Must be paired with beginHGCI(). No parameters. No return value.

    HGCIwithGUI(): cooperates with any GUI elements the artist uses in the same program. Call it ahead of any GUI code, such as ControlP5 code. No parameters. No return value.

  • Fields:
    SimpleOpenNI: HGCI inherits from SimpleOpenNI and uses its library
    (a minimal host sketch showing the intended call pattern follows below)
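Based on the documentation above, a host sketch would call HGCI roughly as follows. This is only a sketch of the intended call pattern, not tested code; drawMyShape() is a hypothetical placeholder for the artist’s own 3D content.

Code: Select all

import SimpleOpenNI.*;   // HGCI builds on SimpleOpenNI

void setup() {
  size(1024, 768, P3D);
  setupHGCI();           // initialize HGCI with its default settings
}

void draw() {
  background(0);
  HGCI();                // read the Kinect and track both hands
  showHands(true);       // draw the hand icons in black

  beginHGCI();           // content between these calls reacts to the hands
  drawMyShape();         // placeholder for the artist's own 3D content
  endHGCI();

  HGCIwithGUI();         // call ahead of any GUI code, e.g. ControlP5
  // ... GUI drawing here ...
}

void drawMyShape() {
  box(200);
}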
Instructions:

1. Unzip it and open it with Processing 2.
2. Download the required library: SimpleOpenNI.
3. Plug in the Kinect before starting the program.
4. Face the Kinect and let the system recognize both your hands by waving or pushing each hand quickly, one at a time. You will see a small hand icon once it recognizes you.
5. Play with it while making sure both your hands stay detected by the Kinect.
6. Give it a little patience; the environment might affect the performance.


Demo

I created two examples for HGCI. One is my own data visualization project; the other is Lu Liu's project. Thanks to Lu Liu for offering her project as a demo object for me. Both work well with HGCI.

Rubik -- Hilda He:
example1.jpg
Rubik demo
Film Adaption Influence -- Lu Liu:
example2.jpg
Film Adaption Influence
More projects and artists who would like to try out my tool and embed my sub-system are welcome.


Future Development:

I'll definitely implement more functions. The next step is to make it fully functional as a mouse replacement. But that requires infrastructure changes to my project, because inheriting only from SimpleOpenNI seems too limited; in particular, fingertip detection isn't provided, which is a big barrier for further development. I will need to look into and work with the lower, more basic layers of the Kinect.
Attachments
HGCI.zip
HGCI
(18.59 KiB) Downloaded 218 times

Post Reply