Down in the Open Space

Here is the first image of a person walking past the system. As you can see, the particles are constantly moving, which is what I wanted to portray: we are not the only things moving in the world, and even if you can't see something it doesn't mean it isn't responding; the camera makes this visible.

The problem here is how the particles respond to each other: because the camera has to cover a much bigger space, there aren't enough particles to produce a noticeable reaction to the man walking by. This can easily be fixed by adding more particles.

openspace1

Here I switched to the version of the code that adds lines, as seen previously in my designs. The benefit of this was the reaction the camera had to the person on screen, as well as the particles responding much better to his movements, expressing what I wanted in a much better light.

Here is someone reacting to the system; I think it is the same person who had noticed the work earlier and come back to see what happens. They were eager to see how the particles reacted, doing star jumps and moving closer to the camera. This helped show the space being left blank where the person's body was, while the particles around it stayed in motion.

openspace

This is the biggest jump with the code: having made the particles bigger and also reversed the negative space, as seen in a previous experiment, I wanted to see how the bigger space would react. In fact it wasn't beneficial to the initial idea at all; the particles no longer have the constant motion that shows movement, and the bold colours of the person's body aren't strong enough to draw attention towards the particles.

I would describe this as the worst outcome for the digital environment; it led to the camera lagging once again and the particles were not responding enough to represent my concept, although the colours are nice and you can see the Tardis very well.

openspacebigparticles

Overall it was a shame there weren't many people in the foyer at the time, because the response might have been more telling. Even so, the way the almost-particle system represented the space looked very nice and portrays my concept well, though not yet to the standard I want.

Edited code for the second installation below –v–

import processing.video.*;

Capture video;
boolean cheatScreen;

String letterOrder =
  "------------------------------------------" +
  "0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.8;


void setup() {
  size(640, 480);

  filter(GRAY);
  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, 160, 120);

  // Start capturing the images from the camera
  video.start();

  // ... setup() continues (font loading and letter setup) as in the
  // original AsciiVideo example; only the opening is shown here
}

That One PARTicle That Just Can’t Keep Up

for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];

  // pull the red, green and blue channels out of the packed pixel colour
  int r = (pixelColor >> 16) & 0xff;
  int g = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;

Here I have made a slight change to the colour, renaming the attributes from "r, g, b" to "h, s, b". The reason for this is to give a clearer image of what is being projected, allowing people to see what is going on better. At the moment I feel the particles don't look clear enough and don't show the blank spot where the person is well enough; changing these attributes will hopefully improve this.

for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];

  int h = (pixelColor >> 16) & 0xff;
  int s = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;
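
For reference: the bit-shifting above still pulls out the red, green and blue channels whatever the variables are called. If I do end up wanting actual hue, saturation and brightness values, a possible alternative (my own sketch, not part of the example) is Processing's built-in colour functions:

// Sketch (assumption): reading each pixel through Processing's colour
// functions instead of bit-shifting, to get true hue / saturation / brightness.
int pixelColor = video.pixels[index];
float h = hue(pixelColor);         // 0-255 in the default colour mode
float s = saturation(pixelColor);
float b = brightness(pixelColor);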

I am also changing the font size again, this time to 2.5.

colourchangep

As you can see, I have encountered a little problem.

theproblem

The problem I have encountered here is that the camera cannot keep up with my movements. This is a big deal because of the pace people walk through Weymouth House; they will not notice that their bodies are being blanked out. Trying to fix this problem I added in an fps command:

frameRate(60);

This had no effect on the work; nothing changed, so I fluctuated between higher and lower frame rates and still saw no difference. So I tried cutting the number of letters (particles) in half to reduce the collisions between them, and again it made no difference. This problem is a tricky one to figure out, so I'm going to move from W119 and head down to the walking space and see if that makes any difference.
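
One thing worth noting for later (an assumption on my part, not something I tested here): the sketch draws one letter per camera pixel, so the number of "particles" really comes from the capture resolution rather than the length of letterOrder. Halving the resolution would be a more direct way to thin them out:

// Sketch: fewer particles by capturing at a lower resolution,
// since each camera pixel becomes one letter/particle on screen.
video = new Capture(this, 80, 60);   // a quarter of the 160x120 pixel count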

openspace

What you're seeing here is the first view of the production in the Weymouth House walk-through: me stepping away from the webcam to see if it is clear that I am standing there. The lag on the image isn't as bad as it was up in W119, which could be due to the wider area of space. I also reduced the size of the particles to allow more space for movement, hence the dark spots.

I’ve Made A Decision

I've decided to use more than one of the code manipulations in this space. Treating it as further experimentation, I am going to frame it as: "which code reacts better to the bigger environment?"

The A to Zero Design Process

From the starting point, my main objective is to achieve the best representation of what a particle system would look like by manipulating the code. Here are some sketches of what I'm trying to show.

10933874_10204886313190887_4319158734198323611_n-2

What we have with the original piece is all the letters of the alphabet and punctuation characters creating the environment:

String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";
char[] letters;

The code above ^

I am going to experiment with these letters and change them into symbols like lines and dots, trying to portray the movement in the environment to the best of my ability. In the sketch above I have tried to show little circles which I can develop the code towards; I've changed the code (see below) to use '0' to see what would happen.

String letterOrder =
"000000000000000000000000000000000000000000000000" ;
char[] letters;

00000

As you can see, the results are not what I want: the '0's are not moving like particles and the image looks more like a stained-glass effect. The way I might fix this is by replacing the '0' with round shapes of different sizes, e.g. 'O' and 'o'.

That made no difference, so I started playing around with the font size of the letters, replacing the code with this:

String letterOrder =
"0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO" ;
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.5;

The Results –v– Changed fontSize from '2' to '1.5'.

0oO

These are the results I'm looking for. What's great about this is the way the "particles" are knocking against each other, portraying the beginnings of my concept. The simple change in the size of the letters has made such a big difference that a static image has come alive.

The smaller the particles, the more of them are needed to fill out the image. The expression here relates to the concept, but there are a few problems.

The Problems

I'm not convinced by the way motion is being shown. Yes, it is the particle portrayal that I want, but there's something about the colour that I'm not really enjoying looking at. It's important that I get the colour the way I want it, so what I'm going to do is experiment with which colours are being used; the code I am going to manipulate is below –v–

pushMatrix();
for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];
  // Faster method of calculating r, g, b than red(), green(), blue()
  int r = (pixelColor >> 16) & 0xff;
  int g = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;

  // Another option would be to properly calculate brightness as luminance:
  // luminance = 0.3*red + 0.59*green + 0.11*blue
  // Or you could instead use red + green + blue, and make the values[] array
  // 256*3 elements long instead of just 256.
  int pixelBright = max(r, g, b);
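
One small way into this (my assumption about where to intervene, since as far as I can tell the example fills each letter with the original pixel colour before drawing it) would be to rebuild the colour with the channels rearranged before it is used:

// Sketch (assumption, not the example as-is): rearrange the channels
// before the letter is filled, e.g. swapping red and blue to cool the palette.
int pixelColor = video.pixels[index];
int r = (pixelColor >> 16) & 0xff;
int g = (pixelColor >> 8) & 0xff;
int b = pixelColor & 0xff;
fill(color(b, g, r));   // red and blue exchanged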

As well as experimenting with the colour, I'm also going to see what the piece would look like if I were to use lines rather than circles. The reason I came to this was some of the sketch designs I created: I found that the way I was using my pen to draw the structure of people's bodies ended up mimicking a sound wave. Let's check what this looks like; a possible letterOrder for it is sketched below.
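
As a sketch only (my assumption of a starting string, not something I have run yet), a letterOrder built from "line" characters might look like this, with spaces for the darkest pixels and heavier strokes for the brightest:

// Sketch: dark pixels map to the start of the string (blank), bright pixels
// to the end, so the image is drawn out of dashes and bars instead of circles.
String letterOrder =
  "          ----------" +
  "--------------||||||";
char[] letters;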

I Found Something

While doing my sketches and scrolling through examples, I came across something.

AsciiVideo

Jan 24, 2015 15:02

I've hit a stroke of luck here; this piece, being one of the Processing examples, has all the aspects I am looking for:

Colour, Motion and Light

Here the colours are being replaced with "letters", and they knock against each other to show constant reaction; this is the effect I want to try and portray (constant motion). From what I understand (what I see without looking closely at the code), the letters have slight tints dependent on the lighting in the room, but that's not a big feature; the biggest feature here is the letters acting as the image, very similar to the way particles would act.
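
A rough paraphrase of what I believe the key part of the code is doing (my own simplification, written from memory of the example rather than copied from it):

// Each camera pixel picks a letter by its brightness and is drawn in that
// pixel's colour, so the letters themselves make up the image.
int pixelColor = video.pixels[index];
int r = (pixelColor >> 16) & 0xff;
int g = (pixelColor >> 8) & 0xff;
int b = pixelColor & 0xff;
int pixelBright = max(r, g, b);     // brightest channel stands in for brightness
fill(pixelColor);                   // tint the letter with the pixel's colour
text(letters[pixelBright], 0, 0);   // brighter pixels get denser characters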

I am going to develop these letters and see where it could go. The reason for this is its structured way of portraying what the camera can see; it shows another way of seeing life, and to me it looks quite like a particle system, relating back to what I want my concept to be. I could try adapting these letters into other formats; replacing them with little dots may give the best portrayal of particles.

 

 

 

S*** already hit the FAN, or the Mac

As mentioned before, I was aware that I MAY need a Kinect camera to run some of the code I wanted to experiment with, but it turns out you ACTUALLY do need a Kinect camera to run this code. Below is not the whole of the code that imports the library, just a bit of it so you get an idea of what I am looking at:


import SimpleOpenNI.*;

SimpleOpenNI context;
float zoomF = 0.3f;
float rotX = radians(180); // by default rotate the whole scene 180deg around the x-axis,
                           // the data from OpenNI comes upside down
float rotY = radians(0);
PShape pointCloud;
int steps = 2;

void setup()
{
  size(1024, 768, P3D);

  //context = new SimpleOpenNI(this,SimpleOpenNI.RUN_MODE_MULTI_THREADED);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  // disable mirror
  context.setMirror(false);

  // enable depthMap generation
  context.enableDepth();

  context.enableRGB();

  // align depth data to image data
  context.alternativeViewPointDepthToImage();
  context.setDepthColorSyncEnabled(true);

  stroke(255, 255, 255);
  smooth();
  perspective(radians(45),
              float(width) / float(height),
              10, 150000);
}

// ... (a fragment from later in the sketch's draw() function)
endShape();

This is the response I receive when I run the file:

Screen Shot 2015-01-22 at 17.24.53

“You know what that is…**** *******!”

That is a whole lot of red and not pretty at all. I never expected it to be easy, but when the code is already written and you just want to see what it looks like, you would think it would work fairly simply.

In true Lorimer style I am going to ignore all the red and look at the white, the key statement being:

“Can’t init SimpleOpenNI, maybe the camera is not connected!”

The camera is clearly connected!! A MacBook Air worth almost a thousand pounds comes with a built-in webcam (it might as well come with a kettle and Twinings breakfast tea for that price) and it's telling me the camera isn't connected… I need to find out the problem. Be right back.

Later on….

So it's as I thought: I 100% need a Kinect camera.

comics-extralife-kinect-xbox-one-716039

The Kinect camera already has what is needed to run these files: because it was created for Microsoft, its software is already built to detect people in the room so they can interact with games. Even so, I tried to point the source at the webcam I rented from the University and also the Mac's camera. Speaking to a couple of people on my course, I thought this would be quite simple.

Adding this to my void draw() section:

cam = new Capture(this, 320, 240, "FaceTime HD Camera", 30);

I thought this would work, but another error appeared explaining that there is no such thing as "cam". Reading this now you may be thinking it's incredibly obvious that this was never going to fix the underlying problem, which I do know now, but I kept on trying to find another way around it.
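
Looking back, the immediate missing piece (an assumption, since I never got it running this way) is that cam was never declared anywhere, and the Capture object normally belongs in setup() rather than draw(). Something like this would at least remove the "cam doesn't exist" error:

// Sketch (assumption): declaring cam and constructing it in setup().
import processing.video.*;

Capture cam;   // declared at the top so both setup() and draw() can see it

void setup() {
  size(640, 480);
  // the named-camera constructor I was trying, now with cam actually declared
  cam = new Capture(this, 320, 240, "FaceTime HD Camera", 30);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  image(cam, 0, 0);
}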

Looking at the "GettingStartedCapture" example that the video library offers as a basic way to learn how a webcam works in Processing; code below:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);

  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    cam = new Capture(this, 640, 480);
  } if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an element
    // from the array returned by list():
    cam = new Capture(this, cameras[0]);
    // Or, the settings can be defined based on the text in the list
    //cam = new Capture(this, 640, 480, "Built-in iSight", 30);

    // Start capturing the images from the camera
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
  // The following does the same as the above image() line, but
  // is faster when just drawing the image without any additional
  // resizing, transformations, or tint.
  //set(0, 0, cam);
}

I thought that by copying and pasting this code into my first file and deleting the duplicated parts it would solve the problem; again I was wrong, although the camera would activate for a split second and then deactivate. This was starting to get annoying… So I went for a chat with a couple of my course mates, and it was mentioned that I could try to do this without the Kinect camera, but it would involve telling the code to ignore the depth data and go straight for motion. I could just about imagine how I would do that; a rough sketch of the idea is below.
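
For the record, here is my rough idea of what "going straight for motion" with an ordinary webcam could look like: compare each frame with the previous one and treat pixels that change a lot as movement. This is a sketch of the general technique, not code from the examples or anything I have tested with the installation:

// Sketch (assumption): simple frame differencing with the webcam.
import processing.video.*;

Capture cam;
PImage motion;       // white where something moved, black elsewhere
int[] previous;      // the previous frame's pixels

void setup() {
  size(640, 480);
  cam = new Capture(this, 160, 120);
  cam.start();
  motion = createImage(160, 120, RGB);
  previous = new int[160 * 120];
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    motion.loadPixels();
    for (int i = 0; i < cam.pixels.length; i++) {
      // difference in brightness between this frame and the last one
      float diff = abs(brightness(cam.pixels[i]) - brightness(previous[i]));
      motion.pixels[i] = diff > 30 ? color(255) : color(0);
      previous[i] = cam.pixels[i];
    }
    motion.updatePixels();
  }
  image(motion, 0, 0, width, height);   // scale the small motion map up
}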

For something that was only meant to be an experiment, I was wasting too much time on this particular code, so it's time to get on to other things.