Final(ly) Evaluation

What I have Learnt

Throughout this project I have learnt the fundamentals of Design Iteration, moving from point to point: Analyse, Design and Test. It reminded me of the process I need to follow when designing a piece of work and documenting its progress. I have also learnt parts of the Processing language and how to manipulate existing work to portray the image I aspire to. These processes have widened my knowledge of design and given me an insight into the lengths I need to go to to reach a final piece that works; following the track I took, the development of lighting, motion and colour expanded into an understanding of spaces and social interactions. Drawing on the knowledge I already possessed from life experience, I adapted what I was taught to create a piece of work which questions the motion of people and the motion of life (what happens around us as people).

What I have Achieved

I achieved the adaptation of a piece of Processing code, which led to me creating a digital environment that portrays the idea that there is motion in everything, even if it can't be seen. I developed an "Almost Particle System": a representation of a particle system that allowed passers-by to see the environment they stood in evolve into something constantly in motion. The system's main focus was on what was going on around people rather than solely on the person viewing, and when a motion was performed the particles would react, showing that every action has a reaction and symbolising that there is always a cause behind an effect. I also achieved a new-found understanding of depth and concept, relating the two together because of how my concept was affected by the depth of the space I was using as well as by the text size.

What Went Wrong

The biggest mistake I made was not taking into account how big the space in the foyer was; I had been doing all my experiments and designs in close proximity to myself, and when I came to set up in Weymouth House the particles did not respond to the environment as well. The particles would not react with each other as strongly as when the camera was focused on something closer, and the code could not blank out the entire person when they walked past. This had a massive effect on the final piece as a whole, leading to me installing the system and changing the code as I went along, giving different interpretations of what could be used. The benefit of this was a wider knowledge of how the space was affected by the particles, giving me an insight into how cameras react to large and small spaces as well as to the motion of people walking by.

What I Would Change For Next Time

Next time I would write the Particle System myself from scratch; at the point of deciding to write the system my knowledge was not advanced enough to create it. I would analyse the space to a proper degree and begin experimenting either in the space I would be installing in or in a similar one. I would like the Particle System I create to react to people when they walk on screen, first showing the whole person and then having them evolve into particles. I would also do wider research into other systems related to particles, giving myself a wider insight into what I could have adapted my skills to, leading to a better result.

Down in the Open Space

Here is the first image of a person walking by the system. As you can see, the particles are constantly moving, which is what I wanted to portray: we are not the only things moving in the world, and even if you can't see something move, it doesn't mean there is no response; the camera is showing this.

The problem here is the way the particles are responding to each other; because the camera has to pick up a much bigger space, there aren't enough particles to produce a big enough response to the man walking by. This can easily be fixed by adding more particles.
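Since each camera pixel becomes one particle in this sketch, the simplest fix I can see (a sketch of the idea, not something I tested at the time) is to capture at a higher resolution:

// doubling the capture resolution gives four times as many pixels,
// and therefore four times as many particles on screen
video = new Capture(this, 320, 240);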


Here I flipped the code, adding lines, as seen previously in my designs. The benefit of this was the reaction the camera had towards the person on screen, as well as the particles responding much better to his movements, expressing what I wanted in a much better light.

Here is someone reacting to the system; I think it's the same person who had noticed the work and come back to see what happens. He was eager to see how the particles reacted, doing star jumps and moving closer to the camera. This helped show the space being left blank where the person's body was, while the particles around it stayed in motion.


This is the biggest jump with the code: having made the particles bigger and also reversed the negative space, as seen in a previous experiment, I wanted to see how the bigger space would react. In fact it wasn't beneficial to the initial idea at all; the particles no longer have the motion which always shows movement, and the bold colours of the person's body aren't strong enough to draw attention towards the particles.

I would describe this as the worst outcome for the digital environment; it led to the camera lagging once again and the particles not responding enough to represent my concept. But the colours are nice, and you can see the TARDIS very well.


Overall it was a shame that there weren't many people in the foyer at the time, because the response might have been more telling, but the way the almost particle system symbolised the space looked very nice and portrays my concept, if not yet to the standard I want.

Edited code for second installation below –V–

import processing.video.*;

Capture video;
boolean cheatScreen;

String letterOrder =
  "------------------------------------------" +
  "0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.8;


void setup() {
  size(640, 480);

  filter(GRAY);
  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, 160, 120);

  // Start capturing the images from the camera
  video.start();

  // ... (rest of setup() omitted)
}

That One PARTicle That Just Can’t Keep Up

for (int y = 0; y < video.height; y++) {
  for (int x = 0; x < video.width; x++) {
    int index = y * video.width + x;  // this pixel's position in the frame
    int pixelColor = video.pixels[index];

    // unpack the red, green and blue channels from the packed colour int
    int r = (pixelColor >> 16) & 0xff;
    int g = (pixelColor >> 8) & 0xff;
    int b = pixelColor & 0xff;

Here I have made a slight change to the colour: I have changed the attribute names from "r, g, b" to "h, s, b". The reason for this is to give a clearer image to what is being projected, allowing people to see what is going on better. At the moment I feel the particles don't look clear enough and don't show the blank spot where the person is well enough; changing these attributes will hopefully improve this.

for (int y = 0; y < video.height; y++) {
  for (int x = 0; x < video.width; x++) {
    int index = y * video.width + x;
    int pixelColor = video.pixels[index];

    int h = (pixelColor >> 16) & 0xff;
    int s = (pixelColor >> 8) & 0xff;
    int b = pixelColor & 0xff;
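One thing worth flagging (my own observation, not something in the original example): renaming the variables doesn't actually change what the bit-shifts extract; they still pull out red, green and blue. If I wanted genuine hue, saturation and brightness values, Processing has built-in functions for that:

    // the shifts above still produce RGB whatever the variables are called;
    // these built-ins read the actual HSB components of a pixel
    float h = hue(pixelColor);
    float s = saturation(pixelColor);
    float b = brightness(pixelColor);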

Also changing the font size again, this time to 2.5.


As you can see, I have encountered a little problem.

The problem I have encountered here is that the camera cannot keep up with my movements. This is a big deal because of the pace people walk through Weymouth House; they will not notice that their bodies are being blanked out. Trying to fix this problem I added in an fps command:

frameRate(60);

This had no effect on the work; nothing changed, so I fluctuated between higher and lower frame rates and it still made no difference. So I tried cutting the amount of letters (particles) in half to reduce the collisions between them; again, no difference. This problem is a tricky one to figure out, so I'm going to move from W119 and head down to the walking space to see if that helps.
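With hindsight, one plausible explanation (an assumption on my part, not something I verified) is that frameRate() only caps how often draw() runs, while the webcam delivers frames at its own rate. The video library lets you request a capture frame rate directly instead:

// frameRate(60) only limits draw(); the camera itself was likely the bottleneck.
// Capture accepts a requested fps as its final argument:
video = new Capture(this, 160, 120, 30);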

What you're seeing here is the first view of the production in the Weymouth House walk-through: me stepping away from the webcam to see if it is clear that I am standing there. The lag on the image isn't as bad as it was up in W119, which could be due to the wider area of space. I did also reduce the size of the particles to allow more space for movement, hence the dark spots.

I’ve Made A Decision

I've decided to use more than one of the code manipulations in this space. Considering it as further experimentation, I am going to frame it as: "which code reacts better to the bigger environment?"

The Line Between Two Designs

The next step is to try a different interpretation of what I am doing. Before I get really deep into the code I need to state which direction I want to go in: either working the particles like the plan has always been, or delving into something else, working with lines rather than dots, or maybe even something completely different. Options include numbers, or playing around with the size of the letters, not like before but in terms of absolutely massive or really tiny. I will have to consider whether lighting will have a big effect on the sizes and whether this could affect the distance the camera can see.

I'll just get straight into trying out the lines; code used below –v–

String letterOrder =
"-__--__==__--__==__--__==__--__==__--__==__--__==__--__==__--__==" ;
char[] letters;

Using equals signs and similar line shapes I have replaced all the zeros; here are the results –v–

Yes! That is just a black screen!

There is obviously a problem here….

Doing some further digging into what's gone wrong here, and not having any errors to go on, the problem is density: the letters need to have a certain density for them to be placed onto the screen. The problem with using lines and letters which are very thin is that the density of the symbols isn't picked up by the code, leading to the blank screen.
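For reference, this is roughly how the example sketch this is built on distributes the letters (a paraphrase, not my exact file): each of the 256 brightness levels looks up a character in letterOrder, so the string is meant to run from the sparsest glyph (dark pixels) to the densest (bright pixels):

// every brightness level 0..255 maps to a character in letterOrder;
// if all the characters are thin, bright areas have nothing dense to show
letters = new char[256];
for (int i = 0; i < 256; i++) {
  int index = int(map(i, 0, 256, 0, letterOrder.length()));
  letters[i] = letterOrder.charAt(index);
}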

Because I have come across this problem I am going to mess about a bit with the code and see what I come up with.


PFont font;
float fontSize = 3;

Increasing the font size to 3 (it is usually 1.5), you can see that the bigger letters cause a bigger confrontation with each other, dismantling the image and making it look quite pixelated (in a way).

I'm not sure if I really like this effect; it has quite a lot of lag, which is a shame, and it doesn't portray my concept to its full degree.


String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO";
char[] letters;


This is the reverse of the one before, having the particles on the inside. To do this I used what I learnt about the density of the letters and made the top line of letters all lines and the bottom line 'O's, creating what is above.

String letterOrder =
"------------------------------------------------" +
"0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO";
char[] letters;

Here I have kept the top line of text, which has worked out quite well for me; the way the pixels are reacting to my body shape and the outside is very similar to one of the drawings I have.

Here I was trying to show the difference I want between the body and what's outside the body, and with the size of the text and the way the code is laid out, in the form of letter density order, the main focus has ended up being what's happening outside of the figure. Another thing I have noticed is that the shade black doesn't get any letters; it is a completely blank space when on camera. When I move my body the letters (particles) react in such a way that it shows what is happening when the motion takes place; even if it can't be seen, the camera is showing it, which is exactly my concept.

The Next Step

I want to make the particles clearer, giving the image more clarity and nicer imagery. My next step of experimentation is to mess about with the colour attributes and how the particles are represented.

The A to Zero Design Process

From the starting point my main objective is to achieve the best representation of what a particle system would look like by manipulating the code; here are some sketches of what I'm trying to show. What we have with the original piece is all the letters of the alphabet and punctuation creating the environment.

String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";
char[] letters;

The code above ^

I am going to experiment with these letters and change them into symbols like lines and dots, trying to portray the movement in the environment to the best of my ability. In the sketch above I have tried to show little circles which I can develop the code into; I've changed the code (see above) to all '0's to see what would happen.

String letterOrder =
"000000000000000000000000000000000000000000000000" ;
char[] letters;

As you can see, the results are not what I want: the '0's are not moving like particles and the image looks more like a stained-glass effect. My guess is that with only one character in letterOrder, every brightness level looks up the same glyph, so nothing varies except colour. The way I may fix this is to replace the '0' with different-sized round shapes, e.g. 'O' and 'o'.

The results made no difference, so I started playing around with the font size of the letters, replacing the code with this:

String letterOrder =
"0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO" ;
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.5;

The results –v– changed fontSize from '2' to '1.5'.

These are the results I'm looking for. What's great about this is the way the "particles" are knocking against each other, portraying the beginnings of my concept. The simple change in the size of the letters has made such a big difference that a static image has become alive.

The smaller the particles the more that are needed to fill out the image. The expression here relates to the concept, but there are a few problems.

The Problems

I'm not convinced by the way motion is being shown. Yes, it is the particle portrayal that I want, but there's something about the colour that I'm not really enjoying looking at. It's important that I get the colour the way I want it, so what I'm going to do is mess about with which colours are being used; the code I am going to manipulate is below –v–

pushMatrix();
for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];
  // Faster method of calculating r, g, b than red(), green(), blue()
  int r = (pixelColor >> 16) & 0xff;
  int g = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;

  // Another option would be to properly calculate brightness as luminance:
  // luminance = 0.3*red + 0.59*green + 0.11*blue
  // Or you could instead use red + green + blue, and make the values[] array
  // 256*3 elements long instead of just 256.
  int pixelBright = max(r, g, b);
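The comment above suggests an alternative I could try while experimenting with colour: weighting the channels by how bright they appear to the eye rather than taking the maximum. A one-line version of that, using the weights from the comment:

  // perceptual luminance instead of max(r, g, b)
  int pixelBright = int(0.3*r + 0.59*g + 0.11*b);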

As well as messing about with the colour I'm also going to see what the piece would look like if I were to use lines rather than circles. The reason I came to this was some of my sketch designs: I found that the way I was using my pen to draw the structure of people's bodies mimicked a sound wave. Let's check what this looks like.

You Live and You Learn

The Kinect, mentioned in the previous post as something which would be an absolute lifesaver: I managed to get my hands on one from the university. I was overjoyed that they had one and thought all my problems would be solved. I was 100% wrong. I just spent the last three hours typing away in Terminal and downloading different programs so that Processing could read the Kinect and use it as an input, only to find that it was NEVER going to work because the model number of the kit is too new. I needed an older model: mine is a 1473 and the one I need is a 1414.
As you can imagine this is quite frustrating, stunting the work I was going to create.

But no worries, I can still get round this; the whole Kinect idea is now dropped and I am just going to use a normal webcam. The downside is losing the Kinect's benefit: its ability to recognise depth and people. To save me having to write that code myself I will find another way round this.

What I’m going to do now

The job now is to find a fix for this problem. Having already thought about it whilst I slowly failed at Terminal, I will have to focus the camera on recognising movement rather than big shapes (like bodies). Below is an example of what I wanted to achieve with the Kinect camera: in one picture (the middle) the person's entire body is the blank space with the particles in motion around it; the other is the reverse of this, the body being the particles and the blank space being around it. I can still do a basic version of this, which is going to be much simpler than what I planned.

The Fix

Using:

  • Colour
  • Motion
  • Lighting

I will be able to adapt a similar, but completely different, interface.

Colour

  • Using the change of colours I will adapt the webcam code to be based on recognising colour. So imagine in the picture below the lines are red and the person's body is blue; when the blue overlaps the red you will see movement, relating to the way I wanted the particles to move. A rough sketch of the colour-picking idea is below.
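A minimal sketch of the colour side, assuming a standard webcam and the processing.video library (the threshold values are arbitrary guesses, not tuned):

import processing.video.*;

Capture video;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  video.loadPixels();
  loadPixels();
  int n = min(video.pixels.length, pixels.length);
  for (int i = 0; i < n; i++) {
    color c = video.pixels[i];
    // treat strongly red pixels as the tracked colour and blank out the rest
    boolean isRed = red(c) > 120 && red(c) > 1.5 * green(c) && red(c) > 1.5 * blue(c);
    pixels[i] = isRed ? color(255, 0, 0) : color(0);
  }
  updatePixels();
}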

Motion

  • With the particles being replaced by colour (making colour the main focus), the next key aspect is motion; it is important to make sure that motion is recognised when passers-by walk past. I will have to think about a way of the colour not being static, keeping it in a form which shows the world moving even when it appears not to be. See the frame-differencing sketch below.
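And a minimal sketch of the motion side, using simple frame differencing (an assumption about approach, not the code I ended up with): compare each pixel with the previous frame, and anything that changed enough counts as motion.

import processing.video.*;

Capture video;
int[] previousFrame;

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  previousFrame = new int[width * height];
}

void draw() {
  if (video.available()) video.read();
  video.loadPixels();
  loadPixels();
  int n = min(video.pixels.length, previousFrame.length);
  for (int i = 0; i < n; i++) {
    color current = video.pixels[i];
    // how much this pixel changed since the previous frame
    float diff = abs(brightness(current) - brightness(previousFrame[i]));
    pixels[i] = (diff > 30) ? color(255) : color(0);  // changed pixels show white
    previousFrame[i] = current;
  }
  updatePixels();
}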

Lighting

  • The use of lighting is something I could add to give this idea an extra sparkle, having different tones of colour to create a different perspective in the output; I will look into this more later.


Time for some more sketches 

 

S*** already hit the FAN, or the Mac

As mentioned before, I was aware that I MAY need a Kinect camera to run some of the code I wanted to experiment with, but it turns out you ACTUALLY do need a Kinect camera to run this code. Below is not the entire imported library example, just a bit of it, so you get an idea of what I am looking at:


import SimpleOpenNI.*;

SimpleOpenNI context;
float zoomF = 0.3f;
float rotX = radians(180); // by default rotate the whole scene 180deg around the x-axis,
                           // the data from OpenNI comes upside down
float rotY = radians(0);
PShape pointCloud;
int steps = 2;

void setup()
{
  size(1024, 768, P3D);

  //context = new SimpleOpenNI(this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  // disable mirror
  context.setMirror(false);

  // enable depthMap generation
  context.enableDepth();

  context.enableRGB();

  // align depth data to image data
  context.alternativeViewPointDepthToImage();
  context.setDepthColorSyncEnabled(true);

  stroke(255, 255, 255);
  smooth();
  perspective(radians(45),
              float(width)/float(height),
              10, 150000);
}

endShape();  // a stray line from the example's draw(), which I've omitted here

This is the response I receive when I activate the file:


“You know what that is…**** *******!”

That is a whole lot of red and not pretty at all. Knowing me, I never expected it to be easy, but when the code is already created and you just want to see what it looks like, you would think it would run pretty simply.

In true Lorimer style I am going to ignore all the red and look at the white, the key statement here being:

“Can’t init SimpleOpenNI, maybe the camera is not connected!”

The camera is clearly connected!! A MacBook Air worth almost a thousand pounds comes with an onboard webcam (it might as well come with a kettle and Twinings breakfast tea for that price), and it's telling me the camera isn't connected. I need to find out the problem. Be right back.

Later on….

So it's as I thought: I 100% need a Kinect camera.

The Kinect camera already has the preferences installed to work with these files; since it was created by Microsoft, its code is already written to notice people in the room so they can interact with games. Even so, I tried to channel the source to the webcam I rented from the university and also to the Mac's camera. Speaking to a couple of people on my course, I thought this would be quite simple.

Adding this to my void draw() section:

cam = new Capture(this, 320, 240, "FaceTime HD Camera", 30);

I thought this would work, but another error appeared explaining that there is no such thing as "cam". Reading this now you may be thinking to yourself that it's incredibly obvious why I couldn't fix this problem, which I do know now, but I kept on trying to find another way around it.
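For what it's worth, my understanding now (an assumption, since I didn't test this fix at the time) is that the "no such thing as cam" error just meant the variable was never declared; the capture examples declare it at the top of the sketch:

import processing.video.*;   // the video library that defines Capture

Capture cam;  // declaring 'cam' up here is what the error was asking for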

Looking at the "GettingStartedCapture" example the video library offers as a basic way to learn how a webcam works in Processing; code below:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);

  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    cam = new Capture(this, 640, 480);
    cam.start();
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an element
    // from the array returned by list():
    cam = new Capture(this, cameras[0]);
    // Or, the settings can be defined based on the text in the list
    //cam = new Capture(this, 640, 480, "Built-in iSight", 30);

    // Start capturing the images from the camera
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
  // The following does the same as the above image() line, but
  // is faster when just drawing the image without any additional
  // resizing, transformations, or tint.
  //set(0, 0, cam);
}

I thought that by copying and pasting this code into my first file and deleting the repeated parts it would solve the problem, and again I was wrong; the camera would activate for a split second and then deactivate. This was starting to get annoying, so I went for a chat with a couple of my course mates. It was mentioned that I could try to do this without the Kinect camera, but it would involve telling the code to ignore density and go straight for motion; I could imagine how I would do that.

For what was just an experiment I was wasting too much time on this particular code, so it's time to get on to other things.

Concepts…A designer who writes essays

Having just finished writing an essay for my 'Cross Media Creativity Perspective' unit, my brain is filled with what I would call 'MindBlowingChaos', or 'Concept Overload', working with 'Participatory Culture'.

Extract from my ‘Cross Media Creativity’ Essay – Focusing on the ‘Doctor Who’ Phenomenon

"Throughout this course my position in the cross-media sphere has been enhanced; coming into the perspective from a 'Digital Media Design' view, my knowledge of Cross Media Creativity has expanded to more than just the knowledge of tablets, computers and phones. Having been taught that cross-media creativity is more than just viewing media on a screen, I have learnt that re-inventing a franchise in your own way is part of this perspective. When thinking about productions like Doctor Who I would always consider the show to be running on its own power, the cast and crew creating the show in parallel with the audience's enjoyment of the phenomenon. This reflects an understanding of the power fluctuations between creator and consumer. With such a big phenomenon, cross-media creativity brings the audience and what they love watching together and offers opportunities to be part of it. This unit has impacted my future media consumption; a guilty pleasure of mine is watching fan-made Doctor Who trailers, and now when watching them I don't just view what someone has created; I consider why they have created it that way, what they are trying to say, and how this could affect the entirety of Doctor Who. In future projects I will always take into account how the work will be consumed and what production values may affect my inspirations. Throughout the unit the learning curve has been hard but enjoyable, giving me an eagerness to learn how much wider the scale of Cross Media Creativity goes."

Above is an example of my own interpretation of 'Participatory Culture', created by me in first year.

Taking this theoretical concept into context, I am going to analyse a way of 'maybe' incorporating it into my Processing piece: taking a phenomenon in the world and manipulating it into a work.

 

What is Processing?

Processing is an open source programming language and integrated development environment (IDE) built for the electronic arts, new media art, and visual design communities with the purpose of teaching the fundamentals of computer programming in a visual context, and to serve as the foundation for electronic sketchbooks.

https://processing.org
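To give a sense of how small a Processing sketch can be, here's a minimal example of my own (not from the site above): a circle that follows the mouse.

void setup() {
  size(400, 400);
}

void draw() {
  background(220);                   // clear each frame
  ellipse(mouseX, mouseY, 40, 40);   // the circle tracks the cursor
}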



The Process Compendium piece pictured here is an example of what can be created through the use of Processing. My knowledge being based in HTML/CSS, the potential to create something like it through a piece of code is quite overwhelming; being able to sit down and use a pencil to create a piece of art similar to it would have been 'common sense' to a person like me, but being able to use 'code' to write a work of art has opened my eyes to this.

Processing Spiral


Above is the result of the code…
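Since the code itself isn't shown here, this is a minimal reconstruction of the kind of repeating spiral I was playing with (my own sketch of the idea, not the exact file):

void setup() {
  size(400, 400);
  background(255);
  noFill();
  translate(width/2, height/2);   // spiral out from the centre
  beginShape();
  float r = 2;
  for (float a = 0; a < TWO_PI * 12; a += 0.05) {
    vertex(r * cos(a), r * sin(a));
    r += 0.12;  // the radius grows each step, tracing the spiral outward
  }
  endShape();
}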

Here I'm at a point where I am experimenting with the potential of what could come out of coding, hopefully leading to a point where I become inspired by something I would like to create.

Here I find the repetition of the spiral too simple; something so basic which hasn't taken me long to create doesn't satisfy what I want to do.

Moving on to something better, a bit more complex.

Now it’s time for 2nd Year

So it has taken me a solid two months to actually get to the point of being able to say:

“I am on the verge of being half towards catching up with all my work”

My website is at the point where I am happy to leave it for now and press on with what is important… second year of my DMD university course.

http://knowmanlorimer.co.uk/


Prepare yourself for some Design.