Final(ly) Evaluation

What I have Learnt

Throughout this project I have learnt the fundamentals of design iteration, moving from stage to stage: analyse, design and test. I have been reminded of the process I need to follow when designing a piece of work and documenting its progress. I have also learnt parts of the Processing language and how to manipulate existing works to portray the image I aspire to. These processes have widened my knowledge of design and given me an insight into the lengths I need to go to to reach a final piece that works; following the track I took, the development of lighting, motion and colour expanded into an understanding of spaces and social interactions. Drawing on the knowledge I already possessed from life experience, I combined it with what I was taught to create a piece of work which questions the motion of people and the motion of life (what happens around us as people).

What I have Achieved

I achieved the adaptation of a piece of Processing code which led to me creating a digital environment that portrays the idea that there is motion in everything, even if it can't be seen. I developed an "Almost Particle System" which represented a particle system, allowing passers-by to see the environment they stood in evolve into something which is constantly in motion. The system's main focus was on what was going on around people rather than solely on the person viewing it, and when a person made a movement the particles would react, showing that every action has a reaction and symbolising that there is always a cause behind an effect. I also achieved a new-found understanding of depth and concept, relating the two together because of how my concept was affected by the depth of the space I was using as well as by text size.

What Went Wrong

The biggest mistake I made was not taking into account how big the space in the foyer was; I had been doing all my experiments and designs in close proximity to myself, and when it came to setting up in Weymouth House the particles did not respond to the environment as well. The particles would not react with each other as well as when the camera was focused on something closer, and the code could not blank out an entire person when they walked past. This had a massive effect on the final piece as a whole, leading to me installing the system and changing the code as I went along, giving different interpretations of what could be used. The benefit of this was a wider knowledge of how the space was affected by the particles, giving me an insight into how cameras react to large and small spaces as well as to the motion of people walking by.

What I Would Change For Next Time

Next time I would write the particle system myself from scratch; at the point of deciding to write the system my knowledge was not strong enough to create it. I would analyse the space to a proper degree and begin experimenting either in the space I would be installing in or in a similar one. I would like the particle system I create to react to people when they walk on screen, first showing the whole person and then having them evolve into particles. I would also do wider research into other systems related to particles, giving myself a wider insight into what I could have adapted my skills to, leading to a better result.


The Line Between Two Designs

The next step is to try a different interpretation of what I am doing. Before I get really deep into the code I need to decide which direction I want to go in: either working with the particles as the plan has always been, or delving into something else, working with lines rather than dots or maybe even something completely different. Other options are numbers, or playing around with the size of the letters, not like before but in terms of absolutely massive or really tiny. I will have to consider whether lighting will have a big effect on the sizes and whether this could affect the distance the camera can see.

I'll just get straight into trying out the lines, code used below –v–

String letterOrder =
"-__--__==__--__==__--__==__--__==__--__==__--__==__--__==__--__==" ;
char[] letters;

Using equals signs and similar line shapes I have replaced all the zeros; here are the results –v–

Yes! That is just a black screen!

There is obviously a problem here….

Looking further into what's gone wrong here, and with no errors reported, the problem is density: the letters need to have a certain density for them to be placed onto the screen. The problem with using lines and letters which are very thin is that the density of the symbols isn't registered by the code, leading to the blank screen.
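To convince myself this density explanation holds up, here is the brightness-to-letter lookup from the sketch redone as plain Java (the class and method names are my own, not from the Processing file): with a charset made only of thin strokes, every brightness level lands on a near-invisible glyph, which is exactly the blank screen above.

```java
// Sketch of the brightness-to-glyph lookup used by the ASCII video code.
// Plain Java rather than Processing, so the mapping can be checked on its own.
public class GlyphLookup {

    // Build the 256-entry table: brightness i picks a character from
    // letterOrder, which is assumed to be sorted from sparse to dense.
    static char[] buildTable(String letterOrder) {
        char[] letters = new char[256];
        for (int i = 0; i < 256; i++) {
            int index = (int) (i / 256.0 * letterOrder.length());
            letters[i] = letterOrder.charAt(index);
        }
        return letters;
    }

    public static void main(String[] args) {
        // A charset of only thin strokes: every brightness level maps to a
        // near-invisible glyph, which is why the screen looked blank.
        char[] thin = buildTable("-__--__==__--__==");
        // A mixed charset: dark pixels get ' ', bright ones get dense glyphs.
        char[] mixed = buildTable(" .:oO@");
        System.out.println(thin[255]);   // still just a thin stroke
        System.out.println(mixed[0]);    // a space for black pixels
        System.out.println(mixed[255]);  // a dense glyph for white pixels
    }
}
```

So the code never "fails"; it just faithfully draws characters whose ink coverage is close to zero.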

Because I have come across this problem I am going to mess about a bit with the code and see what I come up with.


PFont font;
float fontSize = 3;

Increasing the font size to 3 (it is usually 1.5), you can see that the size of the letters causes them to crowd into each other, dismantling the image and making it look, in a way, quite pixelated.

I'm not sure I really like this effect; it has quite a lot of lag, which is a shame, and it doesn't portray my concept to its full degree.


String letterOrder =
" .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu";
char[] letters;

reversed particles

This is the reverse of the one before, having the particles on the inside. To do this I used what I learnt about density in the letters and made the top line of letters all lines and the bottom line all 'O's, creating what is above.

String letterOrder =
"------------------------------------------------";
char[] letters;

Here I have kept the top line of text, which has worked out quite well for me; the way the pixels are reacting to my body shape and to the outside is very similar to one of the drawings I have.

Here I was trying to show the difference I want between the body and what's outside the body, and with the size of the text and the way the code is laid out in the form of letter density order, it has resulted in the main focus being what's happening outside of the figure. Another thing I have noticed is that the shade black doesn't have any letters; it is a completely blank space on camera. When I move my body the letters (particles) react in such a way that they show what is happening when the motion takes place; even if it can't be seen, the camera is showing it, which is exactly what my concept is.

The Next Step

I want to make the particles clearer, giving the image more clarity. My next step of experimentation is to mess about with the colour attributes and how the particles are represented.

Ascii Art (Design)

Video ASCII art is a real-time post-processing effect that will transform any video into ASCII art. The effect is created in a shader and uses the KickJS engine. You can see the ASCII shader in action on…

Jan 25, 2015 19:15

This is a lovely example of what someone has done to adapt ASCII video so it can easily be used by other users: simply upload a video and it will be reproduced with letters, the main feature of ASCII art. When I look at this I get an impression of particles; the slight movements that the letters are making look like the way I want my particles to react in an image. The only problem is that I think the image is too clear here: you can see exactly what is happening, and I don't want that to be the case. I want people to react to what is being displayed and try to disrupt it.

Below is what I am working with; I have used my phone as an example of the way the camera and code react to the colours and change them into letters. I've added the code below.

Jan 25, 2015 19:38

/**
 * ASCII Video
 * by Ben Fry.
 *
 * Text characters have been used to represent images since the earliest computers.
 * This sketch is a simple homage that re-interprets live video as ASCII text.
 * See the keyPressed function for more options, like changing the font size.
 */

import processing.video.*;

Capture video;
boolean cheatScreen;

// All ASCII characters, sorted according to their visual density
String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.5;

void setup() {
  size(640, 480);

  // This is the default video input, see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, 160, 120);

  // Start capturing the images from the camera
  video.start();

  int count = video.width * video.height;

  font = loadFont("UniversLTStd-Light-48.vlw");

  // for the 256 levels of brightness, distribute the letters across
  // an array of 256 elements to use for the lookup
  letters = new char[256];
  for (int i = 0; i < 256; i++) {
    int index = int(map(i, 0, 256, 0, letterOrder.length()));
    letters[i] = letterOrder.charAt(index);
  }

  // current characters for each position in the video
  chars = new char[count];

  // current brightness for each point
  bright = new float[count];
  for (int i = 0; i < count; i++) {
    // set each brightness at the midpoint to start
    bright[i] = 128;
  }
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  background(0);

  pushMatrix();

  float hgap = width / float(video.width);
  float vgap = height / float(video.height);

  scale(max(hgap, vgap) * fontSize);
  textFont(font, fontSize);

  int index = 0;
  for (int y = 1; y < video.height; y++) {

    // Move down for next line
    translate(0, 1.0 / fontSize);

    pushMatrix();
    for (int x = 0; x < video.width; x++) {
      int pixelColor = video.pixels[index];
      // Faster method of calculating r, g, b than red(), green(), blue()
      int r = (pixelColor >> 16) & 0xff;
      int g = (pixelColor >> 8) & 0xff;
      int b = pixelColor & 0xff;

      // Another option would be to properly calculate brightness as luminance:
      // luminance = 0.3*red + 0.59*green + 0.11*blue
      // Or you could instead use red + green + blue, and make the letters[]
      // array 256*3 elements long instead of just 256.
      int pixelBright = max(r, g, b);

      // The 0.1 value is used to damp the changes so that letters flicker less
      float diff = pixelBright - bright[index];
      bright[index] += diff * 0.1;

      int num = int(bright[index]);
      text(letters[num], 0, 0);

      // Move to the next pixel
      index++;

      // Move over for next character
      translate(1.0 / fontSize, 0);
    }
    popMatrix();
  }
  popMatrix();

  if (cheatScreen) {
    //image(video, 0, height - video.height);
    // set() is faster than image() when drawing untransformed images
    set(0, height - video.height, video);
  }
}

/**
 * Handle key presses:
 * 'c' toggles the cheat screen that shows the original image in the corner
 * 'g' grabs an image and saves the frame to a tiff image
 * 'f' and 'F' increase and decrease the font size
 */
void keyPressed() {
  switch (key) {
    case 'g': saveFrame(); break;
    case 'c': cheatScreen = !cheatScreen; break;
    case 'f': fontSize *= 1.1; break;
    case 'F': fontSize *= 0.9; break;
  }
}
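One line worth pulling out is the flicker damping, `bright[index] += diff * 0.1;` — it's an exponential moving average, so each stored value eases toward the pixel's brightness over many frames instead of jumping. A plain-Java sketch of just that behaviour (class and method names are mine):

```java
// Standalone illustration of the flicker damping in the ASCII video sketch:
// current += (target - current) * 0.1 is an exponential moving average, so
// the stored brightness chases the pixel value gradually, frame by frame.
public class Damping {

    // Advance the smoothed value by one frame.
    static float step(float current, float target) {
        return current + (target - current) * 0.1f;
    }

    public static void main(String[] args) {
        float bright = 128;              // the sketch starts every cell at 128
        for (int frame = 0; frame < 60; frame++) {
            bright = step(bright, 255);  // pixel jumps to full brightness
        }
        // After 60 frames the value has closed most of the gap to 255 but
        // never overshoots -- letters change gradually instead of flickering.
        System.out.println(bright);
    }
}
```

A bigger factor than 0.1 would track the camera faster but flicker more; smaller would smear motion, which may actually suit the piece.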


This piece of code is the closest match to my general idea that I can find; the position I am in now is to take the sketches I have created and the concept I have and manipulate this piece of work into the form I want. Let's get drawing.

What Stage Am I At? (The Boring Part)

What has happened so far.

I’ve analysed a concept

What I want to portray with my digital environment piece is the concept of motion, not focusing solely on humans as the main thing which creates movement in the world. I want to bring to people's attention that, let's say when you move your arm (you can clearly see your arm moving), there is a reaction in the space where the motion is taking place. There is a reaction to everything that happens; even if you can't see it, it still exists.

I came to this idea because of the recent essay I wrote on 'Participatory Culture', where fans of a franchise take the franchise into their own hands and create a piece of work in their own image. The culture creates an action in response to something which already exists but may not be seen. This added to life experiences of everything you do having a reaction: whether it's not replying to a text or calling the wrong person, there is always a response somewhere to those causes.

I’ve analysed the space

In the foyer in Weymouth House we have been offered one of the screens to install our systems on; it is located in an eye-catching spot, so people will notice the system, and the camera will be facing where most of the action happens (where people walk through). Already knowing that we had been given access to this location, I conducted some research with the use of the "Independent Dorset" brief, allowing me to get an insight into how long people spend in the foyer and giving me a wider knowledge of how I should present my digital environment.

I’ve analysed Particle Systems

Taking something which already exists, particle systems, I did some research into systems which have already been written and how they look. This gave me some ideas, stretching from an environment which is 100% made out of particles to one where, when motion is detected, the particles respond with movement. With further research and advice from Liam, my seminar tutor, the complexity of writing a particle system was explained: it might be quite overwhelming and not work, but I really wanted to stick to the idea. This came back to the concept I have and the fact that I want this piece of work to be seen as art.

I analysed the possibilities of Particle Systems

Using research and my notebook I went into how I could take this possible particle system and create something of my own. Doing sketches and looking into examples which Processing supplies, like 'OpenCV' and 'Punktiert', I analysed how the imagery in particles is portrayed. The result was seeing how the particles were presented: some small and some big, and others defined by the way they move rather than by their dimensions, but all really quite beautiful. Some people even found ways of getting the particles to move in such a way that they created images, just through how the particles moved around the page. Moving forward, this took me to the point of wanting the camera to notice people rather than the entire space just being turned into particles. This is quite a tricky thing to succeed in, as the code is incredibly complex and I didn't know how to write it, but I recalled that a Kinect camera already has the programming installed to recognise people from their structure, and with the library called 'SimpleOpenNI' I could manipulate this.

The problems I confronted

I thought the possibilities of this going wrong were quite minor, but when it came to it, the code which had been written was specific to a Kinect camera rather than the Logitech HD camera I had rented. This was not a massive problem; I could just rent out a Kinect. Before trying this I tried to manipulate the code I already had so that the input would be either the webcam on my Mac or the webcam I had rented. After copying and pasting basic camera Processing code into the file, I got to the point where the camera would turn on but then deactivate itself, meaning I 100% had to use the Kinect. The Kinect did NOT work because its model number was too recent, so I had to abandon all the ideas I had.

How I have solved the problem

I did not have enough time to write a particle system with the Kinect camera, putting me in a position where I decided to create something called "An Almost Particle System": a piece of work which is inspired by and symbolises what a particle system looks like, but is in fact not one. Using inspiration from a work called 'AsciiVideo', I will create a piece which uses motion, colour and possibly light density to portray action inside the space. Enough said.

Now what?

The next blog posts are going to be the design process of my system.



I Found Something

Doing my sketches and scrolling through examples I have come across something.


Jan 24, 2015 15:02

I've hit a stroke of luck here. This piece, being one of the Processing examples, has all the aspects I am looking for:

Colour, Motion and Light

Here the colours are being replaced with 'letters', and they are hitting against each other to show constant reaction; this is the effect I want to try and portray (showing constant motion). From what I understand (what I see without looking at the code), the letters have slight tints dependent on the lighting in the room, but it's not a big feature; the biggest feature here is the letters acting as the image, very similar to the way particles would act.

I am going to develop these letters and see where they could go. The reason for this is its structured way of portraying what the camera can see; it's showing another way of seeing life, and to me it looks quite like a particle system, relating back to what I want my concept to be. I could try and adapt these letters into other formats; replacing them with little dots may give the best portrayal of particles.
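If I do swap the letters for dots, the letter-density lookup would become a brightness-to-diameter mapping instead. A rough plain-Java sketch of that idea (the names are mine, and in a Processing sketch the draw loop would then call `ellipse(x, y, d, d)` rather than `text(...)`):

```java
// One way to swap glyphs for dots: map a pixel's brightness to a dot
// diameter instead of a letter of a given visual density.
public class DotSize {

    // Map brightness 0..255 linearly onto a diameter between minD and maxD.
    static float diameter(int brightness, float minD, float maxD) {
        return minD + (maxD - minD) * (brightness / 255.0f);
    }

    public static void main(String[] args) {
        System.out.println(diameter(0, 1, 6));    // darkest pixel -> smallest dot
        System.out.println(diameter(255, 1, 6));  // brightest pixel -> biggest dot
    }
}
```

The appeal of this over letters is that dot size varies continuously, so the "particles" would swell and shrink smoothly as the light changes.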




You Live and You Learn

The Kinect, mentioned in the previous post as something which would be an absolute life-saver: I managed to get my hands on one from the university. I was overjoyed that they had one and thought all my problems would be solved. I was 100% wrong… I just spent the last three hours typing away in Terminal and downloading different programs so that Processing could read the Kinect and use it as an input, only to find that it was NEVER going to work because the model number of the kit is too new. I needed an older model: mine is a 1473 and the one I need is a 1414.
As you can imagine this is quite frustrating, putting a stop to the work I was going to create.

But no worries, I can still get round this; the whole Kinect idea is now dropped and I am just going to use a normal webcam. The downside is that the benefit of having the Kinect was its ability to recognise depth and people, saving me having to write the code. I will find another way round this.

What I’m going to do now

The job now is to find a fix for this problem. Already thinking about it whilst I slowly failed at Terminal, I will have to focus the camera on recognising movement rather than big shapes (like bodies). Below is an example of what I wanted to achieve with the Kinect camera. What you are looking at is one picture (the middle) having the person's entire body as the blank space with the particles in motion around it; the other is the reverse of this, the body being the particles and the blank space being around it. I can still do a basic version of this, which is going to be much simpler than what I planned.

The Fix


  • Colour
  • Motion
  • Lighting

I will be able to adapt a similar, but completely different, interface.


  • Using the change of colours I will adapt the webcam code to be based on recognising colour. So imagine in the picture below that the lines are red and the person's body is blue; when the blue overlaps the red you will see movement, relating to the way I wanted the particles to move.


  • Having replaced the particles with colour (making colour the main focus), the next key aspect is motion; it is important to make sure that motion is recognised when passers-by walk past. I will have to think about a way of the colour not being static, keeping it in a state which shows the world moving even when it is not.


  • The use of lighting is something I could add to give this idea an extra sparkle, having different tones of colour to create a different perspective in the output; I will look into this more later.
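The motion point above could be handled with simple frame differencing: compare each pixel's brightness with the one from the previous frame and flag the big jumps as movement. A rough plain-Java sketch of the idea (not code from my project; the names are mine):

```java
// Frame differencing: a pixel counts as "motion" when its brightness
// changed by more than a threshold since the previous frame. The threshold
// keeps ordinary camera noise from registering as movement.
public class MotionDiff {

    // Returns true for every pixel whose brightness moved more than threshold.
    static boolean[] changed(int[] prev, int[] curr, int threshold) {
        boolean[] moved = new boolean[prev.length];
        for (int i = 0; i < prev.length; i++) {
            moved[i] = Math.abs(curr[i] - prev[i]) > threshold;
        }
        return moved;
    }

    public static void main(String[] args) {
        int[] prev = {10, 10, 200, 200};
        int[] curr = {12, 90, 198, 40};   // pixels 1 and 3 jumped
        boolean[] m = changed(prev, curr, 30);
        System.out.println(m[0] + " " + m[1] + " " + m[2] + " " + m[3]);
    }
}
```

In a Processing sketch the two arrays would be the previous and current `video.pixels` brightnesses, and the flagged pixels are where the colour (or particles) would react.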

notes 3

Time for some more sketches 


S*** already hit the FAN, or the Mac

As mentioned before, I was aware that I MAY need a Kinect camera to run some of the code I wanted to experiment with, but it turns out you ACTUALLY do need a Kinect camera to run this code. Below is not the entire imported library, just a bit of it so you get an idea of what I am looking at;

import SimpleOpenNI.*;

SimpleOpenNI context;
float zoomF = 0.3f;
float rotX = radians(180); // by default rotate the whole scene 180deg around the x-axis,
                           // the data from openni comes upside down
float rotY = radians(0);
PShape pointCloud;
int steps = 2;

void setup()
{
  //context = new SimpleOpenNI(this,SimpleOpenNI.RUN_MODE_MULTI_THREADED);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  // disable mirror
  context.setMirror(false);

  // enable depthMap generation
  context.enableDepth();

  // align depth data to image data
  context.alternativeViewPointDepthToImage();
}

This is the response I receive when I run the file;

Screen Shot 2015-01-22 at 17.24.53

“You know what that is…**** *******!”

That is a whole lot of red and not pretty at all. I never expected it to be easy, but when the code is already written and you just want to see what it looks like, you would think it would work pretty simply.

In true Lorimer style I am going to ignore all the red and look at the white, the key statement here being;

“Can’t init SimpleOpenNI, maybe the camera is not connected!”

The camera is clearly connected!! A MacBook Air worth almost a thousand pounds comes with an onboard webcam (it might as well come with a kettle and Twinings breakfast tea for that price) and it's telling me the camera isn't connected… I need to find out the problem. Be right back.

Later on….

So it's as I thought: I 100% need a Kinect camera.

The Kinect camera already has the preferences installed to run the files; since it was created for Microsoft, its code is already written to notice people in the room so they can interact with games. Even so, I tried to channel the source to the webcam I rented from the university and also to the Mac's camera. Speaking to a couple of people on my course, I thought this would be quite simple;

Adding this to my void draw() section;

cam = new Capture(this, 320, 240, "FaceTime HD Camera", 30);

I thought this would work, but another error appeared explaining that there is no such thing as "cam". Reading this now, you may think it's incredibly obvious why I could not fix this problem, which I do know now, but I kept trying to find another way around it.

Looking at "GettingStartedCapture", which Processing offers as a basic example to learn how a webcam works, code below:


import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);

  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    cam = new Capture(this, 640, 480);
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an element
    // from the array returned by list():
    cam = new Capture(this, cameras[0]);
    // Or, the settings can be defined based on the text in the list
    //cam = new Capture(this, 640, 480, "Built-in iSight", 30);
  }

  // Start capturing the images from the camera
  cam.start();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
  // The following does the same as the above image() line, but
  // is faster when just drawing the image without any additional
  // resizing, transformations, or tint.
  //set(0, 0, cam);
}

I thought that by copying and pasting this code into my first file and deleting the repeated parts it would solve the problem, and again I was wrong; the camera would activate for a split second and then deactivate. This was starting to get annoying… so I went for a chat with a couple of my course mates. It was mentioned that I could try to do this without the Kinect camera, but it would involve telling the code to ignore density and go straight for motion, and I couldn't imagine how I would do that.

For what was just an experiment, I was wasting too much time on this particular code, so it's time to get on to other things.

Individual Analysis: Inspiration

What Inspires Me

These images above are the result of using a Kinect camera, exactly like the ones used on the Xbox. I see using these as more of a cheat because the camera is already programmed to detect bodies with the use of depth, having half my work already sorted out for me, but with further experimentation, and not actually owning a Kinect, I am finding it very difficult to manipulate the code so it will use my Mac webcam instead. Using "SimpleOpenNI", which is a library you can install in Processing, I was going to have a look at what some already-created code would look like and try to use that as a basis to create my almost "Particle System".

Why an almost Particle System?

To be perfectly honest, I have run out of time to write a particle system from scratch; my hand-in is next Monday and the time it would take to go through this process would probably hit a fortnight. No worries though, this is where the "Almost Particle System" comes in.

In the pictures above you can see the shape of a human body which has been developed into little lines/dots (quite like particles), and to me this is stunning. This is how I would draw a picture, and the code has developed what I would describe as a piece of art, and I love that. It also reminds me of other works I have seen;

'Punktiert' is another library I could develop from; these particles are beautiful: incredibly simple black-on-white little balls of different sizes working in correlation with each other to keep a constant movement around the canvas. This reminded me of something else I saw;

Looking at these three pieces, I hope you're getting an idea of where I want to go with the digital environment I want to complete. I have always found beauty in these little motions; there is so much character and life in the little creatures, and it's all created through the development of letters and numbers. Of course I'm going to create something based on these; they inspire me.

Where I could take this idea?

Adapting these ideas to a digital environment where people can interact through a webcam may work very well with the motion of particles around a body. Like the top two images, I could reverse what has been done and put the motion dots on the outside of the body rather than the inside; relating to the second image, the black particles would react to a person's body or hand or whatever as a negative space.

Let's get experimenting.

Going with the Flow

Back in my college days it was 100% coursework produced in a sketchbook, and I always used to start each brief with a flow chart, so here we go, a blast from the past.

My main objective here is to think of every angle which needs to be covered in the analysis stage of this project.

Particle Systems; where did they come from?

Focusing more on why the idea of particle systems came to me rather than where the actual thing originated from; you can search that on Google.

Why Particle Systems?

When the sun is shining through a window in a specific beam you can see the dust particles in the air. This is something I was brought up with at my parents' place, and I never really noticed the beauty of it until my friend Lawrence explained his thoughts on it, showing me that the slightest movement from any part of the room caused a reaction in the dust particles. This related back to the echo I believe is caused by every motion of man and woman in the world, leading me to want to investigate particle systems.

Where could this go?

More like 'Where do I want this to go?'. There are of course many options here and the possibilities are endless. To get me started I'm going to draw up some concept ideas and go from there.


These next steps require individual Analysis 

What inspires me?

Particle Systems of course, but this is where it has all come from –v–

What problems could I encounter?

Something other than Particle Systems?


Flow Chart of Thought Track  ^

A Particle System Considered an Original Idea

Here we have a particle system, not the simplest one I could find; the basic physics behind it are circles following the mouse around and dropping to the floor.

I've just had a meeting with Liam (my course lecturer). To be honest, he caught me out helping myself to the kettle on the third floor and asked the dreaded question "how is the project going?"… My response was along the lines of "I could not be any further from the answer you want right now". To my luck Liam has heard this excuse many a time and pushed for a meeting to discuss what I was going to do.

Starting focus: 'Particle Systems'. I knew where this was going because my friend Sam Jones had the same thought track, leading to the idea getting botched a few weeks ago, but I can be quite stubborn with my ideas and wanted to stick with it.


To create a particle system which, using a webcam, picks up the space people move in, acting as a negative space pushing particles (small dots) away into the space which has no movement.


This idea is supposed to symbolise that beings are not the only things moving through this world; every action we take leads to an echo or a response which we may not see. Even the movement of an arm causes some sort of motion or force we may not realise.


The idea came from personal experience; with the usual life stories, a realisation has to be made that everything you do has a reaction.

Below is a simple example of a particle system and part of the code. I have added part of the code so you can understand what I'm in the process of doing; I will explain the technical side later on.


// Particles, by Daniel Shiffman.

ParticleSystem ps;
PImage sprite;

void setup() {
  size(1024, 768, P2D);
  sprite = loadImage("sprite.png");
  ps = new ParticleSystem(10000);

  // Writing to the depth buffer is disabled to avoid rendering
  // artifacts due to the fact that the particles are semi-transparent
  // but not z-sorted.
  hint(DISABLE_DEPTH_MASK);
}

void draw() {
  background(0);
  ps.update();
  ps.display();
  text("Frame rate: " + int(frameRate), 10, 20);
}
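The physics described above (circles following the mouse and dropping to the floor) boils down to a position/velocity update with gravity added every frame. A stripped-down plain-Java version of that idea (my own sketch, not Shiffman's actual Particle class):

```java
// Minimal particle physics: each particle keeps a position and a velocity,
// gravity is added to the velocity every frame, and the velocity is then
// added to the position -- which is all "dropping to the floor" takes.
public class SimpleParticle {
    float x, y;    // position
    float vx, vy;  // velocity
    static final float GRAVITY = 0.1f;

    SimpleParticle(float x, float y, float vx, float vy) {
        this.x = x; this.y = y; this.vx = vx; this.vy = vy;
    }

    // One frame of motion: gravity pulls the velocity down,
    // the velocity moves the particle.
    void update() {
        vy += GRAVITY;
        x += vx;
        y += vy;
    }

    public static void main(String[] args) {
        SimpleParticle p = new SimpleParticle(0, 0, 1, 0);
        for (int frame = 0; frame < 10; frame++) p.update();
        // Constant drift to the right, accelerating fall downward.
        System.out.println(p.x + ", " + p.y);
    }
}
```

A full system is just an array of these, spawned at the mouse position and removed when they leave the screen.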