Final(ly) Evaluation

What I have Learnt

Throughout this project I have learnt the fundamentals of Design Iteration, moving from point to point: Analyse, Design and Test. I have been reminded of the process I need to follow when designing a piece of work and documenting its progress. I have also learnt parts of the Processing language and how to manipulate existing works to portray the image I aspire to. These processes have widened my knowledge of design processes and given me an insight into the lengths I need to go to to arrive at a final piece that works; following the track I took, the development of lighting, motion and colour expanded into an understanding of spaces and social interactions. Touching on the knowledge I already possessed from life experience, I brought together what I was taught to create a piece of work which questions the motion of people and the motion of life (what happens around us as people).

What I have Achieved

I achieved the adaptation of a piece of Processing code, which led to me creating a digital environment that portrays the idea that there is motion in everything, even if it can’t be seen. I developed an “Almost Particle System”, a representation of a particle system that allowed passers-by to see the environment they stood in evolve into something constantly in motion. The system focused mainly on what was going on around people rather than solely on the person viewing it, and when someone moved, the particles would react, showing that every action has a reaction and symbolising that there is always a cause behind an effect. I also achieved a new-found understanding of depth and concept, relating the two together because of how my concept was affected by the depth of the space I was using as well as the text size.

What Went Wrong

The biggest mistake I made was not taking into account how big the space in the foyer was; I had been doing all my experiments and designs in close proximity to myself, and when I set up in Weymouth House the particles did not respond to the environment as well. The particles would not react with each other as well as when the camera was focused on something closer, and the code could not blank out an entire person as they walked past. This had a massive effect on the final piece as a whole, leading to me installing the system and changing the code as I went along, giving different interpretations of what could be used. The benefit of this was a wider knowledge of how the space was affected by the particles, giving me an insight into how cameras react to large and small spaces, as well as to the motion of people walking by.

What I Would Change For Next Time

Next time I would write the Particle System myself from scratch; at the point of deciding to write the system, my knowledge was not sharp enough to create it. I would analyse the space to a proper degree and begin experimenting either in the space I would be installing in or in a similar one. I would like the Particle System I create to react to people when they walk on screen, first showing the whole person and then having them evolve into particles. I would also do wider research into other systems related to particles, giving myself a wider insight into what I could have adapted my skills to, leading to a better result.

Down in the Open Space

Here is the first image of a person walking by the system. As you can see, the particles are constantly moving, which is what I wanted to portray: we are not the only things moving in the world, and even if you can’t see something move, that doesn’t mean it has no response; the camera is showing this.

The problem here is the way the particles are responding to each other; because the camera has to pick up a much bigger space, there aren’t enough particles to cause a big enough response to react to the man walking by. This can easily be fixed by adding more particles.
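A rough check of the numbers here (my own back-of-envelope sketch, not something measured in the installation): in the ASCII sketch one character is drawn per captured pixel, so “adding more particles” effectively means capturing at a higher resolution than the 160×120 used here. The `particleCount` helper below is just illustrative:

```java
public class ParticleCount {
    // One character ("particle") is drawn per captured pixel,
    // so the grid size comes straight from the capture resolution.
    static int particleCount(int videoWidth, int videoHeight) {
        return videoWidth * videoHeight;
    }

    public static void main(String[] args) {
        int current = particleCount(160, 120);  // resolution used in the sketch
        int doubled = particleCount(320, 240);  // hypothetical higher resolution
        System.out.println(current);  // 19200
        System.out.println(doubled);  // 76800, four times as many particles
    }
}
```

Doubling the capture width and height quadruples the particle count, at the cost of more work per frame.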

[Image: openspace1]

Here I flipped the code, adding lines, as seen previously in my designs. The benefit of this was the reaction the camera had towards the person on screen, as well as the particles responding much better to his movements, expressing what I wanted in a much better light.

Here is someone reacting to the system; I think it is the same person, who had noticed the work and come back to see what happens. He was eager to see how the particles reacted, doing star jumps and moving closer to the camera. This helped show the space being left blank where the person’s body was, while the particles around it stayed in motion.

[Image: openspace]

This is the biggest jump with the code: having made the particles bigger and also reversed the negative space, as seen in a previous experiment, I wanted to see how the bigger space would react. In fact it was not beneficial to the initial idea at all; the particles no longer have the motion that always shows movement, and the bold colours of the person’s body aren’t strong enough to draw attention towards the particles.

I would describe this as the worst outcome for the digital environment; it led to the camera lagging once again, and the particles were not responding enough to represent my concept, but the colours are nice and you can see the Tardis very well.

[Image: openspacebigparticles]

Overall it was a shame that there weren’t many people in the foyer at the time, because the response might have been more telling, but the way the almost particle system symbolised the space looked very nice and portrays my concept well, though not to the standard I want.

Edited code for second installation below –V–

import processing.video.*;

Capture video;
boolean cheatScreen;

String letterOrder =
  "------------------------------------------" +
  "0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.8;


void setup() {
  size(640, 480);

  filter(GRAY);
  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, 160, 120);

  // Start capturing the images from the camera
  video.start();

That One PARTicle That Just Can’t Keep Up

for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];

  int r = (pixelColor >> 16) & 0xff;
  int g = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;

Here I have made a slight change to the colour: I have changed the attributes from “r, g, b” to “h, s, b”. The reason for this is to add a clearer image to what is being projected, allowing people to see what is going on better. At the moment I feel the way the particles look isn’t clear enough and doesn’t show the blank spot where the person is well enough; changing these attributes will hopefully improve this.

for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];

  int h = (pixelColor >> 16) & 0xff;
  int s = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;
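Editor’s hedge on my own change here: renaming the variables from r, g, b to h, s, b doesn’t by itself convert the values, since the bit shifts still extract red, green and blue from the pixel. A true conversion (shown in plain Java rather than Processing, as a sketch of the idea, not what the installation ran) could use java.awt.Color.RGBtoHSB:

```java
import java.awt.Color;

public class HsbDemo {
    public static void main(String[] args) {
        int pixelColor = 0xFF8000; // orange-ish sample pixel

        // These shifts always yield red/green/blue, whatever the variables are named.
        int r = (pixelColor >> 16) & 0xff; // 255
        int g = (pixelColor >> 8) & 0xff;  // 128
        int b = pixelColor & 0xff;         // 0

        // An actual RGB -> HSB conversion (hue, saturation, brightness in 0..1):
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        System.out.printf("h=%.2f s=%.2f b=%.2f%n", hsb[0], hsb[1], hsb[2]);
    }
}
```

In Processing itself the equivalent route would be colorMode(HSB) or the hue()/saturation()/brightness() functions.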

I am also changing the font size again, this time to 2.5.

[Image: colourchangep]

As you can see, I have encountered a little problem.

[Image: theproblem]

The problem I have encountered here is that the camera cannot keep up with my movements. This is a big deal because of the pace people walk through Weymouth House; they will not notice that their bodies are being blanked out. Trying to fix this problem, I added in a frame rate command:

frameRate(60);

This had no effect on the work; nothing changed, so I fluctuated between higher and lower frame rates, and this still made no difference. So I tried cutting the amount of letters (particles) in half to reduce the collisions between them; again, this made no difference. This problem is a tricky one to figure out, so I’m going to move from W119 and head down to the walking space and see if that changes anything.
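One suspicion worth recording (an assumption on my part, not something verified at the time): the sluggishness may come less from the frame rate than from the sketch’s damping line, bright[index] += diff * 0.1, which only moves each letter a tenth of the way toward the new brightness every frame. A quick sketch of how many frames that smoothing takes to mostly catch up with a sudden change:

```java
public class DampingLag {
    // Frames until the smoothed value covers 90% of a sudden brightness jump.
    static int framesToCatchUp(double damping) {
        double value = 0, target = 255;
        int frames = 0;
        while (value < 0.9 * target) {
            value += (target - value) * damping; // same update rule as the sketch
            frames++;
        }
        return frames;
    }

    public static void main(String[] args) {
        System.out.println(framesToCatchUp(0.1)); // 22 frames: noticeable lag
        System.out.println(framesToCatchUp(0.5)); // 4 frames: much snappier
    }
}
```

Raising the 0.1 factor would make the letters track a passer-by faster, at the cost of more flicker.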

[Image: openspace]

What you’re seeing here is the first view of the production in the Weymouth House walk-through: me stepping away from the webcam to see if it is clear that I am standing there. The lag on the image isn’t as bad as it was up in W119, which could be due to the wider area of space. I did also reduce the size of the particles to allow more space for movement, hence the dark spots.

I’ve Made A Decision

I’ve decided to use more than one of the code manipulations in this space. Considered as further experimentation, I am going to frame it as: “which code reacts better to the bigger environment?”

The A to Zero Design Process

From the starting point my main objective is to achieve the best representation of what a particle system would look like by manipulating the code; here are some sketches of what I’m trying to show.

[Image: 10933874_10204886313190887_4319158734198323611_n-2]

What we have with the original piece is all the letters of the alphabet and punctuation creating the environment:

String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";
char[] letters;

The code above ^

I am going to experiment with these letters and change them into symbols like lines and dots, trying to portray the movement in the environment to the best of my ability. In the sketch above I have tried to show little circles which I can develop the code into; I’ve changed the code (see below) to use ‘0’ to see what would happen.

String letterOrder =
  "000000000000000000000000000000000000000000000000";
char[] letters;

[Image: 00000]

As you can see, the results are not what I want: the ‘0’s are not moving like particles, and the image looks more like a stained-glass effect. The way I may fix this is by replacing the ‘0’ with different-sized round shapes, e.g. ‘O’ and ‘o’.
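Looking back at the code, the stained-glass effect makes sense (my reading of it, at least): the sketch builds a 256-entry lookup mapping brightness to a character from letterOrder, so a string of identical characters collapses every brightness level to the same glyph, and only the fill colour varies. A minimal sketch mirroring that setup() loop:

```java
public class LetterLookup {
    // Mirrors the sketch's setup(): spread letterOrder across 256 brightness levels.
    static char[] buildLookup(String letterOrder) {
        char[] letters = new char[256];
        for (int i = 0; i < 256; i++) {
            int index = i * letterOrder.length() / 256; // same as int(map(i, 0, 256, 0, len))
            letters[i] = letterOrder.charAt(index);
        }
        return letters;
    }

    // How many different glyphs can actually appear on screen.
    static long distinctGlyphs(String letterOrder) {
        return new String(buildLookup(letterOrder)).chars().distinct().count();
    }

    public static void main(String[] args) {
        System.out.println(distinctGlyphs("000000000000000000000000000000000000000000000000")); // 1
        System.out.println(distinctGlyphs("0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO")); // 3
    }
}
```

With a mixed string like “0oO…”, small brightness flickers flip a cell between ‘0’, ‘o’ and ‘O’, which is exactly what reads as motion.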

The results made no difference, so I started playing around with the font size of the letters, replacing the code with this:

String letterOrder =
  "0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO0oO";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.5;

The results –v– I changed fontSize from ‘2’ to ‘1.5’.

[Image: 0oO]

These are the results I’m looking for. What’s great about this is the way the “particles” are knocking against each other, portraying the beginnings of my concept. The simple change in the size of the letters has made such a big difference that a static image has become alive.

The smaller the particles the more that are needed to fill out the image. The expression here relates to the concept, but there are a few problems.
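My reading of the draw() geometry, to put some numbers on this (an inference from the code, so treat it as a sketch): with 640×480 output and 160×120 capture, each character cell is a fixed 4 pixels, while the glyph itself is drawn at roughly 4 × fontSize² pixels, since text of size fontSize sits under a scale of 4 × fontSize. So fontSize 2 gives ~16 px glyphs overlapping a 4 px grid, and 1.5 gives ~9 px, which would explain both the lively jostling and the darker gaps at smaller sizes.

```java
public class GlyphGeometry {
    static final int OUT_W = 640, CAP_W = 160; // the sketch's output and capture widths

    // Per-character advance: scale(gap * fs) then translate(1.0 / fs) -> gap pixels.
    static double cellPixels() {
        return (double) OUT_W / CAP_W; // 4 px, independent of fontSize
    }

    // Drawn glyph size: text at fs points under a scale factor of gap * fs.
    static double glyphPixels(double fontSize) {
        return cellPixels() * fontSize * fontSize;
    }

    public static void main(String[] args) {
        System.out.println(cellPixels());     // 4.0
        System.out.println(glyphPixels(2.0)); // 16.0: glyphs overlap the 4 px grid
        System.out.println(glyphPixels(1.5)); // 9.0: less overlap, more dark gaps
    }
}
```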

The Problems

I’m not convinced by the way motion is being shown. Yes, it is the particle portrayal that I want, but there’s something about the colour that I’m not really enjoying looking at. It’s important that I get the colour the way I want it, so what I’m going to do is mess about with which colours are being used; the code I am going to manipulate is below –v–

pushMatrix();
for (int x = 0; x < video.width; x++) {
  int pixelColor = video.pixels[index];
  // Faster method of calculating r, g, b than red(), green(), blue()
  int r = (pixelColor >> 16) & 0xff;
  int g = (pixelColor >> 8) & 0xff;
  int b = pixelColor & 0xff;

  // Another option would be to properly calculate brightness as luminance:
  // luminance = 0.3*red + 0.59*green + 0.11*blue
  // Or you could instead use red + green + blue, and make the values[] array
  // 256*3 elements long instead of just 256.
  int pixelBright = max(r, g, b);
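The comment in that snippet hints at a more perceptually accurate brightness. A quick comparison of the two formulas (my own sketch, using the weights from the comment) shows how differently they treat a saturated colour:

```java
public class BrightnessCompare {
    // Brightness as the sketch computes it.
    static int maxBrightness(int r, int g, int b) {
        return Math.max(r, Math.max(g, b));
    }

    // Weighted luminance from the comment: 0.3*red + 0.59*green + 0.11*blue.
    static int luminance(int r, int g, int b) {
        return (int) Math.round(0.3 * r + 0.59 * g + 0.11 * b);
    }

    public static void main(String[] args) {
        // Pure blue looks dark to the eye, but max() calls it full brightness.
        System.out.println(maxBrightness(0, 0, 255)); // 255
        System.out.println(luminance(0, 0, 255));     // 28
    }
}
```

Swapping in the luminance formula would pick denser or sparser letters in a way closer to how bright the colour actually looks.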

As well as messing about with the colour, I’m also going to see what the piece would look like if I were to use lines rather than circles. The reason I came to this was some of the sketch designs I created: I found that the way I was using my pen to draw the structure of people’s bodies was being shown with something mimicking a sound wave. Let’s check what this looks like.

Ascii Art (Design)

Video ASCII art is a real-time post-processing effect that will transform any video into ASCII art. The effect is created in a shader and uses the KickJS engine. You can see the ASCII shader in action at http://www.kickjs.org/example/video_ascii_art/Video_Ascii_Art.html.

[Image: Jan 25, 2015 19:15]

This is a lovely example of what someone has done to adapt ASCII video to be easily used by other users: simply upload a video and it will be reproduced with letters, the defining feature of ASCII art. When I look at this I get an impression of particles; the slight movements the letters are making look like the way I want the particles to react in an image. The only problem is that I think the image is too clear here: you can clearly see what is happening, and I don’t want that to be the case. I want people to react to what is being displayed and try to disrupt it.

Below is what I am working with; I have used my phone as an example of the way the camera and code react to the colours and change them into letters. I’ve added the code below.

[Image: Jan 25, 2015 19:38]


/**
 * ASCII Video
 * by Ben Fry.
 *
 * Text characters have been used to represent images since the earliest computers.
 * This sketch is a simple homage that re-interprets live video as ASCII text.
 * See the keyPressed function for more options, like changing the font size.
 */

import processing.video.*;

Capture video;
boolean cheatScreen;

// All ASCII characters, sorted according to their visual density
String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.5;

void setup() {
  size(640, 480);

  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, 160, 120);

  // Start capturing the images from the camera
  video.start();

  int count = video.width * video.height;
  //println(count);

  font = loadFont("UniversLTStd-Light-48.vlw");

  // for the 256 levels of brightness, distribute the letters across
  // an array of 256 elements to use for the lookup
  letters = new char[256];
  for (int i = 0; i < 256; i++) {
    int index = int(map(i, 0, 256, 0, letterOrder.length()));
    letters[i] = letterOrder.charAt(index);
  }

  // current characters for each position in the video
  chars = new char[count];

  // current brightness for each point
  bright = new float[count];
  for (int i = 0; i < count; i++) {
    // set each brightness at the midpoint to start
    bright[i] = 128;
  }
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  background(0);

  pushMatrix();

  float hgap = width / float(video.width);
  float vgap = height / float(video.height);

  scale(max(hgap, vgap) * fontSize);
  textFont(font, fontSize);

  int index = 0;
  video.loadPixels();
  for (int y = 1; y < video.height; y++) {

    // Move down for next line
    translate(0, 1.0 / fontSize);

    pushMatrix();
    for (int x = 0; x < video.width; x++) {
      int pixelColor = video.pixels[index];
      // Faster method of calculating r, g, b than red(), green(), blue()
      int r = (pixelColor >> 16) & 0xff;
      int g = (pixelColor >> 8) & 0xff;
      int b = pixelColor & 0xff;

      // Another option would be to properly calculate brightness as luminance:
      // luminance = 0.3*red + 0.59*green + 0.11*blue
      // Or you could instead use red + green + blue, and make the values[] array
      // 256*3 elements long instead of just 256.
      int pixelBright = max(r, g, b);

      // The 0.1 value is used to damp the changes so that letters flicker less
      float diff = pixelBright - bright[index];
      bright[index] += diff * 0.1;

      fill(pixelColor);
      int num = int(bright[index]);
      text(letters[num], 0, 0);

      // Move to the next pixel
      index++;

      // Move over for next character
      translate(1.0 / fontSize, 0);
    }
    popMatrix();
  }
  popMatrix();

  if (cheatScreen) {
    //image(video, 0, height - video.height);
    // set() is faster than image() when drawing untransformed images
    set(0, height - video.height, video);
  }
}

/**
 * Handle key presses:
 * 'c' toggles the cheat screen that shows the original image in the corner
 * 'g' grabs an image and saves the frame to a tiff image
 * 'f' and 'F' increase and decrease the font size
 */
void keyPressed() {
  switch (key) {
    case 'g': saveFrame(); break;
    case 'c': cheatScreen = !cheatScreen; break;
    case 'f': fontSize *= 1.1; break;
    case 'F': fontSize *= 0.9; break;
  }
}

 

This piece of code relates to my general idea better than anything else I have found. The position I am in now is to take the sketches I have created and the concept I have, and manipulate this piece of work into the form I want it to represent. Let’s get drawing.

What Stage Am I At? (The Boring Part)

What has happened so far.

I’ve analysed a concept

What I want to portray with my digital environment piece is the concept of motion, not focusing solely on humans being the main thing that creates movement in the world. I want to bring to people’s attention that, let’s say, when you move your arm (you can clearly see your arm moving), there is a reaction around the space where the motion takes place. There is a reaction to everything that happens; even if you can’t see it, it still exists.

I came to this concept because of the recent essay I have written on ‘Participatory Culture’, where fans of a franchise take the franchise into their own hands and create a piece of work in their own image. The culture creates an action in response to something which already exists, but may not be seen. This added to life experiences of everything you do having a reaction: whether it’s not replying to a text or calling the wrong person, there is always a response somewhere to the cause.

I’ve analysed the space

Using the foyer in Weymouth House, we have been offered one of the screens to install our systems on. It is located in an eye-catching spot, so people will notice the system, and the camera will be facing where most of the action happens (where people walk through). Already knowing we had been given access to this location, I conducted some research with the use of the “Independent Dorset” brief, allowing me to get an insight into how long people spend in the foyer and giving me a wider knowledge of how I will present my digital environment.

I’ve analysed Particle Systems

Starting from something which already exists, particle systems, I did some research into systems which have already been written and how they look. This gave me some ideas, stretching to an environment which is 100% made out of particles, where the particles respond with movement when motion is detected. With further research and advice from Liam, my seminar tutor, the complexity of writing a particle system was explained: it might be quite overwhelming and not work, but I really wanted to stick to the idea. This came back to the concept I have and the fact that I want this piece of work to be seen as art.

I analysed the possibilities of Particle Systems

Using research and my notebook, I went into how I could take this possible particle system and create something of my own, doing sketches and looking into examples available for Processing like ‘OpenCV’ and ‘Punktiert’, analysing how the imagery in particles is portrayed. This meant looking at how the particles were presented: some small and some big, others shown through the way they move rather than their dimensions, but all really quite beautiful. Some people even found ways of getting the particles to move in such a way that they created images, just through how the particles moved around the page. Moving forward, this took me to the point of wanting the camera to notice people rather than the entire space just being turned into particles. This is quite a tricky thing to succeed in, as the code is incredibly complex and I didn’t know how to write it, but I did recall that a Kinect camera already has the programming installed to recognise people from their structure, and with the library called ‘SimpleOpenNI’ I could manipulate this.

The problems I confronted

I thought the possibilities of this going wrong were quite minor, but when it came to it, the code which had been written was specific to a Kinect camera rather than the Logitech HD system I had rented. This was not a massive problem; I could just rent out a Kinect. Before trying this, I tried to manipulate the code I already had so that the input would be either the webcam on my Mac or the webcam I had rented. Copying and pasting basic camera Processing code into the file I already had, I got to the point where the camera would turn on, but then deactivate itself, leading to 100% having to use the Kinect. The Kinect did NOT work because its model number was too recent, so I had to abandon all the ideas I had.

How I have solved the problem

I did not have enough time to write a particle system with the Kinect camera, putting me in a position where I decided to create something called “An Almost Particle System”: a piece of work which is inspired by and symbolises what a particle system looks like, but is in fact not one. Using inspiration from a work called ‘AsciiVideo’, I will create a piece which uses motion, colour and possibly light density to portray action inside the space. Enough said.

Now what?

The next blog posts are going to be the design process of my system.

 

 

I Found Something

Doing my sketches and scrolling through examples I have come across something.

AsciiVideo

[Image: Jan 24, 2015 15:02]

I’ve hit a stroke of luck here. This piece, being one of the Processing examples, has all the aspects I am looking for:

Colour, Motion and Light

Here the colours are being replaced with ‘letters’, and they are hitting against each other to show constant reaction; this is the effect I want to try to portray (showing constant motion). From what I understand (what I see without looking at the code), the letters have slight tints dependent on the lighting in the room, but it’s not a big feature; the biggest feature here is the letters acting as the image, very similar to the way particles would act.

I am going to develop these letters and see where it could go, the reason being the structured way they portray what the camera can see; it’s showing another way of seeing life, and to me it looks quite like a particle system, relating back to what I want my concept to be. I could try to adapt these letters into other formats; replacing them with little dots may give the best portrayal of particles.

 

 

 

S*** already hit the FAN, or the Mac

As mentioned before, I was aware that I MAY need a Kinect camera to run some of the code I wanted to experiment with, but it turns out you ACTUALLY do need a Kinect camera for this code. Below is not the entire imported library example, just a bit of it, so you get an idea of what I am looking at:


import SimpleOpenNI.*;

SimpleOpenNI context;
float zoomF = 0.3f;
float rotX = radians(180);  // by default rotate the whole scene 180deg around the x-axis,
                            // the data from openni comes upside down
float rotY = radians(0);
PShape pointCloud;
int steps = 2;

void setup()
{
  size(1024, 768, P3D);

  //context = new SimpleOpenNI(this, SimpleOpenNI.RUN_MODE_MULTI_THREADED);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }

  // disable mirror
  context.setMirror(false);

  // enable depthMap generation
  context.enableDepth();

  context.enableRGB();

  // align depth data to image data
  context.alternativeViewPointDepthToImage();
  context.setDepthColorSyncEnabled(true);

  stroke(255, 255, 255);
  smooth();
  perspective(radians(45),
              float(width)/float(height),
              10, 150000);
}

endShape();

This is the response I receive when I run the file:

[Image: Screen Shot 2015-01-22 at 17.24.53]

“You know what that is…**** *******!”

That is a whole lot of red and not pretty at all. I never expected it to be easy, but when the code is already written and you just want to see what it looks like, you would think it would run pretty simply.

In true Lorimer style I am going to ignore all the red and look at the white, the key statement here being:

“Can’t init SimpleOpenNI, maybe the camera is not connected!”

The camera is clearly connected!! A MacBook Air worth almost a thousand pounds comes with an onboard webcam (it might as well come with a kettle and Twinings breakfast tea for that price), and it’s telling me the camera isn’t connected… I need to find out the problem. Be right back.

Later on….

So it’s as I thought: I 100% need a Kinect camera.

[Image: comics-extralife-kinect-xbox-one-716039]

The Kinect camera already has the preferences installed to work with these files; since it was created for Microsoft, its software is already built to notice people in the room so they can interact with games. Even so, I tried to channel the source to the webcam I rented from the university and also to the Mac’s camera. Speaking to a couple of people on my course, I thought this would be quite simple.

Adding this to my void draw() section:

cam = new Capture(this, 320, 240, "FaceTime HD Camera", 30);

I thought this would work, but another error appeared explaining that there is no such thing as “cam” (most likely because the sketch never declared a “Capture cam;” variable). Reading this now, you may be thinking to yourself that it’s incredibly obvious that I could not fix the problem this way, which I do know now, but I kept on trying to find another way around it.

Looking at the “GettingStartedCapture” example the video library offers as a basic way to learn how a webcam works in Processing; code below:

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);

  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    cam = new Capture(this, 640, 480);
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an element
    // from the array returned by list():
    cam = new Capture(this, cameras[0]);
    // Or, the settings can be defined based on the text in the list
    //cam = new Capture(this, 640, 480, "Built-in iSight", 30);

    // Start capturing the images from the camera
    cam.start();
  }
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  image(cam, 0, 0);
  // The following does the same as the above image() line, but
  // is faster when just drawing the image without any additional
  // resizing, transformations, or tint.
  //set(0, 0, cam);
}

I thought that by copying and pasting this code into my first file and deleting the repetitive parts it would solve the problem, and again I was wrong; the camera would activate for a split second and then deactivate. This was starting to get annoying… So I went for a chat with a couple of my course mates. It was mentioned that I could try to do this without the Kinect camera, but it would involve telling the code to ignore depth and go straight for motion, and I couldn’t imagine how I would do that.

For just an experiment, I was wasting too much time on this particular code, so it’s time to get on with other things.

Individual Analysis: Inspiration

What Inspires Me

[Image: AmnonOwed-KinectPhysics-03-640x360]
[Image: AmnonOwed-KinectPhysics-04-640x360]

These images above are the result of using a Kinect camera, exactly like the ones used with the Xbox. I see using these as more of a cheat, because the camera is already programmed to detect bodies with the use of depth, having half my work already sorted out for me; but with further experimentation, and not actually owning a Kinect, I am finding it very difficult to manipulate the code so it will use my Mac’s webcam instead. Using “SimpleOpenNI”, a library you can install in Processing, I was going to have a look at what some already-written code would look like and try to use that as a basis to create my almost “Particle System”.

Why an almost Particle System?

To be perfectly honest, I have run out of time to write a Particle System from scratch; my hand-in is next Monday, and the time it would take to go through this process would probably hit a fortnight. No worries though, this is where the “Almost Particle System” comes in.

In the pictures above you can see the shape of a human body which has been developed into little lines/dots (quite like particles); to me this is stunning. This is how I would draw a picture, and the code has developed it into what I would describe as a piece of art, and I love that. It also reminds me of other works I have seen:

[Image: Particleswhiteback]

‘Punktiert’ is another library I could develop from; these particles are beautiful: incredibly simple black-on-white little balls of different sizes, working in correlation with each other to keep a constant movement around the canvas. This reminded me of something else I saw:

https://www.youtube.com/watch?v=UfbPlfgzhDI

Looking into these three pieces, I’m hoping you’re getting an idea of where I want to go with the digital environment I want to complete. I have always found beauty in these little motions; there is so much character and life in the little creatures, and it’s all created through the development of letters and numbers. Of course I’m going to create something based on these; they inspire me.

Where I could take this idea?

Adapting these ideas to a digital environment where people can interact through a webcam, this idea may work very well with the motion of particles around a body. Like the top two images, I could reverse what has been done and put the moving dots on the outside of the body rather than the inside; as in the second image, the black particles would react to a person’s body, or hand, or whatever, as a negative space.

Let’s get experimenting.

Concepts…A designer who writes essays

Having just finished writing an essay for my ‘Cross Media Creativity’ perspective, my brain is filled with what I would call ‘MindBlowingChaos’, ‘Concept Overload’; working with ‘Participatory Culture’.

Extract from my ‘Cross Media Creativity’ Essay – Focusing on the ‘Doctor Who’ Phenomenon

“Through out this course my position in the cross media sphere has been enhanced; coming into the perspective from a ‘Digital Media Design’ view my knowledge of Cross Media Creativity has expanded to more than just the knowledge of tablets, computers and phones. Being taught that cross media creativity is more that just viewing media on a screen I have learnt that the use of re-inventing a franchise in your own way is part of this perspective. When thinking about productions like Doctor Who I would always consider the show to be running on its own power; the cast and crew creating the show in parallel with the audience enjoyment in the phenomena. This reflects an understanding of the power fluctuations between the creator and consumer. This having a huge role to play with such a big phenomena cross media creativity brings the audience and what they love watching together and allows opportunities to be part of it. This unit has impacted on my future media consumption; a guilty pleasure of mine is watching Doctor Who fan made trailers and now when watching them I don’t just view what someone has created; I view why they have created it that way, what are they trying to say and how could this effect the entirety of Doctor Who. To come with future projects I will now always take into account how it will be consumed and what production values may effect my inspirations. Throughout the unit the learning curve has been hard, but enjoyable giving me an eagerness to learn how much wider the scale goes with Cross Media Creativity.”

Above is an example of my own interpretation of ‘Participatory Culture’, created by me in first year.

Taking this theoretical concept into context, I am going to analyse a way of ‘maybe’ incorporating it into my Processing piece: taking a phenomenon in the world and manipulating it into a work.