Ascii Art (Design)

Video ASCII art is a real-time post-processing effect that transforms any video into ASCII art. The effect is created in a shader and uses the KickJS engine. You can see the ASCII shader in action at http://www.kickjs.org/example/video_ascii_art/Video_Ascii_Art.html.

Jan 25, 2015 19:15

This is a lovely example of how someone has adapted ASCII video so it can easily be used by others: simply upload a video and it will be rendered in letters, the main feature of ASCII art. When I look at this I get an impression of particles; the slight movements the letters make are exactly how I want the particles in my image to react. The only problem is that I think the image is too clear here: you can see exactly what is happening, and I don't want that to be the case. I want people to react to what is being displayed and try to disrupt it.

Below is what I am working with. I have used my phone as an example of the way the camera and code react to the colours and change them into letters. I've added the code below.

Jan 25, 2015 19:38


/**
* ASCII Video
* by Ben Fry.
*
*
* Text characters have been used to represent images since the earliest computers.
* This sketch is a simple homage that re-interprets live video as ASCII text.
* See the keyPressed function for more options, like changing the font size.
*/

import processing.video.*;

Capture video;
boolean cheatScreen;

// All ASCII characters, sorted according to their visual density
String letterOrder =
  " .`-_':,;^=+/\"|)\\<>)iv%xclrs{*}I?!][1taeo7zjLu" +
  "nT#JCwfy325Fp6mqSghVd4EgXPGZbYkOA&8U$@KHDBWNMR0Q";
char[] letters;

float[] bright;
char[] chars;

PFont font;
float fontSize = 1.5;

void setup() {
  size(640, 480);

  // This is the default video input; see the GettingStartedCapture
  // example if it creates an error
  video = new Capture(this, 160, 120);

  // Start capturing the images from the camera
  video.start();

  int count = video.width * video.height;
  //println(count);

  font = loadFont("UniversLTStd-Light-48.vlw");

  // for the 256 levels of brightness, distribute the letters across
  // an array of 256 elements to use for the lookup
  letters = new char[256];
  for (int i = 0; i < 256; i++) {
    int index = int(map(i, 0, 256, 0, letterOrder.length()));
    letters[i] = letterOrder.charAt(index);
  }

  // current characters for each position in the video
  chars = new char[count];

  // current brightness for each point
  bright = new float[count];
  for (int i = 0; i < count; i++) {
    // set each brightness at the midpoint to start
    bright[i] = 128;
  }
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  background(0);

  pushMatrix();

  float hgap = width / float(video.width);
  float vgap = height / float(video.height);

  scale(max(hgap, vgap) * fontSize);
  textFont(font, fontSize);

  int index = 0;
  video.loadPixels();
  for (int y = 1; y < video.height; y++) {

    // Move down for next line
    translate(0, 1.0 / fontSize);

    pushMatrix();
    for (int x = 0; x < video.width; x++) {
      int pixelColor = video.pixels[index];
      // Faster method of calculating r, g, b than red(), green(), blue()
      int r = (pixelColor >> 16) & 0xff;
      int g = (pixelColor >> 8) & 0xff;
      int b = pixelColor & 0xff;

      // Another option would be to properly calculate brightness as luminance:
      // luminance = 0.3*red + 0.59*green + 0.11*blue
      // Or you could instead use red + green + blue, and make the values[] array
      // 256*3 elements long instead of just 256.
      int pixelBright = max(r, g, b);

      // The 0.1 value is used to damp the changes so that letters flicker less
      float diff = pixelBright - bright[index];
      bright[index] += diff * 0.1;

      fill(pixelColor);
      int num = int(bright[index]);
      text(letters[num], 0, 0);

      // Move to the next pixel
      index++;

      // Move over for next character
      translate(1.0 / fontSize, 0);
    }
    popMatrix();
  }
  popMatrix();

  if (cheatScreen) {
    //image(video, 0, height - video.height);
    // set() is faster than image() when drawing untransformed images
    set(0, height - video.height, video);
  }
}

/**
* Handle key presses:
* 'c' toggles the cheat screen that shows the original image in the corner
* 'g' grabs an image and saves the frame to a tiff image
* 'f' and 'F' increase and decrease the font size
*/
void keyPressed() {
  switch (key) {
    case 'g': saveFrame(); break;
    case 'c': cheatScreen = !cheatScreen; break;
    case 'f': fontSize *= 1.1; break;
    case 'F': fontSize *= 0.9; break;
  }
}


This piece of code is the closest match I can find to my general idea. The position I am in now is to take the sketches I have created and the concept I have, and manipulate this piece of work into the form I want it to take. Let's get drawing.

You Live and You Learn

The Kinect was mentioned in the previous post as something which would be an absolute life saver, and I managed to get my hands on one from the university. I was overjoyed that they had one and thought all my problems would be solved. I was 100% wrong. I just spent the last three hours typing away in Terminal and downloading different programs so that Processing could read the Kinect and use it as an input, only to find that it was NEVER going to work because the model number of the kit is too new. I needed an older model: mine is 1473 and the one I need is 1414.
As you can imagine this is quite frustrating, and it puts a real dent in the work I was going to create.

But no worries, I can still get round this. The whole Kinect idea is now dropped and I am just going to use a normal webcam. The downside is that the benefit of the Kinect was its ability to recognise depth and people, saving me from having to write that code myself; I will find another way round this.

What I’m going to do now

The job now is to find a fix for this problem. Having already thought about it whilst I slowly failed at Terminal, I will have to focus the camera on recognising movement rather than big shapes (like bodies). Below is an example of what I wanted to achieve with the Kinect camera. What you are looking at is one picture (the middle) where the person's entire body is the blank space and the particles are in motion around it; the other is the reverse of this, the body being the particles and the blank space being around it. I can still do a basic version of this, which is going to be much simpler than what I planned.

The Fix

Using:

  • Colour
  • Motion
  • Lighting

I will be able to adapt a similar, but completely different, interface.

Colour

  • Using the change of colours, I will adapt the webcam code to be based on recognising colour. Imagine in the picture below that the lines are red and the person's body is blue; when the blue overlaps the red you will see movement, relating to the way I wanted the particles to move. A rough sketch of this colour check follows below.
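
To pin the colour idea down, here is a rough sketch of the kind of check I mean, written in Processing like the ASCII code above. The target colour and the threshold are placeholder values I would have to tune against the real set-up, and the capture size is just the same low resolution used earlier.

import processing.video.*;

Capture video;
color target = color(0, 0, 255);   // stand-in for the "blue body" I described
float threshold = 100;             // how close a pixel must be to count as that colour

void setup() {
  size(640, 480);
  video = new Capture(this, 160, 120);
  video.start();
  noStroke();
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  background(0);
  video.loadPixels();
  for (int y = 0; y < video.height; y++) {
    for (int x = 0; x < video.width; x++) {
      color pix = video.pixels[y * video.width + x];
      // distance in RGB space between this pixel and the target colour
      float d = dist(red(pix), green(pix), blue(pix),
                     red(target), green(target), blue(target));
      if (d < threshold) {
        fill(255, 0, 0);   // close enough: mark it, this is where the overlap shows
      } else {
        fill(pix);
      }
      rect(x * 4, y * 4, 4, 4);   // scale the 160x120 capture up to the 640x480 window
    }
  }
}

Pixels close to the target colour get flagged in red so I can see exactly where the overlap happens; everything else keeps its own colour.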

Motion

  • With the particles now replaced by colour (making colour the main focus), the next key aspect is motion. It is important to make sure that motion is recognised when passers-by walk past; I will have to think about a way of keeping the colour from being static, keeping it in a state which shows the world moving even when it is not. A simple frame-differencing sketch follows below.
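
As a first pass at recognising motion without the Kinect, this is a small frame-differencing sketch in Processing: each new webcam frame is compared with the one before it, and any pixel whose brightness jumps past a threshold is treated as movement. The threshold is another placeholder value to tune on site.

import processing.video.*;

Capture video;
int[] previous;          // the previous frame's pixels
float threshold = 40;    // how big a brightness change counts as movement

void setup() {
  size(640, 480);
  video = new Capture(this, 160, 120);
  video.start();
  previous = new int[video.width * video.height];
  noStroke();
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  background(0);
  video.loadPixels();
  for (int y = 0; y < video.height; y++) {
    for (int x = 0; x < video.width; x++) {
      int i = y * video.width + x;
      // a big jump in brightness at this pixel means something moved here
      if (abs(brightness(video.pixels[i]) - brightness(previous[i])) > threshold) {
        fill(255);
        rect(x * 4, y * 4, 4, 4);
      }
      previous[i] = video.pixels[i];
    }
  }
}

Only the changed pixels get drawn, so a still scene stays black and anyone walking past lights up, which is the behaviour I want the colour to follow.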

Lighting

  • The use of lighting is something I could add to give this idea an extra spark, using different tones of colour to create a different perspective in the output; I will look into this more later. A quick tone-ramp test follows below.
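
I have not settled on anything here, but as a quick test of the tone idea, this little sketch maps every brightness level from 0 to 255 onto a blend between two colours and draws the result as a ramp. The two tones are arbitrary picks just to preview the effect; the same lerpColor() mapping could later replace the plain fill(pixelColor) call in the ASCII sketch above.

color dark = color(20, 0, 60);      // tone used for the darkest pixels
color light = color(255, 200, 80);  // tone used for the brightest pixels

void setup() {
  size(512, 100);
  noStroke();
}

void draw() {
  for (int b = 0; b < 256; b++) {
    // brightness 0..255 becomes a 0..1 blend amount between the two tones
    fill(lerpColor(dark, light, b / 255.0));
    rect(b * 2, 0, 2, height);
  }
}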

notes 3

Time for some more sketches