It uses text messages to generate pseudo-random combinations of particle colour, background colour, size and speed to change the animation. It also uses a dictionary of colour words found via XKCD to find matching words in the text messages and use them to colour the particles.
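For anyone curious, the colour-word lookup can be sketched in a few lines of Python. The `COLOUR_WORDS` dictionary below is a three-entry stand-in for the full XKCD colour-survey word list, and the function name is mine, not anything from the actual piece:

```python
# Minimal sketch of the colour-word lookup: scan an incoming text
# message for known colour names and return their RGB values.
# This tiny dictionary is a stand-in for the full XKCD colour list.
COLOUR_WORDS = {
    "red": (229, 0, 0),
    "blue": (3, 67, 223),
    "green": (21, 176, 26),
}

def colours_in_message(message):
    """Return RGB triples for any colour words found in the text."""
    words = message.lower().split()
    return [COLOUR_WORDS[w] for w in words if w in COLOUR_WORDS]
```

Particles could then be tinted with whatever `colours_in_message` returns, falling back to the pseudo-random scheme when a message contains no colour words.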
A few months ago I began playing with manually corrupting jpeg files to see what kinds of artefacts I could create. I selected an image, compressed it down to a very small size (so I could easily manipulate large chunks of the data), opened it in a text editor (I like SubEthaEdit and TextMate) and added random text, copied, pasted and generally shuffled the data, occasionally saving as new files.
Above is a QuickTime movie which animates through 12 of the resulting jpeg files. Click it to stop if it’s giving you a headache. I re-compressed the jpegs just to be sure they wouldn’t crash QuickTime Player. Manually introducing errors and noise into files and then playing them is one of those “make sure you save any important files you have open” situations, as things can grind to a halt.
I was playing with these images at Plug N Play at Kent St on Thursday night. I mentioned that I was planning on writing a php script which would similarly screw with jpeg images online and Sean told me about glitchbrowser.com.
From the site:
Computers are not allowed to make mistakes.
The glitch browser represents a deliberate attempt to subvert the
usual course of conformity and signal perfection. Information packets
which are communicated with integrity are intentionally lost in
transit or otherwise misplaced and rearranged. The consequences of
such subversion are seen in the surprisingly beautiful readymade
visual glitches provoked by the glitch browser and displayed through
our forgiving and unsuspecting web browsers.
This work was produced for New Langton Arts Packets programme,
by Dimitre Lima, Tony Scott and Iman Moradi.
Here’s a short excerpt from the video portrait I produced for Boxed as part of the Short & Sweet short theatre festival.
I used the excellent Video Gogh plugin for Final Cut Pro to generate a number of layers of video with different levels of paint effect, and Quartz Composer to composite the layers together in real time with masks.
I am currently working on the design and production for a ‘video portrait’ which will be projected on set as part of the play Boxed directed by Simon Gorman.
The play is part of the Short & Sweet short theatre festival at The Arts Centre.
The festival, advertised as ‘The Tropfest of Theatre’, features 10 plays per night, all of which are 10 minutes or less in duration. We’re in the first week of the top 30 plays, beginning tonight, Wednesday 23rd of November, and playing through until Saturday.
“This is brush, a small Max/MSP/Jitter patch that I’ve compiled as a standalone application. It’s aimed at visualists who are just starting out and looking for software to play with. Programmatically, it’s very simple. Video from a live camera (or a movie file) is fed back on itself so that light stays on the canvas (screen). Thus, you can paint with the light in the room you’re in. Decay (fade time), tolerance (lower luma threshold) and color inversion are adjustable so you can adapt your performance to any lighting conditions.”
This is a great little piece of software, what I would call a Vidget: a small-scale application which lets you manipulate digital media in real time for improvised performance. It is very easy to use and entertaining to play with.
Click image to play (28Kb Quartz Composition in QuickTime wrapper, requires MacOS 10.4)
MPEG 4 – h264 version (1.9Mb) Requires Quicktime 7 or alternative player such as VLC. Quartz Composition file (64Kb) – Open in Quartz Composer to see how it works or move to ~/Library/Screen Savers/ to use as a screen saver.
The site is powered by WordPress and makes extensive use of customised templates, css and custom fields. The design is by Nicole Dominic, sliced up and css/xhtml-ised by me.
One of the main functions of the website is to present an easily update-able show-reel of the company’s work (primarily TV ads). Some of the tricks I discovered whilst making the site may be of interest to the videoblogging crew or others wanting to use WordPress as a content management system for video. The next step is going to be working out how to customise the site’s RSS feeds to include this information.
For each of the ads I make a regular post, storing a lot of information in custom fields, such as: the url of a thumbnail image; the url of a poster movie and the url of the movie itself. I store this information here rather than in the actual post text in order to separate content from styling and presentation, allowing me to refer to the same clip in a number of different ways from different areas of the site.
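As a sketch of the idea (the real templates are WordPress PHP, and the field names below are invented for illustration), the separation of clip data from presentation looks something like this:

```python
# One metadata record per post drives every presentation of a clip.
# Field names and URLs here are illustrative, not the real ones.
post_meta = {
    "thumbnail_url": "http://example.com/ad-thumb.jpg",
    "poster_url": "http://example.com/ad-poster.jpg",
    "movie_url": "http://example.com/ad.mov",
}

def render_thumbnail(meta):
    """A listing page might only want the thumbnail."""
    return '<img src="%s" alt="clip thumbnail" />' % meta["thumbnail_url"]

def render_player(meta):
    """Elsewhere, the same record can drive a full QuickTime embed,
    showing the poster frame until the movie itself is requested."""
    return ('<embed src="%s" href="%s" type="video/quicktime" />'
            % (meta["poster_url"], meta["movie_url"]))
```

Because the URLs live in one record rather than in the post body, changing a clip means editing one set of fields, and every template that refers to it updates automatically.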
I recently completed work on an experimental internet radio program with Hannah Miller and Kate Eccles. Hannah and Kate are final year Media students at RMIT majoring in Radio and TV production. My role in the production was to take various pieces of audio, video, still images and text, and create an interface which would allow the user to mix and match the elements in an exploratory, non-linear way.
The result of this work is a program called “Inspiration”, which features interviews, live footage, sound recordings and lyrics from Reset://0, a Japanese-influenced Melbourne band.
The program was authored in LiveStage Pro and is a Quicktime file that consists of a sprite track, several movie tracks and a text track which features lyrics. The above image shows the partially completed work as I was assigning sounds to various non-square shaped roll-over buttons. The idea was that rather than presenting the user with a list of options, or even a grid of non-labelled options, the work should encourage the user to explore the screen space with the cursor, almost like they are feeling their way in the dark. To give the users some feedback, and a little direction as to where might be a good place to explore, I used Hannah’s fire twirling image as a guide. I placed invisible sprites over the background image which reacted to the “MouseEnter” event, triggering sounds which played in specific movie tracks, and changing the sprite image for the background so that different parts of the fire twirling would be illuminated and highlighted.
You can view the completed work in context on the interadio site. Or, to go straight to Inspiration (requires Quicktime, a fairly recent computer and a decent broadband connection – 15Mb)
This is the latest version of my interactive networked video project.
Click on the image to load Vidget 3 in Quicktime Player. (It is quite small but very processor intensive – especially as it first loads)
This version is a mix between my first vidget, which featured a text-based interface for mixing up to three video clips on top of each other, and my Quicktime Flickr photo viewer, which let you search for and view images based on a search word.
The interface has been redesigned and now features a grid of 25 draggable images which represent video clips. These may be dragged and dropped onto three coloured ‘layers’. The blue layer is the uppermost with green below and red at the bottom. Each of these layers has a number of ‘graphics modes’. Like Photoshop layers, these may be combined in a number of modes, ranging from fully transparent to fully opaque. Each of these layers also has a number of playback controls which allow the user to play the clip faster or slower, forwards or backwards and step through frame by frame.
To the right of the three colour layers and their controls is a small white text field. This allows the user to search for images from Flickr. The ten most recent pictures tagged with the search word entered are loaded as thumbnails below. These thumbnails may be dragged and dropped onto any of the layers and combined with other moving and still images.
I have resized the output movie area so that everything fits on one screen.
Behind the scenes, the vidget has also been greatly updated. Rather than being limited to a set number of video clips determined at the time of authoring, this version dynamically loads all content including thumbnails. The names of these files are drawn from an XML file. This file may be updated with a simple text editor to add or delete more clips. The movie automatically loads the first 25 thumbnails from the XML list as it initially loads but may load the next 25, and the following 25 via the 1, 2 and 3 buttons at the top right of the controls.
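The actual XML schema isn’t published here, so the element names below are invented, but the load-25-at-a-time behaviour behind the 1, 2 and 3 buttons might be sketched like this:

```python
import xml.etree.ElementTree as ET

# Invented schema: a flat list of clip filenames. Edit the XML in any
# text editor to add or remove clips without rebuilding the project.
CLIP_LIST = """<clips>
  <clip name="clip01.mov"/>
  <clip name="clip02.mov"/>
  <clip name="clip03.mov"/>
</clips>"""

def clip_page(xml_text, page, per_page=25):
    """Return one 'page' of clip filenames (page 1, 2, 3...),
    mirroring the vidget's batches of 25 thumbnails."""
    names = [c.get("name") for c in ET.fromstring(xml_text).iter("clip")]
    start = (page - 1) * per_page
    return names[start:start + per_page]
```

The vidget would then request page 1 on load, and pages 2 and 3 when the corresponding buttons are pressed.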
At the moment the whole movie pauses whenever thumbnails are loaded, either via a Flickr search or by skipping to the next 25 thumbnails of video clips. I am working on ways around this.
The LiveStage Pro source files may be downloaded here: vidget3.zip
For the past couple of Thursday nights I have been playing live visuals at a night called ‘Plug & Play’ at a bar called Kent St (located confusingly on Smith St in Collingwood). The night is run by two fine gentlemen named Jean Poole and Future Eater and is a nice relaxed place where each week people come to plug in their audio/video/laptop/playstation/casio devices and play. The venue also has a good broadband connection which allows for international djs/vjs to perform remotely and for me to test my latest vidgets. Version 3 is just about ready for posting here and combines the layering/mixing of clips of the first version with the photo searching and xml reading of the Flickr Viewer.
Here is a screenshot of the new drag and drop interface. The grid of images is loaded dynamically based on an xml file which means I can set the vidget up to play different content without rebuilding the entire project in LiveStage Pro. Almost everything is modular now. The ten thumbnails on the right are the results of a Flickr search for the tag ‘blue’. Each of these thumbnails is draggable to the three clip holders at the top of the screen (red, green, and blue). These refer to the three different layers of video which are output to a screen or projector. Up to three video clips or still images may be mixed/layered together.
After playing with this prototype version at Kent St last week I’m definitely going to add some more space for thumbnails, as I ran out of content after a while. I am also going to explore some more of the graphics modes for combining the images.
Oh yeah, I’m playing there again this week so if you’re in Melbourne come down. It’s free and starts at about 8pm @ Kent St, 201 Smith St Collingwood.
Click image to load in Quicktime Player (it seems to be a little funky in a browser)
This is a little Quicktime movie that lets you view photos from Flickr, a photo sharing and social networking site. When users upload their images they associate them with tags, so a picture of a tree may have the tags ‘tree’, ‘green’, ‘eucalyptus’ etc. The entire database of photos is searchable by tag, by author, by series and other organisations. Each of these has a RSS or ATOM feed associated with it. This is what I am using here to access the 10 most recent uploads for a given tag.
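As a rough illustration of the feed-parsing side (the entry structure below mimics a Flickr-style Atom feed; the live feed URL, something like `.../services/feeds/photos_public.gne?tags=cow`, is an assumption and isn’t fetched here), pulling the most recent titles for a tag might look like:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def recent_photo_titles(feed_xml, limit=10):
    """Pull entry titles out of a Flickr-style Atom feed. In the real
    vidget, QuickTime itself fetches and parses the feed; here we just
    parse whatever XML text we are handed."""
    root = ET.fromstring(feed_xml)
    titles = [entry.findtext(ATOM_NS + "title")
              for entry in root.iter(ATOM_NS + "entry")]
    return titles[:limit]
```

The same loop would collect each entry’s image URL for the thumbnail grid; titles are used here only to keep the sketch short.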
To search the site and view the photos, just type a tag, say “cow” (no quotes) into the small white text field and press return. If you press return without entering any text you get the 10 most recent uploads from any category. Once the thumbnails load you can click to view at a larger size on the right.
The movie uses Quicktime’s ability to read and parse XML files such as RSS feeds and access files from the network.
This is a great example of an ‘ergodic’ interactive work: a very clever but simple concept, well produced. This site allows you to create your own ‘Marilyn’ prints in real time on screen with an embedded flash file.
It’s been a bit quiet around here for a while, and this is why. I’ve been working pretty solidly on this piece for the past couple of weeks, leading up to a gig I co-organised last week. Segmentation Fault is a semi-regular experimental music & visual night we put on every couple of months, and it proved to be a good motivation (i.e. a deadline) to get a work together. In my research I am mostly interested in applying VJ aesthetics and methods to the desktop environment, where the user becomes the performer, but it’s always fun to perform in front of an audience of humans in a room.
Now it’s time to release this draft to the world and see what people think. Click on the images below to load the two parts in QuickTime Player.
The scrambled looking black, white and green image will load the ‘output’ movie. This is the movie to be projected on a screen or viewed on a second monitor. It is designed to run at full PAL resolution (720 * 576) to suit the TV output of my laptop. If you want to try it out on a single monitor setup, you can load the movie and select ‘Half Size’ from the Movie menu in Quicktime Player. This movie is really just a kind of holder for up to three other movies. To load different clips into the ‘output’ movie you will need to use the ‘interface’ movie below.
This movie controls which video loop is loaded in which layer of the output movie. Along the top of the window are the numbers 1, 2 and 3. These represent the three layers, with 3 being the ‘highest’, 1 the lowest and 2 in between. Next to each of the numbers are playback controls for each layer. Once clips have loaded they may be played forwards and backwards, in slow and fast motion, and stepped through frame by frame. Next to the playback controls are the graphics mode controls. These control the ways in which each of the layers is blended.
‘Blend 0’ means the clip is completely transparent. It is probably a good idea to switch to this setting if you are going to load a big clip, as it will take a while to load and will display a still image whilst doing so. ‘Blend 100’ means the clip is 100% opaque, so any clip below it will not be seen. ‘Add Max’ adds the bright portions of the clip’s image over the clips below, leaving the dark areas transparent. ‘Add Min’ adds the dark portions of the clip’s image over the clips below, leaving the lighter areas transparent. ‘Sub Pin Blk’ subtracts the bright areas of the clip’s image from the ones below, so white snow on a black background will result in black snow on a transparent background. ‘Inverse Or’, ‘Exclusive Or’ and ‘Inverse Exclusive Or’ produce other effects, but to be honest I’m still not sure exactly how they work.
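For readers who want the arithmetic, here is a hedged per-channel sketch of a few of these modes. The formulas approximate QuickTime’s graphics modes (Add Max behaving like a ‘lighten’, Add Min like a ‘darken’) rather than reproduce them exactly:

```python
def blend(top, bottom, mode):
    """Per-channel sketch of a few of the blend modes described above.
    Values are 0-255 channel intensities. These formulas are my own
    approximations of QuickTime's graphics modes, not its actual maths."""
    if mode == "blend0":      # fully transparent: the lower clip wins
        return bottom
    if mode == "blend100":    # fully opaque: the top clip wins
        return top
    if mode == "add_max":     # keep whichever channel is brighter
        return max(top, bottom)
    if mode == "add_min":     # keep whichever channel is darker
        return min(top, bottom)
    raise ValueError("unknown mode: %s" % mode)
```

Applied to every pixel of every frame, `add_max` is why bright material floats over dark backgrounds, and `add_min` the reverse.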
These graphics modes probably won’t be much fun to play with until some different clips are loaded into each of the layers. To do this I have designed two different patch loaders. If you click on the 1 or 2 with red # symbols next to them, the # will change to a *, telling you which loader is active. The first thing to do is select which layer or ‘channel’ to load the clip into. These are selected by clicking the large 1, 2 or 3 at the top. Next, a clip may be selected from the list in the bottom half of the controls. The clip’s name and id number are displayed, and when the ‘Do It!’ button is pressed the clip will start to load. (If you are wondering why it is called ‘Do It!’, go see Starsky and Hutch.)
Note: there may be a couple of clips missing – such as the ‘live input’ at the bottom right – so if you get a ‘broken movie’ image just try another clip.
Office Voodoo is a great example of an interactive video project that uses a cinematic/televisual aesthetic with real-life actors whilst maintaining meaningful real-time user interaction. It is rare to find a project which achieves all these aims at once.
Office Voodoo features footage of two bored workers as they sit in an office. By physically manipulating ‘voodoo’ dolls with red flashing eyes, two users may control the characters’ emotional states. Depending on the combination of the two characters’ moods a real time editing engine cuts together shots which form a kind of ‘algorithmic sitcom’, as the site says. The editing engine respects the conventions of shot / reverse shot and continuity editing, making for a fairly seamless TV like program.
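A toy version of such a rule-based editing engine, alternating shot and reverse shot between the two characters, might look like the following. The mood names and shot labels are invented for illustration; the real system’s rules are certainly richer:

```python
def edit_stream(moods_a, moods_b):
    """Yield one shot description per step, alternating between the
    two characters (a crude shot / reverse-shot rule). The moods_a
    and moods_b sequences stand in for the doll-driven mood inputs."""
    for i, (mood_a, mood_b) in enumerate(zip(moods_a, moods_b)):
        who, mood = ("A", mood_a) if i % 2 == 0 else ("B", mood_b)
        yield "shot: character %s, %s" % (who, mood)
```

A fuller engine would also pick shot sizes and enforce continuity, but even this skeleton shows how the “knowledge of the editor” can live in a loop of scripted rules driven by user input.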
While I haven’t played with it myself, the About Office Voodoo movie on the site shows examples of people using the system and the effects of their actions on the characters. It reminds me of being a director holding casting auditions where I would get actors to act out a scene in a couple of different ways. My favourite was when I asked an actor to rap a David Williamson play.
From the site:
“With advances in compression standards and faster, larger hard disks, the film form is finally freeing itself from the inherent linearity of the celluloid or tape substrate, as it becomes chunks of data that can be retrieved instantaneously. This explosion of the film medium is redefining our approach to narrative filmmaking and over the viewer’s control of the time flow and the plot. In the attempt to carry on the tradition of mimetic storytelling with real actors, this piece brings together the craft of cinema with automated editing techniques, trying to replicate in new media semiotics what 1920s soviet filmmakers like Kuleshov did to film with montage. Here, the knowledge of the editor is represented in the machine, and the rules are scripted according to user interaction. As a filmmaker and a programmer, the author is telling a story not only with audiovisual media but also with computer code.” [my emphasis]
Click the poster movies above to load the real ones. The second one will take a little while to load (3.9Mb)
Alternatively (recommended) download the two files, ControlTOBE.mov (Controller) and TOBE.mov (Player) and open them both in Quicktime Player.
Ok, this is a half-finished draft of a basic Quicktime vision mixer. Like the inter-movie text communication movies I posted earlier, this project comes in two parts: one movie to control another. I actually created the text sender movies to troubleshoot when I was making this movie.
The ‘player’ movie contains three video tracks. The ‘controller’ movie features a number of clickable sprites which tell the ‘player’ movie what to do. By clicking on the green or red buttons on the left hand side of the ‘controller’, the tracks in the ‘player’ movie may be enabled or disabled. In the middle of the ‘controller’ movie there are two gradients with a percentage number; these control the opacity of each of the two ‘upper’ video tracks. By clicking on the gradients at various places along the horizontal, the numbers should change and the tracks should fade in or out. On the far right of the ‘controller’ movie is the ‘invert’ button. This inverts the top video track so that white is black and black is white. The other button at the bottom of the ‘controller’ is the ‘add’ button. It is the latest addition and it also controls the top movie (I’ll move it up top in the next version). Rather than simply inverting the video track, the ‘add’ button produces an additive effect (like a Photoshop layer) which changes depending on the presence or absence of the underlying two video tracks.
This draft was designed as an experiment in building a VJ instrument in Quicktime with Livestage Pro for a recent Segmentation Fault gig I was organising. The ‘player’ movie is designed to automatically go to full screen on the video output of my laptop while the ‘controller’ sits on my screen out of sight. I built a more complicated version with about 20 different video tracks for the gig but unfortunately most of the effects such as fading in and out and layering decided not to work on the night. I think I’ve worked out the problem so I’ll post a new version soon.
The buttons usually take a couple of clicks to get going but they should work after that.
ClownStaples is a source of many interactive flash animations. My favourite (while not strictly interactive) is a pun on the loading of flash animated splash screens. It proceeds in an increasingly complex series of loading screens without end. Unfortunately, since the site is hosted by Geocities it is often unavailable due to limited data transfer.
A while ago Adrian Miles posted Video Blogs, Vidblogs and Vogs, presenting an ongoing discussion about the nature and definitions of video weblogging.
“At the moment all video blogs are video inside text orientated CMS [Content Management System] engines. But here’s a simple idea (more complex backend), you make a movie that has a sprite and a text track. The text track is there to show a number. The sprite reads an external XML file which simply indicates how many trackbacks that video has.”
So I set about looking for examples of alternative content management systems which deal natively with video rather than text. I’m still working out how to get Quicktime movies to read and write to my own XML databases using Livestage Pro.
WaterCooler provides a very slick looking and functional interface for their content management system in a small 265k host movie.
Navicast provides another ‘aqua’ styled interface to their CMS, this time with more controls, such as three levels of compression quality and playback size. The selection and organisation of clips is, however, not as well executed as WaterCooler’s (for example, the first movie loads by itself, slowing down access on a slow connection before the user has made a choice).
The two sites provide good examples of what is possible using the Quicktime Player as a front end for content management, accessing online clips and data. While both feature linear movie clips, a similar approach could be used to deal with interactive and dynamic ‘hyper’ media. This is a direction I am looking to explore as I learn more about the tools.
This looks like a really helpful resource for making Movable Type weblogs look a bit better than the default settings. I’ve still got a bit of a way to go making this blog a bit prettier but I think this will help.
I found this link through the Oxff mailing list which is a discussion space for real time video performers (vjs etc) and programmers using patching and coding based software such as Puredata (+GEM) and Max (+Jitter).
From the site: “PANSE is an open platform for the development of audio-visual netart, open to all.”
The PANSE experiments are made up of various browser windows which each feature a flash animation and/or controls such as sliders and buttons. These windows each control (or are controlled by) an audio synthesiser which sends a real time generated MP3 stream back to you. The more of the little windows you have open, the more complex the sounds and visuals become as they interact with one another. This sort of thing makes me want to learn Max or PD! I love the way anyone can post their own projects to the site and they can work alongside everyone else’s.