Based on the implementation at http://www.wblut.com/2011/07/13/mccabeism-turning-noise-into-a-thing-of-beauty/. Ported to openFrameworks, then back to Processing, replacing the OpenCV blur with an integral image blur. The inhibitor from the current level serves as the activator for the next level, and each level blurs the level beneath it. Blurring is done "in place" with only three buffers (activator, inhibitor, integral image), making it possible to compute an arbitrary number of levels.
Click to reset. Parameters vary slightly each time.
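The integral image (summed-area table) is what makes an arbitrary blur radius cheap: each output pixel costs four lookups regardless of radius. A minimal sketch in Python (not the original Processing code, which works in place on shared buffers):

```python
# Sketch of a box blur via an integral image (summed-area table).
# Each blurred pixel is computed from four table lookups, so the cost
# is independent of the blur radius.

def integral_image(grid):
    """Summed-area table with a zero row/column of padding."""
    h, w = len(grid), len(grid[0])
    ii = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += grid[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_blur(grid, radius):
    """Average over a (2*radius+1)^2 window, clamped at the borders."""
    h, w = len(grid), len(grid[0])
    ii = integral_image(grid)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        y0, y1 = max(0, y - radius), min(h - 1, y + radius)
        for x in range(w):
            x0, x1 = max(0, x - radius), min(w - 1, x + radius)
            total = (ii[y1 + 1][x1 + 1] - ii[y0][x1 + 1]
                     - ii[y1 + 1][x0] + ii[y0][x0])
            out[y][x] = total / ((y1 - y0 + 1) * (x1 - x0 + 1))
    return out
```

In the actual sketch, "in place" means the blurred result overwrites the activator buffer while a single integral-image buffer is reused across all levels.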
This is a simple utility I wrote on the subway. I wanted to keep track of how much time I had for each section of a talk I was about to give. I didn't include any numbers, but instead focused on providing a quick visual description of where I was at that moment.
If something goes terribly wrong, you can click and drag to "reset the timer".
What would the Earth look like if every elevation were inverted? If oceans were mountains and vice versa?
Bathymetric (ocean depth) information from NASA is recolored here using topographic information and satellite imagery. Colors are based on a distance-based weighted average of similar elevations (other algorithms are available if the source code is modified).
<a href="http://www.flickr.com/photos/kylemcdonald/5292001581/">High resolution render on Flickr.</a>
<a href="http://earthobservatory.nasa.gov/Features/BlueMarble/BlueMarble_monthlies.php">Source material from NASA.</a>
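One way to read "distance-based weighted average of similar elevations" is inverse-distance weighting in elevation space: each inverted elevation is colored by averaging reference colors, weighted by how close their elevations are. This is a guess at the algorithm, not the project's code, and the palette below is made up for illustration:

```python
# Hedged sketch of inverse-distance-weighted recoloring: a color for
# an elevation is a weighted average of reference colors, with weights
# falling off with elevation difference. Palette values are invented.

def recolor(elevation, palette, power=2.0, eps=1e-6):
    """palette: list of (elevation, (r, g, b)) reference pairs."""
    weights = [1.0 / (abs(elevation - e) ** power + eps) for e, _ in palette]
    total = sum(weights)
    return tuple(sum(w * c[i] for w, (_, c) in zip(weights, palette)) / total
                 for i in range(3))
```

The `power` parameter controls how sharply the average prefers the nearest elevations; a high power approaches nearest-neighbor lookup.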
A simple recreation of Every Icon, John F. Simon Jr.'s 1997 piece and one of the best-known visual enumeration pieces (<a href="http://www.numeral.com/paraicon.html">explanation</a>, <a href="http://www.numeral.com/eicon.html">original</a>).
The license for the original forbids reverse engineering.
Visualization of the point cloud generated by Photosynth from a two-and-a-half-minute take in Godard's "Week End". The original Photosynth is <a href="http://photosynth.net/view.aspx?cid=02768a46-9f93-4a92-9bb7-efa4fc5c6df2">here</a>.
Controls: left click to rotate, right click to zoom, both or middle to pan.
Using Perlin noise to describe wind acting as an attractor on "pollen" particles moving through it; the challenge is coaxing a 2D vector field out of the 1D Perlin noise functions. If you let it run for a bit, you'll notice some repetition in form due to the nature of Perlin noise.
Point-based rendering aesthetic inspired by Jared Tarbell.
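One common trick for getting a 2D vector field out of 1D noise is to sample the same noise function at two widely separated offsets, so heading and magnitude are effectively independent. A sketch under that assumption (it may not match this project exactly), using a tiny value-noise stand-in for Processing's noise():

```python
# Hedged sketch: a 2D wind field from a 1D noise function, sampling at
# two distant offsets. value_noise() is a simple smoothed-value-noise
# stand-in for Processing's noise(), not real Perlin noise.

import math, random

random.seed(1)
_lattice = [random.random() for _ in range(256)]

def value_noise(t):
    """1D value noise: cosine-interpolated random lattice values in [0, 1]."""
    i = int(math.floor(t))
    f = t - i
    a, b = _lattice[i % 256], _lattice[(i + 1) % 256]
    u = (1 - math.cos(f * math.pi)) / 2  # smooth blend factor
    return a * (1 - u) + b * u

def wind(x, y, scale=0.01, offset=1000.0):
    """Map a 2D position to a wind vector via two decorrelated samples."""
    t = (x + y * 57.0) * scale            # crudely fold 2D into 1D
    angle = value_noise(t) * 4 * math.pi  # heading from one sample
    speed = value_noise(t + offset)       # magnitude from a far-off sample
    return (math.cos(angle) * speed, math.sin(angle) * speed)
```

Because the noise is smooth, nearby particles feel nearly the same wind, which is what produces the coherent streaks of pollen.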
Technique from Song Zhang, coded in C++ by Alex Evans, ported to Processing by Florian Jenett. I rewrote the code and got rid of things that were unnecessary or didn't work. The original had a little less noise. Instead of trying to compute three variables, I pulled them out as manually tuned parameters: zskew, zscale, and noiseTolerance.
Learn how to use this code to make your own 3D scans <a href="http://www.instructables.com/id/Structured-Light-3D-Scanning/">on Instructables</a>.
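The core of three-phase scanning is recovering a wrapped phase from three images of sinusoidal patterns shifted by 120 degrees; the standard formula is below as a sketch (the project's zskew/zscale then map unwrapped phase to depth):

```python
# Standard three-phase wrapped-phase recovery: given intensities under
# sinusoids shifted by -120, 0, and +120 degrees, the ambient term
# cancels and atan2 returns the phase in (-pi, pi].

import math

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three 120-degree-shifted pattern intensities."""
    return math.atan2(math.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)
```

Both the numerator and denominator scale with the pattern amplitude and drop the constant offset, so the formula is insensitive to overall brightness. Per-pixel phase unwrapping across neighbors then turns the wrapped phase into a continuous surface.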
A game of tag, played by an adaptive population. Click to reset. Each member of the population has a mind made up of nested code; for example:
(ifit (max (mod y (+ (min y (min (abs y) (iflte 0.252 y (* 0.0955 x) x))) y)) 0.252) (iflte y 0.005545 (iflte y (* y (min (* x x) 0.4767)) (* 0.05141 x) (* 0.1078 x)) 4.767))
Here, x and y refer to the position of an important member (either one to pursue or one to flee, depending on the member's status; the red member is "it") in a local coordinate system. The output of the function is a single heading. These minds are generated at random.
Click to pause.
Select a bar to see that mind/code tree.
Move within tagging distance to select a member.
m: mutate the selected member
c: cross over with the selected member
r: reset the game
d: toggle display mode
e: toggle extra display (crossovers, etc.)
f: toggle statistics
s: toggle sorting of statistics
t: toggle display of code trees
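Evaluating one of these code trees can be sketched as a small recursive interpreter. Here iflte is the standard four-argument GP conditional (c if a <= b, else d); the exact semantics of ifit are an assumption (branching on whether the member is "it"):

```python
# Hedged sketch of evaluating a mind's code tree. iflte is the usual
# GP four-argument conditional; the meaning of ifit is assumed here
# (branch on it-status), since the original doesn't spell it out.

import math

def evaluate(expr, env):
    if isinstance(expr, str):
        return env[expr]          # terminal: x or y
    if isinstance(expr, (int, float)):
        return float(expr)        # numeric constant
    op, *args = expr
    if op == 'iflte':
        a, b = evaluate(args[0], env), evaluate(args[1], env)
        return evaluate(args[2] if a <= b else args[3], env)
    if op == 'ifit':
        return evaluate(args[0] if env.get('it') else args[1], env)
    vals = [evaluate(a, env) for a in args]
    return {
        '+': lambda: vals[0] + vals[1],
        '*': lambda: vals[0] * vals[1],
        'min': lambda: min(vals),
        'max': lambda: max(vals),
        'abs': lambda: abs(vals[0]),
        'mod': lambda: math.fmod(vals[0], vals[1]) if vals[1] else 0.0,
    }[op]()

# one subtree from the example mind above
heading = evaluate(('iflte', 0.252, 'y', ('*', 0.0955, 'x'), 'x'),
                   {'x': 1.0, 'y': 1.0, 'it': False})
```

Mutation swaps a random subtree for a fresh random one, and crossover exchanges subtrees between two minds; both operate directly on these nested tuples.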
A simple self-organizing system. Every termite (the things moving around) follows two rules: if you run into a wood chip, pick it up; drop it only when you hit a pile. With just these rules, the chips eventually converge to a single pile.
Hit a key for an alternative visualization of transfers.
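The two rules fit in a few lines. A sketch on a 1D strip (the original is 2D; the reduction is only to keep the example short):

```python
# Hedged sketch of the termite rules on a 1D ring of cells: wander
# randomly, pick up a chip you bump into, drop it only next to a pile.

import random

def make_world(width, n_chips, n_termites, rng):
    chips = [0] * width
    for _ in range(n_chips):
        chips[rng.randrange(width)] += 1
    termites = [{'pos': rng.randrange(width), 'carrying': False}
                for _ in range(n_termites)]
    return chips, termites

def step(chips, termites, rng):
    width = len(chips)
    for t in termites:
        t['pos'] = (t['pos'] + rng.choice((-1, 1))) % width
        p = t['pos']
        if not t['carrying'] and chips[p] > 0:
            chips[p] -= 1          # rule 1: run into a chip, pick it up
            t['carrying'] = True
        elif t['carrying'] and (chips[(p - 1) % width] > 0 or
                                chips[(p + 1) % width] > 0):
            chips[p] += 1          # rule 2: drop only when you hit a pile
            t['carrying'] = False
```

Chips are conserved: every chip is either on the ground or in a termite's jaws, which is why the piles can only merge, never shrink away.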
A collaboration with Jason LaPorte (http://lonelypinkelephants.com/) implementing Kanerva's sparse distributed memory as described by Peter Denning. The idea is to mimic human memory with a structure that can make connections between seemingly unrelated information, recall more salient information more accurately, and recall any of its contents quickly.
You might call it "holographic".
Right now it is just being used to store random pairs of 16-bit integers, and visualizing the memory as a 2D field of colors.
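A sparse distributed memory can be sketched compactly: data is written to every "hard location" within a Hamming radius of the target address, and reads take a majority vote over the same neighborhood. The parameters below are illustrative, not taken from the project (which stores 16-bit integer pairs):

```python
# Hedged sketch of a Kanerva-style sparse distributed memory over bit
# vectors. Sizes and radius are illustrative choices, not the project's.

import random

class SparseDistributedMemory:
    def __init__(self, n_bits=64, n_locations=500, radius=30, seed=1):
        rng = random.Random(seed)
        self.n_bits = n_bits
        self.radius = radius
        self.addresses = [rng.getrandbits(n_bits) for _ in range(n_locations)]
        self.counters = [[0] * n_bits for _ in range(n_locations)]

    def _near(self, address):
        """Indices of hard locations within Hamming radius of the address."""
        return [i for i, a in enumerate(self.addresses)
                if bin(a ^ address).count('1') <= self.radius]

    def write(self, address, data_bits):
        for i in self._near(address):
            for j, b in enumerate(data_bits):
                self.counters[i][j] += 1 if b else -1

    def read(self, address):
        sums = [0] * self.n_bits
        for i in self._near(address):
            for j in range(self.n_bits):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]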
Some early attempts at generating realistic human irises. Takes maybe five seconds to generate one blue iris right now. It's way too complicated, and needs to be thought over again from the ground up. Unfortunately, literature on iris development and anatomy is hard to come by.
Stacking up binary numbers. Clicking at an x position determines the behavior.
An internal number is repeatedly incremented by a user-defined value and translated to binary. These binary numbers are stacked up, trying to maintain equally wide rows. Once you click an x position, the behavior is completely deterministic.
Tag similarity is visualized as spatial distance and color similarity, and tag frequency determines font size. Hit any key to reload. Select a beginning and an end tag (highlighted red and green, respectively) to see the shortest path between them. Deselect by clicking again, or reselect the end by clicking a new tag. The tags come from my own old, tagged projects. Distance from each tag to the three most significant and polarized tags determines its RGB color. These three tags are found by taking the mode of multiple K-medoids runs for K = 3, using Dijkstra's algorithm as the distance metric.
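K-medoids works directly on a precomputed distance matrix, which is why graph distances from Dijkstra can serve as the metric. A minimal alternating-update sketch (not the project's code):

```python
# Hedged sketch of K-medoids over a precomputed distance matrix (in
# the project, distances come from Dijkstra over the tag graph).

import random

def k_medoids(dist, k, rng, iters=100):
    """dist: symmetric matrix dist[i][j]; returns sorted medoid indices."""
    n = len(dist)
    medoids = rng.sample(range(n), k)
    for _ in range(iters):
        # assign every point to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i in range(n):
            clusters[min(medoids, key=lambda m: dist[i][m])].append(i)
        # move each medoid to the member minimizing total in-cluster distance
        new_medoids = [min(members,
                           key=lambda c: sum(dist[c][j] for j in members))
                       for members in clusters.values()]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return sorted(medoids)
```

Because initialization is random, the project runs it several times and takes the mode of the resulting medoid sets, which filters out runs stuck in poor local optima.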
Space: improvise as conductor
Tab: everyone picks new musicians to watch
Left click: move musician
Right click: improvise as selected musician
You can also reassign who is watching whom by ctrl + left clicking or using the alphabet keys. For example, typing "a" then "w" will have the conductor listen to w, creating a loop. "a" then "a" will return the conductor to introversion.
Every point in a grid is mapped to a point in Life-space (an 8x8 = 64-dimensional binary space) and run for a bit. Colors are generated to represent each game, with intensity varying with the number of live cells. Lots of red means the game was fuller near the beginning than the end; more blue means the opposite.
Visualization of structure within the Game of Life. The top left point is mapped to a Game of Life board (an 8x8 = 64-dimensional binary space) and run for a while. A color representing its life cycle is returned (red intensity for the number of live cells early in the game, green mid-game, blue late-game). That color determines which game to pick for the next cell (top to bottom, left to right). Varying zooms are chosen.
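The mapping can be sketched directly: a 64-bit integer is a board, and live-cell counts in the early, middle, and late thirds of the run become R, G, and B. Wrap-around edges and the run length are assumptions here:

```python
# Hedged sketch: a 64-bit seed is an 8x8 Life board, and liveness in
# the early/mid/late thirds of the run becomes an RGB color.
# Toroidal (wrap-around) edges are an assumption.

def step(board):
    """One Game of Life generation on an 8x8 torus; board is a set of (x, y)."""
    counts = {}
    for (x, y) in board:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    cell = ((x + dx) % 8, (y + dy) % 8)
                    counts[cell] = counts.get(cell, 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in board)}

def color(seed, generations=30):
    """RGB from live-cell counts in the early, middle, and late thirds."""
    board = {(i % 8, i // 8) for i in range(64) if (seed >> i) & 1}
    rgb = [0, 0, 0]
    for g in range(generations):
        board = step(board)
        rgb[3 * g // generations] += len(board)
    return tuple(min(255, v) for v in rgb)
```

Treating the board as a 64-bit integer is what makes "which game to pick next" a walk through a binary space rather than a list of saved boards.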
pppd is a highly formalized audiovisual composition built around the esoteric programming language p'' ("p prime prime"). During each brief scene, a random sequence of p'' code is generated and run, while the memory it uses is visualized and sonified. pppd is an artistic re-imagining of the otherwise academic field of computability theory. It functions simultaneously as an investigation of complex behavior emerging from formally simple systems, and as a playful exploration of computational dreams.
p'' is a subset of the programming language better known as Brainfuck.
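Since p'' is equivalent to Brainfuck without I/O, the machine being visualized can be sketched as a tiny interpreter: a tape of cells, a pointer, and while-nonzero loops. This is an illustrative stand-in, not pppd's code:

```python
# Hedged sketch of the machine p'' programs run on: the Brainfuck
# subset with pointer moves, increments, and while-nonzero loops,
# and no I/O. Cells wrap mod 256; the tape wraps at its ends.

def run(code, tape_len=64, max_steps=100000):
    tape = [0] * tape_len
    ptr = pc = steps = 0
    jump, stack = {}, []
    for i, ch in enumerate(code):      # build matching-bracket table
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            jump[i], jump[j] = j, i
    while pc < len(code) and steps < max_steps:
        ch = code[pc]
        if ch == '>':
            ptr = (ptr + 1) % tape_len
        elif ch == '<':
            ptr = (ptr - 1) % tape_len
        elif ch == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == '[' and tape[ptr] == 0:
            pc = jump[pc]              # skip the loop body
        elif ch == ']' and tape[ptr] != 0:
            pc = jump[pc]              # loop back while nonzero
        pc += 1
        steps += 1
    return tape
```

In pppd, each scene would correspond to generating a random code string like this, running it, and visualizing and sonifying the evolving tape.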
Some work towards a fast DIY 3D scanner. This sketch loads 18 640x480 JPGs and uses them to resolve the 3D coordinates of the scene. PeasyCam controls: left drag to rotate, right drag to zoom, both/middle to pan. More on <a href="http://vimeo.com/3193063">vimeo</a> and <a href="http://flickr.com/photos/kylemcdonald/sets/72157613657773217/">flickr</a>.
Learn about structured light scanning, and contribute with your own work, on the <a href="http://sites.google.com/site/structuredlight/">structured light wiki</a>.