The nature of information recorded by digital sensors and computing devices allows for nearly endless accumulation and recombination of data, making it possible to view that data at geographic and temporal scales different from those at which it was originally captured. My objective in applying data collection and processing techniques to visual information is not to acquire and store the images represented by the points of data, but rather to visualize the cumulative relationships between the points. I seek shapes that can express aspects of experience in order to gain a broader understanding of how experience is created in the mind. Creating the opportunity for relationships to be drawn between elements at both the micro and macro scales can reveal how experience is structured.
In her book Vision and Art: The Biology of Seeing, Margaret Livingstone describes human vision as a combination of many different neurological pathways. Rather than functioning merely as cameras that show us the world as a unified image, our eyes capture light as data and send it to the brain, where multiple processes assemble a spatial, sensory map that we interpret as an image. According to Livingstone, human vision combines a colorblind, spatially oriented “Where” system and a color-bound, object-recognizing “What” system in the brain, which continually stitch together information received through the eyes to produce a synthetic representation of the environment. At the foundation of much of my work is the idea that we are constantly sampling data points from our surroundings and that the gaps between those points are filled in by our brains—and by computers—in order to create representations of our environment.
Using depth sensors and 3D rendering techniques, I engage in a similar process of informational distillation: the world perceived by the device and stored in the computer is reduced to points of data that can be mapped. However, rather than represent the information as mere points and rely on the brain to blend them together, I use algorithms to interpolate the point data and construct three-dimensional representations of the original forms. I have embraced the technique of looking at information as a shape defined by an arrangement of its internal relationships, rather than as raw data, because that arrangement is the basis for our innate pattern-recognition pathways. As a means of achieving this synthesis of information, even a faint shadow of it, I superimpose information from multiple points in time. Because humans experience time as a one-dimensional flow of moments, combining spatial data points over time allows me to create visual representations of spatiotemporal continuity.
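The superimposition described above can be sketched in code. The following is a minimal illustration, not the actual pipeline used in the work: it assumes hypothetical depth-sensor frames represented as NumPy arrays of (x, y, z) points, and tags each point with its capture time so that spatial data from many moments coexists in a single cloud.

```python
import numpy as np

def accumulate_frames(frames, timestamps):
    """Merge per-frame (N, 3) point arrays into one spatiotemporal cloud.

    Each point gains its capture time as a fourth coordinate, so the
    result holds many moments superimposed in a single structure.
    """
    slices = []
    for points, t in zip(frames, timestamps):
        # Tag every point in this frame with the frame's timestamp.
        t_col = np.full((points.shape[0], 1), t)
        slices.append(np.hstack([points, t_col]))  # rows of (x, y, z, t)
    return np.vstack(slices)

# Hypothetical example: three frames of 100 random depth points each.
rng = np.random.default_rng(0)
frames = [rng.random((100, 3)) for _ in range(3)]
cloud = accumulate_frames(frames, timestamps=[0.0, 0.5, 1.0])
print(cloud.shape)  # (300, 4)
```

A renderer could then map the time coordinate to color or opacity to produce the kind of layered, continuous form the text describes.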
In my work, I explore aspects of human perception—and desired augmentations to it—using information technology. In the visual centers of the brain, the interpolation of data points allows us to construct an understandable mental representation of our surroundings. The gaps in our understanding that are filled in automatically by the brain pertain not only to space but to time as well; one of the goals of many of my pieces is to create a “thick slice of time,” a forced compression of many moments into a single experience, as a means of exploring a broader range of temporal continuity. Performing this compression allows me to examine the way objects and environments persist as continuous forms in spacetime, rather than moment to moment. Through the continuing use of systems and machines that mimic human perception, I seek to implement these concepts to better understand how we mentally construct our surrounding environments, and how those environments in turn affect us.