primary_thesis_project
• CEPA Gallery: Big Orbit Project Space
• Cultural Funding Grant, Technē Institute for the Arts and Emerging Technologies.
• Submitted in partial fulfillment of the requirements for the degree of MFA from the University at Buffalo, Department of Art.
• 2018
Simulated Sentience (3.1.7)
(fourth_system . second_update . eighth_revision)
overview
Simulated Sentience was the primary interactive work of my 2018 master’s thesis. The first version of this installation was built for only one user at a time. This second version was triplicated to accommodate up to three users simultaneously.
After introducing the work, I cover my process and my use of antiquated technologies. A critical analysis of the media involved better situates the work within the transmediated self-narrative.
My desire to extend this work had several foci: to expand its interactive capacities; to intensify the interplay between analog and digital modalities; and to establish a co-creative network as a direct commentary on the transmediated self.
components
23 analog CRTs
15 analog video cameras
15 ultrasonic sensors
raw hanging speakers
3 Mac Minis
3 analog-to-digital adapters
3 digital-to-analog adapters
3 Arduino sketches
3 Max 7 patches
and the hive mind
installation
This work is composed of five human-sized monolithic structures. Cables running diagonally upward tether them to a pentagon-shaped central hub suspended overhead. I refer to this hub as a “hive mind,” as it houses three independent systems that come together to form a unified interactive whole. Not only does this arrangement make for a more visually demanding presence than the first iteration, it is also more suggestive of a unified system than when the cables ran toward a corner. Each of the five monoliths is built from analog sonar sensors along with consumer-grade analog CRTs (televisions) and camcorders. All of the hardware was arranged to stylistically form distinct monoliths while facing inward to uphold a centralized sense of immersion.
system
A single user standing at a monolith can activate any of that monolith’s three layers at a time. When a sonar sensor is triggered by proximity, the associated CRT(s) and camera(s) activate, presenting distorted representation(s) of that user on the CRT(s). When a second person enters, rather than each person activating only their own stack, they also share their presence by mirroring their live, distorted feeds to one another. Since each monolith has three layers, up to three users can interact with one another through the system. The assemblage of analog bodies generates exquisite-corpse-like narrativity.
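To make one sensor channel concrete, here is a minimal Arduino-style sketch, assuming an HC-SR04-style ultrasonic sensor on hypothetical pins and a simple serial message that a downstream patch or switcher could react to; the actual sketches, pin assignments, and protocol in the installation differ.

// Minimal sketch of one sensor channel (hypothetical pins and threshold).
const int TRIG_PIN = 9;        // hypothetical trigger pin
const int ECHO_PIN = 10;       // hypothetical echo pin
const long THRESHOLD_CM = 120; // hypothetical activation distance

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  Serial.begin(9600);
}

void loop() {
  // Fire a 10 us pulse and time the echo to estimate distance.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000); // timeout in microseconds
  long distanceCm = duration / 58;                // rough conversion to cm

  // Report whether a body is close enough to activate this layer;
  // the receiving patch or switcher reacts to this state.
  bool active = (duration > 0 && distanceCm < THRESHOLD_CM);
  Serial.println(active ? 1 : 0);
  delay(50);
}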
documentation
I presented documents and prints to show what’s happening behind the scenes:
The Max/MSP/Jitter print represented the patch that was running on all three of the Mac computers.
The three draping runners were prints of the entire Arduino code running on each of the three Arduinos in the hive mind.
A print of the hive mind gave full transparency to what would otherwise be impossible to see from the ground.
The last elements were the drawings and prints used to design the three custom circuit boards in this work:
The Ethernet hub and sonar board were implemented so that the sensors could be plug-and-play.
The video switcher was essential for bringing the correct video feed in and then routing it back out to the correct CRTs.
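As an illustration of that routing behavior, here is a hypothetical C++ sketch of the decision the system makes, not the actual switcher firmware or patch logic: the channel count and the feed-rotation rule are assumptions drawn from the description in the system section above.

// Hypothetical routing decision: with one occupied layer, that user's
// camera is routed back to their own CRTs; with several, feeds are
// rotated so each user sees another user's distorted image.
#include <array>
#include <iostream>

constexpr int kChannels = 3;  // one channel per layer/user (assumed)

// Returns, for each channel, the index of the camera feed its CRTs
// should show, or -1 when that channel's CRTs stay dark.
std::array<int, kChannels> routeFeeds(const std::array<bool, kChannels>& occupied) {
    std::array<int, kChannels> route;
    route.fill(-1);

    // Gather the channels that currently sense a user.
    std::array<int, kChannels> activeIdx{};
    int count = 0;
    for (int i = 0; i < kChannels; ++i)
        if (occupied[i]) activeIdx[count++] = i;

    if (count == 1) {
        // A lone user sees only their own distorted feed.
        route[activeIdx[0]] = activeIdx[0];
    } else {
        // Several users: shift the feeds by one position so each
        // person's presence is mirrored to someone else.
        for (int k = 0; k < count; ++k)
            route[activeIdx[k]] = activeIdx[(k + 1) % count];
    }
    return route;
}

int main() {
    auto route = routeFeeds({true, true, false});
    for (int i = 0; i < kChannels; ++i)
        std::cout << "channel " << i << " shows feed " << route[i] << "\n";
}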
theory
If you have further interest in the concepts behind this work, you can read more on pages 25-31 of my thesis.