We are obsessed with capture. On an aesthetic level we have been attempting to capture the qualities of things for thousands of years through drawing, painting and, relatively recently, photography. Advances in digital technology now allow us to capture more than just the aesthetic qualities of a thing. Now we can measure things, analyse them, and make decisions based on statistics and quantifiable data rather than qualitative personal opinions. Through this we have gained incredible insight into the world around us, and can study everything from weather patterns to the genetics of species. This gigantic planet and beyond seems so much more comprehensible now that we can understand it in terms of numbers and patterns.
Developments in digital technology have turned the focus increasingly onto the individual. We want to understand not just how an environment evolves, but also the people who move through it. We want to know how they interact with it, why they do so, what their intent might be, what they might do next and what their emotions are, and to study the little incidental quirks that could reveal more than a person intended. As with weather and climate data before it, the hope is that by collecting enough data about individuals we can begin to understand them better and make predictions about them.
Although it has the potential to be useful, collecting this amount of data can have dangerous implications. CCTV cameras have become the ever-watchful, unblinking eyes littered throughout our cities, and they have only been enhanced by movement detectors, facial recognition software and the GPS data on our phones. Because of the repeated misuse and abuse of these systems and the data they collect, we have become distrustful of those doing the collecting. Now, more than ever, we are looking for ways to be not analysed but anonymised.
Artists, too, have been using their skills to highlight the culture we live in, in which our every move and keystroke is captured. Their artwork, whilst informative, is at times eerie in the amount of data it reveals.
In 2009 Kyle McDonald undertook the live Twitter-based project Keytweeter, in which every 140 characters he typed were posted as a tweet, with a few exceptions such as passwords and e-mail addresses. He also removed, at the request of others, text that would reveal sensitive details about third parties. McDonald has said that, in undertaking the project, he became more aware that others might be “listening in” on his conversations. He was explicit and transparent about the project: the source code was released, and he alerted everyone he communicated with that his keystrokes were being tweeted. This transparency allowed him to carry on with the project, and with his day-to-day life.
His other projects, such as Scrapscreen, Important Things, and graphical works like IOGraphica and Graffiti Markup Language, which similarly capture and graphically display potentially sensitive data, all avoid crossing the line into invasiveness through this same level of transparency. Users are aware of what data is being collected and how, and can decide for themselves what is done with it.
All of these projects could be deemed invasive, but they only make visible what automated software has been doing covertly every day. Bots scan your e-mails, tweets, status updates, images and videos for information that is used to identify you. The companies behind them claim this is done not to spy on you, but to bring you more relevant information.
It is not just text that is now being mined for data. Facial, gesture and emotion recognition software is being used in a variety of invasive ways. Facebook’s automatic photo tagging can be useful for identifying friends and family, but becomes dangerous when we are in the dark about how that data is used. Similarly, advertisers are using emotion-tracking software such as Affdex to detect emotional responses to advertisements.
The aforementioned technologies are not themselves malicious. They may well have been developed with the intention of furthering our study of the world, but they become malicious when we are kept ignorant of how they are being used and how the data they capture is being employed. Only when there is transparency, as in Keytweeter and its like, can we become comfortable with our data being mined.
How, then, can we escape this data capture?
Alongside technically complex solutions from the likes of the Tor Project and Blackphone, artists and designers have been addressing the problem of avoiding capture in more creative ways.
Sang Mun’s ZXX typeface from 2013 is one such approach, obscuring text from being read by optical character recognition software. By adding noise to glyphs, be it in the form of haphazardly placed pixels and shapes or overlays of other glyphs, the hope is that the text becomes indecipherable to computers, much like the aim of CAPTCHAs.
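To make the idea concrete, here is a minimal Python sketch of noise-based obfuscation. It is not Sang Mun’s actual design process: it simply renders a word with Pillow’s default font (a stand-in for any real typeface) and scatters random pixels and short strokes over it, the kind of clutter that hampers OCR while remaining legible to a human. The filenames and parameters are illustrative assumptions.

```python
# A sketch of the noise idea behind ZXX (not the typeface's actual construction):
# render a word as an image, then scatter random pixels and strokes over it.
import random
from PIL import Image, ImageDraw, ImageFont

def obfuscate(text, noise_pixels=4000, strokes=40, size=(600, 120)):
    img = Image.new("L", size, color=255)        # white canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()              # stand-in for a real typeface
    draw.text((20, 40), text, fill=0, font=font)

    w, h = size
    # haphazardly placed pixels
    for _ in range(noise_pixels):
        draw.point((random.randrange(w), random.randrange(h)), fill=0)
    # short random strokes, echoing ZXX's overlaid shapes
    for _ in range(strokes):
        x, y = random.randrange(w), random.randrange(h)
        draw.line((x, y, x + random.randint(-30, 30), y + random.randint(-10, 10)), fill=0)
    return img

obfuscate("surveillance").save("noisy_text.png")
```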
Adam Harvey’s CV Dazzle explores how fashion can be used as camouflage against face-detection technology. Rather than camouflaging the whole face with a mask, Harvey’s solution selectively obscures parts of the face using removable tattoos and face-obscuring hair attachments, so that computer vision software becomes confused and eventually fails.
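For context, the sketch below runs the kind of detector such camouflage sets out to defeat: OpenCV’s stock Haar-cascade face detector, which hunts for characteristic light-and-dark patterns around the eyes, nose bridge and cheeks. The image filename is an assumption; a successful “dazzle” would simply produce zero detections.

```python
# Run a stock Haar-cascade (Viola-Jones) face detector over a portrait.
# Assumes OpenCV (opencv-python) is installed and "portrait.jpg" exists.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"{len(faces)} face(s) detected")   # an effective dazzle yields 0
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
```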
Matthew Plummer-Fernandez created a system in 2014 for encrypting and decrypting files made for 3D printing. When a user wishes to distribute an STL file, they can use his software to apply an algorithm that encrypts it by randomising the vertex positions, making the 3D model appear to be an abstract shape. To recover the original, the recipient must apply the same algorithm with the same settings; otherwise the model remains visually corrupted.
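The sketch below illustrates the general principle rather than Plummer-Fernandez’s actual tool: a key seeds a pseudo-random number generator, each vertex is shifted by the resulting offsets, and only replaying the same key in reverse restores the original coordinates. The function name, key and strength parameter are illustrative assumptions, and plain (x, y, z) tuples stand in for a real STL parser.

```python
# A minimal sketch of key-seeded vertex scrambling (not the actual software).
import random

def scramble(vertices, key, strength=5.0, decrypt=False):
    rng = random.Random(key)                  # same key -> same offset sequence
    sign = -1.0 if decrypt else 1.0
    out = []
    for x, y, z in vertices:
        dx, dy, dz = (rng.uniform(-strength, strength) for _ in range(3))
        out.append((x + sign * dx, y + sign * dy, z + sign * dz))
    return out

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
encrypted = scramble(model, key="shared-secret")                  # abstract blob
restored = scramble(encrypted, key="shared-secret", decrypt=True) # original shape
assert all(abs(a - b) < 1e-9
           for v, w in zip(model, restored)
           for a, b in zip(v, w))
```

Decrypting with any other key simply replays a different offset sequence, leaving the model as visually corrupted as before.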
A common theme in these approaches to escaping computer vision is the addition of noise. Computer vision software can only capture data by analysing media and finding familiar patterns; adding noise disrupts that analysis and allows the obfuscated element to go undetected.
Whilst these approaches may be reliably effective now, it is only a matter of time before computer vision software adapts to recognise glitched and corrupted files. We will enter an arms race, with ever-increasing amounts of noise being added to try to hide our messages.
Antonio Roberts is a digital visual artist based in Birmingham, UK. He produces artwork that takes inspiration from glitches in software and hardware, errors and the unexpected. http://hellocatfood.com