ISCC 11 (1) pp. 103–107 Intellect Limited 2020
Interactions: Studies in Communication & Culture
Volume 11 Number 1
© 2020 Intellect Ltd Commentary. English language. https://doi.org/10.1386/iscc_00010_7
www.intellectbooks.com 103
Received 15 June 2019; Accepted 30 June 2019
Without a blink: Machine ways of seeing in contemporary visual culture

Diğdem Sezen
Teesside University

ABSTRACT

In the last decade, following the technological and commercial advances in digital image production and in artificial intelligence, the human vision-centred understanding of visuality has changed profoundly. Machine vision technologies (MVTs) are used across a wide spectrum of activities, ranging from surveillance to medical diagnosis to adaptive visual filters on social media. This commentary calls for a rethinking of visuality that takes both technological advances and lessons learned from cultural studies into consideration.

KEYWORDS

machine vision
coded gaze
algorithmic bias
data-image
visual culture
MVTs

Since the mid-nineteenth century, engaging with the diverse issues of visual reproduction, visual culture studies has mainly focused on images generated for human vision, such as photography, film and TV images. The theoretical strategies of visual culture as a field of study aimed to understand how meaning is produced by and through images for humans in their historical context (Sturken and Cartwright 2001) and brought together approaches across the disciplines of media studies and art history (Berger 1972; Rose 2005). In the last decade, however, following the technological and commercial advances in digital image production and in artificial intelligence, the human vision-centred understanding of visuality has changed profoundly. Machine vision
technologies (MVTs), enabling the automated creation, analysis and manipulation of images as data, are used across a wide spectrum of activities: ranging from surveillance to medical diagnosis, from the automotive industry to livestock farming, and in various end-user applications from automatic image organizers to adaptive visual filters on social media. Excelling at the quantitative measurement of images for repetitive inspection tasks, these MVTs not only provide immediate, consistent and large-scale results for business productivity, but also pervade people's social and cultural lives through everyday interactions. This paradigm shift calls for a rethinking of visuality and, more importantly, of visual culture, one that takes both technological advances and lessons learned from cultural studies into consideration.
Due to the data-driven nature of contemporary images, today’s visual
culture cannot just be concerned with mimesis or signification. It is contin-
uously reproduced through algorithmic models and further complicated by
biometrics and machine vision (Rettberg 2014). As Ingrid Hoelzl points out,
for today’s visuality ‘human vision is only one among many possible sentient
systems’ (Hoelzl 2015: 73). She suggests the concept of ‘postimage’ in the
framework of posthuman theory, as a collaborative vision distributed across
species. Trevor Paglen (2016) goes even further and posits human vision in
the contemporary visual culture not only as peripheral, as Hoelzl argues, but
also as irrelevant in many ways. The majority of images in contemporary visual
culture, Paglen says, are now being produced by machines for other machines
and therefore become invisible to the human eye. He claims that the elastic,
ambiguous relationship between meaning and image in human-to-human
visual culture has ceased to exist in quantified machine-to-machine seeing
and there are no obvious ways of employing visual strategies developed for
human-to-human visual culture to make sense of machine-to-machine visual
culture.
On the other hand, trying to make sense of the contemporary image by separating it from the domain of human-centric visual culture might lead us to treat it simply as data. As Steve F. Anderson (2017) emphasizes, however, the difference between images and data still exists. Focusing on the evolving relationship between data and images, Anderson argues that data and images are complementary, existing in a dynamic interplay, and that this duality requires a nuanced, balanced and extensive rethinking (2017: 5). He
also argues that approaching the contemporary ‘data-image’ only with the tools
of emerging subfields like software studies, code studies or platform stud-
ies might alienate the arts and humanities scholars who do not write or
understand computer code (2017: 3). Instead, he emphasizes a coextensive
approach rethinking critical models developed around analogue media and
cultural studies and benefitting from ‘hard-won advances in areas such as
feminism, critical race theory and models linking popular culture and technol-
ogy to issues of class, sexuality and politics’ (2017: 3). The importance of this
approach becomes even more apparent in light of the concerns voiced regarding the social and cultural biases embedded in MVTs.
Since MVTs are based on computation, users employing these machines for different purposes tend to assume that their results are unbiased, neutral and
objective. However, as James Bridle (2018) points out, technologies do not
emerge from a vacuum. Developed over generations, through evolution and
culture, technology is ‘the reification of a particular set of beliefs and desires’
(Bridle 2018). Acknowledging that technologies are the products of a specific
world-view, The Xenofeminist Manifesto of the Laboria Cuboniks Collective (2018) also reminds us that ‘technology isn’t inherently progressive’ (2018: 17) and that technological tools are embedded with values based on the abuse and exploitation of the weak. Safiya Umoja Noble (2018: 4) coins the term ‘algorithmic oppression’ to underscore the structural ways in which racism and sexism affect people of colour and women in data-driven algorithmic culture.
The re-evaluation of ‘looking’ as ‘the gaze’ by feminist and queer theory has fundamentally transformed our understanding of visual culture in the past (Mirzoeff 1998: 391). The gaze is also a relevant and important analytical category in critical computing (Algorithmic Justice League United 2020). MIT Media Lab researcher Joy Buolamwini discusses the concept of the ‘coded gaze’ in machine vision systems and defines it as ‘the embedded views that are propagated by those who have the power to code systems’ (Buolamwini 2016). Biased systems create biased results. Algorithmic biases are deeply embedded in the image-based applications and services we use, and are extremely common in our lives. Studying gender predictions of computer vision software, Zhao et al.
(2017: 2941) found that the deep learning AI trained on gender-biased image
datasets was more likely to amplify those biases. Such biased systems could
end up affecting any space in which automated decision-making is used in
any institutional or structural capacity. To be able to interpret, question and challenge such systems, it is critical to develop an understanding of machinic ways of seeing.
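The kind of measurement Zhao et al. describe can be sketched with invented toy numbers (these are not the study's data): bias amplification is the gap between a gender–activity correlation in the training set and the same correlation in the model's predictions.

```python
# Toy sketch of bias-amplification measurement in the spirit of
# Zhao et al. (2017). All numbers are invented for illustration;
# they are not the study's data.

def woman_ratio(labels):
    """Fraction of 'cooking' instances whose agent is labelled 'woman'."""
    cooking = [gender for gender, activity in labels if activity == "cooking"]
    return cooking.count("woman") / len(cooking)

# (gender, activity) pairs as they might appear in an annotated dataset
training_labels = [("woman", "cooking")] * 66 + [("man", "cooking")] * 34
model_predictions = [("woman", "cooking")] * 84 + [("man", "cooking")] * 16

train_bias = woman_ratio(training_labels)        # skew in the training data
predicted_bias = woman_ratio(model_predictions)  # skew in the model's output
amplification = predicted_bias - train_bias      # the model widens the skew

print(f"training-set bias:  {train_bias:.2f}")
print(f"prediction bias:    {predicted_bias:.2f}")
print(f"bias amplification: {amplification:.2f}")
```

The point is not the toy numbers but the comparison itself: a positive difference means the trained system does not merely reproduce the dataset's skew but exaggerates it.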
In 2017, researchers from The Geena Davis Institute on Gender in Media
in collaboration with Google.org developed a software tool called the Geena
Davis Inclusion Quotient (GD-IQ) to measure how often we see and hear
women on-screen. The GD-IQ used machine learning to recognize patterns that human viewers might overlook in the 100 highest-grossing live-action movies from the years 2014, 2015 and 2016 (Google 2017). While the MVTs used in this study provided researchers with a remediated tool for content analysis that was faster and more precise than human vision, they also provided the machines in the study with a dataset and a perspective on representations of gender in movies. Other studies employed different approaches, focusing on the seeing behaviours of MVTs and experimenting with commercially available image recognition systems. By looking either at women in feminist movie scenes (Sezen 2019) or at artworks in museum collections (Pereira
and Moreschi 2019) through such systems, these experimental research
projects looked for oddities or glitches in the interpretations of machine
vision systems. As Safiya Umoja Noble (2018: 10) explains, such oddities are
not simply glitches but rather are fundamental to the operating systems of
such services. Studying such errors and oddities may open up ways for researchers to ask questions about the biases of the datasets or the logic of the algorithms, and to expand their understanding through interdisciplinary interpretation.
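The oddity-surfacing method described above can be sketched roughly as follows; the classify() function and all file names, labels and confidence scores here are hypothetical stand-ins for a real commercial recognition API, not the systems used in the cited studies.

```python
# Hypothetical sketch: surveying the labels an image-recognition service
# returns for a set of images and flagging interpretations that diverge
# from curatorial or scholarly descriptions of the same images.

def classify(image_name):
    """Stand-in for a real image-recognition API call; returns a list
    of (label, confidence) pairs, hard-coded here for illustration."""
    fake_results = {
        "protest_scene.jpg": [("crowd", 0.91), ("parade", 0.62)],
        "abstract_painting.jpg": [("food", 0.55), ("pattern", 0.43)],
    }
    return fake_results[image_name]

# What a human describer expects each image to show
expected = {
    "protest_scene.jpg": {"crowd", "protest"},
    "abstract_painting.jpg": {"painting", "art"},
}

# Collect 'oddities': machine labels that never match the expected description
for name, wanted in expected.items():
    labels = {label for label, confidence in classify(name)}
    oddities = labels - wanted
    if oddities:
        print(f"{name}: unexpected labels {sorted(oddities)}")
```

In this method the flagged labels are not discarded as noise; as Noble's argument suggests, they become the starting point for questions about the dataset and algorithmic logic that produced them.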
The public understanding and awareness regarding these new ways of
data-driven seeing thus require inspiring, intuitive and immediate critical
attention. Considering the dual character of contemporary data-image shaped
by both the politics of image forged by visual culture heritage and the politics
of data in algorithmic culture, we need to rethink our critical methods and
tools for visual research, as well as our interdisciplinary positions. This would provide new methodologies not only for visual research but also for machine vision, shaping its evaluation through critical thinking.
REFERENCES
Algorithmic Justice League United (2020), ‘Home page’, http://www.ajlunited.org. Accessed 14 February 2020.
Anderson, Steve F. (2017), Technologies of Vision: The War Between Data and
Images, Kindle ed., Cambridge, MA: The MIT Press.
Berger, John (1972), Ways of Seeing, London: Penguin.
Bridle, James (2018), New Dark Age: Technology and the End of the Future, New
York: Verso.
Buolamwini, Joy (2016), ‘InCoding – in the beginning was the coded gaze’, MIT Media Lab, 17 May, https://medium.com/mit-media-lab/incoding-in-the-beginning-4e2a5c51a45d. Accessed 30 June 2019.
Google (2017), ‘Methodology and GD-IQ findings from editorial: “The women missing from the silver screen and the technology used to find them”’, https://static.googleusercontent.com/media/about.google/en//assets/pdf/Our-Methodology-GDIQ.pdf?cache=b54c6dd. Accessed 30 June 2019.
Hoelzl, Ingrid (2015), ‘From softimage to postimage’, Leonardo, 50:1, pp. 72–73, https://www.mitpressjournals.org/doi/abs/10.1162/LEON_a_01349. Accessed 30 June 2019.
Laboria Cuboniks Collective (2018), The Xenofeminist Manifesto, New York:
Verso.
Mirzoeff, Nicholas (1998), Visual Culture Reader, New York: Routledge.
Noble, Safiya Umoja (2018), Algorithms of Oppression: How Search Engines
Reinforce Racism, New York: New York University Press.
Paglen, Trevor (2016), ‘Invisible images (your pictures are looking at you)’, The New Inquiry, 8 December, https://thenewinquiry.com/invisible-images-your-pictures-are-looking-at-you. Accessed 30 June 2019.
Pereira, Gabriel and Moreschi, Bruno (2019), ‘Ways of seeing with computer vision: Artificial intelligence and institutional critique’, MediArXiv Preprints, https://mediarxiv.org/nv9z2/. Accessed 30 June 2019.
Rettberg, Jill Walker (2014), Seeing Ourselves Through Technology: How We Use Selfies, Blogs, and Wearable Devices to See and Shape Ourselves, New York: Palgrave Macmillan.
Rose, Gillian (2005), Visual Methodologies, London: Sage.
Sezen, Diğdem (2019), ‘Machine gaze on women: Looking at feminist movie scenes through machine vision’, Female Agency and Subjectivity in Film and Television, Istanbul Bilgi University, Istanbul, 11–13 April.
Sturken, Marita and Cartwright, Lisa (2001), Practices of Looking: An Introduction
to Visual Culture, New York: Oxford University Press.
Zhao, Jieyu, Wang, Tianlu, Yatskar, Mark, Ordonez, Vicente and Chang, Kai-Wei (2017), ‘Men also like shopping: Reducing gender bias amplification using corpus-level constraints’, in R. Barzilay and M.-Y. Kan (eds), Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Copenhagen, Denmark, 7–11 September, Copenhagen: Curran Associates Inc., pp. 2941–52.
SUGGESTED CITATION
Sezen, Diğdem (2020), ‘Without a blink: Machine ways of seeing in contemporary visual culture’, Interactions: Studies in Communication & Culture, 11:1, pp. 103–107, doi: https://doi.org/10.1386/iscc_00010_7
CONTRIBUTOR DETAILS
Diğdem Sezen is a lecturer at Teesside University, School of Computing, Engineering and Digital Technologies. She received her Ph.D. from Istanbul University, was awarded a Fulbright scholarship for her doctoral studies and conducted research at the Georgia Institute of Technology, School of Digital Media, United States. She acquired the Turkish equivalent of habilitation in Visual Communication Design and Digital Game Design in 2017. She researches, publishes and teaches in the fields of digital culture, new media literacies, games and transmedia narratives. Her research has been published by Springer, Palgrave Macmillan and Routledge. Between 2017 and 2019 she was a visiting researcher at Rhein-Waal University of Applied Sciences in Germany for the European Union-funded RheijnLandXperiences innovation project on computer vision-aided storytelling practices within cultural institutions.
E-mail: d.sezen@tees.ac.uk
https://orcid.org/0000-0001-9892-7274
Diğdem Sezen has asserted their right under the Copyright, Designs and
Patents Act, 1988, to be identified as the author of this work in the format that
was submitted to Intellect Ltd.