Proponents of generative AI tools claim they will supplement, even replace, the work of cultural production. This raises questions about the politics of visibility: what kinds of stories do these tools tend to generate, and what do they not? Do these tools match the diversity of representation that marginalized populations and non-normative communities have fought to secure in publishing and broadcast media? I tested three widely available generative AI tools with prompts designed to reveal these normative assumptions, prompting each tool multiple times per prompt to track the diversity of outputs to the same query. I demonstrate that, as currently designed and trained, generative AI tools tend to reproduce normative identities and narratives, rarely representing less common arrangements and perspectives. When they do generate variety, it is often narrow, and deeper normative assumptions persist in what remains absent.
Algorithms may now be our most important knowledge technologies, “the scientific instruments of a society at large,” and they are increasingly vital to how we organize human social interaction, produce authoritative knowledge, and choreograph our participation in public life. Search engines, recommendation systems, and edge algorithms on social networking sites not only help us find information; they provide a means to know what there is to know and to participate in social and political discourse. If not as pervasive and structurally central as search and recommendation, trending has emerged as an increasingly common feature of such interfaces and seems to be growing in cultural importance. It represents a fundamentally different logic for how to algorithmically navigate social media: rather than identifying and highlighting what might be relevant to “you” specifically, trending algorithms identify what is popular with “us” more broadly. But while the techniques may be new, the instinct is not: what today might be identified as “trending” is the latest instantiation of the impulse to map public attention and interest, whether through surveys and polling, audience metrics, market research, forecasting, or trendspotting. Understanding the calculations and motivations behind the production of these “calculated publics,” in this historical context, helps highlight how these algorithms are relevant to our collective efforts to know and be known. Rather than discuss the effects of trending algorithms, I want to ask what it means that they have become a meaningful element of public culture. Algorithms, particularly those involved in the movement of culture, are both mechanisms of distribution and valuation, part of the process by which knowledge institutions circulate and evaluate information, and by which new media industries provide and sort culture.
This essay examines the way these algorithmic techniques themselves become cultural objects, get taken up in our thinking about culture and the public to which it is addressed, and get contested both for what they do and what they reveal. We should ask not just how algorithms shape culture, but how they become culture.
Hosted by Northumbria and Birmingham City Universities, the Deplatforming Sex roundtable took place via Teams in October 2021. Participants included Danielle Blunt, Stefanie Duguay, Tarleton Gillespie and Sinnamon Love. Clarissa Smith chaired the discussion, which was transcribed and then edited to cut digressions and repetitions for publication. The roundtable provided the opportunity to reflect on recent moves to excise sex and forms of sexual commerce and performance from online spaces, while marking out some key issues for future research with and about sex workers, performers and other content providers. Our discussion provided critical engagement with ongoing legislative changes that are impacting content and providers directly and indirectly.
Public debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and suspending users, or opting not to do so. However, removal is not the only available remedy. Reducing the visibility of problematic content is becoming a commonplace element of platform governance. Platforms use machine learning classifiers to identify content they judge misleading, risky, or offensive enough that, while it does not warrant removal under the site guidelines, it warrants demotion in algorithmic rankings and recommendations. In this essay, I document this shift and explain how reduction works. I then raise questions about what it means to use recommendation as a means of content moderation.
Recent social science concerning the information technology industries has been driven by a sense of urgency around the problems social media platforms face. But it need not be our job to solve the problems these industries have created, at least not on the terms in which they offer them. When researchers are enlisted in solving the industry’s problems, we tend to repeat some of the missteps common to the study of technology and society.
With increasing attention to the labor, criteria, and implications of content moderation come opportunities for real change in the ways that platforms are governed. After high-profile exposés like The Guardian’s “Facebook Files,” it is becoming more difficult for platforms to regulate in secret. Governments around the world are increasingly seeking to influence moderation practices, and platforms now face substantial pressure from users, civil society, and industry groups to do more on specific issues like terrorism, hatred, ‘revenge porn,’ and ‘fake news.’ In light of this pressure and the opportunities it implies, this roundtable will consider options for the future of content moderation. The question is not just how the moderation apparatus should change, but what principles should guide these changes. This panel brings together perspectives from media and information studies, law, and civil society to explore a variety of approaches to regulation, from corporate self-governance ...
Online content providers such as YouTube are carefully positioning themselves to users, clients, advertisers, and policymakers, making strategic claims as to what they do and do not do, and how their place in the information landscape should be understood. One term in particular, ‘platform,’ reveals the contours of this discursive work. ‘Platform’ has been deployed in both their populist appeals and their marketing pitches – sometimes as technical platforms, sometimes as platforms from which to speak, sometimes as platforms of opportunity. Whatever tensions exist in serving all of these constituencies are carefully elided. The term also fits their efforts to shape information policy, where they seek protection for facilitating user expression, yet also seek limited liability for what those users say. As these providers become the curators of public discourse, we must examine the roles they aim to play, and the terms with which they hope to be judged.
This paper considers how media workers and organizations make use of the abundance of metrics available to producers in the contemporary online environment. While the expansion of audience measurement on digital music platforms, dashboard analytics, and third-party providers raises broad societal concerns about the quantification of culture, less attention is paid to how professionals in the music industries approach, understand, and deploy these metrics in their work. Drawing on survey and interview data, we find that music workers neither take metrics on faith nor reject them out of hand; rather, they make sense of them, deploy them strategically, and narrate their meanings to give themselves rationales for making investments and predictions, and to persuade others to do so. The paper shows that the longstanding quest to “know the audience” has not been fulfilled by the rise of digitalization or the expansion of audience measurement techniques. Instead, attention to data has become more granular, more in need of triangulation with numerous other data sources, and at times more neurotic. The media industries continue to be a place where workers “make do” with the resources provided by the systems of which they are part, and upon which they increasingly depend, in order to manage the uncertainty that is endemic to the business of culture.