For each of our keystone texts, you will complete a short response to a writing prompt. There will be six of these assignments, and each is worth five points.
These short papers should be between 750 and 1500 words (with the exception of paper #6, which follows a slightly different format), and they should demonstrate that you've thought carefully about the prompt and about the reading. The prose in these papers should reflect some level of revision, but they do not have to be perfectly polished. I am primarily interested in the ideas generated by the practice of writing these papers. Show me careful thinking, and I'll be willing to let sloppy sentences slide.
The aim of the prompts is to provoke new ways of thinking about the reading and about the topic of the class, and I may ask you to think about things you're not accustomed to thinking about in ways that might also be new. Consider these papers as opportunities to take risks in your thinking. These are low-stakes assignments meant to provide some space for creativity.
For your final project, you will choose one of the short responses and expand it to propose a larger project. This means you should consider each of these short writing assignments as potential rough drafts for that final project.
When providing feedback on these, I will focus on how you might expand on the ideas in the paper for the final project. I will also be asking the following questions:
In "Siri Disciplines," Lawrence argues that speech technologies like Siri operate in a disciplinary mode - they force nonstandard speakers of English to assimilate, and they exclude any speech practices to don't fit the narrow conception of what counts as "understandable." We could extend this critique to other technologies as well, even the keyboards we use discipline us and shape the way we write and think.
What approaches to design might help us create technologies that avoid this disciplinary mode? How might we create technologies that aren't rooted in discipline and exclusion? If technologies like Siri discipline, are there ways we might redesign them? Or we could even think beyond particular technologies: Are there general approaches to design that could result in technologies that do not force users to assimilate? Lawrence provides one possible answer to this question when she says that developers "on the periphery" are most likely to offer a way out of this bind. Where else might we look, and what approaches could show the most promise?
Choose one of the approaches to ML-based text classification described by Graham and Hopkins and use it to propose a potential "AI for Social Justice" project. Be as detailed as you can: What is the research question you would pursue? Why is that question important? How would you carry out the study? What data would you analyze? What would you expect to find, and why? What might you need (including, but not limited to, certain expertise with machine learning) to complete the study? Who might you collaborate with?
Steele argues that Black women are technically savvy but that technoculture has defined technology in a narrow (Western, white) way that excludes that savvy or that understands it as not *real* technology: "Black women have always engaged with technology; it is the definition of technology and technical expertise that shifted." Steele is asking us to reconsider what the term "technology" really means.
What is a practice that is deemed by many to be "not technical" or at least "not technical enough," and how does redefining that practice as "technical" or "technological" change our orientation to the practice? Think here of the spaces you interact with on a regular basis. Are there practices that are seen by many as frivolous, silly, non-serious, simple, or easily executed by anyone? Why are they seen that way? How could you reframe them as technical and as the result of craft, technique, or expertise? How does such a reframing help us see these practices in a new way?
Amoore argues that an algorithm's single output is too often seen as rooted in certainty and correctness and that even critiques of algorithms fall prey to this problem. If a predictive policing algorithm determines that a certain neighborhood should be patrolled due to the data fed to it, police departments tend to treat this result as a truth, and they act accordingly. Critics, on the other hand, aim to correct the racist, classist assumptions that led to that result, trying to uncover the algorithm's embedded biases. Amoore argues for a different approach altogether, offering a "cloud ethics" that does not accept algorithmic determinations as true or certain but also does not call for an approach that would break open the "black box" and aim to correct its biases.
So, what does a cloud ethics look like? In chapter 6, Amoore suggests that algorithmic results are "fabulations" and that a cloud ethics should not seek to correct those fantastical stories or point out the "real story" but should rather "confront the specific fabulatory functions of today's algorithms with a kind of fabulation of our own" (158). By creating other fabulations and stories, "the single output of the algorithm is reopened and reimagined as an already ethicopolitical being in the world." We create more stories to highlight that the algorithm is telling stories and not offering solid truths.
Can you imagine an example of this cloud ethics in practice? What would it look like? How would it work? What algorithm could we engage with as we create "fabulations of our own"?
Computer vision technologies routinely misclassify or fail to recognize nonwhite faces, and many have argued that this indicates a lack of diversity in the datasets used to train these technologies as well as a lack of diversity in computer programming teams. But Amaro offers a different kind of critique, one aimed at the very nature of computer vision technologies. Amaro argues that the inclusion of black faces in datasets, and thus the creation of systems that recognize black faces, is not the only response to this problem. In fact, he argues that it is a response that might make many problems worse.
Given that computer vision technologies are based on a logic of "coherence," such technologies constantly aim to make sense of that which they see as incoherent. If whiteness is the norm, blackness is seen as incoherent, as a problem to be solved. But for Amaro, blackness is coherent in that it is "continually taking shape." If an algorithmic system perceives this as incoherence and thus tries either to make it cohere (by comparing it against a norm) or to exclude it altogether (further solidifying the norm), then the answer is not to incorporate blackness into that system. Instead, it is to imagine an entirely new system.
In this short writing assignment, your task is to imagine a technology inspired by Amaro's framework. What technology can you imagine that is not rooted in "coherence and detectability"? This does not have to be a computer vision technology, though it can be. Your job is to invent, to treat this as a thought experiment: What other possibilities can we imagine if we take Amaro's argument as our starting point? What other futures are possible if the "black technical object" is not understood as needing to conform to the algorithm but is instead understood as an invitation to think differently about technology and design?
You have now completed five of these short papers, responding to the prompts that I have provided. Your task in this final assignment is to write a prompt for readers of Legacy Russell's Glitch Feminism.
The length of this paper will obviously not be 750-1500 words. In fact, depending on how you write this prompt, it could be as short as a single sentence or as long as multiple paragraphs. Regardless of its length, your job is to create a prompt that would allow someone reading Glitch Feminism to engage with its ideas in an inventive way.