Prompt #4: Amoore

Amoore argues that an algorithm's single output is too often treated as rooted in certainty and correctness, and that even critiques of algorithms fall prey to this assumption. If a predictive policing algorithm determines, based on the data fed to it, that a certain neighborhood should be patrolled, police departments tend to treat this result as a truth and act accordingly. Critics, for their part, aim to correct the racist and classist assumptions that produced that result, trying to uncover the algorithm's embedded biases. Amoore argues for a different approach altogether, offering a "cloud ethics" that does not accept algorithmic determinations as true or certain but also does not call for breaking open the "black box" to correct its biases.

So, what does a cloud ethics look like? In chapter 6, Amoore suggests that algorithmic results are "fabulations" and that a cloud ethics should not seek to correct those fantastical stories or point out the "real story" but should rather "confront the specific fabulatory functions of today's algorithms with a kind of fabulation of our own" (158). By creating other fabulations and stories, "the single output of the algorithm is reopened and reimagined as an already ethicopolitical being in the world." We create more stories to highlight that the algorithm is telling stories, not offering solid truths.

Can you imagine an example of this cloud ethics in practice? What would it look like? How would it work? What algorithm could we engage with as we create "fabulations of our own"?
