You Do Not Need an AI Policy / by Sonja Drimmer

You do not need an AI policy.

Forget the crisis of plagiarism and cheating; that’s yesterday’s news. It is becoming increasingly common for my colleagues, both within my own university and elsewhere, to wonder aloud what they are supposed to do when confronted with an email ghostwritten by prompt.

A seemingly natural inclination is to come up with a set of guidelines. That’s certainly something that I, as graduate program director of my department, have been urged to do at my institution. Guidelines offer what appears to be a contract, a system, an outsourcing of messy arbitration to a decision tree: “if → then.”

But ultimately such guidelines serve the entrenchment of a set of products that are being leveraged to enervate interpersonal relations, negotiation, and shared governance. Such guidelines give the product that we are calling AI a futurity, a legitimacy, a standing so seemingly permanent and inevitable that it merits a place in the manuals of our places of work.

So in instituting an AI policy, no matter how well intentioned, no matter how seemingly opposed to the eviscerating and immiserating effects of these products on educational institutions and society, we are inadvertently doing the industry’s work. We are explicitly saying: this product has so guaranteed a place in our institutions that we are mapping out sanctioned uses for it.

And by implementing an AI policy, we are strengthening the punitive element of education. We are creating a code of conduct that, if we are true to our word about committing to this code, mandates enforcement. A policy requires policing.

That is the impact of instituting an AI policy.

This is all well and good, and leaving students and colleagues to their own devices sounds very utopian. But it does nothing to solve the problem of these products being used in ways that educators know are detrimental to the process and mission of education.

That mission was already compromised when our institutions decided, out of one side of their mouths, to pay for these products and partner with tech firms, while insisting, out of the other, that we, the educational laborers, preserve the integrity of education.

Devising an AI policy means more work for me in thinking it up, and more work for me in enforcing it.

There is a solution. And the solution is to concede that we are confronting a social, economic, and political problem, not a technological one.

So when someone chooses to subtract themselves from a social encounter, whatever form that self-willed subtraction takes, the most appropriate response is to meet it in kind. Meet absence with absence; meet apathy with apathy. It is not my job to compensate for a person’s erasure of me by making myself more present.

What does that look like in practice? For me, it means meeting with students and colleagues who ask to speak with me; it means responding to emails that are grounded in our relationship with emails that reflect our relations. It means being a decent human when someone shows me some goddamned human decency.

And if someone wants to complain about my non-responsiveness when those conditions aren’t met, then I welcome them to involve their whole selves in the process of interpersonal relations that complaining entails.

[Image: an index card on which is written, “no policy.”]