Diversity statements are one of the many things that are a dime a dozen these days. Now a standard part of the application package for professorial appointments at Canadian universities, these statements (alongside cover letters, CVs, teaching reviews, sample syllabi and the rest) require candidates to affirm the principles of diversity, equity and inclusion (DEI) in higher education. Those keenly committed to DEI initiatives regard diversity statements as an indispensable tool for assessing whether a prospective candidate has a sophisticated grasp of institutional priorities. Cynics in the academic world view them as make-work projects whose lack of subtlety smacks of a moral purity test, one expressly hostile to the work of higher education.
Regardless of one’s viewpoint, an original diversity statement is hard to come by. As a quick Google search reveals, the best advice on diversity statements lacks much in the way of diversity. So too do the templates exhorting applicants to be honest and original while largely reproducing an uncontroversial set of rhetorical commitments: centring the de-centred voices of marginalized groups, exploding canons of taste, offering implicit or explicit critiques of colonialism, creating “safe spaces” in the classroom, aligning oneself with an historically disenfranchised community, and guarding against microaggressions.
In March of this year, Patrick Hrdlicka, Professor of Chemistry at the University of Idaho, asked the world’s most famous AI platform, ChatGPT, to produce a sample diversity statement. Given the reputation of OpenAI’s large language model (LLM) for producing passable prose, it’s not surprising that Hrdlicka’s prompt delivered an adequate DEI statement. ChatGPT’s highly readable diversity document captured the now-familiar catchphrases of universities, corporations and government bureaucracy: diversity is key to success, innovation depends on hearing every voice, we must be active listeners intentionally creating inclusive spaces, DEI is a way of life and not just a way of work, and more. Hrdlicka, no critic of DEI initiatives himself, concludes his brief post about the experience by wondering whether new ways of measuring DEI commitment may be necessary in an AI-saturated world.
Concerns over AI saturation in higher education had been echoing throughout the winter and into the early spring of 2023, though less with respect to AI and job applications than to student submissions. AI boosters hailed the new technology for its potential to help students struggling to produce essay drafts, outlines and thesis statements. Anti-AI “Cold Warriors” saw it as one more devastating technological blow to higher education, one that would undermine their efforts to help students acquire the habits of free and independent thought. Both sides were trying to determine appropriate methods of evaluation in a world where the production of AI texts by stressed, confused or uninterested students might soon expand from a trickle to a flood.
But while the debate over AI use in higher education – including in the classroom – has a certain practical urgency, the deeper risk lies in missing the extent to which AI is not merely a new tool but also a symptom of well-established trends in Canadian education: a largely unreflective accommodation of new technologies (even when they demonstrably undermine habits of learning) and a growing technocracy shaping the culture of campus conversation. From this perspective, ChatGPT’s aptitude for producing moralizing bureaucratic newspeak alongside passable papers on Shakespeare – largely inoffensive and generally uninspired – points to an existential crisis facing higher education in Canada.
Read the full op-ed at C2CJournal.ca.