No. But the Ohio State University says that its students are required to pursue this goal if they want to receive their degree.
Time was when engineers, linguists, and psychologists wanted to create programs modeled on human cognition and speech. In fact, three quarters of a century later, that time is still now, as thousands, if not millions, of poorly paid contract laborers across the globe are being exploited--arguably abused--to train large language models for commercial firms. The idea, to put it broadly, is to create a program that could speak human fluently.
Which all makes the Ohio State University’s new initiative in "AI Fluency" so strange. I am not a linguist or a cognitive psychologist, so I'm unable here to speak to the concept of "fluency" itself, a complex and no doubt deeply contested one. What interests me here is how "fluency" functions in this marketing campaign, and the perverse way the metaphor is deployed to sell this now compulsory program at a public, state university.
According to the university's website:
Ohio State is leading a bold, groundbreaking initiative to integrate artificial intelligence into the undergraduate educational experience. The initiative will ensure that every Ohio State student, beginning with the class of 2029, will graduate being AI fluent — fluent in their field of study, and fluent in the application of AI in that field.
Ravi V. Bellamkonda, Executive Vice President and Provost of OSU, is quoted on the same webpage as saying that all OSU students will "become 'bilingual,'" frankly a galling use of the term given that bilingualism itself has been nearly criminalized, as Americans are being targeted by ICE just because they're heard speaking Spanish...but I digress.
I wrote the term "perverse" above to describe how the metaphor of fluency is being used here. I find it perverse for two reasons. First, because it commits an anthropomorphizing fallacy: one does not "speak" or become "fluent" in AI any more than one can speak or become fluent in refrigeration. (And all of this raises the question of why someone needs to spend a semester learning how to use commercial software that is designed to be easy enough for a crocodile to use.) Second, the use of fluency here folds back on itself the very goal that the engineers of "AI" are attempting to achieve, which is to make algebra speak human.
As with literally everything in life, Star Trek TNG provides great wisdom here.
In the episode "Hero Worship," we meet Timothy, a traumatized young boy who has lost his whole family in a cataclysm suffered by their ship. Over the course of the episode we see Timothy suppressing his trauma by adopting Data's mannerisms and deciding that he wants to be an android, just like Data. Because this is television, the episode takes us neatly through the arc of discovering what happened to Timothy's ship and the acute cause of his trauma (he thought he was responsible for the destruction after he accidentally knocked into a computer panel), and eventually, with the help of counseling from Data and Troi, he decides he wants to be a boy again.
American universities have been beset by rolling waves of blunt force trauma over the last four decades. I think we can take a cue from TNG and decide that wanting to become fluent in a "neural net" modeled on human cognition that itself remains incompletely understood is folly. And maybe if more Americans watched TNG, they'd know how foolish this is.