The California State University system, facing an estimated budget shortfall of $428 million to $1 billion next year because of state funding cuts and deferrals, this week announced a partnership with major technology companies including OpenAI, Google and Microsoft to bring their artificial intelligence products into higher education.
The 23-school, 460,000-student CSU system, including Cal State Monterey Bay, touted the program as a “landmark initiative” to become America’s “first and largest AI-powered university system.” Some are calling it a costly and damaging educational misstep.
The Bay Area News Group spoke with Cal State’s deputy vice-chancellor of academic and student affairs Nathan Evans and chief information officer Ed Clark, who defended the initiative, and with San Francisco State University anthropology professor Martha Lincoln, who prohibits her students from using generative AI in their classwork and who assailed it. Directly quoted comments appear in quotation marks.
Q: How much are you paying OpenAI for ChatGPT Edu?
Ed Clark: $16.9 million over an 18-month term.
Martha Lincoln: “This is so deeply distressing. It’s absolutely shocking. For a while we didn’t even have regular paper in our copier: It was all three-hole punch. We don’t have enough counselors on our campus. When students have mental health concerns, they’re waitlisted for weeks if not months.”
Q: What other expenses will CSU incur from these partnerships?
CSU spokesperson: Additional costs have not been quantified.
Q: What kind of technology tools will students receive?
Clark: CSU bought an education-specific version of OpenAI’s ChatGPT chatbot, called “ChatGPT Edu,” for all staff, faculty and students. Other uses of corporate partners’ technology could include the development of personalized “AI tutors” for students.
Nathan Evans: The initiative will help ensure “more equitable access” to technologies, across the CSU system. “A number of our campuses were already individually going down this route, but not all of our campuses have the immediate resources to be able to do that.” The technologies could also be employed to help students navigate educational and administrative processes.
Lincoln: “This has some signals of trying to recoup the public image of CSU, making us seem like a leader. There are real risks associated with early adoption. It looks a little naive to be jumping on this bandwagon first. The perceived value of the diploma might change. It’ll be a mark of less distinction, like having an online university degree.”
Q: What’s different about the ChatGPT Edu version from the regular version anyone can access?
Clark: Data is contained inside the CSU system and can’t be accessed or used by OpenAI or other outside entities to train AI models or for other purposes.
Lincoln: “These are not very transparent entities. I don’t think that CSU has a lot of leverage in this arrangement.”
Q: Is CSU substituting AI for human educators?
Evans: “Flatly, no. This is about augmenting and adapting to the market at the moment and the expectations of employers.” Faculty could receive grants to work with AI products and share findings throughout the system.
Clark: “We can help shape the future of artificial intelligence.”
Lincoln: “There are real questions about how those products will be used that are … not addressed. Higher education and the learning process are domains where automation is very problematic.”
Q: CSU won’t be using these technologies to replace staff?
Evans: “That’s not the place we’re starting from. It’s about, ‘How do we leverage that technology to be more adaptable to different learning styles?’”
Lincoln: “We don’t know whether this will translate into faculty layoffs or other staff reduction, but it sure could. CSU is in very deep water financially. I have yet to see the promise from the CSU that jobs or job duties won’t be reduced by this development.”
Q: How does promoting ChatGPT and AI use in school settings not encourage cheating?
Evans: “Our campuses have been in real time adapting their own expectations of academic integrity and student conduct to reflect the current times and realities of these tools. This technology is also emergent and changing daily, so nailing down exactly the boundaries of what that might look like is continually evolving.”
Clark: The birth of the World Wide Web also led to worries about cheating. “We’re going to have to figure out ways that we’re going to have to maintain academic integrity (but) think about if we had sat out of the internet.”
Lincoln: CSU adopting ChatGPT “unfortunately sends the message to students that we are not investing in them personally and we are not interested in their authentic learning.”
Q: Are you concerned relying on this technology will degrade students’ brainpower?
Clark: “We’re going to assess our students differently. In every workforce environment it’s going to be an expectation that you know how to use these tools.”
Evans: CSU will work with campuses and faculty on course design and redesign. “I see an opportunity in front of us to help model what critical thinking in the age of AI would look like.”
Lincoln: “This is deeply counter to the mission of developing critical thinking. It takes a lot of effort to become a good critical thinker and it takes a lot of effort to become a good writer. That requires time and iterative feedback: human interaction, which is costly. ChatGPT writes poorly.”
Q: Do you think these tech companies have students’ educational interest at heart, rather than just maximizing profit?
Clark: “Investing in companies doesn’t mean endorsing their politics or their current stances. We have to be part of shaping the future.”
Lincoln: CSU has become an “economic engine for California” by giving opportunities to minority and working-class students for meaningful jobs. “It’s very concerning to … turn over so much power in the educational process” to corporations concerned primarily with profit.
Q: These generative AI models are notorious for producing false results. Does that worry you?
Clark: Students need to have “a critical eye with the responses you get back from artificial intelligence or Google search. There’s so much misinformation out there.”
Lincoln: “We cannot responsibly give (students) these inaccurate tools that bake in mistakes and confusion and faulty information.” Because AI models are trained on information scraped from across the internet, using them in education “really opens the door to the recycling of problematic attitudes and problematic language without any oversight or context or correction.”