
Faculty 'cautiously optimistic' about the potential of generative AI – news.vt.edu


Intelligent tutoring

That rethinking, for some faculty, involves replacing gameable assignments based on memorizing and summarizing with assignments involving problem-solving, in-class creation, critical thinking, and collaboration.

Beyond that, faculty are considering how AI models such as ChatGPT can customize learning by producing dynamic case studies or offering instant feedback or follow-up questions. “It could be emergent and responsive in a way that one human never could,” said Jacob Grohs, associate professor of engineering education in the College of Engineering. “It really ups the ante in terms of what we need to be doing as teachers.” 

In a first-year engineering course Andrew Katz taught last semester, the assistant professor of engineering education had ChatGPT explain foundational engineering concepts with different audiences in mind — a first-grader, a high schooler, an undergraduate. Then, his students identified baseline pieces of information amid the varying layers of complexity. “I’ll continue to encourage students to use these tools this fall,” he said. “So then the biggest question is, How do you help students use them thoughtfully?”

One use he’s particularly hopeful about is AI’s potential as an intelligent tutoring system that can individualize education by using students’ interests to teach new information — for instance, offering soccer metaphors to teach a new concept to a soccer-playing student. “If you can take even a step in that direction, that’s a big improvement,” Katz said.

For now, many faculty are making AI the subject of assignments. They’re asking students to analyze and identify weaknesses in arguments produced by ChatGPT, for instance, or to edit an AI-produced essay with “track changes” on.

That kind of critical thinking about generative AI is vital, said Ismini Lourentzou, assistant professor of computer science in the College of Engineering. “It’s our responsibility as educators to teach students how to use these tools responsibly, and then understand the limitations of these tools.”

Potential AI pitfalls

The limitations are, admittedly, worrisome.

Lourentzou, who has long worked at the intersection of machine learning, artificial intelligence, and data science, recently collaborated on a commentary published in the biomedical journal eBioMedicine pointing out how AI models amplify pre-existing health care inequities for the already marginalized. 

Junghwan Kim, assistant professor of geography in the College of Natural Resources and Environment, published a research paper in the journal Findings about potential geographic biases in a generative AI chatbot’s presentation of problems and solutions related to transportation in the United States and Canada. 

For students to develop digital literacy around AI, they must understand its flaws, including bias, hallucinations, privacy concerns, and issues of intellectual property. Such problems aren’t necessarily a dealbreaker, as long as students learn about them. “I’m a little concerned,” Kim said. “But my argument is, let’s be aware of the capabilities and limitations and then use it wisely.” 




