Driverless cars, computers that beat humans at games and more precise medical diagnoses made headlines in the 2010s. All of these were made possible by one powerful development: artificial intelligence.
Artificial intelligence, or AI, is broadly defined as using machines to perform tasks normally done by humans, whether it’s challenging a human to a game of chess, automatically filtering spam out of your inbox or interpreting medical imagery for accurate patient care. Most applications depend on increasingly sophisticated algorithms that make computers faster and more precise, a field that has exploded over the last decade.
“It’s truly remarkable how far we’ve come in this field of advanced science,” said Ervin Sejdic, associate professor of electrical and computer engineering at the University of Pittsburgh, who specializes in intelligent systems.
Pitt has enjoyed a front-row seat to AI testing in the city of Pittsburgh recently, with autonomous vehicle testing by multiple companies and smart robots in innovation labs, as well as research in fields such as medicine and education coming out of the University itself.
Sejdic’s current research is in medical applications for AI, including one study he co-authored that used AI to interpret sensor positioning and movement of the hyoid bone during swallowing. This noninvasive tracking could be used to treat and rehabilitate patients who have trouble swallowing due to injury or disease.
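The study itself relies on sophisticated models, but the underlying idea of turning a sensor signal into discrete events can be illustrated with a toy sketch. The function and threshold below are purely hypothetical, not the study's actual method: it marks a candidate swallow wherever a one-dimensional displacement trace first rises above a threshold.

```python
# Hypothetical sketch: detecting swallow events in a 1-D hyoid-displacement
# signal by simple threshold crossing. The study's actual models are far more
# sophisticated; the function name and threshold here are illustrative only.

def detect_swallows(signal, threshold=1.0):
    """Return the index of each rising edge where displacement exceeds threshold."""
    events = []
    above = False
    for i, value in enumerate(signal):
        if value > threshold and not above:
            events.append(i)   # rising edge: a new candidate swallow
            above = True
        elif value <= threshold:
            above = False
    return events

# Toy displacement trace with two clear excursions above the threshold
trace = [0.1, 0.2, 1.5, 1.8, 0.3, 0.2, 0.1, 1.6, 1.9, 0.4]
print(detect_swallows(trace))  # -> [2, 7]
```

A real system would filter noise and classify event shape rather than rely on a fixed cutoff, but the event-detection structure is the same.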
Despite advances like these, Sejdic believes the next decade will bring an “AI winter” as real-life applications catch up to the sheer power of new technologies and algorithms.
“There is too much hype going on, and many researchers, scientists and engineers will leave the field as they realize that the true power of AI lies in data and applications,” he said. “And without a thorough understanding of real-life needs in various applications, it will be difficult to create more algorithms and solutions. People who have always done AI, data science and machine learning research will still be there, but many people currently involved in this world will move on to the next topic du jour.”
Sejdic believes the “true power” will be in the automation of tasks.
“If you’re a radiologist and AI is used to identify cancerous spots on lungs, you can focus on other tasks such as determining the best treatment options for patients. It would improve the workplace for everyone involved,” he said.
“There’s a lot of excitement about AI. One misconception is that people want to use AI to replace teachers and other workers, but that’s not at all where the research is headed,” said Diane Litman, professor of computer science at Pitt and senior scientist at the University’s Learning Research and Development Center (LRDC).
Rather, Litman said, the future lies in using AI in tandem with teachers, handling the tasks a computer can do more quickly and easily. This will free teachers to devote their time to more complicated aspects of the job, such as guiding classroom discussions or making sure students are getting equitable opportunities to learn in class.
She said that there’s a lot of interest in using AI to develop systems to support personalized education—like giving students more personalized feedback on their work.
“In addition to seeing more feedback on student papers, we’ll see a lot more computer-based tools to help students with their academic writing,” said Amanda Godley, center associate at LRDC and professor in the School of Education. “And that will mean that students will get more feedback because they’re not only relying on their instructors.”
While a computer program can’t grade a paper by itself, it can flag patterns of grammatical errors or illogical reasoning in a particular student’s piece of writing, leaving the instructor free to coach the writer on those issues.
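A minimal sketch of that division of labor might look like the following. The rules, labels and example sentences are entirely hypothetical, not drawn from any real grading tool: the program surfaces only error types that recur, since a repeated pattern is what an instructor would coach on.

```python
import re
from collections import Counter

# Hypothetical sketch: surfacing *patterns* of errors rather than grading.
# The rules below are toy illustrations, not any real writing-feedback system.
RULES = {
    "their/there confusion": r"\btheir is\b",
    "subject-verb agreement": r"\bthey was\b",
}

def error_patterns(text, min_count=2):
    """Return error types that recur, so an instructor can coach on the pattern."""
    counts = Counter()
    for label, pattern in RULES.items():
        counts[label] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    return [label for label, n in counts.items() if n >= min_count]

essay = "Their is a problem. They was late. Later, their is another issue."
print(error_patterns(essay))  # -> ['their/there confusion']
```

Real systems use trained language models rather than hand-written regular expressions, but the output is the same kind of thing: a pattern for a human to act on, not a grade.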
Litman and Godley have been working together since 2012 studying the use of AI in education. Their most recent collaboration is focusing on helping teachers navigate what’s going on in their own classrooms.
In October 2019, Godley and Litman were awarded a grant from the National Science Foundation to develop a computer-driven interface that lets English teachers assess how productive their in-class discussions, or classroom talks as Godley calls them, are for student learning. The tech measures this through the specificity of student language, the argumentative structure of students’ claims and the collaborative nature of their discussion.
“Good classroom talk has such a connection to student learning,” said Godley. In the years ahead, she said to expect more accurate and more frequent feedback on classroom talks—to benefit both teachers and students.
“The teacher and the student can look at the data and think: Do I need to explain my ideas more? Where are opportunities where I can question my classmates more often, or ask more open-ended questions?”
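One of the simplest classroom-talk measures mentioned above is the share of a teacher's questions that are open-ended. The sketch below is a crude heuristic, not Godley and Litman's actual interface: it assumes, purely for illustration, that questions opening with "why," "how" or "what do you think" are open-ended.

```python
# Hypothetical sketch of one classroom-talk metric: the share of teacher
# questions that are open-ended. The prefix heuristic is an assumption for
# illustration, not the project's real classifier.

OPEN_STARTERS = ("why", "how", "what do you think")

def open_ended_share(questions):
    """Fraction of questions that look open-ended under the prefix heuristic."""
    if not questions:
        return 0.0
    open_count = sum(
        1 for q in questions if q.lower().startswith(OPEN_STARTERS)
    )
    return open_count / len(questions)

session = [
    "What year did the war end?",
    "Why do you think the author chose this setting?",
    "How would you defend that claim?",
    "Is that correct?",
]
print(open_ended_share(session))  # -> 0.5
```

As Godley notes later in the piece, even a trained classifier is only reliable when aggregated over several class sessions, which is one reason a single-session number like this should be read cautiously.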
And though developments in AI hold promise across many different fields of research, Litman said they will bring challenges as well.
“Another area that is salient now, for AI in general, is recognizing that a lot of these systems have bias built into them, so there’s more of an interest in ethical issues,” said Litman. “There’s a huge concern for privacy in the data we collect and how it’s being used—so it creates technical challenges.”
In addition to privacy worries, Godley said it’s important that educators understand the limits of what AI can do. “For instance, one of the computer-driven interfaces we developed to provide feedback on teacher talk can detect the frequency of teachers’ open-ended questions with good accuracy across five or six class sessions but not on a single question or class session. Therefore, teachers and researchers should not use it to get feedback on or assess just one class session,” she said.
And while some fear that AI will replace humans and lead to fewer jobs, Sejdic predicts that AI will create more jobs than it will kill. He said the growing demand for skilled workers who can program and adjust machinery will actually mean more opportunities for work and education.
“There will always be a need for human workers, even in an AI-friendly future,” he said.