CDH’s John Lynch Comments on The Cost of Educational Technology in the Chronicle of Higher Education

Published: April 12, 2017

From the Chronicle of Higher Education: our academic technology manager at UCLA’s Center for Digital Humanities contributes the following article.

The Cost That Holds Back Ed-Tech Innovation

By John Lynch | April 9, 2017

Recently, I had an unexpected revelation as I watched a colleague of mine work with a pair of instructors to “hybridize” their introductory foreign-language class.

The team spent weeks breaking down their expected learning outcomes, then more weeks drafting scripts for videos (to supplement the existing textbook) and quiz questions to help students practice those skills, then months recording the videos and building those quizzes in Moodle, our campus learning-management system. Finally, after almost a year of planning and production, the instructors were able to begin testing their new tools by rigorously comparing the learning outcomes of students in the hybrid sections to those of students in traditional-format classes.

Recent research indicates that creating an instructional environment rich in real-time data about student achievement is perhaps the most powerful positive intervention that an instructor can make. So I was excited to see that the new hybrid materials were designed to collect substantial data about student achievement and behavior throughout the course. Want to know how well someone understands past-tense verb conjugation? What about the vocabulary for giving directions? Or matching the gender of nouns, articles, and adjectives? All of these data are available, and given a properly designed dashboard, a skilled instructor could use them to personalize the learning experience of every student in the class. Alternatively, motivated students could use these data to direct their own practice.

But if such an intervention is so effective, why aren’t we doing this in all of our classes? The answer, of course, is cost — but not the cost that I expected. Specifically, it wasn’t the technological cost. Although the instructors used some innovative technologies in their course redesign, none of those is critical to the personalized-learning aspect: The quizzes could be delivered by any learning-management system, or even on paper, and one could reveal the same data in almost-real time with only a properly designed spreadsheet. Nor was it the cost of the instructional designer, or the educational technologist. The single greatest cost of the course redesign that I watched was the faculty instructors (or “subject-matter experts,” as they’re often referred to), who spent hundreds of hours planning and designing all of the new content.

More important, I also realized that faculty will be the biggest cost for just about any successful educational technology project. Instructional designers can advise instructors on learning outcomes and ways to measure them, but they cannot actually design the assignments or reconfigure the readings and other supplemental materials. Technologists can build a quiz in a learning-management system from a spreadsheet listing questions and answers, but they cannot create the spreadsheet in the first place, without an expert’s knowledge of the course content, and they certainly cannot record videos on an instructor’s behalf, authoritatively explicating a subject, even from a script!

A technology platform might be able to transform structured data into an easy-to-parse graph or dashboard, but it cannot structure that data by itself, and we’re still a long way from being able to effectively and efficiently measure “critical thinking and analysis” or “written communication skills” via multiple-choice questions. The instructor, the content expert, is the thread that ties all of these other pieces together, the one without whom the others would be irrelevant.

Unfortunately, when it comes to improving instructional outcomes, giving instructors adequate time and support for course redesign isn’t how most universities seem to spend their money.

Anecdotally, I can think of instructional “innovations” at many institutions where the administration paid a high price for a new, much-praised technology platform while expecting faculty members to voluntarily commit their own time to learning it and putting it in place. Unfortunately, technology platforms are rarely the holy grail. That is to say, they do not solve problems merely by being licensed. Instead, they must be learned and used, and using such tools effectively generally requires labor far beyond what faculty members can afford to do while still meeting their other job requirements, whether they are tenure-track or contingent.

Recent data indicate that faculty members broadly agree. A 2016 study from Inside Higher Ed examining faculty attitudes toward technology found that only 26 percent of faculty members think that they are fairly compensated for developing online courses.

The New Media Consortium reports that 66 percent of the respondents in a recent survey “felt that faculty members lack critical support to advance new teaching and learning practices.”

“Scaling innovative teaching and learning practices requires resources and incentives, yet pedagogical efforts are seldom incorporated in tenure review,” the report says.

I am excited by a lot of the cutting-edge ideas in educational technology, such as personalized learning and predictive analytics. I believe that college students at all levels would benefit greatly if we could all evolve our teaching method from “the sage on the stage” to a data-rich “conversation” with clear learning outcomes, effectively turning every class, no matter how big, into a small seminar. Even for the most qualitative of the humanities, there are viable models that would let us implement these teaching techniques without sacrificing any of the content, depth, or diversity of experience that has traditionally characterized our fields of study.

But if we want to see serious experimentation with such teaching models, we need to first seriously consider how to compensate our instructors for the hundreds, if not thousands, of hours that such experimentation will take. Obviously, one possible approach is to actually pay them to spend extra hours on course redesign, via summer appointments or buyouts from other responsibilities. But there are other possibilities. For example, if leading universities took steps to ensure that evidence-based instructional innovation counted toward tenure advancement as much as an equivalent amount of time spent on research does, I expect that we’d see an explosion of valuable experimentation in this area.

I believe that the real barrier to widespread instructional innovation is not technical but cultural. The greatest cost of leveraging a new technology isn’t the tech itself, or the technical support for it; it’s the time required by local experts to build, revise, and sustain content that will make the most effective use of it. And since most universities do not compensate their instructors for this time, in either the short or the long term, that innovation isn’t happening nearly as fast as it could.

If successful teaching truly matters, universities (and the elected officials, donors, and other figures who influence them) need to invest more in giving faculty incentives to engage with evidence-based and learner-centric models. Will such an approach be expensive and full of false starts? Sure. But no more so, I suspect, than another 10 years spent buying software licenses in hopes of finding the holy grail.

John Lynch holds a Ph.D. in Near Eastern languages and cultures and is academic-technology manager at the Center for Digital Humanities of the University of California at Los Angeles.