Mar 29, 2019
by Michael Abrams, ASME.org
More than a glittering academic record, extensive experience, or a deep knowledge of engineering, there’s one thing the engineering industry hopes to find in the people it hires: motivation. Or, more precisely: self-motivation. When the drive comes from within, problems get solved, teams get led, work gets done.
Assessing knowledge, be it hands-on or of the purely book kind, is a task universities have had decades, even centuries, to refine. But how do you determine if a student is properly motivated? Education is new to the problem.
Peter Rogers, a mechanical engineer and a professor of practice in Ohio State’s Department of Engineering Education, has set out to create the tools to make just such an assessment.
Rogers first became interested in the project at a “Transforming Undergraduate Education in Engineering,” or TUEE (pronounced tooey), workshop. There, 40-odd industry engineers filled in educators about “what skills the industry was looking for that they didn’t think academia was responding to very well,” Rogers says. Chief among them was student motivation. It was a problem Rogers and his colleagues hadn’t tackled before.
“If someone wanted to create more motivation in our students, they’d need to measure it if they wanted to make curriculum changes.”
Rogers dove into the literature. He surfaced with both good news and bad.
The good was that motivation could indeed be taught. In articles like “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being,” by Richard Ryan and Edward Deci, Rogers discovered that there were two kinds of motivation: extrinsic (which comes from outside, like grades and salary) and intrinsic, which comes from within (like a love of problem-solving and getting one’s hands dirty).
“They said if you can increase a student’s autonomy, relatedness and competency, you can move them from extrinsically motivated—from grades or pat on the back or whatever—to intrinsically motivated,” says Rogers. “I’m excited by this.”
The bad news was that no one had yet tried to measure motivation of any kind.
Rogers and his colleagues began creating a series of questionnaires for their students, with questions about students’ attitudes and behavior. Throughout a course (specifically, capstone courses), students were asked how strongly they agreed with statements such as “I get satisfaction from applying technical skills to my project” and “I collaborate with my group to achieve team success.” They were also asked to evaluate their teammates.
The first time Rogers and his colleagues administered the assessment, they found that a certain kind of motivation was indeed a problem: the motivation to stick with the actual assessment.
“If it’s voluntary, the students don’t want to do it,” he says. “After three or four of the surveys throughout the year, the drop-off rate was substantial. The only ones doing it were highly motivated. [That] kind of goofs up your result.”
Eventually the assessment was refined, proven to be statistically sound, and made mandatory. It most recently went out to thousands of students at six different universities.
In this most recent round, faculty at each university made completing the survey a course requirement. That requirement, along with student “report cards” comparing each student’s responses with those of their classmates, improved the completion rate. As a preliminary paper puts it, “the instruments were clearly works-in-progress but still were close to being ready for distribution.”
For Rogers, the value of the data can’t be overstated. With 35 years of industry experience, he has done plenty of hiring.
“To me, the level of motivation was often more important than the technical knowledge they had,” he says. “As you get into project management, your motivation affects your output and your team’s output. It has this multiplying effect—or a dividing effect would be the negative way.”
With the development of these assessment tools, professors will have a better chance of keeping that effect in the multiples realm.
“How much are you moving the needle? I don’t know,” he says. “But I think you can move the needle.”
Michael Abrams is an independent writer.