- The job skills in greatest demand will be those that augment machine intelligence rather than compete with it
- Employee incentive and reward systems will need to evolve as AI takes over a larger share of rudimentary tasks
- AI will catalyze life‑long learning by removing barriers between higher ed and the workplace
In his bestselling 2009 book, “Drive,” Daniel Pink defined employee motivation as a combination of autonomy, mastery and purpose. Employers, he argued, need to give people the freedom to choose their own path (autonomy), the opportunity to learn and adapt (mastery), and a greater sense of meaning behind every job (purpose).
Nine years later, just one‑third of employees say they’re engaged with their jobs, according to Gallup research. Clearly, employers are still trying to figure out how to motivate their people. Only now they’re doing so while adding a complex new force—artificial intelligence—to the management equation. In a recent conversation with editor Jeffrey Davis, Pink explained how companies can address both challenges in the years ahead.
How do you see AI impacting the quest for autonomy, mastery and purpose—the philosophy you’ve written so much about?
My guess—and when it comes to AI, we’re all guessing—is that increasing machine intelligence will sharpen the need and deepen the value of autonomy, mastery, and purpose. If machine intelligence takes over tasks requiring purely reductive thinking, then that theoretically should free up more human capacity for thinking that’s more abstract, conceptual, and creative. What’s more, it seems pretty clear that the skills that will be in greatest demand will be those that augment machine intelligence rather than compete with it—and those sorts of skills flourish in an autonomy‑supportive, purpose‑driven environment.
Monetary rewards tend to be most effective as an incentive for lower‑level tasks. What will motivate people once AI takes over many of those?
It’s tough to say. But the issue isn’t monetary incentives per se. What the research shows is that any sort of controlling, contingent reward—what I call an “if‑then reward”—is effective for simple tasks with short time horizons, but far less effective for more complex tasks with longer time horizons. If AI takes over most of that first set of tasks, and humans are spending more of their time and brainpower on that second set of tasks, then the use of if‑then rewards theoretically will decline.
What kinds of reward systems can work best for those higher‑level tasks?
It’s easy to list the principles, far harder to execute on them. But companies should pay people well and fairly, offer decent amounts of autonomy and self‑direction, provide an atmosphere of psychological safety, and link day‑to‑day activities to a purpose.
What companies do you think have started on this path already?
Again, that’s really difficult, which is why so few enterprises do it well. I can think of a few exemplars: Atlassian. W.L. Gore. USAA. Google. Motley Fool. Gravity Payments. None of them are perfect, but in this realm we shouldn’t let the perfect be the enemy of the good.
What do you think universities should start doing to prepare college grads for a very different labor market ahead?
It will depend on the institutions and the individuals involved. Let me offer two guesses.
First, we’ll need to break down lots of barriers. Take the boundary between work experiences and academic, classroom experiences. That wall doesn’t make much sense. Integrating the two will be essential. It wouldn’t surprise me to see versions of the sorts of co‑op programs you find at places like Northeastern University become much more widespread.
Or consider the boundaries between disciplines. The challenges we all face are always multidisciplinary. Universities ought to offer more classes that span traditional boundaries, and offer more team projects that bring together, say, chemistry majors and English majors in a common mission.
Second, we currently treat college as a one‑time, two‑year or four‑year experience. Maybe it ought to be more like a membership, so that people can go back and learn and retool throughout their careers.
Why shouldn’t people who’ve graduated have a chance to return to learn a new skill or a new body of knowledge? This approach might redefine what we think of as alumni.
Daniel Pink is the author most recently of “WHEN: The Scientific Secrets of Perfect Timing.”