In a 1964 interview, the Paris Review asked Pablo Picasso what he thought of computers, which the interviewer described as “enormous new mechanical brains.” Picasso replied: “But they are useless. They can only give you answers.”
Obviously, computing has come a long way since the 1960s. Today most of us carry devices in our pockets that are vastly more capable and versatile than that era’s comparatively dumb mainframes. For all his creative vision, Picasso doesn’t appear to have foreseen a world where digital technology would mediate almost everything people do, from making art to waging war, running organizations and influencing elections, not to mention buying groceries and finding romantic partners.
Computers are great at performing repetitive tasks, mining vast data sets, and predicting future events based on patterns in data. Predictive analytics tools like GE Predix analyze enormous quantities of IoT sensor data to anticipate when jet engines, locomotives and factory robots are likely to need maintenance. Increasingly, computers are also good at learning from experience so they can achieve preset goals without being explicitly programmed. Using machine learning, Google DeepMind’s AlphaGo program mastered the world’s most complex board game and defeated the best human Go players; its successor, AlphaGo Zero, went further, teaching itself the game from scratch through self-play in a matter of days.
On the other hand, computers aren’t good at asking questions or setting goals. They aren’t curious. They don’t do leadership, and they lack empathy. For example, you can train a chatbot to identify frustration in human speech patterns. You can also script a chatbot to simulate empathy in its responses to irate customers. (“I’m sorry you’re frustrated, Brad, and I’ll do my best to help you.”) However, that training and simulation has nothing to do with actual empathy, the ability to understand and share someone else’s feelings.
Empathy, leadership and curiosity are all traits that distinguish people from computers. They are human core competencies, just as repetitive work and prediction are core competencies of computers. As Adam Smith observed in The Wealth of Nations (1776), economic growth happens when markets operate according to a division of labor in which workers specialize in particular tasks. In Smith’s famous example of a pin factory, one worker draws out the wire, another straightens it, a third cuts it, a fourth points it, and still others grind, head and finish the pins; by collaborating, they produce far more pins than if each worker tried to perform every task alone.
To paraphrase Quentin Tarantino, those all sound like sh*t jobs. Smith was sensitive to the soul‑crushing potential of dividing production into small repetitive tasks and condemning workers to perform them over and over. He writes: “The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become.”
Smith’s answer to this dilemma was that government should invest in educational and cultural programs to uplift the working poor. The modern division of labor between people and AI suggests a different answer, that computers should take over all the mundane, repetitive tasks so that people can focus on what they do best: asking smart questions, synthesizing information from different domains, setting goals and inspiring their teams.
In a recent talk titled “How not to lose your job to AI,” data scientist and entrepreneur Rand Hindi draws a useful distinction between “vertical” machine intelligence and “horizontal” human intelligence. Vertical intelligence is the ability to do a single job, such as recognizing a picture of a cat or guiding an autonomous vehicle safely through traffic. Today, all AI is vertical. No AI can begin to match the ordinary human abilities of thinking horizontally across domains, putting issues into broader context, applying emotional intelligence to resolve interpersonal tensions, and so forth.
So computers can’t yet think the way people do. What about organizations? In the current edition of Workflow, my colleague Tasker Generes presents a vision for what he calls the Sentient Enterprise. “Sentient” means “able to perceive one’s surroundings.” A Sentient Enterprise is one that uses advanced technology to maintain real‑time awareness of market conditions. By applying machine intelligence to Internet of Things data, a fashion company can respond instantly to shifts in consumer tastes. A bus company can react immediately to peaks and valleys in demand by adjusting ticket prices and putting more buses on the road.
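The bus-pricing idea can be sketched in a few lines of code. This is a purely illustrative toy, not any real company’s system: the function name, the 0.8–1.4 price band and the linear load-factor rule are all assumptions invented for this example.

```python
# Hypothetical sketch of demand-responsive ticket pricing.
# Every name and number here is illustrative, not from a real system.

def adjust_price(base_price: float, seats_sold: int, capacity: int) -> float:
    """Scale the fare with the bus's current load factor."""
    load_factor = seats_sold / capacity      # 0.0 = empty, 1.0 = full
    multiplier = 0.8 + 0.6 * load_factor     # 0.8x when empty, 1.4x when full
    return round(base_price * multiplier, 2)

quiet_fare = adjust_price(10.00, 5, 50)   # discount to fill a nearly empty bus
busy_fare = adjust_price(10.00, 45, 50)   # surge signals it may be time to add a bus
```

The point of the sketch is the feedback loop, not the formula: real-time seat data flows straight into a pricing decision, with no nightly report or committee meeting in between.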
Doing this requires breaking down the functional silos (IT, HR, finance and the rest) that traditional organizations use to get work done. Instead of old‑school functions, Sentient Enterprises organize themselves around core processes such as ticketing for a bus company, lending for a finance company, or design for a fashion company. For this to work, the entire organization needs immediate access to all the data required to deliver a new style, approve a loan or adjust the price of a ticket.
Under the doctrine of corporate personality, companies share certain legal attributes with human persons. Both can enter into contracts and incur liability, for example. Yet with apologies to Mitt Romney, corporations are not people. Organizational sentience is clearly not the same thing as human sentience. Rather, sentient organizations distribute data to people in real time so that people can make better decisions. In short, technology enables organizational sentience, which enhances human sentience and, ideally, wisdom.
Memo to Pablo Picasso: maybe computers aren’t so useless after all.