Modern John Henry

When I was a young man, I read the folk tale of John Henry, the legendary “steel-driving man.” In the story, he works on a railroad tunnel crew. His job is to hammer a steel drill into solid rock so holes can be packed with explosives and the railroad can cut through the mountain. He is described as an unusually strong worker, admired for power and endurance. The tale introduces a new steam-powered rock drill meant to do the same work faster, and John Henry is challenged to race it. Traditional versions end with John Henry beating the machine but dying from exhaustion soon after. The way the story is usually told, the meaning is straightforward: machines eventually outpace people, and a person who tries to keep up will destroy himself.

As an adult, I am struck by what the story does not emphasize. John Henry’s advantage was not simply strength. It was experience. The work of drilling rock is full of subtle variables that only skilled laborers learn to notice. John Henry would have known how weather, moisture, heat, and cold affected both steel and stone. He would have understood that the drill’s bite changes with conditions and that rhythm matters as much as force. He also would have known the rock itself. A tunnel face is not uniform. An experienced steel-driver can read a wall by sight, noticing seams, grain, hairline cracks, and weak points. That kind of perception lets a worker choose better placement, angle, and pressure. It is not brute force. It is craft.

He would also have understood his tools at a finer level than any outsider. Even steel drills vary. A worker who has spent years on the line learns to recognize a flawed tip, a poorly forged bit, or a shape that will behave badly in a particular face of rock. The expert adjusts technique in response. That adjustment is constant, almost automatic, and it is built from years of shared practice, observation, and the tribal knowledge passed among workers who do the job together.

This matters because, while “Artificial Intelligence” is theoretically adaptive, the actual tools sold to enterprises rarely are. A steam drill drives where it is set, at the rate it is built to drive, until something stops it. It does not read a rock face, notice instability, or change strategy midstream. It brute-forces the task.

In this sense, the classic John Henry contest is not “man versus intelligence.” It is “awareness versus blindness.” Theoretical AI can learn, but deployed automation (the kind actually installed in companies) is often just a complex steam drill. It runs a statistical model trained on the past, which makes it blind to the novel nuance of the present. The human competitor is not supposed to think about whether the contest itself makes sense. The machine, for all its “intelligence,” cannot think about that at all.

An updated way to view the tale is to imagine John Henry using his knowledge during the race, not merely his strength. He would likely have adjusted his method as conditions changed, and he might even have questioned the setup if he saw that the environment no longer supported what they were doing. Tunnel work is dangerous. Experienced crews notice early signals of instability and act on them. A machine, lacking perception and judgment, would continue drilling into unsafe rock until failure occurred. In a real scenario, that could mean a cave-in that buries the machine and halts the work. The irony is that, after such a failure, the same workers the machine was supposed to replace would be required to recover, repair, and restart it. The automation survives only because human expertise props it up.

That pattern repeats in modern forms of automation, including the current wave of AI. There is a sharp divide between the definition of AI (a system that learns and adapts) and the product of AI (a frozen model sold by a vendor).
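
To make that divide concrete, here is a minimal sketch, in Python with made-up numbers, of what “frozen” means in practice: a model is fit once on past conditions and never updated, while the world it scores keeps moving. The linear setup, the synthetic data, and the drift are all illustrative assumptions, not a description of any particular vendor’s product.

```python
# A toy illustration of a "frozen" model meeting a shifting world.
# All data is synthetic; the drift (slope 2 -> slope 3) stands in for
# any change in conditions the model was never trained on.
import numpy as np

rng = np.random.default_rng(0)

# "Past" conditions the model was trained on: y follows x with slope 2.
x_past = rng.uniform(0, 10, 500)
y_past = 2.0 * x_past + rng.normal(0, 0.5, 500)

# Fit once, then freeze. This is the product that ships.
slope, intercept = np.polyfit(x_past, y_past, 1)

def frozen_model(x):
    """Predict with weights learned from the past, never updated."""
    return slope * x + intercept

# "Present" conditions: the relationship has drifted to slope 3.
x_now = rng.uniform(0, 10, 500)
y_now = 3.0 * x_now + rng.normal(0, 0.5, 500)

# The frozen model's error grows with the drift, and nothing inside
# the model registers that anything has changed.
mse_past = np.mean((frozen_model(x_past) - y_past) ** 2)
mse_now = np.mean((frozen_model(x_now) - y_now) ** 2)
print(f"error on the past it learned from: {mse_past:.2f}")
print(f"error on the present it faces now: {mse_now:.2f}")
```

The point of the toy is not the arithmetic. It is that the steam drill keeps driving at the rate it was built to drive: the error climbs, and the model has no mechanism for noticing, let alone adapting.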

Vendors often arrive with polished demos and confident promises that the old work can finally be replaced. Internal champions repeat the pitch, showing how the AI solved a perfect test case. For a moment, it looks like the future has arrived. Then reality reasserts itself. The environment shifts. The small details that experienced people would have predicted, like cheese sliding in a moving oven or an unusual grain in the rock face, suddenly matter. The system fails, not because it isn’t “smart” in the academic sense, but because it is rigid in the practical sense. It fails in a way that seems surprising only to those who have not lived the work.

Much of this failure stems from the label itself. “Artificial Intelligence” is overly weighted toward a fantasy definition, a technological nirvana where god-like powers effortlessly solve complex problems. This branding implies that current algorithms possess a kind of mystical understanding that transcends the need for supervision. The reality is far more grounded: we are working with statistical prediction engines, not conscious entities. No matter how sophisticated the pattern matching becomes, it remains a mathematical approximation of the world, not a master of it. When organizations buy into the fantasy, they scrap working processes for a miracle that current technology cannot deliver.

The Zume example shows this clearly. Zume tried to reinvent pizza delivery by baking pizzas inside delivery vans while they drove to customers. Pizzas were assembled at headquarters, loaded into GPS-timed ovens in the vans, and finished baking en route. The concept attracted enormous investment: $375 million from SoftBank in 2018, out of roughly $445 million raised overall. But the model bumped into a basic reality of motion. Vans vibrate, brake, turn, and hit potholes. In that moving environment, toppings and melting cheese slide and pool, which changes both the bake and the distribution of flavor. The machine could follow the recipe perfectly and still produce a worse pizza because the environment was not stable. Zume eventually shelved the baked-in-the-van model and left the pizza business. The failure was not a lack of intelligence. It was a lack of lived, practical understanding of the physical context.

A similar dynamic appeared when IBM’s Watson played Jeopardy! in 2011. Watson defeated two elite champions, but the match was not a fully typical Jeopardy! environment. Because Watson could not see or hear, the show removed audio and video clues and delivered the remaining clues to Watson electronically as text. The rules and board were otherwise standard, but excluding whole categories of clues narrowed the contest toward what the machine could handle best. The point is not that Watson was weak. The point is that machines often look unbeatable when the world is trimmed to fit them.

The reframed John Henry story, then, is not a rejection of machines. It is a reminder of what they miss. Expertise is not only effort. It is perception, judgment, and adaptation to a world that is variable and sometimes unsafe. Machines are powerful, but they are also blind to many of the conditions that decide whether work succeeds. When those conditions change, the future still depends on the people who understand the work from the inside.