JJ DANTON


Where there's a way, there's a will

27 Apr 2026

Where there's a will, there's a way... These six words have done immense damage in our heads. For the first time in history, access to the power to act is no longer gated by credentials and gatekeepers. And everything we have ever said about motivation collapses.

We have been told, over and over: where there's a will, there's a way. Willpower precedes capability. Motivation is the engine, effort is the fuel. Those who fail simply did not want it enough.

Where there's a way, there's a will.

Something has just shifted. Quietly, without manifesto, without announcement. A wall is falling. The wall that has always separated those who can from those who cannot. The wall of the diploma, the title, the network, the cultural capital, the hierarchical authorization, the official permit of competence. For the first time in human history, access to knowledge, and with it to the power to act, stops being locked by social position. Generative models do the same work in the hands of an executive, a precarious worker, or a high-school student.

This is not a technological wave. It is an anthropological shift. And what it shakes first is not employment, not productivity, not education. It is the very idea we have built over decades of what makes a human being act.

In Saint-Flour, on a Tuesday afternoon in November, a participant looked at her screen and cried. The machine had just produced in seconds what she usually spent hours doing. Hours she claimed as her pride, her value, her professional identity. This woman was not enduring a wave. She was living, in her body, what hundreds of millions of humans are living right now, all over the world.

A silent redistribution of power.

The most beautiful scam ever told

"Where there's a will, there's a way." Six words we hear from childhood. Six words that carry an entire ideology: the conviction that willpower is the only ingredient separating those who succeed from those who fail. That everyone starts from the same point. That failure is a choice.

This phrase is a scam. And it sits at the heart of the system that has, for fifty years, structured the way Western societies think about effort, merit, and mobility. It puts every human being on a false equal footing, because everyone can want. It makes the individual fully responsible for their failures by erasing the structural constraints weighing on them. It perpetuates a meritocratic logic whose own inventor denounced its toxicity, in 1958.

Michael Young coined the word "meritocracy" to mock it. In his dystopian novel The Rise of the Meritocracy, he described a nightmare society where the equation Merit = IQ + Effort would produce a definitive stratification between a "deserving" elite and an underclass without recourse.[1] The drama he anticipated: in such a society, the losers would have no basis to protest. If you are poor, it is because you are not intelligent enough, it is mathematical, it is proven, it is deserved. Young saw this as a psychological harm worse than that of a hereditary aristocracy, where the poor could at least blame the system.

Sixty-eight years later, the satire has become the program.

Michael Sandel, philosopher at Harvard, identified the three pathologies of this meritocratic rhetoric.[2] Hubris among the winners, who believe they entirely deserve their success and forget what they owe to luck, birth, position. Humiliation among the losers, told they have only themselves to blame for their place in the social order. Erosion of solidarity, since inequality appears justified by individual capacity. Meritocracy, on the surface, is a discourse of equal opportunity. In depth, it is a machine to legitimize the concentration of positions. It is, in another vocabulary, the promise that God will give each what they deserve, except this time you cannot even blame God.

But the most devastating part is not the cruelty of this logic. It is its internalization. Pierre Bourdieu showed that social structures do not impose themselves only from the outside, through coercion or law. They settle inside individuals, as habitus.[3] A set of deeply rooted dispositions, acquired through social experience, which shape the perception of what is possible before any conscious deliberation. The doctor's son naturally considers medical studies. The cashier's daughter does not even ask the question. Not because she does not want to. Because it is not in her field of perceived possibilities. Habitus has decided before her.

When an event coordinator tells me "AI is not for me," he is not describing a technical reality. He is describing a habitus. He is describing the limits the system has installed in his perception of the possible. And the cruel beauty of the mechanism is that he does not see those limits. He believes they are his own.

Amartya Sen, Nobel laureate in economics, formalized this trap with the concept of adaptive preferences.[4] Individuals systematically deprived end up adapting their desires to what seems possible. They stop wanting what they cannot have. This is not a lack of will. This is an amputation of desire by constraint.

When you can't, you don't want to.

This silent mechanism, at work in every modern society, has just hit something. Something that never existed before. A tool that short-circuits habitus, that disarms adaptive preference, that brings down the wall between the perceived possible and impossible.

And what plays out then is not a matter of motivation. It is a matter of mental architecture.

The architecture of the shift

When a human being discovers they can do what they thought impossible, something precise happens in their brain, something that can be broken down. Five decades of research in psychology, neuroscience, and cognitive science have mapped this moment with a precision that popular culture ignores. And that map, laid out in full, tells a story no one wanted to hear as long as no tool existed to test it at scale.

Perceived capability always precedes the will to act.

The first lock is the perception of control. Julian Rotter showed in 1966 that the sense of acting on one's own life is conditioned by what he called the locus of control.[5] External when the individual perceives that events simply happen to them, beyond their power to bend. Internal when they perceive that their actions produce effects and that they are the author. No durable desire to act takes root in a psyche that experiences itself in the external position.

Now, generative AI, as it is publicly presented, is an object designed to keep the locus in external position. Something opaque, magical, vaguely threatening. An entity. A superior intelligence that decides. As long as this representation holds, no empowerment is possible. Only when the user understands that the machine is just a calculation of statistical probabilities, that it reproduces regularities without knowing anything, that it does not even know that it does not know, does the locus shift. The machine stops being an oracle. It becomes a tool. The user becomes the author again.

The second lock is neurobiological. For fifty years, science believed passivity was a learned state: Martin Seligman had shown that dogs exposed to uncontrollable shocks ended up no longer trying to escape, even when escape became possible again.[6] We called this "learned helplessness." In 2016, Seligman himself overturned his own theory, with Steven Maier, in a major revision published in Psychological Review.[7] Passivity is not learned. It is the default state of the brain. What is learned is control. Without experience that action produces a result, the brain stays in standby.

No proof you can, no will. This is not a philosophical posture. This is neural architecture.

Wolfram Schultz completed this picture in the 1990s. Dopaminergic neurons, long associated with pleasure, in fact encode something more fundamental: the anticipation of success.[8] More reward than predicted, dopaminergic discharge. Less than predicted, suppression of the signal. If your internal model predicts failure, your brain does not produce the dopamine necessary to act. The brain does not motivate out of kindness. It motivates when it anticipates that the action will work.
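Schultz's finding reduces to one line of arithmetic: the dopaminergic signal tracks the gap between the reward received and the reward predicted. A minimal sketch of that logic, in the classic Rescorla-Wagner style (my illustration, not drawn from the studies cited):

```python
# Reward prediction error, Rescorla-Wagner style.
# V is the brain's current prediction of how well an action will go;
# delta is the teaching signal Schultz describes.

def update(V, reward, alpha=0.3):
    delta = reward - V          # more than predicted -> positive burst
    return V + alpha * delta, delta

# A brain whose model predicts failure (V = 0) meets a tool that works (reward = 1).
V = 0.0
deltas = []
for _ in range(10):
    V, delta = update(V, reward=1.0)
    deltas.append(delta)

print(round(deltas[0], 2))   # the first surprise is maximal
print(round(deltas[-1], 2))  # the signal fades as the model catches up
print(round(V, 2))           # the prediction now tracks reality
```

The essay's point maps directly onto the loop: when prediction already matches outcome, delta is near zero and nothing moves; when the gap is large, the signal is large, and the internal model of what is possible rewrites itself.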

Stanislas Dehaene unified these mechanisms in his theory of the Bayesian brain.[9] The brain does not passively receive information. It constantly predicts what is going to happen, then compares its prediction to reality. Learning only triggers when a gap is detected, what Dehaene calls a prediction-error signal. No surprise, no learning. No astonishment, no update of the internal model. No shift of the internal model, no new desire.

This abstract mechanism has a face. A sentence that recurs in every training session, with only minor variations. Someone types a prompt, gets a result, looks at the screen, and lets out: "Wait... I can do that?" The tone is neither questioning nor triumphant. It is a stunned acknowledgment. Dehaene's prediction-error signal is not an academic metaphor. It is what this sentence encodes, word for word. The internal model has just rewritten itself live, in front of a brain that did not expect it.

This is exactly what the encounter with a tool that exceeds what the user predicted possible produces. The gap between prediction and reality is massive. The error signal is enormous. The internal model updates by force. And it is in this breach that Schultz's dopamine and the revision of self-image rush in.

That leaves the fourth mechanism. Albert Bandura, the father of self-efficacy theory, identified four sources of belief in one's own capabilities.[10] Direct mastery, the most powerful, which comes from succeeding by oneself. Vicarious experience, which comes from observing someone similar to oneself succeed. Verbal persuasion, fragile and secondary. Physiological states. A 2026 meta-analysis of 23 empirical studies measured the effect of generative AI on learners' self-efficacy. The pooled effect size is 0.758, large by conventional standards: in plain language, the typical person exposed to AI ends up with a sense of capability higher than that of roughly three quarters of those who were not. It is among the strongest effects reported in education research.[11]

0.758
effect of generative AI on learners' self-efficacy. The great majority of people exposed walk away more confident in what they can accomplish. Meta-analysis Ren, Stephens, Lee, 23 studies, Behavioral Sciences, 2026.
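For readers who want the 0.758 unpacked: it is a standardized mean difference (Cohen's d), and under the usual normality assumption it converts into more intuitive quantities. A back-of-the-envelope sketch (my arithmetic, not taken from the meta-analysis itself):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

d = 0.758  # pooled effect size reported by the meta-analysis

# Cohen's U3: share of the unexposed group that falls below
# the AI-exposed group's average.
u3 = phi(d)

# Common-language effect size: probability that a randomly chosen
# exposed person reports higher self-efficacy than a randomly
# chosen unexposed one.
cl = phi(d / math.sqrt(2))

print(f"U3   ~ {u3:.0%}")   # ~78%
print(f"CLES ~ {cl:.0%}")   # ~70%
```

Both translations say the same thing as the prose: the shift is not a subtle statistical whisper, it moves most of the distribution.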

Finally, Hazel Markus and Paula Nurius introduced in 1986 the concept of possible selves.[12] The cognitive representations of what an individual could become, would like to become, or fears becoming. These possible selves form the bridge between self-concept and motivation. You cannot want to become what you cannot imagine being. A project manager who had never written a line of code builds a presence simulator in two hours from her own statistics. A parent who had never composed writes an original song, melody included, for his three-year-old daughter to help her tidy her room. None of these people had "wanted" to do those things before. They had not even imagined them. A new possible self has just been born, and from now on it generates a desire that nothing else could have created.

The chain is mechanical. Understanding the tool, realignment of the locus of control. Demonstration that exceeds the prediction, massive Bayesian error signal. Vicarious experience, then direct mastery, rise in self-efficacy. Expansion of the field of possible selves, birth of new desire. Perceived capability, imagined self, desire, action. In that order.

Not the other way around.

And what makes generative AI singular in the history of human tools is the speed at which it produces this cascade. Learning to code takes years. Composing a melody assumes musical training. Designing a data simulator demands an analytical background. AI does not replace these skills. It makes their results accessible to whoever can formulate an intention. The time between intention and proof of capability, once measured in months or years, drops to a few minutes. The brain no longer has time to reconstruct the argument that would have held before, the famous "I do not know how." The proof of the contrary arrives too fast.

Everything we have said about motivation for fifty years collapses. The entire industry of personal development, of pedagogy of effort, of managerial discourse on engagement, rests on the idea that will precedes capability. That desire creates action, and action eventually creates competence. It is the other way around. Perceived capability creates desire. Desire creates action. And every pedagogy, every public policy, every organizational discourse that inverts this order wastes its time and other people's.

The great upheaval

What plays out in an individual's head plays out at the scale of systems. And it is here that the deeper layer of the phenomenon reveals itself. Because it is not just one project manager who discovers she can. It is an entire generation that is silently leaving the frames in which organizations had enclosed it.

Erik Brynjolfsson, Danielle Li, and Lindsey Raymond followed 5,172 customer-service agents at a large company for a year. Access to AI raised average productivity by 14%. But the decisive number is elsewhere. The improvement reaches +34% for novice and low-skilled workers, against near-zero impact for experts.[13] AI diffused the best practices of the most competent toward the newcomers. An agent with two months of experience and AI performs like an agent with six months without.

+34%
productivity improvement for novice workers, against near-zero impact for experts, study Brynjolfsson, Li, Raymond, Quarterly Journal of Economics, 2025.

Fabrizio Dell'Acqua and Ethan Mollick, in their study of 758 Boston Consulting Group consultants, measure an identical effect: +43% improvement for the bottom quartile, against 17% for the best.[14] Shakked Noy and Whitney Zhang, in Science, observe 453 professionals: task time drops by 40%, quality rises by 18%, performance inequality between workers shrinks.[15] And their most telling result: participants exposed to AI during the experiment were twice as likely to use it in their actual job two weeks later. Once they had discovered they could, they wanted to keep going.

AI does not make experts better. It makes non-experts capable.

This redistribution escapes every traditional grid of analysis. The NBER study by Yotzov, Davis, Bloom, and colleagues, surveying about 6,000 executives across four countries, reveals a dizzying paradox: more than 80% of firms report no measurable impact of AI on employment or aggregate productivity.[16] The UK government trial of Microsoft 365 Copilot is even more telling: no robust evidence of productivity gains, and yet 72% of users were satisfied and disappointed when the trial ended.[17]

What do these numbers say together? That the real effect of AI is not measured where it is being looked for. Organizations look for productivity gains. AI produces something else. It produces individual empowerment. And this empowerment is invisible to organizational steering instruments because it does not play out in their indicators. It plays out in the gap between what the employee is capable of doing and what their job description says they should do. That gap, widening silently every day, is the real phenomenon under way.

The power to act is changing hands.

And it is not waiting for permission. The Microsoft and LinkedIn study of more than 30,000 people across 31 countries shows it with troubling clarity: 78% of professional AI users deploy it personally, without permission or knowledge of their employer.[18] This phenomenon, which I call BYOA, Bring Your Own AI, is the clearest behavioral face of the shift under way. I developed it in detail in The Revolution That Never Happened. What it says here, in the frame of this essay, is simple: accessible capability creates an imperative to act that overflows the frame of authorization. When an employee can, they do. With or without their organization's consent.

And this is where things get uncomfortable for those who lead.

For two centuries, authority in modern organizations rested on a supposedly stable alignment: those who occupied hierarchical positions were those who held knowledge, accumulated experience, legitimate technical mastery. The title validated competence, and competence justified the title. This alignment held as long as competence was slow to acquire and learning circuits were locked. This alignment is giving way.

When a junior produces in four minutes the strategic analysis their N+2, their manager's manager, would take two days to formalize, this is not a productivity problem. It is an authority problem. When a field agent builds their own analytical tool while the IT department spends eighteen months arbitrating the specification of the same tool, this is not an agility problem. It is a problem of cognitive hierarchy. When an employee acquires alone, in the evening, through personal use, the skills internal training did not give them, this is not an HR problem. It is a structural displacement of knowledge-power.

Foucault had set the frame before this technology even existed. Knowledge is power. The institutional gatekeepers of knowledge, diplomas, professions, certifications, hierarchies, are gatekeepers of power. When knowledge stops being locked by those circuits, power redistributes. Mechanically. Without debate. Without consultation. Through usage.

The tool does not ask organizations for permission.

Klarna learned this, after massively replacing its customer service with AI and having to rehire. IBM learned it, after announcing with fanfare the elimination of thousands of positions in the name of AI and watching its internal agency reconfigure in unexpected proportions. Amazon discovered it, after changing its public discourse once it measured what BYOA produces among its own teams. These examples are developed in The Revolution That Never Happened. What they signal together is that an executive who thinks they are piloting the deployment of AI in their organization is mistaking the object. The deployment has already happened. Without them, against them, sometimes in spite of them. What remains to be piloted is not the arrival of AI. It is the balance of power that has formed while they were looking elsewhere.

Organizations that have not taken the measure of this displacement live a fiction. They continue to think competence is validated by title. That productivity is measured in established indicators. That cognitive hierarchy maps onto organizational hierarchy. None of this is true anymore. Competence is validated through usage. Real productivity lies in the gap with the job description. And the cognitive hierarchy no longer looks like the org chart.

But you do not free yourself alone

It would be dishonest to let it be believed that this upheaval fully liberates those who seize it. Several limits remind us that the tool has never been enough, and that individual liberation runs into frames that remain.

The powers that act on us do not disappear. The employer keeps their seven daily hours. The state keeps its taxes. The bank keeps its loan conditions. This persistence is not only a constraint: it has counterparts. The employer produces a regular salary and shares the economic risk. The state redistributes through care, education, infrastructure. Mutual dependence is not a defect of the system, it is what holds society together. To abstract oneself from every power acting on oneself is to isolate oneself. And isolation is not freedom. It is impoverishment. Making one's work more pleasant with AI is one thing. Quitting one's work to go independent is another, which depends on character, environment, family responsibilities, and is sometimes simply impossible.

Freedom is not isolation.

The capacity for projection, next, remains constrained by condition. Not everyone projects at the same level, even when the tool opens the same doors. In two and a half years of training sessions, I have seen only one person make the leap to developing personalized GPTs, plugging into APIs, building small software products. One. The great majority of participants leave with the capacity to produce differently what they already produced, and to cautiously explore a few new territories. It is not the tool that differs between these two extremes. It is the imagined self. It is the field of perceived possibilities. It is habitus, which does not erase in a day. Sen was right against the naive optimists: real capability cannot be deduced from the availability of the resource. The tool is there, accessible. One still has to be able to seize it beyond immediate use.

AI opens a field. It does not fill that field on behalf of the one who is looking.

Habitus persists. It takes years to deconstruct. And that internal work, no one can do in someone else's place, not even with the most powerful tool ever built.

The conditions of sovereignty

If generative AI is what it is, an event of redistribution of the power to act, then the political question it poses is not that of its adoption. It is that of the conditions under which this redistribution will happen. Four conditions, without which the upheaval will turn into a simple transfer of gatekeeping.

Access is the first. If AI is accessible only to those who already have the material, cultural, linguistic means, it does not bring the wall down. It moves it, redrawing inside the digital divide a new stratification, this time grounded in "AI capital." Nick Drydakis has shown that this capital is already distributed unequally, by class and by level of education.[19] Ivan Illich, in 1973, distinguished convivial tools, which augment user autonomy, from manipulative tools, which create dependence on the institutions that control them.[20] AI will be one or the other depending on who owns it and who has access to it. This is not a feature of the technology. It is a political decision.

Mode of use is the second condition. Eugene Lee and his coauthors showed in Scientific Reports, a Nature publication, that passive use of AI, copy-pasting the generated result without appropriation, reduces self-efficacy, the sense of ownership, and the meaning given to work.[21] Only active collaboration, where the user thinks with AI and does not let it think in their place, preserves and builds empowerment. The same technology can produce the opposite of what it promises depending on the posture of the one who seizes it. To say "where there's a way, there's a will" assumes that the tool is used as a co-creator, not as an automatic dispenser.

Algorithmic independence is the third. If AI is captured by a few companies, beholden to states, or biased by commercial interests, the democratization of knowledge becomes a transfer of gatekeeping, not a liberation. Knowledge redistributes power, yes, but only if the redistribution itself is not commanded by the interests of those who build the models. This is why this redistribution demands a vigilance that goes beyond use. It demands a politics of models, a diversity of providers, an independence of regulatory bodies. Without that, the wall comes down only to be rebuilt elsewhere.

Critical education is the fourth. AI is not an oracle. It hallucinates, it makes mistakes, it reproduces biases, it can assert false things with confidence. Empowerment demands discernment, the capacity to doubt, to cross-check, to verify. It is the condition of the adult use of the tool. And it is exactly what technological lucidity demands, as the shutdown of Anatole reminds us: a product I stopped developing because its reliability did not hold up to reality. What holds for the designer holds for the user. The most powerful tool does not exempt one from judgment.

None of these four conditions refute the thesis of this essay. They sharpen it. To say "where there's a way, there's a will" does not say the tool is enough on its own. It says the political task of the moment is not to exhort individuals to want more. It is to build the conditions, technological, institutional, pedagogical, and social, that allow them to be able.

The rest is moralism.

You think this is a tool. This is a redistribution of power.

Jean-Jérôme DANTON

Sources

  1. Michael Young, The Rise of the Meritocracy 1870-2033, Thames and Hudson, 1958.
  2. Michael Sandel, The Tyranny of Merit: What's Become of the Common Good?, Farrar, Straus and Giroux, 2020.
  3. Pierre Bourdieu and Jean-Claude Passeron, Reproduction in Education, Society and Culture, Sage, 1977 (orig. Minuit, 1970).
  4. Amartya Sen, Development as Freedom, Oxford University Press, 1999.
  5. Julian B. Rotter, "Generalized expectancies for internal versus external control of reinforcement," Psychological Monographs, 1966.
  6. Martin Seligman and Steven Maier, "Failure to Escape Traumatic Shock," Journal of Experimental Psychology, 1967.
  7. Steven Maier and Martin Seligman, "Learned Helplessness at Fifty: Insights from Neuroscience," Psychological Review, 2016.
  8. Wolfram Schultz, "Dopamine reward prediction error coding," Dialogues in Clinical Neuroscience, 2016.
  9. Stanislas Dehaene, How We Learn: Why Brains Learn Better Than Any Machine... for Now, Viking, 2020 (orig. Odile Jacob, 2018).
  10. Albert Bandura, Self-Efficacy: The Exercise of Control, W.H. Freeman, 1997.
  11. Liling Ren, Jason M. Stephens, and Kerry Lee, "The Impact of AI on Learners' Self-Efficacy: A Meta-Analysis," Behavioral Sciences, 2026.
  12. Hazel Markus and Paula Nurius, "Possible Selves," American Psychologist, 1986.
  13. Erik Brynjolfsson, Danielle Li, and Lindsey Raymond, "Generative AI at Work," The Quarterly Journal of Economics, 2025.
  14. Fabrizio Dell'Acqua, Edward McFowland III, Ethan Mollick et al., "Navigating the Jagged Technological Frontier," Organization Science, 2025.
  15. Shakked Noy and Whitney Zhang, "Experimental evidence on the productivity effects of generative artificial intelligence," Science, 2023.
  16. Ivan Yotzov, Jose Maria Barrero, Nicholas Bloom, Steven J. Davis et al., "Firm Data on AI," NBER Working Paper 34836, February 2026.
  17. UK Department for Business and Trade, "Microsoft 365 Copilot Pilot: DBT Evaluation Report," August 2025.
  18. Microsoft and LinkedIn, "2024 Work Trend Index Annual Report," May 2024.
  19. Nick Drydakis, "Artificial Intelligence Capital and Employment Prospects," IZA Discussion Paper No. 16866, 2024.
  20. Ivan Illich, Tools for Conviviality, Harper & Row, 1973.
  21. Eugene H. Lee, Yidan Yin, Nan Jia, and Cheryl J. Wakslak, "Relying on AI at work reduces self-efficacy, ownership, and meaning while active collaboration mitigates the effects," Scientific Reports (Nature), 2026.