
The 20-Second Problem: Why AI Adoption Is Really a Learning Conditions Issue


The first time I watched students and teachers try to use a new AI tool in real time, it didn’t look like “the future.”

It looked like… frustration.


The tool wasn’t broken. It was doing what it was designed to do. But what surprised me was how fast people abandoned the attempt to make it work.


Not 20 minutes. Not 2 minutes.


Twenty seconds.


If a prompt didn’t work, they rewrote it once. When it still didn’t work, they closed the tab and said, “Eh. I don’t have time for this. Let’s just go back to what we did yesterday.”


And I couldn’t stop thinking about how I learned tech in the first place: tinkering, trial and error, clicking around until something finally made sense. So what happens in a world where we’ve lost the time, permission, or patience for that kind of learning?


Constructionism was built on tinkering, not consuming

Constructionism (most famously associated with Seymour Papert) isn’t the idea that people learn by being told. It’s the idea that people learn best when they build something, test it, break it, and rebuild it. Learning happens through making, not passively receiving.


That philosophy made sense in the early waves of consumer tech. A lot of us learned by:

  • clicking around and breaking things

  • messing with settings until something finally worked

  • getting lost online, finding something useful, and getting lost again


That messy “figuring it out” mattered for skill, but also for identity. You became the kind of person who could solve problems because you had evidence you could. You stayed confused long enough to reach competence.

That’s the part I’m worried we’re losing.


Do we still have the patience for constructionist learning?

There’s an assumption baked into a lot of AI optimism: if tools are “easier,” adoption will be faster.


But does “easy” always mean “learnable”?


AI tools often feel simple, but they’re not predictable. When you click a button and get a weird output, you can’t always trace why. When you write a prompt and the response misses the point, you’re not debugging a system with clear rules. You’re negotiating with something probabilistic.


To do that well, you need a different stance:

  • curiosity (instead of control)

  • experimentation (instead of certainty)

  • persistence (instead of performance)


And that’s where modern school and workplace conditions work against us: fragmented attention, constant notifications, back-to-back demands, and the pressure to look competent at all times.


Trial-and-error learning is slow. It includes dead ends. It requires you to look a little messy while you figure it out.


So maybe the real question isn’t “Can AI help us learn?”

Maybe it’s: Do our environments still allow the kind of learning that AI requires?


A practical solution: design the conditions for Powerful Learning

This is where I think Digital Promise’s work on Powerful Learning gives us a clean way forward.

Powerful Learning doesn’t rely on “try harder” motivation. It helps us design the conditions that make real learning possible, especially when the learning is messy and nonlinear.

The framework centers on four elements: agency, purpose, curiosity, and connection.


Agency: make experimentation feel owned, not assigned

If AI learning is experienced as compliance (use this tool, attend this training), people do the minimum and avoid risk. Agency means learners choose a real problem they care about, define what “better” looks like, and get room to test ideas without being penalized for imperfect early drafts.


Purpose: tie tinkering to a real outcome, not vague innovation

People don’t abandon new tools because they’re lazy. They abandon them because the value is unclear and time is scarce. Purpose means anchoring AI practice to something concrete: saving time on feedback, improving family communication, reducing meeting overhead, strengthening lesson planning, increasing accessibility for students.


Curiosity: design for questions, not answers

Most AI training is built around outputs: “Here’s what it can do.” Powerful Learning flips it: “What are we trying to figure out?” Curiosity fuels iteration. Teach prompting as hypothesis testing: change one variable, compare results, notice patterns, refine.


Connection: make learning social so people don’t quit alone

Trial and error is easier when you’re not doing it in isolation. Connection looks like shared tinkering time, show-and-tell of failures, peer examples that feel close to your context, and small communities of practice where it’s normal to say, “This didn’t work. Here’s what I tried next.”


What leaders and educators can do next

Instead of rolling out another AI training, try a different shift:

  • create protected time for tinkering that is real, scheduled, and collaborative

  • make messy drafts visible (especially leaders’ drafts)

  • teach prompting as experimentation, not performance

  • build reflection into the workflow

  • reward judgment and reasoning, not just output and speed


AI is not only a technology shift.

It’s a learning conditions test.


Constructionism assumes people will stick with confusion long enough to build understanding. If our systems no longer allow that, we won’t just fall behind on AI.


We’ll fall behind on the deeper skill underneath it: learning itself.


And that’s the leadership challenge: not adopting AI faster, but building cultures where people can still learn on purpose.
