The Friction Problem in AI Adoption

If you've been reading this newsletter for a while, the friction points covered this week will sound familiar: organizational readiness, verification burden, workflow integration.
This pattern isn't unique to AI. Anything that requires real change hits the same walls. And as I often say, if change were easy, everyone would do it.
The companies taking change management seriously are now seeing real benefits from AI adoption: they're removing the friction that kept good tools from getting used. This week's articles reinforce why that approach works, and what happens when organizations skip the foundational work of people and process.

Confident AI users abandon tools at work when the perceived cost at the point of a decision exceeds the value, regardless of AI's technical capability.

World Economic Forum research shows organizations achieving AI impact aren't running pilots; they're redesigning workflows and embedding AI where work actually happens.

Workday research shows 37% of AI time savings disappear into rework. The problem isn't the technology; it's that leaders view AI as a switch rather than a work redesign challenge.
Low AI adoption in the workplace isn't a training problem. Research shows that even well-trained AI users who leverage the technology personally will abandon it at work when the cognitive cost exceeds the perceived value. Small irritations accumulate: navigating to a separate platform, copy-pasting between systems, editing clunky output. Each friction point tips employees toward familiar, predictable methods. The pattern repeats across organizations: workshops generate enthusiasm, but if using AI feels like more effort than the manual process, adoption fails regardless of the technology's capabilities. This explains why billions in AI investment sit unused. The technology works. The training happened. But at the moment of decision, when an employee could use AI or default to what they know, simplicity and familiarity win.

Practical Angle: The organizations achieving adoption aren't the ones with better tools or more training hours. They're the ones who embedded AI directly into existing workflows, minimized steps, and eliminated the context-switching that causes abandonment.
The World Economic Forum draws a sharp line between organizations experimenting with AI and those achieving results. The difference isn't investment; it's whether AI remains a visitor or becomes a resident. "AI tourism" looks like pilots, demos, and proof-of-concepts. "AI residency" looks like updated job descriptions that explicitly define where humans supervise and where AI executes, modernized data foundations that AI systems can actually use, and new roles built around human-AI collaboration.

Strategic Insight: Can you point to a job description in your organization that's been rewritten to reflect AI collaboration? If not, you're likely still in tourism mode, no matter how much AI training you've done.

View Summary → | Download Report (PDF) →

Organizations keep treating AI as an instant switch: deploy the tool, run a training session, watch productivity climb. The data tells a different story. A survey of 5,000 white-collar workers found that two-thirds of staff save less than four hours a week with AI, or nothing at all. Workday's research explains why: 37% of time saved is erased by rework as employees correct low-quality output. Workers need to learn new judgment calls, develop verification habits, and rebuild workflows they've relied on for years.

Critical Takeaway: The "AI tax" on productivity is real, and it's a structural problem. AI success demands fundamental behavioral change and role redesign, but most organizations haven't closed that gap.

Read Article →

Quick Hits

AI Capability Is Outpacing Leadership Capacity
Based on interviews with CAIOs and senior executives, the binding constraint on AI adoption isn't technological capability. It's leadership capacity, organizational design, and institutional readiness under rapid acceleration.

When Students Can Use AI But Choose Not To
A professor allowed LLM use for exams if students disclosed prompts and explained AI errors. 95% opted out, largely because they feared they couldn't catch the hallucinations. The lesson: when users bear responsibility for AI output, they recognize the real need for validation.

Foundations

Best Practices in Fact-Checking AI
When verification friction is too high, people opt out entirely. Instead, start with cross-referencing one claim per response. The goal isn't perfect verification; it's embedding lightweight checks, particularly when AI tends to tell you exactly what you wanted to hear.
Industry Developments

OpenAI Launches Mid-Priced Tier, Announces Ads
ChatGPT Go expanded globally at $8/month, offering 10x the messages and uploads of the free tier, suggesting OpenAI is moving to capture the broader consumer market. OpenAI also announced that ads are coming to the free and Go tiers, a shift advertisers should watch closely.

LexisNexis Enhances AI Tool with Built-In Citations
LexisNexis enhanced its Protégé AI assistant with workflows that deliver citable authority by default. The approach tackles verification friction head-on: instead of asking lawyers to fact-check AI output, the system grounds responses in proprietary legal data from the start.