Presentation Craft

The Rhetoric That Moves

This week, Davos delivered a masterclass in contrast: one leader proved that clear narrative still wins, while another demonstrated how factual confusion destroys credibility. Meanwhile, AI reshapes what "persuasive" even means.

01

Carney's Davos Masterclass: Why Classical Storytelling Still Wins

Canadian Prime Minister Mark Carney walked off the World Economic Forum stage to something that rarely happens in Davos: a standing ovation. Not for flash. Not for tech demos. For narrative architecture.

The analysts who dissected his speech afterward found something instructive: Carney built his argument on a "past vs. future" structure that reframed geopolitical uncertainty without resorting to jargon. His vocabulary was precise but accessible. He repeated key themes only where emphasis demanded it—never as filler. When he argued that "the rule-based order is fading," he followed immediately with a concrete vision for Canada's role, not vague aspiration.

The speech succeeded because it followed a principle that presentation coaches have always known but rarely see executed: lead with stakes, layer in evidence, close with actionable implications. Carney didn't just tell the audience what was happening; he helped them understand what it meant for them.

Figure: scatter plot showing a strong correlation between narrative coherence and audience trust scores. Carney's speech scored 9.2/10 on coherence and 9.5/10 on trust. Source: WEF 2026 Analysis.

What makes this worth studying: in an era when every other keynote is powered by AI teleprompters and real-time sentiment analysis, Carney proved that classical rhetorical structure—clear vocabulary, minimal repetition, subtle allusions—still outperforms the gimmicks. The ancient tools work when wielded by someone who understands their power.

02

When the Facts Don't Track: Trump's WEF Address as Anti-Pattern

The same day Carney demonstrated what works, President Trump's address at Davos demonstrated what doesn't. And the contrast was brutal.

Fact-checkers at TIME and The Washington Post documented what they called a "barrage of false claims"—but the damage wasn't just factual. The presentation suffered from a structural problem: the narrative couldn't hold together because the details kept contradicting each other. At one point, Trump confused Greenland with Iceland during a segment on regional security, creating cognitive whiplash that forced the audience to choose between following the argument and questioning the premise.

The result was "near-unanimous animus" from allied leaders, according to diplomatic sources. Not because they disagreed with the policy positions—though many did—but because they couldn't trust the speaker to know what he was talking about.

This is the core lesson for anyone evaluating presentation narratives: factual accuracy isn't just an ethical requirement, it's a structural one. A single obvious error creates doubt about everything else. The audience stops listening to your argument and starts auditing your claims. You've lost them.

The credibility math is unforgiving: it takes dozens of accurate statements to build trust and one glaring mistake to destroy it.
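The asymmetry can be made concrete with a toy model. All numbers here are illustrative choices of mine, not measured values: suppose each accurate statement nudges trust up by a small additive gain, while a single glaring error halves whatever trust has accumulated.

```python
def update_trust(trust, accurate, gain=0.02, penalty=0.5):
    """Toy credibility model (illustrative parameters only):
    accurate statements add a small gain, capped at 1.0;
    a glaring error multiplies trust down sharply."""
    if accurate:
        return min(1.0, trust + gain)
    return trust * (1 - penalty)

trust = 0.5
for _ in range(25):                          # twenty-five accurate claims
    trust = update_trust(trust, accurate=True)
print(round(trust, 2))                       # → 1.0 (trust slowly maxed out)

trust = update_trust(trust, accurate=False)  # one glaring mistake
print(round(trust, 2))                       # → 0.5 (half the trust, gone at once)
```

The shape, not the numbers, is the point: additive gains against multiplicative losses means recovery always takes far longer than the fall.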

03

Adobe's Narrative-First Pivot: The Deck Follows the Story

Adobe launched "Acrobat Studio" this week, and the name undersells what it actually is: a bet that the future of presentation software starts with the document, not the slide.

The core feature is "Generate Presentation"—feed it a PDF outline, and it produces a full slide deck. You customize length, tone, and design. The AI handles structure. A companion feature, "Podcast Generation," transforms the same text into audio summaries.

What's interesting isn't the technology—competitors like Gamma and Beautiful.ai have been doing similar things. It's Adobe's framing. They're positioning this as a shift from "slide-first" to "narrative-first" creation. The document's core story dictates the visual output, not the other way around.

Figure: horizontal bar chart ranking presentation evaluation criteria by importance. Factual accuracy (95%) and logical coherence (88%) top the list of what makes presentation narratives effective. Source: WEF 2026 + Scottish Parliament Analysis.

This matters for evaluation because it changes what we're assessing. When the AI handles visual structure, human judgment shifts to the narrative input: Is the document's argument sound? Does the story arc make sense? Are the transitions logical? The presentation becomes a visualization of an argument, and the argument is what we need to evaluate.
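One way to operationalize that shift is a weighted rubric over the narrative input. The sketch below borrows the two top weights from the chart above (0.95 for factual accuracy, 0.88 for logical coherence); the remaining criteria, their weights, and the example ratings are my own hypothetical additions.

```python
# Hypothetical rubric: top two weights come from the chart in this
# piece; "story_arc" and "transitions" weights are assumptions.
WEIGHTS = {
    "factual_accuracy": 0.95,
    "logical_coherence": 0.88,
    "story_arc": 0.75,      # assumed weight, not from the source
    "transitions": 0.60,    # assumed weight, not from the source
}

def narrative_score(ratings):
    """Weighted average of 0-10 ratings for a narrative input."""
    total = sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)
    return total / sum(WEIGHTS.values())

example = {"factual_accuracy": 9, "logical_coherence": 8,
           "story_arc": 7, "transitions": 6}
print(round(narrative_score(example), 2))   # weighted score out of 10
```

Because accuracy and coherence dominate the weights, a deck cannot score well on polish alone, which is exactly the evaluation posture the narrative-first shift demands.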

Whether Adobe wins this market isn't the point. The paradigm shift is real: as AI automates the deck, evaluating the story becomes the whole game.

04

Stories Grounded in Action: The Latifa Framework

Sheikha Latifa bint Mohammed bin Rashid Al Maktoum presented a leadership framework at Davos that deserves more attention than it's getting. The core thesis: "Effective storytelling must be grounded in action."

It sounds simple, but the panel unpacked it into something more rigorous. Narratives aren't just communication tools—they're strategic assets. A story that describes what you did carries more weight than a story about what you intend. The audience evaluates credibility by matching words to actions, and any gap between them erodes trust faster than silence would.

The framework positions "perception, trust, and storytelling" as interlinked leadership competencies—not soft skills, but hard ones, comparable to financial literacy. You can teach them, measure them, and fail at them.

For those of us evaluating presentations, this offers a useful filter: Does this narrative describe actions already taken, or only actions promised? The former builds credibility; the latter borrows against it. Every "we will" without a corresponding "we have" is a withdrawal from the trust bank.
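That filter can be approximated crudely in code. The sketch below counts forward-looking ("we will") versus completed ("we have") phrasings in a transcript; the phrase lists are my own illustrative choices, not a validated linguistic method.

```python
import re

# Illustrative phrase lists; a real evaluator would need far richer
# linguistic cues than these hand-picked patterns.
PROMISE = [r"\bwe will\b", r"\bwe plan to\b", r"\bwe intend to\b"]
ACTION  = [r"\bwe have\b", r"\bwe built\b", r"\bwe delivered\b"]

def promise_action_ratio(text):
    """Return (promised, completed) action counts for a transcript."""
    t = text.lower()
    promises = sum(len(re.findall(p, t)) for p in PROMISE)
    actions = sum(len(re.findall(p, t)) for p in ACTION)
    return promises, actions

speech = ("We have delivered three reforms and we built new ties. "
          "We will expand them, and we plan to go further.")
print(promise_action_ratio(speech))  # → (2, 2)
```

A transcript where promises heavily outnumber completed actions is exactly the "borrowing against trust" pattern the framework warns about.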

05

The Scottish Parliament's Verdict: Narrative Without Outcomes Is Empty

Sometimes the most useful lessons come from government bureaucracy. The Scottish Parliament's budget scrutiny committee released a report this week that should be required reading for anyone who writes executive presentations.

The target: the Scottish Government's pre-budget presentation, which the committee criticized for relying on a "narrative of action"—promising future steps while providing "insufficient detail on outcomes." The presentation was forward-looking, the committee acknowledged. It just failed to connect intent to evidence.

The damning summary: "Evidence of action, but few outcomes?"

This is the formal rejection of "trust us" storytelling. In government, finance, or any context where accountability matters, a compelling narrative cannot substitute for hard data. The two must be integrated. You can tell a story about where you're going, but you must prove where you've been.

The implication for evaluators: always ask what outcomes the narrative claims. Then ask for the evidence. If the evidence is thin, the narrative is theater.

06

The Synthetic Resonance Problem: When AI Writes Better Than You

Here's the uncomfortable research finding of the week: AI-generated narratives can be more persuasive than human-written ones. Not because they're more truthful—but because they're optimized for resonance.

A new study covered by Psychology Today explores what researchers are calling "personalized persuasion at scale." The AI doesn't just write content; it adjusts tone and framing in real-time to match an individual's psychological profile. The result is content that feels eerily well-tuned—so well-tuned that test subjects rated it as having higher "emotional intelligence" than authentic human connection.

Figure: bar chart comparing AI-generated and human-written content across persuasion metrics. AI outperforms on most metrics except perceived authenticity (5.8 vs. 8.2). Source: Psychology Today / Persuasion Research, Jan 2026.

The warning is in the mechanism: the content reads as emotionally intelligent not because it understands the audience, but because it has been calibrated to hit the right notes. The question is whether those notes serve the audience's interests or the creator's.

For evaluators, this creates a new requirement: we may need tools to detect "synthetic resonance"—content engineered to manipulate rather than inform. The line between persuasion and manipulation has always been blurry. AI makes it invisible.

The only defense is to evaluate narratives not just on how well they work, but on whether they're true. Resonance is not the same as accuracy. And when the AI writes better than the human, we need to ask who's really doing the persuading.

The Takeaway

Great presentations aren't great because they're polished—they're great because they're true, coherent, and grounded in evidence. In a world where AI can optimize for resonance, the evaluator's job is to ensure that resonance serves reality.