BEHIND CLOSED DOORS: UNFILTERED INSIGHTS FROM 35+ CMOs AND CTOs

April 1st, 2026

In one week in March 2026, Partech hosted two closed-door councils in Paris: the CMO Council on Tuesday and the CTO Council on Thursday. What did this look like? Simple: no snoozefest panels, no death by PowerPoint, and no sales-y conversations. Just ~20 marketing leaders and ~20 technology leaders from across Europe and beyond, stress-testing the decisions keeping them up at night.

The premise was that AI is moving fast enough that most leaders are figuring things out alone. We wanted to see what would surface when they were offered an opportunity to assemble the collective intelligence of peers in a safe, no-judgement space.

What emerged was more useful than any single insight. The two rooms, working in completely different domains, converged on the same core tension.

The headline: velocity has outrun visibility

Both councils arrived at the same conclusion from different directions. Tech teams are generating more code than ever but losing confidence in quality, comprehension, and review capacity. Marketers are producing more content than ever but watching differentiation collapse and attribution fail.

The bottleneck in 2026 is no longer speed. It's judgment, governance, and the ability to tell whether what you're shipping actually works.

From the CMO Council: Brand trust in an AI-native world

Twenty marketing leaders gathered to debate the 2026 GTM playbook. The conversation was anchored by Elizabeth Coleon (Photoroom), Steph Bowker (Metaview), and Johannes Schiefer (Ex-Voice AI), and ranged across content strategy, discovery, community, and team design.

AI content is infinite. That's the problem

The room reached a fast consensus: AI-generated content underperforms human-created content on every metric that matters. Multiple operators reported engagement differentials of 3-4x in favour of human-originated work. One Exec shared A/B test results showing that human-scripted live-action video featuring real customers and/or employees outperformed AI-generated alternatives by more than four times on engagement rate.

The nuance: AI is valuable for operational tasks including help centre content, change-log updates, ad copy iteration, and internal enablement. But for anything that needs to build trust, convey expertise, or differentiate a brand, the human in the loop is non-negotiable.

Several companies have adopted a deliberate model: use intimate in-person events and customer conversations to generate raw insight and persona intelligence, then scale that content through AI-assisted repurposing. Simply put, the input is human directed while the distribution is AI-driven, not the other way around.

GEO is replacing SEO, and the numbers are compelling

One of the strongest signals from the Marketing Council was the shift from traditional search to LLM-driven discovery. Multiple operators confirmed that SEO traffic is declining, with one company reporting a 20%+ drop, while traffic from LLMs (primarily ChatGPT at this stage) is growing and converting at significantly higher rates.

The data shared in the room suggested LLM-referred leads convert at approximately 4-5x the rate of traditional organic search leads. The hypothesis: when someone arrives via an LLM recommendation, they've already been pre-qualified by the model's synthesis of available information.

The challenge is measurement. Every GEO/AEO tracking tool currently available on the market (even the sexy, well-funded ones) produces different results. One operator tested more than ten tools and got wildly contradictory rankings from each. The room consensus was to track directionally and invest in visibility across LLM surfaces, while accepting that precise attribution will lag behind the opportunity.

One company restructured its team to place PR, SEO, and AEO under a single owner, acknowledging that these are converging into a single discovery function rather than separate channels.

In-person events are the new performance channel

The most counterintuitive consensus from a room of digital-native operators: in-person events are now delivering the highest ROI of any channel. Multiple Execs described pivoting away from all-digital strategies and back toward intimate events, community dinners, and customer meetups. One company dedicates a third of its office space to event hosting. Another runs multi-day retreats for its community members and occasionally extends invites to their families, too.

The thesis is clear: in a world saturated with AI-generated content, real human connection has become the scarcest and most valuable form of distribution. Customer communities are driving purchase decisions, enabling premium pricing, and creating content (via customer advocacy / user-generated content) that outperforms anything a brand can produce on its own.

The 2026 marketing hire: curiosity over specialism

Nearly every CMO flagged team composition as a live question. The emerging model: hire generalists with deep curiosity over specialists with narrow expertise. Multiple operators now include AI fluency as a hard requirement in the interview process. One company rejects candidates who don't demonstrate it in their case study.

But there was an important counterpoint. One operator warned that the pressure to demonstrate AI usage is creating performative behaviour - what the room called "random acts of AI." People using tools for the sake of being seen using them, rather than for intentional outcomes. The shift now is from adoption to intention: not "are you using AI?" but "which specific problem are you solving with it?"

From the CTO Council: Building for scale when the ground won't stop moving

~20 CTOs, VPs of Engineering, and senior technology leaders gathered two days later. The conversation was anchored by Katia Gil-Guzman (OpenAI), alongside Alex Southgate (WorkOS) and Sylvain Ramousse (ex-Brevo). The topics ranged from code review bottlenecks to agent identity governance to the cultural dynamics of AI adoption.

The PR review bottleneck is the new constraint

Code velocity has never been higher. AI coding agents - Claude Code and Codex were the most frequently mentioned - have shifted the production curve dramatically. But the bottleneck has moved from writing code to reviewing it.

One CTO described the situation: designers and product managers are now submitting pull requests directly via coding agents, with minimal engineering context. The code is often technically correct but architecturally inconsistent. Senior engineers are spending their days reviewing PRs they don't fully trust, from contributors who may not understand the codebase. It's a new kind of cognitive load that didn't exist twelve months ago.

Several approaches emerged: using AI agents to review other agents' work (creating layered verification), implementing "skills" files that encode architectural standards for coding agents to follow, and classifying code changes into minor (auto-approvable) and significant (human-required) categories.
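The third approach above, splitting changes into auto-approvable and human-required categories, can be sketched with a few auditable rules. This is a hypothetical illustration, not any attendee's actual system; the path patterns and the 200-line threshold are invented for the example.

```python
# Hypothetical triage sketch: route a code change to auto-approval or human
# review based on simple, auditable rules. Paths and thresholds are assumptions.
from dataclasses import dataclass
from fnmatch import fnmatch

# Areas where a human reviewer is always required (illustrative examples).
SIGNIFICANT_PATHS = ["src/auth/*", "migrations/*", "*/payments/*"]

@dataclass
class Change:
    files: list[str]
    lines_changed: int

def classify(change: Change) -> str:
    """Return 'significant' (human review required) or 'minor' (auto-approvable)."""
    if change.lines_changed > 200:      # large diffs always get human eyes
        return "significant"
    for path in change.files:
        if any(fnmatch(path, pattern) for pattern in SIGNIFICANT_PATHS):
            return "significant"
    return "minor"

print(classify(Change(files=["docs/README.md"], lines_changed=12)))      # minor
print(classify(Change(files=["src/auth/session.py"], lines_changed=8)))  # significant
```

The point of encoding the rules rather than leaving the call to the reviewer is that the classification itself becomes reviewable and versionable, like the "skills" files mentioned above.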

Nobody fully understands the codebase anymore

Multiple Execs acknowledged that their codebases have grown beyond any single person's comprehension. When AI agents generate large portions of the code and non-engineers contribute via coding assistants, the traditional model breaks down. One CTO noted with a hint of dark humour that this used to be considered a red flag; now it's just an accepted Tuesday.

The Technical Leaders pulling ahead are building what might be called "governance infrastructure": AI decision logs that record every consequential model output with context, architecture decision records written before any new build begins, prompt versioning systems that treat prompts like code (versioned, reviewed, deployed). The discipline of writing, several said, catches the majority of mistakes before they compound.
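A minimal sketch of what an AI decision log entry might look like, combining two of the practices above: an append-only JSONL record per consequential model output, with the prompt hashed so it can be versioned like code. The field names and the helper itself are assumptions for illustration, not a standard or any company's actual schema.

```python
# Hypothetical "AI decision log": append one JSON line per consequential
# model output, with enough context to reconstruct the decision later.
import datetime
import hashlib
import json

def log_decision(path: str, prompt: str, model: str, output: str, approver: str) -> dict:
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        # Hashing the prompt lets you version prompts like code and detect drift.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
        # Every logged decision names an accountable human (see below).
        "accountable_human": approver,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_decision("decisions.jsonl", "Summarise churn drivers", "some-model",
                   "Churn is concentrated in month two.", "jane@example.com")
```

The append-only file is deliberately boring: the value is in the discipline of writing the record at all, which echoes the room's observation that writing things down catches mistakes before they compound.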

Agent identity and permissions: the ungoverned frontier

A significant portion of the discussions focused on a question most companies haven't confronted yet: what identity do your AI agents have, what permissions do they hold, and who is accountable when they act?

The room revealed that most companies have no formal authorization layer for agents. Authentication is a patchwork of API keys, OAuth tokens, and candidly, trust. The discussion surfaced a fundamental tension: agents need autonomy to be useful, but combinatorial access across tools can create unpredictable risk. Two tools that are individually safe can be dangerous in combination.
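The combinatorial-risk point can be made concrete with a toy authorization check: each tool is individually granted, but certain pairs are denied together. All agent and tool names here are invented for illustration; this is a sketch of the principle, not a production policy engine.

```python
# Hypothetical agent authorization layer. Tools must be explicitly granted,
# and some tool pairs are forbidden in combination even when each is safe
# alone (e.g. reading customer data + sending email = exfiltration risk).
ALLOWED_TOOLS = {
    "support-agent": {"read_tickets", "draft_reply", "read_customer_db", "send_email"},
    "ops-agent": {"read_logs", "restart_service"},
}

# Pairs that must never be active in the same agent session.
FORBIDDEN_COMBOS = [{"read_customer_db", "send_email"}]

def authorize(agent: str, requested: set[str]) -> bool:
    granted = ALLOWED_TOOLS.get(agent, set())
    if not requested <= granted:  # every requested tool must be explicitly granted
        return False
    return not any(combo <= requested for combo in FORBIDDEN_COMBOS)

print(authorize("support-agent", {"read_tickets", "draft_reply"}))     # True
print(authorize("support-agent", {"read_customer_db", "send_email"}))  # False
print(authorize("ops-agent", {"send_email"}))                          # False
```

Even this toy version replaces "a patchwork of API keys and trust" with a single place where grants and forbidden combinations are written down and reviewable.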

The principle that resonated most: every code change, no matter how automated, must have a human accountable. Not necessarily a human who did the work or even reviewed it in detail but a human who bears consequences if something goes wrong.

The adoption divide is cultural, not technical

The room split clearly between companies with 80%+ AI tool adoption and those stuck below 50%. The difference was almost never about the tools themselves. It was about the approach to rolling them out.

The most successful strategy was what one CTO described as "strategic gatekeeping": making tools available first to the most senior engineers, making their productivity gains visible across the organisation, and letting demand build organically rather than mandating adoption top-down. Another peer drew a memorable parallel to how potatoes became popular in France - by being guarded and made exclusive, not by being forced on people. #funfact!

The deeper issue several attendees raised was the changing nature of engineering work - from craft to orchestration. Many engineers signed up to write code, not to manage fleets of AI agents and review their output. This is not a resistance to technology. It's a genuine professional identity question that companies need to address honestly rather than dismiss.

Where both rooms converge

The most valuable output from running these councils back-to-back was the convergence pattern. Four themes cut across both rooms:

  1. Speed vs. quality: CTOs can ship 10x features but can't confidently measure whether they're better. CMOs can produce 10x content but can't stop differentiation from collapsing. The measurement infrastructure hasn't caught up with the production infrastructure.
  2. Humans as the moat: CTOs are building human-in-the-loop checkpoints before any agentic action with real consequences. CMOs are investing in in-person events and founder-led content. Both rooms concluded independently that human judgment is the only durable competitive advantage.
  3. Governance as the dividing line: The operators who are building governance infrastructure now - AI decision logs, boundary maps, intentional workflow audits - are separating from those still in improvisation mode. This holds true for both marketing organisations and engineering organisations.
  4. Non-technical people coding: CTOs described designers submitting PRs. CMOs described marketing teams building agentic workflows in-house. Both are happening with minimal guardrails, and both rooms flagged it as simultaneously the biggest risk and the biggest opportunity of the current moment.

What's still unresolved

Some questions neither room could answer, and that we think will define the next twelve months:

  1. How do you measure AI ROI when the dashboards were built for a different era? Attribution in marketing and quality evaluation in engineering are both structurally broken, and the tools to fix them don't yet exist at the level of reliability operators need.
  2. What does the ideal hire look like in 2026? Both rooms are rewriting job descriptions in real time, but the answer is still forming. Curiosity and generalism are in. Narrow specialism is out. But what the new archetype actually looks like in practice remains unclear.
  3. Who is accountable when an AI agent acts? The governance frameworks for agent identity, permissions, and liability are nascent across the industry. Most companies are one compounding error away from discovering this matters.

What comes next

This is the first edition of our Councils, and this post is a synthesis of what we heard from the operators closest to the decisions that matter. We ran these councils because we believe the most valuable intelligence in our ecosystem comes from connecting the people who are building, not just the people who are commenting.

We'll be running more such gatherings throughout 2026, across the key company functions, bringing our operator communities together. If the tensions here resonate, or if you think we're missing something, we'd like to hear from you.
