AI acumen gap: Trust divides shape AI strategy

New findings highlight a widening AI trust split across generations, urging companies to tailor governance and transparency to each audience.
A widening “AI acumen gap” is reshaping how people judge automation, and for companies that rely on AI messaging, trust is becoming the real dividing line.
The latest Brand Expectations Index frames trust in AI as a spectrum rather than a baseline: it varies sharply by how close people feel to AI in their day-to-day work and by their generational outlook. On one side are knowledge workers and younger users who engage with AI tools routinely and tend to trust the direction of big tech and AI startups. On the other, within the general population and older generations, only a small slice expresses trust in AI companies, while nearly half see the technology as a sign of a more dangerous future.
That split creates a communication paradox for leaders: a single “universal” AI story may fail because it lands differently—or not at all—for different audiences. The report argues that survival in a fragmented AI landscape requires a shift away from broad promises and toward segmented credibility, built around what each group is willing to accept and what it fears.
For professionals and younger generations, the “comfort ceiling” runs high, but it is not unconditional. The findings show that 78% of knowledge workers and 71% of Millennials are comfortable with AI-driven personalization of products and recommendations. The same pattern appears with presentation and content tools, with 60% of Millennials comfortable with AI-generated executive avatars. This group tends to reward efficiency and innovation, and it is more likely to interpret AI as an everyday productivity lever.
Skepticism, meanwhile, shows up as a near-opposite stance in the general population. The report states that 38% of the general population are uncomfortable with AI-driven product or recommendation personalization, and 80% of Boomers reject the idea of automated executive messaging. For these audiences, AI is not simply a new feature set—it is tied to control, judgment, and oversight, which helps explain why reassurance has to be more explicit.
The playbook shift follows from that mismatch: for B2B tech companies, the recommendation is to lean into the “future of work” framing, with AI positioned as a practical capability. Consumer brands, by contrast, should keep human leadership visible and avoid making the AI label the headline, particularly when the audience includes people who may view AI as a threat to privacy, jobs, or information.
For companies whose main audience is knowledge workers, the challenge is less about proving AI can work and more about demonstrating responsibility. The report highlights that this group prioritizes proof through process: 63% want to see companies consulting outside experts, and 66% point to a leader’s long-term reputation as a primary driver of trust. In other words, comfort with AI does not automatically remove concerns about how it is used in sensitive areas.
That uneasiness is measurable in the areas where automation touches critical decisions. The findings state that 52% of knowledge workers are not comfortable with AI generating legal or policy documents, and 58% resist using AI to make HR decisions. The practical implication for leadership teams is that governance cannot be treated as paperwork; it needs to be visible, understandable, and tied to the kinds of tasks that people consider high-stakes.
To address that, the report’s tactical guidance is to move from “world-changing” hype to governance that can be explained. It points to channels suited to credibility—such as LinkedIn and technical whitepapers—to detail data protection and ethical guardrails. For high-acumen audiences, transparency about process is described as the most durable currency, because it shows how the company controls risk rather than how it markets potential.
Among the general population, the relationship with AI is even more sensitive to how the message is framed. The report warns that many people do not hear “innovation” when companies talk about AI; instead, they hear a potential loss of control over privacy, employment, and personal information. In this context, a promise of automation can read as abandonment unless accountability is made clear.
The fear factor in the findings is stark: nearly half (47%) of the general population views AI as leading to a more dangerous future, with skepticism connected to a perceived lack of accountability. Trust is also uneven across audiences, with only 28% of the general public trusting AI companies and startups, compared with 58% of knowledge workers.
What helps rebuild trust, according to the report, is a safety-first posture. Two-thirds of the general population (66%) say protecting customer data is the top driver of trust. The same share points to admitting mistakes and outlining corrective steps as a key trust signal. That suggests a focus on behavior, not branding, and it helps explain why “tone-deaf” communications can linger as scars when people feel they were not properly safeguarded.
The recommended consumer strategy is therefore more about what problems AI solves than how loudly it is labeled. The report suggests using AI to address consumer pain points while keeping the strategy AI-powered and human-led, particularly for groups like women and Gen Z, where trust increases when brands respond to concerns rather than broadcasting features. It also recommends meeting skeptics where they are, through platforms such as YouTube and TikTok/Reels, using demonstrations of accountability and showing the people behind the machine alongside the guardrails designed to protect users.
Even with those audience differences, the report identifies an area where the divide disappears: the penalty for deception. Most of the general public (73%) and 67% of knowledge workers say they will penalize a brand for undisclosed AI messaging. That is a crucial operational requirement for marketing and communications teams, because it reframes disclosure as a baseline trust expectation rather than an optional compliance step.
The disclosure standard is especially important for messaging that AI helps draft. The report argues that when a machine assists in writing, a human must sign off and the audience must be told. Whether the product is enterprise software or a consumer item like laundry detergent, the central principle is the same: transparency about human involvement helps prevent mistrust from taking root.
As Silicon Valley continues to rewrite business as usual, the report’s broader message is that trust is still defined by the human heart. It expects the acumen gap to close as technology matures, but it also cautions that deceptive or tone-deaf communication can leave lasting damage. For leaders, the argument is that empathy for skeptics and governance for insiders can become the new rulebook in the AI era, turning a real divide into something manageable rather than inevitable.
Tags: AI trust gap, AI governance, Brand Expectations Index, generational AI attitudes, AI messaging disclosure, B2B AI strategy, consumer AI trust
So basically boomers don’t trust AI? Shocking.
I feel like this is just companies saying “be transparent” like they haven’t been vague forever. Also why is trust even a “spectrum”? Either it works or it’s creepy.
Wait I read it as like half the people think AI is gonna be dangerous future, but not sure how they got that. Like my grandma watches those AI filters on TikTok and thinks it’s fine, so idk lol. Feels like they’re just making excuses for why marketing doesn’t land.
This “acumen gap” sounds like corporate talk for “we’ll tailor the lies to your age.” Younger people use AI daily so they believe whatever startup says, and older people see it as a threat because… they remember when tech companies got caught doing shady stuff. If they can’t explain what’s going on in plain language, then yeah, I’m gonna think it’s dangerous.