AI risks flattening creativity—leaders must design human-AI collaboration

Research suggests heavy AI use boosts short-term output but can reduce creative diversity and erode employee skills—if collaboration isn't designed.
Letting AI do all the work may feel like the quickest route to higher output, but research suggests it can quietly hollow out what organizations are actually trying to create.
The argument comes from Sinan Aral, a professor at MIT's Sloan School of Management and a leading researcher on human-AI collaboration. In experiments his lab has run over several years, Aral and his team explored what happens when people and AI jointly perform tasks in real-world settings. In about 85% of the studies the lab and related work have examined, he says, the pattern holds: adding AI improves individual humans in the short run more than it helps human-AI teams, and in many cases it can be better, at least on simple performance metrics, to let the AI work alone.
Aral describes this as a "rational fork in the road." If AI acting alone reliably outperforms human-AI teams, a manager following that logic would replace employees with automation. But his central point is that this managerial conclusion is often wrong, because it focuses on what's easy to measure while overlooking what gets lost when people disengage from the work.
The risk, Aral argues, is visible in a creativity problem he calls "diversity collapse." In one landmark study, his team randomized roughly 2,000 teams, some made up of human workers alone and others pairing humans with AI, to create marketing ads for a real organization. On conventional productivity metrics, the human-AI teams produced substantially more ads per worker, and the text was of higher quality. Yet Aral noted that the outputs also began to resemble one another, with ad copy and even ad imagery looking strikingly similar across teams.
As teams delegated more of the creative process to AI, Aral says, they became more vulnerable to that homogenization. The short-term productivity gains, in his telling, masked longer-term creative erosion: AI trained on widely available internet data can flatten the differences that make creative work distinctive in the first place. Diversity collapse, as he frames it, is not just a technical issue; it's a thinking and workflow issue.
A second, related concern appears in Aral's more recent work on what he calls the "AI augmentation trap." Here the focus shifts from creative variety to capability building. The central claim is that "cognitive offloading," outsourcing tasks workers could do themselves, can erode the very skills being handed over. In practical terms, workers who rely heavily on AI for writing can lose writing fluency, and junior employees may "de-skill" faster than experienced workers, who often have deeper professional reserves to sustain their capabilities.
Aral's warning is that the long-run effect can be worse than never adopting AI at all. That is because the short-term boost in output can be followed by a reduction in the worker's underlying ability to perform the task without assistance. The productivity improvements therefore become a trap, not a foundation.
This is where Aral's research intersects with a broader critique of how organizations have historically evaluated productivity. The model many companies inherited from the First Industrial Revolution tends to reward speed, efficiency, and measurable output. Aral argues that this misses the "dormancy" phase in which people marinate ideas, synthesize information, and slowly cultivate judgment—conditions that often matter for truly original thinking.
From that perspective, the managerial question isn’t simply whether AI can raise production rates. It’s whether the way AI is deployed allows employees to keep learning and exercising the judgment that makes future performance possible.
Aral's suggested solution is not to avoid AI; he argues that isn't a realistic option. He describes AI as potentially the most disruptive technology ever developed in human history, and he frames turning away as a strategy of denial rather than preparation. Instead, he calls for intentional design of human-AI collaboration.
His prescriptions are practical, aimed at changing incentives and workflow structure rather than limiting adoption. Aral argues that organizations should measure human skill levels independently of AI output, so performance reviews don't mistake automation-generated quality for human capability. He also recommends building in structured practice, in which workers regularly perform tasks without AI assistance, helping protect the ability to work through ambiguity and complexity.
He further suggests extending performance evaluation windows so managers aren't seduced by short-term productivity spikes. When leaders judge too quickly, they may optimize for immediate gains while allowing long-run capability to shrink. In the same spirit, he supports workflows in which workers review, evaluate, and reshape AI outputs instead of accepting them automatically, keeping human judgment in the loop as a discipline rather than a formality.
Personality and pairing also enter the picture through a second line of Aral's research. In that work, his team matched approximately 1,300 participants with AI systems personalized to complementary Big Five personality traits—intended not to mirror but to complement. The results, Aral reports, pointed to improvements in both productivity and creative output while reducing diversity collapse, reinforcing the theme that who you team with matters, even when one partner is an algorithm.
Taken together, Aral's findings point to a counterintuitive imperative for the "Imagination Era": the organizations most likely to win are not those that replace the most humans with AI, but those that become genuinely strong at human-AI collaboration. That requires investment, deliberate workflow design, and a willingness to resist the easy productivity win that comes from handing tasks to automation too early.
Creativity, Aral emphasizes, has always required what he calls "the rigor of ambiguity"—the ability to tolerate uncertainty and avoid rushing to the fastest answer. AI can provide a compelling shortcut, but the shortcut can also be the risk if teams stop doing the hard work of thinking. The organizations that learn to hold both the power of AI and the texture of human judgment, he argues, are more likely to remain competitive over a longer horizon.
The analogy returns to the opening example: Aral notes that Liverpool is figuring out how to make an expensive roster fit together. In his view, that kind of integration problem is not limited to sports. For businesses, the challenge is similar: turning "all-star" tools and talented people into a system that performs well not just today, but over the long run, without losing the skills that make performance sustainable.
Tags: human-AI collaboration, AI productivity trap, diversity collapse, skill erosion, creativity workflows, cognitive offloading, MIT Sloan