X Promises Faster Hate Removal Under UK Scrutiny

Ofcom says X is committing to reduce hate and terror content in the UK by speeding up its reviews and withholding access to accounts tied to terrorist organizations. The pledge comes as hate speech reportedly surged after Elon Musk’s purchase of Twitter.
When Ofcom opened its latest probe into online harms, it pointed directly at the accounts and content people in the UK are still seeing every day, especially hate and terrorism-linked posts. Now X is answering with a set of commitments meant to change how quickly those items are dealt with, and how enforcement works when regulators believe illegal material is involved.
Ofcom says X has agreed to reduce “hate and terror content” in the UK by speeding up its review process for offending material. The regulator also reports that X will “withhold access in the UK” to accounts that post “illegal terrorist content” and are judged to be “operated by or on behalf of a terrorist organisation.”
The pledge lands at a politically sensitive moment for the UK’s Jewish community, after a number of hate-motivated crimes, and Ofcom says it has “evidence” that terrorist content and illegal hate speech are still persisting on some of the biggest platforms. “We are challenging them to tackle the problem and expect them to take firm action,” Oliver Griffiths, Ofcom’s Online Safety Group Director, said in a statement. “This is of particular importance in the UK following a number of recent hate motivated crimes suffered by the country’s Jewish community.”
X’s plan focuses on speed and targeted enforcement. The company says it will “review and assess” terrorist and hate content in the UK “on average within 24 hours of it being reported.” If it can’t meet that average, X says it will still review the vast majority: at least 85 percent of hate content within a maximum of 48 hours.
Beyond review timelines, X says it will work with experts on UK hate and terror content and will ban offending accounts. Ofcom, for its part, says it will review X’s performance data quarterly over the next year, an effort to keep the commitments measurable rather than purely procedural.
But the trust question is already loud, and it isn’t coming only from critics outside the UK’s regulator. A UC Berkeley study found that after Elon Musk purchased Twitter, which was eventually renamed X, the weekly rate of hate speech increased by 50 percent, with the rise “buoyed by an increase in bots.” The burden Ofcom is now placing on platforms is to prove that promises translate into sustained change.
Ofcom is also not limiting its attention to X alone. The regulator is continuing its investigation into Elon Musk’s Grok AI for generating CSAM and non-consensual intimate images. Separately, Ofcom says that as of March it had fined 4chan nearly $700,000 for offenses under the UK’s Online Safety Act. 4chan’s lawyer responded with an AI-generated picture of a hamster.
For X, this is a familiar regulatory test: commitments are one step, but follow-through is what usually determines whether regulators escalate. And given the high visibility of Elon Musk’s own daily posting and reposting of racist content, Ofcom’s focus on UK-specific reductions is unlikely to end the wider credibility debate over whether hate is being treated as a priority or just a compliance target.