
Sam Altman Apologizes After OpenAI Didn’t Alert Police in Canada Shooting

Sam Altman apologized after OpenAI did not alert law enforcement about a banned account tied to a deadly rampage in Canada that killed eight people and injured 25.

OpenAI’s chief executive, Sam Altman, has issued a public apology after the company faced criticism for not alerting law enforcement about the online behavior of a person accused of carrying out a deadly attack in Canada.

In a letter dated Thursday and posted Friday, Altman expressed condolences to the community of Tumbler Ridge, British Columbia, saying OpenAI did not alert authorities about an account that had been banned in June. The apology came as questions intensified over what platforms should do when abuse-detection systems surface potential warning signs of violence.

Police say the alleged shooter, 18-year-old Jesse Van Rootselaar, killed his 39-year-old mother, Jennifer Jacobs, and his 11-year-old stepbrother, Emmett Jacobs, in their home on Feb. 10. He then traveled to the nearby Tumbler Ridge Secondary School and opened fire, killing five children and an educator before taking his own life. Investigators also reported 25 people were injured during the rampage. The scale of the violence has left families and the wider northern community grappling with a loss that cannot be undone.

OpenAI said its abuse-detection systems flagged Van Rootselaar’s account last June for the “furtherance of violent activities,” and the company later banned the account under its usage policy. At the time, OpenAI reviewed whether it should refer the account to the Royal Canadian Mounted Police but determined the activity did not meet what it described as a threshold for referral.

For residents, the apology lands with a particular kind of weight: it acknowledges a gap between moderation decisions and the moments when police involvement might have been possible. David Eby, British Columbia’s premier, had previously said it appeared OpenAI had an opportunity to prevent the mass shooting. While officials and the company can now trade details about decision rules, families are left with the question of whether earlier action could have changed the outcome.

Altman’s letter says he spoke with local leaders, including Tumbler Ridge Mayor Darryl Krakowka and Eby, and that they conveyed the community’s “anger, sadness and concern.” He also framed the apology as a step that does not, and cannot, restore what was lost. “I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman wrote, adding that an apology is necessary to recognize the harm and “irreversible loss” the community has suffered.

The incident also spotlights a broader tension technology companies face as they develop systems designed to detect harmful intent online. Abuse detection and violence-related monitoring can be powerful, but a threshold still has to be defined: what qualifies as credible enough to trigger a law enforcement referral, and what remains a policy violation handled without immediate escalation. In tragedies like this, the difference between “not enough to refer” and “enough to act” becomes impossible for the public to judge with confidence, especially after the fact.

Going forward, Altman said OpenAI’s focus will include working with all levels of government to help ensure something like the attack never happens again. Eby, meanwhile, called the apology “necessary, and yet grossly insufficient” for the devastation experienced by families. That reaction underscores how public trust is not only about procedures, but also about whether communities believe systems are designed to prioritize prevention over uncertainty.

For now, the company’s public steps may satisfy some demands for accountability while leaving others dissatisfied, particularly those who want clearer standards on when platforms should contact police. As governments and tech firms continue refining policies for violent risk signals, the central question will remain the same: when an algorithm flags dangerous intent, how should that information move from a moderation queue to real-world intervention fast enough to matter?