Altman apologizes after Canada shooting: ChatGPT account wasn’t flagged

OpenAI CEO Sam Altman says he’s “deeply sorry” that a banned ChatGPT account tied to a Canadian school shooter wasn’t referred to law enforcement.
OpenAI CEO Sam Altman has apologized to a Canadian community affected by a school shooting, acknowledging that the shooter’s ChatGPT account wasn’t flagged to law enforcement even after it had been banned.
The apology centers on a February 10 massacre in Tumbler Ridge, British Columbia, where eight people were killed. Authorities said six victims were shot at Tumbler Ridge Secondary School by 18-year-old Jesse Van Rootselaar. Van Rootselaar’s mother and 11-year-old brother were also killed at a nearby home, and the suspect died of a self-inflicted gunshot wound, according to officials.
The key point, as Altman framed it in a letter shared with Misryoum readers through a provincial leader, is timing and internal decision-making. Altman wrote that Van Rootselaar’s ChatGPT account had been banned in June 2025, roughly eight months before the shooting, and that OpenAI did not alert law enforcement after the ban.
In the letter, Altman wrote, “The pain your community has endured is unimaginable,” adding that he was “deeply sorry” OpenAI did not alert authorities about an account that had already been banned. He also conveyed condolences and emphasized a renewed focus on prevention.
OpenAI has previously described how its systems and reviews handle violent misuse. Misryoum understands the company said in earlier reporting that automated abuse-detection tools and human reviewers had flagged Van Rootselaar’s account for potential misuse related to violent activity. After that review, OpenAI said it banned the account for violating its usage policies.
Still, OpenAI said it weighed whether to share information with law enforcement at the time. The company’s earlier explanation, tied to its publicly stated threshold, was that it did not conclude the account posed an “imminent and credible risk of serious physical harm,” the standard it says is required for a referral.
That distinction, between detecting troubling behavior and determining an urgent, credible threat, sits at the center of the new public apology. It also underscores one of the most difficult questions facing AI platforms that receive potentially dangerous requests: when does a moderation decision become a public-safety decision?
For families and communities, the difference can feel academic. If an account was banned months earlier, the question becomes simple: why wasn’t something escalated sooner? Even if the evidence at the time didn’t meet a strict legal or policy standard for “imminent” harm, the outcome invites painful scrutiny of whether those standards match real-world risk patterns.
Broader industry pressure is also building around how AI companies handle threats. Earlier this week, Misryoum notes, Florida Attorney General Ashley Moody announced a criminal investigation into OpenAI after messages between ChatGPT and a Florida State University student were reviewed in connection with an April 2025 campus shooting that killed two people and injured others. In that separate case, prosecutors said they found “significant advice” provided by ChatGPT, and they are seeking records related to the company’s reporting protocols and handling of user threats.
Put together, these developments send a clear signal: regulators and governments appear less willing to accept that “policy thresholds” are the end of the conversation. They want transparency into the mechanics: what gets flagged, how humans interpret risk, and what triggers escalation. Even when platforms act in good faith, the expectation is that systems must be able to justify decisions after the fact, not just before.
From an operational standpoint, the apology may push OpenAI to tighten internal review pathways and clarify when account-level actions (like banning) should be paired with law-enforcement referrals. The fact that the account was banned months before the attack also raises practical questions about continuity: what happens after a ban, how quickly risks are re-evaluated, and whether patterns that suggest harm should lead to broader intervention.
For users, this story will likely reinforce a growing reality of AI services: safety isn’t only about what the model generates; it’s also about what gets detected, reviewed, and acted upon in the background. As AI systems become more capable and more embedded in daily life, public-safety accountability will increasingly determine how these tools are governed, monitored, and improved.
Altman ended his letter by returning to the theme of prevention, writing that OpenAI will stay focused on efforts “to help ensure something like this never happens again.” For Tumbler Ridge, prevention is the only direction that can matter now; the tragedy has already happened, and the community’s questions about timing, thresholds, and escalation are unlikely to fade.