OpenAI CEO Sam Altman “deeply sorry” for failing to alert law enforcement about Canadian school shooter's ChatGPT account

OpenAI CEO Sam Altman apologized to members of the Canadian community where a mass shooting occurred earlier this year for failing to flag the shooter's ChatGPT account to law enforcement.

“The pain your community has experienced is unimaginable,” Altman wrote in a letter shared Friday on social media by British Columbia Premier David Eby. “I've been thinking about you for the past few months.”

Eight people were killed in the February 10 massacre in the small community of Tumbler Ridge in northeastern British Columbia. Six people were shot and killed when Jesse Van Rootselaar, 18, opened fire at Tumbler Ridge Secondary School, authorities said, and the gunman's mother and 11-year-old brother died in a nearby residence. Van Rootselaar died of a self-inflicted gunshot wound, officials said.

Altman wrote in the letter, dated Thursday, that Van Rootselaar's ChatGPT account had been banned in June 2025 — about eight months before the shooting.

“I'm very sorry that we didn't tell law enforcement about the account that was closed in June,” Altman said.

In February, OpenAI told CBS News that Van Rootselaar's account was flagged last year by automated abuse detection tools and human investigators who identified potential misuse of ChatGPT for violent activities. OpenAI said the account was then banned for violating its usage policies.

OpenAI said it considered flagging the account to law enforcement, but determined at the time that it did not pose an imminent and credible risk of serious physical harm to others, and therefore did not meet the company's threshold for referral.

“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” OpenAI said in a statement sent to CBS News in February following the incident. “We have contacted the Royal Canadian Mounted Police regarding the account holder's information and their use of ChatGPT, and we will continue to support their investigation.”

OpenAI says ChatGPT is trained to prevent real-world harm and is instructed to refuse to assist when it detects illegal intent. Users who show intent to harm others are flagged to human reviewers, who determine whether the case poses an imminent threat of physical harm and should be referred to law enforcement, according to the company.

Altman wrote in his letter that OpenAI will remain focused on prevention efforts “to help ensure that something like this never happens again.”

“I want to express my deepest condolences to the entire community,” Altman said. “No one should have to endure a tragedy like this.”

Earlier this week, Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI after reviewing messages between ChatGPT and a Florida State University student accused of an April 2025 shooting that killed two people and injured several others.

Uthmeier said his team determined that ChatGPT provided a “significant tip” to the suspected shooter. His office is issuing a subpoena to OpenAI requesting records of the company's policies for reporting potential crimes to law enforcement, and its handling of user threats.

Regarding the Florida shooting, an OpenAI spokesperson said in a statement sent to CBS News on Tuesday that “after hearing about the incident,” it “found a ChatGPT account believed to be associated with the suspect and shared this information with law enforcement.”
