The New York Times

AI Safety Lab Refuses to Release New Model, Citing "Unacceptable Risk"

A leading artificial intelligence company has halted the public release of its newest AI model, declaring the technology too dangerous. Anthropic, a firm founded by AI safety researchers, announced it has developed a significantly more powerful model, known as Claude-Next. However, the company stated it will not release or sell this model due to "unacceptable risk" levels. This decision is a rare act of restraint in the competitive AI industry. It highlights a growing internal concern: the very labs building advanced AI may soon create systems they cannot safely control. Anthropic's warning suggests that the next generation of AI could arrive sooner than expected, and with capabilities that alarm its own creators. The company says current safety techniques are insufficient for the power of its new model. The announcement underscores a central debate in technology: should development slow down to ensure safety, or continue at its current rapid pace? By choosing not to release its own product, Anthropic has given a concrete answer.