Conflicting Rulings Leave Anthropic in ‘Supply-Chain Risk’ Limbo
A US appeals court ruling has left Anthropic's Claude model in legal limbo over its potential use by the US military. A lower-court decision in March had already raised questions about the model's use; the appeals court's conflicting ruling has only deepened the uncertainty, leaving the company's prospects with the military hanging in the balance.
The US military has been exploring AI models such as Claude for a range of tasks, but the conflicting rulings have raised concerns about the model's safety and security. The outcome of the case will carry significant implications for the use of AI in military applications and for the broader industry.