Hey there, and welcome to Decoder! I’m Hayden Field, senior AI reporter at The Verge — and your Thursday episode guest host. I have another couple of shows for you while Nilay is out on parental leave, and we’re going to be spending more time diving into some of the unforeseen consequences of the generative AI boom.
Today, I’m talking with Heidy Khlaaf, who is chief AI scientist at the AI Now Institute and one of the industry’s leading experts in the safety of AI within autonomous weapons systems. Heidy has actually worked with OpenAI in the past; from late 2020 to mid-2021, she was a senior systems safety engineer there during a critical time, when OpenAI was developing safety and risk assessment frameworks for its Codex coding tool.
Now, the same companies that once championed safety and ethics in their mission statements are actively selling and developing new technology for military applications.
In 2024, OpenAI removed a ban on “military and warfare” use cases from its terms of service. Since then, the company has signed a deal with autonomous weapons maker Anduril and, this past June, signed a $200 million Department of Defense contract.
OpenAI is not alone. Anthropic, which has a reputation as one of the most safety-oriented AI labs, has partnered with Palantir to allow its models to be used for US defense and intelligence purposes, and it also landed its own $200 million DoD contract. And Big Tech players like Amazon, Google, and Microsoft, who have long worked with the government, are now also pushing AI products for defense and intelligence, despite growing outcry from critics and employee activist groups.
So I wanted to have Heidy on the show to walk me through this major shift in the AI industry, what’s motivating it, and why she thinks some of the leading AI companies are being far too cavalier about deploying generative AI in high-risk scenarios. I also wanted to know what this push to deploy military-grade AI means for bad actors who might want to use AI systems to develop chemical, biological, radiological, and nuclear weapons — a risk the AI companies themselves say they’re increasingly worried about.
Okay, here’s Heidy Khlaaf on AI in the military. Here we go.
If you’d like to read more on what we talked about in this episode, check out the links below:
- OpenAI is softening its stance on military use | The Verge
- OpenAI awarded $200 million US defense contract | The Verge
- OpenAI is partnering with defense tech company Anduril | The Verge
- Anthropic launches new Claude service for military and intelligence use | The Verge
- Anthropic, Palantir, Amazon team up on defense AI | Axios
- Google scraps promise not to develop AI weapons | The Verge
- Microsoft employees occupy headquarters in protest of Israel contracts | The Verge
- Microsoft’s employee protests have reached a boiling point | The Verge
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
AI, Anthropic, Decoder, OpenAI, Podcasts, Tech, xAI