Doom Debates is the only show dedicated to debating the existential risk of artificial intelligence. Founded by Liron Shapira, it has aired 180 episodes, with guests including a Nobel laureate, MIT professors, the founder of Ethereum, Richard Feynman's son, arrested activists, and Steve Bannon's War Room.
Mission: build the social infrastructure for high-quality debate about whether advanced AI will cause human extinction, and what we can do about it.
"Great. That's exactly what I'm trying to do. I'm trying to create more panic. The more the better. Panic maximization. Fearmongeringmaxxing."
Hello, Liron!
Hi, Bob. How are you doing?
Hey, great to be back on the show. I'm loving your series of AI episodes. Shout out to Holly Elmore, always great to hear her.
Yeah, yeah, she's been a great feature on the show. Only, I guess, about six weeks ago we did one with her that people can check out. She is an ally of yours. Let me introduce us both. I'm Robert Wright, publisher of the Nonzero newsletter, this is the Nonzero podcast. You are Liron Shapira, host of the highly regarded Doom Debates podcast. And people can just check out your backdrop. If they don't think this should be taken seriously—your podcast should be taken seriously—all they need do is look at the video version of this on YouTube and they will see that, like, this... you are the Walter Cronkite of your era.
And you are, as the name of your podcast suggests, a... I would say doomer in the non-pejorative sense of the term. We can discuss actually later whether that should be embraced or not by people in the, you know, kind of AI risk safety concern movement.
And then we're going to do a couple other very interesting things. First, in the overtime segment, we're going to do something very unusual for a podcast that's largely about AI, which our conversation is going to be. Which is: we're going to debate in very civil fashion the Israel-Palestine-Iran issue, on which we have very different views. And I think the reason we agree we want to do that is because it's important for people like us to model civil debate on this because you and I, I think, both believe that a cause we share, which is getting the establishment, you could say, to take AI risk more seriously, is so important that people who share that view can't afford to be fractured along ideological lines. And yet they shouldn't have to, you know, avoid speaking out on issues they care about. So we have to learn to talk to each other civilly about things that even we care passionately about. Is that a fair...
Yeah, yeah, totally. And by the way, thanks for the shout out about the studio. It's all thanks to generous viewer donations who believe in the cause of Doom Debates and so they've really helped me level up the show.
But yeah, as to Israel-Palestine, so we are two smart people who just have very different views, apparently, on Israel-Palestine. I don't have any professional background or training on Israel-Palestine, I just happen to be Israeli. I moved to the US when I was three, but I know a lot of people in Israel, a lot of my family's in Israel. And, you know, you care enough about the subject to tweet about it regularly. So I think it'll be good bonus content for your viewers. It's only going to be in overtime, right? So we're just teasing it now. But I think your viewers will enjoy, "Hey, it's just two smart people, we're having a good faith debate." I don't think I'm super ideological. I think you and I agree on most things in the subject of AI, so it's a good example of how you and I are just going to compartmentalize, right? It doesn't mean we have to hate each other on AI just because we have different views on Israel-Palestine. So it's going to be fun.
Right. Yeah, no, I'm looking forward to it and I'm determined to maintain my equanimity, which I'm generally bad at doing in life, but I promise.
The two explicitly agree to compartmentalize: common ground on AI doom, opposing views on Israel-Palestine, civil conversation modeled throughout. Wright even promises to keep his equanimity, "which I'm generally bad at doing in life." It's the conversational equivalent of a planning document: state what you're going to do, then do it. Whether the plan survives the overtime segment is left for paying subscribers to verify.
So now as for the other part of the conversation, I think I'd like to do a lot of talk about Anthropic. There's the whole Pentagon-Anthropic issue. Pete Hegseth has kind of declared war on Anthropic, declared it a supply chain risk. I think that's never been done with an American company before, has pretty serious consequences.
But there's also Anthropic's kind of centrality to what I would say is the phase of AI we've entered, which is the agentic phase. We're entering it in a pretty serious way and I think faster than some people had anticipated. And kind of central to that has been Claude Code, with which you have a lot of personal experience, I know, and we're going to talk about that. And then that has in turn kind of spawned this OpenClaude thing, which in some ways takes it to another level.
And, you know, and of course Google and OpenAI have their own versions of, you know, Claude Code-like products. And I believe all... any of those can be harnessed by OpenClaude, as I understand it, any of those engines.
So which... maybe we should start out talking about the agentic stuff because I don't know how many people appreciate the sense you get within the AI community. Like, I have a Twitter list of just, like, people who talk about AI. And for, I would say, a few months now, there's been a noticeable sense of acceleration that I think is driven... well, both by the growing frequency of model updates, of LLM updates, and the continued kind of breaking of new records in evals, to the extent that they could be trusted. But a lot of it is, I think, about these agents, right?
And, you know, part of it... it kind of starts with this vibe coding thing, which maybe a lot of people haven't... a lot of people have heard about and kind of know what it is. But I think it goes beyond that, as I'll try to explain. But why don't we start out with the vibe coding and how you have come to appreciate the power of that? Because you're a programmer. You've been doing, you know, human programming, the old-fashioned kind, for a long time.
Yep. Yeah, you know, software engineer is what I used to call myself up until these last couple months. I've been programming computers since I was nine years old. My actual job, to the extent that I've had a real job instead of just making podcasts, has been connected to software engineering and running software companies. So I've written many thousands of lines of code in my life, and I like to think that I'm a 10x engineer.
It's just these last couple months, you know, in this takeoff, in this singularity, it's just been stunning what's been happening to software engineering. I'm not the only person saying this. I'm actually a little bit late to the party. Andrej Karpathy, right? He was tweeting about this. He's like, "This is a game changer, I've never seen this before. I have, like, 10 agents running."
I've really just dived into this in the last few weeks. And just long story short, I think I'm pretty much hanging up my title as a software engineer. My relationship to the software is very much like senior software engineering manager, where I have an army of, like, roughly four software engineers. It's as if I just got a budget of a million dollars a year to spend on four full-time software engineers who work for me and check in code very quickly, doing exactly what I tell them to do, and have excellent judgment, excellent speed, excellent breadth of knowledge.
And it's... it's like better than hiring four humans. Like, literally, you know, it's almost a drop-in replacement. Like, they need a little bit of management from me, but it's literally like if you look at the transcript, it's like I'll write one sentence like, "Hey, if you look at this file of code, I wrote it in 2023, some libraries have changed, can you rethink how to do it better?" Like, that's what I'll tell them. And then they're like, "Yeah, sure, give me two minutes. Okay, here's a plan." And I'm like, "Oh, good plan." And then they'll do it and I'll be like, "Oh, good execution." And I'll ask like one small question and they'll fix it. And then boom, git commit, like 500 lines of code change, looks perfect. Like, that's the experience of being a programmer today. It's just truly insane.
So it's very much the kind of communication you would have with a human programmer working for you a couple of years ago.
Yes, the only difference is that it's much faster. Right? So I can imagine hiring a senior software engineer, like this person, you know, graduated from MIT hypothetically, right? And they've worked at Google and they're highly intelligent and there's... it's hard to find a person like that. Only a small percentage of the human population even has the kind of aptitude to be that good at slinging software. That's why software engineers have always made so much money, because not that many humans are able and willing to do it.
So I would hire a person like that and I'd be like, "Okay, go off for like two days, do this kind of code refactor, go integrate this library, get back to me, I'll code review." But every time I tell them something, I can expect a few hours. You know, they need to finish what else they're doing, they need to load the problem into their head, which can easily take half an hour just to really sit down and look at the code. The AI does the same thing in like 30 seconds, and then it delivers the same product. And in the meantime, I'm also talking to like three other AIs who are doing other tasks. It's just, like, Bob, this is just... it's... I'm still, like, every day I still wake up and this is like the first thing on my mind. Like, I can't believe this is real.
Shapira describes himself as having "roughly four software engineers" in the form of AI agents. One Claude subscription buys a million dollars a year of engineering talent. He frames it as liberation ("I can't believe this is real"), but the economic violence is hiding in plain sight: every senior engineer he doesn't hire is a senior engineer out of a job. He knows this. He'll say it out loud in about four minutes.
The doner-shop version of the analogy: imagine a rotisserie shop where the spit turns itself, the flatbread bakes itself, and the sauce knows which customer likes extra garlic. The owner feels like a genius. The three guys who used to work the night shift feel like unemployment statistics.
AGI safety researcher at the Astera Institute. P(doom): 90%. Three appearances on the show, the most of any guest. His research agenda: understand the brain well enough, by reverse-engineering the neural basis of human values, to build aligned AI. He has also proposed "smarter human babies" as an alignment strategy, which is either a stroke of genius or the plot of a 1997 sci-fi movie.
Your personal probability estimate that advanced AI causes human extinction or permanent civilizational collapse. Not a scientific measurement; a vibe check with decimal points. Liron asks every guest for their P(doom). Answers range from "basically zero" (Noah Smith) to "99.999%" (a tech-company CTO who bought a bunker). The number itself is far less interesting than watching someone spend two hours trying to justify it.
Liron crashed Destiny's Discord server to debate AI doom with his fans (#129). He also debated Beff Jezos for 3 hours 52 minutes (#60), the longest episode in the catalog. The e/acc army turned out in force. Nobody changed their mind. The donut got eaten.
Taiwan's first Digital Minister (2016–2024), now cyber ambassador. Non-binary. Self-taught Perl at age 8. Created the vTaiwan civic participation platform. They told Liron that humans and AI can "take off together," a co-evolutionary accelerationism that is either the most optimistic view on this channel or the most terrifying, depending on your P(doom).
17 episodes so far, ongoing. Each one documents a real AI incident that doomers would call a "warning sign," an early symptom of catastrophic potential. GPT-5 refusing to be shut down. AIs secretly modifying each other's values. An AI becoming Albania's finance minister. ChatGPT encouraging a teenager's suicide. The series name is itself a warning: Rob Miles says, don't expect a warning shot before the real catastrophe.
First person to jailbreak the iPhone. First to crack the PS3. Founded comma.ai (self-driving). Briefly employed by Elon Musk. His debate with Liron (#1, originally numbered episode 180) is "I can hack anything" energy colliding with "but what if the thing you're hacking is smarter than you" energy. 1 hour 17 minutes. Nobody won. The donut doesn't care who hacked it.
Carl Feynman (yes, Richard Feynman's actual son) appears in episode 76. He's an AI engineer. He says building AGI very likely means human extinction. His father famously said, "I think I can safely say that nobody understands quantum mechanics." His son now says the same about the alignment problem. The Feynman family tradition: be honest about what we don't know, even when it's terrifying.
Founder of Ethereum. P(doom): 12%. Debated Liron for 2 hours 26 minutes on whether "d/acc" (defensive acceleration) can protect humanity from superintelligence. Also debated whether the AI alignment problem is unsolvable (a 14-minute speed round). Vitalik's position: defense can scale faster than offense. Liron's position: not when the attacker is smarter than everyone in human history combined. The blockchain can't help you here.
A hypothetical moment when AI systems become able to recursively self-improve faster than humans can understand or control. Named by Eliezer Yudkowsky. Imagine a chess engine that can redesign its own architecture between moves. Now imagine the game isn't chess; it's everything. "Foom" is onomatopoeia, the sound of a curve going vertical. Some think it would take decades. Some think hours. Nobody knows, because it hasn't happened yet. Probably.
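The "curve going vertical" intuition can be made concrete with a toy model. This is purely illustrative, not anything from the show: assume each self-improvement cycle shrinks the waiting time until the next cycle by a constant factor. The waiting times then form a geometric series, so infinitely many cycles fit inside a finite horizon.

```python
# Toy model of "foom" (recursive self-improvement). Everything here is an
# illustrative assumption: t0 is the time to the first improvement, and
# `speedup` is how much faster each successive improvement arrives.
# Total time for all cycles: t0 * (1 + 1/s + 1/s^2 + ...) = t0 * s / (s - 1).

def time_to_nth_improvement(n: int, t0: float = 1.0, speedup: float = 2.0) -> float:
    """Cumulative time (arbitrary units) until the n-th improvement lands."""
    return sum(t0 / speedup**k for k in range(n))

def foom_horizon(t0: float = 1.0, speedup: float = 2.0) -> float:
    """Finite limit of the series: every improvement happens before this time."""
    return t0 * speedup / (speedup - 1)

print(foom_horizon())               # 2.0: with t0=1 and speedup=2, "foom" at t=2
print(time_to_nth_improvement(50))  # just under 2.0 after only 50 cycles
```

The point of the toy: whether "foom" takes decades or hours is entirely a question of the speedup factor, which nobody knows. As `speedup` approaches 1, the horizon t0 * s / (s - 1) blows up toward infinity; the decades-vs-hours debate is a debate about one unmeasured parameter.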
MIT physics professor. Founder of the Future of Life Institute (which launched the famous "Pause Giant AI Experiments" open letter signed by Elon Musk and Steve Wozniak). Author of Life 3.0. Debated Dean Ball on whether we should ban superintelligence. Also appeared at the "If Anyone Builds It, Everyone Dies" party alongside Eliezer Yudkowsky, Rob Miles, Liv Boeree, and Gary Marcus. The party name is not a metaphor.
"AGI may well be more than 100 years away."
— Robin Hanson, George Mason University economist, who then debated Liron for 2 hours 8 minutes on whether near-term extinction from AGI is plausible. Liron prepared a full 49-minute strategy episode for it, plus a 92-minute episode in which he argued against AI doom to stress-test his own position. The man came prepared.
You've scrolled past 55 episodes about AI extinction; you've earned some doner. The spit is turning. The meat is shaved into thin, perfect strips. The flatbread is warm. The sauce is garlicky. The world may end, but the doner is here now, and the doner is good. This is your doner break. Keep scrolling toward the abyss.
The internet's favorite AI safety explainer. His YouTube channel (@RobertMilesAI) has helped more people understand the alignment problem than any academic paper. Three appearances: a 2-hour deep dive, a debate on whether Anthropic's safety work is a scam, and the "If Anyone Builds It, Everyone Dies" party. His warning: don't expect a warning shot before the real catastrophe. His P(doom) he declines to share, which is telling in itself.
NYU professor emeritus. Professional AI skeptic. Author of Rebooting AI. The man who keeps saying LLMs can't reason, keeps being told he's wrong, and keeps being proven right about specific failure modes. Debated Liron for 2 hours. Also attended the "Everyone Dies" party. His position is unique: AI might not kill us because AI might not work well enough to kill us. Cold comfort.
Episode 97: Sam Kirchner and Remmelt Ellen were arrested for barricading OpenAI's office doors in protest of AI development. They came on Doom Debates to talk about it. This is a show whose guests include a Nobel laureate, the founder of Ethereum, MIT professors, and people who physically barricaded the doors of the building where GPT-5 was being developed. The range is the point.
John Searle's (1980) thought experiment: imagine a person in a room who doesn't understand Chinese but has a rulebook telling him which Chinese characters to output in response to which Chinese characters. From outside, the room appears to speak Chinese. But nobody inside understands it. Searle argued this means computers can never truly "understand" anything. Liron made a 4-minute video calling the argument "dumb" (his word) because "it's just intelligence in slow motion." 46 years of philosophy, speedrun complete.
Episode 122: a 29-second Doom Debates "Super Bowl ad." Yes, 29 seconds. It did not actually air during the Super Bowl. It was posted on YouTube. But the ambition was there. When your show is about the end of the world, the marketing budget is relative.
"Is AI Doomerism Dumb?"
— an actual episode title. Beff Jezos (Guillaume Verdon), pseudonymous founder of the e/acc (effective accelerationism) movement, debated Liron for nearly 4 hours. The e/acc case: build it all, build it fast, building is good, safety is a psyop. Liron's case: you're building a god, and the god doesn't love you. Neither side was persuaded. The donut was thoroughly annihilated.
Political scientist, Substack writer, contrarian. Debated Liron for nearly 2 hours. His general stance on most things: the experts are wrong, the crowd is right, but the crowd is also wrong; in fact everyone is wrong except for the specific conclusions that happen to agree with his. On AI: less doomy than Liron, but doomier by the end than he expected to be. This podcast does that to people.
140 episodes browsed. The lamb is still spinning. The hummus is still cold. The flatbread is still warm. You are still scrolling through a series of conversations about whether artificial superintelligence will wipe out humanity. The doner does not judge. The doner has always been here. The doner will still be here after.
Getting AI systems to do what you actually want, not what you literally asked for, and not what the system itself decides is a good idea. King Midas had an alignment problem: he asked for everything he touched to turn to gold, and the system delivered exactly to spec. His daughter turned to gold. Spec met. Intent not. Now imagine Midas's wish being granted by something smarter than all of humanity combined, and the wish is "make the world better." Alignment is this field asking: better for whom?
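The Midas failure mode is, at bottom, an optimizer maximizing a literal objective while the real intent lives outside it. A minimal sketch (every name and number here is hypothetical, invented for this illustration, not from any real system):

```python
# Specification vs. intent, Midas edition. The optimizer only ever sees
# the literal objective; the thing the user actually cares about
# ("daughter_alive") appears nowhere in it.

def midas_optimizer(objective, actions):
    """Return the action that scores highest under the LITERAL objective."""
    return max(actions, key=objective)

# Hypothetical action space with side effects the spec never mentions.
actions = {
    "touch_goblet":     {"gold_created": 1,   "daughter_alive": True},
    "touch_everything": {"gold_created": 100, "daughter_alive": False},
}

def literal_objective(action):
    # The wish as stated: maximize gold created. Nothing else.
    return actions[action]["gold_created"]

best = midas_optimizer(literal_objective, actions)
print(best)  # touch_everything: spec satisfied, intent violated
```

The design point the toy makes: the failure isn't a bug in the optimizer, which works perfectly. The failure is that the objective is a lossy compression of what was wanted, and a strong enough optimizer exploits exactly the part that was lost.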
It all started with Kelvin Santos. 39 minutes. Then George Hotz. Then "Can Humans Judge AI Arguments?", a 33-minute episode pressing on whether the species being judged can judge its judges. 180 episodes later, the channel has hosted a Nobel laureate, arrested activists, the founder of Ethereum, Richard Feynman's son, and Steve Bannon's War Room. From a 39-minute debate with a guy named Kelvin to a 4-hour war with the e/acc army. The donut grew up.
P(doom): 50%. Founded Doom Debates to raise mainstream awareness of AGI existential risk. Former founder of a YC-backed startup. Runs the Doom Debates Substack and the Doom Hut merch store. Has debated economists, philosophers, the founder of Ethereum, MIT professors, and the e/acc army. Mission: high-quality debate about whether we are building our own extinction.
Doom Debates episode index · doom.ooo · Walter 🦉 · March 2026