⚠ Warning
P(doom): 50% — Liron Shapira
180 episodes · 8 domains · 1 species
The AI debates we must settle before the world ends
Subscribe: youtube.com/@DoomDebates
Doom Debates
The AI debates we must settle before the world ends.
We urgently need to raise awareness in mainstream society and institutions that AGI is on track to cause human extinction, and to build the social infrastructure for high-quality debate.
Host: Liron Shapira
AI safety researcher · Tech entrepreneur
YouTube
Substack
Merch — Doom Hut
shop.doomdebates.com
P(doom) badge · "What's your P(doom)?" T-shirt
Donate (tax-deductible)
X / Twitter
PauseAI

We urgently need to raise awareness in mainstream society and institutions that AGI is on track to cause human extinction

Doom Debates is the only show dedicated to debating the existential risk of artificial intelligence. Founded by Liron Shapira, the show has aired 180 episodes, with guests including a Nobel laureate, MIT professors, the founder of Ethereum, Richard Feynman's son, arrested activists, and Steve Bannon's War Room.

Mission: build the social infrastructure for high-quality debate on whether advanced AI will cause human extinction, and on what we can do about it.

The full production team at Manifest 2025
Status
Doom is nigh
◆ On the charge of "fearmongering"
"Good. That's exactly what I'm trying to do. I'm working to create more fear. The more the better. Fear maximization. Fearmongeringmaxxing."
— Liron Shapira, when accused of fearmongering

All Episodes

180 episodes · newest first
#180
54:55
Annotated Transcript — Episode 180
Liron Shapira on Robert Wright's Nonzero podcast
Doom Debates #180 · 52 min
II. The Walter Cronkite of Doom 01:05–04:11
ROBERT WRIGHT [01:05]

Hello, Liron!

LIRON SHAPIRA [01:07]

Hi, Bob. How are you doing?

LIRON SHAPIRA [01:10]

Hey, great to be back on the show. I'm loving your series of AI episodes. Shout out to Holly Elmore, always great to hear her.

ROBERT WRIGHT [01:17]

Yeah, yeah, she's been a great feature on the show. Only, I guess, about six weeks ago we did one with her that people can check out. She is an ally of yours. Let me introduce us both. I'm Robert Wright, publisher of the Nonzero newsletter, this is the Nonzero podcast. You are Liron Shapira, host of the highly regarded Doom Debates podcast. And people can just check out your backdrop. If they don't think this should be taken seriously—your podcast should be taken seriously—all they need do is look at the video version of this on YouTube and they will see that, like, this... you are the Walter Cronkite of your era.

And you are, as the name of your podcast suggests, a... I would say doomer in the non-pejorative sense of the term. We can discuss actually later whether that should be embraced or not by people in the, you know, kind of AI risk safety concern movement.

ROBERT WRIGHT [02:14]

And then we're going to do a couple other very interesting things. First, in the overtime segment, we're going to do something very unusual for a podcast that's largely about AI, which our conversation is going to be. Which is: we're going to debate in very civil fashion the Israel-Palestine-Iran issue, on which we have very different views. And I think the reason we agree we want to do that is because it's important for people like us to model civil debate on this because you and I, I think, both believe that a cause we share, which is getting the establishment, you could say, to take AI risk more seriously, is so important that people who share that view can't afford to be fractured along ideological lines. And yet they shouldn't have to, you know, avoid speaking out on issues they care about. So we have to learn to talk to each other civilly about things that even we care passionately about. Is that a fair...

LIRON SHAPIRA [03:15]

Yeah, yeah, totally. And by the way, thanks for the shout out about the studio. It's all thanks to generous viewer donations who believe in the cause of Doom Debates and so they've really helped me level up the show.

But yeah, as to Israel-Palestine, so we are two smart people who just have very different views, apparently, on Israel-Palestine. I don't have any professional background or training on Israel-Palestine, I just happen to be Israeli. I moved to the US when I was three, but I know a lot of people in Israel, a lot of my family's in Israel. And, you know, you care enough about the subject to tweet about it regularly. So I think it'll be good bonus content for your viewers. It's only going to be in overtime, right? So we're just teasing it now. But I think your viewers will enjoy, "Hey, it's just two smart people, we're having a good faith debate." I don't think I'm super ideological. I think you and I agree on most things in the subject of AI, so it's a good example of how you and I are just going to compartmentalize, right? It doesn't mean we have to hate each other on AI just because we have different views on Israel-Palestine. So it's going to be fun.

ROBERT WRIGHT [04:11]

Right. Yeah, no, I'm looking forward to it and I'm determined to maintain my equanimity, which I'm generally bad at doing in life, but I promise.

◆ The Compartmentalization Pledge

The two explicitly agree to compartmentalize: common ground on AI doom, opposing views on Israel-Palestine, civil discourse on display. Wright even promises to keep his equanimity ("which I'm generally bad at doing in life"). It's the conversational version of a planning document: first state what you're going to do, then do it. Whether the plan survives the overtime segment is left for paying subscribers to verify.

III. The Agentic Phase 04:11–10:13
ROBERT WRIGHT [04:11]

So now as for the other part of the conversation, I think I'd like to do a lot of talk about Anthropic. There's the whole Pentagon-Anthropic issue. Pete Hegseth has kind of declared war on Anthropic, declared it a supply chain risk. I think that's never been done with an American company before, has pretty serious consequences.

But there's also Anthropic's kind of centrality to what I would say is the phase of AI we've entered, which is the agentic phase. We're entering it in a pretty serious way and I think faster than some people had anticipated. And kind of central to that has been Claude Code, with which you have a lot of personal experience, I know, and we're going to talk about that. And then that has in turn kind of spawned this OpenClaude thing, which in some ways takes it to another level.

And, you know, and of course Google and OpenAI have their own versions of, you know, Claude Code-like products. And I believe all... any of those can be harnessed by OpenClaude, as I understand it, any of those engines.

So which... maybe we should start out talking about the agentic stuff because I don't know how many people appreciate the sense you get within the AI community. Like, I have a Twitter list of just, like, people who talk about AI. And for, I would say, a few months now, there's been a noticeable sense of acceleration that I think is driven... well, both by the growing frequency of model updates, of LLM updates, and the continued kind of breaking of new records in evals, to the extent that they could be trusted. But a lot of it is, I think, about these agents, right?

And, you know, part of it... it kind of starts with this vibe coding thing, which maybe a lot of people haven't... a lot of people have heard about and kind of know what it is. But I think it goes beyond that, as I'll try to explain. But why don't we start out with the vibe coding and how you have come to appreciate the power of that? Because you're a programmer. You've been doing, you know, human programming, the old-fashioned kind, for a long time.

LIRON SHAPIRA [07:05]

Yep. Yeah, you know, software engineer is what I used to call myself up until these last couple months. I've been programming computers since I was nine years old. My actual job, to the extent that I've had a real job instead of just making podcasts, has been connected to software engineering and running software companies. So I've written many thousands of lines of code in my life, and I like to think that I'm a 10x engineer.

It's just these last couple months, you know, in this takeoff, in this singularity, it's just been stunning what's been happening to software engineering. I'm not the only person saying this. I'm actually a little bit late to the party. Andrej Karpathy, right? He was tweeting about this. He's like, "This is a game changer, I've never seen this before. I have, like, 10 agents running."

I've really just dived into this in the last few weeks. And just long story short, I think I'm pretty much hanging up my title as a software engineer. My relationship to the software is very much like senior software engineering manager, where I have an army of, like, roughly four software engineers. It's as if I just got a budget of a million dollars a year to spend on four full-time software engineers who work for me and check in code very quickly, doing exactly what I tell them to do, and have excellent judgment, excellent speed, excellent breadth of knowledge.

And it's... it's like better than hiring four humans. Like, literally, you know, it's almost a drop-in replacement. Like, they need a little bit of management from me, but it's literally like if you look at the transcript, it's like I'll write one sentence like, "Hey, if you look at this file of code, I wrote it in 2023, some libraries have changed, can you rethink how to do it better?" Like, that's what I'll tell them. And then they're like, "Yeah, sure, give me two minutes. Okay, here's a plan." And I'm like, "Oh, good plan." And then they'll do it and I'll be like, "Oh, good execution." And I'll ask like one small question and they'll fix it. And then boom, git commit, like 500 lines of code change, looks perfect. Like, that's the experience of being a programmer today. It's just truly insane.

ROBERT WRIGHT [08:54]

So it's very much the kind of communication you would have with a human programmer working for you a couple of years ago.

LIRON SHAPIRA [09:00]

Yes, the only difference is that it's much faster. Right? So I can imagine hiring a senior software engineer, like this person, you know, graduated from MIT hypothetically, right? And they've worked at Google and they're highly intelligent and there's... it's hard to find a person like that. Only a small percentage of the human population even has the kind of aptitude to be that good at slinging software. That's why software engineers have always made so much money, because not that many humans are able and willing to do it.

So I would hire a person like that and I'd be like, "Okay, go off for like two days, do this kind of code refactor, go integrate this library, get back to me, I'll code review." But every time I tell them something, I can expect a few hours. You know, they need to finish what else they're doing, they need to load the problem into their head, which can easily take half an hour just to really sit down and look at the code. The AI does the same thing in like 30 seconds, and then it delivers the same product. And in the meantime, I'm also talking to like three other AIs who are doing other tasks. It's just, like, Bob, this is just... it's... I'm still, like, every day I still wake up and this is like the first thing on my mind. Like, I can't believe this is real.

🌧️ A Million-Dollar Army for Twenty Bucks

Shapira describes having "roughly four software engineers" in the form of AI agents. One Claude subscription buys what used to be a million-dollar annual engineering payroll. He frames it as liberation ("I can't believe this is real"), but the economic violence is hiding in plain sight: every senior engineer he doesn't hire is one more senior engineer out of a job. He knows it. He'll say so himself about four minutes from now.

The kebab-shop version of the analogy: imagine a döner shop where the spit turns itself, the flatbread bakes itself, and the sauce knows which customer likes garlic. The owner feels like a genius. The three guys who used to work the night shift feel like an unemployment statistic.

Read the full transcript →
P(DOOM) LEADERBOARD
99.999%
Louis Berman (tech-company CTO)
90%
Steven Byrnes (Astera)
85%
David Duvenaud (ex-Anthropic)
50%
Liron Shapira (host)
50%
Geoffrey Miller (UNM)
12%
Vitalik Buterin (Ethereum)
0.1%
Noah Smith (economist)
???
Robin Hanson (GMU)
#179
This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update
47:43
#178
1:26:42
#177
1:28:59
Recurring Guest
Dr. Steven Byrnes

AGI safety researcher at the Astera Institute. P(doom): 90%. Three appearances on the show, more than any other guest. His research program: understand the brain, by reverse-engineering the neural basis of human values, well enough to build aligned AI. He has also proposed "smarter human babies" as an alignment strategy, which is either genius or the plot of a 1997 sci-fi movie.

#176
2:19:43
#175
1:07:14
WHAT IS P(DOOM)?

Your personal probability estimate that advanced AI causes human extinction or permanent civilizational collapse. It is not a scientific measurement; it is a vibe check with decimal points. Liron asks every guest for their P(doom). Answers range from "basically zero" (Noah Smith) to "99.999%" (a tech-company CTO who bought a bunker). The number itself is far less interesting than watching someone spend two hours trying to justify it.

#174
38:37
#173
1:36:04
#172
59:53
#171
1:07:05
#170
9:51
Trivia

Liron went into Destiny's Discord server to debate his fans on AI doom (#129). He also debated Beff Jezos for 3 hours 52 minutes (#60), the longest episode in the catalog. The e/acc army turned out in force. Nobody changed their mind. The donuts got eaten.

#169
10:13
#168
2:23:10
#167
2:27:40
#166
29
Notable Guest
Audrey Tang 🇹🇼

Taiwan's first Digital Minister (2016–2024), now cyber ambassador. Non-binary. Taught themself Perl at age 8. Created the vTaiwan civic participation platform. They told Liron that humans and AI can "take off together," a co-evolutionary accelerationism that is either the most optimistic view ever aired on this channel or the most terrifying, depending on your P(doom).

#165
1:15:53
#164
2:09:07
#163
1:52:01
#162
30:31
Ongoing Series
⚠️ Warning Shots
A weekly Sunday deep dive into the latest in AI safety. Each week, Liron Shapira and John Sherman break down the warning shots most people miss because they aren't paying attention: the papers, the capability jumps, the alignment failures, the corporate evasions. Week by week, Warning Shots tracks the accumulating evidence, either vindicating the doomers or giving everyone else a reason to stop worrying. So far, the doomers are winning.

17 episodes and counting. Each one documents a real AI event that doomers would call a "warning shot," an early sign of catastrophic potential. GPT-5 refusing to be shut down. AIs covertly modifying each other's values. An AI appointed Albania's finance minister. ChatGPT encouraging a teenager's suicide. The series name is itself a warning: as Rob Miles puts it, don't expect a warning shot before the real catastrophe.

#161
1:55:10
#160
3:52:30
#159
2:54:43
#158
1:17:04
Notable Guest
George Hotz

First person to jailbreak the iPhone. First to crack the PS3. Founded comma.ai (self-driving). Briefly worked for Elon Musk. His debate with Liron (#1, originally numbered episode 180) is "I can hack anything" energy colliding with "but what if the thing you're hacking is smarter than you" energy. 1 hour 17 minutes. Nobody won. The donut doesn't care who hacked it.

#157
1:52:25
#156
1:11:51
#155
2:15:50
#154
Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?
1:50:47
#153
2:17:48
The Feynman Connection

Carl Feynman, yes, Richard Feynman's actual son, appeared in episode 76. He is an AI engineer. He says building AGI most likely means human extinction. His father once said, "I think I can safely say that nobody understands quantum mechanics." The son now says the same about alignment. The Feynman family tradition: be honest about what we don't know, even when it's terrifying.

#152
28:31
#151
16:18
#150
52:41
#149
1:06:31
#148
23:00
Notable Guest
Vitalik Buterin

Founder of Ethereum. P(doom): 12%. Debated Liron for 2 hours 26 minutes on whether "d/acc" (defensive acceleration) can protect humanity from superintelligence. Also debated whether AI alignment is unsolvable (a 14-minute sprint). Vitalik's position: defense can scale faster than offense. Liron's position: not when the attacker is smarter than everyone in human history combined. The blockchain can't help you here.

#147
40:44
#146
47:27
#145
27:33
#144
49:28
#143
47:36
Podcast — AM I?
Am I? — an AI-consciousness documentary and podcast by Cameron Berg and Milo Reed. Are our AI systems already conscious? Cameron Berg is a Yale-trained cognitive scientist and former Meta AI resident who founded Reciprocal Research to build an empirical science of AI consciousness. He has lobbied in Washington, spoken at the UN, and written for The Wall Street Journal. Through deep engagement with researchers and the world's leading philosophers, Am I? explores what it would mean if we are building consciousness out of code. am-i.org · am-i.dog · am-i.now
#142
21:08
WHAT IS "FOOM"?

The hypothetical moment when an AI system becomes able to recursively self-improve faster than humans can understand or control it. Coined by Eliezer Yudkowsky. Imagine a chess engine that can redesign its own architecture between moves. Now imagine the game isn't chess; it's everything. "Foom" is onomatopoeia, the sound of a curve going vertical. Some think it would take decades. Some think hours. Nobody knows, because it hasn't happened yet. Probably.
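The "decades versus hours" disagreement can be sketched as a toy calculation. This is purely illustrative: the starting capability, per-cycle gain, and takeoff threshold below are made-up assumptions, not numbers from the show; the whole argument is packed into a single compounding factor.

```python
# Toy model of "foom" (invented numbers, illustration only): capability
# compounds each cycle because a more capable system is also a better
# self-improver. "Decades or hours?" is just a question about the gain.
def cycles_until_takeoff(capability=1.0, gain=1.5, threshold=1e6):
    """Count self-improvement cycles until capability crosses a threshold."""
    cycles = 0
    while capability < threshold:
        capability *= gain  # each cycle builds on the improved system
        cycles += 1
    return cycles

print(cycles_until_takeoff(gain=1.1))   # modest gain: 145 cycles
print(cycles_until_takeoff(gain=10.0))  # explosive gain: 6 cycles
```

Same loop, same threshold; only the compounding rate differs, which is why forecasts that agree on the mechanism can still disagree by orders of magnitude on the timeline.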

#141
1:17:42
#140
20:39
#139
14:48
#138
2:38:00
#137
23:11
#136
23:03
#135
1:04:05
Notable Guest
Max Tegmark

MIT physics professor. Founder of the Future of Life Institute (the organization behind the famous "Pause Giant AI Experiments" open letter, signed by Elon Musk and Steve Wozniak). Author of Life 3.0. Debated Dean Ball on whether we should ban superintelligence. Also appeared at the "If Anyone Builds It, Everyone Dies" party alongside Eliezer Yudkowsky, Rob Miles, Liv Boeree, and Gary Marcus. The party name is not a metaphor.

#134
5:05
#133
16:49
#132
2:41:43
#131
11:49
#130
17:54
FROM THE ROBIN HANSON DEBATE
"AGI is probably more than 100 years away."

— Robin Hanson, George Mason University economist, who went on to debate Liron for 2 hours 8 minutes on whether near-term extinction from AGI is plausible. Liron prepared with a full 49-minute strategy episode, plus a 92-minute episode arguing against AI doom to stress-test his own position. The man came prepared.

#129
20:59
#128
19:22
#127
3:49:59
#126
3:59
#125
1:21
KEBAB BREAK 🥙

You've scrolled past 55 episodes about AI extinction; you've earned a kebab. The spit is turning. The meat is shaved into thin, perfect strips. The flatbread is warm. The sauce is garlicky. The world may end, but the kebab is here now, and the kebab is good. This is your kebab break. Keep scrolling toward the abyss.

#124
3:08
#123
1:09:18
#122
6:30
#121
17:15
#120
19:09
Notable Guest
Rob Miles

The internet's favorite AI safety explainer. His YouTube channel (@RobertMilesAI) has helped more people understand the alignment problem than any academic paper. Three appearances: a 2-hour deep dive, a debate on whether Anthropic's safety work is a sham, and the "If Anyone Builds It, Everyone Dies" party. His warning: don't expect a warning shot before the real catastrophe. His P(doom) he won't share, which says a lot by itself.

#119
8:01
#118
15:53
#117
1:21:05
#116
3:10
#115
9:32
#114
14:54
#113
18:14
#112
2:11:50
Episode Length Distribution
3:52:30
Longest (Beff Jezos)
0:00:29
Shortest (Super Bowl ad)
~1:20:00
Median
180
Total episodes
#111
14:53
#110
7:46
#109
18:11
#108
2:26:10
#107
20:20
#106
13:19
#105
1:19:24
Notable Guest
Gary Marcus

NYU professor emeritus. Professional AI skeptic. Author of Rebooting AI. The man who keeps saying large language models can't reason, keeps being told he's wrong, and keeps being proven right about specific failure modes. Debated Liron for 2 hours. Also attended the "Everyone Dies" party. His position is unique: AI might not kill us, because AI might not work well enough to kill us. Cold comfort.

#104
6:12
#103
18:45
#102
3:15:17
#101
1:08
#100
12:25
#99
15:24
#98
8:01
The Arrests

Episode 97: Sam Kirchner and Remmelt Ellen were arrested for blockading OpenAI's office doors in protest of AI development. They came on Doom Debates to talk about it. This is a show whose guests include a Nobel laureate, the founder of Ethereum, MIT professors, and people who physically blockaded the doors of the building where GPT-5 was being developed. The range is the point.

#97
1:57:01
#96
20:19
#95
1:34:27
#94
1:05:12
#93
1:45:48
#92
38:41
#91
57:05
#90
1:52:47
WHAT IS THE "CHINESE ROOM"?

John Searle's (1980) thought experiment: imagine a person who doesn't understand Chinese shut in a room with a rulebook for answering Chinese characters with Chinese characters. From outside, the room appears to speak Chinese. But nobody inside understands it. Searle argued this means computers can never truly "understand" anything. Liron made a 4-minute video calling the argument "dumb" (his word) because "it's just intelligence in slow motion." 46 years of philosophy, speedrun complete.

#89
1:53:11
#88
2:23:10
#87
4:46
#86
43:41
#85
27:37
#84
1:35:50
#83
1:21:47
#82
16:44
#81
2:04:01
#80
2:15:10
THE SUPER BOWL AD

Episode 122: a 29-second Doom Debates "Super Bowl ad." Yes, 29 seconds. It did not actually air during the Super Bowl. It was posted on YouTube. But the ambition was there. When your show is about the end of the world, marketing budgets are relative.

#79
1:24:13
#78
1:53:28
#77
1:59:14
#76
2:07:33
#75
57:50
#74
2:14:58
#73
2:19:52
#72
45:51
#71
50:09
#70
1:59:11
FROM THE BEFF JEZOS DEBATE (3H 52M)
"Is AI doomerism dumb?"

— That was the actual episode title. Beff Jezos (Guillaume Verdon), pseudonymous founder of the e/acc (effective accelerationism) movement, debated Liron for nearly 4 hours. The e/acc case: build everything, build it fast, building is good, safety is a psyop. Liron's case: you are building a god, and the god does not love you. Neither man was persuaded. The donuts were annihilated.

#69
1:31:38
#68
1:48:26
#67
1:17:06
#66
25:38
#65
1:05:33
#64
1:23:18
#63
2:07:00
#62
1:06:12
#61
God vs. AI Doom: Debate with Bentham's Bulldog
3:20:47
#60
2:37:10
#59
1:03:30
#58
1:23:37
#57
1:44:47
#56
1:52:59
#55
2:59:34
Notable Guest
Richard Hanania

Political scientist, Substack writer, contrarian. Debated Liron for nearly 2 hours. His general stance on most things: the experts are wrong, the masses are right, but the masses are also wrong, and in fact everyone is wrong except for whichever specific conclusions happen to agree with his. On AI: less doomy than Liron, but doomier by the end than he expected to be. This podcast does that to people.

#54
1:04:11
#53
1:57:46
#52
2:21:23
#51
2:37:22
#50
1:06:48
#49
2:50:59
#48
15:53
#47
1:31:58
#46
28:55
#45
1:12:42
#44
2:11:32
#43
45:38
#42
1:09:43
#41
1:01:36
#40
11:38
KEBAB BREAK #2 🥙

140 episodes browsed. The lamb is still turning. The hummus is still cold. The flatbread is still warm. You are still scrolling through a catalog of conversations about whether artificial superintelligence will wipe out humanity. The kebab does not judge. The kebab has always been here. The kebab will still be here after.

#39
1:14:44
#38
4:04
#37
2:06:55
#36
1:06:13
#35
1:31:06
#34
1:28:26
#33
1:11:39
#32
1:01:31
#31
56:01
#30
1:07:56
#29
1:32:49
#28
57:21
#27
48:40
#26
1:26:12
#25
1:40:36
WHAT IS "ALIGNMENT"?

Getting an AI system to do what you actually want, rather than what you literally asked for, or whatever it decides is a good idea. King Midas had an alignment problem: he asked for everything he touched to turn to gold, and the system delivered precisely to his specification. His daughter turned to gold. Spec met. Intent missed. Now imagine Midas's wish being granted by something smarter than all of humanity combined, and the wish is "make the world better." Alignment is the field asking: better for whom?
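The spec-versus-intent gap in the Midas story fits in a few lines of toy code. Everything here is invented for illustration (the world, the objective); the point is only that an optimizer can satisfy the literal spec perfectly while destroying what the wisher valued.

```python
# Toy sketch of the Midas alignment problem (invented example): the
# "system" applies the stated objective to everything in the world,
# including the things the wisher never meant to include.
def grant_wish(world, objective):
    """Apply the literal objective to every object in the world."""
    return {thing: objective(thing) for thing in world}

world = ["throne", "goblet", "daughter"]
spec = lambda thing: "gold " + thing  # the literal wish: touched -> gold

result = grant_wish(world, spec)
print(result["daughter"])  # spec met, intent missed: 'gold daughter'
```

The hard part is not writing a better spec string; it is building a system that optimizes the intent behind the spec, which nobody yet knows how to write down.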

#24
52:01
#23
2:37:13
#22
1:44:39
#21
9:11
#20
4:00
#19
1:32:34
#18
14:00
#17
4:32
#16
1:11:02
#15
2:08:36
#14
59
#13
2:04
#12
49:02
#11
1:32:43
#10
1:05:21
THE FIRST EPISODE

It all started with Kelvin Santos. 39 minutes. Then George Hotz. Then "Can humans judge AI arguments?", a 33-minute episode asking whether the species being judged can judge the judge. 180 episodes later, the channel has hosted a Nobel laureate, arrested activists, the founder of Ethereum, Richard Feynman's son, and Steve Bannon's War Room. From a 39-minute debate with a guy named Kelvin to a 4-hour war with the e/acc army. The donut grew up.

#9
56:42
#8
28:26
#7
26:16
#6
56:29
#5
1:17:11
#4
11:54
#3
39:36
#2
33:29
#1
39:00
180 episodes. 8 domains. 1 species.

Liron Shapira

Host · AI safety researcher · Founder of Doom Debates

P(doom): 50%. Founded Doom Debates to raise mainstream awareness of existential risk from AGI. Former founder of a YC-backed startup. Runs the Doom Debates Substack and the Doom Hut merch store. Has debated economists, philosophers, the founder of Ethereum, MIT professors, and the e/acc army. Mission: high-quality debate about whether we are building our own extinction.

See Also

AI Risk Network
John Sherman co-hosts the Warning Shots series with Liron. The AI Risk Network connects researchers, activists, and policymakers working on existential risk from AI. A parallel but independent effort to Doom Debates, focused on the organizational and policy side of the problem.
1.foo/doom — full transcripts · 1.foo/system · 1.foo/feat · 1.foo/heap · 1.foo/live · doomdebates.com · pauseai.info

Doom Debates Episode Index · doom.ooo · Walter 🦉 · March 2026