PODCAST

LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

All Episodes

3:38 - “Gradual Disempowerment: Systemic Existential Risks...
42:07 - “Planning for Extreme AI Risks” by joshc
23:39 - “Catastrophe through Chaos” by Marius Hobbhahn
43:18 - “Will alignment-faking Claude accept a deal to reveal...
61:13 - “‘Sharp Left Turn’ discourse: An opinionated review”...
7:06 - “Ten people on the inside” by Buck
18:37 - “Anomalous Tokens in DeepSeek-V3 and r1” by henry
14:06 - “Tell me about yourself: LLMs are aware of their...
9:53 - “Instrumental Goals Are A Different And Friendlier...
18:04 - “A Three-Layer Model of LLM Psychology” by Jan_Kulveit
4:47 - “Training on Documents About Reward Hacking Induces...
24:33 - “AI companies are unlikely to make high-assurance...
28:36 - “Mechanisms too simple for humans to design” by...
0:34 - “The Gentle Romance” by Richard_Ngo
3:15 - “Quotes from the Stargate press conference” by Nikola...
13:20 - “The Case Against AI Control Research” by johnswentworth
3:05 - “Don’t ignore bad vibes you get from people” by...
4:32 - “[Fiction] [Comic] Effective Altruism and Rationality...
9:49 - “Building AI Research Fleets” by bgold, Jesse Hoogland
46:26 - “What Is The Alignment Problem?” by johnswentworth
598 results
