PODCAST

LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you’d like more, subscribe to the “LessWrong (30+ karma)” feed.

All Episodes

3:05 “Don’t ignore bad vibes you get from people” by...
4:32 “[Fiction] [Comic] Effective Altruism and Rationality...
9:49 “Building AI Research Fleets” by bgold, Jesse Hoogland
46:26 “What Is The Alignment Problem?” by johnswentworth
5:55 “Applying traditional economic thinking to AGI: a...
57:25 “Passages I Highlighted in The Letters of...
14:50 “Parkinson’s Law and the Ideology of Statistics” by...
25:11 “Capital Ownership Will Not Prevent Human...
15:56 “Activation space interpretability may be doomed” by...
8:40 “What o3 Becomes by 2028” by Vladimir_Nesov
25:26 “What Indicators Should We Watch to Disambiguate AGI...
78:48 “How will we update about scheming?” by ryan_greenblatt
20:22 “OpenAI #10: Reflections” by Zvi
2:15 “Maximizing Communication, not Traffic” by jefftk
44:21 “What’s the short timeline plan?” by Marius Hobbhahn
117:07 “Shallow review of technical AI safety, 2024” by...
28:44 “By default, capital will matter more than ever after...
39:20 “Review: Planecrash” by L Rudolf L
14:03 “The Field of AI Alignment: A Postmortem, and What To...
11:20 “When Is Insurance Worth It?” by kqr
602 results
