PODCAST

LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.

All Episodes

“Too Soon” by Gordon Seidoh Worley (8:19)
“PSA: The LessWrong Feedback Service” by JustisMills (4:34)
“Orienting Toward Wizard Power” by johnswentworth (8:20)
“Interpretability Will Not Reliably Find Deceptive... (13:15)
“Slowdown After 2028: Compute, RLVR Uncertainty, MoE... (11:33)
“Early Chinese Language Media Coverage of the AI 2027... (27:35)
[Linkpost] “Jaan Tallinn’s 2024 Philanthropy... (1:17)
“Impact, agency, and taste” by benkuhn (15:17)
[Linkpost] “To Understand History, Keep Former... (5:42)
“AI-enabled coups: a small group could use AI to... (15:22)
“Accountability Sinks” by Martin Sustrik (28:50)
“Training AGI in Secret would be Unsafe and... (10:46)
“Why Should I Assume CCP AGI is Worse Than USG AGI?”... (1:15)
“Surprising LLM reasoning failures make me think we... (35:51)
“Frontier AI Models Still Fail at Basic Physical... (21:00)
“Negative Results for SAEs On Downstream Tasks and... (57:32)
[Linkpost] “Playing in the Creek” by Hastings (4:12)
“Thoughts on AI 2027” by Max Harms (40:27)
“Short Timelines don’t Devalue Long Horizon Research”... (2:10)
“Alignment Faking Revisited: Improved Classifiers and... (41:04)
577 results
