
LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.

All Episodes

1:33 — “Pay Risk Evaluators in Cash, Not Equity” by Adam...
23:46 — “Survey: How Do Elite Chinese Students Feel About the...
4:04 — “things that confuse me about the current AI market....
17:32 — “Nursing doubts” by dynomight
31:17 — “Principles for the AGI Race” by William_S
6:13 — “The Information: OpenAI shows ‘Strawberry’ to feds,...
99:04 — “What is it to solve the alignment problem?” by Joe...
42:06 — “Limitations on Formal Verification for AI Safety”...
7:00 — “Would catching your AIs trying to escape convince AI...
8:03 — “Liability regimes for AI” by Ege Erdil
18:39 — “AGI Safety and Alignment at Google DeepMind: A...
20:01 — “Fields that I reference when thinking about AI...
38:23 — “WTH is Cerebrolysin, actually?” by gsfitzgerald,...
22:42 — “You can remove GPT2’s LayerNorm by fine-tuning for...
3:40 — “Leaving MIRI, Seeking Funding” by abramdemski
4:00 — “How I Learned To Stop Trusting Prediction Markets...
16:01 — “This is already your second chance” by Malmesbury
19:40 — “0. CAST: Corrigibility as Singular Target” by Max Harms
23:21 — “Self-Other Overlap: A Neglected Approach to AI...
8:43 — “You don’t know how bad most things are nor precisely...
598 results
