PODCAST

LessWrong (Curated & Popular)

Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “LessWrong (30+ karma)” feed.

All Episodes

23:51 · Transformers Represent Belief State Geometry in their...
2:18 · Paul Christiano named as US AI Safety Institute Head...
21:49 · [HUMAN VOICE] "Toward a Broader Conception of Adverse...
3:02 · [HUMAN VOICE] "How could I have thought that faster?"...
75:13 · [HUMAN VOICE] "On green" by Joe Carlsmith
13:07 · [HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian...
20:46 · LLMs for Alignment Research: a safety priority?
15:04 · [HUMAN VOICE] "Scale Was All We Needed, At First" by...
12:17 · [HUMAN VOICE] "Using axis lines for good or evil" by...
50:08 · [HUMAN VOICE] "Social status part 1/2: negotiations...
27:26 · [HUMAN VOICE] "Acting Wholesomely" by OwenCB
22:39 · The Story of “I Have Been A Good Bing”
14:44 · The Best Tacit Knowledge Videos on Every Subject
13:59 · [HUMAN VOICE] "My Clients, The Liars" by ymeskhout
46:59 · [HUMAN VOICE] "Deep atheism and AI risk" by Joe...
24:14 · [HUMAN VOICE] "Speaking to Congressional staffers...
9:10 · [HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon
20:03 · Many arguments for AI x-risk are wrong
39:53 · Tips for Empirical Alignment Research
11:55 · Timaeus’s First Four Months
601 results
