PODCAST

AI: post transformers

The transformer architecture revolutionized neural networks and became the springboard for what we know today as modern artificial intelligence. This podcast reviews state-of-the-art research papers, from the transformer onward.

All Episodes

00:14:57 · Random Walk Methods for Graph Learning and Networks · 2025/11/10
00:14:26 · AlphaEvolve: Mathematical Discovery at Scale · 2025/11/10
00:12:12 · AdaFlow: Variance-Adaptive Flow-Based Imitation Learning · 2025/11/10
00:14:57 · zFLoRA: Zero-Latency Fused Low-Rank Adapters · 2025/11/04
00:13:32 · SuperBPE: Space Travel for Language Models · 2025/11/04
00:15:26 · Google: Supervised Reinforcement Learning for... · 2025/11/04
00:13:52 · MorphKV: Constant-Sized KV Caches for LLM Inference · 2025/11/04
00:15:40 · HALoS: Hierarchical Asynchronous LLM Training over... · 2025/11/04
00:14:59 · Anchored Diffusion Language Model: Superior... · 2025/11/04
00:18:25 · Gumbel-Softmax for Differentiable Categorical... · 2025/11/04
00:16:11 · PolicySmith: Automated Systems Heuristic Generation... · 2025/11/04
00:13:08 · RetNet: Retentive Networks: Transformer Successor for... · 2025/11/02
00:15:26 · Kimi Linear: Efficient Expressive Attention Architecture · 2025/11/02
00:12:02 · ALiBi: Attention with Linear Biases Enables Length... · 2025/11/01
00:12:29 · Quest: Query-Aware Sparsity for Efficient LLM Inference · 2025/10/31
00:17:17 · Flash-LLM: Efficient LLM Inference with Unstructured... · 2025/10/31
00:12:31 · ELASTIC: Linear Attention for Sequential Interest... · 2025/10/31
00:14:01 · Anthropic: Introspective Awareness in LLMs · 2025/10/31
00:14:03 · Small Versus Large Models for Requirements... · 2025/10/31
00:17:25 · Hyper-Scaling LLM Inference with KV Cache Compression · 2025/10/31
340 results