Large Language Models and Transformers | Simons Institute for the Theory of Computing
Date: 2023-08-14
Presentation
The Simons Institute for the Theory of Computing released the recordings of a 5-day workshop on the ongoing revolution in transformers and large language models. The goal of this workshop was to understand the revolution through a wide lens (including neuroscience, physics, cognitive science, and computation) in a setting that facilitates discussion, debate, and intellectual cross-pollination. The workshop touched on issues of fairness, trust, and alignment, and sought to illuminate how industry and academia, and theory and systems, can collaborate. What an incredible resource!
Watch all videos here