07 November 2024 | 11:00–14:00 | Seminari de Filosofia (UB, Barcelona)
Large language models (LLMs), such as OpenAI's GPT models, have changed our world: they write texts, summarise information, and analyse data. Claims about their powers and limitations abound. Are they mere stochastic parrots, or black boxes brimming with emergent abilities?
In this masterclass, you will be introduced to the transformer architecture that powers all major LLMs. You will interact with transformer models, explore their internals, and learn about ways of analysing them. Whether you want to gain a better grounding for your research or simply want to see what's behind the headlines, this masterclass is for you.
The masterclass is aimed at researchers with little or no prior exposure to the technical details of LLMs. No programming experience is required, and no mathematics beyond multiplication is assumed.