Stanford CS521 - AI Safety Seminar
Stanford, Updated On 02 Feb, 19
Tatsu Hashimoto, Professor of Computer Science at Stanford University
April 20, 2022
Large, pre-trained language models have driven dramatic performance improvements across a range of challenging NLP benchmarks. However, these models also present serious risks, such as eroding user privacy, enabling disinformation, and relying on discriminatory 'shortcuts' for prediction. In this talk, we will provide a short overview of a range of potential harms from language models, along with two case studies on the privacy and brittleness of large language models.
For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai
Sam
Sep 12, 2018
Excellent course. It helped me understand topics that I couldn't while attending my college.
Dembe
March 29, 2019
Great course. Thank you very much.