AI Alignment, Embedded Agency and Decision Theory
Following the ethically weighty message of his article "AI-utvecklingen och forskarens ansvar" in Qvintensen 1/2020, Olle Häggström was invited to speak on related matters at Statistikfrämjandet's annual meeting on March 18 this year. Due to the covid-19 crisis, the meeting was cancelled at a late stage, but now there is an opportunity to hear his talk.
The term AI had not yet been coined in the days of Alan Turing. Nevertheless, he foresaw the field, and in 1951 he famously predicted that machines would eventually become so capable as to surpass human general intelligence, in which case he suggested that "we should have to expect the machines to take control". The (small but growing) research area known as AI Alignment takes this ominous prediction as its starting point and aims to work out how to instill in the AI goals that lead to a good outcome (for humans) even if the machines do take control. Attempts to solve AI Alignment lead to many intriguing philosophical and mathematical questions involving, e.g., the notion of embedded agency and the fundamentals of decision theory.
Sign me up!