Place: 4274 Chamberlin (Refreshments will be served)
Speaker: Bill Hibbard, UW Space Science and Engineering Center
Abstract: In the early 1960s Ray Solomonoff combined Turing's theory of computation with Shannon's information theory to create algorithmic information theory. The Kolmogorov complexity of a binary string is defined as the length of the shortest program that computes the string. Solomonoff used a related measure as the basis for an (uncomputable but approximable) universal induction algorithm for predicting arbitrary binary strings. In the early 2000s Marcus Hutter combined this induction algorithm with sequential decision theory to define his universal AI, which maximizes expected rewards from arbitrary environments, and to define a formal measure of intelligence. This work led to conferences and journals dedicated to the mathematical study of properties of artificial general intelligence (AGI) systems, including ways such systems may fail to conform to their designers' intentions and ways to design systems that do conform. These problems are not resolved and research is very active. While some mainstream AI developers criticize AGI theory, the creators of some of the most successful AI systems (e.g., Google DeepMind) are also deeply involved in this AGI research. Practical versions of Hutter's universal AI are called Bayesian program learning, and in some ways they outperform the deep learning algorithms that are revolutionizing AI.
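For readers who want the definitions in symbols, the quantities mentioned in the abstract are usually stated as follows (standard notation; U denotes a fixed universal prefix Turing machine):

```latex
% Kolmogorov complexity of a binary string x:
% the length of the shortest program p that makes U output x.
K(x) = \min \{\, |p| : U(p) = x \,\}

% Solomonoff's universal prior, the "related measure" used for induction:
% each program p whose output begins with x contributes weight 2^{-|p|}.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

% Legg and Hutter's formal intelligence measure for an agent \pi:
% expected value V_\mu^\pi achieved in environment \mu, weighted by
% the environment's simplicity 2^{-K(\mu)}.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

K and M are uncomputable (they quantify over all programs), which is why the abstract describes universal induction as "uncomputable but approximable."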