CAIML Symposium 2023: The Secret Ingredients for Improving Artificial Intelligence
ChatGPT has gripped the public’s attention. With debates flaring in the media and usage growing explosively, it is crucial to understand how AI systems work.
May 12th 2023
- 13:00 – 18:00 CEST
- TU Wien, Campus Getreidemarkt, Conference Hall TUtheSky
Getreidemarkt 9, 1060 Vienna
Building BA (courtyard wing), 11th floor, Room BA11B07
The current developments around systems such as ChatGPT have fueled debates on artificial intelligence in both the media and the informed public. It is therefore all the more important to understand how such systems basically work, how they are trained, and where their functioning still has gaps.
The symposium addresses these issues and aims not only to improve our understanding of AI but also to show paths to advance the systems themselves. John Shawe-Taylor will give an overview of statistical methods used in AI systems. Antonio Casilli will examine the historical process that ties human computing to contemporary artificial intelligence. Wolfgang Nejdl will discuss how AI systems still lack an understanding of causality and why this is a problem for trustworthy AI.
- 13:00 / Welcome / Gerti Kappel, Dean of TU Wien Informatics, and Michael Stampfer, WWTF Managing Director
- 13:15 / John Shawe-Taylor: Statistical Learning Theory for Modern Machine Learning, slides
- 14:15 / Antonio A. Casilli: A Counter-History of Artificial Intelligence
- 15:15 / Coffee Break
- 16:00 / Wolfgang Nejdl: On the Role of Causality for Trustworthy Artificial Intelligence, slides
- 17:00 / Panel Discussion / Moderation: Julia Neidhardt / Panelists: Antonio A. Casilli, Gerti Kappel, Wolfgang Nejdl, Helga Nowotny, John Shawe-Taylor
Statistical Learning Theory for Modern Machine Learning
By John Shawe-Taylor
The talk will introduce the approach of Statistical Learning Theory to the analysis of machine learning. Its fusion with Bayesian learning in the PAC-Bayes approach will be described, with applications to SVMs and deep learning.
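For readers unfamiliar with PAC-Bayes, one common form of the bound (a McAllester-style statement, given here as background context rather than as material from the talk itself) bounds the expected true risk of a randomized predictor by its empirical risk plus a complexity term:

```latex
% One common form of the PAC-Bayes generalization bound.
% P: prior over hypotheses, fixed before seeing the data;
% Q: any posterior over hypotheses; m: sample size;
% \delta: confidence parameter.
% With probability at least 1 - \delta over an i.i.d. sample,
% simultaneously for all posteriors Q:
\mathbb{E}_{h \sim Q}\!\left[ L(h) \right]
  \;\le\;
\mathbb{E}_{h \sim Q}\!\left[ \hat{L}(h) \right]
  \;+\;
\sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) \;+\; \ln\!\frac{2\sqrt{m}}{\delta} }{ 2m } }
```

The KL divergence between posterior and prior plays the role of a complexity measure, which is what allows the framework to cover both SVM-style margin bounds and stochastic neural networks.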
A Counter-History of Artificial Intelligence: Computing factories and machinery (18th-21st century)
By Antonio Casilli
Creating the large databases that enable machine learning is the key to advancing automation in our society. The calculations at the foundation of today’s intelligent solutions are annotated, validated, and sometimes supervised in real time by a multitude of workers, often recruited through digital platforms. Human-based computation, fragmented and delegated to underpaid workers, might seem like a recent development.
However, the historical existence of “computers”, i.e. people who performed calculations as a job, has been documented since the 18th century in France, England, and the United States. This labor force, often composed of unemployed people and minorities excluded from the labor market, was responsible for repetitive “mental work”, such as creating logarithmic tables, astronomical maps, and nautical ephemerides. As the 19th and 20th centuries progressed, the practice spread to areas as diverse as meteorology, electrical transmission analysis, household surveys, and telecommunications.
It is at this point that the link between present-day automation and the occupation of “human computer” emerges. Data production and manipulation have survived the rise of electronics and digital technology over the last century and are now the secret ingredients of artificial intelligence. Rediscovering these forgotten origins of AI is essential to understanding how it shapes society and science today.
On the Role of Causality for Trustworthy Artificial Intelligence
By Wolfgang Nejdl
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that governs human understanding of the real world. Consequently, these models do not generalize to unseen data, often produce unfair results, and are difficult to interpret. This has led to efforts to improve the trustworthiness aspects of AI models. Recently, causal modeling and inference methods have emerged as powerful tools. In this talk, I will give an overview of causal methods that have been developed to improve the trustworthiness of AI models, as well as several examples, covering the aspects of interpretability, fairness, robustness, privacy as well as safety and accountability.
John Shawe-Taylor is professor of Computational Statistics and Machine Learning at University College London and Director of the International Research Centre on Artificial Intelligence (IRCAI), established under the auspices of UNESCO at the Jozef Stefan Institute in Slovenia. He has helped to drive a fundamental rebirth in the field of machine learning, with applications in novel domains including computer vision, document classification, and applications in biology and medicine focused on brain scan, immunity, and proteome analysis. He has published over 300 papers and two books that have attracted over 92,000 citations.
He has coordinated a series of influential European Networks of Excellence including the PASCAL Networks. The scientific coordination of these projects has influenced a generation of researchers and helped to promote the widespread uptake of machine learning in both science and industry that we are currently witnessing.
He was appointed UNESCO Chair of Artificial Intelligence in November 2018 and is the leading trustee of the UK Charity, Knowledge 4 All Foundation, championing the cause of open education including assisting with the establishment of the UNESCO Recommendation on Open Educational Resources adopted in 2019 and the Recommendation on the Ethics of AI adopted in 2021.
Antonio A. Casilli
Antonio A. Casilli is a professor of sociology at Telecom Paris, the telecommunication school of the Polytechnic Institute of Paris, and a researcher at the Interdisciplinary Institute on Innovation (i3), a unit of the French CNRS. His research focuses on digital communication, fundamental rights, and the future of work. He is the co-founder of the DiPLab (Digital Platform Labor) research program and of INDL (International Network on Digital Labor). In addition to several scientific publications in French, English, and Italian, he is the author of “En attendant les robots” (Seuil, 2019; forthcoming from University of Chicago Press, 2023). In 2020, he was the editorial advisor of the France Télévisions documentary series “Invisibles - Les travailleurs du clic”, based on his research.
Wolfgang Nejdl (born 1960) has been full professor of computer science at the University of Hannover since 1995. He received his M.Sc. (1984) and Ph.D. degree (1988) at the Technical University of Vienna, was assistant professor in Vienna from 1988 to 1992, and associate professor at the RWTH Aachen from 1992 to 1995. He worked as visiting researcher / professor at Xerox PARC, Stanford University, University of Illinois at Urbana-Champaign, EPFL Lausanne, PUC Rio, Trento and Politecnico di Milano.
Prof. Nejdl heads the L3S Research Center as well as the Data Science Institute / Knowledge Based Systems, and does research in the areas of Information Retrieval, Web Science, Artificial Intelligence, the Social and Semantic Web, Digital Libraries, and Technology Enhanced Learning. He was PI of the ERC Advanced Grant ALEXANDRIA from 2014 to 2019, working on foundations for temporal retrieval, exploration, and analytics in Web archives. Current projects include NoBIAS, SoBigData++, the International Leibniz Future Lab on Artificial Intelligence (with a special focus on personalized medicine), and DAISEC, a European Digital Innovation Hub that fosters innovation through AI and cybersecurity, especially in the areas of production, mobility, and skilled crafts. Wolfgang Nejdl has published more than 420 scientific articles, as listed at DBLP, with an h-index (based on Google Scholar) of 78.