Oriol Vinyals is a Principal Scientist and Deep Learning Team Lead at DeepMind. His work focuses on deep learning and artificial intelligence, with particular emphasis on machine learning and reinforcement learning. Prior to DeepMind, Oriol was part of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley and is a recipient of the 2016 MIT TR35 innovator award.
His research has been featured multiple times in the New York Times, the Financial Times, WIRED, and the BBC, and his papers have been cited over 130,000 times. Contributions such as seq2seq, knowledge distillation, and TensorFlow are used in Google Translate, text-to-speech, and speech recognition, serving billions of queries every day. He was also the lead researcher on the AlphaStar project and a contributor to the research and papers of the AlphaFold project, both breakthrough achievements featured on the cover of Nature.
ABSTRACT: Games have been used for decades as an important way to test and evaluate the performance of artificial intelligence systems. As capabilities have increased, the research community has sought games of increasing complexity that capture the different elements of intelligence required to solve scientific and real-world problems. In recent years, StarCraft, considered one of the most challenging Real-Time Strategy (RTS) games and one of the longest-played esports of all time, has emerged by consensus as a “grand challenge” for AI research. In this talk, I will introduce our StarCraft II program AlphaStar, the first artificial intelligence to reach Grandmaster status without any game restrictions. The focus will be on the technical contributions that made this milestone in AI possible and earned a cover of the prestigious journal Nature. To end, I'll also reflect on what has happened since the release of AlphaStar.
Georgios N. Yannakakis is a Professor and Director of the Institute of Digital Games, University of Malta, and a co-founder of modl.ai. He does research at the crossroads of artificial intelligence, computational creativity, affective computing, and human-computer interaction, and has published over 300 journal and conference papers in these fields (h-index 60). His research has been supported by numerous European and national grants and has been featured in Science Magazine and New Scientist, among other venues. He has served on a number of journal editorial boards and is currently the Editor-in-Chief of IEEE Transactions on Games and an Associate Editor of IEEE Transactions on Evolutionary Computation. Prof. Yannakakis has been the General Chair of key conferences in the area of game artificial intelligence (IEEE CIG 2010) and games research (FDG 2013, FDG 2020). He is the co-author of the Artificial Intelligence and Games textbook and the co-organiser of the Artificial Intelligence and Games summer school series.
Why bother with emotions and their computation? Why is emotion such a critical element of every aspect of AI and Games research nowadays? How can emotion help us test games, represent the games we play, design creative AI algorithms, offer reliable agency to AI, and ultimately understand player experience? In this talk, I will attempt to address these questions through a series of milestone research studies (hubris) that led to a number of key lessons learned over the years (nemesis). I will conclude the talk by suggesting directions through which emotion can reframe the ways we build AI algorithms and develop games (catharsis).
Yuandong Tian is a Research Scientist and Senior Manager at Meta AI Research (FAIR), working on deep reinforcement learning, representation learning, and optimization. He is the recipient of a 2021 ICML Outstanding Paper Honorable Mention and a 2013 ICCV Marr Prize Honorable Mention, and is the lead scientist and engineer for the ELF OpenGo project. Prior to that, he worked on the Google self-driving car team in 2013-2014 and received his Ph.D. from the Robotics Institute, Carnegie Mellon University, in 2013. He has served as an area chair for NeurIPS, AAAI, and AISTATS.
Deep Reinforcement Learning (DRL), a search technique that dynamically improves its policy and value estimates based on observations of previous data, has shown human-level or even superhuman performance in games such as Go, chess, and StarCraft. When applying DRL to real-world applications, however, new challenges emerge, such as effective integration with existing systems, learning representations of large state and action spaces, and even redefining the temporal structure of sequential decision making. In this talk, I will cover our recent work on learning initial solutions for existing solvers, learning state representations, and even learning the structure of sequential decisions themselves.
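As a hedged illustration of the policy and value updates described above, the sketch below runs a tiny tabular actor-critic loop on an invented two-state environment; the environment, names, and hyper-parameters are assumptions made for this sketch only and are not drawn from the talk or from any specific system such as ELF OpenGo or AlphaStar.

# Minimal tabular actor-critic sketch on an invented two-state environment.
# Purely illustrative: all names and hyper-parameters are assumptions for this sketch.
import math
import random

N_STATES, N_ACTIONS = 2, 2
GAMMA, ALPHA_PI, ALPHA_V = 0.9, 0.1, 0.1

# Policy logits and state-value estimates, both tabular.
logits = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
values = [0.0] * N_STATES

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def step(state, action):
    """Toy dynamics: action 1 in state 0 pays off and moves to state 1."""
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

state = 0
for _ in range(5000):
    probs = softmax(logits[state])
    action = random.choices(range(N_ACTIONS), weights=probs)[0]
    next_state, reward = step(state, action)

    # Critic: one-step TD error and value-estimate update.
    td_error = reward + GAMMA * values[next_state] - values[state]
    values[state] += ALPHA_V * td_error

    # Actor: policy-gradient step on the softmax logits, weighted by the TD error.
    for a in range(N_ACTIONS):
        grad = (1.0 if a == action else 0.0) - probs[a]
        logits[state][a] += ALPHA_PI * td_error * grad

    state = next_state

print("Learned policy in state 0:", softmax(logits[0]))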
Sarit Kraus (Ph.D. Computer Science, Hebrew University, 1989) is a Professor of Computer Science at Bar-Ilan University. Her research is focused on intelligent agents and multi-agent systems, integrating machine-learning techniques with optimization and game theory methods. For her work, she has received many prestigious awards. She was awarded the IJCAI Computers and Thought Award, the ACM SIGART Agents Research Award, the ACM Athena Lecturer Award, and the EMET Prize, and was twice the winner of the IFAAMAS Influential Paper Award. She is an ACM, AAAI, and EurAI fellow and a recipient of the advanced ERC grant. She also received a special commendation from the city of Los Angeles. She is an elected member of the Israel Academy of Sciences and Humanities.
Intelligent computer agents are increasingly being deployed in group settings in which they interact with people in order to carry out tasks. To operate effectively in such settings, computer agents need capabilities for making decisions and negotiating with other participants, both people and computer-based agents. Constructing effective agent strategies with a purely analytical approach is almost impossible; instead, agent strategies need to be learned and evaluated empirically in specific domains. In the talk, we will discuss five types of domains: (i) real applications, such as agents for road safety; (ii) simulations of real applications, such as autonomous car simulations; (iii) complex, off-the-shelf games, such as Diplomacy or card games; (iv) games designed for specific research questions, such as the Colored Trails environment; and (v) abstract simple games, such as the Ultimatum game. For each of these domain types, we will present several examples and demonstrate their use in agent development and evaluation. We will discuss the advantages and the challenges of agent studies in each of the settings.
Lei Guo received his BS degree from Shandong University in 1982 and his Ph.D. degree from the Chinese Academy of Sciences (CAS) in 1987. He is currently a professor at the Academy of Mathematics and Systems Science, CAS, and serves as the Director of the National Center for Mathematics and Interdisciplinary Sciences, CAS. He is a Fellow of IEEE, a Member of CAS, a Foreign Member of the Royal Swedish Academy of Engineering Sciences, and the recipient of the Hendrik W. Bode Lecture Prize from the IEEE Control Systems Society in 2019. His research interests include adaptive estimation, adaptive filtering, adaptive control, adaptive game theory, control of stochastic and nonlinear uncertain systems, feedback capability, multi-agent systems, and game-based control systems.
In traditional control systems theory, the plant to be controlled or regulated usually does not have its own payoff function, and much progress in both theory and applications has been made in this field. However, in many practical systems, such as those in social, economic, and future “intelligent” engineering systems, the dynamical systems to be regulated may have their own objectives to pursue, which may not be the same as those of the global regulator or controller. Such hierarchical decision-making dynamical systems may be called game-based control systems (GBCS), where the system models may contain various uncertainties and the global regulation may take the form of feedback signals. This lecture will present some background and examples introducing GBCS, followed by an investigation of some basic characteristics and properties of GBCS. It will be shown how the macro-states of a GBCS may be regulated by intervening in the Nash equilibrium reached at the micro-level. In particular, we will present some basic results on global controllability and stabilizability of linear GBCS with multiple players at the micro-level. Both deterministic and stochastic systems will be investigated, and a main technical issue involves the analysis and control of forward-backward (stochastic) differential equations.
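To make the hierarchical structure concrete, here is a purely illustrative, hedged sketch of what a linear-quadratic GBCS can look like; the notation is an assumption introduced for exposition and is not the formulation used in the lecture. Consider $N$ micro-level players with linear dynamics driven by their own inputs $u_i$ and a macro-level regulator signal $u_0$:
\[
\dot{x}_i(t) = A\,x_i(t) + B\,u_i(t) + D\,u_0(t), \qquad i = 1,\dots,N,
\]
where each player $i$ chooses $u_i$ to minimize its own cost
\[
J_i(u_i; u_{-i}, u_0) = \int_0^T \Big( \|x_i(t) - \Gamma\,\bar{x}(t)\|_Q^2 + \|u_i(t)\|_R^2 \Big)\,\mathrm{d}t, \qquad \bar{x}(t) = \frac{1}{N}\sum_{j=1}^{N} x_j(t).
\]
For a fixed regulator signal $u_0$, the players settle at a Nash equilibrium; the macro-level regulator then chooses $u_0$ to steer the aggregate state $\bar{x}(t)$, and characterizing the equilibrium response leads to coupled forward-backward (stochastic) differential equations of the kind mentioned above.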
Jessica Hammer is the Thomas and Lydia Moran Associate Professor of Learning Science at Carnegie Mellon University, jointly appointed in the Human-Computer Interaction Institute and the Entertainment Technology Center. Her work focuses on transformational games, which are games that change how players think, feel, or behave. She is also an award-winning game designer.
Mitigating climate change is an urgent challenge. It is also an extremely difficult one, not just from a climate systems perspective but from the perspective of mobilizing effective action. Barriers include understanding complex systems, supporting collective action, and resisting climate despair. While these are hard problems, they are also ones where games can help. In this talk, we will explore how game designers can contribute our knowledge and expertise to this grand challenge.
Xiaotie Deng received his BSc from Tsinghua University, his MSc from the Chinese Academy of Sciences, and his PhD from Stanford University. He is currently a chair professor at Peking University. He previously taught at Shanghai Jiaotong University, the University of Liverpool, City University of Hong Kong, and York University. Before that, he was an NSERC international fellow at Simon Fraser University. Deng's current research focuses on algorithmic game theory, with applications to Internet and blockchain economics and finance. He is an ACM Fellow for his contributions to the interface of algorithms and game theory, an IEEE Fellow for his contributions to computing in partial information and interactive environments, and a CSIAM Fellow for his contributions to game theory and blockchain. He is a foreign member of Academia Europaea and a winner of the 2022 ACM SIGecom Test of Time Award for settling the complexity of computing a Nash equilibrium.
We discuss a line of research aimed at understanding the computational wisdom of game theory in terms of rationality, complexity, and dynamics.
https://cfcs.pku.edu.cn/english/people/faculty/xiaotiedeng/index.htm
Dr. Jichen Zhu is an Associate Professor of Digital Design at the IT University of Copenhagen (Denmark), where she directs the Procedural eXpression Lab (PXL) and leads the User eXperience (UX) Design Specialization. Her research interests lie at the intersection of human-computer interaction, interaction/game design, and artificial intelligence (AI). Her focus is on designing and developing novel human-AI interaction, especially in the form of personalized games for learning and health.
Recent growth in artificial intelligence and machine learning has propelled human-AI interaction, especially with end users, to the forefront of HCI research. Among the fast-growing body of literature on human-AI interaction design, an overlooked area is the context of play and playful interaction. Since computer games naturally focus on the end-user experience, the fields of game AI and game design have accumulated decades of valuable design knowledge. In this talk, we will synthesize current trends in player-AI interaction and discuss how they can advance broader open problems in Human-Centered AI, such as interpretability/explainability, trust and ethics, and human-machine collaboration. Building on examples from recent AI-based games and digital art, we will propose open problems in the design and technical implementation of player-AI interaction.
https://pure.itu.dk/portal/en/persons/jichen-zhu(e383647f-00d3-41b1-ad95-ff69bdfffb42).html