The following tutorials will be held at IEEE CoG 2022:
Domestic Registration Fees (registration deadline: June 30, 2022)
Regular registration | IEEE Member (including Student Members) | 1000 CNY
Regular registration | Non-IEEE Member | 1650 CNY
Attendance registration | IEEE Student, IEEE Life, Industry Session Speaker | 350 CNY
Attendance registration | Student / IEEE Member | 1000 CNY
Attendance registration | Non-IEEE Member | 1650 CNY
1. Each paper must be covered by one regular registration: 1000 CNY (IEEE member registration) or 1650 CNY (regular registration).
2. Attendees without a paper should choose attendance registration; the registration deadline is August 5, 2022. After registering, please send the registration form, student certification documents (if applicable), and the transfer record to ieee-cog@ia.ac.cn with the email subject "CoG-注册信息-参会" (CoG-Registration-Attendance).
3. Each registration covers at most one paper.
4. For student registration, please provide proof of student status.
5. The registration deadline for regular registrations that include a paper is June 30, 2022. After registering, please send the registration form, student certification documents, and the transfer record to ieee-cog@ia.ac.cn with the email subject "CoG-注册信息-论文id" (CoG-Registration-Paper ID), e.g., "CoG-注册信息-55".
6. Registration deadline: June 30, 2022.
7. Registration fees should be paid by corporate bank transfer to the following account:
Bank: China Everbright Bank, Beijing Zhongguancun Branch. Account name: Institute of Automation, Chinese Academy of Sciences. Account number: 75080188000132442
An electronic invoice for the conference registration fee will be provided. (If you need an invoice, please email your organization name and taxpayer identification number to ieee-cog@ia.ac.cn with the subject "CoG-发票-论文id" (CoG-Invoice-Paper ID).)
After extensive communication, IEEE has agreed to set up a PayPal account, which will be announced once it is ready. The deadline for regular paper registration has been extended to June 30.
· Julian Togelius (julian@togelius.com) New York University, USA
· Jialin Liu (liujl@sustech.edu.cn) Southern University of Science and Technology, China
This tutorial introduces several techniques and application areas for evolutionary computation in games, such as board games and video games. We will give a broad overview of the use cases and popular methods for evolutionary computation in games, and in particular cover the use of evolutionary computation for learning policies (evolutionary reinforcement learning using neuroevolution), planning (rolling horizon and online planning), and designing (particularly procedural content generation via evolutionary and machine learning methods). The basic principles will be explained and illustrated by examples from our own research as well as others' research.
The tutorial will be given in two parts of 1.5 hours each (3 hours in total).
The first part will cover computational intelligence for game playing, while the second part will cover computational intelligence for game design, as detailed in the outline. Questions are welcome during and after the tutorial.
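As a concrete illustration of the planning material in Part 1, below is a minimal, illustrative sketch of rolling horizon evolution for a single-agent game. It is not code from the tutorial: the forward_model, evaluate, and actions interfaces are hypothetical stand-ins for a game's simulator, a heuristic state evaluation, and the set of legal actions.

```python
import random

def rolling_horizon_evolution(state, forward_model, evaluate, actions,
                              horizon=10, pop_size=20, generations=30,
                              mutation_rate=0.2):
    """Evolve a fixed-length action sequence and return its first action.

    Hypothetical interfaces assumed here:
      forward_model(state, action) -> next_state  (a simulator step)
      evaluate(state) -> float                    (heuristic value of a state)
      actions                                     (list of legal actions)
    """
    def rollout_value(plan):
        s = state
        for a in plan:
            s = forward_model(s, a)
        return evaluate(s)

    # Initialise a population of random action sequences (plans).
    population = [[random.choice(actions) for _ in range(horizon)]
                  for _ in range(pop_size)]

    for _ in range(generations):
        scored = sorted(population, key=rollout_value, reverse=True)
        elite = scored[: pop_size // 2]
        # Refill the population with mutated copies of the elite plans.
        population = list(elite)
        while len(population) < pop_size:
            child = list(random.choice(elite))
            for i in range(horizon):
                if random.random() < mutation_rate:
                    child[i] = random.choice(actions)
            population.append(child)

    best = max(population, key=rollout_value)
    return best[0]  # execute only the first action, then re-plan next tick
```

In practice the evolved plan is usually carried over (shifted by one step) between game ticks rather than re-initialised from scratch, which makes re-planning cheaper.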
• Introduction
        Who are we, what are we talking about?
        Why evolutionary computation (EC) for games?
• Part 1: Playing games
        Reasons for building game-playing AI
        Characteristics of games (and how they affect game-playing algorithms)
        EC as a planning agent
                • Single-agent games (rolling horizon evolution)
                • Multi-agent games (online evolution)
        Evolving planning agents
                • Reinforcement learning through evolution
                • Neuroevolution
• Part 2: Designing and developing games
        The need for AI in game design and development
        Game balancing
        Procedural content generation (PCG)
                • Search-based procedural content generation
                • PCG via quality diversity
                • Procedural content generation via machine learning (PCGML)
                • Combining EC and PCGML
• Outlook
Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on artificial intelligence for games and games for artificial intelligence. His current main research directions involve procedural content generation in games using evolution and machine learning, general video game playing, player modeling, generating games based on open data, and fair and relevant benchmarking of AI through game-based competitions. He was Editor-in-Chief of IEEE Transactions on Games until 2021, and has been chair or program chair of several of the main conferences on AI and Games. Julian holds a BA from Lund University, an MSc from the University of Sussex, and a Ph.D. from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.
Jialin Liu is currently an Assistant Professor in the Department of Computer Science and Engineering at Southern University of Science and Technology (SUSTech), China. Before joining SUSTech, she was a Postdoctoral Research Associate at Queen Mary University of London (QMUL, UK) and one of the founding members of QMUL's Game AI research group. Her research interests include, but are not limited to, AI in games (AI for playing games and AI for designing games, particularly procedural content generation), optimisation under uncertainty (derivative-free optimisation, dynamic resampling strategies, algorithm portfolios), and evolutionary computation and its applications, such as vehicle routing problems. Jialin is an Associate Editor of IEEE Transactions on Games. She is a Program Co-Chair of the 2022 IEEE Conference on Games (IEEE CoG 2022) and was a Program Co-Chair of the 2018 IEEE Conference on Computational Intelligence and Games (IEEE CIG 2018). She has also served as Competition Chair of several major conferences on evolutionary computation and AI in games.
· Hamna Aslam (h.aslam@innopolis.ru) Innopolis University, Russia
· Joseph Alexander Brown (j.brown@innopolis.ru) Innopolis University, Russia
Emotions and perceptions are associated with every play experience. Players' potential emotional outcomes should therefore be considered at an early stage of game design.
Games are usually designed with a theme or a mechanics setup as a starting point. However, only after the theme and mechanics are laid out can the player experience be investigated. This tutorial presents a process that prioritises player experience and helps ensure that the gameplay can, to a great extent, achieve the designer's goals for player experience and emotions alongside the desired mechanics and setting. The design process poses several questions to the designer, which help game designers reflect on the mechanics and identify players' potential emotional outcomes.
This tutorial aims to demonstrate to the audience how to design a game while taking into account the player's emotional outcomes and the designer's intended theme and mechanics. Playtesting is a crucial part of the development process; however, a designer can gain an understanding of a player's emotional journey at a point when they have only decided on some broader details and ideas about the game. The player's mental process cannot be assumed without a playtest and investigation, as players are diverse, and so are their gameplay experiences. This tutorial aims to help designers formulate a game template with possible emotional outcomes that they deem close to players' mental processes.
The tutorial requires the audience to work individually or in groups. Other than pen and paper, nothing else is required.
Hamna Aslam is a Ph.D. student at the University of Toulouse, France, and an instructor at Innopolis University. Her research interests include game design, player perceptions, artificial intelligence, and requirements engineering. She has published numerous peer-reviewed papers on human factors and game design.
Joseph Alexander Brown received a B.Sc. (Hons.) with first-class standing in Computer Science with a concentration in software engineering, and an M.Sc. in Computer Science from Brock University, St. Catharines, ON, Canada, in 2007 and 2009, respectively. He received a Ph.D. in Computer Science from the University of Guelph in 2014. He previously worked for Magna International Inc. as a Manufacturing Systems Analyst and as a visiting researcher at ITU Copenhagen in their Games Group. He is currently an Associate Professor and Head of the Artificial Intelligence in Games Development Lab at Innopolis University in Innopolis, Republic of Tatarstan, Russia, and an Adjunct Professor of Computer Science at Brock University, St. Catharines, ON, Canada. He is a Senior Member of the IEEE, a chair of the yearly Procedural Content Generation Jam (ProcJam), was the proceedings chair for the 2013 IEEE Conference on Computational Intelligence in Games (CIG), and is Vice-Chair of the IEEE Committee on Games.
· Yaodong Yang (yaodong.yang@pku.edu.cn) Peking University, China
Recent advances in multi-agent reinforcement learning have seen the introduction of a new learning paradigm that revolves around population-based training. The idea is to consider the structure of games not at the micro-level of individual actions, but at the meta-level of which agent to train against for any given game or situation. A typical framework for population-based training is the Policy Space Response Oracles (PSRO) method, in which, at each iteration, a new reinforcement learning agent is discovered as the best response to a Nash mixture of agents from the opponent populations. PSRO methods can provably converge to Nash, correlated, and coarse correlated equilibria in N-player games; in particular, they have shown remarkable performance in solving large-scale zero-sum games. In this tutorial, I will introduce the basic idea of PSRO methods, the necessity of using PSRO methods to solve real-world games such as Chess, recent results on solving N-player games and mean-field games, how to promote behavioural diversity during training, and the relationship of PSRO methods to conventional no-regret methods. Finally, I will introduce a new meta-PSRO framework named Neural Auto-Curricula, in which an AI learns to learn a PSRO-like solution algorithm purely from data, and a new PSRO framework called Online Double Oracle that inherits the benefits of both population-based methods and no-regret methods.
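To make the PSRO loop concrete, the sketch below shows a minimal two-player version. The three callbacks, train_best_response, simulate_payoff, and solve_meta_nash, are hypothetical stand-ins for the reinforcement learning oracle, game simulation, and meta-game Nash solver; this is an illustration of the general scheme, not the speaker's implementation.

```python
import numpy as np

def psro(initial_policy, train_best_response, simulate_payoff,
         solve_meta_nash, iterations=10):
    """Minimal two-player Policy Space Response Oracles loop (a sketch).

    Hypothetical interfaces assumed here:
      train_best_response(opponent_policies, opponent_mixture) -> new policy
      simulate_payoff(policy_a, policy_b) -> expected payoff for player A
      solve_meta_nash(payoff_matrix) -> (row_mixture, col_mixture)
    """
    pop_a, pop_b = [initial_policy], [initial_policy]
    meta = np.array([[simulate_payoff(pop_a[0], pop_b[0])]])

    for _ in range(iterations):
        mix_a, mix_b = solve_meta_nash(meta)

        # Each player trains a best response (e.g. with RL) against the
        # opponent's current Nash mixture over its population.
        new_a = train_best_response(pop_b, mix_b)
        new_b = train_best_response(pop_a, mix_a)
        pop_a.append(new_a)
        pop_b.append(new_b)

        # Expand the meta-game payoff matrix with the new policies.
        meta = np.array([[simulate_payoff(a, b) for b in pop_b]
                         for a in pop_a])

    return pop_a, pop_b, solve_meta_nash(meta)
```

The population grows by only one policy per player per iteration, so the meta-game stays small even when the underlying game is very large, which is what makes this style of training practical for large-scale zero-sum games.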
Yaodong Yang is a machine learning researcher with ten years of working experience in both academia and industry. He is currently an assistant professor at Peking University. His research focuses on reinforcement learning and multi-agent systems. He has maintained a track record of more than forty publications at top conferences and journals, along with the best system paper award at CoRL 2020 (first author) and the best blue-sky paper award at AAMAS 2021 (first author). Before joining Peking University, he was an assistant professor at King's College London. Before KCL, he was a principal research scientist at Huawei U.K., where he headed the multi-agent system team in London. Before Huawei, he was a senior research manager at AIG, working on AI applications in finance. He holds a Ph.D. degree from University College London, an M.Sc. degree from Imperial College London, and a Bachelor's degree from the University of Science and Technology of China.
· Junliang Xing (jlxing@nlpr.ia.ac.cn) The Institute of Automation, Chinese Academy of Sciences, China
· Kai Li (kai.li@ia.ac.cn) The Institute of Automation, Chinese Academy of Sciences, China
In recent years, many breakthroughs have been made in artificial intelligence, from playing Atari games to learning complex robotic manipulation tasks. However, an agent still faces challenges from sparse reward and imperfect information in many real-world scenarios. Sparse reward refers to a reward function that is zero in most of its domain and only gives positive values to very few states. It is difficult for an agent to learn effective policies in sparse reward games since it will not get any feedback about whether its instantaneous actions are good or bad. Moreover, the presence of imperfect information in games, where an agent does not fully know the state of the world, makes learning more difficult since learning in such imperfect information games requires reasoning under uncertainty about other agents’ private information.
This tutorial will focus on commonly used approaches for learning in sparse reward and imperfect information games. The first part of the tutorial will discuss learning in sparse reward games, in which we will cover prediction error-based, novelty-based, and information gain-based methods. We will also introduce the latest results of our research group, such as self-navigation-based, potential-based, and influence-based learning algorithms. The second part of the tutorial will focus on learning in imperfect information games. We will first introduce the formal definition of imperfect information games. Then we will discuss regret-based (CFR) and population-based methods (PSRO) to learn fixed optimal worst-case policies. Finally, we will introduce opponent modeling-based methods to learn adaptive policies. We will also discuss the unsolved problems and the directions for future research. We hope this tutorial inspires and motivates attendees to continue learning and contributing to the current development in this field.
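As one concrete example of the prediction error-based methods mentioned above, here is a minimal, illustrative sketch of a curiosity-style intrinsic reward built from a learned forward model. The network sizes, the 0.1 bonus scale, and the assumption that actions arrive as one-hot or continuous vectors are illustrative choices, not details of the speakers' algorithms.

```python
import torch
import torch.nn as nn

class ForwardDynamicsCuriosity(nn.Module):
    """Prediction error-based intrinsic reward (a minimal sketch).

    A learned forward model predicts the next observation from the current
    observation and action; its prediction error is added to the sparse
    extrinsic reward as an exploration bonus.
    """

    def __init__(self, obs_dim, act_dim, hidden=128):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def intrinsic_reward(self, obs, act, next_obs):
        # obs: (batch, obs_dim); act: (batch, act_dim), e.g. one-hot actions.
        pred = self.model(torch.cat([obs, act], dim=-1))
        error = ((pred - next_obs) ** 2).mean(dim=-1)

        # Train the forward model on the same transition.
        self.opt.zero_grad()
        error.mean().backward()
        self.opt.step()

        # Exploration bonus added to the (sparse) environment reward.
        return 0.1 * error.detach()
```

States the agent cannot yet predict well yield a large bonus, so the agent is pushed toward unfamiliar parts of the game even when the extrinsic reward is zero almost everywhere.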
Part 1: Introduction of the Tutorial (~5 minutes)
Part 2: Agent Learning in Sparse Reward Games (~40 minutes)
Part 3: Agent Learning in Imperfect Information Games (~40 minutes)
Part 4: Conclusion and Closing Remarks (~5 minutes)
Junliang Xing is currently a Professor with the Institute of Automation, Chinese Academy of Sciences. He received his B.E. degrees in Computer Science and Technology and in Applied Mathematics from Xi'an Jiaotong University in 2007, and his Ph.D. degree in Computer Science and Technology from Tsinghua University in 2012. Dr. Xing has published over 120 peer-reviewed papers at conferences such as IJCAI, AAAI, ICCV, and CVPR, and in journals such as TPAMI, IJCV, and AI. He has translated two books on computer vision and written one book on deep learning. His main research areas lie in computer vision and computer gaming, with a current focus on agent learning in complex decision-making problems.
Kai Li is currently an associate professor at the Institute of Automation, Chinese Academy of Sciences. He received his Ph.D. degree in pattern recognition and intelligent system from the Institute of Automation, Chinese Academy of Sciences, in 2018. His main research interests are imperfect information game solving, opponent modeling, and deep multi-agent reinforcement learning. He has published dozens of papers at top artificial intelligence conferences and journals.
· Phil Lopes (phil.lopes@ulusofona.pt) Universidade Lusófona de Humanidades e Tecnologias
· David Melhart (david.melhart@um.edu.mt) University of Malta
Affective computing is a field of computer science that focuses on the sensing and prediction of affect and emotions. Games provide a rich and robust testbed for affective computing applications due to their unique mix of a constrained environment and emergent interaction. Conversely, affective computing also has value for game research in both academic and industrial applications: methodologies lifted from affective computing can help game researchers build more robust predictive models not just of behaviour but also of player experience.
This tutorial aims to introduce affective computing concepts and methods for player modelling. The first part of the tutorial gives a broad overview of the field of affective computing in the games domain, including fundamental concepts, best practices, tools, and industry applications. The second part of the tutorial focuses on the frontiers of affective computing research in games and touches on the topics of VR and research into embodied cognition.
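As a minimal illustration of the kind of player modelling discussed in the first part, the sketch below fits a simple supervised model that predicts an annotated affect label from gameplay and physiological features. The feature layout, the random placeholder data, and the choice of a random forest are illustrative assumptions, not the tutorial's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative feature layout: per-session gameplay statistics concatenated
# with summary statistics of a physiological signal (e.g. mean heart rate,
# number of skin-conductance peaks). The arrays below are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 play sessions, 8 features each
y = rng.integers(0, 2, size=200)     # self-reported "high/low arousal" labels

# A simple player-experience model: predict the annotated affect label
# from gameplay and physiological features, evaluated with cross-validation.
model = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())
```

With real corpora, the label typically comes from player self-reports or third-party annotations collected during or after play, and cross-validation is usually done across players rather than across random sessions to test generalisation to unseen players.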
1. Introduction
2. Concepts and Methods in Affective Computing
        a. Affective Loop in Games
        b. Representations of Emotions
        c. Physiology in Player Modelling
        d. Modelling Emotions
3. Frontiers in Affective Computing and Player Modelling
        a. Industrial Applications
        b. Player Modelling for Virtual Reality Applications
        c. Embodied Cognition Research
4. Open Avenues and Research Questions
Phil Lopes is an Assistant Professor at the Universidade Lusófona, Lisbon, Portugal. He holds a PhD in Artificial Intelligence applied to human-computer and digital game interaction, obtained in 2017 from the Institute of Digital Games at the University of Malta. His main research interest lies in the development of automated digital game experiences personalised to each individual user. Before this, he was a Post-Doc at the University of Geneva and the École Polytechnique Fédérale de Lausanne (EPFL), where he worked at the crossroads of neuroscience, affective computing, artificial intelligence, virtual reality, and digital games. He is currently a guest editor for Frontiers in Virtual Reality and has published in several IEEE, ACM, and Springer peer-reviewed conferences and journals.
David Melhart is a post-doctoral researcher at the University of Malta, Msida, Malta, focusing on player modelling, annotation tools, research design, affective corpora, and general affect modelling in games. He earned his PhD at the Institute of Digital Games, University of Malta, in 2021. During his studies, he was involved in six industrial and academic collaborations with partners including the acclaimed game studio Ubisoft Massive Entertainment. He also played a key role in organizing the GALA Conference in 2019 and the Conference on the Foundations of Digital Games in 2020. He has been a recurring organizer of the International Summer School series on AI and Games since its inception (2018-2022). He has published conference and journal papers in venues ranging from the IEEE Conference on Games, the Conference on Affective Computing and Intelligent Interaction, and the International Joint Conference on AI to the International Journal of Child-Computer Interaction. His work has twice been nominated for awards at the IEEE Conference on Games (runner-up for Best Short Video in 2019 and for Best Paper in 2021). He is currently a Guest Editor for Frontiers in Virtual Reality and an Editorial Assistant for IEEE Transactions on Games.