The industry day is an open day (free entry) with talks provided by, and focused on, the games industry and how it links to games research. The industry day takes place on the first day of IEEE CoG (20th August 2019) at the Great Hall of Queen Mary University of London. For information about the venue, visit https://www.ieee-cog.org/venue/
The industry day will include:
Invited keynote talks from top figures in the field.
The full program for the industry day is already available. You can find the link to the complete program below, as well as the information for all keynotes and accepted talks.
VR games are in an adolescent stage of development. Gimmicky visual effects are common and features for novelty’s sake abound. Because of the lack of standardization, developers add eye-catching VFX to direct the user’s attention to features they want to tutorialize. If any of this sounds familiar, it might be because web development went through a similar pattern. Before standardization, visual effects such as animated hit tickers and blinking ‘new’ GIFs were used to draw users’ attention. Eventually, standardization in layout and controls led to a cleaner experience. VR development has yet to benefit from widespread standardization. Movement controls, selection mechanics, and guiding attention are tackled individually by each developer, adding to the burden of tutorializing not only the game but also the platform. This talk will speak to what we can predict for the course of VR development based on similar patterns of standardization on other platforms.
Biography
Theresa Duringer is CEO of Temple Gates Games, a San Francisco-based game studio creating award-winning games for Gear, Rift, and Vive. Her team is known for Bazaar, Ascension VR, and Race for the Galaxy, as well as for integrating machine learning AI that challenges even the most experienced players. With a focus on delivering seamless UX with high performance, she uses the team's custom C++ engine to create flexible solutions for the challenges of delivering games across platforms. Prior to founding Temple Gates Games, she co-created Cannon Brawl and contributed to The Sims, SimCity, and Spore at Maxis.
Prof. David Silver
Google DeepMind
Deep Reinforcement Learning from AlphaGo to AlphaStar
Recently, self-learning systems have achieved remarkable success in several challenging problems for artificial intelligence, by combining reinforcement learning with deep neural networks. In this talk I describe the ideas and algorithms that led to AlphaGo: the first program to defeat a human champion in the game of Go; AlphaZero: which learned, from scratch, to also defeat the world computer champions in chess and shogi; and AlphaStar: the first program to defeat a human champion in the real-time strategy game of StarCraft.
Biography
David Silver leads the reinforcement learning research group at Google DeepMind. David graduated from Cambridge University in 1997 with the Addison-Wesley award. Subsequently, David co-founded the video games company Elixir Studios, where he was CTO and lead programmer, receiving several awards for technology and innovation. David returned to academia in 2004 to study for a PhD on reinforcement learning with Rich Sutton, where he co-introduced the algorithms used in the first master-level 9x9 Go programs. David was awarded a Royal Society University Research Fellowship in 2011, and subsequently became a professor at University College London. David consulted for DeepMind from its inception, joining full-time in 2013, where he leads the reinforcement learning team. David co-led the Atari project, in which a program learned to play 50 different games directly from pixels (Nature 2015). He is best-known for leading the AlphaGo project, culminating in the first program to defeat a top professional player in the full-size game of Go (Nature 2016), as well as the AlphaZero project (Nature 2017), in which a program learned by itself to defeat the world's strongest chess, shogi and Go programs (Science 2018). These achievements have been recognised by awards such as the Marvin Minsky Medal, Royal Academy of Engineering Silver Medal, Mensa Foundation Prize, Cannes Lion Grand Prix and several best paper awards.
Katja Hofmann
Microsoft Research Cambridge
Minecraft as AI Playground and Laboratory
This talk focuses on Project Malmo, an AI experimentation platform that my team built on top of the popular video game Minecraft. I will show how the open-ended nature of Minecraft, which is so appealing to its human fan base, also makes the game uniquely challenging for current AI agents. I will highlight some of the opportunities this creates for driving AI research towards faster learning, complex decision making, and - ultimately - collaboration with human players. Looking to the future, I will discuss directions for tackling these challenges, from learning with human priors to multi-task learning.
Biography
Dr. Katja Hofmann is a Principal Research Manager at the Game Intelligence group at Microsoft Research Cambridge, UK. There she leads a research team that focuses on reinforcement learning with applications in modern video games. She and her team strongly believe that modern video games will drive a transformation of how we interact with AI technology. One of the projects developed by her team is Project Malmo, which uses the popular game Minecraft as an experimentation platform for developing intelligent technology. Katja's long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems.
Before joining Microsoft Research, Katja completed her PhD in Computer Science as part of the ILPS group at the University of Amsterdam. She worked with Maarten de Rijke and Shimon Whiteson on interactive machine learning algorithms for search engines.
Jon Paul Schelter
Team Lead Programmer, Ubisoft Toronto
Starlink: The Opportunity Machine
Open World games typically require a large team of game content authors to fill large worlds with varied experiences for the player. Linear Narratives are often presented in Open World games with story beats overlaid on the game world.
In Starlink: Battle For Atlas, the team at Ubisoft Toronto used offline Procedural Content Generation to create seven planets and populated them with a variety of Non-Playable Characters and world elements. The actions of over 100,000 units were continuously simulated throughout the game, and interesting situations were detected and presented dynamically to the player as missions. In addition, Linear Narrative Sequences were seamlessly integrated with the simulation, to be explored at will by the player.
In this talk, Ubisoft Toronto team lead programmer Jon Paul Schelter will discuss how simulation is an opportunity machine, and how the team developed the AI and simulation systems that present a challenging, varied, and fun experience to the player.
Biography
Jon Paul Schelter is a Team Lead Programmer at Ubisoft Toronto. As the AI Team Lead on action-adventure game Starlink: Battle for Atlas, he led the team responsible for populating and simulating the procedurally generated worlds, and is focused on compelling NPC behaviour at the intersection between systemic and more traditional, crafted narrative approaches to games.
Jon Paul brings 23 years of game development experience to Ubisoft Toronto, where he previously worked on Splinter Cell: Blacklist and Assassin's Creed Unity, and is currently working on an unannounced project. A graduate of Queen’s University with a degree in Physics and Computer Science, Jon Paul first began programming games on the TI-99/4A, and his first published titles include The Crow: City of Angels and The Perfect Weapon for the Saturn and PS1. He joined Matrox Graphics as a member of the driver and GPU architecture teams, representing Matrox at the OpenGL ARB, where he contributed to several versions of the standard as well as to the vertex and fragment programming extensions. Prior to joining Ubisoft Toronto, Jon Paul was a programmer at Rockstar Games, where he was responsible for the Animation, 3Cs, and Combat systems on the highly acclaimed PS2/Xbox game The Warriors, and later a founding member and Technical Director at Bedlam Games in Toronto, focused on AAA and online competitive games.
How developers analyse AI behaviour in Total War games
The presentation gives insight into the challenges we face and the techniques we use when analysing and balancing AI behaviour in the turn-based mode of Total War games, focusing on the most recent, record-breaking release, Total War: Three Kingdoms.
The complexity and depth of this grand strategy game make it difficult to get a clear picture of how well the AI is performing, how smart it is, where it is failing, or whether the latest balancing changes have moved its behaviour in the right direction. With potentially 50 individual AI factions making decisions about every aspect of empire management, warfare, and diplomacy, over hundreds of game rounds in a sandbox environment, this is no small task.
What gap? Bringing Game AI research to industry challenges
We have all learned that game AI as researched in academia and as practiced in industry are fundamentally different. But is that really true? As it turns out, for many challenges there is no gap in interests or understanding between game AI research and game development. There is only a gap in communication and execution. Bringing modern game AI to game development studios may seem infeasible, but in many cases the problem resides mostly in mutually identifying how research ideas can be translated into solutions that scale into production. How do you translate insights from an academic game AI research setting into industry? How do you know if principles identified in the lab will be useful in a professional game production?
In this talk, we share the lessons from our journey bringing game AI research into professional game productions. Building on specific cases and real-world game development challenges, we describe how we took research ideas and turned them into valuable products and solutions for industry partners and customers, while also translating insights from solution development into research ideas. Building on the central idea of Procedural Personas - player models that capture, model, and predict player actions and experiences - we have successfully brought modern game AI to professional game productions in mid-size and large studios.
Using these models, our customers and partners have been able to: a) massively expand their playtesting capabilities by using AI agents in place of human players, saving significant amounts of time for the development team; b) predict player motivations from behavioural data, providing valuable insights into potential optimizations of game design from telemetry; and c) generate new level content, assisting game designers in their daily work and helping them overcome the 'blank canvas' problem by giving them starting points for level designs. These designs can then be further developed in a mixed-initiative, AI-assisted tool for level generation.
Designer Recipes: Going procedural in For Honor's Arcade Mode
Making a procedural game mode for an existing AAA game is a challenge: re-formalizing the design to make new procedural content, working within existing pipelines and codebases, and ensuring AAA quality on something that intentionally can’t be exhaustively tested, whilst trying to avoid repetition and provide variety. In making For Honor’s Arcade Mode we tackled these problems; key to this was how we formalized our design. The talk will detail how we formalized design scenarios as recipes using content tagging, and will touch on the Quality Assurance strategy.
Transforming fitness with VR, Exergaming, and Zombies, Run!
Low levels of physical activity worldwide are a major public health concern, so novel solutions are required. Exergaming (a subset of serious gaming) has the potential to engage large proportions of the population, and immersive virtual reality can enhance the exergaming experience. However, for an exergaming intervention to have a demonstrable impact on behaviour, collaboration between game developers and research scientists is required.
This talk will highlight a unique collaboration between game developers Six to Start and researchers at University College London, Coventry University, and Anglia Ruskin University, which began with gaining joint funding from the Medical Research Council. The talk will describe the VR game development process, including survey data from around 6,000 users of Six to Start's immersive app-based exergame ‘Zombies, Run!’ (ZR), the world’s most popular smartphone exergame, with over 5 million players. ZR uniquely combines location-based audio storytelling with augmented reality gameplay to motivate players to run further and faster.
Our preliminary analyses suggest that ZR players not only significantly increase their levels of physical activity, but that immersive gaming could result in an identity shift (e.g. from ‘gamer’ to ‘runner’) associated with increased activity levels. In addition, we asked users to describe which aspects of the game they particularly engaged with. We will also briefly present data from an analysis of 498 public reviews of VR exergames identifying which features players like and dislike (Faric et al. JMIR 2019 DOI: 10.2196/13833).
Meeting user needs and giving them meaningful content is a common goal in free-to-play mobile games. In an effort to give our users meaningful ways to spend in-game currency, we developed a recommender system that determines the most appropriate content for each individual user. By incorporating these predictions into existing features, we managed to outperform previous rule-based systems and generate extra revenue. In this talk, we'll give a light overview of recommendation techniques, how we enabled the model to capture general behaviour patterns around buying, and how we tackled common obstacles such as data dimensionality and class imbalance.
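The production system behind this talk is not public, but the general idea of scoring catalog items per user can be sketched in a few lines. Everything below is hypothetical and illustrative: the item fields, the similarity heuristic, and the popularity blend stand in for whatever learned model the team actually uses.

```python
# Hypothetical sketch of per-user content recommendation in a free-to-play
# game. A trained, class-imbalance-aware model would replace this heuristic.

def score(user_history, item, catalog_popularity):
    """Blend the user's past purchases of similar items with global popularity."""
    similar = sum(1 for bought in user_history if bought["type"] == item["type"])
    return similar + 0.1 * catalog_popularity.get(item["name"], 0)

def recommend(user_history, catalog, catalog_popularity, k=1):
    """Return the top-k catalog items for this user, highest score first."""
    return sorted(catalog,
                  key=lambda it: score(user_history, it, catalog_popularity),
                  reverse=True)[:k]

# Illustrative data: a user who previously bought a cosmetic item.
history = [{"name": "sword_skin", "type": "cosmetic"}]
catalog = [{"name": "shield_skin", "type": "cosmetic"},
           {"name": "xp_boost", "type": "booster"}]
popularity = {"xp_boost": 3, "shield_skin": 1}

top = recommend(history, catalog, popularity)
# top -> [{'name': 'shield_skin', 'type': 'cosmetic'}]
```

In this toy version the cosmetic item wins because of purchase-history similarity; the talk's point is that a learned model can capture such buying patterns at scale while coping with sparse, imbalanced purchase data.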
When working on game projects that contain a huge amount of animation content, every additional feature in the animation technology, and every bug fixed in the animation code, is a potential risk for breaking the existing content. This is even more true when working on additional content packs: the code needs to be extended to support new features but should not break any of the content originally shipped with the game. Not only does this require a long process of production and testing, but sometimes, following investigation, the desired improvements simply cannot be completed because the risk of breaking content is too high.
In this talk, we will present the regression test system we introduced at Creative Assembly for the animation code to tackle this issue. We use only a small subset of the full animation content as test data, chosen to guarantee coverage of every combination of animation features that we actually use in production. To achieve this, we created a tag system for animation features within the animation code, which allows us to identify any untested combination of animation features in the production assets and make sure to add them to our test set. The tests can then be performed automatically by comparing the generated animation poses at each revision of the code.
Although the implementation of this system is quick and straightforward, it has not come without its share of unexpected issues, which will be discussed in the talk. Nevertheless, we have now successfully used these regression tests in production. We will describe how the system has already saved significant development time by reducing the need for manual tests and decreasing the number of iterations needed to get features and improvements right. Beyond this, it also allows large code changes at a very late stage of development by reducing the risk of doing so.
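The two ingredients the abstract describes - a tag-based coverage check over feature combinations, and an automatic pose comparison between code revisions - can be sketched roughly as follows. All names and data structures here are hypothetical; Creative Assembly's actual system lives inside their C++ animation code.

```python
# Hypothetical sketch of the tag-based regression test idea described in the
# talk: find production feature combinations the test set misses, and compare
# generated poses against a reference from an earlier code revision.

def untested_combinations(production_assets, test_assets):
    """Return feature-tag combinations used in production but absent from
    the regression test set."""
    tested = {frozenset(a["tags"]) for a in test_assets}
    used = {frozenset(a["tags"]) for a in production_assets}
    return used - tested

def poses_match(reference_pose, new_pose, tolerance=1e-5):
    """Compare two generated animation poses value by value, within a
    tolerance to absorb harmless floating-point drift between revisions."""
    if len(reference_pose) != len(new_pose):
        return False
    return all(abs(r - n) <= tolerance
               for r, n in zip(reference_pose, new_pose))

# Illustrative data: one production asset combines blending and IK, another
# uses physics; the test set only covers the first combination.
production = [{"name": "walk_blend", "tags": ["blend", "ik"]},
              {"name": "ragdoll_hit", "tags": ["physics"]}]
tests = [{"name": "test_blend_ik", "tags": ["blend", "ik"]}]

missing = untested_combinations(production, tests)
# missing -> {frozenset({'physics'})}, i.e. add a physics test asset

ok = poses_match([0.10, 0.20, 0.30], [0.10, 0.20, 0.30])
# ok -> True: the new revision reproduces the reference pose
```

The key design point the talk highlights is that coverage is defined over feature *combinations*, not individual features, so a small, carefully chosen asset subset can stand in for the full content set.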
Demos
Pass in Human Style: Learning Soccer Game Patterns from Spatiotemporal Data
Victor Khaustov and Maxim Mozgovoy
RDF* Graph Database as Interlingua for the TextWorld Challenge
Guntis Barzdins and Didzis Gosko
Visualization of Deep Reinforcement Learning using Grad-CAM: How AI Plays Atari Games?
Ho-Taek Joo and Kyung-Joong Kim
An Overview of the Ludii General Game System
Matthew Stephenson, Eric Piette, Dennis J. N. J. Soemers and Cameron Browne
Remixing headlines for context-appropriate flavor text
Judith van Stegeren and Mariët Theune
Interactive Machine Learning for More Expressive Game Interactions
Carlos Gonzalez Diaz, Phoenix Perry and Rebecca Fiebrink
neomento - towards building a universal solution for virtual reality exposure psychotherapy
Adam Streck, Philipp Stepnicka, Jens Klaubert and Thomas Wolbers
Beyond a Steel Sky: the Math behind the Art
Emanuele Salvucci
Spreading machine learning familiarity through games