modl.ai: The Story and Development of AI Engine

modl.ai's CEO Christoffer Holmgård and Lead Engineer Ricardo Sisnett have told us about the company's AI Engine, discussed its capabilities, and shared their approach to privacy and AI training.

Introduction

Christoffer Holmgård: My name is Christoffer Holmgård, Co-Founder and CEO of modl.ai. My background is in the games industry, starting in 2008. I worked in indie games for a long time and co-own an indie game studio called Die Gute Fabrik, which has been running for a bunch of years. On the side, I also worked in research and machine learning for games, did a doctorate, and was a professor for a few years in that field. Around 2018, I got together with the other co-founders of modl.ai, and we started the company and have been growing it ever since. Today, we're 34 people in the company, including Ricardo, our Lead Engineer.

Ricardo Sisnett: My name is Ricardo Sisnett, and I'm the Lead Engineer at modl.ai. My background is as a software engineer, both in enterprise software and in the games industry. I started at Oracle down here in San Mateo in Silicon Valley and then moved into the games industry 10 years ago. I worked for Riot for six years and, at the same time, did a master's degree in games and AI, which I got the chance to apply at Riot, working on Legends of Runeterra as the architect of the AI that you play against and also as the lead of a small reinforcement learning project that is kind of the seed of some of the stuff Riot is now doing in that area.

After that, I had a little bit of a pause, traveled the world, and then COVID came. I tried to figure out what was next. I had met these guys at a conference in 2017 and stayed in touch with some of them, and then Julian, one of the founders, reached out to me and said, "Hey, we want to bring someone with your background in software engineering to productize what we're doing." And I've been here for about two years now.

modl.ai & AI Engine

Christoffer Holmgård: At modl.ai, we are building what we term an AI engine. If you look at the tools used in the games industry, you obviously have a game engine that pulls everything together, but you also have things like a physics engine or an audio engine.

We think there's room for the next thing, which is an AI engine, and we think there's an opportunity for it to change how games are built and made. If you think about the industry, you'll see that, at least in my mind, one of the largest changes in how games were made and who could make games happened maybe a decade or even 15 years ago with Unity and Unreal Engine. And it feels like we're now at a moment where that kind of technology upgrade is happening again.

We have bots, or virtual players, for games; that's what our engine drives, and we offer automatic QA for game developers. You can deploy a player, and it will play your game for as long as you want in order to break it, test it, see if it works, and automate a lot of that work. At the other end, you can have multiplayer NPCs that you can play against that will copy players' behavior, kind of like driving against avatars in Forza Motorsport or some of the work we've seen from the large AI houses. But for us, all of it sits on a spectrum. All of those behaviors and use cases are centered around the same thing: an AI engine that makes virtual players operate the game. That's the idea.

AI Engine's Capabilities

Christoffer Holmgård: We're not trying to answer the question, "Is your game fun?" That is for humans to answer, and that's what you need human players for. We focus on, "Does your game work?" And on top of that, "How does it work? How well does it work?" So, it's a lot about finding bugs and glitches, identifying aspects of the game that aren't really performing or places where it would crash, and understanding how the game is running in terms of performance.

If you think about game testing as a pyramid, at the top we have human intelligence evaluating the game design. And at the bottom, we have to check that all the walls are solid and that the game doesn't crash. We see that bottom part as an opportunity to automate a lot of the work that happens there. You can take a lot of the effort that goes in at that ground level, which is really necessary, and move that human labor up the chain where it has greater value. So, our automation targets the bottom level.
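
As a rough illustration of what that ground-level automation could look like, here is a minimal sketch in Python of a wandering bot that only checks "does the game still work" style invariants. The GameClient interface, its methods, and the thresholds are hypothetical stand-ins for illustration, not modl.ai's actual API.

```python
# A minimal sketch of "bottom of the pyramid" checks driven by a wandering bot.
# GameClient, its methods, and the thresholds are hypothetical, not modl.ai's API.

import random

def smoke_test(client, steps=10_000, bounds=((-500, 500), (-500, 500), (0, 200))):
    """Let a bot wander and flag crashes, out-of-bounds positions, and slow frames."""
    issues = []
    for step in range(steps):
        action = random.choice(client.available_actions())  # naive exploration policy
        state = client.step(action)                         # advance one game tick

        if state.crashed:
            issues.append((step, "crash", state.crash_log))
            break
        x, y, z = state.player_position
        (x0, x1), (y0, y1), (z0, z1) = bounds
        if not (x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1):
            issues.append((step, "out_of_bounds", (x, y, z)))  # e.g. clipped through a wall
        if state.frame_time_ms > 33.0:                         # dropped below ~30 FPS
            issues.append((step, "performance", state.frame_time_ms))
    return issues
```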

Ricardo Sisnett: Games change really fast, right? In development, scripting has its limitations, and if your space changes too often, as is the case with games, then you have to keep revisiting those scripts or basically throw them away and start again. Our proposition is that AI can help with this because it's adaptable. So, you can throw a slightly changed version of the same game at it, and you don't have to adapt your script.
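
To make the contrast concrete, here is a hypothetical sketch of the difference Ricardo describes: a scripted test encodes exact inputs that break as soon as the level changes, while a goal-driven bot test only states the objective and lets the bot find its own inputs. The structures and field names are illustrative assumptions, not modl.ai's actual test format.

```python
# Illustrative contrast between a brittle input script and a goal-driven bot test.
# Both structures are hypothetical, not modl.ai's actual test format.

# Scripted test: exact inputs tied to one specific level layout.
scripted_test = [
    ("hold", "forward", 120),  # frames
    ("turn", "left", 90),      # degrees
    ("press", "jump", 1),
    ("hold", "forward", 300),
]  # moving a door or stretching a corridor invalidates the whole sequence

# Goal-driven bot test: only the intent is specified; the bot finds its own inputs
# against whatever the current build looks like.
goal_test = {
    "objective": "reach_level_exit",
    "time_limit_s": 300,
    "fail_on": ["crash", "out_of_bounds", "stuck_for_s:30"],
}
```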

The Avatars

Christoffer Holmgård: We operate with this idea of a spectrum where, at the lowest end, you can do a very simple installation into a game, and that will allow you to take simple actions. But as you keep adding to the complexity of the integration, more things become possible. When you work with us, you can start out with the automatic QA part, which is relatively simple to install. Then we can also help you install what we call a data pipeline into the game. What a data pipeline really is, is a way to observe what is happening in the game at any given moment and what an individual player is doing. And when I say doing, I mean: what buttons were you pressing, which way were you looking? If it's a shooting game, were you shooting? If it's a narrative game, what was your choice? It's really getting down to that granular, behavioral level of what you're doing in the moment.
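
A hypothetical sketch of what one record in such a data pipeline might look like: a per-tick snapshot pairing what the player observed with what they did. The field names and structure here are assumptions for illustration, not modl.ai's schema.

```python
# Hypothetical per-tick record for a gameplay data pipeline.
# Field names are illustrative, not modl.ai's actual schema.

from dataclasses import dataclass, field

@dataclass
class FrameRecord:
    session_id: str                          # opaque session ID, no direct player identity
    tick: int                                # game tick / frame index
    buttons: list[str]                       # e.g. ["forward", "fire"]
    look_direction: tuple[float, float]      # yaw, pitch in degrees
    player_position: tuple[float, float, float]
    game_state: dict = field(default_factory=dict)  # e.g. {"health": 80, "in_combat": True}

record = FrameRecord(
    session_id="a91f-example",               # placeholder pseudonymous ID
    tick=48213,
    buttons=["forward", "fire"],
    look_direction=(135.0, -4.5),
    player_position=(12.3, 0.0, -87.1),
    game_state={"health": 80, "in_combat": True},
)
```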

When you start having that kind of data available, and when you get enough of it, whether from your internal game team, your testing team, or your players during early access, you can take that data, embed information about what is going on in the game, and then learn over it to imitate what a certain player, or a group of players, was doing at a specific point in time.

What you get out of that, in particular for the moment-to-moment decision-making you have in games, is the ability to train machine learning models to replicate behavior from human data. Not only is it an efficient way to encode and get the behavior that you want, it's also really efficient for updating it over time as new data comes in, and for capturing that game feel of playing against a human player, where there's always a little bit of noise in how people play the game. That's one of the things that ML can do for games but classic game programming has a hard time doing.
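
The learning step Christoffer describes is essentially behavioral cloning: supervised learning that maps recorded game observations to the actions players took. Below is a minimal PyTorch-style sketch of that idea, under the assumption that observations and actions have already been encoded as fixed-size vectors and discrete action IDs; it illustrates the technique, not modl.ai's implementation.

```python
# Minimal behavioral-cloning sketch: predict the player's action from the observed
# game state. Illustrative only, not modl.ai's implementation.

import torch
from torch import nn

OBS_DIM, N_ACTIONS = 64, 12          # assumed sizes of the encoded state / action space

policy = nn.Sequential(              # small policy network: observation -> action logits
    nn.Linear(OBS_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(observations, actions):
    """One supervised update on a batch of (what the player saw, what they pressed)."""
    logits = policy(observations)    # (batch, N_ACTIONS)
    loss = loss_fn(logits, actions)  # actions: (batch,) int64 action IDs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for recorded player data.
obs = torch.randn(32, OBS_DIM)
acts = torch.randint(0, N_ACTIONS, (32,))
print(train_step(obs, acts))
```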

Benefiting From the Solution

Christoffer Holmgård: I think you can imagine a range of use cases. "Oh, I'm getting into the game. I'm a new player. I don't want to get destroyed immediately by all the other players." So you can help people onboard into the game. Or you're an elite player, and you're like, "Okay, I'm waiting one and a half hours in the lobby for somebody to play against." I think those are adjacent use cases as well.

You could think about using bots like this for extended training or coaching, or you can think of them as avatars trained on individual players. Then, as a game designer, when you have bots testing a game, you can use them to balance the game and understand how game mechanics play out before you put it out to the players.

Implementing the DeepMind-Style Approach

Ricardo Sisnett: I think our biggest asset is our team. OpenAI and DeepMind have actually cited people on our team, so we have a lot of talent already. And I think the whole trick is that they are trying to solve the game, right? OpenAI is trying to be the best of the best, and that's it. We are targeting a specific set of industry challenges. We have a little bit of a slider, and I think that is what allows us to focus a little bit better and not require, like, seven billion computers to train our models. One of our products just learns on the fly and adapts during the session; you don't have to train it for hours, it will just play your game, and there's no offline training required. So, I think a little bit of ingenuity and attacking different problems is what allows you to be a little bit smaller.

Christoffer Holmgård: I completely agree, and it's also a little bit about what your mission or objective as a company is. If you look at DeepMind or OpenAI, it's very general-purpose AI, which is a very worthwhile endeavor, and we're seeing all the effects of that. We're a company made up of people who did research into games and people who made games, and we're focused on the games industry as our target. That's where we want to have an impact and change how games are made. So, that means we think a lot about practical solutions and practical applications. It's not necessarily important for us to push the state of the art of AI, but it is important to us to push the state of the art of how AI is used to create game experiences. And that's really a very different direction, right? Because it's all about the application and how you're moving the needle for developers and players. We'll work on inventing new AI methods when we have to, but the point for us is very product-oriented.

Approach to Privacy and Training the AI

Ricardo Sisnett: We don't collect or use Personally Identifiable Information. We don't need that. Actually, part of what we were talking about at GDC is how to keep your PII and the data that is useful for machine learning systems separated. It is an interesting problem because some people have made the claim that your gameplay style could be PII, and that makes this whole conversation a little bit more complex.
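
One common pattern for the kind of separation Ricardo mentions, offered here as a hypothetical sketch rather than a description of modl.ai's system, is to keep identity data and gameplay telemetry in separate stores and join them only through a keyed pseudonym.

```python
# Hypothetical sketch of keeping PII and gameplay telemetry separated.
# Not a description of modl.ai's actual system.

import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-outside-the-telemetry-system"

def pseudonymize(player_id: str) -> str:
    """Keyed hash so telemetry carries an opaque ID instead of the real account ID."""
    return hmac.new(SECRET_KEY, player_id.encode(), hashlib.sha256).hexdigest()

# PII store: identity data, access-controlled, never shipped to ML training.
pii_store = {"player_42": {"email": "player@example.com", "country": "DK"}}

# Telemetry store: gameplay events keyed only by the pseudonym.
telemetry_store = {
    pseudonymize("player_42"): [
        {"tick": 100, "buttons": ["forward"], "look": (90.0, 0.0)},
    ]
}
```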

I think there are two things when we get into this. One, I think we can be honest with players. Players are smart, so we can tell them, "Hey, we're collecting this data in this way to solve these problems, and this will be good for you long term for these reasons." And maybe that way, you feel a bit more comfortable accepting it, instead of the usual, "We're gonna capture data and we're not gonna tell you why."

And second, I think we need to establish a baseline practice as a game AI community. That's also something I was hinting at in the session: we need some baselines, and we need to agree on these ethical boundaries. Where do you source data? How do we share it? In general, it's something we still haven't figured out, and I think it's a problem for the community to solve. It's the "data is the new oil" kind of thing: if we want to keep doing cool stuff like Midjourney or Stable Diffusion, we need to find ways of sourcing that data ethically and in the right way.

Conclusion

We're opening our beta program to game developers, and we're trying to find the first studios that want to help us develop this product to the next level and become the first wider audience to use it in their game productions. And for the game-playing bots, well, send us an email, we'll have a conversation, and we'll see how we can help you.

Christoffer Holmgård, CEO and Co-Founder at modl.ai

Ricardo Sisnett, Lead Engineer at modl.ai

Interview conducted by Kirill Tokarev
