Weekly Gaming Q&A Series – Week 9: How effective is AI when it comes to in-game chat moderation?
Questions arise around AI because “AI” is often used as a broad, catch-all term, yet AI systems vary hugely in their capabilities; how well one works depends on how the model and its algorithms are built. It’s also worth addressing a common misconception: just because something is labelled ‘AI-powered’ doesn’t mean it’s a futuristic machine that can think like a human.
In the context of in-game chat moderation, many “AI” products are in fact just software that filters content using human-made rules based on lists of banned words. That is not AI in the true sense. Really, they are extensive dictionaries of words and phrases that need regular updating as players find workarounds, such as swapping or misspelling words, to slip past the filters.
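To see why word-list filtering is so easy to evade, here is a minimal sketch of a rule-based filter. The banned-word list and function names are hypothetical, chosen only for illustration:

```python
# Hypothetical banned-word list for a rule-based chat filter.
BANNED_WORDS = {"noob", "loser"}

def is_blocked(message: str) -> bool:
    """Block a message only if a token exactly matches the banned list."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BANNED_WORDS for token in tokens)

print(is_blocked("you are a noob"))   # True: exact match is caught
print(is_blocked("you are a n00b"))   # False: a simple misspelling slips through
print(is_blocked("you are a no ob"))  # False: splitting the word evades the filter
```

Both evasions would require a human to notice the new spelling and add it to the dictionary, which is why such lists need constant maintenance.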
Equally, how much human input is required depends on the capabilities of a specific system and how advanced the AI is. With advanced AI systems, the workload of human moderators can be reduced by up to 99.99%, meaning only a fraction of the AI’s decisions need to be checked by humans. Those human decisions can then be used to retrain the AI models on a regular basis, keeping them up to date as language and the world around us change.
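One common way to route only a fraction of decisions to humans is to escalate messages where the model is least confident. This is a hedged sketch of that idea; the threshold, scores, and messages are invented for illustration:

```python
# Hypothetical human-in-the-loop routing: only low-confidence AI
# moderation decisions are queued for human moderators, whose
# verdicts can later be fed back into model retraining.
def needs_human_review(confidence: float, threshold: float = 0.95) -> bool:
    """Escalate a decision to a human when the model is unsure."""
    return confidence < threshold

# (message, model confidence in its own decision) — illustrative values
decisions = [("gg wp", 0.99), ("report this player", 0.97), ("u r trash kid", 0.60)]
review_queue = [msg for msg, conf in decisions if needs_human_review(conf)]
print(review_queue)  # ['u r trash kid']
```

In a real pipeline the human verdicts on the review queue would become labelled training data for the next retraining cycle.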
Furthermore, AI is based on algorithms that can be biased in the same way human decision-making can be. Utopia’s AI Moderator has been designed to avoid bias: it understands entire messages, so it doesn’t misconstrue the semantic meaning of an individual word taken out of context. And because the technology is language-agnostic, it can be used to moderate any language or dialect in the world.
Want to learn more?
Check out our case studies or contact us if you have questions or want a demo.