Weekly Gaming Q&A Series – Week 10: Who’s responsible for artificial intelligence?

January 20, 2022

Defining who is responsible for artificial intelligence has prompted a lot of debate. It is natural for people to question who answers for the decisions of automated technologies, but experts agree that legislation lays the foundation for the ethical use of AI. Where regulation is concerned, AI should be treated like any other automated system: despite ideas perpetuated by the media and film, there is nothing supernatural about its capabilities, at least for the foreseeable future.

Machine learning (ML) is currently the most advanced form of AI. Despite its impressive capabilities, an ML model cannot create anything that is not already present, in some form, in the data it was trained on. The biggest issues with AI lie mostly in the collection and transfer of personal data, and in technologies with safety implications for humans, such as industrial machines, cars or aircraft.

But where does the responsibility really lie?

It may sound like comparing apples and oranges, but when it comes to AI, the hierarchy of responsible parties could be compared to air travel.

Just like an aircraft, AI is a tool built by humans. Most AI applications do not currently pose much physical risk to humans, mostly because the application areas of machine learning are still limited compared to those of conventional software.

Where aircraft safety is concerned, the responsibility falls primarily on the pilots, engineers, cabin crew and airport staff, each working in their own area of responsibility and expertise. Secondly, it falls on the manufacturer of the aircraft: leading aircraft manufacturers employ professionals in every discipline involved in designing and building aircraft.

Responsibility for AI decision making should work the same way: legislators and authorities should define the boundaries for collecting and using data, and specify what certifications are required to produce AI tools, just as they regulate any other tools or products that significantly affect human lives.

So at Utopia Analytics, we believe that when it comes to AI there are three responsible parties:

  1. The field experts who practise their profession and, in doing so, provide the training data for machine learning-based AI.
  2. The AI model and product developers, who need the professional skill to understand how different algorithms behave on different kinds of data and in different types of AI products.
  3. The users of the AI tool, who should follow the user instructions in order to get the expected behaviour from the AI decision-making system.

Want to learn more?

Check out our case studies or contact us if you have questions or want a demo.


