
What future for humanity in a society governed by AI?


Irene Kuling: “People have trouble reflecting on their own biases, if they are aware of them at all. Until we properly understand those biases, I think it’s impossible to develop unbiased algorithms. In itself, that need not be a problem, as long as those algorithms do what you want them to do. It also very much depends on what you are using the system for. In job applications, for example, it can work very well for an initial screening of candidates, precisely to prevent human bias in that phase.

However, I would be very careful with it in health care, for example, especially when it comes to questions of life and death. Imagine the moment the computer says: your chances of survival are very small, so we’re not going to treat you. I think it will be a very long time before we accept that.”

Lambèr Royakkers: “Agreed. Healthcare is really a domain where you have to put the responsibility with the human. You can’t leave that to a computer. The same goes for killer robots, for example, which I’ve written a lot about. There, too, you want the operator to have the last word.

But overall, I’m hopeful. In fact, I think AI systems will soon possess less bias than a human. A lot will depend on how much we invest in developing explainable AI, so that, as a user, you know why a system makes certain choices. People need to understand why they have been picked out by the IRS, or why they are denied certain loans.

With certain simple AI models this is already possible, but with deep learning and neural networks it is a lot harder. Of course, these systems will never be truly error-free. The question is how many errors one is willing to accept.”
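For the simple models Royakkers refers to, the transparency he asks for is indeed attainable: in a linear scoring model, each input feature's contribution to the decision can be read off directly and shown to the person affected. A minimal sketch in Python, using a hand-weighted loan-screening example where the weights, feature names, and threshold are all hypothetical:

```python
def loan_decision(applicant, weights, threshold=0.0):
    """Score an applicant with a linear model and return the decision
    together with a per-feature breakdown of how each input contributed."""
    # Each feature's contribution is simply weight * value, so the
    # "explanation" is exact, not an approximation.
    contributions = {
        feature: weights[feature] * value
        for feature, value in applicant.items()
    }
    score = sum(contributions.values())
    return {
        "approved": score >= threshold,
        "score": score,
        # Rank features by the size of their impact on the decision.
        "explanation": sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        ),
    }

# Hypothetical weights and applicant values, for illustration only.
weights = {"income": 0.5, "debt": -0.8, "late_payments": -1.5}
applicant = {"income": 3.0, "debt": 2.0, "late_payments": 1.0}

result = loan_decision(applicant, weights)
```

A denied applicant can then be told which factor weighed most heavily against them. With deep neural networks there is no such exact per-feature decomposition, which is precisely why explainability there is, as the interview notes, a lot harder.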
