Thursday, December 26, 2019

Artificial intelligence: the only way is ethics

Ed.'s note: It's time we had a serious conversation about artificial intelligence (AI) and who will ultimately control it.

Source: CERN

During his talk, Nallur called for increased collaboration between computer scientists, legal professionals and experts in the domains where AI technologies are being applied (Image: Andrew Purcell/CERN)

5 SEPTEMBER, 2019 | By Andrew Purcell

CERN has an ambitious upgrade programme for its flagship accelerator complex over the next two decades. This is vital to continue pushing back the frontiers of knowledge in fundamental physics, but it also poses some gargantuan computing challenges.

One of the potential ways to address some of these challenges is to make use of artificial intelligence (AI) technologies. Such technologies could, for example, play a role in filtering through hundreds of millions of particle collision events each second to select interesting ones for further study. Or they could be used to help spot patterns in monitoring data from industrial control systems and prevent faults before they even arise. Already today, machine-learning approaches are being applied to these areas.
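The event-filtering idea above can be sketched in a few lines of Python. This is a minimal illustration, not CERN's actual trigger software: the feature names, weights, and threshold are invented, standing in for a model whose weights would in practice come from training on labelled collision data.

```python
def event_score(event, weights, bias=0.0):
    """Linear score over event features (a stand-in for a trained model)."""
    return sum(weights[k] * event.get(k, 0.0) for k in weights) + bias

def select_events(events, weights, threshold):
    """Keep only events whose score clears the trigger threshold."""
    return [e for e in events if event_score(e, weights) > threshold]

# Invented feature weights: more energetic events with more jets score higher.
WEIGHTS = {"total_energy": 0.8, "n_jets": 0.5}

events = [
    {"total_energy": 1.2, "n_jets": 4},  # score 2.96 -> kept
    {"total_energy": 0.1, "n_jets": 1},  # score 0.58 -> discarded
    {"total_energy": 2.0, "n_jets": 2},  # score 2.60 -> kept
]
interesting = select_events(events, WEIGHTS, threshold=1.0)
```

A real system would apply a far richer model, but the shape is the same: score each event, keep only those above a threshold for further study.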

It was in view of the potential for further important developments in this area that Vivek Nallur was invited to give a talk last week at CERN entitled 'Intelligence and Ethics in Machines – Utopia or Dystopia?'.

Nallur is an assistant professor at the School of Computer Science at University College Dublin in Ireland. He gave an overview of how AI technologies are being used in wider society today and highlighted many of the limitations of current systems. In particular, Nallur discussed challenges related to the verification and validation of decisions made, the problems surrounding implicit bias, and the difficulties of actually encoding ethical principles.

During his talk, Nallur provided an overview of the main efforts undertaken to date to create AI systems with a universal sense of ethics. In particular, he discussed systems based on consequentialist ethics, virtue ethics and deontological ethics – highlighting how these can throw up wildly different behaviours. Therefore, instead of aiming for universal ethics, Nallur champions an approach based on domain-specific ethics, with the goal of achieving an AI system that can act ethically in a specific field. He believes the best way to achieve this is by using games to represent certain multi-agent situations, thus allowing ethics to emerge through agreement based on socio-evolutionary mechanisms – as in human societies. Essentially, he wants AI agents to play games together again and again until they can agree on what actions should or shouldn't be taken in given circumstances.
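The repeated-play idea can be illustrated with a toy multi-agent simulation. This is a hedged sketch, not Nallur's actual framework: the agents, the two candidate actions ("share" vs "hoard"), and the best-response update rule are all invented for illustration. Each round, every agent adopts the action most common among the others (keeping its current action on ties), and play repeats until the population agrees.

```python
from collections import Counter

def best_response(actions, i):
    """Return the action matching the most other agents; ties keep the current one."""
    others = Counter(actions[:i] + actions[i + 1:])
    best = max(others.values())
    candidates = [a for a, count in others.items() if count == best]
    return actions[i] if actions[i] in candidates else candidates[0]

def play_until_agreement(actions, max_rounds=100):
    """Synchronously best-respond each round until all agents agree (or give up)."""
    for _ in range(max_rounds):
        if len(set(actions)) == 1:
            break
        actions = [best_response(actions, i) for i in range(len(actions))]
    return actions

# Five agents start with conflicting norms; repeated play yields a consensus norm.
final = play_until_agreement(["share", "hoard", "share", "share", "hoard"])
```

Here the minority "hoard" agents switch after one round, so the group converges on "share". The point of the toy model is the mechanism, not the outcome: a shared norm emerges from repeated interaction rather than being encoded up front.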

Please go to CERN to read the entire article.

Ed.'s note: Who put these Google employees up to this? They should not have done this; instead, they should have maintained American control over this technology.

Google Employees Resign in Protest Against Pentagon Contract
