Cambridge Analytica: not AI’s ethics awakening

From the wonderful XKCD: Research Ethics

By now, most people who keep up with the news will have heard of Cambridge Analytica, the whistleblower Christopher Wylie, and the stories surrounding the harvesting of Facebook data and micro-targeting, along with accusations of potentially illegal activity.  Amongst all of this coverage I’ve also seen articles claiming that this is the “awakening” moment for ethics and morals in AI, and in data science more generally: the point where practitioners realise the impact of their work.

“Now I am become Death, the destroyer of worlds”, Oppenheimer


Algorithmic transparency – is it even possible?

Could you explain to a lay person how this network makes decisions?

The Science and Technology Select Committee here in the UK have launched an inquiry into the use of algorithms in public and business decision making, and are asking for written evidence on a number of topics.  One of these topics is best practice in algorithmic decision making, and one of the specific points they highlight is whether this can be done in a ‘transparent’ or ‘accountable’ way¹.  If there were such transparency, then the decisions made could be understood and challenged.

It’s an interesting idea.  On the surface, it seems reasonable that we should be able to understand the decisions in order to verify and trust the algorithms, but the practicality of this is where the problem lies.
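As a rough illustration of that practicality problem (my own sketch, not taken from the post, using scikit-learn and a toy dataset), even a deliberately tiny neural network reduces to matrices of learned weights; the prediction is easy to state, but the “reasoning” behind it is just numbers with no obvious human-readable interpretation:

```python
# Minimal sketch (illustrative only): a tiny classifier whose decisions are
# hard to explain in lay terms, despite the model being fully inspectable.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Toy binary classification problem with 4 input features.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# A deliberately small network: a single hidden layer of 8 units.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# The full "explanation" of any single prediction is, ultimately, these weights.
for i, w in enumerate(clf.coefs_):
    print(f"Layer {i} weight matrix, shape {w.shape}:")
    print(w.round(2))

# The output is simple; the route the network took to it is not.
print("Prediction for first sample:", clf.predict(X[:1])[0])
```

Scaling this up to the deep networks used in real decision-making systems only widens the gap between what the model outputs and what a person could be told about why.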