In early 2017 I got an Apple Watch. I wasn’t fussed about them at the time, as I don’t normally wear a watch of any sort, but when my husband didn’t want his any more I thought I’d give it a go. A few months later I was addicted. While I use the word lightly, what really worked for me were the regular achievements and challenges. It was the same thing that got me hooked on World of Warcraft many years ago1, and I know that when I take something on I throw everything at it, but once I can no longer complete a challenge I usually drop it entirely. After my initial post about the watch I found myself in exactly that situation. Towards the end of 2017 I spent a few too many days in front of the computer with work and just wasn’t active. I noticed that as soon as I missed a day of activity, and so couldn’t earn a “perfect month” achievement, I stopped even trying to be active until the start of the next month. If there was no reward, even a completely irrelevant badge in an app, then why try? Long-term health benefits don’t give most people, myself included, the same sense of accomplishment in the short term. So after a particularly gluttonous December 2017 I made myself a promise. Continue reading A year of Apple Watch addiction and motivation
By now, the majority of people who keep up with the news will have heard of Cambridge Analytica, the whistleblower Christopher Wylie, and the reports surrounding the harvesting of Facebook data and micro-targeting, along with accusations of potentially illegal activity. Amongst all of this coverage I’ve also seen articles claiming that this is the “awakening” moment for ethics and morals in AI and data science in general: the point where practitioners realise the impact of their work.
“Now I am become Death, the destroyer of worlds”, Oppenheimer
The Science and Technology Select Committee here in the UK have launched an inquiry into the use of algorithms in public and business decision making and are asking for written evidence on a number of topics. One of these topics is best practice in algorithmic decision making, and one of the specific points they highlight is whether this can be done in a ‘transparent’ or ‘accountable’ way1. If there were such transparency, then the decisions made could be understood and challenged.
It’s an interesting idea. On the surface it seems reasonable that we should understand the decisions in order to verify and trust the algorithms, but the practicality of this is where the problem lies. Continue reading Algorithmic transparency – is it even possible?