One of the great benefits of lockdown for me is the time I have to catch up on papers that aren’t directly related to my day-to-day work. In the past week I’ve been catching up on some of the more general outputs from NeurIPS 2020. One of the papers that really caught my eye was “Ultra-Low Precision 4-bit Training of Deep Neural Networks” by Xiao Sun et al.
There’s no doubt that AI in its current form takes a lot of energy. You only have to look at some of the estimated costs of GPT-3 to see how the trend is pushing for larger, more complex models on larger, more complex hardware to get state-of-the-art results. These AI super-models take a tremendous amount of power to train, with costs out of the reach of individuals and most businesses. AI edge computing has been looking at moving ongoing training into smaller models on edge devices, but to get the accuracy and the speed, the default option is expensive dedicated hardware and more memory. Is there another way?
It’s been seven years of studying while working full time (and in some cases nearly double full-time hours!) and I’ve now finished the degree I started for “fun” because I wasn’t being intellectually challenged in the job I had at the time. I was sceptical of all aspects of the Open University but thought I’d give it a go, knowing that without a cost to me and an exam at the end, I would never make the time to study. While I’ve been blogging about individual modules over the years, I’ve also had quite a few conversations with many of you reading this blog about the pros and cons of studying with the OU. One of the comments on my last post was from Korgan, who suggested I write a post about this, so I’ve combined their questions with all of the others I’ve had.
There are a lot of people interested in data right now, and there are a lot of visualisations that make that data easier to consume for people who are not data scientists. However, like any branch of statistics, visualisations can easily mislead. We are programmed to see patterns, and if we are presented with a graphic that supports the surrounding text, we are more likely to believe the argument presented without further research1. I wrote about this on the Royal Statistical Society Data Science Section Blog in May, showing how reversing the colours between successive graphics can cause confusion. I’ve seen further examples since, and one caught my eye this month because it was being called out.
Seven years ago I was at work, bored and desperate for a new challenge. My daughter had recently been born and I had decided to stop playing World of Warcraft. I had toyed with an MBA but really wanted to do something for me, so I signed up for a BSc in Mathematics with the Open University, which I knew would take about six years part-time while working. This week, I got the results for my final module and it was confirmed I had earned a first-class honours degree. But why didn’t I do maths the first time round?
This week I was due to be sitting in a large hall with about 200 other Open University students, taking my exam for module M347, the last of the modules for the BSc in Mathematics I started for “fun”1. As with students in traditional universities, March 2020 brought a lot of uncertainty2. While some modules were switched to coursework-based assessment, mine was confirmed as a remote exam using the originally planned exam paper. The paper would be accessible as a PDF on the day of the exam and then submitted in two parts: a multiple-choice computer-marked section and a human-marked second section. We would not be time-limited (other than by the 24 hours in the day!). So how did I feel about this, and how did it go?
Today I submitted the last assessment before the exam in my Mathematical Statistics module for my tutor to mark. For once, I’m actually on track with my study, but it’s not been without difficulty. If you’ve been following my OU journey then you’ll know I work full time and have a family, so dedicated study time can often be a low priority. Up until the second week of March this year1, I had a reasonable routine: I’d spend my two hours of commuting, Monday to Friday, going through the course materials, so the extra maths wouldn’t impact work or home life.
My husband is a game developer and my contributions are usually of the sort where I look at what he’s done and say “hey, wouldn’t it be great if it did this”. While these are usually positive ideas, they’re mostly a pain to code. Today, however, I was able to contribute some of my maths knowledge to help balance one of his games.
Using an open API, he’d written a simple Pokémon battle game to be used on Twitch by one of our favourite streamers, FederalGhosts. He needed a way of determining a player’s level based on their number of wins, and the number of wins required to reach the next level, all without recursion. While this post is specifically about the win-to-level relationship, you could use any progression statistic by applying scaling. Here we want to determine:
Number of wins (w) required for a given level (l)
The current player level (pl) given a number of wins (pw)
Wins remaining to the next level (wr) for a player based on current wins (pw)
Let’s take a look at a few ways of doing this. Each section below has the equations and code examples in Python1. Assume all code samples have the following at the top:
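For the purposes of this excerpt, assume the shared header is just Python’s standard maths module:

```python
# Shared header for the code samples below: only the standard maths module is needed.
import math
```

To make the three quantities above concrete, here’s a minimal sketch assuming a quadratic levelling curve, w(l) = k·l², which is a common choice for progression systems; the constant k and the function names below are my own illustrative choices, and the game’s actual curve may well differ:

```python
K = 10  # assumed tuning constant: wins required scale quadratically with level


def wins_for_level(l):
    """Number of wins (w) required for a given level (l)."""
    return K * l ** 2


def level_for_wins(pw):
    """Current player level (pl) given a number of wins (pw), without recursion."""
    # Invert w = K * l**2 and take the floor: l = floor(sqrt(w / K))
    return int(math.sqrt(pw / K))


def wins_to_next_level(pw):
    """Wins remaining to the next level (wr) for a player with current wins (pw)."""
    pl = level_for_wins(pw)
    return wins_for_level(pl + 1) - pw
```

Because the assumed curve is invertible, the level falls out of a square root rather than a loop, which is exactly what makes the no-recursion requirement easy to meet. For example, with k = 10, a player on 45 wins is level 2 and needs another 45 wins to reach level 3.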
My LinkedIn news feed was lit up last week by a Medium post from Dario Radečić, originally posted in December 2019, discussing how much maths is really needed for a job in data science. He starts by berating the answers on the Quora posts from the PhD brainiacs who demand you know everything… While the article is fairly light-hearted, and is probably more an encouragement piece for anyone currently studying or trying to get that first job in data science, I felt that, as someone who hires data scientists1, I could add some substance from the other side.
In December, Lample and Charton from Facebook’s Artificial Intelligence Research group published a paper stating that they had created an AI application that outperformed systems such as Matlab and Mathematica when presented with complex equations. Is this a huge leap forward, or just an obvious extension of the maths-solving systems that have been around for years? Let’s take a look.
I’ve recently submitted the first tutor-marked assignment of M347, my final course in the BSc Mathematics I’ve been studying with the Open University. The third unit of this course was long and quite a slog to get through. While I’ve been using many of these equations over the past few years, diving deep into the theory and derivations has been fascinating, if frustrating due to the lack of practical application. If you’ve read my other posts then you may recall how frustrated I was with group theory and the early parts of complex analysis, while the quantum world was far more engaging from the start1. As with all my maths studies, this exercise of filling in the gaps has revealed that there are far more things I didn’t know I didn’t know than things I knew I needed to know.