True Type Fonts in LaTeX: a brief guide

Adding new fonts to LaTeX doesn’t have to be painful…

One of the things I love about \LaTeX is how customisable it is.  It separated content from design a long time before web design cottoned on to the idea.  However, out of the box, \LaTeX comes with a very limited set of fonts, and most people just use these defaults, mainly because setting up other fonts isn’t as easy as it should be.

One of the great things about drawing diagrams in \LaTeX is that the fonts match; it’s always a little jarring to my eye when I see papers with a mismatch between diagrams and main text.  However, sometimes you just can’t control what’s in your diagram, or you want something a little more modern than Times New Roman for whatever you’re putting together.

So how do you go about doing this?  Like most things, the answer is “it depends”… let’s start with the assumption that you’re starting from scratch; if you’re already a few steps down the process, then that’s just less work for you to do 🙂 Continue reading True Type Fonts in LaTeX: a brief guide
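To give a flavour of what’s involved, here is a minimal sketch of the simplest modern route.  It assumes you can compile with XeLaTeX or LuaLaTeX rather than pdfLaTeX, and the font name is only an example I’ve picked for illustration, not one recommended in the post:

\documentclass{article}
% Minimal sketch: the fontspec package needs XeLaTeX or LuaLaTeX.
\usepackage{fontspec}
% "Open Sans" is just an example; substitute any TrueType/OpenType font
% installed on your system, or point at a .ttf file via the Path option.
\setmainfont{Open Sans}
\begin{document}
A quick test paragraph set in the new main font.
\end{document}

Run it with xelatex or lualatex and the body text picks up the new face with no font-installation steps inside TeX itself.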

Algorithmic transparency – is it even possible?

Could you explain to a lay person how this network makes decisions?

The Science and Technology Select Committee here in the UK have launched an inquiry into the use of algorithms in public and business decision making and are asking for written evidence on a number of topics.  One of these topics is best practice in algorithmic decision making, and one of the specific points they highlight is whether this can be done in a ‘transparent’ or ‘accountable’ way1.  If there were such transparency then the decisions made could be understood and challenged.

It’s an interesting idea.  On the surface, it seems reasonable that we should be able to understand the decisions in order to verify and trust the algorithms, but the practicality of this is where the problem lies. Continue reading Algorithmic transparency – is it even possible?

Internet of Things: Making a Smart Home

The Hive thermostat: it’s a lot prettier than my old one.

I grew up reading and watching Sci-Fi. As a child with an Acorn Electron, the idea of smart, interactive devices seemed far future rather than near future. I loved the voice interactivity and things ‘just working’ without needing to be controlled. When I got my Echo Dot last year, I knew this would be the start of a journey to upgrade my house to a SmartHome and truly be part of the Internet of Things. It’s been four months now and I’ve got a setup with which I’m pretty happy.  Here’s what I chose and why… Continue reading Internet of Things: Making a Smart Home

Diagrams with LaTeX – easier than you might think

Diagrams like this are easy to do in LaTeX

I’ve written before about the power of literate programming, using \LaTeX to create reports as code runs.  It’s fairly simple to combine this with the graphical drawing packages to create impressive graphs and figures on the fly.  A lot of academics I’ve spoken to have shied away from using \LaTeX for drawing, despite being very proficient with the textual layout.  Similarly, a lot of students on the OU Mathematics degree write up all of their assignments in \LaTeX but drop in hand-drawn graphs and diagrams.  Just like anything else in \LaTeX, once you get your head around how it works, it’s actually not that difficult to create very complex structures. Continue reading Diagrams with LaTeX – easier than you might think
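As a taste of how little is needed, here is a minimal sketch using the tikz package; the nodes and labels are purely illustrative and not taken from the diagram above:

\documentclass{article}
% Minimal sketch: two boxed nodes joined by a labelled arrow with TikZ.
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \node[draw, rounded corners] (in)  at (0,0) {Input};
  \node[draw, rounded corners] (out) at (4,0) {Output};
  % the node and edge labels are typeset by LaTeX itself
  \draw[->, thick] (in) -- (out) node[midway, above] {process};
\end{tikzpicture}
\end{document}

Because the labels are typeset by \LaTeX itself, they automatically use the same font as the surrounding text, so diagrams and body text always match.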

WiDS2017: Women in Data Science Reading

Yesterday I had the great pleasure of being part of the global WiDS2017 event showcasing women in all aspects of data science.  The main conference was held at Stanford, but over 75 locations worldwide had rebroadcasts and local events, of which Reading was one.   In addition to spending a great evening with some amazing women, I was asked to speak on the career panel about my experiences and overall journey. Continue reading WiDS2017: Women in Data Science Reading

Learning Fortran – a blast from the past

Intro to Fortran – are you tolerably familiar with BASIC?

Over the weekend, I was clearing out some old paperwork and I found the notes from one of the assessed practical sessions at university.  Although I was studying biochemistry, an understanding of basic programming was essential, with many extra optional uncredited courses available.  It was a simple chemical reactions task and we could use BASIC or Fortran to code the solution.  I’d been coding in BASIC since I was a child1, so I decided to go for the Fortran option, as what’s the point in doing something easy… Continue reading Learning Fortran – a blast from the past

Anything you can do AI can do better (?): Playing games at a new level

Robot hands dealing cards
Image from BigThink.com

Learning to play games has been a great test for AI.  Being able to generalise from relatively simple rules to find optimal solutions shows a form of intelligence that we humans always hoped would be impossible.  Back in 1997, when IBM’s Deep Blue beat Garry Kasparov at chess1, we saw that machines were capable of more than brute-force solutions to problems.  20 years later2, AI has not only mastered Go, with Google’s DeepMind winning 4-1 against the world’s best player, and Jeopardy, thanks to IBM’s Watson; there have also been some great examples of game play with many of the games I grew up playing: Tetris, PacMan3, Space Invaders and other Atari games.  I am yet to see any AI complete Repton 2. Continue reading Anything you can do AI can do better (?): Playing games at a new level

Artificial images: seeing is no longer believing

Loom.ai can generate a 3D avatar from a single image

“Pics or it didn’t happen” – it’s a common request when you’re telling a tale that might be considered exaggerated.  Usually, supplying a picture or video of the event is enough to convince your audience that you’re telling the truth.  However, we’ve been living in the age of Photoshop for a while and it has (or really should!!!) become habit to check Snopes and other sites before believing even simple images1 – they even have a tag for images debunked due to photoshopping. Continue reading Artificial images: seeing is no longer believing

How to build a human – review

Gemma Chan, a real human and also now a real synth

Ahead of season 2 of Channel 4’s Humans, the channel screened a special showing how a synthetic human could be produced.  If you missed the show and are in the UK, you can watch it again on 4OD.

Presented by Humans actress Gemma Chan, the show combined realistic prosthetic generation with AI to create a synth, but also dug a little deeper into the technology, showing how pervasive AI is in the western world.

There was a great scene with Prof Noel Sharkey and the self-driving car where they attempted a bend, but human instinct took over: “It nearly took us off the road!” “Shit, yes!”.  This underlined the problem of delegating what could be life-or-death decisions – how can a car make moral decisions, and should it even be allowed to? Continue reading How to build a human – review

ReWork Deep Learning London 2016 Day 1 Morning

Entering the conference (c) ReWork

In September 2016, the ReWork team organised another deep learning conference in London.  This is the third of their conferences I have attended, and each time they continue to be a fantastic cross-section of academia, enterprise research and start-ups.  As usual, I took a large number of notes on both days and I’ll be putting these up as separate posts; this one covers the morning of day 1.  For reference, the notes from previous events can be found here: Boston 2015, Boston 2016.

Day one began with a brief introduction from Neil Lawrence, who has just moved from the University of Sheffield to Amazon’s research team in Cambridge.  Rather strangely, his introduction finished with him introducing himself, which we all found funny.  His talk was titled the “Data Delusion” and started with a brief history of how digital data has exploded.  By 2002, SVM papers dominated NIPS, but there wasn’t the level of data to make these systems work.  There was a great analogy with the steam engine, originally invented by Thomas Newcomen in 1712 for pumping out tin mines but hugely inefficient due to the amount of coal required.  James Watt took the design and improved on it by adding the condenser, which (in combination with efficient coal distribution) led to the industrial revolution1.   Machine learning now needs its “condenser” moment.

Continue reading ReWork Deep Learning London 2016 Day 1 Morning