From the interviewer’s side of the table

I’m currently building a team for my new secret project, and far more of my time than I’d like is spent on the recruitment process. However, every minute of that time is essential, and we’re at a point where none of it can be handed off to an agency even if I wanted to. So getting the recruitment process right is critical.

One of the basic principles of management in any industry is that if you set metrics for your team, they will adapt to maximise those metrics: set a minimum number of bugs to be resolved and you’ll find the easy ones get picked off; set an average number of features and you’ll find everything held together with string; set too many metrics to cover all bases and you’ll end up with none of them hit and a demoralised (or non-existent) team. The same is true of recruitment – you will end up hiring people who pass whatever recruitment tasks you set, not necessarily the type of person the company needs. While this may appear obvious, think back to the last interview you were at, either as the interviewer or interviewee – how much relation did the process really have to the role?

When I started recruiting for my new team, I knew I had neither the time nor resources to make mistakes.  I had to get this right first time. Continue reading From the interviewer’s side of the table

AI for image recognition – still a way to go

Result from IBM Watson using images supplied by Kaptur

There’s a lot of money, time and brainpower going into various machine learning techniques to take the aggravation out of manually tagging images so that they appear in searches and can be categorised effectively. However, we are strangely fault-intolerant of machines when they get it wrong – too many “unknowns” and we’re less likely to use the services, but a couple of bad predictions and we’re aghast at how bad the solution is.

With a lot of the big players coming out with image categorisers, there is the question as to whether it’s really worth anyone building their own when you can pay a nominal fee to use the API of an existing system. The only way to really know is to see how well these systems do “in the wild” – sure, they have high precision and recall on the test sets, but when an actual user uploads an image and frowns at the result, something isn’t right. Continue reading AI for image recognition – still a way to go
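As a quick aside on what those test-set numbers actually measure, here’s a minimal sketch (purely illustrative, not any vendor’s API) of computing precision and recall for a set of predicted image tags against a ground-truth set:

```python
def precision_recall(predicted_tags, true_tags):
    """Compare a categoriser's predicted tags for an image with the ground truth.

    Precision: what fraction of the predicted tags were correct.
    Recall:    what fraction of the true tags were actually predicted.
    """
    predicted, truth = set(predicted_tags), set(true_tags)
    correct = predicted & truth
    precision = len(correct) / len(predicted) if predicted else 0.0
    recall = len(correct) / len(truth) if truth else 0.0
    return precision, recall

# e.g. a categoriser tagging a photo of a dog on a beach
print(precision_recall(["dog", "sand", "car"], ["dog", "beach", "sand", "sea"]))
# (0.666..., 0.5) - two of the three predictions were right, half the true tags were found
```

High averages of these numbers over a curated test set can still coexist with the occasional embarrassing prediction that puts a real user off for good.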

MS221: Illidan was wrong

Illidan defeated, I was prepared 🙂

So the results are starting to come out for the OU exams taken in June. Those who were on their last module have got their final degree classification, and the rest of us are getting our individual module scores. Despite not being due for another eight days, the results for MS221 came out today.

If you’ve been following my blog you’ll know that I hadn’t focused on studying for this module as much as I should have, and with a new role taking up my evenings and weekends I just hadn’t revised properly. I even took my textbooks to the ReWork DL conference in Boston but only opened them briefly on the return flight. So how did I do?

Continue reading MS221: Illidan was wrong

So I wrote my first Python script

Today I sat down and wrote code in Python, from scratch, with intent(!), for the first time… and it was pretty easy. After spending some time trying to alter other people’s code and feeling like I was wading through treacle, writing something from scratch let me see how easy Python really is.

While I’m not making any great statement about my own code architecture, diving into something complex can be an inefficient way to learn unless the code you’re looking at is designed to be followable at entry level. When you’re experienced in other languages this can be even more frustrating, as it’s easy to skip over the parts you assume you know and suddenly find you’ve skipped slightly too much. Continue reading So I wrote my first Python script
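For a flavour of what “from scratch, with intent” looks like at entry level, here’s a minimal illustrative script in the same spirit (not the actual script I wrote) that counts the most common words in a text file:

```python
from collections import Counter
import sys

def word_counts(path):
    """Count how often each word appears in a text file."""
    with open(path) as f:
        words = f.read().lower().split()
    return Counter(words)

if __name__ == "__main__":
    # usage: python word_counts.py somefile.txt
    for word, count in word_counts(sys.argv[1]).most_common(10):
        print(f"{word}: {count}")
```

Run it with a text file as its argument and it prints the ten most frequent words – a handful of lines with no boilerplate, which is exactly what makes Python feel so approachable for a first script.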

Facebook’s latest deep learning research

Images auto-generated by Facebook’s deep learning system

A few days ago, researchers from Facebook published a paper on a deep learning technique for creating “natural images”: human subjects were convinced 40% of the time that they were looking at a real image rather than an automatically generated one. When I saw the tweet linking to this, one of the comments indicated that you’d “need a PhD to understand” the paper, and thus to make any use of the code Facebook may release.

I’ve always been a big believer in knowledge being accessible: not just freely available (as their paper is), but understandable by anyone who wants to grasp the concepts presented, even without extensive training in that specialism. So, as someone who does have a PhD and works in the deep learning space, challenge accepted – here I’ll discuss what Facebook have done in a way that doesn’t require an advanced degree, just a healthy interest in the field. Continue reading Facebook’s latest deep learning research

Python: serious language or just for beginners?

Python logo and code sample

Two months ago I hadn’t looked at a line of Python code – it was never a requirement when I was a developer, and as I moved into management I worked with teams and projects using everything from C and COBOL through LAMP to .Net, while Python sat on the periphery. I’d always considered it a modern BASIC – something you used to learn how to code or to knock up a quick prototype, but not something to be taken seriously in a professional environment.

I’ve always believed that really good programmers understand the boundaries and strengths of multiple languages, can choose the right tool for the job, and can find the right compromise between consistency and maintainability. People like this are really hard to find, although I do tend to veer away from individuals who can only evangelise a single language and dismiss all the others as rubbish. Due to the projects I’ve been involved with, Python ability has been irrelevant and never considered part of that toolbox. Continue reading Python: serious language or just for beginners?

Whetlab and Twitter

Whetlab joins Twitter

At the ReworkDL conference in Boston last month I listened to a fantastic presentation by Ryan Adams of Whetlab on how they’d created a business to add some science to the art of tuning deep learning engines. I signed up to participate in their closed beta and came back to the UK very excited to use their system once I’d got my architecture in place. Yesterday they announced that they had signed a deal with Twitter and that the beta would be closed. I was delighted for the team – the business side of me is always happy when a start-up is successful enough to get the attention of a big corporate – although I was personally gutted, as it means I won’t be able to make use of their software to improve my own project.

This, for me, is a tragedy. Continue reading Whetlab and Twitter

Microsoft HoloLens and backwards compatibility

It’s been a while since I’ve had the time to dedicate to gaming in the way I’d like. My Xbox 360 is languishing unloved, my World of Warcraft account has been dormant since I got the Insane achievement in Cataclysm, and the most I can manage now is the odd game of Hearthstone on my Nexus 7. I still keep an eye on what’s going on, as the gaming industry is pushing the development of a lot of technologies that make their way into our lives in one way or another, and I always hope that maybe something will give me the free time to immerse myself in some of the latest games again.

I was watching the announcements for Xbox at E3 as they popped up on Twitter this evening, and two things really stood out for me.

Continue reading Microsoft HoloLens and backwards compatibility

3D Printer part 6: x-axis timing belt and motor

At the end of my last post, we had started to build the x-axis assembly and got both shafts and the bearing block in place. This post looks at adding the timing belt and the x-axis motor, covering issues 20 to 23 of 3D Create and Print by Eaglemoss Technology. If you’ve skipped a part of this series, you can start from the beginning on my 3D printer page, which also has details of the Vector 3 printer I’m building.

You’ll need to dig out the motor test circuit board and AC adaptor to complete these steps, and they are very similar to those for the y-axis so should be fairly quick to go through. If you’re a subscriber then this drop also comes with four packs of filament in different colours, which you’ll just have to keep somewhere safe until we’re in a position to use them.

Continue reading 3D Printer part 6: x-axis timing belt and motor

MS221 – was Illidan right?

So, a few days ago I tweeted that I had this snippet from World of Warcraft going round my head where Illidan taunted that we weren’t prepared for what awaited us. It was how I felt going into MS221, and now that I’ve sat the exam I wanted to reflect on why I’d ended up feeling unprepared for a test in a subject I’m very enthusiastic about, for a degree I’m doing for no direct gain other than the fun of learning.

I started this degree back in 2013 because I was intellectually unstimulated in my job. I was busy, spinning many plates, and wasn’t bored, but there just wasn’t anything to do that really set my neurons firing. I’d started the process of looking for another job for a whole host of reasons I won’t go into, but I could feel my brain getting “comfortable” with not having to think much beyond which of my team needed to do which task in what order as priorities changed. So I signed up to do the maths degree I’d always wished I’d done. Continue reading MS221 – was Illidan right?