18 October 2018 by Spencer Symmons
Amazon is not having a good time of it lately. From poor working conditions in its warehouses to manipulating reviews, it seems barely a day goes by without another negative headline and everyone from Bernie Sanders to eBay getting in line to lodge a complaint.
In the midst of this, claims have surfaced that Amazon has had to scrap an AI-based recruitment tool because it taught itself that men would make better candidates.
According to a survey from CareerBuilder, 55 per cent of HR managers in the US said that artificial intelligence would be a regular part of their work within the next five years, a trend that is bound to carry over to the UK.
So, will we ever be able to build an AI recruitment tool that doesn’t carry bias?
Pioneers of AI technology still seem to think so. A huge draw of using artificial intelligence in recruitment has been its promise to remove the unconscious bias which humans bring to the process, but Amazon hasn't been the only business to produce an AI tool with prejudice.
Google’s image recognition software, Photos, identified black people as ‘gorillas’. Voice command in cars failed to recognise female voices. Facebook’s translation tool confused the Hebrew for ‘good morning’ with ‘attack them’, leading to the arrest of a Palestinian worker.
The problem is that AI is created by humans, and all humans inherently hold some form of bias. It may be unconscious, it may be unwitting, but it is there.
Take a look at the various virtual assistants currently available. The ones performing basic tasks – Apple’s Siri, Amazon’s Alexa, Google Assistant, Microsoft’s Cortana – all have default female voices. Those created for more complex and computational functions, like IBM’s Watson and Salesforce’s Einstein, have male voices.
What we also have as humans, however, is the ability to check our bias. We can try to become conscious of unfair preconceptions and challenge our own thinking. And if we cannot teach machines to do the same, we must change our methodology.
Amazon’s tool learned to discriminate against women because more men had applied for the roles, so there was more male data available. The data in this case came from individual CVs, and perhaps therein lies the problem.
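To see how this happens mechanically, here is a minimal sketch in Python using entirely synthetic data. The hiring history, the feature names (such as the `womens_society` CV token) and the hire rates are all invented for illustration; this is not Amazon's system, just a toy scorer showing how a model trained on a skewed history penalises a marker of gender rather than a marker of ability.

```python
# Minimal sketch (pure Python, synthetic data) of how a CV screener can
# absorb bias from imbalanced historical data. All data, token names and
# rates below are invented for illustration.

import random

random.seed(0)

def make_history(n=10_000):
    """Simulate past applications: 80% male, and (in this synthetic
    history) men were hired at a higher rate for the same skill."""
    rows = []
    for _ in range(n):
        male = random.random() < 0.8
        skill = random.random()  # true ability, 0..1
        # Biased historical outcome: same skill, different hire rates.
        hired = random.random() < (0.5 * skill + (0.3 if male else 0.1))
        # The screener only sees CV tokens; "womens_society" is a proxy
        # token that (here) appears mostly on female applicants' CVs.
        tokens = {"python", "teamwork"}
        if not male and random.random() < 0.7:
            tokens.add("womens_society")
        rows.append((tokens, hired))
    return rows

def token_weights(rows):
    """Naive scorer: a token's weight is the hire rate among CVs that
    contain it, minus the overall hire rate. Positive = 'good' token."""
    base = sum(h for _, h in rows) / len(rows)
    weights = {}
    for token in {t for toks, _ in rows for t in toks}:
        with_t = [h for toks, h in rows if token in toks]
        weights[token] = sum(with_t) / len(with_t) - base
    return weights

rows = make_history()
w = token_weights(rows)
# The proxy token ends up with a clearly negative weight: the model has
# learned to penalise a correlate of gender, not a measure of ability.
print({k: round(v, 3) for k, v in sorted(w.items())})
```

Nothing in the scorer mentions gender, yet the `womens_society` token comes out with a negative weight, because in the skewed history the people carrying it were hired less often. That is the essence of the reported problem: the bias lives in the training data, not in any explicit rule.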
Recruiters have used CVs to formally screen candidates since the 1950s. The sheer volume of sites dedicated to crafting the ‘perfect CV’ stands as testament to how difficult it can be to prove one’s worth in a single two-pages-at-an-absolute-maximum document.
Younger people are increasingly asking for new ways to demonstrate their skills – over half of 18- to 24-year-olds were put off from applying for jobs because they were concerned about the quality of their CVs. Recruiters, too, are tired of CVs, spending an average of just seven seconds reading one before making a judgement.
So, rather than use artificial intelligence to review data which is fast becoming old hat, why not use it to build a new candidate screening system which prioritises skills, talent and expertise over experience and university pedigree?
In that way, the AI can learn to approve the applicant who has best demonstrated problem-solving, communicative or creative ability, rather than the one with a Y chromosome.
Technology has the power to advance us in unknown ways, but during that development we must acknowledge the processes which are no longer relevant. Perhaps by ditching the age-old CV, we can reduce the inherent bias in the system.