June 28, 2019
A couple of weeks ago I attended Talent & AI, a London Tech Week conference hosted by recruitment firm Penna about the applications of artificial intelligence (AI) in business and their ethical implications.
London Tech Week is an annual event celebrating creativity and innovation, as well as technology’s impact on key sectors. At the conference, hosted by Adecco Group’s Penna, four industry professionals and academics discussed the benefits and risks of implementing AI in recruitment and HR, asking two key questions: what exactly does an ethical approach to AI entail, and how can it be enforced?
AI is becoming increasingly prevalent across industries and in our everyday lives. It can liberate us from tedious tasks and can collect and condition data to enhance user experience. But is it always ethical?
The human touch?
One might suppose that ‘artificial intelligence’ and ‘human resources’ have little to do with each other. ‘Human’ sounds personal and intuitive, whilst ‘artificial’ sounds cold and mechanical. Yet human value should, in fact, sit at the forefront of how we experience technology. The conference proved invaluable in helping me realise how much progress AI could deliver if implemented properly, but also what undesirable consequences could follow if it is exploited or abused.
Dr Allison Gardner, lecturer at Keele University and co-founder of Women Leading in AI, explained that a benefit of AI in the recruitment process is its potential to reduce discrimination in what she calls a ‘systematically biased society’. By using AI to find talent based on skills and experience, hiring processes can become not only more efficient, but also free of conscious and unconscious discrimination and bias.
However, blind reliance on AI can also prove detrimental and unintentionally amplify biases. Campbell Shaw, head of bank relationships at Cardlytics, noted that algorithms ‘need human parents’: there will always be a human factor in the creation of an intelligent system. For this reason, unconscious bias will inevitably seep in, even with an unbiased dataset, unless it is dealt with beforehand. After all, an intelligent system will only do what it has been programmed to do.
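One common way such bias is checked for in practice is to compare selection rates across candidate groups, for instance against the ‘four-fifths rule’ used in US employment guidance, under which any group’s selection rate should be at least 80% of the highest group’s. The sketch below is a minimal, hypothetical illustration of that check; the group labels and numbers are invented for the example, not taken from the conference.

```python
# Hypothetical sketch: auditing a screening tool's outcomes for adverse
# impact using the "four-fifths rule". All group names and figures below
# are illustrative assumptions.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its selection rate is at least
    `threshold` times the highest group's rate (True = passes)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top >= threshold for group, rate in rates.items()}

# Illustrative shortlisting results from a hypothetical AI screening tool.
results = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(results))  # group_b's rate (0.30) is only ~67% of group_a's (0.45)
```

A check like this only surfaces a disparity; deciding why it arises, and what to do about it, is exactly the ‘human parent’ work the panel described.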
Analysing behaviour and personality
A concept that struck me was ‘cybersnooping’ on potential candidates. To narrow down the candidate pool, intelligent systems monitor and collect vast amounts of data on each candidate - be it their Twitter posts, Spotify activity, or Instagram pictures - with the goal of analysing an individual’s behavioural patterns and personality traits. This raises both privacy and moral issues. Furthermore, these bots show no mercy in this respect, lacking inherently human capabilities such as empathy and compassion, which could greatly influence the final decision.
What I ultimately gathered from the conference is that AI is not just about saving time and money, but about diversity and inclusion. Its limitations lie in a lack of regulation and in incomplete strategies: utilising tech for tech’s sake is not an optimal solution.
In an era where AI is a controversial subject and where ‘technologically agnostic’ companies are sceptical of its game-changing potential, it is imperative that recruiters lacking experience with data analytics are able to judge for themselves when AI is or isn’t needed. Be it generating revenue, saving lives, or simply being efficient, it’s important for companies to keep their minds set on their objectives, and to understand that artificial intelligence is just as enabling as it can be disruptive.