Nobody can deny that artificial intelligence (AI) has taken the world by storm, sparking new discussions about what it means for future workforces and how people will be affected. But while it’s easy to fall down a rabbit hole of existential questions on whether humans will someday be rendered redundant, it’s important to take a step back and see these technologies for what they really are – tools that ultimately exist to serve human beings.
After all, development has never been about AI for the sake of AI; these tools have always been designed to make life quicker and easier for the humans who use them. Like any technology, AI at its core is about the problems we’re trying to solve with it. What’s more, these tools follow a natural progression, which means they won’t take over the world overnight. It also means there’s a gradual path ahead for us to shape the outcomes we all want to see – where AI works with people, not against them.
This is particularly important as adoption levels grow. The recent IDC report Leveraging the Human Advantage for Business Transformation, sponsored by Endava, found that half of the organisations surveyed have deployed AI or are currently running a proof of concept. As AI advances and implementations increase, this is our opportunity to shape what comes next – championing ways to empower humans to work more strategically and promoting the safe, responsible use of these tools.
Strength in acknowledging AI’s weaknesses
Changing workforce dynamics for the better will undoubtedly require a mindset shift so that employees embrace technologies like AI. Without this, businesses are likely to face resistance to adoption, stunting the technology’s impact and its benefits for people. And many organisations recognise that a change in approach is needed: the IDC report also revealed that 76% of organisations expect reskilling demands to increase due to the impact of automation and AI, and 64% have already set up formal upskilling programmes to support skill augmentation.
But rather than framing this as a need to upskill or even reskill, I see it as a need to ‘side skill’. Interfaces like ChatGPT are relatively straightforward to use and take little training to hit the ground running, but employees may not immediately see which of their processes these tools could transform. It’s therefore more powerful to train people on how to make themselves more effective, so they keep evaluating their work and searching for opportunities to move forward. This active engagement with AI tools is ultimately what will shape successful outcomes; without it, there’s a danger of passively accepting the outputs of large language models (LLMs) and missing issues.
Educating employees on the responsible use of AI is a huge part of this – not just ensuring that compliance standards are met (whether self-regulated or government-regulated), but also fostering a challenge culture. Humans have a key role to play in coordinating AI and stepping in to combat inaccuracies, but to do so, they need to be able to recognise biases within models. LLMs are never flawless, but if we can build a challenge mindset and an awareness of different biases, then we can keep questioning potential issues and improving the outputs.
This is a mindset we’re passionate about as a team, and part of fostering it is embracing continuous learning and development so that we can supervise and direct tools like LLMs effectively. It’s about leveraging the uniquely human skill of critical thinking, as well as calling on a diverse set of people to interact with these models so that errors don’t become reinforced. And since sectors outside the IT industry often take a more static approach to learning and development, we also help our clients adopt this continuous cycle of learning in their culture, so they too can take advantage of what these technologies offer.
Aside from a human-in-the-loop approach, other tools can also help businesses reach better outputs and raise awareness of risks. In fact, our teams have been working on a tool that summarises documents to speed up internal processes. But the output isn’t simply delivered as the truth of what a document covers – instead, the tool attaches a probability percentage indicating how confident we can be in its accuracy. The score is based on whether the source document is objectively fact-based and straightforward, or more multifaceted and full of subjective human opinions. It brings to light the factors that may influence quality and encourages stakeholders to think critically, rather than taking the output at face value.
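I can’t share our tool’s internals here, but as a rough illustration of the idea, here is a minimal Python sketch of one way such a confidence score could be derived. Everything in it – the lexicon-based subjectivity heuristic, the marker list, the scaling factor and the function name – is an assumption for illustration, not our production implementation.

```python
import re

# Illustrative sketch only: the marker list, scaling factor and function
# name are assumptions, not the production tool's implementation.

# A real system would likely use a trained subjectivity classifier;
# a hand-picked lexicon keeps this example self-contained.
SUBJECTIVE_MARKERS = {
    "believe", "feel", "think", "should", "arguably", "probably",
    "seems", "opinion", "likely", "unfortunately", "hopefully",
}

def confidence_score(document: str) -> float:
    """Estimate how confident we can be in an automated summary of `document`.

    Heuristic: the more subjective language the source contains, the less
    its summary can be treated as objective fact. Returns a percentage.
    """
    words = re.findall(r"[a-z']+", document.lower())
    if not words:
        return 0.0
    subjective = sum(1 for word in words if word in SUBJECTIVE_MARKERS)
    # Scale the subjectivity ratio so a modest density of opinion markers
    # already drags confidence down noticeably, capping the penalty at 100%.
    penalty = min(subjective / len(words) * 3, 1.0)
    return round((1.0 - penalty) * 100, 1)

if __name__ == "__main__":
    factual = "The committee met on Tuesday and approved the budget."
    opinion = "I believe the findings are probably overstated and should be challenged."
    print(confidence_score(factual))  # 100.0: purely declarative statements
    print(confidence_score(opinion))  # 18.2: dense with subjective markers
```

In practice, a trained subjectivity or factuality classifier would replace the hand-picked word list, but the design principle stays the same: surface a signal about the source material alongside the summary, so the human reviewing it knows how much scrutiny to apply.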
Teaming people up with technology
Humans are hugely important both in shaping AI solutions and in drawing the greatest value from AI’s outputs. Businesses recognise this too, with the IDC report showing that 51% say retaining human influence on their AI use is very or extremely important. Beyond the obvious productivity and efficiency benefits, tools like generative AI can be a real source of inspiration for the likes of the creative industries. Whether it offers starting points in the form of art, words or images, individuals can refine them and add value through the very human quality of ‘taste’ that can’t be coded. For example, if generative AI is used within an image or movie scene, humans are needed to shape the output so that it looks and feels ‘right’ based on lived experience.
Ultimately, even as we look to the future and ask whether we’ll advance towards artificial general intelligence, we will still need humans to coordinate these tools, supervise them and collaborate with them. Becoming allies with AI means being able to spot its weaknesses and apply a strategic mindset to its implementation and outputs – whether that’s understanding exactly how to develop technologies for a specific audience or intervening when biases appear. Because when we’re active participants in these technologies, we can use them to do good for people and the world around us.