Article
5 min read
Richard Pugh

With artificial intelligence (AI) capabilities developing at speed, industries that have historically struggled with adoption now face transformative opportunities. Agentic AI – which uses autonomous AI agents to streamline complex workflows – offers significant advantages over generative AI in accuracy and regulatory compliance. This makes it particularly valuable for healthcare and life sciences organisations, which have been cautious in adopting AI due to stringent regulations.

 

However, fully understanding how this new, reflective form of AI will impact industries can be challenging. Here, our Global SVP, Head of Data and AI, Richard Pugh answers common questions about the technology.  

 

What is the difference between generative AI and agentic AI?  

 

Generative AI uses large language models (LLMs) trained on vast datasets to generate outputs. The capabilities of generative AI have been truly inspiring. However, the very nature of LLMs means generative AI can produce inaccurate results, or ‘hallucinations’.

 

Agentic AI is a cutting-edge approach that allows us to create autonomous AI ‘agents’ that can collaborate on complex tasks. We can give these AI agents specific roles and constraints, while also allowing them to use tools to complete certain tasks. This creates a powerful capability that can automate a range of sophisticated workflows.

 

For example, by having agents check each other's work we can identify errors and significantly reduce the potential for hallucinations, creating more accurate and reliable outputs. 
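This review pattern can be sketched in a few lines of code. The sketch below is purely illustrative, not Richard's actual system: the ‘worker’ and ‘reviewer’ agents are plain Python functions standing in for LLM calls, and all names (`worker_agent`, `reviewer_agent`, `run_with_reflection`) are hypothetical.

```python
# Minimal sketch of the reflection pattern: one agent drafts an output,
# a second agent reviews it, and unresolved concerns are flagged for a
# human. The agent functions are stand-ins for real LLM calls.
from dataclasses import dataclass, field

@dataclass
class Result:
    output: str
    approved: bool
    concerns: list = field(default_factory=list)

def worker_agent(task: str) -> str:
    # Stand-in for an LLM generation step.
    return f"DRAFT RESPONSE for: {task}"

def reviewer_agent(task: str, output: str) -> list:
    # Stand-in for a second agent that checks the output from several
    # perspectives; an empty list means the output passes review.
    concerns = []
    if task.lower() not in output.lower():
        concerns.append("output does not address the task")
    if "DRAFT" in output:
        concerns.append("output still marked as draft")
    return concerns

def run_with_reflection(task: str, max_revisions: int = 2) -> Result:
    output = worker_agent(task)
    for _ in range(max_revisions):
        concerns = reviewer_agent(task, output)
        if not concerns:
            return Result(output, approved=True)
        # In a real system the worker would be re-prompted with the
        # reviewer's concerns; here we simply strip the draft marker.
        output = output.replace("DRAFT ", "")
    return Result(output, approved=False,
                  concerns=reviewer_agent(task, output))

result = run_with_reflection("summarise trial results")
print(result.approved, result.concerns)  # True []
```

The key design point is the loop: outputs only reach the end user after passing a separate review step, and anything that fails repeatedly is surfaced with its concerns attached rather than silently delivered.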

 

What would be the advantage of using agentic AI over generative AI in healthcare and life sciences? 

 

Generative AI alone is challenging because it can hallucinate, and it is up to a human to spot the mistakes. Agentic AI, by contrast, is powerful because it can ‘reflect’ – check the appropriateness or accuracy of outputs from several perspectives – and then flag any concerns to a human for further investigation.

 

For example, for one use case in which we created clinical code, our first attempt with generative AI had an accuracy of around 40%. With agentic AI our first attempt in production reached 93%, and we're now working on getting that to 100%. For me, this ability to reduce errors is an incredibly important advantage of agentic AI over generative AI. 

 

What are the challenges in integrating AI tools in clinical research and clinical diagnostics with respect to regulatory compliance requirements? 

 

Clinical regulations are critical to ensure patient safety, and so a cautious approach is essential. When we’re talking about AI, we’re often talking about cutting-edge techniques that are evolving at pace. 

 

Because of this, the challenges in AI adoption are largely about trust – how do we ensure these techniques don’t make mistakes that put patients at risk? 

 

This is where agentic AI techniques have an advantage over generative AI techniques, as they allow us to spot and resolve issues, while providing rich data to support transparency.  In the short term, these capabilities will be integrated with human QC processes to ensure patients are protected. 

 

Which agentic AI use cases are most popular right now?  

 

There is a real focus right now on cost reduction through effective intelligent automation. Previously, tools like robotic process automation (RPA) could automate processes, but they are very limited and can’t handle anything ‘non-standard’. The benefit of an agentic approach is that the agents can use reasoning and reflection to automate sophisticated processes.

 

In a recent webinar, I shared examples including the automation of code creation to analyse clinical trials, which could save as much as $20 million a year, and the review of medical records for an insurance company, saving around 80% of costs.

 

I've seen lots of other use cases around things like back-office automation, customer/patient workflows and more. The good news is that the savings tend to be high due to the nature of the technology.  

 

It also helps companies to scale at speed. I was at a recent OpenAI Exec Summit, and it was highlighted that by using agentic AI, companies of 100 employees can do the work of companies with 10,000 employees. Jensen Huang, CEO of NVIDIA, recently said that he wanted to turn his company from ‘a company of 32,000 human employees, to a company with 50,000 human employees and 100 million AI employees working together’. 

 

I think this gives a sense of the direction of travel with agentic AI, and the potential for human-augmented AI in the workplace. 

 

Do you have any examples of this happening already? 

 

Yes, we’ve seen businesses thinking innovatively about how to achieve this. For example, one insurance company wants to create an agent to represent each customer, giving it a virtual customer base and a white-glove service for every customer – despite having an employee-to-customer ratio of 1:800.

 

Another example is a media department in pharmaceuticals looking to use agentic AI to generate personalised and compliant marketing content at scale.  

 

Of course, doing things smarter can lead to revenue uplift and margin optimisation, but the cost and scale focused use cases tend to be more popular at the moment. 

 

What are the weaknesses of agentic AI that businesses should be aware of? 

 

Despite the impressive results, there are a few weaknesses that stem from the fact that agentic AI is still in its infancy and maturing rapidly, even though we already have a few use cases in production. There are two main areas of concern:

 

  • Speed of change
    The technology and capabilities are changing almost on a weekly basis. Exciting as this is, it can be difficult to know when to invest, and this increases the risk of building things that are obsolete by the time you finish development. I think one of the most important things we've done over the last year or so is to help companies develop in a way that protects them against this pace of change.

 

  • Autonomy of agents
    The real power of agentic AI is that the agents can autonomously perform actions. While this is fantastic, the way in which you grant that autonomy has to be carefully thought through. For example, if you're going to allow agents to email every customer in the database, you need to be confident they won't do something harmful. We've learned a lot about this over the last year and have managed to control behaviour using reflection, but we've also leaned towards human-in-the-loop approaches to ensure there is a human quality-control element. We can also begin by focusing on use cases that are quite narrow in terms of the actions we allow – for example, updating CRM systems with guardrails.
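To make the narrow-autonomy idea concrete, here is a small illustrative sketch (not any real system): the agent may propose whatever actions it likes, but a guardrail layer only executes a short whitelist of CRM field updates and routes everything else to a human reviewer. All action names and field names are hypothetical.

```python
# Sketch of an action guardrail: only whitelisted, narrow actions are
# executed automatically; anything else goes to human review.
ALLOWED_ACTIONS = {"update_crm_field"}
ALLOWED_FIELDS = {"phone", "address", "preferred_contact_time"}

def guardrail(action: dict) -> str:
    """Return 'execute' for whitelisted actions, else 'human_review'."""
    if action.get("type") not in ALLOWED_ACTIONS:
        return "human_review"
    if action.get("field") not in ALLOWED_FIELDS:
        return "human_review"
    return "execute"

# Actions an agent might propose: a routine CRM update, a sensitive
# field change, and a bulk email -- only the first runs automatically.
proposed = [
    {"type": "update_crm_field", "field": "phone", "value": "+44 7700 900000"},
    {"type": "update_crm_field", "field": "credit_limit", "value": "10000"},
    {"type": "email_all_customers", "body": "Hello..."},
]

decisions = [guardrail(a) for a in proposed]
print(decisions)  # ['execute', 'human_review', 'human_review']
```

The point of the pattern is that autonomy is granted per action type, not to the agent as a whole, so the scope of what can go wrong is bounded from the start and can be widened gradually as trust builds.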

 

 

To learn more about agentic AI for pharmaceuticals, watch below to hear Richard’s talk at NEXT Pharma, where he spoke about how this technology can drive efficiency and innovation for the industry. Or see Richard’s webinar with experts from Novartis and AstraZeneca.