Emerging technologies like AI and IoT are reshaping insurance – but not without nuance. We recently interviewed Carli Jacobs, CEO of Swiss Re Africa and Head of L&H Middle East & Africa, who offers a refreshingly grounded perspective on what’s worth automating, when AI makes sense, and why responsible AI should guide the way.

Q: How do you see emerging technologies like AI, blockchain, and IoT reshaping the future of insurance in the next five years?

A: The use of AI is relevant for us in shaping the world of automated underwriting (UW) and claims, though typically on the periphery of our solutions. This includes intelligent document processing, OCR, and NLP to extract value from unstructured data that can then be used in a structured environment, such as an automated UW solution like Magnum or UW Ease. But insurers have yet to embrace AI fully for making UW decisions, whether due to natural risk aversion or regulatory requirements such as EU law.
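To make that extraction step concrete, here is a minimal sketch of turning raw text recovered from a scanned application (e.g. via OCR) into structured fields that an automated UW engine could consume. The field names, regular expressions and the extract_uw_fields helper are illustrative assumptions only, not part of Magnum or UW Ease.

```python
import re

def extract_uw_fields(raw_text):
    """Toy illustration: pull a few structured fields out of unstructured
    application text so they can feed an automated UW process.
    Field names and patterns are illustrative assumptions only."""
    fields = {"age": None, "smoker": None, "sum_assured": None}

    age = re.search(r"\bage[:\s]+(\d{1,3})\b", raw_text, re.IGNORECASE)
    if age:
        fields["age"] = int(age.group(1))

    smoker = re.search(r"\bsmoker[:\s]+(yes|no)\b", raw_text, re.IGNORECASE)
    if smoker:
        fields["smoker"] = smoker.group(1).lower() == "yes"

    amount = re.search(r"sum assured[:\s]+([\d,]+)", raw_text, re.IGNORECASE)
    if amount:
        fields["sum_assured"] = int(amount.group(1).replace(",", ""))

    return fields

# Example: text as it might come back from an OCR step
sample = "Applicant age: 29\nSmoker: no\nSum assured: 250,000"
print(extract_uw_fields(sample))  # {'age': 29, 'smoker': False, 'sum_assured': 250000}
```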

Additionally, the use of AI has advanced considerably in the domain of conversational analytics. We use such a solution to process and analyse customer interactions across channels, leveraging Natural Language Understanding (NLU) and machine learning to drive operational efficiency, improve customer experience, boost sales and customer persistency, and strengthen adherence to compliance and quality standards. We don’t see this replacing human interactions, nor agents, but rather integrating with and complementing them.

IoT, and the concept of devices and apps to measure blood pressure or heart rate, has faced its own challenges due to disruptions in the customer journey – such as whether the customer has the necessary app or wants to download it just for an insurance application – and difficulties in gaining consent to access the data from the applicant or data platform owners.

In the end, the use of AI is a great way to take a mountain of unstructured data and make it usable in our modern platforms and processes. One must keep in mind, however, that sometimes the best solution may not involve AI at all.

The question is whether it’s worth the effort. For example, if you’re taking UW case data from 10+ years ago to train your LLM, how is that going to impact your customer portfolio and the risk you’re taking on? Case data from 10 years ago may not be relevant or a good source for training your LLM. Why not invest in solutions like Magnum, which would provide a steady supply of quality structured data without the potential AI challenges? On the other hand, every AI project is an opportunity to learn and accumulate a unique set of experiences and expertise in insurance risk.

Another important consideration is that, to foster trustworthy and beneficial usage of AI, insurers need to adopt a responsible AI (RAI) framework to steer the development and use of AI systems according to principles of fairness, transparency, accountability, robustness and privacy. RAI is not only about the technical aspects of AI, but also about the human-AI interaction and the alignment of AI with business objectives and societal values.

Q: What are the biggest challenges insurers face in accelerating digital transformation, and how can they overcome them?

A: Challenges include “analysis paralysis.” For years, AI has been looming on the horizon, so why invest in a “traditional” rules-driven solution in the meantime? There’s too much waiting for the next AI big bang and for someone else to do it first. Digital transformation requires more than just buying a single solution and often has a knock-on effect, requiring further system changes upstream and downstream of the new solution. As discussed in our recent publication “An expanded role for AI in predictive underwriting”, the next evolution in underwriting may entail multiple AI models working interactively at various stages in the process. A key determinant of user engagement with AI is whether users trust it. Earning trust, in turn, depends on whether AI tools are understandable, reliable and meet users’ expectations. Without those ingredients, AI users are prone to disengagement – a key challenge for insurers wanting to accelerate digital transformation. Indeed, workers are commonly reluctant to follow algorithmic advice, and as many as three in five managers indicate they are wary of trusting AI systems.

Another important dimension is resistance to change within organisations. We notice a reluctance to abandon familiar workflows and adopt new tools, particularly when these changes disrupt habits and ways of working.

Once users trust AI, it’s essential to maintain that trust. Research shows that once a new technology disappoints, regaining trust and re-engaging users is a challenge. Thus, a key to fostering long-lasting trust and engagement is to manage user expectations effectively.

To build trust, we must cultivate two behaviours:
(1) manage expectations by being operationally transparent, and
(2) clarify the role of humans in the human-technology relationship.

Q: Customer expectations are evolving rapidly. How can insurers leverage innovation to deliver more personalised, seamless, and efficient experiences?

A: Insurers have massive untapped potential to offer personalised customer experiences. As discussed in our recent publication, “Get claims smart: Using AI to transform your Life & Health insurance claims management”, AI is not necessarily the best solution for every case; in other cases, it’s unbeatable. A simple example from our Magnum experience: why ask someone who is under 30 and applying for a low sum assured the same number of questions, about the same medical conditions, as someone who is 50+? With a digital solution like Magnum, this is simple. With a traditional paper-based process, it means many more forms and more potential for error. Sometimes it’s the simple things that make a customer’s life easy: just making it relevant to them and making the process painless.

How can organisations scale new technologies to enhance user experiences? Carefully design and test human-technology workflows. For example, in some circumstances, consumers prefer anonymity over the human touch. A Swiss Re trial testing the effectiveness of different underwriting journeys found that a “humanised” chatbot reduced applicants’ perceived anonymity, undermining their willingness to share sensitive personal information. As a result, the typical online underwriting form had a 24% higher disclosure rate for mental health questions compared to the humanised chatbot. A test-and-learn approach can help insurers bring the human touch to the right moments in the customer’s interactions with them. Test and learn on a small scale before adopting at large scale to anticipate counterintuitive implications. As part of testing, apply good practices like “AI red teaming” – a practice used to evaluate the robustness and security of a system, where human testers try to identify issues (e.g. inaccuracy or bias) with the solution.
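As a rough illustration of that test-and-learn idea, the sketch below compares disclosure rates between two underwriting journey variants and checks how large the gap is relative to sampling noise. The applicant counts are made-up illustrative numbers, not figures from the Swiss Re trial.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Compare disclosure rates between journey variants A and B using a
    pooled two-proportion z-test. Returns both rates and the z-score."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Hypothetical counts: applicants who answered a sensitive mental health
# question in a standard online form (A) vs a "humanised" chatbot (B).
rate_form, rate_bot, z = two_proportion_ztest(620, 1000, 500, 1000)
print(f"form: {rate_form:.0%}, chatbot: {rate_bot:.0%}, z = {z:.2f}")
```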

Use conversational AI to better understand and enhance customer experience. It’s impressive how many insights can be drawn from customer interactions. By tracking patterns in frustration, complaints, and sentiment during conversations, insurers can gain a clearer understanding of customer satisfaction—and just as importantly, identify opportunities to improve processes and adjust product offerings to better meet evolving customer needs.
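A minimal sketch of the kind of pattern-tracking described here: scanning interaction transcripts for frustration and complaint signals and aggregating them per channel. The keyword lists and the summarise_interactions helper are illustrative assumptions; a production conversational-analytics solution would rely on NLU models rather than keyword matching.

```python
from collections import defaultdict

# Illustrative signal words; a real solution would use NLU models instead.
FRUSTRATION_TERMS = {"frustrated", "annoyed", "waiting", "again"}
COMPLAINT_TERMS = {"complaint", "cancel", "refund", "unacceptable"}

def summarise_interactions(interactions):
    """Aggregate frustration and complaint signals per channel from
    (channel, transcript) pairs. A toy keyword-based proxy only."""
    stats = defaultdict(lambda: {"count": 0, "frustration": 0, "complaints": 0})
    for channel, transcript in interactions:
        words = set(transcript.lower().split())
        s = stats[channel]
        s["count"] += 1
        s["frustration"] += bool(words & FRUSTRATION_TERMS)
        s["complaints"] += bool(words & COMPLAINT_TERMS)
    return dict(stats)

sample = [
    ("call", "I have been waiting for weeks and I am frustrated"),
    ("chat", "thanks that resolved my query quickly"),
    ("call", "this is unacceptable I want to cancel"),
]
print(summarise_interactions(sample))
```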

Apply a ‘responsible AI’ (RAI) framework from the point of design through to development and testing. By integrating RAI principles such as fairness, transparency and explainability, robustness, accountability and privacy, organisations can reduce their exposure to downside risks.

Applying responsible AI principles can also help organisations meet regulatory and legal requirements.

