Explainable AI – What’s the B2B Relevance?

Explainable or Interpretable AI (XAI) is about humans being able to understand the result, output or solution generated by an Artificial Intelligence engine that has been developed by technical specialists and data scientists.

Its three core principles are transparency, interpretability and explainability.

You could say it’s about turning the ‘black box’ (an increasingly uncomfortable term) into a ‘glass box’ that explains why and how a particular decision was reached.

I recently saw an excellent example of XAI in the “myWimbledon” app that IBM further developed for the 2022 championships. As part of the upgrades for an even better fan experience, its ‘Match Insights’ feature included a Watson-powered AI prediction of who would win each match. What ignited my interest was not whether the prediction was right, but that it was explainable.
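A toy sketch of what “explainable” can mean in practice: a hypothetical linear model whose prediction decomposes into per-feature contributions, so the output comes with its reasons attached. All feature names and weights below are invented for illustration; this is not IBM’s actual model.

```python
# A minimal 'glass box' sketch: a linear model whose prediction is the sum
# of per-feature contributions, so every output can be explained.
# Feature names and weights are hypothetical, for illustration only.

def explain_prediction(weights, features):
    """Return (prediction, per-feature contributions) for a linear model."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = sum(contributions.values())
    return prediction, contributions

# Invented tennis-flavoured features, echoing the match-prediction example.
weights = {"first_serve_pct": 0.6, "break_points_won": 0.3, "aces": 0.1}
features = {"first_serve_pct": 0.7, "break_points_won": 0.5, "aces": 0.4}

score, why = explain_prediction(weights, features)
print(f"win likelihood score: {score:.2f}")
for name, contribution in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{contribution:.2f}")
```

The point is the decomposition, not the model: because each contribution is visible, an account manager can say *why* the score came out as it did, rather than pointing at an opaque number.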

XAI is about more than just understanding, however. It’s about TRUST. As IBM puts it, “Explainable AI is crucial for an organization in building trust and confidence when putting AI models into production”.

Most examples of XAI concern trust in a consumer or citizen (B2C) context, e.g. credit decisions, but it struck me that explainability will increasingly be required in B2B as well, where contracts are bigger and more complex, and personal interactions still matter tremendously.

My colleague Dr Mark Hollyoake has produced a groundbreaking new definition of trust in B2B:

“The willingness to be vulnerable to another party and the decision to engage in actions based upon an interpretation of their ability, credibility and the expectations of mutual value exchange over time”

(Hollyoake M, 2020)

I would highly recommend that executives take this definition seriously when implementing new AI models in a B2B setting, e.g. for streamlining supply chains, automating customer interactions, or detecting fraudulent claims. Will your account managers and key account managers be equipped to explain the AI-generated results, outputs or solutions in a way that demonstrates ability, credibility and mutual value?

You can also read IBM’s case studies on the subject at https://www.ibm.com/uk-en/watson/explainable-ai.



Are you working with AI and want to know how to build Trust into the models? Then talk to us – let us help you build Trust into your ways of working.