I’ve been very fortunate to have worked at Mars for more than 22 years, across several countries and a variety of technologies, including large SAP implementations and the deployment of global collaboration capabilities to more than 45,000 Associates.
For the last 3.5 years, I have been in a unique position, building up an internal team of data scientists chosen from broad backgrounds, including but not limited to a mathematician, a data scientist, a bioinformatician, and my own physics PhD background.
The scope of work that we undertake is broad and spans several Mars business units and functions, but one common factor is trust. How do we get people to trust the results from machine learning?
Without trust, people won’t use machine learning at all, or won’t use it as fully as they could. Here at Mars, we envisage humans and artificial intelligence (AI) working together, giving us the ability to leverage the best of both worlds.
So to help us gain that trust, we approach the problem from a number of different directions.
“The first part of trust is trusting that the problem being solved needs solving”
There is no one-size-fits-all approach, since each problem we solve has unique aspects. Being able to see where an algorithm is looking can be useful when a model’s outputs are visual, whereas in other situations it can be more important to see how the different input values were weighted for a specific numerical output.
First and most importantly, we must identify and ask the right question, and that requires a process called design thinking, applied with key user groups.
Here at Mars, we apply design thinking to find the right problem, which then forms the basis of our digital flywheel: find the right problems to solve, use data, analytics and AI to solve them, and then automate and scale the solutions to free Associates’ time to find more problems to solve.
To support this, we have trained more than 25,000 Associates across Mars in design thinking, and we have a thriving online community of practice. The first part of trust is trusting that the problem being solved needs solving. The next component of trust is co-creation: working with domain experts, whatever the problem, and collaborating on quick sprints to test the validity of solutions, leveraging that expertise to understand the data and to ensure the outputs are comprehensible and actionable.
When we implemented an AI-based system to identify mitotic features for Mars Antech pathologists, we built it so that pathologists could see what we had identified as mitotic features and quickly validate the count presented manually.
The first image below shows the view a pathologist receives for a fully counted slide sample, and the next image shows a zoomed-in view of how we identify each feature for the pathologist’s validation.
For other work we have done, we use very different techniques. In veterinary X-ray diagnosis, we use heat maps to show what the model saw when it identified a finding as present, and for non-visual data we use a variety of techniques, a key one being Shapley values.
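One common way to build that kind of heat map, sketched below with a toy stand-in for a real classifier, is occlusion sensitivity: mask out one patch of the image at a time and record how much the model’s score drops. This is an illustrative technique and toy `score` function, not necessarily the exact method used in the X-ray work.

```python
import numpy as np

def score(image):
    # Toy stand-in for a classifier's confidence: this "model" simply
    # responds to brightness in the top-left quadrant of the image.
    return float(image[:8, :8].sum())

def occlusion_heatmap(image, patch=4):
    """Heat map of score drops when each patch is masked out."""
    base = score(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            # A large drop means the model relied on this region.
            heat[i // patch, j // patch] = base - score(occluded)
    return heat

image = np.zeros((16, 16))
image[:8, :8] = 1.0  # a bright "finding" in the top-left
heat = occlusion_heatmap(image)
```

Here the heat map is non-zero only over the top-left patches the toy model actually relies on, which is exactly the evidence a clinician wants to see overlaid on the original image.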
We use Shapley values to decompose a result into its contributing factors, so the person using the output can see why a value is what it is and how much each factor contributed to it. For example, if a model is built to predict house prices, understanding which features contribute most to a given price can be very important.
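The house-price idea can be made concrete with exact Shapley values for a small model: average each feature’s marginal contribution over all possible coalitions of the other features, relative to a baseline "average" house. The model, feature values, and baseline below are all hypothetical, chosen only to illustrate the calculation; in practice a library such as SHAP estimates these values for real models.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear house-price model (illustrative numbers only).
def predict(size_m2, bedrooms, age_years):
    return 1500 * size_m2 + 10000 * bedrooms - 500 * age_years + 50000

BASELINE = {"size_m2": 100, "bedrooms": 3, "age_years": 20}  # "average" house
HOUSE = {"size_m2": 140, "bedrooms": 4, "age_years": 5}      # house to explain
FEATURES = list(BASELINE)

def value(coalition):
    # Model output when only features in `coalition` take the house's
    # values; all other features stay at their baseline values.
    args = {f: (HOUSE[f] if f in coalition else BASELINE[f]) for f in FEATURES}
    return predict(**args)

def shapley(feature):
    # Weighted average of the feature's marginal contribution over
    # every subset (coalition) of the remaining features.
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(subset) | {feature}) - value(set(subset)))
    return total

contributions = {f: shapley(f) for f in FEATURES}
```

By the efficiency property, the contributions sum exactly to the gap between the house’s predicted price and the baseline prediction, so the user can see, for instance, that size accounts for most of the premium over an average house.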
In summary, for humans and AI to work together most effectively, it’s crucial to build trust at every stage of development: even an easy-to-use system may not be trusted. We build this into all our AI projects as part of our digital flywheel to ensure we deliver what’s needed in a way that can be used and, most importantly, trusted, so it can be actioned to deliver real business benefit.