Importance of building trust in AI
AI has become the buzzword of recent years. From the apps on our smartphones to automated robots working in industry, it is now a household name. AI systems can sense, learn, predict, adapt and act, and their use has become more common than ever, reducing the amount of work left to humans.
Still, there is no shortage of speculation about the dark side of AI: first it will come for our jobs, and later it will come for our lives. Although such scenarios cannot be ruled out completely, there is little factual or logical support behind them.
AI models are still largely black boxes, and their decisions and outcomes are still very much questioned by clients. Research into this problem is in full swing, and many algorithmic improvements have been made, but no complete solution has been found yet.
Instilling values in AI
Concern about building human values into AI has become widespread, and it has grown manifold over time. Moral judgement in real-world scenarios remains an open problem: when should a self-driving car decide to turn or to stop for a pedestrian, and how much accuracy can a surgical robot guarantee when making an incision at a specific site?
“Without proper care in programming AI systems, you could potentially have the bias of the programmer play a part in determining outcomes. We have to develop frameworks for thinking about these types of issues. It is a very, very complicated topic, one that we’re starting to address in partnership with other technology organizations,” says Arvind Krishna, Senior Vice President of Hybrid Cloud and Director of IBM Research, referring to the Partnership on AI formed by IBM and several other tech giants.
AI systems have shown many shortcomings, some of them in high-profile cases. Technicians who work with AI have first-hand experience of how such failures can undermine trust in the technology; but they also recognise and fix the faults, which in turn contributes to building trust in AI.
“Machines get biased because the training data they’re fed may not be fully representative of what you’re trying to teach them,” says IBM Chief Science Officer for Cognitive Computing Guru Banavar. “And it could be not only unintentional bias due to a lack of care in picking the right training dataset but also an intentional one caused by a malicious attacker who hacks into the training dataset that somebody’s building just to make it biased.”
Creation of a Transparent Environment
According to experts, transparency is essential to building trust in AI. People must have a fair idea of how a system works, and how it arrives at its predictions and analyses, before they can reach a reasoned decision about it. Understanding the system, including whether it behaves ethically, protects them from poor judgements.
“We will reach a point, likely within the next five years, when an AI system can better explain why it’s telling you to do what it’s recommending,” says Rachel Bellamy, IBM Research Manager for human-agent collaboration.
“A similar parallel right now is how willing people are to share their location information with an app. In some cases it has a clear benefit, while in others, they may not want to share because the benefit isn’t significant enough for them,” says Jay Turcot, Head Scientist and Director of Applied AI at Affectiva.
Transparency in AI touches on several areas, but the most important ones are:
1. Education
Education promotes openness. It is an excellent way to debunk misconceptions about what AI can and cannot achieve, misconceptions that undermine faith in its capabilities. Likewise, confusion about which areas of work AI can potentially affect only breeds more scepticism about the technology.
Thought leaders around the world agree that education about AI should be pushed far further than it currently is. Teaching people to adapt to the technology, and to learn the new skills needed for the professions AI will create, is essential.
2. Responsibility
AI has huge societal impact and implications. Its advantages range from reducing manual labour to performing complex tasks; it has immense power. But as the saying goes, "with power comes responsibility": developing AI fairly and responsibly will take the effort of more than a few companies working towards the goal. Tackling the challenge, and eventually building consumer confidence in the products, will need a collective effort from industry, academia, government, and the public.
3. Fairness
Ensuring fairness in AI means getting rid of bias in machine learning systems. An algorithm can become biased through a mismatch between the distribution of its training data and the distribution of the data it will actually face. Errors creep in easily through the way training data is created, collected or processed, and the more biased data a model is trained on, the more it learns and amplifies that bias, potentially resulting in a massive increase in the biases of the system.
To encourage the adoption of AI, the data collected and processed by a system must be properly analysed and kept as unbiased as possible. The algorithms and the data pipelines that feed them therefore need to be checked at every level to ensure fairness and avoid introducing bias into the system.
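To make the idea of a distribution mismatch concrete, here is a minimal sketch (plain pandas, with made-up column names and target shares) of checking whether the groups in a training set match the proportions expected in deployment:

```python
import pandas as pd

# Hypothetical training sample with a protected attribute; in practice
# this comes from your real dataset.
train = pd.DataFrame({
    "gender": ["F", "M", "M", "M", "M", "F", "M", "M"],
    "label":  [1, 0, 1, 1, 0, 0, 1, 0],
})

# Group shares we expect in the population the model will serve
# (assumed 50/50 here purely for illustration).
expected = {"F": 0.5, "M": 0.5}

observed = train["gender"].value_counts(normalize=True)
for group, target in expected.items():
    share = observed.get(group, 0.0)
    print(f"{group}: observed {share:.2f}, expected {target:.2f}, "
          f"gap {share - target:+.2f}")

# Per-group positive-label rates expose label bias on top of
# representation bias.
print(train.groupby("gender")["label"].mean())
```

Simple checks like these catch only the most obvious imbalances; dedicated tooling (see AI Fairness 360 below) goes much further.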
4. Robustness
The robustness of a system is measured by the stability of its performance when a model deployed in the real world is probed and distortions are introduced. It shows how well the algorithm fares on new, independent data, and it ensures that the system can handle the unforeseen uncertainties and problems that may arise.
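As a rough sketch of such probing, assuming a scikit-learn model and synthetic data, one can inject increasing amounts of input noise and watch how quickly accuracy degrades:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real data; any fitted model could be probed this way.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probe the model with increasing Gaussian distortions of the test inputs.
rng = np.random.default_rng(0)
for noise in [0.0, 0.1, 0.5, 1.0, 2.0]:
    X_noisy = X_test + rng.normal(scale=noise, size=X_test.shape)
    acc = accuracy_score(y_test, model.predict(X_noisy))
    print(f"noise std {noise:.1f}: accuracy {acc:.3f}")
```

A model whose accuracy collapses at small distortions is a poor candidate for deployment in a noisy real-world environment.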
Robustness consists of two factors: Safety and security.
Safety is typically associated with an AI model's ability to build knowledge that incorporates societal norms, policies, and regulations, leading to well-established, safe behaviour.
Security means defending AI systems from malicious attacks. Like any software system, AI systems are vulnerable to attack; models can be tampered with and data can be compromised, undermining the security of the whole system.
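To make the training-data tampering mentioned in the quote above concrete, here is a small illustration (synthetic data, scikit-learn) of a simple poisoning attack: flipping a fraction of the training labels and measuring the damage:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

rng = np.random.default_rng(1)
for poison_frac in [0.0, 0.1, 0.3]:
    # An attacker who controls part of the training set flips these labels.
    y_poisoned = y_train.copy()
    idx = rng.choice(len(y_poisoned),
                     size=int(poison_frac * len(y_poisoned)),
                     replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"{poison_frac:.0%} of labels flipped: "
          f"test accuracy {model.score(X_test, y_test):.3f}")
```

Even this crude, untargeted attack measurably degrades the model; real attacks can be far more subtle and targeted.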
Tools
Now that we know why establishing trust in AI systems matters, and which factors play a key role in it, let us look at a few tools that can help:
AI Explainability 360
AI Explainability 360 is a comprehensive toolkit that offers a unified API bringing together state-of-the-art algorithms that help people understand how machine learning models make their predictions. It includes guides, tutorials, demos and much more in one interface.
It is an open-source library that supports the interpretability and explainability of datasets and machine-learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. There is no single approach to explainability that works best.
The toolkit is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. IBM moved AI Explainability 360 to LF AI in July 2020.
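AI Explainability 360 exposes each of its algorithms through its own classes, which are best learned from its tutorials. As a toolkit-agnostic sketch of what a basic explanation looks like, the example below ranks input features by permutation importance, using scikit-learn on synthetic data rather than the AIX360 API itself:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the score drops -- a simple, model-agnostic explanation of which
# inputs drive the predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```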
Watson OpenScale
IBM Watson OpenScale is an enterprise-grade environment for AI applications that gives your enterprise visibility into how your AI is built, how it is used, and how it delivers a return on investment. Its open platform enables businesses to operate and automate AI at scale, with transparent, explainable outcomes that are free from harmful bias and drift.
With the Watson OpenScale service, you can scale the adoption of trusted AI across enterprise applications, whether they are hosted on-premises or in a private cloud environment.
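OpenScale itself is configured through its own service APIs, but the idea behind its drift monitoring can be sketched generically: compare the distribution of incoming production data against the training data and raise a flag when they diverge. Here is a minimal illustration using a Kolmogorov-Smirnov test on synthetic data, not the OpenScale API:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# One feature as seen at training time vs. as it arrives in production
# (synthetic; the production stream has drifted upwards).
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)

stat, p_value = ks_2samp(train_feature, live_feature)
print(f"KS statistic {stat:.3f}, p-value {p_value:.2e}")
if p_value < 0.01:
    print("Drift detected: flag the model for review or retraining.")
```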
AI Fairness 360
The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The package is available in both Python and R.
The AI Fairness 360 package includes:
1. A comprehensive set of metrics for datasets and models to test for biases,
2. Explanations for these metrics, and
3. Algorithms to mitigate bias in datasets and models.
Like AI Explainability 360, it is designed to translate algorithmic research from the lab into actual practice in domains as wide-ranging as finance, human capital management, healthcare, and education.
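As a minimal sketch of the first and third pieces, here is how the commonly documented AIF360 Python API computes a fairness metric and applies a pre-processing mitigation; the dataset, column names and group encodings are made up for illustration:

```python
import pandas as pd
from aif360.algorithms.preprocessing import Reweighing
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny made-up dataset: 'sex' is the protected attribute (1 = privileged
# group) and 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.6, 0.9, 0.8, 0.7, 0.6],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Metric: disparate impact = P(favorable | unprivileged) / P(favorable | privileged).
# Values well below the common 0.8 "four-fifths" threshold are a red flag.
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("disparate impact:", metric.disparate_impact())

# Mitigation: reweigh instances so the groups carry balanced effective
# weight; a downstream model is then trained using these instance weights.
transformed = Reweighing(unprivileged_groups=unpriv,
                         privileged_groups=priv).fit_transform(dataset)
print("new instance weights:", transformed.instance_weights)
```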
Conclusion
Though AI technology can be intimidating at times, it offers a plethora of opportunities and possibilities that can be realized through a multidisciplinary scientific approach. By weighing its advantages, understanding its pitfalls, and knowing how to deal with them, we can build AI models that are robust, transparent, bias-free and coherent.