
How fair is the use of artificial intelligence?

Achieving accountability in AI is easier said than done, as multiple stakeholders are involved; users trust and use an algorithm only when they perceive it to be fair, accountable, transparent and explainable.


Sivaramakrishnan R Guruvayur (L), Chief Product Officer & AI Scientist of GEMS Group in Dubai and Chitro Majumdar (R), Chief Strategist of Sovereign Institutions and Board Member at RsRL



Fairness and equality have never been the corner stones of the new era of capitalism and globalisation, nor should they be. However, we still need to stress to our children that the world is changing, and they need to be proactive about their own future.

AI transparency can also help mitigate the problems of trust, discrimination and fairness in AI. However, the greater the transparency of an AI system, the more vulnerable it is to outside attack. Regulation of AI may fail to truly improve AI, and may even restrict the benefits accrued from AI systems.

The human mind has always found ways to reason about and explain the different occurrences in life. However, the AI principles set by technology firms serve more to protect the firms from legal repercussions than to provide an actionable set of criteria that would make AI fair for everyone. The basic question is how to govern these algorithmic systems effectively and legitimately, while ensuring that they are human-centred and socially accountable. When users perceive that an algorithm is fair, accountable, transparent and explainable, they are able to trust and use it.

Achieving accountability in AI is easier said than done, as multiple stakeholders are involved, including the programmers, dealers and implementers of AI products and services. The programmer is the first person to be held accountable, but so are the programmer's manager and the others who helped the program designer and knowledge expert to create the AI. The end users who deploy AI products and services are also accountable, as it is their duty to ensure that ethical practices are followed at the design and selling stages. One proposal to deal with this is to ensure that the first generally intelligent AI is a 'Friendly AI' that can control subsequently developed AIs, but this does raise questions about its sustainability.

Another plausible solution to the problem of ethical and accountable AI is explainable AI, which gives users an opportunity to understand the technology behind the AI, add inputs to the decision-making process and contribute to algorithmic accountability. The explanations conveyed need to be understandable to society and policymakers as well.
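To make this concrete, one simple form of explainability is decomposing a model's decision into per-feature contributions that a user can inspect. The sketch below uses a hand-rolled linear scorer; the feature names, weights and threshold are hypothetical illustrations, not any particular deployed system.

```python
# Illustrative sketch of one simple form of explainable AI: a linear
# scoring model whose decision can be decomposed into per-feature
# contributions. Weights, features and threshold are hypothetical.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 0.0

def score(applicant):
    """Weighted sum of the applicant's (normalised) feature values."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Return the decision plus each feature's signed contribution,
    ranked by magnitude, so a user can see what drove the outcome."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    decision = "approve" if score(applicant) >= THRESHOLD else "decline"
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

applicant = {"income": 2.0, "debt": 1.5, "years_employed": 1.0}
decision, reasons = explain(applicant)
# 'reasons' lists the features that drove the decision, most
# influential first, which is the kind of output a user or
# policymaker could actually interrogate.
```

For an opaque non-linear model the decomposition is harder, but the interface is the same: a decision plus ranked reasons a user can contest.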

One of the most promising applications of AI is in the detection of fraud. Javelin Strategy & Research has calculated that the amount lost to false declines is 13 times higher than the amount lost to actual credit card fraud, and false declines are a major source of frustration for customers. Using AI to decrease the number of false declines could be a great benefit for card users.

There is a need for the organisations involved to collaborate and discuss the ethics of AI, set up advisory bodies, formulate core ethical principles and handle data responsibly. Apart from existing regional regulations, a central regulatory body, much like the Food and Drug Administration, is required in Europe to address ethical issues. A better alternative would be to make these regulatory bodies responsible for assessing AI products and applications rather than the underlying algorithms.

The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including the European Union. Regulation of AI through mechanisms such as review boards can also be seen as a social means of approaching the AI control problem. Google created an AI ethics board, the Advanced Technology External Advisory Council, for the responsible development of AI; the panel was later dissolved amid criticism that it treated AI ethics and governance as only a sideshow. A multidimensional approach to achieving fairness in AI would go a long way towards increasing trust in AI technology.

While organisations safeguard the use of personal data, governments must collaborate with industries and tech companies to frame guidelines for ethics in AI. From my personal experience in dealing with AI and ethics, I believe that certain guidelines and standards must be followed to bring fairness and governance to AI design systems. Firstly, the vulnerabilities of an AI system should be internally reviewed and released to end users, to ensure awareness of the possible inherent biases documented by the AI developers and designers.

Secondly, there is a need for a quantitative measure of the fairness of an AI system. Standardised software to check for bias in an AI system would go a long way.
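Such a quantitative bias check can be sketched in a few lines. The demographic-parity difference and the "four-fifths" disparate-impact ratio used below are just two commonly cited fairness metrics; the function names, example data and choice of threshold are illustrative assumptions, not a standard.

```python
# Minimal sketch of a quantitative fairness check over a model's
# predictions, grouped by a protected attribute. Metrics shown:
# demographic-parity difference and the disparate-impact ratio
# (compared against the commonly cited "four-fifths" threshold).

def selection_rates(preds, groups, positive=1):
    """Rate of positive predictions (e.g. loan approvals) per group."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(1 for p in group_preds if p == positive) / len(group_preds)
    return rates

def fairness_report(preds, groups, ratio_threshold=0.8):
    rates = selection_rates(preds, groups)
    lowest, highest = min(rates.values()), max(rates.values())
    ratio = lowest / highest if highest else 1.0
    return {
        "selection_rates": rates,
        "parity_difference": highest - lowest,  # 0.0 means parity
        "disparate_impact": ratio,              # 1.0 means parity
        "passes_four_fifths": ratio >= ratio_threshold,
    }

# Hypothetical example: loan decisions (1 = approved) for two groups.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
report = fairness_report(preds, groups)
```

In this toy data, group A is approved 60% of the time and group B 40%, so the disparate-impact ratio is about 0.67 and the check fails; a standardised tool would run exactly this kind of audit, at scale, over every protected attribute.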

Thirdly, there must be no ambiguity in company policy regarding responsibility and accountability for the AI system, and every AI designer and developer must be aware of it.

Fourthly, the laws and regulations governing AI must be strictly adhered to, including national and international laws, guidelines and regulations.

Fifthly, secondary research, such as that by linguists, sociologists and behaviourists, should be weighed while designing the AI system; this ensures that application-specific ethical issues are approached in a holistic context.

Lastly, legal liability should be imposed on AI developers to ensure the safety of the AI system and to mitigate risk. Regulators and manufacturers, though accountable to some extent, cannot be held majorly responsible, as it is the developers who directly shape the AI system.

Ultimately, what we need is a strict implementation of the practice guidelines and a formal software process that will guarantee AI fairness, AI accountability and AI transparency.


