4 Non-Binding Artificial Intelligence Principles by Smart Dubai

  • January 09, 2019

Real estate technology firms should review Dubai’s AI ethics to gauge its impact on their business.

Every technological advance excites end users with its novelty and its potential benefits; Artificial Intelligence (AI) is the latest example. A frank discussion of the ethics of using such technology is urgently needed.

Dubai’s Ethical AI Toolkit has been created to provide practical help across a city ecosystem. It supports industry, academia and individuals in understanding how AI systems can be used responsibly. It consists of principles and guidelines, and a self-assessment tool for developers to assess their platforms.

Smart Dubai aims to provide unified guidance that is continuously improved in collaboration with the city’s various communities. Ultimately, the goal is to reach widespread agreement on, and adoption of, common policies to inform the ethical use of AI.

DUBAI AI PRINCIPLES

The four non-binding AI principles are Dubai’s attempt to answer crucial questions such as: What overarching goals should an AI system have? How should it behave, and what values should it follow?

These high-level statements lay out the aspirations and roadmap for the behavior of AI systems as they grow smarter and become an integral part of people’s lives. Each of the four main principles contains sub-principles that help clearly define the goals for AI design and behavior.

Smart Dubai considers these principles a collaborative work in progress and welcomes feedback for refining them.

1. Making AI systems fair

  • Data ingested should, where possible, be representative of the affected population
  • Algorithms should avoid non-operational bias
  • Steps should be taken to mitigate and disclose the biases inherent in datasets
  • Significant decisions should be provably fair
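The first sub-principle, that ingested data should be representative of the affected population, can be checked mechanically. Below is a minimal illustrative sketch; the group labels, population shares, and tolerance are assumptions for the example and are not part of the Smart Dubai toolkit itself:

```python
from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.05):
    """Compare group shares in a dataset against known population
    shares; return groups whose share deviates by more than `tolerance`.

    samples: iterable of group labels, one per record in the dataset
    population_shares: dict mapping group label -> expected share (0..1)
    """
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical example: a training set that over-represents group A.
dataset = ["A"] * 80 + ["B"] * 20
population = {"A": 0.6, "B": 0.4}
print(representation_gaps(dataset, population))  # {'A': 0.2, 'B': -0.2}
```

A non-empty result flags groups whose dataset share diverges from the population, which is exactly the kind of bias the principle asks developers to mitigate and disclose.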

2. Making AI systems accountable

  • Accountability for the outcomes of an AI system lies not with the system itself but is apportioned between those who design, develop and deploy it
  • Developers should make efforts to mitigate the risks inherent in the systems they design
  • AI systems should have built-in appeals procedures whereby users can challenge significant decisions
  • AI systems should be developed by diverse teams which include experts in the area in which the system will be deployed

3. Making AI systems transparent

  • Developers should build systems whose failures can be traced and diagnosed
  • People should be told when significant decisions about them are being made by AI
  • Within the limits of privacy and the preservation of intellectual property, those who deploy AI systems should be transparent about the data and algorithms they use

4. Making AI systems as explainable as technically possible

  • Decisions and methodologies of AI systems which have a significant effect on individuals should be explainable to them, to the extent permitted by available technology
  • It should be possible to ascertain the key factors leading to any specific decision that could have a significant effect on an individual
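For simple scoring models, the "key factors" sub-principle can be met by ranking each input's contribution to a specific decision. The sketch below assumes a linear model; the loan-scoring feature names and weights are hypothetical, chosen only to illustrate the idea:

```python
def key_factors(weights, features, top_n=2):
    """Rank the inputs of a simple linear scoring model by the size
    of their contribution to one specific decision.

    weights, features: dicts keyed by feature name. In a linear model,
    each feature's contribution to the score is weight * value.
    """
    contributions = {name: weights[name] * features[name] for name in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

# Hypothetical loan-scoring decision: which factors drove the score?
weights = {"income": 0.5, "debt_ratio": -2.0, "tenure_years": 0.1}
applicant = {"income": 1.2, "debt_ratio": 0.4, "tenure_years": 3.0}
print(key_factors(weights, applicant))
# [('debt_ratio', -0.8), ('income', 0.6)]
```

More complex models need dedicated attribution techniques, but the principle's requirement is the same: the factors behind a significant decision should be surfaceable to the individual affected, to the extent the technology allows.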

(Source: Smart Dubai)