The Future of Artificial Intelligence: Ethical AI Framework
December 15, 2021
With the big data and analytics market expected to reach $103 billion by 2023 (1), there is no question that Artificial Intelligence (AI) will remain at the forefront of automated decision-making across a multitude of industries. What began as easy-to-interpret, rule-based algorithms automating simple tasks has evolved into complex machine learning and AI-based predictive models that leverage millions of data points to inform business decisions with high confidence.
As AI continues to increase its influence on business decisions, governments are taking steps to ensure that all AI usage is ethical. With more and more at stake, business leaders need a proper framework to make smarter AI-related decisions and leverage automated decision-making capabilities with a degree of trust and transparency.
Ethical Challenges in AI
There are several challenges and ethical considerations in the use of AI for automated decision-making. One of the biggest is the presence of inherent biases in the datasets used to train AI models, which can lead to discrimination or unfairness. From recruitment systems that favour candidates of a specific gender to AI-assisted medical diagnoses trained on a limited number of ethnic groups, there are numerous ways that poor data quality can have severe implications. While this may seem like an obvious consideration when preparing training data, some of the world’s leading organizations have faced serious allegations in recent years over the ethical decision-making of their large-scale AI systems.
Other challenges in automated decision-making systems include differences in methodology and a lack of standardized decision rules or thresholds. Domain and analytics experts need to collaborate on criteria for when AI outputs can be confidently accepted for decision-making. With complex AI algorithms, the explainability of model decisions can also pose a challenge in getting approval to use these systems in practice; determining the level of explainability required early in the development process allows data scientists to choose a model appropriate to the business use case. A lack of human intervention, validation, and proper audit trails further increases the risk of unethical decisions in AI-based systems.
Adastra’s Automated Decision Framework
Leveraging the power of AI for automated decision-making doesn’t always mean removing humans from the equation. In fact, data scientists, business stakeholders, legal teams, and other key contributors need to be involved at various stages of the AI decision-making process, from early development through the commercial use of deployed models, to meet ethical standards.
Adastra’s automated decision-making framework keeps the Human in the Loop (HIL) to guide and update AI models and their outcomes. HIL leverages both machine and human intelligence to create, validate, and update AI models. Human experts are responsible for evaluating and labeling data for algorithms to use in training, as well as scoring model outputs for continuous tuning and retraining. This allows AI models to be used in practice earlier, when training data is still limited or model performance has not yet met the required criteria.
Our framework can be broken down into two key components: a production-state workflow and a validation-state workflow (Figure below).
Figure 1: A production-state workflow and a validation-state workflow
Models begin in the validation state. At this stage, it is critical that humans review and approve model outcomes when the confidence of a prediction falls below a certain threshold. For binary classification problems, the model predicts an outcome for expert review, and the expert either accepts or rejects it. Their decision is then executed in practice and added to the training data so the model can be retrained as part of an iterative feedback cycle.
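The validation-state routing described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not Adastra’s implementation: the threshold value, the `Decision` record, and the `expert_review` callback are hypothetical names chosen for the example.

```python
from dataclasses import dataclass

# Illustrative threshold; in practice this is agreed with domain experts.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Decision:
    outcome: int   # 0 or 1 for a binary problem
    source: str    # "model" or "expert"

def route_prediction(probability: float, expert_review) -> Decision:
    """Auto-accept confident predictions; send low-confidence ones
    to a human expert who accepts or rejects the model's proposal."""
    confidence = max(probability, 1 - probability)
    proposed = int(probability >= 0.5)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(outcome=proposed, source="model")
    # Below threshold: the expert's accept/reject becomes the outcome,
    # and would also be appended to the training data for retraining.
    accepted = expert_review(proposed)
    outcome = proposed if accepted else 1 - proposed
    return Decision(outcome=outcome, source="expert")
```

In a real system, every expert decision returned here would also feed the retraining dataset, closing the iterative feedback cycle described above.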
For multi-class classification problems, several proposals may be presented to an expert for review and final decision-making. As with binary classification, the decided outcome is added to the input dataset for retraining. Should a model’s output fall below a specified confidence threshold, the expert makes the decision with no input from the model. As the model goes through multiple iterations of retraining and tuning, its predictions are expected to reach higher levels of confidence and eventually become suitable for full production use. In the production state, model outcomes may be approved automatically, with an HIL checkpoint as a precursor to outcome execution.

It is important to note that after each decision, whether made by the model or by an expert, technical metadata is produced recording the datasets used, executions, features, model versions, and other key artifacts. This allows companies to go back and evaluate each decision, enabling both traceability and transparency.
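The per-decision technical metadata could be captured along the lines of the following sketch. The field names and the hashing step are illustrative assumptions for the example, not a prescribed schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(dataset_version: str, model_version: str,
                    features: dict, outcome, decided_by: str) -> dict:
    """Capture the technical metadata behind a single decision so it
    can be audited later for traceability and transparency."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_version": dataset_version,
        "model_version": model_version,
        "features": features,
        "outcome": outcome,
        "decided_by": decided_by,  # "model" or "expert"
    }
    # A content hash makes later tampering with the audit trail detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record
```

Persisting one such record per decision is what lets a company replay and evaluate each outcome after the fact.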
MLOps and Ethical AI
MLOps goes hand-in-hand with Ethical AI: many of the automated processes and management practices of MLOps help ensure that productionized AI models and their underlying datasets are fully governed and fit for their business use cases. Business rules, thresholds, model explanations, and scoring are all integrated into audit and notification systems to ensure the transparency of automated decision frameworks. Accountability and trust are achieved through policies, access controls, and stewardship review of model outcomes. Continuous monitoring of underlying datasets is a key element of MLOps, allowing systems to automatically detect data drift or bias and apply remediation strategies so that only models suitable for business use stay deployed. Versioning of models and their training datasets also enables automatic rollback to an earlier model version should the currently deployed model fall below certain performance criteria.
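As one illustration of automated drift detection and rollback, the sketch below uses the Population Stability Index (PSI), a common drift heuristic for a single numeric feature. The 0.2 threshold and the function names are assumptions for the example, not part of Adastra’s framework.

```python
import math

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between a training-time feature distribution ('expected')
    and live data ('actual'); values above ~0.2 commonly signal drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bucket_pcts(values):
        counts = [0] * bins
        for v in values:
            # Clamp values outside the training range into the edge bins.
            i = min(max(int((v - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Floor at a small value to avoid log(0) below.
        return [max(c / len(values), 1e-6) for c in counts]

    e_pct, a_pct = bucket_pcts(expected), bucket_pcts(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_pct, a_pct))

def maybe_roll_back(psi: float, current: str, previous: str,
                    threshold: float = 0.2) -> str:
    """Revert to the previous model version when drift exceeds threshold."""
    return previous if psi > threshold else current
```

A monitoring job would compute the PSI on fresh data at a regular cadence and trigger retraining or an automatic rollback when the threshold is crossed.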
Why Trust Adastra
Adastra Corporation transforms businesses into digital leaders. For the past 20 years, Adastra has been helping global organizations accelerate innovation, improve operational excellence, and create unforgettable customer experiences, all with the power of their data. By providing cutting-edge Artificial Intelligence, Big Data, Cloud, Digital, and Governance services and solutions, Adastra helps enterprises leverage data they can control and trust, connecting them to their customers – and their customers to the world.
With continuous advancements in Machine Learning, Adastra invests in ongoing learning to stay abreast of recent developments, including certifications and research partnerships with academic institutions and government supercluster programs. Adastra focuses on providing practical applications that will give your business a competitive edge. From simpler regression models leveraging structured data to more complex models leveraging various types of structured and unstructured data, our team of highly qualified data scientists can build models that fit your specific business needs and datasets. Let Adastra help your company achieve data quality excellence.
Overall, implementing MLOps best practices makes the use of AI more ethical, transparent, and trustworthy. For more information on the principles of MLOps, see Adastra’s two-part series: “The Road to MLOps: Machine Learning & Implementation Approach” and “The Road to MLOps: The 7 Principles of Machine Learning”.