Navigating the Ethical Minefield: Professional Responsibility in the Unknown Territory of AI Deployment
Sep 22, 2024
6 min read
Boards across industries are mandating strategic initiatives around the integration of Artificial Intelligence (AI). Adapting existing processes and tasks, or building new ones, to deliver greater productivity and efficiency has become a priority. Yet amid the declarations of key application vendors and the pronouncements of consultancies such as McKinsey, Gartner, and BCG, one pressing concern keeps drawing attention: the ethical deployment of AI.
In my presentation at Data Science Connect 2024, organized by Women in Analytics, I focused on a pragmatic approach to overcoming this tremendous challenge. Read on!
Understanding Ethical AI Deployment
The adoption of AI technologies raises critical questions regarding transparency, bias, privacy, and accountability. As AI systems become more sophisticated, ensuring that they are designed and deployed ethically becomes a central concern of enterprises and national governments. Professional responsibility in AI deployment involves adherence to ethical guidelines, safeguarding against bias, and upholding transparency in decision-making processes. According to standards established by NIST, the ethical deployment of AI is closely tied to our ability to sustain a 'trustworthy' AI system.
Trust comes from ensuring that the following guardrails are accounted for in any AI deployment (a checklist sketch follows the list). AI uses and applications should be:
Safe
Secure & Resilient
Explainable & Interpretable
Privacy-Enhanced
Fair – With Harmful Bias Managed
Accountable & Transparent
Valid and Reliable
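One way to make these guardrails operational is to treat them as a pre-deployment checklist. The sketch below encodes NIST's seven trustworthiness characteristics and flags the ones a deployment has not yet documented evidence for; the class and field names are my own illustrative assumptions, not part of any NIST tooling.

```python
# A minimal sketch of a pre-deployment review built on NIST's seven
# trustworthiness characteristics. Names are hypothetical and illustrative.
from dataclasses import dataclass, field

NIST_CHARACTERISTICS = [
    "safe",
    "secure_and_resilient",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
    "accountable_and_transparent",
    "valid_and_reliable",
]

@dataclass
class TrustworthinessReview:
    """Tracks which guardrails a deployment has documented evidence for."""
    system_name: str
    evidence: dict = field(default_factory=dict)  # characteristic -> notes

    def record(self, characteristic: str, notes: str) -> None:
        if characteristic not in NIST_CHARACTERISTICS:
            raise ValueError(f"Unknown characteristic: {characteristic}")
        self.evidence[characteristic] = notes

    def gaps(self) -> list:
        """Characteristics with no documented evidence yet."""
        return [c for c in NIST_CHARACTERISTICS if c not in self.evidence]

review = TrustworthinessReview("loan-scoring-model")
review.record("fair_with_harmful_bias_managed",
              "Disparate-impact test run on Q3 data")
print(review.gaps())  # everything still needing evidence before sign-off
```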
Embracing Ethical Principles
Ethical AI deployment necessitates a proactive approach to aligning technology with moral values and societal welfare. By incorporating ethical principles into the development and deployment of AI systems, professionals in the field can mitigate potential risks and ensure that AI applications serve the greater good.
This next generation of AI will reshape every software category and every business, including our own. Although this new era promises great opportunity, it demands even greater responsibility from companies like ours.
Satya Nadella
CEO, Microsoft, 2023
Responsibility for AI can be addressed at three levels:
National Responsibility
Enterprise Responsibility
Professional/Personal Responsibility
National actions, in the form of executive orders, regulations, and national/state laws, provide guidance and risk management frameworks. The rigor with which these laws are enacted and enforced is key to driving this behavioral change. Examples include the EU AI Act (2024), the 2023 US Executive Order on safe and secure AI, and Canada's Bill C-27, which contains the Artificial Intelligence and Data Act. (See Sources below.)
Enterprises are building plans to interpret these national mandates and to design frameworks that respond to them. Their challenge is to customize for their industry, market, and AI use cases without falling out of compliance. AI readiness and AI risk mitigation now sit at the top of organizational priority lists.
The most impactful actions, however, come from professional and personal responsibility. Individual participation is atomic in nature, and AI training and awareness education have progressively focused on individuals, whether as professionals or as participants in AI execution. The domino effect of one failed action, one weak link, can bring down communities, societies, institutions, and nations.
Call To Action: Mitigating AI Risk
RADAR (Reading, Asking, Documenting, Awareness, Reporting) - Guidance for daily interactions with AI
Reading about and understanding AI processes (how the algorithms work, what outcomes they produce, and other aspects of the deployment) is the first step toward executing deployments successfully.
Predictive AI, which includes Machine Learning (ML) techniques, has been used and deployed for 10-20 years. Its role and impact have been domain- and function-specific, and its risk management has likewise been siloed, governed in the US by 2011 regulatory guidance (the Federal Reserve's SR 11-7 and OCC 2011-12) that established a rigorous Model Risk Management (MRM) framework. The training and awareness built over years of adherence to MRM have shaped how professionals deploy ML-based applications.
With early trials of Generative AI (GenAI) integration underway at a significant number of enterprises and organizations, the risk of AI failure has grown. The immaturity of the technology and the downside of algorithmic bias require greater scrutiny and awareness, and even this is only a small aspect of the larger concern. Not surprisingly, the human-in-the-loop role has been advocated at the individual level. Ensuring that professionals understand their roles and responsibilities when designing, building, deploying, or using AI-based products is a key responsibility of every HR department.
Awareness of how we, as professionals, should guide, supervise, and measure AI performance is foundational to trustworthy AI deployments.
Asking questions about the application, its use, and its failures is the second step in building your understanding of AI. The level of risk tolerance, the cost of a 'hallucination' (a bad, erroneous response generated by a Large Language Model, or LLM), and the potential negative impact on individuals, products, and consumers all have to be questioned to ensure a safe, fair, secure, reliable, and accountable AI process (a back-of-the-envelope sketch of one such question follows the quote below).
"If a poor student can’t get a loan because a lending model deems him too risky (by virtue of his zip code), he’s then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a “toxic cocktail for democracy.” Welcome to the dark side of Big Data."
Cathy O'Neil, Author, Weapons of Math Destruction
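To make one of those questions concrete, here is a back-of-the-envelope sketch of how risk tolerance and hallucination cost might be compared; every number and name in it is a hypothetical placeholder, not a benchmark.

```python
# A rough sketch of one "Asking" question: does the expected cost of
# hallucinations stay within our risk tolerance? All values are hypothetical.
def within_tolerance(queries_per_day: int, hallucination_rate: float,
                     cost_per_bad_answer: float, daily_tolerance: float) -> bool:
    """Compare expected daily cost of bad answers against a tolerance budget."""
    expected_cost = queries_per_day * hallucination_rate * cost_per_bad_answer
    return expected_cost <= daily_tolerance

# e.g., 2,000 queries/day, 1% bad answers, $50 average remediation cost
print(within_tolerance(2000, 0.01, 50.0, daily_tolerance=500.0))  # False -> rethink
```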
Documenting across the stages of deployment builds a knowledge repository of AI system outcomes. Performance monitoring and data governance become central to how professionals understand a deployment's history and the need for future improvements.
Compliance, and the avoidance of stiff regulatory penalties, requires transparent audit trails. This is even more important with Generative AI (GenAI) applications. For professionals involved in executing AI deployments, that audit trail begins with observations guided by one's moral compass: a professional compass that helps identify and document unfair, biased, or unsafe outcomes of the deployment early on.
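As a sketch of where such an audit trail could start, the snippet below appends timestamped observations to a JSON-lines file; the field names and file layout are illustrative assumptions, since real audit schemas come from an organization's compliance function.

```python
# A minimal sketch of an append-only audit-trail entry for AI observations.
# Field names are hypothetical placeholders, not a compliance standard.
import json
from datetime import datetime, timezone

def log_observation(path: str, stage: str, model: str,
                    observation: str, risk_flags: list) -> None:
    """Append one timestamped observation to a JSON-lines audit file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,            # e.g., "design", "build", "deploy", "use"
        "model": model,
        "observation": observation,
        "risk_flags": risk_flags,  # e.g., ["bias"], ["unsafe_output"], []
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_observation("audit_trail.jsonl", "deploy", "support-chatbot-v2",
                "Response recommended an unapproved refund workflow",
                ["hallucination"])
```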
Awareness building takes time, resources, and effort. As stewards of AI technology, professionals hold a significant responsibility in ensuring the ethical deployment of AI systems. Transparency, accountability, and continuous monitoring are essential components of professional responsibility in AI deployment. By committing to ethical practices and standards, professionals can uphold the trust of stakeholders and mitigate potential ethical dilemmas.
With any evolving technology there are limitations, and there is likely to be a snafu or two. With the rapid evolution of new approaches in LLM-based GenAI, the diversity of test and validation results can simply make us tune out from the information overload. This can create a fissure in the RAD of RADAR!
A pragmatic approach for busy individuals to overcome this gap is to keep a mental image of what AI is. Trusting our professionally trained gut instincts about AI allows us to:
Invite AI to the table for discussions
Understand and assess how AI is to be used
Treat AI as a person (the fast-learning intelligent child)
As the human in the process, establish the guardrails you think are most appropriate to the context.
Establishing ethics committees within organizations can serve as a mechanism to uphold ethical standards in AI deployment. They can provide guidance, oversight, and evaluation of AI projects to ensure compliance with ethical frameworks and regulations. When individuals operate in a culture of ethical awareness and accountability, organizations can navigate the ethical minefield of AI deployment more effectively. Awareness of all things AI is important, but by giving it the context and focus of the use case you are working on, applying the RADAR principles can deliver effective AI deployments.
Finally, as we learn and understand the impact of AI deployments within the context of a functional domain, business unit, or department, our knowledge and information need to be socialized.
Reporting individual experiences to the team or organization is important. Sustaining a trustworthy AI deployment requires continuous learning and adaptation. As professionals, we learn to improve and update existing processes to strengthen the reliability and consistency of deployment outcomes. This due diligence reduces the situations that result in AI failures.
With predictive AI models, the establishment of MLOps has been tied to the operationalization of machine learning models. Similarly, GenOps will become an important aspect of operationalizing GenAI-led systems. Because GenAI is based on common, generalized LLMs, its applications can be enterprise-wide, and the impact of a bad deployment can be enterprise-wide as well. Monitoring diverse uses across the enterprise can be very difficult in the early stages of adoption and deployment. This is where reporting and communicating good or bad outcomes to the user community becomes a pivotal activity, enabling improvement and customization of the models (RAG, RLHF, fine-tuning of LLMs, etc.). A sketch of such a reporting loop follows.
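As one possible shape for that reporting loop, the sketch below collects individual outcome reports and counts bad outcomes per use case so the worst offenders can be prioritized for RAG, fine-tuning, or other fixes; all names and example reports are hypothetical, not a reference to any specific GenOps product.

```python
# A minimal sketch of collecting and summarizing GenAI outcome reports.
from collections import Counter

reports = []  # in practice this would be a shared store, not a local list

def report_outcome(use_case: str, outcome: str, detail: str) -> None:
    """Record one user's good/bad experience with a GenAI output."""
    assert outcome in {"good", "bad"}
    reports.append({"use_case": use_case, "outcome": outcome, "detail": detail})

def summarize() -> Counter:
    """Count bad outcomes per use case to prioritize model improvements."""
    return Counter(r["use_case"] for r in reports if r["outcome"] == "bad")

report_outcome("contract-summary", "bad", "Cited a clause that does not exist")
report_outcome("contract-summary", "good", "Accurate summary of payment terms")
report_outcome("hr-policy-qa", "bad", "Outdated parental-leave policy returned")
print(summarize().most_common())  # use cases generating the most bad outcomes
```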
Conclusion
In conclusion, ethical AI deployment is a team effort that demands a strong commitment from professionals to uphold moral values, foster inclusivity, and ensure transparency. By embracing the practical RADAR principles, professionals can navigate the uncharted territory of AI deployment with integrity and foresight. While AI capabilities are still evolving, staying alert, collaborating, and communicating about individual experiences will lead to a more responsible community, society, enterprise, and nation.
Remember, the decisions we make today will shape the AI landscape of tomorrow.
#AIdeployments #RADAR #WIA #ResponsibleAI #AIethics #SucceedingwithAI
Sources:
https://medium.com/the-generator/what-does-it-mean-to-regulate-ai-692c9842967b
EU AI Act: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
NIST AI Risk Management Framework (US): https://www.nist.gov/itl/ai-risk-management-framework
Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy