AI TRiSM: Ensuring that AI remains ethical

Image credit: iStock

Companies are urged to create standards and policies that clearly define the boundaries of artificial intelligence.
    • Author: Quantumrun Foresight
    • October 20, 2023

    Insight summary



    In 2022, research firm Gartner introduced AI TRiSM, standing for AI Trust, Risk, and Security Management, to ensure the governance and reliability of AI models. The framework consists of five pillars: explainability, model operations, data anomaly detection, resistance to adversarial attacks, and data protection. The report highlights that poor management of AI risks can lead to significant losses and security breaches. Implementing AI TRiSM requires a cross-functional team from legal, compliance, IT, and data analytics. The framework aims to build a culture of "Responsible AI," focusing on ethical and legal concerns, and is likely to influence hiring trends, government regulations, and ethical considerations in AI.



    AI TRiSM context



    According to Gartner, there are five pillars to AI TRiSM: explainability, Model Operations (ModelOps), data anomaly detection, adversarial attack resistance, and data protection. Based on Gartner's projections, organizations that implement these pillars will see a 50 percent improvement in their AI model results in terms of adoption, business goals, and user acceptance by 2026. Additionally, AI-powered machines will make up 20 percent of the world's workforce and contribute 40 percent of overall economic productivity by 2028.
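As a minimal illustration of the data anomaly detection pillar, the check below flags incoming feature values that fall far outside the distribution seen during training. The baseline values and the z-score threshold are illustrative assumptions, not part of Gartner's framework.

```python
# Sketch of the "data anomaly detection" pillar: flag incoming feature
# values that deviate sharply from the training distribution.
from statistics import mean, stdev

def fit_baseline(training_values):
    """Record the mean and standard deviation seen during training."""
    return mean(training_values), stdev(training_values)

def is_anomalous(value, baseline, z_threshold=3.0):
    """Flag a value whose z-score exceeds the threshold."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > z_threshold

baseline = fit_baseline([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])
print(is_anomalous(10.4, baseline))  # in-distribution -> False
print(is_anomalous(25.0, baseline))  # far outside the baseline -> True
```

In production, the baseline would be refreshed whenever the model is retrained, and flagged inputs would be routed for review rather than silently scored.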



    The findings of Gartner's survey suggest that many organizations have implemented hundreds or thousands of AI models that IT executives cannot comprehend or interpret. Organizations that do not adequately manage AI-related risks are significantly more prone to encountering unfavorable outcomes and breaches. The models may not function as intended, leading to security and privacy violations, and financial, individual, and reputational harm. Inaccurate implementation of AI can also cause organizations to make misguided business decisions.
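One hedged way to close that interpretability gap is permutation importance, which asks "how much worse does the model get if we scramble one feature?" and needs no access to the model's internals. The toy model and data below are invented for illustration; this is one technique among many, not a method the article prescribes.

```python
# Sketch of a model-agnostic explainability check: permutation importance.
import random

def accuracy(model, rows, labels):
    """Fraction of rows the model classifies correctly."""
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_idx, metric, seed=0):
    """Metric drop after shuffling one feature column across rows."""
    rng = random.Random(seed)
    base = metric(model, rows, labels)
    shuffled_col = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled_col)
    scrambled = [list(r) for r in rows]
    for r, v in zip(scrambled, shuffled_col):
        r[feature_idx] = v
    return base - metric(model, scrambled, labels)

# Toy "model" that only looks at feature 0; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
rows = [[0.9, 5], [0.1, 7], [0.8, 2], [0.2, 9]]
labels = [1, 0, 1, 0]
print(permutation_importance(model, rows, labels, 0, accuracy))  # drop for the feature the model relies on
print(permutation_importance(model, rows, labels, 1, accuracy))  # 0.0 for the ignored feature
```

A report of per-feature drops like this gives non-specialist executives a first-order answer to "what is this model actually using?"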



    To successfully implement AI TRiSM, a cross-functional team of legal, compliance, security, IT, and data analytics personnel is required. Establishing a dedicated team or task force with proper representation from each business area involved in the AI project will also yield optimal results. It's also essential to ensure that each team member clearly understands their roles and responsibilities, as well as the goals and objectives of the AI TRiSM initiative.



    Disruptive impact



    To make AI safe, Gartner recommends several vital steps. First, organizations need to grasp the potential risks associated with AI and how to mitigate them. This effort requires a comprehensive risk assessment that considers not only the technology itself but also its impact on people, processes, and the environment.
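A risk assessment of this kind often starts as a simple register. The sketch below scores each risk as likelihood times impact and surfaces the highest-priority items first; the risks and scores are illustrative assumptions, not Gartner's.

```python
# Sketch of a minimal AI risk register: score each risk as
# likelihood x impact and rank the highest-priority items first.
risks = [
    {"name": "biased training data", "likelihood": 4, "impact": 5},
    {"name": "model drift in production", "likelihood": 3, "impact": 3},
    {"name": "adversarial input attack", "likelihood": 2, "impact": 4},
]

for risk in risks:
    risk["score"] = risk["likelihood"] * risk["impact"]

for risk in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{risk["name"]}: {risk["score"]}')
```

A fuller assessment would extend each entry with the affected people, processes, and environmental impacts the paragraph above calls out, plus an owner and a mitigation plan.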



    Second, organizations need to invest in AI governance, which includes policies, procedures, and controls for managing AI risks. This strategy includes ensuring that AI systems are transparent, explainable, accountable, and compliant with relevant laws and regulations. Additionally, ongoing monitoring and auditing of AI models are crucial to identify and mitigate any potential risks that may arise over time. Finally, organizations need to develop a culture of AI safety, promoting awareness, education, and training among employees and stakeholders. This includes training on the ethical use of AI, the risks associated with AI, and how to identify and report issues or concerns.
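The ongoing monitoring step can be as simple as comparing the share of positive predictions in production against the rate observed at validation time and raising an alert when it drifts beyond a tolerance. The threshold and the example rates below are illustrative assumptions.

```python
# Sketch of an ongoing-monitoring check: alert when the production
# positive-prediction rate drifts away from the validation-time rate.
def drift_alert(validation_rate, production_predictions, tolerance=0.10):
    """Return True when the positive-prediction rate drifts too far."""
    if not production_predictions:
        return False
    production_rate = sum(production_predictions) / len(production_predictions)
    return abs(production_rate - validation_rate) > tolerance

# At validation time, 30% of cases were predicted positive.
print(drift_alert(0.30, [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]))  # 30% -> False, no alert
print(drift_alert(0.30, [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]))  # 80% -> True, alert
```

In practice the alert would feed the audit trail described above rather than print to a console, and the check would run per feature and per segment, not just on the overall prediction rate.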



    These efforts will likely result in more companies building their Responsible AI departments. This emerging governance framework addresses the legal and ethical obstacles related to AI by documenting how organizations approach them. The framework and its associated initiatives aim to eliminate ambiguity to prevent unintended negative consequences. The principles of a Responsible AI framework focus on designing, developing, and using AI in ways that benefit employees, provide value to customers, and positively impact society.



    Implications of AI TRiSM



    Wider implications of AI TRiSM may include: 




    • As AI TRiSM becomes increasingly important, companies will need to hire more skilled workers knowledgeable in this field, such as AI security analysts, risk managers, and ethicists.

    • New ethical and moral considerations, such as the need for transparency, fairness, and accountability in using AI systems.

    • AI-augmented innovations that are secure, trustworthy, and reliable.

    • Increased pressure for government regulation to protect individuals and organizations from risks associated with AI systems.

    • A greater focus on ensuring that AI systems are not biased against particular groups or individuals.

    • New job opportunities for those with AI skills, while potentially displacing workers without them.

    • Increased energy consumption and data storage requirements for constantly updated training data.

    • More companies being fined for not adopting global Responsible AI standards.



    Questions to consider




    • If you work in AI, how is your company training its algorithms to be ethical?

    • What are the challenges of building Responsible AI systems?

