Europe AI regulation: An attempt to keep AI humane

Image credit: iStock

The European Commission's artificial intelligence regulatory proposal aims to promote the ethical use of AI.
    • Author: Quantumrun Foresight
    • June 13, 2022

    Insight summary



    The European Commission (EC) is taking strides to set ethical standards for artificial intelligence (AI), focusing on preventing misuse in areas like surveillance and consumer data collection. The proposal has sparked debate in the tech industry and could pave the way for a unified approach with the US, giving these standards global influence. However, the regulations may also have unintended consequences, such as limiting market competition and affecting job opportunities in the tech sector.



    European AI regulation context



    The EC has been actively focusing on creating policies to safeguard data privacy and online rights. Recently, this focus has expanded to include the ethical use of AI technologies. The EC is concerned about the potential misuse of AI in various sectors, from consumer data collection to surveillance. By regulating these applications, the Commission aims to set a standard for AI ethics, not just within the EU but potentially as a model for the rest of the world.



    In April 2021, the EC took a significant step by releasing a set of rules aimed at monitoring AI applications. These rules are designed to prevent governments or organizations from using AI for surveillance, the perpetuation of bias, or repression. Specifically, the regulations prohibit AI systems that could harm individuals either physically or psychologically. For example, AI systems that manipulate people's behavior through hidden messages are not allowed, nor are systems that exploit people's physical or mental vulnerabilities.



    Alongside this, the EC has also developed a more rigorous policy for what it considers "high-risk" AI systems. These are AI applications used in sectors that have a substantial impact on public safety and well-being, such as medical devices, safety equipment, and law enforcement tools. The policy outlines stricter auditing requirements, an approval process, and ongoing monitoring after these systems are deployed. Applications in biometric identification, critical infrastructure, and education also fall under this umbrella. Companies that fail to comply with these regulations may face hefty fines of up to EUR 30 million (about USD 32 million) or 6 percent of their global annual turnover, whichever is higher.
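    To make the scale of that penalty concrete, below is a minimal sketch in Python, assuming the "whichever is higher" rule and the figures cited above; the function name and amounts are illustrative, not an official formula:

        def max_fine_eur(global_annual_turnover_eur: float) -> float:
            """Penalty ceiling sketch: the higher of a EUR 30 million
            fixed cap and 6 percent of global annual turnover."""
            FIXED_CAP_EUR = 30_000_000   # fixed cap cited above
            TURNOVER_RATE = 0.06         # 6 percent of worldwide turnover
            return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

        # A firm with EUR 10 billion in turnover faces a EUR 600 million ceiling,
        # while a firm with EUR 100 million in turnover hits the EUR 30 million cap.
        print(f"{max_fine_eur(10_000_000_000):,.0f}")  # 600,000,000
        print(f"{max_fine_eur(100_000_000):,.0f}")     # 30,000,000

    In other words, under these assumptions the revenue-based cap dominates only for companies whose global turnover exceeds EUR 500 million; below that, the fixed cap applies.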



    Disruptive impact



    The technology industry has expressed concerns about the EC's regulatory framework for AI, arguing that such rules could hinder technological progress. Critics point out that the definition of "high-risk" AI systems in the framework is not clear-cut. For instance, large tech companies that use AI for social media algorithms or targeted advertising are not classified as "high-risk," despite the fact that these applications have been linked to various societal issues like misinformation and polarization. The EC counters this by stating that national supervisory agencies within each EU country will have the final say on what constitutes a high-risk application, but this approach could lead to inconsistencies across member states.



    The European Union (EU) is not acting in isolation; it aims to collaborate with the US to establish a global standard for AI ethics. The US Senate's Strategic Competition Act, introduced in April 2021, also calls for international cooperation to counter "digital authoritarianism," a veiled reference to practices like China's use of biometrics for mass surveillance. This transatlantic partnership could set the tone for global AI ethics, but it also raises questions about how such standards would be enforced worldwide. Would countries with different views on data privacy and individual rights, like China and Russia, adhere to these guidelines, or would this create a fragmented landscape of AI ethics?



    If these regulations become law in the mid-to-late 2020s, they could have a ripple effect on the technology industry and workforce in the EU. Companies operating in the EU may opt to apply these regulatory changes globally, aligning their entire operation with the new standards. However, some organizations might find the regulations too burdensome and choose to exit the EU market altogether. Both scenarios would have implications for employment in the EU's tech sector. For example, a mass exit of companies could lead to job losses, while global alignment with EU standards could make EU-based tech roles more specialized and potentially more valuable.



    Implications for increased AI regulation in Europe



    Wider implications of the EC's push to regulate AI may include:




    • The EU and the US forming a mutual certification agreement for AI companies, leading to a harmonized set of ethical standards that companies must follow, irrespective of their geographical location.

    • Growth in the specialized field of AI auditing, fueled by increased collaboration between private firms and public sectors to ensure compliance with new regulations.

    • Nations and businesses from the developing world gaining access to digital services that adhere to the ethical AI standards set by Western nations, potentially elevating the quality and safety of these services.

    • A shift in business models to prioritize ethical AI practices, attracting consumers who are increasingly concerned about data privacy and ethical technology use.

    • Governments adopting AI in public services like healthcare and transportation with greater confidence, knowing that these technologies meet rigorous ethical standards.

    • Increased investment in educational programs focused on ethical AI, creating a new generation of technologists who are well-versed in both AI capabilities and ethical considerations.

    • Smaller tech startups facing barriers to entry due to the high costs of regulatory compliance, potentially stifling competition and leading to market consolidation.



    Questions to consider




    • Do you believe that governments should regulate AI technologies and how they are deployed?

    • How else might increased regulation within the technology industry affect the way companies in the sector operate? 

