The Potential Impact of New AI Regulations

Author: Shini Menon, CISA, CISM, CDPSE
Date Published: 1 January 2024

This article was not written by a generative artificial intelligence (AI) chatbot. This disclaimer would not have mattered a year ago; however, AI has since experienced a significant boom. As a result, there has been a sudden, widespread realization that AI cannot be ignored, whether because it instills fear in those unaware of its potential or because it generates so much excitement about what it can do.

AI is an umbrella term that is often confused with related terms such as machine learning (ML) and large language models (LLMs). Amid the potential and excitement surrounding emerging AI technologies and their impact, industry leaders, government organizations and enterprises are debating regulatory interventions such as the European Union (EU) AI Act,1 New York City's AI law (restricting the use of automated employment decision tools)2 and approximately 30 other AI-related proposals circulating around the world. There is a dire need for regulations that address the use, misuse and ethical challenges of generated data. The enforcement of such regulations will have profound impacts on various sectors, including education, data privacy, life science and research.

An analysis of the impact of AI reveals key themes underscoring the influence of regulations, bills and laws. The goal of the analysis is to juxtapose the present state with a desired future state, thereby assessing the need for impactful regulations and increasing general awareness of how regulations can improve the field of AI.

Types of Regulatory Interventions Around the World

China, the EU, the United Kingdom and the United States have each laid out regulatory requirements that address the changing AI landscape. Recently, the EU AI Act laid the groundwork for further development of a risk management framework. China has taken a different approach, adopting three distinct regulatory measures aligned with national, regional and local perspectives; the latest, China's deep synthesis provisions, came into effect in 2023.3 Although other countries have tried to impose and implement various regulations, few have kept pace with the rate at which ML and AI tools and algorithms have evolved.

Figure 1

Figure 1 depicts the annual increase in laws and regulations that feature the words “artificial intelligence.” A review of legislative bodies in 127 countries found 37 such laws passed in recent years, which underscores the relevance of AI in the regulatory discussion.4

Countries around the world, including Canada, China, Spain, the United Kingdom and the United States, have either drafted AI-specific bills, enacted AI acts or passed regulations in an effort to address ethical concerns (e.g., data bias, origin or data sources specific to a region or country) that have arisen in this rapidly progressing field.5 Some of these regions take a sectoral approach to regulating AI (e.g., specific to data privacy or to content), while others, such as China and the United States, focus on multiple areas (e.g., transparency, bias and data protection) when implementing laws. There is room for improvement in drafting these regulations, starting with understanding how AI and ML models and algorithms work.

Understanding AI and ML Models

Figure 2

AI and ML models (based on algorithms) rely on several inputs, the main input being data. Raw data is curated into training data sets, from which known AI and ML model types learn to produce a desired result or effect. Most models then interpret new data and provide recommendations for solving specific problems. Figure 2 depicts the steps to developing an AI model.

Deep neural networks, logistic and linear regression, decision trees, k-nearest neighbor, random forest and Naïve Bayes are some of the currently popular algorithmic models. These models are typically grouped into categories based on where they can be applied, whether across an enterprise or within a specific system.
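
To ground these steps, the following is a minimal sketch, assuming the open-source scikit-learn library and its bundled iris sample data (neither is mentioned in this article), of the development flow in figure 2: curate data, derive a training data set, and train and evaluate two of the models named above.

# A minimal sketch of the model-development steps described above:
# gather data, split it into training and test sets, train two of the
# named algorithms and evaluate them. The data set and parameters are
# illustrative assumptions, not taken from the article.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)  # stand-in for curated input data

# Derive a training data set and hold out data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train two of the popular models named above.
for model in (RandomForestClassifier(n_estimators=100, random_state=42),
              GaussianNB()):
    model.fit(X_train, y_train)
    # Interpret new data and evaluate the result.
    predictions = model.predict(X_test)
    print(type(model).__name__, accuracy_score(y_test, predictions))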

Figure 3

Figure 3 depicts the categorization of AI models and their application areas.

Any AI-based ML model learns via statistical approaches applied to data sets. There are four categories of AI model learning (two of which are contrasted in the sketch after this list):

  • Supervised learning involves learning from labeled data sets or data gathered from experience.
  • Semisupervised learning involves two sets of data: one labeled and one unlabeled.
  • Unsupervised learning enables the machine to learn from unlabeled data.
  • Reinforcement learning is based on a series of positive or negative outcomes with regard to a learning goal.
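
To make the categories concrete, the short sketch below contrasts two of them: unsupervised clustering of unlabeled points and a single reinforcement learning (tabular Q-learning) update. It is an illustration only; the toy data, state and action counts, and learning parameters are all assumptions, and supervised learning appears in the earlier sketch.

# Illustrative contrast of two learning categories (assumed toy data).
import numpy as np
from sklearn.cluster import KMeans

# Unsupervised learning: no labels; the algorithm groups raw points itself.
points = np.array([[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [8.1, 7.9]])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print("cluster assignments:", clusters)

# Reinforcement learning: no labels either; a Q-table is updated from a
# positive or negative reward received after taking an action.
q_table = np.zeros((3, 2))        # 3 states x 2 actions (toy sizes)
state, action, reward, next_state = 0, 1, 1.0, 2
alpha, gamma = 0.1, 0.9           # learning rate, discount factor
q_table[state, action] += alpha * (
    reward + gamma * q_table[next_state].max() - q_table[state, action]
)
print("updated Q-value:", q_table[state, action])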

What Is the Impact of Regulations on AI?

It is the role of governments to debate issues related to fairness, authenticity and content restrictions (some of the biggest regulatory considerations). Many existing AI acts and regulations establish far-reaching consequences for LLMs, generative AI chatbots and natural language processing algorithms. IT professionals and organizations around the world need to examine the current and future impact of regulations to understand where further regulatory intervention may be required.

Figure 4

Impact of Recent Regulations
Existing regulations do not delve any deeper into actual impacts (such as promoting ethical practices with respect to children or ensuring that AI models make decisions based on human-centric values) than prescribed guidelines and prohibitions. The EU aims to categorize applications based on the risk they pose to the public.6 China's AI regulations recommend adding or creating filter mechanisms (such as management systems to review algorithms and provide user and child protection) via a regulatory LLM designed to scrub unwarranted, nonproprietary and unwelcome information.7 Both of these approaches contrast starkly with rulemaking efforts in the United States, where it is more common to propose principles that guide how AI is designed or used via regulatory guardrails.8

All current regulatory interventions were put in place by governments and regulators that were strongly motivated to manage certain risk areas. Those areas are identified based on the type of risk (generated by AI and ML) and the specific outcomes expected in response to the regulations. Figure 4 depicts the four risk areas managed by regulatory intervention.

The best outcome from ML and AI regulations and bills would be AI models that are aligned with human values, protect against unethical principles and have the ability to show how decisions are made.9

Desired Impact of AI Regulations
There are both implicit and explicit desired impacts associated with the pursuit of regulatory guardrails to enforce fairness in the AI universe:

  1. Impact of uncontrolled data—There is a need to safeguard where data is generated, its purpose, its original source and citation of its owner/author. Some regulations (e.g., the EU AI Act, China's AI regulations) state this need, but others do not. A mandatory regulatory requirement should be in place to provide the necessary safeguards.
  2. Impact and bias related to ethical and human considerations—Regulations should require organizations to review AI models for the types of bias present in existing algorithms (e.g., population-centric bias driven by unverified data sources) and refine those algorithms. Regulatory requirements should list the types of biases algorithms must avoid, and algorithms can be trained on diverse data sets to prevent bias (see the first sketch following this list).
  3. Impact of easily accessible, private and sensitive information—Several data privacy regulations have addressed this potential security risk (e.g., the EU AI Act, China's AI regulations); however, regulations need to mandate that AI organizations provide data cleansing, scrubbing and the ability to encrypt personally identifiable information (PII), as illustrated in the second sketch following this list.
  4. Impact on children’s education—Regulators and leading AI organizations need to work together to design AI-specific filtering models that show age-appropriate content, filter out bias and promote children’s curiosity, thereby helping them derive learning from answers rather than simply providing easy answers to questions, as generative AI can.
  5. Impact on research and innovation—AI models can help derive meaningful insights from research papers based on the priorities, areas, timelines and innovation readiness (the ability to implement an idea quickly) of such papers. In the future, AI models may be able to bridge the gap between research and innovation by designing research and development models by topic or by generating innovation models based on existing research by topic. Regulators need to be aware of AI’s potential and create or update regulatory bills and laws to build trust in the data and efficacy of innovative AI models.
  6. Impact on life sciences (including medicinal drug development)—Currently, there are AI algorithms that expedite the identification of potential drug components that may be essential in the formulation of drugs for some life-threatening diseases.10 There are also models that mimic physician consultations with reportedly 70-80 percent accuracy. The key for regulators is to achieve trust and data accuracy by auditing these models on a periodic basis, a process very similar to statutory audits or onsite inspections.
  7. Future impact on devices (interconnected)—The world is complex and connected. Smartwatch health data can sit on the same cloud service that smart home devices share, tied to the same email address. Because of the inevitable interconnectivity among data, user and device, it is imperative that regulators start considering all AI software on primary and secondary devices as an extension of the device itself and provide regulatory intervention (either via regulatory AI models or additional filters for bias and privacy issues).
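
As an illustration of the bias review suggested in item 2, the sketch below runs a simple disparate-impact-style check comparing a model's favorable-outcome rates across two groups. The group labels, outcomes and the four-fifths (0.8) threshold heuristic are illustrative assumptions, not requirements drawn from any regulation.

# A hypothetical bias review of the kind item 2 suggests regulators could
# require: compare a model's positive-outcome rates across two groups.
# Group labels, outcomes and the 0.8 "four-fifths rule" threshold are
# illustrative assumptions.
import numpy as np

groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
model_outcomes = np.array([1, 1, 0, 1, 0, 0, 0, 1])  # 1 = favorable decision

rates = {g: model_outcomes[groups == g].mean() for g in np.unique(groups)}
print("selection rates by group:", rates)

ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}",
      "(flag for review)" if ratio < 0.8 else "(within heuristic)")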
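
Similarly, for the data cleansing and PII encryption capability argued for in item 3, the following is a minimal sketch assuming regex-based redaction and symmetric encryption via the open-source cryptography package; the patterns and key handling are simplified assumptions, not a prescribed standard.

# A hypothetical sketch of the PII cleansing and encryption capability
# item 3 argues regulators should mandate. The regex patterns and the
# symmetric-key choice (Fernet, from the "cryptography" package) are
# illustrative assumptions.
import re
from cryptography.fernet import Fernet

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text: str) -> str:
    """Redact email addresses and phone numbers before text is used
    to train or prompt a model."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

key = Fernet.generate_key()   # in practice, store in a key-management system
fernet = Fernet(key)

record = "Patient Jane, jane@example.com, 555-867-5309"
encrypted = fernet.encrypt(record.encode())   # PII encrypted at rest
print(scrub(record))                          # PII scrubbed for model use
print(fernet.decrypt(encrypted).decode() == record)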

What Does the Future Hold for AI Regulations?
Issues that need to be addressed immediately to ensure the effectiveness of both current and future AI regulations11 include:

  • Awareness and training for the regulators on the types and applications of AI models
  • Consideration of the current applicability of data, algorithms and models around the world in light of risk, along with a layered approach to self-regulation (driven not just by regulators but also by feedback from the industry, the public and organizations) to enable continuous feedback and improvement
  • Establishment of a consortium of regulators who can meet and discuss the gaps in current regulatory laws (such as data transparency and decision-making in AI models) and how those gaps can be addressed in the short, medium and long term
  • Ongoing and periodic alignment of regulatory and industry bodies on the impact of AI, with discussion of challenges and solutions. Alignment could result from government, state or local regulatory bodies meeting with organizations, enterprises and consultants that develop and implement ML and AI tools.

Conclusion

The future of regulatory intervention for AI and ML seems like a long, winding road. It will help if industry practitioners and regulators form a consortium or engage in joint projects to continually monitor and improve existing regulations so that guardrails for implementing AI tools are clearly identified and implemented.

Note

The details of the regulations included in this article may have changed by the time of publication but were accurate at the time of writing.

Endnotes

1 European Parliament, “EU AI Act: First Regulation on Artificial Intelligence,” 14 June 2023, http://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
2 Lohr, S.; “A Hiring Law Blazes a Path for AI Regulation,” The New York Times, 25 May 2023, http://www.nytimes.com/2023/05/25/technology/ai-hiring-law-new-york.html
3 Finlayson-Brown, J.; S. Ng; “China Brings Into Force Regulations on the Administration of Deep Synthesis of Internet Technology,” Allen and Overy, 1 February 2023, http://www.allenovery.com/en-gb/global/blogs/data-hub/china-brings-into-force-regulations-on-the-administration-of-deep-synthesis-of-internet-technology-addressing-deepfakes-and-similar-technologies
4 Li, C.; “Global Push to Regulate Artificial Intelligence, Plus Other AI Stories to Read This Month,” World Economic Forum, 2 May 2023, http://www.weforum.org/agenda/2023/05/top-story-plus-other-ai-stories-to-read-this-month/
5 Op cit European Parliament; Op cit Lohr; Op cit Finlayson-Brown and Ng; Op cit Li; US White House, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, USA, October 2022, http://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
6 Smith, B.; “Advancing AI Governance in Europe and Internationally,” Microsoft, 29 June 2023, http://blogs.microsoft.com/eupolicy/2023/06/29/advancing-ai-governance-europe-brad-smith/
7 Pan, C.; “China Sets Out New Rules for Generative AI, With Beijing Emphasising Healthy Content and Adherence to ‘Socialist Values,’” South China Morning Post, 13 July 2023, http://www.scmp.com/tech/big-tech/article/3227576/china-sets-out-new-rules-generative-ai-beijing-emphasising-healthy-content-and-adherence-socialist
8 Op cit Smith
9 Levin, B.; L. Downes; “Who Is Going to Regulate AI?” Harvard Business Review, 19 May 2023, http://hbr.org/2023/05/who-is-going-to-regulate-ai; Hutson, M.; “Rules to Keep AI in Check: Nations Carve Different Paths for Tech Regulation,” Nature, 8 August 2023, http://www.nature.com/articles/d41586-023-02491-y
10 EurekAlert!, “Insilico Medicine Receives IND Approval for Novel AI-Designed USP1 Inhibitor for Cancer,” 25 May 2023, http://www.eurekalert.org/news-releases/990417
11 Kohn, B.; F. Pieper; “AI Regulation Around the World,” TaylorWessing, 9 May 2023, http://www.taylorwessing.com/en/interface/2023/ai---are-we-getting-the-balance-between-regulation-and-innovation-right/ai-regulation-around-the-world; Op cit European Parliament; Tabsharani, F.; “Types of AI Algorithms and How They Work,” TechTarget, 5 May 2023, http://www.techtarget.com/searchenterpriseai/tip/Types-of-AI-algorithms-and-how-they-work; Lobel, O.; “The Law of AI for Good,” SSRN, 26 January 2023, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=4338862; Thierer, A.; “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” RStreet, 9 February 2023, http://www.rstreet.org/commentary/mapping-the-ai-policy-landscape-circa-2023-seven-major-fault-lines/; Frank, M.; “Managing Existential Risk From AI Without Undercutting Innovation,” Center for Strategic and International Studies, 10 July 2023, http://www.csis.org/analysis/managing-existential-risk-ai-without-undercutting-innovation

SHINI MENON | CISA, CISM, CDPSE

Is associate director of consulting services at Xybion Corporation, Canada. She has nearly 17 years of experience working at product, service and consulting firms such as Oracle, Infosys, MetricStream, PricewaterhouseCoopers, KPMG and Deloitte. Her expertise lies in helping clients implement software products (specifically in the life science and medical device industries); advising on governance, risk and compliance platforms; and setting up and managing teams for IT audit, risk and control transformation programs, International Organization for Standardization (ISO) and US Sarbanes-Oxley Act of 2002 compliance, third-party risk assessments, data privacy standards and regulatory compliance. She is associated with ISACA and other professional bodies.