All About the AI Regulatory Landscape


 

AI is advancing at an accelerated pace, and while the possibilities are vast, so are the accompanying risks, such as bias, data privacy violations, and security threats. The ideal approach is to embed ethics and responsible-use guidelines into AI by design: systems should be systematically built to filter out the risks and pass along only the technological benefits.

Quoting Salesforce:

“Ethics by Design is the intentional process of embedding our ethical and humane use guiding principles in the design and development.”

But this is easier said than done. Even developers find it challenging to decipher the complexity of AI algorithms, especially their emergent capabilities.

 

As per Deepchecks, “an ability in an LLM is considered emergent if it wasn’t explicitly trained for or expected during the model’s development but appears as the model scales up in size and complexity.”

 

Given that even developers struggle to understand the internals of these algorithms and the reasons behind their behavior and predictions, expecting authorities to understand and regulate them within a short time frame is a tall order.

Further, it is challenging for everyone to keep pace with the latest developments, let alone comprehend them in time to build suitable guardrails.

 

The EU AI Act

 

That brings us to the European Union (EU) AI Act – a historic move that lays out a comprehensive set of rules to promote trustworthy AI.

 


The legal framework aims to “ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law and the environment from harmful effects of AI systems while supporting innovation and improving the functioning of the internal market.”

The EU, which previously led the way on data protection with the General Data Protection Regulation (GDPR), is now doing the same for AI with the AI Act.

 

The Timeline

 

To illustrate why regulation takes a long time, consider the timeline of the AI Act: it was first proposed by the European Commission in April 2021 and adopted by the European Council in December 2022. The trilogue between the three legislative bodies – the European Commission, the Council, and the Parliament – concluded in March 2024, and the Act is expected to enter into force by May 2024.

 

Who Does It Concern?

 

As for the organizations that come under its purview, the Act applies not only to developers within the EU but also to global vendors that make their AI systems available to EU users.

 

Risk-Grading

 

Since not all risks are alike, the Act takes a risk-based approach that grades applications into four tiers – unacceptable, high, limited, and minimal – based on their impact on a person’s health, safety, or fundamental rights.

The risk-grading implies that the rules become stricter, and the oversight greater, as the application’s risk increases. Applications that carry unacceptable risks, such as social scoring and biometric surveillance, are banned outright. A toy illustration of this tiering appears below.
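Purely for illustration, here is a minimal Python sketch of how such tiered triage could be expressed in code. The RiskTier enum, the example use cases, and their tier assignments are assumptions made for this sketch, not the Act’s official classification or legal guidance.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations and conformity assessments"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Hypothetical mapping of example use cases to tiers, for illustration only.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric surveillance": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a use case; unknown cases need human review."""
    tier = USE_CASE_TIERS.get(use_case.lower())
    if tier is None:
        raise ValueError(f"Unknown use case '{use_case}': escalate to compliance review.")
    return tier

if __name__ == "__main__":
    for case in ("social scoring", "customer service chatbot"):
        tier = triage(case)
        print(f"{case}: {tier.name} -> {tier.value}")

The point of the sketch is the shape of the policy: stricter tiers carry heavier obligations, and anything uncategorized defaults to human review rather than silent approval.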

The ban on unacceptable-risk systems becomes enforceable six months after the regulation enters into force, while the obligations for high-risk AI systems take effect after thirty-six months.

 

Transparency

 

To start with the fundamentals, it is crucial to define what constitutes an AI system. Too loose a definition brings a broad spectrum of traditional software under the Act’s purview, hampering innovation, while too tight a definition lets risky systems slip through.

For example, general-purpose Generative AI applications, and the models underlying them, must provide the necessary disclosures, such as details of their training data, to comply with the Act. The more powerful models carry additional obligations, such as model evaluations, assessment and mitigation of systemic risks, and incident reporting.

Amid the flood of AI-generated content and interactions, it is hard for end users to tell when a response is AI-generated. Hence, the user must be notified when an outcome is not human-generated or contains artificial images, audio, or video.
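To make that concrete, here is a minimal Python sketch of a disclosure wrapper that appends an AI-generated label to model output. The LabeledOutput class, its field names, and the label text are hypothetical choices for this sketch, not wording prescribed by the Act.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    """Model output paired with a user-facing, machine-readable disclosure."""
    content: str
    ai_generated: bool = True
    disclosure: str = "This response was generated by an AI system."
    # Timestamp kept for audit trails; the format is an assumption of this sketch.
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def render(self) -> str:
        # Append the disclosure only when the content is actually AI-generated.
        if self.ai_generated:
            return f"{self.content}\n\n[{self.disclosure}]"
        return self.content

if __name__ == "__main__":
    reply = LabeledOutput(content="Here is a summary of your document...")
    print(reply.render())

The design choice worth noting is that the disclosure travels with the content as structured data, so downstream interfaces can render the notice consistently instead of relying on each surface to remember it.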

 

To Regulate or Not?

 

Technology like AI, and GenAI in particular, transcends boundaries and can potentially transform how businesses run today. The timing of the AI Act is apt, aligning with the onset of the Generative AI era, which tends to exacerbate these risks.

Nailing AI safety should be on every organization’s agenda, drawing on our collective brainpower and intelligence. While other nations are contemplating whether to introduce new regulations for AI risks or to amend existing ones to handle the emerging challenges of advanced AI systems, the AI Act serves as the gold standard for governing AI. It blazes a trail for other nations to follow and to collaborate on putting AI to proper use.

Regulation is often viewed as an impediment in the tech race among countries – a drag on gaining a dominant global position.

However, if there must be a race, it would be great to witness one where we compete to make AI safer for everyone, holding to the gold standard of ethics to launch the most trustworthy AI in the world.
 
 

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and an international speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be a part of this transformation.
