Driven by the rapid advancement of artificial intelligence, governments worldwide are enacting important legislation to promote AI’s virtues while also minimizing its risks. To continue leveraging AI innovations in HR technology with accountability, it’s imperative to stay abreast of the evolving regulations.
This article explores the latest legal developments impacting HR and AI, focusing on key regulations from the European Union (EU) and the United States.
EU AI Legislation
In May 2024, the European Parliament approved the EU AI Act, the first-ever comprehensive legal framework on AI by a major regulator. At the heart of the AI Act is a risk-based categorization system that classifies AI systems into four levels: unacceptable, high, limited, and minimal. This approach ensures that regulatory efforts are proportionate to the potential risks posed by different AI use cases.
The Act categorizes HR applications of AI as high-risk, which triggers specific requirements for their deployment. These include:
- A comprehensive risk management framework
- Data governance processes
- Monitoring and record-keeping procedures
- Detailed documentation, transparency, and human oversight
- Standards for accuracy, robustness, and cybersecurity
- Registration in a publicly accessible EU database
The Act also bans the use of AI for emotion recognition in the workplace, with severe fines for violations, reflecting the EU’s commitment to ethical AI use.
When Will the EU AI Act Take Effect?
The EU AI Act will come into effect 20 days after its publication in the EU’s Official Journal (a confirmed date isn’t yet available). While the core provisions will be fully applicable two years later, some aspects will be implemented at different stages following its publication:
- Prohibitions: These will take effect after just six months.
- Governance and General Purpose: The rules for governance and general-purpose AI models will be implemented after 12 months.
- Embedded AI Systems: The regulations for AI systems embedded in regulated products will have a longer transition period of 36 months.
United Kingdom Regulations
The Trades Union Congress (TUC) has published draft legislation titled the Artificial Intelligence (Regulation and Employment Rights) Bill, which aims to regulate the use of AI in the workplace. Although the TUC refers to the document as a bill, the TUC is not an official UK government body, so its proposals should be read as recommendations rather than prospective national law. Nevertheless, the TUC’s proposals are similar to the EU AI Act and seek to protect workers from risks associated with AI in hiring, monitoring, and firing processes.
The guidelines include several key provisions:
- Workplace AI risk assessments (WAIRA): Employers can’t use AI for high-risk decision-making until a WAIRA has been conducted. This assessment evaluates the AI system’s potential impact on health and safety, equality, data protection, and human rights.
- Transparency: Job applicants and employees have the right to request information about how AI is used in employment decisions that affect them. This includes the right to receive personalized explanations for high-risk decisions made by AI, particularly those that could be detrimental to their job prospects or employment status.
- Ban on emotion recognition systems: TUC proposes a ban on using emotion recognition technology in the workplace, especially where it could be used to disadvantage workers or jobseekers.
- Proof of no discrimination: Employers would need to demonstrate that the output of an AI tool wasn’t discriminatory. However, employers can establish a legal defense against discrimination claims if they can demonstrate they:
  - Did not create or modify the AI system.
  - Took reasonable steps to audit the system for bias.
  - Implemented safeguards to prevent discriminatory outcomes.
US AI Legislation
The US approach to AI regulation is more fragmented, combining federal and state-level initiatives. Key federal bills include:
- The Algorithmic Accountability Act: Would require businesses to assess and report the impact of automated decision systems on individuals.
- The Federal Artificial Intelligence Risk Management Act: Would require federal agencies to use the AI risk management framework developed by the National Institute of Standards and Technology (NIST).
- The Stop Spying Bosses Act and the No Robot Bosses Act: Would address employer surveillance and the use of automated decision systems in HR.
Note that, at the time of writing, all of these bills remain in the early stages of the legislative process.
State-Level Regulations
Some U.S. states and cities have already enacted their own AI-related laws, for example:
- New York City: Requires annual bias audits for automated employment decision tools and mandates transparency about their use, including disclosing selection or scoring rates across gender, race, or ethnicity categories.
- Illinois: Enacted the Artificial Intelligence Video Interview Act, which sets guidelines for using AI in video interviews, including obtaining candidates’ consent.
- Maryland: Similar to Illinois, passed a law restricting the use of facial recognition technology during job interviews unless the applicant explicitly consents under specific provisions outlined in the law.
- Massachusetts: Introduced the Preventing a Dystopian Work Environment Act, a bill that would safeguard employees from AI-related excessive surveillance, algorithmic hiring, and privacy concerns.
The EEOC’s Take
The U.S. Equal Employment Opportunity Commission (EEOC), though not a legislative body, has been actively addressing the implications of artificial intelligence in employment practices.
On May 18, 2023, the EEOC issued new technical guidance titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” This guidance outlines how to measure adverse impact when AI is used in employment decisions, such as hiring, promotion, and termination. By understanding and adhering to these guidelines, HR professionals can responsibly leverage AI to enhance decision-making while complying with existing requirements under the law.
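The EEOC guidance points to the long-standing “four-fifths rule” as a rule of thumb for spotting adverse impact: if one group’s selection rate is less than 80% of the highest group’s rate, the selection procedure may warrant closer scrutiny. As a rough sketch of that calculation (the group names and counts below are made up for illustration; they are not from the guidance):

```python
# Sketch of the four-fifths rule of thumb for adverse impact.
# Group labels and numbers are hypothetical, for demonstration only.

def selection_rates(applicants, selected):
    """Selection rate per group: number selected / number of applicants."""
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.
    A ratio below 0.8 suggests potential adverse impact."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 100, "group_b": 80}
selected = {"group_a": 50, "group_b": 24}

rates = selection_rates(applicants, selected)   # group_a: 0.50, group_b: 0.30
ratios = impact_ratios(rates)                   # group_a: 1.00, group_b: 0.60
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b falls below the four-fifths threshold
```

Note that the guidance treats the four-fifths rule as a practical heuristic, not a legal safe harbor, so a passing ratio alone doesn’t establish compliance.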
How a VMS Can Help With AI Legislation Compliance
While HR AI legislation is still developing, a vendor management system (VMS) can be a valuable tool for organizations with a contingent workforce that aim to comply with emerging regulations, particularly where suppliers leverage AI in their services. Here’s how a VMS can contribute:
- Standardized compliance requirements: A VMS allows you to define and enforce standardized AI compliance requirements within vendor procedures. This ensures all vendors adhere to your organization’s expectations regarding responsible AI use in HR practices.
- Centralized documentation: A VMS can serve as a central repository for storing and managing all compliance-related documents from your HR vendors. This includes contracts, data security policies, and AI risk assessments, making it easier to track and audit vendor practices.
- Streamlined communication: A VMS facilitates communication with vendors regarding AI compliance issues. You can use the platform to send requests for information, track outstanding actions, and ensure all vendors are kept up to date on your evolving HR AI compliance policies.
Guarantee compliance across your entire extended workforce and ensure its future-proof management with VectorVMS. Contact us or request a demo to explore the full capabilities of our best-in-class vendor management system!
Meet the Expert
Irene Koulianos – Program Manager
Irene Koulianos brings a decade of experience in contingent labor staffing and recruitment to her role as Program Manager. She helps new and existing clients develop best-fit vendor management solutions for their contingent labor programs. This includes product demonstrations, completing bids, and supporting the product team with roadmap initiatives. In addition to this primary role, she is passionate about building eLearning solutions for clients, partners, and internal VectorVMS staff leveraging Learning Technologies Group products. Prior to joining VectorVMS, Irene worked for large international staffing organizations as well as smaller boutique IT recruitment firms. She has a deep understanding of the contingent workforce landscape, which helps her create meaningful solutions for her clients. Connect with her on LinkedIn.