Working for workplace religious diversity, equity & inclusion


Ethically Designing and Deploying AI-Powered Human Resource Tools, Including Faith Considerations

19 Mar, 2024

Ethically Designing and Deploying AI-Powered Human Resource Tools, Including Faith Considerations: This panel considers the rewards and risks of deploying high-powered algorithms (and assistive technology like ChatGPT) in HR processes such as recruiting and hiring, performance evaluations, and more — and ultimately whether and how companies can gain the benefits of such tools in ways that increase trust and reduce suspicion. This will be valuable information for Faith ERG leaders seeking to gain a general understanding and guard against inadvertent sectarian discrimination, and for companies seeking to build trust with their employee resource groups of all types. We anticipate covering:

  • Current and soon-to-come AI-fueled HR technology and how it works
  • Addressing and reducing challenges such as unfairness, bias, and lack of transparency in the data sets and algorithmic analyses within these tools
  • The benefits of such tools weighed against these challenges
  • How corporate governance bodies and government regulators are studying these issues with an eye toward self-policing or government intervention
  • How Faith ERGs can participate in this conversation, both to protect religious practice and to strengthen corporate ethics by applying their faith values

Panelists we have engaged or are seeking are:

  • Thomas Osborn, COO of Vettd, a Bellevue, Washington-based company that uses deep learning AI models and search engines to enhance candidate intelligence for staffing and recruiting companies. Vettd’s Candidate IQ product integrates with companies’ recruiting and staffing software (applicant tracking systems, or ATS) to enable AI-fueled data enrichment and search/match.
  • Kevin Richards, Vice President, Head of U.S. Government Relations at SAP (spoke last year – Ben is reaching out) to provide a broad perspective of research and public policy considerations related to such technology
  • Andrea Lucas, Commissioner of the EEOC, to discuss the EEOC’s AI and Algorithmic Fairness Initiative (or another representative of the EEOC – Brian and Kent reaching out on our behalf).

A Workshop to Drill Down on the Ethical Use of HR Technology Tools: Using hypotheticals, best practices under development, and other vehicles to work through the considerations discussed by the Panel in real-life contexts. Discussion groups led by panelists and DEI officers attending the conference will expose participants to greater nuance and equip Faith ERG and other corporate leaders to engage in informed discussion of these rapidly expanding technologies.

  • The three panel members
  • DEI leaders attending the conference as discussion leaders

Topics to consider diving deeper into (generated by ChatGPT 😊)

  • Bias: AI systems can perpetuate existing biases in the data used to train them. For example, if historical hiring decisions were biased against certain groups, an AI system trained on that data may also be biased against those groups. This can result in discrimination against certain candidates and perpetuate inequalities in the workplace.
  • Privacy: AI systems may collect and process large amounts of personal data about job applicants, such as their education, work history, and social media profiles. Employers must ensure that this data is collected and used in accordance with applicable privacy laws and regulations, and that candidates are informed about how their data will be used.
  • Transparency: Job candidates have a right to know how AI systems are being used in the hiring process and how decisions are being made. Employers must be transparent about the criteria being used to evaluate candidates, the algorithms being used, and the data sources being used.
  • Accountability: Employers must be accountable for the decisions made by AI systems. They must ensure that the systems are fair and unbiased, and that candidates are evaluated based on relevant criteria. Employers must also have processes in place to address any errors or mistakes made by the AI system.
  • Human oversight: AI systems should be used as a tool to assist human recruiters, not as a replacement for them. Human oversight is necessary to ensure that the AI system is functioning correctly, to monitor for bias and discrimination, and to make final hiring decisions.
  • Fairness: Employers must ensure that the use of AI systems does not result in unfair advantages or disadvantages for certain candidates. For example, if the AI system favors candidates who attended certain universities or have certain types of experience, this may unfairly advantage some candidates over others.
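One concrete check behind the bias and fairness points above is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, the process may warrant review for adverse impact. The sketch below is a minimal, illustrative Python implementation — the group names and counts are hypothetical, and a real audit would involve legal counsel and statistical testing, not this heuristic alone.

```python
def selection_rates(outcomes):
    """Compute the selection rate (selected / total applicants) per group.

    outcomes: dict mapping group name -> (number selected, number of applicants)
    """
    return {group: selected / total for group, (selected, total) in outcomes.items()}


def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` (default 80%)
    of the highest group's rate. Returns dict: group -> True if it passes."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: (rate / highest >= threshold) for group, rate in rates.items()}


# Hypothetical numbers: Group A is selected at 50%, Group B at 30%.
# 0.30 / 0.50 = 0.60, which is below the 0.80 threshold, so Group B is flagged.
outcomes = {"Group A": (50, 100), "Group B": (30, 100)}
result = four_fifths_check(outcomes)
```

Faith ERG leaders could ask whether a vendor's AI tool is audited against checks like this for protected categories, including religion, before and after deployment.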