Responsible AI Documentation for a Smart Interview Platform
How a Responsible AI framework helped an AI-based smart interview platform gain trust in a highly regulated industry
In a recent interview with IEEE Spectrum, Joy Buolamwini, founder of the Algorithmic Justice League and author of Unmasking AI (her work was the subject of the documentary Coded Bias, available on Netflix), was asked what concerned engineers at big tech companies can do about algorithmic bias and other AI ethics issues.
Buolamwini answered: "I cannot stress enough the importance of documentation. In conducting algorithmic audits and approaching well-known tech companies with the results, one issue that came up time and time again was the lack of internal awareness about the limitations of the AI systems that were being deployed. [Documentation] that provides an opportunity to see the data used to train AI models and the performance of those AI models in various contexts is an important starting point. Then the question becomes: Is the company willing to release a system with the limitations documented or are they willing to go back and make improvements?"
A smart interview platform worked with DeepDive Labs to create a Responsible AI framework document to enable regulatory compliance in the pre-sales and client onboarding process, one that showcased the reliability of the product to clients both quantitatively and qualitatively and supported broader business development.
The Responsible AI framework describes the processes in the intelligent interview platform and evaluates them against the Fairness, Ethics, Accountability, and Transparency (FEAT) principles laid down by the Monetary Authority of Singapore (MAS), Singapore's financial statutory body. MAS is Singapore's central bank and integrated financial regulator; it administers the various statutes pertaining to money, banking, insurance, securities, and the financial sector in general, issues currency, and manages the foreign-exchange reserves.
This particular Responsible AI framework was created in collaboration with the Responsible AI team of a large insurance company that wanted to onboard our client, the smart interview platform, for hiring insurance agents. The smart interview platform gives its clients recommendations on various personality and skill parameters; the actual hiring decisions remain the clients' responsibility, including ensuring that their hiring process is fair and unbiased. The smart interview platform is a tool for hiring agents efficiently and easily, and the framework evaluates the tool on being unbiased and meeting the highest ethical standards.
With the influx of AI-based interview tools, operational efficiency in organizations has gone up. However, large enterprises are still quite skeptical about using these tools autonomously, as they are well aware of their limitations. This is the case for most businesses in highly regulated industries such as healthcare and finance.
From the perspective of the large insurance organization, the important yardsticks for evaluating the intelligent interview platform were:
Does the product follow responsible AI practices?
What is the distribution of the data used to train the models?
Could a candidate's ethnicity affect their interview scores?
How were the scores evaluated?
Are all the processes of the interview platform fair, ethical, and transparent?
The smart interview platform's client runs an elaborate regulatory-compliance vetting process before onboarding any SaaS provider, especially young new-age AI companies whose products rely on AI-based scores and data. As a step towards enabling better regulatory compliance, DeepDive Labs created this Responsible AI framework document for the smart interview platform, and it became a significant part of the proposal to the client. In addition, it gives the pre-sales team structure and direction to generate more business leads and build trust.
The various pillars of the evaluation:
Transparency: The framework reveals the details of the data the smart interview platform collects from the resume and the video, the data it uses, how that data is processed, and how the final score is arrived at. It also publishes the factors that affect the scores and details on interpreting the scores generated by the system.
Accountability: The framework evaluation discloses that the system reveals a candidate's score not only to the hiring managers but also to the candidate. The system is constantly learning, and false positives can be fixed based on input from the different stakeholders.
Ethics and Inclusivity: The framework describes the capabilities and limitations of the system in recognizing and scoring differently abled applicants. Considering the strengths and limitations of the system, it defines the capabilities and responsibilities of the platform versus those of the hiring managers in ensuring a seamless process.
Fairness: Evaluation of the platform's use of sensitive information, such as age, gender, race, and ethnicity, in calculating applicants' scores. This part also presents details on the distribution of the base data used to train the system across various demographics and countries.
Furthermore, the framework evaluates the variance of the system's scores across geographical regions and gender, presenting these in tabulated form for the resume scores (across skills, education, and experience), the work-map scores (skills, attention to detail, communication, creative thinking, interpersonal skills, social desirability, presentation, time management, teamwork, service orientation, analytical thinking, and working under pressure), and the video scores (professionalism, sociability, communication, positive attitude, etc.).
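The kind of variance analysis described above can be sketched as follows. This is a minimal illustration, not the platform's actual methodology: the records, group labels, and the simple parity ratio are all hypothetical, standing in for the framework's tabulated comparisons of score distributions across regions and gender.

```python
from statistics import mean, pstdev
from collections import defaultdict

# Hypothetical candidate records as (group, resume_score) pairs.
# In a real audit, groups would be demographic attributes (region,
# gender) and scores would come from the platform's scoring pipeline.
records = [
    ("region_a", 72), ("region_a", 68), ("region_a", 75),
    ("region_b", 70), ("region_b", 66), ("region_b", 74),
]

# Group scores by demographic attribute.
by_group = defaultdict(list)
for group, score in records:
    by_group[group].append(score)

# Tabulate mean and standard deviation per group, then compare the
# lowest group mean to the highest. A ratio near 1.0 suggests similar
# average outcomes across groups; large gaps warrant investigation.
stats = {g: (mean(s), pstdev(s)) for g, s in by_group.items()}
means = [m for m, _ in stats.values()]
parity_ratio = min(means) / max(means)

for group, (m, sd) in sorted(stats.items()):
    print(f"{group}: mean={m:.2f} sd={sd:.2f}")
print(f"parity ratio: {parity_ratio:.3f}")
```

A production audit would add significance testing and per-parameter breakdowns (one table per resume, work-map, and video sub-score), but the core idea is the same: group, summarize, and compare.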
Consequently, this evaluation framework, grounded in Responsible AI and designed to empower employees and businesses and to impact customers and society fairly, gives the smart interview platform a systematic evaluation of its product.
In brief, the impacts of this Responsible AI framework for the client are:
This documentation enabled the pre-sales team to generate 30% more leads from large enterprise organizations, especially in highly regulated industries such as finance and healthcare.
It enabled the client to reorganize their AI algorithm development with better documentation and to focus on creating monitoring dashboards. The smart interview platform now gives clients special access to its products for transparency.
As AI algorithms are data-dependent, they may not handle every real-world edge case. Hence, bringing a Responsible AI framework early into the development process is the way forward to build trust in AI SaaS platforms and to establish their reliability for large enterprises.
This evaluation, with responsible AI practices in purview, adds great value to the pre-sales team. The team can pitch to prospective customers with assurance, backing, and data in a more informed manner. Further, it enables the smart interview platform team to respond to questions posed by their client effectively and accurately, in line with the client's values, mission, and vision.