There is a quote that is widely mis-attributed to Albert Einstein.
“Insanity is doing the same thing over and over again and expecting a different result.”
While the origins of the quote are uncertain, the sentiment is clear: if you repeat the same actions, you’ll get the same results. Anyone with a scientific or empirical bent will surely agree with the premise, as will anyone who has played golf.
This comes to mind today on the heels of a story from Reuters that Amazon has scrapped a program intended to speed up the review of applications by applying Artificial Intelligence (AI) tools to the process. The program, it seems, was dutifully duplicating all the innate biases from the past.
I deeply sympathize with Amazon’s plight. When I was VP of Human Resources at Microsoft, the company’s applicant popularity was astounding. We were getting tens of thousands of résumés a month. We had two shifts of people working at scanners entering them into a database. We tried all manner of sorting, filters, and pattern recognition to separate the wheat from the chaff.
It didn’t work. We missed vast numbers of bright people (and wasted time on countless misses) because they chose not to use the current buzzwords. Or because their font choice was misread by the scanning software. Or we just weren’t looking for the right things. We still needed real humans to look them over, even if each review took mere seconds.
Amazon today is, like Microsoft at that time, among the hottest places to work, and I’m sure they are swamped with applications. I fully understand the desire to use the latest technology to try to assist in battling the onslaught. Had we had AI at the time, I would have tried it in a heartbeat.
But just like our filters and scanners, the AI tools have an inherent problem. They are “trained”, and the decisions about the training data set make all the difference in what choices the system makes. Like virtually every computer system, AI systems are subject to the old adage: garbage in, garbage out.
From what we can tell, Amazon used a vast array of résumés and decisions from the past to teach the system to find the needles in their haystack. Here are all the applicants for this job; here’s the person we chose. I sympathize with this approach; they have an enormous data set to work from. The problem, of course, is that it replicated their past behavior precisely.
The system showed inherent biases against women and other groups. It preferred male-dominated language, reflecting precisely what the company had presumably done in hiring over the years. And there’s the problem.
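To see how mechanically this happens, here is a minimal sketch with entirely hypothetical data. A trivial keyword-scoring “model” is trained on past hiring decisions that happened to favor one word; the model then penalizes new candidates who lack that word, even when their skills are identical. None of this is Amazon’s actual system; it just illustrates the feedback loop.

```python
from collections import Counter

# Hypothetical historical data: (resume keywords, was_hired).
# Past hiring here systematically favored resumes mentioning "rugby"
# (a stand-in for male-coded language), regardless of skills.
history = [
    ({"python", "rugby"}, True),
    ({"java", "rugby"}, True),
    ({"python", "netball"}, False),
    ({"java", "netball"}, False),
]

# "Training": count how often each keyword appears among hired
# versus rejected applicants.
hired_counts, rejected_counts = Counter(), Counter()
for keywords, hired in history:
    (hired_counts if hired else rejected_counts).update(keywords)

def score(keywords):
    # Score a new resume by the keyword associations learned above.
    return sum(hired_counts[k] - rejected_counts[k] for k in keywords)

# Two equally skilled candidates who differ only in the biased keyword:
print(score({"python", "rugby"}))    # positive score: favored
print(score({"python", "netball"}))  # negative score: penalized
```

The skill keyword (`python`) contributes nothing either way; the entire decision rides on the incidental word the historical data correlated with hiring. That is the sense in which the system “learns” nothing but the past.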
The better way to train the system would have been to take a pool of applications and manually code them for the desired outcome, controlling carefully against bias. But this would have taken forever, required great manual effort, and resulted in a much smaller training data set. Unfortunately, AI systems are at their best with very large training inputs. So they chose the more tractable path.
To their great credit, Amazon scrapped the system when they realized the issue. They deserve kudos for that decision; other firms might well have pressed on, hoping the results would improve with more data. Continuing to do the same thing over and over, and expecting a different result. Amazon didn’t, and that’s worth recognizing.
At the highest level, this points out the complexity of the diversity issue. Biases, whether overt or subtle, are deeply ingrained in a company’s culture. Correcting them takes explicit, conscious, and proactive behavior changes. Something no automated system is likely to be of much assistance with. I have many more ideas, thoughts, and comments on how to make these changes, but those will have to wait for another day.
In the meantime, let’s celebrate at least one company that is actively recognizing they have a hiring challenge. And is actively searching for ways to not repeat the past.