Digital dashboard tackles unconscious bias in academic recruitment
10 March 2023
By Linda Willems
New application has indicators that provide a more holistic view of candidates by highlighting attributes like mentoring, social engagement and collaboration
Academic recruitment is fraught with challenges. It can be time-consuming and complex, and with its reliance on publications and personal networks, there’s a high potential for bias. This bias can compromise the integrity of research, which depends on diverse perspectives. And it ultimately impacts researchers and their careers. “If early-career researchers feel they aren’t being recognized, promoted or rewarded, they will go and do something else,” says Prof Margaret Sheil.
As Vice Chancellor of Queensland University of Technology (QUT), Margaret has a first-hand view of the challenges and intricacies of the recruitment and promotion process in academia — and innovative ideas on how to improve it.
The topic came up when Margaret met with Elsevier’s Chief Academic Officer, Dr Nick Fowler, during his visit to Australia in 2020. When they touched on the use of research indicators in recruitment and promotion, Margaret’s gender radar was triggered:
There is this view that using indicators and data to generate candidate shortlists is less biased than traditional methods, such as reviewing CVs. But I think indicators can be just as biased, and can discriminate against women, in particular.
She points to the h-index as one of the biggest culprits:
It relies on length of career and other factors, including patronage from senior colleagues, yet women are more likely than men to take career breaks for family reasons and/or be nominated less frequently for awards and international collaborations.
However, her years in academia and university leadership roles, along with her policy work for the Australian government, have shown Margaret just how valuable indicators can be.
People really like using them — I do too when I want evidence of outcomes or performance in an area or discipline — but I am very careful when they are applied to the evaluation of individuals.
So I asked Nick whether we could use indicators to make the initial selection process for vacancies fairer; for example, look at people’s careers more inclusively and reward other behaviors, such as being a generous collaborator or mentor.
Nick’s answer was a resounding yes:
Working with someone like Margaret, who is so knowledgeable and respected, is a huge honor. And when she described her idea to me, not only did it make so much sense, I also knew our teams had the experience, data access and computing power to make it a reality.
Once home in Amsterdam, Nick connected Margaret with colleagues in Elsevier’s International Center for the Study of Research (ICSR) and its ICSR Lab. Together, they embarked on a project that has the potential to transform academic recruitment. Over the 18 months that followed, Margaret and the ICSR team developed a prototype application with an interactive dashboard that contains an array of indicators. These include familiar choices, such as publication count and h-index, but also new and broader indicators that expand the definition of researcher success.
The application is currently being tested by Margaret and other research leaders in the higher education sector. And while the project team gathers their feedback, work is already underway on the next stage of the collaboration — the development of a graphical CV for researchers that provides an enriched view of their careers and achievements over time.
The recruitment dashboard — creating a more even playing field
How the application works
An institution provides the ICSR with a search query defining the field of expertise required for the vacancy. This is used to generate a dashboard of suitable candidates, which contains more than 30 indicators on five themed tabs:
Career Status
Is Innovative
Leader in the Field
Multi-Dimensional
Social and Education Oriented
Figure 1 shows the Is Innovative tab for candidates. Dynamic text at the top provides a breakdown of inferred gender, including an “unknown” category for researchers whose gender cannot be inferred above the confidence threshold. For Margaret, even this relatively simple data point adds great value:
Knowing the gender breakdown of the candidates on your shortlist can help to raise early red flags. For example, if I’m recruiting for a biology position and I see more men than women, then I know I should check the data because women generally outnumber men in that field.
Each indicator has a histogram showing the distribution of candidates by gender, with a “slider” filter that dynamically changes the data. For example, Margaret can reduce her shortlist by raising the minimum percentage of publications with a funder acknowledgement that candidates must have. And whenever a filter is adjusted, every histogram in the dashboard updates, including the figures showing the gender breakdown of the shortlist. This allows Margaret to see how her weighting affects other indicators, the overall number of researchers on her shortlist, and the ratio of men to women.
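The slider behavior described above can be sketched roughly as follows. The `Candidate` fields, names and numbers here are illustrative stand-ins, not the dashboard’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    inferred_gender: str    # "man", "woman" or "unknown"
    pct_funded_pubs: float  # % of publications with a funder acknowledgement

def apply_filter(candidates, min_pct_funded):
    """Mimic the slider: keep candidates at or above the threshold,
    then recompute the gender breakdown of the remaining shortlist."""
    shortlist = [c for c in candidates if c.pct_funded_pubs >= min_pct_funded]
    breakdown = {}
    for c in shortlist:
        breakdown[c.inferred_gender] = breakdown.get(c.inferred_gender, 0) + 1
    return shortlist, breakdown

# Hypothetical candidate pool; raising the slider to 60% drops candidate B.
pool = [
    Candidate("A", "woman", 80.0),
    Candidate("B", "man", 40.0),
    Candidate("C", "woman", 65.0),
    Candidate("D", "unknown", 90.0),
]
shortlist, breakdown = apply_filter(pool, 60.0)
print(len(shortlist), breakdown)  # 3 {'woman': 2, 'unknown': 1}
```

In the real dashboard, this recomputation would run across all thirty-plus indicators at once, which is what lets a recruiter see the knock-on effect of one filter on every other chart.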
Dr Angela McGuire, who headed up the project for the ICSR, explains:
What I like about the dashboard is that it helps recruiters make evidence-based decisions, and they can see the immediate impact of their choices on the gender balance of the candidate pool. I know that it’s not always possible to get a 50:50 gender split in a shortlist, but improving on the initial gender ratio, and getting more women seen outside of personal networks, is a positive start.
I also like that it highlights the efforts of groups that might otherwise get overlooked, like candidates who are nurturing the next generation and those who are supporting inclusivity and diversity by expanding their collaboration networks. This dashboard makes it possible for these efforts to be recognized and rewarded.
A Results table at the bottom of the dashboard enables institutions to select and download indicators for their chosen shortlist of candidates. The candidates are not individually identifiable by gender.
Generating new insights with new indicators
For Margaret, while all tabs and indicators have their value, there are a few she turns to regularly. For example, histograms on the Social and Education Oriented tab show the number of candidates’ advisees and their percentage breakdown by gender.
“These are some of my favorites,” she says. “I want to recruit people for QUT who will mentor the next generation of researchers. The advisee indicator shows how nurturing the candidates on the shortlist are overall, as well as how nurturing they are towards men and women, in particular.”
She adds: “While these indicators still need refining, people are already getting excited about them as they credit something that is rarely recognized.”
The co-author information on the Multi-Dimensional tab is also proving popular at QUT (see figure 2).
“The indicators show that women tend to have a higher percentage of women co-authors than men do,” Margaret explains. “That’s something we wouldn’t see if we were looking at CVs alone.”
When considering men for a role, Margaret looks to see whether they have a healthy number of women co-authors relative to the field. “That tells me they have a diverse lab and are thinking more inclusively and broadly,” she says. “And if I see a high-profile woman who isn’t publishing with other women, that suggests they aren’t necessarily good at supporting women colleagues.”
For Margaret, the ability to work in a team and collaborate at all levels is crucial for potential QUT employees. “I was recently selecting a new head of school, and one of the school’s professors pointed out that while a man candidate had a really good research record, the data on the dashboard showed that he rarely published with more than one person. He said, ‘That’s not what we are about here. We are collaborative and want to work across disciplines.’ He had a point!”
Margaret also finds the 5-year field-weighted citation impact (FWCI) on the Leader in the Field tab useful. The FWCI normalizes citation impact by field: a value of 1 indicates the average global impact for that research area, a value above 1 indicates more citations than expected for the field, and a value below 1 indicates fewer.
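The normalization behind the FWCI can be illustrated with a minimal sketch; the function and the citation counts below are hypothetical examples of the ratio, not Elsevier’s production calculation (which also conditions on publication type and year):

```python
def fwci(citations: int, expected_citations_in_field: float) -> float:
    """Field-Weighted Citation Impact, sketched as a simple ratio:
    actual citations divided by the average citations expected for
    comparable publications in the same field. 1.0 means the paper
    sits exactly at the field's world average."""
    return citations / expected_citations_in_field

# A paper with 30 citations in a field averaging 12 citations per paper:
print(fwci(30, 12.0))  # 2.5 -> well above the field average
# The same 30 citations in a fast-citing field averaging 40:
print(fwci(30, 40.0))  # 0.75 -> below the field average
```

This is exactly why the metric helped in Margaret’s prize-committee example: a raw count that looks merely typical for a high-publication field can still correspond to an FWCI well above 1 once the field baseline is divided out.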
Margaret reveals: “I was on the selection committee for a prestigious national prize with a shortlist of very high-performing researchers. We evaluated them in a variety of ways, but there was one woman candidate with a high h-index, and the committee members thought it was down to her field, where everyone has high publication rates. Using the FWCI, I was able to show that, even within that field, she was a very strong performer.”
Addressing a pressing industry need
For Margaret, initiatives like the dashboard are much needed given the ongoing rise in indicator types and uptake. “We don’t want people relying on indicators that entrench explicit bias at the same time as we are trying to change behaviors.” Importantly, the dashboard can also result in a more cost-efficient and effective recruitment process for institutions.
And to those who might have concerns over the potential for positive discrimination, she says: “This initiative is just about broadening your pool beyond the networks of your team or external recruiters. We are not diluting excellence — we always want to appoint quality people. The dashboard just helps us make good choices about how we recruit them.”
From Margaret’s perspective, these choices not only create a fairer recruitment process, but they also help to build a better future:
There are so many examples of where diversity has led to better outcomes; for example, in tackling implicit biases in research questions. A broader network brings the best minds to a problem. And universities need to reflect the societies they serve — that’s something we haven’t done historically, at least in recent times.
For researchers, benefits include a more transparent and equitable recruitment process. Margaret notes: “I’ve found that people don’t mind entering a competition if they feel it’s a fair or level playing field. And this dashboard will help us to retain talent. If early career researchers feel they aren’t being recognized, promoted or rewarded, they will go and do something else. So, for me, the dashboard is not just about recruitment, it also has the potential to help us promote and support the right internal people.”
The graphical CV — telling the story of the researcher
The second strand of the collaboration between the ICSR and QUT is the creation of a graphical CV for researchers. Although the project is still in its early stages, figure 3 shows a conceptual view.
According to Angela, the goal is to create a CV for each candidate that displays selected indicators over time, helping recruiters assess their career trajectory. Although the CV will be automatically generated using data from a variety of sources, researchers can annotate the document, for example, to provide more detail on career breaks or achievements.
For Margaret, this kind of in-depth, more holistic view of the researcher is invaluable from a recruitment perspective: “I’m looking for people who are good at what they do. So if there’s a career break, this CV helps me understand what they did before that break, and whether it was any good.” She adds:
Whatever a candidate’s background, I want people who can climb a mountain and reach the peak.
“The aggregated indicators you see in traditional CVs often average things so that those peaks are excluded,” she explains. “For example, I know people who have had a good h-index for a very long time, just based on one or two blockbuster PhD papers. Seeing the trajectory of a career like this is much more helpful. There is so much richness in there.”
Next steps
The project team plans to continue developing the CV concept with input from Margaret and her colleagues at QUT. It will also continue gathering feedback from the academic community to optimize the dashboard’s performance, and usefulness, to QUT and other universities and research institutions worldwide.
According to Angela, she and the ICSR team are also exploring additional ways the dashboard can be generated. It currently relies on the institution supplying a customized search query, but in the future, it might be possible to create a dashboard based on a SciVal Topic, or even select a researcher who fits a vacancy’s requirements and then ask the dashboard to find lookalikes.
Angela adds: “We have so many ideas, and many of them come from Margaret. We’ve really appreciated her honest feedback — every time she suggests a pivot in direction, the dashboard improves.”
Margaret is also eager to continue the collaboration:
I could have found a data scientist to do this work for me, but the ICSR is at the forefront of this field, and far ahead of where any academic would be — it understands what’s possible in terms of the data and is familiar with institutions’ needs and best practice. Having this kind of think tank available is a great opportunity.
For Margaret, the collaboration has been a rare opportunity to return to her academic roots:
This has really felt like a proper research project. By working with the ICSR team, which is at the cutting edge of this kind of work, I’m learning so much.
About the International Center for the Study of Research
The International Center for the Study of Research (ICSR) seeks to advance research evaluation in all fields of knowledge production. It delivers on this mission through its research projects and reports, and its ICSR Lab, a cloud-based computational platform that enables researchers to analyze big datasets, including those that power Elsevier solutions such as Scopus, SciVal and PlumX. For this project, the ICSR team drew on multiple data sources, including:
Scopus author profiles for publication and citation data
Inferred gender*
PlumX Indicators for the reach and impact of online publications
SciVal Topics for a fine-grained view of research focus
Policy data from Overton
Information on patents from LexisNexis PatentSight
Publicly available data on researcher prizes and awards
*We used the gender inference approach previously deployed in Elsevier’s 2020 report on gender in research. With this approach, inferred gender is reported on a binary scale (man/woman), plus an “unknown” category; we note that this is a limitation. Gender inference is based on first and last name and country of origin (defined as the country in which the author published the most papers in their first year). A gender is assigned only when the confidence score is ≥ 0.85; otherwise, the gender is classified as “unknown”.
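The thresholding rule in this footnote can be sketched as follows; the scoring function and its inputs are stand-in assumptions, not the actual inference model:

```python
def classify_gender(score_man: float, score_woman: float,
                    threshold: float = 0.85) -> str:
    """Apply the footnote's rule: assign 'man' or 'woman' only when the
    model's confidence score meets the 0.85 threshold; otherwise fall
    back to 'unknown'. Scores are assumed to sum to 1.0."""
    if score_man >= threshold:
        return "man"
    if score_woman >= threshold:
        return "woman"
    return "unknown"

print(classify_gender(0.91, 0.09))  # man
print(classify_gender(0.60, 0.40))  # unknown: neither score reaches 0.85
```

The deliberately high threshold trades coverage for precision: ambiguous names land in the “unknown” bucket rather than being guessed, which is why that category appears explicitly in the dashboard’s gender breakdowns.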