School of Healthcare Leadership’s Danaher-Garcia on publication team calling for AI to play larger role in helping humans in decision-making

Artificial intelligence is pervasive in society: we use it for navigation, personal assistants, shopping, healthcare, transportation – you name it, AI is seeping into our everyday lives. 

You can now add one more avenue – the research funding world – where AI may stop scholars from gaming the system. 

Currently, most research grants are awarded based on several factors: the number of citations a paper receives, the impact factor of the journal, name recognition, and whether research in a field is already underway. It’s a high bar to get research published, let alone funded, and the subjectivity of it all can be unfair, especially to researchers who haven’t been publishing for years or who aren’t publishing in well-known journals. 

Can Artificial Intelligence identify funding possibilities that will have the most societal impact, while taking human bias out of the decision-making?  

Nicole Danaher-Garcia, PhD, hopes so. She’s an assistant professor of Healthcare Data Analytics in the School of Healthcare Leadership at MGH Institute of Health Professions and is on the research team that has just published a paper on this very topic in the prestigious journal Scientific Reports. Shuhan He, MD, director of the IHP’s Healthcare Data Analytics program, is also an author.

“It’s trying to get away from being quite so subjective and make it more of an even playing field,” said Danaher-Garcia. “Right now, most grants are funded based on citations and impact factor of the journals in which you're publishing. Sometimes people will cite themselves or people in their own lab will cite papers from the lab so that it will look like people are citing those papers more and appear more influential.” 

In the paper, “Impact of medical technologies may be predicted using constructed graph bibliometrics,” the authors describe growing concern about the potential misuse and abuse of bibliometric measures: researchers may engage in self-citation, “which artificially inflates their citation counts, or they may publish multiple papers on the same topic to boost their h-index [influence score].”  

Why Research Is Funded This Way  

Danaher-Garcia says deciding which research deserves funding, and how much, is an extraordinary challenge. 

“How do you objectively assess research?” she asks rhetorically. “How do you compare research in different realms of the same field or in different fields and compare it without having some external measure to tell you how well it's doing in the broader scientific research world?” 

Entities such as the National Institutes of Health and the National Science Foundation have deep pockets with which to fund promising research. However, without a specific expert in each field to review each grant, a few people well-versed in multiple research fields are left to decide who gets funding. And that’s the problem. 

“A lot of times the money will go to people who have already been funded previously or to research that's building on something already out there instead of taking a chance with a known researcher who’s trying something new that could be really eye-opening,” she said. “But because there's no evidence for that particular research track, it's getting skipped because it's not a sure thing. 

“I also think the problem is that people know that decision-makers are doing their best, but they know the way that they're going about it,” she added. “So, scientists will game the system by doing these little tricks and making their own work look better because they're competing with so many other potential projects.” 

That’s where using AI to help decide the funding of future grants comes in. 

Over the course of two years, the team used AI to analyze the metadata of almost 1.5 million publications from 40 high-impact journals published from 1980 to 2020; the subject, journal, authors, and influence of the authors were all factored in through network analysis. 

“Network analysis is a way to get around the issue of independence in data,” said Danaher-Garcia, who noted that this approach can determine how much breadth there is to one’s research. “People will cite themselves, or a lab will cite its own research. And if it's a human being looking, you might just see different first authors and think, ‘OK, those are different projects.’ But with AI, you can more objectively look at where the citations are coming from. Network analysis allows you to consider more sides of the story.” 
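Danaher-Garcia's point can be illustrated with a toy sketch. All the names and citation records below are hypothetical, and this is not the authors' actual pipeline; it only shows how treating citations as a graph of who cites whom makes lab-internal citation patterns visible in a way a raw citation count is not.

```python
from collections import defaultdict

# Hypothetical citation records: (authors of the citing paper, authors of
# the cited paper). Invented for illustration, not data from the study.
citations = [
    ({"lee", "park"}, {"lee"}),         # a lab citing its own earlier work
    ({"lee", "park"}, {"lee", "kim"}),
    ({"garcia"}, {"lee"}),              # an independent citation
    ({"lee"}, {"lee", "park"}),
]

def self_citation_rate(author, records):
    """Fraction of an author's incoming citations that come from papers
    the author (co-)wrote -- a simple graph-based signal that a plain
    citation count would miss."""
    incoming = [citing for citing, cited in records if author in cited]
    if not incoming:
        return 0.0
    self_cites = sum(1 for citing in incoming if author in citing)
    return self_cites / len(incoming)

print(self_citation_rate("lee", citations))     # 3 of 4 incoming citations
print(self_citation_rate("garcia", citations))  # no incoming citations
```

A human skimming this list might see different first authors and assume independent projects; the graph view shows that most of "lee's" citations come from papers "lee" co-wrote.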

The authors found AI could, according to the paper, “demonstrate the impact of the network framework in predicting high-impact publications” and that the “models have a high level of accuracy in distinguishing between high-impact and low-impact publications.” 

As for how this works, the metadata is compiled into a spreadsheet, an algorithm is created, and the research is then ranked based on selected criteria. 

“Our paper showed that there are certain features that matter more,” said Danaher-Garcia. “Presumably, you would want to look at the features that are most important. And then it would rank them based on that. You could say, ‘I have $10,000. I'm going to give the most to the No. 1 ranked research grant application’ and so forth.” 
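The rank-and-allocate idea she describes can be sketched in a few lines. The applications, feature names, weights, and scores below are all invented for illustration (the paper's models learn which features matter from the data rather than fixing weights by hand):

```python
# Hypothetical grant applications, each scored on a few metadata features.
applications = {
    "app_A": {"novelty": 0.9, "network_breadth": 0.7, "venue": 0.4},
    "app_B": {"novelty": 0.3, "network_breadth": 0.9, "venue": 0.8},
    "app_C": {"novelty": 0.6, "network_breadth": 0.2, "venue": 0.5},
}
# Made-up feature weights standing in for learned feature importances.
weights = {"novelty": 0.5, "network_breadth": 0.3, "venue": 0.2}

def score(features):
    return sum(weights[name] * value for name, value in features.items())

# Rank applications by score, best first, then split a fixed budget in
# proportion to score -- "spread the wealth" rather than winner-take-all.
ranked = sorted(applications, key=lambda a: score(applications[a]), reverse=True)
budget = 10_000
total = sum(score(f) for f in applications.values())
allocation = {a: round(budget * score(applications[a]) / total) for a in ranked}

print(ranked)
print(allocation)
```

Giving every dollar to the top-ranked application is the other option she mentions; swapping the proportional split for `allocation = {ranked[0]: budget}` would model that instead.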

The hope is that this approach leads to more innovation and quicker funding decisions, and that it gives more researchers a greater chance because they’ll be getting more attention and more money. 

“Obviously, for people who have been regularly getting lots of money, this might be detrimental to them,” acknowledged Danaher-Garcia. “But the goal isn't just to fully fund something different. It's more to spread the wealth and give a little bit more money to things that are potentially going to have a greater impact in the future. Instead of, ‘Oh, this cancer research is building on previous cancer research and we’re going to keep funding it,’ we'll give some of the money to the cancer project, and then give some money to different research that’s tackling some other side of the story that will get us further than this established research plan.” 

Along with removing the human factor, AI is an obvious time saver. 

“It already takes so long for grant applications to be reviewed and approved,” she said. “Now you can do it even more quickly and, hopefully, more objectively.” 
