Institute of Neurosciences, Mental Health and Addiction: Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

  1. Deep disappointment and concern regarding the manner in which the reforms were initially executed, with little apparent recognition of the negative impact on peer review of adopting unproven technology. This included poor procedures for matching reviewer expertise to the scientific focus of a given application, reduced reliance on face-to-face discussion and, in particular, little apparent concern about the potential harm to the viability of Canada's health research community that could result from adopting changes whose validity remained unproven. "Fundamentally flawed" was a recurrent theme from the research community regarding the new review process and approach.
  2. Appreciation of CIHR's rapid response to the health research community's demand that procedures for the direct involvement of experienced researchers be adopted to address the deficiencies that were immediately apparent in the 2016 Project Grants competition.
  3. Some recognition of the merits of the new Foundation Grants scheme, coupled with disappointment that the tangible outcome, for those relatively few researchers fortunate enough to receive funding, was simply to have their current levels of funding sustained for 7 rather than 5 years. As one colleague noted: "Much ado about nothing."

Stakeholder Engagement Approach

The Scientific Director (SD) elected first to contact the 12 members of the dedicated Institute Advisory Board (IAB) that existed prior to its termination in the spring of 2016, with a request to answer the six key questions that constitute the core concerns of this stakeholder engagement exercise. He also distributed the material available on the CIHR website that provides a detailed description of the rationale for the reforms, along with full descriptions of the Foundation and Project programs. NO REPLIES WERE RECEIVED. This reveals the level of burnout and frustration among previously vocal and engaged research leaders.

Participants

Over the past two months, the SD had numerous opportunities to discuss these questions with leaders in the neuroscience and mental health research communities, who were more forthcoming. This report also incorporates the comments provided by the 13 respondents to the web-based survey (2 female, 10 male, 1 preferred not to disclose) who declared INMHA as their primary Institute affiliation. The majority were senior investigators from pillar 1 (biomedical), although a number identified as mid-career, early-career or other. The detailed information provided here is an integration of responses from these two participant sources.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

This question elicited comments that were uniformly negative, whether conveyed orally or in writing. Indeed, several written responses were quite hostile, as indicated by statements such as: "Reforms are a disaster."; "it has been an unmitigated disaster."; "Both the design of the programs and the associated peer review process has been fundamentally flawed."; "Review process fundamentally flawed."; "As best I understood the reforms were initiated to address problems with reviewer fatigue and quality. The reforms have been disastrous in this regard. We have lost high quality peer review (mismatching of reviewers to applications, reviewers with no experience reviewing, non-specialized reviewers), lost face-to-face peer review which is essential for high quality review."

A particularly disturbing theme expressed by several respondents is the assertion that "the reforms have created a two-tiered funding ecosystem where a small fraction of investigators--a majority of whom are senior PIs, many with their best work long behind them--are funded at a very high level for an unprecedented length of time. They are now the only Canadian biomedical scientists resourced to do internationally competitive research, yet they are the least likely to do truly innovative work, which study after study has shown is usually done in earlier career phases."

Nearly every response to Question 1 expressed major concern about the design flaws in the 'online ranking system', which in turn led to even deeper discomfort with a system that dismisses face-to-face peer review, widely seen as essential for high-quality review. A great deal of concern was also expressed about the reliance on a ranking system rather than one based on numerical ratings. This sentiment is captured well in the following quote: "The idea of ranking each aspect of the grant (applicant, environment, approach etc), and then adding these is completely wrong, and proven to not work (see NIH). Instead the grant needs to be given an overall rating, that primarily depends on the quality and feasibility of the research." In a similar vein, another participant noted, "NIH continues with a very effective review process, based on face to face meetings, or at least call-in reviewers. The New CIHR system is very poor because reviewers do not actually participate in the discussion when not face to face. The grants are reviewed and ranked, rather than rated. This is a disaster, since a superb grant could be ranked low, if a reviewer has other good applications. There is no way to compare across all grants."
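The ranking artifact these respondents describe can be made concrete with a small, purely hypothetical sketch. The reviewer piles, grant names and quality scores below are invented for illustration only; this is not CIHR's actual procedure:

```python
# Hypothetical illustration of the ranking artifact described above:
# each reviewer ranks only their own pile of applications, so a grant's
# rank depends on the strength of the pile, not on absolute quality.
piles = {
    "reviewer_A": {"grant_1": 4.9, "grant_2": 4.8, "grant_3": 4.7},  # strong pile
    "reviewer_B": {"grant_4": 3.5, "grant_5": 2.9, "grant_6": 2.1},  # weak pile
}

for reviewer, pile in piles.items():
    ranked = sorted(pile, key=pile.get, reverse=True)
    for rank, grant in enumerate(ranked, start=1):
        print(f"{reviewer}: rank {rank} -> {grant} (quality {pile[grant]})")

# grant_3 (quality 4.7) ranks last in the strong pile, while grant_4
# (quality 3.5) ranks first in the weak pile: ranks alone give no way
# to compare quality across piles, which is exactly the objection raised.
```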

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

Again, there was little support for the claim that the changes in program architecture achieved the desired outcomes with respect to the broad interdisciplinary mandate of CIHR and the evolving nature of health research.

Several participants took exception to the assumption that interdisciplinary research is superior to other organizing principles. Indeed, there is merit in the argument that "Fundamental discovery and novelty almost always arises within disciplines...they are where the deep thinking of science has always occurred. Interdisciplinary research is essential for the cross-pollination of ideas and the opportunistic application of concepts or tools from another field in one's own. But this is nothing new. The idea that scientists need funding agencies faddishly chasing an "interdisciplinary" agenda is absurd. Collaboration is natural to scientists and always has been."

Another unintended consequence of the move away from specialty-based review panels is reflected in the observation that "small branches of research (like motor systems, diet, etc.) are getting completely wiped out with almost no grants funded. We need to protect our specialty research, against big research. This is not a small issue. Many labs are now closing, and Canada may take decades to recover from these changes CIHR has made." The loss of experienced and knowledgeable reviewers - "the critical mass of expertise required to provide informed review" - was also highlighted.

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

None of the participants claimed any familiarity with the challenges in grant adjudication encountered by different international funding agencies, or with the literature on peer review. Comments were made about the superiority of the peer-review process used by the NIH, which places greater reliance on in-depth discussion of applications following a rigorous triage process. Of particular relevance to Question 3 is the observation that CIHR implemented a virtual peer review system that NIH had decided not to adopt, and that other countries which tried such systems found them woefully inadequate.

A very experienced investigator who was successful in the Foundation Grants competition noted, "I routinely review (or chair) for NIH, Wellcome Trust, MRC, DFG and none of them has identified the "challenges" that CIHR has. How can a country with such an insignificant research budget have identified problems that far bigger agencies have not?" Others voiced the same concern that the apparent crisis in the Canadian peer review process that these reforms were intended to address was overblown. It was also noted that "The biggest problems in peer review are equity and bias. As demonstrated by analyses from within and outside CIHR, the reforms have substantially exacerbated these problems relative to peer agencies."

Similar to comments made in response to Question 1, but this time in the context of adjudication challenges identified by international funding agencies, one colleague again made reference to the difficulties of virtual reviews and the superiority of face-to-face discussion within a committee of experts.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

Again we find an overwhelmingly negative reaction to this question. Although some respondents did see the merit of an informed and experienced College of Reviewers, they were quite puzzled as to why this important step is being implemented at a relatively late stage in the reform process, instead of at the very beginning.

On the other hand, the sentiment was expressed that "Populating the College of reviewers is a gong show. I have 24 years of NIH, CIHR, MRC, DFG, Wellcome Trust, MRC (UK and Canada), AIHS, AHFMR, FRQS and God knows how many other agencies worth of reviewing experience, in addition to being a Foundation Grant Holder and a 16 year holder of a CRC and I have not been asked to review in the current round despite the fact that my area is one of the hot international areas of research." Two other individuals, writing in response to this question, noted respectively: "No. You have lost sight of the need to nurture communities of expertise. You have lost sight of the importance of the review as a mentoring tool." and "We know that every idea behind the reforms made peer review worse, necessitating the current flailing around to find a working system."

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

Perhaps not surprisingly, given the overall tone of the responses to this set of six questions, many respondents once again referred to the gold standard of face-to-face peer-review meetings and expressed admiration for the NIH system of peer review. We find comments such as "NIH takes a risk and clearly states what it would take to make the grant fundable. When the application returns and has addressed all concerns, then it's fundable."

The need for face-to-face meetings for adjudicating all grants and fellowships was highlighted, as well as proactive policies to provide equitable funding by career stage and gender: "NIH standards should be followed especially Face to face meetings. Small committees, with no more than 100 grants in each section. So specialized grants can be fairly evaluated."

Anticipating the recommendations of the committee chaired by Paul Kubes, many respondents recommended: expert, competent face-to-face peer review; reviewers who have a history of receiving CIHR or other major funding; and competent chairs who select reviewers, rather than matching algorithms and automatic assignment of referees. It was also seen as crucial that each application be reviewed by someone who understands the proposed research, that applications be evaluated by peers who are expert in the relevant research fields, and that there be scientific discussion of each fundable application by the panel.

I think it is fair to say that the changes currently being implemented under the effective leadership of Jeff Latimer will go a long way to ensuring that these sensible suggestions shape a peer-review process that can regain the confidence of a deeply disaffected scientific community.

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

Challenges abound. Take, for example, the following sentiment: "Your system is too broken, and you are not funding a sufficiently high percentage of grants to allow you to compare grant scores with outputs. For example, the NIH had longitudinal data that allowed them to calculate relative citation ratios for research outputs from a large number of grants across the top 30 percent in scoring. You can't do that with success rates so low--what would you compare to? But we already know the answers from robust studies conducted at NIGMS: Grants ranked at the 5th and 20th percentile have no measurable difference in output. Concentrating funds--as in Foundation--leads to immediately diminishing returns per PI. There is more impact and higher quality output when funds are distributed more evenly."

CIHR should also pay close attention to: 1) the variance of reviewer scores, mindful of the warning that "if the funding rate for Project Scheme turns out to be <10% then really peer-review is a lottery"; and 2) the consistency between initial scores and final scores, which is one way to measure the quality of reviews.
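As a minimal sketch of how these two indicators might be computed from review data, consider the following; the data layout, application identifiers and score values are hypothetical assumptions made for illustration, not any actual CIHR dataset:

```python
# Minimal sketch of the two indicators suggested above. The data
# structures and values are invented for illustration only.
from statistics import variance, correlation  # correlation needs Python 3.10+

# Indicator 1: variance of reviewer scores within each application.
# High within-application variance signals reviewer disagreement, which
# matters most when low funding rates let small differences decide outcomes.
reviewer_scores = {
    "app-001": [4.2, 4.5, 3.9],  # reviewers broadly agree
    "app-002": [2.1, 4.8, 3.0],  # strong disagreement
}
for app_id, scores in reviewer_scores.items():
    print(f"{app_id}: within-application variance = {variance(scores):.2f}")

# Indicator 2: consistency between initial scores and final (post-
# discussion) scores, e.g. their correlation across applications.
initial_scores = [4.1, 3.3, 2.8, 4.6, 3.9]
final_scores = [4.3, 3.0, 2.9, 4.5, 3.6]
print(f"initial/final correlation = {correlation(initial_scores, final_scores):.2f}")
```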

A final word of advice admonishes CIHR to "Listen to the applicants. They are all scientists and deal with scattered data on a day-to-day basis. They are also the reviewers and the applicants. As long as CIHR refuses to listen and tries to push through its own agenda, it will keep failing."
