Institute of Cancer Research: Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

The respondents representing ICR in the evaluation shared a number of overlapping themes across the six questions posed. Overall, there was limited support for the reforms made by CIHR, particularly those made to the peer review process. The rationale, planning, and implementation of the reforms were poorly understood and poorly received by respondents, and there is a preference for greater involvement and transparency going forward.

Key themes that emerged from a review of the stakeholder responses are as follows:

  1. The face-to-face peer review process was unanimously supported, as it allows for greater accountability and higher quality reviews.
  2. The current Foundation Scheme and the reduced allocation of funding for applications under the Project Scheme penalize early- and mid-career scientists.
  3. The emphasis on translation and clinical implementation has placed those in basic science and discovery research at a disadvantage when applying for funding.
  4. CIHR has a responsibility to scientists to remain transparent about the rationale for changes and to obtain their input in decision-making processes.

Stakeholder Engagement Approach

Input from stakeholders was obtained through the CIHR PREP submission form. Stakeholders were notified of and directed to the form through email messages sent directly to them requesting feedback. Background information on the evaluation was also provided to stakeholders, along with links to the reforms. Complete responses were also sent individually to the SD by stakeholders and by a group of investigators from a single institution. Feedback on the six questions was also emailed directly to the SD.


Approximately 80 new investigators and 25 former IAB members and key contacts were sent a personal email requesting their participation in the PREP. Fourteen on-line PREP submission forms were completed, and others provided their responses directly to the SD. Responses were provided by individuals at all career stages. All respondents from the PREP web form identified as Biomedical (Pillar 1). Not all participants indicated their gender; however, the majority of respondents who self-identified were male (10 males and 3 females). All respondents had applied to either the Foundation or Project funding stream prior to completing the questions. Of the 11 who reported on their success in the 2014/15 and 2015/16 Foundation and 2015 Project grant competitions, 3 reported that they had been successful. Only one respondent indicated that they had been a reviewer for the previously mentioned grants.

Individuals who corresponded directly with the SD did not provide the demographic details collected in the PREP on-line submission form.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

Overall, respondents did not view the reforms as effectively addressing the original objectives. They also struggled to understand the rationale for the reforms, particularly because information on the CIHR website was scarce or difficult to find. Furthermore, they were not supportive of the reforms or how they were implemented. Responses to this question emphasized the potential disadvantages to investigators, problems with the peer review process, a lack of transparency in the reform process, and implementation challenges.

Foundation and Project Schemes

Some respondents commented that new and mid-career investigators were disadvantaged by the large concentration of funding set aside for the Foundation Scheme, which favours senior investigators. It was felt that this division of resources reduces diversity, limits the growth of newer labs, and makes it increasingly difficult to maintain research programs. It is important to mention that other respondents were not in favour of opening up the Foundation Scheme to early- and mid-career investigators, as this deviates from the original intention, which was to support senior investigators with established and highly productive research programs (sparing them from continually applying for grants). Furthermore, opening the funding to all career stages means including those without a fully developed research program, encouraging them to put all their eggs in one basket and thereby limiting their ability to branch out and explore other avenues of research, and may reduce their likelihood of being renewed when compared against more senior investigators.

Peer Review

There was also consistent discontent with the review process. Respondents found the virtual review to be ineffective for a multitude of reasons, including the following: career stage and diversity were not taken into account; eligible reviewer pools decreased; reviewers lacked proper expertise for the grants they were assigned; the accountability and integrity of the review process were reduced; and there was no improvement in reviewer or applicant fatigue. It was also pointed out that the new review process favoured men and senior investigators. The algorithm used for selecting reviewers was not well received. Additional comments on the review process are provided under the questions that follow in this report. A comment was also made that the rank order approach does not work well without adequate statistical power. Additional comments were made on basic research not being well supported by CIHR.

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

Project and Foundation Schemes

The majority of respondents did not find the changes helpful in addressing the above challenges. It was commented that fewer PIs are funded now than prior to 2014. In addition, the changes were disruptive across all pillars. Furthermore, the move to Foundation Grants was perceived as taking money out of the system. A recommendation was made to include a finite number of Foundation scholars and sufficient funds to renew all of these individuals, contingent on their productivity. In addition, more funding could then be allocated to the Project Scheme (this would be beneficial to mid-career and newer PIs).

The Project Scheme is viewed positively overall as a method of funding research. There was also approval of some of the related logistics. For example, there was support for the limit of two grant applications per cycle, the 10-page limit, and changes to simplify the budget. However, concerns were raised about the limited number of characters allowed for the budget justification and for describing the qualifications and expertise of personnel.

Peer Review

The changes to the review process were also not well received, given the loss of discussion that would normally occur in face-to-face panels and biased outcomes due to a lack of expertise in the fields under review. Interdisciplinary applications were seen as facing a serious disadvantage. Previously, panels could discuss the merits of different components to arrive at a more balanced consensus. The reviews were seen as superficial due to the lack of expert reviewers and the instructions to focus on justifying rankings without providing useful feedback. In addition, it was indicated that scores are often widely divergent, with limited effort put into explaining these variances. It was perceived that one reviewer could be enough to move an application into the unsuccessful pile. The absence of face-to-face review affected both reviewer professionalism and accountability. The use of a computer algorithm to select reviewers was also not supported by the respondents, as the matching of expertise to grants was poor.

The review process raised particular concerns for those conducting basic biomedical and discovery research. It was commented that those conducting this type of research are disadvantaged and losing support due to the lack of a clear clinical path and immediate translation for the proposed project. However, respondents see the acquisition of new knowledge as important for maintaining a critical knowledge base.

SPOR was criticized and considered to be more useful for health policy with funding coming from sources outside of research council grant budgets.

In addition, a general comment was made that too many changes occurred at once (i.e., funding mechanisms, application processes, peer review). An incremental approach to change would have been preferred.

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

It was pointed out that the justifications for the reforms were not transparent and did not seem to be supported by independent or international data. Other concerns were voiced about variability in reviews and reduced predictability, lowered success rates, the placing of research into niches, increased applications with competition gaps, risk aversion and conservatism among peer reviewers, and the anonymity and lack of personal accountability of reviewers.

It was noted that other agencies have taken a more incremental approach to improvements (i.e., online scoring changes followed by face-to-face meetings). Respondents also emphasized that in-person, face-to-face reviews are considered the best way to review across other international funding bodies, as this increases accountability for the reviewers. Other effective practices mentioned included a study section whose members are experts in the field and serve set terms, and a strong panel chair to ensure impartiality and review integrity.

Additional recommendations based on other agencies included increasing the number of reviewers and ensuring their expertise and accountability (e.g., a panel-based system). An example was given from Australia, where applicants are given a chance to respond to reviews before the final score is given.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

There was general disagreement with the mechanisms set up by CIHR to ensure peer review quality. It was mentioned that a letter was signed by more than a thousand Canadian health researchers to share their discontent with the quality of the peer review process since the reforms. Room for improvement was seen in numerous areas. The relevance review was viewed as needing to be a core aspect of the peer review process in general, and not just for FOs and PAs. There is interest in putting mechanisms in place to ensure the quality of reviewers (e.g., pairing new reviewers with experienced reviewers). It was also suggested that only experts take part in the preliminary triage, to prevent quality grants from being thrown out at this initial phase. Another point brought up was the need for accountability mechanisms to ensure that reviewers complete reviews on time, that reviews are of good quality, that discrepancies between reviewers of the same application are addressed, and that a consensus process is in place. The discrepancy issue was viewed as particularly problematic with the virtual review process. It was especially worrisome to an investigator who noted that a strong predictor of research success is high scores across panel members, which is less common given the discrepancies seen.

According to respondents, the application processes felt very disorganized, with frequent changes and the use of processes intended to streamline the review (e.g., computer matching) that proved ineffective. An additional issue explored was the challenging format of the new Project Scheme application. This was considered one possible reason the reviews were not conducted effectively.

The NSERC model was suggested for dealing with large application pressure. A college of reviewers was also supported as a way to provide stability.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

The NIH was listed most frequently as the agency with a peer review process to emulate. An emphasis was placed on returning to face-to-face reviews. This is considered the gold standard for a review process, and respondents noted that it is a key component of what leads to high quality reviews at other agencies. Face-to-face review was seen as leading to a higher level of professionalism, and reviewers are typically better prepared knowing their scores may require defense. In addition to returning to face-to-face reviews, an emphasis was placed on ensuring the expertise of the reviewers and of those taking part in any aspect of the review process within CIHR. A comment was also made on involving scientists in the selection of reviewers and setting up separate assignment committees composed of more than the Chair and Scientific Officer. When an in-person meeting is not possible, respondents indicated that a web-based panel meeting or a mix of online and in-person review could be acceptable.

Prior to adopting the processes of another agency, CIHR was cautioned to acknowledge the research community within Canada and tailor changes to the nuances of this community. Part of a community-driven approach could include selecting research chairs with at least 5 years' experience. The decision to fund a researcher should broaden beyond publications and impact factors to consider networks built, commercialization, and the training of highly qualified personnel. The research chair position was also viewed as an opportunity to remove unnecessary focus on a single project and to require input from a diverse pool of expert reviewers.

The idea of a college of reviewers was mentioned and supported. It was believed that this structure could provide stability and continuity to the review process. Membership could also be evaluated periodically, taking into account reviewer commitment and feedback from grant applicants. The panel chairs were also envisioned as working closely with the reviewers.

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

Respondents provided a diverse set of indicators to consider for the evaluation of the peer review system:

  • Equity
    • Career stage
    • Gender
  • Length, depth, and breadth of reviews
  • Time required per review
  • Success rates of "riskier" funded projects
  • Feedback from reviewers on their experience
    • Were the right grants funded?
  • Look at the variance in reviews (compare excellent versus fair reviews)
    • Low standard deviation between reviewers after a panel discussion
  • Ask reviewers for an "enthusiasm" score along with overall application score
  • Scientific merit of proposal
  • Research productivity of applicant (remove subjective assessment of future impact)
  • Applicant feedback on reviews received
  • Timeliness of release of adjudication performance data
  • High impact publications/quality of publications
  • Publication output/dollars spent
  • Breadth of subject matter funded
  • Timelines for peer review
  • Review completion rates
  • Success of resubmitted (previously unsuccessful) applications

It was suggested that CIHR could introduce a few modified peer review processes and compare these approaches over a three- to five-year period. Respondents also considered mechanisms to remove poor quality reviewers from future grant panels. An independent audit conducted by representatives of international funding agencies (e.g., NIH, MRC, DFG, ANRS) was also recommended. It is believed that this approach would assist with transparency in assessing CIHR. The lack of transparency and openness to scrutiny was a concern for many of the respondents. Going forward, respondents want to see scientists involved more significantly in CIHR reforms to help rebuild confidence.
