Institute of Infection and Immunity: Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

While the focus group discussions were geared towards the recent changes in peer review, there was nonetheless considerable discussion of reform design. The catharsis is not yet over, and additional time will be necessary to regain the trust of the community. The reforms were introduced to reduce the time invested in writing grants, reduce reviewers' burden, and provide an equal chance to all, including new investigators and interdisciplinary researchers. Alas, we are far from these laudable goals. CIHR is not unique, as other funding bodies face similar challenges: ensuring consistent reviews, recruiting qualified reviewers, limited funds, and difficulties in evaluating multidisciplinary research.

The members consulted indicated that the elimination of face-to-face evaluation was a mistake. The decision to bring it back was the right one, and it should remain. The members acknowledged CIHR's responsiveness but believed that CIHR should have listened long before. Participants believe that experienced researchers should conduct peer review, while new investigators should sit as observers and be mentored by more seasoned investigators. Several agencies have tried virtual review and have moved back to face-to-face meetings (e.g., the MRC). It was raised that face-to-face review guarantees higher standards from the participating reviewers. The reappointment of Scientific Officers was viewed very positively, as was the increased involvement of researchers in the adjudication process.

Basic fundamental researchers (in biomedical sciences, social sciences, and humanities) felt disadvantaged compared to other fields, as their research may take more time to achieve translation and impact. There should be less emphasis on these aspects when evaluating basic research. Some participants believed it was important for Canada and CIHR to reaffirm that they value knowledge generation and want to build on Canadian innovation to address some of the most pressing health needs.

Stakeholder Engagement Approach

The Institute of Infection and Immunity chose to consult its community using a focus group approach. Given the time allowed to consult and write the report, we felt this was the most efficient way to obtain sound opinions from our researchers regarding the new investigator-initiated programs as well as the new peer review process at CIHR. Our focus group consultations were held over WebEx with both video and audio functionalities. We held three distinct WebEx consultations on October 21st (3:30 to 5:30 pm), October 27th (2:00 to 4:00 pm), and October 28th (3:00 to 5:00 pm). We also included comments expressed by the community at large gathered from the CIHR online survey.


Each focus group consisted of 4 participants. We had three focus groups, for a total of 12 participants. All participants were active researchers with research projects in line with the Institute's mandate. We aimed to obtain geographical, sex, expertise, research theme and career stage representation, and to include both successful and unsuccessful applicants in the recent Foundation and Project Grant programs. However, due to the time constraint and the availability of potential participants, it was not always possible to reach this goal. Many of the focus group members served as reviewers in the Foundation or Project Grant programs. The demographics of the 12 participants are as follows: 4 women and 8 men; 6 from the biomedical research theme, 3 clinician scientists, and 3 from the population/social sciences health research theme; 4 with research activities in the immunology area, 5 in the infection area, and 3 with activities in both; 8 participants were based in Ontario and Quebec, and 2 each from Western and Eastern Canada. This report also includes the views of 6 anonymous researchers who completed the online survey, 3 of them senior researchers and 3 mid-career investigators, all from the biomedical research theme.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

The reforms were introduced to reduce the time invested in writing grants, reduce reviewers' burden, and provide an equal chance to all, including new investigators and interdisciplinary researchers. While it was thought that it might be too early to answer this question, the focus group members indicated that researchers are writing more grants than ever to CIHR. With the current uncertainty, some researchers wrote both a Project and a Foundation Grant to mitigate the risk. The lower success rate also increased the writing of additional grants in a general context of higher-than-ever quality applications. There was also a consensus that peer review burden has not decreased as a result of the reforms. Metrics to evaluate interdisciplinary research do not seem readily available, and it is difficult to distinguish genuine interdisciplinary research from mere claims of it. We also need to take into consideration the fact that three variables were changed at once: removal of the face-to-face panels, introduction of structured review, and introduction of the two kinds of grants, Project and Foundation. This may have affected the overall success of reform implementation.

The virtual peer review was almost unanimously decried, and at many levels. Some of the most frequently raised issues include: imperfect machine-led adjudication; variation in the quality of reviews; and lack of engagement of some reviewers during the online chat exercise. The changes implemented following the July 13th meeting, including the re-introduction of face-to-face review, the adjudication processes, and the possibility for rebuttal, are most welcome. However, some issues were raised. It is not clear how these changes will address multidisciplinary research; it will still be difficult to have all the content expertise in a face-to-face panel, unless the panel consults with external experts through conference call. The extent of triage (60%) before the face-to-face meeting is also a concern. While focus group members recognize the value of streamlining, a mechanism for salvaging worthy applications should be in place.

The rationale for all the changes was questioned. According to the focus group members, the system was not that broken and, while changes were necessary, these were too drastic. To date, the reforms of the investigator-initiated programs have unacceptably reduced the quality of reviews. It was argued that the changes were introduced to favour "translational" research over basic discovery. This was highlighted as a risk to Canada's standing in the world, one that could lead to a brain drain of mid-career talent and create a generation gap, with fewer early-career researchers and fewer of the highly qualified personnel and trainees necessary for leading the innovation agenda in Canada. The current criteria for the Foundation Grant assign 25% to the leadership category. This selection criterion may not select for better scientists but rather for better businesspersons. The knowledge translation (KT) aspect of the review process is not easily applied to basic, fundamental research and, as such, should not penalize those researchers.

With current success rates and the allocation between the Foundation and Project Grant programs, there are not sufficient health researchers funded in Canada. The Project Grant program is considered ill-suited to fundamental basic research. The Foundation Grant review criteria favour senior researchers and put mid-career researchers at a disadvantage. This program is also seen as a way to cap the amount of funding a researcher can receive from CIHR open programs. There is a question about Foundation Grant renewals and the risk associated with consolidating all projects under one program. The baseline calculation for the Foundation Grant may disadvantage researchers holding a single grant (or two small ones) from CIHR, and uneven payments across the years may be problematic for grantees, as this challenges the maintenance of their research programs. It was also mentioned that the Foundation program perpetuates an oligarchy of the old boys' club.

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

There was a general sentiment that, despite improvements after the July 13th meeting, there is still a long way to go in supporting and reviewing research that spans the spectrum from the atomic structure of drug targets to social determinants of health and implementation science. The main issues raised by the focus group members related to interdisciplinary research; the place and review of basic research; equity in the funding of basic social science and humanities research; the difficulties in getting the right and best reviewers; and finally gender inequity and the funding of investigators at all career stages. Indeed, the results of the recent competitions suggest that the reforms disadvantaged junior, mid-career and female applicants.

Interdisciplinary research is a challenge for peer review, and it is not clear that the current architecture will improve this. After SSHRC abandoned health research in the late 2000s, CIHR neglected social science and humanities, in particular basic, fundamental social science research. Social scientists have not yet found a niche at CIHR. There was concern that basic, stand-alone social science of health research without biomedical or clinical co-investigators does not get funded. The Common CV excludes publications more than 5 years old, even significant ones that may still be in heavy circulation. To satisfy the requirements for interdisciplinary research, biomedical and clinical applicants will include a social scientist or ethicist in the application, but this is often tokenism, and social science and humanities scholars are then left out going forward. A thorough peer review process should be able to determine whether such interdisciplinary collaboration is genuine. Interdisciplinary grants need panels with enough breadth of expertise to get fair and consistent reviews. It comes down to the breadth and depth of expertise of the reviewers and how they value interdisciplinary research. Face-to-face reviews with sufficient time for discussion among the reviewers may facilitate the funding of interdisciplinary research.

Biomedical and social science basic researchers indicated that there was too much emphasis on interdisciplinary research and on direct impact on human health. In the last Project competition, the review criteria for the Project Grant were heavily skewed towards benefit to human health, a goal that is not always within immediate reach for a biomedical or a basic social science or humanities project. Many basic research projects are neither multidisciplinary nor interdisciplinary. It takes years for a basic science project to achieve translation, and it has been argued that basic fundamental projects did not fare well during the competition because of their less advanced translational plans.

The focus group members generally felt that the current reforms, including the changes taking place after July 13th, did not reduce the workload of reviewers. While there is great hope that the College of Reviewers will lead to improvements, this was not a unanimous view. During the last round of competitions, there were complaints about the quality of reviews, including inconsistency and even the absence of any comments. It was mentioned that recruitment of quality reviewers for the last competitions was difficult. CIHR should consider recruiting early-career investigators within a process that ensures they receive sufficient mentoring to contribute to the reviewing process. Recruiting international reviewers might be challenging because they have no clear incentives. Financial incentives for reviewing, as offered by several other funding bodies, should be considered by CIHR, as well as other forms of recognition, such as academic recognition. This will require further discussions between CIHR and host institutions.

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

Challenges for quality peer review are not unique to CIHR. A consensus view of the focus group members was that face-to-face review is the gold standard, and that the experience of other agencies in moving to virtual peer review was not successful. Face-to-face panels/committees allow more thorough discussions and allow the mentoring of the new generation of peer reviewers. It was mentioned that participating in a grant panel is a privilege and can be an important learning and networking experience.

The initial peer-expertise matching algorithm, based mostly on keywords, was in several instances inappropriate for matching reviewers to applications. The menu of keywords is geared to biomedical and clinical science, neglecting social science and humanities terms. Close collaborators, people in the same departments, or non-experts were often asked to review specific grants. Of course, potential reviewers can choose not to review a grant because of conflicts of interest or lack of expertise, but this should be achieved more efficiently upstream through a more comprehensive survey of the reviewer's expertise. The changes implemented post July 13th are in the right direction; to be effective, the College of Reviewers should have an excellent reviewer-matching process. The involvement of expert researchers (Chair/Scientific Officer) is important, and they should validate any machine-based matching algorithm before applications are sent to prospective reviewers. It would be essential for CIHR to continuously assess and improve the algorithm's ability to match reviewer expertise with the applications. Canada has a small research community, and it is often difficult to find experts, so we sometimes have to rely on generalists. This was the case before as well. When expertise is variable, it often leads to divergence in comments and inconsistent reviews and rankings; this is why face-to-face meetings are essential. This situation forces applicants to adapt their grants to a broader audience so that they can be understood and reviewed by a diverse group of reviewers.

Despite the small size of the Canadian researcher community, focus group members nonetheless mentioned that individuals applying in the same competition who are asked to review would be in conflict of interest. The number of grants submitted is high, which requires many reviewers; CIHR will need to monitor whether capping the number of grants that one can submit per competition reduces the need for reviewers. A suggestion was to increase the number of international experts as full panel members, but this is challenging, as indicated at the end of Question 2, above. As a first step, we could reach out to international researchers who are co-PIs on CIHR grants, or to Canadian citizens working abroad. In addition to interdisciplinary research, another area that merits further analysis is the review of grants involving big data. Large data sets, -omics data and bioinformatics tools are now important components of an increasing number of applications. Often the experts do not hold an academic appointment and are embedded in large teams, and the expertise to appropriately evaluate these grants is lacking.

It was recognized that the quality of applications is continuously increasing, which makes the ranking of applications increasingly difficult. This creates pressure on the reviewers, and those challenges remain with the new system. All applications ranked in the middle and above should be discussed at a face-to-face panel review meeting. Ranking on excellence is best practice, and in an era of limited funding resources, the highest standards should be used. So not only is adjudication important, but the quality of the reviewers should also be continuously monitored. We all know how critical winning or losing a grant is for a researcher's career, and it was suggested that CIHR perform an analysis to ensure that specific types of grants/projects are not systematically ranked lower. Moreover, CIHR could put more emphasis on verifying overlap between a proposal and an applicant's active grants.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

Upon its creation, the College of Reviewers was presented to the research community as the body that will ensure that the peer review system supports the selection of the best proposals, while continuing to be fair, well managed and transparent. However, there seems to be confusion about the actual status of the College. Although the College Chairs have been selected and appointed, the College is not yet operational (though some respondents thought that they were already part of the College because they were recently asked to review). Since it has not officially started, it is impossible to measure its impact on the peer review system. The recruitment process for reviewers is unclear, and some of the focus group members were concerned about it. On a more positive note, the fact that all reviewers will be trained and their performance evaluated was well received, although details are still unknown (who will be responsible? what topics will be covered? who will do the training? in what format?). It was indicated that both the Chair and Scientific Officer (SO) will have an important role in evaluating the reviewers. This was already happening informally in the previous system, and formalizing the process is seen positively.

It was mentioned that CIHR should find a way to shift researchers' attitude towards taking part in peer review from a burden to an honour. Being a member of the College should be prestigious, and it should also be recognized by the academic home institution as an important contribution to academic life. Further discussions between funding bodies and representatives of research-intensive universities should address this. The NIH has managed to instill a sense of pride in being one of its reviewers.

Face-to-face meetings are considered by many to be an essential part of the peer review process, and their return following the July 13th meeting was a good decision. In-person meetings allow peer pressure and greater accountability, both of which support higher-quality reviews. Face-to-face meetings also allow networking, a collateral advantage of participation in committees.

CIHR needs to increase the base of reviewers who have content expertise but who also have experience with peer review. An exchange program among several agencies at the international level could be beneficial for CIHR and the reviewers. As a means of integration, CIHR could offer early-career researchers the possibility of being observers on peer review panels. They would learn how peer review is conducted and appreciate the efforts involved and the skills necessary to provide and defend a review. One advantage for new investigators would be to see how more seasoned investigators write their grants (grantsmanship). CIHR should take a proactive role in increasing the base of reviewers; using a personal approach (e.g., a telephone call) can have a great impact on recruiting members. This is an approach that CIHR is using and for which it is congratulated.

The role of the panel chair and SO is also seen as a critical component of ensuring high-quality peer review. Chairs act to moderate discussions and make sure the opinion of each reviewer is heard and respected. The SO would then convey the main messages of the discussions to the applicants in a more comprehensive and coherent form. Both the chair and SO should also assign grants based on expertise to optimize the capacity of each reviewer, and assign the right set of reviewers to each grant application. Finally, the chair and SO can identify underperforming reviewers.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

There are several funding agencies, within and outside Canada, with best practices in peer review on which CIHR can build. The peer review process would benefit from a comparison with other Canadian funding programs/agencies such as the Banting Postdoctoral Fellowships or the National Cancer Institute of Canada. For example, the Banting Program proceeds with a first stage of screening and places all applications in rank order, with the top 20% of applications going through. Applications with large discrepancies in scores are discussed among the panel members, emphasizing the necessity for face-to-face discussion during peer review. The National Cancer Institute of Canada, with its limited number of review panels, has put in place a system allowing exchange of information between panels in order to maximize the number and quality of applications supported. The NIH offers different tools to its panel members to facilitate reviews: 1) online chats; 2) phone calls with external reviewers if the expertise is not found on the panel; 3) efficient triage of the less competitive applications; 4) pre-review phone calls with program staff to orient reviewers; and 5) a structured review form.

The evaluation criteria for peer review are also targets for improvement. The structured review form does not allow for an adequate review of the proposal. More latitude should be given to risky projects, as they often prove to be game changers, and importance should be given to the track record and impact of the research without giving too much weight to non-research activities. The Foundation Grant program could benefit from benchmarking against the Howard Hughes Medical Institute (HHMI).

CIHR has a catalyst role in enabling research. Whatever the process, it should provide extensive comments to applicants, funded or not, in a fully transparent way. This approach is helpful for those who re-apply, and for new investigators who are appreciative of the guidance. Through the peer review process, CIHR should provide adequate information that could promote improvements in subsequent research and grant applications, or indicate clearly that the research proposal is not sound. The reintroduction of the Scientific Officer position should help in providing a high-level and coherent summary of the discussions that took place within the committee, providing context for the reviewers' comments.

It is believed that CIHR is struggling to recruit experienced researchers for peer review committees, where they play a critical role, in part as mentors. CIHR could look to the European Research Council and the NIH, which have succeeded in doing so. Although incentives may need to be offered, it is the credibility and prestige of the agency itself, as well as the quality of the review process, that seem to attract and retain experienced reviewers. This also has a positive impact on applicants, who value the expertise and trust the system.

The focus group members acknowledged that CIHR reintroduced a rebuttal following a community request, but it is not clear that this is the best tool/strategy for unsuccessful applications. Indeed, the procedure for the re-review process, or more precisely the second time an unsuccessful application is reviewed, is still not addressed adequately by CIHR. In many cases, revised applications seem not to be considered. This is due in large part to the re-review being done by a different panel than the original one, often with a different angle, leading to rejection of the application. Also, the time for re-submission is considered too long. Members of the focus groups found this counter-productive. A speedy process offering a short time period for rebuttal already exists internationally (Australia) and was suggested as an option to implement for applications ranked in the grey zone. This may help in supporting additional worthy applications that require only minimal additional information, and should not add to the evaluation timeline. Online forms or videoconferences could be used for the rebuttal phase in order to decrease costs and the number of resubmissions.

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

A recurrent theme that emerged from answers to this question was customer satisfaction and trust in the CIHR process and system. Achieving this requires better direct communication with researchers throughout the application process, along with credible post-decision surveys and follow-up. The outcome may be that the grant is not awarded, but did the applicants feel that their application was treated fairly, that they received the information and feedback they needed from CIHR throughout the application process, and do they trust the system? The satisfaction measured among non-funded applicants is clearly the most valuable indicator of peer review acceptability. It is believed that increasing levels of trust will increase the community's engagement in participating in the review process.

Quantitative metrics have been used to measure research quality and impact, especially in biomedical and clinical research, and this should continue. However, the focus group members highlighted many nuances that should be brought to those metrics in order to adjust the vision we have of Canadian research, as well as qualitative analyses that should be included. While impact factors and citation indices may be appropriate in some fields of research, equal valuing of books, edited volumes, colloquia, dissemination through newer forms of media, public and community engagement, changes in policy, and other types of impact should be explored. The impact of research differs across fields. Applied and translational research can have impact through the development of new policies, guidelines or clinical practices. The breadth of the CIHR mandate makes comparative impact measurement complex, as impact is highly variable across the different research fields and themes. There were several suggestions about generating new indexes for evaluating outputs, but there was no consensus on this, in part because of the diversity of research stakeholders.

Other aspects that need to be taken into account include the amount of money received by Canadian researchers in comparison with other jurisdictions. It is believed that Canadians are highly productive per dollar invested. Research output may also depend on other factors, including demographics, career stage and gender considerations, which should be taken into account in the development of output quality indicators in order to have a fair assessment.

Some members of the focus groups acknowledged that an independent international evaluation of the peer review system was a good idea. The evaluation framework developed in the Final Report 2012 (Evaluation of the Open Operating Grant Program) identified relevant indicators that should be considered (knowledge creation, program design and delivery, knowledge translation, and capacity development). However, some indicators may be ill-suited to basic science. In particular, the evaluation of knowledge translation should emphasize the quality of the review process and funding the best science, rather than short-term gains.

The reform itself should also be the object of an evaluation, and a measurement system allowing pre- and post-reform assessment should be developed. The impact of the new funding programs needs to be measured, and the programs adjusted accordingly. Both the Foundation and Project schemes should be evaluated, along with whether grant consolidation indeed leads to more impactful research. CIHR should collect data allowing it to determine the impact of the new funding schemes on a number of different indicators. The number of early-career, mid-career, and female investigators funded, the particular disciplines funded and those not, the amount of time spent writing grants, the amount of time spent doing grant peer review, the number of CIHR-funded research laboratories, the number of funded trainees, and the number of jobs created or lost following the reforms are all indicators that could be used to compare the pre- and post-reform periods.
