Evaluation of the Open Operating Grant Program – Final Report 2012

Acknowledgements

Special thanks to all the participants in this evaluation – survey respondents, interview and case study participants, current and former members of the Subcommittee on Performance Measurement, CIHR and Institute management, OOGP management, Finance and ITAMS Branch management, and members of the Evaluation Working Group. Additional thanks to the National Research Council of Canada for the cover photo.

The OOGP Evaluation Study Team:

David Peckham, MSc; Kwadwo (Nana) Bosompra, PhD; Christopher Manuel, M.Ed.

Canadian Institutes of Health Research
160 Elgin Street, 9th Floor
Address Locator 4809A
Ottawa, Ontario K1A 0W9
Canada
www.cihr-irsc.gc.ca


Executive Summary

This evaluation of the Open Operating Grant Program (OOGP) takes place at a time when CIHR is proposing changes to its open suite of programs and enhancements to the peer review system. The evaluation therefore focuses on both the program performance of the existing OOGP and findings that can feed into the process of reforming CIHR's open programs.

The OOGP as it is currently designed has met its key program objectives. Findings from this evaluation demonstrate how the program has contributed to the creation and dissemination of health-related knowledge and supported high quality research.

The health research context in which the OOGP operates has however evolved since the program's inception, leading to questions about how well the current design funds excellence across the breadth of CIHR's mandate. Evidence from this evaluation shows that there are opportunities to enhance both program design and cost-effective delivery. Enacting these changes should ensure that CIHR's open suite of programs is well-equipped to meet current and future needs.

Key Findings

Recommendations

Evidence from the evaluation strongly confirms that broad open funding is a valid and rigorous way of supporting research and that the OOGP engenders research excellence; the program should therefore be continued. The following recommendations are made to further enhance program design and cost-effective delivery:

  1. Ensure that future open program designs utilize peer reviewer and applicant time as efficiently as possible; for example, in the design of the peer review system and the amount of application information required to be submitted by applicants.
  2. Ensure that future open program designs account for the varying application, peer review and renewal behaviours of different Pillars.
  3. Conduct further analyses to understand fully the potential impacts of changes to the peer review system. Studies of peer review models using experimental designs would provide a strong evidence base.
  4. Create measures of success for future open programs, ensuring that these are defined to be relevant for CIHR's different health research communities.

Management Response

Recommendation | Response (Agree or Disagree) | Management Action Plan | Responsibility | Timeline
1. Ensure that future open program designs utilize peer reviewer and applicant time as efficiently as possible; for example, in the design of the peer review system and the amount of application information required to be submitted by applicants. | Agree | Agreed and in progress. The current exercise to reform the open programs involves completely reviewing application information requirements on the basis of needs for peer review or analytical information, with the intention of streamlining the application requirements and aligning information to the applicable criteria of a new structured peer review process. The objective is to decrease the peer review time per application. This measure of improvement in the use of peer review time (per application) will be captured in the performance metrics, as suggested in Recommendation #4 below. | Jane Aubin | Initial redesign of peer review and application processes will be complete by the end of fiscal year 2012-2013, followed by testing and implementation by winter 2013.
2. Ensure that future open program designs account for the varying application, peer review and renewal behaviours of different communities. | Agree | Agreed and in progress. One of the objectives of the open reforms is to capture excellence across different communities. Data on how excellence is assessed by different communities have been gathered and are being built into the structured review process. The open reforms also aim to improve the accessibility, from a technical and content perspective, of future funding opportunities to all areas and modes of health research. | Jane Aubin | Initial redesign of peer review and application processes will be complete by the end of fiscal year 2012-2013, followed by testing and implementation by winter 2013.
3. Conduct further analyses to understand fully the potential impacts of changes to the peer review system. Studies of peer review models using experimental designs would provide a strong evidence base. | Agree | Agreed and in progress. A Research Plan is linked to the Transition and Implementation Plan of the open reforms and includes a number of retrospective, short-term and long-term studies focusing on different aspects of peer review. While it is not certain whether comprehensive experimental designs can be used to study aspects of peer review without jeopardizing the integrity of a competition, Management will make every effort, working with the Evaluation group, to ensure that studies have valid outcomes. Management's intention is to keep changes to the new peer review system in the open suite to a minimum once it is developed; however, ongoing research on peer review quality will be conducted and reported through the performance metrics, as suggested in Recommendation #4 below. | Jane Aubin | Metrics and the Research Plan will be established by the end of fiscal year 2012-2013. The implementation of the Research Plan will be ongoing.
4. Create measures of success for future open programs, ensuring these are defined to be relevant for CIHR's different health research communities. | Agree | Agreed. The development of performance metrics and a system of collection and analysis is underway as part of the Research Plan mentioned above. | Jane Aubin | Metrics and the Research Plan will be established by the end of fiscal year 2012-2013. The implementation of the Research Plan will be ongoing.

Evaluation Purpose, Key Findings and Conclusions

Evaluation Purpose

This evaluation is designed to assess the extent to which the Open Operating Grant Program has achieved its expected outcomes in relation to its main objectives: the creation, dissemination and use of health-related knowledge, and the development and maintenance of health research capacity in all areas of health research in Canada.

The evaluation is also designed to meet CIHR's requirements to the Treasury Board Secretariat (TBS) under the 2009 Policy on Evaluation and Directive on the Evaluation Function.Footnote 1 It therefore covers specific core evaluation issues of program relevance and performance as described in the TBS policy suite.Footnote 2

In line with TBS policy and recognized best practice in evaluation, a range of methods - involving both quantitative and qualitative evidence - were used to triangulate evaluation findings.

Key Findings

Knowledge Creation

Program Design and Delivery

Knowledge Translation

Capacity Development

Program Relevance

Evidence from the evaluation speaks to the continued need for the OOGP, the program's alignment with the priorities of the federal government and of CIHR, and its consistency with federal roles and responsibilities.

Conclusions

Knowledge Creation

Evaluation questions

Introduction

The creation of knowledge is central to the program theory of the Open Operating Grant Program. The program allows researchers to apply with their 'best ideas' from across health research, which, if funded, may result in a wide and diverse range of research outcomes, from publications to patents.

There is, of course, no single 'right way' of measuring knowledge creation in relation to research funding programs. Bibliometric analysis is one frequently used approach; academic papers published in widely circulated journals facilitate access to the latest scientific discoveries and advances and are seen as some of the most tangible outcomes of academic research (Goudin, 2005; Larivière et al., 2006; Moed, 2005; NSERC, 2007). Bibliometric analysis of these publications is used to measure, among other things, the volume of a researcher's publications and the relative frequency with which they are cited as a proxy for an article's scientific impact. In this evaluation, the Average of Relative Citations (ARC) is used as a measure of 'scientific impact.'Footnote 3
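For reference, the calculation behind the ARC can be written out formally. The expression below uses our own notation and assumes the standard field- and publication-year-normalized definition used by the Observatoire des sciences et des technologies (OST); the precise operational definition used in this evaluation is given in Footnote 3.

    \mathrm{ARC} = \frac{1}{N}\sum_{i=1}^{N}\frac{c_i}{\bar{c}_{f(i),\,y(i)}}

Here c_i is the number of citations received by paper i, and \bar{c}_{f(i),y(i)} is the average number of citations received by all papers published in the same subfield f(i) and year y(i). An ARC greater than 1 therefore indicates citation impact above the world average for comparable papers.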

Critics of bibliometric analysis contend that estimates of publication quality based on citations can be misleading and that citation practices differ across disciplines and sometimes between sub-fields in the same discipline (Ismail et al., 2009). This is a particularly salient issue for CIHR and the OOGP, with a mandate to fund across all areas of health research, including research disciplines where outputs such as books or book chapters may be a more useful and accurate measure of knowledge creation. In light of this, measures of other outputs are also used in this evaluation to assess knowledge creation as a result of the program. A case study approach is also taken to assess highly impactful research conducted as a result of OOGP funding.

It should be noted that the bibliometric analyses in this report are based on data for publications produced by OOGP researchers while supported by these grants. While this method is commonly accepted, based on the assumption that these grants make a significant contribution to research output (e.g. Campbell et al., 2010), specific publications cannot be attributed outright to specific grants. With further development of CIHR's Research Reporting System, where researchers list publications produced as a result of the grant that can then be linked directly to bibliometric data, this type of analysis should become available for future evaluations.

Have publications by OOGP-funded researchers had a greater scientific impact than those of health researchers in Canada and other OECD countries?

As shown in Figure 1-1, publications produced by OOGP-funded researchers while supported by an OOGP grant have a consistently higher scientific impact (based on ARC) than the average for Canadian health researchers. The analysis also shows that for the period 2001-2009, OOGP-supported papers were cited more often than health research papers from other comparable Organization for Economic Co-operation and Development (OECD) countries (Figure 1-1).

Figure 1-1: Impact of supported papers produced by OOGP-funded researchers vs. OECD health research comparators (2001-2009)


Source: Bibliometric data drawn from Canadian Bibliometric Database built by OST using Thomson Reuters' Web of Science (OOGP sample n=1,500)

It should be noted that the overall average of relative citations for Canada comprises all Canadian health researchers, including those funded by the OOGP. The OECD comparators are based on all health researchers within each country, rather than on individual funding agencies or programs. Given the differing mandates for health research funding in agencies such as the National Institutes of Health in the United States or the Medical Research Councils of the United Kingdom or Australia, direct comparisons between agencies could prove problematic. However, one potential area for future evaluations to address would be to assess the feasibility of deriving agency or even program benchmarks based on matching a sub-set of data that is directly comparable (e.g. in biomedical research).

Has the scientific impact of OOGP-funded publications increased, decreased or remained the same since 2005?

As shown in Figure 1-2, the scientific impact of supported papers produced by OOGP-funded researchers increased significantly between the periods 2001-2005 and 2006-2009 (ARC of 1.44 for 2001-2005 vs. 1.54 for 2006-2009; p<0.001)Footnote 4.

One potential factor in this increase is an increasingly competitive environment for OOGP funding. Success rates, based on the number of applicants funded compared with the number of applications, have decreased by 12 percentage points, from 34% in 2000-2001 to 22% in 2009-2010. CIHR's investment in the program more than doubled over this period ($201.2m in 2000-2001 to $419.1m in 2010-2011), but the OOGP attracted an increasing number of applications (under 2,500 in 2000-2001 to over 4,500 in 2010-2011).

Figure 1-2: Scientific impact of OOGP-supported research papers (ARC)


Source: Bibliometric data drawn from Canadian Bibliometric Database built by OST using Thomson Reuters' Web of Science (OOGP sample n=1,500)

Feedback from a recent petition initiated by Canadian health researchers concerned about declining success ratesFootnote 5 identifies a range of undesirable consequences of higher application pressure from a researcher perspective. These include the loss of highly qualified personnel due to inconsistent funding, a danger to the research "pipeline" producing the next generation of health researchers, the loss of international competitiveness, difficulty in conducting peer review effectively, and more time spent preparing unsuccessful applications.

Has the production of OOGP research outputs per grant increased, decreased or remained the same since 2005?

The number of journal publications produced as a result of an OOGP grant provides a further measure of knowledge creation. There are of course significant limitations as to how these data can be used and interpreted; simply producing a peer-reviewed publication gives no indication of its quality. However, when considered alongside bibliometric analyses, this measure provides useful basic data on the outputs that result from investment in the program, as well as some insight into the publishing behaviours of the different parts of CIHR's health research communities in the OOGP.

As displayed in Table 1-1, available data from CIHR's Research Reporting System (RRS)Footnote 6 shows that OOGP-funded researchers published an average of 7.6 papers per grant. The data also suggests that the overall production of OOGP-funded knowledge outputs, as measured by journal articles, has increased since 2004 (p<0.05). It should however be mentioned that this observed increase may be attributable to an overall increase in journal productivity observed globally (Archambault, 2010). Data on Canadian publication trends suggests that the total number of papers published by Canadian researchers has steadily climbed from approximately 27,000 in 2000 to approximately 37,500 papers in 2008 (Archambault, 2010).

Table 1-1: Average number of journal articles published per grant
| Mean | N | Standard Deviation | Sum
Pre-2004 Grants | 7.2 | 553 | 8.7 | 3,965
Post-2004 Grants | 8.9 | 153 | 8.8 | 1,364
All Grants* | 7.6 | 706 | 8.8 | 5,329

Source: Research Reporting System, 2008 Pilot (N=565); Current Research Reporting System 2011-2012 (N=141)

Footnote *

Half-year grants were excluded from the analysis.


Further analysis shows that journal article production is moderately correlated with both the value and the duration of the grants awarded (r=0.42, n=706, p<0.001 for both independent variables). Additionally, the value and duration of grants are strongly correlated with each other: longer grants tend to have higher total value (r=0.71, n=706, p<0.001). It therefore appears that the duration of a grant has an important relationship with the number of publications produced. However, grant duration is not consistent across the four pillars. Biomedical researchers have the longest grant durations on average (3.4 years) compared with the other three pillars (3.0, 2.3 and 2.8 years for Pillars II, III and IV respectively); these differences are statistically significant (p<0.001).Footnote 7

The importance of a grant's duration and the differences in duration by pillar may affect the overall reported productivity for each pillar; those with longer grant durations conduct research over a longer period of time, which can lead to more findings to publish. Biomedical and clinical researchers have longer grants than researchers in the other pillars, and also report a higher number of publications. This difference in publication output has typically been attributed to differences in publishing behaviour between those in the biomedical community and those in the social sciences. The average number of publications by pillar, as reported in the RRS, is: Pillar I – 8.07; Pillar II – 6.86; Pillar III – 2.93; and Pillar IV – 6.57, with an overall average of 7.55 (p<0.01).

To account for the differences in grant duration between pillars, the number of journal articles was normalized by grant duration.Footnote 8 The normalized value was obtained by dividing the number of journal articles reported by the duration of the grant. The results suggest that publication productivity is very similar for most researchers when duration of grant is controlled for. The differences between pillars approached significance (p=0.06), likely due to the effect of the different publication behaviour of Pillar III researchers (Figure 1-3).Footnote 9
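The normalization itself is a simple per-year adjustment. The sketch below illustrates the calculation; it is not the evaluation team's actual script, and the column names (articles, duration_years, pillar) are hypothetical.

    import pandas as pd

    # Hypothetical grant-level records drawn from the Research Reporting System
    grants = pd.DataFrame({
        "pillar": ["I", "I", "II", "III", "IV"],
        "articles": [9, 7, 6, 3, 7],            # journal articles reported per grant
        "duration_years": [5.0, 3.0, 3.0, 2.0, 3.0],
    })

    # Normalize output by grant duration: articles per year of funding
    grants["articles_per_year"] = grants["articles"] / grants["duration_years"]

    # Compare pillars on the duration-adjusted measure rather than raw counts
    print(grants.groupby("pillar")["articles_per_year"].mean())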

Figure 1-3: Journal article productivity per year of grant duration by pillar of respondent


Source: Research Reporting System, Pilot (N=596) and Current (N=141)

The significance of this analysis from an OOGP evaluation perspective is that it shows how assessing productivity by simply counting publications can be misleading. Future performance measurement of the program should take account of these and other potential confounds when reporting on this measure of knowledge creation.

Publication peaks three years after competition year

Accurately measuring OOGP research outputs through data collection tools like the Research Reporting System (RRS) relies on understanding the publication behaviour of researchers. Data have been collected on OOGP grants whose authority-to-use-funds expiry dates fell between January 1, 2000 and July 31, 2008. This allows for analyses of a longer duration between competition start year (CSY) and the publishing year of subsequent publications linked to these grants.

Figure 1-4 shows the publication behaviour of OOGP-funded researchers following their competition year. The data are analyzed by length of grant (3 or 5 years) to assess potential differences in publication behaviour based on duration of funding. For both grant durations, approximately 95% of related journal publications had been published within eight years of the competition start year (CSY+8). The peak publication period for both grant durations occurred in CSY+3.

Further analyses of these data show that the average time to publish the first journal article from the start of a grant was 2.18 years, with significant variation across pillars. Pillar I researchers published their first paper, on average, 2 years after the start of their grant, followed by Pillar IV researchers after 2.6 years, Pillar II researchers after 3 years, and Pillar III researchers after 3.2 years (p<0.001).

Figure 1-4: Publication behavior by grant duration - When do supported researchers publish?


Source: Research Reporting System, Pilot and Current (N=492).

Again, the significance of this analysis is that performance measurement for this and other similar programs should be sensitive to such differences in publishing behaviour. Collecting data on publications at a single point in time after the end of a grant may result in undercounting among certain communities.

Books, book chapters and reports resulting from OOGP grants

The use of bibliometric data to measure knowledge creation may disadvantage researchers from areas that do not traditionally use journal articles as their primary dissemination medium. For these researchers, books, book chapters or reports may constitute more significant evidence of creating and disseminating knowledge. It is therefore important to measure these other knowledge outputs.

As can be seen in Figure 1-5, data for these outputs reveal two trends. First, it appears that Pillar III researchers produce fewer books/book chapters than their peers in the other three pillars. Conversely, it seems that Pillar III researchers produce, on average, more reports than their peers in Pillars I and II. These observed differences in books/book chapters and report production behavior are not however statistically significant, due to small sample sizes of researchers reporting having produced any of these outputs.

Figure 1-5: Books/book chapters and reports published as a result of OOGP grants


Source: Research Reporting System, Pilot (N=596) and Current (N=141)

Program Design and Delivery

Evaluation questions

Introduction

This evaluation takes place at a time when CIHR is considering wide-ranging options for the redesign of its open programs, including the OOGP. At the time this evaluation took place, the agency had entered into a series of consultations with the health research community to obtain their feedback on the future direction of CIHR's investigator-driven programs.

This evaluation of the Open Operating Grant Program provides evidence to feed into the agency's program redesign process. By analyzing existing data from the OOGP, it provides a further body of evidence for the decisions to be made on program redesign, as well as benchmarks against which the success of these future changes can be measured.

This section of the report relating to program design and delivery can therefore be divided into two parts:

Is the OOGP peer review process able to identify and select future scientific excellence?

A key element of the program design of the Open Operating Grant Program is that the peer review process should be able to select the applications with the strongest research ideas. One indication that this process is working as intended is whether selected applications, particularly those ranked highly by peer review committees, lead to publications with higher scientific impact (measured using the Average of Relative Citations) than unselected applications. This approach to assessing peer review has been taken by other funders, including a study for the Alberta Ingenuity Fund (Alberta Ingenuity Fund, 2008).

The scientific impact of publications by successful OOGP applicants was well above that of unsuccessful applicants and of applicants who have never been funded by the OOGP (Figure 2-1). For the period 2000-2009, successful applicants had an ARC of 1.54 compared with the Canadian average of 1.24. Supporting the hypothesis that a highly competitive OOGP attracts excellence, unsuccessful applicants also had an ARC score well above the Canadian health research average (1.45), and even applicants who have never been successful in obtaining OOGP funding showed above-average scores (1.36).

Figure 2-1: Impact of applicants for two years following competition by application status and Canadian papers in health fields by publication year (2000-2009) (ARCs)


Source: Bibliometric data drawn from Canadian Bibliometric Database built by OST using Thomson Reuters' Web of Science (OOGP sample n=1,500)

OOGP funding decisions are determined by percentile rankings of applications, based on an algorithm involving averages across committee ratings and the number of applications reviewed by that committee. If the peer review process works well, it would be expected that higher-ranked applications should subsequently result in stronger scientific impact scores than lower-ranked applications. Bibliometric analysis (Figure 2-2) showed that papers produced by researchers who were always ranked in the top 10 percentile (top ranked) of their peer review committee when applying to the OOGP had a stronger scientific impact (ARC of 1.91) than those who were sometimes top ranked (ARC of 1.64) or never top ranked (ARC of 1.38).
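CIHR's exact ranking algorithm is not reproduced in this report; the sketch below is a simplified, hypothetical illustration of the general form of the calculation, converting each committee's average ratings into within-committee percentile ranks.

    from statistics import mean

    def percentile_ranks(applications):
        """applications: list of (application_id, [committee member ratings]).
        Returns each application's percentile rank within its own committee,
        where a higher percentile indicates a better-ranked application."""
        averages = {app_id: mean(ratings) for app_id, ratings in applications}
        ordered = sorted(averages, key=averages.get)   # lowest average rating first
        n = len(ordered)
        return {app_id: 100.0 * (i + 1) / n for i, app_id in enumerate(ordered)}

    committee = [("A", [4.5, 4.3, 4.6]), ("B", [3.9, 4.0, 3.8]), ("C", [4.1, 4.4, 4.2])]
    print(percentile_ranks(committee))   # application "A" ranks highest in this committee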

Figure 2-2: Average Relative Citations of supported papers of funded researchers by peer review committee percentile ranking and publication year (2001-2009) (ARCs)


Source: Bibliometric data drawn from Canadian Bibliometric Database built by OST using Thomson Reuters' Web of Science (OOGP sample n=1,500)

The evidence from this analysis therefore provides support to the hypothesis that OOGP peer review committees are selecting the 'best research ideas' as measured by resulting outcomes - subsequent publications and their impact.

How satisfied are OOGP applicants with the delivery of the application, peer review and post-award processes?

Levels of satisfaction with the application and peer review processes provide a researcher perspective on the efficacy of program delivery by CIHR. These findings can be used to identify areas of future improvement for delivering the OOGP. Footnote 10

Table 2-1 shows data on elements relating to the application and peer review processes: 1) researcher satisfaction with each element of the process; 2) the extent to which researchers identify each aspect as important; 3) whether respondents see each element as an area for improvement; and 4) whether they feel that the delivery of this element has got better or worse over the last five years.

As might be expected, levels of satisfaction are higher with the more straightforward 'transactional' aspects of program delivery, such as submission of applications, the application instructions or timeliness of posting results. By contrast, the more complex processes involved in peer review have lower satisfaction ratings, and are generally viewed as more important.

Considering these findings in the context of other benchmarks, the OOGP's scores compare favorably with those reported in the recent summative evaluation of the Standard Research Grants (SRG) program of the Social Sciences and Humanities Research Council (SSHRC, 2010): 67% of OOGP applicants were satisfied with the clarity of application instructions vs. 61% for the SRG, and 72% with the ease of submission of applications vs. 52% for the SRG.Footnote 11 The data also showed that successful applicants tended to be more satisfied than unsuccessful applicants, a finding mirrored in the SRG evaluation and in client satisfaction surveys more generally.

Table 2-1: OOGP Applicant satisfaction with the application and peer review processes
Stage | Element | % Very or Somewhat Satisfied | Most important aspect | Area for improvement | % worse in last five years
Application Process | ResearchNet's capabilities for supporting CIHR's application process | 72.8% | 7.1% | 1.2% | 4.1%
| ResearchNet's ease of submission of application | 72.7% | 21.6% | 4.9% | 13.0%
| Completeness of the application instructions | 69.6% | 7.0% | 1.3% | 7.0%
| Reasonableness of the information that you are required to provide | 68.1% | 14.5% | 4.3% | 11.7%
| Clarity of the application instructions | 67.7% | 16.3% | 3.9% | 8.3%
| ResearchNet's effort required to complete application | 64.2% | 17.5% | 5.7% | 10.4%
| Timeliness of posting results | 61.6% | 8.5% | 4.1% | 8.2%
| Fairness of policies relating to applications to CIHR | 57.2% | 28.1% | 10.5% | 16.7%
| Time available to submit an application following the launch of a funding opportunity | 55.6% | 13.8% | 3.7% | 9.5%
| Usefulness of written feedback from the peer review process | 48.9% | 41% | 12.7% | 22.1%
Peer Review Process | Clarity of the rating system | 43.3% | 11.1% | 5.1% | 15%
| Clarity of the evaluation criteria | 42.6% | 20.2% | 8.9% | 15.8%
| Quality of peer review judgments | 38.9% | 74.1% | 47.7% | 32.6%
| Consistency of peer review judgments | 26.2% | 53.6% | 25.6% | 34.5%
| Reasonableness of policies relating to the use of grant funds | 49% | Not asked* | 36% | 12%
Post Award Administration | Coherence of policies on the use of funds among CIHR programs | 43% | Not asked* | 21% | 7%
| Understanding of how the reports are used by CIHR | 23% | Not asked* | 21% | 8%

Source: 2011 IRP Report Ipsos Reid Survey (data filtered by researchers who have applied to the OOGP n=1,909).

Footnote *

Institutional stakeholders and not researchers were asked about post award administration (n=232). They were not asked about "importance".


Areas for improvement

While these findings can be considered as broadly positive, when looking to make program delivery improvements it is useful to focus on elements that meet the following conditions:

  1. Lower levels of applicant satisfaction.
  2. Identified as important by applicants.
  3. Identified as areas for improvement by applicants.
  4. Perceived to have got worse over the last five years.

Two elements in Table 2-1 meet all of the above criteria, both of which relate to peer review: the quality of peer review judgments and the consistency of peer review judgments.

The quality of peer review judgments particularly stands out on these measures. It is ranked by applicants as the most important element in the application process, with around three in four rating it as important (74.1%). Just under one in two applicants (47.7%) state that it is an area for improvement.

It is recommended that further study should be conducted to explore how applicants are rating 'quality' in this context. This could for example relate to aspects of delivering the processes involved in peer review or to an assessment of which applications are selected; if applicants feel that the strongest applications were not selected, this could be considered as evidence of a lack of 'quality'. Analysis of the findings for these two elements by CIHR pillar reveals few differences across pillars.

Is the OOGP being delivered in a cost-efficient manner?

To assess cost-efficiency in delivering the OOGP, this evaluation replicates a peer-reviewed study conducted with data from the National Health and Medical Research Council (NHMRC) of Australia (Graves, Barnett & Clarke, 2011).

A key feature of this evaluation, following Graves et al.'s study, is that it considers not only the administrative costs to CIHR of delivering the OOGP but also the costs of the program to applicants and peer reviewers. As noted, CIHR's proposed program reforms aim to reduce the amount of time researchers spend preparing applications and to make peer review more efficient. This study quantifies the financial implications of the time spent by researchers and peer reviewers.

The analysis also improves on Graves et al.'s study by including data from a larger survey of peer reviewers and researchers in the calculations shown in Table 2-2. This should give greater confidence in the validity and generalizability of the findings. We take the 'ingredient approach' to cost analysis, which is based on the notion that every program (e.g. the OOGP) uses ingredients that have a value or cost (Levin & McEwan, 2001).

As can be seen in the detailed breakdown provided in Table 2-2, the average cost per OOGP application, including administrative costs (direct and indirect), applicant and reviewer 'costs' (monetized time) is $13,997. This compares with an average cost for the Australian NHMRC comparison of $18,896.

As can be seen in Table 2-3, the overall direct and indirect administrative cost per grant application ($1,307) is in fact comparable with those obtained for the Australian NHMRC ($1,022) and the US National Institutes of Health ($1,893).

Table 2-2: Cost elements of the OOGP and benchmark using the ingredient approach
Cost Items | Open Operating Grant Program | Australian National Health and Medical Research Council | Notes
Applicants
Average number of hours to complete an application (a) | 168.6 | 160-240 hours (20–30 eight-hour days) | CIHR Open Reforms Survey February-March 2012 (N=378)
Average hourly wage (b) | $64.52 | $67.17-$100.76 | Weighted average of academic salaries was calculated with data from Statistics Canada 2010/11. The Australian study gives only gross figures; the hourly wage was calculated by the evaluation unit using the cost per application and dividing it by the minimum and maximum length of time it took to complete an application.
Total number of program applications (annually) (c) | 4636 | 2705 | 2338 applications in Sept 2010 & 2298 in March 2011 OOGP competitions
Total cost to applicants (d) (a*b*c=d) | $50,430,742 | $43,610,114 | A$40.85 million, converted into Can$ using XE Quick Cross Rates on Feb. 17, 2012.
Applicant cost per grant application (e) (d/c=e) | $10,878 | $16,122 |
Peer Reviewers
Average number of hours spent (at home) per reviewer independently reviewing all applications (f) | 43.93 | Not available | Survey of OOGP Peer Review Committee Chairs, SOs & Reviewers January 2012 (N=457)*
Average number of hours spent (at home) per reviewer per application | 5.25 | 4 | Survey of OOGP Peer Review Committee Chairs, SOs & Reviewers January 2012 (N=457)*
Average number of hours spent per reviewer participating in committee meetings (g) | 23.82 | 46 | Survey of OOGP Peer Review Committee Chairs, SOs & Reviewers January 2012 (N=457)*
Average number of hours spent per reviewer travelling to committee meeting (h) | 7.15 | Not available | Survey of OOGP Peer Review Committee Chairs, SOs & Reviewers January 2012 (N=457)*
Total hours spent per reviewer in review process including travel (j) (f+g+h=j) | 74.9 | Not available | Survey of OOGP Peer Review Committee Chairs, SOs & Reviewers January 2012 (N=457)*
Average hourly wage (k) | $64.52 | Not available | Weighted average of academic salaries was calculated with data from Statistics Canada 2010/11
Total annual cost to reviewers (m) (j*k*1738=m) | $8,398,968 | $4,739,471 | CIHR runs two competitions annually; the March 2011 review meetings involved 861 reviewers while the November 2011 meetings involved 877 reviewers, for a total of 1,738 reviewers. Note that reviewers often participate in both competitions; therefore, the total number of reviewers does not represent unique reviewers.
Reviewer cost per grant application (n) (m/4636=n) | $1,812 | $1,752 |
Agency-Related Delivery Costs (administrative costs)
Peer review-related costs (travel, honoraria, hotels, meeting rooms, courier services) (p) | $1,994,317 | Not available | Data from CIHR Finance
Personnel costs (KCP, PPP, ITAMS Branches) (q) | $3,553,433 | Not available | Data from CIHR Finance and other groups: PPP, KCP and ITAMS
Non-salary direct overheads, facilities, materials, supplies (r) | $513,407 | Not available | Data from CIHR Finance and other groups: PPP, KCP and ITAMS
Total agency-related administrative costs (s) (p+q+r=s) | $6,061,157 | $2,764,516 | Disparities between the total administrative costs relate to the NHMRC running one competition per year compared with two for the OOGP. A$2.59 million converted into Can$ using XE Quick Cross Rates on Feb. 17, 2012.
Agency-related costs per grant application (t) (s/c=t) | $1,307 | $1,022 |
TOTAL
Full annual cost of funding exercise (u) (d+m+s=u) | $64,890,867 | $51,114,101 |
Full cost per grant application (v) (u/c=v); also (v=e+n+t) | $13,997 | $18,896 |
Footnote *

Applications submitted in 2009 to Project Grants Scheme of Australian National Health and Medical Research Council. The scheme formed 50.3% of that agency's budget in 2009.

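The full-cost figure in Table 2-2 follows directly from the ingredient quantities. The short sketch below simply re-runs that arithmetic with the published values (the variable names are ours); it can also be used to test how sensitive the total is to individual assumptions, such as the number of hours per application.

    # Ingredient approach: reproduce the OOGP cost per application from Table 2-2
    hours_per_application = 168.6      # applicant hours per application (a)
    hourly_wage = 64.52                # weighted academic wage, $/hour (b and k)
    applications_per_year = 4636       # total annual applications (c)
    reviewer_hours_total = 74.9        # hours per reviewer, including travel (j)
    reviewers_per_year = 1738          # reviewers across both annual competitions
    agency_admin_costs = 6_061_157     # direct and indirect agency costs (s)

    applicant_cost = hours_per_application * hourly_wage * applications_per_year   # (d)
    reviewer_cost = reviewer_hours_total * hourly_wage * reviewers_per_year        # (m)
    full_annual_cost = applicant_cost + reviewer_cost + agency_admin_costs         # (u)

    # Close to the $13,997 reported in Table 2-2; small differences reflect
    # rounding of the published ingredient values.
    print(round(full_annual_cost / applications_per_year))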

Table 2-3: International comparisons of agency-related program delivery direct costs
Cost Proportions | CIHR | NHMRC (Australia) | NIH (United States)*
Reviewer costs (travel, hotel, per diem) | $1,994,317 | N/A | $37,624,717**
Staff, space, other costs | $4,066,840 | N/A | $71,273,149**
Total direct costs | $6,061,157 | $2,764,516¥ | $108,902,306**
Number of applications | 4636 | 2705 | 57531
Agency-related delivery cost per grant application | $1,307 | $1,022 | $1,893**
Footnote *

Source: Dr. Nakamura, Acting Director, Centre for Scientific Review (CSR), US National Institutes of Health. Presentation to CIHR on Feb 17, 2012, titled "CSR Electronic Peer Review," Slide #9; Refers to CSR budget for FY2011. Does not specify grant type but most grants distributed in open competition.


Footnote **

Includes cost of extra staffing for peer review but unclear whether it includes regular staffing, space & IT-related costs; could therefore be an underestimate.


Footnote ¥

US$ converted into Can$ using XE Quick Cross Rates on Feb. 29, 2012.


Is the current project-based OOGP funding model an appropriate design for CIHR and the federal government to support health research?

Project-based and program-based funding models are both used by research funding agencies worldwide to support research excellence. Project-based funding supports ideas: "a defined piece of research with a beginning, middle, and end point" (CIHR, 2012a). Project-based funding has been successfully implemented by the National Institutes of Health (e.g., NIH Research Project Grant Program – R01), and the Gates Foundation (e.g., Grand Challenges in Global Health competition) (Grand Challenges in Global Health, 2011; Ioannidis, 2011; Azoulay, Graff-Zivin & Manso, 2009; Jacob & Lefgren, 2007).

Programmatic grants support researchers by funding: "a broad program of research over a number of years, usually at a fixed rate, but sometimes varying in relation to the type of research and the costs involved" (CIHR, 2012a). Several funding agencies, such as the Wellcome Trust in the UK and the Howard Hughes Medical Institute in the US, have successfully implemented programmatic funding schemes with positive results. Both models have their merits and there is no evidence to suggest that one is necessarily 'better' than the other.

The current OOGP originates from CIHR's predecessor, the Medical Research Council, and uses a project-based funding approach to support research. While health research and approaches to research funding have evolved in Canada and across the world, only relatively minor changes have been made to the OOGP (for example, to the methodologies for ranking applications in peer review committees). The agency has generally chosen to respond to new opportunities and challenges by creating a range of other programs, for example in the areas of knowledge translation and commercialization. However, while the OOGP itself has not changed significantly, the communities it serves have, and applicant behaviour has also evolved.

Applicants frequently renew OOGP grants

The extent to which applicants renew grants can be seen as one measure of the degree to which the OOGP is funding longer-term 'programmatic' research rather than shorter projects. One important caveat is that renewal behaviour is not consistent across all pillars of research; those in CIHR's health systems and services and population health pillars (Pillars III and IV) are far less likely to apply for renewals and to have grants successfully renewed than others, at least in part due to the nature of their research community. Notwithstanding this limitation, as the OOGP continues to fund largely biomedical research (around 80% of grant holders), grant renewals are one acceptable proxy measure for 'programmatic funding' behaviour.

Analysis of data on successful and unsuccessful renewal applications submitted to the OOGP from 2000-2010 shows that over this period, the proportion of approved applications that had been previously funded at least once ranged from 51% to 23.6% (Figure 2-3). If one or more renewals of a successful application is taken as an indication of programmatic funding, then the data appear to confirm the existence of programmatic funding "behaviour" in the OOGP among some researchers.

Figure 2-3: Previously funded status of OOGP renewal applications with FRNs, by Competition (2000-2010).


First version means not previously funded; more than one version means previously funded at least once. Data from 2000-2003 may be incomplete due to database conversion issues from the MRC to CIHR era and should be used with caution.

Source: CIHR Electronic Information System data for the OOGP (2000-2010) (N=37,604) provided by the CIHR Performance Measurement and Data Production Unit.

While the data show a gradual decline from the 51% renewals in the March 2000 competition to under 24% in the September 2010 competition, the continued presence of programmatic funding "behaviour" in roughly one out of every five applications underlines its ongoing importance. There is a case to be made that these applicants are already operating in a "programmatic funding mode," applying and re-applying for the same research, without enjoying the benefits of ongoing stable programmatic funding. For these applicants, longer program grants would likely reduce the time spent applying for funding, freeing up time to concentrate on their research program.

Evidence provided later in this report from the case studies of highly impactful OOGP funded research also illustrates the existence of programmatic funding within the OOGP's project based model and how this operates from a researcher's perspective. Some of the principal case-study participants directly supported the concept of programmatic funding, citing that:

"Writing funding grants was a time-consuming process and writing research grants on a frequent basis was taxing on [their] time and focus".

OOGP funded case study respondent

Examples of programmatic funding from the case studies include:

Case study participants agreed that, regardless of whether funding is project or program based, their research labs needed uninterrupted funding to ensure continuity and the retention of highly qualified personnel (HQP), and that:

"Continuity of grants through renewal processes and the ability to access operating grants were important attributes of the CIHR funding mechanism."

OOGP funded case study respondent

Generally, in competitions for programmatic grants, it would be expected that the track record of researchers plays an important role as the committee needs to have confidence that the applicant can deliver on a program of research. In the context of the OOGP, the behaviour of peer reviewers can provide further evidence around de facto programmatic funding that may be taking place. A content analysis of reviewer comments submitted by OOGP peer review committee members and scientific officers between 2004 and 2008 concluded that "track record has become increasingly important in judging the merit of grant proposals" (CIHR Evaluation Unit, 2009).

The above evidence appears to suggest that some applicants and reviewers are behaving as if programmatic funding exists within the OOGP. If this is the case, it would seem to lend support to CIHR's proposals to use both types of funding mechanisms in its open competitions (CIHR, 2012a). The current context presents an opportunity to introduce changes to support both project-based and programmatic funding. It should be noted, however, that indications of the use of track record as a criterion vary across committees and that CIHR has not formally stipulated its inclusion or assigned any weight to it in the current OOGP application review process.

What alternative designs could be considered – peer review?

CIHR has recently been consulting with stakeholders on the redesign of its open suite of programs including alternative models of delivering peer review. The potential implications of these alternative designs are considered in this section, in the context of the OOGP's future design and delivery.

In its consultations on enhancements to peer review, CIHR is exploring design elements that would reduce the overall time a reviewer spends reviewing, discussing, and providing feedback on an application. A multi-phased competition process that involves a two-stage screening process prior to face-to-face review is being considered, together with structured review criteria and conducting screening reviews and conversations in a 'virtual space.' The agency aims to make more judicious use of face-to-face committee meetings as a mechanism to integrate the results of remote reviews and determine the final recommendation for funding (CIHR, 2012a). This should improve efficiency and economy from an agency, applicant and reviewer perspective.

It is worth noting that despite the fact that peer review is the primary vehicle used by many major funding agencies worldwide to assess applications, relatively little is known about how it impacts the quality of funded research (Graves et al., 2011). A recent Cochrane study of peer review (Demicheli & di Pietrantonj, 2007) recommended greater examination of the efficiency and effectiveness of the peer review process as used by research funding organizations.

Assessing outcomes of applications selected by independent review compared with face-to-face discussion

In addressing the evaluation question on the OOGP's current peer review processes and to inform future designs, we focus here on two pertinent lines of enquiry:

  1. Assessing relationships between rankings of independent reviewer scores (submitted prior to peer review committee meetings) and rankings of face-to-face committee scores under the current OOGP peer review design.
  2. A bibliometric analysis of the scientific impact of researchers (measured by the Average of Relative Citations) to assess the 'quality' of what was funded by face-to-face committees compared with what would have been funded based only on independent reviewer scores.

In a redesigned peer review system that relies in its initial stages on independent review of funding applications, it is necessary to have confidence that those initially selected to proceed without committee discussion are the most meritorious. Without this, there is a risk that promising applications would be screened out. This type of redesign would also require evidence that selection using methods other than the existing form of face-to-face peer review will not have a detrimental effect on the outcomes of funded research. In short, CIHR needs to be sure that its open programs will continue to fund excellence.

To understand the analysis conducted, it is first necessary to briefly describe the current OOGP peer review process. This involves three main stages of selecting applications:

  1. Review scores (at-home scores) are provided by at least two reviewers working independently of each other.
  2. A 'consensus score,' agreed to by the two independent reviewers after discussion by the full review committee (15 members on average, ranging from 6 to 27 members).
  3. A final committee score representing the average of the scores of all committee members.

The first line of enquiry therefore focuses on using application scoring data from stages 1 and 3 in a 'natural experiment' that compares rankings derived from the independent assessment scores of applications by reviewers with the final rankings provided by the peer review committee after discussion.Footnote 12

If there is a high degree of congruency between the rankings derived from the independent reviewers' scores and the final committee scores, it can be hypothesized that the initial review of applications produces much the same outcome as a face-to-face discussion. A further analysis assesses the sensitivity and specificity of a hypothetical independent reviewer funding model using committee rankings as the 'gold standard' against which they can be compared.

The second line of enquiry involves a bibliometric analysis based on the scientific impact of publications produced by researchers following their application to the OOGP (using the Average of Relative Citations). This is designed to compare the impact scores of OOGP applicants selected at face-to-face peer review committee only, at the independent review stage only, at both stages or at neither stage.

For both of these lines of enquiry, several important limitations and caveats must be acknowledged. To highlight some of the more important ones:

Notwithstanding these limitations, given the global paucity of evidence on peer review, our analyses begin to address some of the questions that many funding agencies around the world are asking. It is evident that further research is needed, either within the context of future evaluations of CIHR's Open Operating Grant Program and its replacement, or as part of an agency research program to understand the impact of these changes.

Assessing relationships between independent reviewer scores and committee scores

Approximately 75% of the OOGP applications that were funded at the committee stage would also have been funded based on their initial independent review ranking. As shown in Figure 2-4, there is concordance between independent review and committee scores for the most excellent applications (top 20%) and for those ranked lowest (bottom 20%). As other studies have also shown, the greater variability in the three middle groupings reflects greater difficulty in determining merit for proposals that fall between the two extremes (e.g. Cole et al., 1981; Martin and Irvine, 1983; Langfeldt, 2001; Cicchetti, 1991).
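For readers wishing to replicate the concordance analysis, the sketch below shows one way of binning applications into quintiles under both rankings and measuring agreement within each committee-stage grouping. It is an illustration only, assuming hypothetical columns at_home_score and committee_score; it is not the evaluation team's code.

    import pandas as pd

    def quintile_concordance(df):
        """For each committee-stage quintile, return the share of applications
        that fall in the same quintile under the at-home (independent) ranking.
        Expects columns 'at_home_score' and 'committee_score' (higher = better)."""
        df = df.copy()
        df["home_q"] = pd.qcut(df["at_home_score"].rank(method="first"), 5, labels=False)
        df["cmte_q"] = pd.qcut(df["committee_score"].rank(method="first"), 5, labels=False)
        return df.groupby("cmte_q").apply(lambda g: (g["home_q"] == g["cmte_q"]).mean())

    # Usage: quintile_concordance(scores_df), where scores_df has one row per application.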

Figure 2-4: Concordance between independent and committee stage assessments


Source: Electronic Information System OOGP data, 2005-2010 (N=21,266) provided by the CIHR Performance Measurement and Data Production Unit.

Figure 2-5 provides further support to the hypothesis that it is easier to identify the most excellent applications. Among those applications ranked in the top 5% at committee stage, approximately 95% originated in the top 20% of applications at the independent review stage. Simply put, almost all of these applications would have been funded by the independent reviewers and not screened out. This level of agreement reflects the opinions of veteran peer reviewers interviewed in a recent article in Nature (Powell, 2010). Again, the interviewed reviewers suggested that it was relatively easy to identify the strongest proposals.

Figure 2-5: Origins of top 5% of committee stage applications


Source: Electronic Information System OOGP data, 2005-2010 (N=21,266) provided by the CIHR Performance Measurement and Data Production Unit.

Predictive value of independent reviewer scores

The results described above were further corroborated by an analysis of sensitivity and specificity. Sensitivity and specificity are statistical measures commonly used to assess the performance of diagnostic and screening tests. These concepts can be used in the present context to assess the predictive accuracy of the independent reviewers' scores (the at-home model) in relation to the "true state" or gold standard, in this case the full committee score.Footnote 13 Sensitivity and specificity tell us how accurate the "test" is in predicting "correct" results.

Using Table 2-5 as an illustration, sensitivity seeks to answer the following question: out of all those who were funded by the full committee (a+c), what proportion was accurately predicted by the at-home model? It is computed as a/(a+c).

Specificity, on the other hand, addresses the following question: out of all proposals not funded by the full committee (b+d), what proportion was accurately predicted by the at-home model? This is computed as d/(b+d).

Table 2-5: Predictive accuracy of "at-home" model vs. gold standard (full committee model)
 | Full committee ("gold standard"): Funded (+) | Full committee ("gold standard"): Not funded (-)
At-home model ("screening test"): Funded (+) | True positives, a=3399 | False positives, b=1129
At-home model ("screening test"): Not funded (-) | False negatives, c=1052 | True negatives, d=15,686

Sensitivity=0.764, 95% confidence interval [0.751 to 0.776]; Specificity=0.933 [0.929 to 0.937].Footnote 14
Positive Predictive Value (PPV) = 0.751 [0.738 to 0.763]Footnote 15; Negative Predictive Value (NPV)= 0.937 [0.933 to 0.941]. Footnote 16

The sensitivity and specificity scores computed from the data in Table 2-5 are presented below the table. These results confirm that 76% of the applications funded at the committee stage would have been funded based on the initial independent review (i.e. the at-home model), while 93% of applications that were not funded at the committee stage would not have been funded by the at-home model. The kappa statistic, an overall statistic of chance-corrected agreement, for these data is 0.69 (95% CI 0.68-0.70), which is somewhat less than the sensitivity and PPV. This reflects the fact that most applications to the OOGP do not get funded, and thus a "bogus test" in which a reviewer predicted no funding for all applications would be correct 75% of the time (assuming a 25% overall success rate) just by chance.Footnote 17
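These indices follow directly from the four cell counts in Table 2-5. The sketch below (our own code, not the evaluation team's) reproduces the reported sensitivity, specificity, predictive values and kappa.

    # Cell counts from Table 2-5 (at-home model vs. full committee decision)
    a, b, c, d = 3399, 1129, 1052, 15686      # TP, FP, FN, TN
    n = a + b + c + d

    sensitivity = a / (a + c)    # ~0.764: committee-funded applications predicted funded at home
    specificity = d / (b + d)    # ~0.933: committee-rejected applications predicted not funded
    ppv = a / (a + b)            # ~0.751: positive predictive value
    npv = d / (c + d)            # ~0.937: negative predictive value

    # Cohen's kappa: chance-corrected agreement between the two decisions
    p_observed = (a + d) / n
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_observed - p_expected) / (1 - p_expected)   # ~0.69

    print(sensitivity, specificity, ppv, npv, kappa)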

Ideally, a test would be both 100% sensitive and specific, but in reality there is always a trade-off between the two properties.Footnote 18 The best balance between the two indices depends on the consequences of missing out on excellent proposals versus wrongly funding unworthy proposals.

The current result of high specificity (0.93) with a sensitivity of 0.76 suggests that relying on independent reviewers without face-to-face committee review could "miss" up to one quarter of deserving proposals, but would be highly unlikely to fund poor quality proposals.

It is worth reiterating that the two independent reviewers' scores are not independent of the full committee's score, since the independent reviewers are likely to have read the application in great detail and thus affect the opinions of the other committee members. This undoubtedly biases all the concordance indices upwards.

The implications of this analysis for this evaluation and for future program design are that even if only two independent reviewers are used to screen proposals at an initial stage of peer review, they are likely to select excellent applications. It is likely that using a greater number of reviewers at a screening stage would provide greater confidence that excellence is being selected (i.e., increase the sensitivity of the independent reviewers' average ratings). It is therefore recommended that CIHR conduct further analyses on the impact of the number of reviewers on funding decisions if peer review re-designs are implemented.

Bibliometric analysis of scientific impact of researchers

In this second line of enquiry, a preliminary analysis was conducted to assess the scientific impact of researchers who submitted OOGP applications, in relation to the outcomes of peer reviewFootnote 19. The research question investigated is whether researchers with applications selected by OOGP face-to-face peer review committees have greater subsequent scientific impact than those who would hypothetically have been selected if only the independent reviewer rankings had been used. Table 2-6 below describes four groupings of researchers, with face-to-face peer review treated as the 'gold standard' of selection for comparison.

We would expect those researchers categorized as true positives (selected for funding based on both independent reviewer and committee rankings) to have the highest subsequent scientific impact. If independent review rankings are indeed a potentially reliable means of selecting excellence, we would also expect the false positives (those selected at independent review only) to have higher impact than the true negatives (applications not funded by either the committee or independent reviewers).

Table 2-6: Categories of applications: derived independent reviewer rankings and final committee rankings
Group of applications/researchers | Independent reviewers – funded Y/N | Peer review committee – funded Y/N | Description
True negative (n=15,686) | N | N | Applications that would not have been funded by independent reviewer rankings and that were not funded by face-to-face peer review committee
False negative (n=1052) | N | Y | Applications that would not have been funded based only on independent reviewer rankings but which were funded by face-to-face peer review committee
False positive (n=1129) | Y | N | Applications that were not funded by face-to-face peer review committee but which would have been funded based only on independent reviewer rankings, i.e. if no subsequent face-to-face discussion had taken place
True positive (n=3399) | Y | Y | Applications that would have been funded by independent reviewers and which were funded by face-to-face peer review committee

As shown in Figure 2-6, on average the true positives (selected both by independent reviewers and at committee) have a significantly higher average of relative citations than the other three groups (p<0.05). The true negatives (not selected at either stage) have ARC scores significantly below the other three groups (p<0.05). This first finding again supports the view that the OOGP's peer review process selects excellence.

It can, however, also be observed that the false positives (applications selected for funding by the independent reviewers but not by committee) and the false negatives (applications selected by committee but not by independent reviewers) do not differ significantly from each other.

Based on this preliminary analysis, there is no evidence to suggest that the independent review scores provided by two reviewers are a less reliable means of selection than committee discussion scores when assessing projects likely to result in future scientific excellence.
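To make the group comparison underlying these statements concrete, the sketch below shows the kind of pairwise test that could be applied to ARC scores for the four groups. The ARC values are randomly generated placeholders rather than OST data, and Welch's t-test is used purely as an illustrative choice of test.

```python
# A minimal sketch of pairwise comparisons of ARC scores across the four
# application groups in Figure 2-6. All values below are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "true positive": rng.normal(1.4, 0.5, 200),   # hypothetical ARC scores
    "false positive": rng.normal(1.1, 0.5, 200),
    "false negative": rng.normal(1.1, 0.5, 200),
    "true negative": rng.normal(0.9, 0.5, 200),
}

# Pairwise Welch's t-tests (unequal variances) between the four groups.
names = list(groups)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        t, p = stats.ttest_ind(groups[a], groups[b], equal_var=False)
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}")
```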

Overall, while many limitations should be borne in mind, these findings appear to support an approach to peer review that involves independent reviewer screening of applications.

Figure 2-6: True Positive applications perform better than the other three comparison groups

Figure 2-6 long description

Source: Bibliometric data drawn from Canadian Bibliometric Database built by OST using Thomson Reuters' Web of Science (OOGP sample n=1,500).

Knowledge Translation

Evaluation questions

Introduction

CIHR's mandate states that the agency aims:

"To excel, according to internationally accepted standards of scientific excellence, in the creation of new knowledge and its translation into improved health for Canadians, more effective health services and products and a strengthened Canadian health care system" (CIHR, 2012a).

In line with this mandate, a key objective of the OOGP is "to contribute to the creation, dissemination and use of health-related knowledge" (CIHR, 2012b). As Graham and Tetroe (2007) state, while discoveries and the generation of new knowledge have the potential to result in improvements to health and health systems, these benefits will not be realized unless knowledge is put into action.

The commercialization of research is a form of end-of-grant knowledge translation and is important to both CIHR and the federal government more generally. Capturing the health and economic benefits of health research is one of CIHR's stated objectives in its strategic plan, the Health Research Roadmap (CIHR, 2010). The agency aims to "translate health research findings into improved health products, technologies and tools for Canadians" (p.13). One of the expected outcomes of the OOGP is to generate economic impacts and the size and scale of the program make the OOGP a potentially highly significant contributor in this area.

Commercialization of research is also an important priority for the federal government more broadly. The recent Innovation Canada: A Call to Action – Expert Panel Report – Review of Federal Support to Research and Development, sometimes known as the 'Jenkins Report' (Nicholson & Côté, 2011), calls for federal programs that support commercially oriented R&D to make "an even stronger contribution to a more innovative and prosperous Canada." The Open Operating Grant Program can also be seen as contributing to the 'Entrepreneurial Advantage' of the government's Science and Technology Strategy (Industry Canada, 2009). This is aimed at translating "knowledge and ideas into commercial products that will generate wealth and improve the lives of Canadians and others around the world."

What commercializable outputs have been produced by OOGP-funded researchers?

Data from CIHR's Research Reporting System show that a wide range of commercializable outputs have resulted from OOGP-funded research (Table 3-1). These range from outputs with an obvious link to commercialization (e.g. patents or spin-off companies) to those with a less direct link (e.g. a new or changed policy/program) but which can still lead to economically beneficial outcomes.

Around one in five OOGP-funded researchers report that their research has resulted in at least one of nine types of commercializable output. As would be expected, there are significant differences among the research pillars, with biomedical researchers more likely than those in Pillars III and IV to report products such as new vaccines/drugs, new patents and intellectual property claims, and less likely to report new practices and new or changed policies/programs.

Table 3-1: Types of OOGP research outcomes by CIHR research pillarFootnote 20
Percent saying research has resulted in outcome
Type of Outcome Pillar I Pillar II Pillar III Pillar IV Average across Pillars Prob. Level
New Practices 17.5 46.8 28.6 29.8 22.1 p<0.005*
Intellectual Property Claim 13.3 12.9 3.6 0.0 11.8 p<0.005*
New Patent (filed or obtained) 14.0 8.1 0.0 0.0 11.6 p<0.005*
Software/Database 6.8 9.7 10.7 10.6 7.6 0.681
Direct Cost Savings 5.0 12.9 3.6 4.3 5.7 0.039
New Vaccines/Drugs 6.1 0.0 0.0 0.0 4.7 p<0.005*
New or Changed Policy/Program 2.0 11.3 17.9 12.8 4.5 p<0.005*
New Product License 4.2 6.5 0.0 0.0 3.9 0.002
Spin Off Company 4.6 3.2 0.0 0.0 3.9 0.020
N 457 62 28 47 574

Source: Research Reporting System, Pilot (N=596).

Footnote *

Statistically significant differences. To account for the possible effects of multiple testing (9 tests), the probability level for statistical significance was adjusted to p < 0.05/9 ≈ 0.005.

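The sketch below illustrates the kind of significance testing described in the footnote: a test of whether reporting a given outcome type is independent of pillar, judged against the Bonferroni-adjusted threshold of 0.05/9. The counts used are hypothetical placeholders, not RRS data, and a chi-square test of independence is assumed as the underlying test.

```python
# A minimal sketch of an outcome-by-pillar significance test with a
# Bonferroni-adjusted alpha for nine outcome types. Counts are hypothetical.

from scipy.stats import chi2_contingency

# Rows: reported the outcome / did not; columns: Pillars I-IV (hypothetical counts).
table = [
    [80, 29, 8, 14],
    [377, 33, 20, 33],
]

chi2, p, dof, _ = chi2_contingency(table)
alpha_adjusted = 0.05 / 9  # Bonferroni correction for nine outcome types tested
print(f"chi2 = {chi2:.1f}, p = {p:.4f}, significant at adjusted alpha: {p < alpha_adjusted}")
```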

Longer-term outcomes

Conducting case studies of commercializable outcomes resulting from OOGP-funded research allows for a more in-depth and longer-term analysis of the types of impacts achieved as a result of these grants. This approach can also be used to illustrate how OOGP funding has contributed to the achievement of high impact projects. Data from CIHR's Research Reporting System was used to sample relevant projects for subsequent in-depth qualitative interviews with multiple stakeholders.

The projects described in Table 3-2 had wide-ranging impacts, including on patients, health care providers, other researchers, students, the health care system and society at large. Dr. Fernie's sling system products are one example of impacts on multiple stakeholder groups. The products have direct benefits for nurses in reducing lifting injuries, but can also have wider benefits for patients. Getting patients mobilized can prevent health problems such as ulcers and injuries sustained during lifting, and there are also positive outcomes for issues such as patient isolation, depression and confusion. These positive impacts come in addition to the commercializable benefits of patenting and producing devices.

Table 3-2: Case study summaries of OOGP research impacts
Project Title Design and efficacy of novel interface sling systems for lifting patients Bioengineered blood clots to promote cartilage regeneration Language use inventory Sensory control of movement Evaluation of an innovative on-line education to improve evidence-based family practice
Nominated Principal Investigator
Dr. Geoffrey Fernie
University of Toronto

Dr. Caroline Hoemann
Polytechnique Montréal

Dr. Daniela O'Neill
University of Waterloo

Dr. Arthur Prochazka
University of Alberta

Dr. Moira Stewart
Western University
Need/ Issue Injuries to nurses and other health care providers are a significant concern and lead to higher health care costs. Frequency of Total Knee Replacement (TKR) surgeries rising due to aging population and lack of alternative solutions. TKR surgery is costly, invasive, risky and has long wait times. Lack of tools and experts to assess language development in children leading to long wait times. Improving movement in spinal cord injury patients has been slow due to a lack of proper understanding of how neural connections can be rehabilitated. Traditional mechanisms for continuing education have not resulted in effective improvements in health care.
Solution/ Invention Dr. Fernie developed a system to help nurses lift heavy patients on their own without causing injury to themselves. Dr. Hoemann developed new treatment options that can support effective healing of damaged knee cartilage. Dr. O'Neill developed a standardized tool to assess language development in children to help early identification of problems and provide appropriate supports. Tool can eliminate need for experts and reduce wait times. Dr. Prochazka developed several applications to help generate hand movement. His basic science work has led to development of rehabilitation options that were never considered previously. Dr. Stewart transformed how continuing education (CE) is planned and delivered by developing a technology-based integrated CE modality for primary health care physicians to increase their knowledge and competence.
Current Status of Product The SlingSerter™ is patented and pre-production quantities are being manufactured. Dr. Fernie and his team are also collaborating with several organizations in China on studies related to his devices. Dr. Hoemann's research has been translated and is being applied. Her work has led to a patented product named BSTCarGel®, which is currently owned by Piramal Healthcare Canada Ltd. The company has the product in trials and is working towards gaining authorization to sell the product in Canada and Europe. Dr. O'Neill created a company in 2009, Knowledge in Development Inc., through which she sells the LUI, scoring sheets, and a manual for the tool. The LUI is in use in thirty states in the U.S., eight provinces in Canada, and in the U.K., Ireland, Australia, and New Zealand. It is being translated and tested in several different languages including French and Arabic. The SRS is a novel type of nerve implant which, after many safety trials in animals, was implanted in a person with spinal cord injury in 2008 to help restore hand function. This trial has worked well and the approach has been applied to other areas such as pain management, bladder control and hand movement. While the findings of this study were mixed, "it provided evidence-based contribution and therefore quality improvement" and "is a great example regarding policy relevant [and] interesting work that she has done" (Alison Paprica, Director, Planning, Research and Analysis Branch, Ontario Ministry of Health and Long Term Care).
OOGP Funding History He has received a total of four OOGP grants, as well as other awards and grants from both CIHR and other funding organizations. She has received numerous CIHR grants as both PI and co-PI, as well as substantial funding from other organizations. She has received a total of four OOGP grants, as well as other awards and grants from both CIHR and other funding organizations. He has received a total of six OOGP grants, as well as other awards and grants from both CIHR and other funding organizations. While she has only received one OOGP grant, she has had additional funding from CIHR and other funding organizations.
Importance of OOGP Funding Studies funded by CIHR directly contributed to the development of products. The initial study funded by CIHR led to 18 other funding opportunities related to her initial research on Bioengineered Blood Clots to Promote Cartilage Regeneration. Funding for the development of the LUI and its subsequent standardization and usability studies have all been attributed to CIHR. Dr. O'Neill credits CIHR for the existence of the tool: "CIHR made the LUI possible" (Dr. O'Neill). "I was fortunate that I had support from CIHR for my research for a lot of my career and this has helped me to conduct research, support students as well as access other research funding. Without this seed funding, it would have been impossible to do the basic science that underpins the clinical and commercial activity we are now seeing" (Dr. Prochazka). According to Dr. Stewart, the initial funding was very useful because it allowed for progress using technology to improve the quality of care.
Potential Benefits Will potentially save significant healthcare dollars. Potentially prevent more invasive and costly treatments like TKR thereby saving significant healthcare dollars. Benefits to children and primary care givers, empowerment; experts' time freed to focus on more serious cases. Improved movement and improved quality of life for people with spinal cord injuries. With the increased knowledge and competence, primary health care physicians can provide evidence-based and effective care to their patients.
Source: Case study data from interviews with N=25 key informants.

The 'high impact' value of the research described in Table 3-2 often accrues over long time periods, sometimes decades, and can build on long-standing programs of research. For example, Dr. Prochazka's work on sensory control of movement builds on 40 years of research. Similarly, Dr. Hoemann's work took over 10 years to move from laboratory research with animal models to the translation of this research for human application. These frequently lengthy timelines make it essential that CIHR conducts regular follow-ups with researchers who, as they complete their grants, report projects or programs of research that appear promising. It is only through this approach that wider benefits can be captured, rather than maintaining a focus only on shorter-term outcomes such as journal publications.

As has been found in other evaluations of CIHR programs, researchers felt that obtaining CIHR grants is important not just in funding research, but also in providing acknowledgement that their research area is valuable. This provides peer recognition, nationally and internationally, in their field. This finding indicates that there is a 'prestige' value-add to receiving CIHR and OOGP funding that allows researchers to leverage grants beyond their dollar value alone. Dr. Hoemann's track record led her to be part of a collaborative application to the Canada Foundation for Innovation (CFI) which received $20.3m to support infrastructure and research in the area of Nanomaterials and Microsystems for biomedical applications such as orthopaedics, cardiovascular diseases and oncology.

The researchers who participated in the case studies stressed the need for CIHR to support early career researchers to help projects get going and allow the research to be viable in the long term. This is particularly important in commercializable endeavors where:

"Initial research start-ups to demonstrate that the findings in research are viable and applicable is essential. Only then will industry show interest, recognize the potential and be willing to get involved"
OOGP funded case study respondent

One program gap identified by the researchers was the lack of adequate funds for knowledge translation and for supporting the uptake of research products. Researchers identified a range of areas where they felt CIHR could provide more support:

CIHR's current proposals for redesigning its open suite of programs, including the OOGP, involve absorbing a number of 'boutique' knowledge translation and commercialization programs into open project and program schemes. Based on these findings, it will be important that these new programs ensure adequate funding for knowledge translation and commercialization and are designed to encourage these areas.

'Migration' from OOGP to commercialization grants

As mentioned above, CIHR currently offers a suite of commercialization programs that are designed to assist health researchers and industry to engage. These include the Proof of Principle programs, designed to facilitate and improve the commercial transfer of knowledge and technology resulting from academic health research, as well as other programs such as the Industry Partnered Collaborative Research Program and the Collaborative Health Research Projects Program. CIHR's current investment in these programs is relatively small ($14.1m annually), although the agency also participates in funding commercialization-oriented programs with its NSERC and SSHRC partners (e.g. the Centres of Excellence for Commercialization and Research and the Business-Led Networks of Centres of Excellence).

Table 3-3 shows the number and proportion of OOGP-funded researchers (nominated principal investigators) who 'migrate' from an OOGP grant to one with a commercialization focus. We cannot attribute directly that the OOGP funds earlier basic research or initial proof of concept/invention that is then developed into early-stage technology or product development through CIHR's commercialization grants. It is, however, likely that this is what can be observed for many of the 337 researchers who have 'migrated' between an OOGP and a commercialization grant.

Table 3-3: Number and type of OOGP researchers who 'migrate' to commercialization grants 2000-2010
Number of unique researchers (NPIs) funded by OOGP grants Number of unique OOGP researchers (NPIs) holding subsequent commercialization grants Proportion of OOGP researchers (NPIs) receiving a subsequent commercialization grant Number of commercialization grants received by researchers (NPIs) who have held an OOGP grant Number of commercialization grants held by researchers (NPIs) who have held an OOGP grant – analyzed by pillar
9,428 337 3.6% 458 Biomedical 406
Clinical 44
Health Systems & Services 1
Population Health 3
Source: Electronic Information System data provided by the CIHR Performance Measurement and Data Production Unit.

As can be seen in Table 3-3, a relatively low proportion of researchers (3.6%) obtain a CIHR commercialization grant after holding an OOGP grant. The relatively small investment in commercialization programs probably plays a part here.

What influence has OOGP-funded research had on wider stakeholder groups including those in the health care system, government and industry?

OOGP grants are expected not only to create knowledge but also, over time, to have wider impacts ranging from the individual to the system level. As has been described, the case study projects provide examples of how this has taken place in specific instances, but there is also more generalizable evidence reported by researchers that their OOGP-funded research has had impacts on a range of groups (Figure 3-1).

Figure 3-1: Percent of funded researchers saying research results have had impacts

Figure 3-1 long description

Data are based on respondents who reported impacts to a "considerable" or "great" extent.
Source: Research Reporting System, Pilot (N=596).

As can be seen in Figure 3-1, researchers most frequently report that the grant resulted in their generating subsequent research by either their project team or by others. However, over one in five reported impacts on various stakeholders within the health system. As has been described, the impacts of research are often long-term; it would be expected that this figure would rise significantly if the same question were asked of researchers a decade from now.

In terms of the different groups of stakeholders impacted by OOGP-funded research, most researchers report that their work has influenced to a considerable or great extent other researchers/academics and the stakeholders formally listed on their applications (Figure 3-2).

Figure 3-2: Percent saying OOGP research has influenced stakeholders to a considerable or great extent

Figure 3-2 long description

Source: Research Reporting System, Pilot (N=596).

Is there a relationship between stakeholder involvement in the research process and outcomes of the research?

To put the knowledge generated by research into action, Graham and Tetroe have argued that research findings "will more likely be relevant to and used by the end-users" if those end-users are involved in all aspects of the research process (Graham & Tetroe, 2007). This is also a stance supported by several funding agencies such as the Australian National Health and Medical Research Council (Adily et al, 2009) and the Canadian Health Services Research Foundation (Lomas, 2000). The findings from the case studies support the importance of including stakeholders in the research process; case study participants attributed some of their success to including the relevant stakeholders in their research, usually at an early stage.

When looking at OOGP grants more broadly, however, data suggest that apart from researchers/academics and, to some extent, stakeholders formally listed on the grant application, other stakeholder groups are not frequently involved in the conduct of OOGP-funded research (Table 3-6).

Researchers/academics had far greater involvement than other stakeholder groups across all the different stages of the research process. Among the other potential users, involvement was most likely to take place at the "KT activities" stage, although several groups (listed study stakeholders, health system/care practitioners and patients/consumers of health care) also had relatively higher levels of involvement at the data collection/project implementation phase. Stakeholders formally listed on the application were consistently involved at each stage of the research process by at least 25% of OOGP grantees.

Table 3-6: Involvement of potential research users in OOGP research (% of researchers)
Potential Research User Group Full involvement* Development of research idea/question Development of protocol Data collection phase/Project implementation Interpretation of results KT activities
Researchers/ Academics 71.3 84.9 84.7 84.7 88.9 77.5
Listed study stakeholders 18.5 26.3 26.3 29.2 25.7 25.7
Health system/care practitioners 9.7 18.3 14.6 21.1 18.6 24.5
Patients/ Consumers of health care 2.7 4.9 4.2 12.4 3.7 11.6
Health care professional organizations 2.0 3.2 2.5 4.4 3.4 10.4
Health care managers 1.7 3.2 2.7 5.5 3.5 6.7
Consumer groups/ Charitable organizations 0.8 1.8 1.0 1.8 1.3 8.6
Industry 0.8 1.8 2.3 2.5 2.3 8.4
Federal/ Provincial representatives 0.7 0.8 0.7 1.0 1.2 5.5
Media 0.5 0.8 0.5 1.2 0.8 19.6
Community/ Municipal organizations 0.3 0.8 0.3 1.5 0.8 6.7
*Full involvement was defined as involvement of the specified user group in all five stages from "development of research idea" to "Knowledge Translation activities".
Source: Research Reporting System, Pilot (N=596).

Full involvement in the research process was defined as involvement in all five stages from "development of the research idea" to "KT activities." When full involvement occurs, it is typically with only one user group (53.9%), occasionally with two groups (16.8%) and rarely with three or more (5.8%) of the eleven possible user groups.

Full involvement varied by research discipline with Pillar II researchers being the most likely (91.9%) to involve other stakeholders followed by Pillar III (89.3%), Pillar IV (85.1%) and Pillar I (71.3%) (p<0.01). Most of this involvement was with other researchers/academics.

Benchmark data for this area are scarce, but when Pillars III and IV are combined, full involvement with at least one user group averages out to 86.7%. This compares with 35.3% reported in a similar study in Australia (Adily et al, 2009) that focused only on involvement in non-biomedical health research grants and training awards. The wide disparity in average scores requires further scrutiny but it is noteworthy that the ranking of potential research user groups in terms of levels of full involvement for our study – researchers/academics, formally listed stakeholders, health care practitioners, patients/consumers of health care – is essentially the same as that reported in the Australian study – academic researchers other than co-investigators, health care professionals, and patients/consumer groups.

Further analyses were conducted on these data to test associations between full involvement of stakeholders and the outcomes that researchers reported from their grants. One outcome – reporting direct benefits to human research subjects – showed a tendency towards association with full involvement. About 23% of researchers who reported full involvement of at least one end-user group also reported direct benefits of their work to human research subjects, compared to 15.7% of those who reported no full involvement (p = 0.058).

Capacity Development

Evaluation questions

Has the average number of research staff and trainees attracted and trained by OOGP grants since 2000 increased, decreased or remained the same?

Capacity development is an important element in maintaining a world-class research enterprise. CIHR supports capacity development directly, through awards for individual researchers, and indirectly, by funding research projects that develop capacity through the involvement of students, trainees and other researchers/stakeholders. Training and support behaviours vary widely across CIHR's research pillars, for example between biomedical research and the social sciences.

The definition of capacity development used in this evaluation includes the direct involvement in the research process of any paid or unpaid staff or trainee, including: researchers; research assistants; research technicians; postdoctoral fellows; post-health-professional-degree students (MD, BScN, DDS, etc.); fellows (not pursuing a master's or PhD); and doctoral, master's and undergraduate student trainees.

Table 4-1: Average and total number of qualified personnel (Individuals) supported by pillar
Pillar Total number of HQP trained/supported – based on RRS data Mean number of HQP trained/supported per researcher Standard Deviation Inferred total of HQP trained/supported based on all approved OOGP applications (2000-2010) Number of OOGP approved applications (2000-2010)
Biomedical (n=440) 3490 7.93 5.75 56,731 7154
Clinical (n=59) 593 10.05 7.91 9,738 969
Health Systems & Services (n=26) 204 7.85 12.17 4,176 532
Soc, Cultural, Enviro, Popln Hlth Research (n=47) 640 13.62 14.86 10,024 736
Total 4927 8.61 7.66 81,175 9428*

Differences across pillars are significant (p<0.001).

Source: Research Reporting System-Pilot (N=596) and Electronic Information System data provided by the CIHR Performance Measurement and Data Production Unit.

Footnote *

Totals for the four pillars do not add up to 9428 due to 37 "approved" applications missing pillar data.


Table 4-1 shows the number of highly qualified personnel (HQP) trained or supported through an OOGP grant. The Research Reporting System data completed by OOGP-funded researchers capture only a sample of the researchers funded by the OOGP between 2000 and 2010, and so cannot provide a total number of HQP trained or supported over this period. However, by multiplying the average number of HQP trained or supported per researcher in each pillar by the number of approved applications, we can infer the total number of HQP who may have been trained on OOGP grants over this period (n = 81,175). As completion of the Research Reporting System is now mandatory for OOGP grantees, the next evaluation of the program in five years' time will be able to validate this estimate.
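For transparency, the inference in Table 4-1 can be reproduced directly from the figures in the table, as sketched below. The overall total uses the overall mean (8.61) and all 9,428 approved applications, which is why it exceeds the sum of the pillar-level estimates.

```python
# A minimal sketch of the Table 4-1 inference: mean HQP per funded researcher
# (from the RRS sample) multiplied by approved OOGP applications per pillar.
# Figures are taken from Table 4-1.

pillars = {
    # pillar: (mean HQP per researcher, approved applications 2000-2010)
    "Biomedical": (7.93, 7154),
    "Clinical": (10.05, 969),
    "Health Systems & Services": (7.85, 532),
    "Soc, Cultural, Enviro, Popln Hlth Research": (13.62, 736),
}

for name, (mean_hqp, n_apps) in pillars.items():
    print(f"{name}: ~{round(mean_hqp * n_apps):,} HQP inferred")

overall_mean, total_apps = 8.61, 9428
print(f"Overall: ~{round(overall_mean * total_apps):,} HQP inferred")  # ~81,175
```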

There are other limitations to consider here; for example, while it would seem that, in relative terms, Pillar IV provided the largest number of opportunities (mean = 13.6 HQP per researcher), these data could be misleading. Individuals can be involved in multiple projects with multiple researchers, with varying levels of involvement, which can lead to double counting.

The issue of potentially 'double counting' the same person on several research grants can be explored further by analyzing the full-time equivalent (FTE) data for HQPs. This measure records the proportion of the trainee's time spent on a grant, allowing for the capture of situations where grants may involve multiple trainees for smaller amounts of time.

Indeed, as is shown in Table 4-2, on average, clinical and biomedical researchers' highly qualified personnel were more heavily involved in their grants than those in the other pillars. Pillar IV's FTE-based average is considerably lower than its overall average when unique individuals are counted without regard for time involved; 4.81 based on FTE (see Table 4-2) versus 13.62 based on total number of individuals involved (see Table 4-1). It will be important to recognize these substantial pillar-based differences in future performance measurement or evaluation studies, to ensure that accurate data on training and support is captured.

Table 4-2: Average and total number of highly qualified personnel (Full-Time-Equivalent) trained by pillar
Pillar Mean number of HQP (FTE) trained/supported per researcher Total number of HQP (FTE) trained/supported – based on RRS data Standard Deviation Inferred total of HQP trained/supported based on all approved OOGP applications (2000-2010) Number of OOGP approved applications (2000-2010)
Biomedical Research (N=334) 7.65 2554 28.50 54705 7154
Clinical Research (N=50) 8.10 405 28.17 7849 969
Health Systems & Services Research (N=23) 2.83 65 2.39 1503 532
Soc, Cultural, Enviro, Popln Hlth Research (N=32) 4.81 154 4.70 3542 736
Total 7.24 3178 26.64 68251 9428*

Differences across pillars are not statistically significant.

Source: Research Reporting System - Pilot (N=596) and Electronic Information System data provided by the CIHR Performance Measurement and Data Production Unit

Footnote *

Note that the totals for the pillars do not add to 9428 due to 37 "approved" applications missing pillar data.


In a similar vein, grant duration is a further potential confound when considering the number of HQP involved in a project. As previously discussed, researchers from Pillars III and IV often have grants of shorter duration. In fact, Table 4-3 shows that the differences in FTE between CIHR pillars largely disappear when grant duration is controlled for statistically. This suggests that all four pillars are providing similar opportunities for the development of HQP in relation to their specific projects.

Table 4-3: Average number of full time equivalent highly qualified personnel trained/supported per year
Average number of FTE HQP by year Standard Deviation
Biomedical Research (N=334) 2.31 8.79
Clinical Research (N=50) 1.40 1.04
Health Systems & Services Research (N=23) 1.38 0.98
Soc, Cultural, Enviro, Popln Hlth Research (N=32) 1.84 1.76
Total 2.12 7.58
*Differences across pillars are not statistically significant.
Source: Research Reporting System - Pilot (N=596)

In terms of changes in the number of trainees listed on OOGP grants over time, the average number of full-time equivalent trainees per year of grant has generally increased by competition year, although the change is not statistically significant (Figure 4-1). Tracking data on this metric are currently limited, particularly for 2004, and because the most recent Research Reporting System data on this measure were unavailable at the time of the evaluation.

Figure 4-1: Average number of full-time equivalent trainees per year of grant by competition yearFootnote *

Figure 4-1 long description

Source: Research Reporting System - Pilot (N=279)

Is the OOGP funding researchers from across all areas of health research? Do researchers in Pillars III and IV face barriers in obtaining OOGP funding?

The Open Operating Grant Program aims to fund excellent research from across CIHR's four health research pillars. As a legacy funding program of the Medical Research Council, the OOGP has been viewed as largely successful in funding biomedical research. However, concerns have been raised by both internal and external stakeholders around whether the OOGP has been fully supporting research across the agency's overall health research mandate. A recent consultation document on the agency's proposed reforms to its open suite of programs states that CIHR's Governing Council is seeking to:

"Ensure that the new Open Suite will both remove barriers and create opportunities for research from all pillars." (CIHR, 2012a).

Similarly, the recent Quinquennial International Review of CIHR also raised this issue, particularly in relation to health services and policy research and population and public health (Pillars III and IV). As one example, the international reviewers of CIHR's Institute of Population and Public Health (IPPH) noted that expenditures in the OOGP relating to this research area were low and had plateaued over time (Macintyre, 2011).

As shown in Figure 4-2 below, biomedical research (Pillar I) has accounted for around 80% of expenditures on Open Operating Grants since 2002-2003 with little change since this date. As noted elsewhere in this report, however, actual expenditures on the OOGP have increased significantly from 2000-2001 to 2010-2011, including in Pillars III and IV.

Figure 4-2: Proportion of annual expenditure on the OOGP by pillar 2000/01 – 2010/11

Figure 4-2 long description

Source: Based on Electronic Information System data provided by the CIHR Performance Measurement and Data Production Unit.

The issue of specific challenges for non-biomedical researchers has been raised at the agency by past and present Scientific Directors and other stakeholders. Analyses of some of these potential barriers have also been conducted in several studies (e.g. Thorngate, 2002; Evaluation Unit, 2009; IHSPR, 2011; Tamblyn, 2011). There is also a body of qualitative evidence to be found in other documents and memoranda where these issues are scoped and discussed. In this section, we present a review of the existing evidence in this area, and implications for future development of CIHR's open suite of programs. These analyses focus on Pillars III and IV.

One limitation that should be mentioned is that there is currently a lack of corporately available data to assess these issues in greater depth. Given the importance of funding across CIHR's mandate, it will be important to put in place performance measures that can provide these data moving forward. A second limitation of these analyses is that, in most cases, the data rely on a researcher's self-identification with a pillar. Manual validation of researchers and pillars is carried out by some CIHR Institutes to increase the reliability of these data (for example, the Institutes of Health Services and Policy Research and of Population and Public Health conduct such validations for their areas of research); doing so at a corporate level would, however, be highly resource-intensiveFootnote 21.

Challenges

Figure 4-3 summarizes several challenges for researchers in Pillars III and IV that have been identified in analyses and through interviews with representatives of these communitiesFootnote 22. We assess each of these in more detail below. This range of challenges is by no means exhaustive; there are other potential challenges that lack sufficient evidence to support their inclusion in this report but which should be explored further in future analyses.

Figure 4-3: Barriers and challenges to Pillar III/IV applicants in the OOGPFootnote 23

Figure 4-3 long description

1. Higher proportion of applications rated as unfundable

The evidence shows that Pillar III/IV peer reviewers exhibit different reviewing behaviours from those in other communities. This is reflected both in the lower average scores given to applications by Pillar III/IV reviewers and in how these peer reviewers discuss and rate proposals.

As shown in Figure 4-4 below, analyses by the CIHR Institute of Health Services and Policy Research (IHSPR) show that the likelihood of a Pillar III or IV application being rated as 'non-fundable' in the OOGP (a score of below 3.5 out of 5) is far higher than for a Pillar I application (IHSPR, 2011). Similarly, an analysis by IHSPR of the percentage of applications rated as non-fundable in OOGP peer review committeesFootnote 24 found non-fundable rates above 70% in the three Health Services, Evaluation and Interventions Research committees and above 60% in the two Public, Community and Population Health committees. This can be compared with an average rate of 37.2% across all committees and non-fundable rates of around 10% for several OOGP biomedical committees.

Figure 4-4: Likelihood of an application being deemed non-fundable (odds ratios)1

Figure 4-4 long description

  1. Odds ratios computed with Pillar I success as baseline.

Source: Institute of Health Services and Policy Research Report: Peer Review Reforms – Options to Generate Evidence (2011)
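For readers unfamiliar with the measure used in Figure 4-4, the sketch below shows how an odds ratio relative to Pillar I could be computed. The non-fundable rates are hypothetical placeholders (the report cites committee-level rates ranging from around 10% to above 70%); the IHSPR analysis itself is not reproduced here.

```python
# A minimal sketch of an odds-ratio calculation with Pillar I as the baseline.
# All rates below are hypothetical.

def odds(p):
    """Convert a probability of being rated non-fundable into odds."""
    return p / (1 - p)

baseline_rate = 0.10                                     # hypothetical Pillar I rate
pillar_rates = {"Pillar III": 0.70, "Pillar IV": 0.60}   # hypothetical rates

for pillar, rate in pillar_rates.items():
    odds_ratio = odds(rate) / odds(baseline_rate)
    print(f"{pillar}: odds ratio vs Pillar I = {odds_ratio:.1f}")
```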

Two studies conducted for CIHR by an external consultant provide further evidence of differences in the reviewing behaviours of different communities. In the first of these (Thorngate, 2002), an analysis of the words and phrases used in discussing applications showed that health services and population health peer review committees typically placed greater emphasis on research design, methods and statistics, literature review and budget when discussing applications. By contrast, committees discussing 'medical research' applications (e.g. in biochemistry, cancer or neuroscience) emphasized the applicant's track record, the logical derivation of the research ideas, appropriate laboratory techniques and provisions for graduate students.

Linked to these differences, it was also found that average application scores are related to the level of scoring disagreements between committee members; the greater the disagreement between reviewers, the lower the ratings (Thorngate, 2002). There was found to be more disagreement in adjudicating health services and population health proposals than those in 'medical research.'

"You had one committee ranking many of the grants up into the four range while another committee only ranked their very very very best grants above the four range."
Scientific Officer (cited from Thorngate, 2002, p.6)

Since September 2005, funding decisions on the OOGP have been made using a '100/0 formula', which allocates 100% of assigned funds according to the rank order of applications within each committee. This approach means that differences in scoring behaviour between committees do not directly translate into which applications are funded by the OOGP. Indeed, data on the proportion of OOGP applications that are funded from within each pillar are broadly consistent over the period 2001-2011 (IHSPR, 2011) and do not show Pillar III/IV researchers as having lower funding success rates overall, despite their lower average application scores.
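The rank-based logic of the '100/0 formula' described above can be sketched as follows. Committee budgets, scores and application names are hypothetical; only the within-committee ranking principle is illustrated.

```python
# A minimal sketch of within-committee rank-order allocation: funds assigned to
# a committee support its top-ranked applications regardless of how its absolute
# scores compare with other committees'. All inputs below are hypothetical.

committees = {
    # committee: (number of grants its budget can support, {application: score})
    "Biomedical A": (2, {"app1": 4.6, "app2": 4.4, "app3": 4.3}),
    "Health Services": (2, {"app4": 3.9, "app5": 3.7, "app6": 3.6}),
}

for name, (n_fundable, scores) in committees.items():
    ranked = sorted(scores, key=scores.get, reverse=True)
    funded = ranked[:n_fundable]
    print(f"{name}: funded {funded}")

# Note that "app5" (score 3.7) is funded while "app3" (score 4.3) is not,
# illustrating how ranking within committees insulates funding decisions from
# differences in scoring behaviour between committees.
```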

However, while differences in scoring behaviour may not directly affect funding decisions in the OOGP, there are unintended outcomes of lower average scoring that were identified in interviews with representatives of these communities. One of these outcomes relates to:

As CIHR considers new systems for peer reviewing applications in its open programs, including the OOGP, it will be critical to consider the implications of these differences in how communities score and discuss applications.

2. Renewal behaviours

Evidence suggests that the research cultures of Pillar III/IV applicants to the OOGP result in different behaviours in relation to grant renewals. As has been described earlier in this report, researchers in these communities are more likely to have shorter (three- vs. five-year) grants than biomedical researchers, reflecting a project-based rather than a program-based approach to applications. Data on renewals of grant applications show lower-than-average application pressure for these researchers; only 3% of all renewal applications are from Pillar IV, compared with this group's 9% share of all OOGP applications. There is also a decreased likelihood of success; 32% of Pillar III/IV researchers are successful in a renewal application compared with 47% of biomedical researchers (IPPH, 2012).

In the absence of a more wide-ranging systematic qualitative or quantitative study with researchers from these communities in this evaluation, we rely on evidence from community representatives (Scientific Directors) to understand these behaviours. This evidence does, however, allow us to access the views of the key stakeholders with whom Scientific Directors regularly interact, including their Institute Advisory Boards, leading researchers in these communities and other research funders. Evidence from representatives suggests some reasons that may account for lower levels of renewals:

There is a range of possible implications of variable renewal rates among communities. A first is that there may be greater 'gaps' in the OOGP funding of researchers from these communities. As a project-based program, the intent of the OOGP is to fund the 'best ideas' rather than to provide continuous funding to individual researchers. However, as has been observed, the OOGP is already operating as a de facto programmatic funding tool for some researchers who receive ongoing funding over many years. Data from IHSPR show lower rates of 'sustainability', or ongoing funding, among Pillar III/IV researchers compared with researchers from other pillars (IHSPR, 2011a).

Lower rates of renewal could also result in greater levels of effort being expended by Pillar III/IV researchers to obtain funding (project by project) and a displacement of activity from research to grant applications. As is noted in the cost-effectiveness study in this report, an OOGP applicant spends about 169 hours (or around 22 days based on a 7.5-hour working day) on average per application.

A final consideration is how these different renewal behaviours will affect the redesign of the open programs. Evidence suggests that researchers from Pillars III/IV are not typically applying for 'programs of research' in the OOGP. However, past behaviour is not necessarily an indication of future behaviour and this may change. In discussions with IHSPR and IPPH Institute Advisory Board members, the importance of preparing Pillar III/IV communities for programmatic applications has been emphasized.

Furthermore, evidence also suggests that researchers from these pillars do take a programmatic approach in some strategic competitions. For example, there is strong application pressure to the Institute of Population and Public Health's programmatic competition and to the Institute's Applied Public Health Chairs. Further analysis by the agency on the potential impacts and opportunities for these researchers of creating an open program scheme would in any event be beneficial to understanding these issues.

3. Diverse projects cutting across disciplines and methodologies

The interdisciplinary nature of some Pillar III/IV research has been identified by members of these communities as a particular challenge in peer review. This issue was, for example, raised in a 2005 memorandum (IPPH & IHSPR, 2005) by previous scientific directors of the Institute of Population and Public Health and of the Institute of Health Services and Policy Research. A joint meeting of CIHR Institute Advisory Boards, Governing Council and CIHR staff had also discussed this issue earlier, in 2004. It has also been a topic of considerable discussion in international symposia and funding forums led or co-led by IPPH in 2009, 2010, and 2012 (Bayne, 2009; Moffatt, 2009; IPPH, 2010).

One implication of a greater number of interdisciplinary projects in these pillars is that it can be more difficult to find reviewers qualified to assess these types of applications. A second issue is that, under the current OOGP system of 'standing committees,' some fields of research are not explicitly mentioned in the mandate of any committee. Until recently (July 2011), population health intervention research was one such example. Anecdotally, this can also deter prospective applicants: researchers considering applying to the OOGP who do not see their area of research identified in any committee's mandate may assume they will not be successful. Similarly, applicants may look at the previous composition of committee membership and be deterred from applying if they do not expect appropriate expertise to be available to review their field of research. Given a current OOGP success rate of around 20%, such deterrence seems plausible.

Scientific Directors of Pillar III/IV institutes have also raised the issue that it can be difficult to find reviewers for small research communities. In a smaller research community with larger research teams, peer reviewers knowledgeable about a particular field may have to declare conflicts of interest and leave the committee when those applications are discussed. If proposals for a peer review system that matches reviewers to applications in open competitions are implemented to replace the current standing-committee approach in the OOGP, there could be an issue with 'burnout' among reviewers in small communities who are over-burdened with requests to review. Actively building links internationally could provide some mitigation for this issue, and this approach has already been used in certain strategic competitions.

At a broader level of analysis, it is interesting to note that, based on a recent survey of 877 peer reviewers who participated in the November 2011 OOGP competition, peer reviewers from Pillars III and IV spend considerably fewer hours reviewing applications than those in Pillar I. When time to review, travel and attend meetings is included, biomedical researchers spend around 86.3 hours on average per competition, compared with approximately 52-53 hours for Pillar III and Pillar IV researchers. This difference largely relates to the reported time taken to review applications; Pillar I reviewers spend around 6.4 hours reviewing each application compared with 4.5 hours for Pillar II, 3.5 hours for Pillar III and 3.7 hours for Pillar IV (Table 4-4). All differences between pillars were significant (p<0.001), except for "Average time spent travelling to Ottawa for committee discussions."

Table 4-4: Time spent by peer reviewers participating in November 2011 review process (in hours)
Average per element of process Total (N=484)* Biomedical (N=326) Clinical (N=71) Health systems/services (N=30) Social, cultural, environmental and population health (N=57)
Number of applications reviewed per competition 8.8 8.8 6.4 8.2 11.5
Total time reported spent reviewing applications per competition (hours) 44.0 52.5 26.0 27.2 26.6
Time reported spent reviewing an individual application (hours) 5.7 6.4 4.5 3.5 3.7
Time spent in committee discussions per competition (hours) 23.4 26.5 16.9 17.8 16.7
Time spent travelling to Ottawa for committee discussions per competition (hours) 7.4 7.3 6.2 8.3 8.9
Total time spent on review process per competition 74.8 86.3 49.0 53.2 52.1
Annual number of hours reviewers are willing to dedicate to review process at CIHR 75.6 86.4 48.3 55.1 58.6

Source: Peer Reviewer Workload Survey (N=485)

Footnote *

Note that one respondent was removed from the sample due to unreliable responses. Pillars are as specified by respondents in the survey.


As can be seen in Table 4-4, the total time that reviewers spent on the review process per competition is very similar to the time they are willing to spend annually reviewing applications. If a peer reviewer is called upon for two competitions per year, they would be spending approximately double the amount of time they are willing to commit.
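The workload comparison can be made explicit with the survey averages from Table 4-4, as in the sketch below; the only assumption added is the scenario of serving in both annual competitions.

```python
# A minimal sketch comparing reviewer time for two competitions per year against
# the annual time reviewers say they are willing to commit. Figures are the
# survey averages reported in Table 4-4.

per_competition = {"Biomedical": 86.3, "Clinical": 49.0,
                   "Health systems/services": 53.2,
                   "Social, cultural, environmental and population health": 52.1}
willing_annually = {"Biomedical": 86.4, "Clinical": 48.3,
                    "Health systems/services": 55.1,
                    "Social, cultural, environmental and population health": 58.6}

for pillar, hours in per_competition.items():
    two_competitions = 2 * hours
    ratio = two_competitions / willing_annually[pillar]
    print(f"{pillar}: {two_competitions:.0f} h for two competitions "
          f"vs {willing_annually[pillar]:.0f} h willing (~{ratio:.1f}x)")
```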

This evidence should be treated with some caution; it is the first time that such a survey has been fielded at CIHR, and we do not have trend data to assess the extent to which such figures fluctuate by competition depending on what applications are received. There is also a limitation in that sample sizes are insufficiently large in this survey to assess reviewer burden in sub-disciplines, particularly smaller communities.

It is recommended that CIHR continue to collect this information and also to conduct analyses to understand why time taken to review each application may vary significantly between communities. This will provide a baseline against which current review burden for researchers in the OOGP can be compared with future peer review approaches.

Program Relevance

Continued Relevance of the Program

In keeping with the requirements of the Government of Canada's Policy on Evaluation (2009), this evaluation assesses several questions of relevance: the continued need for the program and its alignment with government priorities and with federal roles and responsibilities.

CIHR's mandate, as spelt out in the Act establishing the agency, is "to excel, according to internationally accepted standards of scientific excellence, in the creation of new knowledge and its translation into improved health for Canadians, more effective health services and products and a strengthened Canadian health care system" (Bill C-13, April 13, 2000). The OOGP directly contributes to the achievement of this overarching mandate by facilitating the creation, dissemination and use of health-related knowledge, as well as the development and maintenance of Canadian health research capacity by supporting original, high quality projects proposed and conducted by individual researchers or groups of researchers in all areas of health research.

The goals of the OOGP, and of CIHR as a whole, continue to support and contribute to the priorities set out by the federal government in its 2007 Science and Technology Strategy (Industry Canada, 2007 & 2009). This strategy identifies three distinct priority areas:

  1. Knowledge Advantage.
  2. Entrepreneurial Advantage.
  3. People Advantage.

The OOGP aligns to these government priorities in several ways:

Comments made in an internet petition submitted by researchers, referred to earlier in this report, suggest that CIHR's primary stakeholders believe the OOGP is vital for maintaining a world-class research enterprise in Canada.

The evidence collected by this evaluation suggests that the OOGP continues to support the government's priorities. The 2011 federal budget outlined the importance of continued investment in innovation, education and training, and the role Canada's three primary funding agencies play in supporting "leading edge research" and "health research of national importance" (Government of Canada, 2011). The 2012 federal budget reaffirms the government's commitment to supporting advanced research (Government of Canada, 2012).

Evidence from the evaluation speaks to the continued need for the OOGP and to the program's alignment with federal government and CIHR priorities and with federal roles and responsibilities. There is evidence that the program contributes directly to the fulfillment of CIHR's mandate (Bill C-13, April 13, 2000) and aligns in several ways with the government's priorities as spelt out in the 2007 Science and Technology Strategy (Industry Canada, 2007 & 2009). Primary stakeholders are also of the opinion that the OOGP is vital for maintaining a world-class research enterprise in Canada. Additionally, the most recent federal budgets continue to affirm the government's commitment to supporting advanced research and "health research of national importance", and the role of Canada's three primary funding agencies in implementing this (Government of Canada, 2011 & 2012).

Program Background

Background

The CIHR program suite includes both open and strategic programs. The former relate to investigator-initiated research while the latter are designed around specific, pre-determined research topics strategically targeted by CIHR.

The OOGP provides operating funds to support the 'best ideas' across all four pillars of Canadian health research.Footnote 26 The program is very flexible and has no specific requirements or restrictions in relation to research activities to be undertaken, the amount of funds being requested,Footnote 27 and research team size or composition.

The OOGP has been in existence (in some form) for over fifty years, beginning as a funding program of the Medical Research Council of Canada (MRC) and continuing after the MRC was replaced by CIHR in 2000. With the establishment of CIHR, the OOGP was expanded to include research proposals falling within the new organization's broader four-pillar mandate.

The OOGP is by far the largest of the open calls for proposals within CIHR's program suite, accounting for between 43% and 54% of annual grants and awards expenditures between 2000/01 and 2010/11 (Table 5-1).Footnote 28 Since 2000, OOGP has committed a total of approximately $3.6 billion (ongoing to fiscal year 2015-16) towards supporting investigator-driven projects in all four pillars.Footnote 29

Table 5-1: Proportion of Core OOGP Expenditures as a Percentage of Total CIHR Expenditure*
2000/01 2001/02 2002/03 2003/04 2004/05 2005/06 2006/07 2007/08 2008/09 2009/10 2010/11
MOP Funding** $201.2 $245.3 $277.9 $297.3 $316.9 $345.9 $361.4 $374 $404.6 $402.8 $419.1
Total CIHR Funding ** $369.8 $494.5 $586.8 $646.9 $704.7 $758.1 $799.6 $926.7 $916.9 $929.1 $966.8
MOP % of Total CIHR Funding 54% 50% 47% 46% 45% 46% 45% 40% 44% 43% 43%

Source: CIHR Electronic Information System data.

Footnote *

Figures for Total CIHR Funding include flow-through fund programs (CRC, NCE, CECR, CERC). Core OOGP figures do not include priority announcements or bridge funding opportunities. Funding by fiscal year includes any active OOGP applications that were paid during that fiscal year.


Footnote **

Amounts are in millions of dollars


OOGP competitions are held twice annually, in March and September, and at least 400 new grants are awarded per competition. As depicted in Figure 5-1, the number of applications received and the number of fundable applications have both grown since 2000/01. This has resulted in a decline in the success rate, from 34% in 2000/01 to 22% in 2009/10. Grants have typically averaged between $79,000 and $134,000 per annum and most are held for either three or five years.Footnote 30

Figure 5-1: Open Operating Grant Program: Number of applications and success rates, 2000-2010

Figure 5-1 long description

Source: CIHR Corporate Statistics 2009-10.

Program Objectives

The overall objectives of the OOGP can be summarized as:

An OOGP logic model developed and approved in 2008 showing the linkages between program activities, outputs and outcomes is presented in Appendix A of this report.

Evaluation Methodology

OOGP Evaluation Framework

Each evaluation question is listed below with its associated indicators, methods and data sources.

Knowledge creation

1. Have publications by OOGP-funded researchers had a greater scientific impact than those of health researchers in Canada and other OECD countries?
Indicators: Average of relative citations (ARC) of supported publications; average relative impact factors (ARIF) of supported publicationsFootnote 32
Methods: Bibliometric analysis; literature review
Data sources: OST bibliometric data; EIS data; academic and professional literature

2. Has the scientific impact of OOGP-funded publications increased, decreased or remained the same since 2005?
Indicators: Average relative impact factors (ARIF) of supported publications; average of relative citations (ARC) of supported publications
Methods: Bibliometric analysis; literature review
Data sources: OST bibliometric data; EIS data; academic and professional literature

3. Has the production of OOGP research outputs per grant increased, decreased or remained the same since 2005?
Indicators: # journal articles; # books/book chapters; # reports/technical reports
Methods: Analysis of existing data; literature review
Data sources: RRS data; EIS data; academic and professional literature

Program design and delivery

4. Is the OOGP peer review process able to identify and select future scientific excellence?
Indicators: Peer-review rankings vs. scientific impact (ARC/ARIF)
Methods: Bibliometric analysis; analysis of existing data
Data sources: OST bibliometric data; peer review score data

5. How satisfied are OOGP applicants with the delivery of the application, peer review and post-award processes?
Indicators: % applicants satisfied with CIHR key delivery measures
Methods: Analysis of existing data
Data sources: International Review survey data (Ipsos Reid survey); RRS data; researchers' petition data

6. Is the OOGP being delivered in a cost-efficient manner?
Indicators: Per unit $ cost of processing OOGP applications and delivering grants
Methods: Analysis of existing data
Data sources: EIS data; Finance data; data from KCP on staff FTE; data from Program Planning and Process Branch on applicant timeFootnote 33; data from design team

7. Is the current project-based OOGP funding model an appropriate design for CIHR and the federal government to support health research? What alternative program designs could be considered?
Indicators: OOGP alignment with CIHR and government priorities and mandate; environmental scan of open research funding nationally and internationally; assessment of alternative program designs
Methods: Document review; key informant interviews; analysis of existing data
Data sources: CIHR documents, previous CIHR evaluative studies, OOGP strategic review report, Government of Canada documentation; senior managementFootnote 34; data from design team; peer review ranking data; RRS data

Knowledge Translation

8. What commercializable outputs have been produced by OOGP-funded researchers?
Indicators: # and type of commercializable outputs produced; # and type of OOGP researchers who 'migrate' to commercialization grants; 'success stories' of highly impactful outputs
Methods: Analysis of existing data; case studies
Data sources: RRS data; EIS data; researchers; knowledge users

9. What influence has OOGP-funded research had on wider stakeholder groups including those in the health care system, government and industry?
Indicators: % funded researchers who have influenced wider stakeholders with their research; in-depth analysis of the reported influence on stakeholders
Methods: Analysis of existing data; case studies
Data sources: RRS data; researchers; knowledge users

10. Is there a relationship between stakeholder engagement in the research process and outcomes of the research?
Indicators: Stakeholders engaged at different stages in the research process; research outcomes (knowledge creation; KT; capacity development)
Methods: Analysis of existing data
Data sources: RRS data

Capacity development

11. Has the average number of research staff and trainees attracted and trained by OOGP grants since 2000 increased, decreased or remained the same?
Indicators: # and type of research staff and trainees involved in OOGP research
Methods: Analysis of existing data
Data sources: RRS data; EIS data

12. Is the OOGP funding researchers from across all areas of health research? Do researchers in Pillars II, III and IV face structural barriers in obtaining OOGP funding?
Indicators: # researchers funded from each research pillar; success rates by pillar; # researchers funded by demographic groups (e.g. experience, prior grants, etc.); identification of barriers facing researchers in Pillars II, III and IV
Methods: Analysis of existing data; key informant interviews
Data sources: EIS data; RRS data; CIHR corporate deck; researchers' petition data; researchers (funded and unfunded), peer review committee officials and members from Pillars II, III and IVFootnote 35; OOGP and CIHR senior management

Evaluation Methodology

Consistent with TBS policy and recognized best practices in evaluationFootnote 36, a range of methods - involving both quantitative and qualitative evidence - was used to triangulate evaluation findings. This was done to ensure that the findings were robust and credible and that the conclusions drawn about program performance were valid.

Data Collection and Analysis Activities

Literature Review

The main focus of this review was investigator-initiated research and "scientific performance": knowledge creation, bibliometrics, peer review, the science of the management of science, cost-effectiveness studies, and other relevant topics. The review was ongoing throughout the evaluation.

Document Review

This covered an environmental scan of relevant CIHR and Government of Canada documents. It included previous evaluations conducted by CIHR, its Institutes and other funding agencies in Canada and internationally; the 2009/10 CIHR corporate statistics deck; and the 10th Year International Review of CIHR report and related documents.

Bibliometric Analysis

Two bibliometric studies were conducted for this evaluation by the Observatoire des sciences et des technologies (OST) of the Université du Québec à Montréal. The first covered articles published between 2000 and 2010 by a sample of successful and unsuccessful applicants to the OOGP (N=1500); the second covered articles published between 2006 and 2010 by successful and unsuccessful applicants to the OOGP (N=1500). The studies provided data on the scientific productivity and impact of funded and unfunded OOGP applicants compared with other health researchers in Canada and OECD countries. The key indicators of interest were the average of relative citations (ARC) and the average relative impact factor (ARIF) of journals; only ARC results are presented in this report, on the basis that this index is more methodologically appropriate.
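To illustrate how an ARC-type indicator is typically constructed, the minimal sketch below normalizes each publication's citation count by the world average for its field and publication year, then averages the results. The baselines, variable names and example figures are illustrative assumptions only; they are not the OST's documented procedure.

```python
# Minimal sketch of an average-of-relative-citations (ARC) style calculation.
# Assumption: world-average citation baselines per (field, year) are available;
# the OST's actual normalization and citation windows may differ.

def average_relative_citations(publications, world_averages):
    """publications: list of dicts with 'field', 'year' and 'citations'.
    world_averages: dict mapping (field, year) -> mean citations worldwide."""
    relative = []
    for pub in publications:
        baseline = world_averages.get((pub["field"], pub["year"]))
        if baseline:  # skip publications with no field/year baseline
            relative.append(pub["citations"] / baseline)
    return sum(relative) / len(relative) if relative else None

# Hypothetical example: an ARC above 1.0 indicates impact above the world average.
pubs = [
    {"field": "clinical medicine", "year": 2007, "citations": 12},
    {"field": "public health", "year": 2008, "citations": 3},
]
world = {("clinical medicine", 2007): 8.0, ("public health", 2008): 4.0}
print(average_relative_citations(pubs, world))  # (1.5 + 0.75) / 2 = 1.125
```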

Key Informant Interviews

Semi-structured in-depth qualitative interviews were conducted with representatives of OOGP and CIHR senior management including the scientific directors of the two institutes affiliated mainly with Pillars III and IV. Topics of interest included barriers faced by OOGP applicants whose areas of research fell within Pillars II, III and IV.

Case Studies

Five OOGP-funded projects were purposively sampled as case studies using quantitative data from the research reporting system; these were selected to represent CIHR's four pillars (as self-identified by the researchers) and to reflect regional balance. A short-list of suggested cases was validated and prioritized by the OOGP evaluation working group and the Vice-President, Research at CIHR. The case studies provided detailed accounts of highly impactful research outcomes ('success stories'). A total of N=25 qualitative key informant interviews were conducted with principal investigators, knowledge translators, partners and knowledge users.

Peer Reviewer Workload Survey

A survey was conducted of the 877 peer reviewers who participated in the November 2011 OOGP peer review committee sessions to obtain data on the time spent on different aspects of the peer review process. The online survey was fielded between January 13 and January 24, 2012. A total of N=485 peer reviewers responded; however, salary data could not be collected for 28 of these respondents, who were therefore excluded from further analyses, leaving N=457. The unadjusted response rate for the survey was 55.3%.
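The headline figures above follow from simple arithmetic; the sketch below (with hypothetical variable names) reproduces the unadjusted response rate and the analysis sample after exclusions.

```python
# Unadjusted response rate and analysis sample for the peer reviewer workload survey.
# Variable names are hypothetical; the figures are taken from the text above.
invited = 877     # reviewers at the November 2011 committee sessions
responded = 485   # reviewers who completed the online survey
excluded = 28     # respondents without usable salary data

response_rate = responded / invited   # 0.5530... -> 55.3% unadjusted
analysis_n = responded - excluded     # 457 reviewers retained for analysis
print(f"{response_rate:.1%}, n={analysis_n}")  # 55.3%, n=457
```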

Open Reform Stakeholder Survey

An open online survey was launched to elicit feedback from stakeholders on CIHR's reform of its suite of open programs, and the Evaluation Unit leveraged the opportunity to include questions on the time researchers spend, and the costs they incur, in applying for an OOGP grant. The survey took place between February 13 and March 28, 2012, by which time N=386 researchers had responded to the relevant question on the number of hours it took them to complete an OOGP application. The data were cleaned to remove eight cases. As there is no defined sample universe for this open survey, a response rate cannot be calculated.

Analysis of Existing Data

Electronic Information System (EIS) Data:

The EIS is designed to collect and store data on all applicants to CIHR programs. Analyses conducted using these data drew on OOGP program records and on administrative and financial data such as the amount and duration of grants, competition dates, and previous grants held. Peer review scoring data were also accessed and used in relation to the bibliometric analyses.

OOGP Research Reporting System Data:

Data were drawn from reports submitted by OOGP researchers using CIHR's end-of-grant research reporting system (RRS): a 2009 pilot study of the RRS that targeted grantees whose authorization to use funds expired between January 2000 and June 2008 (N=596), and data from the full launch of the RRS in 2011 (N=141 responses were included, all submitted by February 2, 2012). Before combining the two RRS data sources, responses were validated, including a check for differences in demographic profile between respondents to the pilot and to the full survey; no significant issues were identified.

CIHR 10th Year International Review Survey Data:

Data were analyzed from an online survey conducted by Ipsos Reid between November 5 and December 5, 2010, as part of CIHR's 10th Year International Review, to examine satisfaction with the program delivery process among researchers and other stakeholders. The evaluation team analyzed a sub-set of these data covering only researchers who had applied at least once for an OOGP grant; a total of N=2,141 such researchers and N=232 research administrators were included in this analysis.

Content analysis of Internet Petition:

In connection with the CIHR 10th Year International Review, researchers set up a website to express their views on the funding of health research in Canada. A total of 1,900 researchers "signed" the petition, with 516 of them providing additional comments. These data, available online, were downloaded on October 25, 2011. A content analysis was conducted on researchers' comments to identify the most frequently occurring themes.
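As a rough illustration of this kind of frequency-based content analysis, the sketch below counts how many comments mention each theme. The theme dictionary and keywords are hypothetical; the evaluation team's actual coding frame is not reproduced here.

```python
# Sketch of a keyword-based content analysis of free-text petition comments.
# The theme dictionary below is hypothetical, not the evaluation's coding frame.
import re
from collections import Counter

themes = {
    "success rates": ["success rate", "funding rate"],
    "peer review": ["peer review", "reviewer"],
    "grant size": ["grant size", "budget", "amount of funding"],
}

def count_themes(comments):
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in themes.items():
            if any(re.search(re.escape(k), text) for k in keywords):
                counts[theme] += 1  # count each theme at most once per comment
    return counts

comments = [
    "The success rate has dropped and peer review feels like a lottery.",
    "Reviewer workload is unsustainable.",
]
print(count_themes(comments))  # Counter({'peer review': 2, 'success rates': 1})
```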

Limitations

In keeping with best practices in program evaluation, the limitations of this study are noted below, together with the strategies that were employed to mitigate them.

Bibliometric analysis

Bibliometric analysis has been criticized on the grounds that estimates of publication quality based on citations can be misleading and that citation practices differ across disciplines, and sometimes between sub-fields of the same discipline (Ismail et al., 2009). This is a particularly salient issue for CIHR and the OOGP, given the mandate to fund across all areas of health research, including disciplines where outputs such as books or book chapters may be a more useful and accurate measure of knowledge creation. To mitigate this, measures of other outputs are also used in this evaluation to assess knowledge creation resulting from the program, and a case study approach is taken to assess highly impactful research conducted as a result of OOGP funding.

The bibliometric analyses are based on data for publications produced by OOGP researchers while supported by these grants. While this method is commonly accepted on the assumption that the grants make a significant contribution to research output (e.g. Campbell et al., 2010), a direct attribution of individual publications to specific grants cannot be made. With further development of CIHR's Research Reporting System, in which researchers list publications produced as a result of the grant that can then be linked directly to bibliometric data, this type of analysis should become available for future evaluations.

The overall average of relative citations for Canada comprises all Canadian health researchers, including those funded by the OOGP. The OECD comparators are based on all health researchers within each country, rather than on individual funding agencies or programs. Given the differing mandates for health research funding in agencies such as the National Institutes of Health in the United States or the Medical Research Councils of the United Kingdom or Australia, direct comparisons between agencies could prove problematic. However, one potential area for future evaluations to address would be the feasibility of deriving agency or even program benchmarks by matching a sub-set of data that is directly comparable (e.g. in biomedical research).

Finally, due to budget limitations, a stratified random sample of OOGP researchers (n=1500) was selected for analysis rather than including all researchers. The sample size was adequate for this analysis, and there is no reason to expect that results for the full population of researchers would differ materially from those for the selected sample.
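For illustration only, a stratified sample of this kind could be drawn as sketched below. The stratification variable ('pillar'), the file name and the proportional allocation are assumptions for the sketch; the strata actually used for the bibliometric sample are not detailed here.

```python
# Sketch of drawing a stratified random sample of researchers (n = 1500).
# Stratifying by pillar and proportional allocation are illustrative assumptions.
import pandas as pd

def stratified_sample(frame: pd.DataFrame, stratum: str, n_total: int, seed: int = 42) -> pd.DataFrame:
    """Sample within each stratum in proportion to its share of the frame."""
    shares = frame[stratum].value_counts(normalize=True)
    parts = []
    for level, share in shares.items():
        rows = frame[frame[stratum] == level]
        n_level = min(round(share * n_total), len(rows))
        parts.append(rows.sample(n=n_level, random_state=seed))
    return pd.concat(parts)

# Hypothetical usage with a sampling frame of OOGP applicants:
# frame = pd.read_csv("oogp_applicants.csv")      # includes a 'pillar' column
# sample = stratified_sample(frame, "pillar", 1500)
```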

Research Reporting System Data

There are several limitations to the RRS data. The foremost is the use of a survey methodology that relies largely on self-reported data and memory recall from OOGP grantees. Data collection in the pilot study was halted before the fourth wave of invitations was sent out. Similarly, researchers responding to the current version of the RRS have until October 2012 to complete their reports, meaning that a full set of responses is not yet available. Among the completed reports, data quality checks are still ongoing and only the responses related to knowledge creation were available for inclusion in the analyses. Also, in estimating the numbers trained or supported by the OOGP, there could be double counting, since trainees may be involved in multiple projects with different nominated principal investigators.

To mitigate the possibility that these samples are not representative of the overall population of OOGP researchers, the demographic variables of the two RRS data sets were compared with those of the OOGP population. This comparison suggested that the two incomplete samples were broadly representative of the overall universe of researchers: for the variables compared (pillar, language and region), differences between the samples and the population were around 5%.
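A comparison of this kind can be implemented simply; the sketch below (with hypothetical data frame and column names) computes the absolute percentage-point gap between a sample's distribution and the population's distribution on a categorical variable such as pillar, language or region.

```python
# Sketch: compare a sample's distribution on a categorical variable against the
# population distribution and report the absolute gap in percentage points.
# Data frame and column names ('pillar', etc.) are hypothetical.
import pandas as pd

def distribution_gap(sample: pd.Series, population: pd.Series) -> pd.Series:
    s = sample.value_counts(normalize=True)
    p = population.value_counts(normalize=True)
    cats = s.index.union(p.index)
    gap = (s.reindex(cats, fill_value=0) - p.reindex(cats, fill_value=0)).abs() * 100
    return gap.sort_values(ascending=False)

# Hypothetical usage:
# gaps = distribution_gap(rrs_respondents["pillar"], oogp_population["pillar"])
# print(gaps)  # gaps of roughly 5 points or less would mirror the report's finding
```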

10th IRP Satisfaction Survey

Researchers were asked to respond with reference to the CIHR programs they had applied to in the previous five years; 87% said they had applied for an OOGP grant during that period. Many had also applied to other programs, so although only respondents who had applied to the OOGP at least once in the previous five years were selected, their responses may not relate uniquely to the OOGP.

Survey Data on Peer Reviewer and Applicant Time

The survey data on peer reviewer time should be treated with some caution: this is the first time such a survey has been fielded at CIHR, and there is no trend data against which to assess how much these figures fluctuate from competition to competition depending on the applications received. A further limitation is that the sample sizes in this survey are too small to assess reviewer burden within sub-disciplines, particularly smaller communities. Similar caveats apply to the data on researcher time spent applying for an OOGP grant.

Content Analysis of Internet Petition

Contributors accessed the website and made their comments anonymously. There is no way to validate these self-selected data, and they cannot be considered representative of the wider research community.

Case Studies

The sampling was purposive, with only exemplary cases selected, and only a small number of cases could be included due to budget and timing constraints. As with all qualitative data, these findings are not generalizable to a wider population and are used for illustrative purposes only.

Key Informant Interviews

Interviews with other stakeholders (applicants, researchers, peer reviewers, and other members of CIHR senior management) were cancelled to minimize respondent burden, since the CIHR team in charge of the open program reforms was consulting stakeholders at the same time.

Selection of Pillars

All analyses with reference to pillars rely on researchers' self-identification with a pillar. Manual validation of researchers and pillars is sometimes carried out by CIHR Institutes to increase the reliability of these data (for example, the Institute of Population and Public Health conducts such validations for its areas of research); however, doing so at a corporate level would be highly resource intensive.

Appendix

Crosswalk of TBS Core Evaluation Issues by Relevant Sections of OOGP Evaluation Report

TBS core issues are listed below with the corresponding sections and pages of this report.

Relevance

Issues 1-3 (continued need for program; alignment with government priorities; alignment with federal roles and responsibilities): Program Relevance, pp. 61-62

Performance (effectiveness, efficiency and economy)

Issue 4 (achievement of expected outcomes): Knowledge Creation, pp. 11-17; Program Design & Delivery, pp. 18-37; Knowledge Translation, pp. 38-48; Capacity Development, pp. 49-60

Issue 5 (demonstration of efficiency and economy): Program Design & Delivery, pp. 18-37

References

Adily, A., Black, D., Graham, I. D., & Ward, J. E. (2009). Research engagement and outcomes in public health and health services research in Australia. Australian and New Zealand Journal of Public Health, 33(3), 258-261.

Alberta Ingenuity Fund (2008). The Ingenuity Way: Bibliometric analysis of individuals supported by the Alberta Ingenuity Fund. Report prepared by Picard-Aitken, M., Campbell, D., Labrosse, I., & Lecomte, N. and submitted to Alberta Ingenuity Fund on May 22, 2008. Accessed March 9, 2012.

Archambault, E. (2010). 30 Years in Science: Secular Movements in Knowledge Creation. Accessed February 29, 2012.

Archambault, E. (2010). Web-based Supplementary Material for 30 Years in Science: Secular Movements in Knowledge Creation. Accessed February 29, 2012.

Archambault, E., Campbell, D., Gingras, Y., & Larivière, V. (2009). Comparing bibliometric statistics obtained from the Web of Science and Scopus. Journal of the American Society for Information Science and Technology, 60, 1320-1326.

Azoulay, P., Graff-Zivin, J. S., & Manso, G. Incentives and Creativity: Evidence from the Academic Life Sciences (NBER Working Paper No. 15466). Accessed February 19, 2012.

Bayne, L. (2009). Population Health Intervention Research Invitational Funders Forum Report of Proceedings. Report prepared for the Institute of Population and Public Health.

Campbell, D., Picard-Aitken, M., Côté, G., et al. (2010). Bibliometrics as a Performance Measurement Tool for Research Evaluation: The Case of Research Funded by the National Cancer Institute of Canada. American Journal of Evaluation, 31, 66–83.

Cicchetti, D. V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14, 119-135.

CIHR (2012a). Design Discussion Document: Proposed Changes to CIHR's Open Suite of Programs and Enhancements to the Peer Review Process. Accessed February 13, 2012.

CIHR (2012b). OOGP Funding Opportunity 2011-2012. Accessed February 17, 2012.

CIHR (2010). Health Research Roadmap: Creating innovative research for better health and health care.

Cole, S., Cole, J. R., & Simon, G. A. (1981). Chance and Consensus in Peer Review. Science, New Series, 214(4523), 881-886.

Demicheli, V., & Di Pietrantonj, C. (2007). Peer review for improving the quality of grant applications. Cochrane Database of Systematic Reviews, 18, 2. Art. No.: MR000003. DOI: 10.1002/14651858.MR000003.pub2.

Evaluation Unit, CIHR (2009). Final report of evaluative study on change in OGP funding allocation method and effects on distribution of funded applications across peer review committees. Internal CIHR report.

Gates Foundation (2011). Grand Challenges in Global Health: Overview. Accessed February 19, 2012.

Godin, B. (2005). The Impact of Research Grants on the Productivity and Quality of Scientific Research. Accessed on December 14, 2011.

Government of Canada (2012). Jobs, Growth and Long-Term Prosperity: Economic Action Plan 2012, The Budget Speech. Accessed on March 28, 2012.

Government of Canada (2011). The Next Phase of Canada's Economic Action Plan: Low Tax Plan for Jobs and Growth. Accessed on March 28, 2012.

Graham, I. D., & Tetroe, J. (2007). How to translate health research knowledge into effective healthcare action. Healthcare Quarterly, 10(3), 20–22.

Graves, N., Barnett, A. G., & Clarke, P. (2011). Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel. British Medical Journal, 343:d4797 doi:10.1136/bmj.d4797.

Industry Canada (2009). Mobilizing Science and Technology to Canada's Advantage: Progress Report 2009. Accessed February 17, 2012.

Industry Canada (2007). Mobilizing Science and Technology to Canada's Advantage. Accessed March 27, 2012.

Institute of Health Services and Policy Research (2011). Generating evidence to guide peer review reform.

Institute of Population and Public Health (2012). Renewal applications to the OOGP.

Institute of Population and Public Health (2010). Accelerating Population Health Intervention Research to Promote Health and Health Equity: Symposium Report.

Institute of Population and Public Health, & Institute of Health Services and Policy Research (2005). Letter dated September 30, 2005; addressed to Mark Bisby, Robin Hill, and Richard Snell and co-signed by John Frank, then scientific director IPPH and Morris Barer, then scientific director, IHSPR.

Ioannidis, J. P. A. (2011). [Comment]. Fund people not projects. Nature, 477, 529-531.

Ismail, S., Nason, E., Marjanovic, S., & Grant, J. (2009). Bibliometrics as a tool for supporting prospective R&D decision-making in the health sciences: Strengths, weaknesses and options for future development. RAND Corporation, Santa Monica, CA.

Jacob, B., & Lefgren, L. (2007). The impact of research grant funding on scientific productivity (working paper). Accessed September 15, 2010.

Langfeldt, L. (2001). The decision-making constraints and processes of grant peer review, and their effects on the review outcome. Social Studies of Science, 31(6), 820-841.

Larivière, V., Archambault, E., Gingras, Y., & Vignola-Gagné, E. (2006). The place of serials in referencing practices: Comparing natural sciences and engineering with social sciences and humanities. Journal of the American Society for Information Science and Technology, 57, 997-1004.

Levin, H., & McEwan, P. J. (2001). Cost-Effectiveness Analysis: Methods and Applications. Thousand Oaks, California: Sage Publications.

Lomas, J. (2000). Using 'linkage and exchange' to move research into policy at a Canadian foundation. Health Affairs, 19(3), 236-240.

Macintyre, S. (2011). Expert review team report for Institute of Population and Public Health Accessed February 17, 2012.

Martin, B. R., & Irvine, J. (1983). Assessing basic research: Some partial indicators of scientific progress in radio astronomy. Research Policy, 12, 61–90.

Moed, H. F. (2005). Citation Analysis in Research Evaluation. Dordrecht, The Netherlands, Springer.

Moffatt, L. (2009). Perspectives on Funding Support for Population Health Intervention Research: Background and Report on Key Informant Interviews. Report prepared for Population Health Intervention Research Initiative for Canada.

National Institutes of Health (2011). NIH Research Grant Program R01. Accessed February 19, 2012.

Natural Sciences and Engineering Research Council, NSERC (2007). A Review of Canadian Publications and Impact in the Natural Sciences and Engineering, 1996 to 2005. Accessed November 18, 2010.

Nicholson, P., & Côté, D. (2011). Innovation Canada: A Call to Action – Expert Panel Report – Review of Federal Support to Research and Development. Accessed February 17, 2012.

Powell, K. (2010). Making the Cut. Nature, 467, 383-385.

Statistics Canada (2011a). Salaries and Salary Scales of Full-time Teaching Staff at Canadian Universities, 2010/2011: Preliminary Report 201 (81-595 M). Accessed February 23, 2012.

Statistics Canada (2011b). Labour force survey estimates (LFS), by usual hours worked, class of worker, National Occupational Classification for Statistics (NOC-S) and sex, unadjusted for seasonality (CANSIM Table 282-0023). Accessed February 23, 2012.

Social Sciences and Humanities Research Council (2010). Summative Evaluation of the Standard Research Grants and Research Development Initiatives Programs. Accessed on December 14, 2011.

Tamblyn, R. (2011). CIHR Institute of Health Services and Policy Research – Presentation to IPPH Institute Advisory Board.

Thorngate, W., Faregh, N., & Young, M. (2002). Mining the archives: Analyses of CIHR research grant adjudications. Report presented to CIHR, November, 2002. Accessed March 9, 2012.

UK Evaluation Forum (2006). Medical research: assessing the benefits to society. London: Academy of Medical Sciences.
