AI for Public Health Equity – Workshop Report

January 25, 2019
Toronto, ON

Background

The application of artificial intelligence (AI) and machine learning (ML) in population and public health (PPH) research is an emerging field, and research at the intersection of these disciplines could have numerous impacts on Canadian society. Health equity, defined as the principle that “all people can reach their full health potential and should not be disadvantaged from attaining it because of their race, ethnicity, gender, age, social class, language and minority status, socio-economic status, or other socially determined circumstance” (Institute of Population and Public Health (IPPH), 2015), is a central tenet of the PPH field. Yet the implications of AI and ML technologies for health equity in PPH research remain relatively unexplored. It is critical that PPH researchers actively work to understand how AI approaches used in research could contribute to, or exacerbate, inequities in society, and that they explore opportunities where AI approaches could be used to address or improve health equity.

The Canadian Institutes of Health Research’s Institute of Population and Public Health (CIHR-IPPH) and CIFAR partnered in 2018 to explore opportunities to collaboratively promote and support research using AI and ML approaches to address PPH challenges (report can be found here). One of the recommended next steps of the 2018 workshop was to convene a workshop focused on AI and health equity. In support of this step, and in alignment with CIFAR’s Pan-Canadian Artificial Intelligence Strategy and CIHR-IPPH’s Equitable AI Initiative, the two organizations jointly designed and delivered an interdisciplinary workshop on AI for Public Health Equity in Toronto on January 25, 2019.

The workshop brought together 24 interdisciplinary researchers with expertise in areas including: public health, AI, ML, biostatistics, epidemiology, computer science, population health, clinical sciences, ethics, health services research, engineering, psychology/cognitive science, surveillance, and exposure science. The objectives of the one-day workshop were to:

  • Catalyze linkages and interactions between interdisciplinary researchers to generate cross-disciplinary collaborations and learn how such networks can be supported;
  • Create space for understanding how health equity can be influenced by the fairness, accountability and transparency of AI approaches in public health research, including identifying issues, risks and opportunities; and
  • Mobilize the development of key opportunities for action (e.g. grant proposals, reports, convening) in this innovative field

Below is a high-level overview of the key themes and recommendations from the workshop, reflecting discussions and comments shared by workshop participants. Please note: this report reflects discussions arising from the workshop and should not be taken as an official endorsement by CIHR-IPPH or CIFAR of any of the recommendations.

Key Themes and Recommendations

How do we maximize inclusion in data used for research?

Background: Recent advances in AI and ML could lead to improved personalized health by analyzing patient data in ways that account for small individual differences between patients. However, machine learning algorithms often use test datasets that do not include underrepresented populations (e.g. racialized groups, Indigenous communities, gender minorities, LGBTQ+ communities), and currently many ML researchers do not assess how their models may perform when deployed across different populations. This critical shortcoming could have serious consequences for specific populations.

The data pipeline, the series of steps through which datasets are successively processed (such as data hygiene and conditioning), can also lead to the removal of outliers from the data, contributing to data gaps and a lack of representative data. This discussion prompted the question: how do we process data in a more equitable way that maximizes inclusion and prevents the removal of outliers?
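A toy numerical illustration of this risk: a routine three-sigma outlier filter, applied to a dataset in which a small subpopulation differs from the majority, silently drops most of that subpopulation. This is a minimal sketch in Python; the simulated populations and threshold are hypothetical, not drawn from the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)
majority = rng.normal(loc=0.0, scale=1.0, size=10_000)
minority = rng.normal(loc=5.0, scale=1.0, size=100)   # small, distinct subpopulation
values = np.concatenate([majority, minority])

# A common "data hygiene" step: keep only values within 3 standard deviations.
kept = np.abs(values - values.mean()) <= 3 * values.std()
print(f"majority kept: {kept[:10_000].mean():.1%}")   # nearly all retained
print(f"minority kept: {kept[10_000:].mean():.1%}")   # most of the group removed
```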

Recommendations/Actions/Next Steps

Calibrate models with a “lawn mower of justice” (term coined by Dr. Jutta Treviranus) perspective: Begin by asking “who will this model work for?” from the very start, and take an equity-driven lens when building datasets by prioritizing vulnerable populations first when calibrating and testing models. When developing algorithms, AI and ML researchers should consider outliers and individuals from underrepresented populations by adopting a “lawn mower of justice” approach, which increases the influence of outlier and less common voices by limiting the weight placed on data from the most represented groups (i.e. setting a maximum number of data points that can be analyzed from any one group). Alongside this, researchers should understand the importance of “small, thick data” and how to leverage it when using AI approaches for public health research. Researchers should also engage ethicists, people with lived experience, and communities affected by research in developing AI tools from the earliest design stages.
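One way to make the “maximum number of data points per group” idea concrete is sketched below (assuming pandas; the column names, cap value, and toy data are hypothetical placeholders, not a method prescribed at the workshop):

```python
import pandas as pd

def cap_group_size(df: pd.DataFrame, group_col: str, cap: int,
                   seed: int = 0) -> pd.DataFrame:
    """Cap each group's contribution so heavily represented groups
    cannot drown out outlier voices during model calibration."""
    parts = []
    for _, g in df.groupby(group_col):
        parts.append(g if len(g) <= cap else g.sample(n=cap, random_state=seed))
    return pd.concat(parts, ignore_index=True)

# Toy calibration set dominated by one group.
df = pd.DataFrame({
    "group": ["majority"] * 1000 + ["underrepresented"] * 30,
    "x": range(1030),
})
print(cap_group_size(df, "group", cap=100)["group"].value_counts())
# majority capped at 100 rows; all 30 underrepresented rows retained
```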

Encourage open datasets and repositories: Institutions and funding agencies should enable and encourage the use and sharing of open datasets, repositories, and open-source resources. The research community should work to curate subsets of standardized reference datasets in public health that could act as demonstration datasets for open use, given appropriate levels of access. Data storage should also be facilitated in a usable format so that large, representative datasets can be stored and used efficiently.

Transparency in the data pipeline: Transparency in data pipelines is crucial, and data processing needs to be done in a more equitable way to prevent the removal of outliers.

Dedicate supplemental research funding to promote inclusion in the research process: One funding mechanism strongly supported by workshop participants is to provide researchers with extra funds or research supplements, similar to NSERC’s Northern Research Supplements Program, to maximize inclusion of vulnerable and under-represented groups. For example, funding agencies could provide supplemental funds for research projects to include, and maintain relationships with, under-represented and vulnerable communities.

To prevent and mitigate biases in AI, context matters

Background: Biases, or unfair prejudices against people in a given population, are prevalent in society, and the field of public health is no exception. Biases in health data mirror biases that are often seen in society and are thus inherently a part of the health setting. Populations excluded from the data, and therefore from model training, from the very beginning can also be excluded from the benefits of the research, increasing inequity. In an ideal world, algorithms would draw on a vast set of data points to create representative ML models. In practice, most researchers work with the datasets available to them, which can be biased. Specific populations, such as those of a certain gender or race, may be at a disadvantage in receiving the health support they need, and even in how their data is collected, if at all.

Bias in the design of AI models can be intentional (such as building models with the goal of cost-saving) or unintentional, reflecting the implicit biases of those creating the models. Predictors using AI can tell you what is happening, but not why it is happening; research using AI approaches therefore provides limited context to support explainability and causality. There is also a deeper structural issue: AI has the potential to further amplify existing biases and eliminate outliers.

Recommendations/Actions/Next Steps

Address the understanding and collection of race-based data in Canada: Racial discrimination in Canada manifests in the data we have and continue to collect. We need to better understand race-based data, how these variables are defined in research, and how to collect this data in Canadian research. Otherwise, AI may simply enable poor research that amplifies biases through misinterpreted variables.

Use stratified modelling for predictive analyses: Research using ML is often reported in aggregate to optimize overall model performance, without balancing results across sub-groups of populations to ensure justice for each. To promote balance when using AI and ML for public health research, integrate complexity into analyses by stratifying risks and variables, which helps place predictive analyses in context. Epidemiology has a long history of addressing such data limitations and biases through a variety of methodological approaches, and the field of ML research could benefit from incorporating and adapting some of them.
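To illustrate, the sketch below (assuming scikit-learn; the scores, labels, and `strata` array marking sub-group membership are simulated placeholders) reports model discrimination per stratum, where a single aggregate figure could hide a sub-group for which the model barely works:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def stratified_auc(scores, y_true, strata):
    """Report discrimination per sub-group rather than one aggregate number."""
    out = {}
    for s in np.unique(strata):
        mask = strata == s
        if len(np.unique(y_true[mask])) < 2:
            out[s] = float("nan")   # AUC undefined when a stratum has one class
        else:
            out[s] = roc_auc_score(y_true[mask], scores[mask])
    return out

# Hypothetical usage: scores that track the outcome well for group A
# but are near-random for the smaller group B.
rng = np.random.default_rng(0)
strata = np.array(["A"] * 900 + ["B"] * 100)
y = rng.binomial(1, 0.3, size=1000)
scores = np.where(strata == "A",
                  y * 0.8 + rng.normal(0, 0.2, 1000),
                  rng.uniform(size=1000))
print(stratified_auc(scores, y, strata))  # high AUC for A, ~0.5 for B
```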

Communicate the limits of data and inferences: Actively communicate the limits of the data collected, and of the inferences made, to further understanding of what the underlying algorithms and AI models used in research are based on. Post-hoc balancing of insights can be used to help interpret recommendations from machine learning and AI technologies.

How can we use AI to amplify voices of communities routinely pushed to the margins?

Background: AI has the potential to illuminate blind spots that we currently cannot see or that we know exist, but are not sure how to name. Data derived from more widely accessible tools like smartphones may allow greater representation from previously under-represented populations, and can contribute to increased inclusion in research using AI.

Could public health researchers take a bottom-up approach to data analysis using AI, such as nonparametric pattern recognition, rather than the usual research methods? Such an approach could allow researchers to avoid imposing assumptions, and prevent reinforcing the privileges of the majority, who are most often included and represented in the data.
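As a toy illustration of a bottom-up, nonparametric analysis (a minimal sketch assuming scikit-learn; the data are simulated placeholders for real survey or health-record features), a density-based method such as DBSCAN discovers however many clusters the data support and keeps outliers visible rather than forcing them into groups:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two dense subpopulations plus a handful of scattered individuals.
X = np.vstack([
    rng.normal(0, 0.3, size=(200, 2)),
    rng.normal(4, 0.3, size=(120, 2)),
    rng.uniform(-2, 6, size=(15, 2)),
])

# DBSCAN imposes no assumption about the number or shape of clusters,
# and labels sparse points -1 (outliers) instead of discarding them.
labels = DBSCAN(eps=0.5, min_samples=8).fit_predict(
    StandardScaler().fit_transform(X))
for label in sorted(set(labels)):
    name = "outliers" if label == -1 else f"cluster {label}"
    print(name, int((labels == label).sum()))
```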

Using AI can strengthen sensemaking for public health equity. A case study described by Dr. Dan Lizotte illustrated how an interdisciplinary group of computer scientists, public health practitioners and geographers can come together to support municipal public health services in understanding, through social media, who is most at risk among people who use substances. The project uses AI in the form of archetype-based searches that help filter social media data and identify those who may be at risk; archetypes of those most at risk then serve as reference points for understanding risk (a toy sketch of such a filter follows the questions below). This research highlights the potential for AI to support research that makes sense of information, helping to share and amplify the voices of communities that may otherwise be invisible to public health services. Questions to consider for sensemaking using AI include:

  • What are the best AI and visual analytic methods to help identify those at risk?
  • How can we actively include marginalized groups to participate in sensemaking research and tell their own stories?
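The sketch below is only a hypothetical illustration of how archetypes could act as reference points for screening posts; the archetype texts, example posts, similarity measure (TF-IDF with cosine similarity, assuming scikit-learn) and threshold are all invented for illustration and are not material from the project described above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archetype descriptions of people who may be at risk.
archetypes = [
    "recently lost housing, using alone, asking where to find naloxone",
    "worried about fentanyl in the local supply, avoids clinics due to stigma",
]
posts = [
    "anyone know where to get naloxone downtown? using alone tonight",
    "great weather for the game this weekend",
]

vec = TfidfVectorizer().fit(archetypes + posts)
sims = cosine_similarity(vec.transform(posts), vec.transform(archetypes))

# Keep posts whose similarity to any archetype exceeds a screening threshold,
# as candidates for human review rather than automated decisions.
flagged = [post for post, row in zip(posts, sims) if row.max() > 0.2]
print(flagged)
```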

Recommendations/Actions/Next Steps

Research on societal impacts of AI is needed: Findings on the societal impacts of AI should be shared with communities directly to gather feedback. One example currently underway is the Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies, funded by the Fonds de Recherche du Québec (FRQ) and based at Université Laval. The observatory, and other research on societal impacts, should build understanding of the consequences of investments in AI and address the uncertainties that AI creates.

Explore AI for storytelling: The application of AI tools such as archetyping to make sense of social media data, in support of public health initiatives for populations at risk, prompted discussion about the opioid public health emergency in BC. Stigma is a significant barrier that keeps people who use substances from seeking health supports, and sensemaking research using AI could help share stories that inform anti-stigma campaigns and increase awareness of harm reduction and supervised injection sites.

Advocacy needed for equitable NLP for public health: Better understanding of, and investment in, how natural language processing (NLP) can be leveraged as a speech recognition tool for public health is needed, particularly for communities who are underserved and at the margins. If systems using NLP are not trained for and with communities, such as immigrants or refugees who don’t speak English, or Francophone communities, then entire populations will be excluded. The design of NLP systems relies largely on commercial investment and development, and such systems are often designed inadequately; this has major consequences for health equity, because excluded populations will be missing from the data and unable to access services as a result. Information used to design NLP systems should be brought to communities directly, and communities should be actively engaged in the design of these systems. Inclusive design frameworks (like AI Commons) can help to address this need and make the case for investment in NLP for public health.

Research using AI should actively encourage reflexivity of biases

Background: Reflexivity and a detailed understanding of bias are not always reflected in the limitations sections of AI papers, including in studies where ML models are deployed. Training on bias and fairness is needed in curricula, especially in AI/ML and computer science disciplines. It was noted that “if you teach cross-validation, you have to teach confounding.” This is recommended not only for trainees, but also as continuing education to support researchers currently doing this work. Resources suggested to encourage reflexivity and an understanding of bias in public health research using AI include the TRIPOD Statement and the Draft Ethics Guidelines for Trustworthy AI, a European Union working document.
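A concrete illustration of why these two topics belong together (a minimal sketch assuming scikit-learn; the site structure and data are simulated): when records share a site-level confounder, naive K-fold cross-validation rewards a model that has merely memorized site signatures, while validation grouped by site exposes it.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, KFold, cross_val_score

rng = np.random.default_rng(0)
n_sites, per_site = 20, 50
site = np.repeat(np.arange(n_sites), per_site)

offset = rng.normal(size=n_sites)            # each site's feature "signature"
prevalence = rng.uniform(0.1, 0.9, n_sites)  # outcome rate varies by site only
X = (offset[site] + rng.normal(scale=0.1, size=site.size)).reshape(-1, 1)
y = rng.binomial(1, prevalence[site])        # no individual-level signal at all

clf = RandomForestClassifier(random_state=0)
naive = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
grouped = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=site)
print(f"naive K-fold accuracy:  {naive.mean():.2f}")   # inflated by site leakage
print(f"site-grouped accuracy:  {grouped.mean():.2f}")  # close to chance
```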

Recommendations/Actions/Next Steps

Integrate bias/FAT ML into training: Further training is needed on transparency, explainability, causality, uncertainty, and unintended consequences. Promote the integration of training on bias and equity, and on fairness, accountability and transparency (FAT) in ML, for AI and computer science researchers and trainees. Support continuing education for health professionals to increase awareness of the methods being used; this can help health professionals understand that the methods are not simply black boxes and better appreciate how they work.

Adopt an open science attitude: Use the attitude of open science to encourage equity-based frameworks for public health research using AI. As a collective, generate a consensus statement on the nature of research that can be done for public health using AI (see CONSORT as an example), and build in guidelines for publishing research along with updated ethical guidelines on including marginalized populations in the outcomes of the research.

Communicate limitations and practice reflexivity: Actively describe the limits of the data collected and of the analyses and inferences made in the research. Make sure team members can communicate this effectively when sharing research findings and exchanging knowledge with stakeholders. To support transparency, convey uncertainties from the outset by drawing on concepts in public health that deal with uncertainties in research.

Equity in the research enterprise: Systemic obstacles to AI research for public health equity were outlined throughout the workshop. One potential research idea raised was to study the barriers to equity within the research enterprise itself, from tenure and promotion to publishing and research funding systems. Such research could encourage critical self-reflection in the field and, in the long term, help to support research for public health equity.

How can we encourage interdisciplinary research and collaboration?

Background: Public health has a wealth of expertise to offer computer science and AI disciplines, particularly in understanding bias and confounding. To prevent further silos from developing, it is recommended that AI and public health groups be taught together to promote interdisciplinarity. The question remains: what is the best model to facilitate this? Graduate degrees, workshops, continuing professional development, and summer institutes were all raised as ways to improve interdisciplinary training and collaboration.

To address bias in data and in the design of AI tools, interdisciplinary training can give AI researchers and trainees exposure to bias, confounding and equity. Interdisciplinarity should not be limited to training, however; diversifying research and practice teams to promote complementarity and balance can help shed light on existing blind spots. Interdisciplinary collaboration takes time, and funding agencies need to recognize the longer timelines required for researchers to do interdisciplinary work well.

Recommendations/Actions/Next Steps

Frame equity as the solution: To engage in interdisciplinary collaboration with AI and ML researchers, public health should frame equity as a solution, so that once this perspective is adopted, the determinants of health equity cannot be ignored.

Fund by “starting at the edge vs. at the middle”: Start at the edge, and create a targeted call that explicitly excludes islands of existing excellence and people who have already received funding, in order to amplify other voices, such as community health organizations and groups traditionally considered “knowledge users.” Continue to release funding like the New Frontiers grants, which are tri-council in nature and support transformative, high-risk research that fosters interdisciplinary collaboration. It is recommended that such funding not carry too many restrictions, to allow for flexibility. Additionally, increase funding for secondary data analysis, which can also facilitate collaboration between researchers and trainees in exploring new insights from existing datasets.

Foster interdisciplinary training environments: Trainees can be the glue that ties researchers together. To fund trainees and promote interdisciplinary research, funding agencies should let researchers write proposals that allow them to co-supervise students with principal investigators from different disciplines. Summer institutes are also a way to train a large number of participants from across disciplines, with the ability to focus specifically on topics like health equity.

Facilitate data platform cooperatives: Involve a variety of perspectives through the development of data platform cooperatives, in which groups that are often excluded come together to collect and govern their own data.

Engage a variety of disciplines: Public health and computer science/AI researchers should be encouraged and supported to engage and collaborate with multiple disciplines and groups in order to advance equitable AI for public health research, including but not limited to: sociologists, political scientists, engineers, civil society and citizen scientists, people with lived experience, policymakers, business professionals, management scientists, public health agencies, and community-based organizations.

Why isn’t health equity research in general being funded more in Canada?

Background: Complex research questions addressing health equity using AI approaches are best addressed with multidisciplinary team grants. However, peer-review panels often consist of researchers with specific expertise that may not be amenable to multidisciplinary research projects. This larger systemic issue with peer-review panels and processes needs to be addressed, and perceptions need to change, in order to improve funding and peer-reviewed publication, particularly for research addressing health and social inequities using AI approaches.

Recommendations/Actions/Next Steps

Encourage multidisciplinarity in peer reviewer pools and review frameworks: Reviewer pools should consist of researchers who are amenable to, willing to engage with, and understand multidisciplinary research. Frameworks for peer review assessments should be enhanced to enable peer reviewers to support and effectively assess multidisciplinarity in research.

Educate peer review panels on gaps in health equity research: Equity research is not being adequately funded in Canada, and the importance of this issue needs to be communicated to peer review panels in order to close current gaps in health equity research.

Develop standards and governance frameworks for peer review on multidisciplinary research focused on AI for health equity: There is a need to develop standards and governance frameworks for public health and AI to help inform peer review, funding calls and journal publication reviews at the intersection of these disciplines.

Appendix A: Workshop Agenda

IPPH-CIFAR Joint Workshop
Meeting Agenda

January 25, 2019 | MaRS Centre, South Tower, Collaboration Centre, Room CR-2 (ground floor)
101 College Street, Toronto, Ontario M5G 1L7 Canada

Time | Agenda Item | Lead(s)

8:45 – 9:00 am | Registration & Breakfast

9:00 – 9:30 am | Welcome and Introductions
  • Background, Meeting Purpose and Objectives of Workshop
  • Review Agenda
Leads: Elissa Strome (Executive Director, Pan-Canadian AI Strategy, CIFAR); Marisa Creatore (Assistant Scientific Director, CIHR-IPPH)

9:30 – 10:00 am | Keynote: Fairness in AI
Lead: Marzyeh Ghassemi (Assistant Professor, Computer Science and Medicine, University of Toronto; Faculty Member, Vector Institute; Canada CIFAR AI Chair)

10:00 – 10:15 am | Networking Break

10:15 – 10:45 am | Keynote: Equity and Bias in Evidence-based Decision-Making for Public Health
Lead: Laura Rosella (Scientific Director, Population Health Analytics Laboratory, University of Toronto)

10:45 am – 12:00 pm | Case Studies: Equity in AI Applications to Public Health
  • Social Networks
  • Generation AI
  • Machine Learning and Opportunities for Health Promotion
Leads: Anna Goldenberg (Scientist, Genetics and Genome Biology, SickKids Research Institute; Varma Family Chair of Medical Bioinformatics and Artificial Intelligence; Canada Research Chair in Computational Medicine); Dan Lizotte (Assistant Professor, Depts. of Computer Science and Epidemiology & Biostatistics, Western University)

12:00 – 1:00 pm | Lunch

1:00 – 2:00 pm | Gaps in the Research: Panel Discussion
  • Gender
  • Race & Ethnicity
  • Socioeconomic Factors
  • Different Levels of Ability
Panelists: Jutta Treviranus (Director, Inclusive Design Research Centre, OCAD University); Arjumand Siddiqi (Associate Professor, Canada Research Chair in Population Health Equity, University of Toronto); Tara Upshaw (MHSc Student, The Upstream Lab, St. Michael’s Hospital)
Moderator: Erica Di Ruggiero (Director, Office of Global Public Health Education & Training, University of Toronto)

2:00 – 3:00 pm | Facilitated Breakout Sessions: Reflecting on Emerging Opportunities for Equitable AI in Public Health
Facilitators: Jennifer Gibson (Director, Joint Centre for Bioethics, University of Toronto); Nathaniel Osgood (Professor, Dept. of Computer Science, University of Saskatchewan)

3:00 – 3:20 pm | Networking Break

3:20 – 4:45 pm | Facilitated Group Discussion: Reflecting on Emerging Opportunities for Equitable AI in Public Health
  • Opportunities for Research, Recommendations and Policy Implications
Session Chair: Jennifer Gibson (Director, Joint Centre for Bioethics, University of Toronto)

4:45 – 5:00 pm | Closing Remarks and Next Steps
Leads: Elissa Strome (CIFAR); Marisa Creatore (CIHR-IPPH)

5:00 – 6:00 pm | Reception
Location: MaRS Centre, West Tower, 661 University Ave, Suite 505

Appendix B: Workshop Participant List

Participants (in alphabetical order)

  • Imran Ali, Dahdaleh Institute for Global Health Research
  • Brent Barron, CIFAR
  • Nabilah Chowdhury, CIFAR
  • Amy Cook, CIFAR
  • Mélissa Côté, Canada Research Chair in Shared Decision Making and Knowledge Translation
  • Myriam Côté, Mila
  • Marisa Creatore, CIHR-IPPH
  • Natasha Crowcroft, PHO
  • Krista Davidson, CIFAR
  • Erica Di Ruggiero, Dalla Lana School of Public Health, University of Toronto
  • Elham Dolatabadi, Vector Institute
  • Rebecca Finlay, CIFAR
  • Marzyeh Ghassemi, University of Toronto
  • Jennifer Gibson, University of Toronto
  • Anna Goldenberg, SickKids Research Institute, University of Toronto
  • Zachary Kaminsky, The Royal's Institute of Mental Health Research
  • Tino Kreutzer, York University
  • Jacqueline Kueper, The University of Western Ontario
  • Daniel Lizotte, The University of Western Ontario
  • Fatima Mussa, CIHR-IPPH
  • James Orbinski, York University
  • Nathaniel Osgood, University of Saskatchewan
  • Alison Paprica, Vector Institute
  • Mina Park, University of British Columbia
  • Samira Rahimi, McGill University
  • Laura Rosella, University of Toronto
  • Arjumand Siddiqi, University of Toronto
  • Elissa Strome, CIFAR
  • Jutta Treviranus, Inclusive Design Research Centre, OCAD University
  • Tara Upshaw, The Upstream Lab
  • Scott Weichenthal, McGill University
  • Merrick Zwarenstein, Western University; ICES