Design, Application, and Actionability of US Public Health Data Dashboards: Scoping Review.
Background: Data dashboards can be a powerful tool for ensuring that public health decision makers have access to timely, relevant, and credible data. As dashboards' appeal and reach become ubiquitous, it is important to consider how they may best be integrated with public health data systems and with the decision-making routines of their users.
Objective: This scoping review describes and analyzes the current state of knowledge regarding the design, application, and actionability of US national public health data dashboards. Its aims are to identify critical theoretical and empirical gaps in the literature and to clarify the definition and operationalization of actionability as a critical property of dashboards.
Methods: The review follows PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. Using a validated search query, we searched relevant databases (CINAHL, PubMed, MEDLINE, and Web of Science) and gray literature sources for refereed journal articles, conference proceedings, and reports published between 2000 and 2023 that describe the design, implementation, or evaluation of US national public health dashboards. Of the 2544 documents retrieved, 89 (3.5%) met all inclusion criteria. Data were extracted through an iterative process of testing and improving intercoder reliability.
Results: The dashboards reviewed (N=89) target a broad range of public health topics but are designed primarily for epidemiological surveillance and monitoring (n=51, 57% of dashboards) and for probing health disparities and social determinants of health (n=27, 30%). Thus, they are limited in their potential to guide users' policy and practice decisions. Nearly all dashboards are created, hosted, and funded by institutional entities, such as government agencies and universities, that hold influence over public health agendas and priorities. Intended users are primarily public health professionals (n=34, 38%), policy makers (n=30, 34%), and researchers or practitioners (n=28, 32%), but it is unclear whether the dashboards are tailored to users' data capacities or needs, although 30% of articles reference user-centered design. Usability indicators commonly referenced include website analytics (n=22, 25%), expert evaluation (n=19, 21%), and users' impact stories (n=14, 16%), yet only 30% (n=26) of all articles report any usability assessment. Usefulness is frequently inferred from presumed relevance to decision makers (n=17, 19%), anecdotal stakeholder feedback (n=16, 18%), and user engagement metrics (n=14, 16%) rather than established through rigorous testing. Only 47% (n=42) of the dashboards were still accessible or active at the time of review.
Conclusions: The findings reveal fragmentation and a lack of scientific rigor in current knowledge regarding the design, implementation, and utility of public health dashboards. Coherent theoretical accounts and direct empirical tests that link the usability, usefulness, and use of these tools to users' decisions and actions are critically missing. A more complete explication and operationalization of actionability in this context have significant potential to fill this gap and to advance future scholarship and practice.