Annual Review of Biomedical Data Science - Volume 4, 2021
Using Phecodes for Research with the Electronic Health Record: From PheWAS to PheRS
Vol. 4 (2021), pp. 1–19. Electronic health records (EHRs) are a rich source of data for researchers, but extracting meaningful information from this highly complex data source is challenging. Phecodes represent one strategy for defining phenotypes for research using EHR data. They are a high-throughput phenotyping tool based on ICD (International Classification of Diseases) codes that can be used to rapidly define the case/control status of thousands of clinically meaningful diseases and conditions. Phecodes were originally developed to conduct phenome-wide association studies to scan for phenotypic associations with common genetic variants. Since then, phecodes have been used to support a wide range of EHR-based phenotyping methods, including the phenotype risk score. This review aims to comprehensively describe the development, validation, and applications of phecodes and suggest some future directions for phecodes and high-throughput phenotyping.
The 3D Genome Structure of Single Cells
Tianming Zhou, Ruochi Zhang, and Jian Ma. Vol. 4 (2021), pp. 21–41. The spatial organization of the genome in the cell nucleus is pivotal to cell function. However, how the 3D genome organization and its dynamics influence cellular phenotypes remains poorly understood. The very recent development of single-cell technologies for probing the 3D genome, especially single-cell Hi-C (scHi-C), has ushered in a new era of unveiling cell-to-cell variability of 3D genome features at an unprecedented resolution. Here, we review recent developments in computational approaches to the analysis of scHi-C, including data processing, dimensionality reduction, imputation for enhancing data quality, and revealing 3D genome features at single-cell resolution. While much progress has been made in computational method development to analyze single-cell 3D genomes, substantial future work is needed to improve data interpretation and multimodal data integration, which are critical to reveal fundamental connections between genome structure and function among heterogeneous cell populations in various biological contexts.
Integration of Multimodal Data for Deciphering Brain Disorders
Vol. 4 (2021), pp. 43–56. The accumulation of vast amounts of multimodal data for the human brain, in both normal and disease conditions, has provided unprecedented opportunities for understanding why and how brain disorders arise. Compared with traditional analyses of single datasets, the integration of multimodal datasets covering different types of data (e.g., genomics, transcriptomics, and imaging) has shed light on the mechanisms underlying brain disorders in greater detail at both the microscopic and macroscopic levels. In this review, we first briefly introduce the popular large datasets for the brain. Then, we discuss in detail how integration of multimodal human brain datasets can reveal the genetic predispositions and the abnormal molecular pathways of brain disorders. Finally, we present an outlook on how future data integration efforts may advance the diagnosis and treatment of brain disorders.
African Global Representation in Biomedical Sciences
Vol. 4 (2021), pp. 57–81. African populations are diverse in their ethnicity, language, culture, and genetics. Although plagued by high disease burdens, until recently the continent has largely been excluded from biomedical studies. Along with limitations in research and clinical infrastructure, human capacity, and funding, this omission has resulted in an underrepresentation of African data and disadvantaged African scientists. This review interrogates the relative abundance of biomedical data from Africa, primarily in genomics and other omics. The visibility of African science through publications is also discussed. A challenge encountered in this review is the relative lack of annotation of data on their geographical or population origin, with African countries represented as a single group. In addition to the abovementioned limitations, the underrepresentation of African data globally may also be attributed to hesitation to deposit data in public repositories. Whatever the reason, the disparity should be addressed, as African data have enormous value for scientists in Africa and globally.
Phenotyping Neurodegeneration in Human iPSCs
Vol. 4 (2021), pp. 83–100. Induced pluripotent stem cell (iPSC) technology holds promise for modeling neurodegenerative diseases. Traditional approaches for disease modeling using animal and cellular models require knowledge of disease mutations. However, many patients with neurodegenerative diseases do not have a known genetic cause. iPSCs offer a way to generate patient-specific models and study pathways of dysfunction in an in vitro setting in order to understand the causes and subtypes of neurodegeneration. Furthermore, iPSC-based models can be used to search for candidate therapeutics using high-throughput screening. Here we review how iPSC-based models are currently being used to further our understanding of neurodegenerative diseases, as well as discuss their challenges and future directions.
Perspectives on Allele-Specific Expression
Vol. 4 (2021), pp. 101–122. Diploidy has profound implications for population genetics and susceptibility to genetic diseases. Although two copies are present for most genes in the human genome, they are not necessarily both active or active at the same level in a given individual. Genomic imprinting, resulting in exclusive or biased expression in favor of the allele of paternal or maternal origin, is now believed to affect hundreds of human genes. A far greater number of genes display unequal expression of gene copies due to cis-acting genetic variants that perturb gene expression. The availability of data generated by RNA sequencing applied to large numbers of individuals and tissue types has generated unprecedented opportunities to assess the contribution of genetic variation to allelic imbalance in gene expression. Here we review the insights gained through the analysis of these data about the extent of the genetic contribution to allelic expression imbalance, the tools and statistical models for assessing gene expression imbalance, and what the results obtained reveal about the contribution of genetic variants that alter gene expression to complex human diseases and phenotypes.
Ethical Machine Learning in Healthcare
Vol. 4 (2021), pp. 123–144. The use of machine learning (ML) in healthcare raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of healthcare. Specifically, we frame ethics of ML in healthcare through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to postdeployment considerations. We close by summarizing recommendations to address these challenges.
The Ethics of Consent in a Shifting Genomic Ecosystem
Vol. 4 (2021), pp. 145–164. The collection and use of human genetic data raise important ethical questions about how to balance individual autonomy and privacy with the potential for public good. The proliferation of local, national, and international efforts to collect genetic data and create linkages to support large-scale initiatives in precision medicine and the learning health system creates new demands for broad data sharing that involve managing competing interests and careful consideration of what constitutes appropriate ethical trade-offs. This review describes these emerging ethical issues with a focus on approaches to consent and issues related to justice in the shifting genomic research ecosystem.
Modern Clinical Text Mining: A Guide and Review
Vol. 4 (2021), pp. 165–187. Electronic health records (EHRs) are becoming a vital source of data for healthcare quality improvement, research, and operations. However, much of the most valuable information contained in EHRs remains buried in unstructured text. The field of clinical text mining has advanced rapidly in recent years, transitioning from rule-based approaches to machine learning and, more recently, deep learning. With new methods come new challenges, however, especially for those new to the field. This review provides an overview of clinical text mining for those who are encountering it for the first time (e.g., physician researchers, operational analytics teams, machine learning scientists from other domains). While not a comprehensive survey, this review describes the state of the art, with a particular focus on new tasks and methods developed over the past few years. It also identifies key barriers between these remarkable technical advances and the practical realities of implementation in health systems and in industry.
Mutational Signatures: From Methods to Mechanisms
Vol. 4 (2021), pp. 189–206. Mutations are the driving force of evolution, yet they underlie many diseases, in particular, cancer. They are thought to arise from a combination of stochastic errors in DNA processing, naturally occurring DNA damage (e.g., the spontaneous deamination of methylated CpG sites), replication errors, and dysregulation of DNA repair mechanisms. High-throughput sequencing has made it possible to generate large datasets to study mutational processes in health and disease. Since the emergence of the first mutational process studies in 2012, the field has gained increasing attention and has already accumulated a host of computational approaches and biomedical applications.
Single-Cell Analysis for Whole-Organism Datasets
Vol. 4 (2021), pp. 207–226. Cell atlases are essential companions to the genome as they elucidate how genes are used in a cell type–specific manner or how the usage of genes changes over the lifetime of an organism. This review explores recent advances in whole-organism single-cell atlases, which enable understanding of cell heterogeneity and tissue and cell fate, both in health and disease. Here we provide an overview of recent efforts to build cell atlases across species and discuss the challenges that the field is currently facing. Moreover, we propose the concept of having a knowledgebase that can scale with the number of experiments and computational approaches and a new feedback loop for development and benchmarking of computational methods that includes contributions from the users. These two aspects are key for community efforts in single-cell biology that will help produce a comprehensive annotated map of cell types and states with unparalleled resolution.
Neoantigen Controversies
Vol. 4 (2021), pp. 227–253. Next-generation sequencing technologies have revolutionized our ability to catalog the landscape of somatic mutations in tumor genomes. These mutations can sometimes create so-called neoantigens, which allow the immune system to detect and eliminate tumor cells. However, efforts that stimulate the immune system to eliminate tumors based on their molecular differences have had less success than hoped, and there are conflicting reports about the role of neoantigens in the success of this approach. Here we review some of the conflicting evidence in the literature and highlight key aspects of the tumor–immune interface that are emerging as major determinants of whether mutation-derived neoantigens will contribute to an immunotherapy response. Accounting for these factors is expected to improve success rates of future immunotherapy approaches.
The Exposome in the Era of the Quantified Self
Vol. 4 (2021), pp. 255–277. Human health is regulated by complex interactions among the genome, the microbiome, and the environment. While extensive research has been conducted on the human genome and microbiome, little is known about the human exposome. The exposome comprises the totality of chemical, biological, and physical exposures that individuals encounter over their lifetimes. Traditional environmental and biological monitoring only targets specific substances, whereas exposomic approaches identify and quantify thousands of substances simultaneously using nontargeted high-throughput and high-resolution analyses. The quantified self (QS) aims to enhance our understanding of human health and disease through self-tracking. QS measurements are critical in exposome research, as external exposures impact an individual's health, behavior, and biology. This review discusses both the achievements and the shortcomings of current research and methodologies on the QS and the exposome and proposes future research directions.
Metatranscriptomics for the Human Microbiome and Microbial Community Functional Profiling
Vol. 4 (2021), pp. 279–311. Shotgun metatranscriptomics (MTX) is an increasingly practical way to survey microbial community gene function and regulation at scale. This review begins by summarizing the motivations for community transcriptomics and the history of the field. We then explore the principles, best practices, and challenges of contemporary MTX workflows: beginning with laboratory methods for isolation and sequencing of community RNA, followed by informatics methods for quantifying RNA features, and finally statistical methods for detecting differential expression in a community context. In the second half of the review, we survey important biological findings from the MTX literature, drawing examples from the human microbiome, other (nonhuman) host-associated microbiomes, and the environment. Across these examples, MTX methods prove invaluable for probing microbe–microbe and host–microbe interactions, the dynamics of energy harvest and chemical cycling, and responses to environmental stresses. We conclude with a review of open challenges in the MTX field, including making assays and analyses more robust, accessible, and adaptable to new technologies; deciphering roles for millions of uncharacterized microbial transcripts; and solving applied problems such as biomarker discovery and development of microbial therapeutics.
Artificial Intelligence in Action: Addressing the COVID-19 Pandemic with Natural Language Processing
Vol. 4 (2021), pp. 313–339. The COVID-19 (coronavirus disease 2019) pandemic has had a significant impact on society, both because of the serious health effects of COVID-19 and because of public health measures implemented to slow its spread. Many of these difficulties are fundamentally information needs; attempts to address these needs have caused an information overload for both researchers and the public. Natural language processing (NLP)—the branch of artificial intelligence that interprets human language—can be applied to address many of the information needs made urgent by the COVID-19 pandemic. This review surveys approximately 150 NLP studies and more than 50 systems and datasets addressing the COVID-19 pandemic. We detail work on four core NLP tasks: information retrieval, named entity recognition, literature-based discovery, and question answering. We also describe work that directly addresses aspects of the pandemic through four additional tasks: topic modeling, sentiment and emotion analysis, caseload forecasting, and misinformation detection. We conclude by discussing observable trends and remaining challenges.
Data Science in the Food Industry
Vol. 4 (2021), pp. 341–367. Food safety is one of the main challenges facing the agri-food industry, and it must be addressed in an environment of tremendous technological progress, where consumers' lifestyles and preferences are in constant flux. Food chain transparency and trust are drivers for food integrity control and for improvements in efficiency and economic growth. Similarly, the circular economy has great potential to reduce wastage and improve the efficiency of operations in multi-stakeholder ecosystems. Throughout the food chain cycle, all food commodities are exposed to multiple hazards, resulting in a high likelihood of contamination. Such biological or chemical hazards may be naturally present, accidentally introduced, or fraudulently imposed at any stage of food production, risking consumers' health and their faith in the food industry. Nowadays, a massive amount of data is generated, not only from the next generation of food safety monitoring systems and along the entire food chain (primary production included) but also from the Internet of things, media, and other devices. These data should be used for the benefit of society, and the scientific field of data science should be a vital player in helping to make this possible.
Illuminating the Virosphere Through Global Metagenomics
Vol. 4 (2021), pp. 369–391. Viruses are the most abundant biological entity on Earth, infect cellular organisms from all domains of life, and are central players in the global biosphere. Over the last century, the discovery and characterization of viruses have progressed steadily alongside much of modern biology. In terms of outright numbers of novel viruses discovered, however, the last few years have been by far the most transformative for the field. Advances in methods for identifying viral sequences in genomic and metagenomic datasets, coupled to the exponential growth of environmental sequencing, have greatly expanded the catalog of known viruses and fueled the tremendous growth of viral sequence databases. Development and implementation of new standards, along with careful study of the newly discovered viruses, have transformed and will continue to transform our understanding of microbial evolution, ecology, and biogeochemical cycles, leading to new biotechnological innovations across many diverse fields, including environmental, agricultural, and biomedical sciences.
Probabilistic Machine Learning for Healthcare
Vol. 4 (2021), pp. 393–415. Machine learning can be used to make sense of healthcare data. Probabilistic machine learning models help provide a complete picture of observed data in healthcare. In this review, we examine how probabilistic machine learning can advance healthcare. We consider challenges in the predictive model building pipeline where probabilistic models can be beneficial, including calibration and missing data. Beyond predictive models, we also investigate the utility of probabilistic machine learning models in phenotyping, in generative models for clinical use cases, and in reinforcement learning.
Satellite Monitoring for Air Quality and Health
Vol. 4 (2021), pp. 417–447. Data from satellite instruments provide estimates of gas and particle levels relevant to human health, even pollutants invisible to the human eye. However, the successful interpretation of satellite data requires an understanding of how satellites relate to other data sources, as well as factors affecting their application to health challenges. Drawing from the expertise and experience of the 2016–2020 NASA HAQAST (Health and Air Quality Applied Sciences Team), we present a review of satellite data for air quality and health applications. We include a discussion of satellite data for epidemiological studies and health impact assessments, as well as the use of satellite data to evaluate air quality trends, support air quality regulation, characterize smoke from wildfires, and quantify emission sources. The primary advantage of satellite data compared to in situ measurements, e.g., from air quality monitoring stations, is their spatial coverage. Satellite data can reveal where pollution levels are highest around the world, how levels have changed over daily to decadal periods, and where pollutants are transported from urban to global scales. To date, air quality and health applications have primarily utilized satellite observations and satellite-derived products relevant to near-surface particulate matter <2.5 μm in diameter (PM2.5) and nitrogen dioxide (NO2). Health and air quality communities have grown increasingly engaged in the use of satellite data, and this trend is expected to continue. From health researchers to air quality managers, and from global applications to community impacts, satellite data are transforming the way air pollution exposure is evaluated.