{ "cells": [ { "cell_type": "markdown", "id": "97d8ede8", "metadata": {}, "source": [ "# Lesson 1. Supervised Learning\n", "Supervised learning uses labeled data to train a model to make predictions and judgments. In the area of health equality, this kind of learning can be applied to pinpoint health inequities, estimate the likelihood of health-related outcomes, and create focused interventions. Supervised learning algorithms can be used to identify groups of people who are more likely to benefit from preventative treatments, to identify groups of people who are at risk for particular health disorders, for instance, to examine patient data. Moreover, supervised learning can be used to pinpoint subgroups of people who have various health outcomes and create individualized therapies to deal with health outcomes inequalities {cite:p}`ghassemi2020review`." ] }, { "cell_type": "markdown", "id": "c9b971ce", "metadata": {}, "source": [ "
\n", "

Health Equity and Supervised Learning

\n", " \n", " \n", " \n", " \n", "
\n", "
    \n", "
  • Identify patterns in public health data to help inform policy decisions and interventions.
  • Use supervised learning to predict and diagnose public health issues and to detect potential outbreaks.
  • Help to identify risk factors for public health issues and develop interventions to address them.
  • Assist in automating and streamlining data collection and analysis, resulting in increased efficiency and better decisions.
\n", "
\n", "
" ] }, { "cell_type": "markdown", "id": "5a14c485", "metadata": { "tags": [ "full-width" ] }, "source": [ "## What is Supervised Learning?\n", "\n", "Supervised learning models are defined by their use of labeled datasets for training models to learn a mapping between an input and target variable, and ultimately leveraged for prediction on new instances. A labeled dataset refers to data that comes with known characteristics such as discrete categorical labels (*class labels* or *targets*) and attributes (*features*). There are two main categories of supervised learning problems:\n", "\n", "- **Classification** algorithms are used to identify which discrete category or class a given data point belongs to given its features.\n", "- **Regression** algorithms differ from classification problems in that they are used to predict continous numeric quantities. Regression problems are used to predict the value of a dependent variable based on given independent variables.\n", "\n", "This lesson focuses on using classification and regression models for prediction. However, regression is also commonly used to perform statistical inference, which estimates the associations between two or more variables of interest. See the lesson on [Biostatistics](5-1-0.-biostatistics.ipynb) for more details regarding statistical inference.\n" ] }, { "cell_type": "markdown", "id": "f0dcafe0", "metadata": { "tags": [ "full-width" ] }, "source": [ "### How Supervised Learning is Used\n", "\n", "Below is are examples of common supervised model uses within public health:\n", "\n", "* **Forecasting**: Regression models can be used to forecast future rates of infection based on historical data.\n", "* **Risk Assessment**: Regression analysis can be used in identifying cardiovascular risk factors from EHR data and generating risk scores for individual prognostication. \n", "* **Decision Making**: A regression analysis can help enable aid decision-making under uncertainty. 
For example, risk scores generated from regression analysis on patient history, diagnosis, and prognosis can help patients and care providers make more informed treatment decisions.\n", "* **Categorical Classification**: Classification can be used to discriminate among a set of features and group samples according to a predicted target. An example would be a classification model that predicts mortality risk for a disease based on demographic, symptom, and epidemiologic features. A second example could be a binary classification model used to predict malignant vs. non-malignant skin lesions based on dermatoscopic image data." ] }, { "cell_type": "markdown", "id": "78edf595", "metadata": { "tags": [ "full-width" ] }, "source": [ "### Methods in Supervised Learning\n", "Below is a table that lists common supervised learning methods. This table builds upon a [Machine Learning Quick Reference guide](https://github.com/sassoftware/enlighten-apply/tree/master/ML_tables) provided by SAS. For more in-depth information regarding regression analyses and statistical inference, see the lesson on [Biostatistics](5-1-0.-biostatistics.ipynb)." ] }, { "cell_type": "markdown", "id": "dd8ab49c", "metadata": { "tags": [ "full-width" ] }, "source": [ "```{admonition} If you are already familiar with methods for supervised learning, please continue to the next section. Otherwise, click here.\n", ":class: dropdown \n", "\n", "
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
ModelCommon Usage Suggested Usage Suggested Scale Interpretability Common Concerns
Linear RegressionSupervised regressionMultiple linear regression, simple linear regressionSmall to large datasets HighMissing values, outliers, standardization, parameter tuning
Polynomial RegressionSupervised regressionModeling non-linear data using a linear model, analyze the curve towards the end for signs of overfitting, often used when linear models are unclearSmall to large datasets HighMissing values, outliers, overfitting, standardization, parameter tuning
Logistic RegressionSupervised classification Most commonly used in classification but can be used in regression modeling, dependent variable (target) is categoricalSmall to large datasets HighMissing values, outliers, standardization, parameter tuning
Penalized RegressionSupervised regression, Supervised classification Modeling linear or linearly separable phenomena, manually specifying nonlinear and explicit interaction terms, well suited for N << p, where the number of predictors exceeds the number of samples {cite:p}`brownlee2022bigp`, specific types of penalized regression (Bayesian Linear Regression, Ridge, Lasso, Elastic Net)Small to large datasets HighMissing values, outliers, standardization, parameter tuning
Naïve Bayes Supervised classification Modeling linearly separable phenomena in large datasets, well-suited for extremely large datasets where complex methods are intractable Small to extremely large datasets Moderate Strong conditional independence assumption among features, infrequent categorical levels
Decision Trees Supervised regression, Supervised classificationModeling nonlinear and nonlinearly separable phenomena in large, dirty data, interactions are considered automatically but implicitly, missing values and outliers in input variables handled automatically in many implementations, decision tree ensembles (e.g., random forests and gradient boosting) can increase prediction accuracy and decrease overfittingMedium to large datasets ModerateInstability with small training datasets, gradient boosting can be unstable with noise or outliers, overfitting, parameter tuning
Support Vector Machines (SVM) Supervised regression, Supervised classification Modeling linear or linearly separable phenomena using linear kernels, modeling nonlinear or nonlinearly separable phenomena using nonlinear kernels, anomaly detection with one-class SVM (OSVM) Small to large datasets for linear kernels, Small to medium datasets for nonlinear kernels LowMissing values, overfitting, outliers, standardization, parameter tuning, accuracy versus deep neural networks depends on the choice of the nonlinear kernel; Gaussian and polynomial are often less accurate {cite:p}`singh2019svm`
k-Nearest Neighbors (kNN)  Supervised regression, Supervised classification Modeling nonlinearly separable phenomena, can be used to match the accuracy of more sophisticated techniques, but with fewer tuning parameters Small to medium datasets Low Missing values, overfitting, outliers, standardization, curse of dimensionality
Neural Networks (NN) Supervised regression, Supervised classification Modeling nonlinear and nonlinearly separable phenomena, deep neural networks (e.g., deep learning) are well suited for state-of-the-art pattern recognition in images, videos, and sound, all interactions considered in fully connected, multilayer topologies, nonlinear feature extraction with auto-encoder and restricted Boltzmann machine (RBM) networksMedium to large datasets LowMissing values, overfitting, outliers, standardization, hyperparameter tuning
\n", "
\n", "\n", "```" ] }, { "cell_type": "markdown", "id": "12875df5", "metadata": { "tags": [ "full-width" ] }, "source": [ "## Health Equity Considerations\n", "Supervised learning methods in the context of health equity require careful consideration in data preparation, model training, and interpretation/dissemination of results. Addressing biases, ensuring diversity in the evaluation dataset, and quantifying algorithmic fairness using appropriate metrics are essential steps to promote equitable outcomes in health care decision-making. Additionally, incorporating qualitative evaluation through user research can provide valuable insights into the model's impact on diverse communities.\n", "\n", "Model training and tuning is a process in which training data is introduced to an algorithm and model performance is optimized through methods such as iterative hyperparameter tuning and cross-validation. Below are several challenges you may encounter while training and tuning a supervised learning model. For more information on [data preparation](../unit_4/4-0.-data-preparation.ipynb) and [dissemination](../unit_6/6-0.-dissemination.ipynb), please visit their respective units." ] }, { "cell_type": "markdown", "id": "63513fad", "metadata": {}, "source": [ "|Challenge | Challenge Description | Health Equity Example | Recommended Best Practice |\n", "|:-------- | :---------------- | :----------------- | :-----------------|\n", "| **Overfitting**| High-variance, low-bias models that fail to generalize well. | When trying to make a policy decision that impacts public health, an initial thought may be to include all socio-demographic variables as well as data on the population's health/co-morbidities in your model. Including too many variables in smaller datasets can lead to a model fitting values that are not significant indicators of the target, which can produce biased and inaccurate predictions. 
| |\n", "| **Discriminatory Classification** | Different prediction error rates for different subgroups suggest that the model discriminates against particular subgroups. The choice of the target variable can introduce bias, and underrepresented populations in the training data can lead to discriminatory predictions. Further, due to protected attributes, \"existing approaches for reducing discrimination induced by prediction errors may be unethical or impractical to apply in settings where predictive accuracy is critical, such as in healthcare.\" {cite:p}`chen2018classifier` | A healthcare model predicts which patients might be discharged earliest to efficiently direct limited case management resources in order to prevent delays and open up more beds. If the model discovers that residence in certain zip codes predicts longer stays, but those zip codes are socioeconomically depressed or predominantly African American, then the model might disproportionately allocate case management resources to patients from richer, predominantly white neighborhoods {cite:p}`rajkomar2018ensuring`. | |\n", "| **Different costs of misclassification**| Carefully consider when a false positive error can cause harm to an individual in a protected class. The use of ROC AUC to measure diagnostic accuracy does not account for different costs of misclassification; it lacks clinical interpretability, and confidence scales used to construct ROC curves can be inconsistent and unreliable. | For a colonography or mammogram, using ROC-AUC to determine the diagnostic accuracy of radiological tests could be problematic, as ROC-AUC does not consider misclassification costs (important for assessing classification fairness). 
| |\n", "\n" ] }, { "cell_type": "markdown", "id": "2c5e0dd7", "metadata": { "tags": [ "full-width" ] }, "source": [ "{cite:p}`yeom2018hunting,d2017conscientious,kamiran2010discrimination,kumar2020identifying,har2019near,halligan2015disadvantages,chen2018classifier,rajkomar2018ensuring`\n" ] }, { "cell_type": "markdown", "id": "8f69b180", "metadata": { "tags": [ "full-width" ] }, "source": [ "## Case Study Example\n", "\n", "This case study is for illustrative purposes and does not represent a specific study from the literature.\n", "\n", "**Scenario:**\n", "MG is an epidemiologist who is interested in creating a model to predict the effectiveness of a community weight loss intervention in young adults with Type 2 diabetes mellitus. MG hopes that identifying optimal individuals for intervention will enable a broader rollout of this program.\n", "\n", "**Specific Model Objective:**\n", "Predict successful weight loss (> 5% of initial body weight) after 6 months of program participation in adults aged 18-33 with an established diagnosis of Type 2 diabetes and at least one treatment with oral hypoglycemics. The program intervention consisted of bi-weekly online counseling for 15 minutes covering the patient's nutrition, exercise, and medication adherence.\n", "\n", "**Data Source:**\n", "Clinical and demographic data from an outpatient diabetes center EHR at an academic institution in Boston. Per EHR records: 75% of the participants were white, 15% were black, and 10% were Hispanic/Latino or another race. \n", "\n", "**Analytic Method:** \n", "Decision Tree Classifier\n", "\n", "**Results:**\n", "The model achieved an overall AUC of 0.892 and an F1 score of 0.41.\n", "\n", "**Health Equity Considerations:**\n", "\n", "While the model achieved high overall performance in terms of AUC, there are several considerations that should be made:\n", "* A limitation of the data was the lower representation of Hispanic/Latinos. 
Due to imbalanced data, the model may exhibit **discriminatory classification** performance for Hispanic/Latino participants even with the observed high overall AUC. As discussed in a previous section, MG may want to consider techniques to mitigate imbalanced data, such as over-sampling, under-sampling, or trying another model.\n", "* **Overfitting** may limit the model's portability, particularly to settings in which the population demographics differ from those found in the Boston sample set. Specific to decision tree-based models, certain choices when training the model can lead to overfitting:\n", " * Using all the features to train the model, while perhaps a good idea at first, can lead to overfitting. This means the model may not perform as well when new datasets are introduced.\n", " * If the decision tree is allowed to split in an unlimited manner (e.g., no maximum depth), it may become too closely aligned to the specific features of the training data and make errors when new data are introduced.\n", "* It is important to consider that the model results may be biased due to the design of a virtual intervention, which is likely limited to individuals with internet access.\n", "* As noted in other lessons, the quality of health data from EHRs can be highly variable and data may be missing, which could affect the outcome of this study (e.g., medication adherence data).\n", "* The model may be **incorrectly interpreted** in its application for deciding where and for whom the weight loss intervention should be deployed. For example, applying a classification threshold for predicted weight loss that depends on race.\n" ] }, { "cell_type": "markdown", "id": "e3d283f6", "metadata": {}, "source": [ "
\n", "

Considerations for Project Planning

\n", " \n", " \n", "
\n", "
    \n", "
  • Are you currently using or considering supervised learning methods, with a focus on health equity, in your project?
    • If yes, what specific supervised learning methods have you been using or plan to use to address health disparities, promote equitable health outcomes, or support public health interventions?
    • If not, are there areas in your work where you see the potential benefits of employing supervised learning methods to analyze health-related data, identify social determinants of health, and develop strategies for achieving health equity and improving overall well-being in diverse populations?
    \n", "
\n", " \n", "
\n", "\n", "\n", "\n", "\n", "\n", "\n" ] } ], "metadata": { "celltoolbar": "Tags", "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.17" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false }, "vscode": { "interpreter": { "hash": "aee8b7b246df8f9039afb4144a1f6fd8d2ca17a180786b69acc140d282b71a49" } } }, "nbformat": 4, "nbformat_minor": 5 }