Data Ethics Case Studies

  1. Using Wearable Devices in Clinical Trials
  2. Reusing Health Data for Secondary Research
  3. Collecting Social Determinants of Health Data
  4. Using AI to Predict Patient Outcomes
  5. Social Media Data for Mental Health Research

Case Study 1: Using Wearable Devices in Clinical Trials

Scenario:

Participants in a clinical trial are provided wearable devices to monitor their health. Some participants report concerns that employers or insurers might gain access to their data.

Discussion Questions:

  1. How can researchers ensure that wearable device data remains confidential?
  2. What policies should be in place to prevent misuse of wearable data?
  3. How can researchers address participants’ concerns without undermining the trial’s objectives?

Case Study 2: Reusing Health Data for Secondary Research

Scenario:

A research institution plans to use previously collected clinical data for a new study without obtaining additional patient consent, arguing that the original consent covered broad data use.

Discussion Questions:

  1. Is the reuse of data ethical if explicit consent for the new study wasn’t obtained?
  2. How can researchers ensure that participants’ expectations about data use are respected?
  3. Should there be a universal framework for broad versus specific consent in health research?

Case Study 3: Collecting Social Determinants of Health (SDOH) Data

Scenario:

A health system wants to collect data on patients' housing, income, and food access to better address social determinants of health. Some patients express discomfort, fearing their data might be used against them.

Discussion Questions:

  1. How can the system ensure informed consent and build trust with patients?
  2. What safeguards should be in place to prevent misuse of sensitive SDOH data?
  3. How can the data be used to improve care without perpetuating stigma?

Case Study 4: Using AI to Predict Patient Outcomes

Scenario:

A hospital adopts an AI tool to predict patient readmissions. While it performs well overall, it systematically underestimates risk for certain demographic groups, leading to unequal care.

Discussion Questions:

  1. What steps should be taken to identify and address bias in the AI model?
  2. Who is responsible for monitoring and updating the model to ensure fairness?
  3. How can transparency in algorithmic decision-making be improved?
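The first discussion question asks how bias in such a model could be identified. One common starting point is a subgroup error-rate audit: compare how often the model misses true readmissions in each demographic group. The sketch below is purely illustrative (the data, group labels, and function name are hypothetical, not the hospital's actual tool) and assumes a Python analysis environment.

```python
# Illustrative audit sketch with hypothetical data: compare false-negative
# rates across demographic groups. A higher false-negative rate for one
# group means the model more often labels that group's genuinely high-risk
# patients as low risk, i.e., it underestimates their readmission risk.

from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, predicted_high_risk, actually_readmitted)."""
    missed = defaultdict(int)     # readmitted patients the model flagged as low risk
    positives = defaultdict(int)  # all patients who were actually readmitted
    for group, predicted_high_risk, readmitted in records:
        if readmitted:
            positives[group] += 1
            if not predicted_high_risk:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

# Hypothetical audit records: (group, model predicted high risk?, readmitted?)
audit = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, False),
    ("B", False, True), ("B", False, True), ("B", True, True), ("B", False, False),
]
rates = false_negative_rates(audit)
# In this toy data, group B's readmissions are missed twice as often as
# group A's -- the kind of disparity an audit should surface and escalate.
```

A gap like this would not, on its own, settle the fairness question; it is the trigger for the follow-up questions about who is responsible for retraining, monitoring, and documenting the model.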

Case Study 5: Social Media Data for Mental Health Research

Scenario:

Researchers propose using publicly available social media posts to study mental health trends. While the data is technically public, the people who posted it never explicitly consented to its use in research. Concerns arise about privacy, re-identification, and stigmatization.

Discussion Questions:

  1. Does the public nature of the data justify its use in research without explicit consent?
  2. How should researchers balance societal benefits with individual privacy?
  3. What safeguards can reduce the risk of harm to individuals?
  4. Should researchers inform the public about their study or offer a way to opt out?
  5. How does this case reflect the evolving concept of informed consent?