Facebook recently unveiled a tool that assesses whether someone might be contemplating suicide or self-injury, making these predictions from an individual's Facebook activity. As such technologies emerge, so do questions about the methodologies these studies use and the ethical implications of this work. How should researchers source and prepare social media data to make predictions? What methods and measurements are appropriate for building these predictive algorithms? What rights might users have in these studies, and what other ethical implications arise?
This project explores the ethical implications of making individualized predictions about mental wellness using social data. We identify three areas of unresolved ethical contention. First, we examine research design and ethics approval, addressing how ethics committees and IRBs handle such studies and how individuals are defined within them. Second, we examine data collection, analysis, and methods in this field. Third, we examine the implications and outcomes of these algorithms when applied in clinical contexts, considering how these methods affect clinicians, researchers, and the users/patients themselves. We conclude with three calls to action for the field to set standards and propose strategies for addressing these ethical dilemmas.
By tracing the ethical challenges of designing, developing, and deploying these algorithms from start to finish, we hope our analyses and calls to action invite deeper conversations and prompt principled action from a diverse set of stakeholders in this emergent field.