FABBS Invites Case Studies on Ethical Challenges
The Federation of Associations in Behavioral and Brain Sciences is creating a compendium of principles and case studies regarding ethical challenges facing scientists in the behavioral and brain sciences. The case studies will be relatively short (100 to 300 words) and will be accompanied by commentaries provided by members of the editorial board. The editorial board currently consists of Max Bazerman, Harvard University; Jenny Crocker, Ohio State University; Susan Fiske, Princeton University; Joshua Greene, Harvard University; Todd Heatherton, Dartmouth College; Joseph Simmons, University of Pennsylvania; Uri Simonsohn, University of Pennsylvania; and Sam Sommers, Tufts University. Robert Sternberg will serve as the editor for the project.
We are asking behavioral and brain scientists to contribute case studies representing ethical challenges they have faced in their own careers. Scientists will of course be credited in the resulting volume as the author of the case study, unless they ask to remain anonymous. Here is what we would need:
- A description of the ethical challenge.
- What, if anything, made solving the ethical challenge difficult.
- How you resolved the challenge.
- What you might have done differently were you to face the situation today.
- What general principle, if any, you can infer from the case study you provide.
We need your help! Please consider submitting a case study. Please keep in mind that the ethical challenges do not have to represent deliberate ethical lapses. They may well represent lapses or potential lapses that, at the time, did not even seem to raise ethical issues. Case studies must be submitted no later than July 31, 2012. They are subject to editing by members of the editorial board. Acceptance of submissions is not guaranteed.
Two example cases follow so that you can see what the case studies might look like.
For more information or to submit a case study, contact Robert Sternberg, PhD, at Robert.Sternberg@okstate.edu.
SAMPLE CASE STUDY 1
Some years ago, I had a grant from a major federal agency. I had several monitors on the grant to whom I reported. One of them, I knew, was finishing up a PhD thesis. The monitor knew that I taught a course on multivariate data analysis. One day I was talking to this monitor on the phone and, to my surprise, the monitor asked for help in analyzing the data for the monitor's PhD thesis. This request struck me as ethically inappropriate. I did not know what to say, so I said I would consider further what the person was asking. I was unsure of what to do, so I called another of my grant monitors and asked her confidentially for her advice. She told me it would be a conflict of interest to help and that, moreover, she was ethically obligated to report our conversation to their joint superior, even though this was not what I intended. I never heard about the issue again from anyone, but I did feel that my relations with the grant monitor who had asked for my help became strained.
If I were in the same situation again, I would probably do the same thing, realizing that once I spoke of the situation to the other monitor, she was ethically obliged to report it.
Possible general principle: Scrupulously avoid conflicts of interest when working with grant monitors, and if one emerges, deal with it immediately.
SAMPLE CASE STUDY 2
Early in my career I was analyzing data for a research project. In general, I thought the descriptive statistics were looking quite good. We were looking at the effect of a particular manipulation on reaction times for solving reasoning problems. When we computed the inferential statistics, however, we found, disappointingly, that the effect of the manipulation was only marginally statistically significant, at the .07 level. It occurred to me, at that point, that I was using a two-tailed test and that although I had originally conceived the testing as two-tailed, upon reflection, there was no reason that the particular manipulation should have made reasoning worse. If I were to re-conceptualize the significance testing as looking only at whether the manipulation made reasoning better, I could do a one-tailed test and the result would be statistically significant at better than the .05 level. I knew, however, that changing the nature of the statistical test after I already had done it was essentially fudging the data, so I left it alone and reported the result as marginally significant.
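The arithmetic behind the temptation is simple: for a symmetric test statistic, the one-tailed p-value is exactly half the two-tailed one, so a two-tailed p of .07 becomes a one-tailed p of .035. A minimal sketch in Python, using a hypothetical z-statistic chosen only to land near the .07 level described above (the original case does not report its actual statistic):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical z-statistic, picked so the two-tailed p is about .07.
z = 1.81

p_one_tailed = 1.0 - normal_cdf(z)        # probability in the upper tail only
p_two_tailed = 2.0 * p_one_tailed         # probability in both tails

print(f"two-tailed p = {p_two_tailed:.3f}")  # about 0.070: "marginal"
print(f"one-tailed p = {p_one_tailed:.3f}")  # about 0.035: "significant"
```

The same data cross the .05 threshold or not depending solely on a choice made after the fact, which is exactly why the directionality of the test must be fixed before the analysis.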
If I were in the same situation again, I would do the same thing.
Possible general principle: Do not change a posteriori what you have stipulated as the a priori bases of your hypothesis testing.