On Hauser

20 Jul 2011

Marc Hauser, the Harvard primatologist and psychologist who was recently accused of mistreating evidence and graduate students, has resigned. I am in two minds about this. His work, although I am unconvinced by some of it, was very important, and he was good at communicating with lay readers (including philosophers). I met him and was impressed by his demeanour and generosity. On the other hand, if he did deliberately misinterpret his data, that is an offence. Whether it is a hanging offence is moot. Pasteur is reported to have ignored 90% of his data because it did not fit his contagion model. We excuse him because he was able to identify bad data, and it turned out he was (mostly) right.

Hauser also trimmed and interpreted his data. This is a common scientific activity, and one that cannot be automated by statistics, no matter how often or how hard people try. Some data are not really important because of obvious biases (the equipment failed, the reagents were contaminated, and so on). But in interpreting behaviour, one has no automatic procedure, just interpretation. And when the organisms themselves are very like us, we can overinterpret without being aware of it. Add to the mix that the hypothesis in play is that they are very like us (that is, that they behave like us because they have the same motivations and capacities in various degrees), and the likelihood of overinterpretation is high. That seems to be what happened here.

But is Hauser therefore a bad scientist? If all we have is interpretation, then no, although he is probably “guilty” of confirmation bias, a persistent and ubiquitous problem in all science. His data are less reliable than Pasteur’s, so he should have used his judgement. The problem is that such judgement is entirely subjective, and he clearly overinterpreted. That is not enough to kill a career.
Now, as to how he treated his graduate students, I cannot say anything, except that if he did treat them badly, he would hardly be the first supervisor to do so. But, as with marriage difficulties, the only ones who know what transpires between an advisor and a student are those involved.
Isn’t it also the case that, of the three contested papers, two have since been replicated with their results confirmed, and only one was retracted? Not ideal, but it is difficult to pin on Hauser the crime of intentional and systematic manipulation of results. He’s no Hwang Woo-suk. The bigger problem, in my opinion, is the lack of transparency surrounding his investigation. I employ some of his theoretical work in my own thesis, and I want to know precisely what, if anything, is to be viewed with scepticism. I look forward to Harvard releasing the full report.
I seem to recall that someone who reviewed his research said there was no sign of the described behavior among the tamarins, and that some of the students working on it had serious reservations about what was written. I also seem to recall that one “co-author” had done no more than read the conclusions. That studying behavior is extremely difficult isn’t an excuse for lax standards; it’s a crisis in the scientific nature of the entire enterprise. Building on this kind of thing is the endemic disease of the would-be behavioral sciences. The whole purpose of doing science is to try to get more reliable knowledge, after all. If you’re going to follow practices that don’t produce that level of reliability, it would be more honest to call the result what it is: lore. Though much of lore is more modest in its claims and ideological motivation.
The problem is that none of us knows exactly what scientific misconduct Hauser was found solely responsible for. Were Harvard to release the report, we could all judge which of his work to be skeptical about and which we could continue to trust. However, it appears that Harvard will never release the report. That is a problem for the field, as it casts a cloud over all of Hauser’s work.

Tim Dean says that two of the papers in question have been replicated. That is not my understanding. One source says that missing videos and field notes were found, while another says that Hauser went back to the field, collected more data, and found the same thing. However, neither Science nor the Proceedings of the Royal Society has disclosed what was actually done, or how they know that the new data, if there are new data, are nonfraudulent.

Hauser seems to have been given an awful lot of benefit of the doubt for someone found solely responsible for eight different incidents of scientific misconduct. In the end, it is the scientific community that gets the short end of the stick. My own suspicion is that what Hauser was actually found guilty of is only the tip of the iceberg. One doesn’t accidentally commit eight instances of scientific fraud. Personally, I would not cite any of Hauser’s work that has not been replicated by someone other than his laboratory.

It is good that Hauser has resigned. It is appropriate and welcomed. It is not appropriate that the field is left to guess the extent of his transgressions and wonder which, if any, of his work is to be trusted. In that, Harvard has failed the academic community.
Kim Wallen wrote: “Hauser seems to have been given an awful lot of benefit of the doubt for someone who was found solely responsible for eight different incidents of scientific misconduct. … Personally, I would not cite any of Hauser’s work that has not been replicated by someone other than his laboratory.”

Would you cite his co-authors, or those who supposedly reviewed him? And how do you know those replications are any more reliable, since they would be the product of the same system that allowed those lapses over such a long period? I strongly suspect that the benefit of the doubt is regularly given in the study of behavior, animal and human, and, I’m confident, now over fMRI and other pictures of unknown reliability and meaning. How many entire schools of psychology have been given decades of that benefit of the doubt, only to be entirely overturned in the relatively short history of psychology? How many schools in the other would-be behavioral sciences? That is a symptom of a scientific enterprise with serious, basic problems.

In light of the Hauser scandal, which I suspect is at least as serious as you have said, how much other widely taught and used work is being reviewed critically enough to find out whether there aren’t similar problems with it? If someone can reach a position such as the one he held with problems that serious, that is certainly evidence of problems at least as serious with the field’s foundations. There are many more people published and accepted in that and allied fields whose work has glaring problems. Kevin MacDonald, for a glaring start.
I’ve been doing a lot of reading about the problem, in human research, of relying on self-reporting, on which an enormous part of social science absolutely depends. It’s obvious that deceptive, inaccurate, or confused self-reporting of personal ideas and experiences happens; I’d suspect, and others seem to suspect, that it happens quite frequently. How much published research routinely incorporates unverifiable self-reports of experience, ideas or, heaven help us, opinion, and even action? How much resulting inaccuracy is routinely incorporated into papers which are allegedly part of science? Even in the rare cases in which it is theoretically possible to check self-reports, how often is that required, especially in samples large enough to be truly representative? It is impossible to know how many responses given in a specific research project are really in line with the reality the paper assigns them. The level of accuracy in even that essential aspect of the method is unknowable and almost certainly highly variable. There is no way to evaluate or quantify it, and on it rests the accuracy of the results.

I believe that the deceptions in this case were almost certainly not accidental, if what has been reported is accurate. I suspect that a science which so routinely cuts corners on the basis of exigency invites deception to be routinely incorporated into it. I believe the culture such a science develops tolerates higher, not lower, levels of deception, and that the chances of gross deception increase as the eminence of the deceiver rises and the cost of questioning them grows. I am certain that this accounts for the routine overturning of entire schools that dominate until their accumulated lapses reach the point where the whole thing falls down. The desire, or even need, for sciences of behavior doesn’t erase the problems with studying behavior scientifically.
Your results don’t become more accurate or reliable by pleading the difficulty of applying science to the subject. It might be nice to have a reliable science of economics or history or the law, but those topics are too complex for science really to apply to them; the attempt only creates pseudo-science and, in the worst cases, economic depressions, totalitarian governments, and gross injustice. The results of the attempt to apply science to behavior have been bad enough, and have gone on for long enough, that the widespread credulity in those results is an indictment of the integrity of our intellectual culture. The widespread acceptance by educated people of the horsefeathers of Freudianism and then Behaviorism should be a far bigger scandal than it is. As the hulk of evolutionary psychology gives way, it should be an occasion for deeper questioning of the intellectual assumptions that produced all three.
I agree with John Wilkins that our brains can trick us, and that misconduct is not necessarily conscious and therefore not necessarily intended fraud. But this sort of unconscious misconduct is only possible in isolation or as a folie à deux; a folie à dix is unlikely. As Hauser’s students are said to have aired their doubts, perhaps he should have treated his data and interpretations rudely rather than his students. That is what skepticism would have demanded.
Good post. If we’re going to pillory someone, I can think of an ex-Oz media mogul who could use a couple of rotten tomatoes in the face.