Today, nearly all academic research involving human participants is subject to ethical regulation. Proposals must be approved in advance by an ethics committee, or what is referred to in the US as an Institutional Review Board (IRB). However, this has not always been the case, even in the field of medicine.
Until the postwar period, in the UK, the US and many other countries, what was ethical in medical research was decided by the doctors carrying it out. While most exercised restraint, some early investigations were judged to be unethical even by colleagues. The researchers concerned were criticised for privileging the likely value of their findings, as well as the payoff for their own careers, over the interests of the patients involved in their investigations.
It was in response to these concerns that regulation of medical research began, initially in the US in the 1950s. There, the requirement for ethical regulation of federally funded projects came to be enshrined in law. This not only required IRBs to operate within institutions receiving federal funds for medical research, it also created an overarching bureaucratic structure that laid down requirements for IRBs, as well as monitoring and policing them.
Over time, the complexity of the requirements and the threat of suspension of federal funds to institutions for breaches led to IRBs and their supporting bureaucracies becoming substantial administrative departments. IRBs were charged with ensuring that research met the federal and local institutional requirements. Furthermore, the regulation came to be extended beyond research that received federal funding. And, later, it was applied to non-medical fields, including the social sciences.
As Simon Whitney, a US doctor turned ethicist, shows in his new book From Oversight to Overkill: Inside the Broken System That Blocks Medical Breakthroughs – And How We Can Fix It, the driving force behind ethical regulation of medical research has increasingly become institutions’ concerns with protecting themselves against funding penalties and patient litigation. In the book, published last week, Whitney argues that the situation is now one of gross over-regulation, which does not even achieve its declared goal of protecting patients involved in research.
For example, the informed consent forms that are mandated for potential recruits to a study have become so complex and detailed that many patients are unable to understand them or are unwilling to spend the time trying. Indeed, patients frequently see them as unnecessary, designed only to protect the interests of the institution. Perhaps even more significantly, this over-regulation also costs lives by delaying the introduction of new treatments, by months or even years.
Whitney joins other critics of ethical regulation, such as Carl Schneider in his 2015 book The Censor’s Hand: The Misregulation of Human-Subject Research. Their argument is not, of course, that regulation of medical research should be abolished, but rather that it ought to be more selective, focusing only on cases where there is a high risk of serious harm.
They also argue that it must be more flexible, attuned to the variable characteristics of particular forms of research and their distinctive institutional locales. This surely also applies to non-medical research involving human participants. Generally speaking, the risks of harm from research in fields such as psychology, the social sciences and the humanities are much lower than in medical investigations. Yet, while the regulatory requirements in these fields are not usually as complex or demanding, there has nevertheless been a creeping extension of the breadth and depth of regulation.
So the criticisms of Whitney and others apply to ethical regulation of these fields too. Timeliness can sometimes be just as important in applying the results of social research as it is in the medical arena, and ethical regulation introduces significant delays.
More worrying still is that risk-averse ethical restrictions can distort social research by ruling out particular methods or hampering their application. One small example is ethics committees' frequent demand that education researchers operating in secondary schools obtain informed consent not only from all participants who might be observed or interviewed, but also from the parents of the children – requiring that they opt in. These requirements are not necessary in most cases to protect participants from harm, and they can stymie effective research.
What all this highlights is that we can have too much of a good thing. While ethical regulation of research in some areas is clearly necessary, elsewhere it can damage not just the research itself but also the societal benefits from it. Regulation should be applied more selectively, and proportionately according to the risk involved, so as to minimise the harm it currently inflicts.
A great deal of social research, and even some in medicine, does not require regulation, and we cannot afford the consequences of the system now in operation.
Martyn Hammersley is emeritus professor of educational and social research at The Open University.