I’m a neuroscientist. Or, well, an aspiring one (let’s not get into the philosophical discussion of when exactly someone can call themselves a scientist — that’s a whole other post for another day). When I mention this, people automatically assume a lot of things, the most common assumptions being incredible intelligence and, perhaps, a lack of a social life. To some people, however, science also carries a connotation of distance; of the ivory tower; of something experts do over ‘there’ that has no bearing on the average person’s life in the ‘real world’. In the very worst case, people might assume malevolence: that you’re in the pocket of Big Pharma to propagate vaccinations, or part of China’s elaborate plot to make US manufacturing non-competitive by creating (and apparently recruiting the scientific community to proselytise) the concept of ‘global warming’. The musings of the Leader of the Free World aside, I think there is some merit to the claim that science is at times far removed from the ‘real world’. Sometimes that is a good thing, and sometimes it can cloud our judgment.
In February 2017, The Atlantic published an article about advances in the technologies used in neuroscience, and the criticism some scientists have raised about how these techniques are used. These critics warn of the spectre of reductionism looming over our quest to understand the brain. John Krakauer and colleagues (2017) published an article in Neuron discussing the problem of reductionism in neuroscience. They argue that advances in technology have created a class of researchers who are well versed in the novel techniques but have a tendency to disregard the organism: behaviour, development, and evolution are treated as secondary to the neural circuits and the exciting new technologies. As The Atlantic article notes, wanting to include behavioural research is at times viewed with scepticism in the neuroscientific community, underpinned by the apprehension that behavioural research is the sole domain of psychology. This disregards the fact that the lines between psychology and neuroscience are often much blurrier than people give them credit for (not to mention that inferring behaviour from circuits seems to be going at it in the wrong order). Basic biomedical research into conditions such as autism spectrum disorder (ASD) can at times run the risk of disregarding the voices of the autistic community, who have called for conditions such as ASD, previously simply classed as ‘disorders’, to be seen instead as variants of normal human behaviour (see more on neurodiversity here).
Neuroscience is hardly the only life science that runs the risk of forgetting the human component of research or treatment. Medicine is famously known for occasionally treating patients as their illnesses and conditions rather than as human beings. One suggested reason for this is the need to distance oneself from the patient, and consequently from individual responsibility for what happens to them. In his book Do No Harm, Henry Marsh hypothesised that a practice as common in neurosurgery as shaving a patient’s head might have its origin in dehumanising the patient in order to make it easier for the surgeon to operate. However understandable it may be to keep one’s distance and avoid becoming emotionally attached to patients, it is surely possible to do so without veering into the territory of dehumanisation.
Recently, Ed Yong looked in The Atlantic at the ethics of a virus study that resurrected the extinct horsepox virus. It is more important, the argument goes, to push the boundaries and expand knowledge, even if that comes at the expense of ethics or concern for global consequences. The scientific quest for knowledge is an honourable one; however, in my opinion, scientists cannot disregard ethics, or the consequences for humans and the environment, in pursuit of it.
I doubt anybody expects scientists to make these ethical decisions on their own, or to constantly think of every possible consequence of their research. However, I believe all of the aforementioned cases highlight the importance of communication outside of the (biomedical and/or scientific) community: with ethicists, psychologists, government, and, importantly, the public. If we want government, the public, and our colleagues in the humanities to respect science and its place in society, then we have to be more responsible as a community. In the life sciences in particular, it is important to avoid reductionism and to remember that most of the research we do will affect people. We cannot recklessly sacrifice our humanity in the quest for knowledge, consequences be damned. Science is not removed from society, and if we want the public to believe us when we say that, we will have to act as if we believe it ourselves.