Science, control & the enactment of bias: an autoethnographic case study

Alex Haagaard
Apr 21, 2017 · 10 min read

In the run-up to the Science March, I’d like to talk about how science — as an ideal and a methodology — necessarily enacts bias. I’ve come prepared with a case study, but first, I think a little pedantry is in order.

What is science? Oxford Living Dictionaries defines it as, “The intellectual & practical activity encompassing the systematic study of the structure & behaviour of the physical and natural world through observation and experiment”, and as, “A systematically organized body of knowledge on a particular subject.”

Notice the centrality of “intellectual & practical activity” to the first definition. Already, we have entered the realm of the human, the biased, the fallible, the limited. Science is an inherently, unavoidably human activity. There is no science without human cognition. Of course, it has been & I’m sure it will again be argued that science is above all an attempt to minimise that inherent bias through application of the scientific method, which systematises observation & the treatment of sources of error & imprecision. Systematisation is indeed central to both the ideology & practice of science. It’s right there in both of those Oxford definitions. It also happens to be a prominent part of their definition for the “scientific method”: “A method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement and experiment, and the formulation, testing, and modification of hypotheses.”

For the sake of a larger sample, let’s check those definitions against another respected English dictionary, Merriam-Webster. [Image description: screenshots of definitions of “Science” and “Scientific Method” from Merriam-Webster online dictionary. ‘Systematized’ and ‘system of knowledge’ are each highlighted in yellow twice. ‘System’ and ‘systematic’ are each highlighted in yellow once.]

So, it appears that systematisation plays an integral, structural role in the very idea of science. Okay, so what is systematisation? The definitions from Oxford Living Dictionaries and Merriam-Webster both depend on the use of a ‘definite’ or ‘fixed’ plan or scheme. The very idea of science presupposes the existence of frameworks that can be used to sort observed phenomena. (This is known as representationalism, and its ontologically problematic qualities became apparent with the formulation of the uncertainty principle.)

Presumably, the staunchest defenders of science will argue that science’s principles of iteration & falsifiability absolve its presupposed epistemic frameworks of bias. Any framework that we use to sort data into separate categories has been established through prior experimentation, and is subject to rejection if it is subsequently found not to reflect reality. But this neglects the frameworks we use to determine what counts as data in the first place.

What predefined frameworks allow us to set the very boundaries between categories of phenomena — between what belongs to a phenomenon and what doesn’t? The significance of presupposition to scientific thinking manifests as much in the way observations are made as in how they’re characterised. This is a structural component of all science, but becomes more readily apparent in the enactment of ‘human’ sciences, such as medicine.

To which end, story time! I spent this past Monday doing a Maintenance of Wakefulness Test (MWT). The goal of this test is to quantify a patient’s ability to stay awake under controlled conditions for a defined period of time. There are two main protocols for this test, but both involve several evenly spaced, time-limited testing periods in which the patient is asked to sit in bed, with their back and head supported by pillows (in an environment with controlled lighting, temperature, etc.), and to try to stay awake for as long as they can.

At the sleep clinic. [Image description: A fat white nonbinary person wearing a black tank top and black leggings reclines against a bed with beige sheets and brown pillows. On their stomach sits a blue box from which emerges a tangle of bright, multicoloured wires, several of which are connected to their face with patches of white tape. Their left arm is draped above their head. They are wearing saturated red lipstick. The walls are beige, with rectangles of white fabric and dark brown wood panelling breaking up the space.]

This test is used to evaluate how well a person with a sleep disorder is responding to treatment. Its primary role lies in determining whether a person with a treated sleep disorder should be allowed to retain their driver’s license, but it is also the main means by which the efficacy of prospective sleep disorder treatments is trialed.

Onset of sleep is defined as the appearance of 90 seconds of sustained stage 1 sleep or 30 seconds of any other stage of sleep. This is significant because it defines what qualifies as sleep & what does not (on the basis of brainwave activity observed via electroencephalogram). That is, it is one of those predefined frameworks that defines what qualifies as a phenomenon and what does not.

As you begin each testing period of an MWT, you receive a scripted set of instructions from the technician. You are told to relax with your eyes open, & to try to stay awake as long as possible without doing anything like singing or pinching yourself. Now here’s the tricky thing: My autistic self tends to take instructions very literally. So when I’m told to try my best to stay awake, that’s precisely what I am going to do. And having lived with narcolepsy for nearly eight years, and functioned more or less passably in public for most of those, I’ve gotten DAMN good at trying to stay awake.

In particular, I’ve gotten damn good at feeling when my brain is entering sleep & performing small, discreet (procedurally permissible) actions to stave that sleep off. It’s not a functional solution from a quality of life perspective, or even from a personal or public safety perspective, because all it does is momentarily interrupt the neural patterns of sleep, only for them to return within a matter of minutes or, more likely, seconds. So my procedurally allowable attempts to stay awake really don’t count from a functional perspective. But you know where they do count? On the Maintenance of Wakefulness Test score. Because every time I blink or roll my eyes to jerk myself back from the precipice of sleep, that sleep timer resets to zero. So, I feel myself drifting into Stage 1 (yes, I can by now subjectively distinguish my sleep stages) & manage to drag myself back to consciousness after 30 seconds or 60 seconds or even 89 seconds — and by the predefined logic of the test, I have not fallen asleep. I start to enter Stage 1 again 10 seconds later, & yank myself back another 30 seconds after that — and I still haven’t fallen asleep. This is, of course, exactly what transpired over the course of my first testing period on Monday.
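
To make that resetting logic concrete, here’s a minimal sketch of how an onset rule like the one above plays out. It’s illustrative Python with invented data, scored second-by-second rather than in the 30-second epochs a real sleep lab would use, and it is emphatically not clinical scoring software:

```python
# A toy model of the onset rule described above: 90 s of sustained stage 1
# sleep, or 30 s of any other sleep stage, counts as "sleep onset".
# Stages are invented, per-second labels: "W" = wake, "N1" = stage 1,
# "N2" / "N3" / "REM" = other sleep stages.

def sleep_onset_second(stages):
    """Return the second at which 'sleep onset' is scored, or None if,
    by this rule, the person 'never fell asleep'."""
    run_stage, run_length = None, 0
    for t, stage in enumerate(stages):
        if stage == run_stage:
            run_length += 1
        else:
            # Any change of state -- even a two-second blink back to
            # wakefulness -- resets the clock.
            run_stage, run_length = stage, 1
        if run_stage == "N1" and run_length >= 90:
            return t
        if run_stage in ("N2", "N3", "REM") and run_length >= 30:
            return t
    return None

# A hypothetical half hour: repeated 80-second drifts into stage 1, each one
# cut short by a couple of seconds of deliberate wakefulness.
trial = (["N1"] * 80 + ["W"] * 2) * 22

print(sleep_onset_second(trial))  # None -- scored as "did not fall asleep"
```

Roughly 29 of those 30 minutes are spent in stage 1 sleep, but because no single run crosses the 90-second threshold, the rule records no sleep at all.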

Objectively speaking, with the benefit of all the controls put in place to eliminate ‘error’ and ‘bias’, I did not fall asleep. Except, I did. Many times, over the course of 30 minutes. I would be terrified to drive like that — but I’d be allowed to. I could not — can not — function normally in society like that. But objectively speaking, I was nonetheless awake. I am experienced enough with my own narcolepsy and with neuroscience in general to know what neural activity my subjective experiences over that half hour were reflecting (or vice versa). But scientifically, objectively speaking, neither my subjective experiences nor my brainwaves achieved significance.

Functionally, we have several problems here.

  1. We have a testing protocol that is implicitly performative, particularly if considered in context as a pre-qualification for a drug trial. The instruction to “try your best to stay awake” is not meant to be taken at face value, because ultimately it is expected that the patient *will* fall asleep during the testing period. Notably, even non-sleep-disordered patients are likely to fall asleep: a latency of greater than 8 minutes (in testing periods of 20, 30 or 40 minutes) is considered normal under usual protocols. So in fact the enactment of the test ends up being a performance of trying-but-not-really-trying-too-hard to stay awake. Objective, controlled, unbiased as this scientific procedure is, it is ultimately a performative act.
  2. We have inaccessible clinical language. This is a very common problem for autistic folks, as we regularly encounter imprecise diagnostic & procedural language. (I have taken to populating every clinical questionnaire I complete with marginalia to explain my answers, for fear of misinterpretation. And also just to be a dick, because it pisses me off that a supposedly rational profession uses such imprecise language & logic so consistently. But whatever.) The point is that poorly considered clinical language creates a barrier to diagnosis, treatment and general participation in healthcare for many people. This is by no means a problem just for autistic or other neurodiverse people. When I was doing ethnographic research with medical professionals on the subject of medical communication devices, a unanimous complaint was that patients “don’t know enough about their medical history”. This is, of course, also a common complaint I hear from friends and colleagues who have entered the medical profession. There is a very pervasive idea within medicine that failure to use or understand clinical or other medical terminology equates to an insufficiency of understanding. And this is reflected in the language of clinical questionnaires, clinical interviews, and clinical testing protocols. (It seems to me that the MWT procedure could be easily remedied by changing the instruction from “try to stay awake” to “do not try to fall asleep”. Ultimately, the MWT is more or less functionally opposed to the Multiple Sleep Latency Test, in which patients are given the instruction to “try to fall asleep”. Given the functional opposition of this pair of tests, and taking into account the performative context of the testing environment, the instruction not to try to fall asleep would more accurately communicate the intent of the test, thereby improving its accessibility.)
  3. Benchmarks & significance levels: these are crucial to the production of reproducible, falsifiable — objective — scientific findings. As occurred during my test on Monday, they also erase phenomena, and do so with potentially substantial functional consequences (see the sketch after this list). The very frameworks and procedures by which science secures its reproducible, falsifiable, minimally-biased aims ultimately enact bias as well.
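
To see how the onset rule and the latency benchmark compound, here’s a small follow-on sketch. The numbers are again invented, and it assumes, for illustration only, the common convention that a trial in which no sleep onset is scored gets recorded at the full trial length:

```python
# Invented per-trial latencies (in minutes) for a protocol of four 30-minute
# testing periods. A trial in which "sleep onset" was never scored -- like the
# near-continuous dozing modelled in the earlier sketch -- is recorded here as
# the full 30 minutes (an assumed convention, for illustration).
TRIAL_LENGTH_MIN = 30.0
NORMAL_CUTOFF_MIN = 8.0   # latencies above this are read as "normal"

trial_latencies = [TRIAL_LENGTH_MIN] * 4

mean_latency = sum(trial_latencies) / len(trial_latencies)
verdict = "normal" if mean_latency > NORMAL_CUTOFF_MIN else "abnormal"

print(f"mean latency: {mean_latency:.1f} min -> {verdict}")
# mean latency: 30.0 min -> normal
```

By the time the benchmark is applied, the half hour of interrupted dozing has already been erased by the onset rule; the benchmark then certifies the result as normal wakefulness.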

It’s worth noting at this point that the staff at this testing centre were great; none of this is intended as a criticism of their scientific activity, but rather of the inbuilt limitations of science-the-ideal and science-the-method, because most critiques of science seem to focus on science-as-practiced — somehow supposing that this is separable from science-as-method — and usually pause to defend the virtue of The Scientific Method. (Representationalism strikes again.)

It’s important to me that just as we recognise the strengths of the scientific method, we also recognise its limitations and flaws. It’s important to me that we be allowed to critique the scientific method and the ideal of science without being met with a knee-jerk defense of “well it’s better than anything else” / “religion’s worse” / “you’re an anti-vaxxer”. It’s important to me that we critique the ideals & methods of science in order to understand who & what is being excluded and how their inclusion might change science for the better.

It’s also worth noting that my MWT had (I think) a happy ending, because the sleep centre staff seemingly realised my literal tendencies, changed their wording slightly, and helped me to understand the performative expectations of the test. Their ability to be inclusive ultimately yielded a test that (I think) more accurately reflected my somnolent tendencies. Which speaks to my interest in how critique of science, and pursuit of inclusivity, can in fact produce scientific knowledge that more accurately reflects the world in which and from which it is produced. And which is, thus, ultimately more scientific.

To end with a touch of philosophical pomposity, I’d like to discuss how quantum ontology may be instructive as to the value of making science more inclusive. As noted earlier, quantum mechanics presented a major ontological challenge with the formulation of the uncertainty principle. Very briefly, the fundamental limit on how precisely information about a system can be known raises questions about the ‘actuality’ of that system’s state. Traditional mechanics and ontology take the representationalist approach, which assumes that the words and numbers and other devices we use to describe (represent!) a system are something separate from how that system actually is. And by extension, they assume that our system of interest has an ‘actual’, definite state of being in the first place.

With ideas like the uncertainty principle and Schrödinger’s cat challenging both the precision with which we can ever observe a system and the fundamental distinction between observer and system, the ontological question arose of whether ‘things’ in fact have a definite existence outside of their representation, observation, measurement, etc.

However, from a pragmatic perspective, the ontological arguments of the Copenhagen interpretation of quantum mechanics — and, relatedly, of poststructuralism — seemingly provide little guidance in terms of how to understand and intervene in reality. This is why I like David Bohm’s ‘hidden variables’ interpretation of quantum theory, from a specifically pragmatist perspective. Again very briefly, Bohm’s theory postulates an ontology in which a particle or system has (manifests? enacts?) a determinate state. However, in contrast to traditional representationalism, this state is defined by one or more organising properties or phenomena that are neither apparent to, nor incorporated by, classical representations.

Within Bohm’s ontological interpretation, he recognises a potential role for both the intrinsic configuration of the system itself and the extrinsic conditions within which the system exists in influencing the system’s state and trajectory. It seems to me that this resonates powerfully with human-focused sciences in particular, and may be read as an ontological argument for the value — indeed the necessity — of integrating subjectivities, along with contextually-sensitive logics, into scientific methodology. As the Science March and popular depictions of science continue to espouse rhetoric that paints science as apolitical and identities as nothing more than confounding noise, it is worth keeping this in mind.

Alex Haagaard

Disability-led design & health justice. Director of Communications for The Disabled List. They / theirs. Tip jar: paypal.me/alexhaagaard