The Wrong Way To Conduct EBD Research
In a recent conversation, Jason Schroer, director of the Houston office of HKS, posed an interesting question that made me think deeply about research in healthcare design and, in particular, evidence-based design. He drew a parallel between our field and the popular Animal Planet show Finding Bigfoot.
The program chronicles a team of three researchers who’ve spent decades tracking evidence of Bigfoot in different parts of the world. They all believe in the existence of Bigfoot and claim to have had their own personal encounters. A fourth member of the team is a field biologist and a skeptic, who accompanies the team on its investigations and offers logical, rational explanations for the rustles, bumps, and whoops that the others see as evidence.
The big question one has to ask is: does all of the evidence they find point to Bigfoot because they believe in Bigfoot? In other words, is the hypothesis forcing them to accept this single explanation and reject other potential explanations?
This analogy holds surprisingly true for evidence-based design (EBD). Are we so hypothesis-driven that we’re rejecting all other explanations for improved outcomes, and zooming in on the ones that support our hypothesis for design?
Let’s say we argue that the physical environment reduces infections. In our studies, are we only looking at data that supports design as the solution as opposed to other possible reasons for reduced infections? Are we, in a way, seeking to find “Big Evidence,” simply because we believe or want to prove its existence?
The power of the null hypothesis
It’s a question many skeptics pose about EBD, and one many researchers ask themselves. It’s very easy to slide down the slippery slope of testing a hypothesis and finding only the evidence that supports it—but that approach is fundamentally contrary to the scientific method.
If you take a statistics class, one of the first things you learn is that you never set out to prove your hypothesis. Instead, you set out to test whether you can reject its opposite. The null hypothesis is the presumption of innocence in a scientific experiment: a default, neutral state that assumes no relationship exists between the key elements being studied.
The null hypothesis provides that neutral starting point, and it’s where every study should begin. So if we’re interested in exploring, say, the relationship between flooring and falls, the starting assumption is that no relationship exists. The alternative hypothesis is that some relationship exists and that certain types of flooring either increase or decrease falls. Starting from neutral ground is key; it ensures that we’re not busy proving what we hope to find.
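To make that logic concrete, here is a minimal sketch in Python of how such a flooring-and-falls test might be set up. The fall counts and the 0.05 threshold are assumptions invented purely for illustration, not findings from any study.

```python
# A minimal sketch of the null-hypothesis logic, using hypothetical fall counts
# for two flooring types. The numbers are invented purely for illustration.
from scipy.stats import chi2_contingency

# Rows: flooring A, flooring B; columns: patient-stays with a fall, without a fall
observed = [[18, 982],   # hypothetical unit with flooring A
            [11, 989]]   # hypothetical unit with flooring B

chi2, p_value, dof, expected = chi2_contingency(observed)

# Null hypothesis: flooring type and falls are unrelated.
# We reject it only if the data make it very unlikely (e.g., p < 0.05);
# we never "prove" the alternative that flooring reduces falls.
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null: some relationship between flooring and falls is plausible.")
else:
    print("Fail to reject the null: no evidence of a relationship in this data.")
```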
Correlation is not causality
The other key component of conducting good research is the distinction between causality and correlation. Just because two things happened at the same time doesn’t mean that one caused the other.
For example, a new replacement hospital opened and its HCAHPS scores went up—two events that happened simultaneously. We can go back and study whether the increase in HCAHPS scores coincided with the new hospital and whether the effect was sustained. If it was, we can argue for a correlation. We cannot, however, claim beyond a shadow of a doubt that the design of the new facility caused the improved HCAHPS scores.
There could be many factors that affected the score: Lean process improvements that were implemented, a change in management, shifts in staffing models, and so on. We can speak of the potential relationship between these concurrent changes and watch for patterns to emerge. But when so many factors change simultaneously, we have to be careful before we state that “our new hospital resulted in improved HCAHPS scores.”
It’s more accurate to say that the HCAHPS scores increased by a meaningful margin, that the effect has been sustained over the three years since opening, and that, based on an analysis of the data, one can argue the building design was a contributing factor (not the cause) in this improvement.
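As a rough illustration of what that before-and-after comparison might look like, here is a short Python sketch with invented monthly scores. The numbers are hypothetical, and, as the comments note, the comparison speaks only to correlation, not cause.

```python
# A minimal sketch of the before/after comparison described above, using
# invented monthly scores; all data here are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

old_hospital = np.array([68, 70, 69, 71, 67, 70, 69, 68, 70, 69, 71, 68])  # last year in old facility
new_hospital = np.array([75, 77, 76, 78, 74, 77, 76, 75, 77, 76, 78, 75])  # first year in new facility

t_stat, p_value = ttest_ind(new_hospital, old_hospital)
print(f"mean change = {new_hospital.mean() - old_hospital.mean():.1f} points, p = {p_value:.4f}")

# Even a large, sustained difference only establishes a correlation with the move.
# Lean process changes, new management, or new staffing models may have changed
# at the same time, so the building is a contributing factor, not a proven cause.
```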
It’s critical that we make this distinction so that designers and researchers remain credible in conversations with a healthcare community that understands scientific methods very well.
Once a pattern emerges in the HCAHPS scores—for example, ratings for “quiet at night” went up after the new hospital opened—one can go back and do an exploratory study. If good baseline information was collected during design, so that we know what the noise levels were in the old hospital and, more importantly, which design elements controlled or contributed to noise there, then it’s possible to collect the same information after the new hospital opens.
The null hypothesis is that the design had no impact on noise levels. The alternative hypothesis is that the new design (with new ceiling tiles, new flooring, new pager systems, and a new configuration) produces lower noise levels. Notice that we’re still not at the point where we can say that acoustic-grade ceiling tiles reduced noise levels, because we have no way of testing whether it was the ceiling tiles, and not the other elements, that had an effect.
In this study, we would also have to account for confounding variables: other elements, such as the number of staff members on the unit, that could also have an impact on noise.
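One common way to account for a confounder like staffing is to include it alongside the design variable in a regression. The sketch below uses simulated data and hypothetical variable names, purely to show the logic of adjusting for a confounder.

```python
# A minimal sketch of adjusting for a confounder: regress nightly noise level
# on a new/old-building indicator while controlling for staff count on the unit.
# All data here are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
new_building = rng.integers(0, 2, n)            # 1 = measurement taken in the new hospital
staff_count = rng.integers(4, 10, n)            # confounder: staff on the unit that night
noise_db = 55 - 3 * new_building + 0.8 * staff_count + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([new_building, staff_count]))
model = sm.OLS(noise_db, X).fit()

# The coefficient on new_building estimates the design effect with staffing held
# constant; it still cannot isolate which element (ceiling tiles, flooring,
# pagers, layout) drove the change, only the building as a whole.
print(model.params)   # [intercept, new_building effect, staff_count effect]
```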
Causality is not everything
Many studies result in disappointment because we posed too narrow a question for too complex a scenario. If you look at the current field of evidence, we know a lot about very little: single patient rooms, handwashing, artwork, ceiling lifts, daylight, and exposure to nature. But we know very little about a lot: unit configurations, interdepartmental adjacencies, flexibility, etc. This may be because these issues don’t lend themselves to a causal framework.
We should be seeking patterns, testing scenarios, and remembering that we set the stage for human performance. Design enables human performance and improved outcomes; it doesn’t exclusively cause them. Overstating the impact of design can, in fact, undermine how seriously our claims are taken. We know that design can make a difference—we just need to be sure the difference isn’t exaggerated and that our data is relevant and accurately represented.
Avoiding this Bigfoot conundrum means making sure we don’t focus prematurely on “proving” that our designs are better. Often, we get so focused on proving and justifying decisions that we forget the purpose of research isn’t to prove; it’s to investigate. Rather than using EBD as a post-rationalization tool, we must see research as an opportunity to be more analytical in our process and more innovative in our solutions—and, most importantly, to use our findings to advance the field, one project at a time.
Upali Nanda, PhD, is the director of research for HKS and executive director for the Center for Advanced Design Research and Education (CADRE), the research arm of HKS. She can be reached at [email protected].