Problematic UX research: analysing, reporting and presenting

A while ago I wrote about what I’ve come to think of as the “many pitfalls of UX, an industry eager to illuminate and provide bright and clamant insights into the world of ‘users’.” There, I touched on some of the problems that arise from the recruitment phase of a research project. Today, I want to attend to the research phases that lie at the far other end but, I believe, are equally important to think through: analysing the collected data, preparing a report and presenting the ‘results’.

Before I do that, let me recap that post where I first reflected on problem areas in UX research. In recruiting research participants, typically at the initial stage of a research endeavour, ‘UXers’, sometimes obsessively but almost always unknowingly, become preoccupied with the reduction of complexity: taking out the messiness of a real-life context before carrying it over into a neat lab/testing environment. This can happen because, typically, one doesn’t simply go up and talk to or observe just anyone but a very specific set of users. Those kinds of users come very close to the idea and image of the user whose needs, requirements and expectations have been considered and envisaged in the design phase. Everyone else is mostly left out.

Whatever one might make of this, I think it’s fair to say that, in general, recruitment in commercial research contexts is so embedded in routines, so much the very first thing that gets done simply to set the wheels in motion, that its profound implications for the course and outcome of the research are rarely, if ever, acknowledged. The way the research is wrapped up through the final triad of analysis, report and presentation is in many ways a whole different beast – especially if the research is qualitative. Arguably, much more thought and consideration goes into this phase, but I can’t help but think that much of the process still doesn’t receive the attention, reflection and time (!) that it should. This is a first attempt at organising some of my thoughts and observations on these final phases of qualitative research – not only in UX but in any commercial context where research is carried out. I’ll break my thoughts down into what I think are five different problem areas across analysis, report and presentation.

Analysing

1. More than just words: Leveraging the full potential of qualitative interviews

Doing qualitative research in commercial contexts rarely means doing ethnography. In some cases, that’s because other decision makers and stakeholders involved in and around the research dismiss its value. In most cases, however, it comes down to time and budget constraints, which make long-winded ethnographic methods highly unattractive. For a qualitative researcher this means having to work with what is typically seen as feasible: one-hour-long, semi-structured qualitative interviews. And most of the time, that’s fine and a good starting point. That is, as long as its potential is fully understood and leveraged by the researcher. The problem is that, often, interviews get reduced to simple, one-dimensional speech. Let me explain.

Semi-structured interviews are designed to provide a certain structure, so that all topics and questions of interest are guaranteed to be covered across interviews. This simply makes drawing comparisons, and therefore analysis, easier for the researcher. The ‘semi’ in semi-structured indicates that there’s still a bit of flexibility to play around with the prepared and pre-defined questions. The interviewer will start by asking broad questions, trying to avoid insisting on certain themes or words if they seem irrelevant or misguided. Instead, they will change the course of the interview, whether that’s simply asking prepared questions in a different order or creating new questions in situ. The interviewer stays alert, notices gestures, reactions and all the spectacle of the social interaction of the interview, and picks up on clues as the interviewee gets into more and more detail. Contradictions and tensions start to surface at various points in the interview. The emerging terms, labels and categories used by the interviewee start to replace the questions and terms written out in the script and form the basis for the wording of newly emerging questions. This semi-structured process ultimately informs the content of the interview: what gets talked about and how one talks about the prepared topics.

Warm-up and closing sections and broad, malleable interview questions tease out attitudes, values and life worlds beyond the narrow lens of the topic of interest. Not only do those elements of the interview set the tone but, more importantly, they create the context and backdrop for interpretation. This allows us to later go back, recognise the wider implications of the given answers and (very) tentatively identify the complex tensions between values, actions and speech. In that sense, qualitative interviews do more for us than just give one-dimensional replies to our research questions. We can and should make use of the multifaceted character of the semi-structured interview to broaden our understanding of the topic and to carry all of the complexity of the interview over into our analysis. Which takes me to my next point.

2. This shouldn’t even be a point: Analysis requires time

This point touches on what I think is a structural issue in many workplaces: There’s simply no time for analysis. As opposed to the other points I make, this one is harder to tackle at the level of an individual researcher. It’s not just about someone not choosing to put time aside for analysis but more about someone not having access to time that is purely designated for analysis.

I’ve seen it many times in my work as a freelancer, coming in and out of many different, but mostly big, agencies and corporations: the work environments in some of those organisations have simply not been designed with the role and tasks of a researcher in mind. Regular check-ins and stand-ups that mark the start and end of the day, and multiple meetings squeezed in between mornings and afternoons, drag individuals out of deep-thinking mode into an ‘actionable’, next-step frame of mind. Even in design teams with deep-seated collaborative structures of consecutive collective brainstorming and whiteboarding sessions, individual thinking time and solitary work is devalued to such a point that there’s practically no time to do what proper analysis requires researchers to do. Which is: Simply staying and spending time with and on the data, allowing it to surprise and challenge us in new ways.

As a freelance researcher I often think I’m privileged and advantaged in that regard. I can often sneak out of those set structures and ideally make the time to think through the collected data, all while working from the calm of my own home. However, most of the time, I find that the failings of analysis go beyond a simple mis-prioritisation or lack of appreciation of tasks. People positioned in and around the research often lack a basic understanding of the sort of attention and ‘processing’ qualitative data requires. Which, again, brings me right to my next point.

Reporting

3. Qualitative data makes a difference: When to decide on the structure of a report

This point goes back to a fundamental problem that runs through all of my previous points: A lack of understanding of qualitative data analysis. Even when data is collected through a qualitative (or qualitative-inspired) approach, in the end, it is often not treated much differently from quantitative data. Initial research questions (which, for example, set the structure of the interview guide) take on the job of closed survey questions and are used to scan notes and transcripts for direct answers. These answers are then cut and pasted onto a separate sheet and into pre-defined sections (‘buckets’). The result is lists of bullet points, which are then neatly moved into a final document for presentation. Work done.

Just that analysis never got to happen. If the collected data doesn’t challenge the very nature of the questions asked at the start of the research and how one (structurally) thinks of the researched topic, then, put simply, it could or should have been a quantitative research piece. And, quite frankly, it ends up having the same value as, and making as much sense as, conducting a quantitative survey with 15 people: very little or none.

Copying and pasting pieces of data into pre-defined sections (initial research questions) is a good starting point but should quickly turn into a messier process: crossing out and re-writing section headings, dissecting and moving pieces around and opening up new ‘buckets’. In that sense, it is only after a properly completed analysis that one can settle on a final structure for the report or presentation. It is also only with a properly completed analysis that the real value of qualitative research comes in, allowing us to challenge our pre-research assumptions and getting us closer to defining the very questions that need to be asked post-research.

4. The question of questions: Report or presentation?

This point should form the basis of any initial discussions and negotiations of project requirements. What should happen after all the data is collected? If a team is in the middle of synthesising the data and translating it into some form of document but finds itself going back to disagreeing on and contesting things such as the required length or brevity of a document, the level of detail (the maximum number of words per slide) and the appropriateness of bullet points, lists or paragraphs of text, then this might very well be a sign that varying intentions are still at play which weren’t sufficiently clarified at the start. A PowerPoint document, the go-to format for concluding a piece of research, is often at the centre of those discussions. Truly embraced by few and (unwillingly) adopted by most, PowerPoint is, admittedly, a complicated thing to deal with. While it’s intended for presentations, it’s often used to do much more – to produce reports that can circulate on their own, that is, without anyone presenting or talking over them. And that distinction in purpose is rarely reflected on or discussed when using the software.

Producing a presentation is producing a visual aid that only works through the presenter’s mediation in a given time and space: The presentation meeting. Unless this meeting is recorded, receiving the presentation afterwards (via email) would be of limited use at best and ill-advised at worst. In most organisations, it’s almost never appropriate to simply produce and disseminate a presentation because documents tend to travel far. Ending up with someone unfamiliar with the context of its production, the document will not be able to speak for itself, and its ideas will be susceptible to loose interpretation and might be mobilised for unrelated purposes. A report, however, already comes with all of its required context and guidance. It will be lengthy and overloaded if shown as a presentation but, at least, the presenter can walk through the different sections, pause, skip and summarise. In that sense, a report can more easily be retrofitted into a presentation. The other way around? Very problematic.

Let me be clear. The most appropriate solution for what comes after data collection will always depend on the specificities of a project. On its wider intentions and the role of the research piece in the broader context of the project. On the level of detail required to communicate findings. On the scope of dissemination. On how long research details and outcomes need to remain intelligible and accessible to others beyond the time and space of the project. On what role accountability plays in an organisation. While the possibilities are practically endless and always negotiable, confronting questions about how to conclude a piece of research never is.

These are just some examples of the questions to be asked well in advance of commencing a piece of research:

  • Should synthesised findings feed into a detailed documentation of the research process and outcomes?
  • Should synthesised findings (instead of being captured and disseminated in a formal way) be directly translated into new designs?
  • Should synthesised findings be captured in more process-oriented documentation, for example, in personas and user journeys, which might ultimately reflect and very well embed all details of the research but omit its framework?
  • Should the documentation be presented in a formal and singular setting?
  • Should the documentation form the basis of ongoing discussions held in regular work processes, meetings or workshops?
  • Should the documentation be published or made accessible to a wider audience beyond those directly involved in the project?

I also wish that the same openness and curiosity were embraced when deciding on the final research output format. Similar to settling on a report structure, this is best decided once a clearer picture of the research emerges, and wrapping up the research with a PowerPoint document is never the only, and often not the best, possible solution.

Presenting

5. How 12 become 7.6 billion: Minding and being specific with language

This point turned out much longer than I thought, which might say something about (both) its complexity and importance. To start, let me contrast two statements and then explain why I think that, as qualitative researchers working in commercial contexts, we’re not doing ourselves any favours in defending our work and its validity if we make claims such as the one in the second statement.

(1) Participants wanted privacy but showed apathy when asked to actively engage in privacy control.

(2) People want privacy but show apathy when it comes to actively engaging in privacy control.

Descriptions like these are often written up under so-called research ‘insights’. While the first statement refers to ‘participants’, the other talks more generally of ‘people’. I’ve also seen instances where the word ‘humans’ is used instead of ‘people’. Either way, it’s not hard to see that there’s a noticeable difference in specificity depending on the term used. And whoever finds that ‘people’ or ‘humans’ is appropriate in this context (or even more appropriate than ‘participants’) will very likely refer to my point at the start: that those statements are usually found in an ‘insights’ section of a research report or presentation. And I would say that’s a fair point. As opposed to the more sober and detailed research ‘findings’, insights are somewhat allowed to escape reference to direct proof because they are more akin to intuitive realisations, the sum of various small parts (findings). The Cambridge Dictionary defines ‘insight’ as follows:

(the ability to have) a clear, deep, and sometimes sudden understanding of a complicated problem or situation.

I get it. Referring to ‘humans’ or ‘people’ instead of the more specific ‘participants’ is a way for researchers to highlight that they have an inkling of something that concerns human nature more generally, something that is valid way beyond the scope of a piece of research conducted with, say, 15 participants. Perhaps something that is even informed by the experience of previous research.

Now, anyone who has read anthropological literature and ethnographic depictions of cultural ‘others’ might be very hesitant to make claims about anything relating to ‘human nature’. The strangeness of other people’s ways of doing, making and relating, which ethnographies depict very vividly, never ceases to surprise. And whether there is something (of actual substance) that can be said cross-culturally about human beings and human nature has been, and continues to be, a long-standing debate within anthropology (one which has traditionally divided practitioners into cultural universalists and relativists). My point here, however, goes beyond this debate. Wherever one stands on it, my point is that if we keep making references to ‘humans’ instead of ‘participants’ in the context of commercial research, we’re actually muddling a whole different debate, one where all anthropologists and most social scientists actually stand united: The question of whether there’s validity at all in qualitative research and the claims derived from it.

In a room full of qual skeptics and quant fanboys, the validity of ‘people’ tends to be questioned and challenged very quickly. The move that turns ‘participants’ into ‘people’ is, as I have witnessed, taken as a dangerous insinuation that we’re playing the quantification game. I have found that by turning a small group of people into a much larger one in order to strengthen claims, we’re reinforcing the very mechanisms by which the validity of our qualitative research is challenged all the time. By trying to make our statements appear more compelling, we’re making this all about questions of quantification: The more people you ask, the more representative your results are, the ‘truer’ they become. Having the term ‘people’ openly questioned during a presentation not only turns attention to the very fact that we’ve ‘only’ asked 15 people but actually implies that that’s the only thing that matters here.

That is why I suggest sticking to the type of statement made in (1). Not because it concedes ground to quantitative research as the only ‘proper’ way to do research but because it allows us to stand our own ground: Qualitative research is all about the situatedness and embeddedness of the data it produces. Instead of offering an enormous pool of data points, we can point to how disparate pieces of data relate and speak to each other and talk about the quality of these links. If, instead, we move away from the concreteness of our field site, decontextualise and turn to more generalised representations, we’re severing the links we’ve so dedicatedly gathered, disassociating our research from the richness we have managed to expose and thus destroying the very value we can offer to a game where everything that counts is quantified.