Generalizability in DP
In classical conceptions of research, part of a research finding's utility and/or validity lies in its generalizability. Of a finding that reveals how fourth graders in New York perform in mathematics, we might ask whether it holds for fifth graders, or for Maine, or for English.
When it comes to Discursive Psychology (DP), I'm wondering how generalizability is considered. Is it something that any good DP program of research should address? Is it theoretically assumed that patterns generalize across contexts, or that patterns are intimately tied to their contexts, or something in between? Moreover, how often can or should a program of research come under scrutiny for failing to generalize?
Potter (2012) outlines key features of DP. He writes that data management is "a prelude for data reduction and involves the systematic building of a particular corpus that is of a size small enough to be easily worked with but large enough to be able to make appropriate generalizations" (p. 18). This sweet spot is reminiscent of quantitative traditions, which seek an N large enough to be considered representative but small enough that relatively minor differences do not register as statistically significant. But there is something counterintuitive about applying quantitative ideas to this fundamentally qualitative approach.
Potter makes just one other reference to generalizability. He notes, "Deviant cases are often analytically and theoretically informative. They can show whether a generalization is robust or breaks down" (p. 37). This has a great deal to do with analysis, as deviance can be useful for delineating the boundaries of a generality. We might think of the phrase "the exception that proves the rule."
But this brings us to a point about rules. Are discourse analysts in general, and discursive psychologists in particular, all that interested in extracting rules about the human condition? Potter and Wetherell (1988) are not totally on board with prior conceptions of language rules, which do not "somehow carry implicit instructions for their precise and proper application; the way they are applied is, in fact, as much dependent on people's constructive use of language" (p. 72). In the interest of full disclosure, it looks like they say a lot more about this in the next chapter, which I've not yet read. But rules are perhaps only important in DP to the extent that participants orient to them as rules (drawing on an ethnomethodological ethos).
Why do we care about generalizability in DP? Well, for one thing, most researchers in any content area care about generalizability to a greater or lesser extent. For example, I am hoping this semester to use DP approaches to analyze interviews with crafters about their crafting practices, identities as crafters, and identities as "math people." From a personal standpoint, I am interested in just how these identities are constructed in an interview and how language is deployed to align the speaker with some communities and not others, and to draw lines around knowledge domains or the nature of activities (e.g., "Oh, there's no math in sewing"). But a Learning Scientist may not be interested in those constructions for what they reveal about language use; rather, they care about the content of what is said. Based on these interviews, a Learning Scientist might ask: can we design a curriculum that leverages crafting practices to create buy-in to mathematical content standards? So the jury is still out on where these two interests overlap, even though they take the same corpus as data. In other words, can DP's attention to generalizability potentially be used to answer content-related questions? That's something I am hoping to think through in the coming months.
Potter (2012) points out:
a characteristic feature of contemporary discursive psychology is that participants do the data collection themselves. This is designed to minimize the reactivity generated by extended researcher involvement and allows the participants to manage ethical issues in a way that suits them best. (pp. 17-18)
Although this makes sense, I certainly would not have thought of it as a "characteristic" feature. This (perhaps inadvertently) reflects the preference for naturally occurring data, as it more or less minimizes the chances that the researcher is capturing something that otherwise would not have occurred. The approach is participatory in the sense that it foregrounds the interactants' agency in, at least, the data collection. However, it is quite different from other participatory approaches, which generally require the researcher to be deeply involved in the organization under study, to the point where they are thought of as a legitimate participant in the organization; that is, they are so present that they might as well be absent.
However, given DP's frequent analysis of relatively sensitive topics (Potter (2012) discusses crying on crisis hotlines, which is not an easy topic), it seems curious that participants would buy in. In that case, it may be normal for callers to hear a message saying their call will be recorded. But it seems unlikely, had I been a call operator, that I would be so on board with some "big brother" monitoring my performance.
Furthermore, I dispute the notion that ethical management can simply be outsourced to participants. I appreciate the conversations across parties in such situations, but researchers are (or should be) specifically trained to protect the safety of all participants. In this case, there may have been approaches that better protected callers (e.g., that did not make them less likely to report child abuse because they didn't want the call recorded). The people doing the recording would certainly have felt more at home with this "outsourcing," but they are not the only people being recorded.
Besides, how does this characteristic feature extend to the much-maligned non-naturally occurring data (i.e., interviews)?