Validity in research is a thorny topic. How do we know if a study is good (i.e., if its findings are valid)? In academia, if your work is accepted to a peer-reviewed journal, that’s a pretty good sign that there’s some merit in it. Applicability and generalizability serve as kinds of “external validity” checks, whereas internal coherence and well-supported arguments are supposed to be “internal validity” markers.
The question of DP validity is complicated because of the large set of assumptions it carries that may differ from classical approaches (which of course have their own assumptions but, as the established methods, get to play defense). Importantly, though, DP is both a theoretical and methodological orientation. You can’t “do DP” on a particular phenomenon the way you might “do a quantitative analysis” of almost anything. That’s because the object of analysis is the talk itself and not the people speaking. Therefore, the claims that the research makes are not about the people or the constructs; they are about the talk. This is quite different from traditional approaches for which mental furniture or knowledge “in the head” is the target of analysis. As Potter and Wetherell put it, “One is interested in language use rather than the people generating the language” (p. 161).
So briefly, let’s take for granted that we are doing a DP study and consider what might traditionally be thought of as internal validity. P&W list four criteria for validation:
- Coherence
- Participants’ orientation
- New problems
- Fruitfulness
Coherence refers to whether the analysis “gives coherence to a body of discourse” (p. 170). I read this as being about the explanatory power of the analysis. As P&W point out, the analysis should also account for “border cases” and exceptions. This all seems fine if it’s just the researcher by themselves in a room. Trickier is how an outsider could get the tools to evaluate the coherence of an analysis. Given the hours and hours of data we’re frequently talking about, and the inability to present more than a fraction of it in a journal article, how can someone else assess an analysis’s explanatory power? Granted, the analyst’s write-up should include these border and exception cases, but still.
Participants’ orientation helps us see how words, etc., are taken up. It takes a very particular approach: obviously DP would not want to say what someone “meant” by something in a cognitive sense, but if a turn is not taken up in an expected way, that will generally appear in the talk (often in the form of trouble, repair, etc.). That all seems fine to me – I buy “next-turn proof” as a great kind of check on analytical claims.
Less compelling to me is P&W’s argument in favor of the interpretive repertoire validity check (Mulkay & Gilbert). They write of the scientists’ IRs, “On the one hand, their discourse was organized in such a way that the two repertoires were kept separate. On the other, when the repertoires were produced on the same occasion special difficulties were created for the scientists which had to be resolved by the use of a particular interpretive device: the TWOD. If the participants had not experienced these predicted difficulties, that is, if they had not oriented to the suggested inconsistencies, then we would be very suspicious about the validity of the findings” (p. 171).
So perhaps this makes some sense, as it arguably separates IRs from qualitative coding. But will this work for any kind of study that draws upon IRs? Will all of them find conflicting IRs and be able to demonstrate that participants oriented to that conflict? The power of the IR concept is in question for me if its only check is that two conflicting ones must co-exist.
New problems refers to the new problems created by the solutions identified in the analysis. I find this a little bizarre. “The existence of new problems, and solutions,” P&W write, “provides further confirmation that linguistic resources are being used as hypothesized” (p. 171).
What??? I don’t really know what to make of this. How exactly, in a paper, would one demonstrate that they found a solution which caused a problem that in turn had a solution? I don’t doubt that all analyses have this quality, but I doubt that that quality is what makes them valid. Besides, couldn’t all this just be wrapped up in “coherence”? P&W’s argument seems more or less contingent on the metaphor of a car engine, which is not nearly compelling enough for me to understand why they view this as so central.
Fruitfulness refers to how useful an analytic scheme is for making sense of new kinds of discourse and generating new kinds of explanations. This seems straightforward and may be thought of as applicability and/or a kind of “external” validity. Bound up in this, however, is the notion of generalizability, which we have already discussed as problematic in a previous post.
Alright, so there we have a handful of potential ways to think about the validity of a DP study. How many of these would be compelling to a non-DP audience? If a reader does not share a general appreciation for constructionist and qualitative viewpoints, then they are a lost cause anyway. So assuming that one can guide a reader through these premises, what sorts of pushback would a reader give? They might ask if things generalize, but as far as I can tell the greatest worry is whether or not the analytical claims are seen as accounting for all of the data. Qualitative coding, I think, gets so much traction because it is seen as “covering” all of the data. Although that approach has plenty of weaknesses, it does not have the same concern as DP that the findings may not have “actually happened” all across the talk.
Hmmmm. Bound up in all this – and the previous two posts – is whether or not a DP dissertation in LS is plausible and what that might look like. Could it be more useful for me this semester to just forgo the learning aspect and do a DP study using public records, as Potter and Edwards might support? I’m not yet sure.