I do wonder about online talk. The vast majority of my talk, I would conjecture, takes place through the good book (Facebook). Moreover, a great deal of our everyday (institutional) work is done through text-based media. I grade, for example, through writing as communication, and I in turn am evaluated for jobs through cover letters, CVs, etc. While these are artifacts, they perhaps are not “conversation” in the sense that they do not involve two or more somewhat-proximal (spatio-temporally?) interlocutors seeking to be mutually understood. Nonetheless, when I think about online CA, my first thought is to what degree CA can analyze written text at all.
Given that prosody, pitch, and pauses are so central to line-by-line analysis, and that these analytic categories cannot be applied to textual talk, one must wonder whether it is worth using CA for text at all. But the close analysis of text is a well-established discipline: Literature. English criticism has developed deep infrastructure for analyzing every semicolon, every line break, every comma. Considerable debate in this field has historically revolved around the dead or undead author & the intentionality that can or should be assumed in the text. Are these micro-features (punctuation, which mostly does not exist in oral talk) the written analogue of prosody, etc.?
I am particularly taken by this need to compare face-to-face and online talk. We have seen the same in research about online learning, where the conversation really began around “Is online learning as good as face-to-face learning?” To some degree this move was silly because we had no reason to believe that the average f2f learning environment was “working,” and this focus surely hindered the conversation from progressing toward the affordances unique to online learning. With this in mind, I think it’s bizarre that we assume that online talk will be, what, LESS communicative (than its f2f counterpart)? Why do we assume that f2f conversation is the natural/normal/right kind when it is so prone to disrepair? (I know why.)
In my everyday life, the efforts to create online environments that replicate f2f ones (like synchronous video-based chat rooms: Skype, Google Hangouts, Zoom, etc.) are the ones that do not get taken up nearly as often for everyday talk (though they do get used a lot for institutional interaction). Rather, the things I use daily are Facebook Messenger, email, and Twitter, none of which bear much resemblance to f2f chat. I think the most striking example was “Collister’s (2011) study of the use of an asterisk (*) as a repair morpheme, for example,” which “focused on this online-only phenomenon as there is no spoken English counterpart” (Paulus et al., 2016, p. 5). Such explicit repair is fascinating and, I conjecture, much more prevalent in online discourse. That’s why I appreciated point number two – “Understanding how online talk is coherent to participants.” This shifts the question from “how talky is online talk?” to “how is online talk talky?”
I am curious about the argument by Meredith, Potter, and Stokoe, who “have made a strong case as to why data in CA studies of online talk should include screen recordings of synchronous chat participants’ real-time interactions (and new forms of transcribing), and we support their arguments. There is indeed a need for continued study of the range of human activities involved in producing online talk” (Paulus et al., 2016, p. 7). I can think of a variety of LS studies that do screen capture AND learner capture (with a webcam, usually, or a video camera behind the user) that help us understand how learners negotiate an online learning environment (e.g., Roschelle, 1992), but these environments may not always have learners trying to communicate THROUGH technology (rather, they are communicating AROUND technology). From the perspective of understanding individual learning, this extra data seems key.
But from a CA perspective, I wonder whether these other production aspects should be studied deeply, given that they are not interpretable/visible to other interlocutors and therefore are not IN the interaction. For example, if I stop typing in a chatroom to look up a piece of information, then return with that info, the other interlocutors have no sense of what I was doing during this time. Alternatively, if I copy-and-paste something and it IS recognized as copy-and-pasted by the others, participant orientation suggests that a conversation around it could ensue. For example, if a friend asks, “What did Jessica want us to bring for class?” I might copy-and-paste the email she most recently sent into the chat box. That said, such pasting is usually not oriented to as unusual, even though it is sometimes obvious that it is an email (a kind of revoicing – the typing looks different, or metadata is copied in), so perhaps this participant-orientation criterion is what fails us in analyzing online talk.
Moreover, CA (and DP) are not DOGMATIC about the role of the mind in talk. They do not “black box” it or claim that it does not affect speakers. In fact, scholarship around micropauses and gesture may sometimes take up biological/physiological/neural explanations for why a pause falls here, why this move happens there. So perhaps this extra screen-capture data would be useful for understanding the larger conversational ecosystem that surrounds talk. We presumably must be careful, though, in how we use that data analytically, given that it is something to which the researcher has access but the interlocutors do not.
All told, I’ve basically never thought about using CA for CMC data even though that data is so prevalent in my field and in the world. Perhaps it is possible and perhaps thinking it through would be quite revealing of how we make ourselves understood to one another.
PS – The Pomerantz work seems like it might be useful in my analysis, but I’m not quite sure how or where yet.