jircd (print) issn 2040–5111 jircd (online) issn 2040–512x

Article

Looking to speak: On the temporality of misalignment in interaction involving an augmented communicator using eye-gaze technology

Christopher R. Engelke (a)(*) and D. Jeffery Higginbotham (b)

(a) University of California, Los Angeles
(b) The State University of New York at Buffalo, Buffalo, NY

Abstract

This study investigates the different temporal orders that manifest in interactions involving a participant using an Augmentative and Alternative Communication (AAC) device. Studies examining the use of AAC devices have regularly incorporated a particular understanding of temporality, using time as a measuring device to compare inter- and intra-individual action. In this paper we present an alternative perspective on time-in-interaction, examining participants’ interactive behavior to show how they attend to time. Here, time is conceived in terms of how participants experience the duration and unfolding of a particular utterance. Through a close analysis of an interaction between a man with late-stage ALS and his wife, this paper shows how different orientations to time can underpin breakdowns of intersubjectivity. The analysis traces elements of this temporal disconnect to a variety of sources, including the normative temporal expectations for the production of utterances through mouth-speech and the functioning of the device itself. The temporal misalignment produces slippages in the participants’ orientation to the sequential relevance of utterances and utterance parts, which in turn lead to misunderstanding.

Keywords: temporality; time; sequence; augmentative alternative communication; talk-in-interaction; conversation analysis; human-computer interaction.

(*) Corresponding author: [email protected]

jircd vol 4.1 2013 95–122 ©2013, equinox publishing

doi : 10.1558/jircd.v4i1.95


1. Introduction

A focus on time has occupied a great deal of the attention that various academic fields have paid to the use of AAC devices, with special attention paid to the ways that turns and actions unfold in spontaneously occurring talk-in-interaction and, specifically, the rate at which augmented communicators produce their utterances (e.g. Beukelman and Mirenda 2005; Hill and Romich 2002; Smith et al. 2006; Todman and Rzepecka 2003; Todman et al. 2008; Trnka et al. 2007). To our knowledge, all of these studies have relied on objectified understandings of ‘time’ in order to document the interactional phenomena under investigation – e.g. words per minute, bits per second, delay and pre-utterance pause length, etc. In such studies, ‘time’ is understood in terms of the succession of uniform, discrete, measurable units that exist prior to and independent of any individual, situation, or event.

Philosophers of the experience of time have argued that such a Newtonian model of duration is befitting of investigations of physical occurrences, but fails to account for the experience of duration necessary to make such measures appropriate to the study of humans. This owes to the fact that we do not typically experience time as an object, per se, but experience objects and events in time: both with respect to the duration of the sensations they engender and the relationships they conjure with remembered pasts and anticipated futures (Husserl 1964).

More recently, scholars interested in conversation have applied tools developed in the field of conversation analysis (CA) to the field of AAC in order to focus on the sequentiality of action, in an attempt to move beyond this sort of analytically imposed temporal objectivity. Examples of such work have been produced by members of the Sheffield/University College London group (e.g. Clarke and Wilkinson 2007; Clarke and Wilkinson 2008; Clarke and Wilkinson 2010; Clarke 2005; Wilkinson 1999; Wilkinson et al. 2003; Wilkinson et al.
2011) as well as others (e.g. Goodwin 2004; Goodwin 2006; Goodwin et al. 2002; Heeschen and Schegloff 1999; Higginbotham and Engelke, in press; Higginbotham and Wilkins 1999). These analysts examine the ways that turns at talk (including those augmented by various graphic resources) are positioned and understood by the participants with respect to preceding and subsequent actions. This effort to reconceive the unfolding of an interaction in terms of sequential actions rather than elapsed time has produced a rich and valuable understanding of the ways in which interactions involving disabled and/or augmented speakers are organized. In this article we will show how such research has opened the door to exploring the ways that participants experience the unfolding of interaction and what implications these experiences have for its progression. We will demonstrate how the experience of time is a practical issue with real-world consequences by examining an interaction between an augmented communicator and his mouth-speaking interlocutor.

For individuals with amyotrophic lateral sclerosis (ALS), interactions are accomplished with an increasingly paralyzed body, a progressively restricted gestural repertoire, and a set of extrinsic communication technologies equipped with a unique set of semiotic and temporal characteristics. Albert Robillard (1999; 2006), an ethnomethodologist describing his experience with ALS, notes that paralysis and loss of speech make it impossible to swim in the timestream of normative interaction practices. He argues that a paralyzed individual without functional mouth-speech cannot keep up with the pace of conversational turn-taking. ‘Any elaborated response cannot be formulated quickly enough to be designed to the previous turn at talk’, because ‘by the time the computer speaks the interlocutors have lost the specific turn at talk’ (Robillard 2006: 1999–2000).

In this paper we investigate how two individuals with vastly different arrays of body-based and external communication technologies, and different temporal expectations, conduct their talk-in-interaction, by focusing on a series of interactions involving Karl, an individual with ALS, and his wife Jess. Provoked by the circumstances occasioning the interaction (i.e., calibrating Karl’s communication device for the upcoming experiment) and each equipped with a different array of semiotic resources available for expression, Karl and Jess engage in two co-occurring and loosely interrelated interaction projects, each project manifesting a distinctly different temporal-sequential order. As we will show in this analysis, during the ongoing sequence of diachronically unfolding sign productions, each participant, although inhabiting a different temporal order, constitutes and reacts to ‘complete utterances’ out of an incomplete series of signs.
The communication device Karl uses plays a central role in shaping these interactions: first, by extending the time required for message preparation, and second, by providing Jess with visual access to Karl’s ongoing message construction, thereby allowing her to assimilate these materials into her own utterances and interaction goals. Additionally, Karl’s typing activity produces a second, co-existing time order with which his interlocutors must contend. We will therefore be concerned to show the ways that the participants’ distinct temporal attunements to actions at different points in an ongoing sequence of diachronically unfolding sign production motivate a series of misunderstandings and sequential slippages that are realized as breakdowns at multiple levels of intersubjectivity. In brief, through a close analysis of the talk-in-interaction between Karl and Jess, including the ways in which Jess constitutes ‘complete utterances’ out of Karl’s incomplete series of signs, this paper demonstrates the work done by mouth-speaking interlocutors in ‘turn’ construction and raises questions regarding the unproblematic treatment of turn construction and deployment.

2. The participants

The interaction analyzed here involves two participants: Karl, a 44-year-old man with late-stage ALS, and his wife Jess, who was 42 at the time of taping. Karl’s paralysis is severe: he does not produce any vocalizations or volitional limb movements. Volitional movement appears restricted to slight lateral head movements (head shakes), upward movement of the brow, and directed gaze. Jess and Karl were part of a larger research project involving the performance of ALS speakers and their partners during a set of structured interaction tasks (Lou 2007; Lou et al. 2008). Two female experimenters in their late 20s and early 30s were also present during the videotaping.

The study was conducted in Jess and Karl’s home, in Karl’s bedroom. Figure 1 provides a schematic of the room and the general locations of the participants. In the video, Karl lies in his bed on his back, with his head propped up by pillows. The display screen of the AAC system is on a table positioned over the bed, approximately 2 feet (60 cm) in front of him. A map and stand that are used for the research task are positioned on the table to one side of the device. The bed is positioned so that Jess and the experimenters can move around either side of it. Two video cameras are used to record the interactions. One camera is positioned approximately 8 feet (2.4 m) away, facing Karl, and is used to record the participant interactions. A second camera is positioned behind and to one side of Karl and is used to record Karl’s use of his communication device.

Figure 1: Schematic of Karl’s Room.

2.1. Karl’s Eye-Tracking Device

Karl uses an Erica™ eye-tracking AAC device that provides him computer access by mapping his eye movements into cursor movements on the computer display screen positioned in front of him (Figure 2). The interface on the Erica display consists of the following:

1. a 32-item alphabetically organized on-screen keyboard;
2. eleven keys designed to manipulate text and the application (e.g., backspace, shift, delete, speak text, pause eye-tracker);
3. a 5-item word prediction list;
4. a message window which contains the typed text; and
5. a set of control keys that retrieve a numeric keypad, phrases, and an options window containing additional device controls.

To control the Erica, Karl directs his gaze to the desired key, dwelling on the key for 1 second before the selection is registered; after actuation, the selection appears on the message display. Each selection is preceded by a short auditory ‘beep’ and a momentary change in the color of the selected key (i.e., green to red). Karl selects the ‘speak text’ key in order to issue a spoken utterance through the Erica’s speech synthesizer. Also, on the upper right side of the display, a real-time display of Karl’s tracked eye is presented for calibration purposes.

Figure 2: Erica® Eye Tracker Display.
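The dwell-selection mechanism described above, in which gaze must rest on a key for a fixed interval (1 second in Karl’s configuration) before the key is actuated, can be sketched in code. The following is an illustrative reconstruction only, not the Erica’s actual software: the class name, method signature, and sampling model are our own assumptions.

```python
DWELL_THRESHOLD = 1.0  # seconds of continuous gaze required to select a key


class DwellSelector:
    """Illustrative dwell-based key selection (our sketch, not the Erica API)."""

    def __init__(self, threshold=DWELL_THRESHOLD):
        self.threshold = threshold
        self.current_key = None   # key the gaze is currently resting on
        self.dwell_start = None   # timestamp when gaze entered that key

    def update(self, key, timestamp):
        """Feed one gaze sample (key under gaze, time in seconds).

        Returns the selected key when the dwell threshold is crossed,
        otherwise None.
        """
        if key != self.current_key:
            # Gaze moved to a different key: restart the dwell timer.
            self.current_key = key
            self.dwell_start = timestamp
            return None
        if key is not None and timestamp - self.dwell_start >= self.threshold:
            # Threshold reached: register the selection (the device would
            # also beep and flash the key from green to red at this point),
            # then reset so the same key is not re-selected immediately.
            selected = key
            self.current_key = None
            self.dwell_start = None
            return selected
        return None
```

Under this model, gaze samples on the same key at 0.0 s and 0.5 s return nothing, while the sample at 1.0 s crosses the threshold and returns the key, mirroring the one-second dwell Karl must sustain for each character.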

3. Analysis

3.1. Gaze and other gestured actions

Despite his profound paralysis, Karl interacts through gesture, including head shakes, eyebrow flashes, and eye-pointing, coordinating these in temporally precise ways with his device use and his partner’s actions (cf. Goodwin 2003; 2004; Goodwin et al. 2002; Wilkinson 1999; Wilkinson et al. 2011). Control over these body movements provides a crucial body-based semiotic resource. For example, in the transcript provided here (Extract 1), Karl’s gaze movements (e.g., eyebrow flash, directed gaze) are tightly coordinated with Jess’s talk (Lines 1 and 2), as well as with his own device use (Lines 28 and 30). However, the gestured sign is effective only as long as it is actively displayed and relevant within the immediate temporal-sequential context, requiring Jess to be perceptually available to receive and interpret it at its point of production.

In addition to using gestures to indexically tie to the contexts of conversation in which they appear (cf. Goodwin 2011), Karl also moves his eyes to activate his eye-tracking device to generate text. His eye movements are fast and accurate, allowing him to produce typed utterances with his eye-tracker.1 However, in sharp contrast with the time taken to produce contextually grounded gestures, the time taken to type out and speak an utterance with his communication device is on the order of tens of seconds.2

Several types of signs are created through this eye-typing activity. First, Karl’s eye-typing gestures themselves constitute a publicly available sign complex, displaying his involved interaction with the eye-tracker. Second, each button selection is accompanied by a noticeable auditory click. Third, the movement of the screen cursor, keyboard highlighting and pop-up windows are visible indicators of Karl’s eye-typing actions. Fourth, the text produced through Karl’s typing is noticeable and stays on the screen until it is erased.
Finally, by selecting the ‘speak text’ button,3 Karl can issue his text creations through a highly intelligible speech synthesizer. As shown in the analysis below, the display of Karl’s typing actions and typed products serves as an important temporal-sequential resource for interpretation and interaction.

Extract 1

Participants: J = Jess, K = Karl, Es = Experimenters

Transcription Conventions:
^ = Point at which device selection is made relative to Jess’s talk or actions.
Bracketed text identifies the button selected by Karl.
→ = Identifies the line in the transcript represented by the screenshot.

Light up a little

In the stretch of interaction analyzed here (Extract 1), Jess, Karl, and the experimenters try to adjust Karl’s environment so that he can operate his eye-tracking device for the upcoming experiment. In the almost three minutes prior to the start of the transcript, the participants work to position Karl’s head and adjust the lighting in the room. During this time Jess asks Karl a series of questions related to his ability to see the maps and use his device. Karl responds by quickly glancing up at the ceiling, presumably in order to nominate the overhead light as an impediment. However, as he does so, Jess turns to look at Karl’s display screen just before Karl’s gestured expression and misses his gestured answer to her question.

Although the timing of Karl’s gesture was precise, proper uptake was dependent on Jess’s availability at the moment of its production, reflecting the spatial and temporal specificity of gestured action. Unlike speech or other sonic productions, the recipient of a gesture must actively attend to their interactant in order to register and interpret the sign. In Karl’s case, paralysis prevents any self-positioning of his body, and the physical configuration of Karl’s body, bed, and communication device forces Jess to choose between looking either at Karl or at the display screen of his device, but not both at the same time. Although Karl produced a gesture that coincided with Jess’s utterance (Line 2), she had already shifted her gaze to the screen by the time the gesture was produced. Because Karl was unable to reposition himself, his gesture failed to reach Jess. As will become evident during the analysis, this ‘near miss’ has substantial consequences for the rest of the interaction.

Segment 1: Persistent typing (Lines 2–30)

Immediately after his failed gesture, Karl begins to type out a message on his eye-tracking display (Line 2) and continues to type his message out until complete (Line 30: Light up a little).
Karl does not shift his gaze from the screen, typing out his utterance for approximately 55 seconds before it is issued, despite Jess’s attempts to solicit his attention. For example, on lines 6 to 14, Jess asks Karl two questions (Is it ok?, Closer?), while looking at Karl during and after issuing the query. Despite Jess’s demonstrable intent to solicit an answer, Karl persists in typing without breaking from his typing activity to visibly acknowledge Jess or produce a gestured response to her questions. In fact, all of Karl’s typing activity throughout the corpus of interaction exhibits this character of persistent typing: a slow, deliberate typing activity, within which Karl does not display any overt attention to others until his utterance is prepared and issued. These characteristics make persistent typing resistant to the momentary influence of others’ talk during interaction (Kraat 1985).

As Karl types his utterance, concentrating on his task and not attending to the co-occurring interactions, Jess progresses through the courses of
action she initiates without waiting for the marked completion of Karl’s typed response to her questions. That is, Jess enacts a series of parallel agendas to repair Karl’s situation: adjusting the map and the brightness of the lights in the room, and asking him a series of questions about his use of and satisfaction with the operation of his device. Jess’s actions assume their own interactional rhythm and expectations of responsivity, constituting a temporal-sequential order different from that of Karl’s persistent typing, and more in line with normative practices of mouth-speech interaction.

At different points during this interaction, Jess interprets Karl’s persistent typing as embodying a tacit response to her questions. For example, after bumping into the table holding Karl’s AAC device, Jess questions Karl as to the operability of the device (Line 6: Is it ok Karl?). After asking her question, Jess gazes silently at Karl for 1.3 seconds before producing a subsequent turn in which she verbalizes a response to her own question (yea, it’s ok). In her utterance following the 1.3-second silence, Jess positions Karl’s actions (indicated by persistent typing, auditory clicks) as ‘answering’ her questions.

In this sequence, we begin to glimpse one of the central issues of the current investigation in terms of the ways that the participants display their individual temporal orientations to the unfolding of separate but co-occurring projects. It is important to note, however, that both Karl’s and Jess’s projects are nested in the same activity of getting ready to participate in a videotaped experiment, thereby allowing them to act on the assumption that they are attuned to one another in other respects as well. This can be seen in the ways Jess’s question opens a sequential slot within which Karl’s actions can be and, in fact, are interpreted as relevant and appropriate responses.
Karl, who is already engaged in composing an utterance to answer one of Jess’s earlier questions (i.e. from Line 1), offers no indications of uptake. The fact that Karl does not look at Jess at any point in this sequence is notable in that it seems to violate Goodwin’s rules for gaze orientation within conversation, i.e. that ‘A speaker should obtain the gaze of his recipient during a turn at talk’ (1980: 275), and ‘A recipient should be gazing at the speaker when the speaker is gazing at the hearer’ (1980: 287). Although particular variations of these rules have been observed with regard to augmented communicators who employ graphic resources (e.g. Wilkinson et al. 2011), one issue deserving special consideration in this instance stems from the fact that both participants are simultaneously engaged in the process of composing utterances: Jess verbally, and Karl on his eye-tracking device. That is, while Karl’s refusal to look at Jess or demonstrate uptake of her utterances could be read as a tacit rejection of her question and the sequences of action it proposes, the participants here are both simultaneously composing messages in disparate modalities (cf. Wilkinson et al. 2011: 158–159). Studies of concurrent semiosis (e.g. Black 2008; Goodwin 1979; Jefferson 1984; 1986) have noted that interlocutors can
often attend to and understand each other’s talk, gesture, and other semiotic productions produced in overlap with their own. Thus, Jess is able to verbalize a question and, by assuming that she and Karl are engaged in a shared project, use the inherent properties of Karl’s letter-by-letter method of utterance construction to infer the answer. However, the flip side of this affordance is a constraint on the interlocutors’ ability to connect or attend to one another in a shared time. That is, while each party may compose and, in Jess’s case, perform their utterances, the unimodal nature of Karl’s engagement with his device means that he would be forced to abandon his incomplete project in order to demonstrably attend to Jess’s more rapid queries.

Here, it is Karl’s very same typing action that allows Jess to apprehend the answer to her question without requiring Karl’s explicit response. This is due in part to the fact, as noted above, that Karl’s device is set to speak the contents of the display only when he activates the ‘speak text’ button, which he does only after having composed an utterance in its entirety. Karl’s act of utterance production is therefore built out of several discrete constituent actions, their relation to the final utterance remaining opaque until the utterance’s completion and delivery (cf. Goodwin and Goodwin 1987). While such a practice affords Jess’s act of interpreting the ‘clicks’ produced by Karl’s typing as an answer to her question in Line 6, it has the additional effect of pulling her out of one sort of intersubjective frame with Karl. By focusing on the ‘clicks’ as objects in their own right rather than in terms of their contribution to or advancement towards Karl’s projected utterance, Jess unknowingly shifts out of a shared frame of reference (cf. Schutz 1972 [1932]; Throop 2003).
Jess’s shift in perspective stems from and contributes to the fact that the participants’ actions are organized at such dissimilar timescales as to mutually prevent the moment-by-moment cooperative alignment that characterizes normative face-to-face interactions. That is, the participants’ habits, expectations, and intentions, adapted to meet the affordances and constraints of the media through which Karl and Jess interact, produce two incongruent orientations to the temporally unfolding interaction and its constituent parts (e.g. actions, utterances, turns). As the next several examples demonstrate, the dissimilarity with which the participants attend to the duration and progress of signs ultimately prevents them from coordinating their attention and leads to the sort of slippages in understanding and sequence that contribute to various forms of conversational breakdown documented elsewhere (e.g. Clarke and Wilkinson 2008; Higginbotham and Wilkins 1999; Robillard 1999).

Segment 2: Temporally bounded sequentiality (Lines 17–20)

On at least two occasions, Jess treats the yet incomplete text – the text on Karl’s device display that he is in the midst of creating – as complete and responsive to questions that she has directed toward him. Unlike the example in segment
1 where Jess uses the fact that Karl was typing, without reference to the content of his actions, in order to infer answers to her questions, the following examples show Jess constituting the words that are emerging on Karl’s screen as complete and discrete turns at talk, appropriate to the sequential position in which she accesses them. While such action is not typically problematic, in the examples below, Karl is in the process of typing out a single utterance (light up a little). However, because of the points at which Jess accesses Karl’s display screen, she is able to read this as at least two separate utterances (light up & a little), each of which is read as responding to the context Jess supplies shortly before looking at Karl’s display, rather than the context at the time when Karl began typing. Here we suggest that both Jess’s opportunistic engagements with the text of Karl’s unfolding utterance and Karl’s persistent typing and singular engagement with his act of utterance production demonstrate the participants’ misaligned temporal orientations and contribute to the further breakdown of intersubjective engagement.

After acknowledging that Karl’s device is working properly, Jess works to reposition Karl’s map stand in lines 17–20. While doing so, she turns and looks at Karl’s screen, on which he has just finished typing the word ‘Light’. Treating the word as proffering a topic relevant to her project of setting up the space for the experiment, Jess looks at Karl, points to a table lamp across the room and says, ‘Do you want the other light on?’ (Line 17). In the following 1.4 seconds, Jess glances between Karl and the display screen as Karl types out the letters ‘u’ and ‘p’, producing the first letter while Jess is speaking her utterance (Line 18).
Immediately following Karl’s production of the letter ‘p’, Jess demonstrates her uptake of Karl’s word ‘up’ as an affirmative response to her question posed in Line 17 by walking to the other side of the room and switching on the light. However, she simultaneously registers the utterance as nonconforming with her understanding check, ‘yeah::?’ (see Raymond 2003), which invites a repair from Karl (cf. Schegloff 1992). Jess’s actions demonstrate her interpretation of this part of the interaction as a Question–Answer pair with an embedded insertion sequence (Figure 3). In doing so, Jess relies on a set of potentially non-contingent materials – the displayed text from Karl’s ongoing typed productions – as relevant objects within the temporal and sequential expectations of her own project.

First, Jess recognizes the just-typed text ‘Light’ within the context of her ongoing task of arranging the map orientation and illumination for Karl. This, even though Karl started typing the word – without pause – 26 seconds before, and prior to Jess’s current focus on making the map more readable. Using Karl’s word ‘Light’ in conjunction with the context of her current project, Jess then offers a prediction as to the type of operations Karl would want performed. Jess’s prediction anticipates a topic-comment structure, which is common among
people with communication disabilities. One reason may be that this structure affords inter-modal and inter-individual distributions of labor wherein the person with the communication disability can interactively ground the topic verbally, graphically, or gesturally before commenting on it in another modality or eliciting the comment from their interlocutor (e.g. Goodwin 2004; 2010; 2011; Wilkinson et al. 2003; Wilkinson et al. 2011).

Figure 3: Jess’ interpretation of utterances as question-answer sequence.

In the case presented here, Jess produces an understanding check, ‘Do you want the other light on?’, in which she offers a candidate understanding of an ambiguity/trouble source (Schegloff et al. 1977). The sequential environment introduced by this utterance allows Jess to treat Karl’s ongoing production of the word ‘up’ as a relevant response to her understanding check, though neither Karl’s sequential production of the text characters, his continued attention to the act of typing, nor the nonconforming nature of Karl’s next word (up) within the narrow context of Jess’s interrogative makes for an easy fit as a relevant answer (cf. Raymond 2003). What does make sense, however, is that the timing of these productions fits within Jess’s temporal expectation for sequential adjacency.

That is, both in this example and in segment 1, Jess produces a yes/no interrogative, pauses more than 1.5 seconds for a reply, and then demonstrates having registered uptake of a preferred (i.e. affirmative) response. However, as Raymond (2003) explains, the uses of yes/no interrogatives are not neutral but involve preferences for both type-conformity and, in these cases, affirmation. Jess demonstrates her sensitivity to these characteristics in her turns following the slots allocated for Karl’s responses, wherein she construes Karl’s lack of an unambiguous (i.e. explicit and type-conforming) response as affiliating with the courses of action implicitly proposed by her interrogatives after allowing more than 1.5 seconds for him to respond otherwise. What is particularly notable here is the fact that the responses Jess registers are affirmative/preferred ones despite the fact that her queries are not immediately met with unambiguous demonstrations of uptake.
This is significant in that studies of mouth-speaker-to-mouth-speaker interaction have shown that interlocutors often anticipate a dispreferred response after an inter-turn gap following an interrogative and rephrase or invert the preference of the interrogative so as to preempt the anticipated dispreferred response (Heritage
1984; Levinson 1983: 320; Pomerantz 1984; Roberts et al. 2011; Schegloff 2007: 67). While Clarke (2005: 249) has noted that ‘the realization of [AAC users’] turn initial pauses subsequent to questions and meta-interactional turns are not, typically, oriented to as problematic by speaking partners’, we propose that an additional explanation for Jess’s actions has to do with the ways that she orients to the timing of Karl’s contributions. That is, Jess’s first pair parts provide a sequential slot for Karl’s responses, and demonstrably bound the temporal perimeter of this slot in accordance with her own sensibilities of temporal orders of action. Both here and in the example above, Jess allows only a relatively short amount of time (~1.5 seconds) from the moment she finishes her utterance for Karl to produce his responses, ultimately using whatever signs Karl does produce in those moments to fill the sequential opening provided by her questions. The sequential slots Jess allocates to Karl are temporally bounded such that Karl’s action, whether conceivably directed to Jess (as in segment 2) or not (as in segment 1), is bundled as a single object and interpreted within the context of Jess’s most recent utterance. This tactic reflects Jess’s orientation to the temporality of adjacency, effectively demonstrating the temporal limits of her willingness and/or ability to apperceive Karl’s semiotic displays within the contexts opened by her questions.

Segment 3: Sequential and temporal slippage (Lines 21–26)

After turning on a table lamp on the opposite side of the room, Jess stands at the foot of Karl’s bed, facing him, and asks, ‘Does that help, Karl?’ (Line 21). She then gazes at Karl’s face over the top of his device for 3.5 seconds before walking to the far side of the bed and turning to look at his device display.
By the time Jess looks at the display, Karl, who has been typing continuously and at a consistent pace since beginning his utterance in Line 2, has completed the text ‘Light up a l-i’. Within two seconds of looking at the screen, Jess segments and expands the most recently typed text, speaking the anticipated phrase (Line 25: ‘a little’). Note that despite the fact that ‘Light up’ was still displayed on the screen ahead of the text ‘a l-i’, Jess does not speak it. Moreover, her downward intonation and subsequent actions (i.e., looking away from the device and returning to her earlier project of adjusting Karl’s map) position the text segment as a self-contained, bounded unit: a second pair part that recovers and responds to Jess’s preceding question (Figure 4).

Figure 4: Jess’ interpretation of Lines 22–26
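The segmentation at work here can be made concrete with a small simulation. The sketch below is our schematic illustration, not the authors’ method, and the timestamps are invented: an observer who samples a continuously typed utterance only at isolated moments recovers it as separate chunks, much as Jess reads ‘light up’ and ‘a little’ as two utterances.

```python
# Schematic model: what portion of a continuously typed utterance appears
# "new" to an observer who only looks at the display at certain moments.

def observed_segments(keystrokes, observation_times):
    """Split a keystroke timeline of (char, time) pairs into the chunks of
    new text visible at each observation time."""
    segments = []
    prev_t = float("-inf")
    for obs_t in observation_times:
        chunk = "".join(ch for ch, t in keystrokes if prev_t < t <= obs_t)
        segments.append(chunk.strip())
        prev_t = obs_t
    return segments

# One keystroke every 3 seconds, with no pauses between words, loosely
# mimicking Karl's steady production of his utterance.
text = "light up a little"
keystrokes = [(ch, 3.0 * i) for i, ch in enumerate(text)]

# The observer looks at the display twice: once after "up" is complete,
# and once after the rest of the utterance has been typed.
print(observed_segments(keystrokes, [24.0, 51.0]))  # → ['light up', 'a little']
```

A single late observation would instead return the whole utterance as one unit, which corresponds to Karl’s own monothetic orientation to his emerging text.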

It is at this point in the interaction that Jess and Karl are most clearly ‘out of sync’ with one another in terms of the ways in which they are attending to the sequentially grounded meaning of each other’s turns at talk. We might refer to this sort of intersubjective breakdown as a type of ‘sequential slippage’: a misalignment or misattunement to the sequentially grounded conversational or pragmatic relevancies of an action, arising when the participants in an interaction perceive the contextual relevancies of an action differently because they disagree over what has been accomplished prior to it. This sort of slippage has been documented in work by Clarke and Wilkinson (2008), wherein augmented speakers’ utterances are misunderstood as responding to the context of a mouth speaker’s utterance produced while the augmented communicator was typing, often leading to extensive repair sequences (cf. Robillard 1999). In the examples considered here, this form of sequential slippage is predicated on a ‘temporal slippage’: the competing temporal expectations and contrasting temporal experiences that each participant has of the moments in which the interaction unfolds (cf. Engelke and Mangano 2007, 2008).
As discussed above, from within Jess’s perspective, wherein she is only given access to Karl’s emerging text at the moments when she looks at his display, her ongoing treatment of the displayed text as responding to two separate projects seems to be substantiated here by the sequentially and contextually appropriate nature of the text Karl has typed since she asked her question in Line 22: ‘Does that help, Karl?’ However, while the segment is appropriate to answering Jess’s question, Karl actually began producing this segment around Line 19, typing the letter ‘a’ more than 15 seconds before Jess asks her question.4 Here, Jess demonstrates an attunement to a particular temporal order for the unfolding of utterances characterized by extremely tight responsivity and few or no gaps or overlaps between speakers’ turns at talk (Sacks et al. 1974). This orientation presents the additional feature of binding the prior utterance’s unfolding into a discrete unit, the appearance of each ‘next’ utterance providing the terminal boundary of the preceding action. As such, Jess’s question at Line 21 – ‘Does that help, Karl?’ – sequentially deletes the frame that had been in play prior to her utterance. It is thereby that Jess demonstrates her orientation to Karl’s action, and the text it produces, as bounded on one end by her initiation of the frame through her question at Line 21, and on the other end by her anticipation and utterance of the phrase in Line 25. Karl’s perspective on the same text appears quite different, however: the fact that he produces no break in his typing rhythm between completing the word ‘up’ and beginning the next word – ‘a’ – suggests his orientation to the complete utterance – light up a little – as a single, bounded object of attention or experience (Husserl 1970). This is not to say that Karl has preconceived his utterance in its entirety and is merely going through the motions of typing it out, but rather that there is an element of his engagement with the utterance that considers the actions producing it as united in service of a single act. That is, building from Schutz’s (1972 [1932]) distinction between ‘act’ and ‘action’: an ‘act’ is the projected outcome or intended consequence, and thereby embodies an element of past-ness, in that orienting to it involves imagining it as already having been completed; ‘action(s)’, on the other hand, embody a quality of futurity in being deployed in service of bringing about an ‘act’. Thus, contrary to Jess’s segmentative approach, through which she partitions the text on Karl’s display according to her implicit expectations for the temporality of adjacency, Karl demonstrates a monothetic approach to his emerging text, treating it as a single object irrespective of Jess’s co-occurring actions and utterances.

Segment 4: Speaking it out (Lines 28–32)

This segment concludes after 48 seconds as Karl completes his typing project and issues his utterance (Line 28: Light up a little) by selecting the button on his device, glancing at the map Jess has been adjusting and at the light fixture over his head while the utterance is being spoken by his device. Upon hearing Karl’s device speak the entire utterance, Jess pauses, looks at Karl, points to the overhead light, and reformulates Karl’s utterance (Lines 27 and 29: Turn that light on a little?). Recasting Karl’s utterance with question intonation, Jess uses her turn to check her understanding of Karl’s meaning before acknowledging the error of her prior actions (Line 31: O:h I know what he wants). As Jess begins her utterance, Karl starts moving his cursor and actuates the button, issuing his utterance a second time at the end of Jess’s talk (Line 30).
Karl’s re-issuance of the line ‘Light up a little’ appears to respond to the 1.6-second delay between his first production of the utterance and Jess’s indication of uptake, suggesting Karl’s interpretation of Jess’s response as inadequate in some way. To the extent that Karl is responding to Jess’s failure to demonstrate proper uptake, he demonstrates a temporal orientation to the unfolding interaction similar to that displayed by Jess. That is, in having his AAC device repeat the utterance aloud, Karl indicates his interpretation of Jess’s 1.6 seconds of inaction as problematic, suggesting that he too shares in an expectation characterized by tight temporal coordination of inter-speaker transfer. Thus, although Karl demonstrates a highly protracted temporal orientation during his own utterance formulation, he makes no provisions for delays during the moments of performance or for delays attributable to Jess. The fact that Karl attends to the 1.6-second delay in Jess’s response as problematic suggests that Karl’s temporal orientation is linked to the means by which he is participating in the interaction. When engaged with the task of formulating his message, Karl’s temporal attention is, in part, a consequence of his device; however, when issuing his utterance or listening to others, Karl does not encounter the same constraints. Rather, mouth-speech is characterized by a particular temporal order, perceived as reflecting the temporal organization of thought (Clark 1996), while the temporal order inherent in eye-gaze typing affords either an orientation to each individual act of selecting a letter, or to the overarching act of spelling a word or phrase (Schutz 1971 [1964], 1972 [1932]; Engelke in submission). We will return to the ways that the modalities through which interaction takes place impact the temporal orientations of the participants.
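To make concrete why eye-gaze typing imposes this distinct temporal order, the following sketch models the minimal time cost of composing an utterance one selection at a time. The travel and dwell figures are taken from Notes 1 and 2; the additive model and the function name are our own illustrative assumptions, not the authors’ measurement procedure.

```python
# Minimal model (not the authors' software) of eye-gaze dwell typing:
# each key selection costs a travel latency (moving gaze to the key)
# plus a fixed dwell time held on the key to actuate it.

TRAVEL_S = 0.6  # median inter-key latency reported for Karl (Note 1)
DWELL_S = 1.0   # per-keystroke dwell time reported in Note 2

def estimated_composition_time(message: str, extra_selections: int = 1) -> float:
    """Estimate seconds to type `message` one character at a time,
    plus `extra_selections` (e.g., the speak-message button)."""
    n_selections = len(message) + extra_selections
    return n_selections * (TRAVEL_S + DWELL_S)

print(estimated_composition_time("light up a little"))  # 17 chars + 1 button, roughly 28.8 s
```

Eighteen selections at roughly 1.6 seconds each yield a floor of about 29 seconds; the observed 47.9 seconds for ‘Light up a little’ (Note 2) exceeds this because of selection errors (Note 4) and other pauses, underscoring how far composition diverges from the tempo of mouth-speech turn-taking.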

4. Discussion

In this article we have been concerned with elucidating the ways that the participants in an interaction engage with the duration and unfolding of the event, using their individual temporal experiences to guide their actions with one another. To demonstrate this, we focused on the ways that ‘intersubjective’ breakdowns occur around two participants’ misalignment or misattunement with respect to how each individual orients to semiotic actions/objects5 within the flow of the interaction. Implicit in this work has been a notion of ‘intersubjectivity’ common in the field of phenomenology, but slightly different from the one that conceives of intersubjectivity in terms of referentially shared meaning or ‘mutual understanding’. As Duranti explains, for Edmund Husserl, the founder of the field of phenomenology, ‘intersubjectivity means the condition whereby I maintain the assumption that the world as it presents itself to me is the same world as it presents itself to you, not because you can “read my mind” but because I assume that if you were in my place you would see it the way I see it’ (2010: 6). As such, intersubjectivity underpins the possibility of sharing reference and mutual understanding, but is not coterminous with it. In the work presented here, Husserl’s expanded notion of intersubjectivity has been instrumental in that it has allowed us to consider moments of misalignment or ‘breakdown’ that occur based on the ways that participants orient to each other’s actions prior to grounding meaning. In the concluding pages of our discussion we will make some of these claims more explicit through a brief discussion of the phenomenology of time and the ways it relates to the types of slippage outlined above.

4.1. A brief phenomenology of time

Edmund Husserl’s investigations and analyses of internal time consciousness are known for, among other things, having unseated the primacy of clock(ed) time, showing it as a derived and theoretical – rather than natural – mode of attending to duration (Husserl 1991, 2001). Husserl did not deny that time could be segmented and measured, but argued that such a way of attending to time required a special act of reflection, uncharacteristic of the way people ordinarily engage with everyday experience (Husserl 1970, 1991; cf. Duranti 2009, 2010). In so doing, Husserl moved away from the understanding of individual and isolated moments in time, arguing that the experience of time was characterized by the synthetic nature of any given present as extending to a ‘horizon’ of attention that included attention to both the recent past and the anticipated near future (Husserl 1991, 2001). Moreover, far from treating time merely as a way by which we can measure duration, Husserl argued that time was crucial to the ways we perceive and experience objects, entities, and actions. Summarizing Husserl’s argument, Throop (2003: 230) writes, ‘every intentional object is surrounded by a “horizon” that contains multiple arrays of “retention” and “protention” which serve to partially structure what is given focally to our awareness at any given moment, while also serving to connect the existing moment of awareness to both its antecedent and subsequent arisings’. However, it is important to note that while such ‘temporal objects’ are presented to us within a temporal halo or horizon, the object to which we are attending does not itself change, but only the object’s mode of temporal appearance (Husserl 1991: 68[66]). This caution is important as it allows us to consider the ever-present ‘now-moment [as] characterized above all as the new’ (Husserl 1991: 65[63]), and thereby position the experience of temporal objects within a continuous flow of ‘new’ and ‘present’ experiences. As long as we are conscious, new contents will continue to enter our awareness in the mode of temporality characterized by the ‘now’, and will gradually fade into a mode of presentation characterized by retention as new temporal objects appear in the now-moment. Husserl refers to this process as the ‘running-off’ phenomenon (Husserl 1991: 29[27]; Merleau-Ponty 1962: 419; cf. Schutz 1972 [1932]), and it is crucial to understanding the breakdowns outlined throughout this article. That is, as discussed below, at issue here are the ways that the actors experience Karl’s utterance as unfolding in terms of contrasting halos of retention/protention, thereby allowing each individual to attend to the constituent words and phrases differently – i.e., as part of a single utterance or as individual utterances within the unfolding interaction.

4.2. Constituting temporal objects and their running-off

In the examples discussed above, Jess constitutes Karl’s actions as objects and utterances, relying on the frameworks of anticipated action stemming from each of her own actions/utterances to create a field of contextually appropriate possibilities. In so doing, Jess applies temporal boundaries to slots she creates for Karl’s action, using them to constitute turn boundaries in the midst of his ongoing typing actions, and anticipating his utterances. In each case, Jess makes a quick, on-the-spot determination of the constitution of Karl’s utterance (e.g., interpreting persistent typing or expanding a partially typed word or phrase), as well as its discourse-pragmatic status (e.g., agreement, directive). One possible explanation of these phenomena hinges on the fact that Jess and Karl are engaged in what, at a general level, amounts to ‘the same’ project: setting up for the upcoming experiment. Jess and Karl both indicate their attention to this frame, making use of it to ground their utterances and actions. The breakdown occurs, however, as a result of Jess’s attempt to apply this level of intersubjective attunement to the more dynamic level of turn-by-turn context, which responds to each of her actions/utterances. Jess thereby uses the ‘shared-ness’ that characterizes the participants’ engagement at one level to project her own temporal attention within the interaction onto Karl, thereby assuming a tighter coordination of attention than was the case. Here, the simultaneity Jess experiences is a transference based on her own engagement, seemingly confirmed by her interpretations of Karl’s utterance segments as responses to her queries. That is, as Schutz notes, like one’s own, the other’s ‘stream of lived experience is also a continuum, but I can catch sight of only disconnected segments of it … When I become aware of a segment of your lived experience, I arrange what I see within my own meaning-context … Thus, I am always interpreting your lived experience from my own standpoint’ (Schutz 1972 [1932]: 106). It is by virtue of this projection that Jess accommodates Karl’s actions to her own agenda and temporal expectations for the interaction. As an extension of this, we might read Jess’s actions as reflecting a sedimented orientation to time in terms of a habitually conditioned halo for experiencing the temporal objects in conversation. That is, while ‘time’ itself may be conceived as an unabating flow, we do not experience a constant, uniform unfolding ‘now’.
Rather, we attend to objects as emerging within a particular horizon or halo of time. As William James explains, ‘we are constantly conscious of a certain duration – the specious present – varying in length from a few seconds to probably not more than a minute’ (2010 [1890]: 430). And it is thereby, for example, that notes played on a piano depend on the listener’s temporal orientation in order to transcend to the level of a song, the experience deriving from what the individual retains and anticipates from one moment to the next (Bergson 1913; Husserl 1964; Schutz 1972 [1932]). Inasmuch as Jess’s orientation to the temporal boundaries of Karl’s turns reflects a preference for ‘no gaps, no overlaps’ (Sacks et al. 1974; Stivers et al. 2009), her perception of these boundaries reflects a ‘specious present’ which characterizes mouth speakers’ interactional contributions. We might therefore posit that her repeated exposure to, and interaction with, other mouth speakers (including Karl, prior to the onset of his ALS-induced paralysis) conditioned her to orient to Karl’s actions – qua temporal objects – in terms of the same or similar expected horizons of retention and protention that similar semiotic acts – i.e., turns at talk – evoke (cf. Duranti 2009). Moreover, the fact that Jess’s own contributions to the interaction took place through mouth speech suggests an additional, and more immediate, way through which habitual temporal horizons for attending to mouth speech are at play in this interaction. That is, as we have shown throughout this article, Jess’s attention to Karl’s actions is greatly influenced by her expectations of adjacency with respect to her own utterances. Karl’s own behavior while composing his utterance, however, suggests a far more protracted attentional halo, a comparatively expanded specious present.

4.3. Modalities and temporalities

In addition to the habitual temporal horizons evoked by Jess’s mouth speech and the simultaneity she projects, the temporality created by Karl’s mode of producing his utterances also plays a hand in the temporal and sequential slippage seen here. For example, as we have shown above, Karl’s method of typing his utterances straight through affords Jess the ability to decompose the single utterance/text object into constituent segments. First, the durative temporality of the text on Karl’s display allows Jess to engage it at various points in the midst of its unfolding, applying an anticipated boundary based on what is shown at the moment in which she looks at it. What is interesting, and perhaps most unfortunate for the participants, are the ways that the boundaries Jess applies to Karl’s actions are responsive to the context supplied by her most recent question, rather than the boundary that existed at the moment Karl began typing. Second, the visual medium (despite the co-occurring clicks) through which Karl’s turn is composed does not force his interlocutor to attend, as primarily auditory signals do.
That is, several authors interested in the phenomenology of perception have noted that visually presented objects are more easily ignored (e.g., by closing one’s eyes) than are auditory objects, which seem to invade our senses (see Gadamer 2004 [1975]; Husserl 1989; Merleau-Ponty 1962). Finally, as discussed above, the lack of prosodic contours (by virtue of its being text) offers little in the way of scaffolding to bind words into meaningful constructions or to project points of in/completeness (cf. Wilkinson et al. 2011: 161).

5. Conclusion

In this article we have examined an instance of breakdown in shared reference and mutual understanding, tracing its origins to a misalignment in the ways that the participants experience the temporal organization of actions within the unfolding of the interaction. We have used this analysis to show how a breakdown of intersubjectivity at one level – in this case, the experience of time – can motivate breakdowns at the conversational level, manifest through ‘sequential slippage’ and other forms of misunderstanding. Here, we have demonstrated the importance of examining issues related to the temporal experience of the objects of interaction (e.g., utterances and actions) in addition to their sequentiality by showing ways in which normative interpretations and assumptions about adjacency cannot be taken for granted. Finally, we have illustrated the importance of considering the inherent properties of the modalities through which interactions are accomplished by demonstrating the ways that participants attend to actions/objects with respect to the affordances and constraints of the media through which the signs are produced, but also, and importantly, with respect to their expectations for the unfolding of the interaction more generally.

The work presented here, as well as the emerging literature on talk-in-interaction and augmentative communication, has helped to shift attention from the extrinsic indices of communication performance to the actual methods used by interlocutors to carry out their interactions using their bodies and communication technologies. More than illuminating how individuals orient to and deploy their various communication modalities during talk-in-interaction, this research can also provide significant insights for addressing the communication problems experienced by persons with ALS, as well as other individuals who use AAC devices. In our analysis, the communication device served as a primary resource for coordinating interactions between Karl and Jess. The configuration of its features, when operated by the participants, appeared to play a fundamental role in shaping the sequential and temporal aspects of their interactions. The coordination of bodies, gaze, gestures and talk was bound to the affordances and constraints of the device, as were the emergent sequential-temporal orders of their interactions. The level of detail provided here and by other talk-in-interaction investigations has, so far, eluded mainstream AAC research, which has tended to focus on issues of communication rate, intelligibility, vocabulary coverage and pragmatic behavior (Higginbotham and Engelke in press).

We hope the results from this study provoke researchers and technology designers to find ways to help keep people in-time together. As analyzed and discussed in this paper, each semiotic modality, including those represented by the communication device, possesses its own set of temporal characteristics. By examining the role of modality in the ways adjacency is maintained during augmented interactions, its relationship to sequential and temporal slippage, and the interactive consequences of such slippages, these data could be used to design devices that help compensate for an individual’s modality-specific interaction requirements. Such practices may follow those being used to inform the development of utterance-based AAC devices (e.g., Higginbotham and Wilkins 2006; Todman et al. 2008). Current AAC technologies could be redesigned with features that provide the user with a variety of temporal options. These could include ways of maintaining normative temporal expectations, retaining one’s preferred temporal order, and supporting the discourse context for communication partners. With respect to the latter, ongoing research by Higginbotham and Engelke (in press) focuses on developing technologies that support the memory needs of AAC users’ communication partners by displaying relevant aspects of the discourse history during the social interaction. Finally, reconceptualizing automaticity as an interactional achievement provokes us to consider designing interfaces that help the augmented communicator focus on achieving their interaction goals by minimizing the operational requirements of their technologies.

Notes

1. One way to understand how quickly Karl can move his eyes to type is to examine the time it takes him to move the onscreen cursor between selected keys. The average (median) inter-key selection latency was 0.6 seconds, with half the data (interquartile range) falling between 0.3 and 1.3 seconds in duration. That is, Karl moves between keys in rapid succession, wasting little time between typing selections.
2. During the 4-minute, 17-second interaction analysed for this paper, Karl produced six typed utterances: ‘Head’ (14.5 sec), ‘Light’ (11 sec), ‘No’ (4.9 sec), ‘Light up a little’ (47.9 sec), ‘overhead’ (16.3 sec), and ‘Good’ (10.1 sec), averaging approximately 10 seconds per word. This prolonged message preparation delay is due, in part, to the 1-second dwell time accompanying each keystroke selection.
3. Bracketed words (e.g., ) indicate buttons selected by Karl on his eye-tracking device.
4. On Karl’s first attempt to select the button, he selected the adjacent button. Karl quickly recovered by exiting the phrase screen and then selecting the button.
5. Throughout this article we have elected to use this format (i.e., ‘action/object’ or ‘action/utterance’) in order to reflect the dual nature of the object being attended to. That is, as demonstrated throughout, Karl’s typing actions, as well as the emerging text on his display, can be attended to either as in service of a larger ‘act’ or as a complete object in its own right.
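The per-word figure in Note 2 can be recovered directly from the reported durations. This is a back-of-the-envelope check rather than the authors’ own computation; pooling all six utterances gives a rate slightly above the rounded estimate.

```python
# Utterance durations (seconds) and word counts as reported in Note 2.
utterances = {
    "Head": (14.5, 1),
    "Light": (11.0, 1),
    "No": (4.9, 1),
    "Light up a little": (47.9, 4),
    "overhead": (16.3, 1),
    "Good": (10.1, 1),
}

total_seconds = sum(duration for duration, _ in utterances.values())
total_words = sum(words for _, words in utterances.values())

print(round(total_seconds / total_words, 1))  # ≈ 11.6 s per word
```

Pooled across the six utterances (9 words in 104.7 seconds), the rate is about 11.6 seconds per word, on the order of the authors’ ‘approximately 10 seconds per word’ and consistent with the 1-second dwell cost cited above.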

Acknowledgments

This article draws on research that is funded under grant #H133E080011 from the National Institute on Disability and Rehabilitation Research and the Wenner-Gren Foundation for Anthropological Research. The authors would like to thank Charles Goodwin, Stephen Black, Jeff Good, Chris Klein, Michael Phillips and an anonymous reviewer for reading and providing valuable comments on previous versions of this article. The work has benefited greatly from the guidance and direction provided by Alessandro Duranti and C. Jason Throop. We would also like to thank Regina Estrada and Netta Avineri for their assistance and suggestions, as well as Katrina Fulcher, Jenn Seale, Haesik Min, Anna Corwin, Rachel George and Hanna Garth for their input at data sessions for this project. Finally, we are extremely grateful to the individuals who participated in this research. Any deficiencies remain our own.

About the authors

Christopher Engelke is a doctoral candidate in the linguistic anthropology programme at the University of California, Los Angeles. After doing research on pilgrimage and healing ceremonies in South Asia for several years, Engelke returned to the United States to complete his Ph.D. Integrating theory and methods from linguistic anthropology, phenomenology, and talk-in-interaction research, Engelke’s dissertation research examines the design and use of assistive communications technologies. The primary focus of this work is on the ways that people with speech disabilities participate in everyday interactions, and the effects that various tools and practices have on the structures of interaction and manifestations of intersubjectivity. Engelke’s research also explores the ways that able-bodied designers/engineers approach the task of creating devices for people who encounter a different set of affordances than themselves, as well as the narrative presentations of augmented communicators. Engelke has worked on a number of assistive technology design projects, including various technologies for people with sensory disabilities and InTRA, a suite of applications that leverage partner speech to improve the flow of interactions involving augmented communicators. Address: Department of Anthropology, University of California, Los Angeles, 341 Haines Hall, 375 Portola Plaza, Los Angeles, CA 90095-1553, USA (email: [email protected]).

Dr. D. Jeffery Higginbotham is a professor of Communicative Disorders and Sciences at SUNY-Buffalo, as well as Director of Buffalo’s Signature Center for Excellence in Augmented Communication. The thrust of his research focuses on how individuals use assistive technologies to interact with their social world. To accomplish this, he studies the talk-in-interaction of augmented speakers and their partners and the human and device design factors associated with assistive technology use. Dr. Higginbotham is a founding member of the Rehabilitation Engineering Research Center for Communication Enhancement. He also consults with industry on augmentative communication device design.


References

Bergson, H. (1913). Time and Free Will: An Essay on the Immediate Data of Consciousness. London: George Allen & Co.
Beukelman, D. R. and Mirenda, P. (2005). Augmentative and Alternative Communication: Supporting Children and Adults with Complex Communication Needs. Baltimore, MD: Paul H. Brookes Publishing Co.
Black, S. P. (2008). Creativity and learning jazz: The practice of ‘listening’. Mind, Culture, and Activity 15 (4): 279–295. http://dx.doi.org/10.1080/10749030802391039
Clark, H. H. (1996). Using Language. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511620539
Clarke, M. (2005). Conversational interaction between children using communication aids and their peers. Ph.D. Thesis, Department of Human Communication Science, University College London, London.
Clarke, M. and Wilkinson, R. (2007). Interaction between children with cerebral palsy and their peers 1: Organizing and understanding VOCA use. Augmentative and Alternative Communication 23 (4): 336–348. http://dx.doi.org/10.1080/07434610701390350
Clarke, M. and Wilkinson, R. (2008). Interaction between children with cerebral palsy and their peers 2: Understanding initiated VOCA mediated turns. Augmentative and Alternative Communication 24 (1): 3–15.
Clarke, M. and Wilkinson, R. (2010). Communication aid use in children's conversation: Time, timing, and speaker transfer. In H. Gardner and M. Forrester (eds) Analysing Interaction in Childhood: Insights from Conversation Analysis, 249–266. London: Wiley.
Duranti, A. (2009). The relevance of Husserl’s theory to language socialization. Journal of Linguistic Anthropology 19 (2): 205–226. http://dx.doi.org/10.1111/j.1548-1395.2009.01031.x
Duranti, A. (2010). Husserl, intersubjectivity and anthropology. Anthropological Theory 10 (1–2): 16–35. http://dx.doi.org/10.1177/1463499610370517
Engelke, C. R. (in submission). Multi-modal and inter-modal communication in rapid prompting method mediated interaction.
Engelke, C. R. and Mangano, D. (2007). Using the world: Phenomenology and semiotic practice in interactions with children with severe autism. American Association of Applied Linguistics Annual Meeting, Costa Mesa, CA.
Engelke, C. R. and Mangano, D. (2008). Temporal cues: What children with severe autism can teach us about the organization of intersubjectivity. SALSA, University of Texas at Austin.
Gadamer, H. G. (2004 [1975]). Truth and Method. London and New York: Continuum.
Goodwin, C. (1979). The interactive construction of a sentence in natural conversation. In G. Psathas (ed.) Everyday Language, 97–121. New York: Halsted Press.
Goodwin, C. (1980). Restarts, pauses, and the achievement of a state of mutual gaze at turn-beginning. Sociological Inquiry 50 (3–4): 272–302. http://dx.doi.org/10.1111/j.1475-682X.1980.tb00023.x


Goodwin, C. (2003). Conversational frameworks for the accomplishment of meaning in aphasia. In C. Goodwin (ed.) Conversation and Brain Damage, 90–116. Oxford: Oxford University Press.
Goodwin, C. (2004). A competent speaker who can't speak: The social life of aphasia. Journal of Linguistic Anthropology 14 (2): 151–170. http://dx.doi.org/10.1525/jlin.2004.14.2.151
Goodwin, C. (2006). Human sociality as mutual orientation in a rich interactive environment: Multimodal utterances and pointing in aphasia. In N. J. Enfield and S. C. Levinson (eds) Roots of Human Sociality: Culture, Cognition, and Interaction, 97–125. Oxford: Berg.
Goodwin, C. (2010). Building action in public environments with diverse semiotic resources. Versus (Special Issue ‘The External Mind: Perspectives on Semiosis, Distribution and Situation in Cognition’, edited by Riccardo Fusaroli, Tommaso Granelli, and Claudio Paolucci) 112–113: 165–178.
Goodwin, C. (2011). Contextures of action. In J. Streeck, C. Goodwin and C. D. LeBaron (eds) Embodied Interaction: Language and the Body in the Material World, 182–193. Cambridge: Cambridge University Press.
Goodwin, C. and Goodwin, M. H. (1987). Concurrent operations on talk: Notes on the interactive organization of assessments. IPRA Papers in Pragmatics 1: 1–54.
Goodwin, C., Goodwin, M. H. and Olsher, D. (2002). Producing sense with nonsense syllables: Turn and sequence in conversations with a man with severe aphasia. In C. E. Ford, B. A. Fox and S. A. Thompson (eds) The Language of Turn and Sequence, 56–80. Oxford: Oxford University Press.
Heeschen, C. and Schegloff, E. A. (1999). Agrammatism, adaptation theory, conversation analysis: On the role of so-called telegraphic style in talk-in-interaction. Aphasiology 13 (3–4): 365–405.
Heritage, J. (1984). Garfinkel and Ethnomethodology. Cambridge: Polity Press.
Higginbotham, D. J. and Engelke, C. R. (in press). A primer for doing talk-in-interaction research in augmentative and alternative communication. Augmentative and Alternative Communication.
Higginbotham, D. J. and Wilkins, D. P. (1999). Slipping through the timestream: Social issues of time and timing in augmented interactions. In D. Kovarsky, J. Duchan and M. Maxwell (eds) Constructing (In)competence: Disabling Evaluations in Clinical and Social Interaction, 49–82. Mahwah, NJ: Lawrence Erlbaum Associates.
Hill, K. and Romich, B. (2002). A rate index for augmentative and alternative communication. International Journal of Speech Technology 5 (1): 57–64. http://dx.doi.org/10.1023/A:1013638916623
Husserl, E. (1964). The Phenomenology of Internal Time-Consciousness. Bloomington, IN: Indiana University Press.
Husserl, E. (1970). Logical Investigations. London: Routledge and Kegan Paul; Humanities Press.


Husserl, E. (1989). Ideas Pertaining to a Pure Phenomenology and to a Phenomenological Philosophy. Second Book: Studies in the Phenomenology of Constitution. Dordrecht: Kluwer Academic Press. http://dx.doi.org/10.1007/978-94-009-2233-4
Husserl, E. (1991). On the Phenomenology of the Consciousness of Internal Time (1893–1917). Dordrecht/Boston/London: Kluwer Academic Publishers. http://dx.doi.org/10.1007/978-94-011-3718-8
Husserl, E. (2001). Analyses Concerning Passive and Active Synthesis: Lectures on Transcendental Logic. Dordrecht: Kluwer Academic Publishers.
James, W. (2010 [1890]). The Principles of Psychology. Lawrence, KS: Digireads.com Publishing.
Jefferson, G. (1984). Notes on some orderlinesses of overlap onset. In V. D’Urso and P. Leonardi (eds) Discourse Analysis and Natural Rhetorics, 11–38. Padova: CLEUP.
Jefferson, G. (1986). Notes on ‘latency’ in overlap onset. Human Studies 9 (2/3): 153–183.
Kraat, A. W. (1985). Communication Interaction between Aided and Natural Speakers: A State of the Art Report. International Commission on Technical Aids, Building, and Transportation.
Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press.
Lou, F. (2007). Personal narrative telling by individuals with ALS who use AAC devices. Ph.D. Thesis, Department of Communicative Disorders and Sciences, State University of New York at Buffalo, New York.
Lou, F., Bardach, L., Cornish, J. and Higginbotham, D. J. (2008). Personal narrative telling of AAC users with ALS. Poster presented at the annual convention of the American Speech-Language-Hearing Association, Philadelphia, PA.
Merleau-Ponty, M. (1962). Phenomenology of Perception. London: Routledge.
Pomerantz, A. M. (1984). Agreeing and disagreeing with assessments: Some features of preferred/dispreferred turn shapes. In J. M. Atkinson and J. Heritage (eds) Structures of Social Action: Studies in Conversation Analysis, 57–101. Cambridge: Cambridge University Press.
Raymond, G. (2003). Grammar and social organization: Yes/no interrogatives and the structure of responding. American Sociological Review 68 (6): 939–967. http://dx.doi.org/10.2307/1519752
Roberts, F., Margutti, P. and Takano, S. (2011). Judgments concerning the valence of inter-turn silence across speakers of American English, Italian, and Japanese. Discourse Processes 48 (5): 331–354. http://dx.doi.org/10.1080/0163853X.2011.558002
Robillard, A. B. (1999). Meaning of a Disability: The Lived Experience of Paralysis. Philadelphia, PA: Temple University Press.
Robillard, A. B. (2006). Paralysis. In G. L. Albrecht (ed.) Encyclopedia of Disability, 1197–1201. Thousand Oaks, CA: Sage Publications.
Sacks, H., Schegloff, E. A. and Jefferson, G. (1974). A simplest systematics for the organization of turn-taking for conversation. Language 50 (4): 696–735. http://dx.doi.org/10.2307/412243
Schegloff, E. A. (1992). Repair after next turn: The last structurally provided defense of intersubjectivity in conversation. American Journal of Sociology 97 (5): 1295–1345. http://dx.doi.org/10.1086/229903
Schegloff, E. A. (2007). Sequence Organization in Interaction: A Primer in Conversation Analysis. Cambridge: Cambridge University Press. http://dx.doi.org/10.1017/CBO9780511791208
Schegloff, E. A., Jefferson, G. and Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language 53 (2): 361–382.
Schutz, A. (1971 [1964]). Making music together: A study in social relationship. In A. Brodersen (ed.) Collected Papers, Vol. 2, 159–179. The Hague: Martinus Nijhoff.
Schutz, A. (1972 [1932]). The Phenomenology of the Social World. London: Heinemann Educational.
Smith, L. E., Higginbotham, D. J., Lesher, G. W., Moulton, B. M. and Mathy, P. (2006). The development of an automated method for analyzing communication rate in augmentative and alternative communication. Assistive Technology 18 (1): 107–121. http://dx.doi.org/10.1080/10400435.2006.10131910
Stivers, T., Enfield, N. J., Brown, P., Englert, C., Hayashi, M., Heinemann, T., Hoymann, G., Rossano, F., de Ruiter, J. P., Yoon, K.-E. and Levinson, S. C. (2009). Universals and cultural variation in turn-taking in conversation. Proceedings of the National Academy of Sciences of the United States of America 106: 10587–10592. http://dx.doi.org/10.1073/pnas.0903616106
Throop, C. J. (2003). Articulating experience. Anthropological Theory 3 (2): 219–241. http://dx.doi.org/10.1177/1463499603003002006
Todman, J., Alm, N., Higginbotham, D. J. and File, P. (2008). Whole utterance approaches in AAC. Augmentative and Alternative Communication 24 (3): 235–254. http://dx.doi.org/10.1080/08990220802388271
Todman, J. and Rzepecka, H. (2003). Effect of pre-utterance pause length on perceptions of communicative competence in AAC-aided social conversations. Augmentative and Alternative Communication 19 (4): 222–234. http://dx.doi.org/10.1080/07434610310001605810
Trnka, K., Yarrington, D., McCaw, J., McCoy, K. F. and Pennington, C. (2007). The effects of word prediction on communication rate for AAC. NAACL-HLT Companion Volume: Short Papers, 172–176.
Wilkinson, R. (1999). Sequentiality as a problem and resource for intersubjectivity in aphasic conversation: Analysis and implications for therapy. Aphasiology 13 (4): 327–343. http://dx.doi.org/10.1080/026870399402127
Wilkinson, R., Beeke, S. and Maxim, J. (2003). Adapting to conversation: On the use of
linguistic resources by speakers with fluent aphasia in the construction of turns at talk. In C. Goodwin (ed.) Conversation and Brain Damage, 59–89. Oxford: Oxford University Press.
Wilkinson, R., Bloch, S. and Clarke, M. (2011). On the use of graphic resources in interaction by people with communication disorders. In J. Streeck, C. Goodwin and C. D. LeBaron (eds) Embodied Interaction: Language and Body in the Material World, 152–168. Cambridge: Cambridge University Press.