DURHAM MEETING 1999

A scientific meeting will be held in the Psychology Department (Science Site, Stockton Road) of the University of Durham on 5/6/7 July, 1999. The Local Secretary will be Professor J. Findlay.

 

PROGRAMME

 Monday 5 July
 

Lecture Room L50

2.00 Tim Valentine, Jarrod Hollis* and Viv Moore* (Goldsmiths College, University of London)
The nominal competitor effect: When one name is better than two.

2.30 Viv Moore* and Tim Valentine (Goldsmiths College, University of London)
The effects of age of acquisition in processing famous faces and names: Exploring the locus and proposing a mechanism.

3.00 Peter E. Morris and Lee H. V. Wickham* (University of Lancaster)
Typicality and face recognition: A critical re-examination of the two factor theory.

3.30 Tea

4.00 Tom Campbell* and D. C. Berry (University of Reading)
Lip-reading and irrelevant speech: Changing, but not unchanging, speech disrupts memorial and perceptual-attentional processes.

4.30 Maxine McCotter* and Tim Jordan (University of Nottingham)
Investigating the role of colour and luminance in visual and audiovisual speech perception.

5.00 Sharon Thomas* and Tim Jordan (University of Nottingham)
Investigating the role of low spatial frequency information in visual and audiovisual speech perception.

5.40 Business Meeting (members only)

6.00 - 7.30 Reception (Room L49)
 

Tuesday 6 July
 

Lecture Room L50

9.00 Annette Kinder* and David Shanks (Philipps-University, Marburg, Germany and University College London)
Is implicit learning selectively spared in amnesia?

9.30 S. Helman* and D. C. Berry (University of Reading)
Qualitative differences between implicit and explicit processing.

10.00 L. T. Butler*, D. C. Berry and R. Shepherd* (University of Reading and University of Surrey)
Implicitly formed preferences for food labels.

10.30 Coffee

11.00 Richard J. Tunney* and Gerry T. M. Altmann (University of York)
The transfer effect in artificial grammar learning: A functional dissociation between two modes of classification.

11.30 Jeffrey S. Bowers* (University of Bristol. Introduced by Dr. Iain D. Gilchrist)
Long-term priming as a by-product of perceptual learning.

12.00 Alan Richardson-Klavehn*, A. J. Benjamin Clarke* and John M. Gardiner (University of Westminster and City University. Introduced by Professor D. C. Berry)
Involuntary "perceptual" priming from generating at study, as revealed by conjoint dissociations between incidental and intentional memory tests.

12.30 Angus Gellatly, Matthew Johnson*, Clare Fox* and Geoff Cole (University of Keele)
Motor inhibition in reaction time tasks.

1.00 Lunch

2.00 Holly P. Branigan*, Martin J. Pickering and Alexandra A. Cleland* (University of Glasgow)
Syntactic co-ordination in dialogue.

2.30 Martin J. Pickering, Holly P. Branigan* and Andrew J. Stewart* (University of Glasgow)
A hierarchical model of syntactic activation during language production: Evidence from syntactic priming.

3.00 Jane L. Morgan* and Linda R. Wheeldon (University of Birmingham)
Monitoring the inner speech code.

3.30 Tea

4.00 Jelena Havelka* and Clive Frankish (University of Bristol)
What happens when case mixing disrupts functional spelling units?

4.30 Kate Nation (University of York)
Which units of sound-to-spelling correspondence are important when skilled adults spell novel words? Effects of lexical priming and frequency of occurrence.

5.00 Tim Jordan and Ken Scott-Brown* (University of Nottingham)
Word superiority effects with visually-filtered strings: Evidence for coarse visual cues in word recognition.
 

6.00 Twenty-seventh Bartlett Lecture. Professor L. L. Jacoby (McMaster University)
When recollection fails: Memory dissociations.

8.00 Conference Dinner
Collingwood College, South Road, Durham
 
 

Wednesday 7 July
 

START OF PARALLEL SESSION

Session A: Lecture Room L48

9.00 C. O. Fritz* and P. E. Morris (Bolton Institute and Lancaster University)
Part set cuing effects in coherent contexts.

9.30 Teresa McCormack*, Gordon D. A. Brown*, Elizabeth A. Maylor and Richard J. Darby (University of Warwick)
Time estimation from childhood to old age.

10.00 R. Walker, D. Maurer*, S. Mannan*, A. Pambakian* and C. Kennard* (Royal Holloway, University of London, McMaster University, Ontario and Imperial College School of Medicine)
Naso-temporal asymmetries in saccade latency in normal and hemianopic subjects.

10.30 Coffee

Symposium: Sequential sampling and active vision. (Organised by Dr. John Findlay)

11.00 Keith Rayner (University of Massachusetts)
Eye movements during reading, scanning and visual search.

11.30 A. Pollatsek* (University of Massachusetts, visiting MRC Cognition and Brain Sciences Unit, Cambridge)
The Impact of "Non-Vision" on Vision.

12.00 Michael Land*, Neil Mennie* and Jenny Rusted (University of Sussex)
The roles of eye movements in activities of everyday life.

12.30 Iain D. Gilchrist, Valerie Brown* and John M. Findlay (University of Bristol and University of Durham)
Keeping up with your head: active vision without eye movements.

1.00 Lunch
 

Session B: Lecture Room L50

9.00 Gerry T. M. Altmann and Y. Kamide* (University of York)
Incremental interpretation at verbs: Who needs nouns when verbs will do just as well?

9.30 Agnieszka A. Reid* and William D. Marslen-Wilson (MRC Cognition and Brain Sciences Unit, Cambridge)
Combination and alternation in Polish: Cross-linguistic studies of the mental lexicon.

10.00 Andrew J. Calder, Mike Burton, Paul Miller* and Shigeru Akamatsu* (MRC Cognition and Brain Sciences Unit, Cambridge, University of Glasgow and ATR Human Information Processing Research Laboratories, Kyoto, Japan)
It's written all over your face: A principal component analysis of facial expressions.

10.30 Coffee

11.00 Jonathan K. Foster (University of Manchester)
The Hippocampus and memory: A convergent operations approach.

11.30 Carlo De Lillo (University of Leicester. Introduced by Dr. B. O. McGonigle)
Chunking by spatial proximity and search efficiency in rats.

12.00 Robert Boakes (University of Sydney)
Was it the chicken or the rice that made him ill? Within-event associations and causal judgements.

12.30 A. J. Lloyd* (Oxford Brookes University. Introduced by Dr. K. Nation)
The perception of benefit: Does it modulate the perception of risk?

1.00 Lunch
 

END OF PARALLEL SESSION
 

Lecture Room L50

2.00 Lilach Shalev* and Glyn W. Humphreys (The Open University of Israel and University of Birmingham)
Length and size perception in unilateral neglect: Compression or magnification?

2.30 Jules Davidoff and Debi Roberson (Goldsmiths College, University of London)
Similarity and categorisation: neuropsychological evidence for a dissociation in explicit categorisation tasks.

3.00 Andy Young, Jill Keane*, Alan Parkin, Barbara Wilson* and Andy Calder (University of York, MRC Cognition and Brain Sciences Unit, Cambridge and University of Sussex)
Fear recognition after brain injury.

3.30 Tea

4.00 Glyn W. Humphreys and Derrick Watson (University of Birmingham and University of Warwick)
Dual task interference on visual marking: Modality-independent and modality-dependent components in the set-up of the marking state.

4.30 Anne P. Hillstrom*, Charles Spence, Kimron Shapiro and Salvador Soto* (University of Wales, Bangor, University of Oxford and Barcelona University)
Tactile and cross-modal attentional blinks.

5.00 Charles Spence and Salvador Soto* (University of Oxford and Universidad de Barcelona)
Intramodal and crossmodal temporal processing deficits.

5.30 Meeting ends
 
 

ABSTRACTS



The nominal competitor effect: When one name is better than two.

Tim Valentine, Jarrod Hollis and Viv Moore

    Goldsmiths College, University of London

An interactive activation and competition (IAC) model of familiar face naming (Brédart, Valentine, Calder and Gassi, 1995) predicts that people are slower to name a celebrity for whom two names are equally available than they are to name an equally familiar celebrity for whom only one name is available. However, naming should only be slowed by competition from another name; a highly available biographical property should not increase face naming latency. These predictions were confirmed in a simulation of the model. Experiment 1 demonstrated that the predictions are supported in an empirical study in which production of two names, or of one name and a biographical property, of famous actors was highly practised. Experiment 2 shows that the inhibitory effect persists after many intervening items have been named. The effect, termed the nominal competitor effect, is distinguished from the semantic competitor effect (Wheeldon & Monsell, 1994) and is attributed to increases in connection strength that give rise to repetition priming.

    Brédart, S., Valentine, T., Calder, A., & Gassi, L. (1995). An interactive activation model of face naming. Quarterly Journal of Experimental Psychology, 48A, 466-486.
    Wheeldon, L. R., & Monsell, S. (1994). Inhibition of spoken word production by priming a semantic competitor. Journal of Memory and Language, 33, 332-356.
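
For readers less familiar with interactive activation and competition networks, the toy simulation below illustrates why a second equally available name delays naming: both name units receive excitation from the face, but each inhibits the other, so the winner crosses threshold later. This is an editor's minimal sketch, not Brédart et al.'s (1995) model itself; the update constants and threshold are invented for the demonstration.

```python
# Toy IAC-style competition between name units; illustrative only.
import numpy as np

def cycles_to_name(n_names, excitation=0.4, inhibition=0.15,
                   decay=0.02, threshold=0.9, max_cycles=1000):
    """Cycles until one name unit crosses threshold (a stand-in for naming latency)."""
    act = np.zeros(n_names)
    rng = np.random.default_rng(0)
    ext = excitation * (1 + 0.02 * rng.random(n_names))  # tiny asymmetry so one unit wins
    for cycle in range(1, max_cycles + 1):
        net = ext - inhibition * (act.sum() - act)       # lateral inhibition between names
        # standard IAC update: scale positive net input by distance to ceiling,
        # negative net input by distance to floor, then decay toward rest (0)
        act += np.where(net > 0, net * (1 - act), net * act) - decay * act
        act = act.clip(0, 1)
        if act.max() >= threshold:
            return cycle
    return max_cycles

print("one available name: ", cycles_to_name(1))  # crosses threshold sooner
print("two available names:", cycles_to_name(2))  # mutual inhibition delays naming
```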
 

The effects of age of acquisition in processing famous faces and names: Exploring the locus and proposing a mechanism.

Viv Moore and Tim Valentine

    Goldsmiths College, University of London

Words acquired early in life are recognised and produced faster than words acquired later in life. It has been proposed that the effect of age of acquisition (AoA) arises from the development of phonological representations during language acquisition. Moore and Valentine (1998) found an effect of AoA on the speed of naming celebrities' faces. This result is problematic for an account of AoA based on language development because knowledge of celebrities is acquired after the early representations in the phonological output lexicon are formed. Three experiments explore the locus and mechanism of AoA in face and name processing tasks. Participants read aloud early-acquired names faster than late-acquired names (Experiment 1). Familiarity decisions are faster to early-acquired celebrities' names and faces than to late-acquired names and faces (Experiments 2 & 3). These findings present a challenge for connectionist models to provide an adequate model of both AoA and cumulative frequency. It is argued that temporal order of acquisition rather than age of acquisition may be the chief determinant of processing speed.
 

Typicality and face recognition: A critical re-examination of the two factor theory.

Peter E. Morris and Lee H. V. Wickham

    University of Lancaster

Vokey and Read (1992) proposed that the effect of typicality on face recognition was a function of familiarity and rated memorability, reporting that typicality loaded equally on components they identified with these variables. However, our examination of subsequent research that has claimed to replicate this finding revealed only limited support. We attempted to replicate and extend the original study. We found three components that we identified as attractiveness, distinctiveness and residual memory. Three of the four measures of typicality that we employed failed to load with familiarity on the attractiveness component as the Vokey and Read model predicts. The typicality measures and rated memorability loaded heavily on the distinctiveness component, with moderate loading from recognition hits. Only hits and false positives loaded on the residual memory factor. We offer an alternative interpretation of Vokey and Read's (1992) findings in terms of the metamemorial beliefs of participants, the mere exposure effect and the relationship between typicality and attractiveness.

    Vokey, J. R., & Read, J. D. (1992). Familiarity, memorability and the effect of typicality on the recognition of faces. Memory & Cognition, 20, 291-302.
 

Lip-reading and irrelevant speech: Changing, but not unchanging, speech disrupts memorial and perceptual-attentional processes.

Tom Campbell and D. C. Berry

    University of Reading

Four experiments re-examined Jones' (1994) finding that memory for lip-read items obeys changing state principles: A changing state sequence of irrelevant speech sounds, synchronous with to-be-remembered items, significantly disrupts immediate written recall of lip-read digit items, whereas an unchanging repeated syllable is not disruptive. Experiment 1 replicated this finding. Subsequent experiments showed that changing speech disrupted perceptual report of lip-read items significantly more than unchanging speech, when report was both written (Experiment 2) and spoken (Experiment 3). Finally, Experiment 4 showed that when irrelevant speech occurred in the retention interval of a delayed recall task, only changing irrelevant speech was significantly disruptive. Memory for lip-read items thus obeys changing state principles, but there is also a changing state disruption of perceptual-attentional processes.
 

Investigating the role of colour and luminance in visual and audiovisual speech perception.

Maxine McCotter and Tim Jordan

    University of Nottingham

Research has shown that auditory speech recognition is influenced by the appearance of a talker's face but the actual nature of this information has yet to be established. Two experiments are reported which investigated the importance of colour and luminance information in visual and audiovisual speech perception using facial images presented in colour, grey scale, and photographic negative. Auditory and visual forms of the syllables /ba/, /bi/, /ga/, /gi/, /va/, and /vi/ were used to produce auditory, visual, congruent and incongruent audiovisual speech stimuli. Experiment 1 showed that visual speech identification and visual influences on identifying the auditory components of congruent and incongruent audiovisual speech were identical for colour and grey scale faces and only slightly reduced for negative faces. Experiment 2 presented the same stimuli in a background of white noise and showed the same pattern of effects. These results suggest that luminance, rather than colour, underlies the perception of visual and audiovisual speech. Moreover, comparisons with negative images suggest that the role of more complex luminance variations may be relatively minor and underscore the importance of relatively basic luminance boundaries available in colour, grey scale and negative images. Theoretical and practical implications for the processing of visual and audiovisual speech are discussed.
 

Investigating the role of low spatial frequency information in visual and audiovisual speech perception.

Sharon Thomas and Tim Jordan

    University of Nottingham

We have known for some time that seeing a person's face move when they are speaking affects the speech sounds we hear. Previous theories have argued that this powerful means of visual communication relies heavily on the perception of fine visual detail in a talker's face. However, research in static face recognition suggests that viewers are sensitive to structural relations among facial features which are transmitted through low-spatial-frequency information. We investigated the importance of low spatial frequency information in dynamic visual and audiovisual speech recognition by presenting a variety of low spatial frequency audiovisual images of talking faces in which the configuration of facial features is maintained or disrupted. Results show that the perception of visual speech per se, and the influence of visual speech on perceived speech sounds can be robust even when low spatial frequency images of talking faces are presented. These findings are discussed in terms of the quality of visual information required for visual and audiovisual speech perception and its relationship to facial configurational information.
 

Is implicit learning selectively spared in amnesia?

Annette Kinder1 and David Shanks2

    1. Philipps-University, Marburg, Germany
    2. University College London

A key claim of current theoretical analyses of the memory impairments associated with amnesia is that certain distinct forms of learning and memory are spared. In a well-known research programme supporting this claim, Knowlton and Squire found that amnesic patients and controls were indistinguishable in their ability to learn about and classify strings of letters generated from a finite-state grammar, but that the amnesics were impaired at recognising the training strings. We show, first, that this pattern of results is predicted by a single-system connectionist model of artificial grammar learning in which amnesia is simulated by a reduced learning rate. We then show in two experiments that a counterintuitive assumption of this model, that classification and recognition are functionally identical in grammar learning, is correct. We conclude that the performance of amnesic patients in this implicit learning task is better understood in terms of a general, rather than a selective, memory deficit.
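
To make the single-system claim concrete, the sketch below derives both judgements from one familiarity signal: a delta-rule autoassociator is trained on bigram codings of letter strings, and "amnesia" is implemented only as a lower learning rate, so classification (grammatical vs ungrammatical) and recognition (old vs new) necessarily move together. This is an editor's illustration with an invented alphabet, strings and parameters, not Kinder and Shanks's actual network.

```python
# One familiarity signal for both classification and recognition; illustrative only.
import itertools
import numpy as np

ALPHABET = "MTVRX"
BIGRAMS = ["".join(p) for p in itertools.product(ALPHABET, repeat=2)]

def encode(s):
    """Bigram count vector for a letter string."""
    v = np.zeros(len(BIGRAMS))
    for a, b in zip(s, s[1:]):
        v[BIGRAMS.index(a + b)] += 1
    return v

def train(strings, lr, epochs=20):
    """Delta-rule autoassociator; 'amnesia' is simulated purely by a lower lr."""
    W = np.zeros((len(BIGRAMS), len(BIGRAMS)))
    for _ in range(epochs):
        for s in strings:
            x = encode(s)
            W += lr * np.outer(x - W @ x, x)
    return W

def familiarity(W, s):
    x = encode(s)
    return -np.linalg.norm(x - W @ x)   # low reconstruction error = familiar

train_items = ["MTVT", "MTTVX", "VXVRX", "VXRRM"]   # invented "grammatical" items
new_grammatical = ["MTTVT", "VXVRRM"]               # novel but rule-conforming
ungrammatical = ["XTMRV", "RMXTV"]                  # novel rule-violating

for label, lr in [("control", 0.05), ("amnesic (reduced lr)", 0.005)]:
    W = train(train_items, lr)
    score = lambda items: np.mean([familiarity(W, s) for s in items])
    print(label, "| old:", round(score(train_items), 3),
          "new grammatical:", round(score(new_grammatical), 3),
          "ungrammatical:", round(score(ungrammatical), 3))
```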
 

Qualitative differences between implicit and explicit processing.

S. Helman and D. C. Berry

    University of Reading

Following exposure to grammatical letter strings (created by an artificial grammar), participants typically rate novel grammatical strings as preferable to novel ungrammatical strings (e.g. Manza & Bornstein, 1995). This finding shows that preference ratings can reveal the implicit application of knowledge about artificial grammars. In two experiments, we sought to demonstrate qualitative differences between implicit and explicit versions of the preference task. Experiment 1 showed that introducing a secondary task at test interfered with performance on the explicit version, but not the implicit version, of the preference task. Experiment 2 showed that when participants were forced to make ratings within a very short deadline (2500 ms), performance was reduced (relative to no-deadline performance) on the explicit version of the task, but not on the implicit version. Results are discussed in terms of subjective approaches to the distinction between implicit and explicit processing.
 

Implicitly formed preferences for food labels.

L. T. Butler1, D. C. Berry1, and R. Shepherd2

    1. University of Reading
    2. University of Surrey

Two experiments are reported which sought to investigate the role of implicit memory in consumer choice. To do this, priming for simple food labels was examined using a Preference Judgement Task (PJT; e.g., Kunst-Wilson and Zajonc, 1980). As the task has rarely been employed, a related aim was to explore whether it displays similar properties to other, more commonly used implicit memory tasks. The first experiment showed that a study-test change in the physical features of the labels did not reduce priming. However, in a second experiment, priming on both auditory and visual versions of the PJT was reduced by changes in modality, as expected. Additionally, responses given to awareness questionnaires suggested that performance was not affected by explicit contamination. Surprisingly, though, a change in modality impaired performance on an auditory, but not visual, recognition task. Results and implications are discussed in relation to current theories of implicit memory and models of consumer choice.
 

The transfer effect in artificial grammar learning: A functional dissociation between two modes of classification.

Richard J. Tunney and Gerry T. M. Altmann

    University of York

Participants can transfer grammatical knowledge acquired implicitly in one vocabulary to sequences instantiated in another vocabulary (they perform above chance when classifying these new sequences as grammatical or not). We present two experiments that find differences in sensitivity to sequential dependencies between repeating vocabulary elements (repetition structure) and sensitivity to dependencies between non-repeating elements (bigram information), in both the same vocabulary as training and in a novel one. Varying the strength of the dependencies between repeating and non-repeating elements provides a functional dissociation between these two modes of classification. We found in one study that participants were equally sensitive to the patterns of repeating elements in both the same vocabulary as learning and in a different one. When the dependencies between repeating vocabulary elements were reduced (the two elements did not co-occur in every exemplar sequence) sensitivity to them at test was reduced by a comparable amount in each vocabulary. We found no evidence that participants were sensitive to the dependencies between non-repeating elements. In a separate study we did find sensitivity to dependencies between non-repeating elements in both vocabularies. However, when these dependencies were again reduced (one element did not always predict the other), participants remained as sensitive to them in the original vocabulary but were unable to transfer that knowledge to the novel vocabulary. These data have implications for models of implicit grammar learning which assume the representation of individual bigrams.
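
The contrast between the two information sources is easy to state formally: repetition structure abstracts away from the particular symbols and so can transfer to a new vocabulary, whereas bigram information is defined over the symbols themselves and cannot. A minimal editor's sketch (the strings are invented):

```python
# Repetition structure transfers across vocabularies; bigrams do not.
def repetition_structure(s):
    """Map each symbol to the index of its first occurrence: 'XMMX' -> (0, 1, 1, 0)."""
    first = {}
    return tuple(first.setdefault(ch, len(first)) for ch in s)

def bigrams(s):
    return set(zip(s, s[1:]))

trained, novel_vocabulary = "XMMX", "PDDP"
print(repetition_structure(trained) == repetition_structure(novel_vocabulary))  # True
print(bigrams(trained) & bigrams(novel_vocabulary))                             # set(): no overlap
```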
 

Long-term priming as a by-product of perceptual learning.

Jeffrey S. Bowers (Introduced by Dr. Iain D. Gilchrist)

    University of Bristol

A series of experiments are described that attempt to better characterize the representations and processes that support long-term priming for written words. Priming for low-frequency words is shown to reflect a change in sensitivity in perception rather than pure bias, as a number of authors have maintained (e.g., Ratcliff and McKoon, 1997). Furthermore, both orthographic and phonological codes are shown to contribute to this priming. Based on these findings, it is argued that word priming is best understood as an incidental by-product of learning within orthographic and phonological systems. It is concluded that priming may provide constraints regarding how these perceptual systems learn.

    Ratcliff, R., & McKoon, G. (1997). A counter model for implicit priming in perceptual word identification. Psychological Review, 104, 319-343.
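
The sensitivity-versus-bias contrast at issue is usually cast in signal detection terms: a pure bias account shifts the response criterion c while leaving d' unchanged. The snippet below computes both measures from hit and false-alarm rates; it is a generic illustration with invented numbers, not Bowers's analysis or the Ratcliff and McKoon counter model.

```python
# Generic signal detection measures; the rates below are invented.
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse cumulative normal (probit)

def d_prime_and_criterion(hit_rate, fa_rate):
    """d' indexes sensitivity; c indexes response bias."""
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# A pure bias shift raises hits and false alarms together, leaving d' unchanged:
print(d_prime_and_criterion(0.69, 0.31))  # baseline: d' ~ 0.99, c = 0
print(d_prime_and_criterion(0.84, 0.50))  # more liberal: d' ~ 0.99, c < 0
```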
 

Involuntary "perceptual" priming from generating at study, as revealed by conjoint dissociations between incidental and intentional memory tests.

Alan Richardson-Klavehn1, A. J. Benjamin Clarke1 and John M. Gardiner2 (Introduced by Professor D. C. Berry)

    1. University of Westminster, London
    2. City University, London

Incidental perceptual memory tests with visual test cues reveal repetition priming for words that are generated orally from a semantic cue at study and not visually presented. This priming could reflect (a) contamination by voluntary retrieval strategies, (b) sensitivity of involuntary retrieval to conceptual processes at study, or (c) involvement of modality-independent lexical processes in involuntary retrieval. We tested these hypotheses using a generate study condition, two read study conditions that differed in depth of processing (read-phonemic vs. read-semantic), and incidental and intentional word-stem completion tests. The results are the first conjointly showing a crossed double dissociation, a single dissociation, a parallel effect, and a reversed association across memory tests with identical physical retrieval cues. They show that "perceptual" priming from generating can be involuntary and suggest that modality-independent lexical processes are responsible.
 

Motor inhibition in reaction time tasks.

Angus Gellatly, Matthew Johnson, Clare Fox and Geoff Cole

    University of Keele

An important requirement when an observer must make a speeded discriminative response to an abruptly presented signal is that response initiation should be withheld until the correct response has been selected. The process by which response withholding is achieved may be termed "motor inhibition". We will begin by describing performance on a simple detection task and by suggesting that it provides evidence for the influence of motor inhibition. Specifically, the task condition that yields higher target detection rates is also associated with slower response times, and there is good evidence that this is not the result of a speed/error trade-off. We will then describe a series of experiments designed to test the motor inhibition interpretation of this pattern of data. Some methodological issues in the study of motor inhibition will be illustrated, and implications of these results for the design of reaction time experiments will be discussed.
 

Syntactic co-ordination in dialogue.

Holly P. Branigan, Martin J. Pickering and Alexandra A. Cleland

    University of Glasgow

Studies of dialogue have demonstrated that speakers start to speak in similar ways with respect to semantic and lexical structure (e.g. Brennan & Clark, 1996; Garrod & Anderson, 1987). Such 'co-ordination' is argued to have functional benefits: it reduces the speaker's computational load on the one hand and reduces the danger of the listener misunderstanding the speaker on the other. We present three experiments that used a novel 'confederate-priming' technique to investigate whether speakers also co-ordinate the syntactic structure of their utterances in dialogue. Pairs of speakers took it in turns to describe pictures to each other, ostensibly as part of a picture-matching experiment. One speaker was a confederate of the experimenter and produced scripted descriptions that systematically varied in syntactic structure. We examined whether the syntax of the confederate's prime description affected the syntax of a subject's immediately subsequent target description under various conditions. Our results demonstrate syntactic co-ordination in dialogue: subjects tended to produce descriptions that had the same syntax as the confederate's immediately preceding description. This occurred even when the descriptions involved unrelated actions and when the subject could overhear but did not directly address the confederate. We discuss the implications of our results for current models of language production.

A hierarchical model of syntactic activation during language production: Evidence from syntactic priming.

Martin J. Pickering, Holly P. Branigan and Andrew J. Stewart

    University of Glasgow

Language production requires the linearisation of an inherently non-linear message (e.g., Bock, 1991; Vigliocco & Nicol, 1998). This entails ensuring that each sentential element appears in an appropriate linear position. We propose a hierarchical account of syntactic processing in which the processor activates relevant syntactic information and deactivates irrelevant information. Specifically, we suggest that syntactic information associated with main clauses may be implicated in subsequent processing and therefore remains active, whereas information associated with embedded clauses will not be relevant and hence is deactivated following processing. This account contrasts with a simple linear account, in which activation is dependent on non-linguistic factors (e.g. time). We test this model in four syntactic priming experiments. Under our hierarchical account, only main clause structure will retain residual activation after processing and hence prime subsequent structure; embedded clause structure is inhibited following processing and hence will not prime. As predicted, the syntax of participants' completions for fragments like (1b) was primed following main clause primes like (1a), but not following embedded clause primes like (2a). Other experiments excluded explanations based on differences in length (2b) or number of verbs and discourse entities (2c) between prime and target. Our results therefore support the hierarchical model of syntactic activation.

    1a. The racing driver shows the torn overall/helpful mechanic...
    1b. The patient shows...

    2a. Anne claimed that the racing driver showed the torn overall/helpful mechanic...
    2b. On Friday, the racing driver showed the torn overall/helpful mechanic...
    2c. As Anne claimed, the racing driver showed the torn overall/helpful mechanic...
 

Monitoring the inner speech code.

Jane L. Morgan and Linda R. Wheeldon

    University of Birmingham

There is much evidence to suggest that it is possible to monitor our speech production for errors at a prearticulatory level (Levelt, 1983). However, the nature of the level of representation being monitored is still unclear. Borrowing the monitoring paradigm from the field of speech perception, Wheeldon and Levelt (1995) required Dutch subjects to monitor internally produced words for either syllable or phoneme segments. It was found that subjects were basing their responses on a syllabified, abstract phonological representation, potentially the output of phonological encoding. The series of experiments reported here aimed to replicate and extend this finding in the monitoring of internally generated English words. In contrast to the Dutch study, no evidence of any reaction time advantage to syllable over nonsyllable segments was found. However, a consistent finding was obtained when subjects were required to monitor their internal speech for the constituent consonants of bisyllabic words. A clear left-to-right effect emerged and a significant difference in reaction times to the two phonemes which flanked the syllable boundary was observed. The effect was not replicated, however, using a perception version of the task. Implications of the results in terms of current speech production and perception models are discussed.

    Levelt, W. J. M. (1983). Monitoring and self repair in speech. Cognition, 14, 41-104.
    Wheeldon, L. R., & Levelt, W. J. M. (1995). Monitoring the time course of phonological encoding. Journal of Memory and Language, 34, 311-334.
 

What happens when case mixing disrupts functional spelling units?

Jelena Havelka and Clive Frankish

    University of Bristol

Two experiments examined the effect of case mixing on visual word recognition. In the first experiment case mixing affected response times more when it led to visual disruption of the functional spelling units as well as the overall word shape (e.g. pLeAd) compared with the situation when it disrupted overall word shape only (e.g. plEAd). The second experiment replicated this finding with words in which the functional spelling unit corresponded to the vowel sound (e.g. bOaST vs. bOAst) and words where it corresponded to the consonant sound (e.g. sNaCK vs. SNAcK). These results are discussed with regard to the relative importance of the overall word shape and sublexical units in visual word recognition.
 

Which units of sound-to-spelling correspondence are important when skilled adults spell novel words? Effects of lexical priming and frequency of occurrence.

Kate Nation

    University of York

Lexical priming of nonword spelling was investigated by asking adults to spell nonwords in a neutral baseline condition, or following phonologically-related auditory primes. Primes contained sound-spelling patterns that were either frequent (as in kite) or less frequent (as in fright). Strong priming effects were observed, even for less frequent spelling patterns, suggesting that nonwords are not spelled according to a fixed set of canonical phoneme-grapheme correspondences. To test the hypothesis that consistency between sound and spelling at the level of the rime unit is an especially salient source of information when spelling, lexical priming based on shared rime units (e.g., kite-/dait/) was compared with priming based on shared initial consonant plus vowels (e.g., kite-/kaib/) and shared vowels (e.g., kite-/maip/). In contrast to the results of previous work concerned with visual word recognition, shared rime units were no more important than other units of sound-spelling correspondence. Thus, when spelling, people are equally sensitive to regularities and consistencies that exist at the beginnings and middles, as well as the endings, of words.
 

Word superiority effects with visually-filtered strings: Evidence for coarse visual cues in word recognition.

Tim Jordan and Ken Scott-Brown

    University of Nottingham

Despite over a century of research highlighting the distinct possibility that words are naturally recognized using coarse scale information, current conceptualizations of visual word recognition place an overarching emphasis on the role of fine detail. Here we present a series of experiments in which words and nonwords were filtered to disrupt their internal detail (e.g., letter strokes, intraletter and interletter spaces) but leave their external horizontal and vertical parameters intact. Using briefly presented displays and the 2AFC Reicher-Wheeler testing procedure, strong word-superiority effects were obtained which were of about the same magnitude as those obtained with normal unfiltered displays. The findings suggest that while the detail of words may contribute to word recognition, word recognition may also rely heavily on coarse visual cues which, although coarse, are sufficiently refined to enable accurate word recognition under even the stringent testing conditions of the Reicher-Wheeler task.
 

Part set cuing effects in coherent contexts.

C. O. Fritz1 and P. E. Morris2

    1. Bolton Institute
    2. Lancaster University

The part set cuing effect, the failure of cues from a target set to facilitate remembering, remains an enigma more than 30 years after it was first demonstrated by John Brown (1968). The present research sought to investigate the limits of the effect by using sets presented not as a list, but as a coherent body of information. In one experiment participants read information presented as expository text passages and were later tested with or without half of the information being re-presented during the recall phase. In another experiment, a line drawing of a sitting room containing drawings of items that might reasonably be found there was presented to participants, who were instructed to remember the target items. Participants' recall was cued by the line drawing of the room and background furnishings; for half of the participants the cue included half of the target items. A part set cuing effect was observed in both experiments. Thus, the effect is not limited to list-type items but is observed with coherently constructed meaningful materials of the sort one might find in a textbook or a visual scene. This finding raises implications for pedagogy and for eyewitness questioning.

    Brown, J. (1968). Reciprocal facilitation and impairment of free recall. Psychonomic Science, 10, 41-42.
 

Time estimation from childhood to old age.

Teresa McCormack, Gordon D. A. Brown, Elizabeth A. Maylor and Richard J. Darby

    University of Warwick

Participants aged from 5 to 99 years completed two time estimation tasks: a temporal generalization and a temporal bisection task. Developmental differences in overall levels of performance were found at both ends of the lifespan, and were more marked on the generalization task than the bisection task. Older adults and children performed at lower levels than young adults, but there were also qualitative differences in the patterns of errors made by the older adults and the children. The findings can be accounted for by assuming developmental changes across the lifespan in the level of noise in temporal encoding and, in addition, developmental changes across childhood in the extent to which memory representations of time intervals become distorted. Young children's representations of time appear to be distorted such that durations are remembered as shorter than the durations actually presented. To examine this further, the extent to which children's memory representations of time intervals decay with delay was investigated using the subjective shortening technique developed by Wearden and Ferrara (1993).

    Wearden, J. H., & Ferrara, A. (1993). Subjective shortening in humans' memory for stimulus duration. Quarterly Journal of Experimental Psychology, 46B, 163-186.
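
A toy rendering of this account (an editor's sketch, with invented parameter values): encode the remembered standard with scalar noise, and let a factor k < 1 stand for children's "shortened" memory representations. Raising the noise flattens the generalization gradient, and k < 1 shifts its peak below the true standard, which is the qualitative pattern described above.

```python
# Toy temporal generalization under noisy, possibly shortened memory for the standard.
import numpy as np

rng = np.random.default_rng(1)

def p_yes(probe_ms, standard_ms=400, noise_cv=0.15, k=1.0, trials=10_000):
    """Probability of judging the probe equal to the remembered standard."""
    remembered = k * standard_ms * (1 + noise_cv * rng.standard_normal(trials))
    return np.mean(np.abs(probe_ms - remembered) < 0.12 * standard_ms)

for group, noise_cv, k in [("young adult", 0.10, 1.00),
                           ("child", 0.20, 0.85)]:   # k < 1: durations remembered as shorter
    gradient = {p: round(p_yes(p, noise_cv=noise_cv, k=k), 2)
                for p in (300, 350, 400, 450, 500)}
    print(group, gradient)
```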
 

Naso-temporal asymmetries in saccade latency in normal and hemianopic subjects.

R. Walker1, D. Maurer2, S. Mannan3, A. Pambakian3 and C. Kennard3

    1. Royal Holloway, University of London
    2. McMaster University, Hamilton, Ontario
    3. Imperial College School of Medicine, London

Visual distractors have been used as an implicit test of residual visual functioning (or 'blindsight') in patients with cortical blindness. We examined the influence of visual distractors on saccade latency in hemianopic and normal subjects. Eye movements were made to a target presented in one hemifield, under monocular viewing conditions, and on some trials a distractor appeared in the opposite hemifield. For the hemianopic subjects distractors always appeared in the functionally blind hemifield. The hemianopic subjects were not aware of the distractors. Visual distractors in the hemianopes' blind field did not influence saccade latency. By contrast, the latency of saccades made by normal subjects was increased under similar distractor conditions. Furthermore, a small naso-temporal asymmetry was observed, with a greater interference effect for temporal field distractors. These results are inconsistent with the view that the small crossed projection from the nasal hemiretina to the midbrain may mediate an inhibitory effect in the absence of the geniculostriate pathway.
 

Eye movements during reading, scanning, and visual search.

Keith Rayner

    University of Massachusetts

In this talk, I will contrast eye movement characteristics in a number of different tasks including reading, scanning and visual search. Across these tasks there are obvious similarities, but also some interesting differences. These differences are related to what observers are required to do in each task and perhaps suggest ways in which eye movement control varies in different tasks. The latter issue, eye movement control in the different tasks, will be discussed in the context of the E-Z Reader model of eye movement control in reading.
 

The Impact of "Non-Vision" on Vision.

A. Pollatsek

    University of Massachusetts (visiting MRC Cognition and Brain Sciences Unit, Cambridge)

The "active vision" task that is best understood is reading. This involves continuous saccadic movements of the eyes 4-5 times a second. The uptake of information, however, affects the movement of the eyes on a moment-to-moment basis. The low- level visual information, such as the spaces between words have a clear influence on the pattern of eye movements. However, non-visual factors play a key role as well. For example, the frequency and predictability of a word in text affect both the time spent fixating on a word (given that it is fixated) and the probability that the word is fixated in the first place. However, the information on the printed page is rapidly recoded into non-visual forms. For example, phonological recoding of a word is often taking place even before the word is fixated and there is little indication that the visual featural information from words survives a saccadic eye movement. Work on pictures and scenes has also indicated a surprising lack of memory of visual information. Yet there is some work indicating that certain types of visual information may be automatically encoded and utilized to integrate information across fixations.
 

The roles of eye movements in activities of everyday life.

Michael Land, Neil Mennie and Jenny Rusted

    University of Sussex

Many everyday activities (cooking, carpentry) consist of a loose sequence of acts involving different objects (pick up kettle, locate sink, turn on tap, etc.). During each act it is important to look in the right place to pick up the information needed to guide action, which means that the oculomotor system must work closely with the system that controls the limbs. We have examined tea making, a task which involves about 45 different acts. We used a video-based eye monitor that provides a view of the scene ahead with a spot on it showing the foveal line of sight, and a second video that showed the actions of the subject. The principal results were: 1. The individual acts are organised around particular objects. These are fixated about 0.5 s before there is any sign of motor manipulation. A typical 'object related act' lasts about 3 s and involves 6 or 7 fixations on and around the object. 2. Gaze usually moves to the next object in the sequence about 0.5 s before manipulation of the preceding one is complete, implying that visual information is stored in a buffer. 3. About half the fixations 'make obvious sense' in that they provide specific information about the identity, location or state of some component of the act. 4. Eye movement sequences are directed by instructions from the internal 'script', by what is directly visible in the scene, and by the remembered locations of non-visible objects.
 

Keeping up with your head: active vision without eye movements.

Iain D. Gilchrist1, Valerie Brown2 and John M. Findlay2

    1. University of Bristol
    2. University of Durham

We present data from a single case study, AI, who as a result of extraocular muscle fibrosis has no eye movements. AI uses movements of her head for active looking. Although considerably slower, these head movements are remarkably similar to saccadic eye movements (Gilchrist, Brown, Findlay & Clarke, 1998). In addition they are very successful at supporting everyday visual tasks, including reading (Gilchrist, Brown & Findlay, 1997). Here we report data from two new studies. We show a systematic main sequence for AI's head movements, and we report data on AI's ability to produce smooth pursuit head movements to track a moving visual target. AI showed a systematic main sequence and could successfully use her head for tracking. Although both studies highlight minor limitations of using the head for active looking, the remarkable feature of AI's vision is the minimal visual impairment that results from her condition.

    Gilchrist, I. D., Brown, V., Findlay, J. M., & Clarke, M. P. (1998). Using the eye movement system to control the head. Proceedings of the Royal Society: Biological Sciences. 265, 1831-1836.
    Gilchrist, I. D., Brown, V., & Findlay, J. M. (1997). Saccades without eye movements. Nature, 390, 130-131.
 

Incremental interpretation at verbs: Who needs nouns when verbs will do just as well?

Gerry T. M. Altmann and Y. Kamide

    University of York

In this paper we demonstrate that information extracted at a verb can function in much the same way as the information extracted at, for example, adjectives like 'plain' or 'red', or nouns such as 'cake' - that is, we show that information extracted at verbs during auditory sentence processing can on occasion serve to identify directly the (real or mental world) entities that play a role in the event (or state) being described. Participants' eye movements were recorded as they inspected a semi-realistic visual scene showing a boy, a cake, and various distractor objects. Whilst viewing this scene, they heard sentences such as 'the boy will move the cake' or 'the boy will eat the cake'. The cake was the only edible object portrayed in the scene. The onset of saccadic eye movements to the target object (the cake) was significantly later in the 'move' condition than in the 'eat' condition; they occurred after the onset of the spoken word 'cake' in the 'move' condition, but before its onset in the 'eat' condition. The results suggest that information at the verb can be used to restrict the domain within the context to which subsequent reference will be made by the yet-to-be-encountered post-verbal grammatical object. The data support a hypothesis in which sentence processing is driven by the predictive relationships between verbs, their syntactic arguments, and the real-world contexts in which they occur.
 

Combination and alternation in Polish: Cross-linguistic studies of the mental lexicon.

Agnieszka A. Reid and William D. Marslen-Wilson

    MRC Cognition and Brain Sciences Unit, Cambridge

The study of the representation of morphologically complex words and of regular and irregular alternations in the verbal system has relied very much on English. Here we report some results from Polish, which has a much richer derivational morphology and a much more widespread system of morphophonological alternations. Cross-linguistic evidence of this type is essential to deconfound language-specific and language-universal characteristics of the mental lexicon. Experiment 1, using cross-modal priming, investigated the representation of stems and affixes. The affixes involved mostly do not occur in English, and range from the aspectual, through derivational and aspectual-derivational, to diminutive affixes. The data support a combinatorial model of the mental lexicon with stems and affixes stored separately. Experiments 2 and 3 probed the representation of verbs with regular and irregular alternations, using both the cross-modal technique and an auditory-auditory delayed repetition task designed to dissociate semantic effects from morphological ones. The results indicate that both regular and irregular alternants map onto the same underlying morpheme. Overall, the results are consistent with existing models derived from English. Potential areas of divergence between the languages are being studied in a new series of experiments.
 

It's written all over your face: A principal component analysis of facial expressions.

Andrew J. Calder1, Mike Burton2, Paul Miller2 and Shigeru Akamatsu3

    1. MRC Cognition and Brain Sciences Unit, Cambridge
    2. University of Glasgow
    3. ATR Human Information Processing Research Laboratories, Kyoto, Japan

Recent studies have suggested that principal components analysis (PCA) of the visual information in faces is an effective analogue of 'front-end' processes underlying face recognition (Burton et al., 1999; Craw & Cameron, 1992). In this study we investigated whether PCA can also code visual information relevant to facial expression recognition. Our stimulus set comprised Ekman and Friesen's (1976) Pictures of Facial Affect; this set contains examples of seven facial expressions (happiness, sadness, anger, fear, disgust, surprise and neutral) posed by a number of different models. Following Craw and Cameron (1992), two different methods of standardising the images prior to the PCA were compared. For the first method, the images were standardised for their eye position. For the second, they were 'morphed' to an average face shape. The results of the PCA were submitted to a series of linear discriminant analyses, which revealed three principal findings: (1) that PCA can capture effectively the visual information relevant to different facial expression categories (with average morphed images producing the best performance), (2) that the first two canonical discriminant functions for facial expression bear a close resemblance to continuous two-dimensional models of emotion concepts (e.g., Russell, 1980), and (3) that the PCs critical for facial expressions are largely different to those critical for facial identity. In line with the third result, we were able to show that our PCA system can account for the double dissociation between the recognition of facial identity and facial expression reported in the neuropsychological literature. These results suggest the interesting possibility that, contrary to current thinking, the perceptual representation of facial identity and facial expression may be coded within the same system.

    Burton, A. M., Bruce, V., & Hancock, P. J. B., (1999). From pixels to people: a model of familiar face recognition. Cognitive Science, 23, 1-31.
    Craw, I., & Cameron. (1992). Face recognition by computer. In D. Hogg & R. Boyle (Eds.), British Machine Vision Conference 1992. London: Springer-Verlag.
    Ekman, P., & Friesen, W. V., (1976). Pictures of facial affect. Palo Alto, California: Consulting Psychologists Press.
    Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.
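
The analysis pipeline described (standardised images, PCA as the 'front-end' coding, then linear discriminant analysis over the resulting components) has the following shape in code. This is a schematic sketch in which random arrays stand in for the standardised face images; the image count, pixel count and component number are all invented.

```python
# Schematic PCA -> LDA pipeline on synthetic stand-ins for standardised face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, n_pixels, n_expressions = 120, 400, 6
labels = rng.integers(0, n_expressions, n_images)

# class-dependent prototype plus noise, standing in for aligned expression images
prototypes = rng.normal(size=(n_expressions, n_pixels))
images = prototypes[labels] + rng.normal(scale=2.0, size=(n_images, n_pixels))

components = PCA(n_components=30).fit_transform(images)   # 'front-end' PCA coding
accuracy = cross_val_score(LinearDiscriminantAnalysis(), components, labels, cv=5).mean()
print(f"expression classification accuracy from PCs: {accuracy:.2f}")
```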
 

The Hippocampus and memory: A convergent operations approach.

Jonathan K. Foster

    University of Manchester

I will here report a series of findings indicating that the hippocampus may be specifically involved in mediating delayed recall memory. Rather than focus on one particular line of research enquiry to the exclusion of all others, we have adopted a convergent operations perspective on hippocampal memory functioning. First, work using Magnetic Resonance Imaging in patients with Alzheimer's disease, age-matched controls and healthy young participants has shown that the hippocampus is associated with the reproduction of verbal and non-verbal materials, although the nature of this relationship appears to differ between patient and control groups (Foster et al., 1997a; Foster et al., in press; Koehler et al., 1998). Second, work conducted with the naturally-occurring agent glucose indicates that this substance (which may exert its effect on memory at least partially through the hippocampus) preferentially affects recall (Foster et al., 1998). Third, previous work on rats using a GO/NO-GO task showed that hippocampal animals were more disinhibited than control animals, but not memory impaired (Foster & Rawlins, 1992a; 1992b). Finally, computational simulations of the hippocampus indicate that the role of the hippocampus in recall may not be as predicted by the Treves-Rolls model of hippocampal memory functioning (Foster et al., 1997b).
 

Chunking by spatial proximity and search efficiency in rats.

Carlo De Lillo (Introduced by Dr. B. O. McGonigle)

    University of Leicester

The effect of the relative spatial proximity of a set of nine baited poles on rats' search behaviour was investigated using a within-subject design. Ten male hooded rats were tested for a total of 180 trials divided into three conditions of 60 trials each. In the baseline condition the poles were arranged as a 3 × 3 square matrix, with a distance of 40 cm between adjacent poles in any row or column. The rats were then tested in an experimental condition where the nine poles were arranged into 3 spatial clusters, where the distance between adjacent poles within a cluster was 30 cm, whereas the minimal distance between poles in different clusters was 60 cm. This was the critical condition for assessing whether rats spontaneously chunk spatially grouped locations to organise a hierarchical search pattern and whether their search performance benefits from such organisation. Finally, a control condition was presented in which the spatial arrangement of the poles used for the baseline condition was re-established, to ensure that any effects observed in the experimental condition were not due to mere task practice. Rats' search behaviour was analysed both in terms of search length (total number of moves employed to perform an exhaustive search of the set of poles, where fewer moves are indicative of better performance) and in terms of search trajectories (relative frequency of transitions from one particular pole to another). The total number of moves per search decreased dramatically in the experimental condition compared to both the baseline and control conditions. The analysis of search trajectories revealed that this improvement was accompanied by a strong tendency to exhaustively explore each of the clusters before moving on to the next. In terms of overall performance this pattern of results parallels that observed in capuchin monkeys tested in similar conditions (De Lillo, Visalberghi and Aversano, 1997). However, a comparative analysis of search trajectories suggests that rats and monkeys might differ in the way they hierarchically organise a series of moves on the basis of the spatial structure of the search space.

    De Lillo, C., Visalberghi, E., and Aversano, M. (1997). The organisation of exhaustive searches in a patchy space by capuchin monkeys (Cebus apella). Journal of Comparative Psychology, 111, 82-90.
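
Both dependent measures are simple functions of the recorded visit sequence; the editor's sketch below computes search length and the transition counts from which within-cluster tendencies can be read off. The pole numbering and cluster assignment are invented for illustration.

```python
# Search length and pole-to-pole transition counts from a visit sequence.
from collections import Counter

CLUSTER = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B", 7: "C", 8: "C", 9: "C"}

def search_measures(visits):
    """Moves needed to visit all nine poles, plus counts of each transition."""
    seen, moves = {visits[0]}, 0
    for pole in visits[1:]:
        moves += 1
        seen.add(pole)
        if len(seen) == 9:          # exhaustive search completed
            break
    return moves, Counter(zip(visits, visits[1:]))

# a perfectly "chunked" search: each cluster exhausted before moving on
moves, transitions = search_measures([1, 2, 3, 4, 5, 6, 7, 8, 9])
within = sum(n for (a, b), n in transitions.items() if CLUSTER[a] == CLUSTER[b])
print("search length:", moves, "| within-cluster transitions:", within)
```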
 

Was it the chicken or the rice that made him ill? Within-event associations and causal judgements.

Robert Boakes

    University of Sydney

Causal beliefs can involve deciding which of two or more potential causes produced a particular outcome. Beliefs about one factor, A, may later change in the light of further information about another factor, B. An analysis of such 'retrospective revaluation' is provided by Dickinson and Burke (1996) on the basis of associations between A and B ('within-event associations'). The present study examined whether their account provides an explanation of when retrospective effects occur and when they do not. The experimental task required subjects to act as an allergist trying to determine which foods cause an allergic reaction in a client. Over a series of trials subjects are informed that a meal (event) consisting of either one or two foods either did or did not produce an allergic reaction (outcome). The design allowed assessment in each subject of both forward (e.g. blocking: A+ -> AB+; Test B) and retrospective (e.g. EF+ -> E+; Test F) effects. A manipulation designed to increase within-event associations had little effect on event-outcome learning or on forward effects. However, consistent with Dickinson & Burke (1996), it increased the likelihood of obtaining retrospective revaluation.
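
For readers unfamiliar with the design notation, the sketch below runs the forward arm (blocking: A+ then AB+; test B) through the standard Rescorla-Wagner rule, which predicts forward blocking but, lacking within-event associations, cannot produce retrospective revaluation; that gap is precisely what Dickinson and Burke's (1996) account addresses. This is a generic illustration with invented parameters, not a simulation of the allergist task.

```python
# Forward blocking under the plain Rescorla-Wagner rule; illustrative only.
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: sequence of (cues, outcome) pairs; returns associative strengths V."""
    V = {}
    for cues, outcome in trials:
        v_total = sum(V.get(c, 0.0) for c in cues)
        error = (lam if outcome else 0.0) - v_total   # shared prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V

stage1 = [(("A",), True)] * 20          # A+  : A alone predicts the reaction
stage2 = [(("A", "B"), True)] * 20      # AB+ : B added, outcome unchanged
V = rescorla_wagner(stage1 + stage2)
print("V(B) after blocking:", round(V["B"], 3))  # stays near zero: B is blocked
```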
 

The perception of benefit: Does it modulate the perception of risk?

A. J. Lloyd (Introduced by Dr. K. Nation)

    Oxford Brookes University

Background: The perception of risk has been studied extensively both in psychology and economics. Much less is known about how people understand benefit and whether the two interact. Work by Alhakami & Slovic (1994) concludes that there is an inverse relationship between perceived risk and perceived benefit. Patients undergoing an operation to prevent stroke were surveyed regarding their understanding of the risks and benefits associated with surgery.
Methods: Patients (n=74) on the waiting list were sent questionnaires. Patients were asked to estimate their risk of stroke with and without surgery and to indicate possible health benefits of surgery. Patients were asked to complete the questionnaire again when they were admitted to hospital.
Results: Most patients falsely believed that, in addition to preventing stroke, the operation would confer significant benefits for their health (e.g. relieving angina, improving memory, easing difficulty in walking). Many patients also perceived the operation as carrying more risk than it actually did. Patients who perceived that the operation carried the most risk also believed that the operation conferred the most benefit.
Conclusion: Patients failed to understand the risks and benefits of this operation. The evidence suggests that some patients offset the very high perceived risk by believing that the operation would also confer additional benefits for their health. These data indicate a positive relationship between the perception of risk and benefit.
 

Length and size perception in unilateral neglect: Compression or magnification?

Lilach Shalev1 and Glyn W. Humphreys2

    1. The Open University of Israel
    2. University of Birmingham

Spatial neglect has been explained as an impairment in the representation of extrapersonal space. Milner and Harvey (1995) have proposed that the representation of the affected side is compressed but not completely lost. Thus, in size judgement tasks, neglect patients consistently underestimate the size of contralesional relative to ipsilesional stimuli. The present study tested this account and demonstrated that when the to-be-judged stimuli are small (up to 1 deg of visual angle) our neglect patients consistently overestimate the size of contralesional stimuli. However, when the size of the stimuli is large (3 to 4 deg of visual angle, similar to Milner and Harvey's stimuli) they underestimate the size of contralesional stimuli. Therefore, the account of compression of representations cannot provide a full explanation for the results of the present study. We discuss a possible explanation for the opposite trends we obtained in the present study and the implications for some of the competing theoretical accounts of unilateral visual neglect.

    Milner, A. D., & Harvey, M. (1995). Distortion of size perception in visuospatial neglect. Current Biology, 5, 85-89.
 

Similarity and categorisation: neuropsychological evidence for a dissociation in explicit categorisation tasks.

Jules Davidoff and Debi Roberson

    Goldsmiths College, University of London

A series of experiments is reported on a patient (LEW) with naming difficulties. Initial findings indicated severe impairments in his ability to freesort colours and facial expressions. However, LEW's performance on other tasks revealed that he was able to show implicit understanding of some of the classic hallmarks of categorical perception; for example, in experiments requiring the choice of an odd-one-out, he chose alternatives dictated by category rather than by perceptual distance. Thus, underlying categories appeared normal and category boundaries appeared intact. Furthermore, in a two-alternative forced-choice recognition memory task, performance was worse for within-category than for cross-category decisions. In a replication of the study of Kay and Kempton (1984), LEW showed that his similarity judgements for colours could be based on perceptual or categorical similarity according to task demands. The consequences for issues concerning perceptual categories and the relationship between perceptual similarity and explicit categorisation are considered; we argue for a dissociation between these kinds of judgement in the freesort tasks. LEW's inability to make explicit use of his intact (implicit) knowledge is seen as related to his language impairment.

    Roberson, D., Davidoff, J., & Braisby, N. (in press). Similarity and categorisation: Neuropsychological evidence for a dissociation in explicit categorisation tasks. Cognition.
 

Fear recognition after brain injury.

Andy Young1, Jill Keane2, Alan Parkin3, Barbara Wilson2 and Andy Calder2

    1. University of York
    2. MRC Cognition and Brain Sciences Unit, Cambridge
    3. University of Sussex

Recent studies have shown that some people who have suffered brain injury become poor at recognising facial expressions of fear. For these individuals, fear is not usually the only emotion affected, and some degree of poor recognition of anger has also been fairly consistently noted, but the impairment of fear recognition can be differentially severe. We sought to establish whether people who are poor at recognising facial expressions of fear would also be poor at recognising signals of emotion, and fear in particular, expressed in other ways. We therefore investigated recognition of signals of emotion by three individuals who met the behavioural criterion of showing differentially severe impairment for fear on a test of recognition of facial expressions. These people were tested for their ability to recognise different basic emotions from a second facial expression test, and from body posture, tone of voice, non-verbal sounds (laughter, screaming, etc.), and emotion expressed in music. Results showed that problems in recognising emotion were not restricted to facial expressions, and that difficulties with fear can arise in any of these domains. Such findings present an interesting contrast to the pattern observed in prosopagnosia (the inability to recognise faces after brain injury). In prosopagnosia the recognition of nearly all faces is affected, whereas for impairments of emotion recognition some emotions can be more severely affected than others. In prosopagnosia recognition from other domains (e.g. the voice) can still be achieved, whereas for impairments of emotion recognition all domains may be compromised.
 

Dual task interference on visual marking: Modality-independent and modality-dependent components in the set-up of the marking state.

Glyn W. Humphreys1 and Derrick Watson2

    1. University of Birmingham
    2. University of Warwick

Efficient selection of new visual objects depends not only on a passive prioritisation process for the new items but also on top-down inhibition of old objects: a process we have termed visual marking (Watson & Humphreys, 1997). Marking can be demonstrated in visual search tasks in which potentially harmful 'old' distractors can be shown to have little effect on search through large sets of new objects, when the old and new items are temporally separated. However, there is a systematic decrease in the ability to ignore the old items if participants have to perform a secondary task whilst the old items are present. In the present paper we examined the factors determining the effects of the secondary task on marking. We manipulated the modality of the secondary task (which was either visual or auditory) and its timing with respect to the old items: it began either at the onset of the old items, or after a period in which marking of the old items should have taken place. When the secondary task began concurrently with the onset of the old items, performance was disrupted irrespective of the modality of the task. When the secondary task began later, there was disruption from a visual secondary task only. The results indicate that there may be both modality-independent and modality-dependent aspects of marking. The ability to establish the marking state involves central processes utilised in processing auditory as well as visual signals; the ability to maintain the marking state relies on processes specific to vision.

    Watson, D. G., & Humphreys, G. W. (1997). Visual marking: Prioritising selection for new objects by top-down attentional inhibition. Psychological Review, 104, 90-122.
 

Tactile and cross-modal attentional blinks.

Anne P. Hillstrom1, Charles Spence2, Kimron Shapiro1 and Salvador Soto3

    1. University of Wales, Bangor
    2. University of Oxford
    3. Barcelona University, Spain

The attentional blink (AB) is a deficit in reporting the second of two targets presented in close temporal proximity. Such deficits have been shown both within and between the visual and auditory modalities, but the parameters under which they are found are currently under debate, particularly for the cross-modal AB. As a step towards establishing those parameters, we investigated the AB within the tactile modality, using vibrotactile stimuli, and between the visual and tactile modalities. A cross-modal AB was found, but an AB within the tactile modality has been more difficult to establish. This is the first reported case of within-modality judgements not producing an AB while cross-modality judgements produce one. The ramifications of both the tactile null effect and the cross-modal effect will be discussed.
 

Intramodal and crossmodal temporal processing deficits.

Charles Spence1 and Salvador Soto2

    1. Oxford University
    2. Barcelona University, Spain

We report a series of experiments in which streams of rapidly presented auditory and visual stimuli were monitored by participants who had to detect one or more target stimuli presented in either the same or different modalities. In contrast to the majority of previous studies, auditory and visual targets were presented from the same spatial location, and target modality was always uncertain. People found it very difficult to report the second of two visual targets when presented in close succession, a phenomenon known as the attentional blink (AB). A similar AB was demonstrated for pairs of auditory targets, though there was no evidence for a crossmodal AB. When the two targets were identical, participants showed repetition blindness (RB) for the second of two visually presented targets, but facilitation for the second of two auditory targets, with no evidence for a crossmodal repetition deficit. Even though there was no evidence for a crossmodal processing deficit time-locked to the onset of the first target, participants' performance was worse when monitoring both auditory and visual streams rather than focusing on a single modality, suggesting a residual divided-attention cost.