NOTTINGHAM MEETING April 6-7 2000

 

A scientific meeting will be held at the Department of Psychology, University of Nottingham, on 6-7 April 2000. The local organiser will be Professor A M Derrington.

There will be two symposia:

Thursday 2.00-5.30
    Impact of hearing impairment on language and cognition
    Organised by Professor D V M Bishop

Friday 9.00-1.00
    Vision Research 2000
    Organised by Professor A M Derrington
    Posters will also be presented, linked to this symposium.
 

Wednesday 5 April
The bar in Florence Boot Hall will be open during the evening
 

Thursday 6 April

Lecture Room 326 (First Floor)

9.00 Tim Jordan, Geoff Patching*, Sharon Thomas* and Ken Scott-Brown* (University of Nottingham)
Lateralized word recognition: Assessing the parallel-sequential distinction.

9.30 Simon P Liversedge, Valerie Brown*, Sarah White* and Iain D Gilchrist (University of Durham and University of Bristol)
Orthographic status and lexical identification.

10.00 S W Kelly*, A M Burton and B Riedel* (Institute of Cognitive Neuroscience, University College London and University of Glasgow)
Implicit sequence learning requires action.

10.30 Coffee (Psychology department foyer)
 

START OF PARALLEL SESSION
 

Session A:

Lecture Room 326 (First Floor)

11.00 Chris Donlan, Nata Goulandris* and Lorna McNab* (University College London)
Phonological and semantic systems supporting literacy and numeracy development.

11.30 M P Haggard, S C Smith*, S E Hind* and E Nicholls* (MRC Institute of Hearing Research, Nottingham)
Developmental impact of hearing impairment studied by logically prospective and controlled experimental designs

12.00 Lyn Jackson* (Cardiff University. Introduced by Professor J M Pearce)
Do deaf children understand the "language of the eyes"? A preliminary study.

12.30 Judith A Hosie* (University of Aberdeen. Introduced by Professor R. Campbell)
Emotional development in deaf children: Facial expressions, emotive stories and display rules.

1.00 Lunch
 

START OF PARALLEL SESSION
 

Session B:

Lecture Room 201 (Ground Floor)

11.00 Claire Martin* and Tim Jordan (University of Nottingham)
Effects of viewing angle on visual and audiovisual speech perception across different talkers.

11.30 Sharon Thomas* and Tim Jordan (University of Nottingham)
Read my face: contrasting contributions of face and mouth movements to visual and audiovisual speech perception.

12.00 J H Wearden (Manchester University)
Where does the scalar property of time come from?

12.30 S O'Rourke* and J H Wearden (Manchester University)
Arousal and the speed of the internal clock

1.00 Lunch
 

Session A:

Lecture Room 326 (First Floor)
 

Symposium: Impact of hearing impairment on language and cognition (Organised by Professor D V M Bishop)

2.00 J Briscoe*, D V M Bishop and C Norbury* (University of Oxford)
Mild-to-moderate sensorineural hearing loss and specific language impairment in childhood: a direct comparison of literacy and phonological skills.

2.30 Ruth Campbell, M MacSweeney*, M Brammer*, G Calvert*, P McGuire*, A S David* and B Woll* (Human Communication Science, University College London; Institute of Psychiatry, London; Oxford Centre for Functional Magnetic Resonance Imaging; and Clinical Communication Science, City University)
Does speechreading activate auditory cortex in deaf people? An fMRI study.

3.00 Jennifer Utman* (University of Oxford)
Finding meaning in a degraded signal: Spoken word recognition under acoustic distortion.

3.30 Tea (Psychology department foyer)

4.00 Margaret Harris (Royal Holloway, University of London)
Early language development in profoundly deaf children.

4.30 Adrian Davis* and Shirley Grimshaw* (MRC Institute of Hearing Research and Psychology Department, University of Nottingham)
Factors affecting performance of hearing impaired children on auditory and cross-modal Stroop tasks.

5.00 Mairead MacSweeney*, Ruth Campbell and Chris Donlan (Human Communication Science, University College London)
Development of STM coding in deaf and hearing children.

5.40 Business Meeting: members only

5.45 Wine reception in Psychology Department

7.30 Conference Dinner at Sonny's Restaurant, Carlton Street. Bus will leave Psychology department at 7.00 and Florence Boot Hall at 7.15
 

Session B:

Lecture Room 201 (Ground Floor)
 

2.00 J Richard Hanley and Jennifer M Turner* (University of Essex and University of Liverpool)
Why are familiar-only experiences more frequent for voices than for faces?

2.30 Christopher O'Donnell* and Vicki Bruce (University of Stirling)
The Batman effect: Selective enhancement of eyes following familiarisation with faces.

3.00 S Walker*, M N O Davies* and P Stacey* (The Nottingham Trent University. Introduced by Professor A.M. Burton)
Eye movement asymmetries in facial sex discrimination.

3.30 Tea (Psychology department foyer)

4.00 Helen J Cassaday and Christine Norman* (University of Nottingham)
Effects of amphetamine on trace conditioning in the rat: Attentional or motivational?

4.30 Stephen R H Langton (University of Stirling)
The mutual influence of gaze and head orientation in the analysis of social attention direction.

5.00 Jules Davidoff and Debi Roberson (Goldsmiths College, University of London)
The categorical perception of colours and facial expressions: the effect of verbal interference.

5.40 Business Meeting: members only (Lecture Room 326, First Floor)

5.45 Wine reception in Psychology Department

7.30 Conference Dinner at Sonny's Restaurant, Carlton Street. Bus will leave Psychology department at 7.00 and Florence Boot Hall at 7.15.

Friday 7 April

START OF PARALLEL SESSION

NB: Start times of talks are not synchronised in the two lecture rooms.

Session A:

Lecture Room 326 (First Floor)

Symposium: Vision Research 2000 (Organised by Professor A M Derrington)

9.00 Michael Morgan (Institute of Ophthalmology, University College, London)
Is Psychophysics a subject with a bright new future behind it?

9.45 Andrew Parker (University Laboratory of Physiology, Oxford)
The cortical representation of binocular disparities.

10.30 Coffee
        Posters in Seminar Room 210

11.00 Andrew T Smith (Royal Holloway, University of London)
Functional magnetic resonance imaging: demon or saviour?

11.45 Peter Lennie (Center for Neural Science, New York University)
Energy-efficient operation of visual cortex

12.30 Discussion

1.00 Lunch

        Posters in Seminar Room 210

END OF PARALLEL SESSION

START OF PARALLEL SESSION

NB: Start times of talks are not synchronised in the two lecture rooms.
 

Session B:

Lecture Room 201 (Ground Floor)

9.00 Ann Dowker (University of Oxford)
Marked discrepancies between abilities in normal individuals: The case of estimation and calculation.

9.30 Martin H Fischer* (University of Dundee. Introduced by Professor R. A. Kennedy)
Number processing affects spatial accuracy.

10.00 Amanda Holmes* and Anne Richards (Birkbeck College, University of London)
Processing of affectively valenced words in anxiety: An RSVP study.

10.30 Coffee

11.00 Jennifer Rodd*, M Gareth Gaskell and William D Marslen-Wilson (MRC Cognition and Brain Sciences Unit, Cambridge)
Effects of ambiguity in visual and spoken word recognition.

11.30 Michael Lewis (Cardiff University)
Age-of-acquisition effects are cumulative-frequency effects in disguise: re-opening the debate.

12.00 Annabel S C Thorn*, Susan E Gathercole and Clive R Frankish (University of Bristol)
The origins of language differences in verbal short-term memory

12.30 Kate Cain, Jane Oakhill, Marcia Barnes*, and Peter Bryant (University of Nottingham, University of Sussex, University of Toronto and The Hospital for Sick Children and University of Oxford)
Comprehension skill, inference making ability and their relation to knowledge

1.00 Lunch

END OF PARALLEL SESSION
 

2.00 Elaine Funnell and John M Wilding (Royal Holloway, University of London)
Visual perceptual deficits contributing to a case of visual object agnosia acquired in early infancy.

2.30 Jamie Ward and Alan J Parkin (University College London and late of the University of Sussex)
Recognition memory and source memory deficits following frontal lesions.

3.00 Helen Moss, Marinella Cappelletti*, Paul De Mornay Davies*, Eli Jaldow* and Mike Kopelman (University of Cambridge, St. Thomas's Hospital, London and Middlesex University)
Lost for words or loss of memories: Autobiographical memory in a semantic dementia patient.
 

End of meeting

ABSTRACTS



Lateralized word recognition: Assessing the parallel-sequential distinction.

Tim Jordan, Geoff Patching, Sharon Thomas and Ken Scott-Brown

    University of Nottingham

A popular view is that words are processed more efficiently in the right (RVF) than in the left (LVF) visual hemifield because of parallel versus sequential orthographic analyses. We investigated this view using the Reicher-Wheeler task to suppress effects of partial word information and an eye-tracker to ensure central fixations. RVF advantages for words were obtained across all serial positions, and "U-shaped" serial-position curves were obtained for both visual hemifields. Moreover, whereas words and nonwords produced similar serial-position effects in each hemifield, only RVF stimuli produced a word-nonword effect. These findings support the view that left hemisphere function underlies the RVF advantage but not that different modes of orthographic analysis are used by each hemisphere.
 

Orthographic status and lexical identification.

Simon P Liversedge1, Valerie Brown1, Sarah White1 and Iain D Gilchrist2

    1. University of Durham.
    2. University of Bristol.

We investigated the effects of target eccentricity and distractor similarity on identification of letter strings. Target letter strings were words, illegal nonwords, and legal nonwords, in Experiments One to Three respectively. Distractor letter strings were always orthographically illegal. The visual similarity of the distractor strings in relation to the target strings was manipulated (Townsend, 1971). The eccentricity of target and distractor presentation was also manipulated.
The sequence of presentation was as follows: A central fixation cross appeared (150 ms). This was replaced by a central presentation of the target letter string (150 ms) followed by a blank screen (150 ms). Both items were displayed at an equal distance from the central cross at either 4.15 or 8.3 degrees. Subjects were required to maintain central fixation during each trial and used a button box to indicate whether the target letter string appeared on the right or the left of the screen.
Visual similarity and eccentricity interacted in Experiments One and Three when the target strings were orthographically legal, but not in Experiment Two when target strings were orthographically illegal. The orthographic status of the target letter string clearly modulated the influence of visual similarity and eccentricity. We interpret our results as indicating that orthographic legality dictates the initiation of lexical identification procedures.
Two follow-up experiments were also conducted, in which orthographically illegal strings like FBI and orthographically legal strings like NATO were used. Such strings differ from those used in Experiments Two and Three in that they have semantic meaning. The results of these experiments will be discussed in relation to those of Experiments One to Three.

    Townsend, J.T. (1971). Perception and Psychophysics, 9, 40-50.
 

Implicit sequence learning requires action.

S W Kelly1, A M Burton2 and B Riedel2

    1. Institute of Cognitive Neuroscience, University College London
    2. University of Glasgow

The serial reaction time (SRT) task is widely used in studies of implicit learning. In responding to a repeating sequence of stimuli on a screen, knowledge about the sequence structure is acquired even when the viewer does not realise there is structure. Learning in this task is taken to be perceptual rather than motoric but this view has come under recent criticism. We present evidence that learning may be perceptual but will not occur without a response to the stimuli. Passive watching is not enough to learn the sequential knowledge. We will describe three experiments which demonstrate a failure to learn when subjects are not required to make responses to learning items. In the final experiment, we will show that it is possible to make the same sequence more salient, and under these circumstances learning occurs. By manipulating salience experimentally, we show that observational learning seems to require conditions under which explicit knowledge can develop. Indirect learning of sequential knowledge seems to require action as well as observation.
 

Phonological and semantic systems supporting literacy and numeracy development.

Chris Donlan, Nata Goulandris and Lorna McNab.

    University College London

Twenty-four five-year-olds were divided into two groups (High-Rep/Low-Rep) according to their non-word repetition scores. Group performance was compared on measures of phonological fluency, semantic fluency, rote-counting (highest number reached in accurate recitation of the number word sequence) and number judgement (accuracy and latency in choosing the greater of two Hindu-Arabic numerals within the range 1-5). A measure of non-verbal ability was also taken.
With the effects of non-verbal ability removed, High-Rep outperformed Low-Rep in phonological fluency and rote-counting, but no difference was found for semantic fluency or number judgement accuracy. A strong positive correlation was found between semantic fluency and number judgement accuracy. Mean latencies for number judgement were calculated according to the numerical difference between items in each trial. A Symbolic Distance Effect was found, consistent with an account of the judgement process as based on direct mental representations of magnitude, and not on recitation of the verbal number sequence.
Findings suggest that common cognitive resources subserve children's early literacy and numeracy, but that across these domains phonological systems (as indicated by phonological fluency and rote-counting) are in some measure independent of semantic systems (as indicated by semantic fluency and number judgement).
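For readers unfamiliar with the term, the Symbolic Distance Effect is conventionally expressed as comparison latency falling as numerical distance grows; a textbook formulation (not the authors' specific model, and with purely illustrative parameters) is

    \[ RT(a,b) \approx c - k\,\log\lvert a-b\rvert , \qquad c, k > 0, \]

so that choosing the greater of 1 and 5 is faster than choosing the greater of 4 and 5, as expected if judgements are read off an analogue magnitude representation rather than the recited count sequence.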
 

Developmental impact of hearing impairment studied by logically prospective and controlled experimental designs

M P Haggard, S C Smith, S E Hind and E Nicholls

    MRC Institute of Hearing Research, Nottingham

Human studies of cognitive and other functioning in impairment are usually obliged to use non-experimental designs, the matched case-control design being the most common. Usually this is 'logically retrospective', i.e. stratified on the dependent, not the independent, variable according to the implicit causal model. One disadvantage of this is overestimation of disease effects (i.e. of the impairment) due to comorbidity bias in the referral and diagnostic process, even when biases from taking inappropriate controls are studiously avoided. We have been able to minimise comorbidity and other biases by using the double-sampled population design to provide cases and controls, and also by introducing an experimental element via a randomised controlled trial (RCT) of treatments. The ethical prerequisite for a trial is that sufficient impact (i.e. disability/handicap) is agreed to exist, such as to justify some intervention in general, but that enough uncertainty exists about the nature, duration etc. of benefits from available interventions to enable recruitment with truly informed consent in an RCT. The penalty incurred for the above design opportunities is that the treatable condition, otitis media with effusion (OME, aka 'glue ear'), whilst the commonest cause of hearing impairment in childhood, is mild, fluctuating, non-permanent and variable in its presentation. However, high prevalence permits prospective cohort studies.
In a previous EPS paper we showed that the impact of an OME history in children of 5-7 years on cognitive performance tasks (phonological processing and abstract reasoning) was measurable, but not large enough to be sensibly or affordably used as an outcome measure in an RCT (disease effect size less than 0.3 SD). Language effects at this age were null, as in most other studies. We therefore transferred our attention to behaviour; the behaviour problems in OME documented by the literature have plausible cognitive antecedents, but appear to cluster in ways generating effect sizes large enough to permit practicable studies.
We first developed a suitable instrument (BAI) sensitive to behaviour problems near the 70th rather than the 95th percentile, within appropriate psychometric constraints (good completion rate, non-extremity of responses, reliability, consistency, and construct validity). Parent-reported and teacher-reported versions of the BAI exist, with careful emphasis on observable behaviours and minimisation of interpretative biases; the two versions have complementary strengths/weaknesses in respect of sensitivity/objectivity. The resulting scale gives a general overall problem score and four factor-scores: antisocial behaviours, anxious behaviours, social confidence, and inappropriate/undirected behaviours (e.g. attention, which is in the domain of cognition).
We next documented impact (disease effect sizes which quantify deficit or 'need') for having a persistent OME history, comparing ~700 population controls with ~350 in-trial confirmed cases. For behaviour problems overall, the disease effect size was 0.60 population SD. This is conservative, as affected individuals were not screened out of the general population. Our large randomised trial confirms that improvements in hearing do work through to broader outcomes such as behaviour and quality of life. Comparison of treated with untreated children showed a 1-year treatment effect size of 0.40 SD; this remains highly significant after correction (ANCOVA) for baseline and SEG. Consistency of interpretation was further supported by the sub-scores; all but antisocial behaviour gave a disease effect, hence met the logical prerequisite for showing a treatment effect. Furthermore all but antisocial behaviour did show some treatment effect. The consistency of disease and treatment effects suggests a developmental breakdown of communication due to hearing loss, which is partly reversible by treatment but not entirely so. Reports of language/speech problems in these children were minimal. Interestingly, the older children (>5 years) showed larger effects for the cognitive dimension of inappropriate/undirected behaviours.
We conclude that behaviour problems are indeed an important aspect both of the manifestation and the developmental sequelae of even a mild hearing impairment. The hearing loss is a major causal factor but not the only one. Although the chief domain of impact may be pragmatic or paralinguistic, all developmental studies of language, cognition and behaviour should include at least a small set of questions on otitis media such as those offered as a cost-effective alternative to sweep audiometry for a school-entry screen1.
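For concreteness, the effect sizes quoted above are standardised mean differences in the usual sense (a standard definition, not restated in the abstract):

    \[ d = \frac{\bar{x}_{\text{cases}} - \bar{x}_{\text{controls}}}{SD_{\text{population}}}, \]

so a disease effect of 0.60 SD means that the mean overall behaviour-problem score of cases lay 0.6 of a population standard deviation above that of controls.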

    1S E Hind, R L Atkins, M P Haggard, D Brady and G Grinham. 'Alternatives in screening at school entry'. British Journal of Audiology 1999, 33, 403-412.
 

Do deaf children understand the "language of the eyes"? A preliminary study.

Lyn Jackson (Introduced by Professor H D Ellis)

    Cardiff University

The ability of 13- to 16-year-old deaf children from hearing homes to pass theory of mind (ToM) tasks and to judge emotions was tested using two first-order ToM tasks, one second-order ToM task, and Baron-Cohen's (1999) Eyes Task. The Eyes Task is a more subtle test of ToM and one that requires less receptive language. The majority of deaf children passed the first-order ToM tasks, with half passing the second-order ToM task. Deaf children's performance on the Eyes Task was significantly worse than that of hearing children aged 8 to 12 years. Three reasons were posited to explain deaf children's poor performance. This study confirmed there is a delay in the development of ToM ability which may be upwards of five years.
 

Emotional development in deaf children: Facial expressions, emotive stories and display rules

Judith A Hosie (Introduced by Professor R Campbell)

    University of Aberdeen

In this paper we present the results of a series of studies examining deaf children's understanding of the putative basic expressions of emotion (happiness, sadness, anger, fear, disgust and surprise), and their self-reported use of display rules. Deaf children of elementary and secondary school age, raised in a spoken language environment, were tested for their ability to match, label and comprehend labels for photographs of facial expressions and to link facial expressions to emotive stories. Comparable levels of accuracy were observed for deaf and hearing children of the same age. Moreover, inspection of the distribution of responses for different emotion categories indicated that deaf and hearing children made similar types of errors of interpretation (e.g. they confused fear with surprise), indicating strong similarities in the way that they conceptualise different emotions. Deaf children's knowledge of display rules as measured by the reported concealment of emotion was also comparable to that of hearing children of the same age. However, deaf children were less likely to report that they would conceal happiness and anger. They were also less likely to produce reasons for concealing emotion and a smaller proportion of their reasons were pro-social, that is, relating to the feelings of others. The results are discussed in relation to how children acquire an understanding of the reasons for emotional concealment in different contexts.
 

Effects of viewing angle on visual and audiovisual speech perception across different talkers.

Claire Martin and Tim Jordan

    University of Nottingham

Seeing the face of a talker can affect the speech we hear. However, the generality of these effects across different talking faces remains to be determined. We investigated this issue using two talking faces which differed substantially in their physical appearance (in fact, one male and one female). Moreover, to provide a full evaluation of the effects of these visual differences, each face was presented in full face, three-quarters and profile. Viewing angle affected identification accuracy for visual and audiovisual speech stimuli. However, overall levels of performance and the effects of viewing angle were essentially the same for both talking faces. These findings suggest that identification of visual and audiovisual speech information was not differentially affected by the substantial differences in the visual physiognomy of the two talkers. These results are discussed in relation to the importance of basic visual cues in visual and audiovisual speech perception and their implications for current theories of audiovisual speech perception.
 

Read my face: Contrasting contributions of face and mouth movements to visual and audiovisual speech perception.

Sharon Thomas and Tim Jordan

    University of Nottingham

We investigated the effectiveness of observing mouth-only (face static) and face-only (mouth static) movements during visual and audiovisual speech recognition. Face static images in which only the mouth moved were as effective as whole moving faces for enhancing perception of auditory signals and producing McGurk effects. More surprisingly, mouth static images in which everything but the mouth moved improved audiovisual speech recognition and also produced substantial McGurk effects. These results imply that while mouth movements enhance and alter auditory speech perception, extra-oral facial movements play an important role in visual and audiovisual speech recognition.
 

Where does the scalar property of time come from?

J H Wearden

    Manchester University

It is well known that variance in the timing behaviour of humans and animals almost always exhibits the scalar property, a kind of conformity to Weber's Law. What is less clear is where in the timing system this scalar property arises. Modern theories of timing usually account for timing behaviour in terms of an interaction of internal clock, long- and short-term memory, and decision processes (Wearden, 1999), so the scalar property could arise at any of these stages, and mathematical modelling suggests that scalar variance incorporated anywhere in the system would produce more or less the same observed result. The problem of where scalar timing comes from can, however, be addressed using techniques in which different parts of the timing system are excluded. For example, "episodic" timing tasks (developed from a procedure originally devised by Rodriguez-Girones and Kacelnik, 1998) minimize or prevent the use of long-term memory for time. This paper reports data from two episodic timing tasks, employing both auditory and visual stimuli, which use a "temporal generalization" method. The first task severely discourages the use of long-term memory, the second seems logically to prevent it altogether. In both cases, however, close approximation to scalar timing is found, suggesting that the scalar property of time may arise early in temporal processing, possibly from the internal clock itself.
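As a reminder, the scalar property referred to here is the timing analogue of Weber's law (a standard formulation, not specific to this paper):

    \[ \sigma(T) = k\,\mu(T), \qquad \text{i.e.}\quad \frac{\sigma(T)}{\mu(T)} = k \ \text{(a constant coefficient of variation)}, \]

so the spread of time judgements grows in proportion to the mean of the interval being timed.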

    Rodriguez-Girones, M.A., & Kacelnik, A. (1998). Response latencies in temporal bisection: Implications for timing models. In V. DeKeyser, G. d'Ydewalle, and A. Vandierendonck (Eds.), Time and the dynamic control of behavior (pp. 51-70). Göttingen: Hogrefe and Huber.
    Wearden, J.H. (1999). "Beyond the fields we know....": Exploring and developing scalar timing theory. Behavioural Processes, 45, 3-21.
 

Arousal and the speed of the internal clock

S O'Rourke and J H Wearden

    Manchester University

Penton-Voak et al. (1996) used a method derived from work by Treisman and colleagues (1990) to apparently 'speed up' the pacemaker of an internal clock in humans. When the events were preceded by a train of clicks, estimates of the durations of auditory and visual stimuli, as well as the length of intervals produced, were affected exactly as predicted by an increase in pacemaker speed. The click-train procedure has produced similar effects in other studies using different time-judgement methods, but how does it actually work? Treisman et al. (1990) suggested that the click-trains increased pacemaker speed by increasing "arousal", but there was no independent measure of arousal in their work. We report two experiments intended to shed light on how click trains affect time judgements. In the first, six seconds of periodic clicks preceded tones, but in some cases there was a silent delay (long or short) between termination of the clicks and tone presentation. This manipulation reduced the effect of the clicks, suggesting that the putative arousal wore off over a few seconds. A second study took measurements of skin conductance and heart rate during trials in which tones were preceded by regular or irregular clicks or a putatively arousing visual display. All preceding events tended to increase estimates of tone duration, but the strongest effects were obtained with regular clicks, and these were accompanied by increases in heart rate and skin conductance, consistent with Treisman et al.'s suggestion that regular clicks increased arousal.

    Penton-Voak, I.S., Edwards, H., Percival, A., & Wearden, J.H. (1996). Speeding up an internal clock in humans? Effects of click trains on subjective duration. Journal of Experimental Psychology: Animal Behavior Processes, 22, 307-320.
    Treisman, M., Faulkner, A., Naish, P.L.N., & Brogan, D. (1990). The internal clock: Evidence for a temporal oscillator underlying time perception with some estimates of its characteristic frequency. Perception, 19, 705-748.
 

Mild-to-moderate sensori-neural hearing loss and specific language impairment in childhood: a direct comparison of literacy and phonological skills.

J Briscoe, D V M Bishop and C Norbury (introduced by D V M Bishop)

    University of Oxford

An influential theory of specific language impairment (SLI) argues that the linguistic difficulties of affected children derive from auditory perceptual processing deficits. This suggests that similar patterns of language impairment might be seen in children with auditory deficits due to hearing loss. Little is known of language outcomes in children with mild to moderate levels of sensori-neural, rather than conductive, hearing loss (SNH). This study explored the possibility that auditory perceptual deficits have negative consequences for oral and written language in childhood. We predicted that phonological tasks such as discrimination of word and nonword pairs, nonword repetition, or phonological awareness may be particularly vulnerable to auditory perceptual deficits. Younger and older children with SLI were compared to children with bilateral sensori-neural hearing loss (mild, moderate or high frequency), plus children of similar ages and general abilities without language impairment. We shall present evidence that the linguistic profile of children with SNH differs from that of children with SLI, even though we did identify some language-impaired children within the SNH group.
 

Does speechreading activate auditory cortex in deaf people? An fMRI study

R Campbell1, M MacSweeney1, M Brammer2, G Calvert3, P McGuire2, A S David2 and B Woll4

    1. Human Communication Science, University College London
    2. Institute of Psychiatry, London
    3. Oxford Centre for Functional Magnetic Resonance Imaging of the Brain
    4. Clinical Communication Science, City University

Silent speechreading activates auditory cortex (superior temporal gyrus and superior temporal plane - BA 41,42,22) in hearing people. To what extent is activation in these areas observed when deaf people speechread? Six profoundly congenitally deaf adults, all of whom were highly 'oral' in language affiliation and background, were recorded while speechreading spoken numbers. Auditory cortex activation, while not completely abolished, was much reduced in this group compared with hearing subjects. One profoundly deaf woman who became deaf at the age of 30 months showed high levels of activation in right auditory cortex when speechreading. Early exposure to heard language may be required for the left auditory cortex activation pattern for speech processing, observed in hearing speakers, to develop.
 

Finding meaning in a degraded signal: Spoken word recognition under acoustic distortion

Jennifer Utman (introduced by D V M Bishop)

    University of Oxford

At least two sources of information contribute to the comprehension of spoken words: the acoustic signal associated with the word itself, and the semantic context in which the word occurs. However, both of these sources of information may be disrupted by distortions of the acoustic signal. Certain types of distortion, such as masking noise or filtering, can interfere with the peripheral processing of the acoustic signal, thus reducing the overall intelligibility of the input. Other types of distortion, such as competing speech or accelerated speaking rate, have relatively little effect on intelligibility of the signal, but can increase the attentional load required to encode the linguistic message and thus interfere with central language processes. The effects of the two types of distortion on different aspects of lexical processing were examined in a series of studies using a sentence-word priming paradigm. The results suggest that peripheral distortions disrupt the sensory encoding of phonetic information and the initial activation of lexical entries, whereas central distortions disrupt the selection and integration of activated lexical items. These findings may have implications for populations with endogenous disturbances in peripheral and central processing, including hearing-impaired and elderly individuals.
 

Early language development in profoundly deaf children

Margaret Harris

    Royal Holloway, University of London

There remains considerable uncertainty about the extent to which the early language development of prelingually deaf children mirrors that of their hearing peers in rate and composition. This study reports data on the vocabulary development of 14 deaf children at 24 months and in the preceding year. Within the sample there was considerable variation in the modality of communication used in the home, with some children receiving fluent British Sign Language (BSL) but the majority of children being presented with both signing and oral language. Information about the size and composition of the children's vocabulary was derived from a modified version of the Communicative Development Inventory, which was given to parents. The production of both signs and spoken words was assessed, and these were compared to the norms derived for hearing children. The pattern of results showed very considerable variation among the deaf children in total vocabulary size but, in general, the deaf children had significantly smaller vocabularies than their hearing peers. Differences within the sample, and between deaf and hearing children, are considered in the light of evidence about the nature of early vocabulary learning in relation to visual attention and mode of communication.
 

Factors affecting performance of hearing impaired children on auditory and cross-modal Stroop tasks

Adrian Davis and Shirley Grimshaw (introduced by Professor Mark Haggard)

    MRC Institute of Hearing Research and Psychology Department, University of Nottingham

Previous work on hearing impaired children's performance on selective attention tasks (Jerger et al., 1995) has suggested that, relative to hearing children, hearing impaired children experience similar amounts of interference from the voice in which a word is spoken when the task is to respond to the word, but reduced interference from the word when the task is to respond to the voice. It is suggested that this may be due to the impoverished auditory experience of the hearing impaired children prior to aiding. We extended this previous work to investigate (i) whether the age at which hearing impaired children were aided (i.e. the length of time for which there was impoverished auditory experience) might affect the extent of interference on the task, (ii) whether there were any other factors, such as age or degree of hearing impairment, that might modify the interference effect and (iii) whether interference from another modality, e.g. visual, would enhance or diminish the effect.
The subjects were 65 children aged 3 to 11 years (55 aged 5 to 11 years) with no hearing impairment and 36 children with congenital bilateral sensorineural hearing impairment who were aged 5 to 11 years. All the hearing impaired children wore their hearing aids for the test sessions except for two children, one of whom had a cochlear implant and one of whom had only a mild impairment and did not wear the aid. There were 4 profoundly, 9 severely, 22 moderately and 1 mildly hearing impaired children. Eight replications for each condition were presented to the child, and the accuracy and reaction time were noted for each trial. Only trials on which correct responses were made were counted in the analysis of the reaction time data. The results of the study showed that (i) there were significant developmental differences in normal hearing children such that it became easier for them to ignore the voice of a word (male or female) when the task was to respond to the word (man or girl), (ii) this developmental pattern was not observed in the hearing impaired children, (iii) the hearing children found it easier to ignore a picture when the task was to respond to a sound and (iv) there was some evidence that the age of identification and aid fitting of hearing impaired children was correlated with the ability of hearing impaired children to ignore information that was not on the 'to be attended' channel. These results are interesting because they suggest that hearing impaired children react differently from hearing children when competing demands are placed upon their attention. Hearing impaired children may find it more difficult to 'stay on task' when faced with multiple inputs. Providing early enhanced auditory input may overcome some of these developmental problems; however, the origin of these problems may not be simple to assess, as the majority of hearing impaired children had been wearing hearing aids for several years prior to testing.

    Jerger, S., Martin, R., Pearson, D. and Dinh, T. (1995). Childhood hearing impairment: Auditory and linguistic interactions during multi-dimensional speech processing. Journal of Speech and Hearing Research, 38, 930-948.
 

Development of STM coding in deaf and hearing children

Mairead MacSweeney, Ruth Campbell and Chris Donlan (introduced by Ruth Campbell)

    Department of Human Communication Science, University College London.

The dominant form of representation used in short-term memory (STM) by hearing people is a code based on the phonological and/or phonetic aspects of speech. Deaf people do not have full access to the auditory component of spoken language. The experiments reported here address what impact this has on the development of STM coding. Experiment 1 explored the use of speech- and sign-based STM representations using concurrent linguistic and non-linguistic tasks. There was substantial use of speech-based STM coding by deaf teenagers. Deaf adults also showed this effect (Experiment 2). Group analyses did not indicate that sign-based STM coding was used by deaf teenagers or deaf adults. However, recall by a small number of Deaf adults with Deaf parents suggests that early and extensive exposure to British Sign Language is necessary for a sign-based STM code to be reliably established. In Experiment 3 the developmental progression of STM coding in younger deaf children was investigated. Deaf 8- and 14-year-olds and hearing controls were tested on recall of speech-similar and visually similar pictures. Deaf 8-year-olds used a visual code alone, while deaf 14-year-olds used both visual and speech-based coding. The development in the use of STM codes in deaf children appears to parallel that of hearing children of a similar reading age.
 

Why are familiar-only experiences more frequent for voices than for faces?

J Richard Hanley1 and Jennifer M Turner2

    1. University of Essex
    2. University of Liverpool

Hanley, Smith & Hadfield (1998) showed that when participants were asked to recognise famous people from hearing their voice, there was a relatively large number of trials in which the celebrity's voice was felt to be familiar but biographical information about the person could not be retrieved. When a face was found familiar, however, the celebrity's occupation was significantly more likely to be recalled. This finding is consistent with the view that it is much more difficult to associate biographical information with voices than with faces. Nevertheless, recognition level was much lower for voices than for faces in Hanley et al.'s study, and participants made significantly more false alarms in the voice condition. In the present study, recognition performance in the face condition was brought down to the same level as recognition in the voice condition by presenting the faces out of focus. Under these circumstances, it proved just as difficult to recall the occupations of faces found familiar as it was to recall the occupations of voices found familiar. In other words, there was an equally large number of familiar-only responses when faces were presented out of focus as in the voice condition. It is argued that these results provide no support for the view that it is relatively difficult to associate biographical information with a person's voice. It is suggested instead that associative connections between processing units at different levels in the voice processing system are much weaker than is the case with the corresponding units in the face processing system. This will reduce the recall of occupations from voices even when the voice has been found familiar. A simulation was performed using the latest version of the IAC model of person recognition (Burton, Bruce & Hancock, 1999) which demonstrated that the model can readily accommodate the pattern of results obtained in this study.

    Burton, A.M., Bruce, V., & Hancock, P.J.B. (1999). From pixels to people: A model of familiar face recognition. Cognitive Science, 23, 1-31.
    Hanley, J.R., Smith, S.T., & Hadfield, J. (1998). I recognise you but I can't place you: An investigation of familiar-only experiences during tests of voice and face recognition. Quarterly Journal of Experimental Psychology, 51A, 179-195.
 

The Batman effect: Selective enhancement of eyes following familiarisation with faces.

Christopher O'Donnell and Vicki Bruce

    University of Stirling

Two experiments were designed to isolate the facial information utilised in the learning of new faces. In Experiments 1 and 2, two groups of subjects were each trained on different groups of faces, using a dynamic video presentation. They were then shown both trained and novel faces in a same-different decision task, where "different" trials included manipulations of internal and external facial features; their task was to decide whether two images were identical or differed in one or more features. Both experiments showed that hair change was most easily detected in untrained (unfamiliar) faces. When faces had been trained (familiar), detection of eye changes was selectively enhanced and sensitivity to hair changes was maintained. While previous studies have suggested that familiar face representations are weighted towards their internal features, our experimental results show that this is due to selective enhancement of the eyes alone, with no reduction in the salience of the hair. Moreover, within the limits of the familiarisation used here, there was no enhancement of the representation of the other internal face features examined. It appears that by concealing both hair and eyes, a familiar face identity can be effectively disguised - the Batman effect.
 

Eye movement asymmetries in facial sex discrimination.

S Walker, M N O Davies and P Stacey

    The Nottingham Trent University

Previous research has shown that facial sex discrimination may be influenced by (a) the saliency of facial features (e.g. Brown & Perrett, 1993), (b) the kinds of facial representations employed during testing (e.g. Bruce et al., 1993) and (c) perceptual asymmetries in processing (e.g. Burt & Perrett, 1997). It has also been pointed out that some of the different influences may not be mutually exclusive (e.g. Bruce et al., 1993). In the experiment reported here, eye-movement recordings were taken from 30 participants as they made sex discrimination judgements on male and female faces during a free fixation-viewing task. The twelve target faces presented had been pre-classified as being either easy or difficult to categorise by gender. Preliminary findings suggest that for both 'easy' and 'difficult' to categorise faces, participants spent significantly longer looking at the left side of the target faces (i.e. the left side of the stimuli from the viewer's perspective) than at the right side, but this was only the case for male target faces. However, for target faces that were difficult to categorise, participants spent more time looking at the bottom part of the face (i.e. the base of the nose and the jaw line) than when the target faces were easy to classify. The findings from this experiment, which relies on the use of on-line eye-movement data rather than on behavioural measures, offer some support to Burt & Perrett (1997) and also suggest that the feature saliency strategies employed by participants asked to make facial sex discrimination judgements are influenced by the kinds of target faces seen. The value of eye-movement studies for research into other aspects of face processing is considered.
 

Effects of amphetamine on trace conditioning in the rat: Attentional or motivational?

Helen J Cassaday and Christine Norman

    University of Nottingham

Learning that stimuli (e.g. flashing lights CS) predict motivationally significant outcomes (e.g. food or shock UCS) is normally reduced when these events are separated in time. This aspect of selective learning is called trace conditioning and is demonstrated between subjects. Within subjects, it reflects selective attention in that when the designated CS is temporally distant and so less informative, animals should show attentional responses to alternative stimuli in the background (e.g. mixed frequency noise). Schizophrenics show deficient selective learning in latent inhibition procedures and these are very sensitive to treatments affecting the dopaminergic system. We now investigate selective learning using a wider range of behavioural tests with drug studies to be followed by lesions. As a first step, we have examined the effects of amphetamine in both aversive and appetitive trace conditioning procedures. We do not always see the inverse relationship between conditioning to CS and background that should follow from Rescorla-Wagner theory. Nevertheless, the effects of dopaminergic treatments can still point towards (or away from!) the neural substrates necessary to the apportioning of associative strength seen in untreated normal animals. We examine the contribution of amphetamine's effects on baseline response rates and on the impact of the reinforcers in use to determine whether the observed results have any likely attentional component.
 

The mutual influence of gaze and head orientation in the analysis of social attention direction

Stephen R H Langton

    University of Stirling

Three experiments are reported that investigate the hypothesis that head orientation and gaze direction interact in the processing of another individual's direction of social attention. A Stroop-type interference paradigm was adopted in which gaze and head cues were placed into conflict. In separate blocks of trials, participants were asked to make speeded keypress responses contingent on either the direction of gaze or the orientation of the head displayed in a digitised photograph of a male face. In Experiments 1 and 2 head and gaze cues showed symmetrical interference effects. Compared with congruent arrangements, incongruent head cues slowed responses to gaze cues, and incongruent gaze cues slowed responses to head cues, suggesting that head and gaze are mutually influential in the analysis of social attention direction. This mutuality was also evident in a cross-modal version of the task (Experiment 3), where participants responded to spoken directional words whilst ignoring the head/gaze images. It is argued that these interference effects arise from the independent influences of gaze and head orientation on decisions concerning social attention direction.
 

The categorical perception of colours and facial expressions: the effect of verbal interference

Jules Davidoff and Debi Roberson

    Goldsmiths College, University of London

A series of five experiments examined the Categorical Perception previously found for colour and facial expressions. Using a two-alternative forced-choice recognition memory paradigm, it was found that verbal interference selectively removed the defining feature of Categorical Perception. Under verbal interference, there was no longer the greater accuracy normally observed for cross-category judgements compared to within-category judgements. The advantage for cross-category comparisons appeared to derive from verbal coding both at encoding and at storage. It thus appears that while both visual and verbal codes may be employed in recognition memory for colours and facial expressions, subjects only make use of verbal coding when demonstrating Categorical Perception.
 

Is Psychophysics a subject with a bright new future behind it?

Michael Morgan

    Institute of Ophthalmology, University College, London

Science that refuses to adopt new techniques and remains encapsulated in an esoteric world rapidly takes on the attributes of a 'Glass Bead Game' (Hesse, 1943). Psychology furnishes several examples, which it would be impolite to enumerate. Suspicion is growing amongst Neuroscientists that at least some aspects of Psychophysics are beginning to resemble the Glasperlenspiel. Psychophysicists attempt to deduce the highly complex rules of neural functioning without opening the skull. Is it not simpler, and more direct, to look at the brain directly with new imaging techniques? It would be unwise to dismiss the possibilities of functional imaging for the future, but so far, it does not have the technical capacity to look at what psychophysicists (possibly wrongly) call 'mechanisms'. On the other hand, psychophysics has been highly successful in the mechanistic analysis of colour vision, where psychophysical knowledge of mechanism still leads our physiological knowledge. I shall argue that motion is another such case. Psychophysical evidence strongly supports the existence of a Reichardt (or motion energy) detector of motion, but neither single-unit recording nor functional imaging has told us exactly how it works, below the computational level. Equally, the growing consensus that motion computation is a two-stage process is supported by functional imaging, but does not depend upon it[1].
Psychophysics is good at dissecting a temporal chain of processes into components. To illustrate this I shall describe recent work on the Reichardt mechanism. In a two-frame motion sequence, the threshold contrast of one of the frames (the test) varies as a function of the other (the pedestal)[2]. Near-threshold pedestals facilitate detection of the test while larger contrasts cause masking. However, the masking is greater if the high-contrast pedestal precedes the low-contrast test than if it comes second, suggesting that the masking non-linearity precedes the site of motion detection. The temporal asymmetry is found with 2 cpd Gabor patches but not with very low frequency gratings[3], suggesting that the asymmetry is not found in motion stimuli detected primarily by the magnocellular pathway. I shall describe the range of spatio-temporal conditions under which the temporal asymmetry is found.
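For reference, the Reichardt (motion energy) detector mentioned above has a standard opponent-correlator form (a textbook sketch, not the specific model tested here):

    \[ R(t) = L(x_1, t)\,\tilde{L}(x_2, t) - L(x_2, t)\,\tilde{L}(x_1, t), \]

where L(x_i, t) is the luminance signal at spatial sample x_i and \tilde{L} denotes a temporally delayed (or low-pass filtered) copy; positive R signals motion from x_1 towards x_2, and negative R the reverse.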

    1. Schrater, P.R., Knill, D.C., & Simoncelli, E.P. (2000). Mechanisms of visual motion detection. Nature Neuroscience, 3(1), 64-68.
    2. Morgan, M., & Chubb, C. (1999). Contrast facilitation in motion detection. Vision Research, 39, 4217-4231.
    3. Burr, D.C., Morgan, M.J., & Morrone, M.C. (1999). Saccadic suppression precedes visual motion analysis. Current Biology, 9(20), 1207-1209.
 

The cortical representation of binocular disparities.

Andrew Parker

    University Laboratory of Physiology, Oxford

The presence of cortical neurons selective for binocular disparity has been established for more than 30 years. Nonetheless, the contribution of such neurons to stereoscopic depth perception remains uncertain. In this work we advance a number of tests which are capable of evaluating whether cortical neurons have receptive field properties compatible with a direct involvement in depth perception. Application of these tests causes us to conclude that the large population of disparity selective neurons in the primary visual cortex (V1) does not carry signals directly suitable for the perception of depth. Our current interest is therefore directed towards sites in extrastriate visual cortex. It is clear that at least some neurons in the second visual area (V2) have novel properties, more closely associated with the perceptual characteristics of stereoscopic vision.
This work is in collaboration with Dr Bruce Cumming and is supported by the Wellcome Trust.
 

Functional magnetic resonance imaging: demon or saviour?

Andrew T Smith

    Royal Holloway, University of London

Functional MRI has (i) been lauded by some psychologists as doing everything neuropsychology does but doing it much faster and rather better and (ii) been criticised by some computationally minded vision researchers and others on the grounds that, like neuropsychology, it will never tell us how the brain actually works. I shall discuss these points, disputing the second and arguing that the first begs a prior question, by reference to results from two fMRI studies of the human visual cortex. The first concerns visual attention. Recent neurophysiological studies have shown that, in a wide variety of anatomical locations and experimental contexts, attention to a specific location or stimulus modulates the responsiveness of sensory neurons that are sensitive to that stimulus. With colleagues, I have shown that when visual attention becomes focused at a given point in space, this is accompanied not only by increased responsiveness of neurons at the corresponding cortical location but also by a widespread decrease in baseline activity levels at all other locations. This (i) is information of a qualitatively different type from that obtained using neuropsychological methods and (ii) bears very directly on computational models of attention.
The second concerns the way in which visual space is represented in the visual areas of the cortex. We have developed a way of estimating, from fMRI data, the average receptive field size of visual neurons at a given location in the cortex. We find that average receptive field size increases with stimulus eccentricity and also increases from V1 to V2 and from V2 to V3 at constant eccentricity. This, again, goes beyond what can be done with neuropsychology since it relates directly to physiological studies of single neurons in the primate visual system. Brain research has always been multidisciplinary and all techniques have their place. fMRI not only provides confirmation and extension of neuropsychological results but also offers something unique.
 

Energy-Efficient Operation of Visual Cortex

Peter Lennie

    Center for Neural Science, New York University

Neuronal activity is metabolically expensive, so much so that energy conservation must be a major principle governing the operation of cortex. The generation and propagation of action potentials accounts for most of the energy expended by the brain, so one would expect cortical systems to work in ways that minimize the numbers of action potentials they use. I will explore some implications of this idea for the operation of visual cortex, and will argue that the need to conserve energy should confer some distinctive functional properties on visual cortical neurons. In particular, one might expect that the earliest stages of a neuron's response following the onset of a visual stimulus would be a rich source of reliable information about the stimulus. These are just the parts of a neuron's response that are often ignored in physiological studies.
 

Marked discrepancies between abilities in normal individuals: the case of estimation and calculation.

Ann Dowker

    University of Oxford

Two main studies are reported.
1. 70 unselected state primary school children between the ages of 5;2 and 9;10 were given tasks involving (a) exact mental calculation; (b) derived fact strategy use; and (c) arithmetical estimation. There were high overall correlations between these abilities. However, some children showed marked discrepancies, in either direction, between calculation and estimation.
2. 44 adults from the general population with self-reported mild calculation difficulties, and 28 who reported no such difficulties, were given a set of arithmetic-related tasks, including among others Hitch's (1978) Numerical Abilities Tests and an estimation task for multiplication and division. Again, correlations between calculation and estimation were high, but some individuals showed marked discrepancies, in either direction, between these abilities. Such findings support the view that arithmetical ability is not a single entity, but consists of many components. The relevance of these findings to cognitive psychology and neuropsychology, to findings from brain imaging studies, and to education is discussed. In particular, the author's studies with regard to the componential nature of arithmetic have led to a 'Numeracy Recovery' early intervention pilot project for 6-year-olds who have been identified by their teachers as having difficulty with arithmetic. The children are assessed on different components of early numeracy, ranging from counting to solving word problems, and are given individual remedial work relating to the components in which they demonstrate weaknesses. The results of the project so far are summarized.
 

Number processing affects spatial accuracy.

Martin H Fischer (Introduced by Professor R A Kennedy)

    University of Dundee

The hypothesis of a Spatial-Numerical Association of Response Codes (SNARC) claims that visual presentation of a digit automatically activates that digit's magnitude representation along a left-to-right oriented mental number line. This hypothesis can account for spatial compatibility effects in response speed during parity judgments (Dehaene et al., 1993). It was here tested with spatial accuracy as the dependent measure.
In several experiments, neurologically healthy subjects bisected long strings of digits (e.g., 111111111111111111 or 99999999999999999), or lines of similar length and position on a page. In Experiment 1, performance was biased to the left for strings made from digits 1 or 2 and to the right for strings made from digits 8 or 9. This observation is consistent with the SNARC hypothesis. A second experiment attempted to control for visual differences between strings. Bisecting strings made of 5's that were presented among context strings of digits of either larger or smaller magnitude did not systematically affect spatial performance, although the earlier observation was replicated on the context strings. In the final experiment, a magnitude-dependent spatial bias was again observed when subjects bisected lines with two flanking digits.
These results extend previous findings and support the notion of an automatic association of numerical magnitudes with spatial response codes.

    Dehaene, S., Bossini, S., & Giraux, P. (1993). The mental representation of parity and number magnitude. Journal of Experimental Psychology: General, 122, 371-396.
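For concreteness, the dependent measure can be expressed as a signed bisection bias and related to digit magnitude. The sketch below uses invented trial data, not the experiment's:

    # Illustrative analysis with invented data: signed bisection bias
    # (negative = leftward of the true centre) by digit magnitude.
    import numpy as np

    # Columns: digit forming the string, true centre (mm), participant's mark (mm).
    trials = np.array([
        (1, 90.0, 87.4), (2, 90.0, 88.3), (5, 90.0, 90.0),
        (8, 90.0, 91.5), (9, 90.0, 92.2),
    ])
    digit, centre, mark = trials.T
    bias = mark - centre                     # mm; sign gives direction of bias
    for d, b in zip(digit.astype(int), bias):
        print(f"digit {d}: bias {b:+.1f} mm")
    print(f"correlation of bias with magnitude: r = {np.corrcoef(digit, bias)[0, 1]:.2f}")

A positive correlation, as in these invented numbers, corresponds to the SNARC pattern: strings of small digits pull the mark leftward and strings of large digits pull it rightward.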
 

Processing of affectively valenced words in anxiety: An RSVP study.

Amanda Holmes and Anne Richards

    Birkbeck College, University of London

Recent research has suggested that anxiety is associated with a bias in the prioritisation and selective processing of threat-relevant information, rather than with an increase in the actual speed or efficiency with which the information is processed. In the present study, we employ a dual-target rapid serial visual presentation (RSVP) detection task to explore this proposal further. The findings from the first experiment reveal that high trait anxious (HA) participants are more sensitive than low trait anxious (LA) individuals to detecting the presence of negative and positive words, relative to neutral words, in RSVP streams. In the second experiment, we were interested in whether negative, positive and neutral words would produce different levels of interference on the processing of subsequent targets. The findings were negative; however, overall performance was found to be significantly better in LA than in HA participants. The main conclusion to arise from the present research is that a processing bias in anxiety occurs as a consequence of the increased competitive value and processing efficiency of emotionally valenced material. However, this advantage for processing will only be evident when performance is resource-limited.
 

Effects of ambiguity in visual and spoken word recognition.

Jennifer Rodd, M Gareth Gaskell and William D Marslen-Wilson

    MRC Cognition and Brain Sciences Unit, Cambridge

In many recent models of word recognition, words compete to activate distributed semantic representations. Reports of faster visual lexical decisions for ambiguous words compared with unambiguous words are problematic for such models: why does the increased competition at the semantic level not slow the recognition of ambiguous words? In an experiment presented at a previous meeting we challenged these findings by showing slower lexical decisions for ambiguous words with two meanings than for words that have only one meaning. We suggested that previous reports of an ambiguity advantage were due to the use of ambiguous words that have clusters of highly related senses. This explanation relied on the assumption that, while multiple meanings produce a processing disadvantage, multiple word senses are beneficial. This is now confirmed by two new experiments that show the predicted advantage for words with many dictionary senses over those with only few senses, for both ambiguous and unambiguous words. Further, this pattern of results, in which an ambiguity disadvantage coexists with a sense advantage, is also found in the auditory domain. Finally, we explore several possible explanations for this benefit for words with many senses.
 

Age-of-acquisition effects are cumulative-frequency effects in disguise: Re-opening the debate.

Michael Lewis

    Cardiff University

Tasks such as reading, object naming and lexical decision are affected by both the age of acquisition (AoA) and the frequency of the stimulus words. Further, similar effects have been found for faces. A parsimonious explanation for these effects would be that it is the total number of times the stimulus has been encountered that predicts reaction time. This cumulative-frequency hypothesis, however, has always been rejected on the grounds that AoA and frequency effects are additive and not multiplicative as the hypothesis predicts. Reanalyses of data from two key AoA papers indicate that, if learning is assumed to follow a power function, then the effects of AoA and frequency can be interpreted as being multiplicative. The conclusion, therefore, is that the cumulative-frequency hypothesis has been rejected prematurely.
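The key step can be made explicit. Suppose reaction time falls as a power function of the cumulative number of encounters N, and approximate N as the product of word frequency f and years of exposure (current age t minus AoA). In this worked illustration k and \beta are free parameters, and the specific form of N is an assumption:

    RT = k\,N^{-\beta}, \qquad N \approx f\,(t - \mathrm{AoA})
    \Rightarrow\; RT = k\,f^{-\beta}\,(t - \mathrm{AoA})^{-\beta}
    \Rightarrow\; \log RT = \log k - \beta\log f - \beta\log(t - \mathrm{AoA})

Frequency and AoA thus enter as multiplicative factors in raw reaction time (equivalently, additive factors in log RT), which is the form of interaction the reanalysis attributes to the data.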
 

The origins of language differences in verbal short-term memory

Annabel S C Thorn, Susan E Gathercole and Clive R Frankish

    University of Bristol

Four experiments examined the origins of language familiarity differences in bilingual short-term recall. In Experiments 1A and 1B, bilinguals were tested on their serial recall and probed serial recall of both familiar words and unfamiliar nonwords in their first and second languages. A first-language advantage was obtained on both measures, indicating that the beneficial effects of language familiarity are not attributable to lesser output delay during overt recall. In Experiments 2A and 2B, the same group of bilinguals were tested on their serial recall and serial recognition of word lists in their first and second languages. A sizeable first language advantage was obtained on the serial recall measure but performance was comparable in the two languages on the recognition task. The insensitivity of serial recognition to language familiarity suggests that language differences in bilingual immediate memory may arise as a consequence of the differential availability of language-specific long-term knowledge with which to support recall performance.
 

Comprehension skill, inference making ability and their relation to knowledge

Kate Cain1, Jane Oakhill2, Marcia Barnes3 and Peter Bryant4

    1. University of Nottingham
    2. University of Sussex
    3. University of Toronto and The Hospital for Sick Children
    4. University of Oxford

In this experiment we investigated the relation between young children's comprehension skill and inference making ability using a procedure that controlled for individual differences in general knowledge (Barnes & Dennis, 1998). Children were first taught a knowledge base to criterion. A multi-episode story was then read out to the children, and their ability to make two types of inference, coherence and elaborative, was assessed. Both inference types required the integration of a textual premise with an item from the knowledge base. There was a strong relation between comprehension skill and inference making ability even when knowledge was equally available to all participants. Subsidiary analyses of the source of inference failures revealed different underlying sources of difficulty for good and poor comprehenders.
 

Visual perceptual deficits contributing to a case of visual object agnosia acquired in early infancy

Elaine Funnell and John Wilding

    Royal Holloway University of London

This paper will investigate the visual-perceptual difficulties of a child, RAS, who suffered a brain infection (viral encephalitis) in early infancy. While RAS has no apparent visual problems, she has profound difficulty with recognising drawings and pictures of objects and with copying. The main focus of our study has been to attempt to identify the key factors in her difficulty in identifying objects. We have studied RAS's abilities using a variety of materials (illusory Kanizsa triangles, visual search, detecting letters in dot matrices which require use of proximity, similarity, continuity etc. to segment the display). We will attempt to demonstrate that 1) RAS has severe problems with segmentation and/or grouping of elements in visually noisy displays, with the result that she often fails to select a target from the background; 2) RAS has a weakness in parallel processing, which may be responsible for the problem in segmentation; 3) simple shapes may also be inadequately defined in RAS's visual vocabulary, since differences between circles and ellipses or between squares and rectangles are often not detected. We will consider which, if any, of these difficulties may be fundamental to her problems with object recognition.
 

Recognition memory and source memory deficits following frontal lesions

Jamie Ward1 and Alan J Parkin2

    1 University College London
    2 University of Sussex

A classical double dissociation between impaired hit rates (patient CS) and impaired false alarm rates (patients JB and MR) in episodic yes/no recognition tasks is presented. It is argued that the pathologically high false alarm rates are due to a reliance on shared (rather than distinctive) features of the target items to be remembered. Given that distractor items are also likely to share these properties, they are incorrectly recognised. These patients have difficulties in discriminating the source of an item (e.g. was it seen as a picture or a word?), despite having normal hit rates. This is consistent with the notion that it is the quality of their memory representations that is altered, rather than a response parameter such as bias. The converse pattern (low hit rates, normal false alarm rates), however, is associated with normal source memory performance. That is, the patient can often identify the source of a memory even though in comparable testing situations he is unable to recognise it. This suggests that performance on source memory tasks employs different mechanisms from recognition memory, and does not simply reflect an increase in task difficulty.
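The contrast drawn here between representation quality and response bias is the standard signal-detection one. The sketch below computes sensitivity d' and criterion c from hit and false-alarm rates, using invented rates rather than the patients' data:

    # Signal-detection illustration (invented rates, not patient data):
    # d' indexes discrimination quality, c indexes response bias.
    from scipy.stats import norm

    def d_prime_and_criterion(hit_rate, fa_rate):
        zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return zh - zf, -0.5 * (zh + zf)

    for label, h, fa in [("normal hits, normal FAs", 0.85, 0.10),
                         ("normal hits, high FAs", 0.85, 0.40),
                         ("low hits, normal FAs", 0.55, 0.10)]:
        d, c = d_prime_and_criterion(h, fa)
        print(f"{label}: d' = {d:.2f}, c = {c:+.2f}")

Both patient patterns lower d' relative to the first row; the abstract's point is that only the high-false-alarm pattern is accompanied by impaired source memory, a dissociation that a single shifted response parameter could not capture on its own.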
 

Lost for words or loss of memories: Autobiographical memory in a semantic dementia patient.

Helen Moss1, Marinella Cappelletti2, Paul De Mornay Davies3, Eli Jaldow2 and Mike Kopelman2

    1. University of Cambridge
    2. St. Thomas's Hospital, London
    3. Middlesex University

Recent reports suggest that patients with semantic dementia show superior recollection of recent relative to remote autobiographical memories (Graham, 1999). It is possible, however, that this effect is due, at least in part, to the patients' linguistic deficit. It may be easier to retrieve the words referring to people, places and events that are part of current experience, leading to improved expression of recent incidents. We investigated this hypothesis in IH, a 65-year-old man with an 8-year history of semantic dementia. We devised a questionnaire about incidents in his life ranging from childhood to the last few weeks, including a structured series of cues consisting of increasingly specific lexical items relevant to each target incident. Results from this test were compared with a conventional measure of autobiographical memory, the AMI. While the AMI showed the expected step-like advantage for recent events, the cued test revealed that, with a few exceptions, IH was in the normal range for age-matched control subjects, and his memory of early adult life, in particular, was often remarkably preserved. These results support the claim that there may be linguistic and strategic components to the autobiographical memory profile observed for patients with semantic dementia.

    Graham, K. S. (1999). Semantic dementia: A challenge to the multiple-trace theory? Trends in Cognitive Sciences, 3, 85-89.