The distinction between pure and applied mathematics is a commonplace. Yet, the distinction is very different in the minds of philosophers from the way mathematicians conceive it. I will explore this difference and its significance for evidence, focussing on the role of intuitions and visual reasoning.
Spatial demonstratives – terms including this and that – are among the most common words across all languages. Yet there are considerable differences between languages in how demonstratives carve up space and in the object characteristics they can refer to, challenging the idea that the mapping between spatial demonstratives and the vision and action systems is universal. Reviewing findings from multiple experiments, I show direct parallels between spatial demonstrative usage in English and (non-linguistic) memory for object location, indicating close connections between the language of space and non-linguistic spatial representation. Spatial demonstrative choice in English and immediate memory for object location are affected by a range of parameters – distance, ownership, visibility and familiarity – that are lexicalized in the demonstrative systems of some other languages. The results support a common set of constraints on the language used to talk about space and on (non-linguistic) spatial representation itself. While demonstrative systems are not diagnostic of the parameters that affect demonstrative use in a language, demonstrative systems across languages may emerge from basic distinctions in the representation of and memory for object location. In turn, these distinctions offer a building block from which non-spatial uses of demonstratives can develop.
The articulators in sign languages of the deaf are visible, capable of retaining a signal for some time, and can produce different information simultaneously. Furthermore, signers make use of directions in space to represent discourse referents. Both facts are exploited for discourse purposes: reference tracking, foregrounding–backgrounding, and the expression of a certain viewpoint. I use the term referent projection for the use of an articulator (one of the hands or the signer’s body) or a direction in space to represent a discourse referent. Based primarily on data from monologues (signers talking about events of their everyday life or narrating stories based on cartoons), I shall exemplify general tendencies in the choice of direction relative to the signer for representing referents, show how signers use space flexibly for referent separation and viewpoint (Engberg-Pedersen 1993), and show how one hand can be used to background a referent in relation to the information conveyed by the other hand (Engberg-Pedersen 2011). I shall, moreover, demonstrate that these iconic features of sign languages are grounded in experience not only with the perception of space from a specific point, but also with the manipulation of objects and with interaction with other human beings.
Based on exploratory experiments in joint problem solving, this paper gives an overview of different solution strategies. The CogWheel experiments, presented on a screen, require two subjects to solve tasks cooperatively, with different solution strategies as the outcome. A general finding seems to be a large measure of instability where two or more strategies compete or interact. There also seems to be an important interplay between linguistic and bodily interaction in this example of joint diagrammatical reasoning: certain linguistic constructions appear to prompt shifts in gesture type, moving from more embodied, continuous strategies to more abstract, discrete strategies. These different roles played by gestures in reasoning strategies give rise to investigations of the exact role of embodiment.
Generalizing from our first talk, this paper outlines some important conceptual problems in the relation between diagrams and language. Often these two representational systems are taken to be very different, giving rise to large problems of interconnecting them; our perspective goes in another direction – that of seeing language as a highly sophisticated tool for diagrammatical reasoning. Verbs – and more generally predicates – are schematic and may be seen as a part of language that facilitates diagrammatical reasoning. Such a view also goes some way toward explaining the ease with which linguistic predicates, gestures, and externalized diagrams interact pragmatically in problem-solving situations.
1) Isomorphism (= the principle of one Meaning – one Form) has been widely confused with iconicity, but there are fundamental differences between the two:
2) In typological research syntagmatic isomorphism has recently been rechristened ‘monoexponentiality’. The monoexponentiality thesis claims that the grammatical meanings connected with verbs – e.g. speech act, polarity, voice, modality, tense, aspect – are typically expressed by dedicated formatives. This claim is falsified by each (finite) verb of every language (cf. Itkonen & Pajunen 2011). Thus, (syntagmatic) isomorphism is dramatically violated by cross-linguistic evidence. By contrast, sentences exemplifying analytical structures à la Man kill lion, with no (or ‘zero’) formatives for grammatical meanings, are – when true – the most perfect exemplifications of iconicity. Cross-linguistically, such ‘minimal’ sentences exhibit the following semantic features: declarative, affirmative, active, factual, present-state or past-event, 3SG-subject (3SG-object, if any) (cf. Itkonen 2013).
3) Taking sentences like Man kill lion to be the perfect exemplifications of iconicity presupposes that the ‘ontological counterparts’, mentioned in 1-(ii), are restricted to (the properties of) observable things and relations. Thus, notions with no ontological counterparts (like identification, negation, information structure, etc.) must be excluded from the domain of iconicity. The requirement of observational ontology entails that iconicity deals with space, but – Nota Bene – also with time. In many languages the prior event E1 must be referred to by the prior sentence S1; in some languages E1 may be referred to by the posterior sentence S2 (or the posterior event E2 by S1); but there is no language where E1 must be referred to by S2. This is a genuine linguistic universal explained by (temporal) iconicity.
4) Iconicity is based on the structural similarity between the signifier and the ontology signified; it is a semiotic notion not confined to human language. Because ‘structural similarity’ is the very definition of analogy, iconicity (but not isomorphism!) turns out to be an exemplification of analogy. Their relation is asymmetrical: such cognitive domains as language, logic, music, and vision are structurally similar, or analogous (cf. Itkonen 2005), but there is no iconic relationship between these four domains. New vistas might well be opened by embedding (semiotic) iconicity in the superordinate context of (intrinsically non-semiotic) analogy.
5) For decades, generativism officially banned analogy, while over-using it in reality. As a result, analogy came – wrongly – to be regarded with suspicion. Today it has been smuggled back into theoretical linguistics under such pseudonyms as (Langacker-type) ‘schema’ or (Goldberg-type) ‘construction’. These are needed in order to account for the existence of generalizations. But what we have here is traditional analogy, pure and simple: there are no non-analogical generalizations. Any purported objections against analogy just repeat the earlier errors and misunderstandings of generativism (cf. Itkonen 2005).
References:
Itkonen, Esa. 2004. Typological explanation and iconicity. Logos and Language, V:1.
___. 2005. Analogy as structure and process: Approaches in linguistics, cognitive psychology, and philosophy of science. Amsterdam: Benjamins.
___. 2011. Papers on typological linguistics. University of Turku: Publications in General Linguistics 15.
___. 2013. Functional explanation and its uses. In S. Bischoff & C. Jany (eds.), Functional approaches to language. Berlin: Mouton.
___ & Anneli Pajunen. 2011. A few remarks on WALS [= World Atlas of Language Structures]. In Itkonen 2011.
Linguistic structure emerges and evolves in response to a multitude of cognitive, social and environmental pressures (Tylén, Fusaroli, Bundgaard, & Østergaard, 2013). Nonverbal gesture lends itself as a particularly interesting window onto the conceptualization processes underlying word order due to its spontaneous and non-conventional nature. Previous studies have indicated that people of any linguistic background will use only one specific gesture order (SOV – subject, object, verb) when asked to describe transitive events using only gesture (Goldin-Meadow, So, Ozyurek, & Mylander, 2008). This is presented as evidence for innate biases in the conceptualization of events, thus transcending acquired linguistic structure. However, a competing explanation holds that gestural representations are influenced by the event structure of the referent situation itself. We will present data from an experiment in which pairs of participants engaged in a simple referential game (Fay, Garrod, & Swoboda, 2010), matching stimulus pictures using only gesture as a means of communication. Our results suggest that the event structure of the referent situations indeed affects the gestural ‘word order’ (syntax). Object manipulation events (e.g. ‘a ballerina throwing a sweater’) motivated SOV structure, while object construction events (e.g. ‘a ballerina painting a sweater’) motivated SVO. We argue that these representational orders reflect the two event types through iconicity. In addition, we show how social communicative pressures and the frequency distribution of event types may work to stabilize (and possibly conventionalize) a particular gesture order.
References:
Fay, N., Garrod, S., & Swoboda, N. (2010). The interactive evolution of human communicative systems. Cognitive Science, 34, 351-386.
Goldin-Meadow, S., So, W. C., Ozyurek, A., & Mylander, C. (2008). The natural order of events: how speakers of different languages represent events nonverbally. Proceedings of the National Academy of Sciences of the United States of America, 105, 9163-9168.
Tylén, K., Fusaroli, R., Bundgaard, P., & Østergaard, S. (2013). Making Sense Together: a dynamical account of linguistic meaning making. Semiotica, 194, 39-62.
What does the architecture of a textual diagram tell us about the cognitive processes involved in constructing it, and what instructions does it give us for the interpretation of the text? As one of the three types of iconicity, diagrams are icons of relations, whereas images share “simple qualities” with their object, and metaphors represent by “a parallelism in something else” (CP 2.277, 1903). Diagrams are external or internal (mental) images of relations, which must be perceived to become activated. They are not only representations of relations between loci in space, as they are on maps or architectural drawings; they are also, more generally, schemata of thought which represent relations between words, concepts, ideas, and larger structures of discourse. Spatial mapping is similar in structure to creating as well as interpreting diagrammatic cognitive schemata. We also form and activate diagrams when we orient ourselves in mental spaces and conceptual domains. In thought, reasoning, reading, and understanding texts, the diagrammatic mapping of concepts from one domain to another plays an important role.
Diagrams can be found at all levels of texts and verbal discourse. Diagrammatic organization is reflected in a text’s layout, structure, syntax, and paragraph or chapter organization, in the chronological order or topology of its events, and in the structure of its ideas or lines of argument. Mental diagrams are crucial for the study of texts, discourse, reasoning, and thinking in general. This is why the architecture of a textual diagram reveals the core arguments of the text: diagrams visualize, clarify, and organize them. Mental diagrams account for the coherence of a text; therefore, they allow us to ‘map’ its contents and guide us in the process of reading. The present contribution to the Aarhus Symposium will explore the role of diagrammatic icons in literary texts by Henry James, Virginia Woolf, the Sri Lankan-born Canadian novelist and poet Michael Ondaatje, and the Irish fiction writer Colum McCann.
In this talk, I describe a collection of studies indicating that internal cognitive processes are often constructed in, and of, two-dimensional spatial formats of representation, much like the topographic neural maps that populate so much of mammalian cortex. Language comprehension, verbal recall, visual imagery, problem solving, and rule learning all appear to recruit sensorimotor components of spatial relationships and locations as markers for organizing, and even externalizing, perceptual simulations of objects, events, and complex ideas. This infusion of spatial formats of representation for cognitive information is particularly prominent in the frameworks of embodied cognition and cognitive linguistics, where representational entities and structures are treated not as static logical symbols that are independent of perception and action but instead as spatially dynamical processes that are grounded in perception and action.