
« Gesture verbs »

Cognitive-visual mechanism of "classifier verbs" in Norwegian Sign Language
Sonja Erlenkamp

Abstract

For four decades, many sign language specialists have referred to a particular type of sign as “classifier verbs”, by analogy with the description of classifier predicates in spoken languages. The application of the notion of classifier verbs to these signed constructions is, however, increasingly regarded as problematic, as a number of recent publications indicate (e.g., Emmorey 2003). Adopting a cognitive approach, this article examines these constructions as they are used in Norwegian Sign Language and seeks to explain meaning construction in occurrences of these signs. By applying mental space theory (Fauconnier 1997; further developed by Liddell 2003 for American Sign Language), it is shown that these signed constructions – also called “depicting verbs” – share certain basic cognitive principles of iconic mapping with the iconic gestures that accompany speech, while being integral parts of the grammar of the signed language. Based on three different types of blending, three types of depicting verbs are proposed: manipulators, substitutors, and descriptors.


Editor’s notes

This contribution is provided with a sample video file showing the examples discussed in the article « in motion » (cf. Appendix). This video has been encoded using the DivX codec; if you have trouble playing the file, you will probably have to download and install the codec available at http://www.divx.com/en/divx (free of charge).


Full text

Introduction

  • 1  Other terms used for these signs are “polysynthetic signs” (Wallin 1994) parallel to polysynthetic (...)

In sign language linguistics, a particular type of sign has traditionally been referred to as “classifier verbs” (see for example Supalla 1986), as a parallel to the description of classifier predicates in spoken languages according to, for example, Allan (1977).[1] All known signed languages seem to share this phenomenon. In a relatively large number of articles and books (for an overview see Schembri 2003), “classifier verbs” in signed languages are described as verbs containing a classificatory element (henceforth classifier) expressed by handshape and hand orientation. This notion has not been without problems, and there is still little agreement among sign language linguists as to whether the notion of classifier can appropriately be applied to the signs in question. One reason for this is the lack of agreement in linguistics as to how to define and apply the notion of “classifier.” Different attempts have been made (e.g., Allan 1977, Aikhenvald 2000, and Craig 1986 and 1992), but even for spoken languages the concept of classifiers seems to be somewhat problematic. This is due to the fact that the term “classifier” is an umbrella label for a continuum of noun categorization devices, from the lexical numeral classifiers of South-East Asia to the highly grammaticalized gender agreement classes of Indo-European languages, and that “classifiers in spoken languages come in different guises” (Aikhenvald 2003: 87).

Over the years, different phenomena in spoken and signed languages have been identified as classifiers and – according to their function and grammatical status – have been divided into sub-groups such as numeral classifiers, nominal classifiers, and verbal classifiers (see e.g., Aikhenvald 2000 and 2003, and Craig 1986 for spoken languages). For signed languages, labels such as handling classifier, body classifier, and size-and-shape classifier have been suggested (cf. Supalla 1986). The property shared by all these phenomena is claimed to be the open reference of the language symbols: the same language form is used to refer to a class of different entities – thus the term “classifier”.

Taking into account that one of our main cognitive abilities as humans is to categorize and classify, it should be no surprise that this cognitive process is also mirrored in language structure. In fact, it would be surprising if any particular language did not make use of this cognitive process at all. As Aikhenvald (2003: 87) argues, “almost all languages have some grammatical means for the linguistic categorization of noun referents.” Thus, even languages which usually do not fall under the category “classifier languages” have at least some lexemes or constructions with classifying elements. English, for example, uses classificatory lexemes to talk about amounts of entities expressed by mass nouns: a glass of water, a bottle of wine, a cup of tea, etc. As a result, the following question arises: how much classification does a language need in order to be called a “classifier language”? And furthermore: what kind of analysis of the classificatory constructions in a language is most useful for gaining insight into that particular language – or into language in general? As always in linguistics – and in science in general, for that matter – the theory in use will determine much of the outcome of the analysis of classificatory devices within the language in question. Erlenkamp (2000), for example, evaluated German Sign Language on the basis of Barron’s (1982) definition of classifier verbs and his criteria for identifying them. The results showed that German Sign Language (DGS) seemed to have a verbal system which includes a class of “classifier verbs”. Although this analysis is not incorrect, I am not sure that it provides much insight into the language system of DGS.

In this article, I will not even attempt such an evaluation of Norwegian Sign Language (NTS), simply because I do not think the notion of classifier is useful for gaining insight into these signed language constructions. Instead, I will give an overview of how meaning construction in so-called “classifier verbs” can be understood by applying a conceptual blending model based on the mental space theory originally developed by Fauconnier (1985, 1997), further developed for a signed language by Liddell (2003) and for spoken language as conceptual blending theory by Fauconnier & Turner (2002). This approach gives insight into the role of iconicity in meaning construction in these verbs and, as a result, offers an explanation of why so-called “classifier verbs” in NTS can and should be described as three different phenomena. The article is based on a relatively large amount of data analyzed over the past eight years, including narratives, informational texts, dialogues, elicited data, and data on Norwegian gestures accompanying spoken Norwegian based on the Frog Story (Meyer 1980). To illustrate my analysis, one short but typical Norwegian Sign Language narrative is used as a showcase throughout the article.

1. Analyzing “classifier verbs”: metonymical iconicity as a key

  • 2  A space is grounded when « its elements are conceptualized as existing in the immediate environmen (...)

In NTS there are a large number of signs (if not all signs) which make use of, or were originally developed through, the combination of mainly two different cognitive processes: iconicity and metonymy. Cognitive iconicity is, according to Wilcox et al. (2004: 142), “defined not as a relation between the form of a sign and real world referent, but as a relation between two conceptual spaces”. In terms of conceptual blending, these conceptual spaces are often called INPUT 1 and INPUT 2. Following Liddell’s (2003) analysis for American Sign Language, these two spaces are the “real space”, a grounded space, and the “event space”, a non-grounded space.[2]

Iconicity in signed languages is, however, not based on a one-to-one relationship between one possible appropriate image and its corresponding linguistic representation. As Taub (2001: 45) claims, there are often a number of appropriate images that can be used to represent a complex associated concept in (iconic) linguistic representation. Thus the selection process by which an image becomes part of an iconic signed language representation “is an example of the cognitive process metonymy, which has been treated by a number of cognitive linguists (Fauconnier 1985, Kövacecs & Radden 1998 [sic], Lakoff & Johnson 1980, etc.)”. Following the definition of Radden & Kövecses (1999: 21), “metonymy is a cognitive process in which one conceptual entity, the vehicle, provides mental access to another conceptual entity, the target, within the same cognitive model”. The distinction from metaphoric mapping is that both target and vehicle in metonymic mapping are within the same domain. With regard to iconic representations in NTS, the images used are often based on part-whole or whole-part mappings. Thus, an important part of the creation of many iconic signs is the use of metonymic iconicity, where the process of cognitive iconicity is combined with a metonymic mapping process. As a consequence, metonymic iconicity can be described as a combined process of metonymical and iconic mapping where “it is not only the semantic pole of a sign that plays a role in metonymy; the phonological pole, the visible moving articulators, also is conceptualized [namely in real space, according to Liddell 2003; S.E.] and becomes an important element of metonymic representations” (Wilcox et al. 2004: 143). As part of metonymical iconicity, schematizations and other basic cognitive processes probably play an important role in the creation of iconic signs. Take for example the Norwegian Sign Language sign BIL (car) (see picture 1).

Picture 1. The Norwegian sign BIL (car)

  • 3  In highly lexicalized signs like BIL, this is probably not what happens when the sign is used in N (...)

The sign BIL is a highly lexicalized sign in NTS; still, it is to a high degree transparent to both signers and non-signers, which makes it possible to analyse the basis of its origin. The development of the sign can be assumed to be based on metonymical iconicity, since the configuration of the hands can trigger a conceptualization of hands holding something, in this case a conceptualized steering wheel.[3] This sign does not refer to “holding a steering wheel” in discourse, but to a car. The selection of the image of “hands holding a steering wheel” to refer to a car can be described as based on the combined cognitive process of metonymy and iconicity, where the hands may be conceptualized iconically as hands performing an action, motivating a metonymic understanding in which the hand’s interaction with an object stands for the object. In the selection of this particular image, (ontological) salience, salient features of prototypes, and schematization probably played some role. Humans using objects may be ontologically more salient to other humans than the objects themselves, and holding a steering wheel may be a more salient feature of the prototypical event of “driving a car” than shifting gears. This is supported by the fact that iconic gestures are often based on the same schematized images (see section 4). At the same time, the use of a car in real life is much more complex than simply “holding a steering wheel”, but as part of the cognitive process of developing the sign BIL, the schema of “holding a steering wheel” was the one that became part of the iconic linguistic representation. In the case of the sign BIL, the reference to a car is part of its current lexicalized meaning. But there are other signs in NTS and other signed languages where the meaning is not as fixed; meaning construction for every occurrence of these signs, the so-called “classifier verbs”, is based on the prompting of a blend between the real space and the event space.

  • 4  In the instance of the sign for wall, window, etc., it is a vertically oriented, large, flat surface.
  • 5  The term symbol is used here without reference to Peirce’s semiotic categories, but refers to a fo (...)

Due to the process of metonymical iconicity, the same classifier verb can potentially refer to different types of real world entities when these real world entities share features that can be conceptualized in the same vehicle entity, but provide access to different targets. Signers in NTS, for example, refer to horizontal structures, like a floor, sheets of paper on a desk, several cars standing beside each other, etc., by using their hands in a b-handshape to stand for or trace parts of a horizontal surface. Through an iconic mapping process the hands are conceptualized as objects, motivating an understanding of hands standing for objects with a certain shape. In this case the iconic resemblance is based on a motivation that the orientation of the hands stands for the orientation of the surface and that the shape of the hands stands for the schematized shape of the surface/object.[4] In consequence, this sign can refer to different real world entities that share these properties. In principle this is true for any symbol based on metonymical iconicity[5], but there seems to be a difference between classifier verbs and highly lexicalized signs. In NTS a large number of signs – for example the sign BIL – have undergone a process of limitation of semantic scope and have become conventionalized lexemes, which refer to only one single entity type. In the sign language research literature, these signs are sometimes referred to as “frozen forms” or “frozen signs”, in opposition to “classifier signs” (cf. Brennan et al. 1984). For the purposes of this article I will use this terminology.

The lexicalization process of frozen signs in NTS often involves structural mechanisms for the limitation of semantic scope, among them borrowing the mouth-picture of the corresponding word in the surrounding spoken majority language, i.e., Norwegian, and using it in combination with the sign, so-called “mouthing” (Vogt-Svendsen 2001). Despite these mechanisms, NTS has many signs which do not make use of such “direct reference” mechanisms in discourse. In fact, we find classifier verbs used in NTS in all types of texts, with reference to different types of entities in different contexts. These are the signs which so far have been called “classifier verbs” or simply “classifiers”. Liddell (2003) describes similar signs in American Sign Language (ASL) as being based on a cognitive mapping mechanism for visualization and therefore names them “depicting” verbs. I will follow Liddell’s approach, but in addition I will show that NTS uses three different kinds of mappings, leading to three different depicting sign constructions. Furthermore, I propose that depicting verbs share several qualities with speech-accompanying gestures while also showing linguistic properties of verbs, which opens up an understanding of these verbs as “gesture verbs”. Thus, in the last section of this paper it will be shown how depicting verbs share some properties with gestures while at the same time showing linguistic properties like “potential for syntactic combination with other gestures”, which according to McNeill (2005: 7) gestures typically lack. But first I will give a brief introduction to some of the problems that arise when the traditional approach is used to analyze depicting verbs as “classifier verbs”.

1.1 Traditional analysis of “classifier verbs”, morphology, and the root problem

One of the main concerns with respect to the notion of so-called “classifier verbs” has been the identification of morphemes, as well as of the root of the sign or the verb stem in a traditional structuralist sense. Different suggestions have been made, amongst others to treat the handshape, the hand orientation, the location in space, and/or the movement of the hand(s) as different morphemes (e.g., Supalla 1982 and 1986 for ASL). This is due to the fact that these different sign parts seem to be combined differently in signs in order to refer to different objects and their movements in space. In some theories (e.g., McDonald 1983 for ASL), the handshape is regarded as the core morpheme of these signs. Other theories (e.g., Supalla 1982 for ASL) argue that the movement is the verb stem or root, since the movement, more than the other parts, indicates the semantic description of the verb’s action. Another issue that has hardly been addressed yet is the question of what kind of construction these signs resemble. So far they have been treated by most sign language linguists according to the hypothesis that “signs in signed languages are like words in spoken languages”; consequently, “classifier” verbs – being signs comparable to words – are supposed to consist of morphemes. However, the traditional notion of word, covering a range of different construction types in different spoken languages, might not be the best concept to use for describing constructions in relatively little-described languages like signed languages. Nor is the notion of morpheme generally accepted or unproblematic in spoken language linguistics: Anderson (2005), for example, rejects the use of the classical notion of morpheme by arguing that the “notion that the elemental building blocks of words can be discovered by looking for recurrent relations between form and meaning is falsified in both directions” (2005: 198) and, furthermore, that “the notion that components of meaning are related in a one-to-one fashion to elements of form […] is similarly falsified in both directions” (2005: 199). Other theories, like (radical) construction grammar (e.g., Goldberg 1995, Croft 2001), use a notion of construction where different construction types, such as words and complex constructions, are regarded as pairs of form and meaning differing only in internal symbolic complexity. In this article I will not go further into the discussion of what kind of construction depicting verbs are in a construction grammar sense, but as shown in the examples below, depicting verbs can be part of rather complex constructions where different body parts refer simultaneously to different entities interacting with each other. Thus, it should be kept in mind that the “signs are like words” hypothesis might not be the best way to approach these constructions, nor is a morpheme analysis, in particular with regard to the search for core morphemes. As Engberg-Pedersen (1993) points out for Danish Sign Language, the handshape, orientation, and movement in depicting verbs are mutually interdependent, making an appropriate morphological analysis of these constructions difficult.

In my opinion, this issue cannot be resolved by looking at the language data, since the definition and use of the morpheme concept – as well as the concepts of root and stem – are based on the idea of arbitrary symbols that combine the form and meaning of a language symbol “by coincidence” and convention, in contrast to depicting verbs, which depend on the use of iconicity in their meaning construction. The morpheme concept is a concept of spoken language description, and its application is not without problems even in spoken language linguistics. This discussion, however, has not been a focus in signed language linguistics, and the traditional notions of phoneme and morpheme are still used by a relatively large part of the sign language linguistic community. As a result, the segmentability of signs in terms of phonemes and morphemes is another issue in signed language linguistics and is therefore briefly discussed in the next section.

1.2 The problem of segmentability in a morpheme based model

In accordance with Engberg-Pedersen (1993), I claim that the different parts of depicting verbs are mutually interdependent in referring to an entity. As already mentioned, I argue that a morphological analysis is not an appropriate approach when describing these signs. Take for example drawings 1 and 2:

  • 6  These signs can be regarded as a minimal pair since they differ in only one visual-formal feature (...)

Both drawings show the same handshape, but different orientations: in drawing 1 the palm is facing down, in drawing 2 it is facing to the side. Both hand configurations are used in NTS and can refer to different types of objects, but can also refer to the same object, depending on the context. The sign configuration in drawing 1 is – amongst many others – used to refer to large, four-wheeled vehicles. The sign configuration in drawing 2 is – amongst many others – used to refer to vehicles or animals of a certain size and type: quadrupeds or vehicles with at most three wheels. Both signs can be used with the same movement. In addition, both signs can be used to refer to surfaces in general, though with different orientations. Thus, one could argue that the hand orientation has morphological or phonological status in these signs, because it is the only formal difference between them that allows the signs to refer to different types of entities. But as already mentioned, this approach causes some analytical problems. First, it has to be decided whether “hand orientation” should be considered a phonological or a morphological feature of the sign. Since the hand orientation seems to be the smallest form unit in these signs that is capable of conveying a distinction in meaning, one could argue that it belongs more to the phonological than to the morphological level.[6] On the other hand, the hand orientation does add some meaning to the sign with regard to its reference to an entity, namely how an object is oriented in space in relation to its form. The understanding of what the signer is signing about when using depicting verbs depends on clues like hand orientation: how is the object, in relation to its form, oriented in space? In addition, the hand orientation is dependent on the existence of a handshape and a location/movement in space. The same is true for the handshape and a possible movement: they cannot be used without any hand orientation, they are the smallest form units conveying a distinction in meaning, and they undoubtedly carry some meaning, based on the resemblance of form/movement to the overall form of an object and its movement in space. Handshape, orientation, and movement together form an iconic resemblance to some conceptualized physical features of the entity that the sign refers to, while at the same time each part makes an important meaning contribution to the iconic resemblance. Hand orientation, handshape, location, and movement – and in many instances also non-manual parameters – are interdependent parts of a sign, making it difficult to determine which part is the smallest meaningful unit, or even the smallest unit creating a distinction in meaning. I suggest that these signs are constructions making use of Gestalt-like perception. Thus, in my opinion, the morpheme concept is not adequate for the description of these verbs, which consequently makes the search for the verb root or stem redundant. In order to gain more insight into these constructions, I suggest focusing on the iconic potential of these signs and the cognitive background for the construction of their meaning.

When focusing on the notion of iconicity instead, it becomes clear that iconicity as a means of linguistic representation plays a large role in signed languages – presumably larger than in spoken languages – because iconicity and transparency are more difficult to use as construction mechanisms when language symbols rely on sounds rather than on visual stimuli. Human cognition finds it much easier to visualize what a wall looks like than to try to create a language form based on “what a wall sounds like”. In other words, the question as to how these signs can be segmented is, in my opinion, more a matter of how iconic resemblances in language can be described than a question of morpheme status.

  • 7  One of the first sign language descriptions by Stokoe (1960) focused on the fact that American Sig (...)

However, it needs to be mentioned that earlier analyses of signed languages with regard to their morpheme and phoneme inventory were important steps in pushing forward the description of signed languages. Not only did they open the discussion of signed languages as real languages – which is widely accepted in linguistics today – they also clearly showed that signed languages can be described as double-structured systems like spoken languages.[7] The fact that both the phoneme and the morpheme concept may need to be re-evaluated as proper concepts for the description of depicting verbs does not mean that signed languages are not real languages, nor that they do not have a double-layered structure; it just shows that the visual medium makes use of different mechanisms than the auditory one does. In addition, it does not mean that signs based on the iconic-visual mechanism described above should not or cannot be analyzed into smaller parts. It just means that one has to find other methods to describe the interaction of the different sign parts. Cognitive linguistics in general (cf. Langacker 1987 and 1991), and particularly the mental space theory as used by Liddell (2003), provide the kind of theoretical background needed to explain the mechanisms found in depicting verbs.

2. Mental space theory and iconicity in signing

The mental space/conceptual blending theory according to Fauconnier (1985, 1997) and Fauconnier & Turner (2002), further developed by Liddell (2003) for signed languages, as well as other cognitive linguistic theories (e.g., Langacker 1987, 1991, and 2000, Lakoff & Johnson 1980), are based on the assumption that the cognitive organization of thoughts and images directly influences how language structure is composed. In the case of signed languages this means that the use of iconicity, too, is based on how humans organize information. Iconicity is an extreme point on a continuum of similarities and differences between conceptualized structures (see for example Wilcox’s model of iconicity in Wilcox 2004). Because of the way signers express thoughts by means of metonymical iconicity in parts of signed linguistic representations, iconic signs like depicting signs will often not show a one-to-one resemblance between the referred object and the sign when seen in isolation. In signed discourse, on the other hand, different context clues contribute to a limitation of possible references, often leading to an unambiguous reference to one specific referent. This feature of depicting verbs has traditionally been explained through the notion of classifier verbs. There is, however, one major issue when trying to describe the different classes that “classifier verbs” are supposedly based on: in all linguistic classification systems, the distinction between classes has been described as based on one or several criteria, such as gender, animacy, etc. With regard to depicting verbs this seems rather impossible, mainly because the process of metonymical iconicity is based on different aspects for different depicting verbs, like shape, involvement of human activity, part-whole descriptions, etc. Thus, regardless of the theory of categorization that is employed, depicting verbs do not resemble categories based on the same criteria. In consequence, neither Aristotelian categories with clear-cut boundaries and without internal structure, nor prototypical categories representing radial categories structured around prototypes, give satisfactory descriptions of the classes depicting verbs might be based on in the way that has traditionally been attempted. Rather than claiming that a hand formation as shown in drawing 1 is a “vehicle classifier” (Johnston & Schembri 2007 use this term with regard to the same handshape in Australian Sign Language), one would have to say that the hand formation is a resemblance of a schematized shape and orientation of an object in space. Not all depicting verbs, however, are based on metonymical iconicity of a schematized shape and orientation of an object in space. We also find, for example, iconic resemblance of how an object is handled. At best, several different “classifier verb” systems could be proposed, based on different mapping mechanisms between real space and event space entities, as will be described in the following.

Mental space theory (Fauconnier 1985, 1997) and – partly derived from it – conceptual blending (Fauconnier & Turner 2002) are based on the assumption that different conceptualizations interact with each other, creating a new blended conceptualization. The model has been adopted and adjusted for signed language description by Liddell (2003), who proposes the following four spaces for the description of iconic representations in ASL: the generic space, the blended space, the real space, and the event space, of which the latter two serve directly as input spaces for the blended space. In the following I will explain how this model applies to the analysis of depicting verbs based on NTS data.
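To make the architecture of this four-space model concrete, the following minimal sketch renders it as a small data structure. This is only an illustrative formalization: the space labels follow Liddell (2003) as summarized above, but the class and attribute names, and the feature descriptions, are my own hypothetical choices, not part of Liddell’s or Fauconnier’s apparatus.

from dataclasses import dataclass, field
from typing import Dict

# Illustrative sketch only: the four space labels follow Liddell (2003);
# class and attribute names are hypothetical, chosen to show how the
# spaces relate to one another.

@dataclass
class MentalSpace:
    name: str
    elements: Dict[str, str] = field(default_factory=dict)

@dataclass
class RealSpaceBlend:
    generic: MentalSpace      # conceptual structure shared by both inputs
    real_space: MentalSpace   # grounded input: the signer's visible articulation
    event_space: MentalSpace  # non-grounded input: the event being talked about
    blended: MentalSpace      # the blend the recipient constructs from the inputs

# The "two cars parked side by side" example discussed below (figures 1-3):
real = MentalSpace("real space", {
    "right hand": "flat handshape, palm down",
    "left hand": "flat handshape, palm down",
    "relation": "hands held side by side",
})
event = MentalSpace("event space", {
    "A": "the signer's car",
    "A'": "the other car",
    "relation": "parked side by side",
})
generic = MentalSpace("generic space", {
    "objects": "two flat, horizontally oriented entities",
    "relation": "side by side",
})
blend = RealSpaceBlend(generic, real, event,
                       MentalSpace("blended space", {
                           "right hand": "the signer's car",
                           "left hand": "the other car",
                       }))
print(blend.blended.elements)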

Let us assume, for example, that a signer is talking about parking her car next to another car. In NTS she could use the hand configuration shown in drawing 1 to refer to her own car, while she would use her other hand with the same hand configuration to refer to the other car, moving both hands in front of her in the signing space and putting them beside each other. Somebody who does not know what she is talking about would watch the signer first moving and then holding her hands in the described manner, without having a concrete idea of what she is referring to. In fact, if the communication partner knows NTS, he could arrive at several different possible interpretations: he could assume that the signer is talking about, for example:

  • putting hands in front of her (or somewhere else),

  • two objects with a flat surface which are in front of the signer (or somewhere else),

  • handling something in front of her (or somewhere else) by putting her hands in the described manner in front of her.

In signed languages, the space in front of the signer as well as her body parts may be used in two distinct ways:

  • as articulation space, where body parts are articulators creating a visual symbol,

  • as articulation space where the space around the signer at the same time refers to some space in the discourse; in this case the body parts are not only articulators, but also refer to entities in the discourse.

This distinction is an important one for the use of iconicity in signed languages; according to Liddell (2003), it defines the difference between a real space blend and real space itself, as I will explain in the next paragraph.

  • 8  Real space and reality do not necessarily match. Optical illusions for example make use of this ph (...)
  • 9  These are only examples; the recipient may come up with numerous other interpretations as well.

In our example, the communication partner watches the signer using her hands in the signing space in the described manner. This evokes a mental image in the communication partner’s brain of the actual scenery they both – the signer and the recipient – are in. This image (not the real event itself) is called real space.[8] Based on this real space image, the recipient has to make an interpretation. Does the signer use her body parts as articulators interacting to evoke one visual symbol, like a frozen sign, or is the recipient supposed to interpret different body parts as iconic resemblances referring to discourse entities? Without a context he would have to decide which of the above-mentioned interpretations is most likely, and based on his knowledge he may conclude that this construction is not a frozen sign and thus needs an interpretation based on iconic resemblances. All this happens, of course, automatically in an instant and is usually not a conscious decision. Lacking more context information, the recipient might come up with one of the following interpretations:[9]

  • the signer is referring to her putting her hands somewhere, for example on a table,

  • the signer is referring to two cars parked beside each other or two sheets of paper lying on a desk,

  • the signer is referring to some kind of structure/entity with a specific form/shape (tables, drums etc.)

  • 10  This description is not meant as a schema of how the actual processes in the brains take place, it (...)

These different interpretations are possible because the recipient tries to use the real space information to trigger a possible (and hopefully the signer’s intended) blend involving the generic space, which contains the conceptual structure shared by real space and event space. In this particular case the real space is structured by the hands standing for some kind of objects and their spatial relation to each other; the generic space serves to enable counterpart connections between the real space and the event space by holding abstract information about the elements and their relationship to both input spaces. In this specific case, the conceptual structures that in signed language communication are shared by real space and event space are limited to a schematized shape, the number of objects in question (namely two), and their orientation and relation to each other in space (being side by side with their long sides facing each other). Thus the real space information, together with a variety of different event space entities which share these conceptual structures, could create blends leading to different real space blend entities. Without knowing the implied event or its contextual anchor space, due to the lack of contextual information, the recipient may end up with several different blends between the same real space image and different event spaces, leading to different real space blend entities, as shown in figures 1-3.[10] This kind of interaction between real space, generic space, and event space leading to real space blend entities in NTS is due to the mapping types between the cognitive image in the event space and the conceptualized form of the sign in the real space. These mapping types will be explained in detail in section 3.

Figure 1. Illustration of real space blends in depicting verbs: A and A’= objects with a larger horizontal than vertical extension and with a relatively large surface; relation A to A’= The two objects are close to each other in space, side by side

  • 11  The signer has to create the blend too, in order to be able to use the sign, but for the purpose o (...)

In the figures, the generic space and the blended space are symbolized by circles, while the real space and the event space are, for practical reasons, symbolized by a picture or drawing in a box. Real space stands for the mental image of what is in the situation (not necessarily what actually is there, but the image of it). The generic space contains the common elements of the real space and the event space; the event space is the mental space of the different events the signer could be talking about; and the blended space, finally, is the mental space that the recipient has to create in order to understand and interpret the real space signing properly.[11] When watching a signer, one sees the signer moving her hands in a particular position with particular handshapes, accompanied by particular facial expressions and body movements, but this is not what the recipient reads from it. The recipient understands that the signer uses mental space mappings to visualize parts of her mental image. This prompts a mental space blend, where the conceptualization of the handshape, orientation, and movement becomes part of the iconic mental mapping between the intended event space and the real space.

Figure 2. Illustration of real space blends in depicting verbs: A and A’= objects with a larger horizontal than vertical extension and with a relatively large surface; relation A to A’= The two objects are close to each other in space, side by side

  • 12  The signer and the recipient’s situation is also part of the context information, which contribute (...)
  • 13  As Taub (2001) shows can iconicity not only be based on actual conceptualized form-features, but a (...)

As can be seen in figures 1, 2, and 3, the same real space image can – depending on the context – evoke different blends, which according to Liddell (2003) are all real space blends. In other words: signs of this type can create many different blends in different contexts, and when taken out of context they do not prompt one specific blend, but may trigger a variety of different blends. Usually this will not happen; the textual context the sign is produced in contributes to a limitation of scope, so that the recipient can make the intended mental mapping.[12] Nevertheless, this particular feature of this class of signs is responsible for the use of the notion “classifier verbs”. Produced in isolation, a depicting verb can trigger many different real space blend entities, which share only some conceptualized, schematized elements. In sign language linguistics, this has traditionally been interpreted as a form of classification. However, as shown above, this feature is primarily due to the cognitive mapping mechanism, which depends on metonymical iconicity. The entities which can be referred to by the same depicting verb in different contexts thus cover a broad range of semantic fields; the only feature they share is based on the particular mental image the metonymical iconicity is based on, like the schematized shape of an object standing for the object.[13] This is in fact a feature that is also shared by iconic gestures accompanying speech (McNeill 2000, 2005). It has, however, not led to iconic gestures being understood as classifiers.

Figure 3. Illustration of real space blends in depicting verbs: A and A’= objects with a larger horizontal than vertical extension and with a relatively large surface; relation A to A’= The two objects are close to each other in space, side by side
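The open reference of a depicting verb, which has traditionally been read as classification, can be sketched as a simple filtering step: the real space form contributes only a few schematic features (via the generic space), and any event space entity sharing those features remains a possible real space blend entity until context narrows the choice. The sketch below is a hypothetical illustration of that reasoning, not an implemented model; the feature labels and the candidate list are invented for the example.

# Hypothetical illustration of the "open reference" of a depicting verb:
# the form constrains only a few schematic features, so several event-space
# candidates remain possible until discourse context narrows them down.

# Schematic features contributed by the real-space form in figures 1-3
form_features = {"flat", "horizontal", "two entities", "side by side"}

# Invented event-space candidates with the schematic features each satisfies
candidates = {
    "two cars parked side by side":  {"flat", "horizontal", "two entities", "side by side"},
    "two sheets of paper on a desk": {"flat", "horizontal", "two entities", "side by side"},
    "hands placed on a table":       {"flat", "horizontal", "two entities", "side by side"},
    "a single vertical wall":        {"flat", "vertical", "one entity"},
}

def possible_blends(form, options):
    """Return the candidates whose schematic features include those of the form."""
    return [name for name, feats in options.items() if form <= feats]

print(possible_blends(form_features, candidates))
# Three candidates survive; only the discourse context selects among them.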

In the following section, I will show how depicting verbs depend on three different mapping mechanisms, each making use of a different kind of metonymy. These mechanisms are not only used in established depicting signs, but also serve as a construction base for new signs and for established signs like the frozen signs.

3. It is all about the mapping: different mapping mechanisms in NTS

  • 14  A similar classification was already made by Mandel for ASL in his 1977 article, though not based (...)

The same handshape, hand orientation, or movement can be part of different types of blends. Furthermore, the signer’s body parts can be part of different types of mapping between elements of the real space and the event space. Based on this, three different depicting constructions in NTS[14] can be identified:

  1. Manipulator blends

  2. Substitutor blends

  3. Descriptor blends

These sign construction types are based on different mapping types, where the sign form is used in different ways in a blend, as described in the example above. In manipulator blends, the sign is based on a blend where the hand signifies a hand handling or manipulating an object. The metonymical process involved can be described as “a hand interacting with an object stands for a whole event”, for example: “to move your hand in a manner that resembles moving an iron stands for the event of ironing something”. In substitutor blends, the overall shape and orientation of the hand, or of parts of the hand or arm, signify the object itself. The metonymical process involved can be described as “a schematized shape and orientation in space of a body part stands for an object”. An example of this was given above in figures 1-3. Finally, in descriptor blends, neither the hand nor other body parts are part of the actual mapping; they only “trace” or “draw” the outline of an object in the signing space. We can describe this as “the outline of the shape or size of an object stands for properties of the object”. An example of this is “tracing the extreme endpoints of a fish with your hands stands for a description of the fish’s size”.

Thus, the three mapping mechanisms differ in how they map conceptualized body parts onto other conceptualized entities: in manipulator blends “hands map onto hands”, where the hand is a hand handling an object; in substitutor blends “hands map onto objects”; and in descriptor blends “a structure created by the movement of the hands maps onto shapes and sizes of objects”, where the outline of the tracing is part of the outline of the object in the mapping. Furthermore, these signs differ not only in the type of mapping and the use of body parts in the blend they are involved in, but also in their pragmatic use in NTS utterances, as they are part of different event types. The following section will describe this in detail.
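As a compact summary of the three construction types, the sketch below restates the contrast in code form. It is only a summary device: the mapping and metonymy descriptions paraphrase the characterizations given above, the typical-use labels anticipate sections 3.1-3.3, and the identifier names are otherwise hypothetical.

from enum import Enum

# Summary sketch of the three depicting constructions proposed in the text.
class DepictingConstruction(Enum):
    MANIPULATOR = "hands map onto hands"
    SUBSTITUTOR = "hands map onto objects"
    DESCRIPTOR = "traced outline maps onto the shape/size of an object"

METONYMY = {
    DepictingConstruction.MANIPULATOR:
        "a hand interacting with an object stands for a whole event",
    DepictingConstruction.SUBSTITUTOR:
        "a schematized shape and orientation of a body part stands for an object",
    DepictingConstruction.DESCRIPTOR:
        "the outline of the shape or size of an object stands for properties of that object",
}

TYPICAL_USE = {
    DepictingConstruction.MANIPULATOR: "predicates describing actions carried out with something",
    DepictingConstruction.SUBSTITUTOR: "entities moving or being moved through space",
    DepictingConstruction.DESCRIPTOR: "descriptions of formal aspects (shape, size) of objects",
}

for c in DepictingConstruction:
    print(f"{c.name}: {c.value}")
    print(f"  metonymy:    {METONYMY[c]}")
    print(f"  typical use: {TYPICAL_USE[c]}")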

3.1 Handling of objects: manipulator blends and total mapping

  • 15  This “somebody” can be the signer or in case of gestures, the speaker or another person. Sometimes (...)
  • 16  To simplify the drawing, it does not contain a generic space.

In NTS, one way to refer to an object, or to the action it is involved in, is to show how it is handled by human hands: a manipulator blend. The mapping mechanism used in these signs is relatively straightforward, since the conceptualized hand is mentally mapped onto somebody’s hand[15] in the event space; see for example figure 4, “to hammer”.[16]

Figure 4. Illustration of a real space blend in manipulators: to hammer

  • 17  This kind of total mapping has been refer to as “roleshift” by a large number of sign language res (...)
  • 18  McNeill (2005) describes this also for English speakers using an example where somebody describes (...)

In this blend type, there are actually two different mapping types involved. On the one hand, the overlap between the real space hand and the event space hand is as total as it can possibly get; thus the cognitive process needed to decode the mapping seems to be relatively straightforward: the hand stands for another hand, and what the hand does is meant to resemble what the other hand does. The blend created by these signs can also incorporate other parts of the upper body of the signer, even the whole upper body.[17] Thus, the blend is based on a “total mapping”, where parts of the upper body in real space are mapped onto a corresponding body in the event space, creating a “total mapping blend”. This one-to-one mapping type is relatively widespread in NTS. There is another mapping type involved in manipulator blends as well: virtual mapping. In real space, the hand holds nothing, but in event space the hand holds a hammer, as shown in figure 4 (more about virtual mapping in section 3.3). This seems to be a widely used mapping type in all forms of visual communication, as it is also found in gesture and mime, as shown by general observation and by a recent study of spoken Norwegian (Erlenkamp et al. 2009). If a non-signer, for example, wants to show how an instrument works, he or she will often accompany the spoken Norwegian information with one or more gestures, showing what it looks like to handle the particular object without having the instrument at hand.[18]

  • 19  Since there is no generally accepted linguistic definition for the term gesture, I will use the te (...)

A situation where manipulators can be used as gestures[19] is, for example, when two or more people are repairing a car engine. Some parts of the engine may not be visible from the outside, so you have to stick your hands between the visible parts of the engine down to the point where you can reach the part that needs attention. Imagine one of the mechanics knows how to fix it but might not be able to reach the part (because of large hands), or perhaps he wants his colleague to learn how to do the trick himself. He might ask his colleague to do it for him and would then probably explain: “When you have reached past (name of an engine part), try to grab hold of (the name of the hidden engine part); it is right behind there, and then you turn it like this”. At the same time he would probably show with his hands in the air what he expects his colleague to do with the particular part of the engine. The same principle is used in NTS, though in a more conventionalized, grammaticalized, and often more complex manner. For example, different systematic modifications, like reduplication of movement for aspect marking and intensity, often occur with these signs in NTS. See for example pictures 2 and 3. In this narrative, the signer is talking about an event where her car broke down in the middle of a highly trafficked road.

Pictures 2 and 3. “Driving a car steadily”

  • 20  In the following I distinguish between the depicting verb (simply « manipulator »), and the corres (...)
  • 21  Dudis (2004) refers to this use of mouth-pictures as “onomatopoetic components.”

She shows that she is calmly driving her car (just before the car breaks down) by using a manipulator[20] that refers to the action of “holding the steering wheel”, which in this case is a signed construction based on metonymical iconicity referring to the signer driving a car. In addition, she adds a repetitive short back-and-forth movement of her hands, and her facial expression is relatively relaxed while her upper and lower lips vibrate against each other, creating a kind of “brrrrrr” vibration impression.[21] The meaning of this construction is clear from the context: “I am driving steadily in my car”.

  • 22  Even the reduplication is partly iconic. Showing that something is going on for a while often incl (...)

Although this kind of visual image is relatively iconic, it is fairly abstract and conventionalized in NTS. The repetitive back-and-forth movement of the hands is a conventionalized reduplication construction in NTS where only the forward movement is an iconic part of the meaning construction in the blend, meaning “something is going on for a while in the same direction”. The same type of reduplication movement can be found in many other signs with this kind of aspectual, durative meaning, e.g., “to walk for a relatively long time” or “to teach somebody for a relatively long time”. Nevertheless, the back movement is more than a mere transition movement, since it contributes to different types of reduplication: iterative aspectual reduplication, for example, includes a systematic back transition movement in NTS which is more like an arch, longer and not straight, in opposition to durative aspectual reduplication, where the back transition movement is short and usually straight. The backwards movement is part of the constructed and grammaticalized meaning of reduplication in NTS, but the whole reduplication movement is not iconic in a concrete sense, since the car is not driving back and forth. Other parts of the signing, like the speed of the movement, are also part of these conventionalized constructions, with higher speed of the movements usually signifying higher intensity or higher speed of the action. Since the signer in this example uses both a relaxed face and a fairly fast movement, the meaning of the example above is: I am driving steadily (but not very fast) in my car. The lip movement supports that impression by being a steady but not intense vibration, which indicates a car engine’s vibration while driving when everything is working correctly. In other words, although the hand configuration of the manipulator in this example is an important part of the utterance, it is not the only meaningful part of it. The type and speed of movement, the facial expression, and the mouth picture are equally important to the construction of the meaning of this utterance. All of these parts are to some degree iconic[22], but they are nevertheless conventionalized parts of NTS structure, and they are interdependent in the construction of the particular meaning of this utterance.

Furthermore, the combination of total mapping and virtual mapping found in manipulator blends is often used in NTS to create lexemes which have only one entity type as their referent: the frozen signs. While every manipulator blend can in principle prompt the conceptualization of different real space blend entities, as long as they share the feature the manipulator’s metonymical iconicity is based on, frozen signs are semantically much more limited, although one can often still identify the manipulator blend the signs were once derived from. In some respects, frozen signs can be compared to noun lexemes in spoken languages, often corresponding to a depicting verb counterpart in terms of their iconic motivation.

In terms of pragmatics, manipulator blends are often prompted to describe events which involve the use of instruments or human interaction with other entities. Manipulator blends prompt real space blend entities mostly as part of predicates describing the action carried out with something.

3.2 Substituting objects: substitutor blends and partial mapping

  • 23  The term object is used here in a wide sense, including, for example, humans, animals, and body parts.
  • 24  The facial expression and even the upper body can be included, but that would result in a differen (...)

A different form of mapping involves parts of the hand or the arm, or the whole hand, and in some instances even the head or parts of the face. In this type of blend the body part is mapped onto a different entity – not this particular body part, but another type of object[23]; thus the iconic resemblance between the real space entity and the event space entity will be more schematic than in blends based on total mapping. There are other differences: while the total mapping type can include different parts of the upper body without always giving a clear indication of exactly which parts of the body become part of the blend, the real space area used in a substitutor blend is always clearly defined, due to a particular mapping mechanism: partial mapping. Depending on different factors like the signer’s personal style, text genre, and context, total mappings can include different parts of the upper body. The manipulator blend example above (“to hammer”) can include the hands, arms, upper body, and facial expression when used in a blend to show how somebody is “hammering”. Another example is picture 2, with the signer driving her car. The whole upper body resembles her sitting in the car and driving. This is probably due to the text being in a narrative style. In a different discourse the same signer might have limited the total mapping to just the hands. A substitutor, in contrast, is limited to the part of the body which signifies the object, i.e., which prompts a real space blend entity. The substitutor based on a blend for “creature moving upright on two legs”, for example, involves only the straightened index finger in NTS, while the rest of the hand or the arm cannot be included in the blend.[24]

Substitutor blends are often prompted when a signer refers to animate or inanimate objects moving or being moved through some space, or interacting with each other. These signs are often combined with elaborate movements of a highly iconic character. Nevertheless, the movement does not depict the actual movement in space, nor does the handshape depict the actual form and size of the object.

  • 25  Dudis (2004) describes similar observations for American Sign Language.

While total mappings are prompted in relation to a matching scale between real space elements and event space elements, substitutor blends are usually based on real space entities being smaller than the corresponding conceptualized entities in event space would be, if present in real space.[25]

Picture 4. Substitutor blend depicting a car

One depicting verb, for example, uses the flat hand (picture 4) to refer (amongst other things) to vehicles with at least four wheels. In this substitutor blend, the conceptualized hand is mapped onto the conceptualized vehicle, while the arm – although it naturally follows the movement of the hand – is not part of the blend. In the narrative mentioned above, where the signer talks about her car breaking down, we can see the same substitutor blend prompted with different movements.

Picture 5. “Driving steadily” based on two different blends

  • 26  Her right hand is bent due to the closeness of her hand to her chest.

In picture 5 the signer shows that she is driving steadily by simultaneously prompting the above-described manipulator blend with her left hand and the substitutor blend for the vehicle with her right hand.[26] Both hands follow the same reduplication movement, and the facial expression is the same as in the manipulator utterance above (pictures 2 and 3). In the second example (pictures 6 and 7), the signer describes how her car, after the engine has broken down, is somewhat out of control and slides sideways. In order to show this, she actually prompts a blend with two substitutor elements simultaneously. Although both handshapes in the substitutor blend are the same, due to the context and the overall iconicity of the blend it becomes clear that the signer’s left hand does not signify another car, but instead the surface the car is moving on: the road.

Pictures 6 and 7. Substitutor blends depicting the car and the road

The road substitutor is an important reference point in this blend, which shows how the car moves uncontrolled sideways on the road. The signer’s right hand is used as a substitutor for the car. Her facial expression is intense in this blend and refers to her own reaction and feelings during the situation; with the mouth picture she produces an additional intensifier. In the third example (pictures 8 and 9) the signer shows how the car finally stops. The whole upper body of the signer follows the forward movement of the hand which prompts the substitutor blend for the car and stops as the hand stops, followed by a short, well-defined movement backwards, then forward, and then ending abruptly.

Pictures 8 and 9. “Car stopping” based on a substitutor blend

This parallel movement of hand and body shows how the car stops suddenly – the signer is using yet another blend type, namely a surrogate blend, simultaneously with the substitutor blend to show how the abrupt deceleration of the car forces its passenger to move along with the stopping movement. Again the signer uses a combination of different mapping types, namely partial mapping on her right hand (the substitutor blend) and total mapping on her head, shoulders, and face (the surrogate blend).

In all three uses of the substitutor referring to the car, the movement is not exactly the same as the natural movement in the actual event. This is due to two different mechanisms:

  1. the conventionalized use of movements as in the case of reduplication: Of course the car was not moving back and forth while the signer was driving steadily. The movement is not iconic with respect to the actual movement, but it is based on an iconic representation of the human experience that an action which takes place over a long period of time often involves repetition (though not always),

  2. the use of body parts as elements in the real space mapping onto elements in event space which are conceptualized as being of a different size than the corresponding elements (body parts) in real space. This leads to a difference in scale between the inputs of the blend.

As a consequence of the scale difference between real space entities and the conceptualized event space entities, some restrictions apply to blend mappings in substitutors:

  • the movement is not a one-to-one resemblance of the movement in the event space, but is stylized,

  • distances do not match distances as conceptualized in event space, but are stylized; great distances are set up by using repetitive movements in the same direction or elongated spatial paths by adding an arch to a usually straight movement.

Interestingly, substitutor mappings seem to resemble some features that one may observe in puppet shows, where movement is often exaggerated, events and movements are stylised, and the puppeteer uses repetition to mark intensity or the prolonging of an action. For example, in the TV program “The Muppet Show”, you can see the “Swedish chef”, a puppet, chasing another puppet – a hen – which is supposed to be part of the meal he is creating in the kitchen. The running movement of the puppets is exaggerated in dimension and looks almost as if they were “jumping around”, while it is actually a bit slower than an actual person/hen would run if chasing/being chased. At the same time, the chasing movement is stylized and involves repetition, showing how the “Swedish chef” chases his victim in circles, sometimes throwing things like knives – but he usually misses, and the whole event is repeated several times.

46In NTS and other signed languages, the more complex and abstract use of movement in partial mappings prompting substitutor blends may be one explanation for why iconic, speech-accompanying gestures more often seem to be based on total mapping than on partial mapping. In order for the recipient to “read” the right mapping for a blend, substitutor blends probably need more conventionalization than signs used to prompt manipulator blends or surrogate blends. Firstly, the recipient has to identify the right mapping boundaries for the blend: which body parts are actually involved and which are not. Secondly, the recipient has to know and understand the use of stylized iconic and partly grammaticalized movements of the signing body parts in real space.


47Nevertheless, partial mapping is also used in gestures accompanying spoken languages. One example of this is a description a Norwegian native speaker and non-signer gave of “making a French seam on a garment”.27 The speaker used both her hands as substitutors for two different pieces of fabric to show how one first sews the fabric pieces together along one edge, then turns the new seam over and sews again along the same edge. While she was talking, the speaker held her hands in the air and – pretending they were two pieces of fabric – showed how they were run through an (imaginary) sewing machine.

48However, substitutor blends seem to be much more complex when prompted in signed languages than in gestures accompanying spoken language; preliminary results from a study comparing the use of gestures in spoken Norwegian and signs in NTS in the Frog story support this (Erlenkamp et al. 2009).

49In terms of pragmatic meaning, substitutor blends are often prompted to show how entities move or are moved through some space while interacting with other entities.

3.3 Tracing forms of objects: descriptor blends and virtual mapping


50A third mapping type does not involve mapping onto the hands or other parts of the signer’s body in the blend at all. In this type of mapping, only the virtual lines or structures that the signer traces with her hands and fingers in the signing space become the real space part of the blend’s mapping. Thus, the mapping is neither “total” nor “partial”; it is a “virtual” mapping, since only the conceptualized path of the hands in the real space movement, or the conceptualized space surrounding the hands, is part of the blend. This mapping also differs from the other two in that it is not directly involved in the prompting of a blend that refers to an action or an event, but to structural properties or features of an object. This mapping type is the obvious choice in NTS for descriptions of the formal aspects of objects, which is why this class of signs is called “descriptors”.28 In the investigated narrative, several examples of descriptors can be observed.

Pictures 10 and 11. “A road parting in two” based on a descriptor blend

51One of them is the description of the road and a drive leading from the main road to a gas station. First, the signer uses both her hands to prompt a blend showing how the road parts in two and where the two roads lead (pictures 10 and 11).

Pictures 12 and 13. Buoy for part of the road and tracing an area

52The signer uses a virtual mapping prompting a descriptor blend of a pathway by holding her thumbs and index fingers in the signing space (as shown in pictures 10 and 11), drawing a path in the air as an iconic, though stylized, likeness of the road parting in two. Then she points out that there is some kind of empty space between the two roads (pictures 12 and 13) – probably an area of grass. This is expressed by the index finger of the dominant right hand, which traces the outline of an area in front of the signer, while the left hand is kept in its former position, marking the point where the main road parts.

Picture 14. “Where the way is leading” using another descriptor blend on top of a buoy

53Next, the signer indicates where her car is located on the main road by keeping her left hand in place as a reference point for the place on the main road in the blend where she has conceptualized herself in the car at that point in the narrative (picture 14). With her right hand she indicates where the main road would lead her if she followed it further on.

54Finally, she describes the road to the gas station by combining the substitutor depicting a car with the descriptor of the main road in a complex blend, with the left hand remaining in place as shown in pictures 15, 16, and 17.

Pictures 15, 16, and 17. Combination of a substitutor blend and a buoy based on a descriptor blend

55The hands in the descriptor blends are not part of the virtual mapping, but outline the form of the object that they refer to.

56The descriptor blends in this example serve two different functions. First, the signer uses a double-handed descriptor to describe and set up the spatial information that is necessary to understand her dilemma concerning the event she is talking about – her car breaking down in the middle of a highly trafficked road. Thus, it is important to set up a blend in which it becomes clear that she can continue on the main road, take a turn to the right leaving the main road and heading towards a gas station, or try to stop the car in the empty space between the two roads. After the “scenery” is set up through the prompting of the first descriptor blend, she keeps her left hand in place to maintain the image of the scenery she has just prompted and, secondly, uses this part of the descriptor blend as a reference point for further explanation about the ongoing event. Both these functions are often filled by descriptors, although only the first one is inherent and specific to descriptors. The second function, in which the handshape of a preceding sign serves as a reference point for further explanation, can also be filled by other types of signs, such as substitutors, manipulators, frozen signs, or even pointing. According to Liddell (2003), this use of signed constructions serves a pragmatic function, helping to keep track of referents, topics, and spatial relations. Liddell (2003) refers to signs functioning in this way as “buoys.”

57Descriptor blends – besides serving the function of setting up spatial information and prompting blends for descriptions of objects in a wide sense – can also be used as a basis for frozen signs, where the descriptor becomes lexicalized and conventionalized to refer to only one type of entity. The NTS sign HUS (house) is an example of a frozen sign originally based on a descriptor blend. It is a two-handed sign in which both hands have a flat b-handshape and trace the surface of a pointed roof and two walls in one movement.

58A frozen sign depends much less on its iconic mapping processes than a sign used to prompt a descriptor blend. A house can be referred to by the frozen sign HUS as described above, no matter what the actual house looks like. It could be a house with a flat roof; nevertheless, the sign for house would not change its form. If the signer wants to describe in detail what a particular house looks like, she can prompt a descriptor blend to do so, which would make use of the same basic principle that the sign HUS developed from: drawing an outline of the roof and walls. This raises the question whether the sign HUS and the descriptor are two different lexical items or just one. I suggest treating them as two different (lexical) items, because of their different functions in discourse and the different processes they are based on. Although they are originally based on the same mapping principle, the sign HUS is not as dependent on its iconic mapping as the descriptor construction, but is much more referential on an abstract level than the descriptor blend. The sign for house does not prompt a real space blend entity; the descriptor blend does. The descriptor blend is prompted in constructions where a scenery or setting is described, while HUS is used as a reference sign for the abstract concept of house; both signs can be – and often are – used in the same clause to refer to and describe a house. I propose that the frozen sign for house developed from a descriptor blend, but that both signs exist in NTS as part of the “vocabulary”, serving different functions in discourse: the frozen sign serves the function of abstract, “noun-like” reference, while the descriptor blend is prompted to serve in an adjectival or predicative function.

59Descriptor blends based on virtual mapping can be further sub-classified into dimensional descriptor blends, path descriptor blends and shape descriptor blends. Dimensional descriptor blends prompt a conceptualization of an object’s dimensions, usually in relation to the typical size of objects of the same kind. Path descriptor blends prompt the conceptualization of a path or parts of a larger structure, and shape descriptor blends prompt a conceptualization of the shape of an object in a Gestalt-like manner. Examples of uses of each of these subclasses of blends are descriptions of the:

  • size of a rat

  • extension and path of a road

  • specific shape of a house

60The virtual mapping type used in descriptor blends – besides its use in manipulator blends – can also be found in gesturing, just as the total and partial mapping types described above can, and examples of the subclasses can even be found in gestures.


61One example of a dimensional descriptor blend used in gesturing is so well known that it has even been used in a commercial in Norway: somebody catches a fish and afterwards explains its size to friends or family by holding his two hands apart at a distance larger than the usual size of a fish. This use of space is parallel to the use of dimensional descriptors in a blend. The speaker uses a virtual mapping to refer to the size of the fish while saying “the fish was this big”. An example of a path descriptor blend arises when a Norwegian native speaker and non-signer is asked what a spiral staircase looks like.29 She will probably explain it by prompting a path descriptor blend based on the index finger drawing a spiral line in an upward or downward movement in the air. The third subclass of virtual mappings is also used in gestures among Norwegians: if a speaker wants to describe what an object looks like with respect to its specific exterior form or surface, one can expect a gesture tracing its shape based on virtual mapping.

4. Gestures or classifiers? Similarities and differences between Norwegian gestures and depicting verbs in NTS

62As described above, all three forms of mapping in NTS depicting verbs can also be found in Norwegian hearing people’s “gestures”. This raises the question whether these signs can be regarded as signs at all, or whether they should rather be classified as gestures. The answer depends on how one defines the term “gesture” and whether one considers gestures to be part of a linguistic system or not. McNeill (2000: 1) explains a common dilemma in linguistics when it comes to gestures: “The word ‘gesture’ needs no explanation. […] But what are the gestures? […] it is useful to introduce a set of distinction to situate our question.”

63Since there is no commonly accepted definition of the concepts “gesture” and “language” in linguistics, the answer to the question above is a matter of choice. As Kendon (2000) points out, whether gestures are regarded as part of a language system or as something outside of it depends solely on how one defines “gestures” and “language”. McNeill (2000) suggests different continua to distinguish between gestures and other phenomena and describes four different phenomena as reference points on the continua: gesticulation, pantomime, emblem, and sign language. The first of McNeill’s continua is based on the criterion “presence or absence of speech.” This criterion may be useful in identifying a movement as a gesture in hearing people’s communication, but in the case of signed languages it is not useful for determining whether a movement is a gesture or a sign, since neither is expected to be accompanied by speech when used by native signers. Otherwise, McNeill’s distinction between gestures and signed languages would imply that signed languages do not have any gestures at all.

64McNeill’s second criterion is the absence or presence of linguistic properties, the claim being that gestures lack linguistic properties, while signs in a signed language show them. In my opinion, this claim needs to be scrutinized more closely with regard to depicting verbs and their relationship to gestures.

65McNeill (2001, 2005) uses an example of somebody saying “he grabs a big oak tree and bends it way back” while at the same time using a gesture comparable to a manipulator blend in NTS, where the hand of the speaker is mentally mapped onto a hand grabbing something and moving it back in an arc. McNeill (2005: 7) claims: “The bend it back gesture lacks all linguistic properties. It was nonmorphemic [sic], not realized through a system of phonological form constraints, and had no potential for syntactic combination with other gestures.” While I have argued above that depicting verbs, like McNeill’s gestures, are non-morphemic, the last claim is not true for depicting verbs: they do have a potential for syntactic combination with other gestures/signs and are in fact combined in this way in NTS discourse all the time.


66Additionally, as already described in section 3, depicting verbs can be systematically modified in NTS by, for example, reduplication, which most sign language linguists consider to be a “linguistic modification”.30 Thus, the main difference between the gesture described by McNeill and the same form used as a complex blend in NTS would be that signers do make use of linguistic properties in combination with this form, while speakers do not. The basic mapping mechanisms leading to blends, on the other hand, seem to be shared by depicting verbs and gestures.

67It is not hard to understand why signs often show a higher degree of linguistic properties than gestures: in spoken language the speech part of the communication carries the “linguistic” information, while depicting verbs in signed languages are an integrated part of the conventionalized language system. But this difference seems to be more a matter of degree than a class difference. Depicting verbs show some of the same non-linguistic properties claimed by McNeill to be typical of gestures, namely what he calls the “inapplicability of linguistic properties”, as he argues (2005: 7-8):

 “We can demonstrate the inapplicability of linguistic properties through a thought experiment. Imagine another person saying the same thing but with “it” meaning the corner of a sheet of paper. Then, rather than the hand opening into a grip, the thumb and forefinger would come together in a pinch; rather than the arm moving forward and slightly up, the pinching hand would be held slightly forward and down; and rather than pull the arm back, the pinching hand would rotate outward or inward. Also this gesture would naturally be performed with two hands, the second hand ‘holding’ the paper that is being bent back. That is, none of the form properties of the first gesture would be present in the second gesture, bends-it-back though it is.”

68The exact same description can be given for manipulator blends in NTS describing either the bending back of a tree or the bending back of the corner of a sheet of paper – except that there is no speech accompanying the signs. Thus, we can conclude that depicting verbs make use of cognitive mapping mechanisms leading to iconic blends in the same way as iconic gestures accompanying speech do. The difference from gestures lies in the depicting verbs’ ability to be an integrated part of signed language utterances by

  • combining syntactically with other signs/gestures

  • relying on language conventions for modification

  • being part of complex simultaneous constructions

69Thus I suggest placing depicting verbs together with gestures, and not with signed language signs, on McNeill’s continuum 4, which is based on semiotic properties of different phenomena such as segmentability and analytic versus synthetic forms. As argued above, the question of segmentability has not yet been answered satisfactorily for depicting verbs, probably because these signs are not segmentable in terms of spoken language segments. Instead, they can be described according to continuum 4 as global and synthetic, with a top-down semiotic where “the meaning of the ‘parts’ of the gesture are determined by the meaning of the whole” (McNeill 2005: 10). Therefore depicting verbs resemble – at least according to continuum 4 – gestures, which according to McNeill are non-segmentable (=global) and synthetic forms, in contrast to signs of a signed language, which according to McNeill are segmented and analytic forms. According to McNeill’s continua 1, 2, and 3, depicting verbs are not like gestures at all, since they require absence of speech, can carry linguistic properties and are highly conventionalized, all properties allocated to signs of signed languages by McNeill in these three continua. With regard to the criteria leading to continua 1, 2, and 3, depicting verbs thus behave like sign language signs. In terms of the four continua, then, depicting signs seem to take on properties of both gestures and signs, giving them an intermediate status as “gesture verbs”.

70How then can we distinguish between gestures and signs in a signed language? On this point I agree with McNeill (2000) and Kendon (2000): the difference between signs and gestures is more a question of what one regards as “linguistic” than a class difference. Because signs and gestures in signed languages share the same medium, we can expect that there is no clear-cut boundary between them in form or function. What Kendon (2000: 50) claims for gestures is equally true for many signs in signed languages:

“[…] in gesture, representation is achieved in ways that are different from spoken language. For example, in gesture it is possible to represent spatial relationships by means of spatially distributed displays; forms of movement can be created that can serve a variety of symbolic purposes; visible forms can be shaped that can be used as representations of concrete objects, which may serve as a means of referring to such objects either in their own right or metaphorically.”

71Nevertheless, there are differences between the gesture system of hearing Norwegians and the use of depicting verbs, as well as between depicting verbs and other signs in NTS. However, these distinctions are presumably a matter of degree and not absolute. The difference between gestures on the one hand and depicting verbs and other signs on the other is – besides the degree of grammaticalization – mainly based on the complexity in the use of the different mapping types.


72The mapping mechanisms described above are idealized types which are often combined in natural signed data. In particular, the total mapping in manipulator blends is combined with surrogate blends. In a surrogate blend, the signer’s upper body, the orientation of the head and the facial expression are used to create a blend between real space and the body of another entity – usually a human.31 In other words, the signer “pretends” to be another person. This mechanism can be used for quoted dialogue but also to “quote” another person’s actions.

73In total mappings, the surrogate and the manipulator blend seamlessly onto the same entity, since both are based on total mapping, the manipulator blend being based on the prompting of the handling of an object by a human. Substitutor blends can also combine with surrogate blends, but this creates a type of double or multiple real space blend, i.e., different parts of the body are mapped onto different entities in the event space, or at least onto different “occurrences” of the same entity in event space. Dudis (2004) calls this phenomenon “partitioning” and identifies different partitionable zones on a signer’s body: the hands, the face, and the oral movement.

74In contrast, descriptors often combine with substitutors in complex blends and not with surrogate blends. This is probably due to their pragmatic use, which resembles the pragmatic function of substitutors: to refer to entities and their forms and movements in space. In comparison, gestures accompanying spoken Norwegian utterances do not seem to show such complexity in blending structures. This is also confirmed by preliminary results of our recent study (Erlenkamp et al. 2009). Nor did we find any systematic modification of gestures of the kind regularly found in depicting verbs. Thus, although gestures and depicting verbs seem to share blending mechanisms based on metonymical iconicity, they do not share the same degree of complexity and modifiability.

75The relation between depicting signs and other signs is not in focus in this article. However, an interesting observation is that depicting verbs – in contrast to their relation to gestures – share the same types of modification with other, non-depicting signs in NTS. Although they furthermore share the possibility of combining with other elements into complex simultaneous constructions (see for example Vermeerbergen et al. 2007 for a description of different simultaneous constructions in signed languages), they do not share the iconic meaning construction based on real space blending.

76In order to describe the relationship between iconic gestures, depicting verbs and other sign classes in more accurate terms, a look at the role of salience and schematization in meaning construction based on iconicity on the one hand, and at entrenchment, conventionalization, grammaticalization, and lexicalization on the other, would be helpful for the following reasons. The comparison of different signed languages reveals that not all signed languages share exactly the same depicting verbs, which is probably due to conventionalization on the one hand and to the process of metonymical iconicity on the other, where different images can be selected to become part of the iconic representation of a complex concept. All depicting verbs seem to be based, however, on the three mapping types described above: total mapping, partial mapping, and virtual mapping; additionally, a certain degree of overlap between depicting verbs in different signed languages can be observed. It would be interesting to study how much of this is due to historical convention in related signed languages and/or to ontological salience as a factor in the selection process for the iconic representation. In order to answer this question, a comparative study of depicting verbs in different signed languages would be needed, as well as a comparison of depicting verbs in a particular signed language with the iconic gestures used in combination with the surrounding majority’s spoken language.


77Another important issue worth further investigation is the question of entrenchment with regard to depicting verbs, in comparison to the entrenchment of gestures and frozen signs. It has been claimed by Sutton-Spence & Woll (1999: 164) that “productive signs”32 in British Sign Language are “not part of the BSL lexicon, but are made up by the signer on the spot”. Although the three mapping principles found in depicting verbs are used to create ad hoc signs when signers lack established signs for concepts, depicting verbs as types are rather frequently used in all kinds of signed texts. And, as Schmid (2007: 118) points out:

“It is fairly unlikely, however, that speech processing is always carried out in a creative, generative fashion in the sense that users always have to actively, or even consciously, search their memories for means of encoding what they have in mind or decoding what they hear and read.”

78If we assume that this is true for signed language communication, it can also be assumed that depicting verbs, or at least the iconic processes they are based on, are to some degree entrenched. Otherwise it could be argued – as already proposed in the early stages of signed language linguistics – that the use of iconicity in signed language is due to the youth of signed languages and would probably disappear over time, as grammaticalization takes over (e.g., Frishberg 1975). Since we still lack studies on entrenchment in signed languages, this needs further investigation, as does the question of how grammaticalized and lexicalized depicting verbs are in comparison to both gestures and frozen signs. There is no doubt that depicting verbs occur in complex constructions, including complex blends and systematic modifications like aspectual markings. They also seem to carry a different function in discourse compared to frozen signs. While depicting verbs syntactically function as verbs, many of the frozen signs function as their noun-like counterparts. There are, however, many unsolved issues left, such as the relation of depicting verbs, their parts and/or their mapping principles to the lexicon of a given signed language. Questions like these have direct practical consequences, for example for the integration of depicting verbs into sign language dictionaries. Thus, further investigation of entrenchment, conventionalization, grammaticalization and lexicalization is needed and – as argued in this article – would benefit from a cognitive linguistic approach.

5. Conclusion

79In Norwegian Sign Language communication we find both common Norwegian gestures and highly lexicalized signs specific to NTS. In addition, three different sign classes labelled here as “depicting verbs” occupy a middle position between gestures and signs. These signs are based on the same iconic mapping principles as gestures, but they also make use of the complex signed language system of modifications in movement and space. At the same time, they rely on their iconic potential, making it difficult to apply a morpheme analysis.

80This first analysis of these signs in NTS shows that three different mapping principles are used in NTS, leading to several blend types that create different types of gesture-like signs: the manipulators, the substitutors, and the descriptors. In sign language research, depicting verbs have often been analyzed as classifier verbs due to their claimed open reference to classes of entity types rather than fixed references. As shown in this article, what has been described as reference to classes of entity types can, in terms of conceptual blending, be described as iconic blending based on three different iconic mapping mechanisms. Since the mapping is based on metonymical iconicity, which uses parts of a formal resemblance of the schematized conceptualization of objects or of an action the object is involved in, depicting verbs can refer to all types of entities which share exactly the same schematized, conceptualized physical property that the blend is based on. In my opinion, this is not due to the signs being classificatory, but due to the cognitive mapping mechanism depicting verbs are based on. The frozen signs are a good example of signs that have become independent of this mapping mechanism and rely solely on their lexicalized meaning. In addition, manipulator blends do not refer to any entity or class of entities directly; they prompt the blend for the action carried out by them. Thus, manipulators could not even semantically be classifiers for objects, since they “classify” handling schemata.

81In conclusion, the meaning construction of depicting verbs is more reminiscent of the meaning construction of gestures than of lexicalized meaning as found in spoken languages or frozen signs. At the same time, the use of these signs in NTS discourse is as complex and can be as grammaticalized as the use of lexicalized signs in NTS, placing depicting verbs in an intermediate position between gestures and verbs.


Bibliography

Aikhenvald, A. Y. 2000. Classifiers. A typology of noun categorization devices. Oxford: Oxford University Press.

Aikhenvald, A. Y. 2003. Commentary: classifiers in spoken and in signed languages: how to know more. In K. Emmorey (ed.), Perspectives on classifier constructions in sign language. Mahwah, NJ: Erlbaum: 87-90.

Allan, K. 1977. Classifiers. Language 53: 285-311.

Anderson, S.R. 2005. A-Morphous Morphology. Encyclopedia of Language and Linguistics, vol. 1, second edition. Amsterdam: Elsevier, 198-203.

Barron, R. 1982. Das Phänomen klassifikatorische Verben. In H.-J. Seiler and C. Lehmann (eds.), Apprehension. Das sprachliche Erfassen von Gegenständen, Teil I: Bereich und Ordnung der Phänomene. Tübingen: Narr, 133-146.

Brennan, M., Colville, M.D., Lawson, L.K. and G. Hughes 1984. Words in hand. A structural analysis of the signs of British Sign Language, Edinburgh: Moray House Coll. of Education.

Bybee, J. 2001. Phonology and language use. Cambridge: Cambridge University Press.

Craig, C. 1986. Noun classes and categorization. Amsterdam: Benjamins.

Craig, C. 1992. Classifiers in a functional perspective. In M. Fortescue, P. Harder and L. Kristoffersen (eds.), Layered structure and reference in a functional perspective. Amsterdam: Benjamins, 277-301.

Croft, W. 2001. Radical Construction Grammar. Oxford: Oxford University Press.

Dudis, P.G. 2004. Body partitioning and real-space blends. Cognitive Linguistics 15: 223-238.

Emmorey, K. (ed.), 2003. Perspectives on classifier constructions in sign language. Mahwah, NJ: Erlbaum.

Engberg-Pedersen, E. 1993. Space in Danish Sign Language. The semantics and morphosyntax of the use of space in a visual language. Hamburg: Signum.

Erlenkamp, S. 2000. Syntaktische Kategorien und lexikalischen Klassen. Typologische Aspekte der Deutschen Gebärdensprache. München: Lincom Europa.

Erlenkamp, S., Halvorsen, R.P., and E. Raanes 2009. What’s the point? The cognitive dimension of pointing in signed language communication, tactile communication and spoken language communication. Presentation at the Conference on pointing in signs and gesture, 4th-5th June 2009, Lille, France.

Erlenkamp, S. in prep. Meaning at hand. Mechanisms of grammatical construction in signed Language.

Fauconnier, G. 1985. Mental spaces: Aspects of meaning construction in natural language. Cambridge: MIT Press.

Fauconnier, G. 1997. Mappings in thought and language. New York: Cambridge University Press.

Fauconnier, G. and M. Turner 2002. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books.

Frishberg, N. 1975. Arbitrariness and iconicity: Historical change in American Sign Language. Language 51: 676-710.

Goldberg, A.E. 1995. Constructions: A construction grammar approach to argument structure. Chicago: University of Chicago Press.

Grinevald, C. 2003. Classifier systems in the context of a typology of nominal classification. In K. Emmorey (ed.), Perspectives on classifier constructions in sign languages. Mahwah, NJ: Erlbaum, 91-109.

Johnston, T. and A. Schembri 2007. Australian Sign Language. An introduction to sign language linguistics. Cambridge: Cambridge University Press.

Kendon, A. 2000. Language and gesture: unity or duality? In D. McNeill (ed.), Language and gesture. Cambridge: Cambridge University Press, 47-63.

Kövecses, Z., and G. Radden 1998. Metonymy: Developing a cognitive linguistic view. Cognitive Linguistics 9: 37-77.

Lakoff, G. and M. Johnson 1980. Metaphors We Live By. Chicago/London: The University of Chicago Press.

Langacker, R.W. 1987. Foundations of cognitive grammar: Vol.I: Theoretical prerequisites. Stanford, CA: Stanford University Press.

Langacker, R.W. 1991. Foundations of cognitive grammar. Vol. II: Descriptive application. Stanford, CA: Stanford University Press.

Langacker, R.W. 2000. Grammar and conceptualization. Berlin, New York: Mouton de Gruyter.

Liddell, S.K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.

Mandel, M.A. 1977. Iconic devices in American Sign Language. In L.A. Friedman (ed.), On the other hand: New perspectives on American Sign Language. New York: Academic Press.

McDonald, B.H. 1983. Productive and frozen lexicon in ASL: An old problem revisited. In W.C. Stokoe and V. Volterra (eds.), SLR ’83: Proceedings of the Third International Symposium on Sign Language Research. Rome/Silver Spring: Linstok Press, 254-259.

McNeill, D. 2000. Introduction. In D. McNeill (ed.), Language and gesture. Cambridge: Cambridge University Press. 1-10.

McNeill, D. 2005. Gesture & Thought. Chicago/London: The University of Chicago Press.

Mayer, M. 1980. Frog, where are you? New York: Puffin.

Radden, G. and Z. Kövecses 1999. Towards a theory of metonymy. In K.-U. Panther and G. Radden (eds.), Metonymy in Language and Thought. Amsterdam: John Benjamins, 17-59.

Schembri, A. 2003. Rethinking "classifiers" in signed languages. In K. Emmorey (ed.), Perspectives on classifier constructions in sign language. Mahwah, NJ: Erlbaum, 3-34.

Schmid, H.-J. 2007. Entrenchment, salience and basic levels. In D. Geeraerts and H. Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics. Oxford: Oxford University Press, 117-138.

Supalla, T. 1982. Structure and acquisition of verbs of motion and location in ASL. San Diego: University of California, unpublished dissertation.

Supalla, T. 1986. The classifier system in American Sign Language. In C. Craig (ed.), Noun classes and categorization. Amsterdam: Benjamins, 181-214.

Sutton-Spence, R. and B. Woll 1999. The Linguistics of British Sign Language. Cambridge: Cambridge University Press.

Taub, S.F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press.

Vermeerbergen, M., Leeson, L. and O. Crasborn (eds.), 2007. Simultaneity in signed languages: Form and function. Amsterdam: John Benjamins, 257-282.

Vogt-Svendsen, M. 2001. A comparison of mouth gestures and mouthings in Norwegian Sign Language (NSL). In P. Boyes Braem and R. Sutton-Spence (eds.), The Hands are the head of the mouth: The mouth as articulator in sign language. Hamburg: Signum, 9-39.

Wallin, L. 1994. Polysynthetic signs in Swedish Sign Language. English summary of Polysyntetiska tecken i svenska teckenspråket. Stockholm: Stockholms Universitet.

Wilcox, S. 2004. Cognitive iconicity: Conceptual spaces, meaning, and gesture in signed languages. Cognitive Linguistics, 15: 119-147.

Wilcox, S., Wilcox, P.P. and M.J. Jarquez, 2004. Mappings in conceptual space: Metonymy, metaphor, and iconicity in two signed languages. Linguistics (Jezikoslovlje), 4.1: 139-156.


Appendix


Notes

1  Other terms used for these signs are “polysynthetic signs” (Wallin 1994) parallel to polysynthetic verbs in spoken languages, “proform” (e.g. Engberg-Pedersen 1993) and “depicting verbs” (Liddell 2003).

2  A space is grounded when “its elements are conceptualized as existing in the immediate environment” (Liddell 2003: 82).

3  In highly lexicalized signs like BIL, this is probably not what happens when the sign is used in NTS communication today. One can observe, for example, the use of highly lexicalized signs like BIL or HUS (house) in their fixed form, independent of the form the actual steering wheel or house roof referred to might have. Prototypical shapes of cars and houses seem, however, to have been part of the creation of these signs when they were first used in NTS.

4  In the instance of the sign for wall, window etc., it is a vertically oriented, large, flat surface.

5  The term symbol is used here without reference to Peirce’s semiotic categories, but refers to a form that represents something else by convention, resemblance, or association.

6  These signs can be regarded as a minimal pair since they differ in only one visual-formal feature of a sign configuration and do refer to different entities, thus having different meanings.

7  One of the first sign language descriptions by Stokoe (1960) focused on the fact that American Sign Language had a double structure.

8  Real space and reality do not necessarily match. Optical illusions for example make use of this phenomenon by using visual mechanisms to evoke an image in the real space of the recipient that does not match the actual facts.

9  These are only examples; the recipient may come up with numerous other interpretations as well.

10  This description is not meant as a schema of how the actual processes in the brain take place; it is solely meant as a model for meaning construction.

11  The signer has to create the blend too, in order to be able to use the sign, but for the purpose of this article the focus will be on the recipient’s perspective.

12  The signer and the recipient’s situation is also part of the context information, which contributes to the understanding of these signs; shared background information does as well. Signs are only transparent if the recipient has the necessary background knowledge to understand the resemblance between sign form and the conceptualized object form.

13  As Taub (2001) shows, iconicity can not only be based on actual conceptualized form-features but also on metaphorical features; i.e., a sign that refers to an abstract entity can be based on conceptualized iconic form-features of a concrete entity whose referent (the abstract entity) is connected to it by metaphorical processes.

14  A similar classification was already made by Mandel for ASL in his 1977 article, though not based on mapping types. These three constructions are, however, not the only signed constructions in NTS which are based on the basic mapping principles. Constructions like the so-called “roleshift”, the use of perspective and even parts of the construction of so-called “agreement verbs” are based on the same basic mapping principles (Erlenkamp in prep.). For the purpose of this article I will focus on signed constructions which have earlier been identified as “classifier verbs”.

15  This “somebody” can be the signer or, in the case of gestures, the speaker or another person. Sometimes the construction refers to an unspecified person.

16  To simplify the drawing, it does not contain a generic space.

17  This kind of total mapping has been referred to as “roleshift” by a large number of sign language researchers and as “surrogate blend” by Liddell (2003). Surrogate blends and manipulators are, however, not the same phenomenon (for a discussion of this see Erlenkamp in prep.).

18  McNeill (2005) describes this also for English speakers using an example where somebody describes bending something back using an accompanying gesture based on the same mapping mechanisms as described here.

19  Since there is no generally accepted linguistic definition for the term gesture, I will use the term for now as understood intuitively by laymen before discussing it more specifically in section 4.

20  In the following I distinguish between the depicting verb (simply « manipulator »), and the corresponding blend type (« manipulator blend »).

21  Dudis (2004) refers to this use of mouth-pictures as “onomatopoetic components.”

22  Even the reduplication is partly iconic. Showing that something is going on for a while often implies that parts of the action are repeated over and over again, as for example in “to comb hair for a relatively long time” or “to iron clothes for a long period of time”.

23  The term object is used here in a wide sense, including for example humans, animals and body parts.

24  The facial expression and even the upper body can be included, but that would result in a different kind of blend where different mapping mechanisms are involved. More about this phenomenon in section 4.

25  Dudis (2004) describes similar observations for American Sign Language.

26  Her right hand is bent due to the closeness of her hand to her chest.

27  A French seam is a particular and old type of seam often used in historical garments.

28  I thank Guri Amundsen for suggesting this term to me.

29  The Norwegian word for spiral staircase “vindeltrapp” does not directly involve a part that refers transparently to a spiral.

30  McNeill (2000: 4) points that out himself and concludes: “It [the use of gestures in signed languages; S.E.] reveals that ‘gesture’ has the potential to take on the traits of a linguistic system […] the conclusion is that nothing about the visual-manual modality per se is incompatible with the presence of linguistic properties.”

31  In some registers, like fairy tales or narratives, a signer can also use this mechanism to refer to inanimate objects, like a football, bread, an airplane etc., but in this case the inanimate object becomes an animate-like part of the narrative.

32  A term used to describe signs based on iconic mapping principles, like depicting verbs.



How to cite this article

Electronic reference

Sonja Erlenkamp, « Gesture verbs », CogniTextes [Online], Volume 3 | 2009, online since 17 March 2010. URL: http://journals.openedition.org/cognitextes/250; DOI: https://doi.org/10.4000/cognitextes.250


Author

Sonja Erlenkamp

University College of Sør-Trøndelag, Department of teacher and interpreter education, 7004 Trondheim, Norway


Copyright

CC-BY-NC-ND-4.0

The text only may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported files) are “All rights reserved”, unless otherwise stated.
