Sign Language Recognition
61 papers with code • 9 benchmarks • 19 datasets
Sign Language Recognition is a computer vision and natural language processing task that involves automatically recognizing and translating sign language gestures into written or spoken language. The goal of sign language recognition is to develop algorithms that can understand and interpret sign language, enabling people who use sign language as their primary mode of communication to communicate more easily with non-signers.
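To make the recognize-then-translate idea concrete, here is a toy sketch (not any paper's actual method): each video is reduced to one feature vector, classified to a gloss by nearest centroid, and the gloss is mapped to English text. The centroid values and gloss vocabulary are made-up example data.

```python
import math

# Toy word-level sign recognition pipeline (illustrative assumptions only):
# video frames -> averaged feature -> nearest-centroid gloss -> English text.
GLOSS_CENTROIDS = {
    "HELLO": [0.9, 0.1],
    "THANKS": [0.2, 0.8],
}
GLOSS_TO_TEXT = {"HELLO": "hello", "THANKS": "thank you"}

def video_to_feature(frames):
    """Average per-frame feature vectors into one descriptor."""
    n, dims = len(frames), len(frames[0])
    return [sum(f[d] for f in frames) / n for d in range(dims)]

def classify_gloss(feature):
    """Pick the gloss whose centroid is nearest in Euclidean distance."""
    return min(GLOSS_CENTROIDS,
               key=lambda g: math.dist(feature, GLOSS_CENTROIDS[g]))

def recognize(frames):
    """Full pipeline: frames -> gloss -> written word."""
    return GLOSS_TO_TEXT[classify_gloss(video_to_feature(frames))]

frames = [[0.85, 0.15], [0.95, 0.05]]  # two frames of 2-D toy features
print(recognize(frames))  # -> hello
```

Real systems replace the averaged feature with a learned spatiotemporal representation and the nearest-centroid step with a trained classifier, but the interface is the same.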
( Image credit: Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison )
Benchmarks
Most implemented papers
Learning to Estimate 3D Hand Pose from Single RGB Images
Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images.
BlazePose: On-device Real-time Body Pose tracking
We present BlazePose, a lightweight convolutional neural network architecture for human pose estimation that is tailored for real-time inference on mobile devices.
Skeleton Aware Multi-modal Sign Language Recognition
Sign language is commonly used by deaf or speech impaired people to communicate but requires significant effort to master.
A Simple Multi-Modality Transfer Learning Baseline for Sign Language Translation
Concretely, we pretrain the sign-to-gloss visual network on the general domain of human actions and the within-domain of a sign-to-gloss dataset, and pretrain the gloss-to-text translation network on the general domain of a multilingual corpus and the within-domain of a gloss-to-text corpus.
SubUNets: End-To-End Hand Shape and Continuous Sign Language Recognition
We propose a novel deep learning approach to solve simultaneous alignment and recognition problems (referred to as "Sequence-to-sequence" learning).
Fingerspelling recognition in the wild with iterative visual attention
In this paper we focus on recognition of fingerspelling sequences in American Sign Language (ASL) videos collected in the wild, mainly from YouTube and Deaf social media.
Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison
Based on this new large-scale dataset, we are able to experiment with several deep learning methods for word-level sign recognition and evaluate their performances in large scale scenarios.
TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation
Sign language translation (SLT) aims to interpret sign video sequences into text-based natural language sentences.
Context Matters: Self-Attention for Sign Language Recognition
For that reason, we apply attention to synchronize and help capture entangled dependencies between the different sign language components.
Visual Alignment Constraint for Continuous Sign Language Recognition
Specifically, the proposed VAC comprises two auxiliary losses: one focuses on visual features only, and the other enforces prediction alignment between the feature extractor and the alignment module.
Sign language recognition using image based hand gesture recognition techniques
Recognition of Indian Sign Language (ISL) Using Deep Learning Model
Sakshi Sharma and Sukhwinder Singh
Wireless Personal Communications, volume 123, pages 671–692 (2022)
An efficient sign language recognition system (SLRS) can recognize sign language gestures and thereby ease communication between the signer and non-signer communities. In this paper, a computer-vision-based SLRS using a deep learning technique is proposed. The study makes three primary contributions. First, a large dataset of Indian Sign Language (ISL) was created with 65 different users in an uncontrolled environment. Second, the intra-class variance of the dataset was increased through augmentation to improve the generalization ability of the proposed work: three additional copies of each training image were generated using three different affine transformations. Third, a novel and robust Convolutional Neural Network (CNN) model was proposed for the feature extraction and classification of ISL gestures. The method was evaluated on the self-collected ISL dataset and on a publicly available ASL dataset, three datasets in total, achieving accuracies of 92.43%, 88.01%, and 99.52%. Its efficiency was also evaluated in terms of precision, recall, F-score, and the time consumed by the system. The results indicate that the proposed method performs encouragingly compared with existing work.
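The augmentation step, generating three affine-transformed copies per training image, can be sketched as follows. The paper does not specify which three transforms were used; the small rotation, translation, and shear below are assumptions for illustration, implemented in pure Python with inverse-mapped nearest-neighbour sampling.

```python
import math

def affine_warp(img, a, b, c, d, tx, ty):
    """Warp a 2-D grayscale image (list of rows) through the affine map
    [[a, b, tx], [c, d, ty]] by inverse-mapping each output pixel and
    sampling the nearest source pixel; out-of-range pixels stay 0."""
    h, w = len(img), len(img[0])
    det = a * d - b * c
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # invert [x', y'] = A [x, y] + t to find the source pixel
            sx = (d * (x - tx) - b * (y - ty)) / det
            sy = (-c * (x - tx) + a * (y - ty)) / det
            ix, iy = round(sx), round(sy)
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = img[iy][ix]
    return out

def augment(img):
    """Three affine copies per image (example transforms, not the paper's)."""
    t = math.radians(10)
    return [
        affine_warp(img, math.cos(t), -math.sin(t),
                    math.sin(t), math.cos(t), 0, 0),  # rotate 10 degrees
        affine_warp(img, 1, 0, 0, 1, 2, 1),           # translate by (2, 1)
        affine_warp(img, 1, 0.2, 0, 1, 0, 0),         # horizontal shear
    ]

img = [[(x + y) % 256 for x in range(8)] for y in range(8)]  # toy 8x8 image
print(len(augment(img)))  # -> 3
```

In practice this would be done with a library such as OpenCV or torchvision, but the coordinate arithmetic is the same.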
Availability of Data and Material
The authors declare that no data or material was obtained improperly; a publicly available dataset was used for implementation. The dataset generated in this study is available from the corresponding author on request only, after copyright is reserved.
The authors declare that no funding was received for this research work.
Authors and Affiliations
ECE Department, Punjab Engineering College (Deemed to be University), Chandigarh, India
Sakshi Sharma & Sukhwinder Singh
Correspondence to Sakshi Sharma.
Conflict of Interest
The authors declare that they have no conflict of interest.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article.
Sharma, S., Singh, S. Recognition of Indian Sign Language (ISL) Using Deep Learning Model. Wireless Pers Commun 123, 671–692 (2022). https://doi.org/10.1007/s11277-021-09152-1
Accepted: 16 September 2021
Published: 28 September 2021
Issue Date: March 2022
- Indian sign language
- Sign language recognition system
- Gesture recognition
- Convolutional neural network
- Deep learning
- Data augmentation
Sign Language Research Paper Sample
Type of paper: Research Paper
Topic: United States , Body Language , Linguistics , Speech , America , Sign Language , Rhetoric , Deaf
In everyday life, in trains, buses, and on the streets, we often encounter deaf people. They tend to attract public attention: people strenuously pretend to have no interest, yet secretly watch them. What do people with communication disabilities feel at such moments? Is it difficult? What is it like to live a life without hearing a single word, with one's life confined to an inner world? A deaf person perceives the surrounding world differently. For example, if parents teach a deaf child to talk, the child must memorize the vibrations of different sounds without being able to hear them. Deaf children have to learn to pronounce sounds at an age when they want to play and have fun like any other child; instead, they spend several hours daily in training. This is very hard for a child, but parents usually require it, and children try hard to please them and make them proud. They must also work to understand what others say to them by reading lips alone, which is even more difficult with strangers. Unfortunately, this is the reality for many people around us. Although we rarely think about it, the percentage of deaf people in most countries is quite high, so this problem remains highly relevant today. Moreover, many educated people now know sign language. For example, the Queen of Sweden enjoys a special reputation among deaf Swedes: a highly educated and intelligent person who knows several foreign languages, she understands and supports the need for sign language. In general, the study of sign languages has recently become popular among people without hearing problems. A 2002 survey of foreign-language course enrollment in U.S. colleges and universities showed a huge increase in the popularity of sign language courses (up to 61,000 students) compared with 1998.
History of the development of sign language
Sign language is a special kind of speech that allows people to denote whole words and letters of the alphabet with certain gestures. It is used to exchange information both by people with impaired hearing or vocal cords and by people without such impairments. Accordingly, one can distinguish: (a) sign languages for people with no speech defects (for example, in some Australian tribes widows use only sign language for a year after the death of their husbands); and (b) sign languages for people with disorders of the organs of speech or hearing (by available data, such people constitute 0.4 to 1.5% of the population). In its capabilities sign language is not inferior to spoken language, although socially it has lower status. According to the hypothesis of the sign-language origin of speech, first proposed by the American researcher G. Hughes in 1973, human spoken speech was preceded by sign language, which began to arise spontaneously about 3 million years BC. Sign language then began to be supplemented by roughly 20 to 40 sounds (Neville & Lawson, 1987). By about 100 thousand years BC, speech sounds had ousted sign language, and the intensive development of spoken speech began only within the last 100 to 40 thousand years BC. The first confirmation of this hypothesis is that the sign languages of apes (chimpanzees use about 200 gestures and gorillas about 1,000) and of children in the pre-speech, sensorimotor period coincide. The second confirmation was obtained experimentally: when a monkey was taught human sign language (in the way a deaf-mute person is taught), it absorbed about 500 words and reached the developmental level of a five-year-old child. The monkey communicated in this language with people and with other apes. It could even make generalizations (e.g., that lemons and mandarins are citrus fruits) and swear: knowing the word "junk," the monkey scolded a person who had angered it, "You junk person."
Over the past few decades, the ever-growing literature that explores and describes deaf sign language has added new assumptions and hypotheses about the origin and components of language (Armstrong, 2002). The idea expressed by William Stokoe was that a sign system could really be a language. Stokoe was a linguist who worked in the 1950s at Gallaudet University and described the system now known as American Sign Language (ASL). The system is based on the linguistic principle of contrast at the lexical level, below the gesture or word, thus suggesting that sign language may have an identifiable phonological level, which greatly expands the range of linguistic studies of these phenomena. Stokoe's work also supported numerous attempts to describe the stages of human languages over the evolutionary history of the species, research that had received uncertain glory. The theory of the evolution of human language was proposed as a result of the Darwinian theory of the origin of species, but was not widely accepted owing to insufficient evidence. The area was so controversial that in 1866 the Linguistic Society of Paris banned discussion of the theory at its meetings. Stokoe and other scientists in anthropology and linguistics returned to this theory during the 1960s and 1970s and laid a more scientific basis for it (2005). Starting from Stokoe's hypothesis, a huge amount of research literature was released describing attempts to define the degree of comparison possible between sign and spoken language, and the role each modality has played in the evolution of language. Stokoe held that the significant differences between signed and spoken language stemmed from differences in the ability of the organs of sight and hearing to perceive information.
Armstrong noted that deaf people experience sensory richness in a visual environment, which allows sign language, in Peirce's terms, to use constructions that are to some extent impossible in speech. For many hearing people unfamiliar with deaf sign language, it is hard to imagine that, through a fundamentally different mode of transmission, it achieves the same communicative effect as the spoken word. For centuries, people regarded sign language as mere gesturing that did not correspond to speech. Armstrong (2002) quotes Edward Tylor, an influential British anthropologist of the 19th century, as saying that sign language in no way matches the writing of words and speech. One reason given is that sign language has a limited ability to express abstract concepts. Others noted the tendency of known sign languages to be mimic or iconic. In the 1950s, psychologists argued that sign language was more illustrative but lost in symbolism, precision, subtlety, and flexibility, and such statements are still voiced today. In most countries one can find cases where minority languages were publicly humiliated for political reasons. The idea of the equality of the various kinds of languages took root in the early 20th century. Armstrong (2002) reminds us of a 1921 classic on spoken language in which Sapir said that the simplest South African bushman employs the forms of a rich symbolic system that is, in essence, comparable with refined French (Corker & French, 1999). However, Armstrong points out that by 1950 there had still been no steps toward the recognition of deaf sign language; during this time, deaf sign language stood between recognition and denial of its legitimate existence. For example, the first schools for the deaf that used sign language in teaching arose during the European Enlightenment of the 18th century.
These schools seem to have succeeded in creating an atmosphere of tolerance, at least in France and the United States, during the first half of the 19th century. The origins of American Sign Language begin with the school for the deaf in Paris, founded by the Abbé de L'Eppée in 1755. During the second half of the 19th century, the pure oral method came to dominate the teaching of deaf children in the West. Supporters of the oral method considered sign language a primitive form of communication and struggled to teach the deaf to speak and perceive speech, even under duress; it was believed to be the only way the deaf could adapt to society and participate in all aspects of life, and sign language therefore had to be eliminated. The study conducted by William Stokoe tended to look for similarities between gestures and speech (2005). Stokoe focused first on finding elements in American Sign Language similar to phonemes in speech; his goal was to devise a notation for sign language analogous to a phonetic one. To do so he had to overcome the linguistic prejudice of many teachers and psychologists working with the deaf. Stokoe felt that the study of sign language should follow the same principles used in the study of speech, including the identification of a lexical structure and of contrasts at the lexical level. He also realized that there is an important difference between sign and spoken language that other researchers sometimes forgot or ignored: the problem of simultaneously displayed gesture components. He described these elements as visual analogs of phonemes. Stokoe identified three parameters of a gesture: (1) the location where the gesture is made; (2) the direction of movement of the arm or hand showing the gesture; and (3) the action or movement of the hand (2005).
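Stokoe's three parameters can be modeled as a small data structure: two gestures form different signs if they contrast in any one parameter. The field names and the example gesture below are illustrative assumptions, not standard ASL notation.

```python
from dataclasses import dataclass

# Minimal data model of Stokoe's three gesture parameters.
@dataclass(frozen=True)
class Gesture:
    location: str    # where the gesture is made (e.g. "chin", "chest")
    direction: str   # direction of movement of the arm or hand
    action: str      # action or movement of the hand

def same_sign(a: Gesture, b: Gesture) -> bool:
    """Gestures contrast (form different signs) if any parameter differs;
    a frozen dataclass compares all three fields for us."""
    return a == b

thank_you = Gesture("chin", "outward", "flat-hand")  # made-up example sign
print(same_sign(thank_you, Gesture("chest", "outward", "flat-hand")))  # -> False
```

Changing only the location field yields a contrasting gesture, which mirrors the lexical-level contrast Stokoe described.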
Other aspects of the language, such as facial expression and signals not carried out by the hands, are now recognized as part of the language. They are not as easy to describe with this system, but the system has stood the test of time and can still be used in the description of gestures. Stokoe's success lay in winning acceptance of American Sign Language as natural human speech, a great achievement in linguistic research. Today a large number of linguists study American Sign Language and other sign languages, and there is a body of literature describing ASL that can rival the literature on spoken languages (Armstrong, 2002). However, these shifts came with difficulty: plenty of linguists, and even some deaf people, defied Stokoe and tried to deny the linguistic status of the sign language system.
Gestures and the Sign Language
Among deaf people there are several forms of speech by which they communicate with each other: spoken language (verbal and gestural), dactyl (fingerspelling), and writing. In informal settings deaf people usually turn to sign language, gestural speech, their "native" language. That is why educated deaf people in all countries are effectively bilingual: in casual communication they use their native sign language, while in official talks, worship, lectures, the educational process, and so on, they use a combination of gestures and fingerspelling (for function words and morphemes that have no analogues in their native language). Gestural communication plays an important role in all human communication. Throughout its evolution, humankind used the gestural channel, understanding and appreciating the emotional state of fellow tribesmen from the spontaneous movements of their bodies, legs, and especially hands. The ability to communicate with gestures is inherent in people from birth; precisely because of this, people can, if necessary, communicate by signs without preliminary training, though in that case the meaning of their dialogue will be very limited. In all cultures, alongside words, there is a specific set of gestures (kinesic expressions) that are universally understood and applied independently of the sound of speech. This fact gives deaf people a real opportunity to make contact with other members of their culture. Although the gestural vocabulary of a deaf child who has no contact with deaf adults usually consists of a small repertoire of "home" signs, such children can communicate with others at an elementary level. The principle of this kind of communication is similar to pantomime: when a deaf-mute child is hungry, he can notify his parents by imitating the process of chewing or raising a hand to his mouth; when he wants something, he points with the index finger; when he agrees, he nods, and when he denies, he shakes his head from side to side.
The American researcher Goldin-Meadow showed that a child's home sign language is functionally similar to children's language as such (2005). It is structured on several levels and has a vocabulary, syntax, and morphology. Deaf American children between the ages of 1.4 and 5.9 years were able to transmit information about current, past, and future events, to manipulate the world around them, and to comment on objects and people (including themselves) as capably as children who knew a conventional sign language. In the recent past, many societies developed and used sign languages extensively: in ritual silence, in communication at a distance, when silence had to be kept on a hunt, and in similar situations. Naturally, in societies where hearing people had sign languages, the few deaf people used them, creatively enriching them. However, these advanced and functionally rich variants proved lexically short-lived, and their use did not go beyond a narrow circle of friends of the deaf (Bowden et al., 2004). Numerous deaf communities capable of supporting a functionally rich language and transmitting it to new members are a late phenomenon that arises with high population density in urban areas. In Europe, with the increasing mobility of the population in modern times, common, so-called national languages began to develop over large areas within entire states. In a sense, a parallel process occurred with sign languages. The most important impetus for their development and distribution across entire nations was the emergence, in the late 18th century, of special educational centers for children with hearing impairments: in France, led by the Abbé Charles Michel de L'Eppée; in Germany, under the leadership of Samuel Heinicke.
The successes of the French and German schools led to a proliferation of similar institutions in other countries, which borrowed either only the ideas of deaf education (as happened in England) or the entire procedure, including the sign language (Emmorey, Kosslyn & Bellugi, 1993). The first such school in Russia was opened in 1806 in Pavlovsk; in the United States, in 1817 in Hartford, Connecticut; both worked by the French method. As a result, the sign language of America is related to Russian Sign Language (through French) but is unrelated to the British sign languages (of which there are several). The deaf sign languages themselves received their first proper names: the native language of the deaf in the U.S. is called Amslan (short for AMerican Sign LANguage), and the gestural form of standard English is commonly referred to as Siglish (from SIGned EngLISH). Soon the structure of sign languages began to be studied in many scientific centers of America, Western Europe, and the rest of the world. By Stokoe's assessment, Amslan was an exotic language, in some respects as far from American English as the Papuan languages. In any case, sign languages vary geographically. For example, within one small German state there are several dialects, and despite the single spoken language of Germany and Austria, Austrian Sign Language and German Sign Language are unrelated to each other. On the other hand, American Sign Language has more in common with French Sign Language and bears almost no resemblance to British Sign Language. This is because, at the beginning of the 19th century, the deaf French teacher Laurent Clerc came, at the request of the American Gallaudet, to establish the first school for the deaf in the United States. Clerc, a follower of the sign method of the Abbé de l'Epée, influenced the spread of sign language in the United States, which explains the similarity between the American and French sign languages.
Sign language is an independent, naturally developed language used by deaf and hard-of-hearing people for communication. It consists of combinations of gestures, each produced by the hands in conjunction with facial expression, the shape or movement of the mouth and lips, and the position of the body. Hearing and deaf linguists in different countries have demonstrated that sign language is a consistent linguistic system with morphological and syntactic features of its own, distinct from those of spoken languages. The main structural difference is that sign language allows multiple streams of information to be transmitted in parallel (a synchronous language structure). For example, the content "a very large object is moving along the bridge" can be conveyed with a single gesture, whereas spoken languages function sequentially (information is transmitted serially, one element after another). A comparative study of the grammar and semantics of ordinary colloquial speech and colloquial gestural systems, as opposed to the codified literary language, proves productive here. Colloquial speech (including signed speech) is characterized by two opposing tendencies: dissection on the one hand, and compactness or syncretism on the other. For example, meanings that the literary language codifies in a single token are dissected colloquially: instead of "pen," people often say "the thing you write with." Colloquial sign language, in turn, has nominative compound models of the type berry + black + tongue for "blueberry." Idiomatic expressions such as "shut up like a clam" are absent in sign language; they are rendered descriptively, by meaning rather than by the form of the words. The point is that people with hearing loss, especially severe hearing loss, tend to think not in words but in gestures, so their spoken-language vocabulary is often very limited.
With special education this limitation can be overcome. As for idiomatic expressions, "shut up like a clam" will be shown by its meaning, accompanied by facial expression that makes the sense entirely clear. Polysemy, however, poses a real difficulty. The word "raise," for example, is closely associated with the image of bending down and picking something up from the floor (picking up a handkerchief, a match); hence expressions such as "he raised his hand" or "his temperature rose," which depart from this habitual image, are often not understood by the deaf. It is therefore natural that the task of teaching language to a deaf child is not simply a matter of imparting specific knowledge: it is complicated by the ambiguity and homonymy of words, which are resolved only in context. An utterance in sign language contains, alongside the gestural component, a non-manual component (the use of gaze, facial expressions, and movements of the head and body). These devices function much like intonation in spoken languages and are used to express deixis (reference to objects), negation, topic-comment structure, different types of questions, the linking of syntactic constituents, and so on. Sign language, unlike spoken language, is nonlinear. Grammatical information is usually transmitted simultaneously with lexical information, and a gesture undergoes modulation as it is performed (the hand moves with uniform acceleration or intermittently, changes direction in the vertical or horizontal plane, the same gesture is performed with two hands, and so on). In the syntax of sign language, three-dimensional space is used primarily for localization: the signer "places" the participants of a situation at particular points in space, and the place of articulation of predicates is then modified predictably depending on the location of the subject and object.
References

Armstrong, D. F. (2002). Original signs: Gesture, sign, and the sources of language. Gallaudet University Press.

Bowden, R., Windridge, D., Kadir, T., Zisserman, A., & Brady, M. (2004). A linguistic feature vector for the visual interpretation of sign language. In Computer Vision – ECCV 2004 (pp. 390–401). Springer Berlin Heidelberg.

Corker, M., & French, S. (1999). Disability discourse. McGraw-Hill International.

Emmorey, K., Kosslyn, S. M., & Bellugi, U. (1993). Visual imagery and visual-spatial language: Enhanced imagery abilities in deaf and hearing ASL signers. Cognition, 46(2), 139–181.

Goldin-Meadow, S. (2005). The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. Psychology Press.

Neville, H. J., & Lawson, D. (1987). Attention to central and peripheral visual space in a movement detection task: An event-related potential and behavioral study. I. Normal hearing adults. Brain Research, 405(2), 253–267.

Stokoe, W. C. (2005). Sign language structure: An outline of the visual communication systems of the American deaf. Journal of Deaf Studies and Deaf Education, 10(1), 3–37.