  • Research Article
  • Open access

Comparisons of Auditory Impressions and Auditory Imagery Associated with Onomatopoeic Representation for Environmental Sounds

Abstract

Humans represent sounds to others, and receive information about sounds from others, using onomatopoeia. Such representations are useful for obtaining and reporting the acoustic features and impressions of actual sounds without having to hear or emit them. But how accurately can we obtain such sound information from onomatopoeic representations? To examine the validity and applicability of using verbal representations to obtain sound information, experiments were carried out in which participants evaluated the auditory imagery associated with onomatopoeic representations that listeners had created for various environmental sounds. Comparisons of impressions showed that impressions of sharpness and brightness were similar for real sounds and their onomatopoeic stimuli, as were emotional impressions such as pleasantness for real sounds and "major" (typical) onomatopoeic stimuli. Furthermore, recognition of the sound source from an onomatopoeic stimulus affected the similarity of emotional impressions between real sounds and onomatopoeia.

1. Introduction

An infinite variety of sounds surrounds us throughout our lives. When we describe sounds to others, we often use onomatopoeic representations, which are related to the acoustic properties of the sounds they represent. Moreover, because those acoustic properties induce auditory impressions in listeners, onomatopoeic representations and the auditory impressions associated with actual sounds may be related.

In previous studies, relationships between the temporal and spectral acoustic properties of sounds and onomatopoeic features have been discussed [1–4]. We have also conducted psychoacoustical experiments to confirm the validity of using onomatopoeic representations to identify the acoustic properties of operating sounds emitted from office equipment and audio signals emitted from domestic electronic appliances [5, 6]. We found relationships between subjective impressions, such as the product imagery and functional imagery evoked by machine operation sounds, audio signals, and the onomatopoeic features. Furthermore, in a separate previous study, we investigated the validity of using onomatopoeic representations to identify the acoustic properties and auditory impressions of various kinds of environmental sounds [7].

Knowing more about the relationship between the onomatopoeic features and auditory impressions of sounds is useful because such knowledge allows one to more accurately obtain or describe the auditory imagery of sounds without actually hearing or emitting them. Indeed, one previous study attempted a practical application of such knowledge by investigating the acoustic properties and auditory imagery of tinnitus using the onomatopoeic representations of patients [8]. Future applications may include situations in which electronic home appliances such as vacuum cleaners and hair dryers break down and customers describe the mechanical problems to customer service representatives using onomatopoeia; engineers who listen to or read such complaints may then obtain more accurate information about the problem and better analyze its cause. Wake and Asahi [9] conducted psychoacoustical experiments to clarify how people communicate sound information to others. Participants were presented with sound stimuli and asked to freely describe the presented sounds to others. The results showed that the descriptions frequently contained onomatopoeia, mental impressions expressed through adjectives, sound sources, and situations. Such information may be applicable to sound design; indeed, related research has already been presented in a workshop on sound sketching [10], although its focus was on vocal sketching only.

In practical situations in which people communicate sound information to others using onomatopoeic representation, it is necessary that the receivers of onomatopoeic representations (e.g., engineers in the above-mentioned case) be able to identify the acoustic properties and auditory impressions of the sounds that onomatopoeic representations represent. The present paper examines this issue. Experiments were carried out in which participants evaluated the auditory imagery associated with onomatopoeic representations. The auditory imagery of onomatopoeic representations was compared with the auditory impressions for their corresponding actual sound stimuli, which were obtained in our previous study [7].

Furthermore, one of the most primitive human behaviors related to sound is the identification of sound sources [11]. Gygi et al. [12] reported that the important factors affecting the identification of environmental sounds are spectral information, especially the frequency content around 1–2 kHz, and temporal information such as envelope and periodicity. If we do indeed recognize events related to everyday sounds using acoustic cues [13–15], then it may also be possible to recognize sound sources from onomatopoeic features instead of acoustic cues. Moreover, such recognition of the source may affect the auditory imagery evoked by onomatopoeia. Although Fujisawa et al. [16] examined the auditory imagery evoked by simple two-mora onomatopoeia such as /don/ and /pan/ (a "mora" is a standard unit of rhythm in Japanese speech), sound source recognition was not discussed in their study. In the present paper, therefore, we took sound source recognition into consideration while comparing the auditory imagery of onomatopoeic representations with the auditory impressions induced by their corresponding real sounds.

2. Experiment

2.1. Stimuli

In our previous study [7], 8 participants were aurally presented with 36 environmental sounds and evaluated their auditory impressions of each sound stimulus. The sounds were selected based on their relatively high frequency of occurrence both outdoors and indoors in our daily lives. Additionally, the participants described each sound stimulus using onomatopoeic representations, as shown in Table 1.

Table 1 "Major" and "minor" onomatopoeic representations for each sound source.

For each sound stimulus, the 8 onomatopoeic representations were classified into 2 groups by hierarchical cluster analysis (Ward's method, with Euclidean distance as the measure of similarity), based on the similarities of 24 phonetic parameters: 7 places of articulation (labiodental, bilabial, alveolar, postalveolar, palatal, velar, and glottal), 6 manners of articulation (plosive, fricative, nasal, affricate, approximant, and flap) [17], the 5 Japanese vowels (/a/, /i/, /u/, /e/, /o/), voiced and voiceless consonants, syllabic nasals, geminate obstruents, palatalized consonants, and long vowels. From the two groups obtained from the cluster analysis, two onomatopoeic representations were selected for each sound: one from the larger group (described as the "major" representation) and one from the smaller group (the "minor" representation). A major onomatopoeic representation can be regarded as one frequently produced by many listeners of the sound, that is, a "typical" onomatopoeia, whereas a minor onomatopoeic representation is a unique one that a listener of the sound is relatively unlikely to use to describe it. In selecting the major onomatopoeic stimuli, a Japanese onomatopoeia dictionary [18] was also consulted. Consequently, 72 onomatopoeic representations were used as stimuli, as shown in Table 1; the expressions are written in both Japanese and the International Phonetic Alphabet [17]. In the experiments, however, the onomatopoeic stimuli were presented to participants in katakana, a Japanese syllabary. Almost all Japanese speakers can correctly pronounce onomatopoeic representations written in katakana.
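As an illustration, the grouping step described above can be sketched in Python. The binary feature vectors here are random stand-ins for the actual phonetic coding of the 8 representations of one sound, which is not reproduced in the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary feature vectors for 8 onomatopoeic representations of
# one sound; each column is one of the 24 phonetic parameters (1 = present).
rng = np.random.default_rng(0)
features = rng.integers(0, 2, size=(8, 24))

# Ward's method with Euclidean distance, as described in the text.
Z = linkage(features, method="ward", metric="euclidean")

# Cut the dendrogram into two clusters; with real data, the larger cluster
# would yield the "major" representation and the smaller the "minor" one.
labels = fcluster(Z, t=2, criterion="maxclust")
sizes = np.bincount(labels)[1:]
major_cluster = np.argmax(sizes) + 1
print("cluster sizes:", sizes, "major cluster:", major_cluster)
```

With real data, a representative onomatopoeia would then be chosen from each cluster, consulting the dictionary [18] for the major one.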

Onomatopoeic sounds uttered by listeners might preserve acoustic information such as pitch (the fundamental frequency of a vocal sound) and sound level more accurately than written onomatopoeic representations. Accordingly, onomatopoeic sounds (including vocal sketching) may be advantageous for extracting fine acoustic information. However, written onomatopoeia also preserve a certain amount of acoustic information. Furthermore, in Japan onomatopoeia are not only vocalized but also frequently used in printed matter, such as product instruction manuals in which the audio signals indicating mechanical problems are described in words. In practical applications, there may likewise be cases where written onomatopoeic representations are used in communication between customer service representatives and the users of products such as vacuum cleaners and hair dryers. Therefore, in the present study, we used written onomatopoeic stimuli rather than onomatopoeic sounds.

2.2. Procedure

Seventy-two onomatopoeic representations, printed in random order on sheets of paper, were presented to 20 participants (12 males and 8 females), none of whom had taken part in our previous experiments [7]. All participants were native speakers of Japanese and could therefore read the onomatopoeic stimuli written in katakana. Further, they were familiar with onomatopoeic representations, because Japanese speakers frequently read and use such expressions in daily life. Participants were asked to rate their impressions of the sounds associated with the onomatopoeia. The impressions of the auditory imagery evoked by the onomatopoeic stimuli were measured using the semantic differential (SD) method [19]. The 13 adjective pairs shown in Table 2 were used to create the SD scales, which had also been used in our previous psychoacoustical experiments (i.e., in measurements of auditory impressions of environmental sounds) [7]. Each SD scale had 7 Likert-type categories (1 to 7), and participants selected a number from 1 to 7 on each scale for each onomatopoeic stimulus. For example, for the scale "pleasant/unpleasant," the categories corresponded to the degree of pleasantness as follows: 1, extremely pleasant; 2, fairly pleasant; 3, slightly pleasant; 4, moderate; 5, slightly unpleasant; 6, fairly unpleasant; and 7, extremely unpleasant.

Table 2 Factor loading of each adjective scale for each factor.

Participants were also asked to answer, by free description, questions about the sound source or the phenomenon that created the sound associated with each onomatopoeic stimulus.

3. Results

3.1. Analysis of Subjective Ratings

The obtained rating scores were averaged across participants for each scale and for each onomatopoeic stimulus. To compare impressions between actual sound stimuli and onomatopoeic representations, factor analysis was applied to the averaged scores for onomatopoeic representations together with those for the sound stimuli (i.e., the rating results of auditory impressions) obtained in our previous experiments [7].
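A minimal sketch of this averaging step, with hypothetical raw ratings standing in for the actual experimental data:

```python
import numpy as np

# Hypothetical raw SD ratings: 20 participants x 72 onomatopoeic stimuli
# x 13 adjective scales, each a category from 1 to 7.
rng = np.random.default_rng(4)
raw = rng.integers(1, 8, size=(20, 72, 13))

# Average across participants, giving one 13-dimensional rating profile
# per stimulus; these averaged profiles enter the factor analysis.
mean_ratings = raw.mean(axis=0)
print(mean_ratings.shape)  # (72, 13)
```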

Retaining the factors whose eigenvalues exceeded 1 yielded a three-factor solution. The first, second, and third factors accounted for 45.5%, 24.6%, and 9.76% of the total variance in the data, respectively. Finally, the factor loadings of each scale on each factor were obtained using varimax rotation, as shown in Table 2.
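The extraction procedure (retain factors with eigenvalues greater than 1, then rotate with varimax) can be sketched as follows. The rating matrix is simulated, so the number of retained factors and the variance percentages will not match the paper's values; the varimax function is a standard textbook implementation, not the authors' code:

```python
import numpy as np

# Hypothetical averaged SD ratings: 108 stimuli (36 sounds plus 72
# onomatopoeic representations) rated on 13 adjective scales.
rng = np.random.default_rng(1)
ratings = rng.normal(4.0, 1.0, size=(108, 13))

# Kaiser criterion: retain factors whose eigenvalues of the correlation
# matrix exceed 1 (a three-factor solution in the paper).
corr = np.corrcoef(ratings, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = int(np.sum(eigvals > 1.0))
explained = 100 * eigvals[:k] / eigvals.sum()  # percent variance per factor

# Unrotated loadings from the principal-factor solution.
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

def varimax(L, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a p x k loading matrix."""
    p, _ = L.shape
    R = np.eye(L.shape[1])
    var_old = 0.0
    for _ in range(n_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr * (np.sum(Lr**2, axis=0) / p)))
        R = u @ vt  # closest orthogonal rotation to the criterion gradient
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return L @ R

rotated = varimax(loadings)
print("retained factors:", k)
```

Because the rotation is orthogonal, each scale's communality (the row sum of squared loadings) is unchanged; only the distribution of loadings across factors is simplified.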

The first factor is interpreted as the emotion factor because adjective pairs such as "tasteful/tasteless" and "pleasant/unpleasant" have high loadings for this factor. The second factor is interpreted as the clearness factor because adjective pairs such as "muddy/clear" and "bright/dark" have high factor loadings. The third factor is interpreted as the powerfulness factor because the adjective pairs "strong/weak," "modest/loud," and "powerful/powerless" have high factor loadings.

Furthermore, the factor scores for each stimulus on each factor were computed. Figures 1(a)–1(c) show the factor scores for the sound stimuli and the "major" and "minor" onomatopoeic representations on the emotion, clearness, and powerfulness factors, respectively.

Figure 1

Factor scores for real sound stimuli and "major" and "minor" onomatopoeic representations on the (a) emotion factor, (b) clearness factor, and (c) powerfulness factor.

3.2. Analysis of Free Description Answers of Sound Source Recognition Questions

From the free descriptions of the sound sources associated with the onomatopoeic representations, the percentage of participants who correctly recognized the sound source or the phenomenon creating the sound was calculated for each onomatopoeic stimulus. In Gaver's study on the ecological approach to auditory perception [20], sound-producing events were divided into three general categories: vibrating solids, gases, and liquids. Considering these categories, descriptions containing keywords related to the sound source or a similar phenomenon were counted as correct. For example, for the "whizzing sound" (no. 1), descriptions such as "sound of an arrow shooting through the air" and "sound of a small object slicing the air" were counted as correct answers. The percentages of correct answers for the sound sources associated with the "major" and "minor" onomatopoeic stimuli are shown in Figure 2.

Figure 2

Percentage of correct sound source answers associated with "major" and "minor" onomatopoeic stimuli.

The percentage of correct answers averaged across all "major" onomatopoeic stimuli was 64.3%, whereas that across all "minor" onomatopoeic stimuli was 24.3%. The major onomatopoeic stimuli thus appeared to allow participants to better recall the corresponding sound sources, suggesting that sound source information is communicated more accurately by major onomatopoeic stimuli than by minor ones.

4. Discussion

4.1. Comparison between Onomatopoeic Representations and Real Sound Stimuli Factor Scores

From Figure 1(a), sound stimuli such as "owl hooting (no. 6)," "vehicle horn (no. 9)," "sound of a flowing stream (no. 11)," "sound of a noisy construction site (no. 12)," and "sound of a wind chime (no. 34)" displayed highly positive or negative emotion factor scores (e.g., inducing strong impressions of tastefulness or tastelessness and pleasantness or unpleasantness). However, the factor scores for the onomatopoeic representations of the same sound stimuli were not as positively or negatively high. On the other hand, the factor scores for the "major" onomatopoeic representations of stimuli such as "sound of water dripping (no. 3)," "sound of a temple bell (no. 25)," and "beach sound (no. 30)" were nearly equal to those of the corresponding real sound stimuli.

The absolute differences in factor scores between the sound stimuli and the major or minor onomatopoeic representations were averaged across all sound sources in each of the three factors, as shown in Table 3.

Table 3 Averaged absolute differences of factor scores between real sound stimuli and "major" or "minor" onomatopoeic representations (standard deviations shown in parentheses).

According to Table 3, for the emotion factor, the factor scores for the real sound stimuli were closer to those for the major onomatopoeic representations than to those for the minor ones. The correlation of emotion factor scores between the real sound stimuli and the major onomatopoeic stimuli was statistically significant, whereas the scores for the minor onomatopoeic stimuli were not correlated with those of their real sounds.
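The two similarity measures used here, averaged absolute score differences and Pearson correlations, can be sketched with hypothetical factor scores. The construction below simply assumes major onomatopoeia track the real-sound scores while minor ones do not; it does not reproduce the paper's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical emotion factor scores for 36 sounds and their onomatopoeic
# counterparts; "major" scores are built to resemble the sound scores.
rng = np.random.default_rng(2)
sound = rng.normal(size=36)
major = sound + rng.normal(scale=0.5, size=36)  # similar impressions
minor = rng.normal(size=36)                     # unrelated impressions

# Similarity measured two ways, as in the paper: mean absolute difference
# of factor scores, and the Pearson correlation with its p-value.
diff_major = np.mean(np.abs(sound - major))
r_major, p_major = pearsonr(sound, major)
r_minor, p_minor = pearsonr(sound, minor)
print(f"major: diff={diff_major:.2f}, r={r_major:.2f}, p={p_major:.1e}")
```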

As shown in Figure 1(b), for the clearness factor, the factor scores for the major and minor onomatopoeic representations were on the whole close to those for the real sound stimuli. Table 3 also shows that the averaged factor score differences between the real sound stimuli and both the major and minor onomatopoeia were smallest for the clearness factor. Furthermore, the correlations of clearness factor scores between the real sound stimuli and both the major and minor onomatopoeic stimuli were statistically significant. The impressions of muddiness (or clearness) and brightness (or darkness) for the onomatopoeic representations were similar to those for the corresponding real sound stimuli.

For the powerfulness factor, factor scores for the major and minor onomatopoeia were different from those for the corresponding sound stimuli as a whole, as shown in Figure 1(c) and Table 3. Moreover, no correlation of the powerfulness factor scores between the real sound stimuli and the onomatopoeic stimuli was found.

These results suggest that the receiver of an onomatopoeic representation can accurately infer the auditory impressions of muddiness, brightness, and sharpness (or clearness, darkness, and dullness) evoked by the real sound it represents. Conversely, it appears difficult for listeners to convey impressions of strength and powerfulness using onomatopoeic representations.

In the present paper, while onomatopoeic stimuli with highly positive clearness factor scores included the Japanese vowel /o/ (e.g., the major onomatopoeic stimuli nos. 2 and 21), those with highly negative clearness factor scores included vowel /i/ (e.g., the major and minor onomatopoeic stimuli nos. 27 and 34). According to our previous study [7], the Japanese vowel /i/ was frequently used to represent sounds with spectral centroids at approximately 5 kHz, inducing impressions of sharpness and brightness. Conversely, vowel /o/ was frequently used to represent sounds with spectral centroids at approximately 1.5 kHz, inducing impressions of dullness and darkness. From a spectral analysis of the five Japanese vowels produced by male speakers, the spectral centroids of vowels /i/ and /o/ were actually the highest and lowest, respectively, of all the five vowels [7]. Thus, it can be said that these vowels are at least useful in communicating information about the rough spectral characteristics of sounds.
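The spectral centroid referred to here is the amplitude-weighted mean frequency of a sound's magnitude spectrum. A minimal sketch, checked against a pure tone whose centroid is known by construction:

```python
import numpy as np

def spectral_centroid(signal, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Sanity check: a 1500 Hz tone (roughly where /o/-like sounds cluster in
# the text) has its centroid at the tone frequency.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1500 * t)
print(round(spectral_centroid(tone, fs)))  # 1500
```

For real environmental sounds or vowel recordings, the centroid would be computed the same way, typically on short windowed frames rather than the whole signal.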

As mentioned above, the emotion factor scores of the major onomatopoeic stimuli showed both a relatively small difference from and a significant correlation with those of the real sound stimuli. In addition, participants could recognize the sound source or the phenomenon creating the sound more accurately from the major onomatopoeic stimuli, as shown in Figure 2.

Preis et al. have pointed out that sound source recognition influences differences in annoyance ratings between bus recordings and "bus-like" noises, which were generated from white noise to have spectral and temporal characteristics similar to those of original bus sounds [21]. Similarly, in the case of the present paper, good recognition of sound sources may be the reason why the emotional impressions of the major onomatopoeic stimuli were similar to those for the real sound stimuli.

In our previous study, we found that the powerfulness impressions of sounds were significantly correlated with the number of voiced consonants [7]. However, as shown in Figure 1(c), the auditory imagery of onomatopoeic stimuli containing voiced consonants (i.e., nos. 26 and 35) was different from the auditory impressions evoked by real sounds. Thus, we can conclude that it is difficult to communicate the powerfulness impression of sounds by voiced consonants alone.

4.2. Effects of Sound Source Recognition on the Differences between the Impressions Associated with Onomatopoeic Representations and Those for Real Sounds

As mentioned in the previous section regarding the emotion factor, differences in impressions between real sound stimuli and onomatopoeic representations may be influenced by sound source recognition. That is, the impressions of an onomatopoeic representation may be similar to those of the real sound when the sound source can be correctly recognized from the representation. To investigate this point for each of the three factors, the absolute differences between the factor scores for the onomatopoeic representations and those for the corresponding sound stimuli were averaged within each of two groups of onomatopoeic representations: one comprising the stimuli for which more than 50% of participants correctly answered the sound source question, and the other comprising those for which less than 50% answered correctly (see Figure 2). These groups contained 30 and 42, respectively, of the 72 onomatopoeic representations.

Table 4 shows the averaged differences of factor scores for the two groups for each factor. For every factor, the difference for the group with higher sound source recognition was slightly smaller than that for the other group. In particular, for the emotion factor, the difference between the two groups was statistically significant; for the other two factors, no significant differences were found. These results indicate that recognition of a sound source from an onomatopoeic representation may affect the difference between the emotional impressions associated with the representation and those evoked by the real sound it represents. Furthermore, it can be concluded that impressions of clearness, brightness, and sharpness were similar for the sound and onomatopoeic stimuli regardless of sound source recognition, whereas the powerfulness impressions of the sound and onomatopoeic stimuli were quite different regardless of recognition. For the powerfulness factor, the range of factor scores across the sound stimuli was slightly smaller than that across the onomatopoeic stimuli (the averaged absolute factor scores for the sound and onomatopoeic stimuli were 0.79 and 0.82, respectively, as shown in Figure 1(c)); sound stimuli that did not evoke strong powerfulness impressions were common. Furthermore, judging from the eigenvalues, the powerfulness factor carried the least information of the three factors. These points may explain the large averaged differences in powerfulness factor scores in both groups.
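The group comparison can be sketched as follows with simulated per-stimulus data. The paper does not state which significance test was used, so a two-sample t-test stands in here, and the simulated effect (better recognition giving smaller impression differences) is an assumption built into the data:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical per-stimulus values for 72 onomatopoeia: percent-correct
# source recognition, and the absolute emotion-factor-score difference
# from the real sound (constructed to shrink as recognition improves).
rng = np.random.default_rng(3)
pct_correct = rng.uniform(0, 100, size=72)
abs_diff = 1.2 - 0.008 * pct_correct + rng.normal(scale=0.1, size=72)

# Split at the 50% recognition threshold used in the paper.
high = abs_diff[pct_correct > 50]   # well-recognized onomatopoeia
low = abs_diff[pct_correct <= 50]   # poorly recognized onomatopoeia

t_stat, p_value = ttest_ind(high, low)
print(f"mean high={high.mean():.2f} low={low.mean():.2f} p={p_value:.3g}")
```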

Table 4 Absolute differences between factor scores for onomatopoeic representations and those for real sound stimuli, averaged for each of the two groups of onomatopoeic representations: those for which more than 50% of participants had correct sound source identifications, and those for which less than 50% of participants had correct identifications (standard deviations shown in parentheses).

5. Conclusion

The auditory imagery of sounds evoked by "major" and "minor" onomatopoeic stimuli was measured using the semantic differential method. A comparison of the impressions made by real sounds and their onomatopoeic counterparts showed that clearness impressions were similar for the sounds and both the major and minor onomatopoeic stimuli, as were the emotional impressions for the real sounds and the major onomatopoeic stimuli. Furthermore, recognition of a sound source from an onomatopoeic stimulus was found to influence the similarity between the emotional impressions evoked by the onomatopoeic representation and those evoked by its corresponding real sound, although this effect was not found for the clearness and powerfulness factors. These results reveal that information about clearness impressions, including the muddiness, brightness, and sharpness of sounds, is relatively easy to communicate to others using onomatopoeic representations; these impressions are mainly related to the spectral characteristics of the sounds [22]. The results also indicate that emotional impressions can be communicated through onomatopoeic representations that enable listeners to imagine the sound source correctly. Onomatopoeia can therefore be used as a method of obtaining or describing information about the spectral characteristics of sounds in addition to the auditory imagery they evoke.

References

  1. Tanaka K, Matsubara K, Sato T: Onomatopoeia expression for strange noise of machines. Journal of the Acoustical Society of Japan 1997, 53(6):477-482.

  2. Iwamiya S, Nakagawa M: Classification of audio signals using onomatopoeia. Soundscape 2000, 2:23-30.

  3. Hiyane K, Sawabe N, Iio J: Study of spectrum structure of short-time sounds and its onomatopoeia expression. Technical Report of IEICE 1998, (SP97-125):65-72.

  4. Sato T, Ohno M, Tanaka K: Extraction of physical characteristics from onomatopoeia: relationship between actual sounds, uttered sounds and their corresponding onomatopoeia. Proceedings of the Forum Acusticum, Budapest, Hungary, 2005, 1763-1768.

  5. Takada M, Tanaka K, Iwamiya S, Kawahara K, Takanashi A, Mori A: Onomatopoeic features of sounds emitted from laser printers and copy machines and their contribution to product image. Proceedings of the 17th International Congress on Acoustics, 2001, 3C.16.01.

  6. Yamauchi K, Takada M, Iwamiya S: Functional imagery and onomatopoeic representation of auditory signals. Journal of the Acoustical Society of Japan 2003, 59(4):192-202.

  7. Takada M, Tanaka K, Iwamiya S: Relationships between auditory impressions and onomatopoeic features for environmental sounds. Acoustical Science and Technology 2006, 27(2):67-79. doi:10.1250/ast.27.67

  8. Shiraishi K, Sakata T, Sueta T, et al.: Multivariate analysis using quantification theory to evaluate acoustic characteristics of the onomatopoeic expression of tinnitus. Audiology Japan 2004, 47:168-174. doi:10.4295/audiology.47.168

  9. Wake SH, Asahi T: Sound retrieval with intuitive verbal descriptions. IEICE Transactions on Information and Systems 2001, E84(11):1568-1576.

  10. Sonic Interaction Design: Sketching Sonic Interaction Design. Proceedings of the SID Workshop, 2008. http://www.cost-sid.org/wiki/HolonWorkshop

  11. Guski R: Psychological methods for evaluating sound quality and assessing acoustic information. Acta Acustica united with Acustica 1997, 83(5):765-774.

  12. Gygi B, Kidd GR, Watson CS: Spectral-temporal factors in the identification of environmental sounds. Journal of the Acoustical Society of America 2004, 115(3):1252-1265. doi:10.1121/1.1635840

  13. Warren WH, Verbrugge RR: Auditory perception of breaking and bouncing events: a case study in ecological acoustics. Journal of Experimental Psychology: Human Perception and Performance 1984, 10(5):704-712.

  14. Ballas JA: Common factors in the identification of an assortment of brief everyday sounds. Journal of Experimental Psychology: Human Perception and Performance 1993, 19(2):250-267.

  15. Rosenblum LD: Perceiving articulatory events: lessons for an ecological psychoacoustics. In Ecological Psychoacoustics. Edited by Neuhoff JG. Elsevier Academic Press, San Diego, Calif, USA; 2004:219-248.

  16. Fujisawa N, Obata F, Takada M, Iwamiya S: Impression of auditory imagery associated with Japanese 2-mora onomatopoeic representation. Journal of the Acoustical Society of Japan 2006, 62(11):774-783.

  17. International Phonetic Association: Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge University Press, Cambridge, UK; 1999.

  18. Asano T: The Dictionary of Onomatopoeia. Kadokawa Books, Tokyo, Japan; 1978.

  19. Osgood CE, Suci GJ, Tannenbaum PH: The Measurement of Meaning. University of Illinois Press, Chicago, USA; 1957.

  20. Gaver WW: What in the world do we hear? An ecological approach to auditory event perception. Ecological Psychology 1993, 5(1):1-29. doi:10.1207/s15326969eco0501_1

  21. Preis A, Hafke H, Kaczmarek T: Influence of sound source recognition on annoyance judgment. Noise Control Engineering Journal 2008, 56(4):288-299. doi:10.3397/1.2949893

  22. von Bismarck G: Timbre of steady sounds: a factorial investigation of its verbal attributes. Acustica 1974, 30:146-159.


Acknowledgments

The authors would like to thank all of the participants for their participation in the experiments. This paper was supported by a Grant-in-Aid for Scientific Research (no. 15300074) from the Ministry of Education, Culture, Sports, Science, and Technology.

Author information


Corresponding author

Correspondence to Masayuki Takada.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Takada, M., Fujisawa, N., Obata, F. et al. Comparisons of Auditory Impressions and Auditory Imagery Associated with Onomatopoeic Representation for Environmental Sounds. J AUDIO SPEECH MUSIC PROC. 2010, 674248 (2010). https://doi.org/10.1155/2010/674248
