CROSS LINGUAL MODELLING EXPERIMENTS FOR INDONESIAN

Terrence Martin, Sridha Sridharan
Speech Research Laboratory, RCSAVT, School of Electrical and Electronic Systems Engineering
Queensland University of Technology
GPO Box 2434, George St, BRISBANE, Australia, 4001.
Email: (tl.martin, s.sridharan)@qut.edu.au

ABSTRACT: The extension of Large Vocabulary Continuous Speech Recognition (LVCSR) to resource-poor languages such as Indonesian is hindered by the lack of transcribed acoustic data and appropriate pronunciation lexicons. Research has generally been directed toward establishing robust cross-lingual acoustic models, with the assumption that phonetic lexicons are readily available. This is not the case for Indonesian. This paper outlines the development of a small Indonesian lexicon and the transcription of Indonesian speech taken from the OGI Multi Language speech corpus. To overcome the lack of transcribed acoustic data, previous research has indicated that acoustic data from data-rich languages can be used to train phoneme models for subsequent use in recognition of a target language. Using current cross-lingual speech recognition techniques, the goal of this paper is to outline our preliminary experiments aimed at establishing which source languages are most suitable for use in Indonesian speech recognition. We investigate cross-language transfer using English, Spanish and Hindi speech to train our acoustic source models for both phoneme and word recognition. A comparison between knowledge-based and data-driven mapping techniques and their effects on recognition is also tabled. It was found that Hindi speech models gave significantly improved recognition performance in comparison to English and Spanish.

1. INTRODUCTION
Bahasa Indonesia and Bahasa Melayu, literally the Indonesian and Malay language respectively, constitute the national language of Malaysia, Indonesia, Singapore and Brunei. It is the most common form of inter-ethnic communication for approximately 200 million people in business, education and government. Unfortunately, the extension of Large Vocabulary Continuous Speech Recognition (LVCSR) technology to so-called minority languages such as Indonesian is hindered by a lack of appropriate resources. While Indonesian may be poorly provided for in terms of speech recognition resources, it still arguably ranks within the 20 most important languages in the world. In fact, after taking into consideration variability, distribution and importance, Schultz and Waibel (1997) rate Malay/Indonesian within the top ten languages. This disparity between the relative importance of the Indonesian language and the availability of speech recognition resources highlights the need for further research into the application of speech technology to Indonesian. Recent research (Beyerlein et al., 1999a; Schultz and Waibel, 2001b; Kohler, 1998) has indicated that a promising method for overcoming the lack of acoustic resources in the target language is to use acoustic models trained on data-rich source languages for subsequent recognition in the target language. If sufficient target language acoustic data is available, adaptation can be performed on the source models so that they are more representative of the feature space of the target language. Our research focus is on extracting suitable models from data-rich languages such as English, Spanish and Hindi and adapting them to produce a state-of-the-art Indonesian speech recognition system. In (Schultz and Waibel, 2001b), it was shown that the accuracy achieved by implementing cross-language phone models in Automatic Speech Recognition (ASR) can be significantly affected by the selected source languages. The acquisition of the resources required to build a speech recognition system is expensive, and the subsequent tailoring of these models to make them suitable for use on a target language is time consuming. To minimise cost and time considerations, we are conducting preliminary experiments focused on establishing which languages are most suitable for an Indonesian speech recognition system.

To achieve this, we examined the coverage and performance of English, Hindi and Spanish for conducting cross-language acoustic speech recognition on Indonesian. An acoustic corpus is only one of the resources required for statistical ASR; text corpora for language models and pronunciation lexicons are also required. However, most studies have relied on the availability of these resources. Unfortunately, to the authors' knowledge, there are no Indonesian pronunciation lexicons available. Currently, a 15k word commercial dictionary is being prepared for our research. In the interim we have developed a small (800 word) pronunciation lexicon for testing purposes.

2. ACOUSTIC DATABASE CREATION
Phonetically transcribed acoustic data for Spanish, English and Hindi was taken from the 22 Language Oregon Graduate Institute Multi Language Telephone Speech Corpus and used to train the acoustic models for the source languages. The source language speech is transcribed using Worldbet notation (Hieronymus, 1993), and is phonetically segmented and time aligned. Worldbet is an ASCII-based character set encoding of the International Phonetic Alphabet (IPA). Table 1 outlines the total utterance durations and compositional details.

Table 1: Cross Lingual Acoustic Source Data Specifications

Language    Male Speakers    Female Speakers    Time (hr)
English     69               41                 1.4
Spanish     48               24                 1.0
Hindi       45               9                  1.0

The choice of the OGI database, and more specifically the English, Spanish and Hindi languages, enabled us to obtain adequate, if somewhat limited, data for training cross-lingual acoustic models from a similar environment. Other languages, such as Mandarin, were not utilised in this preliminary study because of the additional complications involved with tonal languages, ideographic scripts, or syllable-based languages (Schultz and Waibel, 2001b). Using data recorded in a similar environment provided the opportunity to standardise the training and test environment and hopefully reduce the impact of train/test mismatch and variations in channel effects. To provide Indonesian acoustic test data, the entire OGI Indonesian acoustic corpus was transcribed and validated at the word level by two native Indonesian speakers. From this data, a subset comprising 22 of the 1 minute "stories before the tone" was selected for subsequent phonetic transcription. The transcription was then independently validated and non-speech artifacts inserted in accordance with Worldbet notation. All instances of non-speech artifacts, in both the source and target transcriptions, were subsequently mapped to a single non-speech noise model for this series of experiments.

3. DICTIONARY CREATION
To produce a phonetic transcript of the Indonesian speech, a phonetic lexicon was constructed using the basic letter-to-sound rules taken from the Kamus Indonesia Inggris (Echols and Hassan, 1990), and cross-checked against the rudimentary Indonesian pronunciations found in The Learner's Dictionary of Today's Indonesian (Quinn, 2001). The Kamus Indonesia Inggris contains an IPA-based representation of the Indonesian phoneme set and basic letter-to-sound rules for pronunciation. We mapped the IPA set to Worldbet notation using Hieronymus (1993). The pronunciation lexicon created using these rules was checked using (Quinn, 2001) and then finally validated by cross-referencing against acoustic evidence. This resulted in the addition of several non-native phonemes to provide pronunciation coverage for non-native words such as "Oregon", which appear frequently in the Indonesian speech corpus, and also to allow for additional within-language variations. The entire lexicon consisted of nearly 800 words. This dictionary served as an interim measure for our preliminary testing until the completion of the 15k dictionary, which is being commercially produced.
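As an illustration of how such letter-to-sound rules can be applied (this sketch is not the actual rule set or code used in this work; the digraph and phone choices are assumptions written in Worldbet-style notation), a minimal grapheme-to-phoneme routine might look like the following:

```python
# Illustrative only: a minimal Indonesian grapheme-to-phoneme sketch using
# Worldbet-style symbols. The mappings below are assumptions for
# demonstration; the paper's lexicon was rule-generated from
# (Echols and Hassan, 1990) and then hand-validated against acoustic evidence.

# Digraphs must be matched before single letters.
DIGRAPHS = {"ng": "N", "ny": "n~", "sy": "S", "kh": "x", "ai": "aI", "au": "aU"}
SINGLES = {
    "a": "&", "b": "b", "c": "tS", "d": "d", "e": "E", "f": "f", "g": "g",
    "h": "h", "i": "i", "j": "dZ", "k": "k", "l": "l", "m": "m", "n": "n",
    "o": "oU", "p": "p", "r": "r", "s": "s", "t": "t", "u": "u", "v": "f",
    "w": "w", "y": "j", "z": "z",
}

def letter_to_sound(word: str) -> list[str]:
    """Convert an orthographic word to a list of Worldbet-style phones."""
    word = word.lower()
    phones, i = [], 0
    while i < len(word):
        pair = word[i:i + 2]
        if pair in DIGRAPHS:           # greedy digraph match, e.g. "ng" -> N
            phones.append(DIGRAPHS[pair])
            i += 2
        elif word[i] in SINGLES:
            phones.append(SINGLES[word[i]])
            i += 1
        else:                          # unknown symbol: flag for hand checking
            phones.append("?")
            i += 1
    return phones

# Example: "nyanyi" (to sing) -> ['n~', '&', 'n~', 'i']
print(letter_to_sound("nyanyi"))
```

In practice, as noted above, every entry produced by rules of this kind was checked against (Quinn, 2001) and the acoustic evidence before being admitted to the lexicon.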

4. CROSS LANGUAGE PHONE MAPPING
Prior to building the acoustic models, preliminary mapping was required to ensure that the source data provided adequate coverage for the target language phonemes. Two techniques were used to select the best source language acoustic models to represent the target language. Firstly, a knowledge-based technique was investigated in which direct mappings were made between equivalent Worldbet phonetic designations in the source and target languages. In the event that a direct mapping was not available, the closest Worldbet counterpart was selected after considering articulatory position and proximity of sound. As highlighted in (Beyerlein et al., 1999b), this technique has the benefit that no target language acoustic data is required. The mapping achieved by exploiting this linguistic knowledge is outlined in Section 4.1. Secondly, a data-driven technique was investigated which used confusion matrix data to select the best representative for each target language phone from multiple source languages. This is further discussed in Section 4.2. Experimental results for both techniques are shown in Table 5 of Section 4.3.

4.1. Knowledge Driven Phone Mapping
Using the OGI transcriptions for the source languages and the phone set developed via (Echols and Hassan, 1990) and (Quinn, 2001), we constructed a suitable mapping, which is shown in Table 2. The first column indicates the target phone set, including the non-native phonemes (f, v, z, S, and x). The subsequent columns show the coverage achieved by each of the three source languages, indicated by an "X". For example, the first row of phones, containing "&, b, dZ ....", has an "X" indicated for each language, indicating that these phones had equivalent Worldbet counterparts in all three source languages. In the event that coverage was not provided by a particular source language, but an allophone with a similar articulatory position and sound was available, this mapping was substituted. If no representative phoneme was available, a dash is indicated, and in this instance the model created for the universal set was used.

Table 2: Cross Lingual Mappings for Indonesian

Indonesian Phonemes [Worldbet]     English    Hindi                  Spanish
&, b, dZ, E, f, g, I, j            X          X                      X
k, l, m, N, s, S, tS, U, w         X          X                      X
V, aU, h                           X          X                      -, -, hs
>, u, aI, r                        X          u:, ai, rr             X
p                                  ph         X                      X
d, ei, n, oU, v                    X          d[, e:, n[, o:, -      d[, e, n[, o, V
&r, n∼                             3r, -      X, -                   3, X
t                                  th         (t[ or tr)             t[

It can be seen in Table 2 that each language came close to providing complete phoneme coverage for the target language. The exceptions were that no Spanish source model was available for the vowels "V" and ">" or the diphthong "aU", English and Hindi did not have a model for the palatal nasal "n∼", and Hindi did not have a model for the voiced fricative "v", which, as mentioned, does not appear in native Indonesian. It should be noted that this table only illustrates the target phonemes which had corresponding source language representatives with identical Worldbet/IPA symbols, or potential source language replacements when no direct corresponding phoneme was available. There were, however, several allophonic variants of the target phonemes available in the source languages. For example, there were several allophones for the trill "r" (the trilled and tapped Spanish versions) and the plosive "t" (the Hindi hyper-aspirated versions). Given the rudimentary nature of our dictionary, we felt that these sounds might be more representative of the target language phone features. Accordingly, models for each allophone were created and used in the recognition process. After recognition these were mapped back to the base phoneme for comparative purposes.
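To make the fallback behaviour concrete, the following is a minimal sketch (not the actual implementation; the dictionary contents shown are a small illustrative subset of Table 2 and the names are assumptions) of a knowledge-based lookup with a universal-model fallback:

```python
# Illustrative sketch of the knowledge-based mapping of Section 4.1.
# Keys are Indonesian target phones (Worldbet); values give, per source
# language, either the identical symbol, a close allophonic substitute,
# or None when no representative exists (fall back to the universal model).
# Only a few rows of Table 2 are reproduced here as an example.
KNOWLEDGE_MAP = {
    "b":  {"english": "b",  "hindi": "b",   "spanish": "b"},
    "p":  {"english": "ph", "hindi": "p",   "spanish": "p"},
    "t":  {"english": "th", "hindi": "t[",  "spanish": "t["},
    "r":  {"english": "r",  "hindi": "rr",  "spanish": "r("},
    "n~": {"english": None, "hindi": None,  "spanish": "n~"},
    "v":  {"english": "v",  "hindi": None,  "spanish": "V"},
}

UNIVERSAL_MODEL = "universal"   # model trained by pooling all three languages


def source_model_for(target_phone: str, source_language: str) -> str:
    """Return the source-language model name to use for a target phone."""
    entry = KNOWLEDGE_MAP.get(target_phone, {})
    mapped = entry.get(source_language)
    # A dash ("-") in Table 2 corresponds to None here: use the universal model.
    return mapped if mapped is not None else UNIVERSAL_MODEL


# Example: Hindi provides no model for the loanword fricative "v",
# so the universal model is used instead.
print(source_model_for("v", "hindi"))      # -> "universal"
print(source_model_for("p", "english"))    # -> "ph" (aspirated substitute)
```

The same structure extends naturally to the allophonic variants discussed above by listing several candidate substitutes per cell.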

4.2. Automatic Mapping
As mentioned earlier, two methods were used to establish suitable source-to-target language mappings. The second technique used the HTK speech recognition toolkit (Young et al., 2001) to compare decoded utterances against the original Indonesian reference phoneme strings. Initially we used a global phone set (containing every phone model from all three source languages) to decode the Indonesian utterances. Unfortunately, this resulted in high confusion rates between dissimilar phonetic categories and produced poor results. To combat this, we constrained the mapping choice to only allophonic variants of the target phones; the resulting recognition rates are shown beside the "All 3" entry in Table 5. Deviating from this idea, we then used each individual source language phone recogniser to decode the Indonesian utterances, and compared the resulting confusion matrices to select the best performing allophonic variant for each target phone (a minimal sketch of this selection step is given below). This method is not globally competitive; however, using the selected phone models we were able to obtain a small improvement in performance, as shown beside the "Individual" row in Table 5. These mappings and the corresponding phoneme recognition rates achieved in each source language are shown in Table 3.
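The per-phone selection step referred to above can be sketched as follows, assuming each source language's confusion matrix has already been reduced to per-phone percent-correct figures (the structures, names and numbers here are illustrative assumptions, not the actual implementation):

```python
# Illustrative sketch of the data-driven selection of Section 4.2.
# For each source language we assume a dict mapping a candidate source phone
# (an allophonic variant of the target) to its percent-correct rate on the
# Indonesian reference transcriptions, derived from that language's
# confusion matrix. The figures below are invented for demonstration.
candidate_rates = {
    "hindi":   {"t": {"t[": 63.0, "tr": 58.0}, "r": {"rr": 54.0, "r(": 51.0}},
    "spanish": {"t": {"t[": 60.0},             "r": {"rr": 49.0, "r(": 47.0}},
    "english": {"t": {"th": 41.0},             "r": {"r": 38.0}},
}


def best_source_phone(target_phone: str) -> tuple[str, str, float]:
    """Pick the (language, source phone, rate) with the highest rate
    among all allophonic candidates for one target phone."""
    best = ("", "", -1.0)
    for language, phones in candidate_rates.items():
        for source_phone, rate in phones.get(target_phone, {}).items():
            if rate > best[2]:
                best = (language, source_phone, rate)
    return best


# Example: for the target plosive "t" this would select the Hindi dental "t[".
print(best_source_phone("t"))   # -> ("hindi", "t[", 63.0)
```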

Table 3: Phone Recognition Rates on Indonesian

Target Phone   Best Source Phone     Target Phone   Best Source Phone     Target Phone   Best Source Phone
V              hi V / 68%            i              sp i / 69%            &r             sp 3 / 69%
&              sp & / 28%            I              hi Ix / 49%           s              hi s / 73%
aI             sp aI / 60%           j              hi j / 32%            S              nil recog instances
aU             en aU / 42%           k              hi k/kH / 67%         t              hi t / 63%
b              hi b / 58%            l              sp L / 39%            tS             sp tS / 63%
d              hi d(/dr / 55%        m              hi m / 69%            u              sp u / 32%
dZ             hi dZ / 57%           n              hi n / 63%            U              sp U / 31%
E              sp E / 51%            N              hi N / 36%            v              en v / 12%
ei             sp e / 54%            n∼             sp n∼ / 33%           w              hi w / 36%
f              en f / 42%            oU             sp o / 66%
g              hi gH / 36%           p              en pH / 69%
h              hi h / 36%            r              hi rr/r( / 54%

One result to emerge was that Hindi (identified by "hi") provided the best coverage for most Indonesian consonants, with the exception of those consonants taken from loanwords (f, v). For these phones, English ("en") achieved the best recognition rates. Spanish ("sp") achieved the best recognition rates for most vowels, with the notable exceptions being "aU", ">" and "V", which were not in the Spanish phone set. However, Hindi achieved only slightly inferior vowel recognition rates on Indonesian in most instances.

4.3. Monophone Recognition Results
The knowledge-based and data-driven mappings depicted in Tables 2 and 3 were used to conduct monophone recognition experiments on Indonesian speech. Speech was parameterised using a 12th order MFCC analysis plus normalised energy, delta and delta-delta features, with a frame size/shift of 25/10 ms. Cepstral Mean Subtraction (CMS) was carried out to reduce channel effects (an approximate sketch of this parameterisation is given below). The phone model topology was 3-state left-to-right, with each state emission density comprising 8 Gaussian mixture components. The speech files in the OGI database are sampled at 8 kHz and stored using 8 bit, µ-law encoding. A uniform prior phoneme probability was applied for phoneme recognition, that is, a simple phoneme loop recogniser. To give an indicative benchmark for comparative purposes, recognition rates were determined for each source language after decoding a "same language" test set. The recognition rates are shown in the second column of Table 4. Recognition rates are expressed in terms of percentage correct and percentage accuracy as produced by HTK (Young et al., 2001). Insertion and grammar penalties were adjusted so that the numbers of insertions and deletions were approximately equal. This acts to normalise the speaking rate across all three source languages and provides a more meaningful comparison.
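As a rough illustration only (this is not the HTK front-end actually used, and librosa is an assumed substitute tool), the parameterisation described above can be approximated as follows:

```python
# Rough approximation of the front-end described above: 12 MFCCs plus an
# energy-like 0th coefficient, delta and delta-delta features, 25 ms frames
# with a 10 ms shift at 8 kHz, and cepstral mean subtraction per utterance.
import numpy as np
import librosa

def parameterise(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=8000)          # OGI data is 8 kHz
    mfcc = librosa.feature.mfcc(
        y=y, sr=sr, n_mfcc=13,                       # c0..c12 (c0 ~ energy term)
        n_fft=256, win_length=200, hop_length=80,    # 25 ms window, 10 ms shift
    )
    # Cepstral Mean Subtraction: remove the per-utterance mean of each
    # coefficient to reduce channel effects.
    mfcc = mfcc - mfcc.mean(axis=1, keepdims=True)
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    # 39-dimensional observation vectors, one row per 10 ms frame.
    return np.concatenate([mfcc, delta, delta2], axis=0).T
```

Each resulting 39-dimensional frame would then be modelled by the 3-state, 8-mixture monophone HMMs described above.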

A universal model set was also trained across all three languages, based solely on similarity in Worldbet notation. The result of applying these models to all three source languages is also shown.

Table 4: Baseline Source Recognition Rates on Source Test Sets

Language     % Correct   % Accuracy   Deletions   Insertions
Hindi        52.5        42.0         508         497
English      45.4        29.4         1313        1370
Spanish      51.9        41.2         778         774
Universal    36.7        24.0         2591        2598
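For reference, HTK computes these measures from the total number of reference labels N and the substitution, deletion and insertion counts S, D and I as % Correct = 100 (N - D - S) / N and % Accuracy = 100 (N - D - S - I) / N (Young et al., 2001); the accuracy figure therefore additionally penalises insertions, which is why it is the more conservative of the two columns in Tables 4 and 5.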

The phonetic models developed were then applied to Indonesian speech and the results are shown in Table 5.

Table 5: Phone Recognition Rates on Indonesian

Language     % Correct   % Accuracy   Deletions   Insertions
Hindi        46.2        31.4         1034        980
English      32.9        18.0         1098        989
Spanish      38.1        21.5         909         857
Universal    36.6        20.9         958         975
All 3        38.2        24.3         1097        1048
Individual   41.2        25.5         1120        1117

It can be seen that Hindi significantly outperforms all other source language acoustic models, and examination of the baseline recognition rates in Table 4 reveals that Hindi recognition of Indonesian performs better than English recognition of the English test data.

4.4. Word Recognition Results
In Schultz and Waibel (2001a) it was found that there was no direct correlation between phoneme recognition rates and subsequent word recognition rates when multiple languages were applied to the target language. Given this, we wanted to establish whether this finding held when several languages were used for Indonesian speech recognition at the word level. The knowledge-based phoneme mappings determined in Section 4.1 were used. No language modelling was incorporated into our system, with a uniform a priori probability applied to each word.
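Without a language model, the uniform word prior contributes only a constant to each hypothesis, so the decoder's choice reduces to maximising the acoustic likelihood alone: W* = argmax_W P(O | W) P(W) = argmax_W P(O | W), where O is the observed acoustics. The word recognition rates therefore reflect the quality of the cross-lingual acoustic models directly.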

Table 6 depicts our results.

Table 6: Word Recognition Rates on Indonesian

Language           % Correct
Hindi              16.2
English            8.5
Spanish            10.3
Universal          9.8
Data Driven Set    11.5

As expected, the results were extremely poor, except for the Hindi language, which performed surprisingly well. However, Beyerlein et al. (1999a) reported similarly poor word error rates when a simple knowledge-based mapping was used. The fact that no language modelling was used must also be considered.

We also applied the mapping derived using the confusion matrices outlined in Section 4.2. Unfortunately, whilst the word recognition rates improved over the English, Universal and Spanish model sets, they failed to better the performance of the Hindi model set.

5. DISCUSSION
The interesting result to emerge from these experiments was the superior recognition performance achieved by the Hindi language, at both the monophone and word level, even when compared to the data-driven methods. Whilst the data-driven technique resulted in the selection of a phone set which achieved individually superior recognition rates, the combination failed to translate into a globally superior solution. The superior recognition rates achieved by the Hindi models may occur because the mixture components in each state provide a more suitable generalisation for the context dependence in Indonesian speech. In addition, the transitions between models may be more naturally represented using phonemes from a single language, and the general language feature space for Hindi may be closer to Indonesian than that of the other languages. As seen in Table 3, Hindi and Spanish provided superior consonant and vowel recognition results respectively. This result may be illusory, however, given that our data-driven technique did not incorporate a more global candidate model selection criterion. Accordingly, validation will require further directed research.

6. CONCLUSION
In this paper we outlined the transcription of the Indonesian speech contained in the 22 Language OGI Speech Corpus and the development process for an Indonesian pronunciation dictionary. We examined recognition performance using two techniques for mapping from source languages to Indonesian. The data-driven mapping technique improved recognition rates in comparison to the English and Spanish source languages; however, Hindi provided significantly improved performance over all other methods. This possibly indicates that the Hindi language provides a better general representation for the Indonesian language. Future examination of context dependence and a language-based feature space comparison are required to support the findings of this work.

7. REFERENCES
Beyerlein, P., Byrne, B., Huerta, J., Marthi, B., Morgan, J., Pterek, N., Picone, J. and Wang, W. (1999a), Towards language independent acoustic modelling, Technical report, Johns Hopkins University.
Beyerlein, P., Byrne, B., Huerta, J., Marthi, B., Morgan, J., Pterek, N., Picone, J. and Wang, W. (1999b), Towards language independent acoustic modelling, IEEE Workshop on Automatic Speech Recognition and Understanding.
Echols, J. and Hassan, S. (1990), Kamus Indonesia Inggris, Penerbit PT Gramedia Pustaka Utama.
Hieronymus, J. L. (1993), ASCII Phonetic Symbols for the World's Languages: Worldbet, Journal of the International Phonetic Association 23.
Kohler, J. (1998), Language adaptation of multilingual phone models for vocabulary independent speech recognition tasks, in Proc. ICASSP 98, pp. 417–420.
Quinn, G. (2001), The Learner's Dictionary of Today's Indonesian, Allen and Unwin.
Schultz, T. and Waibel, A. (1997), The Global Phone Project: Multilingual LVCSR with Janus, SQEL, Plzen, pp. 20–27.
Schultz, T. and Waibel, A. (2001a), Experiments on cross language acoustic modelling, Proceedings of Eurospeech 2001.
Schultz, T. and Waibel, A. (2001b), Language independent and language adaptive acoustic modelling, Speech Communication, Vol. 35, pp. 31–51.
Young, S., Kershaw, D., Odell, J., Ollason, D., Valtchev, V. and Woodland, P. (2001), The HTK Book (for HTK version 3.1), Entropic Ltd.