A Statistical Approach for Similarity Measurement Between Sentences


A Statistical Approach for Similarity Measurement Between Sentences for EBMT

Niladri Chatterjee
Department of Mathematics
Indian Institute of Technology
Hauz Khas, New Delhi 110016
Email: [email protected]

Abstract

The success of Example-Based Machine Translation depends heavily on how efficient the retrieval scheme is. The more similar the retrieved sentence is to the input one, the easier it is to adapt the retrieved translation to the current requirement. However, no suitable scheme exists for measuring similarity between sentences. This paper reports preliminary results of a similarity measurement scheme based on a linear model, whose coefficients are determined by the multiple regression technique. The data for the analysis have been collected from a survey of a number of respondents. Three major aspects of similarity, namely pragmatic, syntactic and semantic, have been considered. Each respondent was asked to evaluate the similarity between different pairs of sentences carefully designed to reflect one of the above types of similarity. A statistical analysis of these evaluations reveals general human perception about sentential similarity, which will help in designing a suitable retrieval scheme.

1. Introduction

Example-Based Machine Translation (EBMT) [Nagao, 1984] [Brown, 1996] has of late become popular for facilitating automatic and/or semi-automatic Machine Translation. EBMT is based on the idea of performing translation by imitating past translation examples. In this type of translation system, a large number of translation examples between two languages (L1 and L2, say, respectively the source and the target language) are stored in a textual database. These examples are subsequently used as guidance for future translation tasks. In EBMT one does not go through the rigour of the syntax and semantics of the source and target languages. Rather, in order to translate a sentence given in L1 into L2, the scheme first retrieves some similar sentence(s) in L1 from its knowledge base. The translations of the retrieved sentence(s) are then modified (or adapted) suitably to derive a translation of the given input sentence. Evidently, the scheme depends upon how good and effective the retrieval scheme is. The closer the retrieved sentence is to the input one, the easier its adaptation to the present translation requirement will be, and consequently, the better the overall translation quality. However, no scheme has so far been developed to quantify the similarity between two sentences in an objective way. The primary cause may be attributed to the general variations in human expression, which are manifested in different ways of producing sentences that essentially convey the same meaning. Consider, for example, the following sentences:

She is good looking.
She is good to look at.
She looks good.

Not only are these sentences made of the same key words, they convey the same meaning too. On the other hand, the following sentences

This horse is running good.
This horse is good to run on.
It was a good running by this horse.

have completely different senses to convey, although the key words are the same again.

Adaptation of a retrieved sentence for generating the translation of the input cannot therefore be accomplished by taking into account the constituent words alone. Rather, it involves all the different aspects of a sentence, namely syntactic, semantic and pragmatic. Any similarity measurement scheme should therefore give importance to these aspects so that appropriate retrievals are made. However, there is no straightforward way to quantify the discrepancies between two sentences, and thereby to find a suitable measurement of similarity. We aim at providing a solution to this problem by assuming a linear model, where the overall similarity between two sentences is considered to be a weighted sum of discrepancies between different components. The weights of the different components can be generated using the multiple regression method, the data for which is to be collected by surveying a large number of respondents. However, the key problem here is that everybody may not perceive similarity in the same way. Further, the same person may have different feelings about similarity in different contexts. Hence, before designing any scheme based on sample data, one needs to ensure that the respondents are consistent in their evaluations. This work reports the results of a preliminary statistical analysis of a survey conducted on 50 respondents, each giving their intuitive feeling about similarity for 15 pairs of sentences. In the following section, we discuss the issue of similarity in the context of example-text retrieval. Section 3 discusses our proposed scheme for an objective evaluation of similarity. Section 4 discusses our findings.

2. Retrieval of Similar Text in Intra-Language Matching

Similarity between two sentences may be defined in three primary ways [Wolstencroft, 1993]:

• Semantic similarity refers to thesaurus- or dictionary-based similarity, i.e. sentences having the same or similar words.
• Syntactic similarity, where structural similarity is used for measuring the similarity between sentences.
• Pragmatic similarity, where two sentences having the same practical significance are considered similar.

The significance of the different types of similarity, and how they affect translation, is discussed below.

2.1 Roles of Semantic and Syntactic Similarity

Nirenburg et al. [Nirenburg, 1993] proposed a word-based scheme for similarity measurement. Here similarity is measured on the basis of common words between two sentences, with penalties calculated for discrepancies such as gaps between words, different word order, partial matching, inflection etc. Match scores are first calculated separately for each kind of incomplete match. Then a cumulative score is produced, and only those matches are retained for which the penalty is below a certain threshold. However, purely word-based similarity has some obvious difficulties:

i) The same word may have different meanings. Consider, for example, the sentence "he went to the bank". Without appropriate context it is not clear to a translator whether it is a river bank or a financial bank.
ii) A word may convey a sense that is different from its usual meaning. In the sentence "he ran into trouble", the key verb "run" does not carry its normal sense.
iii) Idiomatic expressions. Idioms often do not have any correspondence with the constituent words. For illustration, "he is a green horn" does not have any correspondence with either green or horn.
iv) Synonymous words may not be recognised. Although "he has a car" and "he has an automobile" have the same meaning, this may not be apparent because synonymous words have been used.
v) Class/subclass identification may be difficult. For example, "he has a car" and "he has a BMW" are similar, but this may not be apparent immediately.

As a consequence, word-based similarity may not always be conducive to EBMT. For illustration, let us consider a simple input sentence "John is eating rice". Its correct Hindi translation is: "john chaawal khaa rahaa hai". Now let us consider two sentences:

1. Mohan is playing football. : mohan football khel rahaa hai.
2. John will eat rice. : john chaawal khaayegaa.

Evidently, the second sentence "John will eat rice" does not provide an easy translation model for the input sentence "John is eating rice", although all three keywords, namely subject, object and verb, are the same in the two sentences. On the other hand, the sentence "Mohan is playing football" is more helpful in providing the translation structure. In fact, by replacing "mohan" with "john", "football" with "rice" and "play" with "eat", one may get a correct translation. So measuring similarity between sentences on the basis of word-based similarity alone is not ideal. A syntactically closer sentence may aid a translator more even if it does not have any key words in common with the input sentence. However, syntactic similarity does not always help a translator either. Consider, for example, the sentence "he ran into trouble". Its correct Hindi translation, "woh ek muskil mein padh gaya", cannot be obtained from the translation of a structurally similar sentence "he ran into a house" (in Hindi "woh ek kamre mein ghus gaya"). This is because the two expressions carry different senses, and the corresponding verbs are different in Hindi. Hence not only is surface-level similarity to be considered; one has to consider the deeper meaning conveyed by the sentences when measuring similarity. The pragmatics of the sentences therefore comes into the picture.

2.2 Role of Pragmatic Similarity

Intuitively, from the point of view of generating translations, pragmatic similarity may be measured on the basis of two key aspects of a sentence, namely the verb and the preposition.
However, as illustrated below, similarity here does not always imply similitude in translation.

Similarity in verbs: Verbs denote the basic action represented in a sentence. If two sentences have the same verb, one may expect that their translations will be similar. But a closer look shows that this is not always true. For illustration, consider the verb take in the following examples:

Mohan will take food in the mess. : mohan bhojanaaly mein khaanaa khaayegaa.
Ram will take money from Deepa. : raam deepaa se peise udhaar legaa.

Both sentences have "take" as the main verb. But the sense carried by this word is different in the two contexts. From the translation point of view, in the first case the Hindi verb is "khaanaa", while in the second case it is "lenaa". Consequently, a direct replacement of the verb from a similar sentence is not helpful in generating the correct translation.

Similarity in prepositions: Can two sentences or clauses having the same main preposition be expected to have similar meanings too? And can one expect similar translations of the same preposition across different examples? Here too no straightforward assumption holds. Let us consider the preposition by. We observe that its meaning varies with the context. For illustration:

Novel by Shakespeare ⇒ Shakespeare kaa novel
Killed by Ram ⇒ Ram dwaaraa maaraa gayaa
Stop by the river ⇒ Nadi ke paas rookhnaa
Killed by hunger ⇒ Bhukh se mar gayaa

Thus the same word is translated into Hindi in different ways depending upon the underlying sense. The same may be observed for other prepositions, such as to, with etc.

Dissimilarity of expression: Another problem in measuring similarity between sentences is the subjective nature of human expression. Human beings, constrained by vocabulary or other personal choices, often tend to express the same sense in different styles. The variation may come in many ways:

1. Variation in the kind of sentence. The following sentences all convey the same effective meaning, although they are "interrogative", "negative" and "affirmative" type sentences, respectively:

Is she really intelligent?
I don't think she is really intelligent.
I think she is not that intelligent.

2. Variation in construction. As discussed earlier (in Section 1), all of the following sentences "her look is good", "she is good looking", "she looks good", "she is good to look at" have the same keywords used in different ways. Yet each of them conveys the same sense. Similarly, the sentences "he is the best boy in my class" and "he is better than any other boy in my class" have expressional variations in terms of the adjective (superlative and comparative, respectively), but both have the same meaning. Likewise, by changing from active to passive voice one can make a different construction of the same sentence.

3. Variation in vocabulary. The use of synonymous words makes two sentences with the same meaning look different. For illustration, consider "He is fatty", "He is obese", "He is plump". A translator not familiar with the synonyms will fail to identify that the above sentences have the same meaning.
2.3 How to Measure Similarity?

One should therefore take note of the above discussions, which may be summarised as follows:

a) Syntactically similar sentences in the source language are likely to have translations that are syntactically similar in the target language.

b) Although semantic similarity may mislead in generating a translation, the role of semantics still cannot be ignored completely in carrying out a translation task.

c) It is possible to have a number of sentences all conveying the same meaning; that is, expressional variations may lead to syntactic or semantic dissimilarities. Consequently, pragmatic knowledge about a sentence is also necessary for computing similarity.

The key question therefore is how to take all these aspects into account. Evidently, storing all these variations in a system's knowledge base would make the storage requirement prohibitively large, and consequently retrieval a very complex procedure. The present work therefore proposes an alternative. Here we intend to look at the overall similarity (or, to be more specific, the dissimilarity) between two sentences as a consequence of component-wise discrepancies. The following section describes the proposed scheme in detail.

3. Proposed Scheme for Similarity Evaluation

The key assumption of the proposed scheme is that the overall dissimilarity between two sentences can be expressed as a combination of the dissimilarities between different constituents of the two sentences. Here, we assume a linear model, i.e.

D = w1 * d1 + w2 * d2 + w3 * d3 + ... + wk * dk,

where D is the overall dissimilarity between the two sentences; k is the number of components that contribute to the dissimilarity between sentences; dj, j = 1 .. k, is the discrepancy in terms of the jth component; and wj is the weight corresponding to the jth component.
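The linear model above can be sketched in a few lines of Python. The discrepancy values and weights below are purely illustrative placeholders; the paper determines the actual weights by multiple regression on survey data.

```python
def overall_dissimilarity(discrepancies, weights):
    """Weighted sum of component-wise discrepancies: D = sum_j w_j * d_j."""
    assert len(discrepancies) == len(weights)
    return sum(w * d for w, d in zip(weights, discrepancies))

# Hypothetical example: six components with equal weights.
d = [0.2, 0.0, 0.5, 0.1, 0.0, 0.3]
w = [1.0 / 6] * 6
print(round(overall_dissimilarity(d, w), 4))  # → 0.1833
```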

In this work we investigate the feasibility of such a model. The key task in this regard is threefold:

a) Identification of the individual components that contribute to similarity/dissimilarity;
b) Designing suitable measurement schemes for quantifying the discrepancy between two sentences with respect to each individual component;
c) Determination of the weights for each component.

These issues are discussed below.

3.1 Components of a Sentence

A sentence may have several components having different roles in the overall presentation. Some of them are explicit, while others are implicit; yet all are important from the translation point of view, particularly between English and Hindi. In other language pairs some of them may not be of any importance; however, the overall scheme will remain the same. The implicit components are:

• What the subject of the sentence is:
a) whether it is singular or plural;
b) whether it is first, second or third person;
c) whether any article has been used, and if so, whether it is a definite or indefinite article;
d) the gender of the subject;
e) whether the subject is animate or inanimate.
• The number of objects of the verb. If the number is more than one, a classification of the objects (i.e. direct, indirect) is needed.
• The gender of the object.
• What action is carried out by the main verb of the sentence. (As discussed earlier, verbs often convey a sense different from their usual one. Hence the action is important to notice for measuring any pragmatic similarity.)

Depending upon the language under consideration, some of the components have more importance than others. For example:

• In English two indefinite articles are used, 'a' and 'an', while in Hindi there is only one.
• In English, in order to denote the possessive case, one uses "'s" for animate and "of" for inanimate nouns, while Hindi does not make any such distinction.
• In English, the conjugation of verbs corresponding to subjects in the first and second person is often the same. But in Hindi the verbs for the first and third person are often similar (e.g. "main karta hoon", "woh karta hai", but "tum karte ho").
• Unlike English, Hindi has no neuter gender. Every inanimate object is classified as masculine or feminine.
• In English the gender of the subject does not affect the verb. But in Hindi gender affects the verbs greatly.

The explicit components are those that are visible in the construction. Some of them are:

• The tense of the sentence.
• The voice, i.e. whether the sentence is in active or passive voice.
• The verb, i.e. the main verb, and also whether any auxiliary verbs have been used.
• The type of the sentence, i.e. whether it is interrogative, affirmative, exclamatory etc.

All these components play some role in the overall similarity computation between two sentences. The main difficulty is that the contribution of each component to the overall dissimilarity is not known. Moreover, the intuitive feeling about this measurement varies greatly from person to person.

We therefore proceed in the following way. Each component here is considered as an attribute of the underlying sentence. In most cases the attributes may take values only from a fixed set. The common AI technique of artificial enumeration [Chatterjee, 1994] may be used to impose some metrication on each of these sets, which in turn provides a measurement of distance (the dj's given earlier) with respect to a particular attribute. The weights (corresponding to a particular metrication scheme) can then be determined from people's responses about similarity. The details are given below.

3.2 Measuring Component-wise Distance

Chatterjee and Campbell [Chatterjee, 1994] proposed different metrication schemes, among them the use of fuzzy hedges, ordering relative to some reference, use of certain set properties, and artificial enumeration. In this context artificial enumeration appears to be the most suitable. Artificial enumeration is a technique that has long been used in AI (see, for example, the systems CHEF [Hammond, 1986] and CAVALIER [Barletta, 1988]). Here, experts in a given domain are asked to assign numbers to different symbolic values, imposing an order among the participating items with respect to some property. The numbers are to be chosen carefully so that not only do they provide the relative order, but they can also be used for computing distances between the components. In this work, for each attribute the numbers have been chosen on a scale of 1 to 9. We have worked with six attributes. The artificial numbers used in representing the different values of each attribute are as follows:

a) Number (of subject). In English only two numbers are used: singular and plural. The values assigned to them are the two extremes, i.e. 1 and 9.

b) Gender (of subject). Not only are there three genders in English, nouns and pronouns also exist for each gender. Hence our enumerated numbers are as follows: for "he" the value is 1 and for "she" it is 9; for pronouns where the gender is not clear (e.g. "I", "we", "you", "it", "this") the assigned value is 5. For male names the value assigned is 2, and for female names it is 8. For neuter objects the value assigned is 4.

c) Article. The definite article "the" gets the value 1, while the indefinite articles get values at the opposite extreme. To distinguish between "a" and "an", the values assigned to them have been chosen to be 9 and 8, respectively.

d) Type of objects. Here we consider only one aspect: whether the verb has any objects at all, and if so, whether it has one object or two. The numbers 1, 5 and 9 have been assigned to the three cases, respectively.

e) Verbs. Natural-language verbs are too numerous to handle individually. However, through conceptual dependency theory, Schank [Schank, 1975] classified verbs into 11 classes depending upon the nature of the abstract action conveyed by a verb. The enumeration has been made in such a way that numerical proximity implies closeness in the underlying action. The enumeration is as follows: "Expel" = 1, "Ptrans" = 2, "Move" = 3, "Propel" = 3.5, "Grasp" = 4, "Atrans" = 5, "Ingest" = 6, "Mtrans" = 7, "Speak" = 7.5, "Mbuild" = 8, "Attend" = 9.

f) Tense. We consider the following tenses along with the given enumeration: "past perfect" = 1, "past continuous" = 1.5, "past indefinite" = 2, "present perfect" = 3, "present indefinite" = 5, "present continuous" = 6, "future indefinite" = 8, "future continuous" = 8.5.

Evidently, the enumeration is very subjective. But the key point is that the enumeration has to be maintained consistently across all the examples, and the weights are to be determined accordingly.
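As an illustration, the enumeration tables above can be encoded as simple lookup tables, with the per-attribute distance taken as the absolute difference of the assigned numbers. This is a sketch: the function and table names are ours, and only three of the six attributes are shown; the numeric values are the ones listed in Section 3.2.

```python
# Enumeration tables from Section 3.2 (subset; keys are illustrative labels).
GENDER = {"he": 1, "male_name": 2, "neuter": 4, "unclear": 5,
          "female_name": 8, "she": 9}
ARTICLE = {"the": 1, "an": 8, "a": 9}
TENSE = {"past perfect": 1, "past continuous": 1.5, "past indefinite": 2,
         "present perfect": 3, "present indefinite": 5,
         "present continuous": 6, "future indefinite": 8,
         "future continuous": 8.5}

def attribute_distance(table, value1, value2):
    """Distance d_j between two symbolic values under one enumeration."""
    return abs(table[value1] - table[value2])

print(attribute_distance(TENSE, "present continuous", "future indefinite"))  # → 2
```

A consistent set of such tables, one per attribute, yields the discrepancy vector (d1, ..., d6) for any sentence pair.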

3.3 Determination of Weights

The weights are determined by taking into consideration how people perceive similarity between pairs of sentences. The study was made in the following way. We surveyed 50 respondents with a set of sentence pairs. There were 15 such pairs, and each respondent was asked to mark each pair on a scale of 0.0 to 1.0 as per their intuitive feeling of similarity between the sentences. Thus we have 750 responses, which have been statistically analysed. The purpose was twofold:

A) What is the relative importance of these factors? The sample sentence pairs were carefully designed so that they vary in different components: gender, number, tense, verb, type of objects and article. In the Appendix we provide the set of sentence pairs. There are 5 sets, each having one main sentence and three other sentences, which are pragmatically, syntactically and semantically similar to the main sentence, respectively. Thus from each set three pairs are made, and overall 15 such pairs can be formed. We therefore have 750 observations; let us call them Y1, Y2, ..., YN (where N = 750). According to our model, each observation is a weighted sum of the individual discrepancies of the attributes of the underlying sentence pair. Thus the kth observation (k = 1, ..., N) can be written as:

Yk = w1 * x1k + w2 * x2k + w3 * x3k + ... + wa * xak,

where a is the number of attributes considered. We need to estimate the values of the wj's from our sample. This is done by applying multiple regression to the data. We write the entire data in matrix form: Xw = Y, where X is a 2-dimensional matrix of size 750 x 6 representing the individual differences, and Y is the 750 x 1 vector of the similarity values given by the respondents. The X matrix was obtained by considering each pair of sentences and computing their differences with respect to the attributes in terms of the artificial numbers associated with each possible value. The solution may be obtained by solving the set of normal equations XTXw = XTY. The above scheme was implemented with the six attributes discussed above, and the w vector was computed by solving these equations using singular value decomposition.

B) Are all the respondents consistent? A primary task before task (A) above is to see whether each respondent is consistent in his/her judgement. Respondents may have different views about the three similarities (pragmatic, semantic and syntactic), but each individual is expected to be consistent in his/her priorities over the three types. Further, we need to test whether human perception of the different types of similarity can be considered uniform. Statistically this may be tested by forming appropriate hypotheses and using the analysis of variance technique of R. A. Fisher [Bradt, 1970].
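The weight-estimation step can be sketched as follows with NumPy. The data here is synthetic (the paper's survey responses are not reproduced), and the "true" weights are illustrative placeholders; `np.linalg.lstsq` solves the least-squares problem via singular value decomposition, which for a full-rank X is equivalent to solving the normal equations XTXw = XTY.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" weights for the six attributes (illustrative only).
true_w = np.array([0.40, 0.25, 0.15, 0.10, 0.07, 0.03])

# X: 750 x 6 matrix of attribute-wise differences between sentence pairs;
# Y: the 750 survey responses, here simulated as X @ true_w plus noise.
X = rng.uniform(0.0, 8.0, size=(750, 6))
Y = X @ true_w + rng.normal(0.0, 0.05, size=750)

# SVD-based least squares: recovers w such that X w ≈ Y.
w, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.round(w, 2))
```

With 750 observations and small noise, the recovered weights match the generating weights closely, which is the behaviour the regression step relies on.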

3.4 Experiments and Results

First we consider the test of consistency for pragmatic similarity. Let the response of the ith respondent to the jth pair (considering only sentence (a) of each set) be xij. Table 1 provides the responses of the first 10 respondents. We then resort to the following model. Let µ be the average response, i.e. µ = (1 / (5 * 50)) Σ Σ xij, where i = 1, ..., 50 and j = 1, ..., 5. We can then specify µi. = µ + ai, where ai is the deviation of the mean for the ith person from the total mean, such that Σ ai = 0 for i = 1, ..., 50. Similarly, µ.j = µ + bj, where bj is the deviation of the mean for the jth pair from the total mean, with Σ bj = 0 for j = 1, ..., 5. We can now express our problem through the model xij = µ + ai + bj + εij, where εij is the random error corresponding to the observation.

Pair j \ Respondent i:   1    2    3    4    5    6    7    8    9    10
        1               0.6  1.0  0.9  1.0  1.0  1.0  1.0  0.9  0.8  0.8
        2               0.6  1.0  1.0  1.0  1.0  0.1  0.8  0.9  0.2  0.9
        3               0.6  0.8  0.5  0.9  0.3  0.7  1.0  0.0  0.4  0.8
        4               0.4  0.8  0.3  0.9  0.3  0.4  0.2  0.7  0.1  0.7
        5               0.7  0.6  0.5  0.9  0.5  0.6  0.8  0.7  0.8  0.9

Table 1: Evaluation of pragmatic similarity for the 5 pairs by 10 respondents.

In order to test whether there is any significant difference in the replies of the respondents, or whether any particular pair was treated differently by the respondents, the hypotheses tested were: H0A: a1 = a2 = ... = a50 = 0, and H0B: b1 = b2 = ... = b5 = 0. Fisher's F-test was applied to test the hypotheses.

The total sum of squares of the deviations is

Q = Σ(i=1..50) Σ(j=1..5) (xij - x̄)² = QA + QB + QW,

where QA = 5 Σ (xi. - x̄)², QB = 50 Σ (x.j - x̄)² and QW = Σ Σ (xij - xi. - x.j + x̄)². If we now define S²A = QA / (50 - 1), S²B = QB / (5 - 1) and S²W = QW / (49 * 4), then both S²A / S²W and S²B / S²W follow F distributions. We can then carry out F-tests to determine whether the hypotheses are accepted. If H0A is accepted, we say that people are consistent about pragmatic similarity. If H0B is accepted, we conclude that each individual is consistent about pragmatic similarity. Similar tests can be carried out with respect to semantic and syntactic similarity.

With our sample data we found that with respect to pragmatic and semantic similarity both H0A and H0B are accepted. But for syntactic similarity, only H0B was accepted, while H0A was rejected. This implies that although each individual is consistent in his/her own evaluation, the respondents in general have different perceptions of syntactic similarity. In this respect we come to the following conclusion. Pragmatics and semantics are the key aspects for measuring sentences from the understanding point of view; evidently, that is consistent with our general notion of similarity. However, with respect to EBMT we have observed (see Section 2.1) that a syntactically similar example is often more helpful in generating a translation than semantically or pragmatically similar examples. This gives us a cue for organising the knowledge base and retrieving text examples. We suggest that the translation examples be stored hierarchically: the syntactic features should provide the primary indices, and semantic and pragmatic similarities are to be evaluated within each group for obtaining the closest match. Any similarity function should be defined accordingly.

Our next task was to obtain the values of the weights, i.e. w1, w2, ..., w6.
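The ANOVA decomposition and F statistics described above can be sketched as follows. The 50 × 5 response matrix here is randomly generated for illustration, not the paper's survey data; in practice the resulting FA and FB would be compared with the critical values of the F(49, 196) and F(4, 196) distributions (e.g. via scipy.stats.f.ppf) to accept or reject H0A and H0B.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.3, 1.0, size=(50, 5))   # 50 respondents x 5 pairs (synthetic)

xbar = x.mean()                            # grand mean x̄
row = x.mean(axis=1, keepdims=True)        # x_i. : mean per respondent
col = x.mean(axis=0, keepdims=True)        # x_.j : mean per pair

QA = 5 * ((row - xbar) ** 2).sum()         # between-respondent sum of squares
QB = 50 * ((col - xbar) ** 2).sum()        # between-pair sum of squares
QW = ((x - row - col + xbar) ** 2).sum()   # residual sum of squares

S2A, S2B, S2W = QA / 49, QB / 4, QW / (49 * 4)
FA, FB = S2A / S2W, S2B / S2W              # F statistics for H0A and H0B

print(round(FA, 3), round(FB, 3))
```

The identity Q = QA + QB + QW holds exactly for this two-way layout with one observation per cell.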
These values can be obtained by solving the set of linear equations XTXw = XTY. Evidently, the values one gets depend upon the number of attributes being considered. In our case, as we considered the six attributes discussed in Section 3.2, we found their relative importance to be as follows:

• The verb has the highest importance, followed by the tense.
• The gender of the subject, the number (i.e. singular or plural) of the subject and the type of objects (i.e. whether one or two) also have significant weights.
• However, the weight for the article is almost negligible.

Evidently, the results obtained here are preliminary. More attributes are to be considered, with their appropriate sets of values, and the weights of the different attributes are to be computed accordingly.

4. Concluding Remarks

This work aims at providing an objective way to measure similarity between sentences. The scheme assumes that the overall similarity is a weighted sum of the discrepancies of individual components. However, there is no systematic way to decide what the components are and how their relative distances can be measured. The first task towards the above objective is therefore to find the key components that contribute to the measurement of similarity between sentences. An exhaustive list needs to be determined; for each such component the possible values are to be identified and their relative distances quantified. This paper presents a framework for this purpose. The proposed scheme was tested on a sample of 50 respondents on 15 pairs of sentences, and the results are encouraging. However, the work is still very preliminary, and more comprehensive tests are required to arrive at a conclusion. Further, the work is so far based on simple sentences; more complex sentence structures are also to be considered. Our final aim is to provide a composite function for the computation of overall similarity between two sentences. This objective function can then be used for efficient text retrieval to facilitate EBMT. The present work is the first step towards that goal.

References

[Barletta, 1988] Barletta R. and Mark W. (1988). "Explanation-Based Indexing of Cases". Proc. AAAI-88, pp. 541-546.
[Bradt, 1970] Bradt N. H. (ed.) (1970). "Statistical and Computational Methods in Data Analysis".
[Brown, 1996] Brown R. D. (1996). "Example-Based Machine Translation in the Pangloss System". COLING-96: The 16th Intl. Conf. on Computational Linguistics, Copenhagen, pp. 169-174.
[Chatterjee, 1994] Chatterjee N. and Campbell J. A. (1994). "Adaptation Through Interpolation for Time-Critical Case-Based Reasoning". Topics in Case-Based Reasoning, Lecture Notes in Artificial Intelligence, Vol. 837, ed. Wess S. et al., Springer-Verlag, pp. 221-233.
[Hammond, 1986] Hammond K. (1986). "CHEF: A Model of Case-Based Planning". Proc. AAAI-86, pp. 267-271.
[Nagao, 1984] Nagao M. (1984). "A Framework of a Mechanical Translation Between Japanese and English by Analogy Principle". Artificial and Human Intelligence, ed. Elithorn A. and Banerji R., North-Holland, pp. 173-180.
[Nirenburg, 1993] Nirenburg S. (1993). "Two Approaches to Matching in Example-Based Machine Translation". Proc. TMI-93, Kyoto, Japan.
[Schank, 1975] Schank R. C. (1975). "Conceptual Information Processing". North-Holland, Amsterdam.
[Wolstencroft, 1993] Wolstencroft J. (1993). "A Unifying Approach to Reasoning by Analogy". Ph.D. Thesis, University of London.

APPENDIX

The survey sheet.

1. Shilpa is drinking pepsi.
(a) Pepsi is being drunk by Shilpa.
(b) Niti is eating softy.
(c) Shilpa loves drinking pepsi.

2. IIT is located in Hauz Khas.
(a) Hauz Khas has IIT in it.
(b) Hauz Khas is located in IIT.
(c) IIT in Hauz Khas has a nice location.

3. Ram purchased two books.
(a) Two books were purchased by me.
(b) I ate two sweets.
(c) I purchased two books for Ram.

4. I gave Mohan a book.
(a) Mohan got a book.
(b) I gave Ram a pen.
(c) I gave Mohan a pen and a book.

5. Deepa is running well.
(a) Deepa runs well.
(b) Rashi is walking.
(c) Well run Deepa.

Note: In all the 5 cases, sentence (a) is pragmatically similar, sentence (b) is syntactically similar and sentence (c) is semantically similar to the main sentence.
