Proceedings of the 33rd Hawaii International Conference on System Sciences - 2000

A Methodology for Evaluating Grades of Journals: A Fuzzy Set-based Group Decision Support System Efraim Turban Information Systems Department, City University of Hong Kong, and Hong Kong University of Science & Technology [email protected]

Duanning Zhou Information Systems Department, City University of Hong Kong, and Zhongshan University, Guangzhou, China [email protected]

Abstract

Many universities, research institutions and government agencies continuously attempt to grade or rank journals according to their academic value. Such grading is needed both for personnel decisions and for funding and resource-allocation purposes. The grading of journals draws on both objective information, such as the impact ratios of the journals, and subjective information, such as experts' judgments about the journals. Most existing journal grade evaluation methods consider only one of these aspects. This paper provides a fuzzy set-based group decision support model that integrates objective and subjective evaluations into a comprehensive method for evaluating grades of journals. It also presents a fuzzy set approach to deal with the imprecise and missing information inherent in the evaluation process and in subjective information. The system is available on the Web.

Jian Ma Information Systems Department, City University of Hong Kong, Kowloon Tong, Hong Kong [email protected]

1. The research issues

Evaluating the quality of academic journals is required for several personnel decisions, such as recruiting, promotion, tenure and retention. Such evaluation is also done for merit increases and for the determination of research funding. Many institutions use a formal grading or point system for such evaluations. In most cases the evaluation is done by a group (a committee or a panel), which complicates the evaluation process. For years, researchers in several disciplines have attempted to find the most appropriate formula for such evaluations. Research on this topic has been conducted in economics [10], accounting [1], finance [3][11], management [5], information systems [6][7][8][12][13][15][16], and construction management [17]. Unfortunately, there is no consensus on how best to conduct the journal evaluation.

The methodologies proposed in these studies can be classified as either subjective or objective, depending on how the decision information is obtained and used. The subjective approach, also called the perception analysis approach, uses questionnaires to solicit subjective information from experts such as deans, department heads, renowned practitioners, and/or academic staff members. The collected information is compiled, and a ranking of the journals is produced from the respondents' collective perceptions. Different researchers have used different models for aggregating subjective opinions; the work of Coe and Weinstock [3] is an example of ranking finance journals based on respondents' perceptions. The objective approach, also called the citation analysis approach, determines the rankings of journals based on various forms of citation counts, tabulating citations from a set of base journals, or a set of selected articles, over a certain time period. For example, Liebowitz and Palmer [10] used a citation approach to rank economics journals, and Holsapple et al. [8] employed a citation analysis methodology to rank information systems research journals. Citation information is publicly available in the form of "total cites", "immediacy index", "total articles", "cited half-life" and "impact ratio (or factor)". In this paper we use the impact ratio, which is defined as "the average number of times articles published in a specific journal are cited during the year they were published". Information about the impact ratios of different journals is

0-7695-0493-0/00 $10.00 (c) 2000 IEEE


available in sources such as the Journal Citation Reports, a CD-ROM database found in most libraries and updated annually.

Evaluations and rankings of journals determined by subjective approaches alone can be influenced by biases, or by lack of sufficient knowledge or experience. Objective approaches rank journals by making use of citation data, but they usually disregard the subjective judgment of experts completely. Clearly, neither the subjective nor the objective approach by itself is best for the evaluation, so it makes sense to combine the two. However, despite the considerable research conducted on evaluation, there is almost no research on how to integrate the two approaches. In addition, there is very little research on two related issues: how to deal with incomplete subjective or objective information, and how to transform the evaluations into a tangible journal grade.

This paper attempts to fill this gap by proposing a methodology that deals with the above three research issues. The methodology was designed to fit the process of funding research in Hong Kong, but it can easily be modified to cover other situations. The Hong Kong process, which is described later, combines objective information, based on impact ratios and on historical grades, with subjective information solicited from experts organized in disciplinary panels. The Hong Kong system assigns a grade of A, B, C or zero to each journal.

[Figure 1 flow chart: Start → input data and initial processing: Step 1, compute the membership function and find the membership degree for each journal (Vk); Step 2, experts' judgement (Vik); Step 3, setting up weights (Wk) → integration: Step 4, consolidating judgments (Ek′); Step 5, combining objective and subjective information (Ek) → finalization: Step 6, adjusting for the weights (Yk); Step 7, is there a consensus? If no, Step 8, sensitivity analysis, and repeat; if yes, stop.]

Figure 1. A flow chart of the proposed methodology


2. A fuzzy set approach

Due to incomplete and uncertain objective information (e.g., citation information may not be available for some journals), as well as lack of sufficient knowledge, experts may find it difficult to express their preferences precisely. Fuzzy set theory [18] is proposed here as a tool for handling imprecise subjective judgments and incomplete objective information. Fuzzy set methods for evaluation have been attempted before. For example, Biswas [2] describes a fuzzy method to evaluate students' answer scripts, and Ross [14] provides a fuzzy method for generic multi-criteria evaluation. However, none of the previous methods solves our research problems. The proposed approach analyzes the past grades assigned to journals in an attempt to identify the relationship between the assigned grades and the impact ratios of the journals at the time the grades were assigned. Based on the discovered relationship, the model produces a suggested current grading by integrating experts' opinions with the current impact ratios of the journals. A computerized group decision support system based on the proposed evaluation method has been developed and is available on the Web (http://144.214.54.91/zdn).

This paper is organized as follows: Section 3 describes the proposed fuzzy set approach. Section 4 presents an application, and a summary is given in Section 5. Appendix A illustrates a Web-based group decision support system developed for the evaluation of grades of journals in Hong Kong.

3. The proposed methodology

The proposed methodology is composed of eight steps. A schematic view of the steps is shown in Figure 1, together with the output of each step (in parentheses). An explanation of the steps follows; the mathematical description of the methodology is omitted due to limited space. The details of the eight steps are:

Step 1. The relationship between impact ratios and grades

Since journal evaluation is a repetitive process, we can find historical data that show the final grade (or points) assigned to a journal and the impact ratio of the journal at that time. Using the concepts of membership degree and membership function from fuzzy set theory, our methodology expresses these relationships for journals in a related discipline (e.g., marketing, computer science, or MIS). A typical membership function for the three grades A, B, C is shown in Figure 2.

[Figure 2: membership degrees (0-100%) of grades A, B, C plotted against impact ratio.]

Figure 2. The relationship functions of grades A, B, C.

An example of how such a function is derived is provided in Section 4. In generalizing our methodology, any other measure can be used; for example, "total cites" can be analyzed against "points assigned". Once the membership function is computed, we can examine the current impact ratio of each journal and read its membership degree directly from Figure 2. For example, for a journal with an impact ratio of 1.5 or more, grade A is assured. For an impact ratio of 0.9, the membership degrees are 0.2 for A and 0.8 for B. These results are expressed as a vector of membership degrees; in general, such a vector is labeled Vk for journal Jk.

Step 2. Expressing the experts' subjective judgment

Instead of requiring each expert to assign a grade to the journal, we allow him or her to provide likelihood values (membership degrees in fuzzy set terminology). For example, if an expert thinks that a journal's value is "somewhat less than an A", she or he can express it as a four-element membership vector, since four possible grades are permissible in our case. Note that membership values express likelihood; unlike probabilities, their sum does not have to be 1. In our methodology the experts have two choices:

Enter numerical values. Some experts may feel comfortable generating such a vector themselves


. Such a vector is designated Vik. If an expert is certain that a journal belongs to a certain grade, e.g., B, a vector with membership degree 1 for that grade and 0 elsewhere is used.

Enter a qualitative statement. Experts who are not comfortable expressing an opinion numerically may choose from a list of predefined statements, or judgment terms. We present such a list in Section 4 for the Hong Kong case; typical judgment terms are "somewhat less than an A", "better than B", etc. In providing such an option it is necessary to convert the judgment terms to quantitative values in order to execute our computation. The conversion formula (an example of which is provided in Section 4) is known to the experts, and they can change it if they so wish. The result of the conversion is a vector Vik.

Step 3. Assigning weights

Our methodology requires that weights be assigned to express the evaluators' policy regarding the relative importance of the objective vs. the subjective evaluation components. The manner in which we do this is as follows:

• The weights are assigned by the group leader and/or by the panel's experts.
• A total weight of 1 is divided between the impact ratio (objective information) and the judgmental information.
• The weight of the judgmental information can be assigned to each expert individually, or to the panel as a whole (and then subdivided among the experts).
• The weights are assigned for each journal.
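The weight-assignment policy above can be sketched in a few lines of code. This is an assumption-laden sketch, not the authors' formula: the paper does not spell out the adjustment for abstaining experts, so here the judgmental weight is simply split equally among the experts who did respond, with abstainers receiving weight 0 (which matches the worked example in Section 4).

```python
# Sketch (assumed policy, not the paper's exact formula): split a total
# weight of 1 between the impact ratio and a panel of experts, give
# weight 0 to abstaining experts, and share the judgmental weight
# equally among the rest.
def weight_vector(impact_weight, abstained, n_experts):
    """Return [w_impact, w_expert_1, ..., w_expert_n]."""
    judgmental = 1.0 - impact_weight
    active = n_experts - len(abstained)
    per_expert = judgmental / active if active else 0.0
    experts = [0.0 if i in abstained else per_expert for i in range(n_experts)]
    return [impact_weight] + experts

# Six experts, expert 5 (index 4) abstained, impact ratio weighted 0.5:
print(weight_vector(0.5, {4}, 6))  # [0.5, 0.1, 0.1, 0.1, 0.1, 0.0, 0.1]
```

The resulting vector is exactly the Wk used in the Hong Kong example of Section 4.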

In our methodology we consider the availability of information. For some journals no impact ratio is available, and for certain journals an expert may abstain from providing information. In such cases mathematical adjustments are made. The results of step 3 are expressed as a "weight vector", Wk. At this point we have all the input information necessary for the analysis.

Step 4. Consolidating the experts' judgments

The subjective information entered by each expert for each journal (step 2) constitutes a vector, Vik. For a group of experts we can aggregate all these vectors into a matrix, which we call the expert evaluation relation matrix, Ek′. Once the subjective information is aggregated it can be shown to all the experts and consensus may be attempted; many of the methodologies of previous research attempt to do just that. However, we believe that if we provide more information to the experts, the consensus-reaching step (step 7) can be quicker and result in a more accurate evaluation. A well-known dysfunction of a group process is the tendency to settle for a less-than-best solution after lengthy deliberation. Here there is almost no elapsed time between steps 4 and 7: the execution of steps 4-6 takes only a few seconds on the computer.

Step 5. Combining the objective and subjective information

In this step we simply add the objective information Vk to the subjective evaluation matrix. Mathematically this is a simple operation of adding the vector Vk to the matrix Ek′. The result is a new matrix Ek (an integrated evaluation matrix).

Step 6. Adjusting for the weights

The impact of the weights is applied to the matrix Ek by a composition matrix operation Wk ∘ Ek. The result is a fuzzy vector Yk for each journal.

Step 7. Reaching a consensus

The result of step 6, for each journal, is a grade vector which may look like this: Yk = (0.68, 0.36, 0, 0). From this result the experts may infer that the grade of the journal is most likely to be A. If strong disagreement arises, some conflict resolution method can be used, for example a GDSS [9], the Delphi method [4], or some other method.

Step 8. Sensitivity analysis

To help the consensus process and conflict resolution, a sensitivity analysis module can be added. With this module, experts can change their subjective evaluations and/or the weights in order to examine the impact on Yk. If the results are not sensitive, it is likely that a vector such as (0.68, 0.36, 0, 0) will end with a quick A vote. However, if the analysis results in large variations of values, the use of a GDSS or another methodology is highly recommended.

4. Application in Hong Kong

Research is one of the most important functions of a university. To improve research performance, the University Grants Committee (UGC) in Hong Kong conducts a research assessment exercise (RAE)


once every three years. For the purpose of this exercise, local universities form cost centers according to disciplines suggested by the UGC, and academic staff members are assigned to cost centers according to their research disciplines. A staff member submits up to five articles published or accepted within the preceding three years (preferably journal articles) for RAE assessment. The UGC forms disciplinary panels to review the journals and assign a grade (e.g., A: top tier; B: mid tier; C: lower tier) to each journal. When an academic staff member is assessed to have three "B"-grade or better articles published during the assessment period (usually the three years before the assessment date), he or she is most likely to be deemed an active researcher (AR). In the previous two RAE exercises, an AR meant an allocation of research funds to the cost center equivalent to at least his or her salary for the forthcoming three years. Thus, universities in Hong Kong pay a great deal of attention to the RAE exercises.

From the perspective of the universities, it is very important to estimate well in advance the likely grades of different journals. These estimates can be communicated to staff members so that they can identify a set of journals to publish in, as well as decide which journals to include among the five "best" articles (in case they have more than five). Our methodology can be used by the universities to optimize these decisions. It can also be used by the different panels to arrive at their own decisions. The proposed fuzzy evaluation method was tested in evaluating the grades of journals in Science and Engineering in the RAE of the UGC of Hong Kong. The detailed steps are:

(1) Analysis of the relationship between grades of journals and impact ratios

A frequency analysis method was used to find the relationship between grades of journals and impact ratios in the Science and Engineering discipline. The List of Journals in Science & Engineering of the year 1996 (the last time the evaluation was conducted), including the impact ratio and suggested grade of each journal, was used for this purpose. The set of grades of the journals is G = (A, B, C, 0), and the impact ratio set was found to be S ⊂ [0, 6.4]. Because the sample size (285) is not very large, we used the interval [x, x+0.2] as the analysis unit. The resulting frequency distributions for the elements of G are shown in Figure 3 (the frequency of grade A on S), Figure 4 (the frequency of grade B on S), and Figure 5 (the frequency of grade C on S).

[Figure 3: frequency histogram of grade A over impact-ratio bins.]

Figure 3. The frequency of grade A on S.

[Figure 4: frequency histogram of grade B over impact-ratio bins.]

Figure 4. The frequency of grade B on S.

[Figure 5: frequency histogram of grade C over impact-ratio bins from 0.0 to 0.6.]

Figure 5. The frequency of grade C on S.

Based on the distribution of the grades, the percentages of the grades A, B, C were calculated as shown in Figure 6 (the percentages of grades A, B, C on S).
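The frequency analysis behind Figures 3-6 can be sketched as follows. The sample data below are hypothetical, not the actual 285-journal RAE list: each journal is binned by impact ratio in 0.2-wide intervals, grades are counted per bin, and the counts are converted to percentages as in Figure 6.

```python
# Sketch of the frequency/percentage analysis (hypothetical sample data):
# bin historical journals by impact ratio in 0.2-wide intervals, count
# each grade per bin, then convert the counts to percentages.
from collections import Counter

def grade_percentages(journals, bin_width=0.2):
    """journals: iterable of (impact_ratio, grade) pairs."""
    bins = {}
    for ratio, grade in journals:
        b = int(ratio / bin_width)            # index of interval [b*w, (b+1)*w)
        bins.setdefault(b, Counter())[grade] += 1
    result = {}
    for b, counts in sorted(bins.items()):
        total = sum(counts.values())
        lo = round(b * bin_width, 1)
        result[(lo, round(lo + bin_width, 1))] = {
            g: 100.0 * counts[g] / total for g in counts
        }
    return result

sample = [(0.1, 'C'), (0.3, 'C'), (0.3, 'B'), (1.0, 'B'),
          (1.1, 'A'), (1.1, 'B'), (2.0, 'A')]
for interval, pct in grade_percentages(sample).items():
    print(interval, pct)
```

The per-bin percentage curves produced this way are what the straight-line approximation in Figure 7 is fitted to.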


[Figure 6: percentages A%, B%, C% of grades A, B, C (0-100%) plotted against impact ratio from 0.0 to 2.9.]

Figure 6. The percentages of grades A, B, C on S.

Figure 6 reflects the relationship between impact ratios and grades A, B, and C. Using fuzzy set theory, we interpret this relationship as the possibility (likelihood) that a journal with a given impact ratio belongs to each grade in the grade set. We approximated the curves in Figure 6 by using straight lines to connect the main points (see Figure 7) and obtained the membership functions representing grades A, B, C shown in Figure 2.

[Figure 7: the percentage curves of Figure 6 approximated by straight-line segments.]

Figure 7. Straight lines fitted to Figure 6.

The membership functions were computed as the following formulas:

µA(x) = 0 for x < 0.75; (x − 0.75)/0.75 for 0.75 ≤ x ≤ 1.5; 1 for 1.5 < x.

µB(x) = 0 for x < 0.1; (x − 0.1)/0.45 for 0.1 ≤ x ≤ 0.55; 1 for 0.55 < x < 0.75; (1.5 − x)/0.75 for 0.75 ≤ x ≤ 1.5; 0 for 1.5 < x.

µC(x) = 0 for x < 0.1; (0.55 − x)/0.45 for 0.1 ≤ x ≤ 0.55; 0 for 0.55 < x.

µ0(x) = 1 for x < 0.1; 0 otherwise.

(2) Experts' subjective judgments

The judgment terms are 4-tuples corresponding to the grade set G, as shown in Table 1. The numerical 4-tuples assigned to the terms were set by the authors; these values can be changed by the members of each panel.

Table 1. The judgment terms (normalized)

Judgment terms: Absolutely belongs to A; A somewhat less than A; Less than A; Much better than B; Better than B; Absolutely belongs to B; A somewhat less than B; Less than B; Much better than C; Better than C; Absolutely belongs to C; A somewhat less than C; Less than C; Much better than 0; Better than 0; Absolutely belongs to 0; No information, with the 4-tuple (φ, φ, φ, φ).
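The piecewise membership functions µA, µB, µC, µ0 above can be transcribed directly into code. The sketch below simply restates the formulas; for instance, an impact ratio of 1.2 yields the vector Vk = (0.6, 0.4, 0, 0) used in step (3) of this section.

```python
# Direct transcription of the piecewise membership functions.
def mu_A(x):
    if x < 0.75: return 0.0
    if x <= 1.5: return (x - 0.75) / 0.75
    return 1.0

def mu_B(x):
    if x < 0.1: return 0.0
    if x <= 0.55: return (x - 0.1) / 0.45
    if x < 0.75: return 1.0
    if x <= 1.5: return (1.5 - x) / 0.75
    return 0.0

def mu_C(x):
    if x < 0.1: return 0.0
    if x <= 0.55: return (0.55 - x) / 0.45
    return 0.0

def mu_0(x):
    return 1.0 if x < 0.1 else 0.0

def impact_vector(x):
    """Membership-degree vector Vk = (A, B, C, 0) for impact ratio x."""
    return (mu_A(x), mu_B(x), mu_C(x), mu_0(x))

print(tuple(round(v, 2) for v in impact_vector(1.2)))  # (0.6, 0.4, 0.0, 0.0)
```

As a cross-check, an impact ratio of 0.9 gives (0.2, 0.8, 0, 0), matching the reading-off example given for Figure 2 in step 1.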


(3) Integrating the experts' subjective opinions

Suppose six experts participate in the journal evaluation, and the objective and subjective data of journal Jk are as follows:

Journal name: Jk
Impact ratio: 1.2
Judgment of expert 1: Much better than B
Judgment of expert 2: A somewhat less than A
Judgment of expert 3: Absolutely belongs to B
Judgment of expert 4: Absolutely belongs to A
Judgment of expert 5: No information
Judgment of expert 6: (entered numerical data)

From the membership functions, we get the impact ratio vector Vk = (0.6, 0.4, 0, 0). From Table 1, we get the evaluation matrix of the experts' subjective judgments, Ek′, whose rows are:

(0.5, 0.5, 0, 0)
(0.9, 0.1, 0, 0)
(0, 1, 0, 0)
(1, 0, 0, 0)
(φ, φ, φ, φ)
(0.9, 0.3, 0, 0)

We add Vk as the first row of Ek′ and obtain the evaluation matrix Ek, whose rows are:

(0.6, 0.4, 0, 0)
(0.5, 0.5, 0, 0)
(0.9, 0.1, 0, 0)
(0, 1, 0, 0)
(1, 0, 0, 0)
(φ, φ, φ, φ)
(0.9, 0.3, 0, 0)

(4) Assignment of weights to the impact ratio and the experts

Assume the weight assigned to the impact ratio of the journal is 0.5; the combined weight of the experts is then also 0.5. Applying the method of calculating the weights (an expert who provides no information receives weight 0), we get

Wk = (0.5, 0.1, 0.1, 0.1, 0.1, 0, 0.1).

(5) Aggregation of the impact-ratio information and the experts' judgments

We aggregate the impact ratio and the experts' judgments and get

Yk = Wk ∘ Ek = (0.63, 0.39, 0, 0).

(6) Determination of the grade of the journal

From Yk, we can infer that the grade of journal Jk is most likely to be A. As stated earlier, if the team does not accept this recommendation, some consensus-reaching method must be used to resolve the conflict.
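The aggregation Yk = Wk ∘ Ek of steps (3)-(5) can be sketched as below. Two assumptions are made explicit: missing judgments (the paper's φ entries) are represented as None and skipped (their weight is already 0 in Wk), and the composition "∘" is treated as a weighted sum, which reproduces the paper's numbers even though the authors' exact fuzzy composition operator is not spelled out here.

```python
# Sketch of Y_k = W_k ∘ E_k, treating "∘" as a weighted sum (an
# assumption consistent with the numbers in the paper's example).
# Rows of None model the φ entries of an abstaining expert.
def aggregate(weights, matrix):
    n_grades = len(matrix[0])
    y = [0.0] * n_grades
    for w, row in zip(weights, matrix):
        if row[0] is None:          # expert abstained (φ row): skip
            continue
        for j in range(n_grades):
            y[j] += w * row[j]
    return [round(v, 2) for v in y]

V_k = [0.6, 0.4, 0.0, 0.0]                  # from the impact ratio 1.2
E_experts = [
    [0.5, 0.5, 0.0, 0.0],                   # much better than B
    [0.9, 0.1, 0.0, 0.0],                   # somewhat less than A
    [0.0, 1.0, 0.0, 0.0],                   # absolutely belongs to B
    [1.0, 0.0, 0.0, 0.0],                   # absolutely belongs to A
    [None, None, None, None],               # no information (φ)
    [0.9, 0.3, 0.0, 0.0],                   # numerical entry
]
W_k = [0.5, 0.1, 0.1, 0.1, 0.1, 0.0, 0.1]
E_k = [V_k] + E_experts                     # V_k becomes the first row
print(aggregate(W_k, E_k))  # [0.63, 0.39, 0.0, 0.0]
```

The output matches the Yk vector of step (5), from which grade A is inferred.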

5. Summary

Establishing grades of journals is an important yet difficult task. Two distinct approaches are used in evaluating grades of journals: the subjective approach, in which journal grades are determined by experts' subjective judgments, and the objective approach, in which journal grades are determined by objective information such as impact ratios. We presented a fuzzy set approach that integrates the subjective and objective approaches. This synthetic approach comprises eight steps: (1) analyzing the relationship between past journal grades and impact ratios; (2) defining judgment terms with which experts express their subjective judgments; (3) assigning weights to the impact ratio and the experts; (4) integrating the experts' subjective opinions; (5)-(6) aggregating the impact-ratio information and the experts' judgments into a fuzzy evaluation vector; (7)-(8) analyzing the fuzzy vector to decide the final grades of the journals. A decision support system has been developed to demonstrate the feasibility and effectiveness of this approach using a case study in Hong Kong, and it is posted on the Internet (see Appendix A for details).

References

[1] V.A. Beattie and R.J. Ryan, "The impact of non-serial publications on research in accounting and finance," ABACUS, 27(1) (1991) 32-50.
[2] R. Biswas, "An application of fuzzy sets in students' evaluation," Fuzzy Sets and Systems, 74 (1995) 187-194.


[3] R.K. Coe and I. Weinstock, "Evaluating the finance journal: the department chairperson's perspective," Journal of Financial Research, 6 (1983) 345-349.
[4] N.C. Dalkey, The Delphi Method: An Experimental Study of Group Opinion, Santa Monica: The Rand Corporation, 1969.
[5] M.M. Extejt and J.E. Smith, "The behavioral sciences and management: an evaluation of relevant journals," Journal of Management, 16(3) (1990) 539-551.
[6] M.L. Gillenson and J.D. Stutz, "Academic issues in MIS: journals and books," MIS Quarterly, 15(4) (1991) 447-452.
[7] C.W. Holsapple, L.E. Johnson, H. Manakyan, and J.T. Tanner, "A citation analysis of business computing research journals," Information and Management, 25(5) (1993) 231-244.
[8] C.W. Holsapple, L. Johnson, H. Manakyan, and J. Tanner, "Business computer research journals: a normalized citation analysis," Journal of Management Information Systems, 11(1) (1994) 131-140.
[9] L.M. Jessup and J. Valacich, Group Support Systems: New Perspectives, New York: Macmillan, 1993.
[10] S.J. Liebowitz and J.P. Palmer, "Assessing the relative impacts of economics journals," Journal of Economic Literature, 22 (1984) 77-88.
[11] R.H. Mabry and A.D. Sharplin, "The relative importance of journals used in finance research," Journal of Financial Research, 8(4) (1985) 287-296.
[12] G. McBride and R. Rademacher, "A profile of IS research: 1986-1991," Journal of Computer Information Systems, 32(3) (1992) 1-5.
[13] J.H. Nord and G.D. Nord, "MIS research: a systematic evaluation of leading journals," IBSCUG Quarterly, 2(2) (1990) 8-13.
[14] T.J. Ross, Fuzzy Logic with Engineering Applications, Singapore: McGraw-Hill, 1997, 315-317.
[15] J.P. Shim, J.B. English, and J. Yoon, "An examination of articles in the eight leading management information systems journals: 1980-1988," Socio-Economic Planning Sciences, 25(3) (1991) 211-219.
[16] K.A. Walstrom, B.C. Hardgrave, and R.I. Wilson, "Forums for management information systems scholars," Communications of the ACM, 38(3) (1995) 93-102.
[17] C.K. Win, "The ranking of construction management journals," Construction Management and Economics, 15 (1997) 387-398.
[18] L.A. Zadeh, "Fuzzy sets," Information and Control, 8 (1965) 338-353.

Appendix A: Web-based decision support system for the evaluation of journal grades

A group decision support system has been developed as a GDSS server on the Web. The underlying technology of the GDSS server includes Microsoft NT Server, Microsoft SQL Server, and Microsoft Internet Information Server; the GDSS server is built using Microsoft Active Server Pages (.asp files). The GDSS provides four main functions:

1) Journal Information: displays information about the journals.
2) Judgment Term Information: displays information about the judgment terms.
3) Expert Judgment: lets the experts express their subjective judgments. An expert first selects a journal and then gives a judgment opinion on it (see Figure 8 for an example). As an option, several experts can give their judgments simultaneously.
4) Facilitator Functions: after the experts provide their independent judgments on the journals, a facilitator can use the aggregation function to integrate the impact-ratio information and the expert judgment information and compute the aggregated results. Figure 9 shows the aggregation result for the example journal discussed in Section 4. The facilitator functions also include Journal, Judgment Term, Weight, User, etc.; these functions are designed to manage journal information, judgment-term information, weight information, and expert information, respectively.


Figure 8. The page of expert judgments on journals

Figure 9. The page of determining grade of journals
