Session S3F

IMPROVING THE EVALUATION OF PROGRAMMING COURSES

Yu-Liang Chi 1

1 Yu-Liang Chi, Assistant Professor, National Chung Cheng University, 160 San-Hsing, Min-Hsiun, Chia-Yi 621, Taiwan, R.O.C., [email protected]

Abstract - Distance learning relies upon communication infrastructure to link knowledge providers and requesters in different locations. Several technologies, such as radio, television, and the Internet, support synchronous or asynchronous distance learning. Although new technologies solve the problem of physical separation, the evaluation of learners remains problematic. This study proposes an approach for evaluating the learning performance of students in Web-based distance learning, focusing exclusively on programming courses. Two systems, one for accelerating on-line examinations and one for semi-automatically grading assignments in programming courses, are also presented. The proposed method allows students, using a Web browser, to take examinations and submit their programming assignments online, and to obtain their grades immediately. The systems were evaluated at Chung Yuan Christian University (CYCU) in three programming classes during the 2002 spring and fall semesters. The evaluations were very positive.

Index Terms - Distance Learning, Evaluation, Grading System, Online Testing

INTRODUCTION

Distance learning has created a new era in education. It promotes learning anywhere and emphasizes the convenience of continued learning. Distance-learning facilities, organizations, and participants throughout the world are now involved in gaining knowledge efficiently. Web-based systems are being developed in various industries, and universities have not been able to ignore their effectiveness in education. Nowadays, more universities and organizations are developing distance-learning approaches that involve radio, television, and the Internet. The various delivery media can be categorized as synchronous or asynchronous. In most cases, distance learning clearly promotes the distribution of knowledge. However, a critical issue associated with distance learning concerns the learner's performance. Universities that grant diplomas or degrees have hesitated to use distance learning in regular education because of the ambiguity in measuring the quality of learning. This study seeks to develop a system that reduces the time required to complete grading.

First, on-line testing that takes minimal time and provides quick feedback is needed to improve the evaluation of performance and to support distance learning. Online testing asks questions over the Internet, and the server scores the answers immediately. The outstanding issue concerns performance when many clients take examinations simultaneously. Second, in computer programming and software-related courses, evaluating and grading students' programs takes a long time. Traditionally, the teacher assigns, and the students submit, their assignments either in person or via e-mail. Grading the assignments may take a long time because of their complexity. Many computer and information-technology classes are large, so an instructor cannot easily return graded homework in a timely manner.

LITERATURE REVIEW

David Jackson [3] has authored work on software systems for grading student computer programs; Chi, Yu-Liang and Philip Wolfe developed the DASGS [8] and the WASGS [7]; David Jackson and Michael Usher proposed the ASSYST system [4]; and R. G. Ho developed computerized adaptive testing and on-line testing [9] [10]. The grading systems were developed to grade students' programming exercises using a graphical interface or browser that directs all aspects of the grading process. The grading approach of ASSYST involves a checklist covering correctness, efficiency, style, complexity, and test data. ASSYST is a mainframe-centric application and is not deployed in a distributed environment. DASGS is based on a three-tier client/server architecture and utilizes the CORBA model. The only weakness of DASGS was the time required to download the Java applets and the object request broker (ORB) middleware from the Web server at runtime. DASGS was replaced by WASGS, which leveraged middleware, since WASGS was based on an advanced application server within a three-tier client/server architecture.

IMPROVING ONLINE TESTING

Online testing is improved by reducing the network traffic and the server load. Traditionally, an online testing model creates a library of questions on the server side and uses specific mechanisms to set an examination to be taken by connected clients (Fig. 1). Two main categories of this model exist: passing on all the questions simultaneously and passing on one question at a time. The first approach is inflexible and naive but performs relatively well; since the examination document is downloaded as a static page, no performance problems arise. The second approach is interactive on all sides. Since a Web-based system is loosely connected, every access depends on basic networking procedures such as handshake and confirmation, so communication traffic increases with the number of connected clients. Previous research has already addressed the performance issues of the interactive model [9] [10].

FIGURE 1 ONLINE TESTING MODELS

A new model that combines the advantages of the two approaches above is designed to solve these problems. In this design (Fig. 2), clients request an examination package that includes the client-side applications as well as the examination document; the package is downloaded and unpacked on the client side. When students take the test, they actually interact with their own systems, and only the final results are sent back to the server. Since the whole procedure needs only limited communication with the server, the networking traffic problem is solved. In addition, the examination questions can be presented to each user in a random order, so cheating can be reduced even when a test is given in a centralized environment, for example, with students sitting side by side while taking the online test. This design was evaluated in two classes at CYCU.

FIGURE 2 IMPROVING ONLINE TESTING MODEL
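To make the package-based flow concrete, the following is a minimal client-side sketch, assuming a hypothetical package URL and working directory; the paper does not specify the actual protocol or file layout. It fetches an examination package over HTTP and unpacks it locally, after which the unpacked testing application runs without further server interaction until the answers are submitted.

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ExamPackageClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical package URL and working directory (not from the original system).
        URL packageUrl = new URL("http://exam.example.edu/pack?course=cs101");
        File workDir = new File("exam-work");
        workDir.mkdirs();

        // Download the examination package and unpack its entries
        // (the examination document plus the client-side testing application).
        InputStream in = packageUrl.openStream();
        ZipInputStream zip = new ZipInputStream(in);
        byte[] buffer = new byte[4096];
        ZipEntry entry;
        while ((entry = zip.getNextEntry()) != null) {
            File outFile = new File(workDir, entry.getName());
            if (entry.isDirectory()) {
                outFile.mkdirs();
                continue;
            }
            outFile.getParentFile().mkdirs();
            OutputStream out = new FileOutputStream(outFile);
            int count;
            while ((count = zip.read(buffer)) > 0) {
                out.write(buffer, 0, count);
            }
            out.close();
            zip.closeEntry();
        }
        zip.close();

        // From here on, the unpacked testing application runs locally and only
        // contacts the server again to submit the student's answers.
    }
}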

The downloaded system randomly selects a question from all the examination questions and repeats until every question has been asked. In the implementation phase, a "double circle-linked list" random-shuffling mechanism (Fig. 3) was used to select questions. The server load is greatly reduced because each client is responsible for obtaining its questions in a random order. This process was called a "reversed lottery" because the selection is made by each client rather than by the server.

FIGURE 3 DOUBLE LINKED LIST FOR REVERSED LOTTERY
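The paper does not give the shuffling code, so the following is a minimal sketch of one plausible realization of the client-side "reversed lottery" (class and method names are illustrative): the questions are linked into a circular doubly linked list, each draw walks a random number of steps around the circle and unlinks the node it lands on, and the process repeats until every question has been asked exactly once.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ReversedLottery {

    // Node of a circular doubly linked list holding one question.
    private static class Node {
        final String question;
        Node prev, next;
        Node(String question) { this.question = question; }
    }

    private Node current;                  // any node of the circle, or null when empty
    private int remaining;
    private final Random random = new Random();

    public ReversedLottery(List<String> questions) {
        for (String q : questions) {
            Node node = new Node(q);
            if (current == null) {
                node.prev = node.next = node;     // first node points to itself
            } else {
                node.prev = current;              // splice the node in after 'current'
                node.next = current.next;
                current.next.prev = node;
                current.next = node;
            }
            current = node;
        }
        remaining = questions.size();
    }

    // Returns the next question in random order, or null when all have been asked.
    public String draw() {
        if (remaining == 0) return null;
        int steps = random.nextInt(remaining);    // random walk around the circle
        for (int i = 0; i < steps; i++) current = current.next;
        String picked = current.question;
        if (remaining == 1) {
            current = null;
        } else {                                  // unlink the drawn node
            current.prev.next = current.next;
            current.next.prev = current.prev;
            current = current.next;
        }
        remaining--;
        return picked;
    }

    public static void main(String[] args) {
        List<String> questions = new ArrayList<String>();
        questions.add("Q1"); questions.add("Q2"); questions.add("Q3");
        ReversedLottery lottery = new ReversedLottery(questions);
        String q;
        while ((q = lottery.draw()) != null) System.out.println(q);
    }
}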

In the testing phase, students must submit some required information, download the assigned examination package, and then take the test, as presented in Fig. 4. The system notifies the student before the time runs out. When the test is over and the answers have been submitted, a score is sent to a database and to the specific client, as presented in Fig. 5.

FIGURE 4 A SAMPLE PAGE OF ONLINE TESTING

FIGURE 5 A SAMPLE PAGE OF GRADING RESPONSE

ONLINE ASSIGNMENT OF GRADES FOR PROGRAMMING

Submitting assignments over the Internet normally involves uploading files to a file server. Although the benefits of the convenience of such a system are obvious, the uploading mechanism by itself does not add value. If most aspects of grading an assignment are routine, as is the case for programming assignments, then the computer can replace the human. According to a survey of 14 teaching assistants in programming courses, grading programming assignments in the traditional manner has three problems:
• Submissions may not be well organized, and the grading process takes a long time.
• Submitting and returning the assignments are time-consuming and inconvenient.
• Feedback between the instructor and the students is delayed.

Three formulae for the time required to grade programming assignments are presented below. The first formula gives the time taken when a human being performs every step manually; this time increases with the number of students. The second formula concerns electronically assisted submission, such as when assignments are e-mailed, so some of the work is eliminated. The third formula avoids all unnecessary routine work and shares the tasks of the grading procedure.

E_1(T) = \sum_{i=1}^{n} (s_i + c_i + r_i + t_i \cdot k)   (1)

E_2(T) = \sum_{i=1}^{n} (c_i + r_i + t_i \cdot k)   (2)

E_3(T) = \sum_{i=1}^{n} (t_i \cdot k)   (3)

where
• E(T): estimated total grading time
• s: time to grade an assignment
• c: time to compile each submitted program
• r: time to register each submitted program
• t: time to check an answer
• n: total number of students
• i: student index
• k: number of submission(s) per assignment [1..n]

To compare the three approaches, the experimental parameters are set to s=2, c=1, r=1, t=1, n=60, and k=1, with times in minutes. Then:

E_1(T) = \sum_{i=1}^{60} (2 + 1 + 1 + 1 \cdot 1) = 300 minutes = 5 hours   (1)

E_2(T) = \sum_{i=1}^{60} (2 + 1 + 1 \cdot 1) = 240 minutes = 4 hours   (2)

E_3(T) = \sum_{i=1}^{60} (1 \cdot 1) = 60 minutes = 1 hour   (3)
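For concreteness, the following minimal sketch evaluates formulas (1) and (3) with the parameter values given above (class and method names are illustrative); it reproduces the 300-minute fully manual estimate and the 60-minute semi-automatic estimate.

public class GradingTimeEstimate {

    // Sums the given per-student step times and multiplies by the number of students.
    static int totalMinutes(int n, int... perStudentSteps) {
        int perStudent = 0;
        for (int step : perStudentSteps) perStudent += step;
        return n * perStudent;
    }

    public static void main(String[] args) {
        int s = 2, c = 1, r = 1, t = 1, k = 1, n = 60;   // minutes per step, 60 students
        // E1: fully manual grading, formula (1): s + c + r + t*k per student
        System.out.println("E1 = " + totalMinutes(n, s, c, r, t * k) + " minutes");  // 300
        // E3: semi-automatic grading, formula (3): only t*k per student
        System.out.println("E3 = " + totalMinutes(n, t * k) + " minutes");           // 60
    }
}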


The system reduces the routine human work of grading programming assignments. The design provides a feasible interface with which a grader can check assignments and register the final grade of each assignment more easily (Fig. 6). In the 2002 spring and fall semesters, the system was evaluated by applying it to several assignments in three programming classes, and the evaluations were very positive. The proposed model still depends on a human to check the answers, so it is called a semi-automatic grading system. Since the answers to programming assignments may be presented in various formats, including graphics and tables, developing an artificial-intelligence checker is difficult. Consequently, this design focuses on reducing routine human work and improving overall performance and quality.

FIGURE 6 ARCHITECTURE FOR GRADING PROGRAMMING ASSIGNMENTS (three-tier design: clients in the presentation tier send requests over the Internet to a Web server; in the logic tier, a Java Web engine invokes servlets that trigger file-processing, runtime, and database services; the data tier holds the database; result pages are generated and sent back to the clients)
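To make the servlet-based flow of Fig. 6 concrete, the following is a minimal sketch of a grade-registration servlet, assuming hypothetical request parameters and omitting the file-processing, runtime, and database services that the real logic tier would invoke; it only shows how a request is received and a result page is generated and sent back (steps 3, 6, and 7).

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GradeSubmissionServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Step 3: the Web engine invokes the servlet with the client's request.
        // Parameter names are hypothetical, not taken from the original system.
        String studentId = req.getParameter("studentId");
        String assignmentId = req.getParameter("assignmentId");
        String grade = req.getParameter("grade");

        // Steps 4-5 (stubbed): trigger file-processing, runtime, and database
        // services; the real system would call its own service classes here.

        // Steps 6-7: generate an HTML result page and send it back to the client.
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>");
        out.println("<p>Recorded grade " + grade + " for student " + studentId
                + ", assignment " + assignmentId + ".</p>");
        out.println("</body></html>");
    }
}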

CONCLUSION

Web technologies provide flexibility, scalability, and fast performance when used to perform tasks in place of humans. This paper presented two systems, on-line testing and the semi-automatic grading of assignments, to improve the evaluation process in distance learning. A combination of emerging technologies is used to develop value-added systems that solve the bandwidth problems of online testing and integrate the procedures involved in grading assignments.

ACKNOWLEDGMENT

The author would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract No. NSC 90-2511-S-033-005.

REFERENCES

[1] Brodie, M. and Stonebraker, M., "Migrating Legacy Systems", McGraw-Hill, 1995.
[2] Hunter, J., "Java Servlet Programming", O'Reilly & Associates, Inc., 1998.
[3] Jackson, David, "A Software System for Grading Student Computer Programs", J. Computer and Education, Vol. 27, No. 3/4, 1996, pp. 171-180.
[4] Jackson, David and Usher, Michael, "Grading Student Programs using ASSYST", Proc. of the 21st SIGCSE Technical Symposium on Computer Science Education, 1997.
[5] Sun Microsystems, Inc., "Java Development Kit (version 1.4) documentation", Sun Microsystems, Inc., 2002.
[6] Sun Microsystems, Inc., "Java 2 Enterprise Edition (J2EE) documentation", Sun Microsystems, Inc., 2002.
[7] Chi, Yu-Liang and Wolfe, P. M., "A Web automatic software grading system", The 50th Industrial Engineering Solutions Conference, 1999.
[8] Wolfe, P. M., Chi, Yu-Liang, and Fukunari, Miki, "Distributed Automatic Software Grading System (DASGS)", Arizona State University CIEE/CEAS Program Tech. Report, 1998.
[9] Ho, R. G., "Computerized adaptive testing", Psychological Testing, Vol. XXXVI, pp. 117-130, 1989.
[10] Ho, R. G., "The effects of the reviewing and time-limited on computerized on-line testing", Final Report of the National Science Council Project (NSC 84-2511-S-003-023), 1995.
