Curriculum-Based Measurement and Computer-Based Assessment:

Constructing an intelligent, web-based evaluation tool


Samuel A. DiGangi, Ph.D.,
Angel Jannasch-Pennell, Ph.D.,
Chong Ho Yu, Ph.D.,
Sudhakiran V. Mudiam


(1999, November)
Paper presented at the 29th Annual Meeting of the
Society for Computers in Psychology,
Los Angeles, CA



Introduction

Computer-based assessment techniques are becoming increasingly sophisticated in the use of online technology. Computer-based assessment (CBA), also known as computer-aided assessment (CAA) and computer-administered testing (CAT), supplements, and in many ways has replaced, evaluation procedures that have traditionally involved paper and pencil (Fletcher & Collins, 1986/87). The ability to administer and score an assessment quickly and efficiently is one of the main strengths of CBA (Hasselbring, 1984). This is also one of the prime advantages of Curriculum-Based Measurement (CBM), a specific approach to curriculum-based assessment that provides a simple and effective way to monitor student progress on an ongoing basis (Deno, 1985a). This paper examines the foundations of computer-based assessment and curriculum-based measurement and describes the development of an intelligent, task-analytic, web-based assessment tool.

Curriculum-Based Measurement

Salient characteristics of CBM include its focus on direct, repeated measurement of student performance in the curriculum using production-type responses (Marston, 1989). Shapiro and Kratochwill (1988) define direct measurement as representing "the behavior of interest to be assessed by noting its occurrence. As such, the data are empirically verifiable and do not require any inferences from observations to other behaviors." Conversely, most published achievement tests rely on indirect measurement of student skills with respect to both the source of test items and the student response formats. A major premise of CBM is that assessment and decision making are curriculum-referenced (Fuchs, Deno, & Mirkin, 1983), meaning that a student's performance on a test should indicate the student's level of competence in the local school curriculum. The data collected from the administration of curriculum-based assessment can provide the basis for decisions regarding referrals, IEP planning, and determining the least restrictive environment; however, systematic procedures for making these decisions are not detailed in the literature (Blankenship, 1985).

Curriculum-based measures are based upon a number of characteristics that are considered desirable for monitoring student progress (Jenkins, Deno, & Mirkin, 1979). The measures must be (1) tied to a student's curricula, (2) of short duration to facilitate frequent administration by teachers and educators, (3) capable of having multiple forms, (4) inexpensive, and (5) sensitive to the improvement of students' achievement over time. Another necessary characteristic is the identification of basic-skill academic behaviors within the content areas that educators can measure reliably and validly.

Proponents of CBM cite several advantages. CBM is relatively easy to administer and time efficient. Fuchs, Wesson, Tindal, Mirkin, and Deno (1981) noted that trained teachers typically spend approximately two minutes administering, scoring, and graphing a reading passage. As with online testing, CBM demands little of the teacher's time: the teacher has more time available for instruction while still obtaining the assessment information necessary to guide that instruction.

CBM can be administered frequently and repeatedly (Jenkins, Deno, & Mirkin, 1979), which allows the examiner to view the pupil's progress as a function of performance over several days or months rather than in one testing session (Marston, 1989). Online testing also allows for ongoing monitoring of student progress, which can be displayed over time. This method allows a view of each student's progress rather than relying on pre- and post-tests. These scores can be plotted to show the student's progress over time and compared to his or her expected rate of progress.

CBM allows the examiner to reference the student's performance in four ways (Deno, 1985b): (1) individually, in comparison to how the same student has done recently on other, similar tasks; (2) to a goal, how the student is progressing toward a long-term goal; (3) instructionally, before or after adjustments in instruction have been made; and (4) normatively, in comparison to a local group such as the classroom or grade level. Online testing allows for the same comparisons, since each student's performance can be recorded and stored. These data can be saved and compared in a variety of ways at a future time.

Consistent monitoring of a student's progress toward goals increases the teacher's sensitivity to when instruction needs to be modified (Fuchs, Fuchs, & Deno, 1985). This results in greater student achievement, as instruction is modified on an ongoing basis to suit a student's current needs and is not limited solely to his or her needs at the beginning of the year. Because online testing can also provide consistent monitoring, it likewise helps the teacher be more sensitive to needed instructional changes.
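
Such monitoring decisions are often framed as a comparison between the slope of the student's observed scores and the slope of the goal line ("aimline"). The following is a minimal Java sketch of that kind of decision rule; the class name, sample scores, goal, and time frame are illustrative assumptions, not values from Web-assess or from the CBM literature.

    // Hypothetical sketch: fit a least-squares trend line to repeated CBM
    // scores and compare its slope to the slope of the goal line.
    public class ProgressMonitor {

        /** Least-squares slope of scores indexed by week (0, 1, 2, ...). */
        static double trendSlope(double[] scores) {
            int n = scores.length;
            double meanX = (n - 1) / 2.0, meanY = 0;
            for (double s : scores) meanY += s / n;
            double num = 0, den = 0;
            for (int x = 0; x < n; x++) {
                num += (x - meanX) * (scores[x] - meanY);
                den += (x - meanX) * (x - meanX);
            }
            return num / den;
        }

        public static void main(String[] args) {
            // Invented data: words read correctly per minute, one probe per week.
            double[] wcpm = {42, 45, 44, 48, 47, 51};
            double baseline = wcpm[0], goal = 70;
            int weeksToGoal = 30;
            double goalSlope = (goal - baseline) / weeksToGoal; // expected gain per week
            double actual = trendSlope(wcpm);
            System.out.printf("actual %.2f vs expected %.2f per week%n", actual, goalSlope);
            if (actual < goalSlope) {
                System.out.println("Trend below aimline: consider modifying instruction.");
            }
        }
    }

A trend that falls below the expected weekly gain is the signal, in this framing, that instruction may need to be modified.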

Computer-Based Assessment

A specific use of computers in assessment extends beyond the simple display of information to the dynamic display of data contingent upon user input. These applications are generally referred to as expert systems (Hasselbring, 1984). Expert systems incorporate a large body of knowledge in a particular area and a decision-making process that allows individuals access to a wider array of solutions or advice. Expert systems encompass several learning perspectives and instructional approaches:

Student Model

The student model establishes the framework for identifying the student's misconceptions and sub-optimal performance. The structure of the student model can be derived from (1) the problem-solving behavior of the student, (2) direct questions asked of the student, (3) historical data (based on assumptions about the student's assessment of his or her skill level, from novice to expert), and (4) the difficulty level of the content domain (Barr & Feigenbaum, 1982). Intelligent Tutoring Systems (ITS) compare the student's actual performance to the student model to determine whether the student has mastered the content domain. Advancement through the curriculum depends upon the system's assessment of the student's proficiency level. The student model contains a database of student misconceptions and missing conceptions, known as the "bug library." "A missing conception is an item of knowledge that the expert has but the student lacks. A misconception is an item of knowledge that the student has but the expert does not" (VanLehn, 1988, p. 62). Bugs are identified from the literature, observation of student behavior, and learning theory of the content domain (VanLehn, 1988). The intelligent tutor solves problems in the same way a human would and predicts student performance. If the performance does not meet the prediction, the system must determine whether the deficiency is due to a misconception or a missing conception. Once the tutor recognizes a misconception or a missing conception, it makes a diagnosis and prescribes instructional remediation.
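
The bug-library idea can be illustrated as a table that maps the wrong answers produced by known buggy procedures to the misconceptions behind them; an error matching no entry is treated as evidence of a missing conception. The Java sketch below is a toy example; the subtraction item, the single bug, and all names are hypothetical rather than drawn from an actual bug library.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of a bug library: classify a student's wrong answer as a
    // misconception (a known bug reproduces it) or a missing conception.
    public class BugLibrary {

        public static void main(String[] args) {
            // Item: 52 - 17 (correct answer 35).
            int correct = 35;
            Map<Integer, String> bugs = new HashMap<>();
            // Classic "smaller-from-larger" bug: 7 - 2 = 5 in the units column,
            // 5 - 1 = 4 in the tens column, yielding 45.
            bugs.put(45, "smaller-from-larger (subtracts smaller digit from larger)");

            int studentAnswer = 45;
            if (studentAnswer == correct) {
                System.out.println("correct: no remediation needed");
            } else if (bugs.containsKey(studentAnswer)) {
                System.out.println("misconception: " + bugs.get(studentAnswer));
            } else {
                System.out.println("missing conception: no known bug matches");
            }
        }
    }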


In complex problem-solving domains, programming techniques and reasoning strategies enable an instructional system to develop and update an understanding of the student and his or her performance on the system; debate surrounds the issue of whether strategies for student modeling should continue as a primary focus of advanced computer-based instruction (Jones & Winne, 1992). At the heart of the initial efforts to construct "human-like" software to deliver instruction was the approach of modeling every possible response of the user. Corrective feedback is given immediately as the student is directed toward the path that the expert chose in solving the problem. In this approach, learners are not permitted or encouraged to explore alternative paths toward the final goal, but are led toward the solution, or model of thinking, demonstrated by the expert on whom the algorithms were based.

This approach has been challenged by several researchers (diSessa, 1985; Papert, 1980; Soloway, 1990). In complex problem-solving domains, the student model cannot specify all solution paths that a student might take. In order for the model-tracing approach to work, the tutor must constrain the student to follow solution paths that the software can recognize. The model-tracing approach thus confines and hinders the student and therefore impedes innovation: there is no impetus for true exploration and experimentation with possible solution strategies, and new solution paths are not encouraged.
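
The constraint being criticized can be made concrete: a model-tracing tutor checks each student step against the solution paths it recognizes and intervenes the moment a step falls outside all of them. The following Java sketch is schematic, with invented steps and class names; it is not code from any actual tutor.

    import java.util.ArrayList;
    import java.util.List;

    public class ModelTracer {

        // Solution paths the expert model recognizes for one algebra item,
        // e.g. solving 2x + 3 = 11. (Invented for illustration.)
        static final List<List<String>> KNOWN_PATHS = List.of(
                List.of("subtract 3", "divide by 2"),
                List.of("divide by 2", "subtract 3/2"));

        public static void main(String[] args) {
            List<String> studentSteps = List.of("subtract 3", "multiply by 2");
            List<List<String>> candidates = new ArrayList<>(KNOWN_PATHS);
            for (int i = 0; i < studentSteps.size(); i++) {
                final int step = i;
                String action = studentSteps.get(i);
                // Keep only the expert paths consistent with the step just taken.
                candidates.removeIf(p -> step >= p.size() || !p.get(step).equals(action));
                if (candidates.isEmpty()) {
                    // Off every recognized path: the tutor intervenes immediately,
                    // which is exactly what forecloses novel solution strategies.
                    System.out.println("Step " + (step + 1) + " (" + action
                            + ") not recognized; redirecting toward the expert path.");
                    return;
                }
            }
            System.out.println("All steps matched a recognized solution path.");
        }
    }

Once the candidate set is empty, exploration ends and the student is steered back to a recognized path.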

Instructional Model

ITS actively interacts with student inputs and diagnoses the student's level of understanding or misunderstanding of the knowledge domain. The tutorial exercises some control over the selection and sequencing of information by responding to student questions concerning the subject domain and by determining when the student needs help and what kind of help is needed (Halff, 1988). An effective ITS will meet the ever-changing needs of the student: it diagnoses the student's characteristic weaknesses and adapts the instruction accordingly. As the student's level of proficiency increases, the ITS will ideally conform to the student's evolving skill level, adapting as the novice evolves into a subject-matter expert.

Feedback

ITS instructional feedback capabilities provide immediate knowledge of results. Burton (1988) identified seven formats for structuring instructional feedback to the student: help, assistance, empowering, reactive learning, modeling, coaching, and tutoring.

Help Format - The Help Format allows the student to request assistance when he or she has made an error or perceives a need for help. The online capability permits the student to learn by doing, and the student perceives that he or she has control over the learning process. This capability has a positive impact on the student's acceptance of the system.

Assistance Format - In the Assistance Format, the intelligent tutor assumes some of the responsibility for problem-solving tasks and allows the student to concentrate on specific areas. The IT system teaches the task by presenting the task's operational sequence. The student is provided an opportunity to apply that operation and eventually to generalize the operations in solving similar problems. This format facilitates the development of conceptual understanding and encourages the higher-order thinking skills involved in problem solving (Burton, 1988).

Empowering Format - The Empowering Format provides the student with the tools to review his or her own decision-making processes. The system captures the student's performance decisions and their impact, and provides a visual representation of the student's problem-solving ability. The student travels through his or her own decision tree to identify the errors made. Problem-solving behavior is acquired in a "risk-free" environment.

Reactive Learning Format - In the Reactive Learning Format, the IT system "responds to the student's actions in a manner that extends the student's understanding of their own actions in the context of a specific situation" (Burton, 1988, p. 127). Initially, the student establishes a hypothesis, which the computer challenges. The hypothesis is challenged on the basis of its logic, its compatibility with the information the student has previously learned, and its consistency with the knowledge base. The student is required to articulate and justify his or her own reasoning.

Modeling Format - The Modeling Format models "expert" performance for the student. The student learns by observing the "expert" at work.

Coaching Format - The Coaching Format simulates the "human" coach. It constantly monitors the student to identify sub-optimal performance, and the ITS immediately interrupts the interaction to provide advice. The system compares the student's performance to its "expert" model; if the performance deviates from the expert's, the coach redirects the student toward the expert path. The coaching format is not concerned with the student completing a predetermined lesson; the primary emphasis is on skill acquisition and general problem solving through computer games (Barr & Feigenbaum, 1982).

Tutor Format - The instructional "tutor" identifies deficiencies in skill performance. The automatic instructional capability of ITS provides an environment that enhances learning. The instructional tutor identifies errors of commission, errors of omission, and "bugs" in student performance. ITS communicates through natural dialogue and provides remediation when necessary. The tutor must determine when to interrupt and how often: too little feedback or too much can hinder the learning process. ITS constantly analyzes the student's performance to ensure that the knowledge domain is being mastered.
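
One simple way to frame the when-to-interrupt decision is as a policy that counts consecutive errors and enforces a minimum spacing between interventions. The thresholds, class, and method names in the Java sketch below are invented for illustration and are not taken from Burton's work or from Web-assess.

    // Hypothetical interruption policy: break in only after enough evidence
    // of trouble, and never too soon after the previous intervention.
    public class InterruptionPolicy {

        private static final int ERROR_THRESHOLD = 2;  // errors tolerated before advice
        private static final int MIN_GAP = 3;          // steps between interruptions

        private int consecutiveErrors = 0;
        private int stepsSinceAdvice = MIN_GAP;

        /** Record one student step; return true if the tutor should break in. */
        public boolean shouldInterrupt(boolean stepCorrect) {
            stepsSinceAdvice++;
            consecutiveErrors = stepCorrect ? 0 : consecutiveErrors + 1;
            if (consecutiveErrors >= ERROR_THRESHOLD && stepsSinceAdvice >= MIN_GAP) {
                consecutiveErrors = 0;
                stepsSinceAdvice = 0;
                return true;
            }
            return false;
        }

        public static void main(String[] args) {
            InterruptionPolicy policy = new InterruptionPolicy();
            boolean[] steps = {true, false, false, true, false, false};
            for (int i = 0; i < steps.length; i++) {
                if (policy.shouldInterrupt(steps[i]))
                    System.out.println("Interrupt after step " + (i + 1));
            }
        }
    }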

Instructional exercises are characterized by manageability, structural transparency, and individualization (Halff, 1988). The sequencing of the information manages the presentation of the knowledge base to ensure that the student is able to accomplish the task. Students are "enabled" by the system to solve the exercises. The hierarchical format of the exercises presents prerequisite skills before higher-level skills are encountered. The sequence of the task reflects the structure of the procedure being taught and helps the student acquire the target behavior.
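
Presenting prerequisite skills before the skills that build on them amounts to a topological ordering of the skill hierarchy. As a minimal sketch, the following Java example orders four invented arithmetic skills using Kahn's algorithm; the skill names and dependencies are illustrative assumptions.

    import java.util.*;

    public class ExerciseSequencer {

        public static void main(String[] args) {
            // prerequisite -> skills that depend on it (invented hierarchy)
            Map<String, List<String>> deps = Map.of(
                    "number facts", List.of("single-digit addition"),
                    "single-digit addition", List.of("multi-digit addition"),
                    "multi-digit addition", List.of("multiplication"));

            // Kahn's algorithm: repeatedly emit a skill with no unmet prerequisites.
            Map<String, Integer> indegree = new HashMap<>();
            deps.forEach((pre, posts) -> {
                indegree.putIfAbsent(pre, 0);
                for (String p : posts) indegree.merge(p, 1, Integer::sum);
            });
            Deque<String> ready = new ArrayDeque<>();
            indegree.forEach((skill, d) -> { if (d == 0) ready.add(skill); });

            List<String> order = new ArrayList<>();
            while (!ready.isEmpty()) {
                String skill = ready.poll();
                order.add(skill);
                for (String next : deps.getOrDefault(skill, List.of()))
                    if (indegree.merge(next, -1, Integer::sum) == 0) ready.add(next);
            }
            System.out.println("Exercise order: " + order);
        }
    }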

The two types of instructional tutor formats are the expository tutor and the procedure tutor.

Expository Tutor - The Expository Tutor presents factual knowledge with an emphasis on the development of inferential skills (Halff, 1988). The expository tutor's primary mode of instruction is through a natural dialogue with the student. Factual knowledge is sequenced to provide a coherent structure to the learning process. The framework establishes a relationship between existing knowledge and general concepts. Instructional dialogue begins with generalities and proceeds to specifics.

Procedure Tutor - The Procedure Tutor provides instruction on procedural skills that can generalize to other situations. The procedure tutor emphasizes the development of effective problem-solving skills. ITS usually assumes the "coaching" mode. Information is sequenced in the form of practice exercises and examples that are based on the student's accomplishment of specific instructional objectives. Guidance is provided throughout the learning process.

The Application

Web-assess (1999) is a modular, dynamic, web-based assessment tool consisting of a web server, middleware, and a server-side database. WebObjects (Apple, 1999) was employed as the middleware application, interacting with a relational database that holds the content required for administering the test as well as the instruction to be delivered. Functioning as a high-performance web-application server, WebObjects allows dynamic web-based applications to be developed and deployed rapidly. The use of dynamic, rather than static, webpages allows the construction of a web-based presentation that is driven by user input. The content of the pages is assembled from resources in the database in real time, yielding webpages that are customized specifically for the user.

Figure 1 illustrates the overall structure and function of the Web-assess application. The core of the system is the middle-layer module, which contains the rules for implementing the evaluation protocol. The function of the middle layer extends beyond basic query redirection to perform specific computations and decision-making tasks dictated by the underlying Java routines. Java programming in WebObjects can present questions in different sequences and combinations based upon student responses.

As indicated in the schematic, the entire system is built from components in a three-tier model. This topology enables system interoperability, expansion, and migration; upgrading the web server or porting data into another database system would not require major restructuring. The workflow of the three-tier model is as follows: the middle-layer application receives HTTP requests from the web browser through the web server (Netscape SuiteSpot); the WebObjects Adaptor acts as an intermediary between the web server and the middle-layer application; the HTTP requests are processed by the WebObjects application, and dynamically generated HTML pages are returned to the browser.
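
To make the middle layer's role concrete, the following Java sketch stands in for it using the JDK's built-in HTTP server rather than the actual WebObjects API: it receives a request, consults a stubbed-out item store, and returns a dynamically assembled HTML page. The class, context path, and method names are hypothetical and are not taken from Web-assess.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class MiddleLayerSketch {

        // Stand-in for the server-side database of test items.
        static String nextQuestionFor(String studentId) {
            return "What is 52 - 17?"; // would be selected by the rule set
        }

        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/assess", (HttpExchange exchange) -> {
                String student = "demo-student"; // would come from the session
                // Assemble the page for this request, in real time.
                String html = "<html><body><p>" + nextQuestionFor(student)
                        + "</p></body></html>";
                byte[] body = html.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            System.out.println("Middle layer listening on http://localhost:8080/assess");
        }
    }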


In essence, the middle-layer application is the central part of the implementation that allows dynamic content to be generated. The logic of the CBM test is carefully implemented as a set of "rules." These rules allow the student to take one of several paths through the test based on his or her performance, and they provide for the delivery of appropriate instruction when requested by the user. Figure 2 is an example of a question generated by the application.
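
A rule set of this kind can be sketched, under assumed accuracy thresholds and action names (the actual Web-assess rules are not reproduced here), as follows.

    // Hypothetical branching rules: after each probe, either advance to harder
    // material, stay at the current level for practice, or deliver instruction.
    public class TestPathRules {

        enum Action { ADVANCE, PRACTICE, INSTRUCT }

        static Action nextStep(int correct, int attempted) {
            double accuracy = (double) correct / attempted;
            if (accuracy >= 0.9) return Action.ADVANCE;   // mastery: harder items
            if (accuracy >= 0.6) return Action.PRACTICE;  // emerging: same level
            return Action.INSTRUCT;                       // frustration: teach first
        }

        public static void main(String[] args) {
            System.out.println(nextStep(9, 10));  // ADVANCE
            System.out.println(nextStep(7, 10));  // PRACTICE
            System.out.println(nextStep(3, 10));  // INSTRUCT
        }
    }

After each probe, the outcome of such a rule determines whether the next dynamically generated page presents harder items, more items at the same level, or instructional material.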

Web-assess allows teachers and parents to view and monitor student progress and to assess skills on a continuous basis. It can also be used to study and compare several students in a class, which is particularly useful to teachers when determining the curriculum. The tool also automates the evaluation of the tests taken by each student and maintains records of his or her performance.


Conclusion

CBM and online assessment strategies have many similar theoretical strengths, which facilitates combining the two into a more streamlined and time-efficient process for evaluating student progress within a particular curriculum. By incorporating the components of CBM, web-based online evaluation becomes even more time efficient and easier to incorporate into the classroom routine. Frequent administration of CBM is easy, since it is so time efficient when given online. By recording and storing student performance on CBM tasks, online assessment provides the teacher with more information without additional time commitment. The teacher can easily access data by student or by class to make comparisons. Not only does online CBM provide the teacher more time for instruction, it also provides information on when and how that instruction should be modified.

Developing Web-assess as a web-based tool allows for rapid, continuous assessment of curriculum level as well as for tutoring of appropriate material based on that assessment. It also allows parents and teachers to monitor the student's progress effectively, since the tool maintains all of the relevant records. The adaptation of MASI into an intelligent web-based system has the potential to lead Web-based instruction in a new direction. This project is an attempt to integrate content delivery, evaluation, and diagnosis in Web-based instruction.

Computer-based assessment is an area that continues to expand, particularly with the use of online assessment methods. The format of these instruments has extended beyond traditional paper-and-pencil formats to include interactive, dynamic methodologies. In addition, software companies and independent programmers are designing packages that allow individuals to create their own assessments online. There are many similarities between CBM and online assessment that make the two very compatible and strengthen the advantages of each. This is a program that has the potential to make a major impact on the field of instructional technology.

References

Apple Computer, Inc. (1999). WebObjects [Computer software]. Cupertino, CA: Author.

Barr, A., & Feigenbaum, E. A. (1982). The handbook of artificial intelligence. Stanford, CA: HeurisTech Press.

Blankenship, C. S. (1985). Using curriculum-based assessment data to make instructional decisions. Exceptional Children, 52, 233-238.

Burton, R. R., & Brown, J. S. (1982). An investigation of computer coaching. In D. H. Sleeman & J. S. Brown (Eds.), Intelligent tutoring systems (pp. 79-98). New York: Academic Press.

Deno, S. (1985a). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219-232.

Deno, S. (1985b). The nature and development of curriculum-based measurement. Preventing School Failure, 36(2), 5-10.

Deno, S. & Mirkin, P. (1977). Data based program modification. Reston, VA: Council for Exceptional Children.

diSessa, A. (1985). A principled design for an integrated computational environment. Human-Computer Interaction, 1, 1-47.

Fletcher, P., & Collins, M. (1986/87). Computer-administered versus written tests - advantages and disadvantages. Journal of Computers in Mathematics and Science Teaching, 6(2), 38-43.

Fuchs, L., Deno, S., & Mirkin, P. (1983). Data-based program modification: A continuous evaluation system with computer software to facilitate implementation. Journal of Special Education Technology, 6, 50-57.

Fuchs, L., Fuchs, D., & Deno, S. (1985). The importance of goal ambitiousness and goal mastery to student achievement. Exceptional Children, 52(3), 63-71.

Fuchs, L., Wesson, C., Tindal, G., Mirkin, P., & Deno, S. (1981). Teacher efficiency in continuous evaluation of IEP goals (Research Report No. 53). Minneapolis: University of Minnesota Institute for Research on Learning Disabilities. (ERIC Document Reproduction Service No. ED 215467).

Halff, H. M. (1988). Curriculum and instruction in automated tutors. In M. C. Polson & J. J. Richardson (Eds.), Foundations of intelligent tutoring systems (pp. 79-108). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hasselbring, T. (1984). Computer-based assessment of special-needs students. Special Services in the Schools, 1(1), 7-19.

Jenkins, J., Deno, S., & Mirkin, P. (1979). Measuring pupil progress toward the least restrictive environment. Learning Disability Quarterly, 2, 81-92.

Jones, M., & Winne, P. H. (Eds.). (1992). Adaptive learning environments. Heidelberg: Springer-Verlag.

Marston, D. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18-78). New York: Guilford Press.

Netscape Communications Corporation. (1999). Netscape SuiteSpot [Computer software]. Mountain View, CA: Author.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

Shapiro, E. S., & Kratochwill, T. R. (1988). Behavioral assessment in schools: Conceptual foundations and practical applications. New York: Guilford.

Soloway, E. (1990, July). Interactive learning environments. Paper presented at the NATO Advanced Studies Institute, Calgary.

VanLehn, K. (1988). Student modeling. In M. C. Polson & J. J. Richardson (Eds.), Foundations of intelligent tutoring systems (pp. 55-78). Hillsdale, NJ: Lawrence Erlbaum Associates.

Web-assess. (1999). Web-assess: A web-based assessment tool [Computer software]. Tempe, AZ: Arizona State University, Arizona Board of Regents.


 
