ADMINISTERING, SCORING AND REPORTING A TEST




MANALI H SOLANKI
F.Y. M.SC. NURSING
J G COLLEGE OF NURSING
TERMINOLOGY

• Analysis: The examination and evaluation of the relevant information to select the best course of action from among various alternatives.

• Test: A procedure for critical evaluation; a means of determining the presence, quality, or truth of something.
• Scoring: To evaluate and assign a grade.

• Report: A document containing information organized in narrative, graphic, or tabular form, prepared on an ad hoc, periodic, recurring, regular, or as-required basis.
INTRODUCTION:

• Administering the written test is perhaps the most important aspect of the examining process. The atmosphere the test administrator creates in the test room and the attitude he or she displays while performing these duties are extremely important.

• The test administrator's manner, bearing, and attitude may well inspire confidence in the examinees and put them at ease while they participate in the testing process.
ADMINISTERING A TEST:

• A teacher's test administration procedures can have a great impact on student test performance. Careful attention is needed at each stage:

• Before the test
• After distributing the test papers
• During the test
• After the test
TYPES OF SCORE
Raw Scores:

• A raw score is simply the number of questions a student answers correctly on a test.

Uses:
• A raw score provides an indication of the variability in performance among students in a classroom.

Limitations:
• A raw score by itself has no meaning. It can be interpreted only by comparing it with some standard, such as the total number of items on the test, or with the raw scores earned by a comparison group.
Percentile Rank
• A percentile is a measure that tells us what percent of the total frequency scored at or below that measure. A percentile rank is the percentage of scores that fall at or below a given score.
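A minimal sketch of how a percentile rank might be computed for one raw score; the class scores below are hypothetical, not taken from the slides:

```python
def percentile_rank(scores, score):
    """Percentage of scores in the group that fall at or below the given score."""
    at_or_below = sum(1 for s in scores if s <= score)
    return 100.0 * at_or_below / len(scores)

# Hypothetical raw scores for a class of ten students
class_scores = [12, 15, 15, 18, 20, 22, 22, 25, 27, 30]
print(percentile_rank(class_scores, 22))  # 70.0 -> the 70th percentile
```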
Advantages:

• Easily understood by laypeople
• Easy to interpret
Limitations:

• Percentile differences are not equal units: the same difference in percentile ranks corresponds to different raw-score differences in the middle of the distribution than at the extremes.
Stanine (Standard Nine)
• Stanine scores express test results in equal steps that range from 1 (lowest) to 9 (highest), with an average score of 5. In general, stanines 1, 2 and 3 are below average; 4, 5 and 6 are average; and 7, 8 and 9 are above average.
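A brief sketch of one common way stanines are assigned from percentile ranks. The cut points (4, 11, 23, 40, 60, 77, 89, 96) are the conventional normal-curve boundaries, an assumption on my part since the slides do not state them:

```python
STANINE_CUTS = [4, 11, 23, 40, 60, 77, 89, 96]  # upper percentile bound of stanines 1-8

def stanine(percentile_rank):
    """Map a percentile rank (0-100) to a stanine from 1 to 9."""
    for score, cut in enumerate(STANINE_CUTS, start=1):
        if percentile_rank <= cut:
            return score
    return 9  # top 4 percent of the group

print(stanine(50))  # 5 -> average band
print(stanine(97))  # 9 -> well above average
```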
Standard Scores
• Standard scores indicate a student's relative position in a group. They express test performance in terms of standard deviation units from the mean.
• The mean is the arithmetic average. The standard deviation is a measure of the spread of scores in a group.
Types of Standard Scores

Z-Score
• If a mean and standard deviation can be calculated for a given set of raw scores, each raw score can be expressed in terms of its distance from the mean in standard deviation units, or z-scores.

Z-score = (Raw Score − Mean) / Standard Deviation

Note: The z-score is negative whenever the raw score is smaller than the mean.
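For example (hypothetical figures), if a class has a mean of 70 and a standard deviation of 8, a raw score of 62 gives z = (62 − 70) / 8 = −1.0, that is, one standard deviation below the mean.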
T Scores:
• Any set of normally distributed standard scores that has a mean of 50 and an SD of 10. A T-score is obtained by multiplying the z-score by 10 and adding 50 (T = 10z + 50).
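A minimal sketch of both conversions, z = (raw − mean) / SD and T = 10z + 50, applied to a hypothetical set of raw scores:

```python
import statistics

raw_scores = [62, 70, 74, 78, 86]       # hypothetical class scores
mean = statistics.mean(raw_scores)       # 74.0
sd = statistics.pstdev(raw_scores)       # spread of the group's scores (8.0 here)

for raw in raw_scores:
    z = (raw - mean) / sd                # distance from the mean in SD units
    t = 10 * z + 50                      # T-score: mean 50, SD 10
    print(f"raw={raw}  z={z:+.2f}  T={t:.0f}")
```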
Advantages
• Only positive whole numbers are reported
• Interpretation is relatively simple once the concept of the T-score is grasped.
GRADING
• Grading refers to the process of using symbols, such as letters, to indicate various types of student progress (Nitko, 2001).
Common Methods of Grading:

• Letter grades: There is great flexibility in the number of grades that can be adopted (i.e., anywhere from 3 to 11).
Limitations:

• The meaning of grades may vary widely
• Do not describe students' strengths and weaknesses
Strengths:

• Easy to use
• Easy to interpret, at least in theory
• Provide a concise summary
• Number/Percentage grades: (5, 3, 2, 1, 0) or (98%, 80%, 60%, etc.)
• These are the same as letter grades; the only difference is that numbers or percentages are used instead of letters.
Strengths:

• Easy to use
• Easy to interpret theoretically
• Provide a concise summary
• May be combined with letter grades
• More continuous than letter grades
Limitations:

• The meaning of grades may vary widely
• Do not describe students' strengths/weaknesses
• Meaning may need to be explained or interpreted
Two-category grades
• Good for courses that require mastery learning (e.g., pass/fail grading).
Strengths:
• Less emotionally charged for students.

Limitations:
• Less reliable
• Do not convey enough information about a student's achievement
• Provide no indication of the level of learning.
CHECKLIST AND RATING SCALE

• They are more detailed; because they are so detailed, they are cumbersome for teachers to prepare.
Strengths
• Present detailed lists of students' achievements
• Can be combined with letter grades
• Good for clinical evaluation
Limitations:

• May become too detailed to comprehend easily
• Difficult for record keeping.
Advantages of Grades

• Grades divide performance into 5 to 7 divisions to which a student's work is assigned, compared with the 101 (0 to 100) divisions of conventional marking.
• It is a convenient method.
• Chances of error are minimized.
Disadvantages of Grades:

• The assigned grade varies from teacher to teacher
• Grades do not indicate a student's strengths or weaknesses
• Grades may foster unfair competition among students.
Scoring Essay-Type Questions:
When evaluating essay responses, the evaluator should:
• Use an appropriate method to minimize bias
• Pay attention to the significant and relevant aspects of the answer
• Be careful not to let personal idiosyncrasies affect the assessment
• Apply a uniform standard to all the answers
Method of grading essay-type questions:

• Analytical grading (point method):

In this method the ideal (model) answer to a question is specified in advance, although it need not be written out in full. The ideal or model answer is broken down into its component points, and marks are allotted to each.
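A small sketch of the idea behind the point method: the model answer is split into components with marks attached, and the marks earned on each component are summed. The rubric contents and figures below are hypothetical:

```python
# Hypothetical marking scheme for one essay question: component -> marks allotted
model_answer = {"definition": 2, "causes": 3, "nursing management": 4, "complications": 1}

# Marks one student earned on each component (capped at the allotted marks)
awarded = {"definition": 2, "causes": 2, "nursing management": 3, "complications": 0}

total = sum(min(awarded.get(part, 0), marks) for part, marks in model_answer.items())
print(f"{total} / {sum(model_answer.values())}")  # 7 / 10
```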
ADVANTAGES:

• It can yield very reliable scores.
• The preparation of a detailed model answer may itself be of benefit to the teacher.
• The subdivision of the model answer can make it easier to discuss with the students the marks awarded to them.
LIMITATIONS:

• It is very laborious and time consuming.
• In attempting to identify the elements of the answer, undue attention may be given to specific aspects.
Global grading:

• In this method the ideal answer is not subdivided into specific points and component parts. The examiner is instructed to read the responses rapidly, form a general impression, and assign a grade using some agreed standard.
Sequential Grading
• To bring in more objectivity, answers can be scored sequentially: the same teacher marks the answer to a particular question on every paper before moving on to the next question.
Computer Software
    The “Software” learns a specific
    subject area by scanning
    appropriate documents.
    Then, the software is fed graded
    essays to set up the grading
    standards.
Scoring Objective Tests
Hand graded:

Because of human effort, mistakes may occur. Having two graders grade the exams helps to catch 90% of those simple mistakes in grading.
Machine Scoring:

• Machine scoring is only as accurate as the answer key (code) given to the computer.
• Some testing publishers will only release or sell their products to individuals who have undergone special training or hold a particular degree in a related field.
Avante International
Technology (Biometric)

• The first test-scoring system to achieve fewer than 1 error in 1.5 million marks during testing by an independent testing laboratory responsible for testing election equipment and ballots. The same error-free tabulation method is adapted for test scoring, grading, and survey tabulation.
ITEM ANALYSIS:
Definition
• Item analysis is a process that examines students' responses to individual test items (questions) in order to assess the quality of those items and of the test as a whole.
Benefits of item analysis:

• Provides a basis for efficient classroom discussion of the test results
• Provides data for remedial work
• Provides a basis for the general improvement of classroom instruction
• Provides a basis for increased skill in item construction
Procedures involved in an
item analysis

Qualitative:
• Qualitative item analysis procedures include proofreading the exam before administering it, checking for typographical errors, grammatical cues, and the appropriateness of the reading level of the material, and conducting small-group discussions with students after the exam and, at times, with experts.
Quantitative:

• Item difficulty index (p)
• The item difficulty index portrays the “easiness” of an item: the higher the value, the easier the item. The item difficulty index is symbolized by p.

Item difficulty: p = R / T
R = number of students who correctly answered the item
T = number of students included in the analysis.
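A minimal sketch of the p = R/T calculation; the figures are hypothetical:

```python
def difficulty_index(num_correct, num_students):
    """p = R / T: the proportion of students who answered the item correctly."""
    return num_correct / num_students

# Hypothetical item: 15 of 20 students answered correctly
print(difficulty_index(15, 20))  # 0.75 -> a relatively easy item
```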
Item Discrimination Index (D)
• The item discrimination index of a test item refers to the degree to which the item discriminates between high-achieving and low-achieving students in terms of their total test scores.
• The formula to determine the item discrimination index is:

D = (Ru − R1) / (½ T)

Ru = number of students in the upper group who got the item right.
R1 = number of students in the lower group who got the item right.
½ T = one half of the total number of students included in the analysis.
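A minimal sketch of D = (Ru − R1) / (½ T), using hypothetical upper- and lower-group counts:

```python
def discrimination_index(right_upper, right_lower, half_t):
    """D = (Ru - R1) / (1/2 T), where half_t is the size of each (upper or lower) group."""
    return (right_upper - right_lower) / half_t

# Hypothetical item: 5 of 6 upper-group and 2 of 6 lower-group students got it right
print(discrimination_index(right_upper=5, right_lower=2, half_t=6))  # 0.5
```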
EXAMPLE:

After you have notified the doctor about leg pain in a postpartum mother, your most APPROPRIATE action would be to:

• Massage her leg to increase circulation
• Have her walk around to decrease the stiffness
• Ask her to remain in bed
Distractor Power
• Another useful statistic is distractor power. It provides information about the effectiveness of the distractors (the incorrect options).
Simplified item analysis
procedures


• Conduct the test/exam and score it. (Suppose we have tested 21 students.)
• Arrange all answer sheets in order of merit (from the highest to the lowest score).
• Calculate 27% of the answer sheets. For a group of 21 students this is approximately 6.
• Select the 6 papers with the highest total scores and the 6 papers with the lowest total scores.
• Put aside the remaining 9 papers; they will not be used.
• Compute the difficulty index of each item.
• Compute the discrimination index of each item.
• Evaluate the effectiveness of the distractors.
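A compact sketch of this procedure for a single item, assuming 21 students whose papers have already been scored and whose chosen option on the item has been recorded. All figures are hypothetical, and the difficulty index here is computed from the upper and lower groups only, since the middle papers are set aside:

```python
# Each tuple: (total test score, option chosen on the item analysed); option "A" is keyed correct
students = [
    (48, "A"), (47, "A"), (45, "A"), (44, "B"), (43, "A"), (42, "A"), (40, "A"),
    (39, "C"), (38, "A"), (37, "B"), (36, "A"), (35, "D"), (34, "B"), (33, "A"),
    (32, "C"), (30, "B"), (28, "A"), (27, "D"), (25, "B"), (24, "C"), (22, "D"),
]
CORRECT = "A"

students.sort(key=lambda s: s[0], reverse=True)     # arrange in order of merit
n = round(0.27 * len(students))                     # 27% of 21 students -> about 6
upper, lower = students[:n], students[-n:]          # the middle 9 papers are set aside

r_upper = sum(1 for _, ans in upper if ans == CORRECT)
r_lower = sum(1 for _, ans in lower if ans == CORRECT)

p = (r_upper + r_lower) / (2 * n)                   # difficulty index over the two groups
d = (r_upper - r_lower) / n                         # discrimination index

print(f"difficulty p = {p:.2f}, discrimination D = {d:.2f}")

# Distractor effectiveness: how often each incorrect option was chosen in each group
for option in "BCD":
    chosen_upper = sum(1 for _, ans in upper if ans == option)
    chosen_lower = sum(1 for _, ans in lower if ans == option)
    print(f"distractor {option}: upper group {chosen_upper}, lower group {chosen_lower}")
```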
REPORTING
GOALS
• Accurate and useful reporting of assessment results enables teachers, students, parents and the public to understand why various assessment instruments are being applied and how the results will be used as part of the institution's improvement process.
JOURNAL:

• Developing and scoring essay tests.