Delta. Inferences about agreement on a nominal scale between two raters and among several raters

Team:

  • Antonio Martín Andrés
  • Pedro Femia Marzo
  • María Álvarez Hernández
  • Ana D. Maldonado González



  • Model for Multiple Choice Tests evaluation

    When a test-taker answers a Multiple Choice Test (MCT) with K alternative answers, of which only one is correct (the key) and the rest are distractors, he or she either knows the true answer and responds accordingly, or does not know it and responds by guessing. Traditionally, it is assumed that a guessing test-taker chooses any given alternative with probability 1/K. Our model allows the random response pattern to be different and to depend on each test-taker. (Read more...)
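    As a rough illustration of this idea, the Python sketch below simulates one examinee answering an MCT under two guessing patterns: the traditional uniform 1/K pattern and a skewed, examinee-specific pattern. The function simulate_mct, the knowledge probability p_know and the particular skewed weights are hypothetical choices made for the example only; they are not the estimators used by our model.

      import random

      def simulate_mct(n_items, k, p_know, guess_probs=None, seed=0):
          """Simulate one examinee answering an MCT with k alternatives per item.

          p_know      : probability the examinee knows the true answer to an item.
          guess_probs : probabilities over the k alternatives used when guessing;
                        defaults to the traditional uniform 1/k pattern.
          Returns the number of correct responses (the key is alternative 0).
          """
          rng = random.Random(seed)
          if guess_probs is None:
              guess_probs = [1.0 / k] * k          # classical 1/k assumption
          correct = 0
          for _ in range(n_items):
              if rng.random() < p_know:
                  correct += 1                      # knows the key, answers it
              else:
                  choice = rng.choices(range(k), weights=guess_probs)[0]
                  correct += (choice == 0)          # lucky guess on the key
          return correct

      # Same knowledge level, different guessing patterns -> different scores.
      print(simulate_mct(100, 4, 0.6))                                    # uniform guessing
      print(simulate_mct(100, 4, 0.6, guess_probs=[0.7, 0.1, 0.1, 0.1]))  # skewed guessing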



  • Model for Nominal agreement between two raters

    When two raters independently classify n objects into K nominal categories, the level of agreement between them is usually assessed by means of Cohen's Kappa coefficient. However, Kappa has been the subject of several criticisms, the main one being that it is very sensitive to the marginal distributions. Additionally, when a more detailed analysis is needed, the degree of agreement must be evaluated class by class, and traditionally non-chance-corrected indices are used for this purpose. Our model, Delta, does not have the aforementioned limitations of Kappa, and it allows chance-corrected measures of agreement to be defined class by class. Additionally, Delta distinguishes the case where one of the two raters is a standard from the case where neither rater is a standard (Kappa does not), and also takes into account the type of sampling that has been used. (Read more...)
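    For reference, the sketch below computes Cohen's Kappa from a K x K contingency table and uses two hypothetical tables, sharing the same observed agreement but with different marginal distributions, to show the sensitivity to the marginals mentioned above. The function cohen_kappa and the example counts are illustrative only; Delta's own estimator is described in the package documentation and papers.

      def cohen_kappa(table):
          """Cohen's Kappa for a K x K contingency table of two raters' classifications.

          table[i][j] = number of objects placed in category i by rater 1
                        and in category j by rater 2.
          """
          n = sum(sum(row) for row in table)
          k = len(table)
          p_o = sum(table[i][i] for i in range(k)) / n                # observed agreement
          row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
          col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
          p_e = sum(row[i] * col[i] for i in range(k))                # agreement expected by chance
          return (p_o - p_e) / (1 - p_e)

      # Both tables have observed agreement 0.60, but their marginal
      # distributions differ, so Kappa changes noticeably.
      print(cohen_kappa([[45, 15], [25, 15]]))
      print(cohen_kappa([[25, 35], [5, 35]]))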


    Webmaster: Pedro Femia / 2011-2024