# Scenarios for using raters or observers

There are several situations in which raters or observers should be used so that the data they gather can support a useful conclusion, and this paper aims to present a number of such instances. The first scenario arises when the measurement consists primarily of assigning cases to groups. Here, the raters must decide into which group each observation belongs. To gauge consistency in this setting, one estimates the percentage of agreement between the raters (Burda et al., 2017). A good example is a situation where two raters independently classify roughly one hundred observations.
The second scenario is where the measure is continuous in nature. Under this circumstance, one calculates the correlation between the ratings provided by the two observers. A good example is a classroom where the level of activity is rated on a scale of 1 to 10, with each observer recording a rating at a constant interval of, say, every 20 seconds. By establishing the correlation between the scores of the two raters, consistency can be fully gauged and the reliability of the data established (Burda et al., 2017).
The final scenario is where the observers themselves are being calibrated for reliability. This involves training the observers until their judgments are dependable, which is possible even without estimating a reliability value. A real case is a psychiatric unit where nurses are assigned the task of rating the patients on the unit (Baylis et al., 2015). The expectation is that each nurse will then be able to give ratings that are comparable.
## Statistical procedures involved in evaluating inter-rater reliability
Various methods can be used to evaluate inter-rater reliability; this section examines three important procedures one by one. The first is the joint probability of agreement between the raters. The process is simple and the measurement robust (Baylis et al., 2015): count the number of occasions on which the raters assigned the same rating, then divide by the total number of ratings made during the period. In this procedure, the data collected are presumed to be nominal in nature.
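The joint probability of agreement described above reduces to a short computation. The category labels below are hypothetical nominal ratings from two raters over the same ten cases.

```python
# Sketch: joint probability of agreement for nominal ratings.
# Labels are invented for illustration; each list is one rater's
# judgment of the same ten cases.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of cases on which the two raters assigned the same label."""
    agreements = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return agreements / len(ratings_a)

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(percent_agreement(rater_a, rater_b))  # prints 0.8
```

Here the raters agree on 8 of 10 cases, so the joint probability of agreement is 0.8.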
The second procedure is the kappa statistic. Cohen's kappa applies to exactly two raters, while extensions such as Fleiss' kappa cover any number of raters so long as that number is fixed (Baylis et al., 2015). This procedure takes into account the total amount of agreement that would be expected to arise by chance.
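A minimal sketch of Cohen's two-rater kappa follows; it corrects the observed agreement for the agreement expected by chance, as the paragraph above describes. The labels are the same hypothetical nominal ratings used for illustration.

```python
# Sketch: Cohen's kappa for two raters on nominal data.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is the agreement expected by chance from each rater's marginals.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    # Observed agreement: fraction of identical labels.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the two raters' marginal proportions,
    # summed over the categories.
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a)
    return (p_o - p_e) / (1 - p_e)

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
print(round(cohens_kappa(rater_a, rater_b), 3))  # prints 0.583
```

Note that the raw agreement of 0.8 shrinks to a kappa of about 0.58 once chance agreement is discounted, which is exactly the correction the kappa statistic provides.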
Finally, there is the correlation coefficient procedure. This involves coefficients first developed by Spearman, Kendall, and Pearson, which measure pairwise correlation among raters on an ordered scale. According to Burda et al. (2017), Pearson's coefficient assumes the rating scale is continuous, while the other two assume it is ordinal. The final result is then reported as an average level of agreement across the rater pairs.

## References
Baylis, A., Chapman, K., Whitehill, T. L., & Group, T. A. S. (2015). Validity and reliability of visual analog scaling for assessment of hypernasality and audible nasal emission in children with repaired cleft palate. The Cleft Palate-Craniofacial Journal, 52(6), 660-670.
Burda, B. U., O'Connor, E. A., Webber, E. M., Redmond, N., & Perdue, L. A. (2017). Estimating data from figures with a Web-based program: Considerations for a systematic review. Research Synthesis Methods.