List Three Different Scenarios Where Raters or Observers Could Be Used for Data Collection in a Research Study
Observation is a method of data collection that relies on the senses, such as sight and hearing, to record behavior. During observation, observers draw conclusions about the behavior of a given group, including its frequency, latency, and duration. Those who take part in the observation process are known as raters or observers. There are many scenarios in which observers could be used to collect data. The first is when the research involves collecting data about a group of people or animals in their natural setting (Coolican, 2017). The animals, for instance, may be primates or species lower on the phylogenetic scale. In such a case, the psychologist may habituate the animals and then collect unbiased information through observation, without the animals noticing the researcher's presence.
Observers could also be used to collect data when the frequency of an aggressive behavior must be recorded, for example when 10-year-old children are observed playing in a field. The rater first defines the aggressive response to be tracked and decides how the data will be collected. Participant observation could involve playing with the children, with or without making them aware that they are under scrutiny; alternatively, the researcher could observe them from a distance. In either case, the rater records how often the aggressive behavior occurs.
The last scenario that may involve the use of raters is when data must be collected over a long period. Behaviors that occur at low frequencies require a longer observation window to be recorded reliably. Observers therefore need to schedule daily or weekly sessions during which they will record the data.
Discuss Three Different Statistical Procedures Used to Evaluate Interrater Reliability in Research
Interrater reliability is defined as the stability or consistency of ratings across observers, raters, or judges. Reliability indices are procedures used to determine the consistency of ratings made by a number of judges, and several types exist. The first is the observer agreement percentage (Kline, 2015): the percentage of observations agreed upon by two or more judges. For instance, suppose two judges observe a particular behavior in children over a period of ten minutes, divided into one-minute intervals, and each notes whether the behavior occurred within each interval. If the two judges agree on eight of the ten intervals, the percentage of agreement is 80%. One problem with the observer agreement percentage is that it does not account for agreement that could occur by chance.
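The two-judge example above can be sketched in a few lines of Python. The 0/1 interval codes below are hypothetical, invented so that the judges agree on eight of the ten one-minute intervals, matching the 80% figure in the text.

```python
# Percent agreement between two observers over ten one-minute intervals.
# 1 = behavior occurred in that interval, 0 = it did not (hypothetical data).
rater_a = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
rater_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

# Count intervals where both observers recorded the same code.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(percent_agreement)  # 80.0
```

Because this index counts any match, including matches that would occur by chance, chance-corrected statistics such as Cohen's kappa are often reported alongside it.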
Interobserver correlation is another procedure used to measure interrater reliability. Here, the consistency of ratings from two or more judges is assessed by calculating the Spearman or Pearson correlation coefficient of their scores (Kline, 2015). For example, two members of the faculty may be asked to rate ten journal articles on a 5-point scale; the interobserver correlation is then the Pearson correlation between the two sets of ratings (Gwet, 2014). A third method is Kendall's coefficient of concordance, which is applied when several judges rank a series of stimuli. The stimuli may be objects, people, or animals. The coefficient indicates the extent to which the judges' rankings agree.
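Both procedures can be illustrated with a short sketch. The ratings below are invented for illustration; the correlation functions are assumed to come from SciPy, and Kendall's W is computed directly from its standard formula, W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of each stimulus's rank sum from the mean rank sum, m is the number of judges, and n is the number of stimuli.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical example: two faculty members rate ten journal articles
# on a 5-point scale (scores invented for illustration).
judge_1 = [5, 4, 4, 3, 5, 2, 1, 3, 4, 2]
judge_2 = [4, 4, 5, 3, 5, 1, 2, 3, 3, 2]

r, _ = pearsonr(judge_1, judge_2)     # interobserver (Pearson) correlation
rho, _ = spearmanr(judge_1, judge_2)  # rank-based (Spearman) alternative

def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m judges ranking n stimuli.

    `rankings` is a list of m lists, each giving one judge's ranks
    (1..n, no ties) for the n stimuli.
    """
    m, n = len(rankings), len(rankings[0])
    # Rank sum for each stimulus across all judges.
    rank_sums = [sum(judge[i] for judge in rankings) for i in range(n)]
    mean_sum = sum(rank_sums) / n
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three judges each rank four stimuli from 1 (best) to 4 (worst);
# W near 1 indicates strong agreement, W near 0 indicates none.
w = kendalls_w([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 3, 4]])
print(round(w, 3))
```

This version assumes untied ranks; with tied ranks, a correction term is subtracted from the denominator.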
References
Coolican, H. (2017). Research methods and statistics in psychology. Psychology Press.
Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters. Gaithersburg: Advanced Analytics.
Kline, P. (2015). A handbook of test construction (Psychology revivals): Introduction to psychometric design. Routledge.