Qualitative analysis approaches can be used to establish causal relations and to justify causal claims. This belief is now shared by a sizable number of quantitative and qualitative scholars (Keman and Woldendorp, 2016). Nonetheless, the assumption remains contentious, and a comprehensive rationale for it has yet to be presented. For some time there was resistance to the idea that qualitative research can address causality. Quantitative and qualitative camps still hold different views on the topic, with no sign of an approaching consensus. The emergence of realism as an alternative to both constructivism and positivism as a philosophical stance for social science, however, provides new ways of addressing the issue. This paper presents such an account, addressing both the recent philosophical developments that support this position and the practical strategies that qualitative researchers can use in causal investigations.
Designing and Evaluating Alternative Explanations
Some strategies help researchers develop causal explanations. However, they fail to address the problems of generating credible alternatives to those explanations, choosing among different accounts that fit the data, or testing an explanation against potential validity threats. Other approaches are particularly useful for addressing causal validity. One of these is the modus operandi strategy, originally suggested by Bruter (2013). It resembles the way a physician tries to diagnose a patient's illness, an inspector attempts to ascertain the cause of a plane crash, or a detective seeks to solve a crime. Essentially, instead of treating validity threats as variables, holding them constant in some way or statistically controlling for their influence, the modus operandi technique treats them as processes. The researcher identifies potential validity threats, or alternative explanations that would undermine the proposed explanation, and then looks for clues that indicate whether these processes were operating and whether they could have produced the hypothesized causal effect. For instance, consider a researcher who worries that some of her interviews with trainers were affected by the trainers' sense of their supervisors' views on the issues under investigation, so that the trainers failed to express their own beliefs. Rather than dropping those trainers from the sample, the researcher can consider what internal evidence could distinguish between the two causal processes (such as a change in behavior and voice when these matters are discussed) and search for such evidence in the interviews or other data. She could also look for ways to explore this influence directly in subsequent interviews.
The biggest challenges in applying this approach in qualitative research are identifying the most significant alternative explanations and specifying how they operate in enough detail that their outcomes can be predicted (Bruter, 2013). It is often hard for anyone who has spent weeks or months developing one explanation to become seriously engaged with another.
Looking for Discrepant Evidence and Negative Cases
The usefulness of the modus operandi technique depends on the researcher's readiness to look for evidence that could challenge the explanation they have developed. Researchers tend to notice confirming instances vigorously and consciously while disregarding those that fail to fit their prior conclusions (Keman and Woldendorp, 2016). Identifying and examining discrepant data and negative cases is therefore a significant part of evaluating a proposed conclusion. Instances that cannot be accounted for by a particular explanation or interpretation can point to major flaws in that account. However, apparently discrepant evidence must itself be evaluated for validity threats. Sometimes an obviously discrepant case is unconvincing, as when the interpretation of the discrepant data is itself doubtful. Physics provides many examples of apparently disconfirming experimental evidence that was later found to be flawed. The basic principle is to examine both discrepant and supporting data carefully and to judge whether the evidence is plausible enough to warrant modifying or retaining the conclusion. One method that contributes to this goal is called quasi-statistics. The phrase refers to the use of simple numerical results that can readily be derived from the data (Weinberg, 2012). A claim that a particular phenomenon is typical, rare, or prevalent in the population and setting studied is an inherently quantitative claim and requires some quantitative support.
Quasi-statistics can also be used to assess the amount of evidence bearing on a particular conclusion or threat, the number of distinct sources it comes from, and the number of discrepant instances that exist. This approach was used effectively in a classic participant-observation study of students in medical school, which presented over fifty tables and graphs of the amount and distribution of observational and interview data supporting and challenging its conclusions. Triangulation is the collection of data from a wide range of individuals and settings using a variety of methods (Weinberg, 2012). The technique reduces the risk of systematic biases stemming from a single method or source and places the researcher in a frame of mind to regard their work critically. For instance, the study did not rely solely on interviews with medical students for its conclusions about how good teaching helped students learn. The researcher's explanations were validated by her own experience as a participant-observer in the trainers' classes, and they were corroborated by the trainers' accounts of why they worked the way they did. Additionally, the researcher deliberately interviewed students with diverse attitudes and characteristics to make sure the responses were not coming from only a fraction of the students. Nonetheless, triangulation does not automatically increase validity. First, the methods being triangulated may share the same biases and hence offer only a false sense of security; for instance, interviews, questionnaires, and documents are all susceptible to self-report bias. Second, researchers may consciously or unconsciously select those data or methods that are likely to support their conclusions, or highlight the information that fits their hypotheses because of its vividness or compatibility. Both problems are instances of what is often called researcher bias.
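Since quasi-statistics amounts to deriving simple counts from qualitative data, the tallying itself is mechanical. The following is a minimal illustrative sketch, with entirely hypothetical participant IDs and codes, of how coded interview excerpts might be counted to report the amount of supporting evidence, the number of distinct sources, and the number of discrepant instances:

```python
from collections import Counter

# Hypothetical coded interview excerpts: each tuple is (participant_id, code),
# where the code records whether the excerpt supports or contradicts the
# working explanation. The data here are invented for illustration only.
coded_excerpts = [
    ("s01", "supporting"), ("s01", "supporting"),
    ("s02", "supporting"), ("s02", "discrepant"),
    ("s03", "supporting"), ("s04", "discrepant"),
    ("s05", "supporting"), ("s05", "supporting"),
]

def quasi_statistics(excerpts):
    """Tally how much evidence bears on the conclusion, how many
    distinct sources it comes from, and how many discrepant
    instances exist."""
    tallies = Counter(code for _, code in excerpts)
    sources = {pid for pid, _ in excerpts}
    return {
        "supporting": tallies["supporting"],
        "discrepant": tallies["discrepant"],
        "distinct_sources": len(sources),
    }

print(quasi_statistics(coded_excerpts))
# -> {'supporting': 6, 'discrepant': 2, 'distinct_sources': 5}
```

Such counts do not replace the qualitative analysis; they simply make explicit the quantitative claims (typical, rare, prevalent) that the analysis implies, so that discrepant cases cannot be quietly overlooked.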
In the final analysis, validity threats are ruled out by evidence, not by methods; methods must be chosen for their ability to generate evidence that can effectively assess these threats.
Soliciting feedback from others is a valuable way of identifying validity threats as well as one's own biases, assumptions, and flaws in methods and reasoning. One particular kind of feedback is systematically soliciting reactions to one's data and conclusions from the people being studied, a procedure called member checks (Wilson, 2012). Member checks not only help correct misinterpretations of participants' meanings and perspectives but can also yield alternative explanations of observed events and processes. This method was used in the study of teaching in medical school: informal interviews were conducted with the students being studied to ensure that what they said had been understood correctly and that the researcher's conclusions made sense to them. It should be cautioned, however, that members' responses are not inherently authoritative; they are shaped and constrained by the circumstances of their production. Challenges in applying this method include members' lack of interest, difficulties in comparing their perceptions with those of the researcher, their desire to agree with the researcher, their ulterior motives, and other constraints on the interaction. These threats to validity must themselves be assessed and taken into account.
Conclusion
The approaches discussed here are ones that many diligent qualitative researchers routinely apply, though they are rarely described explicitly in empirical research publications. They can legitimately be used to develop and evaluate causal explanations. Using qualitative methods to identify causal effects carries its own risks and validity threats. Field researchers are typically interested in understanding what happens in the settings they study, not only to develop their theoretical knowledge but also to contribute to the improvement of those settings. To succeed at both tasks, they must identify the causal processes operating in these contexts and distinguish valid interpretations of outcomes from spurious ones. By using the available strategies for understanding causal processes and for addressing validity threats to causal conclusions, qualitative research can adequately provide causal explanations.
References
Bruter, M. (2013). Political science research methods in action. Basingstoke: Palgrave Macmillan.
Keman, H. and Woldendorp, J. (2016). Handbook of research methods and applications in political science. Northampton, MA: Edward Elgar Pub.
Weinberg, D. (2012). Qualitative research methods. Malden, MA: Blackwell Publishers.
Wilson, E. (2012). Introduction to scientific research. Dover Publications.