How Sampling Errors Affect Credibility of Secondary Data Sources

How Sampling Errors Affect Secondary Data Sources


In many observational studies, sampling problems are built into the design and must be addressed if the results are to remain credible. In secondary data sources, a survey is generally treated as adequate only when any distortion introduced by the chosen sample size is too small to invalidate the conclusions of the study.


If a poll does not give every member of the population a fair chance of being surveyed, it invites criticism from the part of the population that was excluded. This diminishes the credibility of the collected data, and the results will not be regarded as accurate. For instance, suppose a retail company with 10,000 outlets in four neighboring countries decides to evaluate customer satisfaction, and its strategic focus is regional rather than outlet-level. The company should either randomly select an equal number of customers in each of the four countries or select customers in proportion to the number of outlets per country. If the company instead surveys only the customers of one country, the credibility of the study diminishes (Wang and Bing-Huan 883).
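
To make the proportional option concrete, the sketch below allocates a fixed total sample across the four countries in proportion to their outlet counts. The country names, outlet counts per country, and total sample size are illustrative assumptions, not figures given in the example above.

```python
# Minimal sketch of proportional (stratified) allocation of a survey sample.
# Outlet counts per country and the total sample size are hypothetical.
outlets = {"Country A": 4000, "Country B": 3000, "Country C": 2000, "Country D": 1000}
total_sample = 2000  # customers to survey overall

total_outlets = sum(outlets.values())
allocation = {
    country: round(total_sample * count / total_outlets)
    for country, count in outlets.items()
}

print(allocation)
# {'Country A': 800, 'Country B': 600, 'Country C': 400, 'Country D': 200}
```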



Moreover, most questionnaires are designed so that everyone who receives the survey is asked the same questions.


However, the credibility of a secondary data source suffers when different questions are posed to each member of the selected sample. A core-and-idiosyncratic approach can therefore be used to enhance the credibility of a study: the questions of major interest to particular decision makers are identified and flagged, while a common core is asked of everyone. For instance, a customer satisfaction questionnaire might cover areas such as product performance, delivery, billing, and ordering, but the areas that carry more weight are flagged for specific outlets, since the ordering section will matter more in some outlets than in others (Wang and Bing-Huan 878).
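
One way to picture this core-and-idiosyncratic structure is as a shared core question set plus per-outlet modules. The question texts, module names, and outlet choices below are made up for illustration.

```python
# Sketch of a core-and-idiosyncratic questionnaire: every respondent gets the
# core questions, and each outlet adds only the modules flagged as important.
# All question texts and module names are hypothetical.
core_questions = [
    "How satisfied are you with product performance?",
    "How satisfied are you with billing?",
]

idiosyncratic_modules = {
    "ordering": ["How easy was it to place your order?"],
    "delivery": ["Did your delivery arrive on time?"],
}

def build_questionnaire(flagged_modules):
    """Return the question list for an outlet, given its flagged modules."""
    questions = list(core_questions)
    for module in flagged_modules:
        questions += idiosyncratic_modules.get(module, [])
    return questions

print(build_questionnaire(["ordering"]))   # outlet where ordering matters most
print(build_questionnaire(["delivery"]))   # outlet where delivery matters most
```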



Samples Should Be Adjusted during Fieldwork


In practice, the researcher usually decides on the sample size before fieldwork begins, using a mathematical formula based on the standard deviation of the population and the specific difference the study is designed to detect. The standard deviation is typically taken from previously published research, and the specific difference is proposed by the researcher. During the actual research, however, the observer often encounters a situation different from the one described in earlier studies; in most cases, researchers find that they cannot reach the required sample size because of missing data (Palinkas et al. 537).
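
One common version of such a formula, assuming the study estimates a mean at a 95% confidence level, is n = (z * sigma / E)^2, where sigma is the population standard deviation taken from prior research and E is the specific difference (margin of error) the researcher proposes. The figures below are illustrative, not taken from any study cited here.

```python
import math

def required_sample_size(sigma, difference, z=1.96):
    """Sample size for estimating a mean: n = (z * sigma / difference)^2.

    sigma:      population standard deviation, usually from prior research
    difference: the specific difference (margin of error) the researcher proposes
    z:          critical value, 1.96 for a 95% confidence level
    """
    return math.ceil((z * sigma / difference) ** 2)

# Hypothetical values: prior studies report sigma = 12, and the researcher
# wants to detect a difference of 2 units.
print(required_sample_size(sigma=12, difference=2))  # 139
```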


In a perfect world, every participant would answer every question the researcher presents. In reality, people get bored, tired, or sick, and in some cases are unable or unwilling to participate. Such situations leave the researcher with an inadequate sample, and the study then has reduced statistical power. The researcher therefore has to adjust the sample to maintain the statistical power of the research. More sophisticated analyses require a larger sample to achieve enough power; for example, a complicated analysis such as a multivariate analysis of covariance (MANCOVA) or multiple linear regression needs an adequate sample size to produce the necessary outcomes. Thus, when some respondents fail to turn up during fieldwork, the sample has to be adjusted: some non-essential variables can be dropped so the smaller sample retains power, or the sample size can be increased. Samples therefore almost always have to be adjusted after fieldwork begins, because the real world differs from the perfect world described in textbooks and from the assumptions made in previous studies.
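
As one way to check whether a reduced sample still has adequate power, a researcher might rerun a power calculation with the realized sample size. The sketch below uses statsmodels' TTestIndPower for a simple two-group comparison as a stand-in; the effect size and sample figures are assumptions, and an actual MANCOVA or multiple regression would need its own power calculation.

```python
# Sketch: checking statistical power after respondents drop out, using a
# two-sample t-test power calculation. Effect size, alpha, and the planned
# and achieved per-group sizes are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

planned_n = 64   # per-group size decided before fieldwork
achieved_n = 45  # per-group size actually obtained after dropouts

power_planned = analysis.power(effect_size=0.5, nobs1=planned_n, alpha=0.05)
power_achieved = analysis.power(effect_size=0.5, nobs1=achieved_n, alpha=0.05)
print(f"planned power:  {power_planned:.2f}")   # about 0.80
print(f"achieved power: {power_achieved:.2f}")  # noticeably lower

# Per-group sample size that would restore 80% power with the same effect size.
needed_n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"per-group n needed for 0.8 power: {needed_n:.0f}")
```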



Sampling Should Not Be Considered Secondary Data Collection


Secondary data collection refers to the use of data that has already been gathered; the investigator does not need to go into the field to collect similar data but instead uses what is available. Sampling should not be considered a secondary data collection technique, since it involves selecting a particular group from the population in order to obtain firsthand information. Data collected through sampling is gathered with a concrete purpose in mind, namely answering the set research questions or meeting specific objectives. Secondary data sources, by contrast, provide a vast amount of information, but how well that information fits the research objectives can vary (Costa et al. 885).


Sampling is a statistical procedure whose main focus is the selection of individual observations. It helps the researcher make statistical inferences about a population. Unlike secondary data collection methods, sampling entails collecting original data gathered solely for the purpose at hand; the data is collected for the first time and is fresh. Secondary data, in contrast, is data that was collected for other reasons. Furthermore, secondary data collection usually requires less effort, time, and cost (Costa et al. 889).


An example of sampling is an observer who wants to study the relationship between unemployment and street crime in New York City. The city has many streets and a very large population, so the researcher must determine a sample size for the study. After determining the sample size, the observer sends a questionnaire to the selected respondents, who fill in their opinions on the questions posed. The information collected in these questionnaires is firsthand; hence sampling is not a secondary data collection method.
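
A minimal sketch of how such a sample might be drawn, assuming the researcher has a list (sampling frame) of eligible residents, is shown below; the frame, its size, and the sample size are hypothetical placeholders.

```python
# Sketch: drawing a simple random sample of respondents from a sampling frame.
# The frame contents and the sample size are hypothetical.
import random

sampling_frame = [f"resident_{i}" for i in range(1, 100001)]  # eligible residents
sample_size = 500

random.seed(42)  # for a reproducible draw
respondents = random.sample(sampling_frame, sample_size)

print(len(respondents), respondents[:3])
```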



Sampling Error Is the Same as Respondent Bias


Sampling error refers to a statistical error that occurs when a researcher fails to select a sample that represents the whole population, so that the outcomes from the sample do not match the outcomes that would be obtained from the entire population. The main cause of sampling error is a biased sampling procedure; chance is another possible cause. Respondent bias, on the other hand, refers to factors or conditions that arise while people are responding to surveys and that influence the manner in which responses are given. Such conditions produce a non-random deviation of the responses from their true values. In this sense, sampling error is the same as respondent bias: both are errors that distort the results of a survey negatively and in similar ways (Johnson and Turner 301).


For example, suppose a company provides a subscription-based service that lets consumers pay monthly to stream videos and other programming for at least 10 hours a week, and it chooses a population of people aged between 12 and 22 to survey about purchasing decisions for the service. Since people in this age group mostly do not work full-time, they will give responses that favor themselves, leading to respondent bias. Moreover, the results from this sample would not represent the outcomes of the entire population, including the adults who would be the main beneficiaries of the subscription service, leading to a sampling error (Johnson and Turner 309). Thus, sampling error is the same as respondent bias in the sense that both are errors that affect the results of a survey negatively and in a similar way.
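
The toy simulation below illustrates the example under stated assumptions: true willingness to subscribe differs by age group, a sample restricted to 12-22-year-olds misses the adults (the sampling problem), and that same group over-reports interest (the respondent-bias effect). All numbers are invented for illustration and are not drawn from the sources cited here.

```python
# Toy simulation: an unrepresentative sample and biased responses both pull
# the survey estimate away from the population value. All figures are made up.
import random

random.seed(0)

# True willingness to subscribe (probability), by group.
population = (
    [{"group": "12-22", "true_p": 0.30} for _ in range(2000)]
    + [{"group": "adult", "true_p": 0.60} for _ in range(8000)]
)
true_rate = sum(p["true_p"] for p in population) / len(population)  # 0.54

# Restricted frame: survey only the 12-22 group (the essay's sampling error).
young_only = [p for p in population if p["group"] == "12-22"]
sample = random.sample(young_only, 500)

# Respondent bias: the young group over-reports interest by 0.15.
reported = [min(p["true_p"] + 0.15, 1.0) > random.random() for p in sample]
estimate = sum(reported) / len(reported)

print(f"population rate: {true_rate:.2f}")  # about 0.54
print(f"survey estimate: {estimate:.2f}")   # roughly 0.45, well off the mark
```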



Works Cited


Costa, Gabriel C., et al. "Sampling bias and the use of ecological niche modeling in conservation planning: a field evaluation in a biodiversity hotspot." Biodiversity and Conservation 19.3 (2010): 883-899.


Han, Hui, Wen-Yuan Wang, and Bing-Huan Mao. "Borderline-SMOTE: A New Over-Sampling Method in Imbalanced Data Sets Learning." Advances in Intelligent Computing (2005): 878-887.


Johnson, Burke, and Lisa A. Turner. "Data Collection Strategies in Mixed Methods Research." Handbook of Mixed Methods in Social and Behavioral Research (2003): 297-319.


Palinkas, Lawrence A., et al. "Purposeful sampling for qualitative data collection and analysis in mixed method implementation research." Administration and Policy in Mental Health and Mental Health Services Research 42.5 (2015): 533-544.
