# Adjusting the Statistically Determined Sample Size

The sample size determined statistically represents the final or net sample size that must be achieved in order to ensure that the parameters are estimated with the desired degree of precision and the given level of confidence. In surveys, this represents the number of interviews that must be completed. In order to achieve this final sample size, a much greater number of potential respondents have to be contacted. In other words, the initial sample size has to be much larger because typically the incidence rates and completion rates are less than 100 percent.

Incidence rate refers to the rate of occurrence, or the percentage of persons eligible to participate in the study. The incidence rate determines how many contacts need to be screened for a given sample size requirement. Suppose a study of floor cleaners calls for a sample of female heads of households aged 25 to 55. Of the women between the ages of 20 and 60 who might reasonably be approached to see if they qualify, approximately 75 percent are heads of households aged 25 to 55. This means that, on average, 1.33 women would have to be approached to obtain one qualified respondent. Additional criteria for qualifying respondents (for example, product usage behavior) will further increase the number of contacts. Suppose that an added eligibility requirement is that the women should have used a floor cleaner during the last two months, and it is estimated that 60 percent of the women contacted would meet this criterion. Then the incidence rate is 0.75 × 0.60 = 0.45, and the final sample size will have to be increased by a factor of 1/0.45, or 2.22.

Similarly, the determination of sample size must take into account anticipated refusals by people who qualify. The completion rate denotes the percentage of qualified respondents who complete the interview. If, for example, the researcher expects an interview completion rate of 80 percent of eligible respondents, the number of contacts should be increased by a factor of 1.25. The incidence rate and the completion rate together imply that the number of potential respondents contacted, that is, the initial sample size, should be 2.22 × 1.25, or approximately 2.78, times the sample size required. In general, if there are c qualifying factors with incidences of Q1, Q2, Q3, …, Qc, each expressed as a proportion, then

Incidence rate = Q1 × Q2 × Q3 × … × Qc

Initial sample size = Final sample size / (Incidence rate × Completion rate)
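The arithmetic above can be sketched in a few lines of Python. The 75 percent, 60 percent, and 80 percent figures are the ones from the floor-cleaner example; the final sample size of 1,000 is a hypothetical value chosen for illustration.

```python
# Initial sample size needed, given qualifying incidence rates and
# an expected completion rate (floor-cleaner example from the text).

def initial_sample_size(final_n, incidence_rates, completion_rate):
    """Inflate the final (net) sample size by the overall incidence
    rate (the product of the qualifying proportions) and the
    completion rate."""
    incidence = 1.0
    for q in incidence_rates:
        incidence *= q
    return final_n / (incidence * completion_rate)

# 75% are female heads of household aged 25-55; 60% of those used a
# floor cleaner in the last two months; 80% of qualified respondents
# are expected to complete the interview.
n = initial_sample_size(final_n=1000,
                        incidence_rates=[0.75, 0.60],
                        completion_rate=0.80)
print(round(n))  # 2778 contacts for a final sample of 1,000
```

The 2.78 inflation factor falls out directly: 1 / (0.45 × 0.80) ≈ 2.78, so roughly 2,778 contacts are needed per 1,000 completed interviews.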

The number of units that will have to be sampled is determined by the initial sample size. These calculations assume that an attempt to contact a respondent will result in a determination of whether the respondent is eligible. However, this may not be the case. An attempt to contact a respondent may be inconclusive, as the respondent may refuse to answer, not be at home, be busy, and so on. Such instances will further increase the initial sample size; they are considered later when we calculate the response rate. Often, as in the following symphony example, a number of variables are used for qualifying potential respondents, thereby decreasing the incidence rate.

Real Research

Tuning Up a Symphony Sample

A telephone survey was conducted to determine consumers' awareness of and attitudes toward the Jacksonville Symphony Orchestra. The screening qualifications for a respondent to be included in the survey were: (1) has lived in the Jacksonville area for more than one year; (2) is 25 years old or older; (3) listens to classical or pop music; and (4) attends live performances of classical or pop music. These qualifying criteria decreased the incidence rate to less than 5 percent, leading to a substantial increase in the number of contacts. Although having four qualifying factors resulted in a highly targeted sample, it also made the interviewing process inefficient, because several people who were called could not qualify. The survey indicated that parking was a problem and that people wanted greater involvement with the symphony. Therefore, the Jacksonville Symphony Orchestra advertised the Conductor's Club in 2009. Annual fund donors who join can enjoy the perks of membership, including complimentary valet parking at all Jacksonville Symphony Masterworks and Pops concerts. All membership levels include complimentary admission to intermission receptions in the Davis Gallery at selected concerts (including open bar and hors d'oeuvres).

Calculation of Response Rates

Following the Council of American Survey Research Organizations, we define the response rate as:

Response rate = Number of completed interviews / Number of eligible units in sample

To illustrate how the formula is used, consider the following simple example involving a single-stage telephone survey with individuals where no screening is involved. The sample consisted of 2,000 telephone numbers that were generated randomly. Three attempts were made to reach each respondent. The results are summarized as follows.

In this example, the number of eligible units is 2,000, and the response rate after three calls is 85.0 percent.

Now consider the case of a single-stage sample where screening is required to determine the eligibility of the respondents, i.e., to ascertain whether the respondent is qualified to participate in the survey. The attempt to screen each respondent will result in one of three outcomes: (1) eligible, (2) ineligible, or (3) not ascertained (NA). The NA category includes refusals, busy signals, no answers, etc. In this case, we determine the number of eligible respondents among the NAs by distributing the NAs in the ratio of (1) to (1) + (2), that is, of eligibles to eligibles plus ineligibles. Suppose that we made 2,000 telephone calls that resulted in the following outcomes:

Number of completed interviews = 800
Number of eligible respondents = 900
Number of ineligible respondents = 600
Not ascertained (NA) = 500

The first step is to determine the number of eligible units among the NAs. This can be calculated as: 500 × (900 / (900 + 600)) = 300.

Thus, the total number of eligible units in the sample = 900 + 300 = 1,200, and the response rate = 800/1,200 = 66.7 percent.
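The screening-adjusted calculation can be written as a small function, using the counts from the 2,000-call example above:

```python
# Response rate when eligibility must be screened: the not-ascertained
# (NA) contacts are apportioned between eligible and ineligible in the
# observed eligible : (eligible + ineligible) ratio.

def response_rate(completed, eligible, ineligible, not_ascertained):
    """CASRO-style response rate with NA contacts distributed
    proportionally to the observed eligibility ratio."""
    eligible_in_na = not_ascertained * eligible / (eligible + ineligible)
    total_eligible = eligible + eligible_in_na
    return completed / total_eligible

rate = response_rate(completed=800, eligible=900,
                     ineligible=600, not_ascertained=500)
print(f"{rate:.1%}")  # 66.7%
```

With 900 eligibles out of 1,500 resolved contacts, 500 × 0.6 = 300 of the NAs are treated as eligible, giving 800/1,200 ≈ 66.7 percent, as in the text.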

Although we illustrate the calculation of response rates for telephone interviews, the calculations for other survey methods are similar. Response rates are affected by non-response; hence, non-response issues deserve attention.

Non-response Issues in Sampling

The two major non-response issues in sampling are improving response rates and adjusting for non-response. Non-response error arises when some of the potential respondents included in the sample do not respond. This is one of the most significant problems in survey research. Non-respondents differ from respondents in terms of demographic, psychographic, personality, attitudinal, motivational, and behavioral variables. For a given study, if the non-respondents differ from the respondents on the characteristics of interest, the sample estimates will be seriously biased. Higher response rates, in general, imply lower rates of non-response bias, yet the response rate may not be an adequate indicator of non-response bias. Response rates themselves do not indicate whether the respondents are representative of the original sample. Increasing the response rate may not reduce non-response bias if the additional respondents are no different from those who have already responded but differ from those who still do not respond. Because low response rates increase the probability of non-response bias, an attempt should always be made to improve the response rate.

FIGURE 12.2

Improving the Response Rates

The primary causes of low response rates are refusals and not-at-homes.

REFUSALS Refusals, which result from the unwillingness or inability of people included in the sample to participate, result in lower response rates and increased potential for non-response bias. Refusal rates, the percentage of contacted respondents who refuse to participate, range from 0 to 50 percent or more in telephone surveys. Refusal rates for mall-intercept interviews are even higher, and they are highest of all for mail surveys. Most refusals occur immediately after the interviewer's opening remarks or when the potential respondent first opens the mail package. In a national telephone survey, 40 percent of those contacted refused at the introduction stage, but only 6 percent refused during the interview. The following example gives further information on refusals, terminations, and completed interviews.

Real Research

Reasons for Refusal

In a study investigating the refusal problem in telephone surveys, telephone interviews were conducted with responders and non-responders to a previous survey, using quotas of 100 for each sub-sample. The results are presented in the following table:

Refusals, Terminations, and Completed Interviews

The study found that people who are likely to participate in a telephone survey (responders) differ from those who are likely to refuse (non-responders) in the following ways: (1) confidence in survey research, (2) confidence in the research organization, (3) demographic characteristics, and (4) beliefs and attitudes about telephone surveys.

A recent study conducted by CMOR indicated that consumers prefer Internet surveys to the telephone method. Statistically speaking, out of 1,753 U.S. consumers, 78.9 percent of respondents chose the Internet as their first choice of survey method, whereas only 3.2 percent chose the telephone method.

Given the differences between responders and non-responders that this study demonstrated, researchers should attempt to lower refusal rates. This can be done by prior notification, motivating the respondents, incentives, good questionnaire design and administration, and follow-up.

Prior notification. In prior notification, potential respondents are sent a letter notifying them of the imminent mail, telephone, personal, or Internet survey. Prior notification increases response rates for samples of the general public because it reduces surprise and uncertainty and creates a more cooperative atmosphere.

Motivating the respondents. Potential respondents can be motivated to participate in the survey by increasing their interest and involvement. Two of the ways this can be done are the foot-in-the-door and door-in-the-face strategies. Both strategies attempt to obtain participation through the use of sequential requests. In the foot-in-the-door strategy, the interviewer starts with a relatively small request, such as "Will you please take five minutes to answer five questions?", with which a large majority of people will comply. The small request is followed by a larger request, the critical request, that solicits participation in the survey or experiment. The rationale is that compliance with an initial request should increase the chances of compliance with the subsequent request. The door-in-the-face is the reverse strategy. The initial request is relatively large, and a majority of people refuse to comply. The large request is followed by a smaller request, the critical request, soliciting participation in the survey. The underlying reasoning is that the concession offered by the subsequent critical request should increase the chances of compliance. Foot-in-the-door is more effective than door-in-the-face.

Incentives. Response rates can be increased by offering monetary as well as non-monetary incentives to potential respondents. Monetary incentives can be prepaid or promised. The prepaid incentive is included with the survey or questionnaire. The promised incentive is sent only to those respondents who complete the survey. The most commonly used non-monetary incentives are premiums and rewards, such as pens, pencils, books, and offers of survey results. Prepaid incentives have been shown to increase response rates to a greater extent than promised incentives. The amount of the incentive can vary from 10 cents to \$50 or more. The amount of the incentive has a positive relationship with response rate, but the cost of large monetary incentives may outweigh the value of the additional information obtained.

Questionnaire design and administration. A well-designed questionnaire can decrease the overall refusal rate as well as refusals to specific questions. Likewise, the skill used to administer the questionnaire in telephone and personal interviews can increase the response rate. Trained interviewers are skilled in refusal conversion or persuasion. They do not accept a "no" response without an additional plea. The additional plea might emphasize the brevity of the questionnaire or the importance of the respondent's opinion. Skilled interviewers can decrease refusals by about 7 percent on average.

Follow-up. Follow-up, or contacting the non-respondents periodically after the initial contact, is particularly effective in decreasing refusals in mail surveys. The researcher might send a postcard or letter to remind non-respondents to complete and return the questionnaire. Two or three mailings may be needed in addition to the original one. With proper follow-up, the response rate in mail surveys can be increased to 80 percent or more. Follow-ups can also be done by telephone, e-mail, or personal contacts.

Other facilitators. Personalization, or sending letters addressed to specific individuals, is effective in increasing response rates. The next example illustrates the procedure employed by Arbitron to increase its response rate.

Real Research

Arbitron’s Response to Low Response Rates

Arbitron is a major marketing research supplier. For the year ending December 31, 2008, the company reported revenue of \$368.82 million. Recently, Arbitron was trying to improve response rates in order to get more meaningful results from its surveys. Arbitron created a special cross-functional team of employees to work on the response rate problem. Their method was named the "breakthrough method," and the whole Arbitron system concerning response rates was questioned and changed. The team suggested six major strategies for improving response rates:

1. Maximize the effectiveness of placement/follow-up calls.
2. Make materials more appealing and easier to complete.
3. Increase Arbitron name awareness.
4. Improve survey participant rewards.
5. Optimize the arrival of respondent materials.
6. Increase usability of returned diaries.

Eighty initiatives were launched to implement these six strategies. As a result, response rates improved significantly. However, in spite of those encouraging results, people at Arbitron remain very cautious. They know that they are not done yet and that it is an everyday fight to keep those response rates high. Arbitron's overall response rate was about 33 percent.

NOT-AT-HOMES The second major cause of low response rates is not-at-homes. In telephone and in-home personal interviews, low response rates can result if the potential respondents are not at home when contact is attempted. A study analyzing 182 commercial telephone surveys involving a total sample of over one million consumers revealed that a large percentage of potential respondents was never contacted. The median non-contact rate was 40 percent. In nearly 40 percent of the surveys, only a single attempt was made to contact potential respondents. The results of 259,088 first-call attempts, using the sophisticated random-digit-dialing M/NRIC System, show that less than 10 percent of the calls resulted in completed interviews.

The likelihood that potential respondents will not be at home varies with several factors. People with small children are more likely to be at home than single or divorced people. Consumers are more likely to be at home on weekends than on weekdays, and in the evening as opposed to during the afternoon. Pre-notification and appointments increase the likelihood that the respondent will be at home when contact is attempted.

The percentage of not-at-homes can be substantially reduced by employing a series of callbacks, or periodic follow-up attempts to contact non-respondents. The decision about the number of callbacks should weigh the benefits of reducing non-response bias against the additional costs. As callbacks are completed, the callback respondents should be compared to those who have already responded to determine the usefulness of making further callbacks. In most consumer surveys, three to four callbacks may be desirable. Whereas the first call yields the most responses, the second and third calls have a higher response per call. It is important that callbacks be made and controlled according to a prescribed plan.

Adjusting for Non-response

High response rates decrease the probability that non-response bias is substantial. Non-response rates should always be reported and, whenever possible, the effects of non-response should be estimated. This can be done by linking the non-response rate to estimated differences between respondents and non-respondents. Information on differences between the two groups may be obtained from the sample itself. For example, differences found through callbacks could be extrapolated, or a concentrated follow-up could be conducted on a sub-sample of the non-respondents. Alternatively, it may be possible to estimate these differences from other sources. To illustrate, in a survey of owners of major appliances, demographic and other information may be obtained for respondents and non-respondents from their warranty cards. For a mail panel, a wide variety of information is available for both groups from syndicated organizations. If the sample is supposed to be representative of the general population, then comparisons can be made with census figures. Even if it is not feasible to estimate the effects of non-response, some adjustments should still be made during data analysis and interpretation. The strategies available to adjust for non-response error include sub-sampling of non-respondents, replacement, substitution, subjective estimates, trend analysis, simple weighting, and imputation.

SUB-SAMPLING OF NON-RESPONDENTS Sub-sampling of non-respondents, particularly in the case of mail surveys, can be effective in adjusting for non-response bias. In this technique, the researcher contacts a sub-sample of the non-respondents, usually by means of telephone or personal interviews. This often results in a high response rate within that sub-sample. The values obtained for the sub-sample are then projected to all the non-respondents, and the survey results are adjusted to account for non-response. This method can estimate the effect of non-response on the characteristic of interest.
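The projection step amounts to a count-weighted average. A minimal sketch follows; all counts and dollar means below are hypothetical, invented for illustration rather than taken from the text.

```python
# Adjusting a survey estimate via a sub-sample of non-respondents:
# the sub-sample mean is projected to all non-respondents, and the
# overall estimate is a weighted average of the two groups.

def adjusted_mean(n_resp, mean_resp, n_nonresp, mean_nonresp_sub):
    """Combine the respondent mean with the non-respondent
    sub-sample mean, weighting each group by its size."""
    total = n_resp + n_nonresp
    return (n_resp * mean_resp + n_nonresp * mean_nonresp_sub) / total

# Hypothetical figures: 800 respondents average $310; a telephone
# sub-sample of the 400 non-respondents averages $250, and that
# mean is projected to all 400 non-respondents.
print(adjusted_mean(800, 310.0, 400, 250.0))  # 290.0
```

Ignoring the non-respondents would have left the estimate at \$310; the adjustment pulls it down to \$290, illustrating how the technique quantifies the effect of non-response.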

REPLACEMENT In replacement, the non-respondents in the current survey are replaced with non-respondents from an earlier, similar survey. The researcher attempts to contact these non-respondents from the earlier survey and administer the current survey questionnaire to them, possibly by offering a suitable incentive. It is important that the nature of non-response in the current survey be similar to that of the earlier survey. The two surveys should use similar kinds of respondents, and the time interval between them should be short. As an example, if a department store survey is being repeated one year later, the non-respondents in the present survey may be replaced by the non-respondents in the earlier survey.

SUBSTITUTION In substitution, the researcher substitutes for non-respondents other elements from the sampling frame that are expected to respond. The sampling frame is divided into subgroups that are internally homogeneous in terms of respondent characteristics but heterogeneous in terms of response rates. These subgroups are then used to identify substitutes who are similar to particular non-respondents but dissimilar to respondents already in the sample. Note that this approach would not reduce non-response bias if the substitutes are similar to respondents already in the sample.

Real Research

Exit Polling of Voters: Substituting Non-respondents

Planning exit interviews for a presidential election begins as early as two years before the big day. Research firms such as (http://www.themarketingresearch.com) systematically recruit and train workers.

The questions are short and pointed. Certain issues are well-known determinants of a voter's choice, whereas other questions deal with last-minute events such as political scandals. The questionnaires are written at the last possible moment and are designed to determine not only whom people voted for but on what basis. Uncooperative voters are a problem in exit polling. Interviewers are told to record a basic demographic profile for non-compliers. From these demographic data, a voter profile is developed to replace the uncooperative voters using the method of substitution. Age, sex, race, and residence are strong indicators of how Americans vote. For example, younger voters are more likely to be swayed by moral issues, whereas older voters are more likely to consider a candidate's personal qualities. Therefore, researchers substitute for non-respondents other potential respondents who are similar in age, sex, race, and residence. The broad coverage of exit interviews and the substitution technique for non-compliant voters allow researchers to obtain margins of error close to 3 to 4 percent. Exit polls correctly predicted Barack Obama as the clear winner in the 2008 presidential election.

SUBJECTIVE ESTIMATES When it is no longer feasible to increase the response rate by sub-sampling, replacement, or substitution, it may be possible to arrive at subjective estimates of the nature and effect of non-response bias. This involves evaluating the likely effects of non-response based on experience and available information. For example, married adults with young children are more likely to be at home than single or divorced adults or married adults with no children. This information provides a basis for evaluating the effects of non-response due to not-at-homes in personal or telephone surveys.

TREND ANALYSIS Trend analysis is an attempt to discern a trend between early and late respondents. This trend is projected to non-respondents to estimate where they stand on the characteristic of interest. For example, Table 12.4 presents the results of several waves of a mail survey. The characteristic of interest is average dollars spent on shopping in department stores during the last two months. The average dollar expenditures for the first three successive mailings can be calculated from the survey data, but this value is missing for the non-respondents (non-response case). The value for each successive wave of respondents becomes closer to the value for non-respondents. For example, those responding to the second mailing spent 79 percent of the amount spent by those who responded to the first mailing, and those responding to the third mailing spent 85 percent of the amount spent by those who responded to the second mailing. Continuing this trend, one might estimate that those who did not respond spent 91 percent [85 + (85 − 79)] of the amount spent by those who responded to

Table 12.4

| Group | Percentage of Sample | Average Dollar Expenditure | Percentage of Previous Wave's Response |
|---|---|---|---|
| First mailing | 12 | \$412 | — |
| Second mailing | 18 | \$325 | 79 |
| Third mailing | 13 | \$277 | 85 |
| Non-response | 57 | (\$252) | (91) |
| Total | 100 | \$288 | |

the third mailing, as shown in parentheses in Table 12.4. This results in an estimate of \$252 (277 × 0.91) spent by non-respondents, as shown in parentheses, and an estimate of \$288 (0.12 × 412 + 0.18 × 325 + 0.13 × 277 + 0.57 × 252) for the average amount spent in shopping at department stores during the last two months for the overall sample. Suppose we knew from mail panel records that the actual amount spent by the non-respondents was \$230 rather than the \$252 estimated, and the actual sample average was \$275 rather than the \$288 estimated by trend analysis. Although the trend estimates are wrong, the error is smaller than the error that would have resulted from ignoring the non-respondents. Had the non-respondents been ignored, the average amount spent would have been estimated at \$335 [(0.12 × 412 + 0.18 × 325 + 0.13 × 277)/(0.12 + 0.18 + 0.13)] for the sample.
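The trend-analysis arithmetic can be reproduced in a few lines of Python, using the wave averages and sample shares from the example above (the text works with wave-to-wave ratios rounded to two decimals, so the code does the same):

```python
# Trend analysis: each wave's average spending is a rising fraction
# of the previous wave's, and the trend is extrapolated to estimate
# the non-respondents' spending.

waves = [412.0, 325.0, 277.0]      # average $ spent, mailings 1-3
shares = [0.12, 0.18, 0.13, 0.57]  # share of sample in each group

r2 = round(waves[1] / waves[0], 2)  # 0.79: 2nd wave vs. 1st
r3 = round(waves[2] / waves[1], 2)  # 0.85: 3rd wave vs. 2nd
r_nonresp = r3 + (r3 - r2)          # 0.91: continue the linear trend

nonresp_est = waves[2] * r_nonresp  # estimated non-respondent spending
overall = sum(s * v for s, v in zip(shares, waves + [nonresp_est]))
print(round(nonresp_est), round(overall))  # 252 288
```

This reproduces the \$252 non-respondent estimate and the \$288 overall average; dropping the last term and renormalizing over the first three shares yields the naive \$335 figure.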

WEIGHTING Weighting attempts to account for non-response by assigning differential weights to the data depending on the response rates. For example, in a survey on personal computers, the sample was stratified according to income. The response rates were 85, 70, and 40 percent, respectively, for the high-, medium-, and low-income groups. In analyzing the data, these subgroups are assigned weights inversely proportional to their response rates. That is, the weights assigned would be (100/85), (100/70), and (100/40), respectively, for the high-, medium-, and low-income groups. Although weighting can correct for the differential effects of non-response, it destroys the self-weighting nature of the sampling design and can introduce complications. Weighting is discussed further in Chapter 14 on data preparation.
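The inverse-response-rate weights from the personal-computer example can be computed directly; only the 85/70/40 percent response rates come from the text, and the normalization remark reflects common practice rather than anything stated here.

```python
# Non-response weighting: each income stratum receives a weight
# inversely proportional to its response rate (PC-survey example).

response_rates = {"high": 0.85, "medium": 0.70, "low": 0.40}
weights = {group: 1.0 / r for group, r in response_rates.items()}

# 1/0.85, 1/0.70, 1/0.40 are proportional to 100/85, 100/70, 100/40;
# in practice weights are often rescaled so they average 1 across
# respondents, which does not change the relative adjustment.
for group, w in weights.items():
    print(group, round(w, 2))
```

A low-income respondent thus counts 2.5 times as much as an unweighted case, offsetting that stratum's 40 percent response rate.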

IMPUTATION Imputation involves imputing, or assigning, the characteristic of interest to the non-respondents based on the similarity of the variables available for both non-respondents and respondents. For example, a respondent who does not report brand usage may be imputed the usage of a respondent with similar demographic characteristics. Often there is a high correlation between the characteristic of interest and some other variables. In such cases, this correlation can be used to predict the value of the characteristic for the non-respondents.
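The brand-usage example can be sketched as a toy demographic-matching (hot-deck style) imputation; the field names, matching variables, and records below are invented for illustration, not taken from the text.

```python
# Toy hot-deck imputation: a non-respondent's missing brand usage is
# filled in from a respondent ("donor") with matching demographics.

def impute_usage(record, donors):
    """Copy 'usage' from the first donor matching on age group
    and sex; leave the record unchanged if no donor matches."""
    for donor in donors:
        if (donor["age_group"], donor["sex"]) == \
           (record["age_group"], record["sex"]):
            return {**record, "usage": donor["usage"]}
    return record

donors = [{"age_group": "25-34", "sex": "F", "usage": "heavy"},
          {"age_group": "35-44", "sex": "M", "usage": "light"}]
missing = {"age_group": "25-34", "sex": "F", "usage": None}
print(impute_usage(missing, donors)["usage"])  # heavy
```

Real implementations match on more variables (or use the correlation-based prediction the text mentions), but the principle is the same: borrow the value from the most similar respondent.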


Posted on November 30, 2015 in Sampling Final and Initial Sample Size Determination
