Validating a Survey: What It Means, How to Do It

A comprehensive guide to validating a survey


Surveys gather information on a wide range of topics, including demographics, opinions, and preferences. Before you start collecting data, however, it is important to validate your survey questions.

While many organizations may urge you to “add validation” as a quick survey tip, that’s often about as far as their suggestion goes. Unfortunately, validating a survey requires much more than that. Dave Collingridge noticed the same phenomenon as a social sciences graduate student, when he was unable to find a professor or other faculty member who would or could help him with survey validation. Now a senior research statistician at a major healthcare organization, Collingridge has since researched and compiled his own method for validating survey questions, a strategy he shared in a Sage Publications article posted on MethodSpace.

What Validating a Survey Means
Validating a survey means assessing whether its questions dependably measure what they are intended to measure. Because multiple, tough-to-control factors can influence the dependability of a question, validating a survey is neither a quick nor an easy task.


How to Validate a Survey
Collingridge outlines a six-step validation method he has successfully used over the years.

Step 1: Establish Face Validity
This two-step process involves having your survey reviewed by two different parties. The first is a group familiar with your topic who can evaluate whether your questions successfully capture it. The second review should come from an expert on question construction, who can ensure your survey does not contain common errors such as leading, confusing, or double-barreled questions.

Step 2: Run a Pilot Test
Select a subset of your intended survey participants and run a pilot test of the survey. Suggested sample sizes vary, although about 10 percent of your total population is a solid number of participants. The more participants you can round up, the better, although even a smaller sample can help you weed out irrelevant or weak questions.

Step 3: Clean Collected Data
Enter your collected responses into a spreadsheet to clean the data. Having one person read the values aloud while another enters them into the spreadsheet greatly reduces the risk of error. Once the data is entered, your next step is to reverse code negatively phrased questions. If respondents have answered carefully, their responses to negatively phrased questions should be consistent with their responses to similar, positively phrased questions. If that is not the case, you may want to consider eliminating that respondent from the dataset. Also double-check the minimum and maximum values in your overall dataset: if you used a five-point scale and see a response of six, you likely have a data-entry error.
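Reverse coding and range checks are easy to automate. Here is a minimal sketch in Python with pandas, assuming a five-point scale; the column names and the clean_responses helper are hypothetical.

```python
import pandas as pd

SCALE_MIN, SCALE_MAX = 1, 5
# Hypothetical names for the negatively phrased questions.
NEGATIVE_ITEMS = ["q3_dislike", "q7_dissatisfied"]

def clean_responses(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Reverse code: on a 1-5 scale, 1 becomes 5, 2 becomes 4, and so on.
    df[NEGATIVE_ITEMS] = (SCALE_MIN + SCALE_MAX) - df[NEGATIVE_ITEMS]
    # Flag out-of-range values (e.g., a six on a five-point scale)
    # as likely data-entry errors.
    out_of_range = (df < SCALE_MIN) | (df > SCALE_MAX)
    if out_of_range.any().any():
        print("Possible data-entry errors in rows:")
        print(df[out_of_range.any(axis=1)])
    return df
```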

Step 4: Use Principal Components Analysis (PCA)
Principal components analysis, or PCA, identifies the underlying components that your survey questions are measuring. Each question receives a factor loading, a value between -1.0 and 1.0 that indicates how strongly the question relates to a component; questions that point back to the same element should load onto the same factor.

Solid values to look for are factor loadings of 0.6 or above. You’ll occasionally run across questions that don’t appear to load onto any factor, which may call for removing the question or analyzing it separately. Your overall goal at this stage is to determine what each factor represents by seeking out common themes among the questions that load onto it. You can combine questions that load onto the same factor and compare them during your final analysis of the data. The number of factor themes you identify indicates the number of elements your survey is measuring.
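As an illustration, here is a minimal sketch of computing factor loadings with scikit-learn’s PCA. It assumes responses is a pandas DataFrame of numeric answers with one column per question; the pca_loadings helper name is hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_loadings(responses: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
    # Standardize so every question contributes on the same scale.
    scaled = StandardScaler().fit_transform(responses)
    pca = PCA(n_components=n_components).fit(scaled)
    # Loadings are the eigenvectors scaled by the square root of their
    # eigenvalues, which puts them on the correlation-like -1.0 to 1.0 scale.
    loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
    return pd.DataFrame(
        loadings,
        index=responses.columns,
        columns=[f"factor_{i + 1}" for i in range(n_components)],
    )

# Questions whose strongest loading falls below the 0.6 guideline are
# candidates for removal or separate analysis:
# loadings = pca_loadings(responses)
# weak = loadings[loadings.abs().max(axis=1) < 0.6]
```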

This step validates what your survey is actually measuring. For instance, several questions may end up measuring the underlying component of employee loyalty, a factor not expressly asked about in your survey but one uncovered by PCA. Because PCA is complex and must be done precisely, calling on a skilled expert for guidance during this step is wise if you’re not familiar with the process.

Step 5: Check Internal Consistency
Your next step is to review the internal consistency of the questions that load onto the same factor. Checking the correlation between these questions measures their reliability by confirming that respondents answer them consistently.

You can review internal consistency with a standard test known as Cronbach’s Alpha (CA). Test values range from 0 to 1.0, and values should generally be at least 0.6 to 0.7 to indicate internal consistency. If you have a value lower than 0.6, some CA programs let you delete a question from the test to see whether consistency improves. If it does, you may want to consider deleting the question from the survey itself. Like PCA, CA can be complex and is most effectively completed with help from an expert in the field of survey analysis.
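For reference, CA follows a standard formula: alpha = (k / (k - 1)) × (1 - sum of the item variances / variance of the total score), where k is the number of questions. Here is a minimal sketch, assuming items is a pandas DataFrame whose columns are the questions that load onto one factor; the cronbach_alpha name is hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                          # number of questions
    item_vars = items.var(axis=0, ddof=1)       # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Values of roughly 0.6 to 0.7 or higher suggest the questions in the
# factor are internally consistent.
```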

Step 6: Revise Your Survey
The final stage of the validation process is to revise your survey based on the information you gathered from your principal components analysis and Cronbach’s Alpha. If you run across a question that doesn’t neatly load onto a factor, you can choose to delete it. If the question is an important one you’d rather not delete, you can always retain it and analyze it separately. If only minor changes were made to your survey, it’s likely ready to go after its final revisions. If major changes were made, especially if you removed a substantial number of questions, another pilot test and another round of PCA and CA are probably in order.

Validating your survey questions is an essential process that helps ensure your survey is truly dependable. You may also include your validation methods when you report on the results of your survey.

Mention that your survey’s face validity was established by experts, that the survey was pilot tested on a subset of participants, and that your pre-launch analysis included PCA and CA methodology. Validating your survey not only fortifies its dependability but also adds a layer of quality and professionalism to your final product.

Try analyzing your survey with mTab as a next step.

John Sevec

SVP, Client Strategy

John provides strategic advisory and insight guidance to premier clients across mTab’s portfolio. His expertise spans customer strategy, market insight and business intelligence.