Database Systems Corp.
Phone Surveys

Technology from Database Systems Corp. lets you develop IVR survey applications using our interactive voice response (IVR) solutions. Surveys can be initiated by outbound phone calls or offered in response to inbound callers. Using our PACER and WIZARD phone systems with the Smart Message Dialer and survey software, we can call your survey prospects and play a highly focused, custom greeting. We can then give your survey audience the option to take your survey, talk with a representative, leave a voice message, hear additional information, or simply decline to participate. The survey can accept touch-tone responses or record each answer for later analysis. To view more information regarding our automated phone applications, please visit our Automatic Phone Survey solution web page.
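The call flow just described can be sketched as a small program. The Python sketch below merely simulates the branching at the console; it is illustrative only and does not use the actual PACER or WIZARD interfaces.

    # Minimal, self-contained sketch of the call flow described above,
    # simulated at the console.  Illustrative only -- the real PACER/WIZARD
    # systems place outbound calls and capture touch-tone input, not input().

    def simulate_survey_call(questions):
        print("Playing custom greeting...")
        choice = input("Press 1 to take the survey, 2 for a representative, "
                       "3 to leave a message, 9 to decline: ").strip()
        if choice == "2":
            return {"outcome": "transferred to representative"}
        if choice == "3":
            return {"outcome": "voice message recorded"}
        if choice != "1":
            return {"outcome": "declined"}
        # Touch-tone ratings are collected per question; a real deployment
        # could instead record each spoken answer for later analysis.
        answers = [input(q + " (1-5): ").strip() for q in questions]
        return {"outcome": "completed", "answers": answers}

    if __name__ == "__main__":
        print(simulate_survey_call(["The service I received was excellent."]))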

The following article relates to phone survey techniques and to the products and services offered by our business.

Scale Design in Surveys: The Impact on Performance Measurement Systems - and the Value of Dispersion

By Frederick C. Van Bennekom, Dr.B.A.,
Principal, Great Brook Consulting
www.greatbrook.com



My last article focused on how best to set performance goals based on results from surveys. Is it better to set a goal based on the percentage of respondents who gave "Top Box" scores, or is it better to use an average? As with many issues, there is no clear-cut answer. That article prompted an email from Cathy Switzer, Director of Customer Support Services at FileMaker, Inc., who asked:

When we spoke a few months back regarding expected survey scores, I was aggregating scores (scale of 1 low to 5 high) and taking an average. I've rerun the stats to calculate the total count of "5" responses, the total count of "4" responses, etc. Respondents who gave us a 5 range from a high of XX% to a low of YY%. Naturally, my question - what is an acceptable goal to receive the top rating? I'd like to see 90% at the top rating, but realize that that may be too high an expectation! So I turn to you for guidance on setting a goal %.
Here's my response:
There is no iron-clad goal for the % of 5s.  The wording of the questionnaire, scale choice, and the anchor choices will make a big difference, a VERY big difference.  The real goal is for a positive trend over time.  In fact -- and this will seem counterintuitive -- there's a strong argument for using wording that leads to a lower % of top box scores.  Why?  How else can you measure improvement?  When I worked for DEC (Digital Equipment Corp.), virtually all field service branches scored in the 8.8 to 9.3 range (on a 1 to 10 scale) in their annual survey.  Realistically, how can you improve?  (There were other issues with that survey program, too.)  I personally lean towards scales with good dispersion properties -- but I'm not measured on the results!
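To make the two goal formulations in this exchange concrete, here is a short Python sketch that computes both figures from a handful of invented 1-to-5 ratings (not FileMaker's actual data): the count of each response, the percentage of respondents giving the top rating, and the plain average.

    from collections import Counter

    # Invented ratings on a 1 (low) to 5 (high) scale -- illustration only.
    ratings = [5, 4, 5, 3, 5, 4, 2, 5, 4, 5]

    counts = Counter(ratings)                      # total count of 5s, of 4s, etc.
    top_box_pct = 100 * counts[5] / len(ratings)   # percentage giving the top rating
    average = sum(ratings) / len(ratings)          # aggregate-and-average approach

    print(dict(counts))        # {5: 5, 4: 3, 3: 1, 2: 1}
    print(top_box_pct)         # 50.0
    print(round(average, 2))   # 4.2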
Let's first consider the impact of the questionnaire design and in particular the scale chosen for the survey questions.  Look at the following two scale design options that could be used on a customer satisfaction survey:
Scale 1:   1 = Strongly Disagree     2     3     4     5 = Strongly Agree
Scale 2:   1 = Completely Disagree   2     3     4     5 = Completely Agree
You may have encountered these scale types in surveys you've taken. They are known as Likert-type scales - named after their creator - or more commonly as "strength of agreement" scales. The surveyor poses a statement and asks respondents how strongly they agree with it. Note that the scale could have a different number of points, an even rather than an odd number of points, or descriptive words, known as "anchors," over every point on the scale. Those differences are not my concern for this article.
Let's focus on the anchors used for the endpoints of the scale. Imagine that after staying in a hotel you were posed the survey statement, "The service I received was excellent." What if every aspect of your stay was superb, but one small problem occurred - perhaps dirty room-service dishes sat in the hallway far too long? On which scale would you be more likely to give a 5 rating? Note that the second scale has an extremely high threshold for giving a 5 (or a 1). That scale is less likely to get ratings at the endpoints because of the absolute requirement expressed in the anchors. I seriously question whether the "Completely" scale is truly an interval scale, because the cognitive difference between a 3 and a 4 rating is likely smaller than between a 4 and a 5 rating, but that's a topic for another article.
My point with this example is that scale design and anchor choice will influence respondents' ratings -- both higher and lower.  This is a key reason why I'm skeptical of the cross-company comparison data sets where each company is using a different survey instrument.  So many variables are in play that legitimate comparisons are quixotic.  
My email also keyed on the idea that a scale with good dispersion properties allows for measuring improvement trends. Let's assume for a moment that the purpose of the survey is not to gild your performance reviews, but to identify areas for improvement. (Being a surveyor is in many ways like being an accountant. Tell me what numbers you want and I can deliver them to you!) If all the scores are clustered at the high end because of an artifact of the questionnaire design, how can you identify where to focus improvement efforts? Those of you with some background in quality management should know the goal of Pareto Analysis: separate the critical few from the trivial many. With no dispersion, the noise overwhelms the signal from the critical few. (Yes, my late dad was an engineer.)
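As a concrete illustration of why dispersion matters for Pareto analysis, the short Python sketch below ranks a few service attributes by their share of low ratings; the ratings are invented for illustration. Where scores are clustered uniformly at the high end, the ranking has nothing to separate.

    from statistics import mean, stdev

    # Invented 1-5 ratings for a few service attributes -- illustration only.
    attributes = {
        "response time":   [3, 2, 4, 3, 2, 3, 4, 2],
        "technical skill": [5, 4, 5, 5, 4, 5, 5, 4],
        "courtesy":        [5, 5, 4, 5, 5, 5, 4, 5],
    }

    def low_share(scores):
        """Share of ratings at 1 or 2 -- a simple 'problem signal' per attribute."""
        return sum(1 for s in scores if s <= 2) / len(scores)

    # Pareto-style view: sort so the "critical few" problem areas rise to the top.
    for name, scores in sorted(attributes.items(), key=lambda kv: low_share(kv[1]), reverse=True):
        print(f"{name:15s} mean={mean(scores):.2f}  stdev={stdev(scores):.2f}  low ratings={low_share(scores):.0%}")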
So what scale designs lead to dispersion?  In an experiment we did with a client, we created several different questionnaires measuring the same attributes, but with various scale designs.  The different questionnaires were posed randomly to 10,000 respondents on a telephone survey to ensure we didn't introduce a bias.  We found:
Providing a numeric scale with anchors only for the endpoints -- for example, a 1 to 5 scale was presented with verbal descriptions only for the 1 and 5 endpoints -- led to more people choosing the endpoints, probably because that's what they heard.  This effect may be lessened on a web form or paper survey since the respondent would visually see the entire scale.
Conversely, presenting a scale as a series of verbal descriptions -- for example, "Are you extremely satisfied, very satisfied, somewhat satisfied, somewhat dissatisfied, very dissatisfied, or extremely dissatisfied?" -- led to more dispersion and less clustering of responses.
Similarly, a “school grade” scale led to more dispersion.  A school grade scale is where you ask the respondent to grade performance on an A, B, C, D, and F scale.  Apparently, grade inflation doesn't affect surveys!
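A small simulation can mimic the shape of that experiment. In the Python sketch below, the response probabilities for the two presentation styles are invented purely for illustration; the point is only to show how endpoint-only anchors concentrate answers while fully verbal descriptions spread them out.

    import random
    from collections import Counter

    random.seed(1)

    # Invented response probabilities for two ways of presenting a 1-5 scale
    # on the phone -- illustration only, not the experiment's actual data.
    styles = {
        "endpoint-anchored": {1: 0.10, 2: 0.03, 3: 0.07, 4: 0.15, 5: 0.65},
        "fully verbal":      {1: 0.05, 2: 0.15, 3: 0.25, 4: 0.30, 5: 0.25},
    }

    def simulate(dist, n=10000):
        points, weights = zip(*dist.items())
        return random.choices(points, weights=weights, k=n)

    for name, dist in styles.items():
        counts = Counter(simulate(dist))
        n = sum(counts.values())
        endpoint_share = (counts[1] + counts[5]) / n
        clustering = counts.most_common(1)[0][1] / n   # share of the single most-chosen answer
        print(f"{name:18s} endpoints chosen: {endpoint_share:.0%}   most common answer: {clustering:.0%}")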
By the way, because of its high thresholds for giving a 1 or a 5, I would not consider the "Completely" scale to have good dispersion properties. Here's another scale I've used that has good dispersion properties. I call it a Met Expectations scale.
-2 = Fell Well Short of Expectations    -1    0 = Met Expectations    1    2 = Greatly Exceeded Expectations
Notice how a level of service delivered just as promised puts you at the center of the scale, and the numbering scheme, while still a 5-point scale, reinforces the midpoint as an average position. This scale, though, is a challenge to present to management. I recall telling my anxious client, when I ran my first numbers on the survey data set, that the company had scored a 1.1 on the overall satisfaction question! A 1.1 doesn't sound anywhere near as impressive as it really was!
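One quick way to translate a Met Expectations score into more familiar terms is to relabel the five points as 1 through 5 (add 3 to every response); the relabeling does not change the anchors, but it shows why a 1.1 was stronger than it sounded. The responses below are invented for illustration.

    from statistics import mean

    # Invented responses on the -2 .. +2 Met Expectations scale.
    responses = [2, 1, 1, 2, 0, 1, 2, 1, 1, 0]

    met_mean = mean(responses)        # 1.1 on the -2..+2 scale
    relabeled = met_mean + 3          # the same answers relabeled as 1..5

    print(met_mean)    # 1.1
    print(relabeled)   # 4.1 -- far closer to the top than "1.1" suggests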
I have covered a lot of ground here, so let me summarize.  Scale design has a huge impact upon how respondents will answer a question.  Thus, a performance measurement system will be greatly affected by the questionnaire design.  Goals have to be set cognizant of the questionnaire.  Also, the goals of a performance measurement system using surveys -- where higher scores are wanted -- can compromise the role of that same survey program in identifying areas for quality improvement where we want a dispersion of responses to separate out the “critical few” on which to focus attention.  





Frederick C. Van Bennekom, Dr.B.A., Principal

Fred Van Bennekom founded Great Brook to apply the lessons he learned from his research and past experience in support service organizations. In addition to his role at Great Brook, Dr. Van Bennekom is a Lecturer in Northeastern University's College of Business Administration in Boston and at the Hult International Business School where he teaches Service Management and Operations Management courses in the graduate and executive programs.

Dr. Fred frequently conducts his Survey Design Workshop to help people learn how to conduct more effective survey programs. A current schedule is available on his Survey Design Workshop Website.