DSC Tech Library
Call Centers and Technology
This section of our technical library presents information and documentation relating to call center technology, best practices, software, and products.
Since the Company's inception in 1978, DSC has specialized in the development of communications software and systems. Beginning with our CRM and call center applications, DSC has developed computer telephony integration software and PC-based phone systems. These products have been developed to run on a wide variety of telecom computer systems and environments.
The following article presents product or service information relating to call centers and customer service help desks.
Five Employment Testing Mistakes To Avoid
Call Center Times
With tight budgets and a wealth of candidates on the market, it's common for companies to cut corners when hiring. However, eliminating important screening and evaluation tools can be costly in the long run. By avoiding these five typical employment testing mistakes, you can help reduce turnover, avoid discrimination lawsuits and save time and money.
1) Testing Just for Skills and Not for Job or Cultural Fit. Skills are quantifiable. Valid assessments reveal indicators of a skill or application proficiency. This is only part of what the candidate brings to the job. When behavior, attitude and personality are not assessed in the hiring process, you are only evaluating half the candidate. A candidate who tests as an expert in a given skill can still be a bad fit for the job, company culture and organizational morale. It is just as important to learn if a candidate's behavior, attitude, and work-related values are likely to enhance or interfere with their success as an employee. Skills can be improved with training. Counter-productive behavior, such as theft, substance abuse or aggression, is a different story. Take a three-dimensional look at candidates during the selection process to measure skills and behavioral traits.
2) Using Invalid Assessment Tools. Be wary of assessment tools that are the lowest price in town. Like anything else, you get what you pay for. Shortcuts may have been taken to reduce development costs and thus lower the price. Validity of a test refers to its value as a source of reliable, relevant information about the people taking the test. A test cannot be valid unless it is reliable, and test reliability depends primarily on the quality of its construction. Test construction, using skilled test developers, professional content analysts, editors, subject-matter experts and industrial psychologists, takes time and money.
3) Testing for Skills Unrelated to the Job. The content and design of the test should relate directly to the job. To the courts, the critical issue in determining job discrimination is whether a testing tool is an effective means for assessing the job-relevant knowledge, skills and abilities (KSAs) of job candidates. Employers can justify their desire to identify candidates who are most likely to be successful employees. For most job titles, a demonstration of content validity is considered acceptable legal proof of validity. The objective for the employer should be to measure job-relevant skills and stay out of the courts.
4) Accepting Poor Test Quality. Fair and objective tests do not rely on "tricknology." Tests should never be designed to trick or confuse a candidate. Instructional design is a science that applies theoretical tools to accurately measure a candidate's skill level. Look for tests based on real-world applications, not on certification examinations or textbook definitions. At Qwiz, we utilize Bloom's Taxonomy, an instructional design tool, to categorize the level of proficiency required to correctly answer a question. Once a test is developed, it is constantly reviewed to maintain its integrity and relevance. Good tests require good questions, and development does not stop once a test is published.
5) Relying on Tests Exclusively to Make the Final Decision. Effective hiring processes combine many components, including résumé review and tracking, skills and ability assessments, behavioral and personality assessments, interviews, and background and drug screening. Assessment scores are guidelines and should never be used as the sole basis for making hiring or promotion decisions. Many companies seek to match qualified people to positions that are defined and required for the success of their organization. Skills and competencies tell only part of the story. The most productive employees bring with them the best combination of skills, attitude and cultural fit.
Kurt Ballard is senior vice president and chief marketing officer of Qwiz, Inc. (www.qwiz.com), the leader in competency assessments. You can reach him at KBallard@qwiz.com.
How to Identify a Reliable Employment Testing System
When you consider investing in an employment testing system to help make hiring and advancement decisions, it's important to recognize that not all tests are created equal. Here's what you need to know to make the best choice.
The Standards for Educational and Psychological Testing describe many types of information that testing companies should be able to provide to test users. Although The Standards do describe employers' best practices, it should be stressed that they are not laws. In specific situations, legal and practical considerations will take precedence.
A testing company should provide test users or potential test users sufficient information so that, before they adopt a test, they can evaluate the suitability of the test. In particular, the test user should be able to learn the following:
- the purpose of the test
- what the test is intended to measure
- how the test is administered
- whether the test has a stringent time limit
- the average time required for a test session
- how many questions are on the test
- what type of questions are on the test
- the intended test takers
- the score(s) generated by the test
- how the score(s) should be interpreted
- common misuses of the test, if any
The testing company should also explain the proper interpretation of test scores. This is especially important if, instead of being reported in raw form (for example, number of correct answers), the scores are converted to a special scale (examples: percentiles, stanines, a scale of 0 to 10, or a scale of minus 3 to plus 3).
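To make the raw-versus-converted distinction concrete, here is a small sketch of turning a raw score into a percentile rank against a norm group and then into a stanine. The norm-group scores are invented for illustration; the stanine cutoffs follow the standard 4-7-12-17-20-17-12-7-4 percentage bands.

```python
# Hypothetical sketch: raw score -> percentile rank -> stanine.
# The norm group below is invented; real norms come from the test publisher.
from bisect import bisect_right

norm_group = sorted([12, 15, 18, 20, 21, 23, 25, 27, 30, 33])  # raw scores

def percentile_rank(raw, norms):
    """Percent of the norm group scoring at or below `raw`."""
    return 100 * bisect_right(norms, raw) / len(norms)

def stanine(pct):
    """Map a percentile to a stanine (1-9) using the standard bands."""
    cutoffs = [4, 11, 23, 40, 60, 77, 89, 96]  # cumulative percent boundaries
    return 1 + sum(pct > c for c in cutoffs)

p = percentile_rank(25, norm_group)
print(p, stanine(p))  # 70.0 6
```

A raw score of 25 means nothing by itself; against this norm group it places the candidate at the 70th percentile, which falls in stanine 6, and that is the kind of interpretation the testing company should spell out.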
If the test includes overall or composite scores that combine the results from several subscales, the testing company should explain how these composite scores are calculated, and how they should be interpreted. For example, do all subscales receive equal weight? If particular misinterpretations of the scores are likely, test users should be forewarned. If the testing company scores the tests and the scoring system automatically assigns labels, categories, or descriptions to test takers, such as "unqualified" or "advanced programmer," the testing company should be able to provide a rationale for these assignments.
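As a sketch of the equal-versus-unequal weighting question, a weighted composite might be calculated as below. The subscale names and weights are invented for illustration, not taken from any real testing product.

```python
# Hypothetical example: combining subscale scores into a weighted composite.
# Subscale names and weights are illustrative only.

def composite_score(subscores, weights):
    """Weighted average of subscale scores; weights need not be equal."""
    if set(subscores) != set(weights):
        raise ValueError("every subscale needs a weight")
    total_weight = sum(weights.values())
    return sum(subscores[s] * weights[s] for s in subscores) / total_weight

subscores = {"syntax": 80, "debugging": 60, "design": 70}
weights = {"syntax": 1, "debugging": 2, "design": 1}  # debugging counts double
print(composite_score(subscores, weights))  # 67.5
```

With equal weights the composite would be 70; doubling the weight on debugging pulls it down to 67.5, which is exactly why the test user needs the testing company to disclose how composites are built before interpreting them.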
Support for Claims
If the testing company claims that use of the test will result in a specific outcome, such as reduced turnover or increased productivity, they should also report the basis for that claim, including supporting evidence. The evidence should be detailed enough that the test user can judge whether the same results would likely be obtained in the test user's own setting with the test user's own candidates and/or employees.
If the testing company claims that test scores can predict job performance (that is, that there is a correlation between test scores and a job performance measure), they should also report the basis for that claim, including supporting evidence. The evidence should be detailed enough that the test user can judge whether the same results would likely be obtained in the test user's own setting, with the test user's own candidates and/or employees, and using the test user's own preferred performance measures.
Accuracy of Measurement
There are two common indicators of a test's accuracy of measurement: its reliability and its Standard Error of Measure (SEM). The testing company should be able to provide this information for every test. If a test generates scores on more than one scale, the reliability and SEM should be available for every scale. The reliability is reported as a decimal number between zero and one, with higher numbers being better. The reliability allows a comparison of one test with another, even though the two tests may be of different types or different lengths.
The SEM gives the likely margin of error in an individual's test score, in terms of the scale in which the score is reported. The SEM makes it possible to define a "confidence zone" around an individual's score on a test. However, the SEMs of different tests usually cannot be compared because SEM is influenced by factors such as test length.
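The classical-test-theory relationship between these two indicators can be sketched in a few lines: SEM = SD × √(1 − reliability), and the confidence zone is the observed score plus or minus a multiple of the SEM. The scale standard deviation and reliability below are assumed values for illustration.

```python
# Sketch of the classical-test-theory SEM and confidence zone.
# The SD (10) and reliability (0.91) are assumed example values.
import math

def standard_error_of_measure(sd, reliability):
    """SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

def confidence_zone(score, sem, z=1.96):
    """Approximate 95% confidence zone around an observed score."""
    return (score - z * sem, score + z * sem)

sem = standard_error_of_measure(10, 0.91)  # -> 3.0
low, high = confidence_zone(75, sem)
print(f"observed 75, likely true score between {low:.2f} and {high:.2f}")
```

This also shows why SEMs from different tests cannot be compared directly: the SEM is expressed in the units of each test's own score scale, so a SEM of 3.0 means something different on a 0-100 scale than on a 0-10 scale.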
What Test Users Should Do To Ensure the Validity of Their Tests
The Standards include a number of guidelines that, if followed, will maximize the validity of the test user's testing process. This validity concerns the entire process, including the test itself, the test administration procedures, the interpretation of the test results, and the decisions that are based on the test results.
Test users should note that, by increasing the validity of their testing program, they are simultaneously increasing its effectiveness and its legal defensibility. Further, the test users are legally responsible for the validity of their own testing process.
Before adopting a test, a test user should be comfortable that the test is appropriate for its intended use. In particular, the test should:
- provide the information necessary to support the test user's employment decisions
- be suitable for the job for which it would be used
- be suitable for the test takers who will take it
- generate scores that are relevant and comprehensible
- have appropriate reliability
- be convenient to administer and score
If the population of test takers contains a large number of individuals who are members of protected classes (for example, minorities, women, and people with disabilities), the test user should also investigate whether the test would be suitable for those individuals.
Beware of test score misinterpretations. There are many ways that a test score can be misinterpreted, but they can all be avoided if the test user knows enough about how the test is constructed and what the scores mean. One common source of misinterpretation is scale names that are ambiguous or misleading. For example, if a candidate scores high on a scale titled "Assertiveness," does that mean she takes initiative, or does it mean she is argumentative? Does it mean she takes the lead in social situations, or does it mean she defends her ideas energetically? If the test user has investigated the intent and the content of the Assertiveness scale, he or she will be able to interpret this score more accurately.
Depending on the subject matter, tests may become out of date as jobs and technology change. The test may no longer be applicable to how the job is performed, or it may contain references that reveal its age (for example, math problems involving 10-cent cups of coffee). In the first case, the test is no longer job-relevant; in the second case the test may remain relevant but the obsolete references may be distracting to test takers. The test user is responsible for using only tests that are job relevant and up to date.
The Responsibilities of Test Users
1. Inform test takers about the test(s) they will be given and how the scores will be used. Even in employment situations, where these answers may be obvious, the information should be communicated at least briefly.
2. Inform test takers how different test taking strategies will affect their scores. Test takers should be informed that certain test taking strategies that are unrelated to the subject matter being tested may enhance or inhibit their test performance. For example, let them know whether guessing is a good or poor strategy. Make sure they know whether they can go back to questions they have skipped. Tell test takers if certain parts of the test are more important than others.
3. Have a clear policy on retaking tests. The Standards recognize the right of test users to set their own policies on retakes. Once it is defined, the policy should be clear and should be applied uniformly.
4. Inform test takers of their rights if the integrity of their test score is challenged. The test user should have a clear policy for the situation when a test taker's results may be invalid because of cheating or because they did not complete the test properly. In that event, the test user should inform the test taker of their rights to contest the test user's decision.
5. When testing individuals with disabilities, take steps to ensure that test score inferences accurately reflect the individuals' ability. When modifying a test or test administration procedures to test an individual who has a disability, the overarching concern is the validity of the inference made from the test score. The law allows test users to modify their testing procedures to accommodate candidates' disabilities; it also allows test users to refuse to modify their testing procedures if a modification would impair the usefulness of the tests. The primary goal is for the score(s) generated by the testing procedures to be an accurate (valid) indicator of the candidate's qualifications for the job. The test user is responsible for deciding whether a modification is appropriate in a particular situation.
By ensuring that your employment tests are fair and effective, you can make the best hiring, training and advancement decisions.