How to Calculate the Reliabilities for the MSCEIT

I have received a number of questions about how to calculate the reliability of the MSCEIT, which indicates that a bit more information in this area would be helpful. Here, then, is a discussion of how to calculate reliability. Some of it is very basic, but it also touches on some issues that require a bit more thought.

At the basic level, when people respond to the MSCEIT, they enter various responses for each item (e.g., 1, 4, 3). These raw responses are not scored responses, and so reliability should not be calculated on them. Any reliability they possess would reflect individual differences in the use of the continuous response scales on Branches 1, 2, and 4, and would not reflect the reliability of measured emotional intelligence.

Once the data are scored, individual item data are typically reported as fractional values (e.g., .32, .45, .56). These are the data on which reliability can be calculated. It can't be stressed enough that the appropriate coefficient for assessing the internal consistency of the MSCEIT is the split-half reliability coefficient. When using the split-half method, it is necessary to divide (i.e., assign) the items from a given task equally between the two halves of the test for which one is calculating the reliability.

Researchers have gotten into the habit of using coefficient alpha for virtually all their reliability needs, and indeed, coefficient alpha often provides a wonderful estimate of reliability -- when items are homogeneous (i.e., all the same in their nature). Coefficient alpha, however, is inappropriate to report for the MSCEIT because the branch and full-scale scores are based on items that vary from task to task. In statistical terms, this means that the items are heterogeneous. For that reason, when I report internal-consistency reliabilities for the MSCEIT, I always use split-halves (except within a single, individual task, where the items are homogeneous).

So, let's say a researcher wanted to know the reliability of Branch 3 on the MSCEIT, which is made up of subtests C (20 items) and G (12 items). The researcher would first create a summed score of, say, all the odd items across both tasks (i.e., C1 + C3 + C5... + ...C19 + G1 + G3...G11), and then a second summed score of all the even items across both tasks (i.e., C2 + C4 + C6... + ...C20 + G2 + G4... G12).
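As a minimal sketch of that step in Python, suppose task_c and task_g are NumPy arrays of the scored, fractional item values, with respondents in rows and items C1..C20 and G1..G12 in column order. (The array names, the random stand-in data, and the layout are assumptions for illustration, not part of the MSCEIT materials.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for real scored MSCEIT data:
# respondents in rows, items in columns.
task_c = rng.random((200, 20))   # items C1..C20
task_g = rng.random((200, 12))   # items G1..G12

# Column 0 holds item 1, so 0::2 selects the odd-numbered items of each
# task and 1::2 the even-numbered items, splitting each task equally.
odd_half = task_c[:, 0::2].sum(axis=1) + task_g[:, 0::2].sum(axis=1)
even_half = task_c[:, 1::2].sum(axis=1) + task_g[:, 1::2].sum(axis=1)

# Correlation between the two half-scores: the reliability of half of Branch 3.
r_half = np.corrcoef(odd_half, even_half)[0, 1]
```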

The correlation between those two sums equals (by classical true-score theory) the reliability of half of Branch 3. To obtain a reliability estimate for all of Branch 3, it is necessary to apply the Spearman-Brown Prophecy formula. The specific correction, in this instance, is that the reliability of the whole test (all of Branch 3) is equal to twice the reliability of the half-test, divided by 1 plus the reliability of the half-test. (This formula is available in most psychometric texts; see, for example, Nunnally or Allen & Yen.)
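In code, that correction is a one-liner; for example, a half-test reliability of .70 steps up to 2(.70) / (1 + .70), or about .82. Continuing the sketch above:

```python
def spearman_brown(r_half):
    """Spearman-Brown step-up: reliability of a test twice as long."""
    return 2 * r_half / (1 + r_half)

r_branch3 = spearman_brown(r_half)   # e.g., spearman_brown(0.70) ~= 0.8235
```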

To calculate the reliability for two Branches together (i.e., an area score), one simply generalizes the procedure, adding in the items from the additional tasks -- for the Strategic EI area, the odd and even items from Tasks D and H. One continues this process, adding in the items from all of the tasks, to estimate the reliability of the full test. (Again, after obtaining the correlation between the two test halves, it is necessary to correct upward with the Spearman-Brown formula.)
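Continuing the same sketch, the generalized procedure can be wrapped in a small helper that accepts one scored-item array per task (the function name and the placeholder item counts for Tasks D and H are again assumptions for illustration):

```python
def split_half_reliability(tasks):
    """Spearman-Brown-corrected split-half reliability of a composite
    score built from several tasks (one 2-D scored-item array per task)."""
    odd = sum(t[:, 0::2].sum(axis=1) for t in tasks)
    even = sum(t[:, 1::2].sum(axis=1) for t in tasks)
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)

# Placeholder item counts; substitute the actual scored Task D and H data.
task_d = rng.random((200, 12))
task_h = rng.random((200, 12))

r_branch3 = split_half_reliability([task_c, task_g])
r_strategic = split_half_reliability([task_c, task_g, task_d, task_h])
```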

Test-retest reliability is also appropriate for the MSCEIT; it can be estimated as the simple correlation between the scores of the same participants who complete the MSCEIT at two different points in time.
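In the same vein, with time1 and time2 as hypothetical vectors of total scores for the same respondents at the two administrations (random stand-ins below), the estimate is a single correlation:

```python
# Stand-ins for total MSCEIT scores at two administrations.
time1 = rng.random(200)
time2 = 0.8 * time1 + 0.2 * rng.random(200)

# Test-retest reliability: the correlation across administrations.
r_retest = np.corrcoef(time1, time2)[0, 1]
```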