

Interpreting and Understanding MSCEIT Scores

Chapter 4 of the MSCEIT User's Manual is devoted to interpreting and understanding MSCEIT scores. Sometimes, however, I am asked questions that go beyond the information presently in the manual. This portion of the web site is devoted to such questions.

Topic 1: Correspondence Between Scale and Total Scores


I administered the MSCEIT, and the test-taker's total EIQ doesn't match the average of the branch-score EIQs. Can you explain?

Answer (developed collaboratively with Gill Sitarenios of MHS)

Some people who use the MSCEIT discover, when they check the branch scores, that the total score differs from what the branch scores would suggest at face value. For example, an individual might score a bit above average on a number of branches, but then score even higher on the total. The total score, in this case, plainly is not a straight average of the branch scores, so the question is what happened. (The reverse also occurs: a total score lower than the individual branch scores.)

This is a common occurrence with criterion-report tests (that is, ability tests) such as the Wechsler scales and the MSCEIT, and is a consequence of the careful normative process such tests undergo. For the MSCEIT (though the general principles apply to other ability tests as well), task raw scores are computed first, as described in Appendix E of the test manual. The branch, area, and total raw scores are then computed as the average of the constituent task raw scores. The task, branch, area, and total raw scores are then each independently converted to percentile norms, and adjusted to create a more normal distribution. Thus, for example, the total score is not a direct average of the branch scores, but reflects a conversion based on how the individual's total raw score compares to the rest of the normative sample.

Consider the following case: Branch 1 SS = 130, Branch 2 SS = 130, Branch 3 SS = 130, Branch 4 SS = 130. Superficially, it might seem that the total SS should also be 130. However, such a value would not accurately capture or reflect performance. Scoring high on all four branches (unless they are perfectly correlated) is a much rarer occurrence than scoring high on any one specific branch. So, if each branch score on its own generates an SS = 130, one would expect a total SS that is noticeably higher than that. A profile like the one described should produce a more extreme standard score, as it does when computed properly.
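The effect can be demonstrated with a small simulation. The sketch below is illustrative only: the normative sample, the inter-branch correlation of .5, and the z-score-based standardization are assumptions for the demonstration, not the MSCEIT's actual norming procedure or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# A large hypothetical normative sample: four standardized branch raw
# scores with moderate inter-branch correlations (r = .5).
n = 100_000
cov = np.full((4, 4), 0.5)
np.fill_diagonal(cov, 1.0)
branches = rng.multivariate_normal(np.zeros(4), cov, size=n)
total_raw = branches.mean(axis=1)  # total raw = average of branch raws

def standard_score(value, norm_sample):
    """Convert a raw score to a standard score (mean 100, SD 15) by
    locating it within the normative sample's distribution."""
    z = (value - norm_sample.mean()) / norm_sample.std()
    return 100 + 15 * z

# A test-taker two SDs above the mean on every branch (SS = 130 each):
person = np.full(4, 2.0)
branch_ss = [standard_score(person[i], branches[:, i]) for i in range(4)]
total_ss = standard_score(person.mean(), total_raw)

print([round(s) for s in branch_ss])  # each approximately 130
print(round(total_ss))                # higher -- around 138
```

Because the four branches are only moderately correlated, the average of the branch raw scores has a smaller spread in the normative sample than any single branch, so the same raw elevation lands further out in the total-score distribution.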

Topic 2: On Respondents Skipping Answers


Given that the online version of the MSCEIT now enables respondents to skip answers, I was wondering if you could please tell me how the MSCEIT deals with missing item responses when scoring the scales? I have a few cases with missing items, but their scale scores are present. I'd like to estimate the missing item-level values so I can use all cases in the split-half reliability analysis.

I look forward to hearing from you.

Kind regards,

Answer (courtesy of MHS Research)

It is fairly recent (late 2006) that we have allowed individuals to omit items. The Association of Test Publishers (ATP) suggested that it is against a test-taker's rights not to be given the option to omit individual items. For a MSCEIT scale score to be deemed invalid, more than half of the items must be omitted. When items are omitted (but not enough to invalidate the scale), the items that were answered are scored and totaled, and the total is divided by the number of items answered for that scale. In regard to split-half analyses, Dr. Mayer's suggestion can be used.

[Here was that suggestion regarding the calculation of reliability: "If it is just a little missing data, then rather than adding up the scores of the items, I would be inclined to take the mean item score of the odd items on the particular scale, and likewise for the even items, and use the correlation between those means rather than between the respective sums."]
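The two rules above can be sketched as follows. This is a minimal illustration, not MHS's actual scoring code; the representation of an omitted item as `None` is an assumption of the sketch.

```python
import numpy as np

def scale_score(responses):
    """Prorated scale score: invalid (None) if more than half the items
    were omitted; otherwise the total of the answered items divided by
    the number of items answered."""
    answered = [r for r in responses if r is not None]
    if len(answered) < len(responses) / 2:  # more than half omitted
        return None
    return sum(answered) / len(answered)

def split_half_r(data):
    """Split-half coefficient per Dr. Mayer's suggestion: correlate the
    mean odd-item score with the mean even-item score across respondents,
    so a little missing data does not distort the two halves."""
    odd_means, even_means = [], []
    for resp in data:
        odds = [r for r in resp[0::2] if r is not None]
        evens = [r for r in resp[1::2] if r is not None]
        if odds and evens:
            odd_means.append(sum(odds) / len(odds))
            even_means.append(sum(evens) / len(evens))
    return float(np.corrcoef(odd_means, even_means)[0, 1])

print(scale_score([3, None, 4, 5]))        # 4.0 (one of four omitted)
print(scale_score([None, None, None, 2]))  # None (three of four omitted)
```

Using the per-half means rather than sums keeps the two halves on the same footing even when a respondent has answered different numbers of odd and even items.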

If you have any further questions please let me know.

Topic 3: Statistical Cutpoints for Qualitative Categories


We currently source the online version of the MSCEIT and obtain the Resource Report.

The scores on the Resource Report are reported as standard scores; however, as most of our clients are used to our reporting ability-assessment results in percentiles, we have been converting the standard scores into percentiles. This has married up nicely with the ranges participants are placed in, as we had been informed that the standard-score ranges used in MSCEIT scoring matched the percentiles of the normal curve distribution. That is:

  • If participants scored below 2nd percentile, their results would fall in the “Improve” range;
  • If they scored between 2nd and 16th percentiles, they would fall in the “Consider Developing” range;
  • If their scores were between the 16th and 84th percentiles, they would be placed in the “Competent” range;
  • If they scored between the 84th and 98th percentiles they would fall in the “Skilled” range;
  • If they scored above 98th percentile, they would fall in the “Expert” range.
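Translated onto the usual standard-score metric (mean 100, SD 15), the percentile cutpoints above amount to roughly plus and minus one and two standard deviations. A quick check with Python's standard library (an illustrative sketch, not part of the MSCEIT materials):

```python
from statistics import NormalDist

# Percentile equivalents, under a normal curve with mean 100 and SD 15,
# of several candidate standard-score cutpoints.
dist = NormalDist(mu=100, sigma=15)
for ss in (70, 85, 90, 110, 115, 130):
    print(f"SS {ss} -> {dist.cdf(ss) * 100:.1f}th percentile")
```

Note that a standard score of 90 sits near the 25th percentile, not the 16th, and 110 near the 75th, not the 84th; this difference becomes relevant in the answer below.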

However, of late a couple of anomalies have presented themselves, and I would like to query them. On a few occasions, we have converted a participant's standard score into a percentile that placed them in what we would consider the "Average" or "Competent" range, yet the report placed them in the "Consider Developing" range. We have also had a score that we converted into the "Average" or "Competent" range but that the report placed in the "Skilled" range. I have included some examples below:


[Example table: the numeric standard scores and percentile conversions were not preserved. The surviving rows were: Perceiving Emotions — Consider Developing; Using Emotions — Consider Developing; Understanding Emotions — Consider Developing; and a second Perceiving Emotions row whose reported range was also not preserved.]
I am just wondering whether the MSCEIT reports use a different range, and if so, why. I had assumed that the majority of psychological assessment tools work off the normal distribution and its associated ranges; if the MSCEIT is different, I would like to know why, and what the actual differences are, as I feel this would enhance our practice and interpretation of the tool.

Thank you for your help, and I look forward to hearing from you soon.

Answer (courtesy of MHS Research)

Below are the score ranges for the MSCEIT Resource Report. They differ somewhat from the ranges the individual who queried has been using. Ranges of twenty points are used because significant score differences on the MSCEIT are generally around 10 points, and a twenty-point range gives some more room for interpretation within each range.

Score Range            Range Label

0 to < 70              Improve
>= 70 and < 90         Consider Developing
>= 90 and < 110        Competent
>= 110 and < 130       Skilled
>= 130                 Expert
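Taking these cutpoints together with the range names used in the query above, the mapping can be sketched as a small lookup. This is a hypothetical helper for illustration, not MHS's scoring logic.

```python
def resource_report_range(ss):
    """Map an MSCEIT standard score to its Resource Report range,
    using twenty-point bands with cutpoints at 70, 90, 110, and 130."""
    if ss < 70:
        return "Improve"
    elif ss < 90:
        return "Consider Developing"
    elif ss < 110:
        return "Competent"
    elif ss < 130:
        return "Skilled"
    return "Expert"

print(resource_report_range(87))   # Consider Developing
print(resource_report_range(112))  # Skilled
```

This also accounts for the anomalies in the query: a standard score of 87, for instance, converts to about the 19th percentile (nominally "Competent" under a normal-curve conversion) but falls below the report's cutpoint of 90 and is therefore reported as "Consider Developing".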