SDSCA - FAQs

FREQUENTLY ASKED QUESTIONS

Medication Adherence

Q: Why do new versions not include Medication Self Care?

The revised version of the SDSCA does not include a medication sub-scale. The medication scale was excluded because of its ceiling effects and lack of variability among participants that resulted in unsatisfactory test-retest reliability for included items. Everyone seemed to report excellent adherence to the medication items.

Administer EITHER item 6a or items 7a and 8a. This is up to the investigator. If you administered all three, no harm done (except respondent burden). However, don't score all three; score just 6a, or 7a and 8a.
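The either/or scoring rule above can be sketched in code. This is a hypothetical illustration (the function and variable names are not from the SDSCA materials); it assumes each item is reported as days per week:

```python
# Sketch of scoring the exercise subscale: use EITHER item 6a alone
# OR the mean of items 7a and 8a -- never all three.
# Function and variable names are hypothetical, not from the SDSCA materials.

def exercise_score(item_6a=None, items_7a_8a=None):
    """Return a days/week exercise score from one of the two item sets."""
    if item_6a is not None:
        return float(item_6a)              # single-item scoring (6a only)
    days_7a, days_8a = items_7a_8a
    return (days_7a + days_8a) / 2.0       # mean of items 7a and 8a

print(exercise_score(item_6a=4))           # -> 4.0
print(exercise_score(items_7a_8a=(3, 5)))  # -> 4.0
```

Either path yields a single days/week score for the subscale, which is why scoring all three items would double-count exercise.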

Relation between Diabetes Self-Care and Glucose Control (A1c)

Q: Has improvement in the SDSCA scores been linked with improved A1c or other long term outcomes?

Providers may assume too much about patient adherence from the patient's A1c assay; those with "good" A1c assay results are presumed to be "adherent" while those with "poor" A1c results are presumed to be "nonadherent" (Clarke, Snyder, & Nowacek, 1985). This is unfortunate because A1c results are, in fact, often a poor indicator of patient behavior (Johnson, 1994). A high A1c value tells us something is wrong, but not what. If we wish to identify which of the many possible patient behaviors or other factors (illness, inadequate prescription, incompatible self-management actions, failure to use DCCT certifiable measures of HbA1c, comorbid conditions, idiosyncratic factors, etc.) might be involved, we must conduct a behavioral assessment. Adherence is one important contributor to good control--but is not the same as control and cannot be evaluated by just looking at lab values.

Adherence is not a personality trait. Rather, the various diabetes self-care behaviors are relatively independent of one another; the extent to which a patient is adherent to one component of the treatment regimen (e.g., glucose testing) will tell us little about how "adherent" the patient is to a different component (e.g., exercise or medication taking).

Clarke WL, Snyder AL, Nowacek G. Outpatient pediatric diabetes - I. Current practices. J Chron Dis 1985; 38:85-90.

Johnson SB. Methodological issues in diabetes research: measuring adherence. Diabetes Care 1992; 15(11):1658-1667.

Can I Omit, or Add Items to the SDSCA?

Yes, it is possible to use only a subset of items from any of our measures, and we believe the same is true of most measures. Once you remove items, the scales generally become less reliable, but if reliability rests only on the sheer number of items, I am not sure where that gets us. We think it is better to have the items fit your study. So, in our work we use the number and type of items that are least burdensome to study participants and that fit our particular study question and/or target population.

Can I Reformat the SDSCA?

Please adapt the SDSCA to fit your study needs; there is no reason not to reformat it in a way that makes sense for your study. You do not need to use our scoring scheme either. The only reason to use our scoring system would be if you wanted to compare your outcomes with those in our seven-studies Diabetes Care article.

Can I Change the Wording of the Recommendations in the SDSCA items?

The SDSCA is now old, and the recommendations change as new data become available. For instance, physical activity requirements have now changed. I would recommend updating the minutes to the most recent recommendations.

A “healthful eating plan” should be defined by your research. Nutrition guidelines are different from year to year (actually minute to minute in my read of the situation). So, make the healthful eating plan fit your study. The definition should reflect what you are teaching and what is a healthful diet in your country, or in your population, at the moment of your study. We are assessing how well people with diabetes engage in healthful activities designed to improve their glycemic control. But what is considered a healthful regimen activity changes as new evidence emerges. It is up to you to define for your culture, and for your particular study.

Foot Care

The recommendation for foot care may change as new evidence emerges. I do want to emphasize that you need to reverse-score the foot-soaking item, since people with diabetes should not soak their feet. Due to neuropathy, they may not feel the heat of the water and could burn themselves. We purposely did not elaborate on the foot care items, because we used the same wording as used by Litzelman, from whom those items came. The foot care items are a test of what patients “think” they are supposed to do to care for their feet.

Here is the Litzelman reference, but it is old, and I am no longer up on the foot-care literature. I would suggest using the most up-to-date evidence-based foot-care recommendations for your study, and that may necessitate changing the wording of the items.

Litzelman DK, Slemenda CW, Langefeld CD, Hays LM, Welch MA, Bild DE, Ford ES, Vinicor F. Reduction of lower extremity clinical abnormalities in patients with non-insulin-dependent diabetes mellitus. Ann Intern Med 1993; 119:36-41.

Cut-off Points and Evaluating Clinical Improvement

Q. Is there a definition of improvement in self-management sub-scores? (What amount of change is seen as clinical improvement?)

This question is an excellent one, but not one we have explored with our data. It seems like an obvious question, and one to which we should have an answer. In our studies we have treated level of self-care as a continuous measure; we do not categorize the scores. We use level of self-care to predict behavioral or biologic outcomes, or to explore whether it is predicted by patient demographic characteristics or psychosocial behaviors. But we have not actually ever established cut-off points or defined criterion levels. Given that we would like to find and set a criterion level for each of the behaviors covered in the SDSCA, I am sure that one of us will address this issue, but so far we have not.

In our 2000 Diabetes Care article you can find normative data from our seven studies, and you could see how your sample results compare with ours.

Our theory has been that we would not want to define cut-offs even though categories are useful for classifying people as “adherent” or “nonadherent”. We are always worried that such categories are artificial, and lose valuable information retained in the continuous measures.

There are reasons for defining a cut-point, however; for example, an investigator might want to define some kind of multiple-behavior score. In that case, just use days/week on average across areas.
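The averaging approach described above can be sketched briefly. The subscale names and values below are hypothetical; the sketch assumes each subscale score is already a 0-7 days/week mean:

```python
# Hypothetical sketch of a multiple-behavior score: the mean days/week
# across SDSCA subscale scores (each subscale already a 0-7 days/week mean).
# Subscale names and values are illustrative, not from the SDSCA materials.

def composite_score(subscale_means):
    """Average days/week across subscale scores."""
    return sum(subscale_means) / len(subscale_means)

scores = {"general_diet": 5.5, "exercise": 3.0,
          "glucose_testing": 6.5, "foot_care": 2.0}
print(round(composite_score(list(scores.values())), 2))  # -> 4.25
```

Because every subscale is on the same days/week metric, a simple mean keeps the composite interpretable in those same units.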

Expanded items:

Q. Is it possible to use only the first part of the SDSCA questionnaire (questions 1 to 12) and not the "Additional Items for the Expanded Version of the Summary of Diabetes Self-Care Activities"?

A. The additional 9 items are only for cases in which a researcher wants to know what the respondent’s prescription is so they can compare the current level of self-care to the prescribed level. You can use all of the items or some of them as it fits your research question.

Here is what we say in the 2000 Diabetes Self Care article: “Additional self-care items are also provided that address questions of clinical interest, but for which little or no reliability and validity data are available. Six additional items address self-care recommendations. These may be useful for clarifying patient understanding of self-management goals, as well as for evaluating congruence between perceived recommendations and reported levels of self-care (adherence). The expanded version of the SDSCA may be used when a particular question is of interest to study investigators or when time permits." So, it is up to you.

Dichotomizing

Dichotomizing may not work because some scales in the SDSCA have only 2 items (and there are only 7 answers per item). There are only 13 possible values for these 2-item scales: 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7. If you compute a difference score from one time point to another using these values, there are only 13 possible absolute difference scores (0, .5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6). With only 13 possible difference scores in a sample of 600 subjects, many subjects will have exactly the same difference score. Given that there are so few possible values, a standard significance test based on assumptions of continuous values and a normal distribution is not warranted. But a nonparametric test is also problematic. The Wilcoxon signed-rank test, for instance, assigns ranks to the absolute-value difference scores, then ranks everybody according to their score. This means there will be only 13 distinct ranks across many subjects, and lots of subjects will have "ties" placing them at the same rank. So it is reasonable to do a test based on NOMINAL values rather than CONTINUOUS values: you could categorize (i.e., dichotomize or trichotomize) the SDSCA scores and run a chi-square test. Of course, then the question would be whether to categorize the baseline and follow-up values first or whether to calculate the difference score and then categorize.

You could calculate the difference score from the raw values and then categorize by dichotomizing (whether the person did worse or got better) or trichotomizing (whether the person did worse, stayed the same, or got better), using your own standard for making the categorization. You could, for instance, define staying the same as <1 standard deviation from zero change (or some fraction of a standard deviation), define getting worse as >1 standard deviation (or some fraction of a standard deviation) in the negative direction, and improving as >1 standard deviation (or some fraction of a standard deviation) in the positive direction. You could start with 1 standard deviation and see where that takes you -- if it turns out all your subjects stayed the same by this definition, you could reduce to .5 of a standard deviation and see if you get more even distributions across the three categories.
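The trichotomizing rule just described can be sketched as follows. This is a minimal illustration with made-up change scores (function and variable names are hypothetical); it assumes `changes` holds raw follow-up-minus-baseline subscale scores:

```python
# Sketch of trichotomizing change scores using a fraction of the
# standard deviation of the changes, as described in the text above.
# Names and example values are hypothetical.
import statistics

def trichotomize(changes, k=1.0):
    """Label each change 'worse', 'same', or 'better' using a k * SD cut-off."""
    cut = k * statistics.stdev(changes)
    labels = []
    for c in changes:
        if c <= -cut:
            labels.append("worse")
        elif c >= cut:
            labels.append("better")
        else:
            labels.append("same")
    return labels

changes = [0.5, -2.0, 0.0, 3.0, -0.5, 1.5]
print(trichotomize(changes, k=1.0))
# If nearly everyone lands in "same", retry with a smaller fraction, e.g. k=0.5.
```

The resulting labels can then feed directly into a chi-square test across study arms.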

Smoking Item Questions

Q. In calculating the other SDSCA scores, the mean of the items is computed. What about the smoking item, how does one calculate that item?

The 2000 Diabetes Care article explains the way to score the items, including the smoking item, as well as the normative data which you can use to compare your results. The column titled “Average Values” would be the most helpful to you.

Here is the section from page 7 of the Diabetes Care article explaining how to score the smoking item (#11):

Smoking Status = Item 11 (0 = nonsmoker, 1 = smoker), and number of cigarettes smoked per day.

Add the smoking score to the sum of the mean scores for each of the items 1 through 10.
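The smoking-item scoring quoted above can be sketched in code. This is a hypothetical illustration (function and variable names are not from the article); it assumes items 1 through 10 have already been scored as days/week means:

```python
# Sketch of scoring the smoking item (#11) per the quoted instructions:
# status is 0 = nonsmoker, 1 = smoker, recorded with cigarettes/day,
# and the status is added to the sum of the item 1-10 mean scores.
# Names and example values are hypothetical.

def smoking_status(is_smoker, cigarettes_per_day=0):
    """Return (status, cigarettes/day), with status 0 = nonsmoker, 1 = smoker."""
    return (1 if is_smoker else 0, cigarettes_per_day)

item_means_1_to_10 = [5.0, 4.5, 3.0, 2.5, 6.0, 4.0, 3.5, 5.5, 2.0, 1.5]
status, cigs = smoking_status(True, 10)
total = sum(item_means_1_to_10) + status
print(status, cigs, total)  # -> 1 10 38.5
```

Note that cigarettes per day is recorded alongside the 0/1 status but is not itself added into the total.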

Negative Reliabilities

Q. How do you explain the issue of obtaining negative reliability of item 4 despite code reversing? Specifically:

“I followed the directions for scoring, reversing and analysis of the reliability, whereby I combined each two items (accounts for a subscale) and collapsed all items for doing Cronbach's alpha although you didn't do that. I encountered some difficulties in interpreting my results. After reversing item 4 of the scale (4. On how many of the last SEVEN DAYS did you eat high fat foods such as red meat or full-fat dairy products?), I got negative reliability which violates the concept of reliability. However, my results were consistent with the Spanish translation of the tool and the authors explained all the challenges that I encountered with my reliability analysis. How do you explain this issue of obtaining negative reliability of item 4 despite code reversing?”

The “Specific Diet” scale is the least reliable and we found that to be the case in all of our seven studies (see Table 1 in the 2000 Diabetes Care Article). Report the negative reliability, and say that your findings agree with other versions of the SDSCA.

It is most likely a combination of two things. First, a small sample: with too few study participants it is not possible to generalize about scale reliability. Second, for the SDSCA, unlike usual scale construction, in which the same item is repeated in different ways in order to obtain the traditionally high reliabilities, we focused questions on different regimen components and, to keep the measure brief, used only one item per idea. It also may be that, in reality, there is no relation between eating low fat versus low calorie versus high fruits and vegetables.

New Versions of the SDSCA

Q. I understand that your revised scale (Toobert, Hampson and Glasgow, 2000) had not been fully validated at the time of writing. Do you know if there is recent research investigating reliability and validity?

A. We have never gone on to validate the recommended scale at the end of the Diabetes Care 2000 article, mostly because we rarely use the SDSCA in our own work! We tend to collect full measures of diet (e.g., food frequency questionnaires) and physical activity (e.g., the CHAMPS), etc., so it would be redundant for us to collect both. So, we just haven't done any further work on the SDSCA. We should, since I get questions and requests to use it from all over the world almost every day.