International Journal of Public Opinion Research Vol. 22 No. 1 © The Author 2009. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved.
doi:10.1093/ijpor/edp036 Advance Access publication 14 October 2009

Tilburg University, FSW-MTO, Room S110, PO Box 90153, 5000 LE Tilburg

When rating questions are used to measure attitudes or values in survey research, a researcher might want to control for the effect of overall agreement with the set of items that is rated. The need for controlling for overall agreement arises when the set of items refers to conceptually opposite perspectives, when balanced sets of items are used, or when a researcher is interested in relative preferences rather than overall agreement. In this paper, we introduce a method for filtering out overall agreement when a researcher's aim is to construct a latent class typology of respondents, that is, a latent-class ordinal regression model with random intercept. With this approach, segments in the population are identified that differ in their relative preference of particular items over other items in the set. Examples are given on the concepts of locus of control, gender role attitudes and civil morality. The examples demonstrate that when an overall agreement is present in the data, the method is able to detect it, and at the same time allows identifying latent classes of respondents that differ in their relative preference of the items being rated.
The use of rating questions in measuring attitudes or values is still very popular. Rating questions involve a response format in which respondents are asked to indicate their level of agreement with particular issues. A typical format has been introduced by Likert (1932), who developed a response scale

Corresponding author: Guy Moors; e-mail: firstname.lastname@example.org
This article was first submitted to IJPOR November 6, 2008. The final version was received July 6, 2009.
This paper makes use of the CentERdata panel. The author gratefully acknowledges CentERdata and its director, Marcel Das, for including our short questionnaire.
with ordered categories ranging from 'completely disagree' to 'completely agree'. Although the number of response categories may vary, these rating questions have one thing in common: they aim at measuring the desirability or absolute value of a particular object. However, a researcher's interest might not be focused on absolute levels of agreement but rather on relative preferences of one issue over another. Take the following example: people may value a high income as well as opportunities for self-development in a job, but it is the preference of the first over the latter that defines the extrinsic versus intrinsic work motivation (Kohn & Schooler, 1983). From a substantive point of view, then, it is useful to distinguish between an overall agreement response and the relative preferences of respondents. Next to this substance-related justification for modelling overall agreement responses, a methodological motive has been provided. It has been argued that rating questions are susceptible to agreement bias or acquiescence (Billiet & McClendon, 2000). This type of response bias refers to the tendency among respondents to be more prone to choose one of the agreement response categories irrespective of the content of the issue (Paulhus, 1991). This can be easily demonstrated when a balanced set of items is developed to measure a particular latent construct.
A balanced set of items includes positively and negatively worded items toward a particular attitude. The concept of ethnocentrism, for instance, can be measured by including items that refer to intolerance toward ethnic minorities as well as items that reflect acceptance of other cultures (Billiet & McClendon, 2000). When respondents tend to agree with both types of issues, this can be regarded as agreement bias. Note that agreement tendency can only be interpreted as agreement bias when a balanced set of equivalent items is defined. With unbalanced scales, e.g., only positively worded items, it is impossible to say whether the agreement tendency has a substantive meaning or whether it reflects a method bias.
In this article we demonstrate the usefulness of an approach that has been recently introduced in the context of consumer research to remove the effects of the overall response rating level (Magidson & Vermunt, 2006), that is, a latent class regression model with random intercept. This approach is applicable when the aim of the researcher is to define latent segments or clusters in a population, and as such complements methods that have been developed for identifying latent dimensions or factors. We apply this approach in three examples: (1) questions referring to locus of control; (2) gender role attitudes; and (3) civil morality. The first example is chosen because of a conceptual need for distinguishing between internal versus external locus of control. The second example includes balanced items and illustrates differential outcomes as far as agreement response bias is concerned. The third example demonstrates that it is possible to distinguish among clusters of respondents that differ in their relative ranking of morality issues after taking into account an overall agreement response pattern.
The article is organized as follows. First, we elaborate on the method that was developed for removing the effect of overall response rating in estimating taste preferences of consumers and argue how this perspective can be generalized to attitude research. Secondly, we define the context or situation in which the regular survey researcher might consider using this approach. And finally, we apply the method with examples from these particular situations to demonstrate the usefulness of the approach.
RANKING THE RATINGS: INTRODUCING A METHOD TO REMOVE AN OVERALL RESPONSE RATING LEVEL IN CONSUMER RESEARCH AND—BY EXTENSION—IN ATTITUDE RESEARCH

Which type of cornflakes is preferred by which segment of the population? Are there particular design aspects of a car that influence a young woman's choice in buying that car? We all can imagine the kind of questions that are relevant in consumer research. In this type of research there is a primary need to understand differences in taste preferences of consumers. This is important when new products need to be developed and/or when advertising aims at a particular consumer segment. The predominant method of data collection (Magidson & Vermunt, 2006), however, asks a test panel to rate each of the attributes of a product as such. The popularity of rating scales lies in their convenience (van Herk & van de Velden, 2007; see also Munson & McIntyre, 1979): rating scales are easy to administer, can be completed in a short period of time, and are usually not difficult to understand for respondents. Furthermore, they allow for the use of parametric statistical methods. Other formats such as preference ranking and pair-wise comparison, by contrast, are often considered to be too demanding for respondents (Klein, Dülmer, Ohr, Quandt & Rosar, 2004), although it has been argued that the use of discrete choice experiments to elicit preferences may decrease response burden (Ryan, Bate, Eastmond & Ludbrook, 2001). Nevertheless, the rating question remains the predominant method.
The problem with rating data in consumer research, however, is that an overall liking tends to dominate the results, rather than capturing differences in preference between the presented products (Magidson & Vermunt, 2006). This overall liking reflects a respondent's response tendency that should be taken into account when the principal aim of a researcher is to get information on the preference of a product relative to other products. Note that when we refer to the overall liking as a response tendency we do not imply it to be a response bias. Such a tendency is only a response bias if it is independent from the true content of what the researcher aims at measuring. There are many cases, however, in which an overall liking (or disliking) makes perfect sense. Personally, for instance, I don't like cornflakes, but as a test person I can rate different cornflakes and even make a distinction in taste between them.
To eliminate the overall response level effect in rating data it has been suggested that a within-case 'centering' of the data might be useful (Cattell, 1944; Cunningham, Cunningham & Green, 1977; Magidson & Vermunt, 2006). This within-case centering involves (a) calculating a mean level of liking for all objects (items) that are rated and (b) subtracting this value from the observed value for each object (item). As such it measures the deviance from the personal average liking of the objects or items that are rated. Positive values indicate higher than average liking, negative values indicate less than average liking. It has been pointed out (Dunlap & Cornwell, 1994; Cheung & Chan, 2002; Cheung, 2006) that from a statistical point of view this procedure causes particular problems that relate to the ipsative nature of within-case centered data. The measurement of one object (item) is no longer independent from the measurement of other objects (items) in the set, since the sum of all values of the within-case centered variables is a constant (= 0). With k items in a set, only k − 1 within-case centered variables are needed for full information on the relative rating of all k items. If, however, a researcher needs information on relative preferences rather than overall liking, a model-based alternative to within-case centering of rating data should be considered. As is demonstrated by Magidson & Vermunt (2006), a latent-class ordinal regression model with random intercept provides such a model-based alternative. Furthermore, this approach has the additional benefit of maintaining the ordinal discrete metric of the observed rating data.
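The within-case centering described above can be sketched in a few lines. This is a minimal illustration with made-up ratings; NumPy is our choice here, not something the article itself uses:

```python
import numpy as np

# Hypothetical ratings: three respondents rating four items on a 1-6 agreement scale.
ratings = np.array([
    [6, 5, 6, 5],   # respondent with a high overall agreement level
    [3, 2, 3, 2],   # same relative profile, but a low overall agreement level
    [4, 2, 5, 3],
], dtype=float)

# (a) compute each respondent's mean liking over all items, and
# (b) subtract it from every observed rating of that respondent.
centered = ratings - ratings.mean(axis=1, keepdims=True)

# Ipsative property: each respondent's centered scores sum to zero,
# so only k - 1 of the k centered variables carry information.
assert np.allclose(centered.sum(axis=1), 0.0)
```

Note how the first two respondents, whose raw ratings differ only in overall level, end up with identical centered profiles: that overall level is exactly the information the random intercept in the model-based alternative absorbs.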
After all, when an individual's overall mean is subtracted from the discrete ratings of the items, the original discrete distribution changes into a kind of continuous scale with a complicated distribution (Magidson & Vermunt, 2006; Popper, Kroll & Magidson, 2004). Furthermore, Popper, Kroll & Magidson (2004) have demonstrated that the LC ordinal regression model with random intercept outperforms alternative approaches that also aim at distinguishing between an overall liking dimension and a relative preference dimension.
We use Latent GOLD 4.5 to analyze the models presented in this article.
An example data layout for the latent class ordinal regression model is presented in figure 1. The layout reflects a multilevel data structure in which each rated object or item (= lowest level) defines a separate record for each person (Respondent ID = highest level). A nominal variable (Item ID) identifies which item is rated and an ordinal response variable (Rating) identifies the rating that is given to a particular item. Individual level covariates, such as gender, age or education, may also be included.
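The layout of figure 1 is simply the 'long' (stacked) form of an ordinary person-by-item rating matrix. As an illustration of the restructuring step (hypothetical column names and data; pandas assumed):

```python
import pandas as pd

# Hypothetical wide-format data: one row per respondent, one column per rated item.
wide = pd.DataFrame({
    "RespondentID": [1, 2],
    "gender": ["f", "m"],            # individual-level covariate
    "item_A": [5, 2],
    "item_B": [6, 1],
})

# Stack to the multilevel layout: one record per rated item nested within
# respondents; individual-level covariates repeat on every record.
long = wide.melt(
    id_vars=["RespondentID", "gender"],
    value_vars=["item_A", "item_B"],
    var_name="ItemID",
    value_name="Rating",
).sort_values("RespondentID", kind="stable").reset_index(drop=True)
```

The resulting frame has the columns Respondent ID, covariates, Item ID and Rating, matching the structure the LC ordinal regression model expects.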
Let Y_ij denote the rating of respondent i of item j, with i = 1, 2, ..., n (n = number of respondents) and j = 1, 2, ..., m (m = number of items). The rating Y_ij takes on discrete values denoted by k, with k = 1, 2, ..., r (r = number of response categories).

[Figure 1: Example data layout for the LC ordinal regression models.]

Given that the rating is a discrete ordinal response variable, we define an adjacent-category logit model (Agresti, 2002) as follows:

log [ P(Y_ij = k | X_i = x) / P(Y_ij = k − 1 | X_i = x) ] = α_ik + β_xj,   with   α_ik = α_k + λF_i.

This is the specification of a regression model for the logit associated with a given rating k instead of k − 1 for item j, conditional on membership of latent class x, for x = 1, 2, ..., t (t = number of latent classes). The intercept α_ik is allowed to vary across individuals and is defined as a function of the expected value of the intercept (α_k) and a normally distributed continuous factor (F_i) which has a mean equal to 0 and a variance equal to 1, and where λ is the factor loading. Given that the variance of the intercept α_ik equals λ², both the expectation and the square root of the variance are model parameters. The β_xj parameter in the equation refers to the effect of the jth item for latent class x. By using effect coding for parameter identification, the β_xj sum to zero over all items, so that positive values for β_xj indicate that a particular latent class x likes an item more than average and negative values indicate that the item is less preferred than average. It is clear that in this model the random intercept accounts for the overall agreement tendency, whereas the β_xj parameters indicate the relative preference of an item compared to the average liking of all items.
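To make the roles of the parameters concrete, the sketch below computes the rating probabilities implied by an adjacent-category logit model with a random intercept. The parameter values are invented for illustration only, not estimates from the article, and the authors' actual estimation is done in Latent GOLD, not in code like this:

```python
import numpy as np

def rating_probs(alpha, lam, F_i, beta_xj):
    """Rating probabilities under log[P(Y=k)/P(Y=k-1)] = alpha_k + lam*F_i + beta_xj.

    alpha   : adjacent-category intercepts for categories 2..r
    lam     : factor loading of the random intercept
    F_i     : respondent i's standard-normal factor score
    beta_xj : relative preference of item j in latent class x
    """
    eta = np.asarray(alpha, dtype=float) + lam * F_i + beta_xj
    # Cumulating the adjacent logits gives log P(Y = k) up to a constant.
    log_unnorm = np.concatenate([[0.0], np.cumsum(eta)])
    p = np.exp(log_unnorm - log_unnorm.max())
    return p / p.sum()

alpha = [0.2, 0.1, 0.0, -0.1, -0.2]          # 6 response categories -> 5 logits

# A high factor score shifts the whole distribution toward agreement ...
high_agreeer = rating_probs(alpha, lam=1.0, F_i=2.0, beta_xj=0.0)
# ... whereas beta_xj shifts one specific item relative to the person's own level.
liked_item = rating_probs(alpha, lam=1.0, F_i=0.0, beta_xj=0.5)
```

With these values the probability mass of `high_agreeer` piles up in the highest categories, mirroring the overall agreement tendency the random intercept is meant to absorb, while `liked_item` shows the item-specific shift produced by a positive β_xj.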
WHEN IS CORRECTING FOR OVERALL AGREEMENT IN PUBLIC OPINION RESEARCH USEFUL?

Public opinion research is rarely about taste preferences, although one could argue that a similarity can be drawn between rating products and expressing an opinion by rating attitude questions. As indicated in the introduction, there are different situations in which a public opinion researcher might wish to correct for overall agreement tendencies in attitude or values measurement.
These reasons may be of a more conceptual nature, or they may relate to methodological concerns.
First, social scientists often develop sets of items to measure attitudes and values that are assumed to measure opposite opinions. Well-known examples in the literature are: (a) the concepts of 'left' versus 'right' or 'materialism' versus 'postmaterialism' in political science (Inglehart, 1990); (b) 'internal' versus 'external' motivation in social psychology (Rotter, 1954); and (c) 'self-directedness' versus 'external benefits of a job' in sociology (Kohn & Schooler, 1983). We admit that it has been debated whether these concepts are truly bipolar or rather distinct concepts that do not necessarily oppose one another (Alwin & Krosnick, 1988). At the same time, we subscribe to the principle that when a researcher defines a concept in a particular way, the operationalization should be consistent with the concept. This is exactly the argument raised by Inglehart (1990), among others, who claims that in politics—as in life—it is about making choices between alternatives and, by consequence, attitude research should focus on these choices people make. Asking ranking questions, in this case, would be the obvious choice of a researcher. After all, a ranking assignment involves respondents indicating their first, second, etc.
choice among a set of alternatives. There are, however, instances in which ranking cannot be used. In secondary data analysis, for instance, a researcher is left without a choice. Also, the complexity of a ranking assignment becomes problematic when the list of options is fairly long. Suggested alternatives for long lists of items, such as pair-wise comparisons or using triads, also place a burden on respondents since they increase survey time. In these cases a researcher can use the suggested method to control for overall agreement and develop a measurement model that reflects relative preferences. Since the method involves defining segments in the population that differ in their preference structure, the method is applicable if the researcher's main interest is in classifying respondents that differ in their preference profiles. Hence, the approach is conceptually similar to cluster analysis in that it aims at classifying similar objects into groups of which the number of groups as well as their forms are unknown (Kaufman & Rousseeuw, 1990). In this article we apply the method to four items referring to the concept of internal-external locus of control (Rotter, 1954; Mirowsky & Ross, 1990).
Secondly, an overall agreement tendency translates into agreement bias when a balanced set of agreement scales is used to measure a particular construct. In this case respondents reveal a tendency to opt for the agreement categories of a response scale regardless of whether the attitude questions are positively or negatively worded. In this article a balanced set of gender role attitudes is analyzed. The methodological concern that inspired researchers to develop balanced sets of questions is the knowledge that (some) respondents tend to agree (yes-saying) in answering survey questions irrespective of the content of the items. This phenomenon especially occurs when large sets of items with dichotomous response alternatives (yes/no; agree/disagree; true/false) are used (Krosnick, Judd & Wittenbrink, 2005). However, skewness toward the agreement categories of Likert-type questions is also indicative of an agreement response style (Billiet & McClendon, 2000). To correct for agreement bias in the latter case, Billiet & McClendon (2000) have developed a solid approach in the context of structural equation modeling.
Their approach, however, involves a confirmatory factor analysis and is, by consequence, applicable when a researcher focuses on latent factors or dimensions. The method that we have described in the previous section is different in the sense that it identifies latent classes, which is conceptually more similar to identifying clusters. Hence, a researcher who wants to identify a gender role typology (Eid, Langeheine & Diener, 2003) may consider adopting the approach suggested in this article. Finally, our third application of the model, on a set of issues reflecting different aspects of civil morality (Halman, 1997), provides a nice example of how these issues are rank ordered within different segments of the population.
In summary, we suggest that a public opinion researcher should seriously consider the use of the approach elaborated in this article when he or she: (a) aims at classifying respondents in latent classes or clusters in terms of their relative preference of items compared to other items in a set of questions; (b) is using or has to use rating scales; and (c) needs to control for overall agreement.
Data used in this research come from a short questionnaire that was designed and administered in the context of the Dutch CentERpanel web-survey that was established in 1991. This panel started as a random sample of over 2000 households in the Netherlands representative of the Dutch population. Panel members, aged 16 years or older, complete a questionnaire on the internet from their home on a weekly basis. Households without internet access are supplied with a set-top box with which questionnaires can be completed using a television screen as a monitor. Since the sampling strategy is random, this internet panel is not composed of self-selected members, but rather a sample of respondents that represents the full population of Dutch households.
We lack information to calculate different response rates at different stages of this panel research in the period from 1991 to the date of our measurement in December 2005. Important to know, however, is that the organizers of this panel offered us the facility to administer a short questionnaire to a subsample of all panel members in the mailing list of December 2005. A random subsample of 1468 respondents was contacted, of which 1051 respondents filled in the questionnaire. Hence, the refusal rate of all contacted persons was 29 percent.
The analyses reported in this article are conducted with Latent GOLD 4.5. This program also allows specifying information on the sampling design and accordingly corrects standard errors and Wald statistics (Vermunt & Magidson, 2005). In our models standard errors and Wald statistics are corrected for the clustering of individuals within households.
The short questionnaire included two sets of four questions referring respectively to 'gender roles' and 'locus of control'. Respondents were asked to rate their agreement on a six-point scale ranging from 1 = 'completely disagree' to 6 = 'completely agree'. A third set of six questions referred to issues of 'civil morality', and respondents needed to indicate on a scale from 1 = 'never' to 10 = 'always' to what extent they rated particular behaviours as justified. A 'no opinion' option was not presented on the screen because this was the regular practice in this internet panel. The use of short questionnaires is standard practice in this (and other) internet panel research. Furthermore, it is not uncommon in public opinion research to use short scales, especially when information is collected on a broad range of topics. The method proposed in this article, however, can be used with longer sets of questions as well.
The questionnaire was administered in Dutch and the selected items were adapted from regular surveys as indicated in the first table. Table 1 presents an overview of all items per concept and their associated label that will be used in the remainder of this article.
Before presenting the results from the LC regression models with random intercept we need to present some details of our analyses. The first step in any latent class analysis usually involves estimating latent class models with 1, 2, ..., n latent classes and comparing the fit of each one to the data. Model fit is provided by the log likelihood (LL) and the associated Bayesian Information Criterion (BIC) value, which is a parsimony index. The lower this latter value, the better the model fit. The principal reason to use an information criterion in model selection is that it allows for comparisons of non-nested models, which is the case when latent classes are added. We will present these model statistics for LC regression models with and without random intercept.
The difference between these two models is that the latter is a traditional LC regression model in which latent classes differ with respect to both the intercept and regression coefficients, whereas in the former model individual-level variability in the intercept is captured by the random intercept. The comparison of these models with and without random intercept allows evaluating the significance of including the random intercept in our models.
Additionally, we will compare the standard LC pseudo-R2 statistics of the models. Details on the selected model will be presented for each example.
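The selection logic above relies on the standard definition BIC = −2·LL + p·ln(N), where p is the number of free parameters and N the sample size (how Latent GOLD defines N for multilevel records is a software detail we leave aside). A sketch with invented values:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    # Lower BIC = better trade-off between fit and parsimony; unlike a
    # likelihood-ratio test it can compare non-nested models, e.g. models
    # that differ in their number of latent classes.
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical candidate models (log likelihoods and parameter counts invented):
candidates = {
    "2-class": bic(log_likelihood=-6550.0, n_params=25, n_obs=1051),
    "3-class": bic(log_likelihood=-6530.0, n_params=35, n_obs=1051),
}
best = min(candidates, key=candidates.get)   # model with the lowest BIC is retained
```

In this invented comparison the better raw fit of the 3-class model does not compensate for its ten extra parameters, so the 2-class model is retained.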
In the second step of the analysis, we present the results from the selected model. Each of the three example models includes a set of covariates, i.e.
gender, age (in categories), education, living arrangement and job status. As such our models fit within a structural equation modeling (SEM) framework, which is of course a regular practice in public opinion research. Finally, we cross-validate the results of the selected LC regression models with random intercept with the results from a regular latent class approach in which a 'counting for agreement' variable was included as a control variable. This latter variable is operationalized as the total number of agreement responses after dichotomizing items, contrasting the disagreement response options (= 0) with the agreement options (= 1) (Billiet & McClendon, 2000; see also Ray, 1979). If the random intercept adequately captures overall agreement, then the results of the two models should be similar. The major difference, however, is that the model-based estimate of overall agreement is statistically more solid since it is a probabilistic model. This implies that for each respondent a probability of having given an overall agreement response is estimated, whereas the count variable may be biased by measurement error, especially when only a few items are included in this count index. The detailed results of the 'counting for agreement' adjustment models are not presented since similar results were obtained in each example. The major differences between the random intercept model and a model controlling for the count variable were that standard errors of the measurement model were generally smaller in the intercept model and that the estimated proportion of respondents in each latent class slightly shifted. As a summary result we have listed the correlation between the random intercept scores at the individual level and the 'counting for agreement' variable.
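The 'counting for agreement' control variable is straightforward to construct. A sketch on invented six-point ratings, with categories 1-3 treated as disagreement and 4-6 as agreement:

```python
import numpy as np

# Hypothetical ratings of three respondents on four items (1-6 agreement scale).
ratings = np.array([
    [5, 6, 4, 5],
    [2, 5, 1, 3],
    [1, 2, 2, 1],
])

# Dichotomize: disagreement options (1-3) -> 0, agreement options (4-6) -> 1,
# then sum within respondents to obtain the counting-for-agreement index.
agree_count = (ratings >= 4).sum(axis=1)   # -> array([4, 1, 0])
```

With only four items the index takes just five possible values (0 to 4), which illustrates why it is coarser, and more vulnerable to measurement error, than the model-based random intercept estimate.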
DEMONSTRATING THE USEFULNESS OF A LATENT-CLASS ORDINAL REGRESSION MODEL WITH RANDOM INTERCEPT

For each example the results in the tables are presented in three parts. First, we compare the model selection and fit statistics of consecutive models with and without random intercept. Second, the latent class estimates are presented, showing the expected value of the intercept (α_k), the random intercept loading (λ) and the effect sizes of items for each latent class (β_xj). The third part of the table includes the structural part of the model, in which the effects of covariates on latent class membership are presented. Significance at the variable level is indicated by the Wald statistic, and standard errors allow evaluating the significance of parameters (parameter > 1.96 × SE). The question of 'what difference does it make' is answered at the end by comparing results with outcomes from the standard latent class regression analyses.
The first example includes a selection of four items referring to locus of control as presented in a study by Mirowsky & Ross (1990). In their work Mirowsky & Ross link these issues to the concepts of control and defence.
They argue that locus of control takes on a different meaning depending on whether a respondent's defence mechanism is self-defensive or self-blaming. A self-defensive mechanism occurs when a respondent claims responsibility for success or when he/she denies responsibility for failure. Self-blaming refers to a situation in which a respondent claims responsibility for failure or denies responsibility for success. The interesting thing about Mirowsky & Ross' arguments from the perspective of our article is that they provide a theoretical rationale for applying latent class analysis. After all, they define a theoretical typology, and the aim of a latent class analysis is identifying segments in populations and as such defining an empirical typology.
In both the standard latent-class regression model and the random intercept model the two-class model reveals the lowest BIC value. The two-class random intercept model performs better: overall it has the lowest observed BIC and the pseudo-R2 is substantially higher (= .67) than in the model without random intercept (= .45).
Examining the latent class part of the selected model we find confirmation of a strong positive random intercept effect. This is consistent with the argument that overall agreement tends to dominate the picture, as is confirmed by the correlation of this random intercept with the count variable indicating the number of agreement scores on this set of items (r = .791). The positive random intercept effect implies that respondents indicated that all four issues are influential factors in the lives of people. The latent classes identify the relative preferences of particular issues compared to others, controlling for the average agreement (i.e. random intercept) with the four issues. The two latent classes claim responsibility for both successes (insuccess) and failures (infail). Mirowsky & Ross (1990) made similar observations. The main difference between the first and second class is that the first latent class only marginally makes the distinction between claiming responsibility and denying responsibility, whereas in the second latent class this difference is more clearly articulated.
Only education is significantly related to the latent class typology of locus of control (P-value of the Wald statistic <.00). Education is a complex categorical variable distinguishing between primary (first category), secondary (second and third category) and post-secondary levels. Within this post-secondary level the three categories are also listed by increasing level of education. At the same time the type of education (general versus vocational training) also defines the classification. For example, depending on the track a student has followed at the secondary level, he or she is more likely to progress to a particular type of higher post-secondary level. Higher secondary education—which is a general training—prepares for university. As such the classification of educational categories should be thought of as both a vertical (up-down levels) and horizontal (differences in type of education) stratification. With this in mind, the general picture that emerges from the analysis is that the likelihood of being in the second latent class, which more strongly claims responsibility for both successes and failures, increases by level and by type of training (general versus vocational). Among the first three categories of education the likelihood of being classified in the second latent class increases.
This likelihood also increases with increasing levels within the post-secondary categories. The fact that respondents with a higher secondary training have a somewhat higher likelihood of being in the second latent class compared to the intermediate vocational training level indicates that the distinction in vocational versus general education also plays a role.
The second example is chosen from a longstanding theme in sociology, i.e.
gender role attitudes (Thornton & Young-DeMarco, 2001). The set of questions includes two items that reflect gender stereotyping and two items that indicate gender equality. These two sets of items are regarded as conceptually balanced, and hence as items located at the poles of a continuum. Two items reflect opposite views on the work-family balance in the life of women ('job' vs. 'family orientation') and two items discuss the impact of having paid work while being a mother ('working mother' versus 'pre-school child').
Comparing the model selection criteria we observe a result similar to that of the previous example. BIC values indicate that in both types of models, with and without random intercept, the same number of latent classes is selected.
In this example a three-class model is preferred. Again the random intercept model performs better, although only marginally so: the R2 increases by only .05. This should not come as a surprise, since the random intercept has a different meaning compared to the previous example. In the locus of control example overall agreement with the four issues can be fully legitimate from a content point of view. As far as the gender role attitude questions are concerned, an overall agreement tendency much more clearly reflects a response bias.
[Table: BIC values by number of latent classes. Without random intercept: 1-class 13987, 2-class 13223, 3-class 13170, 4-class 13182; with random intercept: 1-class 13994, 2-class 13229, 3-class 13164, 4-class 13166.]

In general one would expect that when respondents answer positively and negatively worded questions consistently with the true content of the concept, the random intercept would be 0 or insignificant. In the case of gender role attitudes, however, there is a significant positive random intercept indicating an agreement tendency. This is confirmed in the 'counting for agreement' model. Again the random intercept is strongly correlated with the counting-for-agreement variable (r = .770).
The latent class profile is consistent with a conceptual framework that distinguishes between an egalitarian and a gender stereotyping orientation, respectively the second and third latent class. The second, egalitarian, latent class shows a relatively higher-than-average rating of ‘job importance’ and higher agreement with the issue that a ‘working mother’ can establish a good relationship with her children as well. At the same time, the average rating of ‘a preschool child’ suffering from having a working mother and ‘family priority’ is less than average. The third, gender stereotyping, latent class defines the counterpart. The first and largest latent class, however, hardly makes any difference between the gender issues, except for the issue of ‘family priority’, which is significantly less preferred. This means that this class qualifies as a category of non-differentiators. Non-differentiators are respondents who give (nearly) identical scores to each item. This interpretation is confirmed by the finding that the latent class probabilities of belonging to this first latent class strongly correlate with an index calculated as the standard deviation of the four scores given by an individual (r = –.82).
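The differentiation index described above, the standard deviation of an individual's four ratings, can be sketched as follows. All data below are hypothetical, chosen only to illustrate the expected negative correlation between the index and membership of the non-differentiator class:

```python
import numpy as np

# Hypothetical ratings (1-5) of the four gender-role items by 4 respondents.
ratings = np.array([
    [3, 3, 3, 3],   # non-differentiator: identical scores
    [3, 4, 3, 3],   # near non-differentiator
    [5, 1, 5, 1],   # strong differentiator
    [4, 2, 5, 1],
])

# Differentiation index: standard deviation of each respondent's four scores.
diff_index = ratings.std(axis=1)

# Hypothetical posterior probabilities of belonging to the first 'neutral'
# latent class (in practice these come from the fitted model).
p_class1 = np.array([0.95, 0.85, 0.05, 0.10])

# Respondents who barely differentiate should have high class-1 probability,
# so the correlation should be strongly negative.
r = np.corrcoef(diff_index, p_class1)[0, 1]
```

With these illustrative numbers the correlation is strongly negative, mirroring the r = –.82 reported in the text.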
Not surprisingly, a gender cleavage is observed in the likelihood of being classified in the second and third latent class, with women being more egalitarian-minded (second class) and men being more conservative (third class).
Overall, education is significantly related to the latent class typology, but the differences between educational categories are only significant as far as the second, egalitarian, class is concerned. University students are clearly more egalitarian in gender roles than other categories, whereas the lower secondary educational level is the least likely to be classified in this second latent class.
Again the level-type combination in education proves to be important.
In contrast to education, living arrangements are related to the probability of belonging to the first ‘neutral’ and third ‘gender stereotyping’ latent classes.
Respondents living with a partner (with or without children) contrast with single parents on these issues. The former are more likely to use ‘gender stereotyping’ in their attitudes, whereas single parents are much less likely to do so. The latter, by contrast, are more likely to be classified in the first ‘neutral’ category, whereas respondents living with a partner are not.
Having a paid job is linked to classification in the first ‘neutral’ category and the second ‘egalitarian’ latent class, with a negative and a positive estimated likelihood, respectively.
[Figure: BIC values for 1- to 4-class models, with and without random intercept: 24371, 23788, 23771, 23859; 23605, 22842, 22791, 22851; 12137, 11768, 11683, 11651; 11750, 11292, 11190, 11144.]

The effect of the age-group variable is not significant. However, an analysis with exact age instead of the age-group variable confirmed that a linear effect of age was significant for this first latent class, whereas no significant linear effect on the second and third latent class was found.
The general picture that emerges regarding the relationship between covariates and gender roles is that expressing a clear position, whether egalitarian or more gender stereotyping, is defined by the level of ‘stability’ in the lives of people. Respondents who experience uncertainty in their lives (i.e. single parents, those without a job, and the young) are more likely to be classified in the first latent class, which shows little differentiation in relative preference among the four gender role issues.
In the final example we apply our analyses to a set of six items referring to civil morality. Three of these items indicate justification of illegitimate personal enrichment, that is: ‘claiming state benefits one is not entitled to’; ‘cheating on tax’; and ‘paying cash for services to avoid taxes’. The remaining items question the justification of interference in the life and death of people, i.e. abortion, euthanasia and homosexuality. It is important to note that these latter issues are legalized in Dutch society, although political opinions on the matter differ. In most research (Halman, 1997) this distinction between the two sets of morality issues is maintained, supported by empirical evidence from factor analysis which identifies two dimensions. In this research, however, we aim at identifying latent classes that differ in their justification of these behaviours relative to an overall justification level. The latent-class regression model with random intercept suits this purpose.
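In schematic form, and using our own notation rather than necessarily the exact parameterization estimated here (see Magidson & Vermunt, 2006, for the model as originally proposed), such a latent-class ordinal regression model with random intercept can be written as:

```latex
% Sketch (hypothetical notation): respondent i, item j, rating category m,
% latent class x, continuous random intercept F_i capturing overall agreement.
\[
  \Pr(Y_{ij} = m \mid X = x, F_i) \;=\;
  \frac{\exp\!\big(\alpha_{jm} + m\,\beta_{jx} + m\,\lambda F_i\big)}
       {\sum_{m'} \exp\!\big(\alpha_{jm'} + m'\,\beta_{jx} + m'\,\lambda F_i\big)},
  \qquad F_i \sim N(0,1).
\]
```

Here the class-specific item effects $\beta_{jx}$ capture relative preferences among the items, while the common loading $\lambda$ on the random intercept $F_i$ absorbs a respondent's overall level of agreement (or, in this example, justification).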
The difference between models with and without random intercept is probably most clear-cut in this third example. Although the BIC criterion results in selecting a three-class model in each case, the difference with a two-class model in the traditional latent class regression model is small. Given that the estimated class size of the third latent class in this case is only 6 percent (compared to 18 percent in the model with random intercept), some researchers may conclude that a two-class model is not a bad choice. The differences in fit of the three-class models with and without random intercept are also quite pronounced, both in terms of BIC and R2. In the latter case the pseudo-R2 increases from .54 to .80 when adding a random intercept. This random intercept is again strongly correlated with the variable counting the number of agreement responses on the set of six items (r = .855).
The first and largest (54.5 percent) latent class appears to rank-order items in a Guttman-like fashion according to the difficulty of finding actions justifiable. ‘Claiming state benefits’ is the least and ‘homosexuality’ the most likely behaviour to be justified. The three items regarding illegitimate personal enrichment rank lowest; the items on interference in life and death rank highest. However, the difference between ‘paying cash’ and ‘abortion’ is small. The rank-ordering within these two sets also seems logical when one reflects on current Dutch (and probably other Western) societies. Legislation on homosexuality is very liberal in the Netherlands, which was one of the first countries to legalize marriage among homosexual couples. Legislation on euthanasia and abortion is stricter, but these practices are not prohibited. Given that euthanasia is a decision regarding one's own life and abortion involves the choice of a woman (or couple) regarding a pregnancy, it seems obvious that public morality is more liberal in the case of euthanasia. None of the ‘personal enrichment’ issues is legal. However, there might be some confusion regarding the issue of ‘paying cash for services to avoid taxes’, since it is legal to pay somebody who can help you with optimizing your tax reductions. The major distinction, however, is between ‘claiming state benefits’ and the other two options. The former is regarded as ‘stealing from the poor’, since most state benefits are provisions for the disadvantaged. When issues refer to ‘taxes’, people associate them with the state or government, and people tend to be less moralistic in their solidarity with the state.
The third latent class adopts a similar rank-ordering as the first latent class. The differences in betas between items, however, are more pronounced, especially for the first and last item. Furthermore, the difference between ‘paying cash for services’ and ‘abortion’ is much more clear-cut than is the case for the first latent class. The second latent class, on the other hand, does not adopt a clear rank-ordering of the issues. In fact this category hardly differs from the overall justification level except for two of the six items, i.e., ‘claiming state benefits’ and ‘paying cash for services’. Oddly, this category shows greater-than-average tolerance toward ‘paying cash for services’, while at the same time tending to find ‘claiming state benefits’ less justifiable. However, in comparison with the other two latent classes, this second class is the least likely to regard ‘claiming state benefits’ as not justifiable. This is also the case for the other two issues referring to personal enrichment, suggesting that the second latent class has somewhat less moral objection to these illegitimate ways of self-enrichment. Nevertheless, the dominant picture is that the latent profile of the second class does not strongly differ from the overall agreement pattern identified by the random intercept. As such it brings together a group of respondents who are virtually acting as non-differentiators as far as their evaluation of moral behaviours is concerned.
Although the rank ordering in relative preference of the six items was highly similar for the first and third latent class, the effects of covariates in predicting class membership are distinct. Except for gender, none of the other covariates is significantly related to membership of the first latent class, as is indicated by the fact that the betas are less than twice their standard error.
Women are somewhat more likely than men to be classified in the first latent class. We already indicated that the rank ordering of the morality issues within the first latent class comes closest to what could be regarded as Dutch culture on these issues. Since the concept of a national culture implies a sharing of values or attitudes across different social groups within that society, it seems obvious that little effect of covariates could have been expected. Hence, the effect of covariates, or rather the lack of effect, on the likelihood of being classified in the first and largest latent class is merely supportive evidence for interpreting the latent profile of the first class as reflecting Dutch culture.
Covariates, however, are clearly related to classification in the third latent class. Women, higher educated respondents, respondents without children and respondents in the age group of 35–44 years form the emerging profile of this third latent class, as is indicated by their positive betas observed in Table 4.
Recall that the rank ordering of items within this latent class is similar to the one observed in the first latent class, except that the differences are more pronounced as far as the first (‘claiming state benefits’) and last (‘homosexuality’) issues are concerned.
Are men more indifferent towards moral issues than women? That is the first question that comes to mind when we observe that men are more likely to be classified in the second latent class, which hardly makes any difference in the rank ordering of items except for a moderately higher-than-overall agreement with the issue of ‘paying cash for services to avoid taxes’ and somewhat less agreement with ‘claiming state benefits’. Also older respondents, aged 65 or more, and respondents with only primary or lower secondary education are more likely to be classified in this second latent class. In fact, the pattern of association between covariates and membership of the second class, which did not substantially differentiate in its ranking of moral issues, appears to be the opposite of the pattern for the third latent class, which did adopt a more pronounced item preference ranking. This opposite pattern of association suggests that the difference between the second and third latent class is a difference in the strength with which a distinction is made in the justification of particular moral behaviours.
In each of the three examples given we found that the latent class regression model with random intercept fitted the data better than the model without random intercept. Hence, in terms of measurement fit it does make a difference whether or not a random intercept is included to control for overall agreement tendencies in a set of items. An applied researcher, however, might wonder whether more substantive findings would change. He or she might wonder whether latent classes differ in content and/or size, or whether effects of covariates change. A lengthy discussion of similarities and differences is beyond the purpose of this article. We can put the answer very boldly: ‘yes, it can and yes, it does make a difference’. Let us illustrate this claim with some summary comparisons for the three examples given.

TABLE 5. Crosstabulation of predicted latent class membership (modal assignment) in the standard versus the random intercept LC regression model
In none of the three examples was there clear evidence that the number of classes changes when including the random intercept. Class sizes and class membership, however, varied, with distinct results in the three examples. The most parsimonious way of illustrating this is by cross-tabulating predicted class membership of the standard LC model with the random intercept model (Table 5). Class membership is defined by modal assignment given the latent class probabilities that are estimated by the model.
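The modal-assignment cross-tabulation can be sketched as follows. The posterior probabilities below are hypothetical; in practice they would come from the two estimated models:

```python
import numpy as np
import pandas as pd

def modal_assignment(posteriors: np.ndarray) -> np.ndarray:
    """Assign each respondent (row) to the latent class with the highest
    posterior probability; classes are numbered 1..K."""
    return posteriors.argmax(axis=1) + 1

# Hypothetical posteriors: 4 respondents, 3 classes, for both model types.
post_standard = np.array([[.7, .2, .1], [.1, .8, .1], [.5, .4, .1], [.2, .3, .5]])
post_random   = np.array([[.6, .3, .1], [.1, .7, .2], [.2, .7, .1], [.1, .2, .7]])

# Cross-tabulate the two class assignments; off-diagonal cells are
# respondents who switched classes when the random intercept was added.
table = pd.crosstab(
    pd.Series(modal_assignment(post_standard), name="standard LC"),
    pd.Series(modal_assignment(post_random), name="random intercept LC"),
)
```

In this toy example the third respondent sits in an off-diagonal cell: class 1 under the standard model but class 2 under the random intercept model, the kind of switch Table 5 summarizes.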
Class sizes differed when comparing the two types of models. Differences were smallest in the second ‘gender role’ example and most clearly observed in the third ‘civil morality’ example. The effect of adding a random intercept in the first ‘locus of control’ and second ‘gender roles’ example is that a number of people who were classified in the first latent class of the standard LC regression model switched to the second and/or third latent class in the random intercept model. In both cases this means that these respondents moved from a latent class in which latent class weights (betas) were moderate (or even insignificant) to a latent class in which the effect sizes were more pronounced. It is important to note that in both examples latent class weights were similar in the models with and without random intercept and that latent class probabilities of the corresponding latent classes correlated highly (>.90). Hence, in the first two examples the principal effect of introducing the random intercept was in fine-tuning the measurement model.
In the ‘gender role’ example, in which the random intercept could be interpreted as capturing agreement bias or acquiescence, the effects of covariates on latent class membership were very similar in the models with and without random intercept. The only differences we observed were that standard errors were generally somewhat smaller in the random intercept model and that effects of covariates were somewhat smaller. In the ‘locus of control’ example, however, the effect of education, which was the only significant covariate, was substantially smaller in the random intercept LC model compared to the standard LC regression model. In the latter model the difference in beta between the first and last category of education was equal to 2.045, whereas in the random intercept model this difference equaled 1.024 (see Table 2).
Finally, the third example on ‘civil morality’ revealed very different results when the standard LC model is compared with the random intercept model.
Estimated latent class sizes differ substantially and one third of the respondents are classified in the off-diagonal cells of the table. Furthermore, latent class weights (betas) obtained with the traditional LC regression model also deviated from the betas reported in Table 4. Only the betas of the first latent class were similar. However, the latent class probability scores for this first class correlated at only .54, which is consistent with the previous finding that many respondents switched latent classes. Hence, it is safe to conclude that the measurement model of ‘civil morality’ is quite different when a random intercept is included. Estimates of the covariates were also very different, but given that we are talking about two truly different measurement models, a comparison of covariate effects makes little sense.
Our research demonstrates the usefulness of a method developed in the context of consumer research, that is, a latent-class ordinal regression model with random intercept, in controlling for overall agreement with rating questions in public opinion research. We argued that the desire to control for such overall agreement may arise from a substantive motive, i.e., when a researcher's conceptual framework refers to relative preferences of particular issues compared to others rather than absolute agreement ratings. Controlling for overall agreement can also be justified from a methodological point of view when a balanced (positively and negatively worded) set of items is analyzed. In this case overall agreement can be interpreted as agreement bias. The method is applicable when a researcher is interested in developing an empirical typology.
As such it does not ‘replace’ other methods developed to control for overall agreement response patterns when a researcher's aim is to identify latent dimensions. To our knowledge, research on overall agreement response patterns in public opinion research has been developed exclusively within this dimensional (factor-analytic) perspective. The method used in this research is a valuable addition for researchers working within the typology (cluster-analytic) perspective. Furthermore, the approach also allows for the inclusion of covariates predicting latent class membership. As such the model fits within the structural equation modeling framework. With software readily available and given the examples from regular public opinion surveys that were presented, we hope that public opinion researchers will consider the use of this approach in their own work when they share the concern that guided this research, i.e., controlling for overall agreement response tendencies in the context of identifying segments or clusters in a population that differ in their relative preferences of particular items compared to other items in the response set.
Agresti, A. (2002). Categorical data analysis (2nd edn). New York: Wiley.
Alwin, D. F., & Krosnick, J. A. (1988). A test of the form resistant correlation hypothesis: Ratings, rankings and the measurement of values. Public Opinion Quarterly, 52, 526–538.
Billiet, J., & McClendon, M. J. (2000). Modeling acquiescence in measurement models for two balanced sets of items. Structural Equation Modeling, 7, 608–628.
Cattell, R. B. (1944). Psychological measurement: Normative, ipsative, interactive. Psychological Review, 44, 292–303.
Cheung, M. W.-L., & Chan, W. (2002). Reducing uniform response bias with ipsative measurement in multiple-group confirmatory factor analysis. Structural EquationModeling, 9, 55–77.
Cheung, M. W.-L. (2006). Recovering preipsative information from additive ipsative data: A factor score approach. Educational and Psychological Measurement, 66.
Cunningham, W. H., Cunningham, I. C. M., & Green, R. T. (1977). The ipsative process to reduce response set bias. Public Opinion Quarterly, 41, 379–384.
Dunlap, W. P., & Cornwell, J. M. (1994). Factor analysis of ipsative measures. Multivariate Behavioral Research, 29, 115–126.
Eid, M., Langeheine, R., & Diener, E. (2003). Comparing typological structures across cultures by multigroup latent class analysis. A primer. Journal of Cross-Cultural Psychology, 34, 195–210.
Halman, L. C. J. M. (1997). Is there a moral decline? A cross-national inquiry into morality in contemporary society. International Social Science Journal, 13, 49–69.
Inglehart, R. (1990). Culture shift in advanced industrial society. Princeton, NJ: Princeton University Press.
Kaufman, L., & Rousseeuw, P. J. (1990). Finding groups in data: An introduction to cluster analysis. New York: John Wiley & Sons, Inc.
Klein, M., Dülmer, H., Ohr, D., Quandt, M., & Rosar, U. (2004). Response sets in the measurement of values: A comparison of rating and ranking procedures. International Journal of Public Opinion Research, 16, 474–483.
Kohn, M., & Schooler, C. (in cooperation with Miller, J., Miller, K., Schoenbach, C., & Schoenberg, R.) (1983). Work and personality: An inquiry into the impact of social stratification. Norwood, NJ: Ablex Publishing Corporation.
Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology.
Magidson, J., & Vermunt, J. K. (2006). Use of latent class regression models with a random intercept to remove overall response level effects in ratings data. In A. Rizzi & M. Vichi (Eds.), Proceedings in Computational Statistics (pp. 351–360).
Mirowsky, J., & Ross, C. E. (1990). Control or defense? Depression and the sense of control over good and bad outcomes. Journal of Health and Social Behavior, 31.
Munson, J. M., & McIntyre, S. H. (1979). Developing practical procedures for the measurement of personal values in cross-cultural marketing. Journal of Marketing Research, 16, 48–52.
Paulhus, D. L. (1991). Measures of personality and social psychological attitudes. In J. P. Robinson & P. R. Shaver (Eds.), Measures of Social Psychological Attitudes Series (Vol. 1, pp. 17–59). San Diego: Academic.
Popper, R., Kroll, J., & Magidson, J. (2004). Application of latent class models to food product development: A case study. Sawtooth Software Conference Proceedings (San Diego, CA). Sequim, WA: Sawtooth Software, Inc.
Ray, J. J. (1979). Is the acquiescent response style problem not so mythical after all? Some results from a successful balanced F scale. Journal of Personality Assessment, 43, 638–643.
Rotter, J. B. (1954). Social learning and clinical psychology. New York: Prentice-Hall.
Ryan, M., Bate, A., Eastmond, C. J., & Ludbrook, A. (2001). Use of discrete choice experiments to elicit preferences. Quality in Health Care, 10(Suppl 1), i55–i60.
Thornton, A., & Young-DeMarco, L. (2001). Four decades of trends in attitudes toward family issues in the United States: The 1960s through the 1990s. Journal of Marriage and the Family, 63, 1009–1037.
van Herk, H., & van de Velden, M. (2007). Insight into the relative merits of rating and ranking in a cross-national context using three-way correspondence analysis. Food Quality and Preference, 18, 1096–1105.
Vermunt, J. K., & Magidson, J. (2005). Latent GOLD 4.0 User's Guide. Belmont, Massachusetts: Statistical Innovations Inc.
Guy Moors is Assistant Professor at the Department of Methodology & Statistics, Faculty of the Social and Behavioural Sciences, Tilburg University. His research interests are in the field of social demography, values and attitude research, survey research methodology and latent class analysis.