RR Donnelley (NASDAQ: RRD) is a Fortune 500 company based in Chicago, Illinois, that provides print and related services. Corporate headquarters are located at 111 S. Wacker Drive.
The company, originally known as R.R. Donnelley & Sons Company, was founded in 1864 by Richard Robert Donnelley. His son, Reuben H. Donnelley, founded the otherwise unrelated company R. H. Donnelley.[1]
RR Donnelley's cartographic production facility was for many years one of the largest in the United States. In the late 1980s, the division was spun off as its own company, Geosystems, which in turn became MapQuest. It is now a subsidiary of AOL.







This literature review brings together ideas and studies from several proponents whose work rests on Weber’s ideal type, along with related concepts and research evidence from the social sciences, as they bear on the process adopted and executed in this particular study. The focus of the review is to lay a detailed groundwork of supporting material that is academically grounded and systematically organized, drawing on books, journals and articles in which a variety of scholars have worked within the relevant research paradigms.



Foundation of Weber’s Ideal Type

The ideal type, as developed by Weber, is a conceptual tool. Weber argued that no scientific system is ever capable of reproducing all of concrete reality, nor can any conceptual apparatus ever do full justice to the infinite diversity of particular phenomena: ‘all science involves selection as well as abstraction’ (Weber, 1949; 1968), and the social scientist can easily be caught in a dilemma when choosing a conceptual apparatus. An ideal type is an analytical construct that serves the investigator as a measuring rod to ascertain similarities as well as deviations in concrete cases, and it provides the basic method for comparative study. It is formed by the “accentuation of one or more points of view and by the synthesis of a great many diffuse, discrete, more or less present and occasionally absent concrete individual phenomena, which are arranged according to those one-sidedly emphasized viewpoints into a unified analytical construct” (Hekman, 1983); the term ‘ideal’ here refers to an analytical idealization rather than to a moral ideal.



Ideal Type and Research Studies

Some of Weber's ideal types refer to collectivities rather than to the social actions of individuals, but social relationships within collectivities are always built upon the probability that component actors will engage in expected social actions. An ideal type never corresponds to concrete reality but always moves at least one step away from it. It is constructed out of certain elements of reality and forms a logically precise and coherent whole which can never be found as such in that reality. Thus Julien Freund observes that, “being unreal, the ideal type has the merit of offering us a conceptual device with which we can measure real development and clarify the most important elements of empirical reality”.

Weber's earliest involvement in empirical social research concerned labor conditions, workers' attitudes and work histories; he used questionnaires and direct observation as well as modern statistical approaches to the psychological aspects of factory work, and in his critique of another scholar's study of workers' attitudes he advocated a quantitative or typological approach to qualitative data (Lazarsfeld and Oberschall, 1965, p. 185). The same authors note his explicit support for quantitative techniques, his expression of the meaning of social relationships in terms of probability, his view of the value and role of empirical research in sociology, his position in the debate over whether sociology and psychology should be distinguished, and his use of the ideal type as a conceptual device even without reference to empirical research (Lazarsfeld and Oberschall, 1965, p. 185). Hekman (1983, p. 119) argues that Max Weber's concept of the ideal type has fallen into neglect in contemporary social science, yet that it is methodologically sound and logically consistent; inasmuch as it offers a common basis for the analysis of subjective meaning and structural forms, it may provide a corrective to what she sees as the present methodological disarray in social theory.

The literature can also be critical of how effectively Weber’s ideal-type concepts have been used, since efforts to grasp the significance of this basic tool of science through Weber’s work alone may prove fruitless. Lopreato and Alston (1970, p. 88) contend that Weber did not have a mature grasp of the class of logical devices to which the ideal type belongs, namely idealizations, which are unavoidable in some research strategies and which lie, theoretically speaking, on a continuum; several idealizations in social science have nevertheless assumed real theoretical importance without being recognized as such. Indeed, they argue that a special effort to recognize the theoretically inspired character of Weber’s idealization helps to avoid pointless squabbles about peripheral aspects of research theories and enhances the chances of getting down to the serious business of theory construction with a focused sense of purpose and a cumulative orientation (Lopreato and Alston, 1970, p. 88).

Purpose of Ideal Type Contents

Several proponents have argued that greater reliability in social research methods is obtained by developing operational definitions that securely connect classificatory schemes with observable phenomena. Some of these arguments rely on a definite distinction between observable and unobservable pathways, which philosophers regard as a central issue for ideal-type research. The following purposes could accordingly be considered:



In an orthogonal design, attribute levels are equally represented, so that the effects of attributes can be estimated efficiently. The number of parameters to be estimated is determined by the number of products and by the number of attributes and levels. In the example we cited earlier, we have two two-level and three three-level attributes, and the smallest integer divisible by 2, 3, 2 x 2, 2 x 3 and 3 x 3 is 36. That is, we achieve perfect orthogonality if we have a total of 36 profiles in the study. However, 36 profiles are too many for respondents to complete, so we have to reduce the number, say to 18. The number 18 is divisible by each of the numbers above except 2 x 2, so we compromise on the number of profiles and accept imperfect orthogonality. Balance refers to the frequency with which attribute levels appear across the total number of profiles: ideally, each level of an attribute should appear an equal number of times in the selected profiles. This is hard to achieve when the number of profiles has to be reduced in a fractional factorial design.

Fortunately, many software packages provide calculations that determine the number of profiles needed. The SAS macro %mktruns is one such example (Kuhfeld, 2000).
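The same divisibility reasoning can be reproduced directly. Below is a minimal Python sketch, not the %mktruns macro itself, that computes the smallest perfectly orthogonal number of profiles for a given set of attribute level counts; the `levels` list encodes the two 2-level and three 3-level attributes from the example above.

```python
from itertools import combinations
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def min_orthogonal_runs(levels):
    """Smallest number of profiles divisible by every level count
    and by every pairwise product of level counts (perfect orthogonality)."""
    divisors = set(levels)
    divisors.update(a * b for a, b in combinations(levels, 2))
    result = 1
    for d in divisors:
        result = lcm(result, d)
    return result

# Two 2-level and three 3-level attributes, as in the example above.
levels = [2, 2, 3, 3, 3]
print(min_orthogonal_runs(levels))  # -> 36
```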

3) Presentation of the choice sets

There are many ways in which discrete choice scenarios may be presented. Two popular formats are the choice question and allocation. They are very similar, except that respondents are asked either to make a choice among the alternatives (choice question) or to allocate a number of prescriptions across them (allocation).

Suppose we now have five attributes and three competing drugs for headaches. Table 1 illustrates a simplified version of the choice question and Table 2 shows the allocation format. The allocation approach asks the respondent to assign a number of patients/prescriptions to each product.




Notice that in both tables we can include different attributes and attribute levels for each drug. For instance, the dosing for Drug A may be QD, BID or TID, while the dosing for Drug B may be daily, every other day or weekly.

As shown in the tables, physicians are asked in each scenario either to make a choice or to allocate their next 10 prescriptions or patients across a set of profiles. These allocations can also be made for each patient type or for other situational variables such as disease severity and comorbidities. In presenting the sets of alternatives to physicians, we usually present these situational variables along with the choice sets. For instance, in presenting the drug set for treating patients with headaches, we may present different types of patients: migraine, tension and cluster, or mild and severe. Patients may also be men or women, since migraines affect more women than men. Physicians are therefore asked to make a choice among the alternatives under each level of these situational variables. The responses under different situational variables are collected in order to assess the impact of those variables on prescribing preferences.
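As an illustration of how one such scenario might be represented for fielding or analysis, here is a small, hypothetical Python sketch; the attribute names, levels and patient-type labels are invented for the headache example and are not taken from the original tables.

```python
# A hypothetical choice scenario: three drug profiles shown under one
# combination of situational variables (patient type, severity, sex).
scenario = {
    "situation": {"patient_type": "migraine", "severity": "severe", "sex": "female"},
    "alternatives": [
        {"drug": "Drug A", "efficacy": "high",   "side_effects": "low",
         "dosing": "QD",     "price": 15.00, "formulary": "preferred"},
        {"drug": "Drug B", "efficacy": "medium", "side_effects": "low",
         "dosing": "weekly", "price": 10.00, "formulary": "non-preferred"},
        {"drug": "Drug C", "efficacy": "high",   "side_effects": "high",
         "dosing": "BID",    "price": 12.00, "formulary": "preferred"},
        {"drug": "None of these"},
    ],
}

# Choice-question format: the physician picks one alternative.
choice_response = {"scenario_id": 1, "chosen": "Drug A"}

# Allocation format: the physician spreads the next 10 prescriptions.
allocation_response = {"scenario_id": 1,
                       "allocation": {"Drug A": 6, "Drug B": 3, "Drug C": 1,
                                      "None of these": 0}}
```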

As in a conjoint task, we want to reduce respondents’ cognitive burden while they complete the task. The factors that affect this burden are the number of attributes, the number of levels per attribute, the number of brands and the number of situational variables. Note that if you are interested in the cross-effects of attribute levels (interaction effects), you should specify these requirements at the design stage so that the number of profiles or the sample size required for such estimates can be met. Another way of reducing cognitive burden is to fractionalize the task; that is, divide the total number of choice sets into several subsets so that each respondent completes only one subset. The data are then combined and analyzed together. However, doing so increases the total number of respondents required.
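A minimal sketch of this blocking idea, assuming 18 hypothetical choice sets split into three blocks of six so that each respondent sees only one block:

```python
def split_into_blocks(choice_sets, n_blocks):
    """Divide the full list of choice sets into n_blocks roughly equal subsets."""
    return [choice_sets[i::n_blocks] for i in range(n_blocks)]

choice_sets = list(range(1, 19))          # 18 choice-set IDs
blocks = split_into_blocks(choice_sets, 3)

# Each respondent is assigned one block, e.g. by rotation on respondent ID.
def block_for_respondent(respondent_id, blocks):
    return blocks[respondent_id % len(blocks)]

print(block_for_respondent(0, blocks))    # [1, 4, 7, 10, 13, 16]
```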

B. Data analysis
The data from a discrete choice study are analyzed using the multinomial logit model (Louviere 1988, 1991). Note that the multinomial logit model differs from the ordinary least squares regression model (used frequently in conjoint studies) in that the coefficients are interpreted as effects on the odds of choosing one alternative relative to another.

As shown in the two tables, physicians are asked to indicate the number of prescriptions for each product for their next 10 patients. The physicians’ responses are used as the dependent variable in the logit model, and the attribute levels are the independent variables. The model assesses how well the independent variables predict the physician’s choice of drugs. Specifically, from the output of the logit model one can compute an odds ratio for a profile or an individual drug being chosen over the alternatives in the choice set. In addition, the coefficients can be used to compute the utility value of each attribute level and the derived relative importance of attributes.
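To make the logit interpretation concrete, the following sketch shows how estimated coefficients (the numbers below are invented for illustration, not real estimates) translate into profile utilities and choice probabilities, and how an odds ratio is obtained by exponentiating a coefficient difference.

```python
import numpy as np

# Hypothetical part-worth coefficients (utility contribution per attribute level).
coefs = {("efficacy", "high"): 0.9, ("efficacy", "medium"): 0.2,
         ("side_effects", "low"): 0.5, ("side_effects", "high"): -0.4,
         ("price", 10.00): 0.3, ("price", 12.00): 0.0, ("price", 15.00): -0.3}

def utility(profile):
    """Sum the coefficients of the attribute levels that make up a profile."""
    return sum(coefs[(attr, level)] for attr, level in profile.items())

def choice_shares(profiles):
    """Multinomial logit: share_i = exp(V_i) / sum_j exp(V_j)."""
    v = np.array([utility(p) for p in profiles])
    expv = np.exp(v)
    return expv / expv.sum()

drug_a = {"efficacy": "high",   "side_effects": "low",  "price": 15.00}
drug_b = {"efficacy": "medium", "side_effects": "low",  "price": 10.00}
drug_c = {"efficacy": "high",   "side_effects": "high", "price": 12.00}

print(choice_shares([drug_a, drug_b, drug_c]))

# Odds ratio of choosing a low-side-effect profile over an otherwise identical
# high-side-effect profile: exponentiate the coefficient difference.
odds_ratio = np.exp(coefs[("side_effects", "low")] - coefs[("side_effects", "high")])
print(odds_ratio)  # exp(0.9), roughly 2.46
```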

If situational variables are used in the task, as is frequently the case in pharmaceutical marketing research, these variables are included in the model and their impact on a physician’s choice of product is estimated. For instance, in the example cited earlier we want to assess three drugs and evaluate physicians’ share of preference for each. Physicians are asked to prescribe a product for patients of different sex and severity, so sex and severity can be included in the model as independent variables and their coefficients estimated. If we know that women account for 70 percent of headache patients and men for 30 percent, and that 20 percent of headaches are severe and 80 percent are not, we can weight the model estimates to reflect these distributions and obtain an overall share of preference for the products.
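A sketch of that weighting step, assuming we already have predicted preference shares from the model within each combination of sex and severity (the share numbers below are invented):

```python
# Hypothetical predicted shares of preference by segment, from the choice model.
segment_shares = {
    ("female", "severe"):     {"Drug A": 0.50, "Drug B": 0.30, "Drug C": 0.20},
    ("female", "non-severe"): {"Drug A": 0.40, "Drug B": 0.35, "Drug C": 0.25},
    ("male",   "severe"):     {"Drug A": 0.45, "Drug B": 0.30, "Drug C": 0.25},
    ("male",   "non-severe"): {"Drug A": 0.35, "Drug B": 0.40, "Drug C": 0.25},
}

# Segment weights: 70/30 women vs. men, 20/80 severe vs. non-severe.
sex_weight = {"female": 0.7, "male": 0.3}
severity_weight = {"severe": 0.2, "non-severe": 0.8}

overall = {drug: 0.0 for drug in ["Drug A", "Drug B", "Drug C"]}
for (sex, severity), shares in segment_shares.items():
    w = sex_weight[sex] * severity_weight[severity]
    for drug, share in shares.items():
        overall[drug] += w * share

print(overall)  # weighted overall share of preference per drug
```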

Validation and simulation
Validation in a choice task refers to estimating how well the model can predict actual observed values. As in a conjoint task, this is achieved through what are called “holdout” sets. These holdouts are not used in the estimation of the model; they are used solely for validation. In the example cited earlier, if we have a total of 18 choice sets, each with three profiles plus one “none of these” option, we may include two more choice sets as holdouts. These holdouts may include the most likely profiles of the drug the client wants to assess, along with its competitors. Since the holdouts are not used for estimating the model, the responses to them can be compared with the predicted values derived from the model. A high association between the actual and the predicted values indicates high reliability and thus validates the model.
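The comparison itself can be as simple as correlating observed holdout shares with predicted ones. A minimal sketch, with made-up numbers for two hypothetical holdout choice sets:

```python
import numpy as np

# Observed shares from the two holdout choice sets (not used in estimation)...
observed  = np.array([0.48, 0.27, 0.15, 0.10,   # holdout set 1: A, B, C, none
                      0.35, 0.40, 0.15, 0.10])  # holdout set 2: A, B, C, none
# ...and the shares the fitted model predicts for the same profiles.
predicted = np.array([0.45, 0.30, 0.15, 0.10,
                      0.38, 0.36, 0.16, 0.10])

correlation = np.corrcoef(observed, predicted)[0, 1]
mean_abs_error = np.abs(observed - predicted).mean()
print(correlation, mean_abs_error)
```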

Simulation refers to the process in which the derived model is used to estimate preference shares. A discrete choice model serves three purposes in simulating the impact of a change in attributes:

1) Determining which attribute level, or combination of levels, contributes most to respondents’ choice of a drug and hence to its preference share, and deriving the relative importance of attributes. For instance, to what extent will a change in price from $10.00 to $15.00, and/or a change in side effects from high to low, affect the preference share of Drug A? Does efficacy have more influence on prescribing than the other attributes tested? How important is formulary status relative to the other attributes tested? (A small simulation sketch follows this list.)

2) Assessing the cross-effects of attributes. As indicated earlier, a distinctive feature of discrete choice is the estimation of cross-effects, such as brand by price. The study of price elasticity is therefore a very common application of discrete choice.
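A sketch of such a what-if simulation, reusing the multinomial-logit share formula from the earlier sketch with invented utilities and price coefficients: we recompute the shares after moving Drug A’s price from $10.00 to $15.00 and compare.

```python
import numpy as np

# Hypothetical utilities; only Drug A's price changes between the two scenarios.
base_utility = {"Drug A": 1.2, "Drug B": 1.0, "Drug C": 0.8}
price_effect = {10.00: 0.3, 15.00: -0.3}   # invented price coefficients

def shares(price_of_a):
    v = np.array([base_utility["Drug A"] + price_effect[price_of_a],
                  base_utility["Drug B"],
                  base_utility["Drug C"]])
    expv = np.exp(v)
    return expv / expv.sum()

before = shares(10.00)   # Drug A priced at $10.00
after  = shares(15.00)   # Drug A priced at $15.00
print("Drug A share falls from %.2f to %.2f" % (before[0], after[0]))
```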
The researcher’s next task is to make sense of the collected data. Before the researcher can gain understanding from the collected data, he or she must first examine the raw information (i.e., what was actually collected) to make sure it exists in the form required. There are many reasons why data may not be in the form needed for further analysis. Some of these reasons include:

Incomplete Responses – This most likely occurs when the method of data collection (e.g., survey) is not fully completed, such as when the person taking part in the research fails to provide all information (e.g., skips questions).
Data Entry Error – This exists when the information is not recorded properly which can occur due to the wrong entry being made (e.g., entry should be choice “B” but is entered as choice “C”) or failure of data entry technology (e.g., online connection is disrupted before full completion of survey).
Questionable Entry – This occurs when there are apparent inconsistencies in responses such as when a respondent does not appear to be answering honestly.
To address these issues the researcher will take steps to “cleanse” the data, which may include dropping problematic data either in part (e.g., excluding a single question) or in full (e.g., dropping an entire survey). Alternatively, the researcher may be able to salvage some problem data with certain coding methods, though a discussion of these is beyond the scope of this tutorial.
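As a concrete illustration of these cleansing steps, here is a small, hypothetical pandas sketch (column names and rules are invented) that drops surveys with too many skipped questions and flags identical answers across all questions as a simple questionable-entry check:

```python
import pandas as pd

# Hypothetical raw survey data: one row per respondent, Q1-Q4 on a 1-5 scale.
raw = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "Q1": [4, 3, None, 5],
    "Q2": [4, None, None, 5],
    "Q3": [4, 2, None, 5],
    "Q4": [4, 5, 1, 5],
})

# Incomplete responses: drop respondents who skipped more than one question.
answered = raw[["Q1", "Q2", "Q3", "Q4"]].notna().sum(axis=1)
clean = raw[answered >= 3].copy()

# Questionable entries: flag respondents who gave the same answer to every
# question (a simple straight-lining check; real checks are study-specific).
clean["straight_liner"] = clean[["Q1", "Q2", "Q3", "Q4"]].nunique(axis=1) == 1

print(clean)
```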
 
Hey netra, as we all know, a marketing research report is very important for anyone preparing a project. I appreciate your work, and thanks for sharing the report on R. R. Donnelley & Sons. By the way, I am also uploading a document on R. R. Donnelley & Sons which may help others.
 
