The nature of data
Before one can use data effectively, it helps to have a rudimentary grasp of the nature of data. In marketing research, we often think of data as the conglomeration of numbers obtained from a survey. Unlike mere numbers, however, data is inherently meaningful. It assumes meaning to the extent that it relates to an aspect of phenomenal reality. More colloquially put, phenomenal reality is "where the things of interest (phenomena) are happening." In marketing research, that phenomenal reality is usually the marketplace.

Figure 1 illustrates my own viewpoint on how data should be construed. In Figure 1, phenomenal reality is represented by the Oriental symbol of wholeness--Yin and Yang. For those unfamiliar with this symbol, a brief explanation is in order. The ancient Chinese believed that the world originated with two opposite yet complementary "forces." Yin is symbolized by the large black area of the symbol, and Yang by the large white area. Within each area, there is a small dot of the opposite color. This dot represents the interdependence of Yin and Yang, despite their separateness. These two "forces" also have connotative as well as denotative aspects. Yin is characterized as female, passive and dark. Yang is characterized as male, active and light.

What the Yin-Yang symbol is intended to reflect, for current purposes, is the idea that the many phenomena we investigate have a "completeness" that is resistant to an analysis designed to break it into components. Although dividing the whole is sometimes the only way to gain understanding, that whole must eventually be reconstituted in our theories about the phenomena. The significance of this viewpoint is not patently obvious if one construes data using only more traditional Western thought, which emphasizes componential aspects (e.g., computer flow charts).

The Yin-Yang symbol also captures the subtle complexity of the phenomena under investigation. It suggests a "harmony of opposites." If data reflected only chaos, there would be no reason to collect it in the first place. The "pie wedge" removed from the symbol represents the act of measurement to obtain data.

It should be made clear that we are not talking here about drawing a sample from a population of consumers. In sampling, we expect to obtain a representative group of respondents--a sort of microcosm of the population. Notice that what we obtain is not a microcosm of the symbol (i.e., a complete, but smaller Yin-Yang symbol), but rather incomplete information in the form of a piece of data.

For the purposes of the following discussion, any complex black and white figure could suffice. Here the "surplus meaning" of the Yin-Yang symbol merely enriches the process. Imagine that you did not know what the entire symbol looked like. You had only the piece of data. What could you conclude about the entire symbol? For one, you could conclude that it has both white and black areas. For another, you could conclude that it is possible to have a circle of white surrounded by black. You might also note the arc of the edge of the piece. Something you might be able to infer, but not necessarily conclude, is that the arc is part of a larger circle. Likewise, you might be able to conjecture that a small black circle might also exist.

By "slicing" things slightly differently the next time you collect data, you might get the black circle. Or, you might get a portion of the "S" shaped curve that divides the main regions of black and white. In other words, data can never give us the "full picture." We must use our mental faculties to interpret that data for it to become useful. A key point to be made here is that we sometimes concentrate on the piece of data rather than on how it fits into the whole.

Theories are developed to explain or account for phenomena. In any particular discipline, there is an implicit understanding of what "counts" as a phenomenon of interest. For example, the behavior of free-falling bodies would be considered appropriate for study by physicists, but not by marketing researchers. In marketing research, the primary phenomenon of study is purchase behavior.

The bane of marketing research is the theory-less "one-shot" study. Anyone who continually does one-shot studies is simply wasting ammunition. A one-shot marketer is trying to grab the proverbial gold ring on the carousel. A one-shot researcher is using skills and training in a mostly opportunistic way. Approached correctly, the field of marketing research can grow in sophistication to encompass issues bordering on a better understanding of human behavior itself. Approached poorly, it will never be more than a way for marketers to help protect their interests in risk-laden situations.

In this context, it should be noted that many of the activities related to marketing research are actually tangential to the purchase behavior per se. For example, advertisements are often tested to determine their effectiveness in communicating key ideas, but testing is not typically tied to the purchase behavior. Only in recent years, with the advent of scanner technology, has it even been feasible to ask whether or not advertising can produce a measurable effect on purchase behavior. There is no doubt, of course, that advertising (and other promotional activities) can help establish the preconditions for a particular purchase behavior (e.g., awareness of a new product).

Data in marketing research
From whatever angle one approaches the topic of "data use," there are certain premises that are tacitly assumed. From the purely academic perspective, the major premise is that the "goal" of marketing research is an explanation of the dynamics of the marketplace. What is sought is understanding rather than knowledge about a specific situation. The practical applications of this understanding need not be immediate, but application ought to be within the realm of possibility. The academic perspective can be seen as a "long-term" one. It can be viewed so because there is no reason to believe that the bases of consumer behavior will change radically over time. Specific products and services may change, but not the underlying principles governing behavior. In this vein, marketing research can be viewed as a special member of the family of behavioral sciences--special because of its direct ties to practical concerns. The cross-fertilization of the behavioral sciences over the years is evident to even the casual observer. Through this dynamic process, models of the marketplace are being molded, chiseled, and hewn into powerful conceptual frameworks.

From a business perspective, the major premise is that the goal of marketing research is to provide information for decision making. Marketing research, per se, holds no preeminent position in the array of information used to reach decisions. Obviously, the overriding goal for a decision maker is to seek good decisions and avoid bad decisions. This "short-term" perspective might be labeled hedonic empiricism. In this context, the "long-term" view is precluded by the immediacy of the need for information.

It must not be concluded, however, that either of these two very different perspectives is the "superior" one. Both perspectives have adaptive advantages, as well as attendant dangers. "Long-term" academic researchers are often criticized for being out of touch with the realities of the marketplace (the "ivory tower" criticism). "Short-term" marketers are often accused of doing research that is motivated by the fear of being judged solely responsible for a "bad" intuitive decision. They are merely seeking a place to "point a finger" if the consequences of their decisions don't pan out as expected. With apologies to comedian Flip Wilson, it is as though they want to be able to say, "The research made me do it!"

Synergistic relationships
It is often overlooked that most marketing research situations are of the form of a synergistic relationship between researcher and marketer. In this sense, marketing research is not "done" in the sense that a statistical analysis is "done." Rather, marketing research emerges as a joint function of the needs of the marketer and the skills of the researcher. One might argue that marketing research is more of a transaction than a product or service. To the extent that marketers see research as a product, they will de-emphasize the understanding that can be gained. To the extent that researchers see marketing as a service, they will de-emphasize the important role it has in the non-academic world. The optimal situation is a dialogue between marketer and researcher that ensures mutually satisfactory transactions.

Without such dialogue, the analysis of a data set is often divorced from the original questions the survey was intended to address. From an objective standpoint, any statistical textbook could be consulted to determine the "proper" analysis. But the main questions might not be addressed even in the objectively "proper" analysis. "Proper" data is not necessarily useful data. Since the design of the survey and analysis of the data are inevitably interwoven, this dialogue between marketer and researcher should precede questionnaire development.

There can be no "magical" statistical solutions if the prior steps have not ensured that the "proper" analyses can be performed. Worsening the situation is the widespread availability of statistical software. This encourages untrained individuals to apply statistical tests in an indiscriminate manner. The expectations generated in the minds of the owners of these statistical packages are oftentimes unrealistic. Owning a "statistical cookbook" does not make a person a "chef." And not even the greatest chef can make chocolate mousse from headcheese.

It is a lucky marketer who works with a researcher who is aware of the validity, and business necessity, of the "short-term" view. And it is an equally lucky researcher whose client appreciates that the "long-term" view can pay dividends in the future. Working together, this "team" of the marketer and researcher can address any challenge offered by the marketplace. They will not only find opportunities with a "long-term" view, but also will seize opportunities by dealing with "short-term" competitive threats with information rather than emotion.

Fueled by imagination and insight, the contribution of both marketers and researchers should lead to those "competitive advantages" that are so sought after in the world of business. So how does one go about finding such "gems" in the data? In some sense, what we seek is information rather than insight, but I would contend that the two go together more often than not.

Broad generalizations contribute little, and preoccupation with minutiae is equally counterproductive. Useful data should satisfy both the marketer and the researcher. The real challenge to those in marketing research is finding the right "level of focus" for the wisest "data use."

Vondruska's Postulates
What we need is a principled approach to analysis--one that maximizes the chance of obtaining the desired information. The "level of focus" notion leads directly to Vondruska's Postulates, which are as follows:

Postulate 1: Lower levels of phenomenal organization are easier to detect than higher levels of phenomenal organization.

Postulate 2: Higher levels of phenomenal organization are easier to imagine than lower levels of phenomenal organization.

Obviously, the converse of each postulate is implied as well (e.g., it is difficult to detect organization at higher levels). What do I mean by "organization"? Simply that the world is not merely a collection of disjointed atoms in space. Hydrogen molecules organize into stars; people organize into market segments. We see patterns. We see constancy. We understand.

Admittedly, the postulates are a bit abstract. So an illustrative analogy seems in order. Consider the following (familiar?) high school math formulas:

Ellipse:
x²/a² + y²/b² = 1

Parabola:
y² = 4px

Hyperbola:
x²/a² - y²/b² = 1

In terms of the postulates, these formulas can be considered at a "low" level of organization. They are useful unto themselves, but no relationship between the formulas is implied. Now consider the illustration of the conic sections in Figure 2.
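The "higher level of organization" can also be made concrete algebraically. Written in the general form Ax² + Bxy + Cy² + Dx + Ey + F = 0, all three curves are distinguished by a single quantity, the discriminant B² - 4AC--one parameter revealing the family resemblance that the three separate textbook formulas conceal. A short sketch (degenerate cases, such as two crossed lines, are ignored here):

```python
def classify_conic(A, B, C):
    """Classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by its discriminant.

    B^2 - 4AC < 0  -> ellipse (the circle is a special case)
    B^2 - 4AC = 0  -> parabola
    B^2 - 4AC > 0  -> hyperbola
    """
    disc = B * B - 4 * A * C
    if disc < 0:
        return "ellipse"
    elif disc == 0:
        return "parabola"
    else:
        return "hyperbola"

# The three textbook formulas, recast in the general form:
print(classify_conic(1, 0, 1))    # x^2 + y^2 = 1   -> ellipse
print(classify_conic(0, 0, 1))    # y^2 - 4px = 0   -> parabola
print(classify_conic(1, 0, -1))   # x^2 - y^2 = 1   -> hyperbola
```

Three formulas that looked unrelated collapse into one classification rule--exactly the kind of re-conceptualization the postulates describe.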

By re-conceptualizing ellipses, parabolas, and hyperbolas at a "higher" level of "organization," we now see something new. Despite their distinct formulas, we see them as members of the family of plane figures. As the philosopher Ludwig Wittgenstein contended, sometimes things are related by family resemblance rather than common attributes. If we do not know that, we will not look for such resemblances.

The point here is that the same type of mental processes prevail when we work with data. Recasting the postulates in terms of the phrase "He cannot see the forest because of the trees" may help to explain them further.

Sometimes we can easily detect the "trees," but we miss imagining the "forest." And at other times, we get clobbered by "trees" as we dash through the "forest" of our preconceived notions.

The true power of these postulates is that they apply not only to marketing, but to most investigative endeavors. The proper "level of focus" for most meaningful investigations usually lies between the extremes of high and low levels of organization. Often, more than one "focus" is needed to thoroughly understand an array of data. Some, of course, will be more useful than others for particular purposes.

Facts vs. Ideas
Facts "need" ideas, and ideas "need" facts. Examples of the need for both measurement and theory abound in the history of science. The astronomer Johannes Kepler spent many years of his life pursuing a mathematical/theoretical framework that would provide an account of planetary orbits. He immersed himself in the mysteries of mathematics in his attempt to bring order to astronomical phenomena. His driving intuition was that the perfection of mathematics must be hidden in the universe itself.

One of Kepler's contemporaries, the lesser known Tycho Brahe, approached the problem of determining the nature of the planetary orbits in a different way. He measured. He collected data. Night after night, he sat at his observatory instruments (the telescope had not yet come into astronomical use) and dutifully recorded the positions of the observable planets. But to his eye, no patterns emerged from the data. It was only when he and Kepler shared their different perspectives that the true usefulness of the data became apparent. Kepler is credited with the discovery that planets orbit the Sun in an elliptical pattern, but Tycho Brahe made no small contribution to that discovery.

Kepler's discovery of the elliptical orbits of the planets would not have been possible without the painstaking data collection of planetary positions by Tycho Brahe. The key is that Kepler had to consider the facts in his discovery. He would have much preferred the orbits to be perfect celestial circles, but the evidence militated against that theory. On a more mundane level, research realities such as these are encountered in marketing research on an everyday basis.

Hypothesis-driven research
It is not enough merely to subject data to rigorous analysis. The most useful data is gleaned from an analysis in which one already has a suspicion of what is sought. Hypothesis-driven research also yields the greatest insights from analysis. I have a personal rule that I apply to any analysis. After I have applied all of the "right" statistical tools, I look for "patterns" in the data. When I start to scour statistics manuals to find a procedure that will give me interesting results, I stop. This is a sure sign that I have "tortured" the data into confessing all of its secrets. Alas, sometimes there are no further secrets.

Higher level statistical analyses do not typically uncover relationships that are not at all apparent at lower levels. They simply "formalize" those relationships in a more elegant, and sometimes more useful way. A good example of this is hierarchical log linear analysis. Although there is the potential in this procedure for detecting very high level interactions between variables, these complex interactions are often impossible to interpret--for all practical purposes.
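As a lower-level counterpart to a full log-linear model, a plain two-way cross-tabulation already reveals whether an association is present. A minimal sketch in plain Python--the counts here are invented for illustration, and a real analysis would use a packaged chi-square or log-linear routine:

```python
# Hypothetical 2x2 cross-tab: rows = saw ad (no/yes), cols = bought (no/yes).
observed = [[70, 30],
            [40, 60]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
n = sum(row_totals)

# Expected counts under independence: (row total * column total) / n.
expected = [[rt * ct / n for ct in col_totals] for rt in row_totals]

# Pearson chi-square statistic: a large value flags the association
# that a log-linear model would later formalize.
chi2 = sum((o - e) ** 2 / e
           for orow, erow in zip(observed, expected)
           for o, e in zip(orow, erow))

print(round(chi2, 2))
```

The point is not that the cross-tab replaces the model; it is that the relationship worth modeling is usually visible at this humbler level first.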

Obviously, there is a big difference between knowing what one ultimately wants to accomplish through marketing research and actually accomplishing it. Ambiguity in research design is especially common in the non-academic world. Invoking another astronomical analogy, it is as though many marketers fail to realize that even though they can see the planet Jupiter, that does not mean that they can get there directly. It takes a long time to get to Jupiter--and when you finally get there it will be in a new location! Both theoretical knowledge and technical knowledge are required to reach distant goals. Only then can the improbable become the possible.

There is a lesson to be learned here. Straightforward thinking does not always produce the desired result. Some research problems have solutions that possess a property denoted in the German language by the word "Umweg"--literally, a "detour." The idea is that only a roundabout approach will work. All direct approaches fail. Most puzzles and games incorporate this "Umweg" principle. Indeed, Nature herself seems to have an immense sense of humor with regard to thwarting direct approaches.

Of course, marketing research is not exempt from this "Umweg" principle. An analysis plan which is too straightforward often founders on the rocks of perplexing findings. Luckily, by understanding the nature of data, we are still able to tease out the actionable information needed for practical marketing solutions.

Prediction vs. Assessment
Behavior itself is governed by a multitude of factors, some of which are only measurable after the fact. This is a major reason why customer satisfaction research enjoys its current popularity. Marketers realize that although it might be impossible to predict behavior in the marketplace, they can determine the characteristics of products that succeed, and products that fail. If these characteristics are interpreted at the proper level of abstraction, they may be applied to future products with a degree of confidence heretofore not possible.

Some marketers use the argument that looking at "after the fact" measures such as customer satisfaction is like looking in the rearview mirror while driving a car (after Marshall McLuhan's comments). This is specious thinking, because we do not really have a front window in marketing research. Nor do we have the "crystal ball" that all marketers seem to covet. What we do have is the ability to learn from our mistakes, and to see products and services through the eyes of the consumer. Every projection is a gamble of sorts. Useful data allows us to hedge our bets. It does not provide a sure thing.

Another way to characterize customer satisfaction research is in terms of a feedback mechanism. In much the same way that the thermostat on a climate control system detects deviations from some acceptable range, a good customer satisfaction survey provides information about problems in the marketplace. It should also provide a feel for one's competitive position in the marketplace. This is the best way to use customer satisfaction data. The worst way to use it is as a yardstick to set "goals" for employee performance. This is because customer satisfaction has an intuitively asymptotic aspect to it.

In plain English, 1) you can only please people so much; 2) some people will never be completely satisfied; 3) the more you please people, the more they expect. So if your "goal" is to improve overall customer satisfaction by 5% each year, you are doomed to failure once the "performance curve" starts to level off (asymptote) over time.
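The asymptotic argument can be sketched numerically. In this hypothetical illustration (all numbers invented), satisfaction closes half of the remaining gap to a ceiling of 90 each year, while management demands a fixed 5% annual improvement:

```python
# A hypothetical 'performance curve': each year closes a fixed fraction
# of the remaining gap to a satisfaction ceiling of 90 (on a 0-100 scale).
ceiling, score, closure_rate = 90.0, 70.0, 0.5

statuses = []
for year in range(1, 9):
    target = score * 1.05                      # the fixed "+5% per year" goal
    score += closure_rate * (ceiling - score)  # what the asymptote allows
    statuses.append("met" if score >= target else "missed")
    print(f"year {year}: score {score:5.1f}  vs. target {target:5.1f}  -> {statuses[-1]}")
```

The goal is met for the first two years and then missed forever after: the target keeps compounding while the curve levels off, and from year four onward the target actually exceeds the ceiling itself.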

Also, note well that simply because a survey is repeated over time (i.e., a tracking study) does not mean that it fulfills the requirements of a good customer satisfaction survey. What is monitored is as important as the monitoring itself. The acid test for any customer satisfaction program is how well it can detect the problems that detract from the quality of a product or service. If the program does that, it will make a difference to the bottom line as well.

Implications for theory and action
To obtain a complete perspective on the myriad of different activities that constitute the field of marketing research, we must "take a step backward to admire the work." What we then see is a lattice of interrelated activities leading toward a dual goal--to better understand the consumer, and to better compete in the marketplace.
 
Last edited:
Chicago Bridge & Iron Company (Chicago Bridge & Iron Company N.V.), (NYSE: CBI), known commonly as CB&I, is a large multinational conglomerate engineering, procurement and construction (EPC) company. CB&I specializes in projects for oil and gas companies. CB&I operates from more than 80 locations around the world, and as of August 1, 2009, CB&I has a total of approximately 16,000 employees.


ure of data
Before one can use data effectively, it helps to have a rudimentary grasp of the nature of data. In marketing research, we often think of data as the conglomeration of numbers obtained from a survey. Unlike mere numbers, however, data is inherently meaningful. It assumes meaning to the extent that it relates to an aspect of phenomenal reality. More colloquially put, phenomenal reality is "where the things of interest (phenomena) are happening." In marketing research, that phenomenal reality is usually the marketplace.

Figure I illustrates my own viewpoint on how data should be construed. In Figure 1, phenomenal reality is represented by the Oriental symbol of wholeness--Yin and Yang. For those unfamiliar with this symbol, a brief explanation is in order. The ancient Chinese believed that the world originated with two opposite yet complimentary "forces." Yin is symbolized by the large black area of the symbol, and Yang by the large white area. Within each area, there is a small dot of the opposite color. This dot represents the interdependence of Yin and Yang, despite their separateness. These two "forces" also have connotative as well as denotative aspects. Yin is characterized as female, passive and dark. Yang is characterized as male, active and light.

What the Yin-Yang symbol is intended to reflect, for current purposes, is the idea that the many phenomena we investigate have a "completeness" that is resistant to an analysis designed to break it into components. Although dividing the whole is sometimes the only way to gain understanding, that whole must eventually be reconstituted in our theories about the phenomena. The significance of this viewpoint is not patently obvious if one construes data using only more traditional Western thought, which emphasizes componential aspects (e.g., computer flow charts).

The Yin-Yang symbol also captures the subtle complexity of the phenomena under investigation. It suggests a "harmony of opposites." If data reflected only chaos, there would be no reason to collect it in the first place. The "pie wedge" removed from the symbol represents the act of measurement to obtain data.

It should be made clear that we are not talking here about drawing a sample from a population of consumers. In sampling, we expect to obtain a representative group of respondents--a sort of microcosm of the population. Notice that what we obtain is not a microcosm of the symbol (i.e., a complete, but smaller Yin-Yang symbol), but rather incomplete information in the form of a piece of data.

For the purposes of the following discussion, any complex black and white figure could suffice. Here the "surplus meaning" of the Yin-Yang symbol merely enriches the process. Imagine that you did not know what the entire symbol looked like. You had only the piece of data. What could you conclude about the entire symbol? For one, you could conclude that it has both white and black areas. For another, you could conclude that it is possible to have a circle of white surrounded by black. You might also note the arc of the edge of the piece. Something you might be able to infer, but not necessarily conclude, is that the arc is part of a larger circle. Likewise, you might be able to conjecture that a small black circle might also exist.

By "slicing" things slightly differently the next time you collect data, you might get the black circle. Or, you might get a portion of the "S" shaped curve that divides the main regions of black and white. In other words, data can never give us the "full picture." We must use our mental faculties to interpret that data for it to become useful. A key point to be made here is that we sometimes concentrate on the piece of data rather than on how it fits into the whole.

Theories are developed to explain or account for phenomena. In any particular discipline, there is an implicit understanding of what "counts" as a phenomenon of interest. For example, the behavior of free- falling bodies would considered appropriate for study by physicists, but not by marketing researchers. In marketing research, the primary phenomenon of study is purchase behavior.

The bane of marketing research is the theory-less "one-shot" study. Anyone who continually does one- shot studies is simply wasting ammunition. A one-shot marketer is trying to grab the proverbial gold ring on the carousel. A one-shot researcher is using skills and training in a mostly opportunistic way. Approached correctly, the field of marketing research can grow in sophistication to encompass issues bordering on a better understanding of human behavior itself. Approached poorly, it will never be more than a way for marketers to help protect their interests in risk-laden situations.

In this context, it should be noted that many of the activities related to marketing research are actually tangential to the purchase behavior per se. For example, advertisements are often tested to determine their effectiveness in communicating key ideas, but testing is not typically tied to the purchase behavior. Only in recent years, with the advent of scanner technology, has it even been feasible to ask whether or not advertising can produce a measurable effect on purchase behavior. There is no doubt, of course, that advertising (and other promotional activities) can help establish the preconditions for a particular purchase behavior (e.g., awareness of a new product).

Data in marketing research
From whatever angle one approaches the topic of "data use," there are certain premises that are tacitly assumed. From the purely academic perspective, the major premise is that the "goal" of marketing research is an explanation of the dynamics of the marketplace. What is sought is understanding rather than knowledge about a specific situation. The practical applications of this understanding need not be immediate, but application ought to be within the realm of possibility. The academic perspective can be seen as a "long-term" one. The main reason it can be viewed so is that there is no reason to believe that the bases of consumer behavior will change radically over time. Specific products and services may change, but not the underlying principles governing behavior. In this vein, marketing research can be viewed as a special member of the family of behavioral sciences-- special because of its direct ties to practical concerns. The cross-fertilization of the behavioral sciences over the years is evident to even the casual observer. Through this dynamic process, models of the marketplace are being molded, chiseled, and hewn into powerful conceptual frameworks.

From a business perspective, the major premise is that the goal of marketing research is to provide information for decision making. Marketing research, per se, holds no preeminent position in the array of information used to reach decisions. Obviously, the overriding goal for a decision maker is to seek good decisions and avoid bad decisions. This "short-term" perspective might be labeled hedonic empiricism. In this context, the "long-term" view is precluded by the immediacy of the need for information.

It must not be concluded, however, that either of these two very different perspectives is the "superior" one. Both perspectives have adaptive advantages, as well as attendant dangers. "Long-term" academic researchers are often criticized for being out of touch with the realities of the marketplace (the "ivory tower" criticism). "Short-term" marketers are often accused of doing research that is motivated by the fear of being judged solely responsible for a "bad" intuitive decision. They are merely seeking a place to "point a finger" if the consequences of their decisions don't pan out as expected. With apologies to comedian Flip Wilson, it is as though they want to be able to say, "The research made me do it!"

Synergistic relationships
It is often overlooked that most marketing research situations are of the form of a synergistic relationship between researcher and marketer. In this sense, marketing research is not "done" in the sense that a statistical analysis is "done." Rather, marketing research emerges as a joint function of the needs of the marketer and the skills of the researcher. One might argue that marketing research is more of a transition than a product or service. To the extent that marketers see research as a product, they will de-emphasize the understanding that can be gained. To the extent that researchers see marketing as a service, they will de-emphasize the important role it has in the non-academic world. The optimal situation is a dialogue between marketer and researcher that ensures mutually satisfactory transactions.

Without such dialogue, the analysis of a data set is often divorced from the original questions the survey was intended to address. From an objective standpoint, any statistical textbook could be consulted to determine the "proper" analysis. But the main questions might not be addressed even in the objectively "proper" analysis. "Proper" data is not necessarily useful data. Since the design of the survey and analysis of the data are inevitably interwoven, this dialogue between marketer and researcher should precede questionnaire development.

There can be no "magical" statistical solutions if the prior steps have not insured that the "proper" analyses can be performed. Worsening the situation is the widespread availability of statistical software. This encourages untrained individuals to apply statistical tests in an indiscriminate manner. The expectations generated in the minds of the owners of these statistical packages are oftentimes unrealistic. Owning a "statistical cookbook" does not make a person a "chef." And not even the greatest chef can make chocolate mousse from headcheese.

It is a lucky marketer who works with a researcher who is aware of the validity, and business necessity, of the "short-term" view. And it is an equally lucky researcher whose client appreciates that the "long-term" view can pay dividends in the future. Working together, this "team" of the marketer and researcher can address any challenge offered by the marketplace. They will not only find opportunities with a "long-term" view, but also will seize opportunities by dealing with "short-term" competitive threats with information rather than emotion.

Fueled by imagination and insight, the contribution of both marketers and researchers should lead to those "competitive advantages" that are so sought after in the world of business. So how does one go about finding such "gems" in the data? In some sense, what we seek is information rather than insight, but I would contend that the two go together more often than not.

Broad generalizations contribute little, and preoccupation with minutiae is equally counterproductive. Useful data should satisfy both the marketer and the researcher. The real challenge to those in marketing research is finding the right "level of focus" for the wisest "data use."

Vondruska's Postulates
What we need is a principled way in which analysis can be approached to maximize obtaining the desired information. The "level of focus" notion leads directly to Vondruska's Postulates, which are as follows:

Postulate 1: Lower levels of phenomenal organization are easier to detect than higher levels of phenomenal organization.

Postulate 2: Higher levels of phenomenal organization are easier to imagine than lower levels of phenomenal organization.

Obviously, the converse of each postulate is implied as well (e.g., it is difficult to detect organization at higher levels). What do I mean by "organization?" Simply that the world is not merely a collection of disjointed atoms in space. Hydrogen molecules organize into stars; people organize into market segments. We see patterns. We see constancy. We understand.

Admittedly, the postulates are a bit abstract. So an illustrative analogy seems in order. Consider the following (familiar?) high school math formulas:

Ellipse:
x²/a² + y²/b² = 1

Parabola:
y² = 4px

Hyperbola:
x²/a² − y²/b² = 1

In terms of the postulates, these formulas can be considered at a "low" level of organization. They are useful unto themselves, but no relationship between the formulas is implied. Now consider the illustration of the conic sections in Figure 2.

By re-conceptualizing ellipses, parabolas, and hyperbolas at a "higher" level of "organization," we now see something new. Despite their distinct formulas, we see them as members of the family of plane figures. As the philosopher Ludwig Wittgenstein contended, sometimes things are related by family resemblance rather than common attributes. If we do not know that, we will not look for such resemblances.
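This "family resemblance" among the conic sections can be made concrete. In polar coordinates, all three curves share a single equation, r = l / (1 + e·cosθ), and differ only in eccentricity e. The following sketch is my own illustration (the function names and parameter values are not from the article):

```python
import math

def classify_conic(e):
    """Classify a conic section by its eccentricity e (e >= 0)."""
    if e < 1.0:
        return "ellipse"      # includes the circle at e == 0
    elif e == 1.0:
        return "parabola"
    else:
        return "hyperbola"

def conic_radius(e, l, theta):
    """Polar form shared by all conics: r = l / (1 + e*cos(theta))."""
    return l / (1.0 + e * math.cos(theta))

# The "low level" formulas look unrelated; the "high level" polar form
# reveals one family parameterized by a single number.
for e in (0.5, 1.0, 2.0):
    print(e, classify_conic(e), round(conic_radius(e, 1.0, 0.0), 3))
```

Seen this way, the three separate formulas collapse into one higher-level pattern, which is exactly the move the postulates describe.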

The point here is that the same type of mental processes prevail when we work with data. Recasting the postulates in terms of the phrase "He cannot see the forest because of the trees" may help to explain them further.

Sometimes we can easily detect the "trees," but we miss imagining the "forest." And at other times, we get clobbered by "trees" as we dash through the "forest" of our preconceived notions.

The true power of these postulates is that they apply not only to marketing, but to most investigative endeavors. The proper "level of focus" for most meaningful investigations usually lies between the extremes of high and low levels of organization. Often, more than one "focus" is needed to thoroughly understand an array of data. Some, of course, will be more useful than others for particular purposes.

Facts vs. Ideas
Facts "need" ideas, and ideas "need" facts. Examples of the need for both measurement and theory abound in the history of science. The astronomer Johannes Kepler spent many years of his life pursuing a mathematical/theoretical framework that would provide an account of planetary orbits. He immersed himself in the mysteries of mathematics in his attempt to bring order to astronomical phenomena. His driving intuition was that the perfection of mathematics must be hidden in the universe itself.

One of Kepler's contemporaries, the lesser-known Tycho Brahe, approached the problem of determining the nature of the planetary orbits in a different way. He measured. He collected data. Night after night, he dutifully recorded the positions of the observable planets with his instruments. But to his eye, no patterns emerged from the data. Only when he and Kepler shared their different perspectives did the true usefulness of the data become apparent. Kepler is credited with the discovery that planets orbit the Sun in an elliptical pattern, but Tycho Brahe made no small contribution to that discovery.

Kepler's discovery of the elliptical orbits of the planets would not have been possible without Tycho Brahe's painstaking collection of planetary positions. The key is that Kepler had to consider the facts in his discovery. He would have much preferred the orbits to be perfect celestial circles, but the evidence militated against that theory. On a more mundane level, research realities such as these are encountered in marketing research on an everyday basis.

Hypothesis-driven research
It is not enough merely to subject data to rigorous analysis. The most useful data is gleaned from an analysis in which one already has a suspicion of what is sought. Hypothesis-driven research also yields the greatest insights from analysis. I have a personal rule that I apply to any analysis. After I have applied all of the "right" statistical tools, I look for "patterns" in the data. When I start to scour statistics manuals to find a procedure that will give me interesting results, I stop. This is a sure sign that I have "tortured" the data into confessing all of its secrets. Alas, sometimes there are no further secrets.

Higher-level statistical analyses do not typically uncover relationships that are entirely invisible at lower levels. They simply "formalize" those relationships in a more elegant, and sometimes more useful, way. A good example of this is hierarchical log-linear analysis. Although the procedure can, in principle, detect very high-level interactions between variables, these complex interactions are often impossible to interpret for all practical purposes.
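The simplest case of a log-linear analysis makes the point. For a two-way table, the main-effects-only model is just row-by-column independence, and the likelihood-ratio statistic G² measures how badly that model fits. The sketch below is my own illustration with made-up counts (the function names and data are not from the article):

```python
import math

def independence_fit(table):
    """Expected counts under the main-effects-only log-linear model
    (row x column independence) for a two-way table."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    return [[r * c / n for c in col_tot] for r in row_tot]

def g_squared(observed, expected):
    """Likelihood-ratio statistic G^2 = 2 * sum O * ln(O / E)."""
    return 2.0 * sum(
        o * math.log(o / e)
        for row_o, row_e in zip(observed, expected)
        for o, e in zip(row_o, row_e)
        if o > 0
    )

# Hypothetical brand-preference by region counts (invented for illustration).
observed = [[30, 10], [20, 40]]
expected = independence_fit(observed)
print("G^2 =", round(g_squared(observed, expected), 2))  # G^2 = 17.26
# A G^2 that is large relative to its degrees of freedom says an
# association (interaction) term is needed; interpreting what that
# interaction MEANS in the marketplace remains the analyst's job.
```

The statistic formalizes a relationship a careful reader could already see in the raw table; it does not conjure one out of nothing.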

Obviously, there is a big difference between knowing what one ultimately wants to accomplish through marketing research and actually accomplishing it. Ambiguity in research design is especially common in the non-academic world. Invoking another astronomical analogy, it is as though many marketers fail to realize that even though they can see the planet Jupiter, that does not mean that they can get there directly. It takes a long time to get to Jupiter--and when you finally get there it will be in a new location! Both theoretical knowledge and technical knowledge are required to reach distant goals. Only then can the improbable become the possible.

There is a lesson to be learned here. Straightforward thinking does not always produce the desired result. Some research problems have solutions that possess a property denoted in the German language by the word "Umweg." There is no suitable direct translation, but the idea is that only a roundabout approach will work. All direct approaches fail. Most puzzles and games incorporate this "Umweg" principle. Indeed, Nature herself seems to have an immense sense of humor with regard to thwarting direct approaches.

Of course, marketing research is not exempt from this "Umweg" principle. An analysis plan that is too straightforward often founders on the rocks of perplexing findings. Luckily, by understanding the nature of data, we are still able to tease out the actionable information needed for practical marketing solutions.

Prediction vs. Assessment
Behavior itself is governed by a multitude of factors, some of which are only measurable after the fact. This is a major reason why customer satisfaction research enjoys its current popularity. Marketers realize that although it might be impossible to predict behavior in the marketplace, they can determine the characteristics of products that succeed, and products that fail. If these characteristics are interpreted at the proper level of abstraction, they may be applied to future products with a degree of confidence heretofore not possible.

Some marketers use the argument that looking at "after the fact" measures such as customer satisfaction is like looking in the rearview mirror while driving a car (after Marshall McLuhan's comments). This is specious thinking, because we do not really have a front window in marketing research. Nor do we have the "crystal ball" that all marketers seem to covet. What we do have is the ability to learn from our mistakes, and to see products and services through the eyes of the consumer. Every projection is a gamble of sorts. Useful data allows us to hedge our bets. It does not provide a sure thing.

Another way to characterize customer satisfaction research is in terms of a feedback mechanism. In much the same way that the thermostat on a climate control system detects deviations from some acceptable range, a good customer satisfaction survey provides information about problems in the marketplace. It should also provide a feel for one's competitive position in the marketplace. This is the best way to use customer satisfaction data. The worst way to use it is as a yardstick to set "goals" for employee performance. This is because customer satisfaction has an intuitively asymptotic aspect to it.

In plain English, 1) you can only please people so much; 2) some people will never be completely satisfied; 3) the more you please people, the more they expect. So if your "goal" is to improve overall customer satisfaction by 5% each year, you are doomed to failure once the "performance curve" starts to level off (asymptote) over time.
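The asymptotic point can be made concrete with a toy saturating curve (the formula and parameter values are my own illustration, not a model from the article): if satisfaction approaches a ceiling, a fixed 5% annual improvement target must eventually become arithmetically impossible.

```python
import math

def satisfaction(year, ceiling=95.0, start=60.0, rate=0.5):
    """Toy saturating curve: the score approaches a ceiling over time."""
    return ceiling - (ceiling - start) * math.exp(-rate * year)

prev = satisfaction(0)
for year in range(1, 8):
    curr = satisfaction(year)
    gain = (curr - prev) / prev * 100.0
    verdict = "met" if gain >= 5.0 else "MISSED"
    print(f"year {year}: score {curr:.1f}, gain {gain:.1f}% -> 5% goal {verdict}")
    prev = curr
# Early years clear the 5% target easily; once the curve levels off,
# no amount of employee effort can deliver the mandated gain.
```

This is why customer satisfaction data works well as a feedback mechanism and poorly as a fixed-percentage performance yardstick.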

Also, note well that simply because a survey is repeated over time (i.e., a tracking study) does not mean that it fulfills the requirements of a good customer satisfaction survey. What is monitored is as important as the monitoring itself. The acid test for any customer satisfaction program is how well it can detect the problems that detract from the quality of a product or service. If the program does that, it will make a difference to the bottom line as well.

Implications for theory and action
To obtain a complete perspective on the myriad of different activities that constitute the field of marketing research, we must "take a step backward to admire the work." What we then see is a lattice of interrelated activities leading toward a dual goal--to better understand the consumer, and to better compete in the marketplace.
