
Policy capturing

Author: Dr Simon Moss

Overview

Policy capturing is a technique used to examine how individuals reach decisions. It is regarded as a form of judgment analysis and has been applied to a variety of settings and contexts (see Cooksey, 1996).

A typical example was reported by Sherer, Schwab, and Heneman (1987) in their study of how supervisors in a private hospital reach decisions about salary raises. Participants in this study, called judges, received information about a set of employees. The employees differed on several key factors: performance level was average or superior, performance was consistent or inconsistent, current salary was low, medium, or high, and the individuals either had or had not been offered a job by another organization. After reading the information about each employee, participants then decided the percentage and absolute salary increase they would recommend. Which of these factors shaped the decisions varied appreciably across the participants.

Hitt and Barr (1989) reported another excellent example of policy capturing. This study assessed which factors determine evaluations of job applicants and corresponding salaries. The participants or judges (66 managers who often need to reach similar decisions in their work lives) read the applications of these applicants and watched a video presentation that each candidate had prepared. Several variables differed across applicants: the applicants, for example, had accumulated either 10 or 15 years of experience, were one of two ages, were male or female, were African American or Caucasian, had completed a BS or an MBA, and were applying to be a regional sales manager or vice president of sales. Subsequent analysis showed that factors unrelated to experience, such as age and sex, affected decisions. Furthermore, the effects of these factors interacted with one another.

Study design: Cues and judges

To undertake a study that applies policy capturing, researchers must consider an extensive range of issues. Each of these issues is discussed below.

Research method

First, researchers need to decide whether to apply a nomothetic or an idiographic design (see Allport, 1937). A nomothetic design is utilized to identify principles or patterns that apply universally; variations across individuals are usually disregarded. Researchers who apply this orientation assume the various judges can be substituted for one another. Indeed, information from the various judges is often collapsed or aggregated.

An idiographic design is utilized to examine individuals separately, that is, to explore variations across judges. Subsequently, researchers might examine how variations across judges, such as the extent to which they attach importance to age when recommending a salary, correlate with other factors, such as the experience of these judges. Idiographic designs thus represent the complexity and variation that pervade most contexts, but they do attenuate statistical power.
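As a rough sketch of the idiographic approach, the hypothetical Python snippet below fits a separate regression to each judge's ratings of simulated profiles, recovering each judge's individual cue weights. All data and cue names are invented for illustration:

```python
# Idiographic sketch: one regression per judge, using simulated data.
import numpy as np

rng = np.random.default_rng(0)
n_judges, n_profiles = 20, 30
cues = rng.normal(size=(n_judges, n_profiles, 3))   # e.g., age, experience, performance

# Each judge weights the cues differently (unknown to the analyst).
true_weights = rng.normal(size=(n_judges, 3))
ratings = (np.einsum("jpc,jc->jp", cues, true_weights)
           + rng.normal(scale=0.1, size=(n_judges, n_profiles)))

# Recover each judge's "policy" via least squares, one model per judge.
policies = np.array([
    np.linalg.lstsq(cues[j], ratings[j], rcond=None)[0]
    for j in range(n_judges)
])
# policies[j] estimates how strongly judge j relied on each cue; these
# per-judge weights can then be correlated with judge characteristics.
```

In a nomothetic design, by contrast, the ratings of all judges would be pooled into a single regression.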

Identify the cues

Second, researchers need to identify which cues should differentiate the profiles, such as age, sex, experience, and so forth. Usually, these cues are derived from focus groups, interviews, surveys, company documents, or academic literature (Aiman-Smith, Scullen, & Barr, 2002; Cooksey, 1996; Karren & Barringer, 2002). For example, these cues, according to these sources, could be:

  • The most important, influential, or comprehensive
  • Consistently important across many studies or sources
  • Limited in number (fewer than nine or so), to ensure sufficient power

    Define the cue values

    Third, the values of each cue need to be identified. For some variables, like gender, the values are obvious: in this instance, male or female. For other variables, like age or performance, the values are not obvious.

    The values can be either concrete or abstract (Stewart, 1988). Concrete values apply the measurement units that are utilized in practice. For example, to represent performance, the researcher might utilize the scale that the organization uses to evaluate this attribute, such as a percentile. Abstract values apply measurement units that are seldom utilized in practice but are easy to understand and use in the study.

    Define the cue distributions

    Fourth, the distribution of each cue needs to be specified. The distribution might be uniform, normal, or similar to the actual distribution in the population (see Stewart, 1988). To illustrate, for gender, the distribution might be uniform: half the profiles that are assessed in a study might be male and the remainder female. Alternatively, the distribution of this sample might align with the distribution of the population: perhaps 75% of the profiles might be male if the industry is dominated by men.
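For illustration, these two sampling schemes might be sketched as follows in Python; the 75% figure mirrors the example above, and everything else is hypothetical:

```python
# Sketch: drawing a categorical cue from a uniform versus a
# population-matched distribution.
import random

random.seed(1)
n_profiles = 100

# Uniform distribution: each gender appears with equal probability.
uniform_gender = [random.choice(["male", "female"]) for _ in range(n_profiles)]

# Population-matched distribution: weights follow the (assumed) industry mix.
matched_gender = random.choices(["male", "female"], weights=[0.75, 0.25],
                                k=n_profiles)
```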

    Define cue intercorrelations

    Fifth, the researcher needs to decide whether the cues are correlated with each other. These correlations should align roughly with patterns in the environment (Brehmer & Brehmer, 1988). For example, suppose that, in the organization, the males tend to be older and the females tend to be younger. A similar correlation should be observed in the profiles that are to be judged.
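One common way to build such correlated cues, sketched below with invented values, is to sample from a multivariate normal distribution with the desired correlation and then dichotomize any categorical cue:

```python
# Sketch: generate an age cue and a binary gender cue with a chosen
# correlation, by thresholding one dimension of a bivariate normal.
import numpy as np

rng = np.random.default_rng(42)
corr = 0.5                                  # desired age-gender association
cov = np.array([[1.0, corr], [corr, 1.0]])
latent = rng.multivariate_normal([0, 0], cov, size=500)

age = 40 + 10 * latent[:, 0]                # rescale to a plausible age range
male = latent[:, 1] > 0                     # dichotomize the second cue

# Older profiles are now disproportionately male, as in the example above.
```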

    Define the judgment and judges

    Sixth, the researcher needs to describe the judgment that participants, called judges, need to reach. For example, they might need to estimate the salary, benefits, or punishments they would impose on individuals.

    In addition, they must select the judges. Usually, in these studies, judges are representative of participants who tend to reach these decisions (Aiman-Smith, Scullen, & Barr, 2002). To understand how applicants are selected, individuals who often choose candidates represent the target population. To understand how priorities for elderly care are decided, relevant decision makers represent the target population, and so forth.

    The judgment context

    Seventh, researchers need to describe the context or background to judges, ultimately to curb extraneous differences across participants that might shape their decision. For instance, according to Cooksey (1996), researchers should, if possible, characterize:

    • The purpose of these judgments: for example, judges could be told they need to select five job applicants for a project team formed to save water in the organization

    • The circumstances or events that preceded the judgment: for example, judges could be informed the organization consumes more water than rivals

    • Whether the scenario is real or hypothetical

    • How the profiles of each person were generated: for example, perhaps the applicants generated their own profiles

    • Whether the information in these profiles is accurate

    • The role the judge should assume, such as a manager or shareholder

    The scenario context

    In the typical study, researchers present information about various individuals, such as job applicants, or events, such as accidents. Researchers must construct some details or narratives around the cues to improve the plausibility of information and the engagement of judges.

    For example, some studies assess how judges evaluate job applications. Usually, the application includes more information than merely the limited number of cues that are examined. The applications might include some information on personal interests or experiences that are not germane to the study. This information, however, must in essence be similar in all profiles, manipulated systematically, or randomly distributed across judges.

    Furthermore, the order of information should be considered. Often, the order of cues is randomly distributed.

    Experimental design

    Once the issues and participants are identified, researchers then need to design the experiment. They must choose the number of profiles or scenarios as well as the number of judges, for example.

    Number of profiles or scenarios

    In a typical study, judges evaluate a series of individuals, applicants, events, or scenarios, collectively called profiles. The number of profiles, according to some scholars, should be at least 5 times the number of cues (Hair, Black, Babin, Anderson, & Tatham, 2006). If profiles differ on three cues or factors (perhaps age, sex, and illness), each judge should evaluate at least 15 profiles. Nevertheless, more profiles might be needed if the factors comprise more than two levels (Graham & Cable, 2001). Furthermore, more profiles are needed if the factors or cues tend to be correlated, both in the environment and thus in the study (Cooksey, 1996).

    Orthogonal versus non-orthogonal designs

    Second, researchers need to decide whether the design is orthogonal or non-orthogonal (Stewart, 1988). An orthogonal design implies that each combination of cues is presented in a factorial fashion. To illustrate, if age and sex are the cues, the combinations of young females, young males, old females, and old males should each be presented in the profiles, typically with equal frequency. Although this design is the most powerful, several factors might preclude it. For example, orthogonal designs are not feasible if:

    • In the environment, some combinations are uncommon or implausible. If age and management level were the cues, for instance, a young executive might be implausible. In this case, some combinations of cues might be excluded (Connolly, Arkes, & Hammond, 2000).

    • Many cues or factors are included, in which case many possible combinations emerge, and the number of profiles that would need to be presented would be unreasonable for judges. Sometimes, in these instances, a random subset of combinations is included, called a confounded factorial design.
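An orthogonal (full factorial) design of this kind can be sketched as follows; the cue names and levels are hypothetical:

```python
# Sketch: build a full factorial set of profiles, one per combination
# of cue levels.
from itertools import product

cues = {
    "age": ["young", "old"],
    "sex": ["male", "female"],
    "experience": ["10 years", "15 years"],
}
profiles = [dict(zip(cues, combo)) for combo in product(*cues.values())]
# 2 x 2 x 2 = 8 profiles, one for each combination of cue levels.

# Implausible combinations (e.g., young executives) could simply be
# filtered out of this list, at the cost of strict orthogonality.
```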

    Reliability

    Some mechanisms are sometimes introduced to assess the reliability of these measures. A subset of profiles, perhaps four or five, is included more than once (Karren & Barringer, 2002). The correlation between the ratings of these repeated pairs is regarded as a measure of reliability. Low values indicate the policy or inclination of individuals might change across time or might not be applied consistently.

    Data analysis

    Typically, regression analysis is undertaken to ascertain how the cues relate to decision outcomes. Standardized regression coefficients, or squared semi-partial correlations, indicate which cues are most important (Cooksey, 1996).
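As a minimal sketch of this analysis, assuming simulated data and invented cue names, the cues and judgments can be standardized before regression so that the coefficients (beta weights) are directly comparable:

```python
# Sketch: standardized regression coefficients from simulated judgments.
import numpy as np

rng = np.random.default_rng(7)
n = 60
experience = rng.normal(10, 3, n)            # cue 1
age = rng.normal(40, 8, n)                   # cue 2
salary = 2.0 * experience + 0.5 * age + rng.normal(0, 1, n)  # the judgment

def zscore(x):
    return (x - x.mean()) / x.std()

X = np.column_stack([zscore(experience), zscore(age)])
y = zscore(salary)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
# Larger |beta| means the cue matters more to this simulated judge's
# policy; here experience dominates age.
```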

    Critique of policy capturing

    Traditional methods of policy capturing do present several limitations. First, because the decisions are examined in controlled environments, they disregard other information that decision makers can usually access (Brehmer & Brehmer, 1988; Hoffman, 1960). Decision makers, for example, can utilize subtle interpersonal cues and do not need to rely solely on information presented on paper (e.g., Aiman-Smith, Scullen, & Barr, 2002). Hence, the usual experiential underpinning of decisions is removed.

    Context and history of policy capturing

    Policy capturing is one facet of a broader range of techniques, collectively referred to as social judgment theory (Hammond, McClelland, & Mumpower, 1980; Hammond, Stewart, Brehmer, & Steinmann, 1975). Social judgment theory is a set of techniques designed to represent, usually mathematically, how individuals, often experts, clinicians, practitioners, or managers, reach decisions (for historical precedents, see also Hammond, 1955, 1960). In particular, these techniques can uncover the processes that underpin decisions, even if the decision maker is unaware of these cognitive operations and activities, and even if the decision maker influences the object or person that is being judged (Hammond, 1955, 1960).

    Four main forms of social judgment theory have been distinguished: single system, double system, triple system, and n-system (Hammond, McClelland, & Mumpower, 1980). Single system examines only the judgement process-and is often referred to as policy capturing. Double system characterizes both the judgment process and the task environment simultaneously, often to examine the effect of feedback on judges. Triple system is the same as the double system, except two judges are compared. Finally, n-systems are equivalent to triple system, except more than two judges are compared.

    References

    Aiman-Smith, L., Scullen, S. E., & Barr, S. H. (2002). Conducting studies of decision making in organizational contexts: A tutorial for policy-capturing and other regression-based techniques. Organizational Research Methods, 5, 388-414.

    Allport, G. W. (1937). Personality: A psychological interpretation. London: Constable.

    Anderson, B. F., Deane, D. H., Hammond, K. R., & McClelland, G. H. (1981). Concepts in judgement and decision research: Definitions, sources, interrelations, comments. New York, NY: Praeger.

    Barham, L. J., Gottlieb, B. H., & Kelloway, E. K. (1998). Variables affecting managers' willingness to grant alternative work arrangements. The Journal of Social Psychology, 138, 291-302.

    Bond, S., Hyman, J., Summers, J., & Wise, S. (2002). Family-friendly working? Putting policy into practice. York, UK: Joseph Rowntree Foundation.

    Brehmer, A., & Brehmer, B. (1988). What have we learned about human judgment from thirty years of policy capturing? In B. Brehmer & C. R. B. Joyce (Eds.), Human judgment: The SJT view (pp. 75-114). Amsterdam: Elsevier Science Publishers BV.

    Brehmer, B. (1988). The development of social judgment theory. In B. Brehmer & C. R. B. Joyce (Eds.), Human judgment: The SJT view (pp. 13-40). Amsterdam: Elsevier Science Publishers BV.

    Brehmer, B., & Joyce, C. R. B. (1988). Human judgment: The SJT view. Amsterdam: Elsevier Science Publishers BV.

    Carroll, J. S., & Johnson, E. J. (1990). Decision research: A field guide. Newbury Park: Sage Publications.

    Casper, W. J., Fox, K. E., Sitzmann, T. M., & Landy, A. L. (2004). Supervisor referrals to work-family programs. Journal of Occupational Health Psychology, 9, 136-151.

    Connolly, T., Arkes, H. R., & Hammond, K. R. (2000). General introduction. In T. Connolly, H. R. Arkes, & K. R. Hammond (Eds.), Judgment and decision making: An interdisciplinary reader (2nd ed., pp. 1-12). Cambridge: Cambridge University Press.

    Connolly, T., & Ordóñez, L. (2003). Judgment and decision making. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Eds.), Handbook of psychology (Vol. 12). New York: John Wiley & Sons.

    Cooksey, R. W. (1996). Judgment analysis: theory, methods, and applications. San Diego: Academic Press.

    Dawes, R. M. (1998). Behavioral decision making and judgment. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (4th ed., Vol. 2, pp. 497-548). New York: McGraw-Hill.

    Dex, S., & Scheibl, F. (2001). Flexible and family-friendly working arrangements in UK-based SMEs: Business cases. British Journal of Industrial Relations, 39, 411-431.

    Doherty, M. E., & Brehmer, B. (1997). The paramorphic representation of clinical judgment: A thirty-year retrospective. In W. M. Goldstein & R. M. Hogarth (Eds.), Research on judgment and decision making: Currents, connections, and controversies (pp. 537-551). Cambridge: Cambridge University Press.

    Goldstein, W. M., & Hogarth, R. M. (1997). Judgment and decision research: Some historical context. In W. M. Goldstein & R. M. Hogarth (Eds.), Research on judgment and decision making: Currents, connections, and controversies (pp. 3-65). Cambridge: Cambridge University Press.

    Graham, M. E., & Cable, D. M. (2001). Consideration of the incomplete block design for policy-capturing research. Organizational Research Methods, 4, 26-45.

    Hair, J. F., Jr, Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.

    Hammond, K. R. (1955). Probabilistic functioning and the clinical method. Psychological Review, 62, 255-262.

    Hammond, K. R., McClelland, G. H., & Mumpower, J. (1980). Human judgment and decision making: Theories, methods, and procedures. New York, NY: Praeger.

    Hammond, K. R., Stewart, T. R., Brehmer, B., & Steinmann, D. O. (1975). Social judgment theory. In M. F. Kaplan & S. Schwartz (Eds.), Human judgment and decision processes (pp. 271-312). New York: Academic Press.

    Highhouse, S. (2001). Judgment and decision-making research: Relevance to industrial and organizational psychology. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work and organizational psychology (Vol. 1, pp. 314-331). London: Sage Publications.

    Hitt, M. A., & Barr, S. H. (1989). Managerial selection decision models: Examination of configural cue processing. Journal of Applied Psychology, 74, 53-61.

    Hoffman, P. J. (1960). The paramorphic representation of clinical judgment. Psychological Bulletin, 57, 116-131.

    Karren, R. J., & Barringer, M. W. (2002). A review and analysis of the policy-capturing methodology in organizational research: Guidelines for research and practice. Organizational Research Methods, 5, 337-361.

    Klein, K. J., Berman, L. M., & Dickson, M. W. (2000). May I work part-time? An exploration of predicted employer responses to employee requests for part-time work. Journal of Vocational Behavior, 57, 85-101.

    Levin, I. P., Huneke, M. E., & Jasper, J. D. (2000). Information processing at successive stages of decision making: Need for cognition and inclusion-exclusion effects. Organizational Behavior and Human Decision Processes, 82, 171-193.

    Mellers, B. A., Schwartz, A., & Cooke, A. D. J. (1998). Judgment and decision making. Annual Review of Psychology, 49, 447-477.

    Peters, P., & den Dulk, L. (2003). Cross-cultural differences in managers' support for home-based telework: A theoretical elaboration. International Journal of Cross Cultural Management, 3, 329-346.

    Powell, G. N., & Mainiero, L. A. (1999). Managerial decision making regarding alternative work arrangements. Journal of Occupational and Organizational Psychology, 72, 41-56.

    Priem, R. L., & Harrison, D. A. (1994). Exploring strategic judgment: Methods for testing the assumptions of prescriptive contingency theories. Strategic Management Journal, 15, 311-324.

    Sherer, P. D., Schwab, D. P., & Heneman, H. G., III. (1987). Managerial salary-raise decisions: A policy-capturing approach. Personnel Psychology, 40, 27-38.

    Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744.

    Stevenson, M. K., Busemeyer, J. R., & Naylor, J. C. (1990). Judgment and decision making theory. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1, pp. 283-374). Palo Alto, CA: Consulting Psychologists Press.

    Stewart, T. R. (1988). Judgment analysis: Procedures. In B. Brehmer & C. R. B. Joyce (Eds.), Human judgment: The SJT view (pp. 41-74). Amsterdam: Elsevier Science Publishers BV.

    Wallace, H. A. (1923). What is in the corn judge's mind? Journal of the American Society of Agronomy, 15, 300-304.

    Wise, S. (2005). The right to time off for dependants: Contrasting two organisations' responses. Employee Relations, 27, 126-140.








    Last Update: 6/18/2016