
IMPLEMENTATION AND EVALUATION

To implement a DSS is to realize the planned system. Implementation includes interpreting designs into code, but it goes far beyond coding. It also includes creating and populating databases and model bases and administering the final product, which means installation, deployment, integration, and field testing. Training users and ensuring they accept the DSS as a useful and reliable tool is yet another aspect of implementation. Finally, evaluation includes all of those steps to ensure that the system does what is needed and does it well. We will begin the discussion with implementation.

IMPLEMENTATION STRATEGY

The success of any implementation effort is highly affected by the process adopted by the implementation team. Unfortunately, there are no standard steps to ensure success; what works well in one implementation might be inappropriate in another. However, Swanson has noted nine key factors in the success or failure of information systems. These include measures that address the system itself (such as design quality and performance level), the process of design (such as user involvement, mutual understanding, and project management), and the organization within which the DSS will be used (such as management commitment, resource adequacy, and situational stability). Table 9.1 provides examples of how these factors may facilitate or inhibit the implementation process. Throughout this book, specific strategies for addressing these nine factors to result in successful implementation have been noted. The strategies can be summarized in five principles.



Design Insights The Fable of the Three Ostriches

Three ostriches had a running argument over the best way for an ostrich to defend himself. The youngest brother practiced biting and kicking incessantly and held the black belt. He asserted that "the best defense is a good offense." The middle brother lived by the maxim that "he who fights and runs away, lives to fight another day." Through arduous practice, he had become the fastest ostrich in the desert—which, you must admit, is rather fast. The eldest brother, being wiser and more worldly, adopted the typical attitude of mature ostriches: "What you don't know can't hurt you." He was far and away the best head-burier that any ostrich could recall.

One day a feather hunter came to the desert and started robbing ostriches of their precious tail feathers. Each of the three brothers therefore took on a group of followers for instruction in the proper methods of self-defense—according to each one's separate gospel.

Eventually the feather hunter turned up outside the camp of the youngest brother, where he heard the grunts and snorts of all the disciples who were busily practicing kicking and biting. The hunter was on foot, but armed with an enormous club, which he brandished menacingly. Fearless as he was, the ostrich was no match for the hunter, because the club was much longer than an ostrich's legs or neck. After taking many lumps and bumps and not getting in a single kick or bite, the ostrich fell exhausted to the ground. The hunter casually plucked his precious tail feather, after which all his disciples gave up without a fight.

When the youngest ostrich told his brothers how his feather had been lost, they both scoffed at him. "Why didn't you run?" demanded the middle one. "A man cannot catch an ostrich."

"If you had put your head in the sand and ruffled your feathers properly" chimed in the eldest, tlhe would have thought you were a yucca and passed you by."

The next day the hunter left his club at home and went out hunting on a motorcycle. When he discovered the middle brother's training camp, all the ostriches began to run—the brother in the lead. But the motorcycle was much faster, and the hunter simply sped up alongside each ostrich and plucked his tail feather on the run.

That night the other two brothers had the last word. "Why didn't you turn on him and give him a good kick?" asked the youngest. "One solid kick and he would have fallen off that bike and broken his neck."

"No need to be so violent,'1 added the eldest, ''With your head buried and your body held low, he would have gone past you so fast he would have thought you were a sand dune."

A few days later, the hunter was out walking without his club when he came upon the eldest brother's camp. "Eyes under!" the leader ordered and was instantly obeyed. The hunter was unable to believe his luck, for all he had to do was walk slowly among the ostriches and pluck an enormous supply of tail feathers.

When the younger brothers heard this story, the youngest said, "He was unarmed. One good bite on the neck and you'd never have seen him again."

"And he didn't even have that infernal motorcycle" added the middle brother. "Why, you could have outdistanced him at a half trot."

But the brothers' arguments had no more effect on the eldest than his had had on them, so they all kept practicing their own methods while they patiently grew new tail feathers.

MORAL: It's not know-how that counts; it's know-when. IN OTHER WORDS: No single "approach" will suffice in a complex world.

Source: "The Three Ostriches: A Fable." Material reprinted courtesy of Dorset House Publishing from G. M. Weinberg, Rethinking Systems Analysis and Design, pp. 23-24, Copyright © 1983, 1982. All rights reserved.


Table 9.1. Factors Influencing Success

Issue: User involvement
Success factors: User involvement and interest; much user involvement and user-level application documentation; user and data processing department cooperation
Failure factors: Lack of end-user involvement; lack of user commitment to application; local user involvement only

Issue: Management commitment
Success factors: Full-management attention; top-management support
Failure factors: Insufficient management interest; lack of top-management involvement in key area; lack of support for required project organization

Issue: Value basis
Success factors: Good public reaction to DSS; value of application; "second system" based on established value of first system
Failure factors: High risk; lack of user acceptance of information value

Issue: Mutual understanding
Success factors: Designers' understanding of user needs
Failure factors: More attention to technical than to user issues; failure to understand the choice process

Issue: Design quality
Success factors: Good design; flexible design
Failure factors: Nonspecific functional design specifications; inflexible design

Issue: Performance level
Success factors: Careful planning and testing
Failure factors: Poor performance; no performance objectives; clumsy implementation of key function

Issue: Project management
Success factors: Strong project and budget control; frequent creative project meetings; use of prototypes
Failure factors: Lack of training package; excessively complex implementation approach; implementation too rushed; poor timing in terms of deadlines

Issue: Resource adequacy
Success factors: Good planning
Failure factors: Excessive use of computing resources; inadequate or poorly used resources; project leader's time not fully committed; lack of resources to make system "friendly"; insufficient technical skills; lack of designer's commitment; bad input data

Issue: Situational stability
Success factors: Stability of user requirements
Failure factors: Departure of designer during implementation; collapse of cost justification; change of rules during implementation; increasing expenses

Adapted from Swanson, E. B., Information System Implementation, Homewood, IL: Irwin, 1988. Material is reprinted here with permission of the author.

Ensure System Does What It Is Supposed To Do the Way It Is Supposed To Do It

The success of a DSS implementation depends to a large measure on the quality of the system and the ease and flexibility of its use. Clearly, if decision makers do not perceive that the DSS facilitates their decisions, they will not use it. The more help the system can provide—in terms of accessing information decision makers might not otherwise know, providing insights decision makers might not otherwise have, or combining information which would have otherwise been kept isolated—the more likely the decision makers are to use it. Further, the easier it is for decision makers to access information and models, the more likely they will be to use them. Much of this book has been dedicated to describing what kinds of features need to be considered and included and how to make the information support richer.

Prototypes. One of the keys to ensuring the system will provide the kinds of information desired in an appropriate fashion is to use prototypes of the DSS throughout analysis and design. Unlike with the design of transaction processing systems, designers should not expect to obtain concrete specifications at the initiation of the project. Decision makers often have difficulties abstracting how they might make choices and how they might use a system if they do not have previous experience with DSS. Further, most manual "support systems" are not well documented; decision makers simply implement a process but are not fully aware of it. Using prototypes, decision makers can discuss specific issues such as movement among screens and windows, kinds of help or other information, and layout and adequacy of information. Decision makers respond better to specific features if they see them in a prototype. Designers and decision makers decrease the likelihood of misunderstanding if they discuss the system in terms of the prototype.

Of course, there are risks associated with using a prototype. First, in order to evaluate a prototype, decision makers must be willing to spend some time using the product. This takes commitment on the part of the decision makers that may be difficult to secure. Second, if only some decision makers participate in the development of a multiuser DSS, designers risk overspecifying design to meet the needs of a subset of the population of users. Designers need to ensure that those decision makers participating in the design process are typical. Third, the final system may not respond in the same manner as did the prototype, particularly in terms of response time. Since users expect the same kind of response, designers need to manage those expectations and make sure the prototype is realistic. The evolutionary approach to designing DSS is an extension of the prototype philosophy. In this approach,


designers start with a small but important part of the problem. As users come to rely upon this one portion of the system and thereby become more knowledgeable about their needs, they can better explain their support needs for future parts of the DSS.

Interviewing. While prototypes will help designers gain this information, they alone are not sufficient; designers must gain much of their information, particularly early in the process, from interviewing. Good interviewing requires preparation. Interviewers must prepare the environment and the opening, gather interview aids, select a strategy, and prepare a closing for the interview.

The goal of preparing the environment is to set a stage where the interviewee will focus on the task at hand and feel sufficiently comfortable to reply usefully. The location must be comfortable, private, and free of distractions and interruptions. A neutral site allows the interviewee and interviewer to work together without interruption from telephone, visitors, or other tasks that need completion (such as piles on one's desk or a calendar). The timing of the interview must also be considered. Generally it is better not to schedule interviews when the interviewee is in the middle of a task or it is close to lunch or quitting time because it is hard to get the individual's full attention. Of course, the timing must consider when the interviewer also will be free from distraction and the amount of time necessary to prepare materials. If the interviewee needs to complete a task, or review materials, or bring materials to the interview, allow time for that to be done.

The purpose of preparing the opening is to build rapport with the interviewee. Often it is helpful to consider the interviewee's background and interests or shared experiences and history. Interviewers need to be friendly and sincere and explain the purpose of the interview as well as the benefits associated with being involved. This opening must be consistent with the purpose of the interview and should not be misleading to the individual.

Prior to the interview, the designers should have gathered the relevant and necessary data, documents, checklists, or access to the information system. These materials might be part of the interview or could provide interviewers with the background necessary to complete a meaningful exchange. Interviewers should complete a checklist or interview schedule that will guide them through the process. This helps maintain the focus of the interview while ensuring that important topics will not be missed. For example, initial interviews often focus on support needs. This means the interviewer must ascertain the scope and boundaries of the tasks in which the decision makers are involved as well as the tasks in which they are not involved. Within particular activities, where possible, interviewers must determine the sequence in which decision makers complete tasks and the factors they need to consider. This includes identifying relationships of importance and the means for identifying them, the heuristics followed, and the process of verifying the outcome of an analysis.

Generally the hardest part of an interview is getting started, so it is particularly important for the interviewer to have ready a series of questions to begin the discussion. These might include the following:

• Could you give me an overview of what you do?
• What initiates your activities and decisions?
• How do you determine when you have examined a problem/opportunity enough to act upon it?
• What is the output of your decision-making effort? Where does it go when it leaves you?


• Do other individuals contribute to your decision-making effort?
• What are the basic components of your decision-making effort?
• Can we define terms?

Postintroductory questions are determined by the strategy of the interview. There are three basic choices: directive, nondirective, and hybrid. In a directive interview, the goal is to get specific information from the decision maker. The questions one selects are highly structured, such as multiple-choice questions or short-answer questions. Where elaboration is allowed, the questions are primarily closed, allowing very little room to deviate from a specific point. When using the directive strategy, one must be very prepared and knowledgeable about the system. Interviewers must ensure that all important issues have been identified and relevant options given.

Nondirective interview strategies, on the other hand, encourage the decision maker to speak freely within a particular domain. The style of interview is highly unstructured and questions are most likely open-ended or probe questions. Clearly, it is crucial that the interviewer be a good listener and know when to probe appropriately. The hybrid approach allows a mixture of both kinds of questions.

Often decision makers respond better to the nondirective strategy, particularly at the beginning of a project. While some decision makers will talk freely, others require more probing before the important information is obtained. Hence, the interviewer needs to be prepared with probing questions, such as:

• Can you think of a typical incident that illustrates how you make decisions?
• What advice would you give to a novice just getting started?
• Have you ever had a situation where…? How did you proceed?
• When you get stuck, what do you do?
• What was the hardest decision you ever had to make? What did you do?
• What would you recommend if the data…?

If the goal is to elicit heuristics for the choice process, the interviewer might attempt questions such as:

• Do you have any rules of thumb for approaching choices such as…?
• In these circumstances [previously described], you seem to… Are there any exceptions to this process?
• Are there solutions that are possible but not acceptable? How do you proceed in those cases?
• How do you judge the quality of your decision? Of the choice process itself?
• How do others judge the quality of your decision? Of the choice process itself?
• How do you make a decision? For what outcomes are you looking?

On the other hand, if the goal is to determine relationships between tasks, interviewers might attempt questions such as:

• This decision process X and the process Y seem to be similar. How are they alike? How are they different?

• Can you compare the task Z to anything else?
• Does the process that you complete, X, depend on something else? What about Y?


Similarly, if the goal is to verify the interviewer's understanding of a description, questions such as the following are appropriate:

• I understood you to say… Have I misunderstood?
• How would you explain… in lay terms?
• Is there anything about your decision process that we have omitted?
• Would it be correct to say that… means…?

Of course, it is also important to understand the sources of information to which the decision maker turns when he or she needs more data, an opinion, or advice. Sources may include colleagues (who may or may not be at the company), mentors, or even people who report to them. Typically different sources are useful for different kinds of information and advice. Knowing when decision makers turn to what kinds of resources helps the designer know more about the kinds of decision aids to include in the system. By the same token, it is useful to know what websites, RSS feeds, and other resources the decision maker follows and trusts so those can be incorporated into the system.

Keep Solution Simple

It is important that the DSS provide the support that the users want. That means the system must provide the necessary tools for the choice task without making the technology the focus of the decision maker's efforts. Too often, designers lose perspective on users' needs and try instead to provide users with the latest "new technology" or all of the "bells and whistles" associated with the available technology. Or, designers may computerize parts of the operation just because it is possible, not because it facilitates the choice process. This may be appealing to the designer who wants to experiment with these technologies, but to the decision maker it seems only a diversion from getting "real work" done. Hence, such approaches are likely to impede the implementation process.

Most decision needs are not "simple." In those cases, the DSS cannot be designed to be simple. However, the system as the decision maker sees it needs to be simple. Generally, the decision maker does not need to know all of the operation of the system. Similarly, the approach to solving a problem, and therefore the steps decision makers need to take, must be intuitive and uncomplicated. For example, users do not need to be aware of all the components that determine the system's confidence in particular information; rather they need to know that the operation exists. Similarly, new or unsophisticated users need not understand all the flexibility the system affords in running models; rather they need to know how to get the base model implemented. Simplicity of use will facilitate decision makers' acceptance and ultimate institutionalization of the system.
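As a rough illustration of this principle, the sketch below (hypothetical names and a deliberately trivial model, not from the text) shows how a DSS might expose a single "base model" operation to new users while the full set of modeling options remains available underneath.

```python
# A minimal sketch, assuming a hypothetical DSS with a single forecasting model.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ForecastModel:
    """Full model interface, with options most users never need to see."""
    alpha: float = 0.3                      # smoothing constant
    seasonal_periods: Optional[int] = None  # advanced option, unused in the base case

    def run(self, history: List[float]) -> float:
        # Simple exponential smoothing stands in for a richer model base.
        level = history[0]
        for value in history[1:]:
            level = self.alpha * value + (1 - self.alpha) * level
        return level

class SimpleDSSFacade:
    """What a new or unsophisticated user sees: one call, sensible defaults."""
    def forecast_next_period(self, history: List[float]) -> float:
        return ForecastModel().run(history)

# A new user simply runs the base model...
print(SimpleDSSFacade().forecast_next_period([100, 104, 101, 108]))
# ...while an experienced user can still tune the underlying model directly.
print(ForecastModel(alpha=0.6).run([100, 104, 101, 108]))
```

The design choice is the point: the flexibility is not removed, it is simply kept out of the new user's path until it is needed.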

Develop Satisfactory Support Base

User Involvement. Most people do not like change. For decision makers, this dislike may be well grounded; often they have been successful because they have long operated in a particular fashion, and changing it seems counterproductive. Adapting to a new computer system, especially if they are not terribly comfortable with computers, can be a difficult enterprise. There are many reasons why such concerns exist. For example, decision makers may fear that the introduction of technology will make them obsolete, change their job responsibilities, or ultimately cost them their job security. Others may feel a certain


possessiveness about information which previously only they could obtain or generate. Still others may view the introduction of the DSS as an invasion into their privacy. Many managers are not secure about all of the methods they use in the choice process and therefore find the analysis phase (where informational and modeling needs are determined) uncomfortable. Finally, the introduction of the DSS may change the balance of power operating within the organization. If "information is power," by shifting the availability of information, the introduction of a DSS may be threatening the power or influence of a given decision maker or department.

While a fear of change can affect the implementation process, more often it is resistance to losing control of the process that causes the bigger problem. For this reason, most designers will need to involve users throughout the analysis and design process. Users who are involved will better understand the reason for the system, the reasons for choices made in the design of the system, and the reasons why some options were not taken. Their expectations will then be more realistic, which is crucial to effective implementation.

User involvement will also help shape the DSS and its features. Different people approach the same problem with quite different methods, including the manner in which they perceive the problem, the importance they attach to features, and the way they navigate within the system. If users whose style is likely to be employed with the system participate in the design process, the system will be more usable to them in the long run. If they are involved from the beginning, they can affect the system at a stage where it is inexpensive and easy to do so. Furthermore, others not involved in the design effort might be more willing to accept the needs expressed by their co-workers than by "outsiders" such as the system designers.

User interaction correlates highly with later use of the system. With some users, however, designers should act on the principle of "small encounters." In other words, the designer and the decision maker will have only brief—and generally informal—interactions during which they address one or two specific issues with regard to the system. In fact, it may seem that these interactions are composed more of nonsystem discussions (or "chitchat") than of system-relevant material. The goal is to address a specific concern and to increase the decision maker's comfort level with the system.

Whenever individuals encounter unknown situations, they build a hypothesis about how their lives will change as a result. Peters (1994, p. 74) notes, "the less we know for sure, the more complex the webs of meaning (mythology) we spin." This leads to one of the foremost problems in implementation. If the decision makers and users do not understand what the system will do, how it will do it, or how it will be used, they will tend to create scenarios about the system and its use. The greater the delay between the hint that something about the new system could be undesirable and the explanation of or discussion about the new system, the worse the scenario is drawn.

The lesson to be learned from this is to keep users and decision makers informed about the progress of development. This leads them to perceive greater control over the situation and therefore will lead to less resistance to the implementation.

Further, they are likely to have suggestions which, if introduced early enough in the process, might lead to a better DSS in the long run. If, however, they do not have the opportunity to voice an opinion until the system is complete, the suggestion is likely to be too expensive to implement.


Table 9.2. Problems Emanating from Unbalanced Influence across Design

IT Dominance
• Too much emphasis on database hygiene
• No recent new supplier or new distinct services
• New systems always must fit data structure of existing system
• All requests for service require system study with benefit identification
• Standardization dominates—few exceptions
• Benefits of user control over development discussed but never implemented
• IT specializing in technical frontiers, not user-oriented markets
• IT thinks it is in control of all
• Users express unhappiness
• Portfolio of development opportunities firmly under IT control
• General management not involved, but concerned

User Dominance
• Too much emphasis on problem focus
• Explosive growth in number of new systems and supporting staff
• Multiple suppliers delivering services; frequent change in supplier of specific service
• Lack of standardization and control over data hygiene and system
• Hard evidence of benefits nonexistent
• Soft evidence of benefits not organized
• Technical advice of IT not sought or, if received, considered irrelevant
• User building networks to own unique needs (not corporate need)
• While some users are growing rapidly in experience and use, other users feel nothing is relevant because they do not understand
• No coordinated effort for technology transfer or learning from experience between users
• Growth and duplication of technical staffs
• Communications costs are rising dramatically through redundancy

User involvement in the analysis and design processes requires a balance between the influence of the designers from IT and the influence of users and decision makers. When the balance is lost, the system suffers. For example, if IT has too much influence in the system design, the DSS may not provide innovative links to resources because of concerns about compliance with other standards in the corporation. On the other hand, if the decision makers have too much influence on the system, standardization may be eliminated, and hence too many resources may be spent on maintenance and integration. Table 9.2 illustrates other examples of imbalances between designers and users of the DSS.

Commitment to Change. Commitment to change is also important. It comes only after the users have bought into the system. If they were involved throughout the process, decision makers are probably already committed to it. If not, it is difficult to gain their commitment without a demonstration of the clear benefits of the system. The organization must be committed to changing the way in which people make decisions and how information is made available. It must be committed to the project so that, during the phases of development, installation, and use, management understands the problems and develops solutions to them. In addition, they must be committed to making a good effort and to making the system work.

Commitment begins at the top. High-level managers cannot be negative about the project or even benignly negligent. Since their priorities set the tone and agenda for an organization, they must support the system if people are to be involved enough to make the system work.


Table 9.3. Factors Influencing Acceptance of a DSS

Organizational climate
• Degree of open communication
• Level of technical sophistication of users
• Previous experiences with using DSS and other computer-based systems
• General attitude about computer-based systems and IT
• Other disruptive influences which might parallel the DSS development and implementation

Role of senior management
• Attitudes of senior management toward computer-based products and the IT department, in terms of both their actions and their statements
• Adequacy of the resources devoted to the IT function in general and the DSS development in particular
• Amount of time spent on IT-related issues by senior management
• Expectancies of senior management
• Integration of IT personnel in strategic decision making

Design process
• Recognition of IT impacts in the organizational planning process
• Participation of IT management in the organizational planning process
• Perceived need for IT in the strategic goals


Managing Change. Management of change is important for the successful introduction of a system. It has three basic phases: unfreezing, moving, and refreezing. Unfreezing, as the name suggests, is the process of creating a climate favorable to change. This includes recognizing there is a need for change. Moving is the process of introducing the new system, and refreezing is the process of reinforcing the change that has occurred.

In the first phase, designers must work with the organization to establish a climate that encourages honest discussion of the advantages and disadvantages of the current system and allows brainstorming of possible solutions and opportunities. In terms of DSS acceptance, this phase hinges on the development of objectives for the DSS to impact the decision-making process, and hence it is begun early in the analysis phase of the project. Designers want to assess those factors which will encourage and discourage implementation. Some possibilities are noted in Table 9.3. This table highlights that the organizational climate, the role of senior management, and the design process all can affect the success of the implementation process. For example, the organizational climate conducive to new systems implementation, as outlined in Table 9.3, is one in which users can talk openly about their needs and concerns, both because of open communication channels and because of a high level of knowledge and experience with systems. However, the environment can be affected by other unrelated issues as well. For example, a corporation in the midst of a merger or financial difficulties might not be conducive to change regardless of the levels of sophistication and communication available. Employees might be so focused upon the survivability of their own employment positions that they cannot focus properly on the DSS under construction or implementation.


Similarly, the role that senior management plays in the process is crucial. As reflected in Table 9.3, senior management personnel who use systems, provide adequate resources to their use, and have high expectations for the payoff of such systems set an environment that is more conducive to implementation than one in which senior management is not involved. In addition, the greater the parallel between the DSS development and strategic plans for the department or organization, the more likely the implementation process will be successful because managers and users will see the need for the system.

Designers must focus on the nature of the users' problems as well as the opportunities that a DSS might affect. Decision makers must perceive a real need and must see that a DSS might meet that need. Of course, during this phase, designers and decision makers must agree upon the goals for the DSS and procedures to monitor progress upon those goals. In addition, it is desirable to define a person or group of people who will champion the idea and to gain the commitment of upper management to make the project work. In fact, evidence suggests that implementation success is improved substantially if upper management demonstrates commitment to the introduction of a DSS. Furthermore, if they initiate DSS development, implementation success increases substantially. Such commitment may be shown in the amount of resources, time, and people (both the design team and the users) dedicated to the project. For it to have an impact, though, the commitment must be ongoing and continuous, not simply for the initial development. All DSS need ongoing support for maintenance and operations. However, if a DSS is to become an important tool, the support must come in gaining new databases and models and other enhancements for the system.

One particular difficulty is the difference between real and perceived problems as well as between real and perceived opportunities. These differences can lead to resistance to implementation or to misstatement of system needs. For example, resistance often results from perceptions that the introduction of the DSS will change one's authority, influence, or even job status. While such perceptions may be unwarranted, knowing about them and attempting to get at their cause may lead to important information that will help with the unfreezing stage of change.

The second phase of change is moving. During this phase, effort focuses upon the development of the DSS. Both technical and managerial resources play a role. Management focuses upon involving users, balancing the influence of the designers and the users, responding to resistance, and creating an environment for eventual acceptance of the new tools. A team of users and designers sets priorities for the project and evaluates trade-offs of possibilities. During the process, the team should provide feedback to the entire community of users and seek their advice. In addition to the technical factors, the team should evaluate how the introduction of a new DSS will change the organizational dynamics associated with decision making. Throughout this phase, the team needs to focus on:

• Perceived needs and commitment • Mutual understanding • Expectancies • Power and change needs • Technical-system issues • Organizational climate • Project technical factors

The final phase of change is refreezing. In this phase, designers must work with users to ensure that the system meets needs adequately and that decision makers understand


how to use new procedures. More important, it requires the development of organizational commitment and institutionalization of the system. This is described in the next section.

Institutionalize System

With a number of factors acting against successful implementation of the system, the designers, in concert with managers, need to plan to institutionalize the system gradually. For example, the manner in which the system is introduced is crucial. If uninterested individuals are offered the system for voluntary use, the DSS is likely to sit idly. Voluntary use will happen only when individuals have the intellectual curiosity to experiment with the system or when the need for the system and its ability to meet that need are well established. On the other hand, managers who insist on mandatory usage of a DSS also face potential failure. It is difficult to legislate decision-making styles. Hence, users may not really use the system but only provide the appearance of doing so.1 Others may work harder to find the weaknesses of the system so as to "prove" it is not worth the time.

A better approach to systems institutionalization is to provide incentives to use the system. Appropriate incentives will, of course, differ from application to application and from organization to organization. However, they need not be elaborate or even financial. For example, one incentive is to pique curiosity by providing certain information only on the DSS first. If the system is well designed, it should then sell itself on its usefulness to the choice process.

Sometimes the incentive might be the availability of a job "perk" such as the exclusive use of a laptop, netbook, or even smart phone. The perk, which actually facilitates the use of the DSS, makes it desirable and easy to use the system. Another form of incentive is to build tools into the system that will help users complete unrelated but important tasks more efficiently or effectively. For example, Sauter and Free (2005) described the building of a DSS for a tertiary hospital which included a feature of "private" notes accessible only to the author. Among other things, this feature gave users a personal information management capability to which they previously had not had access.

Once the incentive system has gained the attention of some individuals to the DSS, they can help others to see the advantage of using the DSS. Enthusiasts can demonstrate the benefits of the systems to others in their work or provide informal incentives for the use of the system. In fact, there is much evidence that the word-of-mouth approach to institutionalization of a system is the one that works best. Hence, it is important for developers and managers explicitly to facilitate its use.

1Many students who were taught to program by drawing "flowcharts" can appreciate this strategy. In most procedural programming language classes, students historically were taught to draw the flowchart to facilitate the development of the logic for writing the code for a program. This is similar to using a DSS to help the decision maker understand all of the possible influences of adopting a particular course of action. However, students often write their code and then create the flowchart that corresponds to their code. In other words, the flowchart was not an aid in their decision-making process, but rather documentation that they followed the appropriate procedures. Similarly, unhappy users, especially those for whom use of a DSS has been legislated, may form their decisions first and then use the DSS to try to prove their choice. In other words, the system will not support the choice process, only document or justify an unaided process.


Associated with the need for incentives to institutionalize systems is, of course, a need for training. Since each potential user cannot be involved in the design process, some users will not know how it operates or why it flows in a particular manner, and hence they need training. However, DSS are used by managers, often upper level managers. Since managers often cannot make substantial commitments of time to training because they cannot abandon the remainder of their operations for an extended period, training for DSS cannot follow conventional training schedules. One approach that works well, especially with upper level managers, is to train on a one-to-one basis. In that way, the trainer goes to the manager's office (or vice versa) and works through the system with the decision maker. Since there are no other individuals present, the approach and the focus can be customized to the user and managers experience less discomfort about asking questions and voicing their concerns. Finally, since the meetings do focus around the manager, trainers can provide as little training as is necessary at a given meeting and schedule as many sessions as necessary to gain the appropriate comfort level of the manager.

Not only are one-on-one meetings less uncomfortable for the decision maker, they are more focused from a training perspective, in that they allow the time to be spent on activities relevant to the individual user and the individual situation. Evidence suggests that training is most effective when it considers needs from an individual's, the task's, and the organization's perspectives. Training on a one-on-one basis allows trainers to work with individuals to help them learn specific knowledge and skills necessary for effective performance. This may include a remedial lesson on using a mouse or an overview of the Internet or other necessary technology not known by a particular decision maker. Trainers can also ensure that the program includes information and skills necessary to complete specific tasks regardless of the user. For example, this might include guidelines on how to search the new databases or how to merge models. Finally, trainers also can identify how the goals of an individual affect or constrain performance or motivation to learn and develop a training program in response to them.

This method can be particularly effective if it is coupled with some postimplementation tailoring of the system to meet a given user's needs or capabilities. Such a strategy may mean allowing the user access to a command-line level of control or turning on the assistance menu so that it appears automatically. The value is that the trainer can determine what "works" best for a given user, help the user to do the necessary tasks as well as possible, and then change the system where the user cannot adapt.

Many years ago I had a colleague who thought it was time I learned to use electronic mail. Although he often spoke about the benefits, which I as an MIS person should certainly understand, I resisted because I had no immediate need to learn email and felt my time was better spent addressing other priorities. My colleague disagreed. Hence, for two weeks, he would send me a message every morning with some little "bit" of information he thought I would find amusing, interesting, or helpful. Just to ensure that I knew the bait was available, he would drop by my office to tell me he had sent me email but not tell me the information contained in the email. Although I found this annoying, it provided just enough incentive to check my email. After a few weeks, it became habit to check my email regularly. Over the years, as more of my colleagues, friends, and students have begun to use email, I have found endless possibilities for its use (as most of my colleagues, friends, and students would tell you). Clearly it is a tool without which I now could not function. Probably I would have learned to use it anyway, eventually. However, I wonder whether I would have discovered its uses as rapidly or as early without my colleague who provided just the right incentive to get me started. Such small, subtle, and customized incentives often provide the best motivation to use new systems.


"works" best for a given user, help the user to do the necessary tasks as well as possible, and then change the system where the user cannot adapt.

IMPLEMENTATION AND SYSTEM EVALUATION

How does a designer know when a DSS and the implementation of that DSS are successful? This question really has two parts—how to test the DSS itself and how to test the implementation of the DSS. In the first case, the issue is the technical appropriateness of the system; in the second, it is the overall usefulness of the system.

Technical Appropriateness

If the technical requirements of the decision makers are not achieved, then the system will not be used. If the system is not used, then by definition the implementation has been a failure. Hence, one possible measure for determining implementation success is the extent of use of the DSS, especially compared to the intended use. However, a more pragmatic measure might be the number of features consistent with the user's information needs, especially compared to the number of possible features. If the system provides information that is consistent with regard to decision making needs on all these dimensions of information, then it is successful. Similarly, the model management chapter suggested the need for variability in models and model management features, such as intelligent assistance and model integration. If the system provides appropriate models and model management capabilities, then the DSS can be considered successful.

To determine whether the system functions properly, we can test it to see whether or not the system does what it is supposed to do. For example, database calls can be performed to determine if the correct information is called, and models can be tested to determine whether they perform the correct manipulations. The decision aids, such as intelligent help, can be checked by testing a modeling situation in which such assistance should be invoked. Success of these components can be judged by measuring the percentage of cases for which appropriate advice was given and the adequacy of the explanations provided by the system.
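As an illustration only, the sketch below (all functions and cases are hypothetical, not from the text) shows one way such a measurement could be automated: a set of prepared modeling situations is run through a stand-in for the intelligent assistance component, and the percentage of cases with appropriate advice is reported.

```python
# A minimal sketch, assuming a hypothetical advice component and reviewed test cases.

def intelligent_advice(situation: dict) -> str:
    """Stand-in for the DSS's model-selection assistance."""
    if situation["trend"] and situation["seasonality"]:
        return "use seasonal trend model"
    if situation["trend"]:
        return "use trend model"
    return "use simple average"

# Each prepared situation is paired with the advice an expert reviewer judged appropriate.
cases = [
    ({"trend": True,  "seasonality": True},  "use seasonal trend model"),
    ({"trend": True,  "seasonality": False}, "use trend model"),
    ({"trend": False, "seasonality": False}, "use simple average"),
    ({"trend": False, "seasonality": True},  "use seasonal model"),  # a known gap
]

hits = sum(intelligent_advice(situation) == expected for situation, expected in cases)
print(f"Appropriate advice in {hits}/{len(cases)} cases ({100 * hits / len(cases):.0f}%)")
```

A harness of this kind can be rerun whenever databases, models, or assistance rules change, so the percentage becomes a repeatable measure rather than a one-time judgment.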

It is imperative that such tests be done under client conditions. For example, testing a network-based system in "supervisor" or "administrator" mode does not measure whether the DSS works properly. "Administrator" mode allows many privileges not available to the typical user that may be crucial to the system functioning effectively. Nor is testing a system away from the user's station sufficient. Users, particularly managers, are likely to have a variety of programs residing on their machines, each with its own peculiarities; these programs may alter the path by which the operating system will check for programs and/or files. They may have drivers that conflict with the DSS or they may affect the allocation of memory in a way that conflicts with the DSS. It is not sufficient to tell the managers the DSS would work if only they would quit using other applications. Testing is meant to see whether the system works from the users' stations under their general operating conditions.

Many aspects can be tested individually. However, unlike transactional processing systems, DSS can never be completely tested for all possible contingencies. Designers cannot anticipate all of the uses to which decision makers will put the system, and so they cannot ensure the system will work properly in all of those applications. Hence, it is also imperative that some tests be done by the potential users themselves. Often minor system flaws are associated with the order in which programs are loaded or the manner in which


functions are invoked, which experienced programmers may address instinctively (and hence not detect the malfunction); less experienced users are likely to find such problems early. Even if the problem is not a "bug" per se, it might just be a bad or difficult way for the software to function.

Finlay and Wilson (1997) proposed criteria for evaluating DSS, most of which are generalizable to any DSS. The relationships among these criteria are shown in Figure 9.1. The criteria evaluate the system along five dimensions: the logic, the data, the user interface, general issues, and face validity.

Logical validity addresses how well specific action-reaction sequences in the DSS are constructed. This involves two aspects of the logic: analytical validity examines the individual pairwise relationships of the model, while theoretical validity examines the holistic nature of the model in terms of the theory underlying the decision under construction. In other words, does the system work as expected given what is known about how to solve the problem(s) addressed by the DSS? This might include whether cost is calculated appropriately, whether forecasts and other models and operations work appropriately (especially as they share data), and whether there are systematic errors in how logic is executed.

Data validity, as the name suggests, considers whether the data included in the DSS are appropriate for the decision under consideration and whether they are accurate, unbiased, and measured (and represented) at an appropriate level of precision. This includes the reliability of the source and the data-scrubbing techniques.

Interface validity examines how the user would interact with the system. First, it is important to evaluate the usability in terms of the people who will use it and the conditions under which the system will be used: is the DSS simple, consistent, informative, and flexible from the users' perspectives? The interface is the window to the system, and if it is not clear, then how to use the system and what results the DSS generates also will not be clear. In addition, it is necessary to examine whether the necessary and sufficient items are displayed, whether they are displayed in an appropriate manner, and whether they are understandable to the user. Of course, all of this assumes that the system is easy for the users to manipulate. This includes examination of whether it is easy to learn to use and to remember how to use, the speed of use, and its similarity to other well-known systems.

The fourth set of criteria, general validity, looks at the overall usability of the system. This examines whether the system is designed from the right perspective, whether it is able to utilize the appropriate data, and whether the users will believe that it can provide the counsel needed. Taking that a bit further, the system needs to be evaluated in terms of whether it can be stretched beyond the specific boundaries of the system and, if so, how far. By the very nature of poorly defined and even wicked problems, decision makers are likely to consider scenarios and options beyond the scope of the original design and so need a system that will stretch with them. They also need a system that is replicable and consistent in its methods and procedures, so that data can be processed consistently and results are consistent. Of course, it must not only be replicable but also be correct. Models must be used correctly and checked for appropriate assumptions. Recommendations must flow well from the analysis that is provided. The recommendations, and thus the system as a whole, will be believed when subsequent actions from the phenomenon being modeled actually behave in the predicted way.

Finally, face validity asks whether the DSS has access to information similar to or better than that of conventional sources and whether it behaves (analyzes and gives results) similarly to conventional sources.
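One lightweight way to apply these criteria during an evaluation is a simple scorecard. The sketch below uses the five Finlay and Wilson dimensions, but the checklist items and the scoring scheme are illustrative assumptions rather than part of their method.

```python
# A minimal sketch: a reviewer scorecard keyed to the five validity dimensions.
criteria = {
    "logical validity":   ["analytical relationships correct",
                           "theory of the decision reflected holistically"],
    "data validity":      ["data accurate and unbiased",
                           "precision appropriate to the decision"],
    "interface validity": ["simple, consistent, informative, flexible",
                           "necessary and sufficient items displayed clearly"],
    "general validity":   ["right perspective and appropriate data",
                           "stretches beyond the original scope",
                           "replicable, consistent, and correct"],
    "face validity":      ["information comparable to conventional sources",
                           "behaves like a conventional analysis"],
}

# Reviewer scores per item: 0 = fails, 1 = partially meets, 2 = fully meets.
scores = {
    "logical validity":   [2, 2],
    "data validity":      [2, 1],
    "interface validity": [1, 2],
    "general validity":   [2, 1, 2],
    "face validity":      [2, 2],
}

for dimension, items in criteria.items():
    earned, possible = sum(scores[dimension]), 2 * len(items)
    print(f"{dimension:<18} {earned}/{possible}")
```

The value of such a scorecard is mainly diagnostic: low scores point the design team to the dimension that needs attention, rather than reducing the evaluation to a single number.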

Petkova and Petkov (2003) supplement these technical characteristics with how the system fits into the environment in which it operates. They add four items to the list that need evaluation. First, the system should be at an appropriate level of complexity. Although the data might support quite extensive modeling, users may not have the competence to build and evaluate such models. Hence, the DSS designer needs to balance the complexity needs of the system with the competence of the users.

[Figure 9.1. Relationships among measures of effectiveness. The figure relates system effectiveness to logical validity (analytical and theoretical validity), data validity (accuracy, precision), interface validity, general validity, and face validity, with subsidiary measures including usability, clarity, information validity, conceptual validity, robustness, internal validity, operational validity, experimental validity, reliability, verification, replicative validity, and predictive validity. Source: P. N. Finlay and J. M. Wilson, "Validity of Decision Support Systems: Towards a Validation Methodology," Systems Research and Behavioral Science, 14(3), 1997, pp. 169-182. Used with permission.]


Second, the DSS should be consistent with the preferences of the organization. These preferences might be toward modeling or toward the hardware and software selected for the DSS. Designers should, whenever possible, provide a system that meets conventions of the organization. Third, Petkova and Petkov claim a DSS should be evaluated on the quality of the documentation. In most cases such documentation will exist in terms of the online assistance, automatic popup messages, and other assistance provided by the system rather than separate manuals. Finlay and Wilson (1997) suggest that the system needs to be able to handle unforeseen problem formulations and solution alternatives. Petkova and Petkov suggest that a measure of evaluation is how well the system can adapt (or be adapted) to such unforeseen issues. This might include how easy it is to add new models or data sources or how easily new logic can be implemented in the system. Finally, Petkova and Petkov indicate the system should be evaluated on how well it addresses preconceived notions of problem solving and potential solutions by decision makers. They suggest that a good system is one that abides by appropriate preconceived approaches but protects decision makers from biased or other problematic approaches.

Overall Usefulness

To measure the system as a whole, designers must measure its usefulness to the subject and determine if the system facilitates a logical analysis of the problem. This can first be determined by decision maker-users testing the system. It is necessary to have experienced decision makers during this phase of testing. They would use the system and determine whether it provides reasonable advice and reasonable suggestions for the situation under consideration. If so, then it can be judged to be functioning properly. A problem flag can be generated when these decision makers find lapses in the advice or peculiar steps through analyses. Sometimes these are actual problems in the software, which needs maintenance. Other times, these flags denote a nonintuitive approach to analysis that might call for more assistance windows or greater use of artificial intelligence aids.

Another way of testing the system is with a modified Turing test.2 The purpose of such a test is to determine whether the system is providing appropriate advice and analyses that are consistent with what an expert analyst might provide. Prior to the test, expert analysts are asked to provide solutions or explanations for situations that a decision maker using the DSS might encounter. These human-based, expert solutions or explanations are intermixed with those generated by the DSS. Decision makers are provided two solutions or explanations to a problem and asked to compare them. If the decision makers cannot tell the difference between a human-based answer and a machine-based answer, then the DSS

2The original Turing test was created by the English computer scientist Alan Turing to measure whether or not a computer system demonstrated "artificial intelligence." The Turing test required a human interviewer to "converse" with both an unseen human and a computer on a particular topic. If the interviewer could not determine when he or she was conversing with the human or computer, the computer system was said to have artificial intelligence. If it was obvious when the computer responded, then the system failed the test. Many individuals have challenged the Turing test. Clearly it is not appropriate for evaluation of a DSS. However, the modified Turing test does provide some insight into the adequacy of analyses and advice provided by the system.


is judged to be working properly. Clearly some form of comparison of the outcome of the DSS and that of an expert analyst is necessary.
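A minimal sketch of such a comparison appears below; the advice pairs and the scoring rule are invented for illustration. Each decision maker sees the expert's and the DSS's answers in random order and guesses which came from the DSS; identification accuracy near chance suggests the two are indistinguishable.

```python
# A minimal sketch of a modified Turing test for DSS advice (invented example pairs).
import random

pairs = [
    ("Expand capacity at plant B; projected demand growth exceeds capacity.",   # expert
     "Recommend expanding plant B given projected demand growth."),             # DSS
    ("Defer the acquisition until interest rates stabilize.",
     "Hold the acquisition decision pending rate stability."),
    ("Discount slow-moving inventory before quarter end.",
     "Apply markdowns to slow-moving stock this quarter."),
]

def run_trial(judge) -> float:
    """Show each pair in random order; return the fraction of correct DSS identifications."""
    correct = 0
    for expert_answer, dss_answer in pairs:
        shown = [("expert", expert_answer), ("dss", dss_answer)]
        random.shuffle(shown)
        guess = judge(shown)  # index of the answer the judge believes came from the DSS
        if shown[guess][0] == "dss":
            correct += 1
    return correct / len(pairs)

# A judge who cannot tell the difference is reduced to guessing, so accuracy near 0.5
# over many pairs suggests the DSS output is indistinguishable from the expert's.
print(f"Correct identifications: {run_trial(lambda shown: random.randrange(2)):.0%}")
```

In practice the "judge" would of course be a human decision maker rather than a function, and many more pairs would be needed before the accuracy figure means anything.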

Implementation Success

Scott (1995) characterizes three approaches to identifying success, depending upon whether a measure reflects "input," "output," or "process" models of the organization. For example, using an input model of the organization means the evaluator examines how the DSS impacted organizational resources. In particular, the measures of a system's success would focus upon how the DSS helped the organization acquire additional resources or the measures of success would reflect improvements in the use of scarce resources. Dickson and Powers (1973) suggest quantitative measures, including (a) ratio of actual project execution time to the estimated time and (b) ratio of the actual cost to develop the project to the budgeted cost for the project. While these may measure the efficiency of the implementation, they do not reflect the effectiveness of the implementation.
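With purely illustrative numbers (not taken from the text), these two ratios work out as follows.

```python
# A minimal sketch with illustrative numbers for the two efficiency ratios.
estimated_months, actual_months = 9, 12
budgeted_cost, actual_cost = 250_000, 310_000

execution_time_ratio = actual_months / estimated_months  # > 1.0: the project ran long
cost_ratio = actual_cost / budgeted_cost                  # > 1.0: the project ran over budget

print(f"Execution-time ratio: {execution_time_ratio:.2f}")  # 1.33
print(f"Cost ratio:           {cost_ratio:.2f}")             # 1.24
```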

Measuring implementation success with an output view of the organization causes the evaluator to measure the improvement in organizational effectiveness attributable to the DSS. For example, this might include measurement of the success of the implementation by the payoff to the organization, especially in terms of benefits-to-costs ratios. However, DSS, by their very nature, are associated with difficult decisions, managerial operations, and significant externalities. The system might be effective but still not change the way operations are conducted or not help to anticipate an unusual external event that strongly affects an outcome.

We must therefore separate the issue of a good or bad "decision" from that of a good or bad "outcome." Good decisions, as we stated earlier, are well informed. It is not always true that good decisions are linked with good outcomes or that bad decisions are always linked with bad outcomes. Often that interaction is a function of chance or other factors we do not yet understand. In other words, the DSS might have helped the decision maker make a good decision or a well-informed decision, but that decision resulted in a bad outcome.

Design Insights Clients Testing Software

Some insights into implementation can be found by considering the procedures implemented by Edmark, an educational software company. (Educational software provides the same function for children that a DSS does for managers. Good educational software helps children discover opportunities to learn new concepts, identify how those new concepts are similar to what they have used in the past, determine what they need to know, discover how to apply that information, and make appropriate decisions about how to move on to a new topic. Hence, some of the same design principles can be applied to both kinds of effort.)

Of course, the programmers test the software to ensure it works. However, in addition, the sons of the CEO and the CFO, as well as some of their friends, also test the software. In fact, the CEO's son began testing the software when he was five years old. These software "testers" represent the children who ultimately will be the users of the system. If they cannot use the software, find errors in the functionality, or find the procedures kludgy, it is redesigned before it goes to market.

Similarly, the company employs mothers of young children to spend time in stores explaining its products to clerks and customers. In this way, nonthreatening facilitators can adapt the assistance and information they provide to users appropriately. Since users better understand how to use the product, they are more satisfied with its use.

Source: From D. J. Yang, "The Pied Piper of Kids' Software," Business Week, August 7, 1995, pp. 70-71.


not yet understand. In other words, the DSS might have helped the decision maker make a good decision or a well-informed decision, but that decision resulted in a bad outcome.

While it might be desirable to evaluate a DSS in terms of input costs or benefits to the organization, neither of these helps designers make a system better. The third option is the process model, which focuses evaluation upon the way in which the system works within an application. In general, the DSS should meet a recognized need, be easy to use, meet even the most sophisticated informational needs, have exploration capabilities, provide intelligent support, and be easy to maintain. As a support system, the DSS must also meet the decision-making needs and the organizational restrictions and be accepted by users. Hence, for implementation to be successful, the designer must address (a) technical appropriateness and (b) organizational appropriateness. While many of these aspects have been discussed in some detail in earlier chapters, we will review the important issues here.

Measurement Challenges. There are other measures designers consider when evaluating system success. Some designers check the degree to which the system meets its original objectives or the degree of institutionalization of the DSS. Others measure the amount of system usage as a surrogate for system effectiveness. However, there are problems associated with this measurement. First and foremost is the question of how one actually measures usage. The number of keystrokes and other mechanized measurements only record the number of times particular commands were invoked. The number of times a system is invoked tells us very little about how much or how well the system contributed to the choice process. Decision makers might invoke commands multiple times to reassure themselves that the command will be read the same way each time or because they forget they have already done so. In these cases, many observations of usage would not reflect greater importance or usefulness to the decision maker. Similarly, a small number of usages might not reflect lesser importance or usefulness. For example, sometimes simply seeing an analysis once might initiate a creative solution to a problem that would not otherwise have been apparent.

While electronic monitoring of usage can have difficulties, so can reported usage. If designers rely upon the decision maker to report system usage, they might receive faulty information. Most decision makers are too involved in a decision task to be accurately aware

The task in building a DSS is like the job any other engineer confronts when faced with new technologies and new materials. Suppose that a critical step in building an airliner once required assembling two parts in an awkward location, demanding a special wrench that could reach that location and apply the proper torque. If you had the job of designing that wrench, it would be easy to think of tightening the nut as your goal.

It would take a higher-level view to envision the goal as one of holding those two parts together. As new generations of adhesives became available, the engineer with this view would consider them while the "nut tightener" engineer would not.

But only the highest level of thinking would recall that the goal is to transmit a force or a bending moment through the structure, with the assembly of these two parts being merely a means to that end. If new materials made it practical to make a one-piece part to do the job, the question of how to fasten the two parts disappears.

Source: Adapted from P. Coffee, "Value Tools by Their Decision Making Power," PC Week, 12(27), July 10, 1995, p. 27.


of how much or how little they use the tool. If decision makers were favorable toward the introduction, they may bias their estimates positively; if they were unfavorable toward the introduction, they may bias their estimates negatively. Finally, even if we could measure use reliably, use does not equal usefulness. Studies in the mid-1980s (see, e.g., Srinivasan, 1985) showed that system usage did not correlate highly with perceived usefulness of a DSS and thus did not provide reliable measures of system success.

Once upon a time, long ago, in a land far away, a farmer had a goose that laid golden eggs. It was not too clear how this happened. The goose ate seemingly ordinary food and did the

ordinary things geese do, and demanded nothing more—but she kept laying golden eggs. Geese are not good at communicating to humans, but the farmer, a kind lady named Mrs. Mulrooney, seemed to be providing whatever minimal care the goose required, and the eggs kept coming.

The eggs kept coming, that is, until . . . No, no, the farmer did not cut open the goose to see how the eggs grew. What a silly story

that would be! This was a modern, corporate agricultural business. What happened in this case was that Higher Management cut open the farmer—but I am getting ahead of the story.

How, the Higher Managers first inquired, did the farmer's management processes produce such good results from this seemingly ordinary goose? And how could the continuation of these excellent results be assured? In fact, now that they thought about it, how good were the golden eggs, and how could that be verified?

So they asked many questions of Mrs. Mulrooney, but she simply shrugged and explained, "The goose just does this. I feed her, keep her safe and leave her alone, and go collect the eggs. That's all I can tell you."

Clearly, Higher Management concluded, this simple-minded approach lacked proper analytical rigor. "Everything this goose produces must be properly reviewed!" they cried. So they appointed inspectors to examine the eggs and make sure they were really gold. (The fact that people actually were buying the eggs, getting expert appraisals and paying the price of gold, did not seem to impress Management.) Then they had the inspectors map out all the steps in the production process and recommend improvements. Since the inspectors had no idea how the goose actually produced the eggs, these changes simply slowed delivery and mildly annoyed the goose, but Management pressed on.

Management next turned their attention to input: "Cut the goose's feed ration, reduce the size of her coop, clean it less often, and see whether we can produce the same output for less cost," they insisted. Mrs. Mulrooney objected but did as she was directed when Higher Management persisted.

"This is silly," she argued. "Processes that contribute nothing to production are worse than useless. Friends have told me about companies in the city where someone kept producing bigger and bigger results, on time and within budget, and the clients loved it and kept ordering more and more. When the orders got big enough, the bosses decided more management control was needed, but that just interfered with the work and the client relationships rather than helping. Aren't we in danger of doing something like that here?1' To which, of course, the Higher Managers snorted, "You just don't understand the Big Picture, and clearly you are unprepared to control a matter as critical as this."

So far, Management had done little harm, but now an August Personage added another aspect to the situation. This Personage was another farmer nearby, under the same corporate management. He had some background in metallurgy, however, so he issued grand memoranda to Higher Management explaining why metallic products should all be produced at his farm, where he had proper inspectors and processes and controls. Higher Management agreed to do this if the August Personage could show them that his geese, too, would produce golden eggs. The August


Personage delivered a fine-looking production plan with a schedule of expected deliveries. He then set several of his subordinates to work stealing the golden eggs, for which he promptly claimed credit in his production reports.

Now Higher Management could see results! It appeared to them that Mrs. Mulrooney had been lucky, but now her luck was running out. Her farm's production was down, the other farmer's production was up, and he had a more convincing (at least to them) story to tell about it. Also, the growing friction between the farmers at staff meetings was becoming a nuisance, as Mrs. Mulrooney claimed with increasing edginess that her goose was the big producer and that now the whole organization was impeding productivity and rewarding dishonesty.

It was at this point that someone in Higher Management suggested, "OK, if it's really Mrs. Mulrooney's goose that is laying the golden eggs, and if she's really vital to making that happen, then she's the factor we need to understand. She can't explain, so we need to analyze everything about her ourselves. Besides, this might end the arguments." And that's when they cut her open, to determine whether something about her diet, or metabolism, or whatever, might be the key to success.

Higher Management's latest innovation had precisely one effect: the goose eventually got tired of not being fed and flew off to another farm, away from all this nonsense.

Source: D. Samuelson, "Oracle: The Golden Goose," OR/MS Today, 34(4), August 2007, p. 72. This article is reprinted with permission.

To address such problems, others measure user satisfaction. The logic behind this measurement is that if the DSS is effective, it will make users more satisfied with the system. Many instruments have been constructed to determine whether users are satisfied with the system. Ives, Olson, and Baroudi (1983) examined many of the instruments being used to measure satisfaction and found they could be standardized by examining decision makers' satisfaction with regard to about 40 factors. While reliable measurements can be made by asking about users' satisfaction with each individual factor, many decision makers are not willing to take the time to complete such a questionnaire. Furthermore, users tend to generalize these factors (such as ease of use) and may report their first, last, or typical experience rather than an overall experience. However, this approach does work well during the development process if designers are using prototypes. Specifically, if users are queried about specific technical attributes of the system iteratively (rather than only at the end of the design process), decision makers and designers can understand which components work best and which work most poorly in the system. This leads to a better design and, in the long term, to more satisfaction with the system.

Davis (1989) found that measures of "perceived usefulness" and "perceived ease of use" were easier to obtain and thus more reliable measures of DSS success. He used Likert scales to measure attributes of perceived usefulness and attributes of perceived ease of use. To measure perceived usefulness, Davis provided Likert scales which asked users to rate a product (i.e., a DSS) on a scale from "extremely likely" to "extremely unlikely" with regard to seven perspectives of usefulness. These have been adapted here with regard to DSS use:

• Enable the decision maker to accomplish analyses more quickly.

• Improve the decision maker's choice performance.
• Increase the user's productivity.
• Enhance the user's effectiveness in making choices.
• Make it easier for decision makers to make choices.
• Help the user to find the DSS useful in making decisions.


Similarly, Davis used the Likert scales to measure perceived ease of use. In the context of a DSS, these measurements might involve the following (a scoring sketch for both scales appears after the list):

• Learning to use the DSS would be easy for the decision maker.
• The decision maker would find it easy to get the DSS to do what he or she wanted it to do.
• The decision maker's interaction with the DSS would be clear and understandable.
• The DSS would be flexible in interactions.
• It would be easy to become skillful at using the DSS.
• The decision maker would find the DSS easy to use.
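Assuming each item is rated from 1 (extremely unlikely) to 7 (extremely likely), the sketch below averages the item ratings into the two scale scores. The item labels are shorthand paraphrases of the statements above, not Davis's exact wording, and the sample response is invented.

# Shorthand labels for the perceived usefulness (PU) and perceived ease of use (PEOU) items.
usefulness_items = ["faster analyses", "better choices", "productivity",
                    "effectiveness", "easier choices", "overall usefulness"]
ease_items = ["easy to learn", "does what I want", "clear interaction",
              "flexible", "easy to become skillful", "easy to use"]

def scale_score(ratings, items):
    """Average the Likert ratings (1-7) for the items that make up one scale."""
    return sum(ratings[item] for item in items) / len(items)

# Hypothetical response from one decision maker.
response = {"faster analyses": 6, "better choices": 5, "productivity": 6,
            "effectiveness": 5, "easier choices": 4, "overall usefulness": 6,
            "easy to learn": 3, "does what I want": 4, "clear interaction": 4,
            "flexible": 3, "easy to become skillful": 4, "easy to use": 3}

pu = scale_score(response, usefulness_items)
peou = scale_score(response, ease_items)
print(f"perceived usefulness = {pu:.1f}, perceived ease of use = {peou:.1f}")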

However, Sauter (2008) conducted a study of actual use of a system (instead of the more common study of intent to use a system). The results of this study, which are illustrated in Figure 9.2, show that there may be other mitigating factors that impact the acceptance

[Figure 9.2 diagrams the factors impacting system acceptance: comfort with the purpose and/or family of applications, whether the primary task is directly associated with intrinsic IT characteristics, comfort with the approach to completing the task, perceptions of external control, mandate, values, media richness, and compatibility with business processes, feeding into perceived ease of use (PEOU), perceived usefulness (PU), and technology adoption.]

Figure 9.2. Factors Impacting System Acceptance. (Source: V. L. Sauter, "Information Technology Adoption by Groups Across Time," International Journal of e-Collaboration, 4(3), July-September 2008, pp. 51-76. Reprinted here with permission of the Publisher.)


of the DSS. In particular, users are more likely to adopt a system if, among other reasons, they are familiar with the family of technology, they are comfortable completing the task, and the tool is compatible with business processes. So, if new technology is implemented, such as voice recognition or artificial intelligence, decision makers may not adopt the system simply because they are not comfortable with that kind of application, even if the application itself is easy to use and is effective. This means that training might involve more than just how the system works. It perhaps needs to involve some rudimentary exposure to the new technology and some time to become comfortable with it before training for the DSS begins. Similarly, if users are not comfortable with the decision process itself, they may not adopt the system because it gives too much visibility to the process they use in the task, a risk they may not be willing to take. To overcome this problem, implementers might provide decision training or develop a supportive infrastructure before the DSS training. Finally, if the use of the DSS implies an incompatibility with their normal business processes, users may not adopt the system because it is too difficult. Obviously, this requires some change in the process before implementing the DSS. If these situations are not addressed before the DSS is introduced, users are more likely to fall back on current technology and reject the change.

Organizational Appropriateness

As discussed in those earlier chapters, the system must become a component of the larger system that is the organization. To do this, it must support the decision styles of the users and the manner in which those decision styles change over time. In addition, it must behave appropriately for the organization in which it exists. It must provide levels of security and use consistent with corporate policy and provide information consistent with the expectations of the users. Just as new employees must "fit in" to a department and an organization, the system must "fit in" and meld comfortably with the department. This might include the appropriateness of the user interface, the appropriateness of data availability, or the appropriateness of the modeling methodologies. If the system does not fit into the department, it is likely to suffer the same fate as an employee who does not fit in, and hence it will not be implemented.

Dickson and Powers (1973) believe one can capture the behavioral appropriateness of the implementation by measuring (1) managerial attitudes toward the system, (2) how well information needs are satisfied, and (3) the impact of the project on the computer operations of the firm. These measures all reflect perceptions of the system. In addition, they are all measures taken after the system is implemented. Hence, they are not in keeping with the philosophy of planning for implementation throughout the project. A better approach would be to evaluate the various types of nontechnical feasibility discussed in Chapter 2.

The DSS must also fit within the constraints placed upon it by the organization. For example, Meador and his colleagues (1984) concluded that a DSS is successful if it:

• Fits with the organization's planning methods
• Helps with decision makers' way of thinking about problems
• Improves the decision makers' thinking about problems
• Fits well with the "politics" of how decisions are made
• Use results in choices that are implemented
• Is cost-effective and valuable relative to its cost
• Is expected to be used for some time


In other words, DSS need to interface well with other systems within the organization. Even if the DSS does a great job facilitating decisions, it cannot be a success if it does not facilitate mandated decision steps or other activities.

DISCUSSION

Implementation implies realization of the planned system. The purpose of this chapter is to highlight some of the barriers to implementation and some of the strategies that can increase the likelihood of successful implementation. Clearly, a better analysis of real needs, greater sensitivity of the designers to the organizational climate, greater involvement of users early in the process, and greater commitment of management all improve the likelihood that a technically appropriate DSS is implemented.

SUGGESTED READINGS

Adams, D. A., R. R. Nelson, and P. A. Todd, "Perceived Usefulness, Ease of Use and Usage of Information Technology: A Replication," MIS Quarterly, Vol. 16, No. 2, June 1992, pp. 227-247.

Alavi, M., and E. A. Joachimsthaler, "Revisiting DSS Implementation Research: A Meta-Analysis of the Literature and Suggestions for Researchers," MIS Quarterly, Vol. 16, No. 1, March 1992, pp. 95-116.

Alexander, C., The Timeless Way of Building, New York: Oxford University Press, 1979.

Alter, S. L., Decision Support Systems: Current Practice and Continuing Challenges, Reading, MA: Addison-Wesley, 1980.

Beynon, M., S. Rasmequan, and S. Russ, "A New Paradigm for Computer-Based Decision Support," Decision Support Systems, Vol. 33, 2002, pp. 127-142.

Beyer, H., and K. Holtzblatt, Contextual Design, San Francisco: Morgan Kaufmann, 1998.

Brown, T., "Design Thinking," Harvard Business Review, Vol. 86, No. 6, June 2008, pp. 84-92.

Carmel, E., R. D. Whitaker, and J. F. George, "PD and Joint Application Design: A Transatlantic Comparison," Communications of the ACM, Vol. 36, No. 4, June 1993, pp. 40-48.

Cash, J. I., F. W. McFarlan, J. L. McKenney, and M. R. Vitale, Corporate Information Systems Management: Text and Cases, 2nd ed., Homewood, IL: Irwin, 1988.

Clement, A., and P. C. Den Besselaar, "A Retrospective Look at PD Projects," Communications of the ACM, Vol. 36, No. 4, June 1993, pp. 29-39.

Davis, F. D., "Perceived Usefulness, Perceived Ease of Use and User Acceptance of Information Technology," MIS Quarterly, Vol. 13, No. 3, September 1989, pp. 319-342.

DeLone, W. H., and E. R. McLean, "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research, Vol. 3, No. 1, March 1992, pp. 60-95.

Dickson, G., and R. Powers, "MIS Project Management: Myths, Opinions and Realities," in W. McFarlin et al. (Eds.), Information Systems Administration, New York: Holt, Rinehart and Winston, 1973.

Dickson, G. W., and J. C. Wetherbe, The Management of Information Systems, New York: McGraw-Hill, 1985.

Dumas, J., and P. Parsons, "Discovering the Way Programmers Think about New Programming Environments," Communications of the ACM, Vol. 36, No. 6, June 1995, pp. 45-56.

Finlay, P. N., and J. M. Wilson, "Validity of Decision Support Systems: Towards a Validation Methodology," Systems Research and Behavioral Science, Vol. 14, No. 3, 1997, pp. 169-182.

Galletta, D. F., M. Ahuja, A. Hartman, T. Teo, and A. G. Peace, Communications of the ACM, Vol. 38, No. 6, July 1995, pp. 70-79.

Gass, S. I., "Model World: Danger, Beware of the User as a Modeler," Interfaces, Vol. 20, No. 3, 1990, pp. 60-64.

Ginzberg, M. J., "Steps Towards More Effective Implementation of MS and MIS," Interfaces, Vol. 8, No. 3, May 1978, pp. 57-73.

Ginzberg, M. J., "Key Recurrent Issues in MIS Implementation Process," MIS Quarterly, Vol. 5, No. 2, June 1981, pp. 47-59.

Guimaraes, T., et al., "The Determinants of DSS Success: An Integrated Model," Decision Sciences, Vol. 23, No. 2, March/April 1992, pp. 409-431.

Ives, B., and M. H. Olson, "User Involvement and MIS Success: A Review of Research," Management Science, Vol. 30, No. 5, May 1984, pp. 586-603.

Ives, B., M. H. Olson, and J. J. Baroudi, "The Measurement of User Information Satisfaction," Communications of the ACM, Vol. 26, No. 10, October 1983, pp. 785-793.

Jenkins, A. M., and J. A. Ricketts, "Development of an Instrument to Measure User Information Satisfaction with Management Information Systems," Unpublished Working Paper, Indiana University, Bloomington, 1979.

Johnson, J., My Life Is a Failure: 100 Things You Should Know to Be a Successful Project Leader, West Yarmouth, MA: The Standish Group, International, 2006.

Keen, P. G. W., "Information Systems and Organizational Change," Communications of the ACM, Vol. 24, No. 1, January 1981, pp. 24-33.

Lewin, K., "Group Decision and Social Change," in T. M. Newcomb and E. L. Hartley (Eds.), Readings in Social Psychology, New York: Reinhart & Winston, 1947.

Loofbourrow, T., "Expert Systems Are Still Alive," InformationWeek, Issue 536, July 17, 1995, p. 104.

Meador, C. L., M. J. Guyote, and P. G. W. Keen, "Setting Priorities for DSS Development," MIS Quarterly, Vol. 8, No. 2, June 1984, pp. 117-129.

Mysiak, J., C. Giupponi, and P. Rosato, "Towards the Development of a Decision Support System for Water Resource Management," Environmental Modelling and Software, Vol. 20, No. 2, February 2005, pp. 203-214.

Nelson, R. R., E. M. Whitener, and H. H. Philcox, "The Assessment of End-User Training Needs," Communications of the ACM, Vol. 38, No. 6, July 1995, pp. 27-39.

Norman, D. A., The Design of Everyday Things, New York: Basic Books, 2002.

Norman, D. A., Emotional Design, New York: Basic Books, 2005.

Peters, T., The Pursuit of WOW!, New York: Vintage Books, 1994.

Petkova, O., and D. Petkov, "A Holistic Approach Towards the Validation and Legitimisation of Information Systems," Kybernetes, Vol. 32, Nos. 5-6, 2003, pp. 703-714.

Preece, J., H. Rogers, and H. Sharp, Interaction Design: Beyond Human-Computer Interaction, New York: Wiley, 2002.

Port, O., "Computers That Think Are Almost Here," Business Week, July 17, 1995, pp. 68-73.

Raghunathan, B., and T. S. Raghunathan, "Information Systems Planning and Effectiveness: An Empirical Analysis," Omega: The International Journal of Management Science, Vol. 19, Nos. 2-3, 1991, pp. 125-135.

Robey, D., "User Attitudes and MIS Use," Academy of Management Journal, Vol. 22, No. 3, September 1979, pp. 527-538.

Sambamurthy, V., and L. J. Kirsch, "An Integrative Framework of the Information Systems Development Process," Decision Sciences, Vol. 31, No. 2, Spring 2000, pp. 391-411.

Sauter, V. L., "Information Technology Adoption by Groups Across Time," International Journal of e-Collaboration, Vol. 4, No. 3, July-September 2008, pp. 51-76.

Sauter, V. L., and D. Free, "Competitive Intelligence Systems: Qualitative DSS for Strategic Decision Making," The Database for Advances in Information Systems, Vol. 36, No. 2, Spring 2005, pp. 43-57.

Scott, J. E., "The Measurement of Information Systems Effectiveness: Evaluating a Measuring Instrument," Database: Advances in Information Systems, Vol. 26, No. 1, February 1995, pp. 43-59.

Snyder, C., Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces, San Francisco: Morgan Kaufmann, 2003.

Srinivasan, A., "Alternative Measures of System Effectiveness: Associations and Implications," MIS Quarterly, Vol. 9, No. 3, September 1985, pp. 243-253.

Swanson, E. B., Information System Implementation, Homewood, IL: Richard D. Irwin, 1988.

Wagner, I., "A Web of Fuzzy Problems: Confronting Ethical Issues," Communications of the ACM, Vol. 36, No. 4, June 1993, pp. 94-100.

Weinberg, G. M., Rethinking Systems Analysis and Design, Boston: Little, Brown and Company, 1982.

Yang, D. J., "The Pied Piper of Kids' Software," Business Week, August 7, 1995, pp. 70-71.

Zmud, R. W., and J. F. Cox, "The Implementation Process: A Change Approach," MIS Quarterly, Vol. 3, No. 2, June 1979, pp. 35-43.

QUESTIONS

1. The chapter identifies five principles to successful implementation. Discuss how inattention to each of them could discourage implementation efforts.

2. Compare and contrast the use of interviewing and prototyping during the design process in terms of the impact on the implementation process.

3. Why and how should users be involved in the design process?

4. How can we establish whether a given DSS is effective?

5. What incentives can one use to encourage users to try the technology?

6. Compare and contrast the use of measures of utilization with measures of user satisfaction in measuring DSS effectiveness.

7. Create an interview schedule for users of a hypothetical DSS design project.

8. What activities would a designer engage in to develop a satisfactory support base?

9. What role does senior management play in the design of a DSS?

10. Compare and contrast technical appropriateness and organizational appropriateness in the DSS evaluation process.

11. How would you evaluate a DSS to determine if it is effective? Discuss the procedures for testing and the mechanisms for evaluation.

12. The DSS of the future will continue to be deployed over intranets or the Internet. As such, their user interfaces will be evaluated both as decision tools and as Web pages. Discuss the guidelines you should follow to design the user interface of such a system.

13. What is technical appropriateness and how does it impact DSS design?

14. How does one evaluate the overall effectiveness of a DSS?

15. What is a Turing test and how might it impact DSS design?


ON THE WEB

On the Web for this chapter provides additional information about the implementation and evaluation processes associated with DSS design. Links can provide access to demonstration packages, general overview information, applications, software providers, tutorials, and more. Additional discussion questions and new applications will also be added as they become available.

• Links provide overview information. Some links provide access to general information about implementation and evaluation processes.

• Links provide access to successful implementation and evaluation efforts. Where available, links can also provide access to unsuccessful efforts that illustrate processes to avoid.

• Links provide interview and evaluation questionnaire hints. Information obtained from these links could be incorporated into other applications.

• Links provide access to prototyping tools. In addition to providing access to the tools, the Web provides product reviews and success stories about their use. The links also provide bibliographies and general information about prototyping as a DSS analysis and design tool.

You can access material for this chapter from the general Web page for the book or directly at http://www.umsl.edu/~sauterv/DSS4BI/impl.html.

IV

EXTENSIONS OF DECISION SUPPORT SYSTEMS


10

EXECUTIVE INFORMATION AND DASHBOARDS

In the early 1980s, executive information systems (EIS) were developed as specialized DSS intended to help executives analyze critical information and use appropriate tools to address the strategic decision making of an organization. In particular, EIS help executives develop a more accurate and current global view of the organization's operations and performance as well as that of competitors, suppliers, and customers. The goal of EIS was to provide an easy-to-use tool that would help improve the quality of top-level decision making, reduce the amount of time needed to identify problems and opportunities, provide mechanisms to improve organizational control, and provide better and faster access to data and models. The focus of these systems included events and trends that were both internal and external so as to prepare executives to make strategic changes to avail the organization of opportunities and eliminate problems. In the early 1990s, it was believed that EIS applications were rising at a rate of about 18% per year (Korzenlowski, 1994). At that time, some estimates were that an EIS had been installed on the desks of between 25 and 50% of senior executives of the largest companies. Others claimed EIS were in use in 60% of the Fortune 1000 companies.

This was a visionary goal for systems, especially in an era before data warehouses, balanced scorecards, and OLAP. Systems at that time were plagued with problems of collecting, correcting, storing, integrating, and accessing data in a meaningful way. As the technologies evolved, they provided support primarily to those individuals who were proficient with computers and analytical tools, not generally those people in the executive suites. So, it seemed as though the idea of EIS was ahead of its time.

Two events occurred around the turn of the twenty-first century that made the concept of EIS regain its importance. It is not clear which had more impact, but clearly the confluence



of the two, especially in light of improvements in technology (both hardware and software), caused managers to reconsider the importance of EIS.

One of the events was the Enron scandal of 2001. This scandal, revealed in October 2001, involved the energy company Enron and the accounting, auditing, and consultancy partnership of Arthur Andersen, and it ultimately led to the downfall of both companies. Enron's executives used accounting loopholes, special-purpose entities, and poor financial reporting to hide billions in debt from failed deals and projects. The company's nontransparent financial statements did not clearly detail its operations and finances for shareholders and analysts. In addition, its complex business model stretched the limits of accounting, requiring that the company use accounting limitations to manage earnings and modify the balance sheet to portray a favorable depiction of its performance. The chief financial officer and other executives misled Enron's board of directors and audit committee about high-risk accounting issues. In addition, these executives put pressure on Andersen to ignore the high-risk accounting issues. In the end, Enron declared bankruptcy, and Andersen was dissolved. As a result of this scandal, there was significant pressure to bring greater accountability to the upper executives of large corporations, and the result was the adoption of the Sarbanes-Oxley legislation that required executives (and in fact managers at all levels) to monitor their organizations closely and to be able to attest to the veracity of the reports provided about their companies. So, corporate executives became more interested in using the support systems available to them.

The second event was the introduction of key performance indicators (KPIs) and balanced scorecards into the new management practices. This practice helped executives identify measurable objectives that could be monitored directly to understand what their organization was doing.

KPIs and Balanced Scorecards

Key performance indicators are simply measures of performance that are of importance to the organization. Specifically, KPIs are measurable objectives, each including a direction of improvement, a benchmark or target, and a time frame, that can relate specific activities to long-term goals. For example, a university might look at attrition, transfer rates, graduation rates, and new student acquisition to reflect its long-term goal of serving the student base. Or, a production company might examine the breakdown and profitability of various demographic segments for its products. These KPIs vary depending on the organization because they define factors of importance to stakeholders that relate to corporate goals; they specify both the factors to be evaluated and the measures against which to evaluate them, to ensure the corporation is progressing in its mission.
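One simple way to make the definition concrete is to record each KPI with its direction of improvement, target, time frame, and current value. The sketch below is illustrative only; the class design and the university figures are invented for the example.

from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    direction: str      # "increase" or "decrease"
    target: float       # the benchmark to reach
    time_frame: str     # the period over which progress is judged
    current: float

    def on_track(self) -> bool:
        """A KPI is on track if the current value has reached the target
        in the stated direction of improvement."""
        if self.direction == "increase":
            return self.current >= self.target
        return self.current <= self.target

# Hypothetical university KPIs tied to the goal of serving the student base.
kpis = [
    KPI("six-year graduation rate", "increase", target=0.60, time_frame="this academic year", current=0.54),
    KPI("first-year attrition rate", "decrease", target=0.15, time_frame="this academic year", current=0.18),
]
for k in kpis:
    print(k.name, "on track" if k.on_track() else "needs attention")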

It is important to find the correct KPIs, especially for presenting to executives. Most organizations have entirely too many reports that are generated periodically because someone requested the information (perhaps years before), but no one knows how or why to use it. The factors must, instead, be accurate measures of the success in meeting the organization's mission. Hubbard (2007, p. 43) identified five questions that one should ask before adopting a KPI:

• What is the decision this is supposed to support?
• What really is the thing being measured?
• Why does this thing matter to the decision being asked?
• What do you know about it now?
• What is the value to measuring it further?


These measures should be:

• Practical indicators of company processes;
• Directional indicators that specify progress (or the absence thereof);
• Actionable indicators that direct management on what to change, if necessary;
• Targeted to what the business values the most;
• Cost-effective indicators relative to the value of knowing the information.

Returning to the example of the university, suppose the goal was "quality teaching." Many universities simply take course evaluations as their measure of that goal. If a university were to apply the KPI philosophy, it would instead examine what "quality teaching" really is, what could affect it, and what would be practical to measure. In addition, it would be important to reflect on what could be affected by it, what should be targeted to ensure that it happens, and what indicators would show that someone needed to take action to ensure that it happens. Following this process, administrators would focus on a limited number of questions on course evaluations and would supplement them with other measures of quality.

Some believe that all KPIs must be quantitative. While quantitative indicators make measurement and interpretation easier, they do not reflect all of the concerns of an upper level manager. For example, knowing one's top-10 customers or 10 most productive salespeople is useful for management even if the measure cannot be quantified. Similarly, there are times when knowing the tasks needing completion, the issues to be investigated, or the people to consult is important.

The point of the balanced scorecard is to monitor the KPIs to determine whether operations are aligned with strategic goals of an organization. The scorecard then brings together the most important KPIs to help executives maintain a comprehensive view of the business that looks beyond just the financial outcomes but also includes operational, marketing, and other aspects of the business.

To build a scorecard, one selects strategic objectives regarding the important parts of the organization. Executives (generally with the help of consultants) review and reflect upon annual reports, mission and vision statements, project plans, consultant reports, competitive data and analyses, stock market reports, trade journal stories, and other background information as a basis of determining these objectives. So, for example, there might be objectives regarding financial goals, objectives regarding customer goals, objectives regarding operations, and objectives regarding growth, as shown in Figure 10.1. Executives select the most important and strategic objectives within each category and link them to other objectives that define a cause-effect chain. For example, if more students are attracted to our university, and there are reductions in the number of transfer students, then the gross revenue for the university would be increased. A balanced scorecard of strategic performance measures is derived directly from the strategic objectives. Information about the KPIs and scorecards is represented in a DSS through dashboards.
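The cause-effect chain among objectives can be represented very simply, as in the sketch below; the objective names echo the university example, but the structure, names, and helper function are invented for illustration.

# Each strategic objective points to the objectives it is expected to drive.
cause_effect = {
    "attract more new students": ["increase gross revenue"],
    "reduce the number of transfer-outs": ["increase gross revenue"],
    "increase gross revenue": ["fund growth initiatives"],
}

def downstream(objective, chain):
    """Walk the chain to list every objective ultimately affected by this one."""
    reached, frontier = set(), list(chain.get(objective, []))
    while frontier:
        nxt = frontier.pop()
        if nxt not in reached:
            reached.add(nxt)
            frontier.extend(chain.get(nxt, []))
    return reached

print(downstream("attract more new students", cause_effect))
# {'increase gross revenue', 'fund growth initiatives'}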

Dashboards

Dashboards provide a mechanism to monitor whatever is important to a decision maker. They can represent KPIs and scorecards or any aspect of the operation of the organization or the environment. These systems provide a bird's-eye view of the factors that are important to the decision maker. Few (2006, p. 34) provides the most comprehensive definition of a dashboard as a "visual display of the most important information needed to achieve one or more objectives which fits entirely on a single computer screen so it can be monitored at a


Figure 10.1. Building a balanced scorecard. (Source: From R. S. Kaplan and D. P. Norton, The Balanced Scorecard: Translating Strategy into Action, Boston, MA: Harvard Business School Press, 1996.) Image is reprinted here with permission of the publisher.

glance." An example of the kind of dashboard Few describes is shown in Figure 10.2. This particular dashboard, called the IT Dashboard, provides the breakdown of IT expenditures in the federal government by agency. As you can see, the left side of the dashboard provides a chart of the IT expenditures by agency. Since the expenditures for IT are intended to facilitate the long-term missions of the individual agencies, they are reported by agency rather than the overall federal budget. Clicking on the histogram on the left brings up data about that agency on the right; in this figure, IT expenditures for Veterans Affairs appears on the right.

The goal of the dashboard is to present organized data to the decision maker in an easy-to-understand format. In addition to providing the data, the dashboard provides a measure that helps decision makers understand that factor, such as values at comparable times last year, standards, budgeted value, competitor's value, or any other metric to which a comparison is of value to the decision maker.

But, there is more to a dashboard than simple reporting provided in an EIS. The dashboard is also interactive, allowing decision makers to drill down for additional information. If you click on the graphic on the right, you see more information about the expenditures for the Department of Veterans Affairs, as shown in Figure 10.3. Using a standard red-yellow-green coloring for significant concerns, needs attention, and normal, respectively, this view of the dashboard gives information about projects by cost, schedule, or evaluation. You can see by looking at the pie chart on the left that 63% of the projects


Figure 10.2. The federal IT dashboard. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.

have significant concerns. More data about those concerns are shown in the bar chart on the right. If we look at the cost of the projects, only 7% of the projects have significant concerns because of budget, while 49% of the projects have concerns about the schedule, and 64% of the projects have concerns because of the evaluation by the agency CIO.

Figure 10.3. First drill-down. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.


Figure 10.4. Second drill-down. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.

To better understand the problems of these projects, managers can, in turn, click on the projects for which there are significant concerns. They can select all projects for which there is concern by clicking the largest part of the pie chart for "overall rating" or select only projects for which budget, schedule, or evaluation poses significant concerns by selecting that portion of the bar chart. The result is a list of projects in this category, as shown in Figure 10.4. Here you see a listing of projects and the expenditures for those projects in this fiscal year. Managers can then select one of the projects, such as "Blood Banks," by clicking the name, and they see more information about that specific project, as shown in Figure 10.5. Notice this dashboard gives you information about variance from the cost (which is small), the schedule (on average the projects are late by 120 days), and the evaluation by the agency CIO. Clicking on any of those measures provides more specific details, such as those shown in Figure 10.6. What we find is that the CIO has not yet rated the project, so the rating was set at "1" automatically. For this project, the CEO might then ask the appropriate CIO to rate the project so he or she can get more information. In this case, the project appears close to target, but only after the CIO has shared an overview and nonquantitative factors can the CEO be sure.
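The drill-down behavior described above can be thought of as successive selections into a nested structure, one level per click. The sketch below is only illustrative: the agency and project names echo the example, but the figures and the helper function are invented.

# A toy portfolio: agency -> project -> detail metrics (all figures invented).
portfolio = {
    "Veterans Affairs": {
        "Blood Banks": {"cost_variance_pct": 2.0, "schedule_slip_days": 120, "cio_rating": None},
        "Claims Processing": {"cost_variance_pct": 11.0, "schedule_slip_days": 30, "cio_rating": 4},
    }
}

def drill_down(data, *path):
    """Follow a path of selections (agency, project, metric) into the data,
    mirroring successive clicks on the dashboard."""
    node = data
    for key in path:
        node = node[key]
    return node

print(drill_down(portfolio, "Veterans Affairs"))                    # first click: the agency's projects
print(drill_down(portfolio, "Veterans Affairs", "Blood Banks"))     # second click: one project's detail
print(drill_down(portfolio, "Veterans Affairs", "Blood Banks", "schedule_slip_days"))  # third click: one measure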


Figure 10.5. Third drill-down. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.

Dashboards can lead to more analytical abilities too. By selecting the "analysis" tab, decision makers can peruse more about the expenditures. For example, Figure 10.7 shows a chart of the percent change in spending associated with the total spending. Many other combinations are available with the selections of different axes allowing deeper analyses. For example, Figure 10.8 shows how the agencies are splitting their spending between mission area projects and infrastructure. It highlights that the NSF is spending more of its budget on infrastructure than on mission-critical projects, whereas the Department of Education is spending proportionally more of its budget on mission-critical projects than on infrastructure.

The use of a dashboard allows a decision maker to get an overview of the entire state of affairs, to know where greater focus is necessary, and then to apply that focus. So while dashboards are visual displays that summarize data, they are also gateways to detailed information and even analyses that the decision maker might want. Dashboards are always one screen in size, but they are customized to the user's needs, decision-making sphere, and visual preferences. Following the pattern of the scorecards, there would be strategic dashboards for senior executives, operational dashboards for middle management, and tactical dashboards for front-line managers. These dashboards would all be tied together with the goals and objectives they represent.


Figure 10.6. Fourth drill-down. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.

Figure 10.7. Analysis in a dashboard. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.

Figure 10.8. Additional analysis. (Source: Your window into the Federal IT Portfolio, http://it.usaspending.gov/.) The screen is reprinted with permission.


Dashboards can provide data for any domain in the organization. In sales, for example, the dashboard might include number of orders, sales pipeline, and order amounts. In manufacturing, on the other hand, the dashboard might include production rates, defect rates, and absenteeism. A university dashboard might include number of students, first-year retention rate, student satisfaction, number of faculty, faculty to student ratios, and graduation rates. In other words, the dashboard contains information about whatever is important to that decision maker.

Dashboard as Driver to EIS

There are a number of basic requirements for an effective dashboard that behaves as an EIS. Tables 10.1-10.3 summarize some of the characteristics that have been illustrated earlier in this book. The most important characteristic of a successful dashboard is that it be simple and easy to use. Well-designed dashboards allow the executive to understand the corporate performance easily. In addition, the system anticipates some needs by automatically generating prespecified exception reports and trend analyses that help executives to identify both problems and opportunities. Dashboards must have user-friendly interfaces that encourage system use. Often this is achieved with the use of color screens and easy-to-understand graphics. In particular, the use of red-yellow-green to illustrate the interpretation of a value is common. Generally, however, the use of color is supplemented with a

Medsphere, a leading commercial provider of open source-based electronic health record systems and services for hospitals and clinics, has a strong focus on project management. Program managers are responsible for presenting the overall progress on the company's projects to their different stakeholders, including board members and customers. However, they did not have a succinct and comprehensive way of communicating the myriad of details to their various stakeholders until they adopted dashboards.

Using the full-dashboard technology of an EIS, they can now convey information across the organization regarding each customer's implementation. Two basic dashboards were implemented:

• The Project Performance Dashboard helps Medsphere managers, executives, and board members quickly obtain an understanding of the progress of customer implementations. For example, they can easily see budget and schedule performance that can be used to forecast future performance.

• The Project Status Dashboard helps Medsphere managers, executives, and board members quickly obtain information on project challenges, including burn-off and aging status.

Of course individual decision makers can customize their dashboard with a variety of measures, including scheduling, actual versus planned, issues, top-5 challenges, risk management, and earned-value management dashboards.

Managers at Medsphere note the dashboards have improved communications among employees because they can see and understand the relevant information at the same time. In addition, because they can drill down into the data to find the reason for a particular outcome, decision makers can understand the reason for a result and the items that might be adjusted to improve it.


Table 10.1. Information Needs to Be Met by a Dashboard

Timely: Information needs to be available as soon as possible; response time should be very short.

Sufficiency: Information needs to be complete; users need extensive external data; users need historical data as well as the most current data; users need access to global information about the organization and its competitors.

Aggregation level: Information should be provided in a hierarchical fashion; information needs to be provided at various levels of detail, with "drill-down" capability; users need "exception" reports or problem "flags."

Redundancy: Should be minimized.

Understandability: The system should save users time; problem indicators should be highlighted; written explanations should be available; the system should support open-ended problem explanation.

Freedom from bias: Information must be correct and complete; information must be validated.

Reliability: Access must be controlled but reliable for those approved to use the system.

Decision relevance: The system must meet the needs of executives.

Comparability: Users need trends, ratios, and deviations to interpret the data.

Appropriateness of format: Flexibility is crucial; the format should reflect user preferences; text and graphics should be integrated appropriately.

Table 10.2. Modeling Needs to Be Met by a Dashboard/EIS

Extensive use of click-through and drill-down abilities
Easy to use ad hoc analysis
Extensive use of exception reports and facility for tracing the reason for the exceptions
Models are provided appropriate to address critical success factors
Forecasting models are integrated into all components
User has easy access to filters for data analysis
Extensive use of "what-if" facility
Extensive use of planning models

Table 10.3. User Interface Needs to Be Met by a Dashboard

Interface must be user friendly
Interface must incorporate sophisticated use of a graphical user interface
Interface should incorporate alternative input/output devices such as mouse, touch pads, touch screens, etc.
System must be accessible from a variety of machines in a variety of locations
Interface should be intuitive
Interface should be tailored to the management style of individual executives
Interface should contain help menus for functions of the system
Interface should contain context-sensitive help menus


shape for accessibility purposes. For example, it is common to use not only the red-yellow-green metaphor of a traffic signal, but also to place that color in its standard location on a signal so that even if a user cannot distinguish the colors of red and green, he or she can still understand the message by whether it is located at the top or bottom of the signal.
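The traffic-signal convention, color plus position, can be reduced to a small rule. The sketch below is illustrative; the thresholds and the example values are invented, not taken from the IT Dashboard.

def status(value, warn, alarm, higher_is_worse=True):
    """Map a metric against illustrative thresholds to a traffic-signal color
    and its standard position, so a user who cannot distinguish red from green
    can still read the status from where the indicator sits."""
    if higher_is_worse:
        level = "red" if value >= alarm else "yellow" if value >= warn else "green"
    else:
        level = "red" if value <= alarm else "yellow" if value <= warn else "green"
    position = {"red": "top", "yellow": "middle", "green": "bottom"}[level]
    return level, position

# Example: percent of projects with schedule concerns (thresholds invented).
print(status(49, warn=25, alarm=40))   # ('red', 'top')
print(status(7, warn=25, alarm=40))    # ('green', 'bottom')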

Further, the data must be presented in an easy-to-understand format with tools that allow executives to change the format of presentation if necessary. Hence, a related concern is that the dashboard must be flexible in presentation and graphics capabilities. Dashboards must allow, and indeed facilitate, executives following the paths they think are appropriate with a minimum amount of effort. This includes flexible data browsing, data manipulation, and presentation modes that facilitate executives gaining insights into competitive trends, business opportunities, and emerging problems. Consider, for example, a dashboard with which the user can investigate the reasons for and patterns in sales by considering only certain regions or certain states. Executives should be able to ask questions relating to forecast projections, inventory status, or budget planning as they feel appropriate.

Third, dashboards must provide the broadest possible base of information. Executives need both qualitative and quantitative information and information from within the firm and without. The internal data must represent corporatewide performance and operations. It must include both current and historical data that support long-term trend analyses. The external data must facilitate the evaluation of external forces affecting the corporation. Dashboards have a well-organized presentation of data that allows the executive to navigate the system quickly. Often, dashboards offer a "snapshot" of the present (or the past) in an easy-to-understand format. In addition, the systems have "drill-down" capabilities that enable the executive to investigate analyses underlying the summary information that might better identify problems and opportunities; an example of such drill-down screens was discussed earlier. These prepared drill-down screens are supplemented by an ad hoc query capability through which executives can investigate unanticipated questions or concerns.

Fourth, dashboards must respond quickly. This includes, of course, the time the system takes to respond to a particular request. Executives are busy and are accustomed to fast response from their employees; they expect nothing less from their computer systems. In addition, dashboards must facilitate fast reaction to ideas generated from the system. Dashboards need to provide easy and quick communication and report-generating capabilities to allow executives to react to the information provided.

Design Requirements for Dashboard

The dashboard is, of course, a graphical user interface. As such, its goal is to provide images that engage the human visual system so that patterns can be discerned quickly and

One CEO removed the EIS dashboard, even though it included the right physical interfaces and was implemented on the basis of critical success factors. In her mind, the dashboard was more of a toy than a tool because the CEO lacked any mechanism to share insights from it with others in the company. Since the dashboard lacked any way to be integrated into an email or other communication tool, she had to print the result, comment on it, and send it through company mail. In other words, the CEO had no good way to communicate the points while the feeling was "hot." Integration of the tool into regular work processes is critical.


accurately. In order to take advantage of the power of the visual system, designers need to understand the principles of the system. Short-term visual memory is limited. Humans focus only on a fraction of what the eyes sense and only a fraction of that actually becomes a focus of attention. In fact, humans store only three to nine "chunks" of information at a time and they are replaced when new chunks of information arrive. In other words, to get the greatest possible message in a dashboard, items that belong together (in the decision maker's mind) must be placed together on the dashboard, and things that are different need to be clearly demarcated.

The dashboard must be encoded for rapid perception. Consistency in how data are represented and how decision makers navigate is crucial. Most experts will suggest designers avoid ugly interfaces. But Norman (2005) emphasizes that aesthetically designed things are more enjoyable, which causes users to consider the data more intensively and prepares the viewer for greater insight and a more creative response. Said differently, if the dashboard is designed aesthetically, it will allow executives to make better choices.

The items that can be adjusted to affect this encoding include the color, the form, and the position of the information on the dashboard. Color must be used in a pleasing, yet useful manner. The hue and the intensity must be used to bring the decision maker's attention to important facts and to highlight differences. Bright, fully saturated colors tend to grab the user's attention and so should be reserved for the most important or the most critical information on the dashboard. Too much bright color makes the dashboard difficult to view for an extended period and may reduce concentration on the data. Generally, dashboards are designed using soft colors to reduce the stress on the user and to emphasize the selectively used bright colors of important data. Colors must also contrast well so that users can see the visual differences in the dashboard. For example, black fonts tend to be easy to read except when they do not contrast well with the background, as with navy blue. Similarly, too little contrast between the colors used for different categories will make it difficult for the user to demarcate the differences. A rule of thumb is to limit color variation to five shades.

Form attributes of course include the length, width, size, shape, and position of the objects in the dashboard. Generally, dashboards are more effective if the magnitude of quantitative information is conveyed in terms of position and line length rather than line width and shape, because the former are easier for the human eye to discern. While dials are appropriate to show continuous functions, bar charts and line diagrams are preferred for all other representations. Bar charts are appropriate for nominal and ordinal scales because they allow easy comparison of adjacent values. Sometimes dashboards use stacked bar charts to show related issues, such as year, salesperson, or channel. Line graphs, on the other hand, emphasize the pattern in the data, especially when multiple phenomena appear on the same graph. Shapes should be reserved to indicate something about the data, such as arrows to show increases or decreases or stars to show important projects. If an icon is being used to indicate an alert, it should be simple yet noticeable to the user. If there need to be multiple alerts, then they should all share some similarity, such as shape, to designate that they are alerts. Most interface designers discourage the use of icons for representation, or even pie charts, because they can lead to misperceptions. Consider the two representations of product contribution to profit shown in Figure 10.9. Notice how much easier the bar chart at the bottom is to read and interpret than is the pie chart, even without the data being ordered. Similarly, consider Figure 10.10, which compares the market capitalization (what it would cost to buy all of a company's stock at the current price) of 15 major banks as of January 20, 2009, to their market capitalization in the second quarter of 2007, before the


Figure 10.9. Comparison of pie chart and bar chart.

banking crisis hit. Notice how difficult it is to make those comparisons using the bubbles in the top graph compared to the bar chart at the bottom.
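To make the contrast concrete, the following sketch draws a small, invented set of product-contribution figures first as a pie chart and then as a bar chart, in the spirit of Figure 10.9; the product names and percentages are hypothetical, and matplotlib is assumed only as a convenient plotting library.

```python
# A minimal sketch (hypothetical data) contrasting a pie chart with a bar
# chart for "contribution to profit," in the spirit of Figure 10.9.
import matplotlib.pyplot as plt

products = ["Product A", "Product B", "Product C", "Product D", "Product E"]
share = [31, 24, 19, 16, 10]   # percent contribution to profit (invented)

fig, (top, bottom) = plt.subplots(2, 1, figsize=(6, 8))

# Pie chart: readers must compare angles and areas, which the eye judges poorly.
top.pie(share, labels=products)
top.set_title("Contribution to profit (pie)")

# Bar chart: readers compare aligned lengths, which the eye judges well.
bottom.barh(products, share, color="steelblue")
bottom.set_xlabel("Percent of profit")
bottom.set_title("Contribution to profit (bar)")

fig.tight_layout()
plt.show()
```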

It should be noted that similarity of size, shape, color, and orientation of items, even when they are separated on the dashboard, suggests a visual pattern. If that similarity is intended, either because of the meaning of the cue (such as colors of red, yellow, and green indicating the status of a project) or the kind of item being represented (such as similar aspects of different projects), then that helps to reduce the amount of effort the decision maker needs to expend to understand the data. If there is not a parallel in how the decision makers should interpret the information, the similarity will serve to confuse decision makers.

The position on the dashboard is important to help the user interpret the data. The proximity of items on a dashboard suggests they belong to the same group, such as the absenteeism at each plant. Enclosure with a line or color will bring things together. Where


Figure 10.10. Comparison of bubble and bar charts. (Adapted from graphic design examples by Stephen Few of Perceptual Edge, available: http://www.perceptualedge.com/example18.php.) Graphic is reprinted courtesy of Stephen Few.


there are differences in location, continuity also suggests similarity: all items on a continuous color or within an enclosure should be interpreted together. Data should be organized (colocated) according to business functions, products, divisions, or other meaningful units. The delineation between and among those groups should be subtle, such as a background color or a thin line, so the emphasis continues to be on the data themselves. Data should be arranged on the dashboard to facilitate analysis, support meaningful comparisons, and discourage meaningless ones. If items are colocated, they are more likely to be compared; the greater the distance between items, the less likely they are to be compared. So, for example, the productivity of different plants should be colocated to encourage comparisons. On the other hand, plant productivity statistics should not be colocated with sales statistics because they should not be compared. Similarly, items that should be compared should be combined in a single table or graph to encourage that comparison. If that is not appropriate for some reason, then the items should be coded with a common color or hatch pattern; the similarity of colors or patterns will encourage decision makers to see them together.

So, what makes for a good dashboard? Four specific rules apply. The first is to simplify: users can perceive only so many images at once. Associated with simplicity is the need for the dashboard to be well organized and condensed; users can drill down if they need additional information. The dashboard should be specific to and customized for the audience's objectives and, of course, should always use the client's vocabulary. Finally, colors should be chosen carefully, and designers should avoid "cute" displays.

But, of course, dashboards should always be evaluated first in terms of their ability to meet the needs of decision makers. Consider the dashboards in Figures 10.11 and 10.12. Figure 10.11, which is an example of creative dashboard design from the blog,

Figure 10.11. An overpopulated dashboard. (Source: myxcelsius.com, a blog dedicated to Xcelsius dashboards.) Graphic reprinted with permission.


Figure 10.12. A good dashboard. (Adapted from S. Few, Information Dashboard Design: The Effective Visual Communication of Data, Sebastopol, CA: O'Reilly, 2006, p. 177.) The dashboard is reprinted courtesy of O'Reilly Publishers and Stephen Few.

myxcelsius.com, provides easy access to information about a variety of cost issues and allows the decision maker to change values to perform sensitivity analyses. It would not, however, be a good dashboard for monitoring an organization; there is so much information that the user could not identify important factors quickly and might be distracted from the issues of importance. In contrast, the dashboard shown in Figure 10.12 is much cleaner and allows the decision maker to focus on the important characteristics, but it does not facilitate sensitivity analyses.

Few (2006) lists 13 mistakes for designing dashboards, as shown in Table 10.4. The complements of the mistakes provide good guidelines for making a dashboard not only more usable but also more likely to have an impact upon a decision. Paramount in Few's list is the need to keep the dashboard to a single screen. Decision makers frequently do not


Table 10.4. Design Mistakes

1. Exceeding the boundaries of a single screen
2. Supplying inadequate context for the data
3. Displaying excessive detail or precision
4. Choosing a deficient measure
5. Choosing inappropriate display media
6. Introducing meaningless variety
7. Using poorly designed media
8. Encoding quantitative data inaccurately
9. Arranging the data poorly
10. Highlighting important data ineffectively or not at all
11. Cluttering the display with useless decoration
12. Misusing or overusing color
13. Designing an unattractive visual display

Source: Adapted from S. Few, Information Dashboard Design: The Effective Visual Communication of Data, Sebastopol, CA: O'Reilly, 2006, p. 49. The table is reprinted courtesy of O'Reilly Publishers and Stephen Few.

scroll because they believe that whatever lies below what they can see is less important. Further, scrolling does not allow decision makers to see the big picture or to make appropriate comparisons. Second, a dashboard must not only present data but also help in the interpretation of the data. A number alone is not useful, but one in context can be very useful. Is the reading good or bad? Is it on track? Notice how this is done in Figure 10.12. The key metrics graph has not only the actual values for each metric but also a shadow graph that defines the good and bad regions. The revenue graph not only shows the relative performance of the various units but also has a bar indicating the goal and highlights the unit that missed the goal. These subtle context indicators make the solution more elegant and do not unduly burden the decision maker. The goal of the dashboard is a big-picture view. As such, it is important not to provide excessive detail or precision. Every unnecessary piece of information slows down evaluation. So, unnecessary precision in the data, too much detail in measures, and other unnecessary particulars should be avoided.
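As a rough illustration of putting a single reading in context, the sketch below (all numbers invented) draws one metric with a shaded acceptable band and a goal line rather than presenting the value alone.

```python
# Hedged sketch (invented numbers): one KPI drawn with its context --
# a shaded "acceptable" band and a goal line -- rather than as a bare number.
import matplotlib.pyplot as plt

actual, goal = 82, 90           # e.g., on-time delivery (%), values invented
ok_low, ok_high = 75, 100       # assumed acceptable region

fig, ax = plt.subplots(figsize=(6, 1.8))
ax.axvspan(ok_low, ok_high, color="0.9")        # light band = acceptable range
ax.barh(["On-time delivery"], [actual], height=0.4, color="steelblue")
ax.axvline(goal, color="black", linewidth=2)    # goal marker
ax.set_xlim(0, 100)
ax.set_xlabel("Percent")
fig.tight_layout()
plt.show()
```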

The fourth point, choosing deficient measures, does not refer to the content of the measure that appears on the dashboard. Clearly, the appropriate KPI and the appropriate data to support that indicator must be selected. In addition, the information in that measure needs to be represented so the decision maker sees the important issues most easily. For example, plotting two series on the same axes allows the decision maker to see patterns in the trends. However, if the goal is to focus on how different the series are at different points in time, such graphs do not convey that well. Rather, it is appropriate to provide a single metric of the amount of deviation or the percent of deviation; that will help the decision maker focus on the salient details. Similarly, appropriateness of the medium refers to the type of visual that is displayed. Designers must ask themselves whether a given visual displays the information the user needs and whether it provides that information with the least amount of work for the decision maker. The most commonly noted example of this is the pie chart discussed earlier in this chapter. Further, since decision makers need to focus on the data, not on the delivery of the data, designers should provide consistent kinds of visuals for consistent messages. While it may appear boring to provide all bar charts, it does help the decision maker focus on the data.
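A minimal sketch of this idea, using hypothetical planned and actual figures: rather than asking the decision maker to compare two plotted series, the deviation and percent deviation are computed and reported directly.

```python
# Minimal sketch (hypothetical figures): collapse two series into a single
# deviation metric per period so the gap itself is what gets displayed.
planned = [120, 135, 150, 160]   # planned units by quarter (invented)
actual  = [112, 140, 139, 171]   # actual units by quarter (invented)

for quarter, (p, a) in enumerate(zip(planned, actual), start=1):
    deviation = a - p
    pct = 100.0 * deviation / p
    print(f"Q{quarter}: deviation {deviation:+d} units ({pct:+.1f}%)")
```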


Once the appropriate media have been selected, they need to be designed well. Data are best interpreted when they are ordered. Such an order might be by size of the metric, by plant, or by some other meaningful criterion. Data should also be labeled; it is much easier to interpret a dashboard when the values are right there rather than having to look all over the screen for them. If you want users to be able to distinguish values, do not use too many colors, but do not use colors that are too close together if they are varied. Make items easy to read by using an appropriate font; most experts believe a sans serif font, such as Arial or Helvetica, is easier to read on a screen.
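The sketch below illustrates these points with invented plant data: the bars are ordered by value, each value is labeled directly on the chart, and a single bright color is reserved for the one item above an assumed threshold.

```python
# Hedged sketch (invented data): order the bars, label the values directly,
# and use one bright color only for the item that needs attention.
import matplotlib.pyplot as plt

plants = {"Plant A": 4.1, "Plant B": 6.8, "Plant C": 3.2, "Plant D": 5.0}
threshold = 6.0   # absenteeism (%) above this is flagged (assumption)

# Order by value so the eye can scan the ranking directly.
ordered = sorted(plants.items(), key=lambda kv: kv[1])
names = [k for k, _ in ordered]
values = [v for _, v in ordered]
colors = ["crimson" if v > threshold else "lightsteelblue" for v in values]

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.barh(names, values, color=colors)
ax.bar_label(bars, fmt="%.1f")     # label values right on the chart
ax.set_xlabel("Absenteeism (%)")
fig.tight_layout()
plt.show()
```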

When we discussed user interfaces, we examined problems of inaccurately representing quantitative data. The same concerns apply to dashboards: attend to scaling properly, draw graphs and charts correctly, and do not use graphs that distort the relationship of interest.

The last five factors in the table relate to the "big picture" that is presented. Arrange data in the order you want the decision maker to consider them; the top-left portion of the screen is considered the prime spot, so the summary or the most important information should be located there. Use the space well to draw out similarities and dissimilarities among visuals, and make the dashboard comfortable to examine. In addition, use color to highlight factors appropriately. Color should be used sparingly, since too much can cause confusion, but it should be used to highlight important metrics or metrics that are out of range so that the eyes are drawn there first. If there are no differences, or the differences are not important, do not vary the color just because it is possible.

Dashboards should be designed to make the data prominent and to encourage the user to focus on the data. As such, designers should avoid cluttering the display with decoration; users tire of decoration quickly, and it does not contribute to their understanding of the meaning of the data. Similarly, designers should be conservative when using color. Too much is also clutter; too little can be downright boring. Be consistent in the use of color and reserve variations, especially hot colors, for something demanding attention. Be aware, however, that some individuals cannot distinguish between specific colors and may not be able to discern differences in the data. Finally, remember aesthetics: people do not want to look at something ugly, and they are unlikely to focus on a dashboard if it is ugly.

Dashboard Appliances

Most application packages that include any kind of data analysis features have some form of dashboard facility. Large-scale packages, such as those provided by IBM and SAP, and data warehouses have built-in dashboard functionality. In addition, there are proprietary and open-source stand-alone dashboard tools that can be configured to work with an organization's data. Even Microsoft Excel can be used to build a dashboard.

The key in building a dashboard for EIS support is not which product is selected to support the system but rather which indicators will be represented. As with all DSS technology, the tool will only support decision makers if the factors that they need to see are represented. So, it is important to take the time to determine the KPIs that are most reflective of the health of the organization. Once the indicators have been agreed upon, the next critical step is to integrate the dashboard with the systems that produce the data. Dashboards that draw their data from normal production systems on standard reporting cycles work best, ensuring that the flow of data is not interrupted. As stated earlier, dashboards should be simple, with no more "bells and whistles" or data than are necessary to convey the key aspects of the organization. Finally, the dashboard should not be seen as a stand-alone


object. Providing decision makers with the drill-down capability to determine the "why" behind a reading is as important as providing the reading.
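A minimal sketch of this integration point, under the assumption of a relational production database with a hypothetical kpi_snapshot table: the dashboard's figures are refreshed on a standard cycle rather than keyed in by hand.

```python
# Hedged sketch: periodically pull KPI values from a production database so
# the dashboard always reflects the standard reporting period. The database
# file, table, and column names are hypothetical.
import sqlite3
import time

REFRESH_SECONDS = 15 * 60   # assumed refresh interval

def fetch_kpis(path="production.db"):
    """Return the latest value for each KPI from a (hypothetical) KPI table."""
    with sqlite3.connect(path) as conn:
        rows = conn.execute(
            "SELECT kpi_name, value FROM kpi_snapshot "
            "WHERE period = (SELECT MAX(period) FROM kpi_snapshot)"
        ).fetchall()
    return dict(rows)

if __name__ == "__main__":
    while True:
        print(fetch_kpis())        # in practice, push these to the dashboard
        time.sleep(REFRESH_SECONDS)
```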

Value of Dashboard and EIS

A dashboard (with its associated EIS) can help executives use their time more effectively. It can reduce the time spent searching for information and allow exceptions to be identified and addressed as soon as they are recorded. Furthermore, the dashboard provides information that is more timely, accurate, and relevant. Decision makers can identify and resolve problems more quickly and easily, and so make better decisions. In this way, the corporation can treat information as a "strategic resource" and free MIS personnel and other assistants to work on longer term projects.

The dashboard can function only in an environment that is ready for it. Several issues need to be addressed to determine readiness. First, prior to implementation, there must be an information delivery problem. In particular, there must be critical information that is not available in a timely fashion, prohibiting executives from making high-quality decisions. Alternatively, there may be a real business problem that cannot be addressed because of information delivery problems. Without a prior problem, the value of the dashboard is not apparent to decision makers, and hence they are unlikely to take the time to learn how to use the system.

Second, prior to implementation, there must be some level of technological maturity of either the executives or the organization. This means that the organization (or the executives themselves) must have experience with the technology or must be willing to change technology. Clearly, some organizations are more resistant to technological change than others or require a more planned approach to evolve to greater use of technology.

The process of moving to dashboards also needs to be managed. Many executives have a staff that addresses analytical problems for them, monitoring important indicators and bringing them to the attention of the executive when necessary. In addition, these staffs provide analysis when requested. Sometimes the move from this situation to a dashboard/EIS is too big for executives to make in one step; asking them simultaneously to integrate and focus on information themselves and to learn a computer system may not succeed. In these cases, designers get better results if they decompose the change into two components: learning to use the computer and learning to focus on their own information analyses. For example, some designers move executives to a "query" stage by getting them used to online capabilities first. Others move the executive first to just the dashboard, where questions are asked and reports are generated at the executive's request using "executive briefing books." Once executives feel comfortable with that half of the change, moving to the full system is easier.

Of course, not all predesign concerns involve the executive. Prior to implementation, designers need to understand the management process. Since the dashboard and associated EIS functions address upper level management and strategic choices, the system needs to be molded more closely to management processes than a general DSS is. In addition, designers need to be creative in developing incentives that encourage senior management to use the system.

The design of the EIS must be managed more carefully than other DSS design because of the kind of decision and the kind of user. Several factors need to be considered when implementing an EIS. For example, Volonino and Robinson (1991) offer guidelines for development.


• A prototype of the dashboard should be built quickly after a decision is made to implement it. In this way, executives have "hands-on" experience with a system early, thereby keeping the enthusiasm and momentum at a high level. In addition, the prototype allows the designer to understand upper management's needs better.

• Customization of the dashboard and the information it provides must be an ongoing process. Clearly the focus of upper level managers changes over time. If the system is to be effective, it needs to adapt to these changes and their associated information requirements.

• Designers must have an executive sponsor to help guide the project in the organization. The person should be a strong advocate placed as highly as possible in the organization (preferably among the top three people in the organization). Without this kind of support, even the best EIS is likely to fail.

• Avoid assumptions about design needs. Too often designers think they understand the needs or do not want to bother high-level executives with their questions. It is crucial that the dashboard reflect real information needs, and these needs are most likely to be reflected if the designer and decision makers communicate well from the beginning of the process.

• The dashboard and its interactive components must be easy to use. Watson and Satzinger (1994, p. 46) state that "[b]ecause of the nature of the executive user, the system has to go beyond user friendly and be 'user intuitive' or even 'user seductive.'" Designers should standardize screens and provide a menu as a gateway to any access to the system. Further, they should use standard definitions for terms so that users do not need to guess what is meant.

• The EIS must contain current information from both within and without the organization.

• The system should have fast response time. In fact, some designers suggest that the response time needs to be less than 5 seconds. Whatever standard is chosen, it is clear that faster is better because high-level executives are intolerant of waiting for a response. More important, the system must be designed to anticipate increased usage without degradation of response time. System usage is likely to grow over time, sometimes exponentially, and the system needs to be designed to provide similarly fast response times under the greater load (a caching sketch illustrating one way to do this follows this list of guidelines). Watson (1995) cites an unnamed developer as defining the maximum acceptable time to move from screen to screen as "the time it takes the executive to turn a page of The Wall Street Journal." However, he noted that executives are more tolerant of response times for ad hoc queries than for simple scanning of prefabricated, standard analyses.

Although fast response time is important to the executive, designers need to be aware that a sudden move to fast information upon which executives can act can lead to instabilities in the organization. Consider, for example, the experience with database technology, as summarized by Chapnic (1989, p. 7):

Information feedback that is too rapid and not controlled properly is very destabilizing for a system, causing its behavior to oscillate wildly … we may inadvertently destabilize large organizations by forcing them to react too quickly to changes.


• The EIS must provide information through a variety of media that are easy to use and deliver content quickly. Graphical displays are important for presenting information quickly. In addition, hypertext and hypermedia allow executives to move through text more quickly. However, even if an EIS has the most up-to-date capabilities, it will be wasted if the executive quits using the system because it is too slow to respond.

• Designers must not only provide the technical ability to eliminate paper from the decision process but also address the political, legal, and organizational implica-tions of doing so. Such an analysis must provide alternatives for addressing those problems.

• Screens need to be designed carefully. They must carry useful messages and only useful messages. Furthermore, they must be easy to follow and should minimize the designer's influence and bias associated with their design.

• The system must be cost effective. Unfortunately, we cannot justify an EIS using the same terms we would for a transaction processing system because the benefits rarely can be traced directly to a dollar savings for the enterprise. Rather, the key benefit is in providing relevant information quickly and reliably.
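One common way to honor the response-time guideline above as usage grows is to precompute the standard analyses and serve them from a cache so that screen-to-screen navigation does not wait on fresh queries. The sketch below is only illustrative; the analysis name, timings, and cache policy are assumptions rather than a prescription from the text.

```python
# Hedged sketch: precompute the standard dashboard analyses and serve them
# from an in-memory cache so screen-to-screen navigation does not wait on
# the underlying queries. Analysis names and timings are invented.
import time

_cache = {}                      # analysis name -> (timestamp, result)
MAX_AGE_SECONDS = 10 * 60        # assumed freshness requirement

def slow_analysis(name):
    """Stand-in for an expensive query or model run."""
    time.sleep(2)                # pretend this takes a while
    return f"result of {name} at {time.strftime('%H:%M:%S')}"

def get_analysis(name):
    """Return a cached result if it is fresh enough; otherwise recompute."""
    hit = _cache.get(name)
    if hit and time.time() - hit[0] < MAX_AGE_SECONDS:
        return hit[1]            # fast path: no waiting for the executive
    result = slow_analysis(name)
    _cache[name] = (time.time(), result)
    return result

print(get_analysis("revenue by region"))   # slow the first time
print(get_analysis("revenue by region"))   # instant on repeat viewing
```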

Several methodologies have been put forward for designing an EIS. Most fall into the class of traditional systems development life cycle methodologies. Rockart (1979) developed the critical success factors methodology, which allows users to define their own key indicators of performance. These indicators track the most important pieces of company and market information for the executive. Further, the method keeps executives involved with the evolution of their system by periodically requiring them to review and modify their indicators as their needs change.

Another methodology, developed by Volonino and Robinson (1991), is the strategic business objectives (SBO) methodology. The SBO methodology focuses on company goals rather than the executive's views of performance. It requires users to identify and prioritize critical business objectives. These priorities then specify the information identified and captured in the EIS.

The one critical aspect in each methodology is the successful identification, capture, and inclusion of information to meet the requirements of strategic planning. Watson and Frolick (1993) conducted studies to examine the manner in which dashboards/EISs are developed. Too often, they found, executives were consulted only in the initial design phase or after implementation, when modifications were considered. However, they found that greater discussion with executives during planning meetings and throughout the project led to better outcomes. Some of the criteria used to evaluate products, drawn from another project by Watson and his colleagues (1992), are shown in Table 10.5.

Once the framework for implementing a dashboard is in place, the next major area of consideration is the hardware. A number of factors affect the appropriateness of the hardware. First, the hardware should be capable of supporting management functions critical to executive tasks, such as deductive reporting, trend analysis, and exception reporting. Second, the hardware must have high-resolution, bit-mapped display screens to provide output superior to paper-based methods; too small a screen or unclear output will be distracting and unusable for managers. Third, the processor speed must be sufficient to ensure a timely response to a request; the processor must meet not only current demand but also future increases in demand. Fourth, the computer hardware must allow input and output by mechanisms other than the traditional keyboard. Executives respond better


Table 10.5. Sample EIS Adoption Criteria

1.0 Ease of use

Development
• Applications to be easy and quick to develop
• New users to be easy and quick to add to the system
• Suitability for quick prototyping
• Display alternative output formats quickly

Learning
• Learning time for developers
• Learning time for users
• Availability of appropriate documentation and tutorials

End user
• Menu system
• Customized menus for each user
• Ability to bypass menus not required
• Various modes of use (mouse, touchscreen)
• Minimal number of keystrokes
• Consistent use of functions

Maintenance
• Easy to add and modify data
• Ability to maintain integrity and timeliness of data (handling of frequent updates)
• Easy to add and modify screens, reports, and graphs
• Availability of standard templates
• Ability to copy existing screens, graphs, and so on
• Ability to monitor system usage
• Easy to add additional users
• Ability to incorporate changes to corporate structure

2.0 Reporting capability
• Reports to be presented as both graphs and tables
• Ability to display graphs, tables, and text on a single screen
• Ability to switch between tabular and graphic output
• Ability to color code exceptions on the current screen
• Ability to present a summary screen listing all exceptions throughout the system
• Support analysis of budgeted, actual, and forecast figures
• Effective presentation of time series data
• Ability to highlight variations
• Support interactive user-defined variance criteria
• Retrieval of historical data as required
• Maintain historical data and discard after a user-defined period
• Analysis of historical data and identification of trends
• Built-in restrictions to protect historical data
• Facility for personalized queries (i.e., ability for users to scan the database according to interactively defined criteria)
• Explanatory notes to be attached to reports

3.0 Graphic presentation
• Quality of graphics
• Speed of presentation
• Effective use of default color coding
• Ability to highlight areas of concern
• Availability of individual color schemes
• Ability to include explanatory notes for each graph
• Ability to produce a variety of graphs (pie, bar, 3D bar, line)
• Automatic generation of simple, default formats which can be customized
• Easy to produce executive-defined graphs
• Automatic scaling
• Graph limitations
• Automatic legends

4.0 General functionality
• Drill-down capability
• Built-in statistical capabilities
• Lookaside capability for interrupting a process to use another facility
• Screen scrolling (horizontal and vertical)
• Multiple tasks to be operating and displayed concurrently (e.g., windows, split screens)
• Access to notepad facility
• Integration with DSS
• Import data from spreadsheets/word processing
• Minimal screen repainting
• Ability to display other languages

5.0 Data handling
• Version checking to ensure all users are accessing the same version of software, applications, and data
• Interfaces with external databases and internal WMC systems
• Efficient storage of time series data
• Stored aggregates for rapid access
• Built-in periodicity conversions
• Efficient indexing and retrieval mechanism
• Instantaneous distribution of new data among users
• Ability to consolidate various sources and formats of data into an EIS database via manual input or electronic data transfer from other systems
• Ability to sort screen data according to user-defined criteria

6.0 Output options
• Laser printer, plotter, color printer, transparencies
• Large-screen presentations for meetings

7.0 Performance
• Response times
• PC-mainframe communications uploading and downloading data
• Efficient resource usage
• Capacity issues (i.e., number of users, volume of data)
• Reliability of software
• Recovery facility

8.0 Electronic mail
• Ability to run corporate mail
• Ability to incorporate EIS reports and graphs into mail facility

9.0 Security
• Restricted system access
• Restricted function access
• Add/edit/delete restrictions for applications and data

10.0 Environments and hardware
• Local access
• Across networks
• Multiuser access to the same data (only 3 users tested)
• Portability
• PC-mainframe links

11.0 Documentation
• Reference manual, introductory guide, tutorials
• Overall style of documentation
• Online, context-sensitive help screens
• Meaningful error messages
• Appropriate cross-referencing and indexing
• Stand-alone chapters

12.0 Vendor support
• Training courses for developers
• Technical support
• Local support
• Timeliness and smoothness of initial installation
• Availability of off-the-shelf applications
• Availability of source code
• Hot-line support

Source: Adapted from H. J. Watson, B. A. Hesse, C. Copperwaite, and V. Devos, "EIS Software: A Selection Process and the Western Mining Experience," Journal of Information Technology Management, Vol. 3, No. 1, 1992, pp. 19-28. The table is reprinted courtesy of the editor.

to media such as voice-activated systems and touch screens. Fifth, the computer hardware should enable executives without computer skills to enhance their daily work experience. Sixth, the computer must be networked: the executive must be linked to departmental, corporate, and external management information as well as electronically linked to managers who might provide insights into the problems under consideration. Finally, the hardware must be integrated with other technological equipment of importance to the decision maker, such as electronic mail systems, instant messaging, voice mail, and videoconferencing systems.


DISCUSSION

Dashboards, when used as EIS, provide decision support technology to the highest level of managers. In many ways, they resemble the DSS we have addressed elsewhere in the book. Among the most significant differences, however, is that the dashboard provides prefabricated analyses, and its drill-down leads decision makers primarily to standard analyses selected particularly for that decision maker. In addition, since these systems are designed to support high-level managers, their needs for implementation and monitoring are different. Finally, since they tend to support strategic decisions, the kinds of analyses provided must be different.

SUGGESTED READINGS

Bergerson, F., et al., "Top Managers Evaluate the Attributes of EIS," in DSS '91 Transactions, Manhattan Beach, CA: College on Information Systems of The Institute of Management Sciences, 1991.

Burkan, W. C., "Making EIS Work," in DSS '88 Transactions, Manhattan Beach, CA: College on Information Systems of The Institute of Management Sciences, 1988.

Burkan, W. C., "Making EIS Work," in P. Gray (Ed.), Decision Support and Executive Information Systems, Englewood Cliffs, NJ: Prentice-Hall, 1994, p. 331.

Burkan, W. C., Executive Information Systems, New York: Van Nostrand Reinhold, 1991.

Chapnic, P., "Editor's Buffer," Database Programming and Design, Vol. 2, No. 4, April 1989, pp. 7-8.

Coffee, P., K. D. Moser, and J. Frentzen, "Software Tools Support Decision Making," PC Week, Vol. 7, No. 25, June 25, 1990, pp. 119-121.

Darrow, B., "EISes Put Data at Users' Fingertips," InfoWorld, Vol. 12, No. 33, August 13, 1990, p. 13.

DeLong, D. W., and J. F. Rockart, "Identifying the Attributes of Successful Executive Information System Implementation," in J. Fedorowicz (Ed.), DSS '86 Transactions, Washington, DC: College on Information Systems of The Institute of Management Sciences, 1986.

Eckerson, W. W., Performance Dashboards: Measuring, Monitoring and Managing Your Business, Indianapolis, IN: Wiley Publishing, 2005.

Eliot, L., "High ROI on Modern EIS," Decision Line, May 1994, pp. 7-8.

Ferranti, M., "Pilot Aims Windows-Based EIS at Non-Programmers," PC Week, Vol. 7, No. 36, September 10, 1990, pp. 39, 50.

Few, S., Show Me the Numbers: Designing Tables and Graphs to Enlighten, Oakland, CA: Analytics Press, 2004.

Few, S., Information Dashboard Design: The Effective Visual Communication of Data, Sebastopol, CA: O'Reilly, 2006.

Few, S., Now You See It: Simple Visualization Techniques for Quantitative Analysis, Oakland, CA: Analytics Press, 2009.

Fitz-Gibbon, C. T., Performance Indicators, Bristol, UK: Multilingual Matters, 1990.

Gray, P. (Ed.), Decision Support and Executive Information Systems, Englewood Cliffs, NJ: Prentice-Hall, 1994.

Gupta, S. K., "Streaming Multidimensional Data by Bypassing Multidimensional Query Processor," United States Patent Application 20080301086, Cognos Incorporated (Ottawa, CA), available: http://www.freepatentsonline.com/20080301086.html.

Healy, P. M., and G. P. Krishna, "The Fall of Enron," Journal of Economic Perspectives, Vol. 17, No. 2, Spring 2003, pp. 3-26.

Houdeshel, G., and H. J. Watson, "The Management Information and Decision Support (MIDS) System at Lockheed-Georgia," in R. H. Sprague, Jr. and H. J. Watson (Eds.), Decision Support Systems: Putting Theory into Practice, 3rd ed., Englewood Cliffs, NJ: Prentice-Hall, 1993, pp. 235-252.

Hubbard, D. W., How to Measure Anything: Finding the Value of Intangibles in Business, New York: Wiley, 2007.

Kaplan, R. S., and D. P. Norton, The Balanced Scorecard: Translating Strategy into Action, Boston, MA: Harvard Business School Press, 1996.

Korzeniowski, P., "C/S Opens Data Access Tool Door to Fresh Competitors," Software Magazine, February 1994, pp. 71-77.

McLean, B., and P. Elkind, The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron, New York: Portfolio Hardcover, 2003.

Miller, G. A., "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," Psychological Review, Vol. 63, 1956, pp. 81-97.

Norman, D. A., The Design of Everyday Things, New York: Basic Books, 2002.

Norman, D. A., Emotional Design: Why We Love (or Hate) Everyday Things, New York: Basic Books, 2005.

Oland, D., "The Impact of a Powerful New Process at Moosehead," CMA Magazine, February 1994, p. 6.

Parmenter, D., Key Performance Indicators: Developing, Implementing, and Using Winning KPIs, New York: Wiley, 2007.

Quezada, L. E., F. M. Cordova, P. Palominos, K. Godoy, and J. Ross, "Method for Identifying Strategic Objectives in Strategy Maps," International Journal of Production Economics, Vol. 122, No. 1, November 2009, p. 492.

Rockart, J. F., "Chief Executives Define Their Own Information Needs," Harvard Business Review, Vol. 57, No. 2, 1979, pp. 81-93.

Rockart, J. F., and D. DeLong, Executive Support Systems, Homewood, IL: Dow Jones-Irwin, 1988.

Scheier, R. L., "Information Resources Unveils Tool Set that Combines Best of EIS with DSS," PC Week, Vol. 7, No. 46, November 19, 1990, p. 11.

Sprague, R. H., Jr., and B. C. McNurlin, Information Systems Management in Practice, 3rd ed., Englewood Cliffs, NJ: Prentice-Hall, 1993.

Tsai, W., W. Chou, and W. Hsu, "The Sustainability Balanced Scorecard As a Framework for Selecting Socially Responsible Investment: An Effective MCDM Model," Journal of the Operational Research Society, Vol. 60, No. 10, October 2009, pp. 1396-1421.

Tufte, E., The Visual Display of Quantitative Information, Cheshire, CT: Graphics Press, 1983.

Tufte, E., Beautiful Evidence, Cheshire, CT: Graphics Press, 2006.

Volonino, L., and S. Robinson, "EIS Experiences at Marine Midland Bank," North American Journal of Information Technology Management, Vol. 2, No. 2, 1991, pp. 33-38.

Volonino, L., H. J. Watson, and S. Robinson, "Using EIS to Respond to Dynamic Business Conditions," Decision Support Systems, Vol. 14, No. 5, June 1995, pp. 105-116.

Ware, C., Information Visualization: Perception for Design, 2nd ed., San Francisco, CA: Morgan Kaufmann, 2004.

Watson, H. J., "Avoiding Hidden EIS Pitfalls," in R. H. Sprague, Jr. and H. J. Watson (Eds.), Decision Support Systems: Putting Theory into Practice, 3rd ed., Englewood Cliffs, NJ: Prentice-Hall, 1993, pp. 276-283.

Watson, H. J., B. A. Hesse, C. Copperwaite, and V. Devos, "EIS Software: A Selection Process and the Western Mining Experience," Journal of Information Technology Management, Vol. 3, No. 1, 1992a, pp. 19-28.

Watson, H. J., et al., Executive Information Systems, New York: Wiley, 1992b.

Watson, H. J., R. K. Rainer, and C. Koh, "Executive Information Systems: A Framework for Development and a Survey of Current Practices," in R. H. Sprague, Jr. and H. J. Watson (Eds.), Decision Support Systems: Putting Theory into Practice, 3rd ed., Englewood Cliffs, NJ: Prentice-Hall, 1993, pp. 253-275.

Watson, H. J., and M. N. Frolick, "Determining Information Requirements for an EIS," MIS Quarterly, September 1993, pp. 255-269.

Watson, H. J., and J. Satzinger, "Guidelines for Designing EIS Interfaces," Information Systems Management, Vol. 11, No. 4, Fall 1994, pp. 46-52.

Watson, H. J., M. T. O'Hara, C. G. Harp, and G. G. Kelly, "Including Soft Information in EIS," Information Systems Management, Vol. 13, No. 3, Summer 1996, pp. 1058-0530.

Ye, L., and W. Seal, "The Balanced Scorecard," Financial Management, September 2009, pp. 27-29.

QUESTIONS

1. Discuss how the design components of an EIS are different from those of a DSS.

2. Describe the factors that would influence the design of a transnational executive information system. Include cultural factors that are unique to a country and/or that strongly influence the decision-making process, as well as the design specifications that would be affected. Is this effect more or less than you would expect with a DSS?

3. Critique the concept of using a standardized methodology to design dashboards and executive information systems.

4. Design a dashboard that might be useful to you in monitoring your academic progress. Discuss how you would balance long-term performance measures with semester performance measures.

5. What key performance indicators (KPIs) might the dean of your university implement to monitor the health of his or her unit?

6. Examine the annual report of an organization. Discuss how data would need to flow from transaction processing systems within the organization to a dashboard to help monitor the factors of importance in the annual report.

7. Examine the dashboard of IT expenditures in the federal government discussed in this chapter. What recommendations for changes in the budget can you find by examining these data?

8. Prototype a dashboard for some decision. How do you make your decisions about how to represent your data? How do you make your decisions about color?

9. What is the difference between a KPI and a balanced scorecard? How are they related?

10. What is the purpose of a dashboard?

11. Why must a dashboard allow drill-down capabilities?

12. Find example dashboards on the Web. Which of the 13 mistakes of design are apparent? How might you fix them?


ON THE WEB

On the Web for this chapter provides additional information about executive information systems. Links can provide access to demonstration packages, general overview information, applications, software providers, tutorials, and more. Additional discussion questions and new applications will also be added as they become available.

• Links to overview information about executives and their decision-making styles and needs. These links provide access to bibliographies and overview papers about group decision making, both with and without dashboards.

• Links to products. Several dashboard providers have pages that allow users to demon-strate their products. Others provide testimonials and/or reviews.

• Links provide access to dashboard examples in business, government, and research. Some links provide access to papers on the Web describing EIS applications and their uses. Others provide descriptions of the process for developing the application.

• Links provide guidelines for dashboard design. The good, the bad, and the ugly are all discussed on the Web.

You can access material for this chapter from the general Web page for the book or directly at http://www.umsl.edu/~sauterv/DSS4BI/eis.html.

11

GROUP DECISION SUPPORT SYSTEMS

Many decisions in an organization are made not by an individual but rather by groups of individuals. By its very nature, a group enriches the choice process by gathering the knowledge, experience, and often different perspectives of several people. This enrichment may in turn allow the group to understand the problem better, spark synergy for creative solutions, and identify errors in the information or process. Finally, since more people are involved, groups create a deeper commitment to the choice and thus less resistance to its implementation.

However, groups bring a few drawbacks to the decision process. Most group decisions take longer than individual decisions. Groups tend to spend significant nonproductive time waiting, organizing, or repeating what already has been said. Group dynamics can inappropriately influence the process if there are substantial differences in the rank or temperament of the members. Often, the supporting work may be uncoordinated if completed by multiple individuals, or some people may abdicate their tasks and responsibilities to others. Finally, there is social pressure to conform to a group position. "Groupthink" can exist in any group and may exacerbate incomplete or inappropriate uses of information.

Groupthink is an agreement-at-any-costs mentality that often results in ineffective group decision making and poor decisions (Hellriegel et al., 2007). It is associated with groups that have a high degree of conformity and cohesion, that are insulated from outside information sources challenging their decisions, that have excessively directive leadership, and/or that exist in a complex and rapidly changing environment. When groupthink occurs, members ignore limitations or impropriety of their analyses as well as possible consequences of their choice process. In fact, the group collectively rationalizes its choice and



process, going so far as to censor itself when group members deviate from the established position, solution, or parameters.

The problem with groupthink is obviously that it can lead to poor decision processes. In particular, it is associated with:

• Incomplete generation of alternatives
• Incomplete understanding of goals
• Failure to examine risks of preferred choices
• Poor search of information
• Bias in the interpretation of information
• Failure to appraise and reappraise alternatives

Each of these in turn is associated with bad decision making. Unfortunately, DSS as it has been defined to this point does not provide methods for addressing these problems.

Hence, to support group decision making, a tool needs to have not only the characteristics of DSS discussed throughout this book but also the hardware, software, and procedures necessary to bring out the positive aspects of the group and inhibit the negative. Group DSS (GDSS) represent this hybrid technology; they combine DSS and groupware technologies. Group DSS should have the components of a DSS, including the model management system, the database management system, and the user interface management system, as they have been described previously. The system must be able to support the needs of all of the decision makers easily. Group DSS must have the range of models and model management functions necessary to meet the choice needs of the participants. Further, they must be able to access and aggregate information from a variety of sources in a variety of formats to meet the group's broad information needs. Finally, GDSS must be easy for all users to operate.

Too often, the group dynamics themselves block active participation by one or more people and discourage innovative thinking. Group DSS must therefore include tools that address the group dynamics so decision makers can gain consensus about a particular problem or opportunity and group dynamic management systems to address the special needs of group processes. Group consideration of any problem allows the use of additional information, knowledge, and skills, but only if all participants have equal opportunity to be

Collective rationalization is the characteristic that allowed North American automobile executives to agree upon two "facts" about consumers in the 1970s. In particular, the executives agreed that (a) only a small segment of North American automobile buyers would, in fact, purchase Japanese-manufactured automobiles and (b) North American consumers would be willing to tolerate a per-gallon gas price of over $2.50. It is likely that at least one of those executives had concerns about the validity of these two assumptions and their impact upon the automobile design decision-making process. However, he or she may have been hesitant to express those concerns in a meeting where others perceived the assumptions to be true. This was groupthink, and it had a remarkably negative impact upon the North American automobile industry. Over time, the American automobile industry has repeated this mistake multiple times.


heard and to have ideas received. Since GDSS use the technologies of groupware, before discussing more about GDSS, we will examine the concept of groupware in more depth.

GROUPWARE

Groupware, or group support systems (GSS), have evolved over time. One definition available in the literature is that GSS are computer-based information systems used to support intellectual, collaborative work (Jessup and Valacich, 1993). This definition is too broad for our discussion because it does not specifically address the role of groups. Another definition is "tools designed to support communications among members of a collaborative work group" (Hosseini, 1995, p. 368). Yet another way to describe a GSS is as "the collective of computer-assisted technologies used to aid group efforts directed at identifying and addressing problems, opportunities and issues" (Huber, Valacich, and Jessup, 1993, p. 256).

Groupware exists to facilitate the movement of messages or documents so as to enhance the quality of communication among individuals in remote locations. It provides access to shared databases, document handling, electronic messaging, work flow management, and conferencing. In fact, groupware can be thought of as a development environment in which cooperative applications (including decisions) can be built. Groupware achieves this through the integration of eight distinct technologies: messaging, conferencing, group document handling, work flow, utilities/development tools, frameworks, services, and vertical market applications. Hence, it provides the foundation for the easy exchange of data and information among individuals located far apart. Although no currently available product has an integrated and complete set of capabilities, Table 11.1 summarizes the range of functions that may be included in groupware.

There are many examples of successful use of groupware to enhance communications. In fact, it is believed that over 90% of firms using groupware will receive returns of 40% or more, with some as large as 200%. Boeing engineers collaborated with engineers at parts manufacturers as well as maintenance experts and customer airlines while designing the 777; using groupware technologies, engineers shared ideas through e-mail and specifications through computer-aided design (CAD). Similarly, Weaton Industries used desktop

Group decision making is supposed to provide a richer pool of knowledge and experience and therefore better choices. Research has shown that groups that share unique information (that which is known only to a few members) rather than simply discussing information already shared by most or all members tend to make better decisions. Further, groups whose members talk to each other more make better decisions. Unfortunately, a meta-analysis of 72 studies involving 4795 groups and over 17,000 individuals showed that groups tend to spend most of their time discussing the redundant information shared by most members rather than discussing information known only to one or a minority of members. In addition, the analysis found that groups that talked more tended to share less unique information. The problem seems particularly bad when groups seek a consensus opinion or judgment rather than solving a problem for which a correct answer exists. There is good news, however: groups improved both their sharing of unique information and the range of discussion among group members when the group was more focused and highly structured. Such structure can be created when using a GDSS to manage the meeting.


Table 11.1. Functionality of Groupware Products

Enterprise needs

• Cross-vendor support

• Local/remote servers

• Integrated networks

• Executive information systems standards

• Network operating systems

• Database

• Document and image repository

• Object repository and knowledge ware

Group needs

• GDSS

• Desktop video and audio conferencing

• Group application development environment

• Group editing

• Work flow management

Tools

• E-mail and messaging

• Calendar management and scheduling

• Personal productivity applications

• Models and model management

videoconferencing to diagnose and repair giant blow-molding machines around the world. Finally, law firms use groupware to gain access to documents for improved efficiency and customer service.

DSS in Action Around the Clock Processing

Many companies are going beyond simple document sharing, deploying such programs on an enterprise-wide basis and using repository-based groupware as databases, internal communication networks, and work flow systems. Many companies are using groupware products to spearhead efforts to reengineer the way they do business. For example, a Wall Street investment firm used groupware to help prepare the final details of a merger and acquisition proposal under deadline. At 3 p.m. the day before the proposal was due, it became clear to management that they could not finish those details without help. The company contracted with Coopers & Lybrand to finish the proposal by 9 a.m. the next morning.

Using Lotus Notes, Coopers & Lybrand met the firm's needs. At the end of the day for the Dallas office of Coopers & Lybrand, management handed the work to the San Francisco office. Those employees worked on the project until the end of their workday, when they, in turn, passed the project to the Sydney office. Sydney employees eventually passed the work to the London office, which in turn passed it to the New York office, which finally returned the work to the Dallas office for presentation to the client at the originally scheduled time (i.e., the next morning).


The main groupware competitors at this time are:

• FacilitatePro from Facilitate.com
• Lotus Notes from IBM
• Net Meeting and MeetingWorks from Microsoft
• Oracle Beehive from Oracle Corporation
• GroupWise 4.1 from WordPerfect: The Novell Applications Group
• WebEx from Cisco

Each one provides some kind of meeting capability. Typically the products include agenda-setting, discussion, and voting capabilities, such as those shown in Figure 11.1. This screenshot from FacilitatePro shows brainstormed options after participants have voted on their desirability. Characteristics of the voting pattern are illustrated both graphically and statistically to help users understand the votes of their colleagues. In addition, since all of the information is stored electronically, the tools help organizations meet the regulations associated with its storage and disclosure. However, they do not provide the analytical tools associated with the DSS we have discussed in this book.
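To suggest the kind of summary such voting tools present, the sketch below tallies hypothetical ratings of brainstormed options and reports a ranking along with simple statistics, a rough stand-in for the graphical and statistical views described above.

```python
# Hedged sketch (invented votes): summarize group ratings of brainstormed
# options as a ranking plus simple statistics, the way a GDSS voting tool
# presents results back to the group.
from statistics import mean, stdev

votes = {                         # option -> ratings from each participant (1-10)
    "Outsource support": [7, 8, 6, 9, 7],
    "Hire two staff":    [5, 6, 4, 6, 5],
    "Automate triage":   [9, 8, 9, 7, 10],
}

for option, scores in sorted(votes.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{option:20s} mean {mean(scores):.1f}  spread {stdev(scores):.1f}  votes {scores}")
```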

One of the major problems with most groupware products at this time is that they rarely interface nicely with one another. They have, however, adopted standards that allow most of them to provide e-mail, calendar, and scheduling functions through a single standard (most use Microsoft Outlook), as proposed early in the millennium. Further, over time, the various products have increased the modules available with the products, making them more able

Figure 11.1. Voting tools available with groupware. A Screenshot from FacilitatePro web meeting software. Used with permission of Facilitate.com (http://www.facilitate.com). (Source: http://www.facilitate.com/video/video-tour.html.)


Table 11.2. Possible Standards for Groupware Products

• The multivendor scheduling standard should support transparent scheduling for all store-and-forward messaging transports as well as via real-time network protocols

• The standard should include hooks into shared X.500 directory services as well as proprietary e-mail, groupware, and network operating system directories

• The standard should support the calendar synchronization policies maintained by various scheduling tools

• The standard should support interfaces to multivendor network-enabled project planning and management tools

• The standard should allow users to control who may access their personal calendars, what fields may be viewed and modified, and what types of events may be scheduled without the owner's prior consent

• The standard should mediate between the various techniques used by scheduling tools to request meetings, negotiate meeting times and places, and reconcile conflicting schedules

to stand alone across the range of functionality they provide. Furthermore, the lack of interoperability means that users must adopt and maintain a single product line, regardless of whether it continues to meet their needs, because it is expensive for all users to change. Hence, there is a move in the industry to develop a groupware standard, including items such as those described in Table 11.2.

GDSS DEFINITIONS

A group DSS incorporates groupware technology with DSS technology. As such, GDSS consist of hardware, software, and procedures for facilitating the generation and evaluation of alternatives as well as features for improving group dynamics. However, a GDSS is not a reconfiguration of an existing DSS but rather a specially designed system that integrates DSS and groupware technologies.

A typical configuration includes model management, database management, and group management tools interconnected and managed by a facilitator. The purpose of the facilitator is to coordinate the use of the technology so that the focus of the decision makers is on the problem under consideration, not on the use of the technology. Early GDSS included interconnected machines located in one room (sometimes called a decision room) to create a decision conference attended by an appropriate group of individuals to consider options and find a solution to the problem. An example of a decision room is shown in Figure 11.2. In this configuration, information can be communicated to and from participants via a network or by use of one or more public screens projecting the output of a particular computer. Over time, GDSS have expanded to include people located in different places, at different times, and with a variety of support tools. In fact, it is now a mature technology, many of whose concepts are now embedded in the way organizations work.

A typical decision-making process has several stages. After an introduction by the facilitator, the group is asked to discuss the issues and concerns so that the problem can be detected and defined. Once a set of alternatives is understood, the group attempts to construct a model of the choice context through which to evaluate the several alternatives. The analyst then assists the participants to refine the model and evaluate its results.

The process generally is guided by support staff. There must be a facilitator to help the group focus on the task by addressing and solving the technology issues. In addition,


Figure 11.2. Typical decision conferencing configuration. Typically a control room and one or more "breakout" rooms are adjacent to this room.

there is an analyst who provides expertise in developing computer models and a recorder who chronicles the proceedings by recording the critical issues and syntheses as they occur (although frequently that is captured electronically now).

If participants are in the same location, workstations are networked and documents are projected onto several public screens. If the users are not colocated, documents and models are displayed on their individual monitors. If the meeting is not synchronous, then materials are stored for other users to recall when they participate.

Variations of the workstation methodology include teleconferencing and the remote decision-making approach. In teleconferencing, group support is like that in the decision conference, but participants are geographically separated from one another. In addition to the electronic connection, there is visual and audio communication so users can see and hear one another as if they were in the same location. An example of this setup is shown in Figure 11.3. Remote decision making is similar to the workstation approach but with offices that are not in close proximity. These sessions might also have videoconferencing support, or they may simply be electronic.

The systems allow users to draft ideas at their own workstations. After some consideration of the document, the user may elect to share ideas, hold documents for a later, more appropriate time, or discard weak results. The display of many ideas on one or more public screens can lead to a more integrated discussion of a topic. Since it is not possible to identify the originator of a particular idea, the opinions of particular individuals can be shared anonymously.
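A minimal sketch of this anonymity mechanism, with hypothetical field names: ideas enter a shared pool with no author identifier, so the group sees the idea but not its source, and drafts stay private until the author chooses to share them.

```python
# Hedged sketch: a shared idea pool that deliberately stores no author
# identifier, so displayed ideas cannot be traced to individuals.
shared_pool = []   # visible to the whole group on the public screen

def submit_idea(text, share=True):
    """Drafts stay local unless the author chooses to share them."""
    if share:
        shared_pool.append({"idea": text})   # note: no author field recorded
    return text

submit_idea("Bundle maintenance contracts with new sales")
submit_idea("Rough draft - keep private for now", share=False)
print(shared_pool)
```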

Watson et al. (1988) completed an extensive study of this type of configuration and compared the results to group meetings with other kinds of assistance. Their overall conclusion was that, in general, the workstation approach seems to provide greater process support than other methodologies. Of course, in this day of cloud computing, not only might the people not be in the same room (as is often the case), but the software might not be located with the users.

Figure 11.3. GDSS and videoconferencing room at the University of Missouri. Photo taken by Alexia Lang for University News at UMKC. Photo is used courtesy of Ms. Lang.

Some of the GDSS products available today include:

• Brainstorming.com
• Expert Choice
• Facilitate
• GroupSystems Tools
• Grouputer
• Logical Decisions
• Robust Decisions
• WebIQ

The functionality and support needs of these tools vary.

FEATURES OF SUPPORT

Decision-Making Support

The GDSS must provide both decision-making support and process support. Decision-making support begins with the features that have already been addressed with regard to all DSS. That is, the GDSS must include access to models and model management tools, data and database management tools, and mail and mail management tools. However, groups generally are created to solve particularly poorly structured problems, often with strategic or long-term implications. Hence, GDSS need to provide particular support for alternative generation and issue interpretation. Alternative generation requires an electronic brainstorming tool that records ideas and comments about ideas. Furthermore, the tool needs to facilitate consolidation of ideas by helping either the group members or the facilitator to identify common concerns, common attributes, and/or relationships among ideas. This facility is sometimes known as an issue analyzer tool. For example, consider the tool illustrated in Figure 11.4, which shows how the system helps the users consider a wide range of facets of the problem, thereby helping them to brainstorm solutions more effectively. Finally, the GDSS needs to facilitate the identification of stakeholders, the assumptions being made with regard to them, and what role and importance they will play in the process.

Figure 11.4. Facilitating problem definition.

Alternative generation, analysis, and categorization can be quite difficult in a group setting because everyone wants to participate at once and because participants follow different thought processes. Group DSS tools can provide the distinctive feature of parallel communications, or "the ability . . . [for] group members to communicate information simultaneously" (Bostrom, Anson, and Clawson, 1993, p. 461). With this in place, members need not wait for others to complete their thoughts before expressing their own opinions. This keeps an individual's train of thought focused yet prevents time lags between the expression of one idea and another (Wilson and Jessup, 1995). The ability for group members to work in parallel "may account for the increased productivity of GSS idea-generating groups" and the higher satisfaction levels of participants (Dennis and Gallupe, 1993). In addition, parallel communication can lead to time savings. Since there is no competition for "air time," domination by an outspoken member of the group can be reduced (Wilson and Jessup, 1995). Also, since ideas can be contributed simultaneously, the total time to collect information is reduced (Dennis et al., 1995).
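To make the consolidation step concrete, the short sketch below shows one way an issue-analyzer-style feature might take a first pass at grouping brainstormed ideas by the significant words they share. It is a minimal illustration only, not the algorithm used by GroupSystems or any of the products listed earlier; the function name, stop-word list, and sample ideas are all hypothetical.

from collections import defaultdict

# Hypothetical helper: cluster brainstormed ideas by the significant words
# they share, mimicking a first pass of an "issue analyzer" tool.
STOP_WORDS = {"the", "a", "an", "of", "to", "and", "for", "in", "on", "we", "our"}

def group_ideas_by_keyword(ideas):
    """Return a mapping of keyword -> list of ideas mentioning that keyword."""
    groups = defaultdict(list)
    for idea in ideas:
        for word in idea.lower().replace(",", " ").split():
            if word not in STOP_WORDS and len(word) > 3:
                groups[word].append(idea)
    # Keep only keywords shared by at least two ideas (candidate common concerns).
    return {kw: items for kw, items in groups.items() if len(items) >= 2}

if __name__ == "__main__":
    ideas = [
        "Reduce delivery time by adding a regional warehouse",
        "Outsource delivery to a regional courier",
        "Add staff to the warehouse during peak season",
    ]
    for keyword, related in group_ideas_by_keyword(ideas).items():
        print(keyword, "->", related)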

Consider the screens from GroupSystems shown in Figures 11.5 and 11.6. Figure 11.5 illustrates the ease with which users can define and utilize a variety of criteria with different weights when evaluating alternatives. Of course, different users will emphasize different criteria and will certainly give different weights to those criteria. The tool facilitates these differences and performs the necessary summary. Figure 11.6 illustrates how the results might be displayed to help users understand the sensitivity of their decision to the criteria and the weight of the criteria considered.

Figure 11.5. Definition of multiple criteria and weights in decision making. (Source: http://www.groupsystems.com/documents/ThinkTank-Quick-Start-Guide.pdf.) Used with permission.

Figure 11.6. Helping users understand sensitivity of decision to criteria and weights. (Source: http://www.groupsystems.com/documents/ThinkTank-Quick-Start-Guide.pdf.) Used with permission.
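The weighted-sum logic behind screens such as these is straightforward. The sketch below is a minimal illustration rather than the actual ThinkTank or GroupSystems computation: it shows how per-member criterion weights might be averaged into group scores and how a simple sensitivity check could reveal whether a change in one member's weights alters the ranking. All names and numbers here are hypothetical.

# Minimal sketch of how a GDSS might aggregate per-member criterion weights
# into group scores and test sensitivity to one member's weights.
alternatives = {
    "Vendor A": {"cost": 7, "quality": 9, "support": 6},
    "Vendor B": {"cost": 9, "quality": 6, "support": 8},
}

# Each group member supplies his or her own criterion weights (summing to 1).
member_weights = {
    "member_1": {"cost": 0.5, "quality": 0.3, "support": 0.2},
    "member_2": {"cost": 0.2, "quality": 0.5, "support": 0.3},
}

def group_scores(alts, weights_by_member):
    """Average each member's weighted-sum score for every alternative."""
    scores = {}
    for name, ratings in alts.items():
        member_scores = [
            sum(w[c] * ratings[c] for c in ratings)
            for w in weights_by_member.values()
        ]
        scores[name] = sum(member_scores) / len(member_scores)
    return scores

print(group_scores(alternatives, member_weights))

# Crude sensitivity check: shift one member's weight from cost to quality
# and see whether the ranking of alternatives changes.
member_weights["member_1"] = {"cost": 0.3, "quality": 0.5, "support": 0.2}
print(group_scores(alternatives, member_weights))

Averaging members' scores is only one possible aggregation rule; a GDSS might equally display the full distribution of members' scores so the group can see exactly where it disagrees.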

Another way in which the GDSS provides decision support is by acting as a "group memory." In particular, it provides an electronic record of the meeting, both in summarized and raw form. This allows individuals who want to review the process access to the concepts and alternatives that were identified as well as the flow of the information being compiled by the group (Hosseini, 1995). In other words, not only can an individual get the overall impression of the meeting, he or she can also follow the exchanges to determine how final positions were derived. This retracing of the group thought process can help the individual to understand the "why" behind the "what" that resulted from the meeting. It can be defined as "a sharing of interpretations among individuals of a group" (Hoffer and Valacich, 1993). Some of the components necessary to support group memory are listed below:

• Access to a wide variety of information both external and internal to the organization as well as internal and external to the group process

• The ability to capture information easily and to store and integrate information generated by group interactions and about group processes dynamically

• Support for use of both quantitative and qualitative decision models and aids (Hosseini, 1995)

• The ability to support weighting and ranking of alternatives that have been proposed and stored in group memory

These features will allow group members to examine information available to the group, whether it was generated by the group itself or prepared externally and presented to the group. The group will have access to the raw data, the molding of data into information, and the group's implied evaluation of the relevance, accuracy, and importance of data.

This information must be available to group members on an "as-needed" basis. Members might need to review activities that have occurred since they left the conference and to be brought "up to speed" easily once they rejoin the group discussion. The group memory should allow group members to peruse the results of prior meetings they were unable to attend (Wilson and Jessup, 1995). Such a feature will be of particular importance to the use of GSS in reengineering because it will facilitate diverse membership and cross-functional attendees who might not all be available for meetings simultaneously. The group memory configuration also must allow browsing of what has transpired even while the meeting continues. This implies individuals can leave the conference, digest information at their own pace, and then rejoin the conference. Such a feature allows for disparity in learning speed and learning style without biasing the group's opinion of the member (Hosseini, 1995).

The NATO Research and Technology Organization (RTO) sponsored a workshop for national security executives, scientists, engineers, and technologists from 13 countries to develop a list of high-impact research and technology areas to combat terrorism and to facilitate multinational exchange of ideas for combating terrorism. The participants were broken down into four groups based on topics: indications and warnings, survivability and denial, consequence management and recovery, and attribution and counteractions.

Using GroupSystems, the four workgroups brainstormed ideas, discussed strategies, and prioritized their recommendations using a variety of collaborative technologies and techniques. They used the tool to list ideas, expand and discuss these ideas, evaluate the impact of the projects, and prioritize R&D projects. After completing these general "brainstorm-organize-prioritize process" sessions, they then presented their recommendations in a plenary session during the final day of the workshop.

On the day after the workshop, the RTO cadre and the facilitators worked in an electronic meeting environment to integrate the various briefings, lists, charts, notes, and recommendations into a consolidated report.

There are technical considerations associated with providing an adequate group memory, especially in terms of preserving the richness of the information associated with discussion. However, when accomplished properly, it can assist in increasing task focus and thereby aid effectiveness.
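As a concrete illustration of the group memory idea, the sketch below shows one possible minimal structure for capturing contributions as they occur and letting a returning member review everything recorded since he or she left the conference. It is an assumption-laden sketch, not a description of any particular GDSS product; the class and field names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical group-memory store: records contributions as they occur and
# lets a member who stepped out "catch up" on what has happened since.
@dataclass
class Contribution:
    timestamp: datetime
    kind: str          # e.g., "idea", "comment", "vote summary"
    text: str
    author_id: str     # pseudonymous ID so contributions can remain anonymous

@dataclass
class GroupMemory:
    contributions: List[Contribution] = field(default_factory=list)

    def record(self, kind: str, text: str, author_id: str) -> None:
        self.contributions.append(Contribution(datetime.now(), kind, text, author_id))

    def since(self, when: datetime) -> List[Contribution]:
        """Everything recorded after `when`: the catch-up view for a returning member."""
        return [c for c in self.contributions if c.timestamp > when]

memory = GroupMemory()
memory.record("idea", "Consolidate the two regional call centers", "anon-07")
left_at = datetime.now()
memory.record("comment", "Consolidation may conflict with the union agreement", "anon-03")
for item in memory.since(left_at):
    print(item.kind, ":", item.text)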

Process Support

As was stated earlier in this chapter, one of the main contributions provided by GDSS technology is support of the process. Research has demonstrated that large groups benefit most from the use of a GDSS. This is the case because in traditional, non-GDSS settings, the larger the group, the greater the negative aspects of group behavior. Since a GDSS manages the negative aspects of group behavior and makes a group more effective in accomplishing its goals, it has a greater impact on larger groups. This is not to say that it cannot be an effective aid in small groups. Rather, it suggests that because the negative aspects of group behavior are not as prominent in small groups, the relative impact is not as great. Process support includes all features that encourage the positive attributes of group decision making while suppressing the negative group dynamics.

One GDSS process feature is that the technology allows greater flexibility in the definition of meetings. Group members might not always attend the same meetings. This aspect of group meetings is a growing phenomenon as more diverse individuals, who have diverse responsibilities and schedules, are brought together to work on projects. As corporations downsize, it is likely that the expertise necessary to solve a problem or to complete a project will not be available at common locations. Also, if high-level managers are involved in the project, they might need to be away from the group to respond to needs in their own department. Group DSS can be extended for use in different places and at different times. Hence, the discussion and decision-making meetings will be populated by "virtual groups." Group members might meet at the same time in the same place. Or, as discussed earlier, they might meet at the same time but in geographically different locations joined through teleconferencing. With GDSS, they might meet in the same place but at different times. Finally, the GDSS allows groups to meet at different times in different places. This extension of the technology means that the number of face-to-face meetings will decline and that meetings will interfere less with other productive work.

A second process feature of GDSS is anonymity. In particular, this feature allows group members to pose opinions, provide analyses, or vote without revealing their identity to other members of the group. Anonymity allows for a more democratic exchange of information because individuals must evaluate contributions on their merits rather than on what seems politically most expedient. If the author of a proposal is not known, then the evaluation of the proposal hinges not upon the status of the author but rather on the merit of the idea itself. This feature is most important when a group consists of individuals of significantly different stature. In meetings where pressure to conform is perceived to be high, the anonymity feature allows for the most open contributions and hence is most highly valued. There is also the possibility that preserving anonymity will remove personalities from the process and allow the focus to be on the analysis of the problem on the table.

With a GDSS, an environment can be created in which group members participate equally, vote their conscience, and contribute more often than they might in a noncomputerized environment where their contributions are more easily identified. Hence, anonymity can result in more information being generated, better analyses, and better decision making.

Of course, the GDSS must also provide voting facilities and negotiating aids for the group meeting. As a first step, the participants need to agree upon, or at least understand, the different approaches to making decisions. The most important question is who will make the decision. The group may make the decision, or it might only be consultative, with someone else actually making the decision. If the group is making the choice, it might follow a consensus approach, in which group members continue to discuss, compromise, and negotiate until one final decision is agreed upon by all. Or the group might use the more common alternative, the democratic approach, in which the adopted alternative is the one that receives the majority of members' votes. If the group is only being consulted, it may be because the managerial authority is being dictatorial (only he or she will decide) or because the group has given that right to the leader. In addition, the final choice might be given to an external body or person, as in the case of arbitration.
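A minimal sketch of the voting side of this support appears below. It simply tallies anonymous ballots and reports whether the result reflects consensus, a majority, or only a plurality; it is illustrative only and is not drawn from any specific GDSS package.

from collections import Counter

# Illustrative sketch of how a GDSS might tally anonymous ballots and report
# whether the group reached consensus, a majority, or only a plurality.
def tally(ballots):
    """ballots: list of alternative names, one per (anonymous) member."""
    counts = Counter(ballots)
    winner, votes = counts.most_common(1)[0]
    if votes == len(ballots):
        rule = "consensus"
    elif votes > len(ballots) / 2:
        rule = "majority"
    else:
        rule = "plurality only"
    return winner, rule, dict(counts)

print(tally(["Vendor B", "Vendor B", "Vendor A", "Vendor B"]))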

There are other tools that the GDSS can provide to facilitate the group. For example, the GDSS might include an electronic version of Robert's Rules of Order or some other parliamentary procedure or it might provide the facility to develop and call upon rules for discussion and voting in the meeting. An "intelligent counselor" is a knowledge-based system that can provide advice on the rules applying to a particular situation. Support for voting might include the provision of numerical and graphical summarization of votes and ideas. The DSS might also include programs for the calculation of weights in multiattribute decision problems and Delphi techniques for progressive movement toward consensus building.

Another resource that can be built into the meeting process is the use of facilitators. Facilitation can be defined as "a set of functions carried out before, during, and after a meeting to help the group achieve its own outcomes" (Bostrom, Anson, and Clawson, 1993, p. 147). A facilitator can increase the likelihood that a meeting will produce the desired outcomes. In other words, if a facilitator is used, the meeting will make use of the GDSS tools, but the process will not be driven by them. A facilitator should be adept at exploiting the GDSS technology to achieve the goals of the group; otherwise, the group either will become overly focused on the technology (at the expense of the topic at hand) or will not avail itself of the richness of the tool to address the topic. The additional talents a facilitator needs are far too numerous and embrace too many disciplines to be outlined here.

GDSS and Reengineering

Reengineering projects draw upon employees from diverse areas of the organization. This diversity must be present to ensure that every element of the process is considered carefully (Zigurs and Kozar, 1994). For example, consider three case studies in which GSS were used: U.S. Army Installation Management, Flagstar, and the Department of Defense Battlefield Logistics. A review of these case studies illustrates that the GSS technology facilitated their success (Dennis et al., 1995). The most significant factor to emerge from the analysis was the essential nature of the team concept. Top managers need to provide support, but a team of middle managers is the core of the process, and they need to work as one. Cross-functional teams, whose members are diverse in style and experience, need to hit the ground running and not waste time establishing ground rules and procedures. A good GSS handles those problems. The team that "owned" the business process redesign had its skills enhanced by the qualities of the GSS while consulting with IT staff on the technical characteristics of making it work.

History is full of implementation problems that arose because lower level managers were not part of the discussions, thereby requiring upper level managers to rely upon their memories as to how functions were performed. For example, consider the reengineering effort of Garland Power and Light. Although this company had experienced failed collaborative projects in the past, management believed that a reengineering effort was needed. To this end, the strategic plan that was developed highlighted commonality in purpose and definition, collaboration among the managers of the five divisions, and dissolution of the boundaries between divisions to provide more end-to-end work. Unfortunately, the process at Garland Power and Light failed. An analysis of the failure identified problems of collapsed coordination and lack of communication (Zigurs and Kozar, 1994). The use of a GSS could have helped avoid the failure. The fundamental processes present in a GSS would facilitate collaboration and the blurring of boundaries. Group memory would help team members converge on the purpose and definition of the project.

DISCUSSION

Group DSS merge groupware technology with decision support technology. All of the characteristics and needs of DSS discussed earlier need to be fulfilled. In addition, these systems provide tools to help exploit the advantages of group decision making while avoiding some of its problems. There have been many applications of GDSS, and much research has been devoted to understanding how to apply them to group choice processes.

SUGGESTED READINGS

Anonymous, "Groupware and the Virtual Enterprise," Datamation, March 15, 1995, pp. S4-S8. Antunes, P., and T. Ho, "The Design of a GDSS Meeting Preparation Tool," Group Decision and

Negotiation, Vol. 10, No. 1, January 2001, pp. 5-25. Barkhi, R., V S Jacob, and H. Pirkul., "The Influence of Communication Mode and Incentive Structure

on Gdss Process and Outcomes," Decision Support Systems, Vol. 37, No. 2, May 2004, p. 287. Bostrom, R. P., R. Anson, and V. K. Clawson, "Group Facilitation and Group Support Systems," in

L. M. Jessup and J. S. Valacich (Eds.), Group Support Systems: New Perspectives, New York: Macmillian Publishing Co., 1993, pp. 146-168.

Cole, B., "Channels' Surfing across the Groupware Seascape," Network World, September 5, 1995, pp. 35-38.

Dennis, A. R., D. Daniels, G. Hayes, G. Kelly, D. Lange, and L. Massman, "Business Process Reengineering with Groupware," in Proceedings of the Twenty-Eighth Annual Hawaii Interna-tional Conference on System Sciences, Vol. IV, Los Alamitos, CA: IEEE Computer Society Press, 1995, pp. 378-387.

SUGGESTED READINGS

Dennis, A. R., and B. R. Gallupe, "A History of Group Support Systems Empirical Research: Lessons Learned and Future Directions," in L. M. Jessup and J. S. Valacich (Eds.), Group Support Systems: New Perspectives, New York: Macmillian Publishing, 1993, pp. 59-77.

DeSanctis, G., and R. B. Gallupe, "A Foundation for the Study of Group Decision Support Systems," Management Science, Vol. 33, No. 5, May 1987, pp. 589-609.

DeSanctis, G., et al, "The Minnesota GDSS Research Project: Group Support Systems, Group Processes, and Outcomes," Journal of the Association for Information Systems, Vol. 9, Nos. 10-11, 2008, pp. 551-609.

Dias, L. C., and J. N Climaco, "Dealing with Imprecise Information in Group Multicriteria Decisions: A Methodology and a GDSS Architecture," European Journal of Operational Research, Vol. 160, No. 2, January 16, 2005, p. 291.

Elfvengren, K., H. Karkkainen, M. Torkkeli, and M. Tuomine, "A GDSS Based Approach for the Assessment of Customer Needs in Industrial Markets," International Journal of Production Economics, Vol. 89, No. 3, June 18, 2004, p. 275.

Elfvengren, K., S. Kortelainen, and M. Tuominen, "A GSS Process to Generate New Product Ideas and Business Concepts," International Journal of Technology Management, Vol. 45, Nos. 3-4, 2009, p. 337.

Eom, S. B., The Development of Decision Support Systems Research: A Bibliometrical Approach,

Lewiston, New York: Edwin Meilen Press, 2007. Gray, P., "The Nature of Group Decision Support Systems," in F. Burstein and C. W. Holsapple

(Eds.), Handbook on Decision Support Systems, Vol. I, Berlin: Springer-Verlag, 2008, pp. 371— 390.

Haseman, W. D., D. L. Nazareth, and P. Souren, "Implementation of a Group Decision Support System Utilizing Collective Memory," Information & Management, Vol. 42, No. 4, May 2005, p. 591.

Heighler, E., "Obstacles Block the Road to Groupware Nirvana," Inforworld, October 24, 1994,

p. 56.

Hellriegel, D., S. E. Jackson, and J. W. Slocum, Managing: A Competency Based Approach, Boston, MA: South-Western College Publishing, 2007.

Hoffer, J. A., and J. S. Valacich, "Group Memory in Group Support Systems: A Foundation for Design," in L. M. Jessup and J. S. Valacich (Eds.), Group Support Systems: New Perspectives, New York: Macmillian Publishing Co., 1993, pp. 215-229.

Hosseini, J., "Business Process Modeling and Organizational Memory Systems: A Case Study," in Proceedings of the Twenty-Eighth Annual Hawaii International Conference on System Sciences, Vol. IV, Los Alamitos, CA: IEEE Computer Society Press, 1995, pp. 363-371.

How to Improve your Organization's Group Intelligence, White Paper, GroupSystems Corporation, Broomfield, CO, 2006, available: http://www.groupsystems.com/documents/GroupJntelligence-A_White_Paper.pdf.

Huber, G. P., "Issues in the Design of Group Decision Support Systems," MIS Quarterly, Vol. 8, No. 1,1984, pp. 195-204.

Huber, G. P., J. S. Valacich, and L. M. Jessup, "A Theory of the Effects of Group Support Systems on an Organization's Nature and Decisions," in L. M. Jessup and J. S. Valacich (Eds.), Group Support Systems: New Perspectives, New York: Macmillian Publishing, 1993, pp. 255-269.

Jessup, L. M., and J. S. Valacich, "On the Study of Group Support Systems: An Introduction to Group Support System Research and Development," in Group Support Systems: New Perspectives, New York: Macmillian Publishing, 1993, pp. 3-7.

Kobielus, J., "Scheduling Standards Are Key to Groupware Strategies," Network World, August 29, 1994, pp. 27-29.

Lawton, G., and J. Vaughn, "Groupware Vendors Jockey for Position," Software Magazine, September, 1994, pp. 23-24.

GROUP DECISION SUPPORT SYSTEMS

Limayem, M., P. Banerjee, and L. Ma, "Impact of GDSS: Opening the Black Box," Decision Support Systems, Vol. 42, No. 2, November 1, 2006, p. 945.

Mesmer-Magnus, J., and L. DeChurch, "Information Sharing and Team Performance: A Meta-

Analysis," Journal of Applied Psychology, Vol. 94, No. 2, 2009, pp. 535-546.

Nunamaker, J. R, and A. V Deokar, "GDSS Parameters and Benefits," in F. Burstein and C. W. Holsapple (Eds.), Handbook on Decision Support Systems, Vol. I, Berlin: Springer-Verlag, 2008, pp. 391-414.

Opper, S. and H. Fersko-Weiss, Technology for Teams: Enhancing Productivity in Networked Orga-nizations. NY: VanNostrand Reinhold, 1992.

Read, M., and T. Gear, "Developing Professional Judgement with the Aid of a 'Low-profile' Group Support System," Journal of the Operational Research Society, Vol. 58, No. 8, August 2007, pp. 1021-1030.

Rees, J., and G. J. Koehler, "An Evolutionary Approach to Group Decision Making," INFORMS Journal on Computing, Vol. 14, No. 3, Summer 2002, pp. 278-292.

Schräge, M., "Groupware Griping Is Premature," Computerworld, August 29, 1994, Vol. 28, No. 35,

p. 37.

Simpson, D., "Variations on a Theme," Client/Server Today, July 1994, pp. 45-47.

Sprague, R. H., and H. J. Watson, Decision Support Systems: Putting Theory into Practice, New York: Prentice-Hall, 1989.

Stahl, S. "Groupthink," Informationweek, August 22, 1994, pp. 12-13.

Straub, D. W, and R. A. Beauclair, "Current and Future Uses of Group Decision Support Systems Technology: Report on a Recent Empirical Study," Journal of MIS, Vol. 5, No. 1, Summer, 1988, pp. 101-116.

Sunstein, C. R., Infotopia: How Many Minds Produce Knowledge, London: Oxford University Press, 2008.

Suroweicki, J., The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective

Wisdom Shapes Business, Economies, Societies and Nations, Boston: Little, Brown, 2004.

Turoff, M., R. H. Starr, A. N. F. Bahgat, and A. R. Rana, "Distributed Group Support Systems," MIS Quarterly, Vol. 17, No. 2, December 1993, pp. 399-417.

Watson, R. T, G. DeSanctis, and M. S. Poole, "Using a GDSS to Facilitate Group Consensus: Some Intended and Some Unintended Consequences," MIS Quarterly, Vol. 10, No. 3, September 1988, pp. 463-478.

Watson, R. T., T. H. Ho, and K. S. Raman, "Culture as a Fourth Dimension of Group Support Systems," Communications oftheACM, Vol. 37, No. 10, October 1994, pp. 45-54.

Whiting, R., "Not for Workgroups Only," Client/Server Today, July 1994, pp. 58-66.

Wilson, J., and L. M. Jessup, "A Field Experiment on GSS Anonymity and Group Member Status," in Proceedings of the Twenty-Eighth Annual Hawaii International Conference on System Sciences, Vol. IV, Los Alamitos, CA: IEEE Computer Society Press, 1995, pp. 212-221.

Ziguram, I., and K. A. Kozar, "An Exploratory Study of Roles in Computer-Supported Groups," MIS Quarterly, Vol. 18, No. 3, September 1994, pp. 277-297.

QUESTIONS

1. What is the difference between group decision support systems and groupware? What features would one expect in a GDSS but not in groupware?

2. What are the advantages of having groups consider issues? What attributes of GDSS exploit those advantages?


3. What are the disadvantages of having groups consider issues? What attributes of GDSS help to minimize those disadvantages?

4. How would reengineering efforts be improved by using GDSS?

5. Discuss two decisions in which you have been involved that might have been improved with the use of GDSS.

6. What is the difference between DSS with an active mail component and a group DSS?

ON THE WEB

On the Web for this chapter provides additional information to introduce you to the area of DSS. Links can provide access to demonstration packages, general overview information, applications, software providers, tutorials, and more. Further, you can see some DSSs available on the Web and use them to help increase confidence in your general understanding of this kind of computer system. Additional discussion questions and new applications will also be added as they become available.

• Links to overview information about group decision making. These links provide bibliographies and overview papers on the topic of group decision making, both with and without GDSS tools.

• Links to products. Several groupware and GDSS providers have pages describing tools that allow collaborative projects with people in the same room or across the world.

• Links provide access to GDSS examples in business, government, and research. Some links provide access to papers on the Web describing GDSS applications and their uses. Others provide descriptions of the process by which the application was developed.

• Links provide summaries of applications in particular industries. Examples of how specific business problems have been solved using GDSS are identified and reviewed.

You can access material for this chapter from the general Web page for the book or directly at http://www.umsl.edu/~sauterv/DSS4BI/GDSS.html.

