Formative and Summative Decision Support System Evaluation

Originally published 14 February 2008

Understanding formative versus summative evaluation provides some useful distinctions to consider when planning the evaluation of decision support projects. My perception is that more evaluation of decision support system (DSS) projects is needed.

Formative evaluation occurs during decision support system design and development; summative evaluation occurs once the development project is complete and the decision support system is in use. Some authors associate formative evaluation with evaluations by users and summative evaluation with expert and managerial evaluations. Approaches to evaluation differ along three dimensions: when the evaluation occurs (during the development process or after the project is complete), the evaluator's intent (providing constructive, formative feedback versus rendering a judgmental, summative verdict), and who performs the evaluation (internal or external evaluators). You'll want to confirm how "formative" and "summative" are being used when you are in a discussion about evaluating a DSS. I suggest the following definitions.

A formative evaluation involves judging the worth of a program/project, activity or software system while development activities are occurring. Formative evaluation focuses on intermediate or preliminary outcomes and results during the development process.

A summative evaluation involves judging the worth of a program/project, activity or software system at the end of the development process and following implementation. The focus is on assessing immediate and longer term outcomes and results.

Potential users should provide the primary feedback for the formative evaluation of a decision support/business intelligence system, and the evaluation criteria should focus primarily on user interface and usability issues. As part of a formative evaluation of a model-driven DSS, the model needs to be reviewed and validated by an "expert." Formative evaluation of a knowledge-driven DSS needs to verify the rules and knowledge base. Examining data and document quality are legitimate issues in the formative evaluation of a data-driven or document-driven DSS.

For a large-scale model-driven DSS project, summative evaluation should include assessments by users and expert evaluators. Criteria should be broader and the impact of the model-driven DSS on decision making and the organization should be assessed.

For both formative and summative evaluations, one can collect four main types of data using a variety of data collection methods:

  1. Impressionistic or subjective data from developers, users or potential users of the DSS.
  2. Objective data from an unbiased observer. In most situations, the observer will use an explicit, structured assessment protocol.
  3. Qualitative data in text, audio or video format. The data may include answers from potential users to open-ended questions, or anecdotal or impressionistic comments from an observer or a developer. Based upon my own experiences in formative evaluation situations, videotapes of user interactions with a DSS prototype can be especially helpful.
  4. Quantitative data in numerical form. Some DSS developers seem to favor anecdotal evidence, but quantitative data should be collected about the use of a DSS. The data may be collected by the decision support software, in a user questionnaire, or from numerical scores given by observers.
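As a sketch of the fourth category, quantitative usage data collected by the decision support software itself, consider a minimal in-application usage log. This is an illustrative example only; the class, event fields, and feature names are hypothetical, not part of any particular DSS product.

```python
from collections import Counter
from datetime import datetime, timezone

class UsageLog:
    """Minimal in-app usage log: each event is (timestamp, user, feature).

    A hypothetical sketch of how a DSS could capture quantitative
    usage data for later formative or summative evaluation.
    """

    def __init__(self):
        self.events = []

    def record(self, user, feature):
        # Timestamped record of one user invoking one DSS feature.
        self.events.append((datetime.now(timezone.utc), user, feature))

    def feature_counts(self):
        # Quantitative summary: how often each feature was used.
        return Counter(feature for _, _, feature in self.events)

    def active_users(self):
        # Distinct users who interacted with the system.
        return {user for _, user, _ in self.events}

# Example: two analysts exercising hypothetical DSS features.
log = UsageLog()
log.record("analyst1", "what-if model")
log.record("analyst1", "report export")
log.record("analyst2", "what-if model")

print(log.feature_counts()["what-if model"])  # 2
print(len(log.active_users()))                # 2
```

Counts and distinct-user figures like these are the kind of objective, quantitative evidence that can balance the anecdotal feedback developers tend to favor.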

As this discussion suggests, a comprehensive evaluation of a DSS may include collecting all four types of data. Qualitative data is generally more likely to be subjective or impressionistic, but both quantitative and qualitative data can be collected and interpreted objectively. We can collect data using questionnaires and expert reviewers, by videotaping one-on-one interaction between a user and an evaluator, and by using a small group of observers. In either a formative or a summative evaluation, data from users and potential users should have the major impact on the conclusions. The key is to create a positive, constructive feedback loop in formative evaluation. If the evaluation suggests the system cannot be built, then managers need to act quickly to end the project. A positive approach to evaluation can result in ending or improving a DSS project, or in discontinuing use of or rebuilding a legacy DSS.

A number of web pages credit Robert Stake with the following quote: "When the cook tastes the soup, that's formative; when the guests taste the soup, that's summative." I haven't found a citation for Stake's quote, but it's interesting and worth repeating. As always, your comments, feedback and questions are most welcome.

References:

  1. LinguaLinks®, SIL International

  2. Northwest Regional Educational Laboratory, Program Evaluation.

  3. Phillips, B., Social Research: Strategy and Tactics (3rd Edition), New York, NY: Macmillan, 1976.

  4. Power, D. J., “What is the difference between formative and summative DSS evaluation?” DSS News, Vol. 4, No. 2, January 19, 2003.

 


  • Dan Power

    Daniel J. "Dan" Power is a Professor of Information Systems and Management at the College of Business Administration at the University of Northern Iowa and the editor of DSSResources.com, the Web-based knowledge repository about computerized systems that support decision making; the editor of PlanningSkills.com; and the editor of DSS News, a bi-weekly e-newsletter. Dr. Power's research interests include the design and development of decision support systems and how these systems impact individual and organizational decision behavior.

