9.3.3 Guideline 3.3 -- Aggregating Performance Measures. Performance measures
should be appropriately weighted, scaled, and aggregated to produce a quantitative
measure of the total incremental net benefit obtained from conducting the decision
options, or other measures of value as defined for the specific RBP application.
Efforts should be made to minimize the potential for cognitive and motivational
scoring biases in developing the performance measures.
Discussion. Weights should reflect the relative values of obtaining improvements on
the various performance measures by the different decision options. Scaling functions
should account for any nonlinearities in the relative value of achieving various levels of
improvement against the corresponding performance measures. The method of
aggregation should be consistent with the dependent/independent relationships
among performance measures. Specifically, if uncertainties in performance measures
are not explicitly considered, then the method of aggregation should normally be a
measurable value function. Otherwise, if uncertainties are considered, a von
Neumann-Morgenstern utility function (reference [d]) should be used unless a
compelling case can be made that it is unnecessary or inappropriate to the role
assigned to the RBP. Value judgments are subjective and likely to differ among
stakeholders. It is important that such values accurately reflect the preferences of
the decision maker, not those of the system designer.
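As an illustration only (not part of this standard), the following Python sketch
shows one way raw performance-measure scores might be weighted, scaled, and
aggregated with an additive measurable value function, together with a simple
probability-weighted extension for the case where uncertainties are modeled
explicitly. Every weight, scaling function, option name, and score in the sketch is
hypothetical.

    # Illustrative sketch only; all weights, scaling functions, and
    # scores are hypothetical.

    def scale_cost(x_millions):
        # Nonlinear scaling: the value of cost savings shows diminishing
        # returns. Maps 0-10 $M of savings onto [0, 1].
        return min(x_millions / 10.0, 1.0) ** 0.5

    def scale_risk(x_score):
        # Linear scaling of a risk-reduction score already on a 0-10 scale.
        return x_score / 10.0

    # Weights express the decision maker's relative value of a full-range
    # improvement on each measure; here they sum to 1 for convenience.
    WEIGHTS = {"cost": 0.4, "risk": 0.6}
    SCALES = {"cost": scale_cost, "risk": scale_risk}

    def total_value(raw_scores):
        # Additive measurable value function: weighted sum of scaled scores.
        return sum(WEIGHTS[m] * SCALES[m](x) for m, x in raw_scores.items())

    def expected_value(scenarios):
        # Where uncertainty is modeled, a von Neumann-Morgenstern utility
        # over outcomes would replace the value function; this sketch simply
        # takes the probability-weighted mean of aggregated values.
        return sum(p * total_value(s) for p, s in scenarios)

    # Two hypothetical decision options scored on both measures.
    options = {
        "Option A": {"cost": 4.0, "risk": 7.0},
        "Option B": {"cost": 9.0, "risk": 3.0},
    }
    for name, raw in options.items():
        print(name, round(total_value(raw), 3))

    # Option A under two equally likely outcome scenarios.
    print(round(expected_value([(0.5, {"cost": 3.0, "risk": 6.0}),
                                (0.5, {"cost": 5.0, "risk": 8.0})]), 3))

Note that the additive form assumes the performance measures are preferentially
independent; if they are not, consistent with the discussion above, a different
aggregation form would be required.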
9.3.4 Guideline 3.4 -- Consistency in Scoring. The scoring process should be
designed to
ensure that scorers make consistent assumptions in the assignment of scores.
Discussion. All prioritization methods require that decision options be rated or scored
against the objectives. There are three aspects of scoring that, if controlled, can do
the most to ensure consistency. These are discussed below.
a. Scoring Teams. The direct approach to achieving consistency is to have a single
team evaluate all decision options. This approach demands a large time
commitment, however, and it may be difficult to assemble a single team with an
appropriate range of expertise. If multiple teams are assembled to share the
scoring responsibility, it is essential to include a quality assurance mechanism
to ensure that evaluations are comparable across scoring groups.
b. Judgments and Biases. The guidance provided by OMB (reference [k]) is
instructive in dealing with judgments and biases. This guidance states that the
assessment should generate a credible, objective, realistic, and scientifically
balanced analysis; present the information used in scoring, such as dose
response and exposure (or analogous material for non-health assessments); and
explain the confidence in each assessment by clearly delineating strengths,
uncertainties, and assumptions, along with the influence of these factors on the
scoring. These data and assumptions should not reflect unstated or unsupported
preferences for protecting public health and the environment, or unstated safety
factors to account for uncertainty and unmeasured variability. If systematic biases
are identified, adjustments can be made to counter these biases. If scores have
been collected over the course of several applications within a specific area,