Short paper – Client Results

Second part of previous assignment. 


Module Eight Assignment Guidelines 

Overview: This assignment will allow you to consider ways to deliver the results of an assessment in an ethical and strength-based manner. You will be using the results from a previous assignment and transforming them into a transcript that could be used with a real-life client.

Prompt: Before you begin this assignment, revisit the short paper you wrote for Module Five, in which you analyzed the results of Bob's intelligence and achievement testing. Specifically, you identified his strengths and weaknesses related to the WRAT-4 and WASI-2. Elements of your paper included Bob's strengths and weaknesses, how his strengths and weaknesses applied to his overall functioning, and suggestions or recommendations for him.


For this assignment, you will be using the elements from that paper and turning them into a written or verbal transcript, as if you were delivering the results to Bob in real life. This must be done in an ethical manner, with the client's best interests at the forefront of the delivery. You will be providing a review of the results in layman's terms, using strength-based and nonjudgmental language and focusing on the summary of results, the use of strength-based language, the summary of recommendations, and an accurate portrayal of the findings.

Your assignment must be submitted as a written transcript, an audio recording, or a video. There is no page requirement or time requirement for this assignment as long as all critical elements are addressed. Remember that your intended audience is Bob and not your instructor, so speak directly to Bob when delivering the results. You should use the terms "you," "your," and "yours."

Specifically, the following critical elements must be addressed:

I. Summary of Results: Results from Module Five Short Paper are summarized in a manner that is organized and ethical.

II. Use of Strength-Based Language: Appropriate, ethical language is used to speak to the patient.

III. Summary of Recommendations: Recommendations from Module Five Short Paper are summarized in a manner that is organized and ethical.

IV. Accurate Portrayal of Findings: Results and recommendations are accurately portrayed to the patient.


Analyzing a Sample Intelligence-Achievement Report


The Sample Intelligence-Achievement Report articulates Bob's scores on the Wide Range Achievement Test 4 (WRAT-4) and the Wechsler Abbreviated Scale of Intelligence 2 (WASI-2). On the WASI-2, Bob's Full Scale IQ score (FSIQ-4) was established to be average. Average scores on the subscales of this test indicate that the individual's intellectual abilities and performance are typical relative to peers of a similar age, and that he should be able to exhibit what is considered normal intellectual performance. Bob's abilities on most of the subscales are average, including the Verbal Comprehension Index (his knowledge of English word definitions and his verbal reasoning abilities) and the Perceptual Reasoning Index (his nonverbal problem-solving abilities). However, Bob's score in visual-spatial skills falls within the low average range, which represents his first weakness. This means that Bob may have difficulty with tasks that depend on visual-spatial processing; for example, when exposed to demanding visual environments, he may not perform as well as other peers of his age.

The WRAT-4, on the other hand, is used to evaluate fundamental academic skills (Keat & Ismail, 2011). On several subscales of this test, Bob performs at an average level compared with peers of the same age: Word Reading (standard score of 99), Sentence Comprehension (standard score of 93), and the Reading Composite (standard score of 95). However, Bob's standard score of 78 in Spelling falls within the borderline range, which suggests that he is likely to perform considerably worse than his peers on English word-spelling tasks. This is a clear weakness for Bob. Another weakness appears in Math Computation (standard score of 83), which indicates that Bob will most likely perform worse than his peers, especially on tasks involving increasingly complex mathematical problems.

As already mentioned, average scores on the subscales of both the WASI-2 and the WRAT-4 indicate that Bob shows normal intellectual ability relative to his peers. These scores cannot be characterized as strengths; Bob would have needed to score above average or higher on at least one scale to earn that characterization. It is clear, however, that he has weaknesses in specific areas, especially those that require visual-spatial processing skills. Because Bob does not have any strength that stands out from the average scores discussed above, this analysis will outline how his weaknesses may affect his overall functioning. Bob's comparative scores in the two areas of nonverbal abilities show that he may struggle among his peers. The WRAT-4 has revealed weaknesses in both spelling and math computation, and these weaknesses are likely to affect his functioning in academic environments, because spelling and math computation recur across numerous academic areas. This disadvantage may cause him to struggle in an academic environment and to perform below the level of his peers.

Based on this analysis, several recommendations can be offered to Bob to help his situation. To begin with, specific behavioral interventions can help individuals sharpen their visual-spatial skills, and these can be recommended to Bob to improve his abilities in this competency. Additionally, his spelling skills can be improved through behavioral activities that target that particular competency. Similarly, specific mathematics interventions can be used with Bob to improve his computational skills (Codding et al., 2007).

References

Codding, R. S., Shiyko, M., Russo, M., Birch, S., Fanning, E., & Jaspen, D. (2007). Comparing mathematics interventions: Does initial level of fluency predict intervention effectiveness? Journal of School Psychology, 45(6), 603-617.

Keat, O. B., & Ismail, K. B. (2011). The relationship between cognitive processing and reading. Asian Social Science, 7(10), 44.

HUMAN PERFORMANCE

Delivering effective performance feedback:
The strengths-based approach

Herman Aguinis *, Ryan K. Gottfredson, Harry Joo

Kelley School of Business, Indiana University, 1309 E. Tenth Street, Bloomington, IN 47405-1701, U.S.A.

Business Horizons (2012) 55, 105-111

KEYWORDS: Human resource management; Performance management; Performance appraisal; Employee development; Job performance; Feedback

Abstract: Performance feedback has significant potential to benefit employees in terms of individual and team performance. Moreover, effective performance feedback has the potential to enhance employee engagement, motivation, and job satisfaction. However, managers often are not comfortable giving performance feedback and such feedback, if improperly relayed, causes more harm than good. In this installment of HUMAN PERFORMANCE, we describe a shift from traditional weaknesses-based feedback (which relies on negative commentary focused on employees' shortcomings) to the more constructive approach of strengths-based feedback (which relies on employee affirmation and encouragement). We explain why a strengths-based approach to performance feedback is superior to the weaknesses-centered approach, and offer nine research-based recommendations on how to deliver effective performance feedback employing a strengths-based method.

© 2011 Kelley School of Business, Indiana University. All rights reserved.

Success is achieved by developing our strengths, not by eliminating our weaknesses.
– Marilyn vos Savant

1. Building up vs. breaking down

A key responsibility of successful managers is to help their employees improve job performance on an ongoing basis (Aguinis, Joo, & Gottfredson, 2011). Managers carry out this responsibility by implementing performance management systems that are designed to align performance at the individual, unit, and organizational levels. Notably, performance feedback is a critical component of all performance management systems (Aguinis, 2009; DeNisi & Kluger, 2000). Performance feedback can be defined as information about an employee's past behaviors with respect to established standards of employee behaviors and results. The goals of performance feedback are to improve individual and team performance, as well as employee engagement, motivation, and job satisfaction (Aguinis, 2009).

* Corresponding author. E-mail address: haguinis@indiana.edu (H. Aguinis).

Unfortunately, managers are often uncomfortable giving performance feedback (Aguinis, 2009), and such feedback often does more harm than good in terms of helping employees improve their performance (DeNisi & Kluger, 2000). For example, Kluger and DeNisi (1996) conducted an extensive literature review and concluded that in more than one-third of the 131 studies they analyzed, performance feedback actually resulted in decreased performance. Furthermore, employees involved in a qualitative study said the following about the feedback that they had received: "The feedback meeting is a conflict meeting," "It was devastating," "The process was a waste of time," and "Feedback equals criticism and it is not nice" (Bouskila-Yam & Kluger, 2011). The discrepancy between performance feedback's intended and actual consequences constitutes a major concern to employees, managers, and organizations.

Although managers share an intuitive understanding that feedback plays a crucial role in improving individual and team performance, many managers do not know how to deliver feedback effectively. More specifically, managers quite frequently provide feedback in a manner that is excessively focused on employees' weaknesses. Yet, the same managers are typically unaware that such weaknesses-based feedback often fails to improve employee performance. To fully reap the benefits of using feedback, managers should instead primarily rely on a strengths-based approach to feedback that consists of identifying employees' areas of positive behavior and results that stem from their knowledge, skills, or talents. Next, we describe the traditional weaknesses-centered approach to feedback, the novel strengths-based approach, and why the strengths-based approach is superior. We close with a set of nine research-based recommendations on how to give effective performance feedback using a strengths-based approach.

2. The traditional weaknesses-based approach to feedback

Under the weaknesses-based approach to feedback, managers identify their employees' weaknesses (e.g., deficiencies in terms of their job performance, knowledge, and skills); provide negative feedback on what the employees are doing wrong or what the employees did not accomplish; and, finally, ask them to improve their behaviors or results by overcoming their weaknesses. The rationale behind weaknesses-based feedback is that weaknesses are areas where employees have potential to improve, and it is assumed that informing them of these problems will motivate them to improve their performance. In other words, the assumption is that, absent such communication, employees will not improve their performance (Steelman & Rutkowski, 2004).

Because employees' weaknesses can be detrimental to not only individual but also team and organizational performance, managers often point out what the employee did wrong and why the employee needs to improve. Such negative feedback can be illustrated with the following conversation between Tony, a branch manager at a bank, and Lisa, a teller at the bank:

Tony: Lisa, you haven't been greeting customers by saying, "Hi, welcome to XYZ Bank." We've talked about this a number of times now.

Lisa: I haven't done it a couple of times, but I'm getting better.

Tony: Okay; well, then, I need you to do even better. We need to make sure that we receive high customer service rankings so that we can get a big bonus at the end of the year.

Lisa: (Thinking to herself: He hasn't paid any attention to what I have been doing. I've been greeting almost all of my customers the way that he has asked. He never acknowledges me when I do things right and takes it for granted, but he sure is quick to point out any relative shortcomings. What a jerk!)

Although weaknesses-based feedback informs employees that certain behaviors and results are inappropriate or inadequate, several studies have concluded that such feedback entails unintended negative consequences. For example, negative feedback and criticism often lead to employee dissatisfaction, defensive reactions, a decreased desire to improve individual performance, and less actual improvement in the same (Burke, Weitzel, & Weir, 1978; Jawahar, 2010; Kay, Meyer, & French, 1965). Negative feedback is also frequently perceived as being inaccurate, and is unlikely to be accepted by the person receiving it (Fedor, Eder, & Buckley, 1989; Ilgen, Fisher, & Taylor, 1979; Steelman & Rutkowski, 2004). When feedback is focused on employee weaknesses, those giving the feedback generally adopt negative views of and attitudes toward the employees being evaluated (Gardner & Schermerhorn, 2004). These negative consequences help explain the general lack of empirical support for the benefits of feedback and why many managers have not experienced significant success in using feedback to boost employee performance (Kluger & DeNisi, 1996). Next, we describe an alternative and superior approach to feedback.

3. The superior strengths-based approach to feedback

Under the strengths-based approach to feedback, managers identify their employees' strengths in terms of their exceptional job performance, knowledge, skills, and talents; provide positive feedback on what the employees are doing to succeed based on such strengths; and, finally, ask them to maintain or improve their behaviors or results by making continued or more intensive use of their strengths. The reasons behind strengths-based feedback are that employee strengths are of great potential for growth and development, and that highlighting how these strengths can generate success on the job motivates employees to intensify the use of their strengths to produce even more positive behaviors and results (Buckingham & Clifton, 2001).

In contrast to weaknesses-based feedback, strengths-based feedback enjoys a significant number of advantages with few, if any, negative consequences. For example, strengths-based feedback enhances individual well-being and engagement (Clifton & Harter, 2003; Seligman, Steen, Park, & Peterson, 2005). This effect is particularly noteworthy because employee engagement is negatively related to turnover (r = -.30) and positively related to business-unit performance (r = .38) (Clifton & Harter, 2003). Strengths-based feedback also tends to increase employees' desire to improve their productivity (Jawahar, 2010) and heightens actual productivity (Clifton & Harter, 2003). Moreover, employees experience increased job satisfaction, perceptions of fairness, and motivation to improve job performance when their managers adopt helpful and constructive attitudes that are typical under the strengths-based approach (Burke et al., 1978; Seligman & Csikszentmihalyi, 2000).

Put simply: Given its documented advantages, the strengths-based approach to providing feedback is a superior alternative to the weaknesses-based approach. As is the case with many other management practices, however, execution is key (Bossidy & Charan, 2002). For instance, managers can make the mistake of being too vague, thereby limiting the potential performance- and job satisfaction-related benefits that such feedback can have on employees.

So, what can managers do to improve the effectiveness of performance feedback? To answer this question, we provide nine research-based recommendations on how to deliver feedback focused on a strengths-based approach.

4. Research-based recommendations for implementing a strengths-based approach to performance feedback

Table 1 presents a summary of our nine recommendations. Based on the earlier discussion, our first recommendation is to focus on a strengths-based approach. The strengths-based approach involves identifying strengths, providing positive feedback on how employees are using their strengths to exhibit desirable behaviors and achieve beneficial results, and asking them to maintain or improve their behaviors or results by making continued or more intensive use of their strengths.

The second recommendation is to not completely abandon a discussion of weaknesses, but concentrate on employees' knowledge (i.e., facts and lessons learned) and skills (i.e., steps of an activity) rather than talents (i.e., naturally or mainly innately recurring patterns of thought, feeling, and behavior). The feedback should be focused thus because knowledge and skills can be learned and improved, while talents are typically inherent to the individual. Given this recommendation, what are managers to do when an employee's inappropriate behaviors or inadequate results stem from weaknesses in certain talents rather than weaknesses in knowledge and skills? Our next recommendation addresses this issue.

The third recommendation is that managers adopt a strengths-based approach to managing their employees' talent weaknesses. In doing so, managers can follow Buckingham and Clifton's (2001) five suggestions. The first suggestion is to help employees improve a bit on the desired talents. But, keep in mind that employees are unlikely to substantially improve the talents that they lack. The second suggestion is that both managers and employees should design a support system that will serve as a crutch for talent weaknesses. For example, employees who engage in public speaking can remain calm by imagining that the audience members are naked. According to Buckingham and Clifton's third suggestion, managers should encourage their employees to see how their strongest talents can compensate for their talent weaknesses. For example, if an employee possesses the talent of responsibility yet struggles in networking because he possesses few social talents, then help the employee see that networking is an important responsibility. To follow the fourth suggestion, make it easier for employees to work with partners who possess the talents that the employees lack. The fifth and final suggestion is to prevent employees from engaging in tasks that strongly require talents they lack. Ways to implement this last suggestion include re-designing jobs for employees who are deficient in certain talents or giving other employees the responsibilities that require talents certain employees lack.

Table 1. Nine recommendations for delivering effective performance feedback focusing on a strengths-based approach

1. Adopt the strengths-based approach as the primary means of providing feedback
   - Identify employees' strengths.
   - Provide positive feedback on how employees are using their strengths to exhibit desirable behaviors and achieve beneficial results.
   - Ask employees to maintain or improve their behaviors or results by making continued or more intensive use of their strengths.

2. Closely link any negative feedback to employees' knowledge and skills rather than talents
   - Focus weaknesses-based feedback on knowledge and skills (which are more changeable) rather than talents (which are more difficult to acquire).

3. Adopt a strengths-based approach to managing employees' talent weaknesses
   - Help employees improve a bit on the desired talents, with an understanding that employees are unlikely to substantially improve the talents that they lack.
   - Create a support system that will serve as a crutch for a talent weakness.
   - Encourage employees to see how their strongest talents can compensate for their talent weaknesses.
   - Make it easier for employees to work with partners who possess the talents that they lack.
   - Re-design jobs for employees who are deficient in certain talents, and give other employees the responsibilities that require talents that certain employees lack.

4. Make sure the person providing feedback is familiar with the employee and the employee's job requirements
   - Make sure you are familiar with the employee's knowledge, skills, and talents.
   - Make sure you are familiar with the employee's job requirements and work context.

5. Choose an appropriate setting when giving feedback
   - Deliver feedback in a private setting.

6. Deliver the feedback in a considerate manner
   - Provide at least three pieces of positive feedback for every piece of negative feedback.
   - Start the feedback session by asking the employee what is working.
   - Allow employees to participate in the feedback process.

7. Provide feedback that is specific and accurate
   - Avoid making general statements such as "Good job!"
   - Evaluate and give feedback closely based on concrete evidence.

8. Tie feedback to important consequences at various levels throughout the organization
   - Explain that the behaviors exhibited and results achieved by the employee have an important impact not only on the employee in terms of rewards or disciplinary measures, but also on the team, unit, or even organization.

9. Follow up
   - Provide specific directions by including a development plan and checking up on any progress that is made after a certain period of time.

The fourth recommendation in Table 1 is that the person providing feedback needs to be familiar with the individual reviewee's knowledge, skills, and talents, as well as his or her job requirements (Fulk, Brief, & Barr, 1985; Kinicki, Prussia, Wu, & McKee-Ryan, 2004; Landy, Barnes, & Murphy, 1978; Steelman & Rutkowski, 2004). This is important because the credibility of the feedback provider can be quickly lost if feedback is given improperly. An example of feedback coming from a source with insufficient familiarity is when a district manager, who is not involved in the day-to-day operations of a work group and does not know the job requirements and work context very well, visits a local office and provides feedback that is based on hearsay or indirect third-party information.

Our fifth recommendation is to choose an appropriate setting when giving feedback, as the setting in which feedback is delivered truly matters. Specifically, feedback should be relayed in a private rather than public setting. Receiving feedback in front of coworkers can be very demeaning and detrimental to the employee. Also, although most people do not have a problem receiving strengths-based feedback in public, managers should take into account that certain individuals may be uncomfortable in the spotlight of public praise or recognition. Regardless of the approach, feedback will not result in positive consequences if given in the wrong setting.

Our sixth recommendation is to deliver feedback in a considerate manner (Steelman & Rutkowski, 2004). One way of doing so is to maintain an optimal ratio between strengths- and weaknesses-based feedback. That is, a manager should provide at least three pieces of positive feedback for every piece of negative feedback (Bouskila-Yam & Kluger, 2011). Another way of providing feedback in a considerate manner is to start the feedback by asking the employee what is working (Foster & Lloyd, 2007). Doing so allows the employee to feel more hopeful regarding the future and remain less defensive when negative feedback is given (Foster & Lloyd, 2007). Finally, we also encourage managers to allow employees to participate in the feedback process. Employees' satisfaction with the feedback they are given increases and their defensiveness decreases when they have an active role in the feedback process (Cawley, Keeping, & Levy, 1998).

Our seventh recommendation is that feedback should be specific and accurate. It should center on certain work behaviors and results, as well as the situations in which these were observed (Goodman, Wood, & Hendrickx, 2004). Avoid making general statements such as "Good job," "You're struggling today," or "Pick up the pace." Lack of specificity will result in failure to get the message through (Aguinis, 2009). In addition to being specific, feedback must be accurate (Elicker, Levy, & Hall, 2006; Steelman & Rutkowski, 2004). One way to maximize accuracy is to rely on concrete evidence (Jawahar, 2010).

Under our eighth recommendation, we encourage managers to give feedback that ties employee behaviors and results to other important consequences at various levels throughout the organization (Aguinis, 2009). Specifically, the person providing feedback should explain that the behaviors exhibited and results achieved by the employee have an important impact not only on the employee in terms of rewards or disciplinary measures, but also on that person's team, unit, and even organization (Aguinis, 2009). If employees' behaviors and results are not explained as being closely linked to other important outcomes, employees might develop the impression that the positive behaviors and results produced by their strengths are not sufficiently beneficial or important; they may similarly think that their negative behaviors and results are not particularly detrimental or significant.

Finally, our ninth recommendation is to follow up on feedback (Aguinis, 2009). Doing so entails providing specific directions to the employee through a development plan, as well as checking up on any progress that is made after a certain period of time. Via such diligence, employees will recognize that the feedback should be taken seriously.

5. How it's done: The nine principles of effective performance feedback at play

How would our recommended principles of feedback play out in an actual feedback session? Recall the conversation between Tony and Lisa that we used previously to provide an example of concepts related to feedback. Now, consider the following vignette in which Tony has been informally observing Lisa's performance and decides to provide feedback, both because of things she did well and areas in which she could improve when interacting with customers:

Tony: Lisa, after helping the remaining customers in line, will you come talk to me in my office? I want to compliment you on the great work you have been doing. I also want to talk about areas in which you can improve to become even better.

(10 minutes later)

Tony: Come in, Lisa; have a seat. As I mentioned earlier, I want to talk to you about some of the great things that you've been doing lately, as well as areas where you can improve. I'd like this time to be about how I can help you be your very best.

Lisa: I hope I have been doing well. I've been trying.

Tony: I can tell. Specifically, in what ways do you feel like you've been standing out?

Lisa: Well, maybe it's just me, but I hate it when our customers have to wait in line. Because of this, I really try my best to work quickly so that people don't have to wait so long.

Tony: That's really good. In fact, our monthly figures show that of all the tellers during the month of April, you conducted the most transactions. How does that make you feel?

Lisa: Really? I even took a few vacation days last month.


Tony: And because of your great work, we have a $50 gift card for you.

Lisa: Wow, thanks!

Tony: Obviously, you're great at being quick and efficient when working with customers. How do you feel this affects the quality of interactions that you have with them?

Lisa: I'm not sure. I can see that I could probably be more engaging, but I figure our customers just want to get in and get out. I mean, I always make sure that I greet them and ask how their day is going. So, I feel like I have a good balance between speed and quality.

Tony: I like how you are maintaining such a good balance; that's why you're one of our most accomplished employees. At the same time, I want to fulfill my duty of helping you become even better, so I'd appreciate your reflection on our monthly teller goal of 15 referrals for new bank accounts, checking accounts, and credit cards. Last month you had four referrals, and so far this month you've acquired two. How do you feel you're doing in this area?

Lisa: I guess I'm not doing as well as I probably could. I get so concerned with moving people through the line that I forget to ask them if they want to start up new accounts.

Tony: I see. So it seems that you are more likely to ask for referrals when there isn't a line, but when there is a line, you have a tendency to not ask for referrals. I want you to remember that your monthly bonus and the bank's overall yearly bonus are tied directly to the number of referrals you get. I want you to be happy with your bonuses, so what do you think you can do better?

Lisa: Now that I think about it, I do typically ask for referrals when there isn't a line. I don't know. I always see the prompting on the computer screen before I end a transaction, but I just don't want to inconvenience the people standing in line.

Tony: Preventing customer inconvenience is an important aspect of the job. So, what if, rather than asking people at the end of transactions whether they're interested in a new account, you instead ask them while you are running their transactions?

Lisa: Hmm, that's actually a good idea. I always just think about it after I am done with the transaction. Let me give it a shot the next few days and see how it goes.

Tony: Great. I'll follow up with you at the end of the week. Why don't we plan on having another conversation like this before you go to lunch on Friday?

Lisa: That sounds good. I'll look forward to it. Thanks!

In this vignette, Tony followed nearly all of the
recommendations for effective strengths-based
feedback. He began the interview by praising and
discussing in detail Lisa’s strengths, but he did not
shy away from discussing her weaknesses, either.
Tony emphasized how Lisa can use her strengths to
improve performance even further, and demon-
strated that he was familiar with the work Lisa
was doing. By establishing a proper setting in which
to provide his feedback, Tony guaranteed that the
conversation was confidential, thereby limiting any
defensiveness on Lisa’s part. To ensure Lisa that he
was providing credible feedback, Tony was consid-
erate and very specific. Although Tony did not dis-
cuss three positive pieces of feedback for each piece
of negative feedback, he did provide Lisa with a
reward in the form of a gift card, which probably
made her more open to the weaknesses-based feed-
back that he provided. In addition, Tony’s feedback
was based on concrete evidence; for example, he
prompted Lisa to acknowledge when she tended to
ask for referrals and when she did not.
Tony also discussed how Lisa’s lack of referrals tied
into specific rewards, demonstrating that referrals
were important to her as well as to the bank. Finally,
Tony gave Lisa some time to improve her behavior and
then established when he could follow up with her.

6. Conclusion

The purpose of performance feedback is to improve
individual and team performance, as well as em-
ployee engagement, motivation, and job satisfac-
tion. In this article, we described two alternative
approaches to feedback: the traditional weaknesses-
based approach and the superior strengths-based
approach. There are significant negative conse-
quences associated with the exclusive use of the
weaknesses-based approach. Accordingly, managers
should primarily adopt a strengths-based approach,
which focuses on what employees do well and
encourages the continued and further use of


these strengths. Table 1 provides a summary of nine
specific recommendations on how to deliver feed-
back using a strengths-based approach. Following
these recommendations will not only improve
future performance, but also make it easier for
managers to deliver feedback that will result in
important benefits for employees, managers, and
organizations.

References

Aguinis, H. (2009). Performance management (2nd ed.). Upper
Saddle River, NJ: Pearson Prentice Hall.

Aguinis, H., Joo, H., & Gottfredson, R. K. (2011). Why we hate
performance management–—and why we should love it. Busi-
ness Horizons, 54(6), 503—507.

Bossidy, L., & Charan, R. (2002). Execution: The discipline of
getting things done. New York: Crown Publishing.

Bouskila-Yam, O., & Kluger, A. N. (2011). Strength-based perfor-
mance appraisal and goal setting. Human Resource Manage-
ment Review, 21(2), 137—147.

Buckingham, M., & Clifton, D. O. (2001). Now, discover your
strengths. New York: The Free Press.

Burke, R. J., Weitzel, W., & Weir, T. (1978). Characteristics of
effective employee performance review and development
interviews: Replication and extension. Personnel Psychology,
31(4), 903—919.

Cawley, B. D., Keeping, L. M., & Levy, P. E. (1998). Participation in
the performance appraisal process and employee reactions:
A meta-analytic review of field investigations. Journal of
Applied Psychology, 83(4), 615—633.

Clifton, D. O., & Harter, J. K. (2003). Investing in strengths. In K.
S. Cameron, J. E. Dutton, & R. E. Quinn (Eds.), Positive
organizational scholarship: Foundations of a new discipline
(pp. 111—121). San Francisco: Berrett-Koehler.

DeNisi, A. S., & Kluger, A. N. (2000). Feedback effectiveness: Can
360-degree appraisals be improved? Academy of Management
Executive, 14(1), 129—139.

Elicker, J. D., Levy, P. E., & Hall, R. J. (2006). The role of leader-
member exchange in the performance appraisal process.
Journal of Management, 32(4), 531—551.

Fedor, D. B., Eder, R. W., & Buckley, M. R. (1989). The contribu-
tory effects of supervisor intentions on subordinate feedback

responses. Organizational Behavior and Human Decision
Processes, 44(3), 396—414.

Foster, S. L., & Lloyd, P. J. (2007). Positive psychology principles
applied to consulting psychology at the individual and group
level. Consulting Psychology Journal: Practice and Research,
59(1), 30—40.

Fulk, J., Brief, A. P., & Barr, S. H. (1985). Trust-in-supervisor and
perceived fairness and accuracy of performance evaluations.
Journal of Business Research, 13(4), 301—313.

Gardner, W. L., & Schermerhorn, J. R., Jr. (2004). Unleashing
individual potential: Performance gains through positive or-
ganizational behavior and authentic leadership. Organiza-
tional Dynamics, 33(3), 270—281.

Goodman, J. S., Wood, R. E., & Hendrickx, M. (2004). Feedback
specificity, exploration, and learning. Journal of Applied
Psychology, 89(2), 248—262.

Ilgen, D. R., Fisher, C. D., & Taylor, M. S. (1979). Consequences of
individual feedback on behavior in organizations. Journal of
Applied Psychology, 64(4), 349—371.

Jawahar, I. M. (2010). The mediating role of appraisal feedback
reactions on the relationship between rater feedback-related
behaviors and ratee performance. Group and Organization
Management, 35(4), 494—526.

Kay, E., Meyer, H. H., & French, J. R. P., Jr. (1965). Effects
of threat in a performance appraisal interview. Journal of
Applied Psychology, 49(5), 311—317.

Kinicki, A. J., Prussia, G. E., Wu, B., & Mckee-Ryan, F. M. (2004). A
covariance structure analysis of employees’ response to per-
formance feedback. Journal of Applied Psychology, 89(6),
1057—1069.

Kluger, A. N., & DeNisi, A. S. (1996). The effects of feedback
interventions on performance: A historical review, a meta-
analysis, and a preliminary feedback intervention theory.
Psychological Bulletin, 119(2), 254—284.

Landy, F. J., Barnes, J. L., & Murphy, K. R. (1978). Correlates of
perceived fairness and accuracy of performance evaluation.
Journal of Applied Psychology, 63(6), 751—754.

Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive
psychology: An introduction. American Psychologist, 55(1),
5—14.

Seligman, M. E. P., Steen, T. A., Park, N., & Peterson, C. (2005).
Positive psychology progress: Empirical validation of inter-
ventions. American Psychologist, 60(5), 410—421.

Steelman, L. A., & Rutkowski, K. A. (2004). Moderators of em-
ployee reactions to negative feedback. Journal of Managerial
Psychology, 19(1), 6—18.

  • Delivering effective performance feedback: The strengths-based approach
  • Building up vs. breaking down
  • The traditional weaknesses-based approach to feedback
  • The superior strengths-based approach to feedback
  • Research-based recommendations for implementing a strengths-based approach to performance feedback
  • How it's done: The nine principles of effective performance feedback at play
  • Conclusion
  • References

Enhancing feedback and improving feedback: subjective
perceptions, psychological consequences, behavioral
outcomes
Constantine Sedikides1, Michelle A. Luke2, Erica G. Hepper3

1Psychology Department, University of Southampton
2School of Business, Management and Economics, University of Sussex
3School of Psychology, University of Surrey

Correspondence concerning this article should be addressed to Constantine Sedikides, Psychology Department, Center for Research on Self and Identity, University of Southampton, Southampton SO17 1BJ, England, UK. E-mail: cs2@soton.ac.uk

doi: 10.1111/jasp.12407

Abstract

Three experiments examined subjective perceptions, psychological consequences, and behavioral outcomes of enhancing versus improving feedback. Across experiments, feedback delivery and assessment were sequential (i.e., at each testing juncture) or cumulative (i.e., at the end of the testing session). Although enhancing feedback was seen as more satisfying than useful, and improving feedback was not seen as more useful than satisfying, perceptions differed as a function of short-term versus long-term feedback delivery and assessment. Overall, however, enhancing feedback was more impactful psychologically and behaviorally. Enhancing feedback engendered greater success consistency, overall satisfaction and usefulness, optimism, state self-esteem, perceived ability, and test persistence intentions; improving feedback, on the other hand, engendered greater state improvement. The findings provide fodder for theory development and applications.

Feedback is a common occurrence in daily life. Employees,

students, actors, or athletes receive it frequently from their

managers, instructors, directors, or coaches, respectively. A

body of literature attests to its relevance. Feedback, for exam-

ple, may contribute to the formation of competence self-

views and intrinsic task values (Gniewosz, Eccles, & Noack,

2014; Harackiewicz, 1979). It may also influence subsequent

responses, including job performance (Brown, Hyatt, & Ben-

son, 2010; Whitaker & Levy, 2012) and educational attain-

ment (Hattie & Timperley, 2007; Kluger & DeNisi, 1996).

Such responses, however, may not be what the feedback

giver (e.g., manager, teacher) had in mind (Fisher, 1979;

Gabriel, Frantz, Levy, & Hilliard, 2014; Kluger & DeNisi,

1996) and may not necessarily be in the recipient’s (e.g.,

employee’s, student’s) best interest (Gregory & Levy, 2012;

Ilgen & Davis, 2000; Kulhavy, 1977). Therefore, understand-

ing how recipients perceive the feedback in the first place is

crucial, if well-meaning evaluators wish to shape effectively

recipient responding for organizational or educational bene-

fit, or if recipients wish to maximize feedback-derived advan-

tages (Atwater & Brett, 2005; Brett & Atwater, 2001; Hattie &

Timperley, 2007). Do recipients, for example, perceive feed-

back as satisfying or useful? Perceptions of satisfaction and

usefulness are arguably prerequisites for recipients to engage

with and benefit from feedback. Understanding the psycho-

logical consequences and behavioral outcomes of feedback is

equally important. How do recipients, for example, feel about

and respond to feedback that aims at satisfying them versus

improving them? We explore, in this article, comparative per-

ceptions of enhancing and improving feedback, as well as

some of its potential psychological consequences (i.e., opti-

mism, state self-esteem, state improvement, perceived ability)

and behavioral outcomes (i.e., persistence

intentions).

Background and scope

The bulk of the literature has been concerned with the critical

(i.e., negative) versus enhancing (i.e., positive) dimension of

feedback. This literature, for example, has examined critical

and enhancing feedback in terms of recall, goal pursuit, or

performance (Fishbach, Eyal, & Finkelstein, 2010; Sedikides,

Green, Saunders, Skowronski, & Zengel, 2016), perceptions

of one’s competence or the evaluator (Aronson & Linder,

1965; Vallerand & Reid, 1984), and judgments of test validity

or credibility (Campbell & Sedikides, 1999; Wyer & Frey,

1983). A generalized statement based on this large literature

© 2016 Wiley Periodicals, Inc.

Journal of Applied Social Psychology 2016, 46, pp. 687–700


is that, on balance, enhancing feedback is seen as more satis-

fying and useful than critical feedback (Brett & Atwater,

2001; Hepper & Sedikides, 2012; Hsee & Abelson, 1991; Sedi-

kides & Gregg, 2008; Sutton, Hornsey, & Douglas, 2012).

Little research, however, has addressed another pivotal

feedback dimension, enhancing versus improving. For the

purposes of our research, enhancing feedback will refer to

consistently positive information linked to task performance,

whereas improving feedback will refer to an upward infor-

mation trajectory linked to task performance. How enhanc-

ing versus improving feedback is perceived, felt, and reacted

upon is not well understood. This is somewhat surprising,

given the growing presence of improvement motivation (e.g.,

the desire to improve) in the self-evaluation literature

(Breines & Chen, 2012; Collins, 1996; Green, Sedikides, Pin-

ter, & Van Tongeren, 2009; Heine & Raineri, 2009; Kurman,

2006; Pyszczynski, Greenberg, & Arndt, 2012; Sedikides,

2009). Do individuals perceive one type of feedback as more

satisfying or more useful than the other? Do the two feedback

types elicit different psychological and behavioral reactions?

Are perceptions, psychological consequences, and behavioral

outcomes influenced by repeated (i.e., multiple-occasion)

feedback delivery?

We explored, in three experiments, how subjective percep-

tions, psychological consequences, and behavioral outcomes

are impacted within a particular type of feedback and also

between types of feedback. We were concerned with task level

feedback (i.e., how well tasks are performed; Hattie & Tim-

perley, 2007) and externally-framed (rather than internally-

framed) feedback (Möller, Pohlmann, Köller, & Marsh,

2009). Further, we focused on feedback that was (a) based on

multiple testing occasions; (b) delivered to recipients sequen-

tially (i.e., at each testing juncture) or cumulatively (i.e., at

the end of the testing session); and (c) assessed (in terms of

perceptions, psychological consequences, and behavioral out-

comes) sequentially or cumulatively. Enhancing feedback was

consistently positive (e.g., percentile rankings in relation to

other test-takers could be 92, 90, 91, and 92 across four ses-

sions), whereas improving feedback tracked an upward per-

formance trajectory (e.g., percentile rankings in relation to

other test-takers could be 59, 68, 81, and 92 across four

sessions).

Theoretical and practical
considerations

Our exploratory foray was informed by two contrasting theo-

retical perspectives. The self-enhancement perspective posits

that individuals strive mostly for information positivity, with

information improvement value playing a secondary role

(Alicke & Sedikides, 2011; Brown & Dutton, 1995; Dunning,

2005; Hepper, Gramzow, & Sedikides, 2010; Sedikides &

Strube, 1997). This perspective predicts that enhancing (i.e.,

uniformly-positive) feedback will be perceived as more satis-

fying than improving (i.e., upward-trajectory) feedback, and

also as generally more satisfying than useful, because of its

hedonic tone. The perspective also anticipates that enhancing

feedback will exert stronger psychological and behavioral

impact than improving feedback. The self-improvement per-

spective, on the other hand, posits that individuals strive

mostly for improvement information, giving secondary

importance to information positivity (Gregg, Sedikides, &

Gebauer, 2011; Markman, Elizaga, Ratcliff, & McMullen,

2007; Prelec & Loewenstein, 1997; Sedikides & Hepper, 2009;

Taylor, Neter, & Wayment, 1995). This perspective predicts

that improving feedback will be perceived as more useful

than enhancing feedback, and also as generally more useful

than satisfying, because of its utilitarian value. Further, this

perspective anticipates that improving feedback will have

greater psychological and behavioral impact than enhancing

feedback. Although the two perspectives make general pre-

dictions about perceptions of feedback, they do not offer spe-

cific enough guidance about perceptions of feedback at

distinct junctures of delivery or assessment; this is a matter of

exploration.

Not only will the investigation of perceptions, psychologi-

cal consequences, and behavioral outcomes of enhancing and

improving feedback stretch the scope of the self-

enhancement and self-improvement perspectives, but it will

also address external validity issues. In ecological settings

(e.g., occupational environments, classrooms, artistic per-

formances, athletic events), feedback is often targeted toward

both enhancement and improvement, while being delivered

on multiple (as opposed to single) occasions. In addition, in

organizational settings, employees appear to desire, not just

self-enhancement feedback, but constructive or self-

improvement feedback, if one were to consult popular busi-

ness coaching and training books (e.g., Silberman & Hans-

burg, 2005). Self-improvement motivation has indeed been

investigated in such settings as organizations (Seifert, Yukl, &

McDonald, 2003), university enrolment (Clayton & Smith,

1987), the classroom (Harks, Rakoczy, Hattie, Besser, &

Klieme, 2014; Ryan, Gheen, & Midgley, 1998), volunteering

(Dickinson, 1999), correctional facilities (Neiss, Sedikides,

Shahinfar, & Kupersmidt, 2006), and enlistment in the army

(Pliske, Elig, & Johnson, 1986); however, perceptions of

improving feedback juxtaposed to perceptions of enhancing

feedback, as well as comparative psychological consequences

and behavioral outcomes, have not been addressed.

Perceptions of feedback satisfaction and usefulness ought

to be investigated for both theoretical and practical reasons.

Satisfaction reflects the affective and valence focus of the self-

enhancement motive, whereas usefulness reflects the con-

structive focus of the self-improvement motive. Moreover, in

organizational settings for example, it is arguably vital for

feedback (e.g., appraisals) to be perceived as useful in order


for staff to engage with both feedback and management in a

mutually beneficial manner. In addition, organizations, espe-

cially those competing for talent, are often under pressure to

devise ways to keep their staff satisfied.

Experiment 1: sequential feedback
delivery and cumulative feedback
assessment

In Experiment 1, we addressed, for the first time, subjective

perceptions of self-enhancing and self-improving feedback.

We note that in this and all subsequent experiments, we (a)

randomly assigned participants to between-subjects factors

of balanced designs, (b) tested participants in individual

cubicles, and (c) obtained no sex differences or counterbal-

ancing order effects.

Participants were under the impression that they were test-

ed in four key domains of human functioning: creativity, ver-

bal intelligence, social sensitivity, analytical ability. Numerical

feedback, either enhancing or improving, was delivered at

several (i.e., four) junctures, and feedback perceptions were

assessed cumulatively at the end of the testing session. The

starting point for enhancing and improving feedback was dif-

ferent (positive for enhancing, average for improving), but

the end-point was identical (i.e., positive). While providing a

preliminary test of the self-enhancement and self-improvement

perspectives, the experiment simulated multiple-occasion feed-

back delivery to employees, students, actors, or athletes by a

supervisor, instructor, director, or coach, respectively. Would

such feedback be perceived as satisfying or useful at the end of

a business quarter, academic semester, rehearsal period, or ath-

letic event?

Method

Participants and design

Participants were 102 introductory psychology students at

University of North Carolina in Chapel Hill (71 female, 31

male), who volunteered for course credit. Information about

participant age is unavailable, due to a coding error. Never-

theless, the vast majority of participants were traditional stu-

dents, aged between 18 and 22 years. The design was a 2
(feedback type: enhancing, improving) × 2 (feedback rating:
satisfaction, usefulness) mixed factorial, with repeated
measures on the latter factor.

Procedure and measures

Participants learned that they would be assessed on four piv-

otal domains of human functioning: creativity, verbal intelli-

gence, social sensitivity, analytic ability. The relevant tests had

ostensibly been standardized and administered to university

students since 1985 by the Educational Testing Service in

order to study the impact of the university environment on

social skills. Participants were then handed a booklet contain-

ing the tests, which were divided into four sections. They

received feedback (featuring an enhancing or improving tra-

jectory) after each section.

The first section, consisting of Raven’s Progressive Matrices

(RPM; 10 minutes), assessed creativity. Participants learned

that the RPM measures spatial perception and creativity, and

is a valid indicator of superior memory and innovative think-

ing. The RPM comprised eight questions. Participants deci-

phered a pattern in the displayed figures and selected, from

eight choices, the correct item to complete the pattern.
Feedback followed.

The second section, consisting of the Verbal Fluency Test

(4 minutes) and the Analogies Test (5 minutes), assessed ver-

bal intelligence. Participants learned that better test scores

were associated with higher IQ and greater professional suc-

cess. For the Verbal Fluency Test, participants were given two

sets of four letters (L, C, E, N; F, O, S, P) and were asked to

generate as many 4-word sentences as possible using the

specified first letters for each word. For the Analogies Test,

participants were to complete 10 analogies. They received

three words, the first two of which were related. Their task

was to pick the word that related to the stimulus word in the

same way as the first two words. For example, the correct

answer for the analogy “Shoe: Foot:: Glove: (a. Arm, b.

Elbow, c. Hand)” would be Hand, because Hand is related to

Glove in the same way as Foot is related to Shoe. Feedback

followed.

The third section, consisting of the Perception of Relation-

ships Test (5 minutes) and the Perception of Deception Test (5

minutes), assessed social sensitivity. Participants learned that

individuals who performed well on these tasks were more

adept at solving interpersonal conflicts and had longer-

lasting relationships. We adapted the Perception of Relation-

ships Test from the Social-Cognitive Aptitude Test (Crocker,

Thompson, McGraw, & Ingerman, 1987). Participants read

paragraphs about two couples and indicated their impression

of each couple, whether the couple members were supportive

of each other, and the likelihood that each couple would still

be together in one year. In the Perception of Deception Test,

participants read two incidents (a man late for a date, a city

council member accused of neglecting to report campaign

contributions). Then participants indicated their impression

of each character, the quality of the relationship in the first

incident, the popularity of the city council member in the

second incident, and whether the main characters were lying.

Feedback followed.

The fourth and final section, consisting of the Analytical

Ability Test (9 minutes), assessed logical reasoning. Partici-

pants learned that better performance was linked with success

in careers that involve critical thinking skills. The test asked

participants to determine in what grade each of eight


children was and what costume they wore in the Thanksgiv-

ing pageant. Feedback followed.

The feedback, in the form of percentile rankings in rela-

tion to other university student test-takers, was either

enhancing or improving across the test sections. In the

enhancing condition, participants received feedback that

started at a high level and remained constant. The section

scores were: 92, 90, 91, 92. In the improving condition, par-

ticipants received feedback that started relatively low and

became progressively higher. The section scores were: 59, 68,

81, 92.
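The two feedback schedules can be laid out in a few lines; this is only an illustrative sketch of the numbers above, not part of the study materials. Both schedules end at the same point, so the conditions differ only in trajectory:

```python
# Percentile rankings delivered after each of the four test sections.
enhancing = [92, 90, 91, 92]  # consistently high, essentially flat
improving = [59, 68, 81, 92]  # starts near average, rises steadily

# Both schedules end at the 92nd percentile, so any difference in
# perceptions reflects the trajectory rather than the final standing.
assert enhancing[-1] == improving[-1] == 92

# Section-to-section gains in the improving condition are all positive.
gains = [later - earlier for earlier, later in zip(improving, improving[1:])]
print(gains)  # [9, 13, 11]
```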

Finally, participants completed the satisfaction and usefulness
scales in counterbalanced order (1 = not at all, 9 = very much).
The satisfaction scale comprised three questions asking how
pleased, satisfied, and content participants were with the
feedback (α = .95). The usefulness scale comprised three
questions asking how useful, helpful, and constructive
participants considered the feedback (α = .95). Responses to
the two scale indices were correlated, r(100) = .50, p < .001.
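The reliabilities reported for these 3-item scales are presumably Cronbach's alpha; a minimal sketch of that computation, using made-up item responses rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale, given one list of scores per item."""
    k = len(items)

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(scores) for scores in zip(*items)]  # per-participant sums
    item_var_sum = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 1-9 ratings from five participants on the three
# satisfaction items (pleased, satisfied, content).
pleased   = [7, 8, 5, 9, 6]
satisfied = [7, 7, 4, 9, 6]
content   = [6, 8, 5, 8, 7]
alpha = cronbach_alpha([pleased, satisfied, content])
print(round(alpha, 2))  # high internal consistency for these made-up items
```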

Results and discussion

Satisfaction and usefulness

Overall, participants in the enhancing condition (M = 6.53, SD = 1.78) rated the feedback higher (i.e., perceived it as more satisfying and useful) than those in the improving condition (M = 5.65, SD = 1.78), feedback type main effect F(1, 100) = 6.19, p = .015, η²partial = .06. Also, participants overall perceived the feedback as descriptively but not significantly more satisfying (M = 6.25, SD = 1.96) than useful (M = 5.92, SD = 2.27), feedback rating main effect F(1, 100) = 2.57, p = .112, η²partial = .03.

Crucially, the interaction was significant, F(1, 100) = 4.38, p = .039, η²partial = .04. We proceeded to calculate four comparison tests, using the Bonferroni correction (.05/4 = .0125). We examined the effects of feedback type separately on satisfaction and usefulness (i.e., each level of feedback rating). Participants in the enhancing condition (M = 6.91, SD = 1.89) perceived feedback as more satisfying than those in the improving condition (M = 5.59, SD = 1.81), t(100) = 3.58, p = .001, d = 0.77; however, participants in the enhancing (M = 6.14, SD = 2.37) and improving (M = 5.70, SD = 2.16) conditions perceived feedback as equivalently useful, t(100) = 1.00, p = .321, d = 0.19. We also examined the effects of feedback rating separately for each feedback type condition (i.e., enhancing, improving). Participants in the enhancing condition perceived the feedback as more satisfying than useful, t(50) = 2.86, p = .006, d = 0.40; however, participants in the improving condition perceived the feedback as equivalently satisfying and useful, t(50) = −0.32, p = .750, d = −0.04.
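The Bonferroni logic of these four follow-up tests can be checked directly; a small sketch using the p-values reported above:

```python
# Family-wise error control: split the .05 alpha across the four tests.
alpha_family = 0.05
n_tests = 4
alpha_per_test = alpha_family / n_tests  # .0125, the threshold used above

# p-values from the four comparison tests reported in the text.
p_values = {
    "satisfaction: enhancing vs. improving": 0.001,
    "usefulness: enhancing vs. improving": 0.321,
    "enhancing: satisfying vs. useful": 0.006,
    "improving: satisfying vs. useful": 0.750,
}

significant = {name: p < alpha_per_test for name, p in p_values.items()}
# Only the satisfaction advantage of enhancing over improving feedback,
# and the satisfying-over-useful difference within the enhancing
# condition, clear the corrected threshold.
```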

Summary

Overall, participants regarded enhancing (compared to

improving) feedback as more satisfying. Furthermore, they

regarded enhancing feedback as more satisfying than use-

ful, whereas they regarded improving feedback as equiva-

lently useful and satisfying. Although these findings are

generally consistent with the self-enhancement perspec-

tive, it is possible that the design of Experiment 1 did not

allow for a fair test of the self-improvement perspective. In

particular, the delivery and assessment of the feedback

may have afforded limited opportunities for improvement,

thus reducing the feedback’s utilitarian value. Experiment

2 addressed this potential limitation.

Experiment 2: sequential feedback
delivery and sequential feedback
assessment

In Experiment 2, we asked a more focused question: Do par-

ticipants perceive the two feedback types (i.e., enhancing and

improving) differently when feedback is both delivered and

assessed at each performance juncture? Participants were

under the impression that they were tested in the same four

key domains as in the previous experiment. We delivered

feedback, either enhancing or improving, at several junctures

and assessed feedback perceptions separately at each juncture

(Ariely, 1998; Ilies, Nahrgang, & Morgeson, 2007;
Tonidandel, Quiñones, & Adams, 2002). This experiment simulated

situations such as the appraisal of multiple-occasion

(enhancing or improving) feedback administered to employ-

ees, students, actors, or athletes over the course of a business

quarter, academic term, rehearsal period, or athletic event.

Will recipients perceive such feedback as satisfying or useful

on each occasion? In addition, this experiment examined a

potential psychological consequence of feedback, optimism

about performance on future aptitude tests. Will enhancing

or improving feedback elicit higher optimism at the end of

the testing session (i.e., cumulatively)? This was an open-

ended question, as the relevant literature is equivocal (Sedi-

kides, 2012; Sedikides & Hepper, 2009; Taylor & Brown,

1988).

Method
Participants and design

Sixty University of Southampton undergraduates (35 female,
6 male, 19 undeclared; M_age = 19.27, SD_age = 3.21)
participated in exchange for course credit. We excluded (on an
a priori basis) 10 additional participants due to incomplete
responses (n = 3), errors during data collation (n = 6), or
suspicion (n = 1). The design was a 2 (feedback type:
enhancing, improving) × 2 (feedback rating: satisfaction,

usefulness) × 4 (time: 1, 2, 3, 4) mixed factorial, with
repeated measures on the last two factors.

Procedure and measures

Under a pretext similar to that of Experiment 1, participants

completed four testing sections via computer and received

feedback (enhancing or improving) following each one. Dis-

tinctly from Experiment 1, they also indicated their percep-

tions of feedback following each section.

The first section, consisting of the Uses Test (6 minutes),

assessed creativity. Participants generated as many uses as

possible for a candle, a brick, and a spoon (Sedikides, Camp-

bell, Reeder, & Elliot, 1998). The second section, consisting

of the Verbal Fluency Test (4 minutes) and the Analogies Test

(5 minutes), assessed verbal intelligence and was the same as

in Experiment 1. The third section, consisting of the Percep-

tion of Relationships Test (5 minutes) and the Perception of

Deception Test (5 minutes), assessed social sensitivity and was

virtually identical to that of Experiment 1. The fourth and

final section, consisting of an Analytical Capacity Test (10

minutes), assessed logical thinking by asking participants to

decipher the full names and habitual situations of several per-

sons who had recently moved house.

After each section, participants received computer-

administered feedback, which represented a percentile rank-

ing in relation to other university student test-takers. In the

enhancing condition, the feedback started and ended at a

high level (92, 90, 91, 92). In the improving condition, the

feedback started low and increased steadily (59, 68, 81, 92).

Four times (i.e., once after each feedback administration),

participants completed the satisfaction (αs > .88) and then
usefulness (αs > .86) scales used in Experiment 1. Responses
to the two scales at each administration time were correlated,
rs(58) > .44, ps < .001.

At the end of the testing session, participants completed a

3-item optimism measure. The items assessed optimism

about performance on future aptitude tests (10 = low, not at
all, 100 = high, very much). They were: “Using the percentile
scores below, how do you expect to perform on aptitude tests

in the future?,” “How confident are you about your ability to

successfully perform on aptitude tests in the future?,” and

“How optimistic are you about your ability to excel at apti-

tude tests in the future?” (α = .78).
Finally, given the positive relation between optimism and

mood (Cheung et al., 2013; Segerstrom, Taylor, Kemeny, &

Fahey, 1998), we included a mood measure in order to rule

out the possibility that participants in the improving condi-

tion were in a negative mood due to their low performance

(e.g., 59th percentile) on a valued dimension and therefore

less optimistic. Specifically, all participants indicated how

sad, blue, content, happy, pleased, and unhappy (Martin,

Abend, Sedikides, & Green, 1997) they were currently feeling

(1 = not at all, 5 = extremely; α = .86). Participants in the
improving condition (M = 3.79, SD = 0.75) did not differ
significantly from those in the enhancing condition
(M = 4.06, SD = 0.61), F(1, 58) = 2.42, p = .125,
η²partial = .04. Thus, the reported results cannot be attributed to

between-condition mood differences and the mood variable

is not discussed further.

Results and discussion

Satisfaction and usefulness over time

In replication of Experiment 1, overall participants in the enhancing condition (M = 6.37, SD = 1.07) perceived the feedback as more satisfying and useful compared to those in the improving condition (M = 5.74, SD = 1.07), feedback type main effect F(1, 58) = 5.15, p = .027, η²partial = .08. Also, consistent with Experiment 1’s directional pattern, participants overall perceived the feedback as more satisfying (M = 6.81, SD = 0.96) than useful (M = 5.30, SD = 1.50), feedback rating main effect F(1, 58) = 76.80, p < .001, η²partial = .59. Neither the time main effect, F(2, 116) = 0.38, p = .685, η²partial = .007, nor the feedback type × feedback rating interaction, F(1, 58) = 1.15, p = .289, η²partial = .02, was significant. However, the feedback type × time interaction, F(2, 116) = 22.50, p < .001, η²partial = .28, as well as the feedback rating × time interaction, F(3, 154) = 8.64, p < .001, η²partial = .13, were significant.

Crucially, the significant effects were qualified by the three-way interaction, F(3, 154) = 4.56, p = .006, η²partial = 0.07 (Figure 1). We conducted two 2 (feedback type) × 4 (time) Analyses of Variance (ANOVAs), followed by pairwise comparisons with Bonferroni correction, for each level of feedback rating, that is, separately for satisfaction (.05/4 = .0125) and usefulness (.05/4 = .0125).
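The Bonferroni adjustment used here simply divides the familywise α by the number of comparisons, so each of the four time-point contrasts is tested at .05/4 = .0125. A sketch of the decision rule (the p values passed in are illustrative, not a new analysis):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag each comparison as significant at alpha divided by the family size."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Four time-point comparisons, each tested at .05 / 4 = .0125
flags = bonferroni_significant([0.001, 0.001, 0.141, 0.510])
```

With four comparisons only the first two p values clear the .0125 threshold, mirroring the pattern of significant early and null late contrasts reported below.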
First, we examined satisfaction. A 2 (feedback type) × 4 (time) mixed ANOVA revealed a significant interaction, F(2, 129) = 32.86, p < .001, η²partial = 0.36. The linear trend for time differed by feedback type, F(1, 58) = 54.97, p < .001, η²partial = 0.49. Although the linear trends were significant for the enhancing condition, F(1, 29) = 10.19, p = .003, η²partial = 0.26, and improving condition, F(1, 29) = 54.33, p < .001, η²partial = 0.65, the effect of the trend was greater for the improving condition. Thus, participants perceived the enhancing feedback as less satisfying over time, but perceived improving feedback as more satisfying over time (Figure 1). Pairwise comparisons of feedback type showed that participants in the enhancing condition were more satisfied than those in the improving condition at time 1, t(58) = 8.52, p < .001, d = 2.20, and at time 2, t(58) = 4.29, p < .001, d = 1.10, but not at time 3 or 4, ts(58) < |1.50|, ps > .141, ds < |0.40|.
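With 30 participants per condition (the within-condition tests above use F(1, 29)), an independent-groups Cohen's d can be recovered from the t statistic via the common conversion d = t · √(1/n₁ + 1/n₂); this is an assumption about how the d values were computed, but applying it to the time 1 comparison reproduces the reported value:

```python
import math

def cohens_d_from_t(t, n1, n2):
    """Independent-groups Cohen's d recovered from a t statistic (equal-variance form)."""
    return t * math.sqrt(1 / n1 + 1 / n2)

# Time 1 satisfaction comparison above: t(58) = 8.52, 30 participants per condition
d = cohens_d_from_t(8.52, 30, 30)
```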

Sedikides et al. 691

© 2016 Wiley Periodicals, Inc. Journal of Applied Social Psychology, 2016, 46, pp. 687–700

We proceeded with examining usefulness. The feedback type × time interaction was again significant, F(2, 125) = 7.71, p = .001, η²partial = 0.12, with the linear trend differing by feedback type, F(1, 58) = 12.64, p = .001, η²partial = 0.18. The linear trend was significant in the enhancing condition, F(1, 29) = 11.61, p = .002, η²partial = 0.29, but not in the improving condition, F(1, 29) = 2.72, p = .110, η²partial = 0.09. Given that the means decreased over time, we conclude that participants perceived enhancing feedback as less useful over time (Figure 1). Pairwise comparisons revealed that participants in the enhancing condition found feedback more useful than those in the improving condition at time 1, t(58) = 3.60, p = .001, d = 0.92, but not at time 2, t(58) = 1.60, p = .115, d = 0.45, nor at time 3 or 4, ts(58) < |0.661|, ps > .510, ds < |0.21|. Together, as illustrated in Figure 1, these patterns demonstrate that feedback was perceived as more satisfying over time in the improving condition but not in the enhancing condition, and was perceived as less useful over time in the enhancing condition but not in the improving condition.

Optimism

Participants in the enhancing condition expressed more optimism (M = 73.11, SD = 11.84) compared to those in the improving condition (M = 67.11, SD = 8.79), F(1, 58) = 4.97, p = .030, η²partial = 0.08.

Summary

Consistent with the findings of Experiment 1 and the self-enhancement perspective, participants regarded enhancing feedback as more satisfying and useful compared to improving feedback. However, several effects, which emerged due to sequential feedback assessment, added texture to this conclusion. First, participants in the enhancing feedback condition rated the feedback as less satisfying and useful over time. Second, participants in the improving feedback condition rated the feedback as more satisfying but not more useful over time. Third, participants in the enhancing condition began by rating the feedback as more satisfying and useful than those in the improving condition, but by times 3 and 4 this was no longer the case. In all, participants regarded enhancing (compared to improving) feedback as more satisfying and useful, but they did so in the short-term rather than the long-term. Finally, participants reported higher levels of optimism following enhancing than improving feedback.

Experiment 3: subjective perceptions, psychological consequences, and behavioral outcomes as a function of sequential feedback delivery and feedback assessment

Experiments 1–2 delivered enhancing or improving feedback on several domains (i.e., creativity, verbal intelligence, social sensitivity, analytical ability), although these domains were said to exemplify “human functioning.” Nevertheless, in academic and employment settings, repeated feedback often pertains to a single ability domain. Moreover, arguably the improvement value of feedback is highest when that feedback targets a specific domain instead of spreading over multiple domains. Therefore, in Experiment 3 we tested the replicability of Experiment 2 findings while delivering feedback, at several (i.e., five) junctures, about participants’ performance in one domain: cognitive flexibility. How do recipients perceive single-domain feedback when it is delivered and assessed sequentially?

Experiment 3 additionally aimed to extend our prior work in two ways. To begin, it expanded the measures of psychological outcomes to include not only optimism about future performance, but also overall satisfaction and usefulness, state self-esteem and state improvement, as well as perceived ability. Also, it included a behavioral outcome, test persistence intentions. Do enhancing and improving feedback differentially affect psychological consequences and behavioral outcomes?

Figure 1. Satisfaction and usefulness as a function of feedback type and time in Experiment 2. Error bars indicate standard errors of the mean.

Method
Participants and design

Participants (n = 50; 32 females, 18 males; Mage = 20.64, SDage = 2.39) were recruited from several academic departments at the University of Southampton in return for course credit or £5 payment. We excluded on an a priori basis 11 additional participants for suspicion. The design was a 2 (feedback type: enhancing, improving) × 2 (feedback rating: satisfaction, usefulness) × 5 (time: 1, 2, 3, 4, 5) mixed factorial design with repeated measures on the last two factors.

Procedure and measures

Participants were led to believe that they were involved in the establishment of normative UK data on an index of cognitive flexibility, integrative orientation (IO), which predicted performance on IQ and GRE tests as well as successful management of relational conflict. They responded to all measures on computer.

Participants began by completing a 3-item pre-test measure of perceived IO ability. Each item required them to move a sliding scale between two opposing anchors (e.g., 0 = I have extremely low IO ability . . . 9 = I have extremely high IO ability; α = .87).

Subsequently, participants took the ostensible IO test, which consisted of five rounds of nine Remote Associates Test (Mednick & Mednick, 1967) items, and lasted 10–25 minutes. Participants in the enhancing condition responded to test items that were relatively easy in every round (as per normative data: Bowden & Jung-Beeman, 2003; McFarlin & Blascovich, 1984). Participants in the improving condition responded to test items that were difficult in round 1 and became increasingly easy, with those in round 5 being identical to those in round 5 of the enhancing condition. We recorded the number of correct responses as a manipulation check index of test performance.

After each round, participants received feedback in the form of percentile scores. In the enhancing condition, feedback started at a relatively high level and remained there (92, 90, 93, 91, 92). In the improving condition, feedback started at a relatively low level and became progressively positive (54, 65, 77, 84, 92). Following each round, participants rated the feedback on satisfaction (pleased, satisfied; α > .85) and usefulness (useful, helpful; α > .78) by moving a sliding scale between two anchors (0 = not at all, 100 = extremely). These ratings constituted the satisfaction and usefulness over time measure. Responses to the two scales were weakly or moderately correlated at each time-point, rs(48) ranging from .22, p = .122 (time 5), to .48, p < .001 (time 3). Finally, at the conclusion of the testing session, participants completed, in randomized order, psychological consequences measures (i.e., overall satisfaction and usefulness, optimism, state self-esteem and state improvement, perceived ability) and a behavioral outcomes measure (i.e., test persistence intentions).
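The diverging feedback schedules can themselves be summarized with the kind of linear trend contrast used in the analyses that follow: weighting the five rounds by (−2, −1, 0, 1, 2) yields an essentially flat trend score for the enhancing percentile sequence and a large positive one for the improving sequence. A minimal sketch:

```python
def linear_trend(scores):
    """Linear trend contrast for five equally spaced time points."""
    weights = (-2, -1, 0, 1, 2)
    return sum(w * s for w, s in zip(weights, scores))

enhancing = (92, 90, 93, 91, 92)  # stays high throughout
improving = (54, 65, 77, 84, 92)  # rises round by round

flat = linear_trend(enhancing)    # near zero: no systematic rise
rising = linear_trend(improving)  # large and positive
```

The contrast is descriptive only; the inferential trend tests reported below are F tests on participants' ratings, not on the percentile schedules.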

Overall satisfaction and usefulness

These scales were identical to the ones used in Experiment 1 (αs = .90). Responses to the two scales were uncorrelated, r(48) = .21, p = .149.

Optimism

This scale was similar to the one used in Experiment 2. We reworded the three items to reflect optimism about performance on future integrative orientation tests (0 = low, not at all, 100 = high, very much; α = .89).

State self-esteem and state improvement

One item assessed state self-esteem: “Right now, I am feeling good about myself” (0 = strongly disagree, 9 = strongly agree). Six items assessed how much participants believed they had improved during the session (0 = not at all, 9 = extremely; α = .80). Examples are: “To what extent did your ability to solve IO questions improve during the course of the test?,” “How much progress do you feel you made over the session?,” “To what extent was your ability to solve integrative orientation questions stuck in a rut during the test?” (reverse-scored).

Perceived ability

The same three items as the relevant pre-test measure assessed perceived IO ability (α = .87).

Test persistence intentions

One item assessed test persistence intentions by asking how willing participants would be to complete a similar test in the future (0 = not at all, 9 = extremely).

Results and discussion

Test performance

We began by examining the effectiveness of the manipulation. Were participants in the enhancing condition consistently successful at the IO test, and did participants in the improving condition improve over time? To address these questions, we conducted a 2 (feedback type) × 5 (time) mixed ANOVA on number of correct responses in the test. Overall, participants in the enhancing condition (M = 5.54, SD = 1.76) performed better than those in the improving condition (M = 3.96, SD = 1.04), feedback type main effect F(1, 48) = 14.99, p < .001, η²partial = 0.24. Also, performance improved on average across rounds, time main effect F(4, 192) = 28.24, p < .001, η²partial = 0.37; linear trend F(1, 48) = 102.02, p < .001 (Figure 2). Importantly, the feedback type × time interaction was significant, F(4, 192) = 27.46, p < .001, η²partial = 0.36. The linear trend differed significantly by feedback type, F(1, 48) = 91.11, p < .001, η²partial = 0.66. Performance did not increase over time in the enhancing condition, F(1, 24) = 0.15, p = .707, η²partial = 0.01, but it did increase in the improving condition, F(1, 24) = 205.88, p < .001, η²partial = 0.90 (Figure 2). Pairwise comparisons with Bonferroni correction (.05/5 = .01) confirmed that participants in the enhancing condition performed better than those in the improving condition at time 1, F(1, 48) = 67.69, p < .001, η²partial = 0.59, and time 2, F(1, 48) = 42.17, p < .001, η²partial = 0.47, but not at time 3, 4, or 5, Fs < 1, ps > .346. In all, the manipulation was effective.

Satisfaction and usefulness over time

In replication of Experiment 2, overall participants in the enhancing condition (M = 72.49, SD = 12.89) perceived the feedback as more satisfying and useful compared to those in the improving condition (M = 63.62, SD = 12.71), feedback type main effect F(1, 48) = 6.00, p = .018, η²partial = 0.11. Also, consistent with Experiment 2, participants overall perceived the feedback as more satisfying (M = 71.60, SD = 15.35) than useful (M = 64.51, SD = 16.32), feedback rating main effect F(1, 48) = 9.31, p = .004, η²partial = 0.16. Overall, evaluations of feedback increased over time, F(4, 192) = 18.01, p < .001, η²partial = 0.27 (Figure 3). The analysis also produced significant interactions between feedback type and time, F(3, 141) = 36.10, p < .001, η²partial = 0.43, and feedback rating and time, F(2, 116) = 31.22, p < .001, η²partial = 0.39.

Crucially, the significant effects were qualified by the three-way interaction, F(2, 116) = 19.03, p < .001, η²partial = 0.28 (Figure 3). As in Experiment 2, we conducted two 2 (feedback type) × 5 (time) mixed ANOVAs, followed by trend and pairwise analyses with Bonferroni correction (.05/5 = .01) for satisfaction and usefulness.
First, we examined satisfaction. The feedback type × time interaction was significant, F(3, 122) = 52.23, p < .001, η²partial = 0.52. The linear trend for time differed by feedback type, F(1, 48) = 74.38, p < .001, η²partial = 0.61: it was significant for the improving condition, F(1, 24) = 88.75, p < .001, η²partial = 0.79, but not for the enhancing condition, F(1, 24) = 0.57, p = .458, η²partial = 0.02. Thus, participants perceived improving (but not enhancing) feedback as more satisfying over time (Figure 3). Pairwise comparisons of feedback type showed that participants in the enhancing condition were more satisfied than those in the improving condition at time 1, F(1, 48) = 61.46, p < .001, η²partial = 0.56, time 2, F(1, 48) = 10.60, p = .002, η²partial = 0.18, and time 3, F(1, 48) = 6.79, p = .012, η²partial = 0.12, but not at time 4 or 5, Fs < 1, ps > .436, η²partial < 0.02.

Figure 2. Task performance as a function of feedback type and time in Experiment 3. Error bars indicate standard errors of the mean.

Figure 3. Satisfaction and usefulness as a function of feedback type and time in Experiment 3. Error bars indicate standard errors of the mean.

We proceeded with examining usefulness. The feedback type × time interaction was again significant, F(3, 143) = 3.33, p = .012, η²partial = 0.07, with the linear trend differing by feedback type, F(1, 48) = 5.06, p = .029, η²partial = 0.10. The linear trend was significant in the enhancing condition, F(1, 24) = 5.76, p = .024, η²partial = 0.19, but not in the improving condition, F(1, 24) = 0.69, p = .415, η²partial = 0.03. Thus, participants perceived enhancing feedback as less useful over time (Figure 3). Pairwise comparisons showed that participants in the enhancing condition found feedback marginally more useful than those in the improving condition at time 1, F(1, 48) = 5.22, p = .027, η²partial = 0.10, but not at times 2, 3, 4, or 5, Fs(1, 48) < 1.87, ps > .176, η²partial < 0.04. Together, as illustrated in Figure 3, these patterns demonstrate that feedback was perceived as more satisfying over time in the improving condition but not in the enhancing condition, and was perceived as less useful over time in the enhancing condition but not in the improving condition.

Overall satisfaction and usefulness

Participants in the enhancing condition were more satisfied overall (M = 7.45, SD = 1.14) than those in the improving condition (M = 6.24, SD = 1.29), F(1, 48) = 12.45, p < .001, η²partial = 0.21. However, participants in the enhancing (M = 5.15, SD = 2.26) and improving (M = 4.96, SD = 2.00) conditions did not differ in how useful they found the feedback, F(1, 48) = 0.10, p = .758, η²partial = 0.002. These results replicate those of Experiment 1.

Optimism

In replication of Experiment 2, participants in the enhancing condition (M = 82.59, SD = 11.66) expressed more optimism about their future performance on aptitude tests than their improving condition counterparts (M = 68.49, SD = 14.13), F(1, 48) = 14.79, p < .001, η²partial = 0.24.

Self-esteem and state improvement

We examined participants’ state self-esteem and state improvement in a 2 (feedback type) × 2 (feedback rating) mixed ANOVA. Overall, participants reported higher state self-esteem (M = 6.64, SD = 1.37) than state improvement (M = 5.81, SD = 1.57), feedback rating main effect F(1, 48) = 12.30, p = .001, η²partial = 0.20. There was no main effect of condition, F(1, 48) = 0.47, p = .498, η²partial = 0.01, but there was a significant feedback type × feedback rating interaction, F(1, 48) = 24.10, p < .001, η²partial = 0.33. Pairwise comparisons with Bonferroni correction (.05/2 = .025) confirmed that, whereas participants in the enhancing condition (M = 7.12, SD = 1.01) reported higher state self-esteem than those in the improving condition (M = 6.16, SD = 1.52), F(1, 48) = 6.91, p = .011, η²partial = 0.13, participants in the improving condition (M = 6.49, SD = 1.13) reported higher state improvement than those in the enhancing condition (M = 5.12, SD = 1.53), F(1, 48) = 11.60, p = .001, η²partial = 0.20. Enhancing and improving feedback elicited feelings of self-esteem and improvement, respectively.

Perceived ability

We conducted a one-way Analysis of Covariance on perceived IO ability, controlling for perceived IO ability before test-taking. Participants in the enhancing condition (M = 6.79, SD = 1.20) believed that they were higher on IO ability than those in the improving condition (M = 5.95, SD = 0.94), F(1, 47) = 7.75, p = .010, η²partial = 0.13. Participants in the enhancing condition incorporated their consistently positive feedback into a positive self-view in this domain.

Test persistence intentions

Participants in the enhancing condition (M = 8.16, SD = 0.94) were more willing to persist at the task than those in the improving condition (M = 7.12, SD = 1.33), F(1, 48) = 10.14, p = .003, η²partial = 0.17.

Summary

Experiment 3 replicated and extended the findings of Experiment 2. Participants in the enhancing condition were more satisfied than those in the improving condition at times 1, 2, and 3, but not 4 or 5. Also, participants in the enhancing condition found feedback more useful than those in the improving condition at time 1, but not at times 2, 3, 4, or 5. From a different vantage point, participants found the feedback more satisfying over time in the improving condition but not in the enhancing condition (Elicker et al., 2010; Hsee & Abelson, 1991), and found it less useful over time in the enhancing condition but not in the improving condition.

In addition, Experiment 3 expanded the range of psychological consequences of enhancing and improving feedback. Participants in the enhancing condition were more satisfied overall, were more optimistic about future performance, reported higher state self-esteem, and believed that they were higher on IO ability; conversely, participants in the improving condition reported higher state improvement. Finally, Experiment 3 revealed a behavioral outcome: Participants in the enhancing condition were more willing to persist at the test in the future.


General discussion

Feedback is prevalent in organizational settings. Investigating reactions to feedback is important for theoretical as well as practical reasons. Reactions to feedback are included in many theories of interpersonal or intragroup behavior (Sutton et al., 2012), as the feedback process is considered an immediate predecessor of performance. That is, assuming that recipients are willing to accept and respond to it (Cawley, Keeping, & Levy, 1998; Latham, Cheng, & Macpherson, 2012), feedback can augment performance (Ilgen & Davis, 2000). It is because of this theoretical and practical relevance that reactions to feedback have been studied in such contexts as performance appraisal (Keeping & Levy, 2000), 360-degree and upward feedback programs (Brett & Atwater, 2001), computer-adaptive testing (Tonidandel et al., 2002), selection decisions (Bauer, Maertz, Dolen, & Campion, 1998), and management development (Ryan, Brutus, Greguras, & Hakel, 2000).

Yet, what has been studied in such settings is feedback preferences, reactions to different versions of the same feedback, or reactions to enhancing versus critical feedback. Lacking is a systematic investigation of reactions to another feedback dimension, enhancing versus improving. The objective of our research was to begin to address this gap in the literature.

We wondered how these two distinct types of feedback would be perceived, and how they could influence the recipients, both psychologically and behaviorally. Two broad theoretical perspectives provided the impetus for our empirical quest: self-enhancement and self-improvement. According to the self-enhancement perspective (Alicke & Sedikides, 2009; Brown & Dutton, 1995; Hepper et al., 2010), enhancing feedback will be perceived as more satisfying than improving feedback, and also as generally more satisfying than useful. In addition, enhancing feedback will exert stronger psychological and behavioral impact than improving feedback. On the other hand, according to the self-improvement perspective (Prelec & Loewenstein, 1997; Sedikides & Hepper, 2009; Taylor et al., 1995), improving feedback will be perceived as more useful than enhancing feedback, and also as generally more useful than satisfying. In addition, improving feedback will exert stronger psychological and behavioral impact than enhancing feedback.

Summary of findings

We carried out three experiments, in which we systematically manipulated aspects of enhancing and improving feedback delivery and assessment. Each experiment simulated a pertinent naturalistic setting. In Experiment 1, feedback delivery was sequential, whereas the assessment of feedback perceptions was cumulative. In Experiment 2, both feedback delivery and perception assessment were sequential; this experiment also began to examine psychological consequences (i.e., optimism) of feedback. Finally, in Experiment 3, feedback delivery and feedback perception assessment were both sequential and cumulative. More important, in this experiment a fuller range of psychological consequences was assessed (i.e., optimism, state self-esteem, state improvement, perceived ability) as well as a behavioral outcome (i.e., test persistence intentions). In addition, feedback here pertained to a single aptitude domain (also used in Experiments 1–2), whereas feedback in the prior experiments pertained to multiple domains.

In general, participants considered (a) enhancing feedback as more satisfying and useful relative to improving feedback, and (b) enhancing feedback as more satisfying than useful (Experiments 1–3). These result patterns were anticipated by the self-enhancement perspective. Nevertheless, the implications of feedback came to be more intricate, as a function of delivery time and assessment time. Participants who received enhancing feedback perceived it initially (times 1–2, Experiment 2; times 1–3, Experiment 3) as more satisfying than those who received improving feedback, but later (times 3–4, Experiment 2; times 4–5, Experiment 3) this difference vanished. Similarly, participants who received enhancing feedback perceived it initially (times 1–2, Experiment 2; time 1, Experiment 3) as more useful than those who received improving feedback, but later (times 3–4, Experiment 2; times 2–5, Experiment 3) this difference vanished. Moreover, participants who received enhancing feedback found it either less satisfying (Experiment 2) or equally satisfying (Experiment 3) over time, and found it less useful over time (Experiments 2–3); however, participants who received improving feedback found it more satisfying, albeit not more useful, over time (Experiments 2–3). Also, enhancing (compared to improving) feedback sparked greater optimism, overall satisfaction, state self-esteem, belief in aptitude ability, and intentions to persist on the test; improving feedback, on the other hand, sparked greater state feelings of improvement.

Implications

The findings have theoretical and practical implications. On the basis of cumulative assessments of feedback perceptions, psychological consequences, and behavioral outcomes, the results are congruent with the self-enhancement perspective. Participants found enhancing (relative to improving) feedback more satisfying and useful, and found enhancing feedback more satisfying than useful. Also, under the influence of enhancing (relative to improving) feedback, they reported higher optimism about future test performance, overall satisfaction, state self-esteem, belief in their ability on the relevant aptitude domain, and intentions for test persistence. Enhancing feedback fueled a multitude of processes. It elevated feelings of satisfaction, self-esteem, and optimism; it was incorporated into participants’ self-efficacious beliefs; and it instigated stronger behavioral intentions of persistence (and thus achievement) on similar future tasks. From a practical standpoint, then, enhancing feedback is likely to be more impactful than improving feedback when assessment is cumulative.

However, on the basis of sequential assessments of feedback perceptions, the results proved intricate and were congruent with neither the self-enhancement nor the self-improvement perspective. Participants found enhancing (relative to improving) feedback more satisfying and more useful in the short-term but not the long-term. Alternatively, they found enhancing feedback less satisfying and less useful over time, but they found improving feedback more satisfying, albeit not more useful, over time. Time, then, qualifies the effects of cumulative assessment. Viewed from a different angle, enhancing feedback per se is less satisfying and useful in the long-term (than short-term), but improving feedback per se is more satisfying, but not more useful, in the long-term (than short-term). The results provide the fodder for subsequent theory development. From a practical standpoint, the impact of enhancing versus improving feedback will depend on its temporal assessment. It could be, for example, that people come to appreciate the value of improving feedback only over time (in accord with the self-improvement perspective), or alternatively that they only value it as it becomes more positive (in accord with the self-enhancement perspective).

Limitations and future directions

Some results from Experiments 2 and 3 are amenable to a more nuanced interpretation. In Experiment 2, differences between perceptions of enhancing and improving feedback declined as discrepancies in performance information diminished, and ultimately such differences disappeared by time 4, when participants in both feedback conditions received an identical performance score (i.e., percentile score of 92). A similar trend emerged in Experiment 3. As we mentioned above, the self-enhancement and self-improvement perspectives do not provide detailed guidance that would allow a full understanding of these temporal changes. At a low construal level, one could argue that we have simply documented that people find uniformly positive feedback more satisfying compared to feedback that starts negative before it becomes positive, and that people (in both conditions) find feedback that ends at the same level of positivity as satisfying. A more substantive interpretation would state that the low percentile scores (negative feedback) that we provided in the improvement condition implied unexpectedly weak ability, whereas successively higher scores contributed to perceptions of having reached an acceptably positive level. Regardless, the issue is whether satisfaction and usefulness ratings merely reflected participants’ percentile rankings: as the ranking increased, so did satisfaction and usefulness perceptions. Indeed, the fact that participants’ perceptions in the enhancement condition varied over time, in spite of percentile scores remaining at approximately the same level, would argue against a monotonic relationship between percentile scores and feedback perceptions. Limitations in our operationalization of enhancing and improving feedback may be responsible for such result patterns. Follow-up research could manipulate the starting position of feedback (i.e., high vs. low, while manipulating orthogonally an upward vs. stable trajectory) or introduce a setback within the improvement sequence.

More general limitations included structural characteristics. We were concerned exclusively with task-level feedback and delivered it in a specific format (i.e., in terms of percentile rankings). Future investigations will need to address other types of feedback (Hattie & Timperley, 2007; Kamins & Dweck, 1999), such as process-level feedback (i.e., the key process presumed to underlie task performance), self-regulation-level feedback (i.e., directing and monitoring one’s own behavior), self- or person-level feedback (i.e., person-directed evaluative or affective statements), and outcome-level feedback (i.e., concrete, action-directed feedback). Future investigations will also need to address internally framed (as opposed to externally framed) feedback (Möller et al., 2009). In addition, the findings will need to be replicated with bigger samples, and also with more diverse (e.g., gender-balanced, organizationally derived) samples.

Another limitation concerns the assessment of actual performance. How does enhancing versus improving feedback influence subsequent reactions and subsequent performance in domains similar to or different from those for which the original feedback was delivered? Do feedback satisfaction and usefulness impact differentially on motivation (e.g., goal-setting), productivity and quality of output, attitudes toward the feedback provider, as well as organizational identification and commitment? Does the impact of feedback satisfaction and usefulness vary as a function of feedback delivery and assessment in the short-run and long-run? Do the results extend to other feedback manipulations outside of the academic or achievement context? These are questions that need to be addressed by future research. Other unresolved issues will also need to be tackled. One concerns the circumstances under which improving versus enhancing feedback is likely to be more effective. It is possible, for example, that improving feedback is more effective when the recipient (e.g., organizational member) is an expert than a novice (Finkelstein & Fishbach, 2012) and when the rate of improvement is perceived to be higher in later sequences (i.e., recency effect) than in earlier sequences (Jones, Rock, Shaver, Goethals, & Ward, 1968).

Another issue concerns individual differences. Is improving feedback likely to be more effective for low than high self-esteem persons (Brown, Farnham, & Cook, 2002), low than high narcissists (Campbell, Rudich, & Sedikides, 2002), incremental self-theorists than entity self-theorists (Plaks & Stecher, 2007), individuals with mastery-approach goals than mastery-avoidance goals (Elliot & McGregor, 2001), and persons with a prevention-focus than a promotion-focus orientation (Van Dijk & Kluger, 2010)? Yet another issue concerns cultural context. Does culture qualify the findings we reported? Here, the scant literature is mixed, with some evidence pointing to higher impact of improving than enhancing feedback among East Asians than Westerners (Heine et al., 2001; Heine & Raineri, 2009) and other evidence pointing to equivalent impact of enhancing and improving feedback among East Asians and Westerners (Gaertner, Sedikides, & Cai, 2012; Sedikides, Gaertner, & Cai, 2015).

Finally, although we set out to examine in our research the relative impact of enhancing and improving feedback, such feedback may be temporally separated. Research by Gramzow, Elliot, Asher, and McGregor (2003) indicated that initial self-enhancement (i.e., GPA exaggeration at the beginning of an academic semester) predicted improvement (i.e., better grades) at the end of the semester, controlling statistically for the relation between GPA exaggeration and initial academic performance. Wright (2000), as well as Kurman (2006), reported conceptually similar findings. It remains to be seen whether enhancing feedback predicts better performance, and whether this pattern is observed cross-culturally.

Coda

We examined perceptions, psychological consequences, and behavioral outcomes of enhancing versus improving feedback that was delivered and assessed sequentially or cumulatively. Although, overall, enhancing feedback was seen as more satisfying than useful and improving feedback was not seen as more useful than satisfying, perceptions differed as a function of short-term versus long-term delivery and assessment. In general, though, enhancing feedback was more impactful psychologically and behaviorally than improving feedback. Our findings provide fodder for theory development and practical considerations.

Acknowledgment

This research was supported by Economic and Social Research Council grant RES-000-22-1834. We thank Anna Cobb, Natalie Fernandes, and Sara Morris Klinger for their help with data collection.

References

Alicke, M. D., & Sedikides, C. (2009). Self-enhancement and self-protection: What they are and what they do. European Review of Social Psychology, 20, 1–48.
Alicke, M. D., & Sedikides, C. (2011). Handbook of self-enhancement and self-protection. New York, NY: Guilford Press.
Ariely, D. (1998). Combining experiences over time: The effects of duration, intensity changes and on-line measurements on retrospective pain evaluations. Journal of Behavioral Decision Making, 11, 19–45.
Aronson, E., & Linder, D. (1965). Gain and loss of esteem as determinants of interpersonal attractiveness. Journal of Experimental Social Psychology, 1, 156–171.
Atwater, L. E., & Brett, J. F. (2005). Antecedents and consequences of reactions to developmental 360° feedback. Journal of Vocational Behavior, 66, 532–548.
Bauer, T. N., Maertz, C. P., Dolen, M. R., & Campion, M. A. (1998). Longitudinal assessment of applicant reactions to employment testing and test outcome feedback. Journal of Applied Psychology, 83, 892–903.
Bowden, E. M., & Jung-Beeman, M. (2003). Normative data for 144 compound remote associate problems. Behavior Research Methods, Instruments, & Computers, 35, 634–639.
Breines, J. G., & Chen, S. (2012). Self-compassion increases self-improvement motivation. Personality and Social Psychology Bulletin, 38, 1133–1143.
Brett, J. F., & Atwater, L. E. (2001). 360° feedback: Accuracy, reactions, and perceptions of usefulness. Journal of Applied Psychology, 86, 930–942.
Brown, J. D., & Dutton, K. A. (1995). Truth and consequences: The costs and benefits of accurate self-knowledge. Personality and Social Psychology Bulletin, 21, 1288–1296.
Brown, J. D., Farnham, S. D., & Cook, K. E. (2002). Emotional responses to changing feedback: Is it better to have won and lost than never to have won at all? Journal of Personality, 70, 127–141.
Brown, M., Hyatt, D., & Benson, J. (2010). Consequences of the performance appraisal experience. Personnel Review, 39, 375–396.
Campbell, W. K., Rudich, E., & Sedikides, C. (2002). Narcissism, self-esteem, and the positivity of self-views: Two portraits of self-love. Personality and Social Psychology Bulletin, 28, 358–368.
Campbell, W. K., & Sedikides, C. (1999). Self-threat magnifies the self-serving bias: A meta-analytic integration. Review of General Psychology, 3, 23–43.
Cawley, B. D., Keeping, L. M., & Levy, P. E. (1998). Participation in the performance appraisal process and employee reactions: A meta-analytic review of field investigations. Journal of Applied Psychology, 83, 615–633.
Cheung, W. Y., Wildschut, T., Sedikides, C., Hepper, E. G., Arndt, J., & Vingerhoets, A. J. J. M. (2013). Back to the future: Nostalgia increases optimism. Personality and Social Psychology Bulletin, 39, 1484–1496.
Clayton, D. E., & Smith, M. M. (1987). Motivational typology of reentry women. Adult Education Quarterly, 37, 90–104.
Collins, R. L. (1996). For better or worse: The impact of upward social comparison on self-evaluations. Psychological Bulletin, 119, 51–69.
Crocker, J., Thompson, L. L., McGraw, K. M., & Ingerman, C. (1987). Downward comparison, prejudice, and evaluations of others: Effects of self-esteem and threat. Journal of Personality and Social Psychology, 52, 907–916.

Dickinson, M. J. (1999). Do gooders or do betters? An analysis of the motivation of student tutors. Educational Research, 41, 221–227.
Dunning, D. (2005). Self-insight: Roadblocks and detours on the path to knowing thyself. New York, NY: Psychology Press.
Elicker, J. D., Lord, R. G., Ash, S. R., Kohari, N. E., Hruska, B. J., McConnell, N. L., et al. (2010). Velocity as a predictor of performance satisfaction, mental focus, and goal revision. Applied Psychology: An International Review, 59, 495–514.
Elliot, A. J., & McGregor, H. A. (2001). A 2 × 2 achievement goal framework. Journal of Personality and Social Psychology, 80, 501–519.
Finkelstein, S. R., & Fishbach, A. (2012). Tell me what I did wrong: Experts seek and respond to negative feedback. Journal of Consumer Research, 39, 22–38.
Fishbach, A., Eyal, T., & Finkelstein, S. R. (2010). How positive and negative feedback motivate goal pursuit. Social and Personality Psychology Compass, 4, 517–530.
Fisher, C. D. (1979). Transmission of positive and negative feedback to subordinates: A laboratory investigation. Journal of Applied Psychology, 64, 533–540.
Gabriel, A. S., Frantz, N. B., Levy, P. E., & Hilliard, A. W. (2014). The supervisor feedback environment is empowering, but not all the time: Feedback orientation as a critical moderator. Journal of Occupational and Organizational Psychology, 87, 487–506.
Gaertner, L., Sedikides, C., & Cai, H. (2012). Wanting to be great and better but not average: On the pancultural desire for self-enhancing and self-improving feedback. Journal of Cross-Cultural Psychology, 43, 521–526.
Gniewosz, B., Eccles, J. S., & Noack, P. (2014). Early adolescents’ development of academic self-concept and intrinsic task value: The role of contextual feedback. Journal of Research on Adolescence, 25, 459–473.
Gramzow, R. H., Elliot, A. J., Asher, E., & McGregor, H. (2003). Self-evaluation bias in the academic context: Some ways and some reasons why. Journal of Research in Personality, 37, 41–61.
Green, J. D., Sedikides, C., Pinter, B., & Van Tongeren, D. R. (2009). Two sides to self-protection: Self-improvement strivings and feedback from close relationships eliminate mnemic neglect. Self and Identity, 8, 233–250.
Gregg, A. P., Sedikides, C., & Gebauer, J. E. (2011). Dynamics of identity: Between self-enhancement and self-assessment. In S. J. Schwartz, K. Luyckx, & V. L. Vignoles (Eds.), Handbook of identity theory and research (Vol. 1, pp. 305–327). New York, NY: Springer.
Gregory, J. B., & Levy, P. E. (2012). Employee feedback orientation: Implications for effective coaching relationships. Coaching: An International Journal of Theory, Research & Practice, 5, 86–99.
Harackiewicz, J. M. (1979). The effects of reward contingency and performance feedback on intrinsic motivation. Journal of Personality and Social Psychology, 37, 1352–1363.
Harks, B., Rakoczy, K., Hattie, J., Besser, M., & Klieme, E. (2014). The effects of feedback on achievement, interest and self-evaluation: The role of feedback’s perceived usefulness. Educational Psychology, 34, 269–290.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81–112.
Heine, S. J., Kitayama, S., Lehman, D. R., Takata, T., Ide, E., Leung, C., et al. (2001). Divergent consequences of success and failure in Japan and North America: An investigation of self-improving motivations and malleable selves. Journal of Personality and Social Psychology, 81, 599–615.
Heine, S. J., & Raineri, A. (2009). Self-improving motivations and culture: The case of Chileans. Journal of Cross-Cultural Psychology, 40, 158–163.

Hepper, E. G., Gramzow, R. H., & Sedikides, C. (2010). Individual differences in self-enhancement and self-protection strategies: An integrative analysis. Journal of Personality, 78, 781–814.
Hepper, E. G., & Sedikides, C. (2012). Self-enhancing feedback. In R. M. Sutton, M. J. Hornsey, & K. M. Douglas (Eds.), Feedback: The communication of praise, criticism, and advice (pp. 43–56). New York, NY: Peter Lang.
Hsee, C. K., & Abelson, R. P. (1991). Velocity relation: Satisfaction as a function of the first derivative of outcome over time. Journal of Personality and Social Psychology, 60, 341–347.
Ilgen, D., & Davis, C. (2000). Bearing bad news: Reactions to negative performance feedback. Applied Psychology, 49, 550–565.
Ilies, R., Nahrgang, J., & Morgeson, F. P. (2007). Leader-member exchange and citizenship behaviors: A meta-analysis. Journal of Applied Psychology, 92, 269–277.
Jones, E. E., Rock, L., Shaver, K. G., Goethals, G., & Ward, L. M. (1968). Pattern of performance and ability attribution: An unexpected primacy effect. Journal of Personality and Social Psychology, 10, 317–340.
Kamins, M. L., & Dweck, C. S. (1999). Person versus process praise and criticism: Implications for contingent self-worth and coping. Developmental Psychology, 35, 835–847.
Keeping, L. M., & Levy, P. E. (2000). Performance appraisal reactions: Measurement, modeling, and method bias. Journal of Applied Psychology, 85, 708–723.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284.
Kulhavy, R. W. (1977). Feedback in written instruction. Review of Educational Research, 47, 211–232.
Kurman, J. (2006). Self-enhancement, self-regulation and self-improvement following failure. British Journal of Social Psychology, 45, 339–356.
Latham, G. P., Cheng, B. H., & Macpherson, K. (2012). Theoretical frameworks for and empirical evidence on providing feedback to employees. In R. M. Sutton, M. J. Hornsey, & K. M. Douglas (Eds.), Feedback: The communication of praise, criticism, and advice (pp. 187–199). New York, NY: Peter Lang.
Markman, K. D., Elizaga, R. A., Ratcliff, J. J., & McMullen, M. N. (2007). The interplay between counterfactual reasoning and feedback dynamics in producing inferences about the self. Thinking and Reasoning, 13, 188–206.


Martin, L., Abend, T., Sedikides, C., & Green, J. D. (1997). How would I feel if. . .? Mood as input to a role fulfillment evaluation process. Journal of Personality and Social Psychology, 73, 242–253.
McFarlin, D. B., & Blascovich, J. (1984). On the Remote Associates Test (RAT) as an alternative to illusory performance feedback: A methodological note. Basic and Applied Social Psychology, 5, 223–229.
Mednick, S. A., & Mednick, M. T. (1967). Examiner’s manual: Remote Associates Test. Boston, MA: Houghton Mifflin.
Möller, J., Pohlmann, B., Köller, O., & Marsh, H. W. (2009). A meta-analytic path analysis of the internal/external frame of reference model of academic achievement and academic self-concept. Review of Educational Research, 79, 1129–1167.
Neiss, M. B., Sedikides, C., Shahinfar, A., & Kupersmidt, J. (2006). Self-evaluation in naturalistic context: The case of juvenile offenders. British Journal of Social Psychology, 45, 499–518.
Plaks, J. E., & Stecher, K. (2007). Unexpected improvement, decline, and stasis: A prediction confidence perspective on achievement success and failure. Journal of Personality and Social Psychology, 93, 667–684.
Pliske, R. M., Elig, T. W., & Johnson, R. M. (1986). Towards an understanding of army enlistment motivation patterns. U.S. Army Research Institute.
Prelec, D., & Loewenstein, G. F. (1997). Beyond time discounting. Marketing Letters, 8, 97–108.
Pyszczynski, T., Greenberg, J., & Arndt, J. (2012). Freedom versus fear revisited: An integrative analysis of the dynamics of the defense and growth of the self. In M. R. Leary & J. P. Tangney (Eds.), Handbook of self and identity (2nd ed., pp. 378–404). New York, NY: Guilford Press.
Ryan, A. M., Brutus, S., Greguras, G., & Hakel, M. D. (2000). Receptivity to assessment-based feedback for management development. Journal of Management Development, 19, 252–276.
Ryan, A. M., Gheen, M. H., & Midgley, C. (1998). Why do some students avoid asking for help? An examination of the interplay among students’ academic efficacy, teachers’ social-emotional role, and the classroom goal structure. Journal of Educational Psychology, 90, 528–535.
Sedikides, C. (2009). On self-protection and self-enhancement regulation: The role of self-improvement and social norms. In J. P. Forgas, R. F. Baumeister, & D. Tice (Eds.), The psychology of self-regulation: Cognitive, affective, and motivational processes (pp. 73–92). New York, NY: Psychology Press.
Sedikides, C. (2012). Self-protection. In M. R. Leary & J. P. Tangney (Eds.), Handbook of self and identity (2nd ed., pp. 327–353). New York, NY: Guilford Press.
Sedikides, C., Campbell, W. K., Reeder, G., & Elliot, A. J. (1998). The self-serving bias in relational context. Journal of Personality and Social Psychology, 74, 378–386.
Sedikides, C., Gaertner, L., & Cai, H. (2015). On the panculturality of self-enhancement and self-protection motivation: The case for the universality of self-esteem. In A. J. Elliot (Ed.), Advances in motivation science (Vol. 2, pp. 185–241). San Diego, CA: Academic Press.
Sedikides, C., Green, J. D., Saunders, J., Skowronski, J. J., & Zengel, B. (2016). Mnemic neglect: Selective amnesia of one’s faults. European Review of Social Psychology, 27, 1–62.
Sedikides, C., & Gregg, A. P. (2008). Self-enhancement: Food for thought. Perspectives on Psychological Science, 3, 102–116.
Sedikides, C., & Hepper, E. G. (2009). Self-improvement. Social and Personality Psychology Compass, 3, 899–917.
Sedikides, C., & Strube, M. J. (1997). Self-evaluation: To thine own self be good, to thine own self be sure, to thine own self be true, and to thine own self be better. Advances in Experimental Social Psychology, 29, 209–269. New York, NY: Academic Press.
Segerstrom, S. C., Taylor, S. E., Kemeny, M. E., & Fahey, J. L. (1998). Optimism is associated with mood, coping, and immune change in response to stress. Journal of Personality and Social Psychology, 74, 1646–1655.
Seifert, C. F., Yukl, G., & McDonald, R. A. (2003). Effects of multisource feedback and a feedback facilitator on the influence behavior of managers toward subordinates. Journal of Applied Social Psychology, 561–569.
Silberman, M., & Hansburg, F. (2005). How to encourage constructive feedback from others. San Francisco, CA: John Wiley & Sons.
Sutton, R. M., Hornsey, M. J., & Douglas, K. M. (Eds.). (2012). Feedback: The communication of praise, criticism, and advice. New York, NY: Peter Lang.
Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin, 103, 193–210.
Taylor, S. E., Neter, E., & Wayment, H. A. (1995). Self-evaluation processes. Personality and Social Psychology Bulletin, 21, 1278–1287.
Tonidandel, S., Quiñones, M. A., & Adams, A. A. (2002). Computer-adaptive testing: The impact of test characteristics on perceived performance and test takers’ reactions. Journal of Applied Psychology, 87, 320–332.
Vallerand, R. J., & Reid, G. (1984). On the causal effects of perceived competence on intrinsic motivation: A test of cognitive evaluation theory. Journal of Sport Psychology, 6, 94–102.
Van Dijk, D., & Kluger, A. N. (2010). Task type as a moderator of positive/negative feedback effects on motivation and performance: A regulatory focus perspective. Journal of Organizational Behavior, 32, 1084–1105.
Whitaker, B., & Levy, P. E. (2012). Linking feedback quality and goal orientation to feedback-seeking and job performance. Human Performance, 25, 159–178.
Wright, S. S. (2000). Looking at the self in a rose-colored mirror: Unrealistically positive self-views and academic performance. Journal of Social and Clinical Psychology, 19, 451–462.
Wyer, R. S., & Frey, D. (1983). The effects of feedback about self and others on the recall and judgments of feedback-relevant information. Journal of Experimental Social Psychology, 19, 540–559.



Ethical Considerations in Writing
Psychological Assessment Reports

Mark H. Michaels
Private Practice

In this article, the author addresses the ethical questions and decisions evaluators face in the writing of psychological assessment reports.
Issues related to confidentiality, clinical judgment, harm, labeling, release
of test data, and computer usage are addressed. Specific suggestions on
how to deal with ethical concerns when writing reports are discussed, as
well as areas in need of further research. © 2005 Wiley Periodicals, Inc.
J Clin Psychol 62: 47–58, 2006.

Keywords: ethics; report writing; assessment

As the final product, and often the only communication about an evaluation, the psycho-
logical report is a powerful tool for influencing change or making decisions about the
individual being evaluated. The impact of such an evaluation can be life changing, such
as employment decisions, or simply informative, such as what psychiatric symptoms are
most prominent. Because the psychological report is often given immense weight, care
must be taken to ensure any written work is completed with due respect to the ethical
obligations involved. Some ethical issues, such as requests by employers for confidential
information regarding an employee’s evaluation, are fairly straightforward. Ethical decisions in report writing, however, are less distinct and more subtle. Decisions are made throughout the process about matters such as the wording of reports or what data to include.

Some guidance in making these ethical decisions can be found in the Ethical Prin-
ciples of Psychologists and Code of Conduct (EPPCC; American Psychological Associ-
ation [APA], 2002). However, ethical standards delineated by diverse sources do not
always coincide. For example, the Standards for Educational and Psychological Testing
(SEPT; American Educational Research Association [AERA], APA, & National Council on
Measurement in Education, 1999) state:

Correspondence concerning this article should be addressed to: Mark H. Michaels, 211 E. Ocean Blvd. #258,
Long Beach, CA 90802; e-mail: drsmnj@earthlink.net

JOURNAL OF CLINICAL PSYCHOLOGY, Vol. 62(1), 47–58 (2006) © 2006 Wiley Periodicals, Inc.
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/jclp.20199

When test scores are used to make decisions about a test taker or to make recommendations to
a test taker or a third party, the test taker or the legal representative is entitled to obtain a copy
of any report of scores or test interpretation, unless the right has been waived or is prohibited
by law or court order.

This is not completely consistent with the EPPCC standard 9.04, which states that
“ . . . Psychologists may refrain from releasing test data to protect a client/patient or
others from substantial harm or misuse or misrepresentation of the data or the test . . .”
(p. 14). Updates of ethical codes, such as the 2002 EPPCC revision, typically supersede
previous versions. However, when various codes are not consistent with one another, or
when guidelines are offered by separate groups, newer codes may conflict with, rather
than supersede, previous practices. When addressing ethical questions, especially when
faced with disparate ethical guidelines, clinicians should make decisions with due delib-
eration of several general considerations.

This article will address ethical questions that fall within three general areas: the
balance between (a) providing information and protecting client welfare, (b) providing
information and protecting client confidentiality, and (c) utilizing information that may
be of assistance and ensuring information is reliable and valid.

Beneficence and Autonomy

Bricklin (2001) raises a critical issue of autonomy and beneficence. She underscores the
dilemma inherent in decisions about what and how to share information. Providing infor-
mation respects a client’s right to know (autonomy), while not providing information that
would be potentially harmful or disturbing protects that individual’s welfare (benefi-
cence). Though there is no one approach that will universally balance these disparate
aspirations, there are several considerations that help inform the evaluator’s approach to
writing a report.

Harm

One especially significant consideration when writing a report is how conclusions or
included data may harm the individual. Directly, a report can cause harm if it leads to
negative consequences for the individual. A few immediate consequences that can result
from the information added to a report include being denied employment, required to
stand trial, or denied health care services. Conversely, harm to others may be prevented
even if the client does not obtain a desired outcome. For example, an unqualified indi-
vidual being denied a public safety position may ultimately benefit others in the commu-
nity. In general, harm to a client may manifest in two primary ways—through a direct
impact on the individual’s emotional state or indirectly by modifying how others behave
toward that individual.

Smith (1978) discusses two problems that can arise when a client reads a report
about himself or herself: misuse of the knowledge obtained and impaired trust in the
clinician. Impaired trust in the evaluator, provided that the clinician’s sole role is as tester, is unlikely to be a problem in most cases. Harm from the information included in the report,
however, continues to be relevant long after any contact with the evaluator is concluded.
Although there is little direct information about harm caused by obtaining knowledge
from psychological reports specifically, information regarding psychiatric records may
enlighten this issue.


One way in which harm may be manifest is through the impact of report content on
others’ perceptions of the individual, including health care providers, school personnel,
or others. Markham (2003) found nursing staff rated their experience with patients receiv-
ing certain diagnoses more negatively. Socall and Holtgraves (1992) found that greater
rejection was evident toward people labeled as mentally ill, even when they exhibited behavior similar to that of non-ill individuals. Similar findings regarding differential perceptions of
students have been found in school settings (Schwartz & Wilkinson, 1987).

A second aspect of harm is the emotional distress that may be created for the client when presented with information about himself or herself. Though there is little direct research on how
seeing report information impacts clients, there are some research findings that bear on
this issue. Kosky and Burns (1995) found that, for the most part, patients’ access to their
own records created no problems, though this was not true for all individuals. Roth,
Wolford, and Meisel (1980) note how limited access to records can be beneficial. Spe-
cifically, when patients were allowed to view their records, though not keep copies,
reactions were generally positive. Others have also found that access to records can
facilitate rapport and client cooperation (Doel & Lawson, 1986; Golodetz, Ruess, &
Milhous, 1976). Bernadt, Gunning, and Quenstedt (1991) found that 28% of patients
were upset after seeing a clinical summary. They also found differing reactions depend-
ing on diagnosis. Kantrowitz (2004) found that patients had varied, though generally
positive, reactions when reading about themselves.

Interestingly, Kantrowitz (2004) also noted that some of the writers acknowledged
that knowing patients were going to be reading what was written modified what was
included. Although she was investigating written work about treatment, it is easy to
imagine how knowing that a client will read an assessment report would affect the con-
tent as well. Specific investigation of this area would be illuminating.

A concern specifically raised by Smith (1978) is that report information may be
misused. For example, misunderstanding or inaccurate application of IQ scores, diagno-
ses, or personality descriptions may be utilized to limit access to services or funding.
Smith (1978) specifically argues that misuses may also involve prematurely gained self-
knowledge, perhaps leading to treatment resistance. In addition, misinterpretation of tech-
nical terms can lead to erroneous conclusions about the individual. Consistent understanding
of technical terms has been found to be absent even among clinicians (Rucker, 1967).
Given this, it is easy to imagine how technical information could be inaccurately applied
by those not already well versed in psychological principles and jargon.

Labeling

Diagnoses of mental retardation, psychiatric illness, or other personal challenges can be
stigmatizing (Hayne, 2003). For example, psychiatric, as compared to medical, patients
have been found to be viewed more unfavorably (Fryer & Cohen, 1988). Standard 8.8 of
the Standards of Educational and Psychological Testing (AERA et al., 1999) specifies
that if labels are employed, the least stigmatizing label be used. This presents a dilemma
when omitting a label such as a diagnosis might deny the individual resources. In such a
situation, beneficence can be assigned to both providing and not providing a label. For
example, providing a diagnosis may benefit the client by ensuring external resources, but
not providing a diagnosis may be of benefit by avoiding emotional distress. This high-
lights the complicated considerations involved in weighing beneficence and autonomy.

In addition to diagnoses, labeling can occur in subtle ways as well. Comments on
cognitive weaknesses or “poor coping skills” can be construed as congenital flaws rather than as stylistic differences or as areas needing additional training. The report writer must
carefully consider how evaluation results are presented, as well as the intended and poten-
tial audiences, when offering any information that may be construed negatively.

An ancillary point is where a diagnosis may fall on the continuum of severity. Providing a more severe diagnosis may allow an individual to receive or afford needed services, while a more benign diagnosis might spare the person from potentially prejudicial labeling. Provided the decision follows EPPCC section 6.06, regarding accuracy of information given to payers for services, it should be made with the best interest of the individual in mind. This choice does require some judgment and may present a struggle for many clinicians.

Caution about labeling is particularly relevant for evaluations of minors; such results
may be viewed by numerous individuals on a treatment team and by parents (Howe &
Miramontes, 1992). Moreover, evaluation comments can become incorporated into other
documents (e.g., Individual Education Plans) without the context of the original report,
and then transferred along with the child’s records from year to year.

Intelligence Quotient Scores

Like diagnosis, IQ represents technical information that may be misconstrued by untrained
individuals. Providing intelligence test scores has long been a point of debate (Kaufman
& Lichtenberger, 2002; Lezak, 1988). This debate bears directly on EPPCC standard
9.04. Inclusion of scores allows for easy comparison, either normatively or to past eval-
uation findings. In contrast, IQ scores can easily become the focus, with subsequent
discussion of cognitive strengths and weaknesses being lost. For example, providing a
full-scale IQ may result in a child’s exclusion from an accelerated academic program,
even if the report subsequently explains the limited accuracy of the single score given
verbal and nonverbal differences or subtest scatter. The evaluator’s decision must weigh
the relative benefit of having a score included against the potential drawbacks.

Given the well-documented increase in IQ over time (Flynn, 1998), IQ scores from older tests (e.g., tests normed more than 10 years ago) are likely to be inaccurate. For example, in a few years, scores on the Wechsler Adult Intelligence Scale (Third Edition) are anticipated to be 3 points higher than when the test was first published in 1997.
Accuracy of IQ test scores takes on new magnitude when decisions are being made about
life-and-death consequences, such as forensic evaluation in death penalty cases (Ceci,
Scullin, & Kanaya, 2003). Using a range of error helps to ameliorate this problem, but
providing a numeric score could be considered inaccurate enough to raise ethical ques-
tions. The problem of increasing IQ scores may fall in a gray area when considering what
would constitute outdated test results (EPPCC section 9.08: Obsolete Tests). Not taking
the age of normative data into account may result in improper use of test results.
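The arithmetic behind these concerns is simple to sketch. The snippet below is illustrative only: the ~0.3 IQ points of norm inflation per year is a commonly cited approximation of the Flynn effect, and the standard error of measurement of 3 points is a typical value for a full-scale IQ score; neither figure comes from this article.

```python
# Illustrative sketch: Flynn-effect adjustment and a confidence band for an
# obtained IQ score. The rate (~0.3 points/year) and SEM (3 points) are
# commonly cited approximations, not values from this article.

FLYNN_RATE = 0.3  # estimated IQ points of norm inflation per year


def flynn_adjusted_iq(obtained_iq: float, years_since_norming: float) -> float:
    """Subtract the estimated norm inflation from an obtained score."""
    return obtained_iq - FLYNN_RATE * years_since_norming


def iq_confidence_interval(iq: float, sem: float = 3.0,
                           z: float = 1.96) -> tuple[float, float]:
    """95% confidence band around a score (z = 1.96 for 95% coverage)."""
    margin = z * sem
    return (iq - margin, iq + margin)


# A score of 100 on 10-year-old norms corresponds to roughly 97 against
# current norms, with a wide band of uncertainty around it.
print(flynn_adjusted_iq(100, 10))    # -> 97.0
print(iq_confidence_interval(97.0))  # roughly (91.1, 102.9)
```

Reporting the confidence band rather than the single number is one concrete way to implement the "range of error" the text recommends.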

Bricklin (2001) states that, when considering autonomy and beneficence, autonomy usually takes precedence. This guiding principle must be tempered, however, when a
choice may advance both principles to some extent. In the example of providing a diag-
nosis, including a label in the report both respects the individual’s right to know and their
welfare, at least in part, by facilitating provision of resources. Including intelligence test
scores, in particular, may span multiple ethical questions, being relevant to both potential
harm and to client confidentiality.

Beneficence and Confidentiality

Psychological evaluations contain some of the most intimate and influential information
one can obtain about another person, and care should be taken to ensure the report is only

50 Journal of Clinical Psychology, January 2006

Journal of Clinical Psychology DOI 10.1002/jclp

shared with appropriate consent or as required by law. Confidentiality is a concern when
developing the content of the report as well. Section 4.04 of the EPPCC states, “Psy-
chologists include in written and oral reports and consultations, only information ger-
mane to the purpose for which the communication was made” (p. 7). For example, if
notable information that was not part of the original referral question emerges during an
evaluation, should that information be included in the report? On one hand, such infor-
mation could be very helpful to the referral source, and ultimately to the individual. On
the other hand, providing information that was not requested may violate the individual’s
right to confidentiality (EPPCC section 4.04: Privacy). Consent to release information
would solve this dilemma in most cases; however, there may be instances when including
information would not be desired by the client, such as in some personnel or forensic
evaluations. This ethical problem may be best addressed by clarifying in advance how
such potential findings will be handled. If permitted by law, psychologists may release
information without consent to provide needed professional services (EPPCC section
4.05: Disclosures). It is questionable, however, whether this would apply to the inclusion
of information in a report, despite the potential utility of that information. In the absence
of advance clarification and consent, evaluators should be cautious and keep information
in reports germane to the referral question.

Release of Test Data

Inclusion of raw data within a report has given rise to a significant debate (Matarazzo,
1995; Naugle & McSweeny, 1995). The Standards for Educational and Psychological
Testing generally encourage omission of test data (AERA et al., 1999). Although it has
been argued that raw data should be routinely appended to neuropsychological reports,
Naugle and McSweeny (1995) point out potential ethical violations, particularly of Stan-
dard 2.02 of the Ethical Principles of Psychologists (APA, 1992) regarding misuse of test
information and section 5.03 regarding privacy. Though renumbered with the 2002 ethics
code revision (APA, 2002), the sections noted by Naugle and McSweeny (1995) remain.
Moreover, a new section addressing the inclusion of test data clarifies that, in the absence
of a release from the client, data should be provided only as required by law or court
order. Notably, reference to the qualifications of those receiving information was deleted.

In the Statement on the Disclosure of Test Data (SDTD) by the American Psycho-
logical Association Committee on Psychological Tests and Assessment (1996), several
considerations on the disclosure of raw test data are identified. These include consent to
release information, disclosure to unqualified individuals, test security and copyright
obligations, and conformity with legal statutes, regulatory mandates, and organizational
rules. Some of these considerations do not directly parallel those in the EPPCC. For
example, the SDTD discourages release of information to “unqualified” individuals, though
the EPPCC has no such admonition. Evaluators should reflect on all positions carefully
before deciding how much, if any, information is disclosed.

Although differences may occur across jurisdictions, in general, legal and ethical
release of test information cannot be done without the client’s consent (APA, 2002).
Obtaining consent to release the report would effectively allow release of all relevant
data regardless of form. In some instances, however, such as when the client is an orga-
nization rather than an individual, release of any test scores to the individual tested might
not be authorized.

Perhaps the most compelling concern with regard to raw data is the intended reader
(Pieniadz & Kelland, 2001), especially for clinicians who are working in the legal arena.

Ethical Considerations in Reports 51


It is not uncommon for raw data to be requested in psycho-legal evaluations. Moreover,
the test items or questions that form the basis for an individual’s responses are also some-
times called into question. Although the limitations on who is qualified to have access to
raw data have changed with the 2002 revision of the EPPCC, the evaluator should still be
aware of who is requesting the release of data. Though qualifications are no longer addressed
in the EPPCC, test data may still be withheld (a) if data may be misused or misrepre-
sented, and (b) to protect the client or others from substantial harm (EPPCC section
9.04). Hence, release of data to unqualified individuals may still present an ethical trans-
gression (APA, 1996).

Even prior to the revision of the APA ethics code, some argued that release of data
did not represent an ethical problem (Matarazzo, 1995). The recent revision of the ethics
code and the introduction of the Health Insurance Portability and Accountability Act of
1996 (HIPAA, 1996) requirements may have actually lessened concerns about release of
data. For example, Erard (2004) notes that release of test data presents even less of a
dilemma, especially because the EPPCC no longer requires that data be released only to
qualified individuals. However, he also notes that the changes do not fully clarify this
question, and suggests that clinicians continue to follow the Specialty Guidelines for
Forensic Psychologists (Committee on Ethical Guidelines for Forensic Psychologists,
1991) in taking reasonable steps to ensure data are interpreted by qualified professionals.
Presently, clinicians may be wise to choose the more cautious approach.

Release of Test Procedures and Materials

It is standard practice for test publishers to require clinicians’ agreement not to release
any information about a test or test materials to unqualified persons. This agreement is
typically a prerequisite for a test publisher to allow use of that instrument. Moreover,
psychologists are generally discouraged from releasing information on ethical grounds
and are required to respect copyright laws (APA, 1996). However, psychologists are
frequently asked to provide information about the contents of a test. This may be for
comparison to more current evaluation results, to clarify the basis of the evaluator’s
conclusions, or for opposing parties in legal action to challenge the results or the test
itself. Whatever the purpose, maintaining the copyright or proprietary rights to the test
material may conflict with legal and clinical needs. The APA SDTD (1996) states:

It is prudent for psychologists to be familiar with the terms of their test purchase or lease
agreements with test publishers as well as reasonably informed about relevant provisions of
the federal copyright laws. Psychologists may wish to consult with test publishers and/or
knowledgeable experts to resolve possible conflicts before releasing specific test materials to
ensure that the copyright and proprietary interests of the test publisher are not compromised.

The Statement also suggested that individuals consider the audience that might receive
the test materials, and obtain permission of test publishers before reprinting or copying
any test material. Additionally, the EPPCC specifically distinguishes between test data
and test materials, and encourages psychologists to make reasonable efforts to “maintain
the integrity and security of test materials and other assessment techniques” (p. 14).

Knapp and VandeCreek (2001) note that in forensic reports it is particularly impor-
tant to substantiate findings. A clinician may feel it is important to include relevant
examples of responses or specific data to substantiate conclusions. For example, listing
Minnesota Multiphasic Personality Inventory-2 (MMPI-2) critical items the individual
endorsed is a powerful way of communicating about that person’s functioning. In these
instances, release of test material is not supplemental to the report, but directly included
in the report content. However, maintaining test security is an ethical requirement (EPPCC
section 9.11: Test Security). It is reasonable for evaluators to exert the same caution
applicable to release of appended test data when deciding whether to include such mate-
rial within the report narrative. Of course, the evaluator can choose not to include any
specific information about an instrument within the report itself. This approach circum-
vents having to address this question unless a separate direct request is made for test
materials.

In her discussion of the decision-making process, Bricklin (2001) states that one
consideration in resolving ethical dilemmas is whether there are “compelling reasons to
deviate from the standard.” This process is made more challenging when disparate stan-
dards are, themselves, at variance. Any deviation from confidentiality standards can be
even more problematic as many jurisdictions require confidentiality by law. Again, choices
that enhance all competing interests may serve the client best. In the absence of this
possibility, striving to achieve the first objective set forth in the EPPCC (General Prin-
ciple A) should be the foremost guiding principle. That is, do no harm.

Validity and Utility

One specific aspect of providing information ethically involves the use of data that may
be of limited reliability or validity. Section 9.02 of the EPPCC states, “Psychologists use
assessment instruments whose validity and reliability have been established for use with
members of the population tested. When such validity or reliability has not been estab-
lished, psychologists describe the strengths and limitations of test results and interpretation”
(p. 13).

Further, client statements and behavior during an evaluation are considered to be test
data (EPPCC section 9.04: Release of Test Data). Many reports include client statements,
information from collateral sources, or data from less reliable instruments. This may
occur as background data or within the evaluation results. Providing information about
clients’ statements or behavior may stretch the limits of section 9.02, but may also have
great utility in helping the reader understand the individual. Hence, the psychologist must
be cautious to ensure, or at least explain the limitations of, the validity of statements in all
sections of the report. These decisions are crucial as the referral source may not distin-
guish which information is more reliable or valid once it has been combined into a report.

Clarifying the limitations of an evaluator’s observations may be a delicate venture.
For example, “clarifying” the limited reliability of a client’s statement that she never
drinks alcohol may suggest the individual is lying. The evaluator needs to weigh the
benefit of incorporating less-reliable information against the drawback of violating EPPCC
section 9.02. Judging how critical questionably valid information is in clarifying assess-
ment findings may be the yardstick for determining if a compelling reason for violating
the standard is present.

Computer-Aided Assessment and Ethics

One area that has become increasingly relevant to report writing is computer-based test
interpretation (CBTI). Many programs incorporate, along with computer scoring, inter-
pretive statements organized in a format similar to portions of a written report. Though
test publishers usually include a disclaimer that these statements are not to be considered
a final report, even aggregate incorporation of the statements may risk a breach of ethics.
Neither the individual’s unique characteristics (Butcher, Perry, & Hahn, 2004), nor com-
binations of information from different sources are incorporated in generating these
statements. The algorithmic basis for statement generation may also not be available to
the evaluator. Nevertheless, evaluators are still ethically bound to ensure the information
utilized is accurate (EPPCC section 9.09: Interpretation Services). The level of detail
necessary to make this determination has yet to be established, representing an important
area for future research.

Ensuring the accuracy of information is also challenging because the basis for inter-
pretive statements may not be clear (Lichtenberger, this issue, 2006, pp. 19–32). Mat-
arazzo (1986) notes that many computer interpretation programs follow from an expert’s
judgment, and disparate opinions are rarely included. Providing only one analysis does
not mean that interpretations are inaccurate, but does make the evaluator’s job discerning
interpretive precision more challenging. Butcher, Perry, and Atlis (2000) review several
studies that address the accuracy of CBTI interpretations. They conclude that most stud-
ies supported the accuracy of interpretation, though as much as 50% of interpretative
statements will not apply to a specific client. These findings are not universal, though
(Feldstein et al., 1999).

Questions about validity of interpretive statements and computer algorithms are not
easily addressed within the narrative of a written report. However, deciding how much
weight to put on specific results can ultimately affect the written document contents.

Taking CBTI statements at face value, such as wholesale pasting of statements into a
report, would likely be considered unethical. Attempting to explain the limitations of
such data may result in the evaluator writing more about the interpretive process than
about the client. In the end, incorporation of CBTIs may be best addressed by integrating
computer-generated information as merely one source of data, analogous to all other data
generated from the evaluation. Any interpretation written in the report would thus reflect
an amalgam of information that converges on a particular conclusion. In this way, any
limitations of computer-generated statements’ reliability or validity are at least tempered.
Including CBTI interpretations only after confirming with other sources, such as refer-
ence texts or even clinical experience, would also be helpful.

How to Address Ethical Questions in the Written Report

Provided written reports are kept to a readable length (Harvey, this issue, 2006, pp. 5–
18), evaluators have limited space in which to offer information or suggestions. Some
decisions about what to include or leave out are required, including choices about back-
ground information, previous test results, interpretive statements, diagnoses, recommen-
dations, and raw data. Perhaps the most significant of these are choices regarding
interpretation of a person’s cognitive and personality functioning. For example, incorpo-
rating statements about a person’s weaknesses or problematic behavior could lead to
negative perceptions by others, or emotional distress for the individual. Decisions about
including any interpretive statements should be governed by the guiding principles of
autonomy, beneficence, confidentiality, and, above all, nonmaleficence. Moreover, inclusion
of any information should minimize intrusion on privacy (EPPCC section 4.04).

Decisions about what to incorporate can follow several steps. The first consideration
is whether information included will harm the client. If so, it is better to leave that infor-
mation out or reword the explanation such that it is less likely to cause distress or lead to
labeling. Second, information should not be included if it will clearly or very likely
breach confidentiality. Clinicians should take reasonable care to avoid including data that
go beyond the agreed-upon scope of the evaluation, even if that information may be of
help to the client. Perhaps the wisest approach is to ensure the individual is informed and
provides consent to all findings being discussed, even if those findings end up being
adverse or disagreeable. Finally, information should be included if it will be of benefit to
the individual, provided doing so does not compromise the prior considerations.
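The steps above can be caricatured as a triage rule. The following is a hypothetical sketch only: the predicate names are invented for illustration and do not come from the article or the EPPCC, and the real judgments are clinical rather than boolean.

```python
# Hypothetical sketch of the inclusion triage described above; the predicate
# names are invented for illustration, not drawn from the article or the EPPCC.

def include_finding(harms_client: bool, can_reword: bool,
                    breaches_confidentiality: bool,
                    benefits_client: bool) -> str:
    # Step 1: potential harm -- omit, or reword to reduce distress and labeling.
    if harms_client:
        return "reword" if can_reword else "omit"
    # Step 2: a clear or very likely breach of confidentiality -- omit,
    # even if the information might help the client.
    if breaches_confidentiality:
        return "omit"
    # Step 3: include only what benefits the individual.
    return "include" if benefits_client else "omit"
```

Note the ordering: benefit is considered only after harm and confidentiality have been cleared, mirroring the priority the text assigns to nonmaleficence. A real decision would also account for informed consent to discuss adverse findings, which does not reduce to a flag.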

Language

As significant as whether information is provided is how it is provided. Specifically, how
interpretations and opinions are worded can have a significant impact on the conclusions
a reader may draw. For example, stating that an individual has a “weakness in simulta-
neous processing” has a more negative, and hence potentially deleterious, connotation
than stating that the individual “learns more effectively in a step-by-step manner.” The
content and style of any statements can moderate, or exacerbate, the impact a report may
have on perceptions of the client by the referral source or even by the client.

Writing clearly and precisely has been advocated by many authors (Harvey, this
issue, 2006, pp. 5–18; Lichtenberger, Mather, Kaufman, & Kaufman, 2004; Ownby, 1997).
Additionally, it is preferable that information be provided in a positive way, focusing on
the individual’s strengths (Snyder, Ritschel, Rand, & Berg, this issue, 2006, pp. 33–46).
Utilizing precise and thoughtful language helps address the issues of harm, labeling, and
confidentiality. Moreover, emotional distress is likely to be minimized if the information
presented is focused on capabilities rather than liabilities.

Presenting the Report

Allen et al. (1986) note that a report may generate an in-person discussion of test find-
ings. Providing verbal feedback along with written information has been demonstrated to
enhance therapeutic rapport and client self-perception (Allen, Montgomery, Tubman,
Frazier, & Escovar, 2003); it also allows for questions to be answered thoroughly. Although
it does not mandate that psychologists present information in person, section 9.10 of the EPPCC
encourages psychologists to take “reasonable steps to ensure that explanations of results
are given to the individual” (p. 14). Providing verbal feedback along with the written
report, rather than merely a copy of the report, seems preferable given the potential
shortcomings of the latter approach (Kantrowitz, 2004).

The need to provide feedback to a client raises a final ethical question: Should a copy
of the report be given to a client? The tension between refraining from releasing information
(section 9.04) and providing information (section 9.10) may present an ethical challenge
when the request is for a copy of the report rather than merely an explanation of results.
Although it has been argued that access to one’s own record enhances treatment in some
ways (Doel & Lawson, 1986), the full effect of such releases has yet to be established
with regard to psychological reports specifically. Clarifying this question will help inform
clinicians’ decisions about balancing the competing interests of autonomy and beneficence.

Future Needs

The present discussion is not meant to be comprehensive. There will inevitably be vari-
ations or elaborations of the ideas discussed in this review when working with specific
populations or questions. Custody, organizational, and worker’s compensation evalua-
tions each present unique report writing challenges (Ackerman, this issue, 2006, pp. 59–
72). Still, many of the concepts currently identified are fundamental to all reports. Some
questions raised in this discussion require more information before a comprehensive list
of options for ethical resolution can be generated. These include:


1. What impact does the release of reports or interpretive information have on cli-
ents? This includes the release to others as well as to the client directly.

2. Does knowledge that a client will read a report change the content included?

3. How do clinicians take confidentiality into account when deciding to incorporate
specific conclusions or use specific wording in reports?

4. How do clinicians address and evaluate the accuracy of computer programs’ inter-
pretive algorithms?

5. What impact does the Internet have on computer-aided interpretation and narra-
tive generation, as well as confidentiality of report documents and test stimuli?
This reflects a broader question about confidentiality of reports in electronic media.

6. One final, if only tangentially related, emerging question: how much will the prolif-
eration of readily available, professionally developed tests that yield narrative
reports (e.g., e-harmony.com-style evaluations) change the way consumers view
psychological reports in general?

Answering these questions will go a long way toward improving the basis for mak-
ing sound ethical decisions when writing psychological reports.

References

Ackerman, M.J. (2006). Forensic report writing. Journal of Clinical Psychology, 62(1), 59–72.

Allen, A., Montgomery, M., Tubman, J., Frazier, L., & Escovar, L. (2003). The effects of assess-
ment feedback on rapport-building and self-enhancement process. Journal of Mental Health
Counseling, 25, 165–182.

Allen, J.G., Lewis, L., Blum, S., Voorhees, S., Jernigan, S., & Peebles, M.J. (1986). Informing
psychiatric patients and their families about neuropsychological assessment findings. Bulletin
of the Menninger Clinic, 50, 64–74.

American Education Research Association, American Psychological Association, & National Coun-
cil on Measurement in Education. (1999). Standards for educational and psychological testing.
Washington, DC: American Educational Research Association.

American Psychological Association. (1992). Ethical principles of psychologists and code of con-
duct. American Psychologist, 47, 1597–1611.

American Psychological Association (2002). Ethical principles of psychologists and code of con-
duct. American Psychologist, 57, 1060–1073.

American Psychological Association, Committee on Psychological Tests and Assessment. (1996).
Statement on the disclosure of test data. Washington, DC: Author.

Bernadt, M., Gunning, L., & Quenstedt, M. (1991). Patients’ access to their own psychiatric records.
British Medical Journal, 303, 967.

Bricklin, P. (2001). Being ethical: More than obeying the law and avoiding harm. Journal of Per-
sonality Assessment, 77, 195–202.

Butcher, J.N., Perry, J., & Hahn, J. (2004). Computers in clinical assessment: Historical develop-
ments, present status, and future challenges. Journal of Clinical Psychology, 60, 331–345.

Butcher, J.N., Perry, J.N., & Atlis, M.M. (2000). Validity and utility of computer based test inter-
pretation. Psychological Assessment, 12, 6–18.

Ceci, S.J., Scullin, M., & Kanaya, T. (2003). The difficulty of basing death penalty eligibility on IQ
cutoff scores for mental retardation. Ethics & Behavior, 13, 11–17.

Committee on Ethical Guidelines for Forensic Psychologists. (1991). Specialty guidelines for foren-
sic psychologists. Law and Human Behavior, 15, 655–665.


Doel, M., & Lawson, B. (1986). Open records: The client’s right to partnership. British Journal of
Social Work, 16, 407–430.

Erard, R.E. (2004). Release of test data under the 2002 ethics code and the HIPAA privacy rule: A
raw deal or just a half-baked idea? Journal of Personality Assessment, 82, 23–30.

Feldstein, S.N., Keller, F.R., Portman, R.E., Durham, R.L., Klebe, K.J., & Davis, H.P. (1999). A
comparison of computerized and standard versions of the Wisconsin card sorting test. Clinical
Neuropsychologist, 13, 303–313.

Flynn, J.R. (1998). IQ gains over time: Toward finding the causes. In U. Neisser (Ed.), The rising
curve: Long term gains in IQ and related measures. Washington, DC: American Psychological
Association.

Fryer, J.H., & Cohen, L. (1988). Effects of labeling patients “psychiatric” or “medical”: Favorabil-
ity of traits ascribed by hospital staff. Psychological Reports, 62, 779–793.

Golodetz, A., Ruess, J., & Milhous, R.L. (1976). The right to know: Giving the patient his medical
record. Archives of Physical Medicine Rehabilitation, 57, 78–81.

Harvey, V.S. (2006). Variables affecting the clarity of psychological reports. Journal of Clinical
Psychology, 62(1), 5–18.

Hayne, Y.M. (2003). Experiencing psychiatric diagnosis: Client perspectives on being named men-
tally ill. Journal of Psychiatric & Mental Health Nursing, 10, 722–729.

Health Insurance Portability and Accountability Act, Pub. L. No. 104-191 (1996).

Howe, K.R., & Miramontes, O.B. (1992). The ethics of special education. New York: Teachers
College Press.

Kantrowitz, J.L. (2004). Writing about patients: II. Patients’ reading about themselves and their
analysts’ perceptions of its effect. Journal of the American Psychoanalytic Association, 52,
101–123.

Kaufman, A.S., & Lichtenberger, E.O. (2002). Assessing Adolescent and Adult Intelligence (2nd
ed.). Boston: Allyn & Bacon.

Knapp, S., & VandeCreek, L. (2001). Ethical issues in personality assessment in forensic psychol-
ogy. Journal of Personality Assessment, 77, 242–254.

Kosky, N., & Burns, T. (1995). Patient access to psychiatric records: Experience in an inpatient
unit. Psychiatric Bulletin, 19, 87–90.

Lezak, M.D. (1988). IQ: R.I.P. Journal of Clinical and Experimental Neuropsychology, 10, 351–361.

Lichtenberger, E.O. (2006). Computer utilization and clinical judgment in psychological assess-
ment reports. Journal of Clinical Psychology, 62(1), 19–32.

Lichtenberger, E.O., Mather, N., Kaufman, N.L., & Kaufman, A.S. (2004). Essentials of assess-
ment report writing. New York: Wiley.

Markham, D. (2003). Attitudes towards patients with a diagnosis of ‘borderline personality dis-
order’: Social rejection and dangerousness. Journal of Mental Health (UK), 12, 595–612.

Matarazzo, J.D. (1986). Computerized clinical psychological test interpretations: Unvalidated plus
all mean and no Sigma. American Psychologist, 41, 14–24.

Matarazzo, R.G. (1995). Psychological report standards in neuropsychology. The Clinical Neuro-
psychologist, 9, 249–250.

Naugle, R.I., & McSweeny, A.J. (1995). On the practice of routinely appending neuropsychological
data to reports. The Clinical Neuropsychologist, 9, 245–247.

Ownby, R.L. (1997). Psychological reports: A guide to report writing in professional psychology
(3rd ed.). New York: Wiley.

Pieniadz, J., & Kelland, D.Z. (2001). Reporting scores in neuropsychological assessments: Ethi-
cality, validity, practicality, and more. In C.G. Armengol, E. Kaplan, & E.J. Moes (Eds.), The
consumer-oriented neuropsychological report (pp. 123–140). Lutz, FL: PAR.

Roth, L.H., Wolford, J., & Meisel, A. (1980). Patient access to records: Tonic or toxic? American
Journal of Psychiatry, 137, 592–596.


Rucker, C.N. (1967). Technical language in the school psychologist’s report. Psychology in the
Schools, 4, 146–150.

Schwartz, N.H., & Wilkinson, W.K. (1987). Perceptual influence of psychoeducational reports.
Psychology in the Schools, 24, 127–135.

Smith, W.H. (1978). Ethical, social, and professional issues in patients’ access to psychological test
reports. Bulletin of the Menninger Foundation, 42, 150–155.

Snyder, C.R., Ritschel, L.A., Rand, K.L., & Berg, C.J. (2006). Balancing psychological assess-
ments: Including strengths and hope in client reports. Journal of Clinical Psychology, 62(1),
33–46.

Socall, D.W., & Holtgraves, T. (1992). Attitudes toward the mentally ill: The effects of label and
beliefs. Sociological Quarterly, 33, 435–445.

