
Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also quietly damage your professional reputation.
On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.
"Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.
The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI.
What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.

Fig. 1 from the paper "Evidence of a social evaluation penalty for using AI." Credit: Reif et al.
"Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."
The hidden social cost of AI adoption
In the first experiment conducted by the Duke team, participants imagined using either an AI tool or a dashboard creation tool at work. It revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.
The second experiment showed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.