AI writing tools enhance email professionalism but can diminish trust. A study found that while employees rate AI-polished messages as clear and professional, they distrust managers who rely heavily on AI, particularly for interpersonal communication like praise.
Artificial intelligence has slipped into the everyday rhythm of office life. From ChatGPT and Gemini to Copilot and Claude, more than three-quarters of professionals now use AI to help craft or polish their messages. It’s quick, it’s efficient — but is it always the right choice when it comes to sensitive workplace communication?
A new study suggests not. Researchers from the University of Florida’s Warrington College of Business and the University of Southern California have found that while AI tools can make managers’ emails look sharper and more professional, they can also chip away at something far more valuable — trust.
“We see a tension between perceptions of message quality and perceptions of the sender,” said study co-author Anthony Coman, Ph.D. “Despite positive impressions of professionalism, managers risk their trustworthiness when they lean too heavily on AI for routine communication.”
The research, published in the International Journal of Business Communication, surveyed 1,100 professionals who were shown congratulatory messages written with varying degrees of AI assistance — from light grammar edits to heavily AI-crafted text. Participants then rated both the quality of the message and their perception of the sender.
The results were revealing: while AI-assisted writing often scored high for clarity and polish, employees became more skeptical when they believed a manager had used medium to high levels of AI help. Trust took a noticeable hit — only 40–52% of employees felt supervisors were sincere with high-AI messages, compared to 83% for lightly assisted ones.
Perceived professionalism also slipped. Ninety-five percent of respondents called low-AI messages professional, but that figure fell to around 70% when heavy AI involvement was suspected.
Part of the problem is perception. People tend to view their own AI use as reasonable no matter the level, but are less forgiving when others — especially bosses — do the same. When AI is used for relationship-driven communication, such as praise, congratulations, or motivation, employees can interpret it as lazy or insincere.
“In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, pointing specifically to perceived ability and integrity — two key ingredients of cognitive-based trust.