Adam Kidan on the “Shadow Co-Worker”: How Hidden AI Use Is Forcing Leaders to Rethink Work
Oct. 13, 2025, Published 1:28 p.m. ET

Employees are secretly turning to AI to get ahead. Instead of fighting it, leaders need to understand what it reveals about trust, culture, and the future of work.
The Co-Worker You Can’t See
Walk through almost any workplace today, and you will find an extra co-worker that no one hired. It never appears on payroll reports or in the staff directory, but it is there all the same. That co-worker is artificial intelligence.
Surveys suggest nearly half of American employees are already using AI on the job, often without telling their managers. This quiet adoption of what some call “shadow AI” is reshaping how work gets done in ways that most leaders have not fully grasped. For business leadership, this is not just a technology issue. It is a test of trust, governance, and culture.

Why Workers Keep It Quiet
Most employees who use AI are not cutting corners. They are trying to be more efficient, creative, and competitive. AI can draft proposals, clean up resumes, or generate marketing ideas in minutes instead of hours.
The secrecy usually comes from fear. Workers worry that managers will see AI as cheating, or worse, as evidence their role is less valuable. In trying to protect themselves, they end up creating invisible workflows that leadership cannot monitor or guide.
The Blind Spot in Leadership
When executives discover AI is being used without approval, the common reflex is to ban it. Block ChatGPT, send out policy memos, or threaten penalties. But bans rarely solve the problem. They simply push usage further underground.
The greater risk is not that employees are experimenting with AI, but that companies have no visibility into how it is being used. A line of AI-generated code might save a developer a day of work, but it could also introduce a costly security flaw. A polished report written by an algorithm might impress at first glance, yet miss the nuance a client expects.
Pretending shadow AI does not exist leaves organizations blind to both risks and opportunities.
Turning Risk Into Value
The smarter move is not to eliminate AI but to bring it into the open. Leaders can start by:
- Encouraging openness. Create a culture where employees can share how they are using AI without fear of punishment.
- Providing testing grounds. Give staff access to approved platforms where they can experiment safely.
- Setting clear rules. Develop simple policies for data security, confidentiality, and required human oversight.
Handled this way, shadow AI stops being a liability and becomes a source of innovation.

What Comes Next
Shadow AI will not remain invisible forever. As tools become embedded in enterprise platforms, companies will gain more oversight — but also more responsibility. The real question is no longer whether employees use AI, but whether leaders can guide that use responsibly.
The organizations that succeed will be the ones that build trust, encourage safe experimentation, and keep human judgment at the center of decision-making.
The shadow co-worker is already here. Leaders who choose to recognize and guide it will not only reduce risk; they will shape the future of work itself.
About Adam Kidan
Adam Kidan is the President of Empire Workforce Solutions and an experienced entrepreneur. Throughout his career, he has been involved in building businesses, navigating workforce challenges, and advocating for approaches that give workers meaningful opportunities. His perspective combines lessons from leadership, law, and business reinvention, allowing him to speak with authority on how staffing and labor trends are reshaping the economy.