Editor’s Note: AI-powered credit appropriation is emerging as a distinct workplace behavior pattern — one that exploits the opacity of generative AI to let employees claim ownership of work built on colleagues’ prompts, processes, and institutional knowledge. Survey data reveals the scope: 55 percent of workers globally have presented AI-generated content as their own, and 61 percent have hidden their AI use from supervisors. The pattern has direct parallels in journalism, where attribution norms evolved over decades to draw a hard line between building on others’ reporting and stealing it — norms that corporate environments could adapt immediately. For cybersecurity, information governance, and eDiscovery professionals, the implications extend beyond office politics into operational risk, defensibility challenges, and regulatory exposure under the EU AI Act and NIST’s AI Risk Management Framework. The window for building attribution frameworks is narrowing fast. Professionals across regulated industries should track how their organizations are adapting recognition systems, prompt ownership policies, and audit trail requirements to match the reality of AI-augmented work.

Industry News – Artificial Intelligence Beat

The AI Appropriator: A New Species of Credit Thief Is Reshaping the Corporate Workplace

ComplexDiscovery Staff

Someone on your team just delivered a polished risk assessment in half the time it normally takes. The analysis is sharp, the formatting pristine, the language tight. The colleague accepts the praise, the promotion points, maybe even a bonus cycle nod — and never mentions that an AI tool did 80 percent of the heavy lifting, fueled by prompts and frameworks that other team members spent months developing.

Welcome to the era of the AI Appropriator.

The term describes a rapidly emerging workplace archetype: the employee who feeds internally developed prompts, proprietary processes, and colleagues’ work product into generative AI systems, then presents the polished output as their own original contribution. It is not garden-variety laziness. It is a sophisticated form of credit capture that exploits the opacity of AI-assisted workflows — and it is catching organizations flat-footed at the worst possible moment, just as enterprise AI adoption accelerates past the point of no return.

A Very Old Problem With a Very New Engine

Credit theft in the workplace predates electricity, let alone artificial intelligence. BambooHR’s “Bad Boss Index” found that taking credit for employees’ work ranked as the single most unacceptable boss behavior for roughly three-quarters of surveyed workers. Research by University of Plymouth psychologist Timothy Hollins and colleagues has documented “cryptomnesia,” the cognitive phenomenon in which individuals genuinely misremember another person’s idea as their own. The workplace has always had its share of people who repackage others’ thinking and claim authorship, whether deliberately or through the quieter machinery of self-serving memory.

What makes the AI Appropriator different is scale, speed, and plausible deniability.

In the pre-AI workplace, stealing credit required at least some effort: attending the meetings, reading the memos, manually rewriting the deliverable. The appropriator had to be present in the process, which left fingerprints. Today, an employee can copy a colleague’s carefully crafted prompt library — the product of weeks of iterative refinement — paste it into a generative AI tool, feed in source materials from a shared drive, and produce a polished deliverable in minutes. The output bears no visible trace of its origins.

“AI prompts can constitute valuable intellectual property and are protectable as trade secrets or works of copyright,” according to analysis by Klemchuk LLP, a Dallas-based intellectual property firm. The legal framework recognizes the value. Corporate culture has not caught up.

The Anatomy of AI Appropriation

The behavior manifests in several distinct patterns that cybersecurity, information governance, and eDiscovery professionals should recognize immediately, because these are the very disciplines where prompt libraries, analytical frameworks, and investigative methodologies carry enormous competitive value.

The first pattern is prompt harvesting. A team member accesses shared repositories of AI prompts — developed through collaborative effort and institutional knowledge — and uses them to generate outputs that they present as solo work. The prompts themselves encode expertise: a well-crafted eDiscovery review prompt, for example, might reflect years of experience with privilege classification, document prioritization, and relevance scoring. The employee who copies that prompt captures the embedded expertise without contributing to its development.

The second pattern is process piggybacking. Here, the appropriator doesn’t just take prompts — they feed entire workflows, templates, or analytical frameworks developed by colleagues into AI systems. A compliance team’s regulatory mapping methodology, painstakingly refined over multiple audit cycles, becomes raw material for someone else’s AI-generated “original” analysis.

The third pattern is output laundering. The appropriator uses AI to rewrite, restructure, or stylistically alter work product that was substantially created by others, producing something that looks different enough to claim as new. In information governance contexts, this might mean taking a colleague’s data classification scheme, running it through an AI tool with minor modifications, and presenting the result as an independent assessment.

Each pattern shares a common thread: the appropriator extracts value from collective intellectual effort and individualizes the credit.

Recognizing these patterns requires acknowledging that AI appropriation exists on a spectrum. At one end sits intentional appropriation: the employee who knowingly copies a colleague’s prompt library, generates output, and claims solo authorship with full awareness of what they are doing. In the middle sits inadvertent appropriation: the employee who uses shared AI tools and team repositories without understanding that attribution is expected, often because the organization has never established those norms. At the other end sits legitimate building: the employee who uses a shared prompt framework as a starting point, adds substantial original analysis, judgment, and domain expertise, and produces something genuinely new — but fails to credit the foundation. These are different behaviors with different remedies. Organizations that treat the entire spectrum as a single offense will alienate the builders and the unaware alike. Organizations that distinguish between them can target enforcement where it matters and education where it helps.

The Data Behind the Disconnect

The conditions enabling AI appropriation are already widespread. Gallup’s Q4 2024 tracking data shows 46 percent of U.S. workers have used AI at work, with adoption concentrated among leaders (69 percent) and managers (55 percent) compared to individual contributors (40 percent). A Pew Research Center analysis published in October 2025, based on a survey of 8,750 adults, found that 21 percent of U.S. workers now use AI on the job, up from the prior year. Globally, the numbers run higher: a study of over 32,000 workers across 47 countries, reported by The Conversation in April 2025, found that 58 percent of employees intentionally use AI at work, with a third doing so weekly or daily.

The transparency gap is where the appropriation problem festers. An EisnerAmper survey of 1,017 U.S. desk workers in mid-2025 found that only 41 percent of employees inform their manager or seek permission before using AI — even as 84 percent of managers acknowledged some level of AI use on their teams. The same global study of 32,000 workers delivered a finding that cuts directly to the heart of AI appropriation: 55 percent of employees admitted they had presented AI-generated content as their own work, and 61 percent had actively avoided revealing when they used AI. Nearly half — 47 percent — acknowledged using AI in ways that could be considered inappropriate, while 66 percent had relied on AI output without evaluating it. Meanwhile, according to EisnerAmper, 28 percent of employees said they would use AI at work even if their employer banned it.

For eDiscovery and information governance professionals, these numbers should trigger immediate concern. If employees are feeding proprietary data, case strategies, and client information into AI tools without disclosure, the risks extend well beyond credit misattribution. They encompass privilege waiver, data breach, and regulatory noncompliance.

The Organizational Blind Spot

Most enterprises have rushed to develop AI acceptable use policies, but the policies overwhelmingly focus on data security, bias, and compliance — not attribution. The question of “who did the work” when AI is involved remains largely unaddressed.

This is a consequential gap. HR Dive reported in 2025 that AI ethics policies for the workplace remain in early stages at many organizations, with attribution and credit-sharing frameworks ranking low on the priority list behind data privacy and algorithmic bias. Littler Mendelson, one of the largest employment law firms in the United States, flagged in its 2025 guidance that employers should consider addressing AI-assisted work attribution in their acceptable use policies — but acknowledged that few have done so.

The EU AI Act, with transparency obligations set to take full effect in August 2026, may force the issue internationally. The Act requires that AI-generated or AI-manipulated text published to inform the public must be disclosed unless it has undergone genuine human review and a natural or legal person assumes editorial responsibility. Non-compliance carries penalties reaching 35 million euros or 7 percent of global annual turnover, whichever is higher, for the most serious violations. While the Act targets public-facing content rather than internal work product, its underlying principle — that people deserve to know when AI produced something — has already begun reshaping corporate disclosure norms.

Old Parallels, New Urgency

The AI Appropriator is not without historical analogs in corporate life. Understanding these parallels helps organizations see that the core problem is not technological but behavioral — and that solutions already exist for the behavioral dimension.

The managerial credit hoarder — the boss who presents team work as personal achievement in executive meetings — has been a fixture of corporate life for decades. Research published in the Journal of Business Ethics in 2013 by William Graham and William Cooper examined how credit-claiming operates as a form of social undermining, separating individuals from recognition of their labor and eroding trust across teams. The AI Appropriator does the same thing, but the AI intermediary makes detection harder. Where colleagues might notice a manager repeating their talking points in a boardroom, they may never know their prompt library is being used to generate reports in another department.

The power asymmetry sharpens when the appropriator is the manager rather than a peer. A direct report who discovers that their supervisor is feeding team-built prompts into AI and presenting the polished output to senior leadership faces a structural problem that no attribution policy alone can solve. The subordinate cannot easily challenge the person who writes their performance review. This dynamic demands safeguards that go beyond organizational-level frameworks: skip-level reviews of AI-assisted deliverables, prompt-library access logs visible to governance teams rather than just direct supervisors, and anonymous reporting channels that allow contributors to flag appropriation without career risk. Without these mechanisms, attribution policies protect against lateral appropriation between peers but leave the vertical variety — historically the most common and most damaging form of credit theft — structurally intact.

The intellectual property poacher — the colleague who copies code, lifts research, or repurposes deliverables without attribution — is another established archetype. Attorney Aaron Hall, writing about IP disputes from employee AI use, noted that AI use policies integrated within employment contracts help mitigate these disputes by defining permissible applications, providing guidelines on data handling, and specifying attribution of AI-assisted outputs. The legal infrastructure for addressing IP theft exists. It simply has not been extended to cover the AI-assisted variety systematically.

The ghost contributor — the team member whose quiet, foundational work gets absorbed into someone else’s visible output — may be the closest analog. In data-intensive disciplines like eDiscovery, the analyst who builds the search methodology rarely gets the same recognition as the partner who presents the findings. AI amplifies this dynamic: the person who built the prompt framework may see their encoded expertise generating outputs across the organization with no trail back to their contribution.

The Journalism Parallel: An Industry That Already Solved This

Perhaps no profession offers a more instructive parallel than journalism — an industry built entirely on the practice of creating new works from the research, reporting, and intellectual labor of others, and one that developed attribution norms precisely because the alternative was chaos.

The wire service model is the clearest analog. For over a century, organizations like the Associated Press and Reuters have produced original reporting that thousands of local outlets then rewrite, localize, and publish. The norm is unambiguous: you credit the wire service. “According to reporting by the Associated Press” is standard practice, not a courtesy. When a local outlet runs a rewritten AP story under a staff byline with no wire credit, it is a professional violation — and newsrooms enforce that boundary. The AI Appropriator does the equivalent every time they take a colleague’s prompt framework, generate a polished deliverable, and strip out any trace of the intellectual origin.

Digital journalism confronted a version of this problem head-on during the aggregation wars of the 2010s. As outlets discovered they could summarize and repackage others’ original reporting for clicks, the line between aggregation and theft became a live ethical debate. Former New York Times executive editor Bill Keller warned publicly that the distinction between the two was often dangerously thin. The industry responded by developing norms: link to the original, credit the reporter by name and outlet, and add your own analysis or context to justify the new piece. The best aggregators — the ones that survived and built audiences — became valuable precisely because they added perspective rather than merely extracting it. The AI Appropriator skips every one of these attribution steps. They aggregate colleagues’ intellectual work through an AI intermediary and present the synthesized output as original.

Beat reporting offers another sharp parallel. A journalist covering cybersecurity or legal technology spends years cultivating sources — building trust, developing relationships, learning which questions yield actionable intelligence and which produce noise. When a colleague uses those sources without acknowledgment, it is a serious breach of newsroom ethics. In the corporate AI context, a carefully tuned prompt library is the functional equivalent of a source network: it encodes domain expertise, institutional knowledge, and hard-won understanding of what inputs produce reliable outputs. Appropriating someone’s prompt library is the intellectual equivalent of poaching their sources and filing the story under your own byline.

Journalism also offers a ready-made solution that corporate environments could adopt almost immediately: the “hat tip.” The practice — “h/t @reporter” on social media, “first reported by” in articles — is a lightweight attribution mechanism that acknowledges intellectual debt without slowing down the work. It costs nothing, takes seconds, and maintains the trust infrastructure that makes collaborative information work possible. The Society of Professional Journalists codified the underlying principle decades ago in two words: “Never plagiarize. Always attribute.” Organizations struggling with AI appropriation do not need to invent new norms from scratch. They need to adapt the ones journalism already proved work — and enforce them with the same professional seriousness. Journalism’s attribution norms haven’t eliminated the behavior — digital media created new pressures that test them daily — but they established the professional consensus that makes enforcement possible.

The parallel extends to one final, uncomfortable truth. In journalism, plagiarism is a career-ending offense — not because the words themselves were so valuable, but because the act destroys the trust that makes the entire enterprise function. Jayson Blair at the New York Times, Jonah Lehrer at The New Yorker, and dozens of others lost their careers not for the volume of material they appropriated, but for the systemic dishonesty the appropriation revealed. AI appropriation in the corporate workplace operates on the same fault line. The damage is not measured in any single deliverable. It is measured in the erosion of trust that occurs when people realize their intellectual contributions are being harvested without credit — and that the organization’s systems are either unable or unwilling to distinguish between creation and extraction.

Building a Framework That Protects Without Paralyzing

The challenge for organizations is nuanced. Crack down too hard on AI use, and you lose the productivity and quality gains that these tools deliver. Ignore the appropriation problem, and you erode trust, demoralize the people who actually build institutional knowledge, and create perverse incentives that reward extraction over creation.

Several principles can guide a balanced approach.

First, treat prompt libraries and AI workflows as institutional assets with clear ownership records. Just as organizations track code repositories with version control and attribution logs, they should maintain registries of AI prompts and workflows that record who created them, when, and under what conditions. Shift Law, a Canadian intellectual property firm, has argued that AI prompts warrant trade secret protection when businesses restrict access, use nondisclosure agreements, and label prompt repositories as confidential. That framework applies equally to internal attribution.
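For teams that want to make this concrete, a minimal sketch in Python suggests what such a registry entry might capture. Everything here is illustrative: the field names, the append-only design, and the content fingerprint are assumptions about one reasonable shape for attribution records, not a reference implementation of any particular tool.

```python
from dataclasses import dataclass
import hashlib


@dataclass(frozen=True)
class PromptRecord:
    """One version of a shared prompt, with its attribution metadata."""
    prompt_id: str                    # stable identifier, e.g. "ediscovery/priv-review"
    version: int                      # incremented on every revision
    text: str                         # the prompt itself
    author: str                       # who wrote or revised this version
    created_at: str                   # ISO 8601 timestamp
    derived_from: str | None = None   # prior version or external source, if any

    @property
    def fingerprint(self) -> str:
        """Content hash; useful for matching prompts that surface in outputs or logs."""
        return hashlib.sha256(self.text.encode("utf-8")).hexdigest()[:16]


class PromptRegistry:
    """Append-only in-memory registry. A real deployment would back this with
    version control or a database, plus the access restrictions and
    confidentiality labels that support trade secret protection."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, int], PromptRecord] = {}

    def register(self, record: PromptRecord) -> None:
        key = (record.prompt_id, record.version)
        if key in self._records:
            raise ValueError(f"{key} already registered; entries are immutable")
        self._records[key] = record

    def latest(self, prompt_id: str) -> PromptRecord:
        versions = [rec for (pid, _), rec in self._records.items() if pid == prompt_id]
        if not versions:
            raise KeyError(prompt_id)
        return max(versions, key=lambda rec: rec.version)


# Example: registering version 1 of a hypothetical privilege-review prompt.
registry = PromptRegistry()
registry.register(PromptRecord(
    prompt_id="ediscovery/priv-review",
    version=1,
    text="Classify each document for attorney-client privilege ...",
    author="j.analyst",
    created_at="2026-01-15T10:00:00Z",
))
```

The point is not the code but the record: who created which version, when, and what it was derived from — the same questions a code repository already answers by default.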

Second, require disclosure of AI assistance in work product — not as a punishment, but as a professional norm. The goal is not to stigmatize AI use. It is to create a culture where saying “I used AI tools, building on our team’s prompt library, to produce this analysis” is a mark of competence rather than a confession. Fisher Phillips, a national labor and employment law firm, recommended in 2025 that organizations clarify in their policies how copyright applies to AI-assisted work, noting that work involving substantial human drafting, editing, and original analysis retains copyright protection — while purely unedited AI output does not.
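Building on the hypothetical registry sketched above, that disclosure can even be generated rather than handwritten. The function below is an assumption about one possible format, not a mandated standard:

```python
def disclosure_line(record: PromptRecord, tool: str, reviewer: str) -> str:
    """Render a standard AI-assistance disclosure for a deliverable's footer."""
    credit = record.author
    if record.derived_from is not None:
        credit = f"{record.author} (building on {record.derived_from})"
    return (
        f"Prepared with {tool} using team prompt "
        f"'{record.prompt_id}' v{record.version}, authored by {credit}; "
        f"reviewed and edited by {reviewer}."
    )
```

A footer like this is the corporate equivalent of journalism’s hat tip: it costs seconds, and it keeps the attribution trail attached to the work product.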

Third, align recognition systems with AI-augmented workflows. Performance reviews, project credits, and bonus structures should account for both visible outputs and the underlying intellectual contributions that made those outputs possible. The person who spent three weeks building a regulatory compliance prompt that the entire team now uses is delivering substantial value — and the recognition system should reflect that.

Fourth, invest in AI literacy at every level. BCG’s 2025 AI at Work report found that only 36 percent of employees were satisfied with their AI training, even though nearly three in four already use the tools regularly. The same report found that 54 percent of workers would use AI without official authorization — a number that skews even higher among younger employees. Organizations where employees understand both the capabilities and the attribution implications of AI tools are far less likely to develop appropriation cultures. Training that addresses not just how to use AI but how to credit the people and processes behind AI-assisted work closes both the skills gap and the ethics gap simultaneously.

Fifth, build lightweight audit trails into AI-assisted workflows. This does not require surveillance. It means adopting tools and platforms that naturally log which prompts were used, what source materials were fed into the system, and who initiated the process. For eDiscovery and information governance teams, this approach aligns with existing defensibility requirements — the same rigor applied to document review processes should extend to AI-assisted work.
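What “lightweight” can mean in practice: a thin wrapper that writes one provenance record before each model call. The sketch below continues the hypothetical registry example; the log fields and the generate callable are illustrative assumptions, not any vendor’s API.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")


def run_with_audit(record: PromptRecord, sources: list[str],
                   user: str, generate: Callable[[str], str]) -> str:
    """Log who ran which prompt version against which sources, then generate.

    `generate` is any callable that takes prompt text and returns output;
    the provenance entry, not the model call, is the point of this sketch.
    """
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_id": record.prompt_id,
        "prompt_version": record.version,
        "prompt_fingerprint": record.fingerprint,
        "source_materials": sources,
    }))
    return generate(record.text)
```

In an eDiscovery context, an entry like this serves the same function as a chain-of-custody log: it lets the organization answer, after the fact, who initiated the work and whose encoded expertise it drew on.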

The Stakes for Regulated Industries

For professionals operating in cybersecurity, information governance, and eDiscovery, the AI Appropriator problem carries risks beyond bruised egos and misallocated bonuses.

In eDiscovery, the chain of custody for analytical methodologies matters. If an AI-generated search methodology cannot be attributed to a qualified professional who can testify to its reliability, it may face challenges in court. If that methodology was built on someone else’s prompt framework without acknowledgment, the lack of transparency compounds the defensibility risk.

In cybersecurity, the stakes are operational, not just cultural. Security operations centers increasingly rely on AI-assisted threat detection, incident triage, and vulnerability prioritization. When an analyst appropriates a colleague’s carefully tuned detection prompts or threat-modeling frameworks and deploys them without understanding the assumptions baked into the logic — which threat intelligence feeds informed the model, what false-positive thresholds were calibrated, which attack vectors were prioritized — the organization’s defensive posture rests on a foundation that the credited individual cannot explain or defend under scrutiny. NIST’s AI Risk Management Framework, extended in 2024 with its Generative AI Profile, explicitly calls for documentation of AI system design decisions and accountability for outputs. An appropriated AI workflow with no attribution trail fails that test.

In information governance, data classification and retention policies built through AI-assisted analysis must be traceable to accountable professionals. The EU AI Act’s transparency obligations, set to take full effect in August 2026, will require disclosure of AI-generated content in public-facing contexts. The second draft of the EU Code of Practice on Transparency of AI-Generated Content, published on March 3, 2026, pushed these principles closer to operational reality — and the final version, expected by June 2026, will establish the practical benchmarks that deployers across industries must meet. For multinational organizations, the challenge multiplies: teams in the EU, the United States, and Asia-Pacific operate under different AI disclosure regimes, different intellectual property protections for prompts, and different cultural norms around attribution. An AI appropriation framework that works in one jurisdiction may have no enforcement mechanism in another, creating gaps that cross-border teams — common in both eDiscovery and cybersecurity operations — must actively manage.

Across all three disciplines, the talent dimension may ultimately prove the most consequential. ISC2’s 2024 Cybersecurity Workforce Study estimated the global cybersecurity workforce gap at 4.8 million professionals — a 19 percent year-over-year increase — with 67 percent of organizations reporting staffing shortages. The eDiscovery sector faces parallel pressure: specialized recruiting firm Iceberg has documented persistent talent shortages in every major legal market, with burnout ranking as the top driver of job-seeking among eDiscovery professionals for 11 consecutive months. In an environment where skilled professionals are this scarce and this mobile, AI appropriation is not just an ethics problem. It is a retention problem. The analyst who spent months building the prompt framework that now powers the team’s output will leave when they see someone else collecting the credit — and they will take their expertise to an organization that values it. Every attribution failure is a quiet invitation for your most capable people to walk out the door.

A Question Worth Asking

The AI Appropriator is not a future problem. They are already in your organization, enabled by a combination of powerful tools, unclear norms, and attribution frameworks that were designed for a pre-AI workplace. The solution is not to restrict AI adoption — that ship has sailed, and the competitive disadvantages would be severe. The solution is to build cultures and systems where AI amplifies human contribution without obscuring it.

As organizations race to embed AI into every workflow, who is responsible for ensuring that the people who build the institutional knowledge powering those tools actually receive credit for their work — and what happens when no one is?


Publisher’s note: Generative AI tools assisted with research synthesis and initial drafting for this article under human‑developed prompts and workflows. ComplexDiscovery editors provided final review and assume full editorial responsibility for the published content.


