Slop Cop: Real-Time Detection of LLM Writing Patterns in Your Browser


What it is

Slop Cop is a browser-based text editor that detects the characteristic writing patterns of LLM-generated prose and highlights them in real time.

Think of it like a linter, but for prose instead of code. You paste your text in, and it immediately starts flagging patterns that feel like AI: hedged language, throat-clearing openers, the same dozen intensifiers everyone uses, false conclusions, em-dash spam, and dozens of other tells.

Slop Cop is not a plagiarism detector. It does not try to answer “was this written by AI?” It answers “does this read like it was written by AI?” The distinction matters: humans can write slop too, and AI can sometimes produce genuinely good prose. What Slop Cop catches is the texture of generic LLM output.

The project comes from Awnist, and it runs entirely in your browser.

Why it exists

LLMs trained on human feedback develop characteristic writing tics. They hedge reflexively, open with throat-clearing, reach for the same dozen intensifiers, structure arguments as “not X, but Y,” and inflate ordinary points to world-historical significance. These patterns are learnable and detectable.

The problem is that when you are the one writing, you often do not notice these patterns. You are too close to the text. Slop Cop makes these tics visible so you can make an informed decision about whether to keep or cut them.

The project is based on research from Wikipedia’s “Signs of AI Writing” page and a pattern taxonomy called LLM_PROSE_TELLS.md. The creator noticed that LLM output has recognizable fingerprints, and built a tool to surface them.

How it works

Detection runs in two tiers:

Client-side (instant): 35 rules implemented as regex and structural analysis. These fire on every keystroke after a 350ms debounce. No API key needed. You get instant feedback as you type.

Semantic (optional): Two parallel calls to the Anthropic API, triggered manually when you want deeper analysis. These go directly from your browser to Anthropic. Your API key is stored in localStorage and sent only to Anthropic, never to any other server.

  • Fast pass uses Claude Haiku (about 5 seconds) and catches sentence and paragraph-level patterns that require actual language understanding. Things like triple construction, sycophantic framing, and unnecessary elaboration. Large documents are automatically split into overlapping chunks and analyzed in parallel.

  • Deep pass uses Claude Sonnet (about 15 seconds) and catches document-level patterns only visible at scale. Things like dead metaphor repetition, one-point dilution, and fractal summaries.
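In code, the client-side tier amounts to running a list of regex rules over the text on a debounce timer. Here is a minimal sketch; the rule shape, rule IDs, and word lists are illustrative guesses, not the project's actual rules.ts definitions:

```typescript
// Hypothetical rule and violation shapes -- the real rules.ts may differ.
interface Rule {
  id: string;
  category: string;
  pattern: RegExp;
}

interface Violation {
  ruleId: string;
  start: number;
  end: number;
  match: string;
}

const rules: Rule[] = [
  { id: "overused-intensifier", category: "word", pattern: /\b(crucial|robust|pivotal|delve)\b/gi },
  { id: "false-conclusion", category: "structure", pattern: /\b(in conclusion|at the end of the day)\b/gi },
];

function runRules(text: string, ruleSet: Rule[] = rules): Violation[] {
  const violations: Violation[] = [];
  for (const rule of ruleSet) {
    // Reset lastIndex so repeated runs on new text start from position zero.
    rule.pattern.lastIndex = 0;
    let m: RegExpExecArray | null;
    while ((m = rule.pattern.exec(text)) !== null) {
      violations.push({ ruleId: rule.id, start: m.index, end: m.index + m[0].length, match: m[0] });
    }
  }
  return violations;
}

// Debounce so detection fires 350ms after the last keystroke, not on each one.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const debouncedDetect = debounce((text: string) => runRules(text), 350);
```

With 35 rules this stays cheap enough to run on every pause in typing, which is why no API key is needed for the instant tier.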

Patterns detected

The client-side instant rules catch about 35 distinct patterns. Here are some of the key ones:

Overused Intensifiers: Words like crucial, robust, pivotal, unprecedented, tapestry, nuanced, paradigm, leverage, delve, and about 15 more. These words show up far more frequently in LLM output than in human writing.

Elevated Register: Words that sound fancy but add nothing. “utilize” instead of “use,” “commence” instead of “start,” plus facilitate, endeavor, demonstrate, and craft, and the phrase “moving forward.”

Filler Adverbs: Sentence-opening words like “importantly,” “ultimately,” “essentially,” “fundamentally” that pad the start of sentences without adding meaning.

“Almost” Hedge: Phrases like “almost always,” “almost certainly,” “almost never” that hedge without committing.

Era Opener: “In an era of…” or “In a world where…” constructions that set up false importance.

Metaphor Crutch: Overused metaphors like “double-edged sword,” “game changer,” “north star,” “deep dive,” “paradigm shift,” “perfect storm.”

“It’s Important to Note”: Phrases like “it is important to note,” “it’s worth noting,” “it should be noted” that signal generic authority.

“Broader Implications”: Phrases like “broader implications” or “wider implications” that inflate ordinary observations.

False Conclusion: Phrases like “In conclusion,” “At the end of the day,” “To summarize,” “Moving forward” that pad endings.

Connector Addiction: Paragraph-opening words like “Furthermore,” “Moreover,” “Additionally,” “However,” “That said” that create a mechanical rhythm.

Unnecessary Contrast: Words and phrases like “whereas,” “as opposed to,” “in contrast to,” “unlike” that set up false dichotomies.

Em-Dash Overuse: Excessive em-dash and en-dash pivots that create that distinctive “punchy sales writing” rhythm.

Negation Pivot: “Not X, but Y” or “not X — Y” constructions that feel formulaic.

“Serves As” Dodge: Phrases like “serves as,” “stands as,” “acts as,” “functions as” that avoid direct language.

Dramatic Fragment: One-to-four-word standalone paragraphs used for false emphasis.
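Not every instant rule is a word list; some are structural. A hypothetical sketch of what an em-dash-overuse check might look like, with a made-up threshold for illustration:

```typescript
// Hypothetical structural check -- counts em/en dashes per 1000 characters.
// The real detector and its threshold may work differently.
function emDashDensity(text: string): number {
  const dashes = (text.match(/[\u2014\u2013]/g) ?? []).length;
  return text.length === 0 ? 0 : (dashes / text.length) * 1000;
}

// Flag the document when dash density crosses the threshold.
function flagsEmDashOveruse(text: string, threshold = 3): boolean {
  return emDashDensity(text) > threshold;
}
```

A density check like this catches the “punchy sales writing” rhythm even when each individual dash looks defensible in isolation.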

The semantic pass catches things like:

  • Triple Construction: Grouping things in threes for false completeness
  • Throat-Clearing Opener: “In this article, we will explore…” style intros
  • Sycophantic Frame: Excessive agreeableness and validation
  • Dead Metaphor: The same metaphor repeated across a document
  • One-Point Dilution: Saying the same thing multiple ways to fill space
  • Fractal Summaries: Repeating the same summary at multiple levels
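The semantic passes are ordinary Anthropic Messages API calls made from the browser. A hedged sketch of how the request could be built; the model name and prompt here are placeholders, not the project's actual values:

```typescript
// Sketch of a browser-to-Anthropic request, assuming the public Messages API.
// Model name and prompt are placeholders for illustration.
function buildSemanticRequest(text: string, model: string, apiKey: string) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": apiKey, // read from localStorage in the real app
        "anthropic-version": "2023-06-01",
        // Anthropic requires this opt-in header for direct browser calls:
        "anthropic-dangerous-direct-browser-access": "true",
      },
      body: JSON.stringify({
        model,
        max_tokens: 1024,
        messages: [
          { role: "user", content: `List LLM-style prose patterns in:\n\n${text}` },
        ],
      }),
    },
  };
}

// Usage: fetch(req.url, req.init).then(r => r.json())
// The fast and deep passes would differ only in model and prompt.
```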

Architecture

This is a frontend-only project. No backend. It uses Vite, React 19, and TypeScript.

The source is organized like this:

  • App.tsx handles the root editor state, undo/redo, popover, and apply-change wiring
  • rules.ts contains all rule definitions
  • detectors/ has the client-side detectors (wordPatterns.ts) and LLM detectors (llmDetectors.ts)
  • components/ includes the Toolbar (branding, API key entry, LLM run button), Sidebar (violation counts by category, eye toggles), and Popover (per-violation details with explanation and apply button)
  • hooks/useHashText.ts syncs text to the URL hash for shareable links
  • utils/buildHighlightedHTML.ts converts text plus violations into highlighted HTML
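As a sketch of what buildHighlightedHTML.ts might do, assuming non-overlapping violations (the real utility likely handles more edge cases):

```typescript
// Hypothetical span shape; the real violation type may carry more fields.
interface Span { start: number; end: number; ruleId: string }

function escapeHtml(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Walk the text left to right, emitting escaped plain runs and wrapping
// each violation in a <mark> tag carrying its rule ID for popover lookup.
function buildHighlightedHTML(text: string, violations: Span[]): string {
  const sorted = [...violations].sort((a, b) => a.start - b.start);
  let html = "";
  let cursor = 0;
  for (const v of sorted) {
    html += escapeHtml(text.slice(cursor, v.start));
    html += `<mark data-rule="${v.ruleId}">${escapeHtml(text.slice(v.start, v.end))}</mark>`;
    cursor = v.end;
  }
  return html + escapeHtml(text.slice(cursor));
}
```

Escaping the text runs matters here: the output is assigned to innerHTML, so any raw angle brackets in the user's text would otherwise be interpreted as markup.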

The editor is a contenteditable div with a custom undo/redo stack. Native browser undo gets destroyed by innerHTML replacement, so the undo history is maintained in refs and intercepted via keydown.
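The approach can be sketched as a plain value stack; in the app the equivalent state lives in React refs and is driven by intercepted keydown events rather than a class:

```typescript
// Toy undo/redo stack mirroring the approach described above.
class UndoStack {
  private past: string[] = [];
  private future: string[] = [];
  constructor(private current: string = "") {}

  value(): string { return this.current; }

  // Record a new editor state, e.g. after a debounced input event.
  commit(next: string): void {
    this.past.push(this.current);
    this.current = next;
    this.future = []; // a new edit invalidates the redo branch
  }

  undo(): string {
    if (this.past.length > 0) {
      this.future.push(this.current);
      this.current = this.past.pop()!;
    }
    return this.current;
  }

  redo(): string {
    if (this.future.length > 0) {
      this.past.push(this.current);
      this.current = this.future.pop()!;
    }
    return this.current;
  }
}
```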

Text is stored in the URL hash, so any analysis is shareable via link. You can paste text, get it analyzed, and send the URL to someone else to see the same highlights.
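A minimal sketch of the hash round trip, assuming plain URI encoding (the real useHashText hook may compress or encode differently):

```typescript
// Hypothetical encoding helpers for shareable links.
function textToHash(text: string): string {
  return "#" + encodeURIComponent(text);
}

function hashToText(hash: string): string {
  return decodeURIComponent(hash.replace(/^#/, ""));
}

// In the app, a hook would write textToHash(text) to location.hash on change
// and read hashToText(location.hash) on load.
```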

Running locally

If you want to run this yourself:

pnpm install
pnpm dev        # starts at localhost:5173
pnpm build      # type-check + production build
pnpm test       # client-side unit tests (199 tests, no API key needed)
pnpm test:llm   # LLM integration tests (requires ANTHROPIC_API_KEY in .env)

What it is not

This is not an AI detector in the traditional sense. It does not try to determine if text was generated by AI. It focuses on whether the text reads like AI-generated prose, regardless of how it was actually written.

This means it will catch humans who have absorbed LLM writing patterns (which is increasingly common) and it will not automatically flag good AI output that was carefully prompted and edited.

The goal is to make the patterns visible so writers can decide for themselves what to keep.


License

Not specified in the repository.