Evidence-first trust explainer

Detection Methodology

How DetectVideo builds an AI-likelihood estimate, what evidence streams actually contribute, and why file quality and module coverage matter before you trust a result.

Principle 1

Evidence before certainty

The product is designed to surface evidence patterns, not to overstate what cannot be known from a single file.

Principle 2

Coverage changes confidence

A result with broad module coverage should be read differently from one built on a narrow or degraded evidence base.

Principle 3

Human review still matters

Source history, context, and provenance work remain essential when the decision matters beyond a quick triage pass.

Signal map

What DetectVideo reviews

No single signal is treated as sufficient on its own. The methodology works by comparing multiple evidence streams and using only those that clear a usable quality floor for the media at hand.

Visual signals

Sampled frames are reviewed for texture instability, repeated detail patterns, edge breakdown, lighting jumps, and other frame-level artifacts that often surface in synthetic or heavily altered media.
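As a concrete illustration, a frame-level probe might sample frames and track how a sharpness statistic swings across them. This is a minimal sketch assuming OpenCV and NumPy are available; the statistic and the texture_instability name are illustrative, not DetectVideo internals.

```python
# Minimal sketch, assuming OpenCV (cv2) and NumPy are installed.
# texture_instability() is a hypothetical name, not a DetectVideo API.
import cv2
import numpy as np

def texture_instability(path: str, step: int = 30) -> float:
    """Spread of per-frame Laplacian sharpness across sampled frames.

    Large swings between samples can hint at the texture instability
    described above; real detectors combine many such statistics.
    """
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
        idx += 1
    cap.release()
    return float(np.std(scores)) if scores else 0.0
```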

Temporal signals

Nearby frames are compared for flicker, motion drift, continuity breaks, landmark instability, and scene behavior that looks coherent in one frame but collapses over time.
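A temporal probe can be sketched the same way: compare consecutive frames and measure how much the image jumps between them. Again a hedged illustration with assumed OpenCV and NumPy dependencies, not the shipped detector.

```python
# Minimal sketch, assuming OpenCV (cv2) and NumPy; flicker_score() is
# an illustrative name, not the product's temporal module.
import cv2
import numpy as np

def flicker_score(path: str, max_pairs: int = 300) -> float:
    """Mean absolute brightness change between consecutive frames.

    Abnormally high or erratic values can hint at the flicker and
    continuity breaks described above.
    """
    cap = cv2.VideoCapture(path)
    prev, diffs = None, []
    while len(diffs) < max_pairs:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diffs.append(float(np.mean(np.abs(gray - prev))))
        prev = gray
    cap.release()
    return float(np.mean(diffs)) if diffs else 0.0
```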

Audio signals

When usable sound is present, the system checks speech-like regions and broader spectral patterns for repetition, abrupt transitions, and voice behavior that does not align cleanly with the video.
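For intuition, a coarse repetition probe over the audio energy envelope might look like the sketch below. It assumes librosa's backend can decode the clip's audio track (or that the track was extracted first); the scoring is purely illustrative.

```python
# Coarse sketch, assuming librosa can decode the clip's audio (or that
# the track was extracted first); the scoring is purely illustrative.
import librosa
import numpy as np

def repetition_hint(path: str) -> float:
    """Strongest off-zero autocorrelation peak of the RMS energy envelope.

    Looped or repeated audio segments tend to produce strong periodic
    peaks; a module that finds no usable audio should report
    "unavailable" rather than a score.
    """
    y, sr = librosa.load(path, sr=16000, mono=True)
    if y.size == 0:
        return 0.0
    rms = librosa.feature.rms(y=y)[0]
    rms = rms - rms.mean()
    ac = np.correlate(rms, rms, mode="full")[rms.size - 1:]
    ac = ac / (ac[0] + 1e-9)  # normalize by the zero-lag value
    return float(ac[1:].max()) if ac.size > 1 else 0.0
```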

Metadata and provenance

Available file metadata, export fingerprints, container-level hints, and provenance clues are reviewed to understand whether the media still carries source context or has been stripped down by reposting.
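Container-level hints can be pulled with standard tooling. The sketch below shells out to the ffprobe CLI (part of FFmpeg) and checks a few fields that reposting pipelines commonly strip; the specific fields chosen are examples, not the product's checklist.

```python
# Sketch using the ffprobe CLI (ships with FFmpeg); the fields checked
# are illustrative examples of hints that reposting commonly strips.
import json
import subprocess

def container_hints(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    tags = info.get("format", {}).get("tags", {})
    return {
        "encoder": tags.get("encoder"),              # export fingerprint
        "creation_time": tags.get("creation_time"),  # often gone after reposting
        "stream_count": len(info.get("streams", [])),
    }
```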

Assembly logic

How the estimate is built

The estimate is not a fixed formula applied the same way to every file. It adapts to the inputs that are actually present and to the modules that produced usable evidence.

Step 1

Inspect the input quality first

The system measures practical constraints such as duration, resolution, visible motion, compression pressure, and whether audio or provenance data is present before deciding which modules are reliable enough to run.
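A hypothetical gate for this step might look like the following; the MediaProfile fields and every threshold below are assumptions for illustration, not the product's actual criteria.

```python
# Hypothetical sketch of the quality-first gate; the MediaProfile fields
# and every threshold below are assumptions, not DetectVideo internals.
from dataclasses import dataclass

@dataclass
class MediaProfile:
    duration_s: float
    height: int          # vertical resolution in pixels
    mean_motion: float   # e.g. average inter-frame difference
    bitrate_kbps: float  # rough proxy for compression pressure
    has_audio: bool
    has_metadata: bool

def usable_modules(p: MediaProfile) -> set[str]:
    modules: set[str] = set()
    if p.height >= 240 and p.bitrate_kbps >= 300:
        modules.add("visual")    # enough detail survives compression
    if p.duration_s >= 2.0 and p.mean_motion > 0.5:
        modules.add("temporal")  # needs enough frames and visible motion
    if p.has_audio:
        modules.add("audio")
    if p.has_metadata:
        modules.add("provenance")
    return modules
```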

Step 2

Run only the modules the file can support

Visual, temporal, face, audio, and provenance modules are not forced to contribute. If a module cannot compute a usable signal from the current media, it is marked unavailable instead of silently guessing.
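The contract can be sketched as modules that return evidence or None, never a fabricated number; the names and error handling here are illustrative.

```python
# Sketch of the "unavailable instead of guessing" contract: every module
# returns a score or None, never a fabricated value. Names are illustrative.
from typing import Callable, Optional

ModuleFn = Callable[[str], Optional[float]]

def run_modules(path: str, modules: dict[str, ModuleFn]) -> dict[str, Optional[float]]:
    results: dict[str, Optional[float]] = {}
    for name, fn in modules.items():
        try:
            results[name] = fn(path)  # a module may itself return None
        except Exception:
            results[name] = None      # marked unavailable, not guessed
    return results
```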

Step 3

Weight evidence by quality and coverage

The AI-likelihood estimate is built from the modules that actually produced evidence. Stronger and cleaner signals matter more than weak, partial, or low-quality outputs.
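A minimal quality-weighted average captures the idea; the shipped formula is presumably richer, so read this as a sketch of the principle rather than the product.

```python
# Sketch of quality-weighted aggregation; the shipped formula is
# presumably richer, so read this as the principle, not the product.
def aggregate(evidence: dict[str, tuple[float, float]]) -> float:
    """evidence maps module name -> (score in [0, 1], quality in [0, 1]).

    Strong, clean signals (high quality) dominate; weak or partial
    outputs contribute proportionally less.
    """
    num = sum(score * quality for score, quality in evidence.values())
    den = sum(quality for _, quality in evidence.values())
    return num / den if den > 0 else 0.0
```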

Step 4

Return the result with context, not false certainty

The output should be read as an evidence-backed estimate with coverage limits. Missing modules, degraded copies, and source uncertainty all reduce how far the conclusion should be taken.
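In code terms, the returned object would carry the estimate together with its coverage context; this shape is an assumption for illustration.

```python
# Assumed result shape for illustration: the estimate never travels
# without its coverage context.
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    ai_likelihood: float    # the weighted estimate from Step 3
    coverage: float         # fraction of modules that produced evidence
    unavailable: list[str]  # modules that could not run on this file
    caveats: list[str] = field(default_factory=list)  # e.g. "heavy re-encode"
```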

Coverage logic

Confidence and module availability

Confidence should be read as a ceiling on how far the result can responsibly go. It is shaped by both the strength of the evidence and the completeness of the modules that were able to compute a result.

Practical rule

A narrow evidence base can still be informative, but it should produce a narrower operational conclusion. Missing audio, weak motion, stripped metadata, or degraded reposts are all reasons to stay more conditional.

Score is not the same thing as confidence

A high estimate can still carry lower confidence if the clip is short, heavily compressed, or missing important modules. Confidence reflects how complete and reliable the evidence base was.
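One way to see the distinction: confidence can be modeled as a ceiling that coverage and quality jointly impose, independent of the score itself. A toy sketch, not the shipped computation:

```python
# Toy model: confidence is capped jointly by coverage and signal quality,
# independent of the score itself. Purely illustrative.
def confidence_ceiling(coverage: float, mean_quality: float) -> float:
    return min(coverage, mean_quality)

# A short, compressed clip where only half the modules ran:
# confidence_ceiling(coverage=0.5, mean_quality=0.9) -> 0.5,
# even if every module that did run returned a strong score.
```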

Unavailable modules narrow the ceiling

Missing audio, weak face visibility, severe re-encoding, or stripped metadata can each remove an entire evidence stream. That should narrow the reading of the result rather than create false certainty.

Source quality changes what is possible

Platform downloads, screen recordings, stitched reposts, and low-light edits often behave differently from original files. The system can still help, but the conclusion should stay more conditional.

Boundaries

What the system does not claim

DetectVideo is designed to support verification work, not replace it. That matters most when stakes, reach, or reputational harm are meaningful.

Not proof of origin

The methodology estimates whether a clip behaves like AI-generated or heavily altered media. It does not certify the original source, authorship, or chain of custody.

Not an intent judgment

A flagged result does not tell you whether the creator disclosed the edit, meant to deceive, or used only a small AI-assisted step inside a broader human workflow.

Not a replacement for human review

Important decisions should still include source validation, contextual review, and provenance checks, especially when the clip has been reposted, trimmed, captioned, or heavily edited.

Interpretation

How to read the result responsibly

Broad evidence plus elevated score

Treat this as a stronger escalation case. The next step is usually source verification, provenance review, or deeper editorial moderation rather than immediate blind trust.

Mixed signals or limited module coverage

Keep the result provisional. Mixed outputs often mean the clip has both suspicious traits and meaningful quality limits, so a cautious interpretation is more honest than a hard call.

Low estimate on degraded media

A low score on a poor-quality repost should not be read as a guarantee of authenticity. Weak evidence can reflect missing information as much as it reflects genuine footage.
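Putting the three readings together, a triage helper might map score and confidence to a next step; the thresholds and wording below are examples, not product guidance.

```python
# Illustrative triage mapping of the three readings above; thresholds
# and wording are examples, not product guidance.
def recommended_action(score: float, confidence: float) -> str:
    if score >= 0.7 and confidence >= 0.7:
        return "escalate: source verification and provenance review"
    if confidence < 0.5:
        return "provisional: treat as a lead, not a finding"
    return "monitor: confirm the copy is not a degraded repost before relying on a low score"
```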

Related paths

Use the methodology as context for the actual analysis workflow

If you are ready to analyze a clip, compare plans, or review product details, move from this trust explainer into the main product flow.