Visual details
We inspect sampled frames for unusual textures, edge patterns, lighting mismatches, scene repetition, and compression behavior that can appear in manipulated or AI-generated media.
The report is built by reviewing multiple signals from the uploaded file and combining them into an AI-likelihood estimate with an associated confidence level.
We compare nearby frames to look for flicker, shape drift, unstable motion, or details that change in ways that do not match normal video continuity.
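A minimal sketch of the neighboring-frame comparison, assuming frames are already decoded into flat lists of grayscale pixel values. The function names are illustrative, not our actual module names, and a real pipeline works on full decoded arrays:

```python
def frame_delta(a, b):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def flicker_scores(frames):
    """Delta for each pair of neighboring frames.

    Sudden spikes in this sequence can indicate flicker or a
    discontinuity that does not match normal video continuity.
    """
    return [frame_delta(frames[i], frames[i + 1])
            for i in range(len(frames) - 1)]
```

A stable clip yields near-zero deltas throughout; an abrupt spike between two otherwise similar frames is the kind of instability this pass flags for closer review.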
When usable audio is present, we review speech rhythm, spectral patterns, and repeated artifacts that can suggest voice cloning or synthetic processing.
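One of the simpler repeated-artifact checks can be sketched as exact-repeat detection over fixed-size audio chunks. Real spectral analysis is far more involved; this naive chunk comparison is a stand-in for illustration only:

```python
def repeated_chunks(samples, chunk):
    """Count exact repeats of fixed-size audio chunks.

    Identical stretches of audio can hint at looping or synthetic
    processing; genuine recordings rarely repeat sample-for-sample.
    """
    seen, repeats = set(), 0
    for i in range(0, len(samples) - chunk + 1, chunk):
        block = tuple(samples[i:i + chunk])
        if block in seen:
            repeats += 1
        seen.add(block)
    return repeats
```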
We read available container metadata, codec information, timing fields, and provenance clues to understand how the file was packaged and whether any origin evidence is present.
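The metadata pass can be sketched as a scan over parsed container fields, assuming the container has already been read into a plain dictionary. The field names below are common examples, not an exhaustive or authoritative list:

```python
def provenance_notes(meta):
    """Collect metadata fields that commonly carry origin clues.

    The fields checked here are illustrative examples; a real pass
    reads codec, timing, and provenance data from the container.
    """
    clues = [f"{field}: {meta[field]}"
             for field in ("encoder", "creation_time", "handler_name")
             if field in meta]
    if not clues:
        clues.append("no origin evidence present")
    return clues
```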
From upload to report, the pipeline breaks one review into a sequence of modules so each signal can be inspected clearly.
You provide a video file for analysis. The system works only from that file and prepares it for processing.
The video is broken into sampled frames, audio data when available, and file-level metadata so each signal type can be reviewed separately.
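The frame-sampling part of this step can be sketched as even-stride index selection. `sample_frames` is a hypothetical helper, and a real pipeline would also account for variable frame rates and scene boundaries:

```python
def sample_frames(total_frames, max_samples):
    """Evenly spaced frame indices so long videos stay cheap to review.

    Short clips are kept whole; longer ones are thinned to a fixed
    budget of frames spread across the full duration.
    """
    if total_frames <= max_samples:
        return list(range(total_frames))
    step = total_frames / max_samples
    return [int(i * step) for i in range(max_samples)]
```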
We check frame quality, scene realism, motion continuity, facial stability, lip-sync alignment, and other patterns that can change when content has been synthesized or heavily manipulated.
The system weighs findings across visual, scene, temporal, audio, and metadata signals instead of relying on a single clue.
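The weighing step can be sketched as a weighted average over whichever categories produced usable evidence. The category names and weights here are illustrative assumptions, not the production values:

```python
def combine_signals(findings, weights):
    """Weighted average of per-category scores in [0, 1].

    Categories with no usable evidence are simply absent from
    `findings` and are left out of the average, so a missing audio
    track does not drag the estimate toward either extreme.
    """
    num = sum(weights[k] * v for k, v in findings.items())
    den = sum(weights[k] for k in findings)
    return num / den if den else None
```

Because the denominator only sums the weights of categories that contributed, no single missing clue dominates the result.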
The final report summarizes the strongest signals, estimates AI-likelihood, and shows how confident the system is based on signal quality and consistency.
Every scan is meant to be readable by non-experts. Instead of a single verdict, the report shows the estimated AI-likelihood, how confident the system is, and which signals influenced the result.
The AI-likelihood score estimates how strongly the observed signals resemble AI-generated or manipulated video. It is not a legal finding or absolute proof on its own.
Confidence reflects how clear and consistent the available signals were. Lower confidence can mean the file is short, noisy, heavily compressed, or lacks enough evidence for a strong conclusion.
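A rough sketch of how such a confidence value could be derived from signal agreement and category coverage. The formula here is an assumption for illustration, not the production logic:

```python
import statistics

def confidence(scores, categories_covered, categories_total):
    """Heuristic confidence in [0, 1].

    Higher when the per-category scores agree (low spread) and when
    more categories produced usable evidence. A single score is
    treated as maximally uncertain spread, since agreement cannot
    be measured from one signal.
    """
    coverage = categories_covered / categories_total
    spread = statistics.pstdev(scores) if len(scores) > 1 else 0.5
    agreement = max(0.0, 1.0 - 2 * spread)
    return round(coverage * agreement, 2)
```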
The report shows which categories contributed to the result so you can review the reasoning instead of relying only on a single score.