FrameSentinel analyzes videos through five parallel detection modules and calculates an authenticity score of 0-100% (reported as a 0.0-1.0 fraction in the API response). The system automatically categorizes each result into a risk level to guide your decision-making.
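The per-video flow can be sketched as follows. This is a minimal illustration, not the actual FrameSentinel implementation: the detector callables and the simple averaging used to combine their outputs are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(video_path, detectors):
    """Run all detection modules in parallel and combine their
    per-module authenticity estimates (each 0.0-1.0) into one score.

    Averaging is an illustrative assumption; the real aggregation
    formula is not documented here.
    """
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        scores = list(pool.map(lambda d: d(video_path), detectors))
    return sum(scores) / len(scores)
```

Each detector would be a callable taking the video path and returning its own 0.0-1.0 estimate.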
Risk levels:

- Video is authentic with high confidence. All detection modules passed. Safe to proceed with verification.
- Minor anomalies detected. A human analyst must review the result before a final decision is made.
- Multiple fraud indicators detected. High probability of manipulation. Rejection is recommended unless strong evidence suggests otherwise.
- Strong fraud signals detected across multiple modules. The video is likely fake or manipulated. Reject the verification immediately.
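The score-to-risk mapping can be sketched as below. The threshold values, and every level name except REJECTED (which appears in the example response), are hypothetical assumptions.

```python
def risk_level(score: float) -> str:
    """Map a 0.0-1.0 authenticity score to a risk level.

    Thresholds and level names other than "REJECTED" are
    illustrative assumptions, not documented cut-offs.
    """
    if score >= 0.90:
        return "APPROVED"       # authentic with high confidence
    if score >= 0.70:
        return "NEEDS_REVIEW"   # minor anomalies, human review
    if score >= 0.50:
        return "SUSPICIOUS"     # multiple fraud indicators
    return "REJECTED"           # strong fraud signals
```

Under these assumed thresholds, the 0.42 score in the example response falls into REJECTED.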
Deepfake Detection
Identifies AI-generated or synthetically modified faces using deep learning models
Replay Attack Detection
Detects videos recorded from screens or pre-recorded content being played back
Injection Detection
Identifies videos injected into the camera stream via virtual cameras or software
Face Swap Detection
Detects face replacement techniques and morphing attacks
Metadata Integrity
Analyzes video file metadata for signs of tampering or manipulation
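Each module contributes one boolean to the detection_flags object in the response. A small helper to list which modules fired (the key names are taken from the example response below):

```python
def flagged_modules(detection_flags: dict) -> list[str]:
    """Return the keys of all detection flags that are set to true."""
    return [name for name, hit in detection_flags.items() if hit]
```

Dictionaries preserve insertion order, so the returned list follows the order of the flags in the response.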
{
  "session_id": "sess_abc123",
  "state": "COMPLETED",
  "authenticity_score": 0.42,      // 42% - REJECTED
  "risk_level": "REJECTED",
  "detection_flags": {
    "deepfake_detected": true,     // AI-generated face detected
    "replay_detected": false,
    "injection_detected": true,    // Virtual camera detected
    "face_swap_detected": false,
    "metadata_anomaly": true       // File metadata tampered
  },
  "frame_timeline": [
    {
      "frame_number": 5,
      "timestamp": 0.167,
      "flags": ["deepfake", "injection"],
      "confidence": 0.89
    }
  ],
  "processed_at": "2024-01-15T10:30:45Z"
}
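Consuming such a result can be sketched as follows. Only the field names ("state", "risk_level", "detection_flags") come from the response above; the decision rules and returned action strings are illustrative assumptions.

```python
def next_step(result: dict) -> str:
    """Decide a follow-up action from a FrameSentinel result dict.

    Field names match the example response; the returned action
    strings ("wait", "reject", "review", "approve") are assumptions.
    """
    if result.get("state") != "COMPLETED":
        return "wait"                      # analysis still running
    if result["risk_level"] == "REJECTED":
        return "reject"                    # strong fraud signals
    if any(result["detection_flags"].values()):
        return "review"                    # anomalies need an analyst
    return "approve"                       # all modules passed
```

For the example response above (state COMPLETED, risk_level REJECTED) this returns "reject".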