A transparent, science-backed look at the technology behind Face Age: how we detect landmarks, measure skin biomarkers, and estimate biological age — all without your photo ever leaving your device.
Face Age uses Google MediaPipe FaceMesh, a production-grade face geometry pipeline that runs entirely in the browser via WebAssembly and WebGL. On each captured frame, the model localises 468 3D landmarks across the face, from which we derive a core set of 68+ clinically relevant points covering:
These landmarks enable accurate measurement of facial proportions, left-right symmetry, and golden-ratio compliance — all of which correlate with perceived youth and attractiveness.
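To make these measurements concrete, here is a minimal sketch of how symmetry and golden-ratio compliance can be scored from landmark coordinates. The point pairing, midline handling, and scoring conventions below are simplified assumptions for illustration, not our production code:

```typescript
interface Point { x: number; y: number; }

// Left–right symmetry: mirror each left-side landmark across the vertical
// midline and measure its mean distance to the matching right-side landmark.
function symmetryScore(left: Point[], right: Point[], midlineX: number): number {
  let total = 0;
  for (let i = 0; i < left.length; i++) {
    const mirroredX = 2 * midlineX - left[i].x;
    total += Math.hypot(mirroredX - right[i].x, left[i].y - right[i].y);
  }
  return total / left.length; // 0 = perfectly symmetric
}

// Golden-ratio compliance: how far a ratio of two facial lengths
// deviates from φ ≈ 1.618 (0 = exact golden ratio).
function goldenRatioDeviation(a: number, b: number): number {
  const PHI = (1 + Math.sqrt(5)) / 2;
  return Math.abs(Math.max(a, b) / Math.min(a, b) - PHI);
}
```

In practice such raw distances would be normalised by face size so the scores are comparable across photos taken at different distances.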
A single neutral photo can be misleading. Lighting angles, slight head tilts, and momentary muscle tension all introduce noise. Face Age therefore captures three expressions in sequence and averages landmark positions across all three frames:
By averaging landmark coordinates across expressions, we reduce single-frame noise by approximately 30% compared to single-shot systems.
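At its core, the cross-expression averaging is a coordinate-wise mean over the three frames. A minimal sketch (field names follow MediaPipe's landmark convention; the real pipeline also aligns frames before averaging):

```typescript
type Landmark = { x: number; y: number; z: number };

// Average each landmark's position across all captured expression frames.
// frames[f][i] is landmark i in frame f; all frames share the same ordering.
function averageLandmarks(frames: Landmark[][]): Landmark[] {
  const count = frames[0].length;
  return Array.from({ length: count }, (_, i) => {
    let x = 0, y = 0, z = 0;
    for (const frame of frames) {
      x += frame[i].x;
      y += frame[i].y;
      z += frame[i].z;
    }
    return { x: x / frames.length, y: y / frames.length, z: z / frames.length };
  });
}
```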
Every computation — landmark detection, biomarker extraction, age estimation — happens inside your browser using MediaPipe WASM, TensorFlow.js, and native Canvas APIs. Your camera frames are never sent to our servers. We do not store, log, or transmit facial images. This architecture is fundamentally different from competitors such as FaceApp or Youcam, which upload images to cloud servers for processing.
MediaPipe WASM + WebGL runs at 30fps on modern devices. No app install, no plugin, no account needed.
Camera stream is processed locally. Network tab stays empty during analysis — verifiable by anyone with DevTools.
Full landmark detection, biomarker scoring, and age estimate complete in under 2 seconds on a mid-range smartphone.
The age-estimation model was trained and validated on three publicly available, ethically licensed datasets:
UTKFace: 23,000+ in-the-wild face images labelled with age (0–116), gender, and ethnicity. Primary benchmark dataset for our MAE evaluation.
IMDB-WIKI: 500,000+ celebrity face images with verified birth dates scraped from IMDb and Wikipedia. The largest public age-labelled dataset, used for pre-training.
FG-NET: 1,002 longitudinal images of 82 subjects photographed across multiple decades. Ideal for evaluating cross-age consistency.
On a held-out test split of the UTKFace dataset (20% of images, stratified by age decade), our model achieves:
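For reference, the mean absolute error (MAE) we report is simply the average gap, in years, between the model's predicted age and the labelled age across the test set:

```typescript
// Mean absolute error between predicted and ground-truth ages, in years.
function meanAbsoluteError(predicted: number[], actual: number[]): number {
  const total = predicted.reduce((sum, p, i) => sum + Math.abs(p - actual[i]), 0);
  return total / predicted.length;
}
```

An MAE of, say, 4 means the estimate is off by about four years on average; stratifying the held-out split by age decade ensures that number is not dominated by any one age group.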
Beyond landmark geometry, Face Age analyses the skin surface itself. We extract 50+ biomarkers from pixel-level analysis of the facial region within the detected landmark mesh. Key biomarker categories:
Gabor-filter-based detection of high-frequency texture in periorbital, nasolabial, and forehead regions. Wrinkle density correlates strongly with chronological age and UV exposure history.
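A Gabor kernel of the kind used for this sort of wrinkle detection can be generated as below; the kernel is then convolved with the grayscale skin region, and strong responses indicate oriented, wrinkle-like texture. Parameter values here are illustrative, not our tuned settings:

```typescript
// Build a (size × size) Gabor kernel: a Gaussian envelope modulating a
// cosine carrier oriented at angle `theta` with wavelength `lambda`.
function gaborKernel(size: number, sigma: number, theta: number, lambda: number): number[][] {
  const half = Math.floor(size / 2);
  const kernel: number[][] = [];
  for (let y = -half; y <= half; y++) {
    const row: number[] = [];
    for (let x = -half; x <= half; x++) {
      // Rotate coordinates so the carrier wave runs along `theta`.
      const xr = x * Math.cos(theta) + y * Math.sin(theta);
      const yr = -x * Math.sin(theta) + y * Math.cos(theta);
      const gauss = Math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma));
      row.push(gauss * Math.cos((2 * Math.PI * xr) / lambda));
    }
    kernel.push(row);
  }
  return kernel;
}
```

A real wrinkle detector would apply a bank of such kernels at several orientations and scales, since wrinkles in different facial regions run in different directions.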
A Local Binary Pattern (LBP) descriptor measures micro-texture regularity. Younger skin shows more uniform, fine-grained texture; aged skin exhibits coarser, irregular patterns.
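The basic 3×3 LBP operator underlying this descriptor is a textbook construction (our exact variant and neighbourhood radius may differ): each of the eight neighbours is compared to the centre pixel, and the resulting bits form an 8-bit texture code whose histogram summarises the region.

```typescript
// 3×3 Local Binary Pattern: compare the 8 neighbours to the centre pixel,
// clockwise from top-left, and pack the comparisons into one byte (0–255).
function lbpCode(patch: number[][]): number {
  const centre = patch[1][1];
  const neighbours = [
    patch[0][0], patch[0][1], patch[0][2],
    patch[1][2], patch[2][2], patch[2][1],
    patch[2][0], patch[1][0],
  ];
  return neighbours.reduce(
    (code, v, i) => code | ((v >= centre ? 1 : 0) << i),
    0,
  );
}
```

Smooth, uniform skin produces a narrow set of LBP codes across a region, while coarse, irregular texture spreads the codes out, which is the regularity signal described above.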
High-frequency detail analysis detects enlarged pores, a proxy for sebum production, skin elasticity loss, and chronic UV damage in the cheek and nose regions.
Chrominance channel analysis in the CIE Lab colour space detects hyperpigmentation patterns (solar lentigines) and uneven melanin distribution indicative of photoageing.
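The conversion into CIE Lab that this analysis relies on follows the standard sRGB (D65) pipeline: linearise the gamma-encoded channels, map to XYZ, then to Lab. A sketch using the standard constants (our pipeline's exact implementation may differ):

```typescript
// Convert one sRGB pixel (channels 0–255) to CIE Lab, assuming a D65 white point.
function srgbToLab(r: number, g: number, b: number): { L: number; a: number; b: number } {
  // Undo the sRGB gamma curve.
  const lin = (c: number) => {
    const v = c / 255;
    return v <= 0.04045 ? v / 12.92 : Math.pow((v + 0.055) / 1.055, 2.4);
  };
  const [rl, gl, bl] = [lin(r), lin(g), lin(b)];
  // Linear sRGB → CIE XYZ (D65).
  const x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl;
  const y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
  const z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl;
  // XYZ → Lab, normalised by the D65 reference white.
  const f = (t: number) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const [fx, fy, fz] = [f(x / 0.95047), f(y / 1.0), f(z / 1.08883)];
  return { L: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz) };
}
```

Working in Lab separates lightness (L) from the a/b chrominance channels, so hyperpigmented spots show up as chrominance outliers regardless of how brightly the face is lit.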
Specular highlight mapping from light reflection across the face estimates surface hydration. Well-hydrated skin shows brighter, more uniform specular highlights; dehydrated skin appears dull and flat.
Red-channel intensity in cheek and nose areas signals rosacea, erythema, or telangiectasia — all associated with chronic sun exposure and advancing biological age.
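A simple redness measure of this kind scores the red channel's share of total intensity over a skin patch. This is an illustrative reduction; region masks, thresholds, and lighting normalisation are omitted:

```typescript
interface RGB { r: number; g: number; b: number; }

// Mean fraction of total intensity carried by the red channel.
// ≈ 0.333 for neutral grey skin tones; higher values indicate redness.
function rednessIndex(pixels: RGB[]): number {
  let score = 0;
  for (const { r, g, b } of pixels) {
    const total = r + g + b || 1; // avoid division by zero on black pixels
    score += r / total;
  }
  return score / pixels.length;
}
```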
All biomarker scores are normalised to 0–100 scales and combined via a weighted regression model, with weights trained on the UTKFace and IMDB-Wiki corpora to maximise age-prediction accuracy.
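Conceptually, this final stage reduces to min–max normalisation followed by a weighted sum. The sketch below illustrates the shape of the computation; the weights and bias are placeholders, not the values trained on UTKFace and IMDB-Wiki:

```typescript
// Clamp a raw biomarker reading into a 0–100 score using known min/max bounds.
function normalise(raw: number, min: number, max: number): number {
  return Math.max(0, Math.min(100, ((raw - min) / (max - min)) * 100));
}

// Combine normalised biomarker scores into an age estimate via a
// linear model: sum of (score × trained weight) plus a bias term.
function combineBiomarkers(scores: number[], weights: number[], bias: number): number {
  return scores.reduce((age, s, i) => age + s * weights[i], bias);
}
```

A linear combination keeps the model interpretable: each biomarker's contribution to the final estimate can be read directly off its weight.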
We believe in radical transparency about what our model does and does not do well. Known limitations:
Facial AI systems, including ours, show higher error rates on darker skin tones. This is a well-documented challenge in the field (Buolamwini & Gebru, 2018). Causes include underrepresentation of darker skin tones in training datasets and the lower contrast between skin texture features and skin surface colour in certain lighting conditions. We are actively working to improve training data diversity to close this gap.
The following factors meaningfully reduce prediction accuracy:
Take a free face age test — 100% on-device, no upload, instant results
Start Free Analysis