Selfie Reframe is a browser-only, “blind-selfie” exercise that hides your live preview, shows a mirrored face-and-eye tracker for framing, then lets you review and save only the takes you love. Two links deliver two distinct builds:
| Build | Capture speed | File size | Use case |
| --- | --- | --- | --- |
| HD (default) | Fast 1080p; minimal memory | ≈1–3 MB PNG | Mindfulness drills and quick journaling shots |
| Max-Res | Requests your camera’s full native resolution | Can greatly exceed 10 MB depending on camera specs | When you truly need the sharpest source for later edits |
Both versions save pure PNGs—no filters, no compression—so you can crop, grade, or retouch afterwards.
1 How to Use
Idle mode
- White canvas + mirrored green face box + two blue eye squares for intuitive framing.
- Tap anywhere to snap. No other controls clutter the ritual.
Frozen mode
- Captured photo appears; tracker disappears.
- Save (download) or New Photo (return to idle); a minimal sketch of this idle/frozen flow follows the list.
- Only the PNG you choose is stored—misses are never written to disk, so there’s nothing to “delete,” avoiding the implicit rejection loop linked to higher stress and lower self-esteem.
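Here is that sketch, assuming the page pairs a hidden <video> element (the camera stream) with the visible white <canvas>; the element lookups and names are illustrative, not the app’s actual source:

```js
const canvas = document.querySelector("canvas"); // white canvas the user taps
const videoEl = document.querySelector("video"); // hidden camera stream, never shown

let mode = "idle";      // "idle": tracker overlay; "frozen": captured photo on screen
let frozenFrame = null; // held only in memory until the user presses Save

canvas.addEventListener("click", async () => {
  if (mode !== "idle") return;
  // Tap anywhere: grab the current camera frame without ever showing a live preview.
  frozenFrame = await createImageBitmap(videoEl);
  mode = "frozen";
});

function newPhoto() {
  // A rejected take is simply dropped from memory; it is never written to disk.
  frozenFrame = null;
  mode = "idle";
}
```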
2 Why Hide the Live Preview?
Constant self-view on video calls drives “Zoom dysmorphia”—heightened appearance vigilance, body dissatisfaction, and cosmetic intent (PMC, digitalcommons.dartmouth.edu). When you remove that feedback loop, you shrink the data stream your brain must monitor, freeing attention to posture, breath, and micro-expression—the same outward-in focus a photographer gives you (Medium).
3 Mindfulness Mechanisms at Work
3.1 Mirror-Exposure without the Mirror
Repeated sessions that direct non-judgmental attention to one’s image (rather than live critique) reduce body-image distress across clinical and non-clinical samples (PMC, ScienceDirect).
3.2 Interoceptive Focus
Blank-screen framing nudges users to feel their body rather than inspect it visually. Interoceptive attention is linked to better emotion-regulation strategy use (PMC, Frontiers) and lower anxiety when trained (The New Yorker).
3.3 Self-Compassion Instead of Self-Critique
Choosing only the takes you like acts as a micro self-compassion practice; higher trait/state self-compassion predicts blunted cortisol after social-evaluation stress (PubMed, Tandf Online) and healthier diurnal profiles (SpringerLink). Brief self-compassion inductions also speed cortisol recovery (ScienceDirect).
3.4 Skip-Deletion vs. Rejection
Because bad takes never save, there’s no “delete-and-reject” ritual that reinforces negative self-talk—a pattern tied to lower mood when heavy selfie interaction is followed by post-hoc curation (Child Mind Institute, ScienceDirect).
4 Why Two Resolutions?
- HD build loads quickest, ideal for daily mindfulness reps or slow-network contexts.
- Max-Res build leverages getCapabilities().width.max / height.max to push the sensor to its hardware ceiling; handy for prints or professional retouch workflows (a sketch follows this list).
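A hedged sketch of that approach; the function name is illustrative, and the actual build may pass resolution constraints directly to getUserMedia() instead of applying them afterwards:

```js
// Open the front camera, then ask the video track for its hardware maximum.
async function openMaxResCamera() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "user" },
  });
  const [track] = stream.getVideoTracks();
  // getCapabilities() is not available in every browser, so guard the call.
  const caps =
    typeof track.getCapabilities === "function" ? track.getCapabilities() : null;
  if (caps?.width?.max && caps?.height?.max) {
    // Request the sensor ceiling; the browser may still settle on a lower mode.
    await track.applyConstraints({
      width: { ideal: caps.width.max },
      height: { ideal: caps.height.max },
    });
  }
  return stream;
}
```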
5 Technical Snapshot
| Feature | Implementation |
| --- | --- |
| Face + eye tracker | MediaPipe Tasks Face Detector (WASM) |
| Mirror overlay | x = dx + (1 - normX - normW) * dw (see sketch below) |
| Tap-anywhere snap | Canvas click listener; no visible button |
| High-res export | Off-screen <canvas> renders the frozen ImageBitmap, guaranteeing an overlay-free PNG |
| Privacy | All processing runs client-side; no image leaves the device |
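The mirror-overlay expression in the table flips a normalized detection box horizontally before drawing it, so the framing behaves like a mirror even though no preview is shown. A small sketch of that mapping, assuming [0, 1] normalized box coordinates and a destination rectangle (dx, dy, dw, dh); the vertical mapping and helper names are assumptions for illustration:

```js
// Map a normalized box {x, y, w, h} onto the canvas, mirrored left-right.
function mirroredOverlayRect(norm, dx, dy, dw, dh) {
  return {
    x: dx + (1 - norm.x - norm.w) * dw, // the expression from the table above
    y: dy + norm.y * dh,                // assumed: y is not mirrored
    w: norm.w * dw,
    h: norm.h * dh,
  };
}

// Example: draw the green face box for one detection.
function drawFaceBox(ctx, norm, dest) {
  const r = mirroredOverlayRect(norm, dest.dx, dest.dy, dest.dw, dest.dh);
  ctx.strokeStyle = "limegreen";
  ctx.lineWidth = 2;
  ctx.strokeRect(r.x, r.y, r.w, r.h);
}
```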
6 Take-Home
Selfie Reframe replaces self-monitoring with self-exploration. By masking the preview and letting you keep only the shots that spark joy, it:
- Short-circuits the Zoom-dysmorphia feedback loop (PMC),
- Leverages mirror-exposure gains without judgment (ScienceDirect),
- Trains interoceptive awareness for calmer emotion regulation (PMC), and
- Embeds a micro self-compassion ritual shown to lower stress-hormone reactivity (PubMed).
Run the HD link for swift, low-latency sessions; switch to Max-Res when ultimate detail matters. Either way, every tap is a mindful, low-pressure rehearsal in feeling rather than fixing your image—a small but potent step toward healthier, more confident self-portraiture.
Your privacy, in one sentence
Selfie Reframe processes video 100% locally: no camera frames or photos ever leave your device’s browser unless you press “Save,” which simply triggers a normal file download in the browser.
Why that’s true
- getUserMedia() only gives JavaScript a local MediaStream. The W3C spec and MDN docs make clear that the API “prompts the user for permission to use a media input which produces a MediaStream” and that the stream lives solely in the page’s context unless you pipe it into WebRTC or a fetch/upload call, which Selfie Reframe never does. (MDN Web Docs, W3C)
- No network calls touch the stream. Inspect the source: the video element is marked playsinline, and the only operations performed are drawImage() onto an off-screen canvas and toBlob() when you choose to download. There is no fetch, WebSocket, or WebRTC peer in sight, so nothing is transmitted. (You can confirm this in DevTools → Network tab.) (MDN Web Docs)
- Saving is a local download, not an upload. The saveImage() function writes the snapshot to a temporary <canvas>, turns it into a Blob, then feeds that Blob into a temporary <a download> link, which is standard practice for client-side downloads described in MDN’s canvas documentation (a sketch follows this list). (MDN Web Docs)
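A minimal sketch of that save path, assuming the frozen frame is kept as an ImageBitmap; the function and variable names here are illustrative, not Selfie Reframe’s actual source:

```js
// Encode the frozen frame as a PNG and trigger a plain browser download.
function downloadFrozenFrame(frameBitmap) {
  const canvas = document.createElement("canvas"); // off-screen, overlay-free
  canvas.width = frameBitmap.width;
  canvas.height = frameBitmap.height;
  canvas.getContext("2d").drawImage(frameBitmap, 0, 0);

  canvas.toBlob((blob) => {
    if (!blob) return;
    const url = URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.href = url;                          // blob: URL, local to the page
    a.download = `selfie-${Date.now()}.png`;
    a.click();                             // normal file download, no network request
    URL.revokeObjectURL(url);
  }, "image/png");
}
```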
Because the page runs over HTTPS and never opens a remote connection for the stream or image data, granting camera access is functionally the same as using your phone’s built-in camera app—only you control when (or whether) a photo file leaves your device.
Privacy & open-source note
Selfie Reframe relies on Google’s MediaPipe Tasks Vision JavaScript bundle, which is compiled to WebAssembly and runs entirely in your browser. No video frames or photos are ever transmitted to Google — or anywhere else — because the library exposes only in-page inference APIs; it has no built-in network calls. (Medium, GitHub)
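For reference, a sketch of the kind of in-page setup the Tasks Vision bundle supports, following its published JavaScript API; the CDN path, version, and model filename here are assumptions rather than Selfie Reframe’s exact code:

```js
import { FilesetResolver, FaceDetector } from "@mediapipe/tasks-vision";

// Load the WASM runtime once (a static asset fetch; no camera frames are uploaded).
async function createDetector() {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
  );
  return FaceDetector.createFromOptions(vision, {
    baseOptions: { modelAssetPath: "blaze_face_short_range.tflite" }, // hosted locally
    runningMode: "VIDEO",
  });
}

// Per-frame inference stays inside the page; the result is a plain JS object
// with a bounding box (face) and keypoints (eyes, nose, ...) per detection.
function detectFaces(detector, videoEl) {
  return detector.detectForVideo(videoEl, performance.now()).detections;
}
```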
MediaPipe Tasks Vision is published on npm/CDN under the permissive Apache 2.0 licence, free for commercial and non-commercial use with proper attribution. (npmjs.com, Google AI for Developers) The same licence covers the small face-detection model (blaze_face_short_range.tflite) you host locally. (mvnrepository.com, GitHub)
In short, granting the camera permission only lets Selfie Reframe’s in-browser ML model draw the green/blue tracking boxes; nothing leaves your device unless you press “Save,” which simply downloads a PNG straight to your phone or computer.
Bottom line:
A white-screen, client-side camera with a minimalist tracker is technically lightweight yet scientifically grounded; it swaps self-objectification for self-exploration and turns a “humiliating selfie moment” into a 30-second humility skill.
Disclaimer: as of publication, tested only on Windows and Android.