10 client-side AI utilities: OCR, image upscaling, background removal, text-to-speech, speech-to-text, sentiment analysis, summarization, face blur, keyword extraction, and QR/barcode decoding. Heavier tools download their model the first time you use them (5–30 seconds) and cache it locally. No API key, no upload, no signup.
AI Tools
- AI OCR (Image to Text)
Extract text from images using Tesseract.js — fully in your browser, no upload.
Open tool →
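If you want to wire up the same flow yourself, here is a minimal sketch using tesseract.js (the v5 `createWorker` API is assumed; the language code and image input are up to you):

```javascript
import { createWorker } from 'tesseract.js';

// `image` can be a File, <img>, canvas, or URL; the English
// traineddata file downloads on first run and is cached after that.
async function ocr(image) {
  const worker = await createWorker('eng');
  const { data } = await worker.recognize(image);
  await worker.terminate();
  return data.text;
}
```

In the browser, `image` can come straight from an `<input type="file">` element.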
- AI Image Upscaler
Upscale images 2× or 4× using a small ESRGAN model running on your device.
Open tool →
- AI Background Remover
Remove image backgrounds with a u2net ONNX model — runs 100% client-side.
Open tool →
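The inference step behind a tool like this can be sketched with onnxruntime-web. The model path is a placeholder and the 320×320 input size is an assumption (a common u2net input shape); preprocessing is left out:

```javascript
import * as ort from 'onnxruntime-web';

// Load the segmentation model once; prefer WebGPU, fall back to WASM (CPU).
const session = await ort.InferenceSession.create('/models/u2net.onnx', {
  executionProviders: ['webgpu', 'wasm'],
});

// `pixels` is a Float32Array of normalized RGB values in NCHW layout.
async function predictMask(pixels) {
  const input = new ort.Tensor('float32', pixels, [1, 3, 320, 320]);
  const outputs = await session.run({ [session.inputNames[0]]: input });
  // The output is a saliency map: threshold it into the image's alpha channel.
  return outputs[session.outputNames[0]];
}
```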
- Text to Speech
Convert text to speech using your browser's built-in Web Speech API voices.
Open tool →
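The built-in API needs no library at all. A minimal sketch (the voice name is a placeholder, and note that `getVoices()` can return an empty list until the browser fires `voiceschanged`):

```javascript
// Speak text with the browser's built-in voices; nothing is downloaded.
function speak(text, voiceName) {
  const utterance = new SpeechSynthesisUtterance(text);
  const voice = speechSynthesis.getVoices().find((v) => v.name === voiceName);
  if (voice) utterance.voice = voice;
  utterance.rate = 1;  // 0.1 to 10
  utterance.pitch = 1; // 0 to 2
  speechSynthesis.speak(utterance);
}

speak('All processing stays on this device.');
```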
- Speech to Text
Live transcribe your microphone audio with the browser's Web Speech API.
Open tool →
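A sketch of the same live-transcription loop with the browser's recognition API (Chromium still exposes the constructor under a `webkit` prefix):

```javascript
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.lang = 'en-US';
recognition.continuous = true;     // keep listening across pauses
recognition.interimResults = true; // stream partial transcripts as you speak

recognition.onresult = (event) => {
  const transcript = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join('');
  console.log(transcript);
};

recognition.start(); // triggers the microphone permission prompt
```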
- Sentiment Analysis
Analyze text sentiment with a DistilBERT model via Transformers.js on-device.
Open tool →
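With Transformers.js, the whole flow is a few lines. This sketch assumes the `@xenova/transformers` package, whose default sentiment checkpoint is a quantized DistilBERT:

```javascript
import { pipeline } from '@xenova/transformers';

// First call downloads the model and caches it in the browser;
// later calls reuse the cache.
const classify = await pipeline('sentiment-analysis');

const [result] = await classify('This runs entirely in my browser!');
// result has the shape { label: 'POSITIVE' | 'NEGATIVE', score: number }
```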
- AI Text Summarizer
Summarize long text with a BART model in your browser; the ~140MB model downloads only when you opt in.
Open tool →
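The summarizer follows the same pipeline pattern. A sketch assuming Transformers.js; the `Xenova/distilbart-cnn-6-6` checkpoint and the generation settings are illustrative choices, not the tool's exact configuration:

```javascript
import { pipeline } from '@xenova/transformers';

// Opting in triggers the large model download on first use only.
const summarize = await pipeline('summarization', 'Xenova/distilbart-cnn-6-6');

const text = 'Paste or load the long text to condense here.';
const [output] = await summarize(text, { max_new_tokens: 100 });
console.log(output.summary_text);
```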
- AI Face Blur
Detect and automatically blur faces in photos using MediaPipe — privacy-preserving.
Open tool →
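A hedged sketch of detect-then-blur with MediaPipe Tasks Vision; the WASM bundle URL and model asset path are placeholders, and the blur radius is an arbitrary choice:

```javascript
import { FaceDetector, FilesetResolver } from '@mediapipe/tasks-vision';

const vision = await FilesetResolver.forVisionTasks(
  'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision/wasm'
);
const detector = await FaceDetector.createFromOptions(vision, {
  baseOptions: { modelAssetPath: '/models/blaze_face_short_range.tflite' },
  runningMode: 'IMAGE',
});

// Draw the photo to a canvas, then re-draw each detected face region
// through a CSS blur filter.
function blurFaces(img, ctx) {
  ctx.drawImage(img, 0, 0);
  for (const { boundingBox: b } of detector.detect(img).detections) {
    ctx.filter = 'blur(12px)';
    ctx.drawImage(
      img,
      b.originX, b.originY, b.width, b.height, // source rect (the face)
      b.originX, b.originY, b.width, b.height  // destination rect (same spot)
    );
  }
  ctx.filter = 'none';
}
```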
- Keyword Extractor
Extract top keywords from any text with a pure-JS RAKE algorithm — no model needed.
Open tool →
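Because RAKE needs no model, it is small enough to show almost in full. A condensed sketch (the stopword list is truncated for brevity; real implementations ship a few hundred entries):

```javascript
const STOPWORDS = new Set([
  'a', 'an', 'the', 'and', 'or', 'of', 'to', 'in', 'is', 'it',
  'for', 'on', 'with', 'as', 'at', 'by', 'this', 'that',
]);

function rakeKeywords(text, topN = 5) {
  // 1. Split into candidate phrases at stopwords and punctuation.
  const words = text.toLowerCase().split(/[^a-z0-9]+/).filter(Boolean);
  const phrases = [];
  let current = [];
  for (const w of words) {
    if (STOPWORDS.has(w)) {
      if (current.length) phrases.push(current);
      current = [];
    } else {
      current.push(w);
    }
  }
  if (current.length) phrases.push(current);

  // 2. Score each word by degree/frequency: words that co-occur in long
  //    phrases outrank words that mostly appear alone.
  const freq = {};
  const degree = {};
  for (const p of phrases) {
    for (const w of p) {
      freq[w] = (freq[w] ?? 0) + 1;
      degree[w] = (degree[w] ?? 0) + p.length - 1;
    }
  }

  // 3. A phrase's score is the sum of its member word scores.
  const scored = phrases.map((p) => [
    p.join(' '),
    p.reduce((s, w) => s + (degree[w] + freq[w]) / freq[w], 0),
  ]);
  scored.sort((a, b) => b[1] - a[1]);
  return [...new Map(scored).keys()].slice(0, topN);
}

rakeKeywords('deep learning models run in the browser and deep learning is fast', 3);
// → ['deep learning models run', 'deep learning', 'browser']
```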
- QR &amp; Barcode Decoder
Decode QR codes and barcodes from images using zxing-js — instant, no upload.
Open tool →
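A sketch of decoding a dropped image file, assuming `@zxing/library`'s `BrowserMultiFormatReader` (which handles QR codes and common 1D barcodes alike):

```javascript
import { BrowserMultiFormatReader } from '@zxing/library';

// Decode a QR code or barcode from an image File and return its text.
async function decodeImage(file) {
  const reader = new BrowserMultiFormatReader();
  const url = URL.createObjectURL(file);
  try {
    const result = await reader.decodeFromImageUrl(url);
    return result.getText();
  } finally {
    URL.revokeObjectURL(url);
  }
}
```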
Why on-device AI?
Cloud AI services typically charge per request, log prompts, and often reserve the right to train on your data. Running the same models in your browser via WebAssembly, WebGPU, and the Web Speech API keeps your text, images, and audio strictly local, and after the first model download the tools work offline.
From OCR and image upscaling to sentiment, summarization, and face blur, every tool here is built for the moment you need AI without the bill, the wait, or the privacy compromise.
Frequently Asked Questions
- Do these AI tools upload my data?
- No. Every AI tool on this page runs entirely in your browser using WebAssembly, WebGPU, or built-in browser APIs. Your text, images, and audio never leave your device.
- Why is the first run slow?
- Heavier AI tools download a model file the first time you use them (typically 1–140MB). After that the model is cached in your browser, so subsequent runs are fast. Lighter tools use built-in browser APIs and need no download.
- Does my device need WebGPU?
- No. Every tool works on CPU via WebAssembly. WebGPU, when available, makes some tools faster — but a CPU fallback is always provided.
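That fallback comes down to a one-time feature check. A sketch using onnxruntime-web's execution-provider list as the example consumer (the provider names follow its docs; other runtimes spell the option differently):

```javascript
// Prefer WebGPU when the browser exposes it; otherwise fall back to WASM (CPU).
const hasWebGPU = typeof navigator !== 'undefined' && 'gpu' in navigator;
const executionProviders = hasWebGPU ? ['webgpu', 'wasm'] : ['wasm'];
```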