Monorepo Starter: Next.js + FastAPI + OpenCV

A detection-first computer vision kit with room to grow.

The starter keeps one clear happy path: upload an image, run detection, inspect boxes and metrics, and keep the same contract when you switch to the built-in segmentation extension, add webcam capture, or move to a heavier model backend.

Explore Vision Console · Optional Webcam Mode · Root commands: `dev`, `check`, `api:types`

Starter Qualities

Architecture

Web app and inference API stay separate but are developed together.

Vision posture

Detection is the default demo, with CPU-first sample logic so the repo stays cloneable and teachable.

Upgrade path

Add webcam and segmentation later without changing the contract boundary.

01

Detection-First Demo

Start with upload, detection boxes, and a review panel instead of trying to explain every CV workflow at once.

02

Inference-First Contract

Keep frontend and backend aligned through one inference contract that survives model changes later.

03

Optional Webcam Mode

Add live camera capture as a frontend extension after upload works well, instead of making it the repo's main story.

Vision Console

Upload once, get detection boxes, and inspect the contract.

This is the main happy path for the template: send one image to the FastAPI service, get object-style detections back, and render a response shape you can keep when you later swap in YOLO, ONNX Runtime, or a hosted model API.
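As a sketch of what consuming that response might look like, here is a minimal client-side filter over an assumed response dict. The field names (`detections`, `label`, `confidence`, `box`) and the sample values are illustrative assumptions, not the starter's actual contract; the real schema lives in `docs/`.

```python
# Sketch: consume an /api/v1/analyze-style response dict.
# Field names and sample values are assumptions -- check docs/ for
# the starter's real contract.

def confident_detections(response: dict, threshold: float = 0.5) -> list:
    """Keep only detections at or above the confidence threshold."""
    return [d for d in response.get("detections", []) if d["confidence"] >= threshold]

sample = {
    "detections": [
        {"label": "person", "confidence": 0.91, "box": [10, 20, 110, 220]},
        {"label": "person", "confidence": 0.32, "box": [300, 40, 330, 90]},
    ],
    "metrics": {"coverage": 0.18},
}

kept = confident_detections(sample)
print([d["label"] for d in kept])
```

Because the filter only depends on the response shape, the same review logic keeps working when the backend model changes.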

Drop in a PNG, JPG, or WebP file to preview it here. After analysis, detection boxes and segmentation polygons will render on top of the image.
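Rendering those boxes on a preview usually means mapping pixel coordinates from the original image onto the displayed size. A small helper like the following (a sketch, assuming `(x1, y1, x2, y2)` pixel-space boxes, which is not confirmed by the starter) covers the scale-and-clamp step:

```python
def scale_box(box, src_size, dst_size):
    """Scale a pixel-space (x1, y1, x2, y2) box from the original image
    size to the on-screen preview size, clamping to the preview bounds.
    The box format is an assumption for illustration."""
    sw, sh = src_size
    dw, dh = dst_size
    x1, y1, x2, y2 = box
    fx, fy = dw / sw, dh / sh
    return (
        max(0, min(dw, round(x1 * fx))),
        max(0, min(dh, round(y1 * fy))),
        max(0, min(dw, round(x2 * fx))),
        max(0, min(dh, round(y2 * fy))),
    )

print(scale_box((100, 100, 300, 200), (1000, 1000), (500, 500)))
```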

Detection-First Contract

http://127.0.0.1:8000/api/v1/analyze

Optional webcam mode

Once upload analysis feels right, reuse the same API contract from a live camera capture flow instead of inventing a second backend path.
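One practical detail when reusing the upload contract from a camera loop is rate limiting: a webcam produces far more frames than you want to send to the analyze endpoint. A simple throttle (a sketch, independent of any camera library; the fake clock below exists only to make the demo deterministic) might look like:

```python
import time

class FrameThrottle:
    """Let at most one frame through per interval. A capture loop calls
    ready() on every frame and only posts to the analyze endpoint when
    it returns True, so live mode reuses the upload contract unchanged."""

    def __init__(self, interval_s: float, clock=time.monotonic):
        self.interval_s = interval_s
        self.clock = clock
        self._last = float("-inf")

    def ready(self) -> bool:
        now = self.clock()
        if now - self._last >= self.interval_s:
            self._last = now
            return True
        return False

# Deterministic demo with a fake clock standing in for real frame times.
ticks = iter([0.0, 0.2, 0.6, 1.1])
throttle = FrameThrottle(0.5, clock=lambda: next(ticks))
print([throttle.ready() for _ in range(4)])
```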

Open webcam mode

Starter Detection

Detection-first sample pipeline that returns object-style boxes and confidence scores.

object boxes · confidence scores · coverage metrics
detection · default · cpu
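The starter doesn't define its coverage metric here, but one plausible reading is "fraction of the image covered by the union of detection boxes." A rasterized mask makes that easy to compute without inclusion-exclusion over overlapping boxes (a sketch under that assumption):

```python
import numpy as np

def box_coverage(boxes, width, height):
    """Fraction of image pixels covered by the union of (x1, y1, x2, y2)
    boxes. Rasterizing into a boolean mask keeps overlapping boxes from
    being double-counted. The metric definition is an assumption."""
    mask = np.zeros((height, width), dtype=bool)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = True
    return float(mask.mean())

print(box_coverage([(0, 0, 5, 5), (0, 0, 5, 5)], 10, 10))  # overlap counted once
```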

Response Shape

Typed detections, segmentations, metrics, and image metadata.
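To make that concrete, here is one way the typed response could be modeled. These dataclasses are a sketch with hypothetical field names; the generated types in `frontend/` and the OpenAPI spec in `docs/` are the actual source of truth.

```python
# Hypothetical model of the analyze response; field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x1, y1, x2, y2) in pixels, assumed format

@dataclass
class Segmentation:
    label: str
    polygon: list  # [(x, y), ...] vertices

@dataclass
class ImageMeta:
    width: int
    height: int
    format: str

@dataclass
class AnalyzeResponse:
    detections: list = field(default_factory=list)
    segmentations: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)
    image: Optional[ImageMeta] = None

resp = AnalyzeResponse(
    detections=[Detection("person", 0.91, (10, 20, 110, 220))],
    metrics={"coverage": 0.18},
    image=ImageMeta(640, 480, "png"),
)
print(resp.detections[0].label, resp.image.width)
```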

Waiting For Detection

Upload a frame and inspect the detection contract.

The response panel is intentionally built around detections first. Once the backend returns boxes, confidence, and metrics, you already have the review surface you need for QA, moderation, or human approval flows.

Why It Feels Like A Kit

The repo is organized for iteration, not just inference.

The point is to help you ship a computer-vision product faster. That means the sample processing is only one layer. The bigger win is having a place for contracts, scripts, UI patterns, and deployable app structure from day one.

frontend/

Next.js app shell, upload workflow, and generated API types.

backend/

FastAPI service, pipeline registry, image validation, and OpenCV starter logic.

docs/

OpenAPI source of truth for contracts between the interface and the inference layer.
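For orientation, an OpenAPI description of the analyze endpoint might look roughly like the fragment below. This is illustrative only; the starter's real spec in `docs/` is authoritative, and the request/response details here are assumptions.

```yaml
# Illustrative fragment only -- see docs/ for the starter's actual spec.
paths:
  /api/v1/analyze:
    post:
      summary: Run the detection pipeline on one uploaded image
      requestBody:
        content:
          multipart/form-data:
            schema:
              type: object
              properties:
                file:
                  type: string
                  format: binary
      responses:
        "200":
          description: Detections, segmentations, metrics, and image metadata
```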

scripts/

Root commands for local dev and verification, mirroring the monorepo kit ergonomics.