AI-assisted, local-first clip selection suite for creators and editors.
ClipSense turns long, messy footage into timeline-ready selects fast.
You ingest footage, run profile-based analysis, review ranked candidates, and export directly to your NLE.
- Local-first workflow: your raw media stays on your machine.
- Profile-driven analysis: purpose-specific scoring and tags.
- Editor-friendly review loop: accept/reject, reorder, export.
- Zero-copy ingest option: analyze host files without re-uploading huge videos.
- NLE-ready outputs: JSON, EDL, and FCPXML.
The analysis profile menu adapts to the project purpose.
| Project Purpose | Available Profiles | Core Tags |
|---|---|---|
| `vlog_editing` | `main_vlog`, `viral_vlog_shorts` | `hook`, `reaction`, `payoff`, `transition`, `filler`, `discard` |
| `live_stream_highlights` | `main_vlog`, `viral_stream_shorts` | `rage_quit`, `loud_reaction`, `laugh_attack`, `chat_interaction`, `filler`, `discard` |
| `movie_trailer_cut` | `trailer_cinematic`, `trailer_dialogue` | Cinematic: `action_peak`, `tension`, `jump_scare`, `visual_spectacle`, `discard`. Dialogue: `emotional_beat`, `one_liner`, `plot_reveal`, `whisper`, `discard` |
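The purpose-to-profile mapping above, expressed as plain data (values are taken from the table; this is not ClipSense's internal representation):

```python
# Which analysis profiles each project purpose exposes (from the table above).
PROFILES_BY_PURPOSE: dict[str, list[str]] = {
    "vlog_editing": ["main_vlog", "viral_vlog_shorts"],
    "live_stream_highlights": ["main_vlog", "viral_stream_shorts"],
    "movie_trailer_cut": ["trailer_cinematic", "trailer_dialogue"],
}
```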
- Create a project and pick a purpose.
- Ingest footage (upload or source path).
- Run analysis with the profile shown for that purpose.
- Review candidates in Candidate Rack and accept/reject.
- Reorder accepted clips in Timeline Dock.
- Export to your editing workflow.
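The review loop above can be sketched as a tiny data model. The field and function names here are illustrative, not ClipSense's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class Candidate:
    """One ranked clip suggestion (illustrative fields, not ClipSense's schema)."""
    clip_id: str
    start_s: float
    end_s: float
    score: float
    tags: list[str] = field(default_factory=list)
    accepted: bool = False


def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Highest-scoring candidates first, as in the Candidate Rack."""
    return sorted(candidates, key=lambda c: c.score, reverse=True)


def accepted_timeline(candidates: list[Candidate]) -> list[Candidate]:
    """Accepted clips in source order, ready for reordering in the Timeline Dock."""
    return sorted((c for c in candidates if c.accepted), key=lambda c: c.start_s)
```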
- Git
- Docker Desktop (macOS/Windows) or Docker Engine + Compose plugin (Linux)
- Optional: Google AI Studio key for Gemini-powered scoring
```shell
git clone https://github.com/ogndgr/ClipSense.git
cd ClipSense
```

macOS/Linux:

```shell
cp apps/api/.env.example apps/api/.env
cp apps/web/.env.example apps/web/.env.local
```

Windows (PowerShell):

```powershell
Copy-Item apps/api/.env.example apps/api/.env
Copy-Item apps/web/.env.example apps/web/.env.local
if (-not $env:HOME) { $env:HOME = $env:USERPROFILE }
$env:PWD = (Get-Location).Path
```

Start the stack and verify:

```shell
docker compose up -d --build
docker compose ps
curl http://localhost:8000/health
```

Expected:

- `docker compose ps` shows both `clipsense-api` and `clipsense-web` as `Up`.
- `curl` returns `{"status":"ok"}`.
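If you script the verification step, a small helper can poll the `/health` endpoint shown above. This is a minimal sketch using only the standard library; the endpoint and its `{"status":"ok"}` response come from the quick start, everything else is illustrative:

```python
import json
import urllib.request


def api_is_healthy(base_url: str, timeout: float = 3.0) -> bool:
    """Return True when GET {base_url}/health answers {"status": "ok"}."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            payload = json.loads(resp.read().decode("utf-8"))
    except (OSError, ValueError):
        # Connection refused, timeout, or non-JSON body all count as unhealthy.
        return False
    return payload.get("status") == "ok"
```

Call it as `api_is_healthy("http://localhost:8000")` once the containers are up.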
- Web: http://localhost:3000
- API: http://localhost:8000
- API Docs: http://localhost:8000/docs
- Docker Desktop or OrbStack both work.
- If your media is on external drives, uncomment the `/Volumes` lines in `docker-compose.yml`.
- If Docker permission is denied, run:

  ```shell
  sudo usermod -aG docker $USER
  newgrp docker
  ```

- Use Docker Desktop with the WSL2 backend enabled.
- If you analyze files outside the repo (for source path mode), make sure those drives/folders are shared in Docker Desktop.
- `json` for internal timeline exchange
- `edl` for broad NLE compatibility
- `fcpxml` for Final Cut Pro workflows
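To illustrate what an `edl` export involves, here is a minimal sketch that renders accepted clips as CMX3600-style EDL events. The clip structure and frame rate are illustrative assumptions, not ClipSense's actual export code or schema:

```python
def seconds_to_timecode(seconds: float, fps: int = 25) -> str:
    """Convert seconds to HH:MM:SS:FF timecode at the given frame rate."""
    total_frames = round(seconds * fps)
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = total_frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"


def clips_to_edl(title: str, clips: list[dict], fps: int = 25) -> str:
    """Render clips as CMX3600-style events, laid back to back on the record side."""
    lines = [f"TITLE: {title}", ""]
    record_s = 0.0
    for i, clip in enumerate(clips, start=1):
        src_in, src_out = clip["start_s"], clip["end_s"]
        rec_in, rec_out = record_s, record_s + (src_out - src_in)
        lines.append(
            f"{i:03d}  AX       V     C        "
            f"{seconds_to_timecode(src_in, fps)} {seconds_to_timecode(src_out, fps)} "
            f"{seconds_to_timecode(rec_in, fps)} {seconds_to_timecode(rec_out, fps)}"
        )
        record_s = rec_out
    return "\n".join(lines)
```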
Key variables in `apps/api/.env`:

| Variable | Default | Description |
|---|---|---|
| `ALLOW_CLOUD_AI` | `true` | Set `false` for local-only heuristic analysis |
| `GOOGLE_API_KEY` | empty | Gemini API key |
| `GEMINI_MODEL` | `gemini-3-pro-preview` | Gemini model name |
| `MAX_UPLOAD_MB` | `20480` | Maximum upload size (MB) |
| `FCPXML_PATH_MAP_FROM` | `/app/data` | Path inside the container |
| `FCPXML_PATH_MAP_TO` | host path | Corresponding path on the host machine |
If you use external drives on macOS, enable the `/Volumes` mapping in `docker-compose.yml`.
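The `FCPXML_PATH_MAP_FROM`/`FCPXML_PATH_MAP_TO` pair rewrites container-side media paths to host-side paths in exported FCPXML. A sketch of what such a rewrite does; the function name and the example host path are illustrative:

```python
def map_container_path(
    path: str,
    map_from: str = "/app/data",        # FCPXML_PATH_MAP_FROM default
    map_to: str = "/Volumes/Media",     # illustrative FCPXML_PATH_MAP_TO value
) -> str:
    """Rewrite a container-side media path to its host-side equivalent."""
    if path.startswith(map_from):
        return map_to + path[len(map_from):]
    return path
```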
API:

```shell
cd apps/api
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
uvicorn app.main:app --reload --port 8000
```

Web:

```shell
cd apps/web
npm install
cp .env.example .env.local
npm run dev
```

- `apps/api`: ingest, analysis jobs, candidate scoring, playback, and exports
- `apps/web`: dashboard and review console
- `memory-bank`: internal schema and architecture notes
- If UI changes do not appear after code updates, rebuild the containers:

  ```shell
  docker compose up -d --build
  ```

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
See LICENSE for the full text.
