HACKER Q&A
📣 Abderrahman54

CVAT users, how do you QA labels?


Hi HN, AJ here.

CVAT users: how do you do QA? I'm especially interested in label consistency and regression debugging.

We forked CVAT into CVAT-DATAUP to improve that loop (submit/review/accept, dataset/class insights, and starting on eval + visual error analysis).

Looking for a few early CVAT users to work closely with and iterate on feedback.

Repo: https://github.com/dataup-io/cvat-dataup


  👤 Abderrahman54 Accepted Answer ✓
A bit more detail for anyone curious: CVAT-DATAUP is a CVAT-compatible fork. Today it adds workflow signals (submit/review/accept) + dataset/class distribution views. Next thing we’re building is “eval close to the dataset”: compare runs, slice failures, click from metric to the exact images/labels.

If you’re using CVAT in production, I’d love to learn:

- how you enforce label consistency across annotators
- what your QA sampling strategy looks like
- how you debug regressions (what do you look at first?)
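On the consistency question: one common starting signal is chance-corrected agreement between annotators on a shared audit set, e.g. Cohen's kappa. A minimal sketch of that computation (the label data and function name here are hypothetical, not part of CVAT or CVAT-DATAUP):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' class labels."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators match
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled independently by frequency
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two annotators labeling the same 8 images
a = ["car", "car", "person", "car", "person", "car", "person", "car"]
b = ["car", "car", "person", "person", "person", "car", "car", "car"]
print(round(cohens_kappa(a, b), 3))  # prints 0.467
```

Values near 1 mean strong agreement beyond chance; near 0 means agreement is no better than random labeling, which is usually the trigger to re-check the labeling guidelines.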

If you want to try it, get in touch :)