https://github.com/jsbroks/coco-annotator
https://github.com/drainingsun/ybat
https://github.com/visipedia/annotation_tools
https://www.simonwenkel.com/2019/07/19/list-of-annotation-tools-for-machine-learning-research.html
Roboflow seems to be a company that works with LabelImg and CVAT for annotating, but it sounds like those tools save as Pascal VOC, and then you have to run a script to turn the VOC XML into a COCO JSON.
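I haven't run this, but the VOC-to-COCO step looks mechanical enough; a rough sketch for plain bounding boxes (the folder, file names, and the 'egg'/'chicken' category ids are just placeholders for my setup, not anything Roboflow/LabelImg actually mandates):

```python
# Sketch: convert a folder of Pascal VOC XML files into one COCO JSON (bboxes only).
import glob
import json
import xml.etree.ElementTree as ET

categories = {"egg": 1, "chicken": 2}
coco = {
    "images": [],
    "annotations": [],
    "categories": [{"id": cid, "name": name} for name, cid in categories.items()],
}

ann_id = 1
for img_id, xml_path in enumerate(sorted(glob.glob("voc_annotations/*.xml")), start=1):
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    coco["images"].append({
        "id": img_id,
        "file_name": root.findtext("filename"),
        "width": int(size.findtext("width")),
        "height": int(size.findtext("height")),
    })
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        xmin, ymin = float(box.findtext("xmin")), float(box.findtext("ymin"))
        xmax, ymax = float(box.findtext("xmax")), float(box.findtext("ymax"))
        w, h = xmax - xmin, ymax - ymin
        coco["annotations"].append({
            "id": ann_id,
            "image_id": img_id,
            "category_id": categories[obj.findtext("name")],
            "bbox": [xmin, ymin, w, h],  # COCO boxes are [x, y, width, height]
            "area": w * h,
            "iscrowd": 0,
        })
        ann_id += 1

with open("coco_annotations.json", "w") as f:
    json.dump(coco, f)
```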
https://towardsdatascience.com/how-to-train-detectron2-on-custom-object-detection-data-be9d1c233e4
Going to try to work around that with Roboflow though, and save directly to COCO somehow. They have "pre-processing" (resize) and "augmentation" (flipping the pictures around every which way to generate more data).
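No idea how Roboflow implements that, but the part that matters is keeping the labels in sync with the flipped image. A toy sketch with PIL and COCO-style [x, y, w, h] boxes, purely illustrative (filename and box values made up):

```python
# Toy horizontal-flip augmentation: mirror the image and mirror the boxes to match.
from PIL import Image, ImageOps

def hflip_with_boxes(image, boxes):
    """Mirror an image left-right and adjust COCO [x, y, w, h] boxes accordingly."""
    flipped = ImageOps.mirror(image)
    flipped_boxes = [[image.width - x - w, y, w, h] for x, y, w, h in boxes]
    return flipped, flipped_boxes

img = Image.open("chicken_001.jpg")                       # hypothetical file
aug_img, aug_boxes = hflip_with_boxes(img, [[40, 60, 120, 90]])
```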
Good video about COCO format: https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch
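For my own reference, the gist of the format from that tutorial (it's the same structure the conversion sketch above builds, plus segmentation polygons): three top-level lists that point at each other by id. Values below are made up.

```python
# Minimal COCO layout: images, categories, and annotations cross-referenced by id.
coco_example = {
    "images": [
        {"id": 1, "file_name": "egg_001.jpg", "width": 640, "height": 480},
    ],
    "categories": [
        {"id": 1, "name": "egg", "supercategory": "none"},
        {"id": 2, "name": "chicken", "supercategory": "none"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,                # -> images[].id
            "category_id": 1,             # -> categories[].id
            "bbox": [100, 120, 50, 40],   # [x, y, width, height]
            "area": 2000,
            "segmentation": [[100, 120, 150, 120, 150, 160, 100, 160]],  # polygon as x,y pairs
            "iscrowd": 0,
        },
    ],
}
```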
This looks good: https://gitlab.com/vgg/via and http://www.robots.ox.ac.uk/~vgg/software/via/
Maybe this: https://github.com/wkentaro/labelme (as suggested here: https://www.dlology.com/blog/how-to-create-custom-coco-data-set-for-instance-segmentation/)
Ok, going with VGG's VIA.
Ok, it turns out version 2 is a lot better than version 3:
via-master/via-2.x.y/src/index.html
Working pretty well. Annotation is a bit confusing.
After watching a YouTube video, I found the trick: create an attribute named something like 'type', add 'egg' and 'chicken' as its options, and set its input type to a dropdown. Then you can label each shape by clicking on it and selecting 'egg' or 'chicken'.
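Since the end goal is still COCO, the VIA export will need converting too. Here's a sketch assuming the flat JSON you get from VIA 2's annotation export (entries keyed by filename plus file size, polygon regions, and the 'type' dropdown attribute); the folder and file names are guesses, and I'm lazily using the bbox area instead of the true polygon area:

```python
# Sketch: VIA 2 annotation export -> COCO JSON, polygon regions only.
# Assumes the 'type' region attribute holds 'egg' or 'chicken'.
import json
import os
from PIL import Image

categories = {"egg": 1, "chicken": 2}
coco = {
    "images": [],
    "annotations": [],
    "categories": [{"id": cid, "name": name} for name, cid in categories.items()],
}

with open("via_export.json") as f:
    via = json.load(f)

ann_id = 1
for img_id, entry in enumerate(via.values(), start=1):
    path = os.path.join("images", entry["filename"])
    width, height = Image.open(path).size  # VIA doesn't store image dimensions
    coco["images"].append({
        "id": img_id,
        "file_name": entry["filename"],
        "width": width,
        "height": height,
    })
    regions = entry["regions"]
    if isinstance(regions, dict):  # older VIA exports keep regions in a dict
        regions = list(regions.values())
    for region in regions:
        shape = region["shape_attributes"]
        xs, ys = shape["all_points_x"], shape["all_points_y"]
        poly = [c for xy in zip(xs, ys) for c in xy]  # flatten to [x1, y1, x2, y2, ...]
        x, y = min(xs), min(ys)
        w, h = max(xs) - x, max(ys) - y
        coco["annotations"].append({
            "id": ann_id,
            "image_id": img_id,
            "category_id": categories[region["region_attributes"]["type"]],
            "segmentation": [poly],
            "bbox": [x, y, w, h],
            "area": w * h,  # bbox area as a stand-in for polygon area
            "iscrowd": 0,
        })
        ann_id += 1

with open("via_coco.json", "w") as f:
    json.dump(coco, f)
```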
Here's a paper on building a Photoshop-style magic selector for human annotators: https://arxiv.org/pdf/1903.10830.pdf
Also found "OpenLabeling" (https://github.com/Cartucho/OpenLabeling), which looks pretty good.