An annotation automation platform by SuperAnnotate: designed for researchers, by researchers
Why did we start SuperAnnotate?
Software summary and future roadmap
1. Before SuperAnnotate
About 1.5 years ago, I was pursuing my PhD at KTH in Sweden, where my research focused on weakly supervised image segmentation. At the time, I had a paper accepted at one of the workshops at CVPR 2018. As a researcher, I spent a lot of time annotating images and correcting those annotations. While attending CVPR, I noticed that many companies sponsoring the event provided services for such annotation tasks. As a result, I saw many opportunities to apply my own research to accelerating pixel-accurate annotation for semantic and panoptic segmentation tasks (check the image below or read my previous article for more details).
2. Why did we start SuperAnnotate?
After the conference, my brother, Tigran, and I decided to drop out of our PhDs and focus fully on starting and building our company. Soon after, we moved to Silicon Valley to join Berkeley’s SkyDeck Startup Accelerator program, which gave us a huge boost in expanding our business.
We decided to create much more than simple annotation tooling: a complete software solution that helps computer vision (CV) engineers start, annotate, train, iterate on, and finish a CV project.
The ability to quickly annotate, train, and iterate is extremely important for finishing a CV project, but sometimes a robust team, project, data, and quality management system becomes even more crucial, especially for large projects.
Therefore, completing a CV project requires much more than what’s offered by open source solutions and other toolsets.
3. Free usage for academia
While at Berkeley, we started a collaboration with UC Berkeley’s AI department and were excited that Prof. Pieter Abbeel and Prof. Trevor Darrell joined our advisory board to help us further refine and develop our product. Coming from an academic background and having leading academic advisors, we deeply understand the importance of a good dataset for the future development of a research area (like ImageNet for DNNs or Cityscapes for semantic segmentation).
Since many of us came from a research background and felt the pain of image annotation, training, and modeling firsthand, we would like to offer our annotation platform for free to all researchers facing the same challenges.
In addition, we are very open to your feedback as we build the features that best serve the needs of computer vision engineers. If you have a request for a ‘nice to have’ feature, please don’t hesitate to submit it directly by emailing me.
If you’re a CV engineer or researcher and would like to learn more, I’d be more than happy to have a 15–30 minute talk to learn about your research project and your annotation needs.
4. Software Summary and Future Roadmap
We now have a fully scalable software solution for all sorts of image annotation tasks (points, polygons, polylines, boxes, ellipses, 3D cuboids, templates for emotion and pose annotation, and pixel-accurate tools for semantic and panoptic segmentation), a robust team, project, quality, and data management system, AI predictions for automatic annotation, and more.
On the roadmap, we currently have three priorities that we will tackle in the upcoming months:
Transfer learning: increasing prediction accuracy by iterative retraining
Active learning: picking the right images to annotate
Video annotation: tracking at both the box level and the pixel-accurate level
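To make the active-learning idea concrete, here is a minimal uncertainty-sampling sketch: given the current model’s softmax outputs, pick the images whose predictions have the highest entropy as the next batch to annotate. This is a common baseline for illustration only; the function name and NumPy setup are my own, not part of SuperAnnotate’s platform.

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` images whose predictions are least confident.

    probs: (n_images, n_classes) softmax outputs from the current model.
    Returns indices of the images most worth annotating next.
    """
    eps = 1e-12
    # Entropy of each prediction: high entropy means an uncertain model.
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Annotate the most uncertain images first.
    return np.argsort(entropy)[::-1][:budget]

# Toy example: three images, two classes.
probs = np.array([
    [0.99, 0.01],  # confident
    [0.55, 0.45],  # uncertain -> should be picked
    [0.90, 0.10],  # fairly confident
])
picked = select_for_annotation(probs, budget=1)
print(picked)  # [1]
```

In practice the loop repeats: train, score the unlabeled pool, annotate the selected images, and retrain, which is how active learning and iterative retraining fit together.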
We have already started collaborations on all of these topics with top researchers from Europe, the US, and China. Please let us know in the survey below which of these features is most crucial for your computer vision project. Our ambition is that, at any time, you’ll find state-of-the-art tools to make your research projects as efficient as possible.
5. Concluding remarks
I will post more detailed explanations of the features described in Section 4 on a monthly basis and will track progress in this Medium channel. Please follow this channel to be the first to get those updates!