Computer Vision

Label images in-house for annotation tasks such as object detection, image segmentation and image classification. Use Prodigy’s fully scriptable back-end to build powerful workflows by putting your model in the loop.

Flexible image annotation

If you're running a model for object detection, image segmentation or image classification in production, the examples your model is classifying will change constantly. Prodigy makes it easy to create annotated training data and correct your model's mistakes.

Stream in your images and mark objects using bounding boxes, or draw polygons and freehand shapes in an intuitive, browser-based interface. Your annotations can be exported as a simple JSON file with pixel coordinates, making it easy to integrate the data into the rest of your pipeline.
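For example, here's a minimal sketch of reading the exported annotations back in, assuming the data was exported with one JSON record per line. The file name is hypothetical and the exact keys depend on the interface you used:

import json

# annotations.jsonl is a hypothetical export file; image annotations are
# stored as "spans", each with a label and a list of pixel coordinates
with open("annotations.jsonl", encoding="utf8") as f:
    for line in f:
        task = json.loads(line)
        for span in task.get("spans", []):
            # span["points"] is a list of [x, y] pixel coordinates
            print(task["image"], span["label"], span["points"])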

You're also free to use pre-trained computer vision models to pre-annotate your images and speed up data collection. These can be machine learning models running locally on your machine, or integrations with a third-party API.
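As a rough sketch, pre-annotating the stream can be as simple as a generator that adds suggested boxes to each incoming task. Here, detect_objects stands in for your own model or a third-party API call, and the span format shown is illustrative:

def add_model_suggestions(stream):
    for task in stream:
        spans = []
        # detect_objects is a placeholder for your own model or API call and
        # is assumed to return (label, (x1, y1, x2, y2)) pairs in pixels
        for label, (x1, y1, x2, y2) in detect_objects(task["image"]):
            spans.append({
                "label": label,
                # bounding boxes are described by their corner points
                "points": [[x1, y1], [x2, y1], [x2, y2], [x1, y2]],
            })
        task["spans"] = spans
        yield task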

Read more

Try it live and draw bounding boxes!

Try it live and select the category!

Try it live and select the images!

Try it live and add a caption!

Customize, combine and extend interfaces

Computer vision is more than just drawing boxes. What sets the modern deep learning approach apart is its flexibility: neural networks are modular, so you can quickly put pieces together to solve challenging new tasks. But to make the most of that flexibility, you'll need to label data in unexpected ways. Nobody knows what annotation interfaces you might need one year from now – including you! But with Prodigy, you know you'll be able to build it if you need it.
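For instance, Prodigy's blocks interface lets you stack several interfaces in a single task. A minimal sketch of a recipe that combines manual bounding boxes with a free-text caption field might look like this; the recipe name, label set and image loading are assumptions for illustration:

import prodigy
from prodigy.components.stream import get_stream

@prodigy.recipe("image-with-caption")
def image_with_caption(dataset, image_dir):
    # Stack the manual image interface on top of a free-text input field
    blocks = [
        {"view_id": "image_manual"},
        {"view_id": "text_input", "field_id": "caption", "field_label": "Caption"},
    ]
    return {
        "dataset": dataset,
        "stream": get_stream(image_dir, loader="image"),
        "view_id": "blocks",
        "config": {
            "labels": ["CAR", "PERSON"],  # labels for the image_manual block
            "blocks": blocks,
        },
    }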

Read more

Plug in your own models for data annotation

Custom recipes let you integrate machine learning models using any framework of your choice, load in data from different sources, implement your own storage solution or add other hooks and features. No matter how complex your pipeline is – if you can call it from a Python function, you can use it in Prodigy.

View TensorFlow example
recipe.py
import prodigy
from prodigy.components.stream import get_stream

@prodigy.recipe("custom-image-recipe")
def custom_image_recipe(dataset, image_dir):
    # Stream in images from the directory
    stream = get_stream(image_dir, loader="image")
    # load_your_model is a placeholder for your own model loading logic
    model = load_your_model()
    return {
        "dataset": dataset,          # dataset to save annotations to
        "stream": model(stream),     # stream with the model's suggestions added
        "update": model.update,      # callback to update the model with answers
        "view_id": "image_manual",   # manual image annotation interface
    }
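
You can then start the annotation server with something like prodigy custom-image-recipe my_dataset ./images -F recipe.py, where the -F flag points Prodigy to the file containing your custom recipe.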

Try out new ideas quickly

Prodigy lets you easily annotate, experiment, train and export with just a few simple commands. The emphasis on scriptability means you're not limited to just one approach, and the fact that it's a downloadable tool that's fully under your control means there's no lock-in. You can iterate faster, ship sooner, and own the whole process end-to-end.

View the documentation