
Annotation interfaces

The web app lets you annotate a variety of different formats, including plain text, named entities, categorization, images and A/B comparisons, as well as raw HTML. Prodigy expects annotation tasks to follow a simple JSON-style format. This format is also used to communicate tasks between the REST API and the web application, and is what you get when exporting annotations via the db-out command. Annotated tasks contain an additional "answer" key mapping to "accept", "reject" or "ignore".
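As a minimal sketch of working with that format, here is how exported JSONL tasks can be filtered by their "answer" key. The sample lines below are made up for illustration; they only mimic what a db-out export looks like.

```python
import json

# Two made-up JSONL lines, mimicking annotated tasks from a `db-out` export:
# each task keeps its original content plus an "answer" key.
jsonl_lines = [
    '{"text": "Nintendo Switch", "label": "PRODUCT", "answer": "accept"}',
    '{"text": "Hello world", "label": "PRODUCT", "answer": "reject"}',
]

def load_accepted(lines):
    """Parse JSONL task lines and keep only the accepted annotations."""
    tasks = [json.loads(line) for line in lines]
    return [task for task in tasks if task.get("answer") == "accept"]

accepted = load_accepted(jsonl_lines)
print(len(accepted))  # prints 1
```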

text, html – Annotate plain text or HTML.
ner, ner_manual – Annotate and correct named entities.
spans, spans_manual – Annotate and correct potentially overlapping spans.
pos, pos_manual – Annotate and correct part-of-speech tags.
relations, dep – Annotate syntactic dependencies and semantic relations.
classification – Annotate labelled text or images.
image, image_manual – Create bounding boxes and image segments.
audio, audio_manual – Annotate and correct regions in audio and video files.
choice – Select one or more answers.
review – Review and merge existing annotations by multiple annotators.
diff, compare – Compare two texts with a visual diff.
text_input – Collect free-form text input.
blocks – Combine different interfaces and custom content.
llm_io – Show the LLM prompt and response.

text (binary) – Annotate plain text


JSON task format

{
  "text": "Nintendo Switch"
}
How can I render formatted text?

The text property of an annotation task is always rendered as plain text. To add markup like line breaks or simple formatting, use the html key instead. Prodigy makes you opt into “HTML mode” explicitly so that stray HTML tags (which can influence the model’s predictions) aren’t silently rendered or hidden, for example when you’re working with raw, uncleaned data. If you’re using one of Prodigy’s default recipes with a model in the loop, keep in mind that the text of an annotation task is used to update the model.

ner (binary) – Annotate named entities

The ner interface renders one or more entity spans in a text using their start and end character offsets and an optional label. If you’re creating NER annotation tasks manually and need to render multiple entities per task, make sure to provide them in order.


JSON task format

{
  "text": "Apple updates its analytics service with new metrics",
  "spans": [{"start": 0, "end": 5, "label": "ORG"}]
}
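Since the character offsets are end-exclusive, you can sanity-check a task by slicing the text, as in this small sketch:

```python
task = {
    "text": "Apple updates its analytics service with new metrics",
    "spans": [{"start": 0, "end": 5, "label": "ORG"}],
}

# "start" is inclusive and "end" is exclusive, so text[start:end]
# yields exactly the highlighted surface string.
for span in task["spans"]:
    surface = task["text"][span["start"]:span["end"]]
    print(span["label"], surface)  # prints: ORG Apple
```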

ner_manual (manual) – Annotate and correct named entities

The ner_manual interface allows highlighting spans of text based on token boundaries. Recipes like ner.manual will typically use a model to tokenize the text and add an additional "tokens" property to the annotation task, e.g. using the add_tokens preprocessor. This allows the application to resolve the highlighted spans back to the respective token boundaries. Selected entities will be added to the task’s "spans". Individual tokens can also set "disabled": true to make them unselectable (and make spans including those tokens invalid).


JSON task format

{
  "text": "First look at the new MacBook Pro",
  "spans": [
    {"start": 22, "end": 33, "label": "PRODUCT", "token_start": 5, "token_end": 6}
  ],
  "tokens": [
    {"text": "First", "start": 0, "end": 5, "id": 0},
    {"text": "look", "start": 6, "end": 10, "id": 1},
    {"text": "at", "start": 11, "end": 13, "id": 2},
    {"text": "the", "start": 14, "end": 17, "id": 3},
    {"text": "new", "start": 18, "end": 21, "id": 4},
    {"text": "MacBook", "start": 22, "end": 29, "id": 5},
    {"text": "Pro", "start": 30, "end": 33, "id": 6}
  ]
}
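To illustrate the expected "tokens" format, here is a deliberately naive whitespace tokenizer. It’s only a stand-in for the tokenization a real recipe performs via the add_tokens preprocessor with a spaCy model; it doesn’t handle punctuation or irregular whitespace.

```python
def simple_tokens(text):
    """Produce Prodigy-style token dicts from a whitespace-separated text.
    Illustration only: a real recipe would tokenize with a spaCy model."""
    tokens, offset = [], 0
    for i, word in enumerate(text.split(" ")):
        tokens.append({"text": word, "start": offset, "end": offset + len(word), "id": i})
        offset += len(word) + 1  # skip the single space separator
    return tokens

tokens = simple_tokens("First look at the new MacBook Pro")
print(tokens[5])  # {'text': 'MacBook', 'start': 22, 'end': 29, 'id': 5}
```

Note how the offsets line up with the "tokens" in the example task above.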
Why does Prodigy add ↵ characters for newlines?

A newline only produces a line break and is otherwise invisible, so if your input data contains many newlines, they can be easy to miss during annotation. To your model, however, \n is just a unicode character. If you’re labelling entities and accidentally include newlines in some of them and not others, you’ll end up with inconsistent data and potentially very confusing and bad results. To prevent this, Prodigy shows a ↵ for each newline in the data (in addition to rendering the newline itself). For the same reason, the tab character \t is replaced by ⇥.

As of v1.9, tokens containing only newlines (or only newlines and whitespace) are unselectable by default, so you can’t include them in spans you highlight. To disable this behavior, you can set "allow_newline_highlight": true in your prodigy.json.

Why can't I add HTML formatting to the manual UI?

When you’re highlighting spans in the manual interface, you’re still annotating raw text and are creating spans that map to character offsets within that raw text. If you pass in "<strong>hello</strong>", there’s no clear solution for how this should be handled. How should it be tokenized, and what are you really labelling here? The underlying markup or just the text, and what should the character offsets point to? And how should other markup be handled, e.g. images or complex, nested tags?

Similarly, if you’re planning on training a model later on, that model will also get to see the raw text, including the markup – so if you are working with raw HTML (like web dumps), you almost always want to see the original raw text that the model will be learning from. Otherwise, the model might be seeing data or markup that you didn’t see during annotation, which is problematic.

Available config settings

Settings can be specified in the "config" returned by a recipe, or in your prodigy.json.

hide_newlines (default: False) – New in v1.9. Don’t add real line breaks to tokens and only use ↵ symbols. Previously called hide_true_newline_tokens.
allow_newline_highlight (default: False) – New in v1.9. Allow highlighting tokens that only consist of newlines. If disabled, spans including those tokens become unselectable.
honor_token_whitespace (default: True) – New in v1.10. Reflect whitespace, or the lack of it, following a token in the UI.
ner_manual_highlight_chars (default: False) – New in v1.10. Optimize the UI for single-character selection and "tokens" that describe single characters.
ner_manual_require_click (default: False) – Don’t auto-add the selection as an entity and require an additional button click to convert the selection to an entity.
label_style (default: "list") – New in v1.10.4. Style of the label set: "list" for a list of keyboard-accessible buttons or "dropdown". Previously available as ner_manual_label_style.

spans (binary, new in v1.11) – Annotate potentially overlapping spans

The spans interface renders one or more potentially overlapping spans in a text using the token information, the start and end character offsets and a label. The same span can be included multiple times with different labels. For the interface that lets you add and modify spans, see spans_manual.


JSON task format

{
  "text": "I like baby cats because they're cute",
  "tokens": [
    {"text": "I", "start": 0, "end": 1, "id": 0, "ws": true},
    {"text": "like", "start": 2, "end": 6, "id": 1, "ws": true},
    {"text": "baby", "start": 7, "end": 11, "id": 2, "ws": true},
    {"text": "cats", "start": 12, "end": 16, "id": 3, "ws": true},
    {"text": "because", "start": 17, "end": 24, "id": 4, "ws": true},
    {"text": "they", "start": 25, "end": 29, "id": 5, "ws": false},
    {"text": "'re", "start": 29, "end": 32, "id": 6, "ws": true},
    {"text": "cute", "start": 33, "end": 37, "id": 7, "ws": false}
  ],
  "spans": [
    {"start": 7, "end": 16, "token_start": 2, "token_end": 3, "label": "REF"},
    {"start": 25, "end": 37, "token_start": 5, "token_end": 7, "label": "REASON"},
    {"start": 33, "end": 37, "token_start": 7, "token_end": 7, "label": "ATTR"}
  ]
}
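Because spans may overlap, a single token can carry several labels. A small sketch of collecting the labels per token, using the spans from the example task:

```python
spans = [
    {"start": 7, "end": 16, "token_start": 2, "token_end": 3, "label": "REF"},
    {"start": 25, "end": 37, "token_start": 5, "token_end": 7, "label": "REASON"},
    {"start": 33, "end": 37, "token_start": 7, "token_end": 7, "label": "ATTR"},
]

def labels_per_token(spans):
    """Map each token id to the labels of all spans covering it.
    Note that "token_end" is inclusive, matching the task format."""
    covered = {}
    for span in spans:
        for token_id in range(span["token_start"], span["token_end"] + 1):
            covered.setdefault(token_id, []).append(span["label"])
    return covered

print(labels_per_token(spans)[7])  # ['REASON', 'ATTR']
```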

spans_manual (manual, new in v1.11) – Annotate and correct potentially overlapping spans

The spans_manual interface allows highlighting spans of text based on token boundaries, with multiple potentially overlapping and nested spans. It’s primarily used by the spans.manual recipe, which uses a model to tokenize the text and add an additional "tokens" property to the annotation task, e.g. using the add_tokens preprocessor. This allows the application to resolve the highlighted spans back to the respective token boundaries. Selected spans will be added to the task’s "spans". Individual tokens can also set "disabled": true to make them unselectable (and make spans including those tokens invalid).


JSON task format

{
  "text": "Multivariate analysis revealed that septic shock and bacteremia originating from lower respiratory tract infection were two independent risk factors for 30-day mortality.",
  "tokens": [
    {"text": "Multivariate", "start": 0, "end": 12, "id": 0, "ws": true},
    {"text": "analysis", "start": 13, "end": 21, "id": 1, "ws": true},
    {"text": "revealed", "start": 22, "end": 30, "id": 2, "ws": true},
    {"text": "that", "start": 31, "end": 35, "id": 3, "ws": true},
    {"text": "septic", "start": 36, "end": 42, "id": 4, "ws": true},
    {"text": "shock", "start": 43, "end": 48, "id": 5, "ws": true},
    {"text": "and", "start": 49, "end": 52, "id": 6, "ws": true},
    {"text": "bacteremia", "start": 53, "end": 63, "id": 7, "ws": true},
    {"text": "originating", "start": 64, "end": 75, "id": 8, "ws": true},
    {"text": "from", "start": 76, "end": 80, "id": 9, "ws": true},
    {"text": "lower", "start": 81, "end": 86, "id": 10, "ws": true},
    {"text": "respiratory", "start": 87, "end": 98, "id": 11, "ws": true},
    {"text": "tract", "start": 99, "end": 104, "id": 12, "ws": true},
    {"text": "infection", "start": 105, "end": 114, "id": 13, "ws": true},
    {"text": "were", "start": 115, "end": 119, "id": 14, "ws": true},
    {"text": "two", "start": 120, "end": 123, "id": 15, "ws": true},
    {"text": "independent", "start": 124, "end": 135, "id": 16, "ws": true},
    {"text": "risk", "start": 136, "end": 140, "id": 17, "ws": true},
    {"text": "factors", "start": 141, "end": 148, "id": 18, "ws": true},
    {"text": "for", "start": 149, "end": 152, "id": 19, "ws": true},
    {"text": "30", "start": 153, "end": 155, "id": 20, "ws": false},
    {"text": "-", "start": 155, "end": 156, "id": 21, "ws": false},
    {"text": "day", "start": 156, "end": 159, "id": 22, "ws": true},
    {"text": "mortality", "start": 160, "end": 169, "id": 23, "ws": false},
    {"text": ".", "start": 169, "end": 170, "id": 24, "ws": false}
  ],
  "spans": [
    {"start": 0, "end": 21, "token_start": 0, "token_end": 1, "label": "METHOD"},
    {"start": 36, "end": 48, "token_start": 4, "token_end": 5, "label": "FACTOR"},
    {"start": 36, "end": 48, "token_start": 4, "token_end": 5, "label": "CONDITION"},
    {"start": 53, "end": 114, "token_start": 7, "token_end": 13, "label": "FACTOR"},
    {"start": 53, "end": 63, "token_start": 7, "token_end": 7, "label": "CONDITION"},
    {"start": 81, "end": 114, "token_start": 10, "token_end": 13, "label": "CONDITION"},
    {"start": 153, "end": 169, "token_start": 20, "token_end": 23, "label": "EFFECT"}
  ]
}

pos (binary) – Annotate part-of-speech tags

Under the hood, the POS tagging interface uses the same UI component as the ner interface. It also accepts and produces the same data format. The default theme ships with several customizable highlight colors for part-of-speech tags that make it easier to distinguish them.


JSON task format

{
  "text": "First look at the new MacBook Pro",
  "spans": [{"start": 6, "end": 10, "label": "NOUN"}]
}

pos_manual (manual) – Annotate and correct part-of-speech tags

Under the hood, the manual POS tagging interface uses the same UI component as the ner_manual interface. It also accepts and produces the same data format. The default theme ships with several customizable highlight colors for part-of-speech tags that make it easier to distinguish them.


JSON task format

{
  "text": "First look at the new MacBook Pro",
  "spans": [
    {"text": "First", "start": 0, "end": 5, "token_start": 0, "token_end": 0, "label": "ADJ"},
    {"text": "look", "start": 6, "end": 10, "token_start": 1, "token_end": 1, "label": "NOUN"},
    {"text": "new", "start": 18, "end": 21, "token_start": 4, "token_end": 4, "label": "ADJ"},
    {"text": "MacBook", "start": 22, "end": 29, "token_start": 5, "token_end": 5, "label": "PROPN"},
    {"text": "Pro", "start": 30, "end": 33, "token_start": 6, "token_end": 6, "label": "PROPN"}
  ],
  "tokens": [
    {"text": "First", "start": 0, "end": 5, "id": 0},
    {"text": "look", "start": 6, "end": 10, "id": 1},
    {"text": "at", "start": 11, "end": 13, "id": 2},
    {"text": "the", "start": 14, "end": 17, "id": 3},
    {"text": "new", "start": 18, "end": 21, "id": 4},
    {"text": "MacBook", "start": 22, "end": 29, "id": 5},
    {"text": "Pro", "start": 30, "end": 33, "id": 6}
  ]
}

relations (manual, new in v1.10) – Annotate semantic relations and syntactic dependencies

The relations interface lets you annotate directional labelled relationships between tokens and expressions by clicking on the “head” and selecting the “child”, and optionally also assign spans for joint entity and dependency annotation. Single expressions can be part of multiple overlapping relations, and you can configure the UI to only show arcs on hover and to switch between a single-line tree view and a display with tokens wrapping across lines. If you’re in span annotation mode, clicking on a span selects it so you can delete it by pressing d or the delete key. To add a span, click and drag across the tokens, or hold down shift and click on the start and end token.

To use the interface most efficiently, you typically want to preprocess your text to add spans (e.g. named entities, noun phrases) and disable tokens you know will never be part of a dependency relation you’re interested in (e.g. punctuation). The rel.manual recipe lets you define those using powerful token-based patterns, and the built-in task-specific recipes like coref.manual and dep.correct include suitable predefined configurations.


JSON task format

{
  "text": "My mother’s name is Sasha Smith. She likes dogs and pedigree cats.",
  "tokens": [
    {"text": "My", "start": 0, "end": 2, "id": 0, "ws": true},
    {"text": "mother", "start": 3, "end": 9, "id": 1, "ws": false},
    {"text": "’s", "start": 9, "end": 11, "id": 2, "ws": true},
    {"text": "name", "start": 12, "end": 16, "id": 3, "ws": true},
    {"text": "is", "start": 17, "end": 19, "id": 4, "ws": true},
    {"text": "Sasha", "start": 20, "end": 25, "id": 5, "ws": true},
    {"text": "Smith", "start": 26, "end": 31, "id": 6, "ws": true},
    {"text": ".", "start": 31, "end": 32, "id": 7, "ws": true, "disabled": true},
    {"text": "She", "start": 33, "end": 36, "id": 8, "ws": true},
    {"text": "likes", "start": 37, "end": 42, "id": 9, "ws": true},
    {"text": "dogs", "start": 43, "end": 47, "id": 10, "ws": true},
    {"text": "and", "start": 48, "end": 51, "id": 11, "ws": true, "disabled": true},
    {"text": "pedigree", "start": 52, "end": 60, "id": 12, "ws": true},
    {"text": "cats", "start": 61, "end": 65, "id": 13, "ws": true},
    {"text": ".", "start": 65, "end": 66, "id": 14, "ws": false, "disabled": true}
  ],
  "spans": [
    {"start": 20, "end": 31, "token_start": 5, "token_end": 6, "label": "PERSON"},
    {"start": 43, "end": 47, "token_start": 10, "token_end": 10, "label": "NP"},
    {"start": 52, "end": 65, "token_start": 12, "token_end": 13, "label": "NP"}
  ],
  "relations": [
    {
      "head": 0,
      "child": 1,
      "label": "POSS",
      "head_span": {"start": 0, "end": 2, "token_start": 0, "token_end": 0, "label": null},
      "child_span": {"start": 3, "end": 9, "token_start": 1, "token_end": 1, "label": null}
    },
    {
      "head": 1,
      "child": 8,
      "label": "COREF",
      "head_span": {"start": 3, "end": 9, "token_start": 1, "token_end": 1, "label": null},
      "child_span": {"start": 33, "end": 36, "token_start": 8, "token_end": 8, "label": null}
    },
    {
      "head": 9,
      "child": 13,
      "label": "OBJECT",
      "head_span": {"start": 37, "end": 42, "token_start": 9, "token_end": 9, "label": null},
      "child_span": {"start": 52, "end": 65, "token_start": 12, "token_end": 13, "label": "NP"}
    }
  ]
}

Relationships are defined as dictionaries with a "head" and a "child" token index, indicating the direction of the arrow, the corresponding "head_span" and "child_span" describing the tokens or spans the relation is attached to, as well as a relation "label". If "spans" are present in the data, they will be displayed as a merged entity. If relations are added to spans, they will always refer to the last token as the head and child, respectively. If spans with existing relations are merged or split, Prodigy will always try to resolve and reconcile the indices.
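A minimal sketch of consuming this format: the head and child spans can be resolved back to their surface text via the character offsets. The three-token example below is made up for illustration.

```python
text = "She likes cats"
relation = {
    "head": 1, "child": 2, "label": "OBJECT",
    "head_span": {"start": 4, "end": 9, "token_start": 1, "token_end": 1, "label": None},
    "child_span": {"start": 10, "end": 14, "token_start": 2, "token_end": 2, "label": None},
}

def readable(rel, text):
    """Resolve a relation's head and child spans back to their surface text."""
    head, child = rel["head_span"], rel["child_span"]
    return (text[head["start"]:head["end"]], rel["label"], text[child["start"]:child["end"]])

print(readable(relation, text))  # ('likes', 'OBJECT', 'cats')
```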

The interface lets you toggle the following display settings:

Dependency annotation mode (default): assign labelled relations to tokens and spans. To select a head and its child, click on the tokens in order. To attach a token to itself, double-click on it. To remove a relation, click on the arc or its label.
Span annotation mode: merge, highlight and label spans like entities. Spans are indicated by a dotted border. To select a span, drag across the tokens, drag from the start token to the end token, or hold shift and click the first and last token. To remove a span, click on it to select it and press d or the delete key.
All relations: display all annotated relation arcs. If disabled, the tokens retain their color indicating they’re part of an annotated relation, but the arcs connected to a token are only shown on hover.
All labels: show all labels of “special” tokens, e.g. merged named entities or pattern matches. If disabled, labels are only shown on hover and dotted borders indicate “special” tokens.
Wrap: wrap tokens over multiple lines if needed, instead of displaying them all in one row with a scrollbar. Can be useful for longer texts with few but very long dependencies.
What does it mean if the dragged box in span mode turns red or green?

When you drag across tokens to select them in span highlighting mode, your selection will either turn green or red. This indicates whether the span you’ve selected is valid and can be added. For instance, since a token can only be part of one entity, a span selection is invalid if it overlaps with another span. If you want to create a larger span, remove the previous span first and then add the new span.

Spans also can’t be added if they include disabled tokens, i.e. tokens with "disabled": true. You can take advantage of this to prevent annotation mistakes and preprocess your data to automatically disable certain words and word types that shouldn’t be part of spans or relations – like punctuation or determiners. See the rel.manual recipe for details.

How can I highlight spans over multiple lines?

If you want to highlight a span across a line break, it doesn’t always work to click and drag, because you may be selecting tokens in between that you’re not interested in. In that case, you can either toggle the wrapping at the top and do your span selection without line breaks, or hold down shift and click the start and end token of the span you want to highlight. You can also customize this shortcut.

How can I attach a dependency like the root to itself?

To denote the syntactic root of a sentence, the ROOT dependency is typically attached to itself and has the same head and child token. In the relations interface, you can achieve this by double-clicking or double-tapping on the token you want to attach to itself. It will then be shown as a downward-pointing arrow. You can see an example of this in the dependency parsing docs.

What happens if I remove a span that has a relation attached?

Under the hood, "relations" can refer to individual tokens or spans. If you remove a span that is also the head or child of a relation, this relation will be reattached to the last token of the span. Prodigy will always try its best to reconcile the existing relations – however, it’s recommended to assign and correct your spans first, and then move on to add relations to them.

Available config settings

Settings can be specified in the "config" returned by a recipe, or in your prodigy.json. In addition to these config settings, you can also change the maximum height of the arc annotation space via the custom theme settings.

relations_span_labels (default: None) – Optional list of span labels to assign during annotation. If set, an additional span highlighting mode is added.
wrap_relations (default: False) – Wrap text across multiple lines instead of displaying all tokens in one line with a scrollbar. Used as the default value for the checkbox.
show_relations (default: True) – Show all relations. Used as the default value for the checkbox.
show_relations_labels (default: True) – Show all labels for spans, instead of only on hover. Used as the default value for the checkbox.
hide_relations_arrow (default: False) – Hide the arrow head. Relations will still be added as "head" and "child", but if the distinction and directionality don’t matter to you, you can ignore it and hide the arrows.
label_style (default: "list") – New in v1.10.5. Style of the label set: "list" for a list of keyboard-accessible buttons or "dropdown". Previously available as ner_manual_label_style.

dep (binary) – Annotate syntactic dependencies and semantic relations

Dependencies are specified as a list of "arcs" with a "head" and a "child" token index, indicating the direction of the arrow. Since the arcs refer to token indices, the task also requires a "tokens" property. This is usually taken care of in the recipes using the add_tokens preprocessor.


JSON task format

{
  "text": "First look at the new MacBook Pro",
  "arcs": [{"head": 2, "child": 5, "label": "pobj"}],
  "tokens": [
    {"text": "First", "start": 0, "end": 5, "id": 0},
    {"text": "look", "start": 6, "end": 10, "id": 1},
    {"text": "at", "start": 11, "end": 13, "id": 2},
    {"text": "the", "start": 14, "end": 17, "id": 3},
    {"text": "new", "start": 18, "end": 21, "id": 4},
    {"text": "MacBook", "start": 22, "end": 29, "id": 5},
    {"text": "Pro", "start": 30, "end": 33, "id": 6}
  ]
}

classification (binary) – Annotate labelled text or images

The classification interface lets you render content (text, text with spans, image or HTML) with a prominent label on top. The annotation decision you typically collect with this interface is whether the label applies to the content or not.


JSON task format (text)

{
  "text": "Apple updates its analytics service with new metrics",
  "label": "TECHNOLOGY"
}

JSON task format (text with spans)

{
  "text": "Apple updates its analytics service with new metrics",
  "label": "CORRECT",
  "spans": [{"start": 0, "end": 5, "label": "ORG"}]
}

JSON task format (image)

{
  "image": "https://images.unsplash.com/photo-1536521642388-441263f88a61?w=400",
  "label": "FOOD"
}

image (binary) – Annotate images, bounding boxes and image segments

Images can be provided as file paths, URLs or base64-encoded data URIs – basically, anything that can be rendered as an image. When using local paths, keep in mind that modern browsers typically block images from local paths for security reasons. So you can either host the images with a local web server using the ImageServer loader, place them in an S3 bucket, or use the fetch_media preprocessor to convert all images from local paths and URIs to base64-encoded data URIs.
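As a sketch of what such a conversion involves (an illustrative stand-in, not the actual fetch_media implementation), a local file can be turned into a data URI with the standard library:

```python
import base64
import mimetypes
import pathlib

def to_data_uri(path):
    """Encode a local media file as a base64 data URI.
    Illustrative stand-in for what a preprocessor like fetch_media produces."""
    mime = mimetypes.guess_type(path)[0] or "application/octet-stream"
    encoded = base64.b64encode(pathlib.Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"
```

The resulting string can be used directly as the "image" value of a task.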


JSON task format (image URL)

{
  "image": "https://images.unsplash.com/photo-1554415707-6e8cfc93fe23?w=400"
}

JSON task format (base64 data URI)

{
  "image": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAqdJREFUeNrMl12LTVEcxs85TjPMKBkmeWkaOWgkKa9z4T1ulCtlcDG5UFO4QG58BD4ApcSESe7kQilcaGQMbpzMSJozNElMQ5IhM36P/mq3O3ufvdbZu6z6tTp77bXWs/b/bZ1c7n9si9pKcz3n5cFpTiHi+U5P7SthRhoCVrmexNoaaEpDwFaY5fkFFtYlgJM30q2FNg8B7a7zqn2BTpgHpxBT8BDQ7jKhaLZugGY7+TnIQze8Zvwm/Uf4NTb6ZjomAv52PiYomOojcBHW25hMcR6uwu4EzqUDLIblLgLyoVNokz7YB1NwQqI4+VSCHNBBV4ZB3t/o5QNM/E53Gn7DbbiQZHNrW+xAJcTMrMcJR2FcAuJsHpO85pgp/ASw6U85HzxySMEL6PYG1uyqNxE9gXcJN281R50deHyW50ehwckJA4vK6++FIqVokSFH1aYrYDschJaI9YfgOjy1A8m0X2ESZN7pKAEtmGLcElGrhekSWA07oMM2LSb80j9gAj7BMzPvK0VN3qPkNlqtOAz7Lf7jmiLqPtyCB/A2GFnOAkJiltJdgl0RrwzCcRiIiqh8wo1U40ssMhzxRZQz9oSGrsAxyy3OURBuKs2bqw2wwaTVj+AJx+Bkrc1dBMgRt8WMP7Tc8a/1svlEPXkg3FTqNunOF/EV5GjXQoJyaQrYAMus3Ea1uwGvL6ciQDUeVF4PWRI6w+/miPtiRXcG+AafvcqxbarkMt8umF0W78GUqsVvwB14rt8KMYsUFbIP/F7nLSAkRl/oAFwGldgRE/S4WlzzvmxfYaw78ZUsbtAyVh8Ld9rlpIdn/TFTymaKXCoCAm3A7PuixnsjWQl4acmllnOpwLzPQoBO1m/xHteUqr9kIUCL9iZ4r2J5IJN/zE1ZrPtHgAEAAuvBpEMJ1j8AAAAASUVORK5CYII="
}

Image tasks can also define a list of "spans", describing the boxes to draw on the image. Each span can define a list of "points", consisting of [x, y] coordinate tuples of the bounding box corners. While bounding boxes are usually rectangles, Prodigy also supports polygon shapes. The coordinates should be measured in pixels, relative to the original image. If a "label" is specified, it will be rendered with the bounding box. You can also define an optional "color". If none is set, Prodigy will pick one for you.


JSON task format (with boxes)

{
  "image": "https://images.unsplash.com/photo-1554415707-6e8cfc93fe23?w=400",
  "spans": [{"points": [[155, 15], [305, 15], [305, 160], [155, 160]], "label": "LAPTOP"}]
}
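Since a span’s "points" are just [x, y] pixel coordinates, the enclosing axis-aligned rectangle can be derived from them. A small sketch, using the corner points from the example above:

```python
def bounding_rect(points):
    """Axis-aligned bounding rectangle of a span's [x, y] points, in pixels."""
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    return {"x": min(xs), "y": min(ys),
            "width": max(xs) - min(xs), "height": max(ys) - min(ys)}

rect = bounding_rect([[155, 15], [305, 15], [305, 160], [155, 160]])
print(rect)  # {'x': 155, 'y': 15, 'width': 150, 'height': 145}
```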

image_manual (manual) – Create bounding boxes and image segments

The manual image interface lets you draw bounding boxes and polygon shapes on the image. It’s optimized for fast object annotation, but not necessarily high-precision image masking. You can click and drag or click and release to draw boxes. Polygon shapes can also be closed by double-clicking when adding the last point, similar to closing a shape in Photoshop or Illustrator. Clicking on the label will select a shape so you can change the label, delete or resize it. The interface also supports pre-populating the "spans" to pre-highlight shapes.


JSON task format

{
  "image": "https://images.unsplash.com/photo-1434993568367-36f24aa04d2f?w=400",
  "width": 400,
  "height": 267,
  "spans": [
    {
      "label": "SKATEBOARD",
      "color": "yellow",
      "x": 47.5,
      "y": 171.4,
      "width": 109.1,
      "height": 67.4,
      "points": [[47.5, 171.4], [47.5, 238.8], [156.6, 238.8], [156.6, 171.4]],
      "center": [102.05, 205.1],
      "type": "rect"
    },
    {
      "label": "PERSON",
      "color": "cyan",
      "points": [[256, 39.5], [237, 78.5], [269, 116.5], [286, 67.5]],
      "type": "polygon"
    }
  ]
}

JSON task format (with spans)

{
  "image": "https://images.unsplash.com/photo-1415594445260-63e18261587e?w=1080",
  "width": 1080,
  "height": 720,
  "spans": [
    {"label": "CAR", "points": [[6,6],[296,3],[343,22],[396,133],[433,143],[431,194],[410,199],[432,269],[426,349],[433,471],[406,490],[386,511],[381,618],[360,670],[289,687],[264,665],[107,678],[3,677],[6,6]]},
    {"label": "CAR", "points": [[904,98],[876,140],[854,209],[838,222],[825,313],[825,354],[829,371],[825,458],[832,523],[857,533],[899,532],[912,512],[972,591],[975,666],[995,712],[1074,714],[1076,65],[1007,71],[935,79],[912,87],[904,98]]},
    {"label": "CAR", "points": [[753,33],[818,30],[1000,30],[1037,35],[1044,68],[933,80],[911,90],[879,138],[858,195],[852,213],[836,226],[825,307],[817,310],[814,375],[796,381],[750,375],[744,331],[716,328],[712,253],[705,213],[706,170],[730,158],[735,86],[742,48],[753,33]]},
    {"label": "CAR", "points": [[662,73],[694,66],[738,65],[731,158],[703,173],[708,220],[712,267],[680,267],[676,281],[640,278],[634,217],[634,145],[616,134],[616,127],[640,122],[662,73]]}
  ]
}

The interface lets you toggle the following display settings:

Rect mode: draw a rectangular shape by clicking and dragging. Will store the x, y, width, height, center, points and label for each rect.
Polygon mode: draw a polygon shape by clicking on each point of the shape. To close the shape, double-click or click the first point. Will store the points and label.
Freehand mode (new in v1.10): draw a freehand shape by dragging your cursor. Will store the points and label.
All labels (new in v1.10): display all bounding box labels. If unchecked, labels are only shown on hover.

Available config settings

Settings can be specified in the "config" returned by a recipe, or in your prodigy.json.

darken_image (default: 0) – Darken the image in detection or segmentation mode for better visibility of the colored bounding boxes. For example, 0.25 will darken the image by 25%.
show_bounding_box_center (default: False) – Mark the center of a bounding box (rectangles only).
image_manual_stroke_width (default: 4) – New in v1.10. Stroke width of the bounding box. Also used to calculate the size of the transform anchors.
image_manual_font_size (default: 16) – New in v1.10. Font size of the bounding box label.
image_manual_from_center (default: False) – New in v1.10. Draw and resize rectangles from their center instead of the top left corner.
image_manual_modes (default: ["rect", "polygon", "freehand"]) – New in v1.10. Annotation modes to support in the UI. Can be used to disable existing modes or change the order of the buttons.
image_manual_show_labels (default: True) – New in v1.10. Default value of the toggle used to show and hide bounding box labels.
image_manual_spans_key (default: "spans") – New in v1.10.4. The JSON key used to store the image annotations. Can be customized if the interface is combined with a different UI using "spans", e.g. ner_manual.
image_manual_legacy (default: False) – New in v1.10. Switch back to the previous manual interface used in Prodigy v1.9 and below. Will likely be deprecated in the future.
label_style (default: "list") – New in v1.10.5. Style of the label set: "list" for a list of keyboard-accessible buttons or "dropdown". Previously available as ner_manual_label_style.

audio (binary, new in v1.10) – Annotate data and regions in audio and video files

The audio interface lets you play and label audio and video files, and it can load URLs or base64-encoded data URIs. To load audio files from local paths, you can use the Audio, AudioServer, Video or VideoServer loaders, or the fetch_media preprocessor to convert all audio files from local paths and URIs to base64-encoded data URIs. Using blocks, the audio interface can also be combined with other interfaces, like a text_input for creating audio transcripts, or the choice UI to ask a multiple-choice question about the audio and/or certain audio segments.


JSON task format (audio URL)

{
  "audio": "https://example.com/audio.wav"
}

JSON task format (video URL)

{
  "video": "https://example.com/video.mp4"
}

JSON task format (base64 data URI)

{
  "audio": "data:audio/x-wav;base64,UklGRigUAQBXQVZFZm10IBAAAAABAAIARKwAABCxAgAEABAAZGF0YegTAQD3//f/IQAhAB4AHgD9//3/MwAzABoAGgABAAEAMgAyAAcABwAIAAgAMgAyAP7//v8UABQAKwArAPb/9v8hACEAIAAgAPP/8/8lACUAEAAQAPr/+v8uAC4ABgAGAAQABAAyADIAAQABABQAFAAuAC4A+f/5/x0AHQAhACEA9f/1/ygAKAAWABYA+f/5/ywALAADAAMAAAAAAC0ALQD6//r/DwAPAC4ALgD4//j/HwAfACkAKQD6//r/LAAsABkAGQD7//v/LAAsAAUABQD7//v/KwArAPz//P8HAAcAJAAkAO//7/8SABIAGwAbAOn/6f8YABgACgAKAOz/7P8hACEA/f/9//L/8v8gACAA7P/s//j/+P8aABoA5v/m/wsACwAWABYA6v/q/x4AHgAUABQA9//3/ywALAALAAsAAQABADEAMQADAAMADQANAC4ALgD3//f/EwATACAAIADs/+z/HgAeABUAFQDz//P/KQApAAoACgD+//7/MAAwAAQABAAPAA8AMQAxAPz//P8dAB0AKwArAPb..."
}

Audio tasks can also define a list of "audio_spans" describing the regions to pre-highlight. Each audio span consists of a start and end timestamp (in seconds), as well as an optional "label" and an optional "color", e.g. #ffd700 (without transparency). Clicking on a highlighted region will play only that region and then pause.


JSON task format

{
  "audio": "https://example.com/audio.mp3",
  "audio_spans": [
    {"start": 1.89, "end": 2.83, "label": "SPEAKER_1"},
    {"start": 9.45, "end": 9.94, "label": "SPEAKER_1"},
    {"start": 13.48, "end": 16.19, "label": "SPEAKER_2"},
    {"start": 16.68, "end": 20.62, "label": "SPEAKER_1"},
    {"start": 20.81, "end": 23.78, "label": "SPEAKER_2"}
  ]
}
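Because audio spans are plain start/end timestamps in seconds, downstream statistics are easy to compute. For example, a sketch of totalling the speaking time per label, using the regions from the example above:

```python
audio_spans = [
    {"start": 1.89, "end": 2.83, "label": "SPEAKER_1"},
    {"start": 9.45, "end": 9.94, "label": "SPEAKER_1"},
    {"start": 13.48, "end": 16.19, "label": "SPEAKER_2"},
    {"start": 16.68, "end": 20.62, "label": "SPEAKER_1"},
    {"start": 20.81, "end": 23.78, "label": "SPEAKER_2"},
]

def speaking_time(spans):
    """Sum the duration of all regions, grouped by label (in seconds)."""
    totals = {}
    for span in spans:
        totals[span["label"]] = totals.get(span["label"], 0.0) + (span["end"] - span["start"])
    return totals

totals = speaking_time(audio_spans)
print({label: round(seconds, 2) for label, seconds in totals.items()})
# {'SPEAKER_1': 5.37, 'SPEAKER_2': 5.68}
```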

audio_manual (manual, new in v1.10) – Create and edit regions in audio and video files

The manual audio interface lets you play and label audio and video files, and it can load URLs or base64-encoded data URIs. To load audio files from local paths, you can use the Audio, AudioServer, Video or VideoServer loaders, or the fetch_media preprocessor to convert all audio files from local paths and URIs to base64-encoded data URIs. You can also load in data with existing "audio_spans" to pre-populate the interface. To create a new region with the current label, click and drag. You can also drag and drop existing regions to move them, or delete a region by clicking its × button.


JSON task format

{
"audio": "https://example.com/audio.mp3",
"audio_spans": [
{"start": 1.89, "end": 2.83, "label": "SPEAKER_1"},
{"start": 9.45, "end": 9.94, "label": "SPEAKER_1"},
{"start": 13.48, "end": 16.19, "label": "SPEAKER_2"},
{"start": 16.68, "end": 20.62, "label": "SPEAKER_1"},
{"start": 20.81, "end": 23.78, "label": "SPEAKER_2"}
]
}

JSON task format

{
"video": "https://example.com/video.mp4",
"audio_spans": [
{"start": 1.89, "end": 2.83, "label": "SPEAKER_1"},
{"start": 9.45, "end": 9.94, "label": "SPEAKER_1"},
{"start": 13.48, "end": 16.19, "label": "SPEAKER_2"},
{"start": 16.68, "end": 20.62, "label": "SPEAKER_1"},
{"start": 20.81, "end": 23.78, "label": "SPEAKER_2"}
]
}

By default, videos are scaled to the full width of the annotation card. But you can also provide a "width" and "height" value on the annotation task, which will be used instead if available.

Available config settings

Settings can be specified in the "config" returned by a recipe, or in your prodigy.json.

show_audio_cursor: Show a line indicating the current cursor position. (default: True)
show_audio_cursor_time: Show a timestamp next to the audio cursor line (if enabled). (default: True)
show_audio_minimap: Show a smaller interactive minimap of the whole waveform below. (default: True)
show_audio_timeline: Show a timeline below the waveform. (default: False)
audio_autoplay: Auto-play the audio when a new task loads. (default: False)
audio_loop: Default value of the loop setting (whether to loop the audio). (default: False)
audio_bar_width: Width of a single waveform bar. Set to 0 for a more "traditional" look. (default: 3)
audio_bar_height: Height of the waveform bars. If greater than 1, the bars will be stretched. (default: 2)
audio_bar_gap: Spacing between the waveform bars. (default: 2)
audio_bar_radius: Corner radius of the waveform bars. (default: 2)
audio_max_zoom: Maximum zoom level in pixels per second. (default: 5000)
audio_rate: Audio playback speed. Lower is slower, e.g. 0.5 plays at half speed and 2.0 at twice the speed. (default: 1.0)

Here’s an example of a customized waveform with different settings and custom label colors. The settings can be provided globally in the prodigy.json, or in the "config" returned by a recipe.


prodigy.json (excerpt)

{
"show_audio_cursor_time": false,
"show_audio_minimap": false,
"audio_bar_width": 0,
"audio_bar_height": 4,
"audio_bar_gap": 0,
"audio_bar_radius": 0,
"audio_max_zoom": 100,
"custom_theme": {"labels": {"SPEECH": "#f194b4"}}
}

Advanced customizations New: 1.10.4

As of v1.10.4, Prodigy also exposes the underlying WaveSurfer instance (the widget powering the interactive audio player) via window.wavesurfer. This allows you to customize the playback and implement your own controls using a custom interface, an HTML block and optional custom JavaScript. Here’s an example of two buttons to toggle the playback rate:

HTML block

<button onclick="window.wavesurfer.setPlaybackRate(2)">2x speed</button>
<button onclick="window.wavesurfer.setPlaybackRate(1)">1x speed</button>

choice Select one or more answers (manual)

The choice interface can render text, entities, images or HTML. Each option is structured like an individual annotation task and will receive a unique ID. You can also choose to assign the IDs yourself. A choice task can also contain other task properties – like a text, a label or spans. This content will be rendered above the options.

Note that selecting options via keyboard shortcuts like 1 and 2 only works for nine or fewer options. To use multiple choice instead of single choice, you can set "choice_style": "multiple" in your Prodigy or recipe config. You can also set "choice_auto_accept": true to automatically accept a single-choice option when selecting it, without having to click the accept button.


JSON task format

{
"text": "Pick the odd one out.",
"options": [
{"id": "BANANA", "text": "🍌 banana"},
{"id": "BROCCOLI", "text": "🥦 broccoli"},
{"id": "TOMATO", "text": "🍅 tomato"}
]
}
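Tasks in this format are easy to build programmatically, e.g. when the option labels come from a fixed label set. A minimal sketch, assuming the options arrive as `(id, text)` pairs; `make_choice_task` is a hypothetical helper, not part of Prodigy's API.

```python
def make_choice_task(text, options):
    """Build a choice task; each option needs a unique "id" and display "text"."""
    return {
        "text": text,
        "options": [{"id": option_id, "text": option_text} for option_id, option_text in options],
    }

task = make_choice_task(
    "Pick the odd one out.",
    [("BANANA", "🍌 banana"), ("BROCCOLI", "🥦 broccoli"), ("TOMATO", "🍅 tomato")],
)
```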

When you annotate, an "accept" key is added to the parent task, containing the IDs of the selected options, as well as the "answer". This allows the annotator to also reject or ignore tasks, for example if they contain errors.

JSON task format (annotated)

{
"text": "Pick the odd one out.",
"options": [
{"id": "BANANA", "text": "🍌 banana"},
{"id": "BROCCOLI", "text": "🥦 broccoli"},
{"id": "TOMATO", "text": "🍅 tomato"}
],
"accept": ["BROCCOLI"],
"answer": "accept"
}

An option can also contain a style key with an object of CSS properties in the camel-cased JavaScript format, mapping to their values:

JSON task format (with custom CSS)

{
"options": [
{"id": 1, "text": "Option 1", "style": {"background": "#ff6666"}},
{"id": 2, "text": "Option 2", "style": {"fontWeight": "bold" }}
]
}

Available config settings

Settings can be specified in the "config" returned by a recipe, or in your prodigy.json.

choice_style: Style of the choice interface, "single" or "multiple". (default: "single")
choice_auto_accept: Automatically accept a single-choice option when selecting it, without having to click the accept button. (default: False)

html Annotate HTML content

The HTML interface can render any HTML, including embeds. It’s especially useful for basic formatting and simple markup you can create programmatically. If you’re looking to build complex or interactive custom interfaces, you probably want to use the "html_template" config setting and add custom CSS and JavaScript.


JSON task format

{
"html": "<iframe width='400' height='225' src='https://www.youtube.com/embed/vKmrxnbjlfY'></iframe>"
}

review Review and merge existing annotations by multiple annotators New: 1.8

The review interface currently supports all annotation modes except image_manual and compare. It presents one or more versions of a given task, e.g. annotated in different sessions by different users, displays them alongside the session information and lets the reviewer correct or override the decision. The data requires a "view_id" setting that defines the annotation UI to render the example with. The versions follow the same format as regular annotation tasks. They can specify a list of "sessions", as well as a boolean "default" key marking that version as the one to pre-populate the review UI with (if the interface is a manual interface).


JSON task format (binary annotations)

{
"text": "Hello world",
"label": "POSITIVE",
"view_id": "classification",
"versions": [
{
"text": "Hello world",
"label": "POSITIVE",
"answer": "accept",
"sessions": ["dataset-user1", "dataset-user3", "dataset-user4"]
},
{
"text": "Hello world",
"label": "POSITIVE",
"answer": "reject",
"sessions": ["dataset-user2"]
}
]
}

JSON task format (manual annotations)

{
"text": "Hello Apple",
"tokens": [{"text": "Hello", "start": 0, "end": 5, "id": 0}, {"text": "Apple", "start": 6, "end": 11, "id": 1}],
"answer": "accept",
"view_id": "ner_manual",
"versions": [
{
"text": "Hello Apple",
"tokens": [{"text": "Hello", "start": 0, "end": 5, "id": 0}, {"text": "Apple", "start": 6, "end": 11, "id": 1}],
"spans": [{"start": 6, "end": 11, "label": "ORG", "token_start": 1, "token_end": 1}],
"answer": "accept",
"sessions": ["dataset-user1", "dataset-user3", "dataset-user4"],
"default": true
},
{
"text": "Hello Apple",
"tokens": [{"text": "Hello", "start": 0, "end": 5, "id": 0}, {"text": "Apple", "start": 6, "end": 11, "id": 1}],
"spans": [],
"answer": "accept",
"sessions": ["dataset-user2"],
"default": false
}
]
}
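Versions like these are essentially annotations that are identical apart from the session they came from, merged together. The sketch below shows one way to build them, assuming each example carries Prodigy's "_session_id" key; `make_versions` is a hypothetical helper, not the actual review recipe's implementation.

```python
import json
from collections import defaultdict

def make_versions(examples):
    """Merge annotations that are identical apart from the session name,
    collecting the session names into a "sessions" list."""
    grouped = defaultdict(list)
    for eg in examples:
        # Key on everything except the session, so identical answers merge
        key = json.dumps({k: v for k, v in eg.items() if k != "_session_id"}, sort_keys=True)
        grouped[key].append(eg)
    versions = []
    for group in grouped.values():
        version = {k: v for k, v in group[0].items() if k != "_session_id"}
        version["sessions"] = sorted(eg["_session_id"] for eg in group)
        versions.append(version)
    return versions
```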

diff Compare two texts with a visual diff (binary)

The diff view is especially useful if you’re dealing with very subtle comparisons, e.g. when evaluating spelling correction. The config setting "diff_style" can be set to "chars", "words" or "sentences" to customize how the diff is displayed. If you prefer the suggested changes, you can hit accept, which corresponds to the "accept" or "added" object in the data. In a real-world evaluation setting, you would typically randomize which text gets shown in green or red, and then store that mapping with the task, so you can resolve it later.


JSON task format (v1.10+)

{
"added": "Cyber researchers have linked the vulnerability exploited by the latest ransomware to “WannaCry”. Both versions of malicious software rely on weaknesses discovered by the National Security Agency years ago, Kaspersky said.",
"removed": "Cyber researchers linked the vulnerability exploited by the latest ransomware to 'Wanna Cry'. Both versions of malicious software rely on weaknessses, discovered by the National Security Agency years ago, Kaspersky said"
}

JSON task format

{
"accept": {
"text": "Cyber researchers have linked the vulnerability exploited by the latest ransomware to “WannaCry”. Both versions of malicious software rely on weaknesses discovered by the National Security Agency years ago, Kaspersky said."
},
"reject": {
"text": "Cyber researchers linked the vulnerability exploited by the latest ransomware to 'Wanna Cry'. Both versions of malicious software rely on weaknessses, discovered by the National Security Agency years ago, Kaspersky said"
}
}

Available config settings

Settings can be specified in the "config" returned by a recipe, or in your prodigy.json.

diff_style: Style to use for visual diffs, "words", "chars" or "sentences". (default: "words")

compare Compare two pieces of content (binary)

The compare interface, typically used via the compare recipe, supports various types of content like text, HTML and images. It can receive an optional input containing the original example, plus two examples to collect feedback on. This format is normally used to compare two different outputs, and requires you to feed in data from two different sources. The data is associated via its IDs. To ensure unbiased annotation, Prodigy randomly decides which example to return as the assumed correct answer, e.g. "accept". The mapping lets you resolve those back to the original sources later on (A or B).


JSON task format

{
"id": 1,
"input": {"text": "NLP"},
"accept": {"text": "Natural Language Processing"},
"reject": {"text": "Neuro-Linguistic Programming"},
"mapping": {"A": "accept", "B": "reject"}
}
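The random assignment and "mapping" bookkeeping described above can be sketched as follows. `make_compare_task` is a hypothetical helper, not Prodigy's own implementation of the compare recipe.

```python
import random

def make_compare_task(task_id, input_text, output_a, output_b, rng=None):
    """Randomly assign outputs A and B to "accept"/"reject" and record the
    mapping so an answer can be resolved back to its source later."""
    rng = rng or random
    if rng.random() < 0.5:
        mapping = {"A": "accept", "B": "reject"}
        accept_text, reject_text = output_a, output_b
    else:
        mapping = {"A": "reject", "B": "accept"}
        accept_text, reject_text = output_b, output_a
    return {
        "id": task_id,
        "input": {"text": input_text},
        "accept": {"text": accept_text},
        "reject": {"text": reject_text},
        "mapping": mapping,
    }
```

After annotation, looking up `task[task["mapping"]["A"]]` recovers which answer source A received, regardless of how the coin flip went.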

text_input Collect free-form text input (manual) New: 1.9

The text input interface lets you collect free-form text input. The text is added to the annotated task with a given key. You can also customize the placeholder text, add an optional label to the field and make the input field a text area with multiple rows. The text input interface is best used as a block in the blocks interface, for example to ask the annotator to translate a given text or leave feedback about a manual NER annotation task.


JSON task format

{
"field_id": "user_input",
"field_label": "User input field",
"field_placeholder": "Type here...",
"field_rows": 1,
"field_autofocus": false
}

The "field_id" property lets you define a custom key that the collected text is stored under. You can also pre-populate the field when you stream in the data to set a default value. This can be very useful for tasks that need the annotator to rewrite a text or perform other manual corrections.

JSON task format (annotated)

{
"field_id": "user_input",
"user_input": "This is some text typed by the user",
"answer": "accept"
}
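Pre-populating the field from a stream can be sketched like this, e.g. for a post-editing workflow where a machine-generated draft is stored on each incoming task. The "draft" key and the `add_drafts` helper are hypothetical, chosen for illustration.

```python
def add_drafts(stream, field_id="user_input"):
    """Copy a draft stored on each task into the text input field, so the
    annotator only has to correct it instead of typing from scratch."""
    for eg in stream:
        eg["field_id"] = field_id
        # Pre-populate the input field; the annotator can edit the value
        eg[field_id] = eg.get("draft", "")
        yield eg
```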

As of v1.10, Prodigy lets you provide an optional list of "field_suggestions" that will be used to auto-suggest and auto-complete an answer when the user types a letter or presses the ↓ key.


JSON task format

{
"field_id": "country",
"field_placeholder": "Your country",
"field_suggestions": ["United States", "United Kingdom", "Germany", "France"]
}

The text_input interface allows the following task properties:

field_id: Key that the collected text is stored under. (default: "user_input")
field_label: Label to display above the field. (default: none)
field_placeholder: Placeholder to display if the input is empty. (default: "Type here...")
field_rows: Number of rows to display. If greater than 1, the field becomes a resizable textarea. (default: 1)
field_autofocus: Autofocus the field when a new task loads. (default: false)
field_suggestions (New: 1.10): List of texts to auto-suggest as the user types. (default: none)

blocks Combine different interfaces and custom content New: 1.9

The blocks interface lets you combine different interfaces via the custom recipe configuration. A list of "blocks" tells it which interfaces to use in which order. The incoming data needs to match the format expected by the interfaces used in the blocks. For instance, to render a ner_manual block, the data needs to contain "text" and "tokens". For more details and examples, see the documentation on custom blocks.


Recipe config

{
"labels": ["PERSON", "ORG", "PRODUCT"],
"blocks": [
{"view_id": "ner_manual"},
{"view_id": "text_input"},
{"view_id": "html"}
]
}

Example JSON task

{
"text": "First look at the new MacBook Pro",
"spans": [
{"start": 22, "end": 33, "label": "PRODUCT", "token_start": 5, "token_end": 6}
],
"tokens": [
{"text": "First", "start": 0, "end": 5, "id": 0},
{"text": "look", "start": 6, "end": 10, "id": 1},
{"text": "at", "start": 11, "end": 13, "id": 2},
{"text": "the", "start": 14, "end": 17, "id": 3},
{"text": "new", "start": 18, "end": 21, "id": 4},
{"text": "MacBook", "start": 22, "end": 29, "id": 5},
{"text": "Pro", "start": 30, "end": 33, "id": 6}
],
"field_id": "user_input",
"field_rows": 5,
"html": "<iframe width='100%' height='150' scrolling='no' frameborder='no' src='https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/174136820&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true'></iframe>"
}
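In a custom recipe, this configuration is typically returned as the recipe's components dict. A minimal sketch of that shape is shown below; in a real recipe the function would be decorated with @prodigy.recipe and receive the stream from a loader, and blocks may carry per-block overrides like "field_rows".

```python
def blocks_components(dataset, stream):
    """Sketch of the components dict a custom blocks recipe would return."""
    return {
        "dataset": dataset,    # dataset to save annotations to
        "stream": stream,      # iterable of task dicts like the example above
        "view_id": "blocks",
        "config": {
            "labels": ["PERSON", "ORG", "PRODUCT"],
            "blocks": [
                {"view_id": "ner_manual"},
                {"view_id": "text_input", "field_id": "user_input", "field_rows": 5},
                {"view_id": "html"},
            ],
        },
    }
```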

llm_io Show the prompt and response to/from an LLM New: 1.13.2

This interface lets you render the prompt sent to the LLM as well as the obtained response. This can be useful when debugging, but it may also provide an extra bit of context while annotating with an LLM in the loop.

This interface is typically used as a component of a blocks interface together with an annotation interface that renders the output of the LLM.

Recipe config

{
"blocks": [
{"view_id": "ner_manual"},
{"view_id": "llm_io"}
]
}

Example JSON task for LLM-UI

{
"llm": {
"prompt": "This is the prompt sent to the LLM.",
"response": "This is the response that we got back."
}
}

If you use spacy-llm, you can reuse the prompt/response pair stored on the spaCy Doc object under the user_data attribute.

from dotenv import load_dotenv
from spacy_llm.util import assemble

# Just in case, make sure the environment variables are loaded
load_dotenv()
# Assemble a spaCy pipeline from the config
nlp = assemble("config.cfg")
# Use this pipeline as you would normally
doc = nlp("I know of a great pizza recipe with anchovies.")
# You'll find the LLM prompt/response pair here
print(doc.user_data)

However, be sure that you set the save_io property in your config file. Otherwise the prompt and response won’t be stored.

Start of a spacy-llm config file

[nlp]
lang = "en"
pipeline = ["llm"]
[components]
[components.llm]
factory = "llm"
save_io = true