Updating quickstart and walkthrough to use new evaluation framework by brimoor · Pull Request #21 · voxel51/fiftyone-examples


Merged · brimoor merged 4 commits into master from eval-updates on Feb 24, 2021

Conversation

@brimoor (Contributor) commented Feb 23, 2021

Updates the quickstart and walkthrough notebooks to use the new evaluation framework introduced in fiftyone==0.7.3.

It is important to get these changes live because the latest release slightly changed some of the field names generated by the evaluation routine, which may confuse readers trying to follow along with the examples.
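
For context, the entry point of the new framework looks roughly like this (a minimal sketch, assuming the zoo quickstart dataset with its `predictions` and `ground_truth` fields; the `eval_key` name is illustrative):

```python
import fiftyone.zoo as foz

# Load the dataset used by the quickstart notebook
dataset = foz.load_zoo_dataset("quickstart")

# The evaluation framework introduced in fiftyone==0.7.3: compare
# predicted detections to ground truth, recording results under
# fields derived from `eval_key`
results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
)

# With eval_key="eval", samples gain fields like `eval_tp`, `eval_fp`,
# and `eval_fn`; these derived names are what changed in the release
print(dataset.bounds("eval_tp"))
```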

@brimoor brimoor added the enhancement New feature or request label Feb 23, 2021
@brimoor brimoor requested a review from a team February 23, 2021 23:40
@brimoor brimoor self-assigned this Feb 23, 2021
@benjaminpkane (Contributor) commented

This is news to me, but detection mistakenness is now `mistakenness_loc`, correct? I think the "Finding label mistakes" section needs to be updated accordingly.

@brimoor (Contributor, Author) commented Feb 24, 2021

Actually there is both `mistakenness` and `mistakenness_loc`. The quickstart only discusses `mistakenness`, but I think the content reads okay as is(?)
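
For reference, the relevant brain call and the kind of view the quickstart builds on it (a sketch assuming the quickstart's field names; the 0.95 threshold is just illustrative):

```python
import fiftyone.brain as fob
from fiftyone import ViewField as F

# Populates `mistakenness` (possible class-label mistakes) and
# `mistakenness_loc` (possible localization mistakes) on the
# ground truth detections
fob.compute_mistakenness(dataset, "predictions", label_field="ground_truth")

# The quickstart's view only keys on `mistakenness`
view = dataset.filter_labels("ground_truth", F("mistakenness") > 0.95)
print(view)
```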

@benjaminpkane (Contributor) commented

The final view yields no results for me at the moment because it does not use `mistakenness_loc`.

@brimoor (Contributor, Author) commented Feb 24, 2021

Oh I see. I didn't re-run the last cell because I thought nothing needed to be changed. Wrong! I'll fix it.

@brimoor brimoor changed the title Updating quickstart to use new evaluation framework Updating quickstart and walkthrough to use new evaluation framework Feb 24, 2021
@brimoor (Contributor, Author) commented Feb 24, 2021

@benjaminpkane Hmm actually I wasn't able to reproduce the empty output that you mentioned. I did re-generate the last output, but I didn't need to change any code to get the same results as before... 🤔

@benjaminpkane (Contributor) left a review comment


Approving, but I still think there is an issue. It is confusing to me that you have `mistakenness` on your detections, but I do not get that when I run it. I have fiftyone-brain v0.3.0 installed.

[Screenshot from 2021-02-23 17-37-01: notebook output with `mistakenness` populated on detections]

My output:
[Screenshot from 2021-02-23 17-38-31: output without `mistakenness` on detections]

@brimoor (Contributor, Author) commented Feb 24, 2021

Oooooh I think I know what the problem is. As of the last prod release, the mistakenness brain method runs evaluation with `eval_key="mistakenness"`, which conflicts with the brain results (both write data under a `mistakenness` key). The last step of detection mistakenness is to clean up the evaluation results, which deletes the field from the dataset.

tl;dr: `mistakenness` is currently broken in production. I discovered this issue in my recent brain work but didn't realize it was also a problem in production.

I ran the notebook using a develop build of the brain, so I didn't hit the issue.
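
To make the collision concrete, a rough reconstruction (the two calls below are public API; treating them as what the brain method effectively does internally is my assumption, not the actual internals):

```python
# 1) The brain method runs evaluation under the key "mistakenness",
#    so the eval routine writes its own fields under that name
dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="mistakenness",
)

# 2) The brain's mistakenness scores are then stored under the same
#    `mistakenness` key on the detections

# 3) The cleanup step deletes the evaluation run, which removes all
#    fields registered under that key -- including the brain results
dataset.delete_evaluation("mistakenness")
```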

@brimoor (Contributor, Author) commented Feb 24, 2021

The bug is unrelated to this PR, so I'll go ahead and merge this one.

@brimoor brimoor merged commit b1c9d65 into master Feb 24, 2021
@brimoor brimoor deleted the eval-updates branch February 24, 2021 01:26