Fixed Install Issues by fogx · Pull Request #58 · Cartucho/OpenLabeling · GitHub

Fixed Install Issues #58


Merged
merged 13 commits into from
Aug 4, 2019

Conversation

@fogx (Contributor) commented Jun 1, 2019

I created a fork with fixes to the issues explained in Issue #57

fogx added 5 commits June 1, 2019 11:18
…itmodules to show working version of the DaSiamCode
Added Pytorch installation as well as the Download of the submodule and the model from google drive.
@fogx (Contributor, Author) commented Jun 1, 2019

Changes:

  • Removed all .cuda() calls
  • Changed the torch.load() call to work on CPU-only machines
  • Updated the README
  • (temp) Changed the submodule to use my forked version, which contains the same code but without .cuda() calls
  • Changed the Google Drive download link in main.py to point to the correct file

After these changes I was able to run the DaSiam tracker on my CPU-only machine.
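
For reference, the CPU-only loading change roughly follows this pattern; this is a minimal sketch, assuming a generic network object and weight path rather than the exact DaSiamRPN class and file names:

import torch

# Minimal sketch of the CPU-only loading pattern described above.
# `net` and `weights_path` are placeholders; the real code uses the
# DaSiamRPN network definition and its SiamRPNVOT.model weight file.
def load_weights_cpu(net, weights_path):
    # map_location='cpu' lets a checkpoint that was saved on a GPU
    # machine be loaded on a machine without CUDA.
    state_dict = torch.load(weights_path, map_location='cpu')
    net.load_state_dict(state_dict)
    net.eval()  # no .cuda() call, so the model stays on the CPU
    return net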

@Cartucho (Owner) commented Jun 1, 2019

With your changes, can we still use the DaSiam tracker on GPU? I was thinking we could add a --gpu argument to main.py that would use the GPU version if CUDA is detected and the CPU version if not.

What do you think?

@Cartucho force-pushed the master branch 2 times, most recently from 17b41d9 to e4a02ac on June 1, 2019 15:52
@fogx (Contributor, Author) commented Jun 1, 2019

No, I think it won't run on GPU anymore, to be honest. We should definitely add a switch like this:
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
and replace every .cuda() call with .to(device)
(from here: pytorch/pytorch#1668)
One issue is that the DaSiam tracker lives in a submodule, so I couldn't change that part and had to fork it as well. Do you have a preferred way of handling that?
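
As a concrete illustration of that switch, here is a minimal sketch with a placeholder module and input (not the actual DaSiamRPN code):

import torch
import torch.nn as nn

# Pick the first CUDA device if one is available, otherwise fall back to CPU.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Instead of calling .cuda() unconditionally, move modules and tensors
# with .to(device), which works on both GPU and CPU-only machines.
net = nn.Linear(4, 2).to(device)       # placeholder module
x = torch.randn(1, 4, device=device)   # placeholder input
y = net(x)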

@Cartucho (Owner) commented Jun 2, 2019

We could have two submodules:

  1. main/DaSiamRPN from the original repo
  2. main/DaSiamRPN_no_CUDA from your fork

What do you think? Then we could test torch.cuda.is_available() and import the functions from your fork if no CUDA is available.
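
A rough sketch of that conditional import, assuming the two submodule paths proposed above; the exact import paths are illustrative (in practice the submodule directory may need to be added to sys.path first), though SiamRPN_init and SiamRPN_track are the entry points defined in the DaSiamRPN code:

import torch

# Choose the tracker implementation based on CUDA availability.
# The no-CUDA fork keeps the same interface as the original repo.
if torch.cuda.is_available():
    from DaSiamRPN.code.run_SiamRPN import SiamRPN_init, SiamRPN_track
else:
    from DaSiamRPN_no_CUDA.code.run_SiamRPN import SiamRPN_init, SiamRPN_track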

@fogx (Contributor, Author) commented Jun 3, 2019

I will take care of this as soon as I have some time.

@Cartucho (Owner) commented Jun 8, 2019

Perfect! Thank you.

fogx added 5 commits June 24, 2019 15:12
Fixed readme to show new commands
added check to dasiamwrapper to target cpu if necessary
@fogx (Contributor, Author) commented Jun 24, 2019

I implemented the change, but now the DaSiam tracker is going all over the place. I'm not sure if it's because of my changes, though, since I can't really test it on a GPU right now.
As far as I know it shouldn't have any impact on tracking performance, only on runtime?
Can you test this?

I also updated the README a bit, since the current one is outdated.

@Cartucho (Owner) commented:

Great PR! I will test it! Sorry for the super late response!

@WillJStone (Contributor) commented:

Can I also recommend changing the README to instruct the user to use the install shell script in the DaSiamRPN submodule for installing its dependencies? This makes sure you get the correct version of PyTorch. It may still work with a different version, but that's not guaranteed.

@Cartucho (Owner) commented:

@WillJStone yes, I agree!

@Cartucho (Owner) commented:

The rest is working fine on the CPU! I need to test on GPU (:

Should the GPU part work the same way as the official version? I haven't checked your fork yet.

@fogx (Contributor, Author) commented Jul 29, 2019

I fixed the issues you pointed out and uploaded the model to the no_cuda repository.
When I run the OpenLabeling tool now I get this error: MESA-LOADER: failed to open r600 (search paths /usr/lib/dri), and KCF and MOSSE only run for 3 frames and then stop working. GOTURN will not work at all.
Do you also get these errors? I've tried it on two different setups.

@fogx (Contributor, Author) commented Jul 29, 2019

The rest is working fine on the CPU! I need to test on GPU (:

Should the GPU part work the same way as the official version? I haven't checked your fork yet.

I added this code: device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
so everything should work fine with a GPU as well (and it will use the first CUDA device).

@fogx (Contributor, Author) commented Jul 29, 2019

Can I also recommend changing the README to instruct the user to use the install shell script in the DaSiamRPN submodule for installing its dependencies? This makes sure you get the correct version of PyTorch. It may still work with a different version, but that's not guaranteed.

Can you give me the text to insert? Then I'll do that. I'm not sure what the submodule prompt says.
Otherwise I would just say "Install the version as prompted by DaSiamRPN, or use this link to generate your required install command/file".

@Cartucho (Owner) commented Aug 4, 2019

I fixed the issues you pointed out and uploaded the model to the no_cuda repository.
When I run the OpenLabeling tool now I get this error: MESA-LOADER: failed to open r600 (search paths /usr/lib/dri), and KCF and MOSSE only run for 3 frames and then stop working. GOTURN will not work at all.
Do you also get these errors? I've tried it on two different setups.

I do not get that error

@Cartucho merged commit e2b54ba into Cartucho:master on Aug 4, 2019
@Cartucho (Owner) commented Aug 4, 2019

I merged, but there is still one issue:
The file SiamRPNVOT.model should be inside the folder fogx/DaSiamRPN_noCUDA/code.

I fixed the other errors.

@fogx (Contributor, Author) commented Aug 7, 2019

Should I still move the model, since you closed the PR?

@Cartucho (Owner) commented Aug 7, 2019

Yes please, I assumed you would do that ASAP (:

@fogx (Contributor, Author) commented Aug 7, 2019

OK, I changed it. I think you need to update the submodule though (git submodule sync & git submodule update).

@Cartucho (Owner) commented Aug 7, 2019

Done! Should be working fine now (:
