Fixed Install Issues #58
Conversation
…itmodules to show working version of the DaSiamCode
Added PyTorch installation, as well as the download of the submodule and the model from Google Drive.
Changes:
After these changes I was able to run the DaSiam Tracker on my CPU-only machine.
With your changes, can we still use the DaSiam Tracker on GPU? I was thinking that we could add an argument in the What do you think?
Force-pushed from 17b41d9 to e4a02ac
No, I think it won't run on GPU anymore, to be honest. We should definitely add a switch argument like this:
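The switch being discussed could look something like the following sketch. The `--cpu` flag name and the `select_device` helper are assumptions for illustration (the thread never settles on exact names), and the `torch` import is made optional so the sketch runs even without PyTorch installed:

```python
import argparse


def select_device(force_cpu: bool = False) -> str:
    """Return 'cuda:0' when CUDA is usable and not disabled, else 'cpu'."""
    if force_cpu:
        return "cpu"
    try:
        import torch  # optional here so the sketch runs without PyTorch
    except ImportError:
        return "cpu"
    return "cuda:0" if torch.cuda.is_available() else "cpu"


# Hypothetical flag name; pick whatever fits the tracker's CLI.
parser = argparse.ArgumentParser(description="DaSiamRPN tracker (sketch)")
parser.add_argument("--cpu", action="store_true",
                    help="force CPU even if CUDA is available")

args = parser.parse_args(["--cpu"])  # simulate passing the switch
print(select_device(force_cpu=args.cpu))  # -> cpu
```

With a flag like this, GPU users keep the default CUDA path and CPU-only users can opt out explicitly instead of crashing on the CUDA calls.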
We could have two submodules:
What do you think? Then we could test
I will take care of this as soon as I have some time.
Perfect! Thank you.
Fixed readme to show new commands
added check to dasiamwrapper to target cpu if necessary
I implemented the change, but now the DaSiam Tracker is going all over the place. I'm not sure if it's because of my changes, though, since I can't really test it on a GPU right now. I also updated the README a bit, since the current one is outdated.
Great PR! I will test it! Sorry for the super late response!
Can I also recommend that you change the README to instruct the user to use the install shell script in the dasiamrpn submodule for installing the dasiamrpn dependencies? This makes sure that you get the correct version of PyTorch. It may still work with a different version, but that's not guaranteed.
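A README snippet along these lines could cover that suggestion. Note the submodule path `dasiamrpn` and the script name `install.sh` are assumptions here; check the actual repository layout before copying this:

```shell
# Fetch the dasiamrpn submodule pinned by this repository
# (hypothetical submodule path; adjust to your checkout).
git submodule update --init --recursive

# Run the submodule's install script so you get the exact
# PyTorch version the tracker was tested with.
cd dasiamrpn
sh install.sh
```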
@WillJStone yes, I agree! |
The rest is working fine on the CPU! I need to test on GPU (: Should the GPU part work the same way as the official version? I haven't checked your fork yet.
I fixed the issues you pointed out and uploaded the model to the no_cuda repository.
I added this code: `device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')`
Can you give me a text to insert? Then I'll do that. I'm not sure what the submodule prompt says.
I do not get that error.
I merged, but there is still one issue: I fixed the other errors.
Should I still move the model, since you closed the PR?
Yes please, I assumed you would do that ASAP (:
OK, I changed it. I think you need to update the submodule though (`git submodule sync && git submodule update`).
Done! Should be working fine now (:
I created a fork with fixes to the issues explained in Issue #57