a simple compiling error with clang in macOS · Issue #728 · pytorch/pytorch

Closed · Teaonly opened this issue Feb 13, 2017 · 6 comments
@Teaonly (Contributor) commented Feb 13, 2017

Build error on macOS (version 10.12.3).
Code commit version:

commit 0a893abc7be4dbaf609c1db180dcc148853ad208
Author: Adam Lerer <alerer@fb.com>
Date:   Sat Feb 11 18:49:32 2017 -0800

    fix serialization bug for large files

Error message:

In file included from torch/csrc/serialization.cpp:7:
In file included from /Users/teaonly/Workspace/github/pytorch/torch/lib/tmp_install/include/TH/THGenerateAllTypes.h:8:
In file included from generic/serialization.cpp:1:
/Users/teaonly/Workspace/github/pytorch/torch/csrc/generic/serialization.cpp:106:27: error: no matching function for call to 'min'
      size_t to_convert = std::min(size - i, buffer_size);
                          ^~~~~~~~
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/algorithm:2589:1: note: candidate template ignored: deduced conflicting types
      for parameter '_Tp' ('long long' vs. 'long')
min(const _Tp& __a, const _Tp& __b)
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/algorithm:2599:1: note: candidate template ignored: could not match
      'initializer_list<type-parameter-0-0>' against 'long long'
min(initializer_list<_Tp> __t, _Compare __comp)
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/algorithm:2581:1: note: candidate function template not viable: requires 3
      arguments, but 2 were provided
min(const _Tp& __a, const _Tp& __b, _Compare __comp)
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/algorithm:2607:1: note: candidate function template not viable: requires single
      argument '__t', but 2 arguments were provided
min(initializer_list<_Tp> __t)

clang reports the error in the following code:

    for (int64_t i = 0; i < self->size; i += buffer_size) {
      size_t to_convert = std::min(self->size - i, buffer_size);

self->size is long, i is int64_t, and buffer_size is long. On macOS, int64_t is long long, so self->size - i is promoted to long long while buffer_size stays long. std::min cannot deduce a single template type from two arguments of different types, so clang reports an error.

@szagoruyko (Contributor)

You probably forgot to add export MACOSX_DEPLOYMENT_TARGET=10.9?

@Teaonly (Contributor, Author) commented Feb 13, 2017
(…same std::min candidate notes as quoted above…)
14 errors generated.
/usr/bin/clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/teaonly/opt/anaconda3/include -arch x86_64 -I/Users/teaonly/Workspace/github/pytorch -I/Users/teaonly/Workspace/github/pytorch/torch/csrc -I/Users/teaonly/Workspace/github/pytorch/torch/lib/tmp_install/include -I/Users/teaonly/Workspace/github/pytorch/torch/lib/tmp_install/include/TH -I/Users/teaonly/Workspace/github/pytorch/torch/lib/tmp_install/include/THPP -I/Users/teaonly/Workspace/github/pytorch/torch/lib/tmp_install/include/THNN -I/Users/teaonly/opt/anaconda3/lib/python3.5/site-packages/numpy/core/include -I/Users/teaonly/opt/anaconda3/include/python3.5m -c torch/csrc/autograd/init.cpp -o build/temp.macosx-10.6-x86_64-3.5/torch/csrc/autograd/init.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY
error: command '/usr/bin/clang' failed with exit status 1
aliwork:pytorch teaonly$ echo $MACOSX_DEPLOYMENT_TARGET
10.9

I have set the environment variable. If I modify the code as follows, PyTorch installs successfully.

--- a/torch/csrc/generic/serialization.cpp
+++ b/torch/csrc/generic/serialization.cpp
@@ -52,7 +52,7 @@ void THPStorage_(writeFileRaw)(THStorage *self, int fd)
     long buffer_size = std::min(self->size, (long)5000);
     std::unique_ptr<uint8_t[]> le_buffer(new uint8_t[buffer_size * sizeof(real)]);
     for (int64_t i = 0; i < self->size; i += buffer_size) {
-      size_t to_convert = std::min(self->size - i, buffer_size);
+      size_t to_convert = std::min(self->size - (int)i, buffer_size);
       if (sizeof(real) == 2) {
         THP_encodeInt16Buffer((uint8_t*)le_buffer.get(),
             (const int16_t*)data + i,
@@ -103,7 +103,7 @@ THStorage * THPStorage_(readFileRaw)(int fd)
     long buffer_size = std::min(size, (long)5000);
     std::unique_ptr<uint8_t[]> le_buffer(new uint8_t[buffer_size * sizeof(real)]);
     for (int64_t i = 0; i < size; i += buffer_size) {
-      size_t to_convert = std::min(size - i, buffer_size);
+      size_t to_convert = std::min(size - (int)i, buffer_size);
       SYSCHECK(read(fd, le_buffer.get(), sizeof(real) * to_convert));
       if (sizeof(real) == 2) {
         THP_decodeInt16Buffer((int16_t*)data + i,

@kashif (Contributor) commented Feb 13, 2017

Hopefully #732 fixes this.

@kashif (Contributor) commented Feb 13, 2017

@szagoruyko do you know where the MACOSX_DEPLOYMENT_TARGET variable is used? As far as I can see, it doesn't do anything...

@apaszke (Contributor) commented Feb 13, 2017

@kashif it's not used by PyTorch, but by the compiler
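To make that concrete: the variable is consumed by the clang driver itself, where on Darwin it supplies the default deployment target (the same effect as passing -mmacosx-version-min on the command line), which is why no grep through the PyTorch build scripts will find it. A minimal check, assuming a POSIX shell:

```shell
# MACOSX_DEPLOYMENT_TARGET is read by the clang driver, not by PyTorch's
# build scripts: on macOS it becomes the default -mmacosx-version-min,
# which selects the SDK/libc++ behavior the compiler targets.
export MACOSX_DEPLOYMENT_TARGET=10.9
echo "$MACOSX_DEPLOYMENT_TARGET"
```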

@apaszke (Contributor) commented Feb 13, 2017

This should be fixed now by @kashif's PR.

@apaszke apaszke closed this as completed Feb 13, 2017
bddppq pushed a commit to bddppq/pytorch that referenced this issue Apr 17, 2018
…9c90c8

Previous import was a4dcc47791eb127652f5aaddd51d8896d446a067

Included changes:
- **[985af3f](onnx/onnx@985af3f)**: Update PythonAPIOverview.md (pytorch#738) <Dmytro Dzhulgakov>
- **[b69be33](onnx/onnx@b69be33)**: Add backend test for upsample (pytorch#729) <Sebastian Meßmer>
- **[0d9496e](onnx/onnx@0d9496e)**: Input test data of concat op should be float (pytorch#711) <Changming Sun>
- **[20bcb8b](onnx/onnx@20bcb8b)**: Fix the spec for batchnorm and instancenorm (pytorch#733) <Lu Fang>
- **[c9f825f](onnx/onnx@c9f825f)**: Refine a little bit about op spec. (pytorch#666) <Ke Zhang>
- **[a484eb2](onnx/onnx@a484eb2)**: Fix an error in Conv doc (pytorch#731) <Lu Fang>
- **[7410cc4](onnx/onnx@7410cc4)**: Fix incorrect package output paths (pytorch#730) <bddppq>
- **[be546e2](onnx/onnx@be546e2)**: Improve optimizer's API and docs (pytorch#713) <Lu Fang>
- **[c61506f](onnx/onnx@c61506f)**: Fix the shape inference python API (pytorch#716) <Lu Fang>
- **[e9d4134](onnx/onnx@e9d4134)**: Fix cmake on windows when not building python extension (pytorch#728) <bddppq>
- **[72187aa](onnx/onnx@72187aa)**: Add value_info support in make_graph (pytorch#726) <Lu Fang>
- **[67b7d89](onnx/onnx@67b7d89)**: Fix gen_proto in cmake (pytorch#719) <bddppq>
- **[fcb4ae3](onnx/onnx@fcb4ae3)**: docs rewording: Important Python Functions -> Python API Overview (pytorch#721) <anderspapitto>
- **[24275d6](onnx/onnx@24275d6)**: Ignore .eggs directory when doing lint (pytorch#722) <bddppq>
- **[54be8fa](onnx/onnx@54be8fa)**: Use cmake3 if it's available (pytorch#718) <bddppq>
- **[b8c4238](onnx/onnx@b8c4238)**: Add python function docs (pytorch#714) <Lu Fang>
- **[e177493](onnx/onnx@e177493)**: Remove unused cmake utils (pytorch#712) <bddppq>
- **[72d6ad6](onnx/onnx@72d6ad6)**: Remove pycmd from CMake (pytorch#710) <bddppq>
- **[93f0d40](onnx/onnx@93f0d40)**: Fix windows local build (pytorch#709) <Raymond Yang>
- **[6734224](onnx/onnx@6734224)**: CMake fixes and setup.py cleanup (pytorch#706) <bddppq>
- **[7f6a4fd](onnx/onnx@7f6a4fd)**: Add docs to explain important functions in ONNX Infra (pytorch#682) <Lu Fang>
- **[f0f6b3d](onnx/onnx@f0f6b3d)**: fix hardmax test cases make output dtype same as input (pytorch#705) <Wenhao Hu>
- **[c970f0c](onnx/onnx@c970f0c)**: Fix the Dummy backend (pytorch#701) <Lu Fang>
- **[2af45df](onnx/onnx@2af45df)**: setup.py uses cmake build system (pytorch#606) <anderspapitto>
- **[dfcaade](onnx/onnx@dfcaade)**: clean up unused variable left by removing consumed_input (pytorch#697) <bddppq>
- **[accfc74](onnx/onnx@accfc74)**: Remove incorrect backend test (pytorch#700) <Lu Fang>
- **[e558732](onnx/onnx@e558732)**: add max inclusive version to defs.get_schema function (pytorch#695) <Wenhao Hu>
- **[16f02eb](onnx/onnx@16f02eb)**: add API to add domain to min/max version for extension. (pytorch#694) <Ke Zhang>
- **[3e560dd](onnx/onnx@3e560dd)**: Fix doc for initializer (pytorch#690) <bddppq>
- **[6cc4f53](onnx/onnx@6cc4f53)**: Add model save function (pytorch#692) <Lu Fang>
- **[21eaf9b](onnx/onnx@21eaf9b)**: Changing the string discussing versions in operator specifications. (pytorch#691) <Niklas Gustafsson>
- **[3b0cdf4](onnx/onnx@3b0cdf4)**: Minor code quality improvements in optimizer/ (pytorch#612) <Sebastian Meßmer>
- **[641f126](onnx/onnx@641f126)**: Fix Gemm doc wording (pytorch#689) <bddppq>
- **[4a0ec75](onnx/onnx@4a0ec75)**: Clarifies installation error message when external protobuf dependencies are missing (pytorch#684) <Daniel J. H>
- **[960a2c3](onnx/onnx@960a2c3)**: Check outputs dtype in backend tests (pytorch#567) <bddppq>
- **[1d7dee4](onnx/onnx@1d7dee4)**: Fix Average pool test cases converted from PyTorch (pytorch#677) <Lu Fang>
- **[36d7fff](onnx/onnx@36d7fff)**: Fix Attribute default value pybind11 binding (pytorch#671) <bddppq>
- **[0536866](onnx/onnx@0536866)**: git ignore .pytest_cache (pytorch#674) <bddppq>
- **[afc84ac](onnx/onnx@afc84ac)**: Update README.md (pytorch#672) <Dmytro Dzhulgakov>
- **[9d2b530](onnx/onnx@9d2b530)**: Revert "[Typing 1/3] Setup mypy type checker (pytorch#607)" (pytorch#667) <bddppq>
- **[086727e](onnx/onnx@086727e)**: [Typing 1/3] Setup mypy type checker (pytorch#607) <Sebastian Meßmer>
- **[5716e20](onnx/onnx@5716e20)**: Convert all Node tests to Model tests (pytorch#651) <bddppq>
- **[6fe932a](onnx/onnx@6fe932a)**: Replace unittest.skip with custom exception (pytorch#659) <Dmytro Dzhulgakov>
- **[ecac1c1](onnx/onnx@ecac1c1)**: Merge Rel 1.1.0 branch into master (pytorch#657) <Anirudh>
- **[5cb999d](onnx/onnx@5cb999d)**: Minor cleanups to shape inference (pytorch#653) <anderspapitto>
- **[f4acf28](onnx/onnx@f4acf28)**: Remove allowconsumed enforceconsumed from op schema. (pytorch#617) <Ke Zhang>
- **[a8e4648](onnx/onnx@a8e4648)**: Adjust link flags when built in Windows Debug mode (pytorch#647) <Yinghai Lu>
- **[7c009fe](onnx/onnx@7c009fe)**: Fix lint error in optimizer test (pytorch#656) <bddppq>
- **[063d12f](onnx/onnx@063d12f)**: Fix optimizer split pass for models with constant output (pytorch#652) <bddppq>
jjsjann123 pushed a commit to mcarilli/pytorch that referenced this issue Mar 25, 2021
* Add a reproducer

* Fix getAllValsBetween (pytorch#728)

* expand the test

* Return a deterministically ordered vector instead of unordered_set
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this issue Sep 20, 2021
r-barnes added a commit to r-barnes/pytorch that referenced this issue Oct 11, 2021
Summary:
Pull Request resolved: pytorch/FBGEMM#728

Pull Request resolved: pytorch#62653

This consolidates checks determining whether tensors live on the same device into a single line using template parameter packs to unroll the check code.

The advantage of using the new checking syntax is that it makes it easy to use static analysis to determine both if the check is present and whether or not it is comprehensive. D30072495 includes a linter which performs this action.

Note that this is especially useful for PyTorch extensions which don't receive this check automatically from codegen.

Test Plan:
```
buck test //caffe2/torch/fb/sparsenn:gpu_test
buck test //caffe2/torch/fb/sparsenn:test
```

Reviewed By: ngimel

Differential Revision: D29924464

fbshipit-source-id: 6c575dda8b707eb6df7e9675d2bb62ec8e541753
hubertlu-tw pushed a commit to hubertlu-tw/pytorch that referenced this issue Nov 1, 2022
…pytorch#728)

* Adding C++ Multihead Attention implementation to contrib.

* Add reference test that at least works for forward.

* Remove CublasLt support from multihead attention.

* Add new Python version of self attention.

* Update python model of MHA with backward pass.

* Fixed Output Linear connection in MHA.

* Clean up compiles and add documentation to PySelfAttention.

* Add Encdec Python version of multihead attention.  Cleanup files.

* Tests for self and encdec multihead attention.

* Add reference pytorch implementation of attention with norm and add.

* Add cutlass branch definition.

* Add cutlass download to compile.

* Add norm/add tests.

* Add biases to pytorch python versions.

* Add tests and fix issues with python version of attention masking.

* Create README.md

* Update README.md

* Update README.md

* Update perf test parameters.

* Update README.md

* Update README.md

* Update README.md

* Add files via upload

* Update README.md

* Update README.md

* Update README.md

* Fix matmul1 output tensor size.  Fix tests that missed issue.

* Allow for Z dimensions of 64K and greater on batched GEMMs.

* remove redundant imports

* general cleanup, remove deprecated or unused functions
4 participants