Commit Graph

18 Commits

Author SHA1 Message Date
b4420f0fd5 Fix complex variable notation for division operator to be consistent. (#98057)
A readability improvement: changes the variable notation in the complex division code to match the comment `(a + bi)/(c + di)` in `constexpr FORCE_INLINE_APPLE complex<T>& operator/=(const complex<U>& rhs)`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98057
Approved by: https://github.com/ezyang
2023-04-05 12:06:20 +00:00
1f55f3b0de Solving the under/overflow for complex division (#92539)
Fixes #92043.
I'm following numpy's implementation as suggested by @min-jean-cho.
I found out that this implementation still overflows for numbers greater than `finfo.max / 2`, but this is still much better than the previous implementation, which overflowed for numbers greater than `finfo.max ** 0.5`.
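
For context, a minimal sketch of the scaled-division idea (Smith's algorithm, which numpy's complex division builds on), written against `std::complex` for illustration; this is not the actual `c10::complex::operator/=` code:

```cpp
// Hedged sketch: Smith's algorithm for (a + bi) / (c + di).
// Scaling by the ratio d/c (or c/d) keeps intermediate products close to the
// magnitude of the inputs, avoiding the premature overflow/underflow of the
// naive (ac + bd, bc - ad) / (c^2 + d^2) formula.
#include <cmath>
#include <complex>

template <typename T>
std::complex<T> smith_divide(const std::complex<T>& lhs, const std::complex<T>& rhs) {
  const T a = lhs.real(), b = lhs.imag();
  const T c = rhs.real(), d = rhs.imag();
  if (std::abs(c) >= std::abs(d)) {
    const T r = d / c;
    const T denom = c + d * r;  // == (c^2 + d^2) / c, but never squares c or d
    return {(a + b * r) / denom, (b - a * r) / denom};
  } else {
    const T r = c / d;
    const T denom = c * r + d;  // == (c^2 + d^2) / d
    return {(a * r + b) / denom, (b * r - a) / denom};
  }
}
```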

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92539
Approved by: https://github.com/lezcano
2023-01-26 01:14:06 +00:00
bac33ea8b6 [CUDA] Drop CUDA 10 support (#89582)
CC @ptrblck @ngimel @malfet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89582
Approved by: https://github.com/malfet, https://github.com/ngimel
2023-01-05 05:11:53 +00:00
e81bfffbe1 [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D33999038

fbshipit-source-id: f6e3f24997eb3e478857341d21fa6aaf9dd3a906
(cherry picked from commit d530a426d4b20475cfb3b2538d0e0e2c017c358b)
2022-02-04 11:14:53 +00:00
e90f5586d6 Add support for include-what-you-use (#71114)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71114

`include-what-you-use` (`iwyu`) is a clang-based tool that looks at
the code's AST to figure out which symbols need to be included and,
with the help of user-defined mappings, suggests the include
files that are actually needed.

This is very nice for the per-operator headers build because it gives
you a list of exactly the `ATen/ops` headers needed by the file. You
still need to manually write the include-guards etc., but at least this
automates the most tedious part.
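
For illustration only (the header names below are made up), the hand-written include guard referred to above looks roughly like:

```cpp
// Illustrative sketch only: the guard is written by hand, iwyu just tells you
// which ATen/ops headers the file actually needs. Header names are examples.
#ifndef AT_PER_OPERATOR_HEADERS
#include <ATen/Functions.h>
#else
#include <ATen/ops/empty.h>
#include <ATen/ops/zeros.h>
#endif
```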

The header mappings aren't perfect yet, so it will still suggest
including basic c10 components everywhere instead of taking them
transitively from `TensorBase.h`. However, this does provide some
useful mappings and removes bad include paths from the build system
that were causing bad suggestions.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D33949901

Pulled By: malfet

fbshipit-source-id: d5b015ef9e168bee4b8717b8e87ccc0608da62a1
(cherry picked from commit ecb2ffb35a5b1509a1275834fbe5c25e60ea1b79)
2022-02-04 01:39:48 +00:00
a383d01774 [fbcode][warnings] Suppress warnings in caffe2/c10 (#71356)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71356

Suppress remaining header based warnings in `caffe2/c10` when building with `clang`

Test Plan: CI pass

Reviewed By: r-barnes

Differential Revision: D33600097

fbshipit-source-id: e1c0d84a0bad768eb03e047d62b5379cf28b48e2
2022-01-15 18:34:08 -08:00
572c3e3118 Fix some usages of CUDA_VERSION (#69092)
Summary:
See https://pytorch.slack.com/archives/G4Z791LL8/p1638229956006300

I grepped c10, aten, and torch for CUDA_VERSION and checked the usages I saw.
I can't guarantee I made a clean sweep, but this improves the status quo.
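
A hedged illustration of the sort of guard such cleanups tighten, not necessarily what this commit changed: check that `CUDA_VERSION` is defined before comparing it, so builds without CUDA don't silently take the wrong branch.

```cpp
// Hypothetical example, not taken from this commit.
#include <cstdio>

void report_cuda_toolkit() {
#if defined(CUDA_VERSION) && CUDA_VERSION >= 11000
  std::printf("built against CUDA 11 or newer\n");
#elif defined(CUDA_VERSION)
  std::printf("built against CUDA %d\n", CUDA_VERSION);
#else
  std::printf("built without CUDA\n");
#endif
}
```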

cc ngimel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69092

Reviewed By: zou3519

Differential Revision: D32786919

Pulled By: ngimel

fbshipit-source-id: 1d29827dca246f33118d81e136252ddb5bf3830f
2021-12-02 18:32:47 -08:00
72803dbcfd [caffe2] Fix invalid vector accesses and polar() call (#66757)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66757

`InterpreterStateImpl::run()` gets the number of outputs from the current frame, but by the time the continuation completes, the frame is gone, so we're calling `front()` on an empty vector. This works in practice (the data is still there), but it is technically undefined behavior and could break in the future.

Also, `std::polar()` expects its magnitude argument to be non-negative, but `c10::polar()` does not require that, so implement it explicitly (the implementation is the same as libstdc++'s).
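
A minimal sketch of that idea (mirroring libstdc++, shown against `std::complex` so it is self-contained; not the verbatim `c10::polar`):

```cpp
#include <cmath>
#include <complex>

// Unlike std::polar, whose behavior is only specified for r >= 0, this form
// also handles negative magnitudes: the sign simply flips both components.
template <typename T>
std::complex<T> polar_any_sign(T r, T theta) {
  return std::complex<T>(r * std::cos(theta), r * std::sin(theta));
}
```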

Test Plan: JIT tests pass.

Reviewed By: zhxchen17

Differential Revision: D31715587

fbshipit-source-id: 98abcc10c2742887af866d8e70169a0187c41d33
2021-10-19 00:29:54 -07:00
085e2f7bdd [ROCm] Changes not to rely on CUDA_VERSION or HIP_VERSION (#65610)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65610

- Replace HIP_PLATFORM_HCC with USE_ROCM.
- Don't rely on CUDA_VERSION or HIP_VERSION; use USE_ROCM and ROCM_VERSION instead (see the sketch after this list).

- In the next PR:
   - Remove the mapping from CUDA_VERSION to HIP_VERSION and from CUDA to HIP in hipify.
   - HIP_PLATFORM_HCC is deprecated, so add HIP_PLATFORM_AMD to support HIP host code compilation on gcc.
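
A hedged before/after sketch of the guard style this moves toward (the variable names are made up; `__HIP_PLATFORM_HCC__` is the compiler-defined spelling of the deprecated macro):

```cpp
// Before: ROCm-only code keyed off a compiler-defined HIP platform macro.
#ifdef __HIP_PLATFORM_HCC__
constexpr bool kIsROCmBuildOld = true;
#else
constexpr bool kIsROCmBuildOld = false;
#endif

// After: keyed off the build-system-defined USE_ROCM instead.
#if defined(USE_ROCM)
constexpr bool kIsROCmBuildNew = true;
#else
constexpr bool kIsROCmBuildNew = false;
#endif
```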

cc jeffdaily sunway513 jithunnair-amd ROCmSupport amathews-amd

Reviewed By: jbschlosser

Differential Revision: D30909053

Pulled By: ezyang

fbshipit-source-id: 224a966ebf1aaec79beccbbd686fdf3d49267e06
2021-09-29 09:55:43 -07:00
0a66d5b325 [PyTorch] Remove unnecessary iostream includes in headers (#61500)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61500

libstdc++ defines a static variable called `std::__ioinit` in `<iostream>` that adds global-constructor size overhead to each translation unit that includes it. To reduce that size overhead, we can often include `<ostream>` instead.
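
A small hypothetical header showing the pattern (the `DeviceTag` type is made up):

```cpp
#pragma once
// Was: #include <iostream>. <ostream> is enough for declaring operator<<
// and does not drag in iostream's std::__ioinit static initializer.
#include <ostream>

struct DeviceTag {
  int index;
};

inline std::ostream& operator<<(std::ostream& os, const DeviceTag& d) {
  return os << "device(" << d.index << ")";
}
```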
ghstack-source-id: 136163529

Test Plan: buildsizebot some mobile apps

Reviewed By: dhruvbird

Differential Revision: D29648016

fbshipit-source-id: 9c3139712c71248513cc5032d21e77f3ecbae8fe
2021-08-19 18:54:51 -07:00
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more of the codebase, and eventually all of it.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
2962fee99a Fix/suppress a type warning in PyTorch (#55142)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55142

Declare some functions C10_HOST_DEVICE to fix the NVCC warning.

During PyTorch compilation, the NVCC compiler was emitting several warnings like this one:

```
caffe2/c10/util/TypeCast.h(39): warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
          detected during:
            instantiation of "dest_t c10::static_cast_with_inter_type<dest_t, src_t>::apply(src_t) [with dest_t=c10::complex<double>, src_t=__nv_bool]"
(158): here
            instantiation of "To c10::convert<To,From>(From) [with To=c10::complex<double>, From=__nv_bool]"
(170): here
            instantiation of "To c10::checked_convert<To,From>(From, const char *) [with To=c10::complex<double>, From=__nv_bool]"
caffe2/c10/core/Scalar.h(63): here
```
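
A hedged sketch of the shape of the fix (the local `HOST_DEVICE` macro and helper name are stand-ins; the real code uses `C10_HOST_DEVICE`): mark the constexpr helper as both host and device so NVCC no longer warns when it is instantiated from a `__host__ __device__` context.

```cpp
// Hypothetical stand-in for C10_HOST_DEVICE so the snippet is self-contained.
#ifdef __CUDACC__
#define HOST_DEVICE __host__ __device__
#else
#define HOST_DEVICE
#endif

template <typename To, typename From>
HOST_DEVICE constexpr To cast_helper(From value) {
  return static_cast<To>(value);
}
```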

How to reproduce:
- Make sure you are on remote/master
- Run:
  `buck build mode/dev-nosan caffe2/torch/fb/sparsenn:sparsenn_operators_gpu`

Test Plan: - compilation completes without warnings.

Reviewed By: r-barnes

Differential Revision: D27469757

fbshipit-source-id: f8c4eedb637c6d487ac49bb310e48be11db204e2
2021-04-01 13:59:56 -07:00
8aad66a7bd [c10/**] Fix typos (#49815)
Summary:
All pretty minor. I avoided renaming `class DestructableMock` to `class DestructibleMock` and similar symbol renames (in this PR).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49815

Reviewed By: VitalyFedyunin

Differential Revision: D25734507

Pulled By: mruberry

fbshipit-source-id: bbe8874a99d047e9d9814bf92ea8c036a5c6a3fd
2021-01-01 02:11:56 -08:00
11334280bf Suppress warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed (#49197)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49197

Compiling currently gives a number of these warnings:
```
caffe2/c10/util/TypeCast.h(27): warning: calling a constexpr __host__ function from a __host__ __device__ function is not allowed. The experimental flag '--expt-relaxed-constexpr' can be used to allow this.
          detected during:
            instantiation of "decltype(auto) c10::maybe_real<true, src_t>::apply(src_t) [with src_t=c10::complex<double>]"
(57): here
            instantiation of "uint8_t c10::static_cast_with_inter_type<uint8_t, src_t>::apply(src_t) [with src_t=c10::complex<double>]"
(157): here
            instantiation of "To c10::convert<To,From>(From) [with To=uint8_t, From=c10::complex<double>]"
(169): here
            instantiation of "To c10::checked_convert<To,From>(From, const char *) [with To=uint8_t, From=c10::complex<double>]"
caffe2/c10/co
```
Here we fix this by adding `C10_HOST_DEVICE` to the offending function.

Test Plan:
Compiling
```
buck build mode/dev-nosan -c=python.package_style=inplace dper3/dper3_models/experimental/pytorch/ads:ads_model_generation_script
```
shows this warning.

We rely on sandcastle for testing here.

Reviewed By: xw285cornell

Differential Revision: D25440771

fbshipit-source-id: 876c412eb06e8837978061cc4793abda42fac821
2020-12-15 10:49:07 -08:00
b470fa4500 Add complex number support for binary logical operators (#43174)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43174

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D23684425

Pulled By: mruberry

fbshipit-source-id: 4857b16e18ec4c65327136badd7f04c74e32d330
2020-09-23 23:03:00 -07:00
c7d79f35e3 Header rename complex_type.h -> complex.h (#39885)
Summary:
This file should have been named `complex.h`, but unfortunately it was named `complex_type.h` due to a name clash with FBCode. Is this still the case, and is it easy to resolve the name clash? Maybe related to the comment at https://github.com/pytorch/pytorch/pull/39834#issuecomment-642950012
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39885

Differential Revision: D22018575

Pulled By: ezyang

fbshipit-source-id: e237ccedbe2b30c31aca028a5b4c8c063087a30f
2020-06-23 16:27:09 -07:00
9216c67c9e Revert D21021677: [pytorch][PR] Add core of c10::complex
Test Plan: revert-hammer

Differential Revision:
D21021677

Original commit changeset: 9e144e581fa4

fbshipit-source-id: ce6a88fc71ec0134d0fc6ecdddc4c4db35f89b1f
2020-04-14 13:58:24 -07:00
25252816cf Add core of c10::complex (#35524)
Summary:
Step 0 of https://github.com/pytorch/pytorch/issues/35284

Reference: https://en.cppreference.com/w/cpp/numeric/complex
We are targeting C++20. The differences across C++ versions are mostly in `constexpr` qualifiers; newer versions declare more functions as `constexpr`.

This PR adds the core of `c10::complex`, it includes
- standard constructors as in `std::complex`
- explicit conversion constructors converting from `std/thrust::complex` to `c10::complex`
- standard assignment operators as in `std::complex`
- conversion assignment operators converting from `std/thrust::complex` to `c10::complex`
- other standard operators as in `std::complex`
- standard methods as in `std::complex`
- explicit casting operators to std/thrust
- basic non-member functions as in `std::complex`:
  - arithmetic operators
  - `==`, `!=`
  - `<<`, `>>`
  - `std::real`, `std::imag`, `std::abs`, `std::arg`, `std::norm`, `std::conj`, `std::proj`, `std::polar`
    - Some of these are intentionally not completely implemented; they are marked `TODO` and will be implemented in the future.

This PR does not include:
- overloads of the math functions, which will come in the next PR
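
A rough, hypothetical sketch of the shape of such a type (`toy_complex` is a made-up name; this is not the actual `c10::complex`): constexpr constructors, `real()`/`imag()` accessors, and operators mirroring the `std::complex` interface.

```cpp
#include <iostream>

template <typename T>
struct toy_complex {  // hypothetical name, for illustration only
  T re{};
  T im{};

  constexpr toy_complex() = default;
  constexpr toy_complex(T r, T i = T()) : re(r), im(i) {}

  constexpr T real() const { return re; }
  constexpr T imag() const { return im; }

  constexpr toy_complex& operator+=(const toy_complex& o) {
    re += o.re;
    im += o.im;
    return *this;
  }
};

template <typename T>
constexpr toy_complex<T> operator+(toy_complex<T> a, const toy_complex<T>& b) {
  return a += b;
}

int main() {
  constexpr toy_complex<double> z =
      toy_complex<double>(1.0, 2.0) + toy_complex<double>(3.0, -1.0);
  std::cout << z.real() << " + " << z.imag() << "i\n";  // prints "4 + 1i"
}
```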
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35524

Differential Revision: D21021677

Pulled By: anjali411

fbshipit-source-id: 9e144e581fa4b2bee62d33adaf756ce5aadc0c71
2020-04-14 11:00:24 -07:00