mirror of https://github.com/uxlfoundation/oneDNN.git

all: updated Github links to uxlfoundation
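
The change itself is mechanical: every github.com/oneapi-src/oneDNN link and every oneapi-src.github.io/oneDNN documentation link becomes the corresponding uxlfoundation URL. A minimal sketch of how such a repository-wide rewrite could be reproduced locally is shown below; it is an illustration only, not necessarily how this commit was generated.

```sh
# Sketch only: rewrite both the repository URLs and the GitHub Pages host
# in all tracked files, then review the result before committing.
git grep -lE 'oneapi-src(\.github\.io)?/oneDNN' \
    | xargs sed -i \
        -e 's#github\.com/oneapi-src/oneDNN#github.com/uxlfoundation/oneDNN#g' \
        -e 's#oneapi-src\.github\.io/oneDNN#uxlfoundation.github.io/oneDNN#g'
git diff --stat
```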

.github/ISSUE_TEMPLATE/bug_report.md (vendored, 8 changed lines)

@@ -12,7 +12,7 @@ factors are considered important to reproduce an issue.
 
 # Version
 Report oneDNN version and githash. Version information is printed to stdout
-in [verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html).
+in [verbose mode](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html).
 
 # Environment
 oneDNN includes hardware-specific optimizations and may behave
@@ -30,8 +30,8 @@ the following information to help reproduce the issue:
 Please check that the issue is reproducible with the latest revision on
 master. Include all the steps to reproduce the issue.
 
-You can use [verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html)
-and [benchdnn](https://github.com/oneapi-src/oneDNN/tree/master/tests/benchdnn)
+You can use [verbose mode](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html)
+and [benchdnn](https://github.com/uxlfoundation/oneDNN/tree/master/tests/benchdnn)
 to validate correctness of all primitives the library supports. If this does not
 work a short C/C++ program or modified unit tests demonstrating the issue
 will greatly help with the investigation.
@@ -40,7 +40,7 @@ will greatly help with the investigation.
 Document behavior you observe. For performance defects, like performance
 regressions or a function being slow, provide a log including output generated
 by your application in
-[verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html).
+[verbose mode](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html).
 
 # Expected behavior
 Document behavior you expect.

.github/automation/aarch64/skipped-tests.sh (vendored, 2 changed lines)

@@ -27,7 +27,7 @@ OS=${OS:-"Linux"}
 SKIPPED_GRAPH_TEST_FAILURES="test_graph_unit_dnnl_sdp_decomp_cpu"
 SKIPPED_GRAPH_TEST_FAILURES+="|test_graph_unit_dnnl_mqa_decomp_cpu"
 
-# described in issue: https://github.com/oneapi-src/oneDNN/issues/2175
+# described in issue: https://github.com/uxlfoundation/oneDNN/issues/2175
 SKIPPED_TEST_FAILURES="test_benchdnn_modeC_matmul_multidims_cpu"
 
 # We currently have some OS and config specific test failures.

.github/pull_request_template.md (vendored, 6 changed lines)

@@ -1,6 +1,6 @@
 # Description
 
-Please include a summary of the change. Please also include relevant motivation and context. See [contribution guidelines](https://github.com/oneapi-src/oneDNN/blob/master/CONTRIBUTING.md) for more details. If the change fixes an issue not documented in the project's Github issue tracker, please document all steps necessary to reproduce it.
+Please include a summary of the change. Please also include relevant motivation and context. See [contribution guidelines](https://github.com/uxlfoundation/oneDNN/blob/master/CONTRIBUTING.md) for more details. If the change fixes an issue not documented in the project's Github issue tracker, please document all steps necessary to reproduce it.
 
 Fixes # (github issue)
 
@@ -26,7 +26,7 @@ Fixes # (github issue)
 - [ ] Have you included information on how to reproduce the issue (either in a github issue or in this PR)?
 - [ ] Have you added relevant regression tests?
 
-## [RFC](https://github.com/oneapi-src/oneDNN/tree/rfcs) PR
+## [RFC](https://github.com/uxlfoundation/oneDNN/tree/rfcs) PR
 
-- [ ] Does RFC document follow the [template](https://github.com/oneapi-src/oneDNN/blob/rfcs/rfcs/template.md#onednn-design-document-rfc)?
+- [ ] Does RFC document follow the [template](https://github.com/uxlfoundation/oneDNN/blob/rfcs/rfcs/template.md#onednn-design-document-rfc)?
 - [ ] Have you added a link to the rendered document?

@@ -8,8 +8,8 @@ message: >-
 type: software
 authors:
 - name: oneDNN Contributors
-repository-code: 'https://github.com/oneapi-src/oneDNN'
-url: 'https://oneapi-src.github.io/oneDNN'
+repository-code: 'https://github.com/uxlfoundation/oneDNN'
+url: 'https://uxlfoundation.github.io/oneDNN'
 abstract: >-
 oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
 performance library of basic building blocks for deep learning applications.

@@ -25,7 +25,7 @@ oneDNN uses [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) in order to
 diagnose and fix common style violations and easy-to-fix issues in the code
 base. For instructions on how to use `clang-tidy`, please refer to the
 [clang-tidy
-RFC](https://github.com/oneapi-src/oneDNN/blob/rfcs/rfcs/20200813-clang-tidy/README.md).
+RFC](https://github.com/uxlfoundation/oneDNN/blob/rfcs/rfcs/20200813-clang-tidy/README.md).
 The list of clang-tidy checks the oneDNN code base follows is available in the
 `.clang-tidy` file found in the oneDNN top-level directory.
 

@@ -7,8 +7,8 @@ requests! To get started, see the GitHub
 You can:
 
 - Submit your changes directly with a
-[pull request](https://github.com/oneapi-src/oneDNN/pulls)
-- Log a bug or feedback with an [issue](https://github.com/oneapi-src/oneDNN/issues)
+[pull request](https://github.com/uxlfoundation/oneDNN/pulls)
+- Log a bug or feedback with an [issue](https://github.com/uxlfoundation/oneDNN/issues)
 
 **See also:** [Contributor Covenant](CODE_OF_CONDUCT.md) code of conduct.
 
@@ -54,7 +54,7 @@ For Comments (RFC) process, which consists of opening, discussing, and
 accepting (promoting) RFC pull requests.
 
 More information about the process can be found in the dedicated
-[`rfcs`](https://github.com/oneapi-src/oneDNN/tree/rfcs) branch.
+[`rfcs`](https://github.com/uxlfoundation/oneDNN/tree/rfcs) branch.
 
 ## Code contribution guidelines
 
@@ -146,7 +146,7 @@ Use the following command to run tests selected by a build configuration:
 ```
 
 To modify the coverage, use the
-[`ONEDNN_TEST_SET`](https://oneapi-src.github.io/oneDNN/dev_guide_build_options.html#onednn-test-set)
+[`ONEDNN_TEST_SET`](https://uxlfoundation.github.io/oneDNN/dev_guide_build_options.html#onednn-test-set)
 build option.
 
 More details on how to run benchdnn can be found in
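
The hunk above points at the `ONEDNN_TEST_SET` build option for controlling test coverage. As a hedged illustration only (the `SMOKE` value and the plain `ctest` invocation come from the linked build-options guide, not from this diff), a reduced test run might look like:

```sh
# Sketch only: reconfigure an existing build directory for the smoke test set,
# rebuild, and run the selected tests with ctest.
cmake -DONEDNN_TEST_SET=SMOKE ..
cmake --build . -j
ctest
```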

@@ -13,17 +13,17 @@ developers interested in improving application performance on CPUs and GPUs.
 This package contains oneDNN v@PROJECT_VERSION@ (@DNNL_VERSION_HASH@).
 
 You can find information about the latest version and release notes
-at the oneDNN Github (https://github.com/oneapi-src/oneDNN/releases).
+at the oneDNN Github (https://github.com/uxlfoundation/oneDNN/releases).
 
 Documentation
 -------------
 
 * Developer guide
-(https://oneapi-src.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@)
+(https://uxlfoundation.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@)
 explains the programming model, supported functionality, and implementation
 details, and includes annotated examples.
 * API reference
-(https://oneapi-src.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@/modules.html)
+(https://uxlfoundation.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@/modules.html)
 provides a comprehensive reference of the library API.
 
 System Requirements
@@ -48,7 +48,7 @@ just-in-time (JIT) code generation to deploy the code optimized
 for the latest supported ISA. Future ISAs may have initial support in the
 library disabled by default and require the use of run-time controls to enable
 them. See CPU dispatcher control
-(https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html)
+(https://uxlfoundation.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html)
 for more details.
 
 The library is optimized for the following GPUs:
@@ -65,7 +65,7 @@ Support
 -------
 
 Submit questions, feature requests, and bug reports on the
-GitHub issues page (https://github.com/oneapi-src/oneDNN/issues).
+GitHub issues page (https://github.com/uxlfoundation/oneDNN/issues).
 
 License
 -------
@@ -102,7 +102,7 @@ govern your use of the third party programs as set forth in the
 
 # Security
 
-Security Policy (https://github.com/oneapi-src/oneDNN/blob/main/SECURITY.md)
+Security Policy (https://github.com/uxlfoundation/oneDNN/blob/main/SECURITY.md)
 outlines our guidelines and procedures for ensuring the highest level
 of Security and trust for our users who consume oneDNN.
 

README.md (27 changed lines)

@@ -4,7 +4,7 @@ oneAPI Deep Neural Network Library (oneDNN)
 ===========================================
 
 [](https://www.bestpractices.dev/projects/8762)
-[](https://securityscorecards.dev/viewer/?uri=github.com/oneapi-src/oneDNN)
+[](https://securityscorecards.dev/viewer/?uri=github.com/uxlfoundation/oneDNN)
 
 oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
 performance library of basic building blocks for deep learning applications.
@@ -63,10 +63,9 @@ optimizations are available with [Intel® Extension for TensorFlow*].
 optimizations, and improvements implemented in each version of
 oneDNN.
 
-[oneDNN Developer Guide and Reference]: https://oneapi-src.github.io/oneDNN
-[API Reference]: https://oneapi-src.github.io/oneDNN/group_dnnl_api.html
-[Release Notes]: https://github.com/oneapi-src/oneDNN/releases
-
+[oneDNN Developer Guide and Reference]: https://uxlfoundation.github.io/oneDNN
+[API Reference]: https://uxlfoundation.github.io/oneDNN/group_dnnl_api.html
+[Release Notes]: https://github.com/uxlfoundation/oneDNN/releases
 
 # System Requirements
 
@@ -121,8 +120,8 @@ The library is optimized for the following GPUs:
 (formerly Meteor Lake, Arrow Lake and Lunar Lake)
 * future Intel Arc graphics (code name Battlemage)
 
-[CPU dispatcher control]: https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html
-[Linking Guide]: https://oneapi-src.github.io/oneDNN/dev_guide_link.html
+[CPU dispatcher control]: https://uxlfoundation.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html
+[Linking Guide]: https://uxlfoundation.github.io/oneDNN/dev_guide_link.html
 
 ## Requirements for Building from Source
 
@@ -313,8 +312,8 @@ You can download and install the oneDNN library using one of the following optio
 
 [conda-forge]: https://anaconda.org/conda-forge/onednn
 [System Requirements]: #system-requirements
-[Build Options]: https://oneapi-src.github.io/oneDNN/dev_guide_build_options.html
-[Build from Source]: https://oneapi-src.github.io/oneDNN/dev_guide_build.html
+[Build Options]: https://uxlfoundation.github.io/oneDNN/dev_guide_build_options.html
+[Build from Source]: https://uxlfoundation.github.io/oneDNN/dev_guide_build.html
 
 # Validated Configurations
 
@@ -366,7 +365,7 @@ Submit questions, feature requests, and bug reports on the
 You can also contact oneDNN developers via [UXL Foundation Slack] using
 [#onednn] channel.
 
-[Github issues]: https://github.com/oneapi-src/oneDNN/issues
+[Github issues]: https://github.com/uxlfoundation/oneDNN/issues
 [UXL Foundation Slack]: https://slack-invite.uxlfoundation.org/
 [#onednn]: https://uxlfoundation.slack.com/channels/onednn
 
@@ -401,12 +400,12 @@ This project is intended to be a safe, welcoming space for
 collaboration, and contributors are expected to adhere to the
 [Contributor Covenant](CODE_OF_CONDUCT.md) code of conduct.
 
-[RFC pull request]: https://github.com/oneapi-src/oneDNN/tree/rfcs
+[RFC pull request]: https://github.com/uxlfoundation/oneDNN/tree/rfcs
 [code contribution guidelines]: CONTRIBUTING.md#code-contribution-guidelines
 [coding standards]: CONTRIBUTING.md#coding-standards
-[pull request]: https://github.com/oneapi-src/oneDNN/pulls
-[Milestones]: https://github.com/oneapi-src/oneDNN/milestones
-[help wanted]: https://github.com/oneapi-src/oneDNN/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
+[pull request]: https://github.com/uxlfoundation/oneDNN/pulls
+[Milestones]: https://github.com/uxlfoundation/oneDNN/milestones
+[help wanted]: https://github.com/uxlfoundation/oneDNN/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
 
 
 # License

@@ -64,6 +64,6 @@ If you have any suggestions on how this Policy could be improved, please submit
 an issue or a pull request to this repository. Please **do not** report
 potential vulnerabilities or security flaws via a pull request.
 
-[1]: https://github.com/oneapi-src/oneDNN/releases/latest
-[2]: https://github.com/oneapi-src/oneDNN/security/advisories/new
-[3]: https://github.com/oneapi-src/oneDNN/security/advisories
+[1]: https://github.com/uxlfoundation/oneDNN/releases/latest
+[2]: https://github.com/uxlfoundation/oneDNN/security/advisories/new
+[3]: https://github.com/uxlfoundation/oneDNN/security/advisories

@@ -22,7 +22,7 @@ Both kinds of experimental features can be enabled simultaneously.
 
 | Environment variable | Description |
 |:-----------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| ONEDNN_EXPERIMENTAL_BNORM_STATS_ONE_PASS | Calculate mean and variance in batch normalization(BN) in single pass ([RFC](https://github.com/oneapi-src/oneDNN/tree/rfcs/rfcs/20210519-single-pass-bnorm)). |
+| ONEDNN_EXPERIMENTAL_BNORM_STATS_ONE_PASS | Calculate mean and variance in batch normalization(BN) in single pass ([RFC](https://github.com/uxlfoundation/oneDNN/tree/rfcs/rfcs/20210519-single-pass-bnorm)). |
 | ONEDNN_EXPERIMENTAL_GPU_CONV_V2 | Enable shapeless GPU convolution implementation (the feature is under development). |
 
 | Build time option | Description |

@@ -115,9 +115,9 @@ in this example.
 
 One can create memory with **NCHW** data layout using
 #dnnl_nchw of the enum type #dnnl_format_tag_t defined in
-[dnnl_types.h](https://github.com/oneapi-src/oneDNN/blob/master/include/oneapi/dnnl/dnnl_types.h)
+[dnnl_types.h](https://github.com/uxlfoundation/oneDNN/blob/master/include/oneapi/dnnl/dnnl_types.h)
 for the C API, and dnnl::memory::format_tag::nchw defined in
-[dnnl.hpp](https://github.com/oneapi-src/oneDNN/blob/master/include/oneapi/dnnl/dnnl.hpp)
+[dnnl.hpp](https://github.com/uxlfoundation/oneDNN/blob/master/include/oneapi/dnnl/dnnl.hpp)
 for the C++ API.
 
 

doc/build/build.md (vendored, 8 changed lines)

@@ -3,16 +3,16 @@ Build from Source {#dev_guide_build}
 
 ## Download the Source Code
 
-Download [oneDNN source code](https://github.com/oneapi-src/oneDNN/archive/master.zip)
-or clone [the repository](https://github.com/oneapi-src/oneDNN.git).
+Download [oneDNN source code](https://github.com/uxlfoundation/oneDNN/archive/master.zip)
+or clone [the repository](https://github.com/uxlfoundation/oneDNN.git).
 
 ~~~sh
-git clone https://github.com/oneapi-src/oneDNN.git
+git clone https://github.com/uxlfoundation/oneDNN.git
 ~~~
 
 ## Build the Library
 
-Ensure that all [software dependencies](https://github.com/oneapi-src/oneDNN#requirements-for-building-from-source)
+Ensure that all [software dependencies](https://github.com/uxlfoundation/oneDNN#requirements-for-building-from-source)
 are in place and have at least the minimal supported version.
 
 The oneDNN build system is based on CMake. Use
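
The build.md hunk above stops right before the CMake instructions. As a quick, hedged orientation (directory layout and default generator are assumptions, not taken from this diff), a minimal out-of-source build looks roughly like:

```sh
# Sketch only: clone the renamed repository, configure with default options,
# and build in a separate build directory.
git clone https://github.com/uxlfoundation/oneDNN.git
cd oneDNN
mkdir -p build && cd build
cmake ..
cmake --build . -j
```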

doc/build/build_options.md (vendored, 2 changed lines)

@@ -303,7 +303,7 @@ $ cmake -DONEDNN_BLAS_VENDOR=ARMPL ..
 
 Additional options available for development/debug purposes. These options are
 subject to change without notice, see
-[`cmake/options.cmake`](https://github.com/oneapi-src/oneDNN/blob/master/cmake/options.cmake)
+[`cmake/options.cmake`](https://github.com/uxlfoundation/oneDNN/blob/master/cmake/options.cmake)
 for details.
 
 ## GPU Options

@@ -70,7 +70,7 @@ optional.
 [GELU](@ref dev_guide_op_gelu), [Sigmoid](@ref dev_guide_op_sigmoid), and so on.
 For Swish activation, the node can be constructed with the [Sigmoid](@ref dev_guide_op_sigmoid)
 and [Multiply](@ref dev_guide_op_multiply) as below. You can also refer the
-[Gated-MLP example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
+[Gated-MLP example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
 for Swish definition.
 
 
@@ -104,13 +104,13 @@ platforms follow the general description in @ref dev_guide_data_types.
 ## Examples
 
 oneDNN provides a [Gated-MLP
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
 demonstrating how to construct a typical floating-point Gated-MLP pattern with
 oneDNN Graph API on CPU and GPU with different runtimes.
 
 For applications where the weights of FC up and FC gate are combined as a single
 tensor, oneDNN also provides an
-[example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gated_mlp_wei_combined.cpp)
+[example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gated_mlp_wei_combined.cpp)
 demonstrating how to create the weight tensors for the pattern with the offsets
 and strides from the combined weight tensor.
 
@@ -120,4 +120,4 @@ and strides from the combined weight tensor.
 2. GLU Variants Improve Transformer, https://arxiv.org/abs/2002.05202
 3. LLaMA: Open and Efficient Foundation Language Models, https://arxiv.org/abs/2302.13971
 4. Qwen Technical Report, https://arxiv.org/abs/2309.16609
-5. oneDNN Graph API documentation, https://oneapi-src.github.io/oneDNN/graph_extension.html
+5. oneDNN Graph API documentation, https://uxlfoundation.github.io/oneDNN/graph_extension.html

@@ -93,7 +93,7 @@ platforms follow the general description in @ref dev_guide_data_types.
 ## Example
 
 oneDNN provides a [GQA
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gqa.cpp)
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gqa.cpp)
 demonstrating how to construct a floating-point GQA pattern with oneDNN Graph
 API on CPU and GPU with different runtimes.
 

@@ -135,12 +135,12 @@ platforms follow the general description in @ref dev_guide_data_types.
 ## Example
 
 oneDNN provides an [SDPA
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/sdpa.cpp)
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/sdpa.cpp)
 demonstrating how to construct a typical floating-point SDPA pattern with oneDNN
 Graph API on CPU and GPU with different runtimes.
 
 oneDNN also provides a [MQA (Multi-Query Attention)
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/mqa.cpp) [3]
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/mqa.cpp) [3]
 demonstrating how to construct a floating-point MQA pattern with the same
 pattern structure as in the SDPA example but different head number in Key and
 Value tensors. In MQA, the head number of Key and Value is always one.
@@ -149,6 +149,6 @@ Value tensors. In MQA, the head number of Key and Value is always one.
 
 [1] Attention is all you need, https://arxiv.org/abs/1706.03762v7
 
-[2] oneDNN Graph API documentation, https://oneapi-src.github.io/oneDNN/graph_extension.html
+[2] oneDNN Graph API documentation, https://uxlfoundation.github.io/oneDNN/graph_extension.html
 
 [3] Fast Transformer Decoding: One Write-Head is All You Need, https://arxiv.org/abs/1911.02150

@@ -4,4 +4,4 @@ Benchmarking Performance {#dev_guide_benchdnn}
 oneDNN has a built-in benchmarking program called benchdnn.
 
 For a complete description of the available options and working examples, see
-the [benchdnn readme](https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/README.md#benchdnn).
+the [benchdnn readme](https://github.com/uxlfoundation/oneDNN/blob/master/tests/benchdnn/README.md#benchdnn).

@@ -151,7 +151,7 @@ Above, we can see that the highest performance implementations were
 not dispatched either because they required a higher ISA, or because
 they did not support that datatype configuration.
 A complete list of verbose messages encountered in the dispatch mode
-can be found [here](https://oneapi-src.github.io/oneDNN/dev_guide_verbose_table.html) along with their explanation.
+can be found [here](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose_table.html) along with their explanation.
 
 ### Enable ONEDNN_VERBOSE with timestamps
 
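
The hunk above ends at the section on enabling ONEDNN_VERBOSE with timestamps. A hedged sketch of what that typically looks like from the shell follows; the application name is a placeholder, and the `ONEDNN_VERBOSE_TIMESTAMP` variable is taken from the verbose-mode guide rather than from this diff.

```sh
# Sketch only: capture verbose output with per-event timestamps to a log file.
ONEDNN_VERBOSE=1 ONEDNN_VERBOSE_TIMESTAMP=1 ./my_onednn_app 2>&1 | tee onednn_verbose.log
```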

@@ -240,7 +240,7 @@ primitive execution.
 
 @note
 When oneDNN verbose mode is enabled for builds with
-[Compute Library for the Arm architecture](https://oneapi-src.github.io/oneDNN/dev_guide_build.html#gcc-with-arm-compute-library-acl-on-aarch64-host),
+[Compute Library for the Arm architecture](https://uxlfoundation.github.io/oneDNN/dev_guide_build.html#gcc-with-arm-compute-library-acl-on-aarch64-host),
 any failures in the validation of Compute Library primitives will be detailed
 in the verbose output.

@@ -54,7 +54,7 @@ The following catalogue lists verbose messages, explanations, and additional inf
 |`alpha and beta parameters are not properly set` | | `eltwise` | Alpha and beta parameters are not properly set for the elementwise algorithm. |
 |`large shapes fall back` | | `gemm` | Heuristic to skip current implementation for large tensor shapes for better performance. |
 |`only trivial strides are supported` | | `gemm`, `rnn` | Current implementation for the primitive does not process non-trivial stride values. |
-|`unsupported fpmath mode` | | `matmul` | [Floating-point math mode](https://oneapi-src.github.io/oneDNN/group_dnnl_api_fpmath_mode.html?highlight=math%20mode) is not supported by the current primitive implementation. |
+|`unsupported fpmath mode` | | `matmul` | [Floating-point math mode](https://uxlfoundation.github.io/oneDNN/group_dnnl_api_fpmath_mode.html?highlight=math%20mode) is not supported by the current primitive implementation. |
 |`small shapes fall back` | | `matmul` | Heuristic to skip current implementation for small tensor shapes for better performance. |
 |`incompatible gemm format` | | `matmul`, `ip` | Specified GeMM format is incompatible with the current primitive implementation. |
 |`unsupported <t> tensor layout` |`t` - tensor | `reorder` | The data layout for the source/destination tensor is not supported by the current implementation. |
@@ -63,13 +63,13 @@ The following catalogue lists verbose messages, explanations, and additional inf
 |**Miscellaneous** | | | |
 |`failed to create nested <pm> primitive` |`pm` - `dnnl::primitive` | all | Descriptor initialization for the nested primitive implementation was unsuccessful. |
 |`failed to create <pm> descriptor` |`pm` -`dnnl::primitive`, `dnnl::memory` | all | Descriptor initialization for the primitive or memory object was unsuccessful. |
-|`bad accumulation mode` | | all | Bad or invalid [accumulation mode](https://oneapi-src.github.io/oneDNN/enum_dnnl_accumulation_mode.html) specified for primitive attribute `dnnl::primitive_attr`. |
+|`bad accumulation mode` | | all | Bad or invalid [accumulation mode](https://uxlfoundation.github.io/oneDNN/enum_dnnl_accumulation_mode.html) specified for primitive attribute `dnnl::primitive_attr`. |
 |`unsupported <t> md flag` |`t` - tensor | all | Bad or unsupported flags specified for the memory descriptor `dnnl::memory::desc`. |
 |`problem is not mathematically consistent` | | all | *(self-explanatory)* |
 |`workspace mismatch between forward and backward primitive descriptors`| | all | *(self-explanatory)* |
-|`workspace initialization failed` | | all | [Workspace](https://oneapi-src.github.io/oneDNN/dev_guide_inference_and_training_aspects.html?highlight=workspace#workspace) descriptor initialization was unsuccessful during primitive creation. |
+|`workspace initialization failed` | | all | [Workspace](https://uxlfoundation.github.io/oneDNN/dev_guide_inference_and_training_aspects.html?highlight=workspace#workspace) descriptor initialization was unsuccessful during primitive creation. |
 |`invalid datatype for <t>` |`t` - tensor | all | The data type for the tensor/data processed by the primitive is invalid. **Example**: This is encountered when an undefined data type `data_type::undef` is specified for the accumulator. |
-|`failed to run kernel deterministically` | | all | failed to run application in the [deterministic mode](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_deterministic.html?highlight=deterministic). |
+|`failed to run kernel deterministically` | | all | failed to run application in the [deterministic mode](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_deterministic.html?highlight=deterministic). |
 |`skipping or dispatching to another implementation` | | all | *(self-explanatory)* |
 |`failed to create <k> kernel` |`k` - kernel name | all | *(self-explanatory)* |
 
@@ -86,7 +86,7 @@ The following catalogue lists verbose messages, explanations, and additional inf
 |`unsupported <d> platform (expected <d0> got <d1>)` |`d` - `dnnl::engine::kind`, `d0` - queried platform, `d1` - available platform | `sycl`, `opencl` | Unsupported device platform encountered during engine creation. |
 |`failed to create <d> engine with index <i>` |`d` - `dnnl::engine::kind`, `i` - device index |all | Engine creation was unsuccessful for the specified device index and kind. |
 |`unsupported <d> backend` |`d` - `dnnl::engine::kind` | `sycl` | *(self-explanatory)* |
-|`profiling capabilities are not supported` | | all | Experimental profiling ([ONEDNN_EXPERIMENTAL_PROFILING](https://oneapi-src.github.io/oneDNN/dev_guide_experimental.html?highlight=profiling#onednn-experimental-profiling)) is not enabled for the application. |
+|`profiling capabilities are not supported` | | all | Experimental profiling ([ONEDNN_EXPERIMENTAL_PROFILING](https://uxlfoundation.github.io/oneDNN/dev_guide_experimental.html?highlight=profiling#onednn-experimental-profiling)) is not enabled for the application. |
 
 
 ## Memory Creation and Related Operations
@@ -96,6 +96,6 @@ The following catalogue lists verbose messages, explanations, and additional inf
 |`bad arguments for memory descriptor` | Bad or unsupported values passed to the memory descriptor `dnnl::memory::desc` during memory object creation. |
 |`invalid memory index` | An out-of-range value encountered for memory handle during data mapping. |
 |`unsupported memory stride` | Memory descriptor initialization failed due to unsupported value for memory strides. |
-|`scratchpad memory limit exceeded` | [Scratchpad](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_scratchpad.html?highlight=scratchpad) space is exhausted during GEMM kernel initialization. |
+|`scratchpad memory limit exceeded` | [Scratchpad](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_scratchpad.html?highlight=scratchpad) space is exhausted during GEMM kernel initialization. |
 |`scratchpad initialization unsuccessful` | *(self-explanatory)* |
 

@@ -136,7 +136,7 @@ html_static_path = ['_static']
 #html_js_files = [('dnnl.js', {'defer': 'defer'})]
 
 html_theme_options = {
-"repository_url": "https://github.com/oneapi-src/oneDNN",
+"repository_url": "https://github.com/uxlfoundation/oneDNN",
 "repository_branch": "master",
 "use_repository_button": True,
 "use_download_button": False

@@ -154,7 +154,7 @@ void compute_q10n_params(const char *message, const std::vector<float> &v,
 
 #ifndef OMIT_WORKAROUND_FOR_SKX
 // Read more in CPU / Section 1 here:
-// https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html
+// https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html
 if (std::is_same<T, uint8_t>::value) max_int /= 2;
 #endif

@@ -1,7 +1,7 @@
 # Verbose log converter
 
 Verbose log converter is a tool that allows to convert [oneDNN
-verbose](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html)
+verbose](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html)
 output to various outputs (input files for benchdnn and execution
 statistics breakdown at this time). The tool can be extended to
 produce other types of output by adding generators.

@@ -3,7 +3,7 @@ GPU Convolution Kernel Generator
 
 # Generalized Convolution Algorithm
 
-See [oneDNN documentation](https://oneapi-src.github.io/oneDNN/dev_guide_convolution.html)
+See [oneDNN documentation](https://uxlfoundation.github.io/oneDNN/dev_guide_convolution.html)
 for the naming conventions that are used below.
 
 Convolution has more variations than GEMM but for simplicity we will rely on

@@ -2,7 +2,7 @@
 
 **benchdnn** is an extended and robust correctness verification and performance
 benchmarking tool for the primitives provided by
-[oneDNN](https://github.com/oneapi-src/oneDNN). The purpose of the benchmark is
+[oneDNN](https://github.com/uxlfoundation/oneDNN). The purpose of the benchmark is
 an extended and robust correctness verification of the primitives provided by
 oneDNN. **benchdnn** itself is a harness for different primitive-specific
 drivers.

@@ -92,7 +92,7 @@ int check_reorder_presence(
 /* Note for x64:
 Both data types of src and weight are s8, oneDNN addds 128 to one of the s8
 input to make it of type u8 instead, as explained in
-https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html or
+https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html or
 doc/advanced/int8_computations.md
 It is because `VPDPBUSD` instruction uses the combination of s8 and u8 as
 input.
@@ -104,7 +104,7 @@ int check_reorder_presence(
 /* Note for x64:
 Both data types of src and weight are s8, oneDNN addds 128 to one of the s8
 input to make it of type u8 instead, as explained in
-https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html or
+https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html or
 doc/advanced/int8_computations.md
 It is because `VPDPBUSD` instruction uses the combination of s8 and u8 as
 input.

@@ -19,7 +19,7 @@ where *binary-knobs* are:
 Refer to [tags](knobs_tag.md) for details.
 - `--alg={ADD [default], DIV, EQ, GE, GT, LE, LT, MAX, MIN, MUL, NE, SELECT, SUB}`
 -- algorithm for binary operations.
-Refer to [binary primitive](https://oneapi-src.github.io/oneDNN/dev_guide_binary.html)
+Refer to [binary primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_binary.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

@@ -23,7 +23,7 @@ where *bnorm-knobs* are:
 `H` is dnnl_use_shift;
 `R` is dnnl_fuse_norm_relu;
 `A` is dnnl_fuse_norm_add_relu;
-Refer to [batch normalization primitive](https://oneapi-src.github.io/oneDNN/dev_guide_batch_normalization.html)
+Refer to [batch normalization primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_batch_normalization.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

@@ -60,8 +60,8 @@ errors.
 
 The table below shows supported name configurations for this driver:
 
-For data type support, refer to [data types](https://oneapi-src.github.io/oneDNN/dev_guide_data_types.html)
-and [convolution primitive](https://oneapi-src.github.io/oneDNN/dev_guide_convolution.html#data-types)
+For data type support, refer to [data types](https://uxlfoundation.github.io/oneDNN/dev_guide_data_types.html)
+and [convolution primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_convolution.html#data-types)
 documentation.
 
 | src | wei | dst | acc | cfg |

@@ -14,7 +14,7 @@ where *eltwise-knobs* are:
 - `--tag={nchw [default], ...}` -- physical src and dst memory layout.
 Refer to [tags](knobs_tag.md) for details.
 - `--alg={RELU [default], ...}` -- dnnl_eltwise algorithm. Refer to
-[eltwise primitive](https://oneapi-src.github.io/oneDNN/dev_guide_eltwise.html)
+[eltwise primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_eltwise.html)
 for details.
 - `--alpha=FLOAT` -- float value corresponding to algorithm operation.
 Refer to ``Floating point arguments`` below.

@@ -19,7 +19,7 @@ where *gnorm-knobs* are:
 `G` is dnnl_use_global_stats;
 `C` is dnnl_use_scale;
 `H` is dnnl_use_shift;
-Refer to [group normalization primitive](https://oneapi-src.github.io/oneDNN/dev_guide_group_normalization.html)
+Refer to [group normalization primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_group_normalization.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

@@ -22,7 +22,7 @@ where *lnorm-knobs* are:
 `G` is dnnl_use_global_stats;
 `C` is dnnl_use_scale;
 `H` is dnnl_use_shift;
-Refer to [layer normalization primitive](https://oneapi-src.github.io/oneDNN/dev_guide_layer_normalization.html)
+Refer to [layer normalization primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_layer_normalization.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

@@ -16,7 +16,7 @@ where *lrn-knobs* are:
 - `--alg={ACROSS [default], WITHIN}` -- lrn algorithm.
 `ACROSS` is dnnl_lrn_across_channels;
 `WITHIN` is dnnl_lrn_within_channel;
-Refer to [LRN primitive](https://oneapi-src.github.io/oneDNN/dev_guide_lrn.html)
+Refer to [LRN primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_lrn.html)
 for details.
 - `--mb=INT` -- override minibatch size specified in the problem description.
 When set to `0`, use minibatch size as defined by the individual
@@ -36,7 +36,7 @@ size value and accepts integer X values. The default is `5`. `alphaF` stands for
 LRN alpha scale and accepts float F values. The default is `1.f / 8192`. `betaF`
 stands for LRN beta power and accepts float F values. The default is `0.75f`.
 `kF` stands for LRN k shift and accept float F values. The default is `1.f`.
-Refer to [LRN primitive](https://oneapi-src.github.io/oneDNN/dev_guide_lrn.html)
+Refer to [LRN primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_lrn.html)
 for details.
 
 ## Essence of Testing

@@ -23,7 +23,7 @@ where *pool-knobs* are:
 dnnl_pooling_avg_exclude_padding;
 `avg_p` or `pooling_avg_include_padding` is
 dnnl_pooling_avg_include_padding;
-Refer to [pooling primitive](https://oneapi-src.github.io/oneDNN/dev_guide_pooling.html)
+Refer to [pooling primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_pooling.html)
 for details.
 - `--mb=INT` -- override minibatch size specified in the problem description.
 When set to `0`, use minibatch size as defined by the individual

@@ -16,7 +16,7 @@ where *reduction-knobs* are:
 - `--dtag={any [default], ...}` -- physical dst memory layout.
 Refer to [tags](knobs_tag.md) for details.
 - `--alg={sum [default], ...}` -- algorithm for reduction operations.
-Refer to [reduction primitive](https://oneapi-src.github.io/oneDNN/dev_guide_reduction.html)
+Refer to [reduction primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_reduction.html)
 for details.
 - `--p=FLOAT` -- float value corresponding to algorithm operation.
 Refer to ``Floating point arguments`` below.

@@ -18,7 +18,7 @@ where *resampling-knobs* are:
 - `--alg={nearest [default], linear}` -- resampling algorithm.
 `nearest` or `resampling_nearest` is dnnl_resampling_nearest;
 `linear` or `resampling_nearest` is dnnl_resampling_linear;
-Refer to [resampling primitive](https://oneapi-src.github.io/oneDNN/dev_guide_resampling.html)
+Refer to [resampling primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_resampling.html)
 for details.
 - `--mb=INT` -- override minibatch size specified in the problem description.
 When set to `0`, use minibatch size as defined by the individual

@@ -47,7 +47,7 @@ where *rnn-knobs* are:
 - `--flags=[|O]` -- RNN flags, default `undef` (no flags); where multiple
 simultaneous flags are supported.
 `O` is dnnl_rnn_flags_diff_weights_overwrite;
-Refer to [RNN primitive](https://oneapi-src.github.io/oneDNN/dev_guide_rnn.html) for details.
+Refer to [RNN primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_rnn.html) for details.
 - Any attributes options. Refer to [attributes](knobs_attr.md) for details.
 
 and *rnn-desc* is a problem descriptor. The canonical form is:

@@ -20,7 +20,7 @@ where *softmax-knobs* are:
 - `--alg={SOFTMAX [default], LOGSOFTMAX}` -- softmax algorithm.
 `SOFTMAX` or `softmax_accurate` is `dnnl_softmax_accurate`;
 `LOGSOFTMAX` or `softmax_log` is `dnnl_softmax_log`;
-Refer to [softmax primitive](https://oneapi-src.github.io/oneDNN/dev_guide_softmax.html)
+Refer to [softmax primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_softmax.html)
 for details.
 - `--axis=INT` -- dimension on which operation will be performed.
 Default is `1`, corresponds to channels in logical memory layout.

@@ -20,7 +20,7 @@
 ## --attr-scratchpad
 `--attr-scratchpad` specifies the scratchpad mode to be used for benchmarking.
 `MODE` values can be `library` (the default) or `user`. Refer to
-[scratchpad primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_scratchpad.html)
+[scratchpad primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_scratchpad.html)
 for details.
 
 ## --attr-fpmath
@@ -29,7 +29,7 @@ for details.
 or `any`.
 `APPLY_TO_INT` values can be either `true` (the default) or `false`.
 Refer to
-[fpmath primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_fpmath_mode.html)
+[fpmath primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_fpmath_mode.html)
 for details.
 
 
@@ -37,7 +37,7 @@ for details.
 `--attr-acc-mode` specifies the accumulation mode to be used for benchmarking.
 `ACCMODE` values can be any of `strict` (the default), `relaxed`, `any`, `f32`,
 `s32` or `f16`. Refer to
-[accumulation mode primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_accumulation_mode.html)
+[accumulation mode primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_accumulation_mode.html)
 for details.
 
 ## --attr-rounding-mode
@@ -48,14 +48,14 @@ for details.
 - `diff_weights` corresponds to `DNNL_ARG_DIFF_WEIGHTS`.
 `MODE` specifies which mode to apply to the corresponding memory
 argument. Supported values are: `environment` (default) and `stochastic`. Refer
-to [rounding mode primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_rounding_mode.html)
+to [rounding mode primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_rounding_mode.html)
 for details.
 
 ## --attr-deterministic
 `--attr-deterministic` specifies the deterministic mode to be used for
 benchmarking. `BOOL` values can be `true`, which enables the deterministic
 mode and `false` (the default), which disables it. Refer to
-[deterministic primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_deterministic.html)
+[deterministic primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_deterministic.html)
 for details.
 
 ## --attr-dropout

@@ -137,7 +137,7 @@ int check_reorder_presence(
 /* Note for x64:
 Both data types of src and weight are s8, oneDNN addds 128 to one of the s8
 input to make it of type u8 instead, as explained in
-https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html or
+https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html or
 doc/advanced/int8_computations.md
 It is because `VPDPBUSD` instruction uses the combination of s8 and u8 as
 input.

@@ -243,7 +243,7 @@ void setup_cmp(compare::compare_t &cmp, const prb_t *prb, data_kind_t kind,
 #if DNNL_AARCH64 || defined(DNNL_SYCL_HIP) || defined(DNNL_SYCL_CUDA)
 // MIOpen and ACL softmax accumulate in F16, but oneDNN now expects accumulation in
 // F32, this partially reverts 6727bbe8. For more information on ACL softmax, see
-// https://github.com/oneapi-src/oneDNN/issues/1819
+// https://github.com/uxlfoundation/oneDNN/issues/1819
 // Similarly, for bf16 on AArch64, the relaxed threshold is necessary due to
 // minor accuracy drops observed compared to f32
 const float trh = trh_f32;

@@ -30,7 +30,7 @@ const size_t eol = std::string::npos;
 std::stringstream help_ss;
 
 static const std::string benchdnn_url
-= "https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn";
+= "https://github.com/uxlfoundation/oneDNN/blob/master/tests/benchdnn";
 static const std::string doc_url = benchdnn_url + "/doc/";
 
 namespace parser_utils {
@@ -549,8 +549,8 @@ bool parse_encoding(std::vector<sparse_options_t> &sparse_options,
 static const std::string help
 = "ENCODING[+SPARSITY]:ENCODING[+SPARSITY]:ENCODING[+SPARSITY]\n "
 "Specifies sparse encodings and sparsity.\n More details at "
-"https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-"doc/knobs_encoding.md\n";
+"https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+"benchdnn/doc/knobs_encoding.md\n";
 
 std::vector<sparse_options_t> def {sparse_options_t()};
 auto parse_sparse_options_func = [](const std::string &s) {
@@ -594,8 +594,8 @@ bool parse_attr_post_ops(std::vector<attr_t::post_ops_t> &po, const char *str,
 "is one of those:\n * SUM[:SCALE[:ZERO_POINT[:DATA_TYPE]]]\n "
 " * ELTWISE[:ALPHA[:BETA[:SCALE]]]\n * DW:KkSsPp[:DST_DT]\n "
 " * BINARY:DT[:MASK_INPUT[:TAG]]\n More details at "
-"https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-"doc/knobs_attr.md\n";
+"https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+"benchdnn/doc/knobs_attr.md\n";
 std::vector<attr_t::post_ops_t> def {attr_t::post_ops_t()};
 return parse_vector_option(po, def, parser_utils::parse_attr_post_ops_func,
 str, option_name, help);
@@ -606,8 +606,8 @@ bool parse_attr_scales(std::vector<attr_t::arg_scales_t> &scales,
 static const std::string help
 = "ARG:POLICY[:SCALE][+...]\n Specifies input scales "
 "attribute.\n More details at "
-"https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-"doc/knobs_attr.md\n";
+"https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+"benchdnn/doc/knobs_attr.md\n";
 return parse_subattr(scales, str, option_name, help);
 }
 
@@ -616,8 +616,8 @@ bool parse_attr_zero_points(std::vector<attr_t::zero_points_t> &zp,
 static const std::string help
 = "ARG:POLICY[:ZEROPOINT][+...]\n Specifies zero-points "
 "attribute.\n More details at "
-"https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-"doc/knobs_attr.md\n";
+"https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+"benchdnn/doc/knobs_attr.md\n";
 return parse_subattr(zp, str, option_name, help);
 }
 
@@ -1461,7 +1461,7 @@ bool parse_bench_settings(const char *str) {
 help_ss << "= Global options: =\n";
 help_ss << "===================\n";
 help_ss << "(More technical details available at "
-"https://github.com/oneapi-src/oneDNN/blob/master/tests/"
+"https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
 "benchdnn/doc/knobs_common.md)\n\n";
 start_msg = true;
 }
@@ -1486,7 +1486,7 @@ bool parse_bench_settings(const char *str) {
 help_ss << "= Driver options: =\n";
 help_ss << "===================\n";
 help_ss << "(More technical details available at "
-"https://github.com/oneapi-src/oneDNN/blob/master/tests/"
+"https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
 "benchdnn/doc/driver_"
 << driver_name << ".md)\n\n";
 end_msg = true;

@@ -793,7 +793,7 @@ HANDLE_EXCEPTIONS_FOR_TEST_F(attr_test_t, TestGetCppObjects) {
 // of using a dangling pointer from destroyed object via
 // `pd.get_primitive_attr().get_post_ops()` construction as attributes will
 // be destroyed once post-ops are saved on stack.
-// See https://github.com/oneapi-src/oneDNN/issues/1337 for details.
+// See https://github.com/uxlfoundation/oneDNN/issues/1337 for details.
 dnnl::primitive_attr attr;
 dnnl::post_ops ops;
 memory::desc po_src1_md({1, 1, 1, 1}, data_type::f32, tag::abcd);