all: updated Github links to uxlfoundation

Viktoriia
2025-02-26 13:52:40 +01:00
committed by Vadim Pirogov
parent 8b5e68ca5e
commit 56e219e035
43 changed files with 102 additions and 103 deletions
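A bulk link migration like the one in this commit can be sketched as a scripted find-and-replace. The snippet below is a hypothetical illustration (not how this commit was actually produced): it rewrites both GitHub and GitHub Pages links in a sample file; in a real checkout one would typically run the same `sed` expressions over `git grep -l 'oneapi-src' | xargs`.

```shell
# Illustrative only: create a sample file containing old-style links.
printf 'See https://github.com/oneapi-src/oneDNN/issues\nDocs: https://oneapi-src.github.io/oneDNN\n' > links.md

# Rewrite repository links and documentation (GitHub Pages) links in place.
# '#' is used as the sed delimiter to avoid escaping the '/' in URLs.
sed -i \
    -e 's#github.com/oneapi-src/oneDNN#github.com/uxlfoundation/oneDNN#g' \
    -e 's#oneapi-src\.github\.io/oneDNN#uxlfoundation.github.io/oneDNN#g' \
    links.md

cat links.md
```

Note that `sed -i` without a suffix argument is GNU sed syntax; BSD/macOS sed requires `sed -i ''`.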


@@ -12,7 +12,7 @@ factors are considered important to reproduce an issue.
# Version
Report oneDNN version and githash. Version information is printed to stdout
-in [verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html).
+in [verbose mode](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html).
# Environment
oneDNN includes hardware-specific optimizations and may behave
@@ -30,8 +30,8 @@ the following information to help reproduce the issue:
Please check that the issue is reproducible with the latest revision on
master. Include all the steps to reproduce the issue.
-You can use [verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html)
-and [benchdnn](https://github.com/oneapi-src/oneDNN/tree/master/tests/benchdnn)
+You can use [verbose mode](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html)
+and [benchdnn](https://github.com/uxlfoundation/oneDNN/tree/master/tests/benchdnn)
to validate correctness of all primitives the library supports. If this does not
work a short C/C++ program or modified unit tests demonstrating the issue
will greatly help with the investigation.
@@ -40,7 +40,7 @@ will greatly help with the investigation.
Document behavior you observe. For performance defects, like performance
regressions or a function being slow, provide a log including output generated
by your application in
-[verbose mode](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html).
+[verbose mode](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html).
# Expected behavior
Document behavior you expect.


@@ -27,7 +27,7 @@ OS=${OS:-"Linux"}
SKIPPED_GRAPH_TEST_FAILURES="test_graph_unit_dnnl_sdp_decomp_cpu"
SKIPPED_GRAPH_TEST_FAILURES+="|test_graph_unit_dnnl_mqa_decomp_cpu"
-# described in issue: https://github.com/oneapi-src/oneDNN/issues/2175
+# described in issue: https://github.com/uxlfoundation/oneDNN/issues/2175
SKIPPED_TEST_FAILURES="test_benchdnn_modeC_matmul_multidims_cpu"
# We currently have some OS and config specific test failures.


@@ -1,6 +1,6 @@
# Description
-Please include a summary of the change. Please also include relevant motivation and context. See [contribution guidelines](https://github.com/oneapi-src/oneDNN/blob/master/CONTRIBUTING.md) for more details. If the change fixes an issue not documented in the project's Github issue tracker, please document all steps necessary to reproduce it.
+Please include a summary of the change. Please also include relevant motivation and context. See [contribution guidelines](https://github.com/uxlfoundation/oneDNN/blob/master/CONTRIBUTING.md) for more details. If the change fixes an issue not documented in the project's Github issue tracker, please document all steps necessary to reproduce it.
Fixes # (github issue)
@@ -26,7 +26,7 @@ Fixes # (github issue)
- [ ] Have you included information on how to reproduce the issue (either in a github issue or in this PR)?
- [ ] Have you added relevant regression tests?
-## [RFC](https://github.com/oneapi-src/oneDNN/tree/rfcs) PR
-- [ ] Does RFC document follow the [template](https://github.com/oneapi-src/oneDNN/blob/rfcs/rfcs/template.md#onednn-design-document-rfc)?
+## [RFC](https://github.com/uxlfoundation/oneDNN/tree/rfcs) PR
+- [ ] Does RFC document follow the [template](https://github.com/uxlfoundation/oneDNN/blob/rfcs/rfcs/template.md#onednn-design-document-rfc)?
- [ ] Have you added a link to the rendered document?


@@ -8,8 +8,8 @@ message: >-
type: software
authors:
- name: oneDNN Contributors
-repository-code: 'https://github.com/oneapi-src/oneDNN'
-url: 'https://oneapi-src.github.io/oneDNN'
+repository-code: 'https://github.com/uxlfoundation/oneDNN'
+url: 'https://uxlfoundation.github.io/oneDNN'
abstract: >-
oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
performance library of basic building blocks for deep learning applications.


@@ -25,7 +25,7 @@ oneDNN uses [clang-tidy](https://clang.llvm.org/extra/clang-tidy/) in order to
diagnose and fix common style violations and easy-to-fix issues in the code
base. For instructions on how to use `clang-tidy`, please refer to the
[clang-tidy
-RFC](https://github.com/oneapi-src/oneDNN/blob/rfcs/rfcs/20200813-clang-tidy/README.md).
+RFC](https://github.com/uxlfoundation/oneDNN/blob/rfcs/rfcs/20200813-clang-tidy/README.md).
The list of clang-tidy checks the oneDNN code base follows is available in the
`.clang-tidy` file found in the oneDNN top-level directory.


@@ -7,8 +7,8 @@ requests! To get started, see the GitHub
You can:
- Submit your changes directly with a
-[pull request](https://github.com/oneapi-src/oneDNN/pulls)
-- Log a bug or feedback with an [issue](https://github.com/oneapi-src/oneDNN/issues)
+[pull request](https://github.com/uxlfoundation/oneDNN/pulls)
+- Log a bug or feedback with an [issue](https://github.com/uxlfoundation/oneDNN/issues)
**See also:** [Contributor Covenant](CODE_OF_CONDUCT.md) code of conduct.
@@ -54,7 +54,7 @@ For Comments (RFC) process, which consists of opening, discussing, and
accepting (promoting) RFC pull requests.
More information about the process can be found in the dedicated
-[`rfcs`](https://github.com/oneapi-src/oneDNN/tree/rfcs) branch.
+[`rfcs`](https://github.com/uxlfoundation/oneDNN/tree/rfcs) branch.
## Code contribution guidelines
@@ -146,7 +146,7 @@ Use the following command to run tests selected by a build configuration:
```
To modify the coverage, use the
-[`ONEDNN_TEST_SET`](https://oneapi-src.github.io/oneDNN/dev_guide_build_options.html#onednn-test-set)
+[`ONEDNN_TEST_SET`](https://uxlfoundation.github.io/oneDNN/dev_guide_build_options.html#onednn-test-set)
build option.
More details on how to run benchdnn can be found in


@@ -13,17 +13,17 @@ developers interested in improving application performance on CPUs and GPUs.
This package contains oneDNN v@PROJECT_VERSION@ (@DNNL_VERSION_HASH@).
You can find information about the latest version and release notes
-at the oneDNN Github (https://github.com/oneapi-src/oneDNN/releases).
+at the oneDNN Github (https://github.com/uxlfoundation/oneDNN/releases).
Documentation
-------------
* Developer guide
-(https://oneapi-src.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@)
+(https://uxlfoundation.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@)
explains the programming model, supported functionality, and implementation
details, and includes annotated examples.
* API reference
-(https://oneapi-src.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@/modules.html)
+(https://uxlfoundation.github.io/oneDNN/v@DNNL_VERSION_MAJOR@.@DNNL_VERSION_MINOR@/modules.html)
provides a comprehensive reference of the library API.
System Requirements
@@ -48,7 +48,7 @@ just-in-time (JIT) code generation to deploy the code optimized
for the latest supported ISA. Future ISAs may have initial support in the
library disabled by default and require the use of run-time controls to enable
them. See CPU dispatcher control
-(https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html)
+(https://uxlfoundation.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html)
for more details.
The library is optimized for the following GPUs:
@@ -65,7 +65,7 @@ Support
-------
Submit questions, feature requests, and bug reports on the
-GitHub issues page (https://github.com/oneapi-src/oneDNN/issues).
+GitHub issues page (https://github.com/uxlfoundation/oneDNN/issues).
License
-------
@@ -102,7 +102,7 @@ govern your use of the third party programs as set forth in the
# Security
-Security Policy (https://github.com/oneapi-src/oneDNN/blob/main/SECURITY.md)
+Security Policy (https://github.com/uxlfoundation/oneDNN/blob/main/SECURITY.md)
outlines our guidelines and procedures for ensuring the highest level
of Security and trust for our users who consume oneDNN.


@@ -4,7 +4,7 @@ oneAPI Deep Neural Network Library (oneDNN)
===========================================
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/8762/badge)](https://www.bestpractices.dev/projects/8762)
-[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/oneapi-src/oneDNN/badge)](https://securityscorecards.dev/viewer/?uri=github.com/oneapi-src/oneDNN)
+[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/uxlfoundation/oneDNN/badge)](https://securityscorecards.dev/viewer/?uri=github.com/uxlfoundation/oneDNN)
oneAPI Deep Neural Network Library (oneDNN) is an open-source cross-platform
performance library of basic building blocks for deep learning applications.
@@ -63,10 +63,9 @@ optimizations are available with [Intel® Extension for TensorFlow*].
optimizations, and improvements implemented in each version of
oneDNN.
-[oneDNN Developer Guide and Reference]: https://oneapi-src.github.io/oneDNN
-[API Reference]: https://oneapi-src.github.io/oneDNN/group_dnnl_api.html
-[Release Notes]: https://github.com/oneapi-src/oneDNN/releases
+[oneDNN Developer Guide and Reference]: https://uxlfoundation.github.io/oneDNN
+[API Reference]: https://uxlfoundation.github.io/oneDNN/group_dnnl_api.html
+[Release Notes]: https://github.com/uxlfoundation/oneDNN/releases
# System Requirements
@@ -121,8 +120,8 @@ The library is optimized for the following GPUs:
(formerly Meteor Lake, Arrow Lake and Lunar Lake)
* future Intel Arc graphics (code name Battlemage)
-[CPU dispatcher control]: https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html
-[Linking Guide]: https://oneapi-src.github.io/oneDNN/dev_guide_link.html
+[CPU dispatcher control]: https://uxlfoundation.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html
+[Linking Guide]: https://uxlfoundation.github.io/oneDNN/dev_guide_link.html
## Requirements for Building from Source
@@ -313,8 +312,8 @@ You can download and install the oneDNN library using one of the following optio
[conda-forge]: https://anaconda.org/conda-forge/onednn
[System Requirements]: #system-requirements
-[Build Options]: https://oneapi-src.github.io/oneDNN/dev_guide_build_options.html
-[Build from Source]: https://oneapi-src.github.io/oneDNN/dev_guide_build.html
+[Build Options]: https://uxlfoundation.github.io/oneDNN/dev_guide_build_options.html
+[Build from Source]: https://uxlfoundation.github.io/oneDNN/dev_guide_build.html
# Validated Configurations
@@ -366,7 +365,7 @@ Submit questions, feature requests, and bug reports on the
You can also contact oneDNN developers via [UXL Foundation Slack] using
[#onednn] channel.
-[Github issues]: https://github.com/oneapi-src/oneDNN/issues
+[Github issues]: https://github.com/uxlfoundation/oneDNN/issues
[UXL Foundation Slack]: https://slack-invite.uxlfoundation.org/
[#onednn]: https://uxlfoundation.slack.com/channels/onednn
@@ -401,12 +400,12 @@ This project is intended to be a safe, welcoming space for
collaboration, and contributors are expected to adhere to the
[Contributor Covenant](CODE_OF_CONDUCT.md) code of conduct.
-[RFC pull request]: https://github.com/oneapi-src/oneDNN/tree/rfcs
+[RFC pull request]: https://github.com/uxlfoundation/oneDNN/tree/rfcs
[code contribution guidelines]: CONTRIBUTING.md#code-contribution-guidelines
[coding standards]: CONTRIBUTING.md#coding-standards
-[pull request]: https://github.com/oneapi-src/oneDNN/pulls
-[Milestones]: https://github.com/oneapi-src/oneDNN/milestones
-[help wanted]: https://github.com/oneapi-src/oneDNN/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
+[pull request]: https://github.com/uxlfoundation/oneDNN/pulls
+[Milestones]: https://github.com/uxlfoundation/oneDNN/milestones
+[help wanted]: https://github.com/uxlfoundation/oneDNN/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
# License


@@ -64,6 +64,6 @@ If you have any suggestions on how this Policy could be improved, please submit
an issue or a pull request to this repository. Please **do not** report
potential vulnerabilities or security flaws via a pull request.
-[1]: https://github.com/oneapi-src/oneDNN/releases/latest
-[2]: https://github.com/oneapi-src/oneDNN/security/advisories/new
-[3]: https://github.com/oneapi-src/oneDNN/security/advisories
+[1]: https://github.com/uxlfoundation/oneDNN/releases/latest
+[2]: https://github.com/uxlfoundation/oneDNN/security/advisories/new
+[3]: https://github.com/uxlfoundation/oneDNN/security/advisories


@@ -22,7 +22,7 @@ Both kinds of experimental features can be enabled simultaneously.
| Environment variable | Description |
|:-----------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| ONEDNN_EXPERIMENTAL_BNORM_STATS_ONE_PASS | Calculate mean and variance in batch normalization(BN) in single pass ([RFC](https://github.com/oneapi-src/oneDNN/tree/rfcs/rfcs/20210519-single-pass-bnorm)). |
+| ONEDNN_EXPERIMENTAL_BNORM_STATS_ONE_PASS | Calculate mean and variance in batch normalization(BN) in single pass ([RFC](https://github.com/uxlfoundation/oneDNN/tree/rfcs/rfcs/20210519-single-pass-bnorm)). |
| ONEDNN_EXPERIMENTAL_GPU_CONV_V2 | Enable shapeless GPU convolution implementation (the feature is under development). |
| Build time option | Description |


@@ -115,9 +115,9 @@ in this example.
One can create memory with **NCHW** data layout using
#dnnl_nchw of the enum type #dnnl_format_tag_t defined in
-[dnnl_types.h](https://github.com/oneapi-src/oneDNN/blob/master/include/oneapi/dnnl/dnnl_types.h)
+[dnnl_types.h](https://github.com/uxlfoundation/oneDNN/blob/master/include/oneapi/dnnl/dnnl_types.h)
for the C API, and dnnl::memory::format_tag::nchw defined in
-[dnnl.hpp](https://github.com/oneapi-src/oneDNN/blob/master/include/oneapi/dnnl/dnnl.hpp)
+[dnnl.hpp](https://github.com/uxlfoundation/oneDNN/blob/master/include/oneapi/dnnl/dnnl.hpp)
for the C++ API.

doc/build/build.md vendored

@@ -3,16 +3,16 @@ Build from Source {#dev_guide_build}
## Download the Source Code
-Download [oneDNN source code](https://github.com/oneapi-src/oneDNN/archive/master.zip)
-or clone [the repository](https://github.com/oneapi-src/oneDNN.git).
+Download [oneDNN source code](https://github.com/uxlfoundation/oneDNN/archive/master.zip)
+or clone [the repository](https://github.com/uxlfoundation/oneDNN.git).
~~~sh
-git clone https://github.com/oneapi-src/oneDNN.git
+git clone https://github.com/uxlfoundation/oneDNN.git
~~~
## Build the Library
-Ensure that all [software dependencies](https://github.com/oneapi-src/oneDNN#requirements-for-building-from-source)
+Ensure that all [software dependencies](https://github.com/uxlfoundation/oneDNN#requirements-for-building-from-source)
are in place and have at least the minimal supported version.
The oneDNN build system is based on CMake. Use


@@ -303,7 +303,7 @@ $ cmake -DONEDNN_BLAS_VENDOR=ARMPL ..
Additional options available for development/debug purposes. These options are
subject to change without notice, see
-[`cmake/options.cmake`](https://github.com/oneapi-src/oneDNN/blob/master/cmake/options.cmake)
+[`cmake/options.cmake`](https://github.com/uxlfoundation/oneDNN/blob/master/cmake/options.cmake)
for details.
## GPU Options
## GPU Options ## GPU Options


@@ -70,7 +70,7 @@ optional.
[GELU](@ref dev_guide_op_gelu), [Sigmoid](@ref dev_guide_op_sigmoid), and so on.
For Swish activation, the node can be constructed with the [Sigmoid](@ref dev_guide_op_sigmoid)
and [Multiply](@ref dev_guide_op_multiply) as below. You can also refer the
-[Gated-MLP example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
+[Gated-MLP example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
for Swish definition.
![Swish Activation](images/gated-mlp-swish.png)
@@ -104,13 +104,13 @@ platforms follow the general description in @ref dev_guide_data_types.
## Examples
oneDNN provides a [Gated-MLP
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gated_mlp.cpp)
demonstrating how to construct a typical floating-point Gated-MLP pattern with
oneDNN Graph API on CPU and GPU with different runtimes.
For applications where the weights of FC up and FC gate are combined as a single
tensor, oneDNN also provides an
-[example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gated_mlp_wei_combined.cpp)
+[example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gated_mlp_wei_combined.cpp)
demonstrating how to create the weight tensors for the pattern with the offsets
and strides from the combined weight tensor.
@@ -120,4 +120,4 @@ and strides from the combined weight tensor.
2. GLU Variants Improve Transformer, https://arxiv.org/abs/2002.05202
3. LLaMA: Open and Efficient Foundation Language Models, https://arxiv.org/abs/2302.13971
4. Qwen Technical Report, https://arxiv.org/abs/2309.16609
-5. oneDNN Graph API documentation, https://oneapi-src.github.io/oneDNN/graph_extension.html
+5. oneDNN Graph API documentation, https://uxlfoundation.github.io/oneDNN/graph_extension.html


@@ -93,7 +93,7 @@ platforms follow the general description in @ref dev_guide_data_types.
## Example
oneDNN provides a [GQA
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/gqa.cpp)
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/gqa.cpp)
demonstrating how to construct a floating-point GQA pattern with oneDNN Graph
API on CPU and GPU with different runtimes.


@@ -135,12 +135,12 @@ platforms follow the general description in @ref dev_guide_data_types.
## Example
oneDNN provides an [SDPA
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/sdpa.cpp)
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/sdpa.cpp)
demonstrating how to construct a typical floating-point SDPA pattern with oneDNN
Graph API on CPU and GPU with different runtimes.
oneDNN also provides a [MQA (Multi-Query Attention)
-example](https://github.com/oneapi-src/oneDNN/tree/main/examples/graph/mqa.cpp) [3]
+example](https://github.com/uxlfoundation/oneDNN/tree/main/examples/graph/mqa.cpp) [3]
demonstrating how to construct a floating-point MQA pattern with the same
pattern structure as in the SDPA example but different head number in Key and
Value tensors. In MQA, the head number of Key and Value is always one.
@@ -149,6 +149,6 @@ Value tensors. In MQA, the head number of Key and Value is always one.
[1] Attention is all you need, https://arxiv.org/abs/1706.03762v7
-[2] oneDNN Graph API documentation, https://oneapi-src.github.io/oneDNN/graph_extension.html
+[2] oneDNN Graph API documentation, https://uxlfoundation.github.io/oneDNN/graph_extension.html
[3] Fast Transformer Decoding: One Write-Head is All You Need, https://arxiv.org/abs/1911.02150


@@ -4,4 +4,4 @@ Benchmarking Performance {#dev_guide_benchdnn}
oneDNN has a built-in benchmarking program called benchdnn.
For a complete description of the available options and working examples, see
-the [benchdnn readme](https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/README.md#benchdnn).
+the [benchdnn readme](https://github.com/uxlfoundation/oneDNN/blob/master/tests/benchdnn/README.md#benchdnn).


@@ -151,7 +151,7 @@ Above, we can see that the highest performance implementations were
not dispatched either because they required a higher ISA, or because
they did not support that datatype configuration.
A complete list of verbose messages encountered in the dispatch mode
-can be found [here](https://oneapi-src.github.io/oneDNN/dev_guide_verbose_table.html) along with their explanation.
+can be found [here](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose_table.html) along with their explanation.
### Enable ONEDNN_VERBOSE with timestamps
@@ -240,7 +240,7 @@ primitive execution.
@note
When oneDNN verbose mode is enabled for builds with
-[Compute Library for the Arm architecture](https://oneapi-src.github.io/oneDNN/dev_guide_build.html#gcc-with-arm-compute-library-acl-on-aarch64-host),
+[Compute Library for the Arm architecture](https://uxlfoundation.github.io/oneDNN/dev_guide_build.html#gcc-with-arm-compute-library-acl-on-aarch64-host),
any failures in the validation of Compute Library primitives will be detailed
in the verbose output.


@@ -54,7 +54,7 @@ The following catalogue lists verbose messages, explanations, and additional inf
|`alpha and beta parameters are not properly set` | | `eltwise` | Alpha and beta parameters are not properly set for the elementwise algorithm. |
|`large shapes fall back` | | `gemm` | Heuristic to skip current implementation for large tensor shapes for better performance. |
|`only trivial strides are supported` | | `gemm`, `rnn` | Current implementation for the primitive does not process non-trivial stride values. |
-|`unsupported fpmath mode` | | `matmul` | [Floating-point math mode](https://oneapi-src.github.io/oneDNN/group_dnnl_api_fpmath_mode.html?highlight=math%20mode) is not supported by the current primitive implementation. |
+|`unsupported fpmath mode` | | `matmul` | [Floating-point math mode](https://uxlfoundation.github.io/oneDNN/group_dnnl_api_fpmath_mode.html?highlight=math%20mode) is not supported by the current primitive implementation. |
|`small shapes fall back` | | `matmul` | Heuristic to skip current implementation for small tensor shapes for better performance. |
|`incompatible gemm format` | | `matmul`, `ip` | Specified GeMM format is incompatible with the current primitive implementation. |
|`unsupported <t> tensor layout` |`t` - tensor | `reorder` | The data layout for the source/destination tensor is not supported by the current implementation. |
@@ -63,13 +63,13 @@ The following catalogue lists verbose messages, explanations, and additional inf
|**Miscellaneous** | | | |
|`failed to create nested <pm> primitive` |`pm` - `dnnl::primitive` | all | Descriptor initialization for the nested primitive implementation was unsuccessful. |
|`failed to create <pm> descriptor` |`pm` -`dnnl::primitive`, `dnnl::memory` | all | Descriptor initialization for the primitive or memory object was unsuccessful. |
-|`bad accumulation mode` | | all | Bad or invalid [accumulation mode](https://oneapi-src.github.io/oneDNN/enum_dnnl_accumulation_mode.html) specified for primitive attribute `dnnl::primitive_attr`. |
+|`bad accumulation mode` | | all | Bad or invalid [accumulation mode](https://uxlfoundation.github.io/oneDNN/enum_dnnl_accumulation_mode.html) specified for primitive attribute `dnnl::primitive_attr`. |
|`unsupported <t> md flag` |`t` - tensor | all | Bad or unsupported flags specified for the memory descriptor `dnnl::memory::desc`. |
|`problem is not mathematically consistent` | | all | *(self-explanatory)* |
|`workspace mismatch between forward and backward primitive descriptors`| | all | *(self-explanatory)* |
-|`workspace initialization failed` | | all | [Workspace](https://oneapi-src.github.io/oneDNN/dev_guide_inference_and_training_aspects.html?highlight=workspace#workspace) descriptor initialization was unsuccessful during primitive creation. |
+|`workspace initialization failed` | | all | [Workspace](https://uxlfoundation.github.io/oneDNN/dev_guide_inference_and_training_aspects.html?highlight=workspace#workspace) descriptor initialization was unsuccessful during primitive creation. |
|`invalid datatype for <t>` |`t` - tensor | all | The data type for the tensor/data processed by the primitive is invalid. **Example**: This is encountered when an undefined data type `data_type::undef` is specified for the accumulator. |
-|`failed to run kernel deterministically` | | all | failed to run application in the [deterministic mode](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_deterministic.html?highlight=deterministic). |
+|`failed to run kernel deterministically` | | all | failed to run application in the [deterministic mode](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_deterministic.html?highlight=deterministic). |
|`skipping or dispatching to another implementation` | | all | *(self-explanatory)* |
|`failed to create <k> kernel` |`k` - kernel name | all | *(self-explanatory)* |
@@ -86,7 +86,7 @@ The following catalogue lists verbose messages, explanations, and additional inf
|`unsupported <d> platform (expected <d0> got <d1>)` |`d` - `dnnl::engine::kind`, `d0` - queried platform, `d1` - available platform | `sycl`, `opencl` | Unsupported device platform encountered during engine creation. |
|`failed to create <d> engine with index <i>` |`d` - `dnnl::engine::kind`, `i` - device index |all | Engine creation was unsuccessful for the specified device index and kind. | |`failed to create <d> engine with index <i>` |`d` - `dnnl::engine::kind`, `i` - device index |all | Engine creation was unsuccessful for the specified device index and kind. |
|`unsupported <d> backend` |`d` - `dnnl::engine::kind` | `sycl` | *(self-explanatory)* | |`unsupported <d> backend` |`d` - `dnnl::engine::kind` | `sycl` | *(self-explanatory)* |
|`profiling capabilities are not supported` | | all | Experimental profiling ([ONEDNN_EXPERIMENTAL_PROFILING](https://oneapi-src.github.io/oneDNN/dev_guide_experimental.html?highlight=profiling#onednn-experimental-profiling)) is not enabled for the application. | |`profiling capabilities are not supported` | | all | Experimental profiling ([ONEDNN_EXPERIMENTAL_PROFILING](https://uxlfoundation.github.io/oneDNN/dev_guide_experimental.html?highlight=profiling#onednn-experimental-profiling)) is not enabled for the application. |
## Memory Creation and Related Operations ## Memory Creation and Related Operations
@ -96,6 +96,6 @@ The following catalogue lists verbose messages, explanations, and additional inf
|`bad arguments for memory descriptor` | Bad or unsupported values passed to the memory descriptor `dnnl::memory::desc` during memory object creation. | |`bad arguments for memory descriptor` | Bad or unsupported values passed to the memory descriptor `dnnl::memory::desc` during memory object creation. |
|`invalid memory index` | An out-of-range value encountered for memory handle during data mapping. | |`invalid memory index` | An out-of-range value encountered for memory handle during data mapping. |
|`unsupported memory stride` | Memory descriptor initialization failed due to unsupported value for memory strides. | |`unsupported memory stride` | Memory descriptor initialization failed due to unsupported value for memory strides. |
|`scratchpad memory limit exceeded` | [Scratchpad](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_scratchpad.html?highlight=scratchpad) space is exhausted during GEMM kernel initialization. | |`scratchpad memory limit exceeded` | [Scratchpad](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_scratchpad.html?highlight=scratchpad) space is exhausted during GEMM kernel initialization. |
|`scratchpad initialization unsuccessful` | *(self-explanatory)* | |`scratchpad initialization unsuccessful` | *(self-explanatory)* |

View File

@@ -136,7 +136,7 @@ html_static_path = ['_static']
 #html_js_files = [('dnnl.js', {'defer': 'defer'})]
 html_theme_options = {
-    "repository_url": "https://github.com/oneapi-src/oneDNN",
+    "repository_url": "https://github.com/uxlfoundation/oneDNN",
     "repository_branch": "master",
     "use_repository_button": True,
     "use_download_button": False

View File

@@ -154,7 +154,7 @@ void compute_q10n_params(const char *message, const std::vector<float> &v,
 #ifndef OMIT_WORKAROUND_FOR_SKX
 // Read more in CPU / Section 1 here:
-// https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html
+// https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html
 if (std::is_same<T, uint8_t>::value) max_int /= 2;
 #endif

View File

@@ -1,7 +1,7 @@
 # Verbose log converter
 Verbose log converter is a tool that allows to convert [oneDNN
-verbose](https://oneapi-src.github.io/oneDNN/dev_guide_verbose.html)
+verbose](https://uxlfoundation.github.io/oneDNN/dev_guide_verbose.html)
 output to various outputs (input files for benchdnn and execution
 statistics breakdown at this time). The tool can be extended to
 produce other types of output by adding generators.

View File

@@ -3,7 +3,7 @@ GPU Convolution Kernel Generator
 # Generalized Convolution Algorithm
-See [oneDNN documentation](https://oneapi-src.github.io/oneDNN/dev_guide_convolution.html)
+See [oneDNN documentation](https://uxlfoundation.github.io/oneDNN/dev_guide_convolution.html)
 for the naming conventions that are used below.
 Convolution has more variations than GEMM but for simplicity we will rely on

View File

@@ -2,7 +2,7 @@
 **benchdnn** is an extended and robust correctness verification and performance
 benchmarking tool for the primitives provided by
-[oneDNN](https://github.com/oneapi-src/oneDNN). The purpose of the benchmark is
+[oneDNN](https://github.com/uxlfoundation/oneDNN). The purpose of the benchmark is
 an extended and robust correctness verification of the primitives provided by
 oneDNN. **benchdnn** itself is a harness for different primitive-specific
 drivers.

View File

@@ -92,7 +92,7 @@ int check_reorder_presence(
 /* Note for x64:
 Both data types of src and weight are s8, oneDNN addds 128 to one of the s8
 input to make it of type u8 instead, as explained in
-https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html or
+https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html or
 doc/advanced/int8_computations.md
 It is because `VPDPBUSD` instruction uses the combination of s8 and u8 as
 input.

View File

@@ -104,7 +104,7 @@ int check_reorder_presence(
 /* Note for x64:
 Both data types of src and weight are s8, oneDNN addds 128 to one of the s8
 input to make it of type u8 instead, as explained in
-https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html or
+https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html or
 doc/advanced/int8_computations.md
 It is because `VPDPBUSD` instruction uses the combination of s8 and u8 as
 input.

View File

@@ -19,7 +19,7 @@ where *binary-knobs* are:
 Refer to [tags](knobs_tag.md) for details.
 - `--alg={ADD [default], DIV, EQ, GE, GT, LE, LT, MAX, MIN, MUL, NE, SELECT, SUB}`
 -- algorithm for binary operations.
-Refer to [binary primitive](https://oneapi-src.github.io/oneDNN/dev_guide_binary.html)
+Refer to [binary primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_binary.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

View File

@@ -23,7 +23,7 @@ where *bnorm-knobs* are:
 `H` is dnnl_use_shift;
 `R` is dnnl_fuse_norm_relu;
 `A` is dnnl_fuse_norm_add_relu;
-Refer to [batch normalization primitive](https://oneapi-src.github.io/oneDNN/dev_guide_batch_normalization.html)
+Refer to [batch normalization primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_batch_normalization.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

View File

@@ -60,8 +60,8 @@ errors.
 The table below shows supported name configurations for this driver:
-For data type support, refer to [data types](https://oneapi-src.github.io/oneDNN/dev_guide_data_types.html)
-and [convolution primitive](https://oneapi-src.github.io/oneDNN/dev_guide_convolution.html#data-types)
+For data type support, refer to [data types](https://uxlfoundation.github.io/oneDNN/dev_guide_data_types.html)
+and [convolution primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_convolution.html#data-types)
 documentation.
 | src | wei | dst | acc | cfg |

View File

@@ -14,7 +14,7 @@ where *eltwise-knobs* are:
 - `--tag={nchw [default], ...}` -- physical src and dst memory layout.
 Refer to [tags](knobs_tag.md) for details.
 - `--alg={RELU [default], ...}` -- dnnl_eltwise algorithm. Refer to
-[eltwise primitive](https://oneapi-src.github.io/oneDNN/dev_guide_eltwise.html)
+[eltwise primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_eltwise.html)
 for details.
 - `--alpha=FLOAT` -- float value corresponding to algorithm operation.
 Refer to ``Floating point arguments`` below.

View File

@@ -19,7 +19,7 @@ where *gnorm-knobs* are:
 `G` is dnnl_use_global_stats;
 `C` is dnnl_use_scale;
 `H` is dnnl_use_shift;
-Refer to [group normalization primitive](https://oneapi-src.github.io/oneDNN/dev_guide_group_normalization.html)
+Refer to [group normalization primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_group_normalization.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

View File

@@ -22,7 +22,7 @@ where *lnorm-knobs* are:
 `G` is dnnl_use_global_stats;
 `C` is dnnl_use_scale;
 `H` is dnnl_use_shift;
-Refer to [layer normalization primitive](https://oneapi-src.github.io/oneDNN/dev_guide_layer_normalization.html)
+Refer to [layer normalization primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_layer_normalization.html)
 for details.
 - `--inplace=BOOL` -- memory mode for the primitive. If `true`, it uses input
 memory as output, otherwise, input and output are separate.

View File

@@ -16,7 +16,7 @@ where *lrn-knobs* are:
 - `--alg={ACROSS [default], WITHIN}` -- lrn algorithm.
 `ACROSS` is dnnl_lrn_across_channels;
 `WITHIN` is dnnl_lrn_within_channel;
-Refer to [LRN primitive](https://oneapi-src.github.io/oneDNN/dev_guide_lrn.html)
+Refer to [LRN primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_lrn.html)
 for details.
 - `--mb=INT` -- override minibatch size specified in the problem description.
 When set to `0`, use minibatch size as defined by the individual
@@ -36,7 +36,7 @@ size value and accepts integer X values. The default is `5`. `alphaF` stands for
 LRN alpha scale and accepts float F values. The default is `1.f / 8192`. `betaF`
 stands for LRN beta power and accepts float F values. The default is `0.75f`.
 `kF` stands for LRN k shift and accept float F values. The default is `1.f`.
-Refer to [LRN primitive](https://oneapi-src.github.io/oneDNN/dev_guide_lrn.html)
+Refer to [LRN primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_lrn.html)
 for details.
 ## Essence of Testing

View File

@@ -23,7 +23,7 @@ where *pool-knobs* are:
 dnnl_pooling_avg_exclude_padding;
 `avg_p` or `pooling_avg_include_padding` is
 dnnl_pooling_avg_include_padding;
-Refer to [pooling primitive](https://oneapi-src.github.io/oneDNN/dev_guide_pooling.html)
+Refer to [pooling primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_pooling.html)
 for details.
 - `--mb=INT` -- override minibatch size specified in the problem description.
 When set to `0`, use minibatch size as defined by the individual

View File

@@ -16,7 +16,7 @@ where *reduction-knobs* are:
 - `--dtag={any [default], ...}` -- physical dst memory layout.
 Refer to [tags](knobs_tag.md) for details.
 - `--alg={sum [default], ...}` -- algorithm for reduction operations.
-Refer to [reduction primitive](https://oneapi-src.github.io/oneDNN/dev_guide_reduction.html)
+Refer to [reduction primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_reduction.html)
 for details.
 - `--p=FLOAT` -- float value corresponding to algorithm operation.
 Refer to ``Floating point arguments`` below.

View File

@@ -18,7 +18,7 @@ where *resampling-knobs* are:
 - `--alg={nearest [default], linear}` -- resampling algorithm.
 `nearest` or `resampling_nearest` is dnnl_resampling_nearest;
 `linear` or `resampling_nearest` is dnnl_resampling_linear;
-Refer to [resampling primitive](https://oneapi-src.github.io/oneDNN/dev_guide_resampling.html)
+Refer to [resampling primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_resampling.html)
 for details.
 - `--mb=INT` -- override minibatch size specified in the problem description.
 When set to `0`, use minibatch size as defined by the individual

View File

@@ -47,7 +47,7 @@ where *rnn-knobs* are:
 - `--flags=[|O]` -- RNN flags, default `undef` (no flags); where multiple
 simultaneous flags are supported.
 `O` is dnnl_rnn_flags_diff_weights_overwrite;
-Refer to [RNN primitive](https://oneapi-src.github.io/oneDNN/dev_guide_rnn.html) for details.
+Refer to [RNN primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_rnn.html) for details.
 - Any attributes options. Refer to [attributes](knobs_attr.md) for details.
 and *rnn-desc* is a problem descriptor. The canonical form is:

View File

@@ -20,7 +20,7 @@ where *softmax-knobs* are:
 - `--alg={SOFTMAX [default], LOGSOFTMAX}` -- softmax algorithm.
 `SOFTMAX` or `softmax_accurate` is `dnnl_softmax_accurate`;
 `LOGSOFTMAX` or `softmax_log` is `dnnl_softmax_log`;
-Refer to [softmax primitive](https://oneapi-src.github.io/oneDNN/dev_guide_softmax.html)
+Refer to [softmax primitive](https://uxlfoundation.github.io/oneDNN/dev_guide_softmax.html)
 for details.
 - `--axis=INT` -- dimension on which operation will be performed.
 Default is `1`, corresponds to channels in logical memory layout.

View File

@@ -20,7 +20,7 @@
 ## --attr-scratchpad
 `--attr-scratchpad` specifies the scratchpad mode to be used for benchmarking.
 `MODE` values can be `library` (the default) or `user`. Refer to
-[scratchpad primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_scratchpad.html)
+[scratchpad primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_scratchpad.html)
 for details.
 ## --attr-fpmath
@@ -29,7 +29,7 @@ for details.
 or `any`.
 `APPLY_TO_INT` values can be either `true` (the default) or `false`.
 Refer to
-[fpmath primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_fpmath_mode.html)
+[fpmath primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_fpmath_mode.html)
 for details.
@@ -37,7 +37,7 @@ for details.
 `--attr-acc-mode` specifies the accumulation mode to be used for benchmarking.
 `ACCMODE` values can be any of `strict` (the default), `relaxed`, `any`, `f32`,
 `s32` or `f16`. Refer to
-[accumulation mode primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_accumulation_mode.html)
+[accumulation mode primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_accumulation_mode.html)
 for details.
 ## --attr-rounding-mode
@@ -48,14 +48,14 @@ for details.
 - `diff_weights` corresponds to `DNNL_ARG_DIFF_WEIGHTS`.
 `MODE` specifies which mode to apply to the corresponding memory
 argument. Supported values are: `environment` (default) and `stochastic`. Refer
-to [rounding mode primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_rounding_mode.html)
+to [rounding mode primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_rounding_mode.html)
 for details.
 ## --attr-deterministic
 `--attr-deterministic` specifies the deterministic mode to be used for
 benchmarking. `BOOL` values can be `true`, which enables the deterministic
 mode and `false` (the default), which disables it. Refer to
-[deterministic primitive attribute](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_deterministic.html)
+[deterministic primitive attribute](https://uxlfoundation.github.io/oneDNN/dev_guide_attributes_deterministic.html)
 for details.
 ## --attr-dropout

View File

@@ -137,7 +137,7 @@ int check_reorder_presence(
 /* Note for x64:
 Both data types of src and weight are s8, oneDNN addds 128 to one of the s8
 input to make it of type u8 instead, as explained in
-https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html or
+https://uxlfoundation.github.io/oneDNN/dev_guide_int8_computations.html or
 doc/advanced/int8_computations.md
 It is because `VPDPBUSD` instruction uses the combination of s8 and u8 as
 input.

View File

@@ -243,7 +243,7 @@ void setup_cmp(compare::compare_t &cmp, const prb_t *prb, data_kind_t kind,
 #if DNNL_AARCH64 || defined(DNNL_SYCL_HIP) || defined(DNNL_SYCL_CUDA)
 // MIOpen and ACL softmax accumulate in F16, but oneDNN now expects accumulation in
 // F32, this partially reverts 6727bbe8. For more information on ACL softmax, see
-// https://github.com/oneapi-src/oneDNN/issues/1819
+// https://github.com/uxlfoundation/oneDNN/issues/1819
 // Similarly, for bf16 on AArch64, the relaxed threshold is necessary due to
 // minor accuracy drops observed compared to f32
 const float trh = trh_f32;

View File

@@ -30,7 +30,7 @@ const size_t eol = std::string::npos;
 std::stringstream help_ss;
 static const std::string benchdnn_url
-        = "https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn";
+        = "https://github.com/uxlfoundation/oneDNN/blob/master/tests/benchdnn";
 static const std::string doc_url = benchdnn_url + "/doc/";
 namespace parser_utils {
@@ -549,8 +549,8 @@ bool parse_encoding(std::vector<sparse_options_t> &sparse_options,
     static const std::string help
             = "ENCODING[+SPARSITY]:ENCODING[+SPARSITY]:ENCODING[+SPARSITY]\n "
               "Specifies sparse encodings and sparsity.\n More details at "
-              "https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-              "doc/knobs_encoding.md\n";
+              "https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+              "benchdnn/doc/knobs_encoding.md\n";
     std::vector<sparse_options_t> def {sparse_options_t()};
     auto parse_sparse_options_func = [](const std::string &s) {
@@ -594,8 +594,8 @@ bool parse_attr_post_ops(std::vector<attr_t::post_ops_t> &po, const char *str,
              "is one of those:\n * SUM[:SCALE[:ZERO_POINT[:DATA_TYPE]]]\n "
              " * ELTWISE[:ALPHA[:BETA[:SCALE]]]\n * DW:KkSsPp[:DST_DT]\n "
              " * BINARY:DT[:MASK_INPUT[:TAG]]\n More details at "
-             "https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-             "doc/knobs_attr.md\n";
+             "https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+             "benchdnn/doc/knobs_attr.md\n";
     std::vector<attr_t::post_ops_t> def {attr_t::post_ops_t()};
     return parse_vector_option(po, def, parser_utils::parse_attr_post_ops_func,
             str, option_name, help);
@@ -606,8 +606,8 @@ bool parse_attr_scales(std::vector<attr_t::arg_scales_t> &scales,
     static const std::string help
             = "ARG:POLICY[:SCALE][+...]\n Specifies input scales "
              "attribute.\n More details at "
-             "https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-             "doc/knobs_attr.md\n";
+             "https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+             "benchdnn/doc/knobs_attr.md\n";
     return parse_subattr(scales, str, option_name, help);
 }
@@ -616,8 +616,8 @@ bool parse_attr_zero_points(std::vector<attr_t::zero_points_t> &zp,
     static const std::string help
             = "ARG:POLICY[:ZEROPOINT][+...]\n Specifies zero-points "
              "attribute.\n More details at "
-             "https://github.com/oneapi-src/oneDNN/blob/master/tests/benchdnn/"
-             "doc/knobs_attr.md\n";
+             "https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
+             "benchdnn/doc/knobs_attr.md\n";
     return parse_subattr(zp, str, option_name, help);
 }
@@ -1461,7 +1461,7 @@ bool parse_bench_settings(const char *str) {
         help_ss << "= Global options: =\n";
         help_ss << "===================\n";
         help_ss << "(More technical details available at "
-                   "https://github.com/oneapi-src/oneDNN/blob/master/tests/"
+                   "https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
                    "benchdnn/doc/knobs_common.md)\n\n";
         start_msg = true;
     }
@@ -1486,7 +1486,7 @@ bool parse_bench_settings(const char *str) {
         help_ss << "= Driver options: =\n";
         help_ss << "===================\n";
         help_ss << "(More technical details available at "
-                   "https://github.com/oneapi-src/oneDNN/blob/master/tests/"
+                   "https://github.com/uxlfoundation/oneDNN/blob/master/tests/"
                   "benchdnn/doc/driver_"
                << driver_name << ".md)\n\n";
         end_msg = true;

View File

@@ -793,7 +793,7 @@ HANDLE_EXCEPTIONS_FOR_TEST_F(attr_test_t, TestGetCppObjects) {
 // of using a dangling pointer from destroyed object via
 // `pd.get_primitive_attr().get_post_ops()` construction as attributes will
 // be destroyed once post-ops are saved on stack.
-// See https://github.com/oneapi-src/oneDNN/issues/1337 for details.
+// See https://github.com/uxlfoundation/oneDNN/issues/1337 for details.
 dnnl::primitive_attr attr;
 dnnl::post_ops ops;
 memory::desc po_src1_md({1, 1, 1, 1}, data_type::f32, tag::abcd);