Commit Graph

64 Commits

Author SHA1 Message Date
9fff8155c3 [2/N] Fix clang-tidy readability checks (#164652)
This PR applies clang-tidy readability checks to the jit sources and to all headers in the code base.
`readability-redundant-inline-specifier`, which detects redundant `inline` specifiers on function and variable declarations, is suppressed because it would incur too many changes: many in-class method definitions are marked `inline`.
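For illustration, a minimal sketch of what this check flags (hypothetical classes, not code from the PR): an in-class member function definition is implicitly `inline`, so the explicit specifier is redundant, whereas an out-of-line definition in a header still needs it.

```
// Hypothetical example of what readability-redundant-inline-specifier flags.
struct Counter {
  // In-class definitions are implicitly inline; the keyword adds nothing.
  inline int value() const { return value_; }  // flagged: redundant 'inline'
  int value_ = 0;
};

// Out-of-line definitions in a header are where 'inline' is actually needed.
struct Gauge {
  int value() const;
  int value_ = 0;
};
inline int Gauge::value() const { return value_; }  // 'inline' required to avoid ODR issues
```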

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164652
Approved by: https://github.com/Skylion007
2025-10-06 01:06:01 +00:00
2c5ed6e7c0 Revert "[2/N] Fix clang-tidy readability checks (#164652)"
This reverts commit 3c5ca685d6f5b6f3971c0cd20a054aa355610419.

Reverted https://github.com/pytorch/pytorch/pull/164652 on behalf of https://github.com/izaitsevfb due to need to revert due to a conflict with revert of https://github.com/pytorch/pytorch/pull/162659 ([comment](https://github.com/pytorch/pytorch/pull/164652#issuecomment-3369346707))
2025-10-05 21:36:57 +00:00
3c5ca685d6 [2/N] Fix clang-tidy readability checks (#164652)
This PR applies clang-tidy readability checks to the jit sources and to all headers in the code base.
`readability-redundant-inline-specifier`, which detects redundant `inline` specifiers on function and variable declarations, is suppressed because it would incur too many changes: many in-class method definitions are marked `inline`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164652
Approved by: https://github.com/Skylion007
2025-10-05 07:05:11 +00:00
541584d22e [BE][8/16] fix typos in torch/ (torch/csrc/jit/) (#156318)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156318
Approved by: https://github.com/albanD
2025-07-02 22:55:29 +00:00
cyy 0274d16c01 Fix clang-tidy warnings in jit code (#138974)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138974
Approved by: https://github.com/ezyang
2024-10-29 04:33:40 +00:00
cyy 1a73255102 Concat namespaces in jit code (#138976)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138976
Approved by: https://github.com/Skylion007
2024-10-26 17:41:27 +00:00
fddabc6e0b C10_UNUSED to [[maybe_unused]] (#6357) (#138364)
Summary: Pull Request resolved: https://github.com/pytorch/executorch/pull/6357

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138364
Approved by: https://github.com/Skylion007, https://github.com/eqy
2024-10-19 13:17:43 +00:00
b7f798caa4 Use C10_UNUSED instead of (void)X (#137239)
Summary:
Auto-generated with
```
buck run //scripts/rbarnes/regex_multiline_replacer:regex_multiline_replacer -- --find '^(\s*for\s*\()(const.*\n)\s*\(void\)[A-Za-z]+;\s*//\s*Suppress.*\s*\n(.*)'  --replace '\1C10_UNUSED \2\3' `find caffe2/ -regex ".*\.\(cpp\|h\)"`
```
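A minimal sketch of the loop shape this regex rewrites, and of the later `C10_UNUSED` → `[[maybe_unused]]` change from #138364 (loop body and variable names are illustrative; the `C10_UNUSED` include path is an assumption):

```
#include <cstddef>
#include <vector>

#include <c10/macros/Macros.h>  // assumed to provide C10_UNUSED

size_t countItems(const std::vector<int>& items) {
  size_t count = 0;
  // Before this PR:
  //   for (const auto& item : items) {
  //     (void)item;  // Suppress unused variable warning
  //     ++count;
  //   }
  // After #137239, the regex produces roughly this shape:
  for (C10_UNUSED const auto& item : items) {
    ++count;
  }
  // After #138364, the macro is replaced with the standard C++17 attribute:
  //   for ([[maybe_unused]] const auto& item : items) { ++count; }
  return count;
}
```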

Differential Revision: D33432600

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137239
Approved by: https://github.com/Skylion007
2024-10-15 14:32:59 +00:00
cyy 7bbdf87517 [22/N] Fix clang-tidy warnings in jit (#134829)
Follows  #134537

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134829
Approved by: https://github.com/ezyang
2024-09-19 19:24:42 +00:00
cyy 77f2883c41 [Reland2] fix missing-prototypes warnings in torch_cpu (Part 4) (#102228)
This PR relands the changes introduced in PR https://github.com/pytorch/pytorch/pull/100849. The old PR turned the nnc_* functions static. We now add declarations for them and hope that the internal builds will pass.
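A minimal sketch, using a hypothetical nnc_-style function, of the two ways to silence a -Wmissing-prototypes warning: the original PR made such functions `static` (which broke external references), while this reland keeps external linkage and adds declarations instead.

```
// Hypothetical function; name and signature are illustrative only.
// warning: no previous prototype for 'nnc_example_run' [-Wmissing-prototypes]

// Original PR's approach: internal linkage silences the warning but hides the
// symbol from other translation units (what broke the internal builds):
//   static void nnc_example_run(float* buf) { buf[0] = 0.f; }

// This reland's approach: add a declaration (ideally in a header) and keep the
// definition externally visible.
void nnc_example_run(float* buf);
void nnc_example_run(float* buf) { buf[0] = 0.f; }
```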

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102228
Approved by: https://github.com/albanD
2023-06-02 22:04:44 +00:00
32ce06a5ab Revert "[Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)"
This reverts commit 4f2c007a1b5170c2aa0d47e388ff9e07c7a7d354.

Reverted https://github.com/pytorch/pytorch/pull/101949 on behalf of https://github.com/osalpekar due to As noted in @izaitsevfb's comment, we are still seeing linker errors, this time due to `nnc_prepacked_linear_clamp_run` being made a static function. ([comment](https://github.com/pytorch/pytorch/pull/101949#issuecomment-1560226880))
2023-05-23 22:53:47 +00:00
cyy 4f2c007a1b [Reland] fix missing-prototypes warnings in torch_cpu (Part 4) (#101949)
This PR relands the changes introduced in PR #100849. The old PR turned nnc_aten_embedding into a static function; however, it is actually used in torch/csrc/jit/tensorexpr/operators/misc.cpp.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101949
Approved by: https://github.com/albanD
2023-05-22 10:53:07 +00:00
498c34e8e8 Revert " fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)"
This reverts commit c2f28d1c1df0db78f2951e4df5dde264f80f07eb.

Reverted https://github.com/pytorch/pytorch/pull/100849 on behalf of https://github.com/izaitsevfb due to fails internal Meta builds, including fbcode and android, see D46009888: ld.lld: error: undefined symbol: nnc_aten_embedding ([comment](https://github.com/pytorch/pytorch/pull/100849#issuecomment-1555105800))
2023-05-19 19:05:15 +00:00
cyy c2f28d1c1d fix missing-prototypes warnings in torch_cpu (Part 4) (#100849)
This PR fixes more missing-prototypes violations in the torch_cpu source following PRs #100053, #100147 and #100245

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100849
Approved by: https://github.com/albanD
2023-05-18 03:49:45 +00:00
380ccfd442 Revert "Added round_with_scale_factor arg to ATen (#97868)"
This reverts commit aa99c5b4eda345f792687c490e72c8575110977a.

Reverted https://github.com/pytorch/pytorch/pull/97868 on behalf of https://github.com/osalpekar due to Caused breakages in the glow compiler - see [D45374622](https://www.internalfb.com/diff/D45374622) for more details
2023-04-28 20:47:00 +00:00
aa99c5b4ed Added round_with_scale_factor arg to ATen (#97868)
Addresses #62396 following the strategy described in https://github.com/pytorch/pytorch/pull/64983#issuecomment-1026177629.

Fixing the output size to match OpenCV, scikit-image, and SciPy when a scale factor is specified; the change is on the ATen side only due to JIT forward compatibility (FC).
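A small sketch of the size mismatch being addressed, assuming the historical behaviour truncates the product (numbers are illustrative, not from the PR): with a non-integral scale factor, truncating and rounding can give different output sizes, and OpenCV, scikit-image, and SciPy round.

```
#include <cmath>
#include <cstdio>

int main() {
  const int input_size = 5;
  const double scale_factor = 1.7;
  // Truncating the product:
  const int floored = static_cast<int>(input_size * scale_factor);              // 8
  // Rounding the product, as OpenCV / scikit-image / SciPy do:
  const int rounded = static_cast<int>(std::round(input_size * scale_factor));  // 9
  std::printf("floor: %d, round: %d\n", floored, rounded);
  return 0;
}
```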

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97868
Approved by: https://github.com/lezcano, https://github.com/mikaylagawarecki
2023-04-26 18:48:37 +00:00
62ecfa8b79 Fix typo under torch/csrc/jit/passes directory (#97222)
This PR fixes typos in comments under the `torch/csrc/jit/passes` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97222
Approved by: https://github.com/davidberard98, https://github.com/kit1980
2023-03-23 04:08:42 +00:00
e57a694d77 Add some missing moves to torch jit passes (#92317)
Add some missing moves in torch/jit/passes
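A minimal, hypothetical illustration of the kind of change: a locally built value that is not used again is handed to the callee with std::move instead of being copied. The helper names are illustrative, not from the PR.

```
#include <string>
#include <utility>
#include <vector>

// Hypothetical pass helpers, for illustration only.
std::vector<std::string> collectNames() { return {"foo", "bar"}; }
void registerNames(std::vector<std::string> names) { (void)names.size(); }

void runPass() {
  std::vector<std::string> names = collectNames();
  // Before: registerNames(names); copies the whole vector.
  // After: the local is not used again, so transfer ownership instead.
  registerNames(std::move(names));
}

int main() { runPass(); }
```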

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92317
Approved by: https://github.com/ezyang
2023-01-22 16:33:08 +00:00
5c7e801c50 [pytorch][on device quant] Finalize method for ondevice quant (#83571)
Summary:
After inserting quant-dequant nodes in the graph, we need to:
1. Insert packed param creation and the quantized op.
2. Create a packed_params attribute in the top module. For this we need a graph that is inlined except for calculate_qparams method calls, but those can be inlined too, so perhaps we need to make sure no other call methods exist.
3. Insert SetAttr for the packed param
4. Insert GetAttr for the packed param
5. Use GetAttr output for quantized op where applicable, e.g.
linear_dynamic

The above is added to the quantize_<method-name> method created in the previous step. Once the above steps are done, clone the method into quantized_<method-name>.

Modify quantize_<method-name>:
1. Remove all outputs from the method.
2. Run dce
3. Remove all inputs from the method except self.

Modify quantized_<method-name>:
1. Remove all packed_param setAttr nodes.
2. Run dce.

This should result in removal of all nodes that generate packed param.
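A rough sketch of the graph manipulation described above, using the JIT C++ IR helpers (createSetAttr/createGetAttr); the attribute name, placeholder value, and overall structure are illustrative, not the PR's actual code.

```
#include <torch/csrc/jit/api/module.h>
#include <torch/csrc/jit/ir/ir.h>
#include <torch/csrc/jit/passes/dead_code_elimination.h>

using namespace torch::jit;

// Illustrative only: register a packed-param attribute on the top module and
// wire SetAttr/GetAttr nodes for it into a graph (steps 2-4 above).
void insertPackedParamAccess(Module& module, Graph& graph, Value* self,
                             Value* packed_param) {
  module.register_attribute("_packed_params", packed_param->type(), IValue());
  graph.appendNode(graph.createSetAttr(self, "_packed_params", packed_param));
  Node* get = graph.appendNode(graph.createGetAttr(self, "_packed_params"));
  // Step 5: a quantized op such as quantized::linear_dynamic would consume
  // get->output() instead of the freshly packed value.
  (void)get;
  // The "Run dce" steps correspond to the standard dead code elimination pass.
  EliminateDeadCode(graph.block());
}
```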

Test Plan: To be written

Differential Revision: [D38771416](https://our.internmc.facebook.com/intern/diff/D38771416)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83571
Approved by: https://github.com/jerryzh168
2022-08-29 17:53:11 +00:00
30214aef2d [BE] irangefy (#62928)
Summary:
Replace for loops with `irange` loops. Also fix some unused variable warnings in range loop cases.
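A minimal sketch of the pattern, with an illustrative loop (function and variable names are not from the PR):

```
#include <cstdint>
#include <vector>

#include <c10/util/irange.h>

int sumFirstN(const std::vector<int>& values, int64_t n) {
  int sum = 0;
  // Before: for (int64_t i = 0; i < n; i++) { sum += values[i]; }
  // After: c10::irange expresses the same bounds more compactly.
  for (const auto i : c10::irange(n)) {
    sum += values[i];
  }
  return sum;
}
```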

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62928

Reviewed By: driazati

Differential Revision: D30171904

Pulled By: malfet

fbshipit-source-id: 1b437a0f7e3515f4a2e324f3450e93312f1933ae
2021-08-07 13:34:13 -07:00
b39b28ced3 irange-ify 10 (#62122)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62122

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D29879694

fbshipit-source-id: 87cd8ab17061129c835d9f961b67587c84d181d1
2021-07-28 13:35:23 -07:00
b162d95e46 Fix a number of lint perf and safety issues in torch (#59897)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59897

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D29037012

fbshipit-source-id: 7c16286d5fc2b67964fb65f8374dfff4d1a7aefb
2021-06-15 13:14:51 -07:00
4cb534f92e Make PyTorch code-base clang-tidy compliant (#56892)
Summary:
This is an automatic change generated by the following script:
```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files

def run_clang_tidy(fname):
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname,"-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit","--all", "-m", f"NOLINT stubs for {fname}"])

def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179
2021-04-28 14:10:25 -07:00
26419815af Modernize for-loops (#52330)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52330

Test Plan: Sandcastle

Reviewed By: mruberry

Differential Revision: D26001961

fbshipit-source-id: e75cc8f1a8d30917b4d55df9e1a3c7836c271820
2021-02-23 17:32:33 -08:00
958c208666 [quant] conv_transpose graph patterns (#45078)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45078

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23821580

Pulled By: z-a-f

fbshipit-source-id: 813a4ef1bbc429720765d61791fe754b6678a334
2020-09-25 18:14:29 -07:00
ed8b08a3ba Update quantize_jit to handle new upsample overloads (#43407)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43407

ghstack-source-id: 110404846

Test Plan:
test_general_value_ops passes with D21209991 applied.
(Without this diff D21209991 breaks that test.)

Reviewed By: jerryzh168

Differential Revision: D23256503

fbshipit-source-id: 0f75e50a9f7fccb5b4325604319a5f76b42dfe5e
2020-08-24 13:33:47 -07:00
6bd46b583e [quant][graph] Add support for FP16 dynamic quant (#42222)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42222

This change adds the necessary passes to perform FP16 dynamic quantization.
We skip inserting observers for activations based on the dtype (torch.float16) and only insert the Fp16Observer for weights.

Test Plan:
python test/test_quantization.py TestQuantizeJitOps

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D22849220

fbshipit-source-id: 2c53594ecd2485e9e3dd0b380eceaf7c5ab5fc50
2020-07-31 12:33:53 -07:00
dde18041a6 [quant][graphmode] Refactor quantization patterns (#40894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40894

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D22403901

fbshipit-source-id: e0bcf8a628c6a1acfe6fa10a52912360a619bc62
2020-07-08 10:36:25 -07:00
26543e6caf [quant][graphmode] FP16 quant support - Operator Fusion (#40710)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40710

Test Plan:
python test/test_quantization.py

Imported from OSS

Differential Revision: D22335975

fbshipit-source-id: 5c176bb6b9c300e1beb83df972149dd5a400b854
2020-07-01 14:15:53 -07:00
8f5b28674c [JIT] Remove dead store in quantization_patterns.h (#40724)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40724

Test Plan: Continuous integration.

Differential Revision: D22294600

Pulled By: SplitInfinity

fbshipit-source-id: 04546579273d8864d91c3c74a654aa75ba34ee45
2020-06-29 16:55:15 -07:00
0a19534dd2 [JIT] Remove dead store in quantization_patterns.h (#40623)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40623

Test Plan: Continuous integration.

Reviewed By: jerryzh168

Differential Revision: D22259209

fbshipit-source-id: 90c9e79e039100f2961195504bb81230bba5c5fe
2020-06-26 19:43:43 -07:00
e3a97688cc [quant][graphmode][fix] dequantize propagation for {add/mul}_scalar (#40596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40596

Previously the fusion patterns for {add/mul}_scalar were inconsistent, since the op pattern produced a non-quantized tensor while the op replacement graph produced a quantized tensor.
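For context, these fusion patterns live in quantization_patterns.h as pairs of pattern/replacement graph strings; a rough, illustrative sketch of their shape follows (not the exact strings touched by this PR, and the real entries carry more arguments such as quantization params):

```
#include <string>

// Illustrative pattern/replacement pair. The fix makes both sides agree on
// whether the produced tensor is quantized.
const std::string add_scalar_pattern = R"(
graph(%a_quant, %b_scalar):
    %r = aten::add(%a_quant, %b_scalar)
    return (%r) )";

const std::string add_scalar_replacement = R"(
graph(%a_quant, %b_scalar):
    %r = quantized::add_scalar(%a_quant, %b_scalar)
    return (%r) )";
```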

Test Plan: Imported from OSS

Differential Revision: D22251072

fbshipit-source-id: e16eb92cf6611578cca1ed8ebde961f8d0610137
2020-06-25 22:17:08 -07:00
ab8a99bd36 graph mode: add hardswish inplace handling (#40284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40284

Adds graph mode handling for inplace hardswish, and test coverage for functional hardswish.

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_hardswish
```

Imported from OSS

Differential Revision: D22140628

fbshipit-source-id: 55a514f7dc1130d510f69ee4e611d7cb5e08d02e
2020-06-21 09:40:50 -07:00
c6dbfcaf9e quantized elu: graph mode handling (#40111)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40111

Adds graph mode handling for quantized elu.

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_elu
```

Imported from OSS

Differential Revision: D22075080

fbshipit-source-id: 37fb1b9e390f2a33d47cbd025157532379b6aa64
2020-06-21 09:40:48 -07:00
13d54c6471 quantized elu: require observation (#40100)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40100

ELU has a range of [-1, inf]. In the original PR which added
the quantized operator we decided to pass the quantization params
from the input.  However, it makes more sense to require observation
for this op.

This PR changes the API to require observation. Next PRs in this stack
will add the eager and graph mode handling.
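A small sketch of why the input's quantization params are a poor fit for the output, assuming the default α = 1 (so the output range is roughly (-1, ∞)):

```
#include <cmath>
#include <cstdio>

// ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise; with alpha = 1 the
// output never goes below -1, no matter how negative the input is.
double elu(double x, double alpha = 1.0) {
  return x > 0 ? x : alpha * (std::exp(x) - 1.0);
}

int main() {
  std::printf("elu(-10) = %.5f, elu(3) = %.1f\n", elu(-10.0), elu(3.0));
  // prints: elu(-10) = -0.99995, elu(3) = 3.0
  return 0;
}
```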

Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qelu
```

Imported from OSS

Differential Revision: D22075083

fbshipit-source-id: 0ea0fd05a00cc7a5f122a2b1de09144bbd586f32
2020-06-21 09:38:28 -07:00
9da277c635 [quant][graphmodel] linear_relu (#40021)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40021

This replaces #36889 due to significant merge conflicts

Test Plan: Imported from OSS

Differential Revision: D22087061

Pulled By: z-a-f

fbshipit-source-id: 6a65cdd3c0c0c957968a9d017902fb6d03b58150
2020-06-19 23:32:54 -07:00
e04a611b91 [quant][graphmode] clang format changes (#40329)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40329

Test Plan: Imported from OSS

Differential Revision: D22149706

fbshipit-source-id: 3c07cb0c09a53a01fc69185943ddc409264a6ff5
2020-06-19 23:22:43 -07:00
37362fff66 graph mode: util for fusion of functions which require observation (#39413)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39413

Implementing the request from
https://github.com/pytorch/pytorch/pull/39095

WIP so we can align on the API; once it looks good, the PR will be amended to apply to all relevant functions.

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_hardswish
```

Imported from OSS

Differential Revision: D21885263

fbshipit-source-id: 029339a99f8c50e45dd1dfb7fd89c20e3188720d
2020-06-18 10:21:20 -07:00
f42c948df5 [quant][graphmode] Support another use pattern of mean (#40038)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40038

Test Plan: Imported from OSS

Differential Revision: D22055696

fbshipit-source-id: 776196ce3d743deb8335d237bf5ef0fa67f7f26d
2020-06-16 18:37:21 -07:00
144e8dc5a3 [quant][graphmode] Use quantizedbatch_norm in graph mode (#39911)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39911

Test Plan: Imported from OSS

Differential Revision: D22012282

fbshipit-source-id: 98af55172cbeaa7080865d6533df21647a7cedfa
2020-06-16 00:58:11 -07:00
99084104b6 [quant][graphmode][refactor] isScalar check (#39892)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39892

Test Plan: Imported from OSS

Differential Revision: D22009856

fbshipit-source-id: fbc407499bcff0f25e44eedba3d6cd1225325c24
2020-06-12 10:53:35 -07:00
004aa089a6 [jit][subgraph_rewriter] Support list of filters (#39867)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39867

Support a list of filters in the subgraph rewriter; the rewrite executes only when the match passes all filter checks. This is useful for different matches to share the same filter.
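A rough sketch of how a vector of filters might be passed, assuming the SubgraphRewriter API from torch/csrc/jit/passes/subgraph_rewrite.h; the filter bodies are placeholders, and real filters inspect the matched nodes/values via the value map.

```
#include <memory>
#include <string>
#include <unordered_map>

#include <torch/csrc/jit/ir/subgraph_matcher.h>
#include <torch/csrc/jit/passes/subgraph_rewrite.h>

using namespace torch::jit;

// Placeholder filters for illustration only.
bool alpha_is_one(const Match& /*m*/,
                  const std::unordered_map<std::string, Value*>& /*vmap*/) {
  return true;
}
bool is_functional_relu(const Match& /*m*/,
                        const std::unordered_map<std::string, Value*>& /*vmap*/) {
  return true;
}

void rewriteWithFilters(std::shared_ptr<Graph>& graph,
                        const std::string& pattern,
                        const std::string& replacement) {
  SubgraphRewriter rewriter;
  rewriter.RegisterRewritePattern(pattern, replacement);
  // With this change the rewrite fires only if every filter accepts the match.
  rewriter.runOnGraph(graph, {alpha_is_one, is_functional_relu});
}
```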

Test Plan: Imported from OSS

Differential Revision: D22009855

fbshipit-source-id: 67aab8d6326b2011a9061397699dc62ee9ad4e2d
2020-06-12 08:24:49 -07:00
246d7bb41d [quant][graphmode] Quantizing traced modules (#39826)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39826

Expanding operator test coverage to traced modules

Test Plan: Imported from OSS

Differential Revision: D21991266

fbshipit-source-id: 73b1d94caa6ad41bb0d6cbde7ba0de343da3e7ff
2020-06-12 00:55:11 -07:00
9551fb22d6 [quant][graphmode] Preserve numerics in debug option for clamp ops (#39219)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39219

We don't model clamp ops correctly right now; this PR fixes that.

The reason is that the quantized clamp op quantizes the scalar arguments in the op implementation: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp#L614-L617

So we'll need to model this explicitly in the IR. When we see an `aten::dequantize - aten::clamp(%x, %min, %max)` pattern, we first make a scalar tensor with `aten::scalar_tensor(%scalar, ...)`, then quantize that tensor with the same quantization parameters as the input tensor of the `aten::clamp`, dequantize it, and finally convert the dequantized tensor back to a scalar using `aten::item`.

Test Plan: Imported from OSS

Differential Revision: D21831350

fbshipit-source-id: d60731459a0465d64946aabc62065d25d92faefc
2020-06-08 17:15:39 -07:00
ebdff07d49 instancenorm: static quant graph mode support (#39096)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39096

Hooks up instancenorm for graph mode static quant

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_instance_norm
```

Imported from OSS

Differential Revision: D21885258

fbshipit-source-id: 650cc5b162dda044866176fea6c345082d9788ed
2020-06-07 13:38:28 -07:00
b443ca26c5 groupnorm: graph mode static quant support (#39095)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39095

Hooks up groupnorm to graph mode static quant

Test Plan:
```
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_group_norm
```

Imported from OSS

Differential Revision: D21885257

fbshipit-source-id: 3415c4de76181b026d2f5bfebab130fea29e1d1e
2020-06-07 13:38:22 -07:00
e4627e5dba [quant][graphmode] Fix add_relu patterns for scripting and tracing (#39455)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39455

1. enable filters in PatternInfo
2. add aten_add_alpha_is_one filter
3. add is_functional_relu filter
4. add is_relu_module filter
5. fix the relu module method call matching in traced modules with regex
6. add aten::add - aten::relu patterns for traced modules

Test Plan: Imported from OSS

Differential Revision: D21917118

fbshipit-source-id: e67b55cd1c070fd4238f563d933a6f10a3582ae3
2020-06-06 23:51:34 -07:00
625f4e39a7 [quant] Fix fusion pattern for add_relu (#39367)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39367

We shouldn't match the `%alpha` argument since it could be used by multiple functions.

Test Plan: Imported from OSS

Differential Revision: D21829295

fbshipit-source-id: 6daa320a4b56df4e142b8e02e04a3ecb36284d1b
2020-06-01 20:15:13 -07:00
9cacbe29e5 [quant] Add reduce_range argument for qlinear_dynamic (#39041)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39041

The reduce_range option restricts the activation tensor to 7 bits instead of 8.
This is necessary to enable per-channel quantization for RNNs and LSTMs.
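A sketch of the range arithmetic involved (the actual clamping happens inside the quantized backend kernels; the headroom rationale is the usual motivation for reduce_range and is stated here as an assumption):

```
#include <cstdint>
#include <cstdio>

int main() {
  // Full 8-bit unsigned activation range vs. the 7-bit range used when
  // reduce_range=true; the dropped bit leaves accumulation headroom in the
  // backend kernels, which is what makes per-channel weights workable here.
  const int32_t qmax_full = (1 << 8) - 1;     // 255
  const int32_t qmax_reduced = (1 << 7) - 1;  // 127
  std::printf("quint8 activation range: [0, %d], reduced: [0, %d]\n",
              qmax_full, qmax_reduced);
  return 0;
}
```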

Test Plan:
python test/test_quantization.py TestDynamicQuantizedLinear

Imported from OSS

Reviewed By: akinh

Differential Revision: D21769691

fbshipit-source-id: ef0e9873367f3c1b34091b0b3af788233ef60c6c
2020-05-29 18:19:36 -07:00
a8d8fc5532 [quant][graphmode] Different rule for add/add_/mul/mul_ (#38667)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/38667

Test Plan: Imported from OSS

Differential Revision: D21633555

fbshipit-source-id: 03b0298e83bf4dbda41b048c0edc7bb92cd4e1df
2020-05-20 19:43:46 -07:00