Update ruff to 0.4.1.
This version fixes a lot of false negatives/false positives, is 20-40% faster, and includes various other bug fixes.
Below is a before-and-after table showing the execution time of `ruff lint` and `ruff format` in milliseconds, courtesy of https://astral.sh/blog/ruff-v0.4.0:
| Repository | Linter (v0.3) | Linter (v0.4) | Formatter (v0.3) | Formatter (v0.4) |
|----------------------------------------------------|---------------|---------------|------------------|------------------|
| [pytorch/pytorch](https://github.com/pytorch/pytorch) | 328.7 | 251.8 | 351.1 | 274.9 |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124549
Approved by: https://github.com/ezyang
Fixes #112633
Fixed errors relating to pydocstyle in the following files. The remaining errors are not covered in this issue. `torch/utils/dlpack.py` was not modified, as its errors relate to the function signature in the first line of the docstring, which must be kept as-is for proper Sphinx interpretation:
```python
def from_dlpack(ext_tensor: Any) -> 'torch.Tensor':
    """from_dlpack(ext_tensor) -> Tensor
    .....
    """
```
pydocstyle torch/utils/_contextlib.py --count
before: 4
after: 0
pydocstyle torch/backends/mps/__init__.py --count
before: 8
after: 1
**remaining errors**
```
torch/backends/mps/__init__.py:1 at module level:
D104: Missing docstring in public package
```
pydocstyle torch/backends/xeon/run_cpu.py --count
before: 13
after: 1
**remaining errors**
```
torch/backends/xeon/run_cpu.py:864 in public function `main`:
D103: Missing docstring in public function
```
pydocstyle torch/backends/cpu/__init__.py --count
before: 2
after: 1
**remaining errors**
```
torch/backends/cpu/__init__.py:1 at module level:
D104: Missing docstring in public package
```
pydocstyle torch/utils/cpp_backtrace.py --count
before: 4
after: 1
**remaining errors**
```
torch/utils/cpp_backtrace.py:1 at module level:
D100: Missing docstring in public module
```
pydocstyle torch/utils/bundled_inputs.py --count
before: 8
after: 1
**remaining errors**
```
torch/utils/bundled_inputs.py:1 at module level:
D100: Missing docstring in public module
```
pydocstyle torch/utils/file_baton.py --count
before: 8
after: 1
**remaining errors**
```
torch/utils/file_baton.py:1 at module level:
D100: Missing docstring in public module
```
pydocstyle torch/utils/mobile_optimizer.py --count
before: 6
after: 1
**remaining errors**
```
torch/utils/mobile_optimizer.py:8 in public class `LintCode`:
D101: Missing docstring in public class
```
pydocstyle torch/backends/opt_einsum/__init__.py --count
before: 7
after: 5
**remaining errors**
```
torch/backends/opt_einsum/__init__.py:1 at module level:
D104: Missing docstring in public package
torch/backends/opt_einsum/__init__.py:67 in public function `set_flags`:
D103: Missing docstring in public function
torch/backends/opt_einsum/__init__.py:77 in public function `flags`:
D103: Missing docstring in public function
torch/backends/opt_einsum/__init__.py:93 in public class `OptEinsumModule`:
D101: Missing docstring in public class
torch/backends/opt_einsum/__init__.py:94 in public method `__init__`:
D107: Missing docstring in __init__
```
pydocstyle torch/utils/_device.py --count
before: 9
after: 6
**remaining errors**
```
torch/utils/_device.py:58 in public class `DeviceContext`:
D101: Missing docstring in public class
torch/utils/_device.py:59 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/_device.py:62 in public method `__enter__`:
D105: Missing docstring in magic method
torch/utils/_device.py:68 in public method `__exit__`:
D105: Missing docstring in magic method
torch/utils/_device.py:73 in public method `__torch_function__`:
D105: Missing docstring in magic method
torch/utils/_device.py:80 in public function `device_decorator`:
D103: Missing docstring in public function
```
pydocstyle torch/utils/_freeze.py --count
before: 15
after: 7
**remaining errors**
```
torch/utils/_freeze.py:77 in public function `indent_msg`:
D103: Missing docstring in public function
torch/utils/_freeze.py:89 in public class `FrozenModule`:
D101: Missing docstring in public class
torch/utils/_freeze.py:100 in public class `Freezer`:
D101: Missing docstring in public class
torch/utils/_freeze.py:101 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/_freeze.py:106 in public method `msg`:
D102: Missing docstring in public method
torch/utils/_freeze.py:185 in public method `get_module_qualname`:
D102: Missing docstring in public method
torch/utils/_freeze.py:206 in public method `compile_string`:
D102: Missing docstring in public method
```
pydocstyle torch/utils/throughput_benchmark.py --count
before: 25
after: 8
**remaining errors**
```
torch/utils/throughput_benchmark.py:1 at module level:
D100: Missing docstring in public module
torch/utils/throughput_benchmark.py:27 in public class `ExecutionStats`:
D101: Missing docstring in public class
torch/utils/throughput_benchmark.py:28 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/throughput_benchmark.py:33 in public method `latency_avg_ms`:
D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:37 in public method `num_iters`:
D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:46 in public method `total_time_seconds`:
D102: Missing docstring in public method
torch/utils/throughput_benchmark.py:50 in public method `__str__`:
D105: Missing docstring in magic method
torch/utils/throughput_benchmark.py:94 in public method `__init__`:
D107: Missing docstring in __init__
```
pydocstyle torch/utils/hooks.py --count
before: 14
after: 11
**remaining errors**
```
torch/utils/hooks.py:1 at module level:
D100: Missing docstring in public module
torch/utils/hooks.py:23 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/hooks.py:34 in public method `remove`:
D102: Missing docstring in public method
torch/utils/hooks.py:44 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:50 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:64 in public method `__enter__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:67 in public method `__exit__`:
D105: Missing docstring in magic method
torch/utils/hooks.py:82 in public function `warn_if_has_hooks`:
D103: Missing docstring in public function
torch/utils/hooks.py:103 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/hooks.py:188 in public method `setup_input_hook`:
D102: Missing docstring in public method
torch/utils/hooks.py:197 in public method `setup_output_hook`:
D102: Missing docstring in public method
```
pydocstyle torch/utils/_traceback.py --count
before: 19
after: 14
**remaining errors**
```
torch/utils/_traceback.py:47 in public function `report_compile_source_on_error`:
D103: Missing docstring in public function
torch/utils/_traceback.py:160 in public class `CapturedTraceback`:
D101: Missing docstring in public class
torch/utils/_traceback.py:163 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/_traceback.py:167 in public method `cleanup`:
D102: Missing docstring in public method
torch/utils/_traceback.py:170 in public method `summary`:
D102: Missing docstring in public method
torch/utils/_traceback.py:182 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/_traceback.py:190 in public method `extract`:
D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:190 in public method `extract`:
D400: First line should end with a period (not 't')
torch/utils/_traceback.py:213 in public method `format`:
D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:213 in public method `format`:
D400: First line should end with a period (not 'f')
torch/utils/_traceback.py:213 in public method `format`:
D401: First line should be in imperative mood (perhaps 'Format', not 'Formats')
torch/utils/_traceback.py:224 in public method `format_all`:
D200: One-line docstring should fit on one line with quotes (found 3)
torch/utils/_traceback.py:247 in private function `_extract_symbolized_tb`:
D205: 1 blank line required between summary line and description (found 0)
torch/utils/_traceback.py:247 in private function `_extract_symbolized_tb`:
D400: First line should end with a period (not 'f')
```
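For context, the D205/D400/D401 codes above all concern the docstring's summary line; a minimal sketch of the violation and the fix (function names and bodies are hypothetical, not the actual `extract` from `torch/utils/_traceback.py`):

```python
def extract_bad():
    """Extract a traceback
    with more detail on the following lines"""
    # D205: no blank line between summary and description
    # D400: summary line doesn't end with a period

def extract_good():
    """Extract a traceback.

    More detail on the following lines.
    """

# D401 additionally wants the summary in the imperative mood,
# e.g. "Format ..." rather than "Formats ...".
```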
pydocstyle torch/utils/mkldnn.py --count
before: 28
after: 26
**remaining errors**
```
torch/utils/mkldnn.py:1 at module level:
D100: Missing docstring in public module
torch/utils/mkldnn.py:4 in public class `MkldnnLinear`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:5 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:19 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:23 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:29 in public method `forward`:
D102: Missing docstring in public method
torch/utils/mkldnn.py:75 in public class `MkldnnConv1d`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:76 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:82 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:88 in public class `MkldnnConv2d`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:89 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:100 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:110 in public class `MkldnnConv3d`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:111 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:122 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:133 in public class `MkldnnBatchNorm`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:136 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:155 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:163 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:171 in public method `forward`:
D102: Missing docstring in public method
torch/utils/mkldnn.py:184 in public class `MkldnnPrelu`:
D101: Missing docstring in public class
torch/utils/mkldnn.py:185 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/mkldnn.py:190 in public method `__getstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:194 in public method `__setstate__`:
D105: Missing docstring in magic method
torch/utils/mkldnn.py:199 in public method `forward`:
D102: Missing docstring in public method
torch/utils/mkldnn.py:205 in public function `to_mkldnn`:
D103: Missing docstring in public function
```
pydocstyle torch/utils/weak.py --count
before: 32
after: 30
**remaining errors**
```
torch/utils/weak.py:1 at module level:
D100: Missing docstring in public module
torch/utils/weak.py:42 in public class `WeakIdRef`:
D101: Missing docstring in public class
torch/utils/weak.py:45 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/weak.py:54 in public method `__call__`:
D102: Missing docstring in public method
torch/utils/weak.py:61 in public method `__hash__`:
D105: Missing docstring in magic method
torch/utils/weak.py:64 in public method `__eq__`:
D105: Missing docstring in magic method
torch/utils/weak.py:84 in public class `WeakIdKeyDictionary`:
D101: Missing docstring in public class
torch/utils/weak.py:87 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/weak.py:131 in public method `__delitem__`:
D105: Missing docstring in magic method
torch/utils/weak.py:135 in public method `__getitem__`:
D105: Missing docstring in magic method
torch/utils/weak.py:138 in public method `__len__`:
D105: Missing docstring in magic method
torch/utils/weak.py:145 in public method `__repr__`:
D105: Missing docstring in magic method
torch/utils/weak.py:148 in public method `__setitem__`:
D105: Missing docstring in magic method
torch/utils/weak.py:151 in public method `copy`:
D102: Missing docstring in public method
torch/utils/weak.py:162 in public method `__deepcopy__`:
D105: Missing docstring in magic method
torch/utils/weak.py:172 in public method `get`:
D102: Missing docstring in public method
torch/utils/weak.py:175 in public method `__contains__`:
D105: Missing docstring in magic method
torch/utils/weak.py:182 in public method `items`:
D102: Missing docstring in public method
torch/utils/weak.py:189 in public method `keys`:
D102: Missing docstring in public method
torch/utils/weak.py:198 in public method `values`:
D102: Missing docstring in public method
torch/utils/weak.py:216 in public method `popitem`:
D102: Missing docstring in public method
torch/utils/weak.py:224 in public method `pop`:
D102: Missing docstring in public method
torch/utils/weak.py:228 in public method `setdefault`:
D102: Missing docstring in public method
torch/utils/weak.py:231 in public method `update`:
D102: Missing docstring in public method
torch/utils/weak.py:241 in public method `__ior__`:
D105: Missing docstring in magic method
torch/utils/weak.py:245 in public method `__or__`:
D105: Missing docstring in magic method
torch/utils/weak.py:252 in public method `__ror__`:
D105: Missing docstring in magic method
torch/utils/weak.py:262 in public method `__eq__`:
D105: Missing docstring in magic method
torch/utils/weak.py:276 in public method `__init__`:
D107: Missing docstring in __init__
torch/utils/weak.py:280 in public method `__call__`:
D102: Missing docstring in public method
```
@mikaylagawarecki @jbschlosser @svekars
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113311
Approved by: https://github.com/ezyang
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.
I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling it to keep things that way. :)
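RUF017 flags patterns like `sum(lists, [])`, which re-copies the accumulator on every addition and is therefore quadratic in the total number of elements; a minimal sketch of the problem and a linear alternative:

```python
import itertools

lists = [[1, 2], [3], [4, 5]]

# Quadratic: each `+` builds a brand-new list, re-copying all prior elements.
flat_slow = sum(lists, [])

# Linear: chain the sublists lazily and materialize once.
flat_fast = list(itertools.chain.from_iterable(lists))

assert flat_slow == flat_fast == [1, 2, 3, 4, 5]
```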
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
Summary:
Adjusts type hints for optimize_for_mobile to be consistent with the defaults. Right now, using optimize_for_mobile and passing only a script_module gives a type error complaining that preserved_methods can't be None.
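A minimal sketch of the mismatch and the fix, with simplified stand-in signatures (not the real optimize_for_mobile API):

```python
from typing import List, Optional

# Before: the default is None but the annotation doesn't allow it,
# so calling with only a script_module is flagged by a type checker.
def optimize_before(script_module, preserved_methods: List[str] = None):
    return preserved_methods or []

# After: Optional[...] makes the None default consistent with the hint.
def optimize_after(script_module, preserved_methods: Optional[List[str]] = None):
    return preserved_methods or []
```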
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59282
Test Plan:
Imported from GitHub, without a `Test Plan:` line.
Open source tests ran the lints. Internal CI should be enough here.
Reviewed By: jbschlosser
Differential Revision: D28838159
Pulled By: JacobSzwejbka
fbshipit-source-id: dd1e9aff00a759f71d32025d8c5b01e612c869a5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58344
Remove a helper function that's more trouble than it's worth.
ghstack-source-id: 129131889
Test Plan: ci and {P414950111}
Reviewed By: dhruvbird
Differential Revision: D28460607
fbshipit-source-id: 31bd6c1cc169785bb360e3113d258b612cad47fc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57045
Went back and adjusted the previous optimizations to just be applied to every function.
Cleaned up the API to match.
ghstack-source-id: 127214412
ghstack-source-id: 127536155
Test Plan: unit test
Reviewed By: kimishpatel
Differential Revision: D27950859
fbshipit-source-id: 214e83d5a19b452747fe223615815c10fa4aee58
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53314
Introduction of api for optimizing non forward functions for mobile. As of this diff, all functions that you say to optimize will be preserved, and those functions will be run through canonical optimization. The intention is to stack each further optimization onto separate diffs since they touch multiple files, and it seems like it'd be a nightmare to review.
ghstack-source-id: 123909414
Test Plan:
- `torch.utils.mobile_optimizer.optimize_for_mobile(net, methods_to_optimize=["forward", "foo"])` runs fine.
- `torch.utils.mobile_optimizer.optimize_for_mobile(net, methods_to_optimize={"foo"})` optimizes just foo if the model doesn't define forward; otherwise optimizes foo and forward.
- `torch.utils.mobile_optimizer.optimize_for_mobile(net, methods_to_optimize=["forward"])` runs fine.
- `torch.utils.mobile_optimizer.optimize_for_mobile(net)` runs fine if the model defines forward; throws otherwise.
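The cases in the test plan above amount to a small resolution rule; a hypothetical pure-Python sketch of it (not the real implementation):

```python
def resolve_methods_to_optimize(defines_forward, methods_to_optimize=None):
    """Hypothetical sketch of which methods end up optimized/preserved."""
    methods = set(methods_to_optimize or [])
    if defines_forward:
        # forward is optimized whenever the model defines it.
        methods.add("forward")
    if not methods:
        raise RuntimeError("model has no forward and no methods_to_optimize were given")
    return methods

assert resolve_methods_to_optimize(True) == {"forward"}
assert resolve_methods_to_optimize(False, {"foo"}) == {"foo"}
assert resolve_methods_to_optimize(True, ["foo"]) == {"forward", "foo"}
```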
Reviewed By: kimishpatel
Differential Revision: D26618689
fbshipit-source-id: 5bff1fb3f3f6085c4a649a8128af9c10f0fa9400
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51884
It is now possible to bundle inputs for a function while not bundling them for forward. This is OK, so we need to account for it.
ghstack-source-id: 121266667
Test Plan: Manually bundle inputs for a function not named forward. Call optimize_for_mobile and make sure the functions are still there. {P173289878}
Reviewed By: iseeyuan
Differential Revision: D26304558
fbshipit-source-id: 79f82d9de59c70b76f34e01f3d691107bf40e7bc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51496
A previous change added the possibility of more functions being generated when bundled inputs are attached. Want to preserve those here in optimize_for_mobile
ghstack-source-id: 120862718
Test Plan:
Created a dummy model. Augment several methods with bundled inputs. Call optimize for mobile. Verified the functions are still there.
Discovered a weird interaction between freeze_module and bundled inputs. If the user does something like

```python
inputs = [<inputs>]
augment_many_model_functions_with_bundled_inputs(
    model,
    inputs={
        model.forward: inputs,
        model.foo: inputs,
    },
)
```

to attach their bundled inputs, freeze_module within optimize_for_mobile will error out. Instead, the user would need to do something like

```python
inputs = [<inputs>]
inputs2 = [<inputs>]  # Nominally the same as the inputs above
augment_many_model_functions_with_bundled_inputs(
    model,
    inputs={
        model.forward: inputs,
        model.foo: inputs2,
    },
)
```
Reviewed By: dhruvbird
Differential Revision: D26005708
fbshipit-source-id: 3e908c0f7092a57da9039fbc395aee6bf9dd2b20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51153
Enabled bundled inputs for all public functions that the user wants in a TorchScript module. An important caveat here is that you can't add bundled inputs to functions that were in the nn.Module but weren't caught in the scripting/tracing process that brought the model to TorchScript.
The old API is exactly the same: it still only works on forward, return types are the same, etc.
**New API**
Attachment of inputs:
***augment_model_with_bundled_inputs***: works the same as before, but adds the option to specify an info dictionary.
***augment_many_model_functions_with_bundled_inputs***: similar to the above function, but allows the user to specify a Dict[Callable, List[<inputs>]] (mapping function references to the bundled inputs for that function) to attach bundled inputs to many functions.
Consumption of inputs:
***get_all_bundled_inputs_for_<function_name>()***: works exactly like get_all_bundled_inputs does, but can be used for functions other than forward if you know ahead of time what they are called, and if they have bundled inputs.
***get_bundled_inputs_functions_and_info()***: this is easily the hackiest function. It returns a Dict['str', 'str'] mapping function names to get_all_bundled_inputs_for_<function_name>. A user can then execute the functions specified in the values with something like
```python
all_info = model.get_bundled_inputs_functions_and_info()
for func_name in all_info.keys():
    input_func_name = all_info[func_name]['get_inputs_function_name'][0]
    func_to_run = getattr(loaded, input_func_name)
```
The reason it's done this way is that TorchScript doesn't support the 'Any' type yet, meaning I can't return the bundled inputs directly because they could be different types for each function. TorchScript also doesn't support Callable, so I can't return a function reference directly either.
ghstack-source-id: 120768561
Test Plan:
Got a model into TorchScript using the available methods that I'm aware of (tracing, scripting, old scripting method). Not really sure how tracing brings in functions that aren't in the forward call path, though. Attached bundled inputs and info to them successfully. Changes to TorchTest.py on all but the last version of this diff (where it will be/is removed for land) illustrate what I did to test.
Created and ran unit test
Reviewed By: dreiss
Differential Revision: D25931961
fbshipit-source-id: 36e87c9a585554a83a932e4dcf07d1f91a32f046
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49170
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.
Also added a check that all the methods in `preserved_methods` exist. If not, we now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.
~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
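A minimal sketch of what such an existence check might look like (helper and class names are hypothetical, not the actual implementation):

```python
def check_preserved_methods_exist(module, preserved_methods):
    """Raise early when a requested method is missing, rather than failing downstream."""
    missing = [m for m in preserved_methods if not hasattr(module, m)]
    if missing:
        raise AttributeError(f"preserved methods not found on module: {missing}")

class DummyModule:
    def forward(self):
        return 0

check_preserved_methods_exist(DummyModule(), ["forward"])  # passes silently
```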
Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`
or
`buck test caffe2/test:mobile` to run some other related tests as well.
Reviewed By: dhruvbird
Differential Revision: D25463719
fbshipit-source-id: 6670dfd59bcaf54b56019c1a43db04b288481b6a
Summary:
Added an extra step to **always** preserve the bundled inputs methods if they are present in the input module.
Also added a check that all the methods in `preserved_methods` exist. If not, we now throw an exception. This can hopefully stop hard-to-debug inputs from getting into downstream functions.
~~Add an optional argument `preserve_bundled_inputs_methods=False` to the `optimize_for_mobile` function. If set to be True, the function will now add three additional functions related with bundled inputs to be preserved: `get_all_bundled_inputs`, `get_num_bundled_inputs` and `run_on_bundled_input`.~~
Test Plan:
`buck test mode/dev //caffe2/test:mobile -- 'test_preserve_bundled_inputs_methods \(test_mobile_optimizer\.TestOptimizer\)'`
or
`buck test caffe2/test:mobile` to run some other related tests as well.
Reviewed By: dhruvbird
Differential Revision: D25433268
fbshipit-source-id: 0bf9b4afe64b79ed1684a3db4c0baea40ed3cdd5
Summary:
By default the freeze_module pass, invoked from optimize_for_mobile,
preserves only the forward method. There is an option to specify a list of
methods to be preserved during freeze_module. This PR exposes that option
to the optimize_for_mobile pass.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40629
Test Plan: python test/test_mobile_optimizer.py
Reviewed By: dreiss
Differential Revision: D22260972
Pulled By: kimishpatel
fbshipit-source-id: 452c653269da8bb865acfb58da2d28c23c66e326
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37462
Instead of running all the optimization passes in the optimizeForMobile method,
this introduces a whitelist optimizer dictionary as a second param. When it is
not passed, the method runs all the optimization passes; otherwise the method
reads the dict and only runs the passes whose value is True.
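The dispatch described above can be sketched generically in plain Python (the helper and pass names are hypothetical stand-ins, not the real C++ passes):

```python
def run_optimization_passes(module, passes, whitelist=None):
    """Run all passes when no whitelist is given; otherwise only those marked True."""
    for name, pass_fn in passes.items():
        if whitelist is None or whitelist.get(name, False):
            module = pass_fn(module)
    return module

# Toy "passes" that just record their name on a list standing in for the module.
passes = {
    "fold_prepack": lambda m: m + ["fold_prepack"],
    "fuse_conv_bn": lambda m: m + ["fuse_conv_bn"],
}

assert run_optimization_passes([], passes) == ["fold_prepack", "fuse_conv_bn"]
assert run_optimization_passes([], passes, {"fuse_conv_bn": True}) == ["fuse_conv_bn"]
```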
ghstack-source-id: 106104503
Test Plan:
python test/test_mobile_optimizer.py
Imported from OSS
Differential Revision: D22096029
fbshipit-source-id: daa9370c0510930f4c032328b225df0bcf97880f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37046
ghstack-source-id: 102669259
Creating a Python API entry point to generate mobile model lints, which takes a scripted module as argument and returns a map of module lints.
The initial version creates a placeholder, with module bundled inputs as the first lint instance. More lints will be added in the future.
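A hedged sketch of the shape such a lint generator might take (the real one inspects a scripted module; this stand-in just checks for bundled inputs by attribute, and the names are hypothetical):

```python
from enum import Enum

class LintCode(Enum):
    """Hypothetical lint codes, mirroring the LintCode class mentioned earlier."""
    BUNDLED_INPUT = 1

def generate_model_lints(scripted_module):
    """Return a list of lint dicts; bundled-input presence is the first placeholder lint."""
    lints = []
    if not hasattr(scripted_module, "get_all_bundled_inputs"):
        lints.append({
            "name": LintCode.BUNDLED_INPUT.name,
            "message": "No bundled inputs attached to the module.",
        })
    return lints

class BareModule:
    pass

assert generate_model_lints(BareModule())[0]["name"] == "BUNDLED_INPUT"
```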
Test Plan: python test/test_optimizer.py
Reviewed By: dreiss
Differential Revision: D21164648
fbshipit-source-id: 9e8f4e19d74b5464a55cc73b9dc18f358c5947d6
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36357
ghstack-source-id: 101907180
Creating a Python API entry point to optimize a mobile model, which takes a scripted module as argument and returns an optimized scripted module. The initial optimization features include inserting and folding prepack ops.
Test Plan: python test/test_optimizer.py
Differential Revision: D20946076
fbshipit-source-id: 93cb4a5bb2371128f802d738eb26d0a4f3b2fe10