
Commit 63efb2a

Merge branch 'master' into valgrind

2 parents: e7c7f1e + 0f1832f

38 files changed: +1026 −449 lines

.github/workflows/build.yml

Lines changed: 3 additions & 3 deletions
@@ -8,9 +8,9 @@ jobs:
       matrix:
         include:
           - ruby: 3.3
-            os: ubuntu-22.04
+            os: ubuntu-24.04
     env:
-      LIBTORCH_VERSION: 2.2.1
+      LIBTORCH_VERSION: 2.5.1
     steps:
       - uses: actions/checkout@v4
       - uses: ruby/setup-ruby@v1
@@ -28,6 +28,6 @@ jobs:
           cd ~
           wget -q -O libtorch.zip https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-$LIBTORCH_VERSION%2Bcpu.zip
           unzip -q libtorch.zip
-      - run: MAKE="make -j$(nproc)" bundle exec rake compile -- --with-torch-dir=$HOME/libtorch
+      - run: MAKE="make -j$(getconf _NPROCESSORS_ONLN)" bundle exec rake compile -- --with-torch-dir=$HOME/libtorch
       - run: sudo apt-get update && sudo apt-get install valgrind
       - run: bundle exec rake test:valgrind
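Two things change in CI here: the runner and LibTorch move to ubuntu-24.04 / 2.5.1, and the parallel-build flag switches from `nproc` (GNU coreutils, Linux-only) to `getconf _NPROCESSORS_ONLN`, which also works on macOS runners. If the same job count is ever needed from Ruby rather than the shell, the standard library exposes it; a minimal sketch, not part of this commit:

```ruby
require "etc"

# Online processor count, the same value `getconf _NPROCESSORS_ONLN` reports
jobs = Etc.nprocessors
puts %(MAKE="make -j#{jobs}")
```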

CHANGELOG.md

Lines changed: 27 additions & 0 deletions
@@ -1,3 +1,30 @@
+## 0.18.1 (unreleased)
+
+- Improved `inspect` for `Device`
+- Fixed equality for `Device`
+- Fixed `index` method for `Device` when no index
+
+## 0.18.0 (2024-10-22)
+
+- Updated LibTorch to 2.5.0
+
+## 0.17.1 (2024-08-19)
+
+- Added `persistent` option to `register_buffer` method
+- Added `prefix` and `recurse` options to `named_buffers` method
+
+## 0.17.0 (2024-07-26)
+
+- Updated LibTorch to 2.4.0
+- Added `normalize` method
+- Added support for tensor indexing with arrays
+
+## 0.16.0 (2024-06-12)
+
+- Updated LibTorch to 2.3.0
+- Added `ELU` and `GELU` classes
+- Dropped support for Ruby < 3.1
+
 ## 0.15.0 (2024-02-28)
 
 - Updated LibTorch to 2.2.0
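A rough sketch of what the 0.18.1 `Device` entries mean in practice, assuming the PyTorch-style `Torch.device` constructor; the exact output values are illustrative, not taken from this commit:

```ruby
require "torch"

cpu = Torch.device("cpu")

p cpu                         # improved inspect output for Device
p cpu == Torch.device("cpu")  # => true (equality fix)
p cpu.index                   # => nil when the device has no index (index fix)
```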

Gemfile

Lines changed: 0 additions & 3 deletions
@@ -7,6 +7,3 @@ gem "rake-compiler"
 gem "minitest", ">= 5"
 gem "numo-narray"
 gem "ruby_memcheck"
-
-# for examples
-gem "torchvision", ">= 0.2", require: false

README.md

Lines changed: 17 additions & 19 deletions
@@ -14,19 +14,28 @@ Check out:
 
 ## Installation
 
-First, [install LibTorch](#libtorch-installation). With Homebrew, it’s part of the PyTorch package:
+First, [download LibTorch](https://pytorch.org/get-started/locally/). For Mac arm64, use:
 
 ```sh
-brew install pytorch
+curl -L https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.5.1.zip > libtorch.zip
+unzip -q libtorch.zip
 ```
 
-Add this line to your application’s Gemfile:
+For Linux x86-64, use the `cxx11 ABI` version. For other platforms, build LibTorch from source.
+
+Then run:
+
+```sh
+bundle config build.torch-rb --with-torch-dir=/path/to/libtorch
+```
+
+And add this line to your application’s Gemfile:
 
 ```ruby
 gem "torch-rb"
 ```
 
-It can take 5-10 minutes to compile the extension.
+It can take 5-10 minutes to compile the extension. Windows is not currently supported.
 
 ## Getting Started
 
@@ -398,31 +407,20 @@ Here’s a list of functions to create tensors (descriptions from the [C++ docs]
 Torch.zeros(3) # tensor([0, 0, 0])
 ```
 
-## LibTorch Installation
-
-[Download LibTorch](https://pytorch.org/) (for Linux, use the `cxx11 ABI` version). Then run:
-
-```sh
-bundle config build.torch-rb --with-torch-dir=/path/to/libtorch
-```
+## LibTorch Compatibility
 
 Here’s the list of compatible versions.
 
 Torch.rb | LibTorch
 --- | ---
+0.18.x | 2.5.x
+0.17.x | 2.4.x
+0.16.x | 2.3.x
 0.15.x | 2.2.x
 0.14.x | 2.1.x
 0.13.x | 2.0.x
 0.12.x | 1.13.x
 
-### Homebrew
-
-You can also use Homebrew.
-
-```sh
-brew install pytorch
-```
-
 ## Performance
 
 Deep learning is significantly faster on a GPU.
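After following the updated install steps, a quick smoke test confirms the gem compiled against the downloaded LibTorch. Only `Torch.zeros` is taken from the README excerpt above; the rest assumes the conventional `Torch::VERSION` constant and is a plausible check, not prescribed by the README:

```ruby
require "torch"

t = Torch.zeros(3)
puts t.inspect      # tensor([0, 0, 0])
puts Torch::VERSION # gem version, e.g. 0.18.x
```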

codegen/generate_functions.rb

Lines changed: 6 additions & 6 deletions
@@ -155,10 +155,10 @@ def generate_attach_def(name, type, def_method)
     end
 
   ruby_name = "_#{ruby_name}" if ["size", "stride", "random!"].include?(ruby_name)
-  ruby_name = ruby_name.sub(/\Afft_/, "") if type == "fft"
-  ruby_name = ruby_name.sub(/\Alinalg_/, "") if type == "linalg"
-  ruby_name = ruby_name.sub(/\Aspecial_/, "") if type == "special"
-  ruby_name = ruby_name.sub(/\Asparse_/, "") if type == "sparse"
+  ruby_name = ruby_name.delete_prefix("fft_") if type == "fft"
+  ruby_name = ruby_name.delete_prefix("linalg_") if type == "linalg"
+  ruby_name = ruby_name.delete_prefix("special_") if type == "special"
+  ruby_name = ruby_name.delete_prefix("sparse_") if type == "sparse"
   ruby_name = name if name.start_with?("__")
 
   "rb_#{def_method}(m, \"#{ruby_name}\", #{full_name(name, type)}, -1);"
@@ -216,7 +216,7 @@ def add_dispatch(function, def_method)
   out_code = generate_dispatch(function["out"], def_method)
   out_index = function["out"].out_index
 
-  return "if (_r.isNone(#{out_index})) {
+  "if (_r.isNone(#{out_index})) {
 #{indent(base_code)}
 } else {
 #{indent(out_code)}
@@ -439,7 +439,7 @@ def generate_function_params(function, params, remove_self)
         else
           "#{func}Optional"
         end
-        end
+      end
 
     "_r.#{func}(#{param[:position]})"
   end
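For reference, `String#delete_prefix` (Ruby 2.5+) does exactly what the anchored `sub` calls did for a literal prefix, and it states the intent more directly. A standalone sketch, separate from the generator:

```ruby
name = "fft_fft2"

name.sub(/\Afft_/, "")      # => "fft2"
name.delete_prefix("fft_")  # => "fft2"

# Names without the prefix pass through unchanged either way
"linalg_det".delete_prefix("fft_") # => "linalg_det"
```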
