# Activity · llvm/torch-mlir

Recent pushes to the public llvm/torch-mlir repository (default branch `main`, created 2020-07-30), newest first.

**Representing Symbolic Shape Expressions in Torch Dialect (#3372)**
Merged to `main` on 2024-06-07 by Sambhav Jain (@sjain-stanford).

Torch Dialect with symbolic shape expressions:

```mlir
module {
  func.func @main(%arg0: !torch.vtensor<[?,?,3],f32>, %arg1: !torch.vtensor<[?,?,3],f32>) -> !torch.vtensor<[?,?,3],f32> {
    %0 = torch.symbolic_int "s0" {min_val = 5, max_val = 10} : !torch.int
    %1 = torch.symbolic_int "s1" {min_val = 0, max_val = 100} : !torch.int
    %2 = torch.symbolic_int "s3" {min_val = 0, max_val = 50} : !torch.int

    torch.bind_symbolic_shape %arg0, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %arg1, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>

    %3 = torch.aten.tanh %arg0 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %3, [%0, %1], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>

    %4 = torch.aten.sigmoid %arg1 : !torch.vtensor<[?,?,3],f32> -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %4, [%0, %2], #affine_map<()[s0, s1] -> (s0, s1, 3)> : !torch.vtensor<[?,?,3],f32>

    %5 = torch.prim.ListConstruct %3, %3, %4 : (!torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>, !torch.vtensor<[?,?,3],f32>) -> !torch.list<vtensor<[?,?,3],f32>>
    %int1 = torch.constant.int 1
    %6 = torch.aten.cat %5, %int1 : !torch.list<vtensor<[?,?,3],f32>>, !torch.int -> !torch.vtensor<[?,?,3],f32>
    torch.bind_symbolic_shape %6, [%0, %1, %2], #affine_map<()[s0, s1, s2] -> (s0, s1 * 2 + s2, 3)> : !torch.vtensor<[?,?,3],f32>

    return %6 : !torch.vtensor<[?,?,3],f32>
  }
}
```

For reference, this is the TorchDynamo exported program with symbolic shape expressions that the above Torch dialect program is imported from:

```py
ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, x: "f32[s0, s1, 3]", y: "f32[s0, s3, 3]"):
            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:31 in forward, code: a = torch.tanh(x)
            tanh: "f32[s0, s1, 3]" = torch.ops.aten.tanh.default(x);  x = None

            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:32 in forward, code: b = torch.sigmoid(y)
            sigmoid: "f32[s0, s3, 3]" = torch.ops.aten.sigmoid.default(y);  y = None

            # File: /home/sambhav.jain/workspaces/cruise/src/3p/torch-mlir/test/python/fx_importer/symbolic_shape_expr_test.py:33 in forward, code: return torch.cat((a, a, b), dim=1)
            cat: "f32[s0, 2*s1 + s3, 3]" = torch.ops.aten.cat.default([tanh, tanh, sigmoid], 1);  tanh = sigmoid = None
            return (cat,)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=, arg=TensorArgument(name='x'), target=None, persistent=None), InputSpec(kind=, arg=TensorArgument(name='y'), target=None, persistent=None)], output_specs=[OutputSpec(kind=, arg=TensorArgument(name='cat'), target=None)])
Range constraints: {s0: ValueRanges(lower=5, upper=10, is_bool=False), s1: ValueRanges(lower=0, upper=100, is_bool=False), s3: ValueRanges(lower=0, upper=50, is_bool=False)}
```

Huge credit to @stellaraccident for the inputs that helped evaluate the various design options and arrive at the representation of choice.

- [x] Op definitions for symbolic_int and bind_symbolic_shape ops
- [x] fx_importer updates to import range constraints + create symbolic_int ops
- [x] fx_importer changes for AffineMapAttr building + adding bind_symbolic_shape ops
- [x] custom printer/parser for inlined AffineMap expressions in mlir assembly
- [x] Dialect lit test
- [x] fx_importer python lit tests
- [ ] Cleanup pass to remove these ops (can add in a follow-on)

**[Stablehlo] Add lowering of GridSampler Op (#3084)**
Merged to `main` on 2024-06-07 by Yuanqiang Liu (@qingyunqu).

Inspired by PyTorch decompositions.py; see https://github.com/pytorch/pytorch/blob/ec58f1f74ebcec744d2ab90ad34abd09c1018e92/torch/_decomp/decompositions.py#L3923-L4086. Only paddingMode = 0 or 1 and interpolationMode = 0 or 1 are supported.

**build: manually update PyTorch version (#3340)**
Merged to `main` on 2024-06-06 by Vivek Khandelwal (@vivekkhandelwal1).

Sets the PyTorch and TorchVision versions to the nightly release 2024-05-14.

**[Linalg] Promote type for compare tensor op (#3416)**
Merged to `main` on 2024-06-04 by Rob Suderman (@rsuderman).

**[MLIR][Torch] Add TorchToLinalg lowering for AtenAvgPool3dOp (#3030)**
Merged to `main` on 2024-06-04 by Vivek Khandelwal (@vivekkhandelwal1).

This commit also fixes the average pool op's test failing for OnnxToLinalg lowering.

**[ONNX] Add OnnxToTorch Lowering for MaxUnpool op (#3413)**
Merged to `main` on 2024-06-04 by Vivek Khandelwal (@vivekkhandelwal1).

This commit also adds the Torch declarations for the aten.max_unpool2d and aten.max_unpool3d ops. The TorchToLinalg lowering for these will be added in a follow-up commit.

**[Bazel] Fix bazel deps (#3414)**
Merged to `main` on 2024-06-04 by penguin_wwy (@penguin-wwy).

#3367 and #3364 introduced new dependencies, causing the [Bazel workflow](https://github.com/llvm/torch-mlir/actions/workflows/bazelBuildAndTest.yml) to fail. These need to be fixed in Bazel.

**[Stablehlo] support uint8 (#3367)**
Merged to `main` on 2024-06-04 by Yuanqiang Liu (@qingyunqu).

Support lowering unsigned integer types to stablehlo, as discussed in https://github.com/llvm/torch-mlir/pull/2184. The changes in this PR:

1. Create `setupBackendTypeConversionForStablehlo()`, `createFuncBackendTypeConversionForStablehloPass`, and `createFinalizingBackendTypeConversionForStablehloPass`.
2. Remove `InferTypeOpInterface` from `torch_c.to_builtin_tensor`, because its result type differs between the linalg and stablehlo backends:

```mlir
// linalg backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xi8>
  %0 = tensor.empty() : tensor<3xf32>
  %1 = linalg.generic {indexing_maps = [#map, #map], iterator_types = ["parallel"]} ins(%c : tensor<3xi8>) outs(%0 : tensor<3xf32>) {
  ^bb0(%in: i8, %out: f32):
    %2 = arith.uitofp %in : i8 to f32
    linalg.yield %2 : f32
  } -> tensor<3xf32>
  return %1 : tensor<3xf32>
}

// stablehlo backend
func.func @forward(%arg0: !torch.vtensor<[3],ui8>) -> tensor<3xf32> {
  %c = torch_c.to_builtin_tensor %arg0 : !torch.vtensor<[3],ui8> -> tensor<3xui8>
  %0 = stablehlo.convert %c : (tensor<3xui8>) -> tensor<3xf32>
  return %0 : tensor<3xf32>
}
```

3. Fix the stablehlo and linalg conversions.

**Link necessary op interface implementations (#3364)**
Merged to `main` on 2024-06-04 by @zjgarvey.

This patch adds two `memref` passes to `torch-mlir-opt`, which already occur in the pass pipeline `torch-backend-to-linalg-on-tensors-backend-pipeline`. Additionally, necessary op interface external models are included to address issue #3352.

**Add conversion operation for bool resolved_literal (#3410)**
Merged to `main` on 2024-06-03 by Rob Suderman (@rsuderman).

Resolving `bool` literals can result in a type change to uint8. This needs to be converted back to the expected type before returning to the wrapped `torch` operators.

**Fix reducesum onnx lit test to linalg lowering fails (#3218)**
Merged to `main` on 2024-06-03 by Xida Ren (@renxida).

Fixes https://github.com/nod-ai/SHARK-Turbine/issues/653. Co-authored-by: Xida Ren.

**Branch `renxida-patch-2` deleted** on 2024-06-03 by Xida Ren (@renxida).

**Update development.md to use ld.lld (#3412)**
Merged to `main` on 2024-06-03 by Xida Ren (@renxida).

@kuhar mentioned in the previous PR that we should use ld.lld. I kept using ld because it worked for my LLD version. After updating to a new LLD version, ld.lld became necessary.

**[TorchToLinalg] add support for quantized group conv (#3341)**
Merged to `main` on 2024-06-03 by Vivek Khandelwal (@vivekkhandelwal1).

This addresses 7 of the model failures I'm seeing in the test suite; see [SHARK-Turbine issue #566](https://github.com/nod-ai/SHARK-Turbine/issues/566). The op `linalg.conv_2d_ngchw_gfchw_q` needed to be added upstream before merging this; see [llvm-project PR #92136](https://github.com/llvm/llvm-project/pull/92136). A small additional expansion to operand quantization is included in this patch to address a model failure that occurs when unblocking the quantized group convolutions in one of these onnx models.

**Branch `renxida-patch-2` created** on 2024-06-03 by Xida Ren (@renxida), with the commit "Update development.md to use ld.lld".

**[ONNX] Add OnnxToTorch lowering for SpaceToDepth op (#3393)**
Merged to `main` on 2024-06-03 by Vivek Khandelwal (@vivekkhandelwal1).

**[Torch] Emit rrelu and decompose it (#3250)**
Merged to `main` on 2024-06-03 by Yuanqiang Liu (@qingyunqu).

**[Torch] decompose AtenLerpTensorOp (#3251)**
Merged to `main` on 2024-06-03 by Yuanqiang Liu (@qingyunqu).

**[Torch] Support conv_transpose1d and conv_transpose3d (#3286)**
Merged to `main` on 2024-06-03 by Yuanqiang Liu (@qingyunqu).

1. Support conv_transpose1d and conv_transpose3d.
2. Fix bugs in the convertTransposedConv function in lib/Conversion/TorchToStablehlo/Linear.cpp.

**Push to `linkerbench`** on 2024-06-01 by Xida Ren (@renxida): commit "with comments from discord".

**Branch `linkerbench` created** on 2024-06-01 by Xida Ren (@renxida), with the commit "initial benchmark with too similar histograms".

**[NFC] Fix member cast change to global for landing collision (#3407)**
Merged to `main` on 2024-05-31 by Rob Suderman (@rsuderman).

A PR landed while moving away from a deprecated cast function. Updated the corresponding lines to pass.

**Bump LLVM to llvm/llvm-project@6127f15 (#3396)**
Merged to `main` on 2024-05-31 by Kunwar Grover (@Groverkss). Signed-off-by: zjgarvey.

**[Onnx] reduce MatMul OpsetVersion to 1 (#3403)**
Merged to `main` on 2024-05-31 by Vivek Khandelwal (@vivekkhandelwal1).

Resolves #3324.

**[MLIR][ONNX] Add OnnxToTorch support for Scatter Op (#3400)**
Merged to `main` on 2024-05-31 by Xida Ren (@renxida).

**[NFC] Change to *cast instead of .*cast variants (#3405)**
Merged to `main` on 2024-05-31 by Rob Suderman (@rsuderman).

Member casts have been deprecated. This changes a number of the member cast calls to the global templated variants to remove deprecation warnings.

**[Torch] support recompose of aten.split.with_sizes and aten.tensor_split.sections (#3401)**
Merged to `main` on 2024-05-31 by Yuanqiang Liu (@qingyunqu).

- Support recompose to aten.split.with_sizes and aten.tensor_split.sections.
- Fix recompose of aten.chunk.

**Modifies onnx resize lowering to fix numerical issues (#3381)**
Merged to `main` on 2024-05-31 by Xida Ren (@renxida).

Updates:

- Unsupported coordinate transformation modes now report a match failure.
- Fixes a bug that was introduced in the last patch for resize.
- Uses actual x and y coordinates for computing weights in bilinear interpolation (rather than eps-modified values).
- Slightly simplifies the bilinear interpolation payload for readability and performance.
- Passes coordinate transformation mode information from an onnx.Resize op to the mode string for the aten._interpolate op. This allows custom logic in the torch->linalg lowering to support onnx.Resize options without losing the default behaviors of the interpolate op.

**[ONNX] Add OnnxToTorch Lowering for LpNormalization op (#3397)**
Merged to `main` on 2024-05-30 by Vivek Khandelwal (@vivekkhandelwal1).

**[FxImporter] Fix transpose rank zero (#3382)**
Merged to `main` on 2024-05-30 by penguin_wwy (@penguin-wwy).