{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":6934395,"defaultBranch":"main","name":"rocksdb","ownerLogin":"facebook","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2012-11-30T06:16:18.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/69631?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1716484498.0","currentOid":""},"activityList":{"items":[{"before":"8765a0f5467eaed25099de7fdbce9926b2dad32b","after":"5cec4bbcab07e7ab925513a0dd133f104f91e1a6","ref":"refs/heads/main","pushedAt":"2024-05-28T23:58:19.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Support PutEntity as write method in the transactional MultiGet stress test (#12699)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12699\n\nThe patch adds `PutEntity` to the potential write operations used in the read-your-own-writes tests for `Transaction::MultiGet`. Note that since the stress test generates wide-column structures which have the value returned by `GenerateValue` in the default column, this does not affect the results returned by the `MultiGet` API (unless we have a bug).\n\nThe wide-column entity is generated according to the usual rules based on the value base and the `use_put_entity_one_in` flag. The entire entity structure will be validated by the upcoming stress test for `Transaction::MultiGetEntity`, where we also plan to leverage this logic.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D57799075\n\nfbshipit-source-id: 5f86c2b2b3ceee8e1b8bf7453c02f1f1b1b00751","shortMessageHtmlLink":"Support PutEntity as write method in the transactional MultiGet stres…"}},{"before":"259f21e695f4a0b3cb2338e1780d3575a943ba38","after":"8765a0f5467eaed25099de7fdbce9926b2dad32b","ref":"refs/heads/main","pushedAt":"2024-05-28T23:48:18.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix version edit dump in json (#12703)\n\nSummary:\n**Context/Summary:**\nthe flag --json of manifest_dump in ldb tool has no effect\nThe bug may be introduced by pr https://github.com/facebook/rocksdb/pull/8378\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12703\n\nReviewed By: cbi42\n\nDifferential Revision: D57848094\n\nPulled By: ajkr\n\nfbshipit-source-id: 3d1ce65528bf4ce9c53593a7208406ab90e8994b","shortMessageHtmlLink":"Fix version edit dump in json (#12703)"}},{"before":"c115eb6162e9584e35bfd37da61425a36f54fd32","after":"259f21e695f4a0b3cb2338e1780d3575a943ba38","ref":"refs/heads/main","pushedAt":"2024-05-28T22:45:45.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add WB, WBWI Create, UpdateTimestamp, Iterator::Refresh in C API (#10529)\n\nSummary:\nThis PR adds UpdateTimestamp API of WriteBatch and WBWI, create WB, WBWI with all options and Iterator Refresh in C API\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/10529\n\nReviewed By: cbi42\n\nDifferential Revision: D57826913\n\nPulled By: ajkr\n\nfbshipit-source-id: d2ec840129f61a1d3a5a12e859728be98ebbad2f","shortMessageHtmlLink":"Add WB, WBWI Create, UpdateTimestamp, Iterator::Refresh in C API (#10529"}},{"before":"7c6c632ea982c5d59e02b433f99f778b9623e910","after":"c115eb6162e9584e35bfd37da61425a36f54fd32","ref":"refs/heads/main","pushedAt":"2024-05-28T22:39:35.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix compile errors in C++23 (#12106)\n\nSummary:\nThis PR fixes compile errors in C++23.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12106\n\nReviewed By: cbi42\n\nDifferential Revision: D57826279\n\nPulled By: ajkr\n\nfbshipit-source-id: 594abfd8eceaf51eaf3bbabf7696c0bb5e0e9a68","shortMessageHtmlLink":"Fix compile errors in C++23 (#12106)"}},{"before":"d2ef70872fa1cccd10d36b3208201bb9fd46c113","after":"7c6c632ea982c5d59e02b433f99f778b9623e910","ref":"refs/heads/main","pushedAt":"2024-05-28T22:35:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Use `std::optional` instead of `std::unique_ptr` to conditionally create a read lock. (#12704)\n\nSummary:\nThis change replaces the use of `std::unique_ptr` with `std::optional` for conditionally constructing a `ReadLock` object. The read lock object was recently introduced in PR https://github.com/facebook/rocksdb/issues/12624. This change makes the code more concise and clarifies that the lock is not meant to be transferred (as `std::unique_ptr` is movable). It also avoids a heap allocation.\n\nThere are no functional changes.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12704\n\nReviewed By: cbi42\n\nDifferential Revision: D57848192\n\nPulled By: ajkr\n\nfbshipit-source-id: da48c77aac33b51ba5dcc238f98fc48ccf234a21","shortMessageHtmlLink":"Use std::optional instead of std::unique_ptr to conditionally cre…"}},{"before":"0ee7f8bacb727886ac30e866299d5badc72dcf27","after":"d2ef70872fa1cccd10d36b3208201bb9fd46c113","ref":"refs/heads/main","pushedAt":"2024-05-28T16:28:03.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Rename, deprecate `LogFile` and `VectorLogPtr` (#12695)\n\nSummary:\nThese names are confusing with `Logger` etc. so moving to `WalFile` etc.\n\nOther small, related name refactorings.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12695\n\nTest Plan: Left most unit tests using old names as an API compatibility test. Non-test code compiles with deprecated names removed. No functional changes.\n\nReviewed By: ajkr\n\nDifferential Revision: D57747458\n\nPulled By: pdillinger\n\nfbshipit-source-id: 7b77596b9c20d865d43b9dc66c30c8bd2b3b424f","shortMessageHtmlLink":"Rename, deprecate LogFile and VectorLogPtr (#12695)"}},{"before":"0e5ed2e0c888913e211ef83a2369aae402df8621","after":"0ee7f8bacb727886ac30e866299d5badc72dcf27","ref":"refs/heads/main","pushedAt":"2024-05-27T00:32:46.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix `max_read_amp` value in crash test (#12701)\n\nSummary:\nIt should be no less than `level0_file_num_compaction_trigger`(which defaults to 4) when set to a positive value. Otherwise DB open will fail.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12701\n\nTest Plan: crash test not failing DB open due to this option value.\n\nReviewed By: ajkr\n\nDifferential Revision: D57825062\n\nPulled By: cbi42\n\nfbshipit-source-id: 22d8e12aeceb5cef815157845995a8448552e2d2","shortMessageHtmlLink":"Fix max_read_amp value in crash test (#12701)"}},{"before":"bd801bd98cd9446b05b3444e30957d587f0ba08d","after":"0e5ed2e0c888913e211ef83a2369aae402df8621","ref":"refs/heads/main","pushedAt":"2024-05-26T00:13:51.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"add export_file to rockdb TARGETS generator and re-gen\n\nSummary:\nwe are converting the implicit loads to explicit loads, then remove the hidden loads in fbcode macroes.\ndetails see https://fb.workplace.com/groups/devx.build.bffs/permalink/7481848805183560/\n\nReviewed By: JakobDegen\n\nDifferential Revision: D57800976\n\nfbshipit-source-id: a893aa2aa9237704ba9eb998cba210222c95dd2f","shortMessageHtmlLink":"add export_file to rockdb TARGETS generator and re-gen"}},{"before":"fecb10c2fa1501fd71a120793e1913a1ac7407ea","after":"bd801bd98cd9446b05b3444e30957d587f0ba08d","ref":"refs/heads/main","pushedAt":"2024-05-24T21:31:57.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Factor out the RYW transaction building logic into a helper (#12697)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12697\n\nAs groundwork for stress testing `Transaction::MultiGetEntity`, the patch factors out the logic for adding transactional writes for some of the keys in a `MultiGet` batch into a separate helper method called `MaybeAddKeyToTxnForRYW`.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D57791830\n\nfbshipit-source-id: ef347ba6e6e82dfe5cedb4cf67dd6d1503901d89","shortMessageHtmlLink":"Factor out the RYW transaction building logic into a helper (#12697)"}},{"before":"9a72cf1a619a3a0c51bc5572d91071b79f62758e","after":"fecb10c2fa1501fd71a120793e1913a1ac7407ea","ref":"refs/heads/main","pushedAt":"2024-05-24T17:16:38.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Improve universal compaction sorted-run trigger (#12477)\n\nSummary:\nUniversal compaction currently uses `level0_file_num_compaction_trigger` for two purposes:\n1. the trigger for checking if there is any compaction to do, and\n2. the limit on the number of sorted runs. RocksDB will do compaction to keep the number of sorted runs no more than the value of this option.\n\nThis can make the option inflexible. A value that is too small causes higher write amp: more compactions to reduce the number of sorted runs. A value that is too big delays potential compaction work and causes worse read performance. This PR introduce an option `CompactionOptionsUniversal::max_read_amp` for only the second purpose: to specify\nthe hard limit on the number of sorted runs.\n\nFor backward compatibility, `max_read_amp = -1` by default, which means to fallback to the current behavior.\nWhen `max_read_amp > 0`,`level0_file_num_compaction_trigger` will only serve as a trigger to find potential compaction.\nWhen `max_read_amp = 0`, RocksDB will auto-tune the limit on the number of sorted runs. The estimation is based on DB size, write_buffer_size and size_ratio, so it is adaptive to the size change of the DB. See more in `UniversalCompactionBuilder::PickCompaction()`.\nAlternatively, users now can configure `max_read_amp` to a very big value and keep `level0_file_num_compaction_trigger` small. This will allow `size_ratio` and `max_size_amplification_percent` to control the number of sorted runs. This essentially disables compactions with reason kUniversalSortedRunNum.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12477\n\nTest Plan:\n* new unit test\n* existing unit test for default behavior\n* updated crash test with the new option\n* benchmark:\n * Create a DB that is roughly 24GB in the last level. When `max_read_amp = 0`, we estimate that the DB needs 9 levels to avoid excessive compactions to reduce the number of sorted runs.\n * We then run fillrandom to ingest another 24GB data to compare write amp.\n * case 1: small level0 trigger: `level0_file_num_compaction_trigger=5, max_read_amp=-1`\n * write-amp: 4.8\n * case 2: auto-tune: `level0_file_num_compaction_trigger=5, max_read_amp=0`\n * write-amp: 3.6\n * case 3: auto-tune with minimal trigger: `level0_file_num_compaction_trigger=1, max_read_amp=0`\n * write-amp: 3.8\n * case 4: hard-code a good value for trigger: `level0_file_num_compaction_trigger=9`\n * write-amp: 2.8\n```\nCase 1:\n** Compaction Stats [default] **\nLevel Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n L0 0/0 0.00 KB 1.0 0.0 0.0 0.0 22.6 22.6 0.0 1.0 0.0 163.2 141.94 111.10 108 1.314 0 0 0.0 0.0\n L45 8/0 1.81 GB 0.0 39.6 11.1 28.5 39.3 10.8 0.0 3.5 209.0 207.3 194.25 191.29 43 4.517 348M 2498K 0.0 0.0\n L46 13/0 3.12 GB 0.0 15.3 9.5 5.8 15.0 9.3 0.0 1.6 203.1 199.3 77.13 75.88 16 4.821 134M 2362K 0.0 0.0\n L47 19/0 4.68 GB 0.0 15.4 10.5 4.9 14.7 9.8 0.0 1.4 204.0 194.9 77.38 76.15 8 9.673 135M 5920K 0.0 0.0\n L48 38/0 9.42 GB 0.0 19.6 11.7 7.9 17.3 9.4 0.0 1.5 206.5 182.3 97.15 95.02 4 24.287 172M 20M 0.0 0.0\n L49 91/0 22.70 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0\n Sum 169/0 41.74 GB 0.0 89.9 42.9 47.0 109.0 61.9 0.0 4.8 156.7 189.8 587.85 549.45 179 3.284 791M 31M 0.0 0.0\n\nCase 2:\n** Compaction Stats [default] **\nLevel Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n L0 1/0 214.47 MB 1.2 0.0 0.0 0.0 22.6 22.6 0.0 1.0 0.0 164.5 140.81 109.98 108 1.304 0 0 0.0 0.0\n L44 0/0 0.00 KB 0.0 1.3 1.3 0.0 1.2 1.2 0.0 1.0 206.1 204.9 6.24 5.98 3 2.081 11M 51K 0.0 0.0\n L45 4/0 844.36 MB 0.0 7.1 5.4 1.7 7.0 5.4 0.0 1.3 194.6 192.9 37.41 36.00 13 2.878 62M 489K 0.0 0.0\n L46 11/0 2.57 GB 0.0 14.6 9.8 4.8 14.3 9.5 0.0 1.5 193.7 189.8 77.09 73.54 17 4.535 128M 2411K 0.0 0.0\n L47 24/0 5.81 GB 0.0 19.8 12.0 7.8 18.8 11.0 0.0 1.6 191.4 181.1 106.19 101.21 9 11.799 174M 9166K 0.0 0.0\n L48 38/0 9.42 GB 0.0 19.6 11.8 7.9 17.3 9.4 0.0 1.5 197.3 173.6 101.97 97.23 4 25.491 172M 20M 0.0 0.0\n L49 91/0 22.70 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0\n Sum 169/0 41.54 GB 0.0 62.4 40.3 22.1 81.3 59.2 0.0 3.6 136.1 177.2 469.71 423.94 154 3.050 549M 32M 0.0 0.0\n\nCase 3:\n** Compaction Stats [default] **\nLevel Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n L0 0/0 0.00 KB 5.0 0.0 0.0 0.0 22.6 22.6 0.0 1.0 0.0 163.8 141.43 111.13 108 1.310 0 0 0.0 0.0\n L44 0/0 0.00 KB 0.0 0.8 0.8 0.0 0.8 0.8 0.0 1.0 201.4 200.2 4.26 4.19 2 2.130 7360K 33K 0.0 0.0\n L45 4/0 844.38 MB 0.0 6.3 5.0 1.2 6.2 5.0 0.0 1.2 202.0 200.3 31.81 31.50 12 2.651 55M 403K 0.0 0.0\n L46 7/0 1.62 GB 0.0 13.3 8.8 4.6 13.1 8.6 0.0 1.5 198.9 195.7 68.72 67.89 17 4.042 117M 1696K 0.0 0.0\n L47 24/0 5.81 GB 0.0 21.7 12.9 8.8 20.6 11.8 0.0 1.6 198.5 188.6 112.04 109.97 12 9.336 191M 9352K 0.0 0.0\n L48 41/0 10.14 GB 0.0 24.8 13.0 11.8 21.9 10.1 0.0 1.7 198.6 175.6 127.88 125.36 6 21.313 218M 25M 0.0 0.0\n L49 91/0 22.70 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0\n Sum 167/0 41.10 GB 0.0 67.0 40.5 26.4 85.4 58.9 0.0 3.8 141.1 179.8 486.13 450.04 157 3.096 589M 36M 0.0 0.0\n\nCase 4:\n** Compaction Stats [default] **\nLevel Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n L0 0/0 0.00 KB 0.7 0.0 0.0 0.0 22.6 22.6 0.0 1.0 0.0 158.6 146.02 114.68 108 1.352 0 0 0.0 0.0\n L42 0/0 0.00 KB 0.0 1.7 1.7 0.0 1.7 1.7 0.0 1.0 185.4 184.3 9.25 8.96 4 2.314 14M 67K 0.0 0.0\n L43 0/0 0.00 KB 0.0 2.5 2.5 0.0 2.5 2.5 0.0 1.0 197.8 195.6 13.01 12.65 4 3.253 22M 202K 0.0 0.0\n L44 4/0 844.40 MB 0.0 4.2 4.2 0.0 4.1 4.1 0.0 1.0 188.1 185.1 22.81 21.89 5 4.562 36M 503K 0.0 0.0\n L45 13/0 3.12 GB 0.0 7.5 6.5 1.0 7.2 6.2 0.0 1.1 188.7 181.8 40.69 39.32 5 8.138 65M 2282K 0.0 0.0\n L46 17/0 4.18 GB 0.0 8.3 7.1 1.2 7.9 6.6 0.0 1.1 192.2 181.8 44.23 43.06 4 11.058 73M 3846K 0.0 0.0\n L47 22/0 5.34 GB 0.0 8.9 7.5 1.4 8.2 6.8 0.0 1.1 189.1 174.1 48.12 45.37 3 16.041 78M 6098K 0.0 0.0\n L48 27/0 6.58 GB 0.0 9.2 7.6 1.6 8.2 6.6 0.0 1.1 195.2 172.9 48.52 47.11 2 24.262 81M 9217K 0.0 0.0\n L49 91/0 22.70 GB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0\n Sum 174/0 42.74 GB 0.0 42.3 37.0 5.3 62.4 57.1 0.0 2.8 116.3 171.3 372.66 333.04 135 2.760 372M 22M 0.0 0.0\n\nsetup:\n./db_bench --benchmarks=fillseq,compactall,waitforcompaction --num=200000000 --compression_type=none --disable_wal=1 --compaction_style=1 --num_levels=50 --target_file_size_base=268435456 --max_compaction_bytes=6710886400 --level0_file_num_compaction_trigger=10 --write_buffer_size=268435456 --seed 1708494134896523\n\nbenchmark:\n./db_bench --benchmarks=overwrite,waitforcompaction,stats --num=200000000 --compression_type=none --disable_wal=1 --compaction_style=1 --write_buffer_size=268435456 --level0_file_num_compaction_trigger=5 --target_file_size_base=268435456 --use_existing_db=1 --num_levels=50 --writes=200000000 --universal_max_read_amp=-1 --seed=1716488324800233\n\n```\n\nReviewed By: ajkr\n\nDifferential Revision: D55370922\n\nPulled By: cbi42\n\nfbshipit-source-id: 9be69979126b840d08e93e7059260e76a878bb2a","shortMessageHtmlLink":"Improve universal compaction sorted-run trigger (#12477)"}},{"before":"f044b6a6ad812443345ad4a6f680072930379b3b","after":"9a72cf1a619a3a0c51bc5572d91071b79f62758e","ref":"refs/heads/main","pushedAt":"2024-05-24T03:33:44.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add timestamp support in dump_wal/dump/idump (#12690)\n\nSummary:\nAs titled. For dumping wal files, since a mapping from column family id to the user comparator object is needed to print the timestamp in human readable format, option `[--db=]` is added to `dump_wal` command to allow the user to choose to optionally open the DB as read only instance and dump the wal file with better timestamp formatting.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12690\n\nTest Plan:\nManually tested\n\ndump_wal:\n[dump a wal file specified with --walfile]\n```\n>> ./ldb --walfile=$TEST_DB/000004.log dump_wal --print_value\n>>1,1,28,13,PUT(0) : 0x666F6F0100000000000000 : 0x7631\n(Column family id: [0] contained in WAL are not opened in DB. Applied default hex formatting for user key. Specify --db= to open DB for better user key formatting if it contains timestamp.)\n```\n\n[dump with --db specified for better timestamp formatting]\n```\n>> ./ldb --walfile=$TEST_DB/000004.log dump_wal --db=$TEST_DB --print_value\n>> 1,1,28,13,PUT(0) : 0x666F6F|timestamp:1 : 0x7631\n```\n\ndump:\n[dump a file specified with --path]\n```\n>>./ldb --path=/tmp/rocksdbtest-501/column_family_test_75359_17910784957761284041/000004.log dump\nSequence,Count,ByteSize,Physical Offset,Key(s) : value\n1,1,28,13,PUT(0) : 0x666F6F0100000000000000 : 0x7631\n(Column family id: [0] contained in WAL are not opened in DB. Applied default hex formatting for user key. Specify --db= to open DB for better user key formatting if it contains timestamp.)\n```\n\n[dump db specified with --db]\n```\n>> ./ldb --db=/tmp/rocksdbtest-501/column_family_test_75359_17910784957761284041 dump\n>> foo|timestamp:1 ==> v1\nKeys in range: 1\n```\n\nidump\n```\n./ldb --db=$TEST_DB idump\n'foo|timestamp:1' seq:1, type:1 => v1\nInternal keys in range: 1\n```\n\nReviewed By: ltamasi\n\nDifferential Revision: D57755382\n\nPulled By: jowlyzhang\n\nfbshipit-source-id: a0a2ef80c92801cbf7bfccc64769c1191824362e","shortMessageHtmlLink":"Add timestamp support in dump_wal/dump/idump (#12690)"}},{"before":"c72ee4531b288bf08b9414155fafb86cc4378fb4","after":"f044b6a6ad812443345ad4a6f680072930379b3b","ref":"refs/heads/main","pushedAt":"2024-05-23T23:50:47.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix a couple of issues in the stress test for Transaction::MultiGet (#12696)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12696\n\nTwo fixes:\n1) `Random::Uniform(n)` returns an integer from the interval [0, n - 1], so `Uniform(2)` returns 0 or 1, which means is that we have apparently never covered transactions with deletions in the test. (To prevent similar issues, the patch cleans this write logic up a bit using an `enum class` for the type of write.)\n2) The keys passed in to `TestMultiGet` can have duplicates. What this boils down to is that we have to keep track of the latest expected values for read-your-own-writes on a per-key basis.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D57750212\n\nfbshipit-source-id: e8ab603252c32331f8db0dfb2affcca1e188c790","shortMessageHtmlLink":"Fix a couple of issues in the stress test for Transaction::MultiGet (#…"}},{"before":"db0960800a2971e94c73d7e7eb1bd2e9b8d3c54c","after":"c72ee4531b288bf08b9414155fafb86cc4378fb4","ref":"refs/heads/main","pushedAt":"2024-05-22T22:39:08.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix recycled WAL detection when wal_compression is enabled (#12643)\n\nSummary:\nI think the point of the `if (end_of_buffer_offset_ - buffer_.size() == 0)` was to only set `recycled_` when the first record was read. However, the condition was false when reading the first record when the WAL began with a `kSetCompressionType` record because we had already dropped the `kSetCompressionType` record from `buffer_`. To fix this, I used `first_record_read_` instead.\n\nAlso, it was pretty confusing to treat the WAL as non-recycled when a recyclable record first appeared in a non-first record. I changed it to return an error if that happens.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12643\n\nReviewed By: hx235\n\nDifferential Revision: D57238099\n\nPulled By: ajkr\n\nfbshipit-source-id: e20a2a0c9cf0c9510a7b6af463650a05d559239e","shortMessageHtmlLink":"Fix recycled WAL detection when wal_compression is enabled (#12643)"}},{"before":"733150f6aaf52c8dcedb843fc329b472bba4ffd7","after":"db0960800a2971e94c73d7e7eb1bd2e9b8d3c54c","ref":"refs/heads/main","pushedAt":"2024-05-22T18:34:08.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add Transaction::PutEntity to the stress tests (#12688)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12688\n\nAs a first step of covering the wide-column transaction APIs, the patch adds `PutEntity` to the optimistic and pessimistic transaction stress tests (for the latter, only when the WriteCommitted policy is utilized). Other APIs and the multi-operation transaction test will be covered by subsequent PRs.\n\nReviewed By: jaykorean\n\nDifferential Revision: D57675781\n\nfbshipit-source-id: bfe062ec5f6ab48641cd99a70f239ce4aa39299c","shortMessageHtmlLink":"Add Transaction::PutEntity to the stress tests (#12688)"}},{"before":"014368f62c4654c64f1b199faf322b17c6fad56a","after":"733150f6aaf52c8dcedb843fc329b472bba4ffd7","ref":"refs/heads/main","pushedAt":"2024-05-22T18:16:51.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Flush WAL upon DB close (#12684)\n\nSummary:\n**Context/Summary:** https://github.com/facebook/rocksdb/pull/12556 `avoid_sync_during_shutdown=false` missed an edge case where `manual_wal_flush == true` so WAL sync will still miss unflushed WAL. This PR fixes it.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12684\n\nTest Plan: modified UT to include this case `manual_wal_flush==true`\n\nReviewed By: cbi42\n\nDifferential Revision: D57655861\n\nPulled By: hx235\n\nfbshipit-source-id: c9f49fe260e8b38b3ea387558432dcd9a3dbec19","shortMessageHtmlLink":"Flush WAL upon DB close (#12684)"}},{"before":"1827f3f98374a065cef08faf82dbda006beb2830","after":"014368f62c4654c64f1b199faf322b17c6fad56a","ref":"refs/heads/main","pushedAt":"2024-05-22T18:10:39.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix the names of function objects added in PR 12681 (#12689)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12689\n\nThese should be in `snake_case` (not `camelCase`) per our style guide.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D57676418\n\nfbshipit-source-id: 82ad6a87d1540f0b29c2f864ca0128287fe95a9e","shortMessageHtmlLink":"Fix the names of function objects added in PR 12681 (#12689)"}},{"before":"ad6f6e24c80c8aa14ab107711ecf2de482ad89cf","after":"1827f3f98374a065cef08faf82dbda006beb2830","ref":"refs/heads/main","pushedAt":"2024-05-22T14:18:40.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Remove extra semi colon from internal_repo_rocksdb/repo/table/sst_file_reader.cc\n\nSummary:\n`-Wextra-semi` or `-Wextra-semi-stmt`\n\nIf the code compiles, this is safe to land.\n\nReviewed By: palmje\n\nDifferential Revision: D57632757\n\nfbshipit-source-id: 1dbad2a2e185381e225df8b9027033e06aeaf01b","shortMessageHtmlLink":"Remove extra semi colon from internal_repo_rocksdb/repo/table/sst_fil…"}},{"before":"62600cb2d4bf6392d871103cad5a31102c669bea","after":"ad6f6e24c80c8aa14ab107711ecf2de482ad89cf","ref":"refs/heads/main","pushedAt":"2024-05-22T07:52:30.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix txn_write_policy check in crash test script (#12683)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12683\n\nWith optimistic transactions, the stress test parameter `txn_write_policy` is not applicable and is thus not set. When the parameter is subsequently checked, Python's dictionary `get` method returns `None`, which is not equal to zero. The net result of this is that currently, `sync_fault_injection` and `manual_wal_flush_one_in` are always disabled in optimistic transaction mode (most likely unintentionally).\n\nReviewed By: cbi42\n\nDifferential Revision: D57655339\n\nfbshipit-source-id: 8b93a788f9b02307b6ea7b2129dc012271130334","shortMessageHtmlLink":"Fix txn_write_policy check in crash test script (#12683)"}},{"before":"cee32c5ccecc6beaf72ad6c405875792252594b9","after":"62600cb2d4bf6392d871103cad5a31102c669bea","ref":"refs/heads/main","pushedAt":"2024-05-22T00:25:22.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix rebuilding transactions containing PutEntity (#12681)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12681\n\nWhen rebuilding transactions during recovery, `MemtableInserter::PutCFImpl` currently calls `WriteBatchInternal::Put` regardless of value type, which is incorrect for `PutEntity` entries, as well as `TimedPut`s and the blob indexes used by the old BlobDB implementation. The patch fixes the handling of `PutEntity` and returns `NotSupported` for `TimedPut`s and blob indices.\n\nReviewed By: jaykorean, jowlyzhang\n\nDifferential Revision: D57636355\n\nfbshipit-source-id: 833de4e4aa0b42ff6638b72c4181f981d12d0f15","shortMessageHtmlLink":"Fix rebuilding transactions containing PutEntity (#12681)"}},{"before":"d89ab23bec4c4d539f6c9377ef0eaa2163a849b3","after":"cee32c5ccecc6beaf72ad6c405875792252594b9","ref":"refs/heads/main","pushedAt":"2024-05-21T19:59:33.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"use nullptr instead of NULL / 0 in rocksdbjni (#12575)\n\nSummary:\nWhile I was trying to understand issue https://github.com/facebook/rocksdb/issues/12503, I found this minor problem. Please have a look adamretter rhubner\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12575\n\nReviewed By: ajkr\n\nDifferential Revision: D57596055\n\nPulled By: cbi42\n\nfbshipit-source-id: ee0860bdfbee9364cd30c23957b72a04da6acd45","shortMessageHtmlLink":"use nullptr instead of NULL / 0 in rocksdbjni (#12575)"}},{"before":"d7b938882e4e81892721d5787c446348ffb782c8","after":"d89ab23bec4c4d539f6c9377ef0eaa2163a849b3","ref":"refs/heads/main","pushedAt":"2024-05-21T17:21:38.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Disallow memtable flush and sst ingest while WAL is locked (#12652)\n\nSummary:\nWe recently noticed that some memtable flushed and file\ningestions could proceed during LockWAL, in violation of its stated\ncontract. (Note: we aren't 100% sure its actually needed by MySQL, but\nwe want it to be in a clean state nonetheless.)\n\nDespite earlier skepticism that this could be done safely (https://github.com/facebook/rocksdb/issues/12666), I\nfound a place to wait to wait for LockWAL to be cleared before allowing\nthese operations to proceed: WaitForPendingWrites()\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12652\n\nTest Plan:\nAdded to unit tests. Extended how db_stress validates LockWAL\nand re-enabled combination of ingestion and LockWAL in crash test, in\nfollow-up to https://github.com/facebook/rocksdb/issues/12642\n\nRan blackbox_crash_test for a long while with relevant features\namplified.\n\nSuggested follow-up: fix FaultInjectionTestFS to report file sizes\nconsistent with what the user has requested to be flushed.\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D57622142\n\nPulled By: pdillinger\n\nfbshipit-source-id: aef265fce69465618974b4ec47f4636257c676ce","shortMessageHtmlLink":"Disallow memtable flush and sst ingest while WAL is locked (#12652)"}},{"before":"ef1d4955ba703ebec80877a83b88a6dc47cdb398","after":"d7b938882e4e81892721d5787c446348ffb782c8","ref":"refs/heads/main","pushedAt":"2024-05-21T00:37:10.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Sync WAL during db Close() (#12556)\n\nSummary:\n**Context/Summary:**\nBelow crash test found out we don't sync WAL upon DB close, which can lead to unsynced data loss. This PR syncs it.\n```\n./db_stress --threads=1 --disable_auto_compactions=1 --WAL_size_limit_MB=0 --WAL_ttl_seconds=0 --acquire_snapshot_one_in=0 --adaptive_readahead=0 --adm_policy=1 --advise_random_on_open=1 --allow_concurrent_memtable_write=1 --allow_data_in_errors=True --allow_fallocate=0 --async_io=0 --auto_readahead_size=0 --avoid_flush_during_recovery=1 --avoid_flush_during_shutdown=0 --avoid_unnecessary_blocking_io=1 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --bgerror_resume_retry_interval=1000000 --block_align=0 --block_protection_bytes_per_key=2 --block_size=16384 --bloom_before_level=1 --bloom_bits=29.895303579352174 --bottommost_compression_type=disable --bottommost_file_compaction_delay=0 --bytes_per_sync=0 --cache_index_and_filter_blocks=0 --cache_index_and_filter_blocks_with_high_priority=1 --cache_size=33554432 --cache_type=lru_cache --charge_compression_dictionary_building_buffer=1 --charge_file_metadata=0 --charge_filter_construction=1 --charge_table_reader=1 --checkpoint_one_in=0 --checksum_type=kxxHash64 --clear_column_family_one_in=0 --column_families=1 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=0 --compaction_readahead_size=0 --compaction_style=0 --compaction_ttl=0 --compress_format_version=2 --compressed_secondary_cache_ratio=0 --compressed_secondary_cache_size=0 --compression_checksum=1 --compression_max_dict_buffer_bytes=0 --compression_max_dict_bytes=0 --compression_parallel_threads=4 --compression_type=zstd --compression_use_zstd_dict_trainer=1 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --data_block_index_type=0 --db=/dev/shm/rocksdb_test/rocksdb_crashtest_whitebox --db_write_buffer_size=0 --default_temperature=kUnknown --default_write_temperature=kUnknown --delete_obsolete_files_period_micros=0 --delpercent=0 --delrangepercent=0 --destroy_db_initially=1 --detect_filter_construct_corruption=1 --disable_wal=0 --dump_malloc_stats=0 --enable_checksum_handoff=0 --enable_compaction_filter=0 --enable_custom_split_merge=0 --enable_do_not_compress_roles=1 --enable_index_compression=1 --enable_memtable_insert_with_hint_prefix_extractor=0 --enable_pipelined_write=0 --enable_sst_partitioner_factory=0 --enable_thread_tracking=1 --enable_write_thread_adaptive_yield=0 --expected_values_dir=/dev/shm/rocksdb_test/rocksdb_crashtest_expected --fail_if_options_file_error=0 --fifo_allow_compaction=1 --file_checksum_impl=none --fill_cache=0 --flush_one_in=1000 --format_version=5 --get_current_wal_file_one_in=0 --get_live_files_one_in=0 --get_property_one_in=0 --get_sorted_wal_files_one_in=0 --hard_pending_compaction_bytes_limit=274877906944 --high_pri_pool_ratio=0 --index_block_restart_interval=6 --index_shortening=0 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=16384 --iterpercent=0 --key_len_percent_dist=1,30,69 --last_level_temperature=kUnknown --level_compaction_dynamic_level_bytes=1 --lock_wal_one_in=0 --log2_keys_per_lock=10 --log_file_time_to_roll=0 --log_readahead_size=16777216 --long_running_snapshots=0 --low_pri_pool_ratio=0 --lowest_used_cache_tier=0 --manifest_preallocation_size=5120 --manual_wal_flush_one_in=0 --mark_for_compaction_one_file_in=0 --max_auto_readahead_size=0 --max_background_compactions=1 --max_bytes_for_level_base=67108864 --max_key=2500000 --max_key_len=3 --max_log_file_size=0 --max_manifest_file_size=1073741824 --max_sequential_skip_in_iterations=8 --max_total_wal_size=0 --max_write_batch_group_size_bytes=64 --max_write_buffer_number=10 --max_write_buffer_size_to_maintain=0 --memtable_insert_hint_per_batch=0 --memtable_max_range_deletions=0 --memtable_prefix_bloom_size_ratio=0.5 --memtable_protection_bytes_per_key=1 --memtable_whole_key_filtering=1 --memtablerep=skip_list --metadata_charge_policy=0 --min_write_buffer_number_to_merge=1 --mmap_read=0 --mock_direct_io=True --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --num_levels=1 --open_files=-1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=3 --optimize_filters_for_hits=1 --optimize_filters_for_memory=1 --optimize_multiget_for_io=0 --paranoid_file_checks=0 --partition_filters=0 --partition_pinning=1 --pause_background_one_in=0 --periodic_compaction_seconds=0 --prefix_size=1 --prefixpercent=0 --prepopulate_block_cache=0 --preserve_internal_time_seconds=3600 --progress_reports=0 --read_amp_bytes_per_bit=0 --read_fault_one_in=0 --readahead_size=16384 --readpercent=0 --recycle_log_file_num=0 --reopen=2 --report_bg_io_stats=1 --sample_for_compression=5 --secondary_cache_fault_one_in=0 --secondary_cache_uri= --skip_stats_update_on_db_open=1 --snapshot_hold_ops=0 --soft_pending_compaction_bytes_limit=68719476736 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --stats_dump_period_sec=10 --stats_history_buffer_size=1048576 --strict_bytes_per_sync=0 --subcompactions=3 --sync=0 --sync_fault_injection=1 --table_cache_numshardbits=6 --target_file_size_base=16777216 --target_file_size_multiplier=1 --test_batches_snapshots=0 --top_level_index_pinning=0 --unpartitioned_pinning=3 --use_adaptive_mutex=1 --use_adaptive_mutex_lru=0 --use_delta_encoding=1 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_get_entity=0 --use_merge=0 --use_multi_get_entity=0 --use_multiget=1 --use_put_entity_one_in=0 --use_write_buffer_manager=0 --user_timestamp_size=0 --value_size_mult=32 --verification_only=0 --verify_checksum=1 --verify_checksum_one_in=1000 --verify_compression=0 --verify_db_one_in=100000 --verify_file_checksums_one_in=0 --verify_iterator_with_expected_state_one_in=5 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=0 --wal_compression=zstd --write_buffer_size=33554432 --write_dbid_to_manifest=0 --write_fault_one_in=0 --writepercent=100\n\n Verification failed for column family 0 key 000000000000B9D1000000000000012B000000000000017D (4756691): value_from_db: , value_from_expected: 010000000504070609080B0A0D0C0F0E111013121514171619181B1A1D1C1F1E212023222524272629282B2A2D2C2F2E313033323534373639383B3A3D3C3F3E, msg: Iterator verification: Value not found: NotFound:\nVerification failed :(\n```\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12556\n\nTest Plan:\n- New UT\n- Same stress test command failed before this fix but pass after\n- CI\n\nReviewed By: ajkr\n\nDifferential Revision: D56267964\n\nPulled By: hx235\n\nfbshipit-source-id: af1b7e8769c129f64ba1c7f1ff17102f1239b929","shortMessageHtmlLink":"Sync WAL during db Close() (#12556)"}},{"before":"b7520f4815bdbc5eb0d160df2820bdbe15b55947","after":"ef1d4955ba703ebec80877a83b88a6dc47cdb398","ref":"refs/heads/main","pushedAt":"2024-05-21T00:09:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix the output of `ldb dump_wal` for PutEntity records (#12677)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12677\n\nThe patch contains two fixes related to printing `PutEntity` records with `ldb dump_wal`:\n1) It adds the key to the printout (it was missing earlier).\n2) It restores the formatting flags of the output stream after dumping the wide-column structure so that any `hex` flag that might have been set does not affect subsequent printing of e.g. sequence numbers.\n\nReviewed By: jaykorean, jowlyzhang\n\nDifferential Revision: D57591295\n\nfbshipit-source-id: af4e3e219f0082ad39bbdfd26f8c5a57ebb898be","shortMessageHtmlLink":"Fix the output of ldb dump_wal for PutEntity records (#12677)"}},{"before":"35985a988c9fac5fd1f480af0aefd4738af994d4","after":"b7520f4815bdbc5eb0d160df2820bdbe15b55947","ref":"refs/heads/main","pushedAt":"2024-05-20T23:13:42.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Support building ldb with buck (#12676)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12676\n\nThe patch extends the RocksDB buckifier script so it also creates a `buck` target for the `ldb` tool and updates the `TARGETS` file with the results of the new version of the script.\n\nReviewed By: cbi42\n\nDifferential Revision: D57588789\n\nfbshipit-source-id: 2ed58b405b3f216e802cf6bcbdbf9809e7386c8b","shortMessageHtmlLink":"Support building ldb with buck (#12676)"}},{"before":"c87f5cf91c17407fb4a67ebe09213bb26eaaa572","after":"35985a988c9fac5fd1f480af0aefd4738af994d4","ref":"refs/heads/main","pushedAt":"2024-05-20T20:26:17.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix value of `inplace_update_support` across stress test runs (#12675)\n\nSummary:\nthe value of `inplace_update_support` option need to be fixed across runs of db_stress on the same DB (https://github.com/facebook/rocksdb/issues/12577). My recent fix (https://github.com/facebook/rocksdb/issues/12673) regressed this behavior. Also fix some existing places where this does not hold.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12675\n\nTest Plan: monitor crash tests related to `inplace_update_support`.\n\nReviewed By: hx235\n\nDifferential Revision: D57576375\n\nPulled By: cbi42\n\nfbshipit-source-id: 75b1bd233f03e5657984f5d5234dbbb1ffc35c27","shortMessageHtmlLink":"Fix value of inplace_update_support across stress test runs (#12675)"}},{"before":"f910a0c025bf9c415c6906aede786697d1a987da","after":"c87f5cf91c17407fb4a67ebe09213bb26eaaa572","ref":"refs/heads/main","pushedAt":"2024-05-20T17:46:14.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Add GetEntityForUpdate to optimistic and WriteCommitted pessimistic transactions (#12668)\n\nSummary:\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12668\n\nThe patch adds a new `GetEntityForUpdate` API to optimistic and WriteCommitted pessimistic transactions, which provides transactional wide-column point lookup functionality with concurrency control. For WriteCommitted transactions, user-defined timestamps are also supported similarly to the `GetForUpdate` API.\n\nReviewed By: jaykorean\n\nDifferential Revision: D57458304\n\nfbshipit-source-id: 7eadbac531ca5446353e494abbd0635d63f62d24","shortMessageHtmlLink":"Add GetEntityForUpdate to optimistic and WriteCommitted pessimistic t…"}},{"before":"4dd084f66dcfc747d6a42ebc8cfc576328078f45","after":"f910a0c025bf9c415c6906aede786697d1a987da","ref":"refs/heads/main","pushedAt":"2024-05-20T16:45:04.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Fix unreleased bug fix .md name (#12672)\n\nSummary:\nContext/Summary: as above\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12672\n\nTest Plan: no code change\n\nReviewed By: ajkr\n\nDifferential Revision: D57505136\n\nPulled By: hx235\n\nfbshipit-source-id: 0e216dc5974e9be10027b444eb6b4034f679dfd8","shortMessageHtmlLink":"Fix unreleased bug fix .md name (#12672)"}},{"before":"c4782bde41fa368de3b9c63570357c8063f6e88e","after":"4dd084f66dcfc747d6a42ebc8cfc576328078f45","ref":"refs/heads/main","pushedAt":"2024-05-19T01:05:54.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"fix gcc warning about dangling-reference in backup_engine_test (#12637)\n\nSummary:\ngcc 14.1 reports some warnings about dangling-reference occured in backup_engine_test.\n```c++\n/data/rocksdb/utilities/backup/backup_engine_test.cc: In member function 'virtual void rocksdb::{anonymous}::BackupEngineTest_ExcludeFiles_Test::TestBody()':\n/data/rocksdb/utilities/backup/backup_engine_test.cc:4411:64: error: possibly dangling reference to a temporary [-Werror=dangling-reference]\n 4411 | std::make_pair(alt_backup_engine, backup_engine_.get())}) {\n | ^\n/data/rocksdb/utilities/backup/backup_engine_test.cc:4410:23: note: the temporary was destroyed at the end of the full expression 'std::make_pair(((rocksdb::{anonymous}::BackupEngineTest_ExcludeFiles_Test*)this)->rocksdb::{anonymous}::BackupEngineTest_ExcludeFiles_Test::rocksdb::{anonymous}::BackupEngineTest.rocksdb::{anonymous}::BackupEngineTest::backup_engine_.std::unique_ptr::get(), alt_backup_engine)'\n 4410 | {std::make_pair(backup_engine_.get(), alt_backup_engine),\n | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/data/rocksdb/utilities/backup/backup_engine_test.cc:4411:64: error: possibly dangling reference to a temporary [-Werror=dangling-reference]\n 4411 | std::make_pair(alt_backup_engine, backup_engine_.get())}) {\n | ^\n/data/rocksdb/utilities/backup/backup_engine_test.cc:4411:23: note: the temporary was destroyed at the end of the full expression 'std::make_pair(alt_backup_engine, ((rocksdb::{anonymous}::BackupEngineTest_ExcludeFiles_Test*)this)->rocksdb::{anonymous}::BackupEngineTest_ExcludeFiles_Test::rocksdb::{anonymous}::BackupEngineTest.rocksdb::{anonymous}::BackupEngineTest::backup_engine_.std::unique_ptr::get())'\n 4411 | std::make_pair(alt_backup_engine, backup_engine_.get())}) {\n | ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n```\nIt seems to be related to this update in gcc:\nhttps://gcc.gnu.org/gcc-14/changes.html#:~:text=%2DWdangling%2Dreference%20false%20positives%20have%20been%20reduced.%20The%20warning%20does%20not%20warn%20about%20std%3A%3Aspan%2Dlike%20classes%3B%20there%20is%20also%20a%20new%20attribute%20gnu%3A%3Ano_dangling%20to%20suppress%20the%20warning.%20See%20the%20manual%20for%20more%20info.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12637\n\nReviewed By: cbi42\n\nDifferential Revision: D57263996\n\nPulled By: ajkr\n\nfbshipit-source-id: 1e416c38240d3d1adda787fc484c0392e28bb7f1","shortMessageHtmlLink":"fix gcc warning about dangling-reference in backup_engine_test (#12637)"}},{"before":"0ed93552f4cd6004e966815e1c18347e01628830","after":"c4782bde41fa368de3b9c63570357c8063f6e88e","ref":"refs/heads/main","pushedAt":"2024-05-18T23:51:46.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Disable `inplace_update_support` in crash test with unsynced data loss (#12673)\n\nSummary:\nWith unsynced data loss, we replay traces to recover expected state to DB's latest sequence number. With `inplace_update_support`, the largest sequence number of memtable may not reflect the latest update. This is because inplace updates in memtable do not update sequence number. So we disable `inplace_update_support` where traces need to be replayed.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12673\n\nReviewed By: ltamasi\n\nDifferential Revision: D57512548\n\nPulled By: cbi42\n\nfbshipit-source-id: 69278fe2e935874faf744d0ac4fd85263773c3ec","shortMessageHtmlLink":"Disable inplace_update_support in crash test with unsynced data loss ("}},{"before":"ffd7930312dffc6fe7a29e05c4f5870db23869b8","after":"0ed93552f4cd6004e966815e1c18347e01628830","ref":"refs/heads/main","pushedAt":"2024-05-18T02:17:34.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"facebook-github-bot","name":"Facebook Community Bot","path":"/facebook-github-bot","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/6422482?s=80&v=4"},"commit":{"message":"Implement obsolete file deletion (GC) in follower (#12657)\n\nSummary:\nThis PR implements deletion of obsolete files in a follower RocksDB instance. The follower tails the leader's MANIFEST and creates links to newly added SST files. These links need to be deleted once those files become obsolete in order to reclaim space. There are three cases to be considered -\n1. New files added and links created, but the Version could not be installed due to some missing files. Those links need to be preserved so a subsequent catch up attempt can succeed. We insert the next file number in the `VersionSet` to `pending_outputs_` to prevent their deletion.\n2. Files deleted from the previous successfully installed `Version`. These are deleted as usual in `PurgeObsoleteFiles`.\n3. New files added by a `VersionEdit` and deleted by a subsequent `VersionEdit`, both processed in the same catchup attempt. Links will be created for the new files when verifying a candidate `Version`. Those need to be deleted explicitly as they're never added to `VersionStorageInfo`, and thus not deleted by `PurgeObsoleteFiles`.\n\nTest plan -\nNew unit tests in `db_follower_test`.\n\nPull Request resolved: https://github.com/facebook/rocksdb/pull/12657\n\nReviewed By: jowlyzhang\n\nDifferential Revision: D57462697\n\nPulled By: anand1976\n\nfbshipit-source-id: 898f15570638dd4930f839ffd31c560f9cb73916","shortMessageHtmlLink":"Implement obsolete file deletion (GC) in follower (#12657)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEVmE3BQA","startCursor":null,"endCursor":null}},"title":"Activity · facebook/rocksdb"}