Stockfish Testing Queue

Cores: 1650
Nodes / sec: 1032M
Games / min: 1494
Time remaining: 53.2h

Workers - 170 machines

Pending approval - 1 test

26-01-06 Ali
History_second_order diff
LLR: 0.00 (-2.94,2.94) <0.00,2.00>
Total: 0 W: 0 L: 0 D: 0
Ptnml(0-2): 0, 0, 0, 0, 0
sprt @ 10+0.1 th 1
cores: 0 (0)
Tune only the bonus, malus, and countermove, not the pawn and low-ply history from the original patch.

Paused - 16 tests

26-01-04 max
feature/non-pawn-quiet diff
LLR: -0.14 (-2.94,2.94) <0.00,2.00>
Total: 28064 W: 7273 L: 7244 D: 13547
Ptnml(0-2): 83, 3295, 7255, 3308, 91
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR for non-pawn quiet moves
26-01-04 max
feature/minor-piece-qui diff
LLR: -0.12 (-2.94,2.94) <0.00,2.00>
Total: 4800 W: 1256 L: 1260 D: 2284
Ptnml(0-2): 13, 607, 1175, 581, 24
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR for minor piece quiet moves at depth
26-01-04 max
feature/additive-medium diff
LLR: -0.05 (-2.94,2.94) <0.00,2.00>
Total: 26240 W: 6784 L: 6750 D: 12706
Ptnml(0-2): 67, 3098, 6754, 3136, 65
sprt @ 10+0.1 th 1
cores: 0 (0)
Additive LMR with medium coefficients for quiet moves
26-01-04 max
feature/improving-quiet diff
LLR: -1.08 (-2.94,2.94) <0.00,2.00>
Total: 22272 W: 5671 L: 5734 D: 10867
Ptnml(0-2): 80, 2649, 5736, 2596, 75
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR reduction for quiet moves when position is improving
26-01-05 max
feature/improving-reduc diff
LLR: -0.20 (-2.94,2.94) <0.00,2.00>
Total: 6656 W: 1710 L: 1718 D: 3228
Ptnml(0-2): 21, 780, 1724, 792, 11
sprt @ 10+0.1 th 1
cores: 0 (0)
LMR: reduce less for quiet moves when position is improving (depth >= 12, -256)
26-01-04 max
feature/minor-piece-qui diff
LLR: 0.03 (-2.94,2.94) <0.00,2.00>
Total: 33888 W: 8748 L: 8695 D: 16445
Ptnml(0-2): 114, 4036, 8580, 4111, 103
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR for minor piece quiet moves at depth
26-01-04 max
feature/additive-aggres diff
LLR: 0.08 (-2.94,2.94) <0.00,2.00>
Total: 22880 W: 5900 L: 5859 D: 11121
Ptnml(0-2): 63, 2677, 5940, 2676, 84
sprt @ 10+0.1 th 1
cores: 0 (0)
Additive LMR with full coefficients for quiet moves
26-01-05 max
feature/beta-proximity- diff
LLR: 0.09 (-2.94,2.94) <0.00,2.00>
Total: 9344 W: 2386 L: 2364 D: 4594
Ptnml(0-2): 26, 1093, 2410, 1119, 24
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR for quiet moves near beta
26-01-04 max
feature/non-checking-qu diff
LLR: 0.32 (-2.94,2.94) <0.00,2.00>
Total: 34784 W: 9100 L: 9020 D: 16664
Ptnml(0-2): 127, 4052, 8951, 4138, 124
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR for non-checking quiet moves
26-01-04 max
feature/non-pv-quiet diff
LLR: 0.66 (-2.94,2.94) <0.00,2.00>
Total: 55776 W: 14430 L: 14289 D: 27057
Ptnml(0-2): 195, 6503, 14343, 6660, 187
sprt @ 10+0.1 th 1
cores: 0 (0)
Increase LMR for quiet moves at non-PV nodes
25-12-28 sg
tweak_lmr_scale5 diff
LLR: -0.83 (-2.94,2.94) <0.00,2.00>
Total: 195584 W: 50695 L: 50485 D: 94404
Ptnml(0-2): 621, 22093, 52123, 22365, 590
sprt @ 10+0.1 th 1
cores: 0 (0)
Exclude tt move from LMR scaling.
25-12-31 sg
hist_prune diff
LLR: -1.74 (-2.94,2.94) <0.00,2.00>
Total: 90976 W: 23390 L: 23411 D: 44175
Ptnml(0-2): 284, 10759, 23427, 10730, 288
sprt @ 10+0.1 th 1
cores: 0 (0)
Fixed: More quiet history pruning if improving.
26-01-01 sg
hist_prune_a diff
LLR: -1.82 (-2.94,2.94) <0.00,2.00>
Total: 48448 W: 12375 L: 12465 D: 23608
Ptnml(0-2): 143, 5749, 12510, 5699, 123
sprt @ 10+0.1 th 1
cores: 0 (0)
Half effect: More quiet history pruning if improving.
26-01-04 sg
tweak_lmr_scale_a5 diff
LLR: -0.66 (-2.94,2.94) <0.00,2.00>
Total: 46944 W: 12057 L: 12047 D: 22840
Ptnml(0-2): 115, 5543, 12173, 5499, 142
sprt @ 10+0.1 th 1
cores: 0 (0)
Use the following depth-dependent reduction factor: r * (depth + 1) / (depth * depth + 3).
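The tweak_lmr_scale_a* entries in this queue all vary this one factor. As a rough, hedged sketch only (the function, its names, and how it plugs into the search are assumptions, not the actual diff), the scaling amounts to:

    // Sketch of the depth-dependent LMR scaling quoted above; the variants
    // in this queue change the numerator and the additive offset in the denominator.
    int scaledReduction(int r, int depth) {
        return r * (depth + 1) / (depth * depth + 3);
    }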
25-12-30 sg
tweak_lmr_scale10 diff
LLR: 0.28 (-2.94,2.94) <0.00,2.00>
Total: 169056 W: 43926 L: 43651 D: 81479
Ptnml(0-2): 578, 19827, 43455, 20078, 590
sprt @ 10+0.1 th 1
cores: 0 (0)
Use offset=3 for the new depth-dependent reduction scaling (that's equal to half effect and decay).
26-01-05 max
feature/additive-three- diff
LLR: -0.46 (-2.94,2.94) <0.50,2.50>
Total: 7026 W: 1794 L: 1817 D: 3415
Ptnml(0-2): 4, 770, 1987, 749, 3
sprt @ 60+0.6 th 1
cores: 0 (0)
LTC: Additive LMR with three independent signals

Failed - 0 tests

No failed tests on this page

Active - 41 tests

26-01-05 fau
ffff2b diff
LLR: 1.22 (-2.94,2.94) <0.00,2.00>
Total: 33216 W: 8589 L: 8431 D: 16196
Ptnml(0-2): 106, 3849, 8554, 3979, 120
sprt @ 10+0.1 th 1
cores: 21 (5)
ffff2b
26-01-05 fau
ffff2a diff
LLR: 1.04 (-2.94,2.94) <0.00,2.00>
Total: 39776 W: 10292 L: 10142 D: 19342
Ptnml(0-2): 103, 4637, 10276, 4751, 121
sprt @ 10+0.1 th 1
cores: 29 (5)
ffff2a
26-01-04 sg
tweak_lmr_scale_a3 diff
LLR: 0.71 (-2.94,2.94) <0.00,2.00>
Total: 117504 W: 30400 L: 30165 D: 56939
Ptnml(0-2): 293, 13891, 30246, 13932, 390
sprt @ 10+0.1 th 1
cores: 23 (7)
Use the following depth-dependent reduction factor: r * depth / (depth * depth + 3).
26-01-04 sni
nnue_complexity diff
LLR: 0.61 (-2.94,2.94) <0.00,2.00>
Total: 88192 W: 22841 L: 22656 D: 42695
Ptnml(0-2): 286, 10382, 22589, 10539, 300
sprt @ 10+0.1 th 1
cores: 32 (6)
take 7
26-01-05 0x5
exp-gbquietsort diff
LLR: 0.43 (-2.94,2.94) <-1.75,0.25>
Total: 63872 W: 16498 L: 16530 D: 30844
Ptnml(0-2): 213, 7489, 16554, 7477, 203
sprt @ 10+0.1 th 1
cores: 38 (4)
consider anything we sort as good
26-01-05 Viz
qsPrAdjc4 diff
LLR: 0.23 (-2.94,2.94) <0.00,2.00>
Total: 20736 W: 5397 L: 5347 D: 9992
Ptnml(0-2): 46, 2415, 5420, 2417, 70
sprt @ 10+0.1 th 1
cores: 28 (3)
Take 4
26-01-05 max
feature/early-game-quie diff
LLR: 0.08 (-2.94,2.94) <0.00,2.00>
Total: 25024 W: 6544 L: 6500 D: 11980
Ptnml(0-2): 83, 2907, 6489, 2949, 84
sprt @ 10+0.1 th 1
cores: 14 (3)
LMR: increase reduction for quiet moves in early game (game_ply < 40, depth >= 12, +256)
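Many of the feature/* quiet-move tests above and below share one pattern: add a fixed offset to the LMR reduction when a condition holds. A minimal, hedged sketch of that pattern using the parameters quoted in this entry (the names and the roughly-1024-units-per-ply grain of r are assumptions, not the actual diff):

    // Illustration only: bump the reduction r for quiet moves in the early game.
    int adjustedReduction(int r, int depth, int gamePly, bool isQuiet) {
        if (isQuiet && depth >= 12 && gamePly < 40)
            r += 256;   // +256 on a scale where ~1024 corresponds to one ply
        return r;
    }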
26-01-05 Viz
moreCorrhists1 diff
LLR: -0.11 (-2.94,2.94) <0.00,2.00>
Total: 42664 W: 10980 L: 10930 D: 20754
Ptnml(0-2): 62, 4758, 11623, 4846, 43
sprt @ 5+0.05 th 8
cores: 72 (4)
Take 1
26-01-05 Tar
optimism_re-evaluate diff
LLR: -0.19 (-2.94,2.94) <0.00,2.00>
Total: 63200 W: 16242 L: 16166 D: 30792
Ptnml(0-2): 204, 7417, 16272, 7513, 194
sprt @ 10+0.1 th 1
cores: 38 (4)
take 3
26-01-04 sg
tweak_lmr_scale_a4 diff
LLR: -0.33 (-2.94,2.94) <0.00,2.00>
Total: 125888 W: 32493 L: 32337 D: 61058
Ptnml(0-2): 413, 14822, 32319, 14976, 414
sprt @ 10+0.1 th 1
cores: 17 (3)
Use the following depth-dependent reduction factor: r * depth / (depth * depth + 4).
26-01-05 max
feature/additive-two-si diff
LLR: -0.38 (-2.94,2.94) <0.50,2.50>
Total: 16752 W: 4187 L: 4184 D: 8381
Ptnml(0-2): 13, 1865, 4611, 1880, 7
sprt @ 60+0.6 th 1
cores: 51 (5)
LTC: Additive LMR for non-PV and non-pawn quiet moves
26-01-05 max
feature/late-move-quiet diff
LLR: -0.40 (-2.94,2.94) <0.00,2.00>
Total: 25120 W: 6477 L: 6476 D: 12167
Ptnml(0-2): 78, 2962, 6470, 2981, 69
sprt @ 10+0.1 th 1
cores: 12 (3)
Increase LMR for late quiet moves
26-01-05 fau
ffff2c diff
LLR: -0.41 (-2.94,2.94) <0.00,2.00>
Total: 28832 W: 7429 L: 7423 D: 13980
Ptnml(0-2): 84, 3415, 7407, 3431, 79
sprt @ 10+0.1 th 1
cores: 20 (3)
ffff2c
26-01-05 max
feature/high-reduction- diff
LLR: -0.44 (-2.94,2.94) <0.00,2.00>
Total: 12928 W: 3319 L: 3339 D: 6270
Ptnml(0-2): 28, 1533, 3366, 1505, 32
sprt @ 10+0.1 th 1
cores: 12 (4)
LMR: increase reduction for quiet moves with already-high r (r > 1024, depth >= 12, +256)
26-01-05 jay
qsearch-pvnode-quiet-ch diff
LLR: -0.48 (-2.94,2.94) <0.00,2.00>
Total: 77056 W: 19858 L: 19787 D: 37411
Ptnml(0-2): 265, 8938, 20074, 8963, 288
sprt @ 10+0.1 th 1
cores: 39 (5)
qsearch: allow quiet checks on PvNode
26-01-03 fau
isus2 diff
LLR: -0.55 (-2.94,2.94) <-1.75,0.25>
Total: 245760 W: 63102 L: 63423 D: 119235
Ptnml(0-2): 804, 29276, 63001, 29035, 764
sprt @ 10+0.1 th 1
cores: 24 (2)
isus2
26-01-03 Viz
qsPrAdjc3 diff
LLR: -0.57 (-2.94,2.94) <0.00,2.00>
Total: 137152 W: 35528 L: 35377 D: 66247
Ptnml(0-2): 430, 16098, 35376, 16235, 437
sprt @ 10+0.1 th 1
cores: 16 (2)
Take 3
26-01-05 max
feature/not-improving-q diff
LLR: -0.58 (-2.94,2.94) <0.00,2.00>
Total: 8928 W: 2260 L: 2298 D: 4370
Ptnml(0-2): 22, 1085, 2291, 1041, 25
sprt @ 10+0.1 th 1
cores: 13 (3)
LMR: increase reduction for quiet moves when position is not improving (depth >= 12, +256)
26-01-01 Viz
lesserYoink4 diff
LLR: -0.67 (-2.94,2.94) <0.50,2.50>
Total: 212412 W: 54092 L: 53716 D: 104604
Ptnml(0-2): 114, 22739, 60122, 23119, 112
sprt @ 60+0.6 th 1
cores: 108 (4)
LTC: Take 4
26-01-04 Ali
History_second_order diff
LLR: -0.74 (-2.94,2.94) <0.00,2.00>
Total: 206368 W: 53570 L: 53330 D: 99468
Ptnml(0-2): 676, 24499, 52613, 24701, 695
sprt @ 10+0.1 th 1
cores: 35 (5)
The SPSA tune seems to be counterproductive. Back to manual tunes.
26-01-05 max
feature/additive-three- diff
LLR: -1.21 (-2.94,2.94) <0.50,2.50>
Total: 20700 W: 5267 L: 5324 D: 10109
Ptnml(0-2): 14, 2282, 5811, 2233, 10
sprt @ 60+0.6 th 1
cores: 57 (6)
LTC: Additive LMR with three signals and higher coefficients
26-01-04 sni
nnue_complexity diff
LLR: -1.26 (-2.94,2.94) <0.00,2.00>
Total: 83264 W: 21567 L: 21556 D: 40141
Ptnml(0-2): 262, 9871, 21406, 9780, 313
sprt @ 10+0.1 th 1
cores: 31 (7)
take 8
26-01-04 Tar
optimism_re-evaluate diff
LLR: -1.30 (-2.94,2.94) <0.00,2.00>
Total: 149152 W: 38557 L: 38452 D: 72143
Ptnml(0-2): 506, 17653, 38176, 17712, 529
sprt @ 10+0.1 th 1
cores: 36 (6)
take 2
26-01-04 sni
nnue_complexity^ diff
LLR: -1.31 (-2.94,2.94) <0.00,2.00>
Total: 140384 W: 36389 L: 36299 D: 67696
Ptnml(0-2): 392, 16712, 35937, 16716, 435
sprt @ 10+0.1 th 1
cores: 22 (3)
take 3
26-01-04 sg
tweak_lmr_scale_a2 diff
LLR: -1.41 (-2.94,2.94) <0.00,2.00>
Total: 128832 W: 33031 L: 32967 D: 62834
Ptnml(0-2): 385, 15252, 33072, 15328, 379
sprt @ 10+0.1 th 1
cores: 20 (3)
Use the following depth-dependent reduction factor: r * depth / (depth * depth + 2).
26-01-05 0x5
exp-gbquietsort diff
LLR: -1.45 (-2.94,2.94) <-1.75,0.25>
Total: 41664 W: 10609 L: 10785 D: 20270
Ptnml(0-2): 130, 5052, 10672, 4820, 158
sprt @ 10+0.1 th 1
cores: 32 (6)
don't sort bad quiets fixed
26-01-05 max
feature/post-capture-re diff
LLR: -1.51 (-2.94,2.94) <0.00,2.00>
Total: 18880 W: 4812 L: 4918 D: 9150
Ptnml(0-2): 58, 2275, 4883, 2163, 61
sprt @ 10+0.1 th 1
cores: 12 (3)
LMR: reduce less for quiet moves after opponent capture
26-01-03 sg
tweak_lmr_scale_a diff
LLR: -1.55 (-2.94,2.94) <0.00,2.00>
Total: 122816 W: 31749 L: 31706 D: 59361
Ptnml(0-2): 366, 14464, 31691, 14535, 352
sprt @ 10+0.1 th 1
cores: 19 (2)
Use the following depth-dependent reduction factor: r * depth / (depth * depth + 1).
26-01-04 sg
tweak_lmr_scale_a6 diff
LLR: -1.58 (-2.94,2.94) <0.00,2.00>
Total: 60512 W: 15504 L: 15555 D: 29453
Ptnml(0-2): 180, 7157, 15613, 7146, 160
sprt @ 10+0.1 th 1
cores: 16 (3)
Use the following depth-dependent reduction factor: r * depth / (depth * depth + 5).
26-01-04 ane
shared-cont-corr diff
LLR: -1.67 (-2.94,2.94) <0.50,2.50>
Total: 94334 W: 24240 L: 24187 D: 45907
Ptnml(0-2): 30, 9589, 27889, 9616, 43
sprt @ 20+0.2 th 8
cores: 338 (13)
LTC: Share continuation correction history first tried by Kieren of Halogen fame
26-01-05 Viz
captHistClear3 diff
LLR: -1.86 (-2.94,2.94) <0.00,2.00>
Total: 41568 W: 10603 L: 10706 D: 20259
Ptnml(0-2): 121, 4926, 10787, 4835, 115
sprt @ 10+0.1 th 1
cores: 15 (4)
Take 3
26-01-04 Dan
lmr_fds_mh_1 diff
LLR: -1.89 (-2.94,2.94) <0.00,2.00>
Total: 148704 W: 38382 L: 38331 D: 71991
Ptnml(0-2): 478, 17636, 38099, 17635, 504
sprt @ 10+0.1 th 1
cores: 37 (5)
take 1
26-01-04 Ali
History_second_order diff
LLR: -1.92 (-2.94,2.94) <0.00,2.00>
Total: 149024 W: 38470 L: 38421 D: 72133
Ptnml(0-2): 506, 17582, 38279, 17647, 498
sprt @ 10+0.1 th 1
cores: 33 (2)
This one got quite a good move ranking during debug. Test it.
26-01-05 fau
ffff1 diff
LLR: -2.17 (-2.94,2.94) <0.50,2.50>
Total: 37620 W: 9478 L: 9579 D: 18563
Ptnml(0-2): 14, 4162, 10559, 4061, 14
sprt @ 60+0.6 th 1
cores: 94 (2)
LTC: ffff1
26-01-04 max
feature/tt-confirmed-qu diff
LLR: -2.33 (-2.94,2.94) <0.50,2.50>
Total: 49128 W: 12561 L: 12652 D: 23915
Ptnml(0-2): 29, 5369, 13855, 5286, 25
sprt @ 60+0.6 th 1
cores: 43 (7)
LTC: Increase LMR for quiet moves with TT confirmation
26-01-05 Viz
moreCorrhists2 diff
LLR: -2.36 (-2.94,2.94) <0.00,2.00>
Total: 43344 W: 11018 L: 11157 D: 21169
Ptnml(0-2): 61, 4996, 11698, 4855, 62
sprt @ 5+0.05 th 8
cores: 56 (4)
Take 2
26-01-04 Dan
lmr_fds_mh_2 diff
LLR: -2.39 (-2.94,2.94) <0.00,2.00>
Total: 170688 W: 43990 L: 43951 D: 82747
Ptnml(0-2): 584, 20167, 43768, 20276, 549
sprt @ 10+0.1 th 1
cores: 40 (3)
take 2
26-01-05 max
feature/prior-capture-q diff
LLR: -2.53 (-2.94,2.94) <0.00,2.00>
Total: 23040 W: 5852 L: 6040 D: 11148
Ptnml(0-2): 64, 2767, 6048, 2575, 66
sprt @ 10+0.1 th 1
cores: 11 (2)
LMR: increase reduction for quiet moves after opponent captured (priorCapture, depth >= 12, +512)
26-01-05 Ali
History_second_order diff
LLR: -2.53 (-2.94,2.94) <0.00,2.00>
Total: 47200 W: 11988 L: 12144 D: 23068
Ptnml(0-2): 162, 5651, 12111, 5533, 143
sprt @ 10+0.1 th 1
cores: 40 (4)
Try tuning the value of lph and pawn but not the bonus and malus.
26-01-04 fau
ffff1t diff
40900/90000 iterations
81800/180000 games played
180000 @ 10+0.1 th 1
cores: 20 (3)
ffff1t
26-01-01 sg
tune_lmr_conditions diff
38208/100000 iterations
76416/200000 games played
200000 @ 60+0.6 th 1
cores: 36 (2)
Add reductions for the combinations (including negations) of 15 conditions (451 parameters). Also use an otherwise unused parameter 'Random' to estimate the random-walk variance. Apply the reductions only at non-ttPv nodes to avoid scaling problems. Tune at LTC with half throughput. Later I will test the most significant changes.
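A hedged sketch of the single-condition part of this idea (the pairwise combinations that bring the parameter count to 451 are omitted; all names, array sizes, and the SPSA hookup are assumptions, not the actual tune):

    #include <array>

    // One tunable reduction offset per condition and per negated condition,
    // applied only at non-ttPv nodes as described above.
    int conditionOffsets(const std::array<bool, 15>& cond,
                         const std::array<int, 15>& offTrue,
                         const std::array<int, 15>& offFalse,
                         bool ttPv)
    {
        if (ttPv)
            return 0;      // skip ttPv nodes to avoid the scaling problems noted above
        int extra = 0;
        for (int i = 0; i < 15; ++i)
            extra += cond[i] ? offTrue[i] : offFalse[i];
        return extra;      // the caller adds this to the LMR reduction r
    }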

Finished - 179793 tests

26-01-04 ane idek diff
LLR: -2.95 (-2.94,2.94) <0.00,2.00>
Total: 88928 W: 22800 L: 22931 D: 43197
Ptnml(0-2): 276, 10587, 22872, 10450, 279
sprt @ 10+0.1 th 1 Take 2
26-01-03 ane l3-lover-grp32 diff
LLR: 0.17 (-2.94,2.94) <0.50,2.50>
Total: 3952 W: 1022 L: 1004 D: 1926
Ptnml(0-2): 0, 282, 1394, 300, 0
sprt @ 20+0.2 th 128 LTC: 4 32-thread groups vs. 8 16-thread groups (~3.5% performance loss, but maybe more history sharing is caring?)
26-01-04 sni nnue_complexity diff
LLR: -1.41 (-2.94,2.94) <0.00,2.00>
Total: 47488 W: 12248 L: 12303 D: 22937
Ptnml(0-2): 127, 5617, 12319, 5546, 135
sprt @ 10+0.1 th 1 take 9
26-01-02 ane atomic-ref-lover diff
LLR: -2.96 (-2.94,2.94) <0.00,2.00>
Total: 124128 W: 31865 L: 31943 D: 60320
Ptnml(0-2): 400, 13778, 33775, 13722, 389
sprt @ 10+0.1 th 1 Does using std::atomic_ref (C++20) lead to a speedup
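For context, std::atomic_ref (C++20) lets a thread perform atomic operations on an ordinary, non-atomic object. A minimal standalone example, unrelated to how the patch itself used it:

    #include <atomic>

    // Atomically increment a plain int through an atomic_ref view of it.
    void bump(int& counter) {
        std::atomic_ref<int> ref(counter);            // object must be suitably aligned
        ref.fetch_add(1, std::memory_order_relaxed);
    }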
26-01-04 sni nnue_complexity^ diff
LLR: -2.61 (-2.94,2.94) <0.00,2.00>
Total: 50432 W: 12812 L: 12969 D: 24651
Ptnml(0-2): 138, 6056, 13026, 5817, 179
sprt @ 10+0.1 th 1 take 5
26-01-05 0x5 exp-gbquietsort diff
LLR: 0.00 (-2.94,2.94) <0.00,2.00>
Total: 0 W: 0 L: 0 D: 0
Ptnml(0-2): 0, 0, 0, 0, 0
sprt @ 10+0.1 th 1 don't sort bad quiets fixed
26-01-05 0x5 exp-gbquietsort diff
LLR: -0.18 (-2.94,2.94) <0.00,2.00>
Total: 224 W: 50 L: 67 D: 107
Ptnml(0-2): 1, 39, 49, 22, 1
sprt @ 10+0.1 th 1 don't sort bad quiets
26-01-05 0x5 exp-gbquietsort diff
LLR: 0.00 (-2.94,2.94) <-1.75,0.25>
Total: 0 W: 0 L: 0 D: 0
Ptnml(0-2): 0, 0, 0, 0, 0
sprt @ 10+0.1 th 1 consider anything we sort as good
26-01-04 Viz captHistClear2 diff
LLR: -2.95 (-2.94,2.94) <0.00,2.00>
Total: 47968 W: 12293 L: 12484 D: 23191
Ptnml(0-2): 142, 5777, 12347, 5566, 152
sprt @ 10+0.1 th 1 Take 2
26-01-03 sni number_of_pieces5 diff
LLR: 2.94 (-2.94,2.94) <-1.75,0.25>
Total: 239388 W: 60709 L: 60712 D: 117967
Ptnml(0-2): 147, 26271, 66857, 26276, 143
sprt @ 60+0.6 th 1 LTC: Simplify material formula (take 3, rewritten to be shorter)
26-01-04 fau ffff2 diff
34003/80000 iterations
68006/160000 games played
160000 @ 10+0.1 th 1 ffff2
26-01-04 max feature/additive-three- diff
LLR: 2.93 (-2.94,2.94) <0.00,2.00>
Total: 67008 W: 17465 L: 17108 D: 32435
Ptnml(0-2): 195, 7692, 17380, 8035, 202
sprt @ 10+0.1 th 1 Additive LMR with three independent signals
26-01-05 max feature/post-capture-qu diff
LLR: 0.00 (-2.94,2.94) <0.00,2.00>
Total: 0 W: 0 L: 0 D: 0
Ptnml(0-2): 0, 0, 0, 0, 0
sprt @ 10+0.1 th 1 Increase LMR for quiet moves after opponent captured
26-01-04 fau ffff5 diff
LLR: -2.60 (-2.94,2.94) <0.00,2.00>
Total: 65600 W: 16835 L: 16969 D: 31796
Ptnml(0-2): 194, 7825, 16891, 7701, 189
sprt @ 10+0.1 th 1 ffff5
26-01-05 max feature/post-capture-qu diff
LLR: 0.00 (-2.94,2.94) <0.00,2.00>
Total: 0 W: 0 L: 0 D: 0
Ptnml(0-2): 0, 0, 0, 0, 0
sprt @ 10+0.1 th 1 Increase LMR for quiet moves after opponent captured
26-01-01 Viz obviousSpeedup1 diff
LLR: -2.95 (-2.94,2.94) <0.00,2.00>
Total: 241184 W: 62184 L: 62094 D: 116906
Ptnml(0-2): 711, 26465, 66157, 26541, 718
sprt @ 10+0.1 th 1 Take 1
26-01-04 fau ffff1 diff
LLR: 2.94 (-2.94,2.94) <0.00,2.00>
Total: 90528 W: 23569 L: 23176 D: 43783
Ptnml(0-2): 303, 10346, 23587, 10711, 317
sprt @ 10+0.1 th 1 ffff1
26-01-04 ane idek diff
LLR: -2.95 (-2.94,2.94) <0.00,2.00>
Total: 28256 W: 7169 L: 7391 D: 13696
Ptnml(0-2): 88, 3484, 7214, 3246, 96
sprt @ 10+0.1 th 1 Less IIR with high corrplexity
26-01-05 Ali History_second_order diff
LLR: -0.77 (-2.94,2.94) <0.00,2.00>
Total: 1280 W: 291 L: 356 D: 633
Ptnml(0-2): 3, 182, 333, 121, 1
sprt @ 10+0.1 th 1 This time, try with the pawn history unified and bring back the offset even more strongly.
26-01-05 Ali History_second_order diff
LLR: -0.86 (-2.94,2.94) <0.00,2.00>
Total: 1088 W: 240 L: 316 D: 532
Ptnml(0-2): 5, 168, 271, 98, 2
sprt @ 10+0.1 th 1 Without the part that modifies lph and pawn
26-01-05 Ali History_second_order diff
LLR: -1.40 (-2.94,2.94) <0.00,2.00>
Total: 2656 W: 626 L: 748 D: 1282
Ptnml(0-2): 13, 374, 668, 268, 5
sprt @ 10+0.1 th 1 To take advantage of the second-order history, REMOVE this cursed history offset and INCREASE the update limit significantly.
26-01-04 Viz multiCutFR2 diff
LLR: -2.93 (-2.94,2.94) <0.00,2.00>
Total: 57792 W: 14760 L: 14935 D: 28097
Ptnml(0-2): 171, 6958, 14794, 6821, 152
sprt @ 10+0.1 th 1 Take 2
26-01-04 max feature/additive-three- diff
LLR: 2.96 (-2.94,2.94) <0.00,2.00>
Total: 32160 W: 8423 L: 8112 D: 15625
Ptnml(0-2): 89, 3702, 8210, 3967, 112
sprt @ 10+0.1 th 1 Additive LMR with three signals and higher coefficients
26-01-04 max feature/additive-two-si diff
LLR: 2.99 (-2.94,2.94) <0.00,2.00>
Total: 36544 W: 9512 L: 9191 D: 17841
Ptnml(0-2): 112, 4218, 9286, 4549, 107
sprt @ 10+0.1 th 1 Additive LMR for non-PV and non-pawn quiet moves
26-01-04 Dan lmr_fds_mh_3 diff
LLR: -2.93 (-2.94,2.94) <0.00,2.00>
Total: 55008 W: 14170 L: 14350 D: 26488
Ptnml(0-2): 179, 6652, 13993, 6530, 150
sprt @ 10+0.1 th 1 take 3