Before the fix, it processed all transactions in the mempool, which could be very slow when the mempool grows to several megabytes in size. I observed `get_block_template_backlog` taking up to 15 seconds of CPU time under high mempool load.
After the fix, only transactions that can potentially be mined in the next block are processed (those adding up to a bit more than the current median block weight).
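As an illustration only (field names and the exact cutoff margin are assumptions, not the daemon's real code), the cutoff idea looks roughly like this:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Hypothetical mempool entry; the real daemon structures differ.
    struct tx_entry
    {
      uint64_t weight;
      uint64_t fee;
    };

    // Keep only transactions that could plausibly land in the next
    // block: sort by fee per weight, then stop once the cumulative
    // weight passes a cutoff a bit above the median block weight.
    std::vector<tx_entry> next_block_backlog(std::vector<tx_entry> pool,
                                             uint64_t median_weight)
    {
      std::sort(pool.begin(), pool.end(),
        [](const tx_entry& a, const tx_entry& b) {
          // fee/weight comparison without floating point; overflow
          // is ignored for the purposes of this sketch
          return a.fee * b.weight > b.fee * a.weight;
        });
      const uint64_t cutoff = median_weight + median_weight / 2; // assumed margin
      std::vector<tx_entry> backlog;
      uint64_t total = 0;
      for (const auto& tx : pool)
      {
        if (total >= cutoff)
          break;
        total += tx.weight;
        backlog.push_back(tx);
      }
      return backlog;
    }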
Adds the following:
- "get_miner_data" to RPC API
- "json-miner-data" to ZeroMQ subscriber contexts
Both provide the data necessary to create a custom block template; they are used by p2pool. A short subscriber sketch follows the list below.
Data provided:
- major fork version
- current height
- previous block id
- RandomX seed hash
- network difficulty
- median block weight
- coins mined by the network so far
- mineable mempool transactions
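For illustration, a minimal libzmq subscriber for the new topic could look like this. Only the "json-miner-data" topic string comes from this change; the endpoint is an example and must match whatever --zmq-pub the daemon was started with.

    #include <cstdio>
    #include <cstring>
    #include <zmq.h>

    int main()
    {
      void* ctx = zmq_ctx_new();
      void* sub = zmq_socket(ctx, ZMQ_SUB);
      // Subscribe to the topic added by this change.
      zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "json-miner-data",
                     strlen("json-miner-data"));
      // Example endpoint; must match the daemon's --zmq-pub setting.
      zmq_connect(sub, "tcp://127.0.0.1:18083");
      char buf[65536];
      for (;;)
      {
        const int len = zmq_recv(sub, buf, sizeof(buf) - 1, 0);
        if (len < 0)
          break;
        // zmq_recv reports the full message size even if truncated.
        buf[len < (int)(sizeof(buf) - 1) ? len : (int)(sizeof(buf) - 1)] = '\0';
        printf("%s\n", buf); // topic-prefixed JSON payload
      }
      zmq_close(sub);
      zmq_ctx_term(ctx);
      return 0;
    }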
On Mac, size_t is a distinct type from uint64_t, and some
types (in the wallet cache as well as in cold/hot wallet
transfer data) use pairs/containers with size_t fields. Mac
would save those at full fixed width, while other platforms
would save them as varints. The same might apply to other
platforms where the types are distinct.
There's a nasty hack for backward compatibility, which can
go after a couple forks.
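A sketch of the portability point (not the project's
serializer): routing the field through uint64_t makes every
platform emit the same varint encoding.

    #include <cstdint>
    #include <vector>

    // Minimal varint writer: 7 bits per byte, high bit set while
    // more bytes follow.
    static void write_varint(std::vector<uint8_t>& out, uint64_t v)
    {
      while (v >= 0x80)
      {
        out.push_back(static_cast<uint8_t>(v) | 0x80);
        v >>= 7;
      }
      out.push_back(static_cast<uint8_t>(v));
    }

    // On platforms where size_t is a distinct type, an overload set
    // keyed on uint64_t never matches it, and a generic fallback can
    // end up writing the full fixed-width value instead. Casting
    // first keeps the serialized format identical everywhere.
    static void write_size(std::vector<uint8_t>& out, size_t n)
    {
      write_varint(out, static_cast<uint64_t>(n));
    }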
A 20% fluff probability increases the precision of a spy connected to
every node by 10% on average, compared to a network using 0% fluff
probability. The current value (10% fluff) should increase a spy's
precision by ~5% over the same baseline.
This decreases the expected stem length from 10 to 5: each
relay fluffs independently, so the expected stem length is the
reciprocal of the fluff probability (1/0.1 = 10 hops,
1/0.2 = 5 hops). The embargo timeout was therefore lowered to
39s; the fifth node in a stem is expected to have a 90% chance
of being the first to time out, which is the same probability
we currently have with an expected stem length of 10 nodes.
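A throwaway Monte Carlo check of the 1/p figure (not project
code):

    #include <cstdio>
    #include <random>

    int main()
    {
      std::mt19937_64 rng{42};
      std::bernoulli_distribution fluff(0.2); // 20% fluff per hop
      const int trials = 1000000;
      long long total_hops = 0;
      for (int i = 0; i < trials; ++i)
      {
        int hops = 1;
        while (!fluff(rng)) // keep stemming until some node fluffs
          ++hops;
        total_hops += hops;
      }
      // Prints ~5.00 for p = 0.2 (and ~10.00 for p = 0.1).
      printf("mean stem length: %.2f\n", (double)total_hops / trials);
      return 0;
    }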
Miners that had already verified some MLSAG txes included a
couple of them in that block, but the consensus rules had
changed in the meantime, so that block is technically invalid
and any node which did not already have those two txes in its
txpool could not sync. Grandfather them in, since it has no
effect in practice.
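The shape of the exception, with placeholder hashes since the
real tx ids are hardcoded in the daemon and not repeated here:

    #include <string>
    #include <unordered_set>

    // Sketch only, not the actual consensus code. The placeholders
    // stand in for the two hardcoded transaction hashes.
    static bool is_grandfathered(const std::string& txid_hex)
    {
      static const std::unordered_set<std::string> allowed = {
        "<txid 1>", // placeholder
        "<txid 2>", // placeholder
      };
      return allowed.count(txid_hex) != 0;
    }

    // During verification, the post-fork ring signature rule would
    // be skipped for exactly these transactions and no others.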