Hi tevador,
On 9 May 2020, at 06:43, tevador <tevador@gmail.com> wrote:
For our dynamic PoW system to work, we will need to be able to compare PoW tokens with each other. To do so we define a function: unsigned effort(uint8_t *token) which takes as its argument a hash output token, and returns the number of leading zero bits in it.
This definition makes the effort exponential, i.e. the computational resources required to reach one notch higher effort increase by a factor of 2 each time.
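A minimal sketch of that definition, assuming a 32-byte hash token (the exact length isn't specified above):

    #include <stdint.h>

    /* Count the leading zero bits of the hash output.
     * Assumes a 32-byte token; adjust the length as needed. */
    unsigned effort(uint8_t *token) {
      unsigned zeros = 0;
      for (int i = 0; i < 32; i++) {
        uint8_t b = token[i];
        if (b == 0) {
          zeros += 8;
          continue;
        }
        /* Count leading zero bits in the first non-zero byte, then stop. */
        while (!(b & 0x80)) {
          zeros++;
          b <<= 1;
        }
        break;
      }
      return zeros;
    }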
I suggest using a linear effort, defined as the quotient of dividing a bitstring of 1s by the hash value:
== Example A:
effort(00000001100010101101) = 11111111111111111111 / 00000001100010101101
or in decimal:
effort(6317) = 1048575 / 6317 = 165.
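A minimal sketch of this linear definition, assuming the first 32 bits of the hash are used (Example A uses 20 bits only to keep the numbers small):

    #include <stdint.h>

    /* Linear effort: divide an all-ones bitstring by the hash value.
     * The "hash value" here is assumed to be the first 32 bits of the
     * token, read as a big-endian integer; a zero prefix is clamped to 1
     * to avoid dividing by zero. */
    uint32_t linear_effort(const uint8_t *token) {
      uint32_t value = ((uint32_t)token[0] << 24) |
                       ((uint32_t)token[1] << 16) |
                       ((uint32_t)token[2] << 8) |
                       (uint32_t)token[3];
      if (value == 0)
        value = 1;
      return UINT32_MAX / value;
    }

With the 20-bit values from Example A, the same division gives 0xFFFFF / 0x18AD = 165, matching the decimal calculation above.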
This definition of effort has the advantage of directly expressing the expected number of hashes that the client had to calculate to reach the effort.
With the exponential definition, we could have an equivalent linear effort of either 128 (7 leading zeroes) or 256 (8 leading zeroes), while the linear definition provides smoother classification of PoW results.
There are two possible issues with this design:
Division is expensive on some platforms, including ARM-based devices. But there might be a way to calculate an approximate value without division (for example, using bit shifts, or multiplying by an inverse), or we could calculate the maximum value once and then re-use it. (One division-free check is sketched after these two points.)
Is it still possible to express the full range of difficulties? Is that expression reasonably compact?
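On the first point: if the client sends the effort it claims to have reached along with the token, the service can verify it without any division, using one widening multiplication. A minimal sketch, assuming a 32-bit hash prefix and a 32-bit claimed effort (neither is fixed by the proposal):

    #include <stdbool.h>
    #include <stdint.h>

    /* Verify a claimed linear effort E without dividing:
     * for hash > 0, floor(UINT32_MAX / hash) >= E holds exactly when
     * (uint64_t)hash * E <= UINT32_MAX. */
    bool check_claimed_effort(uint32_t hash_prefix, uint32_t claimed_effort) {
      if (hash_prefix == 0)
        return true; /* maximum possible effort */
      return (uint64_t)hash_prefix * claimed_effort <= UINT32_MAX;
    }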
Some advantages of this exponential distribution are:
* spurious results can be filtered using a single instruction (a bit mask),
* the expected effort is quick and easy to calculate,
* the effort can be expressed in a compact way.
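For example, the check for "at least k leading zero bits" is a single mask-and-compare on the first word of the hash (a sketch, assuming a 32-bit prefix and 0 < k < 32):

    #include <stdbool.h>
    #include <stdint.h>

    /* Accept only tokens with at least k leading zero bits,
     * using one mask-and-compare on the first 32 bits of the hash. */
    bool meets_exponential_target(uint32_t hash_prefix, unsigned k) {
      uint32_t mask = ~(UINT32_MAX >> k); /* top k bits set */
      return (hash_prefix & mask) == 0;
    }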
Maybe we don't need some of these properties, and a linear design would be fine.
But if we do, we could change the base from two to the square or cube root of two. There would be a smoother distribution, but a wider range of levels, and the checks would still be reasonably fast.
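One way the check could stay fast (just a sketch, with an arbitrary cap of 64 levels): precompute one threshold per level, so verifying a token is still a single comparison.

    #include <math.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Effort levels on a base-sqrt(2) scale: level L is met when
     * hash_prefix <= UINT32_MAX / sqrt(2)^L. Thresholds are computed
     * once, so each token check is one comparison. */
    #define MAX_LEVEL 64
    static uint32_t thresholds[MAX_LEVEL + 1];

    void init_thresholds(void) {
      for (int level = 0; level <= MAX_LEVEL; level++)
        thresholds[level] =
            (uint32_t)((double)UINT32_MAX / pow(sqrt(2.0), level));
    }

    bool meets_level(uint32_t hash_prefix, unsigned level) {
      return level <= MAX_LEVEL && hash_prefix <= thresholds[level];
    }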
T