When considering new standard library functions, we need a strong justification for adding them to the standard library rather than implementing them in Luau as user library code.
This PR attempts to provide a few axes of consideration; ideally a new library function ticks many of the boxes. For example, "used often + more performant + clear, unambiguous interface" is an ideal case for a library function, whereas merely accelerating a single specific use case for a single application is unlikely to be a good justification for inclusion.
Fix the description incorrectly saying that parameter w specifies an upper range; w is actually a width. Proof:
print(bit32.extract(2^32-1, 3, 4)) -- prints 15, not 1.
Also indicate that the position is 0-based, and that the function will error if the selected range exceeds the allowed bounds.
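For instance, a couple of additional calls (a hedged illustration using the standard bit32 API, not part of this change) show both behaviors:
print(bit32.extract(0x000000F0, 4, 4)) -- prints 15: extracts 4 bits starting at 0-based position 4
bit32.extract(2^32-1, 30, 4) -- errors: bits 30..33 exceed the allowed 32-bit bounds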
This change adds another file for benchmarking luau-analyze and sets up
benchmarks for both non-strict and strict modes for analysis, as well as
all three optimization levels for compilation performance.
To avoid race conditions on repository updates, we do all of this in the
same job in benchmark.yml.
To be able to benchmark both modes from a single file, luau-analyze
gains a --mode argument that overrides the default typechecking mode.
It's not clear yet whether we'll want this to be a hard override on top
of the module-specified mode in the future, but this works for now.
Since callgrind allows controlling stats collection from the guest, we
can reset the collection right before the benchmark starts.
This change exposes this to the benchmark runner and integrates
callgrind data parsing into bench.py, so that we can run bench.py with
the --callgrind argument and, as long as the runner was built with
callgrind support, get instruction counts from the run.
We convert instruction counts to seconds using a rate of 10G
instructions/second; there's no correct way to do this without
simulating the full CPU pipeline, but it produces time units on a
similar scale to real runs.
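As a worked example of that assumed rate: a benchmark that retires 25
billion instructions would be reported as 25e9 / 10e9 = 2.5 seconds.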