Closes #599
Not sure if these context definitions are distinct enough, but they work for my use cases.
I repurposed (a majority of) the existing unit tests for this, but let me know if it should live in separate test cases.
---
* Add autocomplete context to result
* Update tests
* Remove comments
* Fix expression context issues
We make four adjustments in this RFC (see the sketch after this list):
1. `{{` is not allowed. It is most likely a mistaken attempt at escaping a literal brace, carried over from C#, Rust, or Python.
2. We now allow `` `this` `` with zero interpolating expressions.
3. We now allow `` f `this` `` also.
4. Explicitly say that `` `this` `` and `` `this {that}` `` are not valid type annotation syntax.
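A minimal sketch of the resulting surface syntax, assuming the implementation matches this amendment; `greet` is a hypothetical function, and the backslash brace escape is the assumed way to spell a literal `{`:

```lua
-- Adjustment 2: zero interpolating expressions is now permitted
local plain = `just a backtick string`

-- Interpolation with expressions works as before
local name = "world"
print(`hello {name}`)

-- Adjustment 3: an interpolated string may follow a function name directly,
-- mirroring the existing f"string" call syntax (greet is hypothetical)
-- greet `hello {name}`

-- Adjustment 1: `{{` is rejected; escape a literal brace instead
print(`literal \{brace}`)

-- Adjustment 4: backtick strings are expressions only; they are not valid
-- in type annotation positions
```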
When considering new standard library functions, we essentially need a strong justification for their existence over implementing them in Luau as user library code.
This PR attempts to provide a few axes of consideration; ideally a new library function ticks many of the boxes. For example, "used often + is more performant + clear, unambiguous interface" is an ideal case for a library function, whereas merely accelerating a single specific use case for a single application is unlikely to be a good justification for inclusion.
Fix the description incorrectly saying that parameter w specifies an upper range; w is actually a width. Proof:
print(bit32.extract(2^32-1, 3, 4)) -- prints 15, not 1.
Also indicate that the position is 0-based, and that the function will error if the selected range exceeds the allowed bounds.
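For reference, here are a few calls illustrating the 0-based position and the width semantics (the operands are illustrative):

```lua
print(bit32.extract(0xFF, 0, 4)) -- 15: the lowest 4 bits (positions 0..3)
print(bit32.extract(0xFF, 4, 4)) -- 15: 4 bits starting at position 4
print(bit32.extract(0xF0, 0, 4)) -- 0: the lowest 4 bits of 0xF0 are clear

-- the selected range must stay within bits 0..31; this call raises an error:
-- bit32.extract(0xFF, 30, 4)
```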
This change adds another file for benchmarking luau-analyze and sets up
benchmarks for both non-strict and strict modes for analysis, as well as all
three optimization levels for compilation performance.
To avoid issues with race conditions on repository update, we do all of this
in the same job in benchmark.yml.
To be able to benchmark both modes from a single file, luau-analyze gains a
--mode argument which allows overriding the default typechecking mode. I'm not
sure if we'll want this to be a hard override on top of the module-specified
mode in the future, but it works for now.
Since callgrind allows controlling stats collection from the guest, we can
reset the collection right before the benchmark starts.
This change exposes this to the benchmark runner and integrates callgrind
data parsing into bench.py, so that we can run bench.py with the --callgrind
argument and, as long as the runner was built with callgrind support, get
instruction counts from the run.
We convert instruction counts to seconds using a rate of 10G
instructions/second; there's no correct way to do this without simulating the
full CPU pipeline, but it results in time units on a scale similar to real runs.
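For concreteness, the conversion is just a division by the assumed rate; the sketch below restates it in Luau (bench.py itself is Python, and the names here are illustrative):

```lua
-- assumed rate used to turn callgrind instruction counts into pseudo-seconds
local INSTRUCTIONS_PER_SECOND = 10e9 -- 10G instructions/second

local function instructionsToSeconds(instructionCount)
    return instructionCount / INSTRUCTIONS_PER_SECOND
end

print(instructionsToSeconds(25e9)) -- 25G instructions map to 2.5 "seconds"
```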