- Fix DeprecatedGlobal warning text in cases when the global is deprecated without a suggested alternative
- Fix an off-by-one error in type error text for incorrect use of string.format
- Reduce stack consumption further during parsing, hopefully eliminating stack overflows during parsing/compilation for good
- Mark interpolated string support as experimental (requires --fflags=LuauInterpolatedStringBaseSupport to enable)
- Simplify garbage collection treatment of upvalues, reducing cache misses during sweeping stage and reducing the cost of upvalue assignment (SETUPVAL); supersedes #643
- Simplify garbage collection treatment of sleeping threads
- Simplify sweeping of alive threads, reducing cache misses during sweeping stage
- Simplify management of string buffers, removing redundant linked list operations
Implements string interpolation as described in the RFC (#165).
```lua
local name = "world"
print(`Hello {name}!`) -- Hello world!
```
Co-authored-by: Arseny Kapoulkine <arseny.kapoulkine@gmail.com>
Co-authored-by: Alexander McCord <11488393+alexmccord@users.noreply.github.com>
Escape sequences in interpolated strings were implicitly assumed to be supported, but we should say so explicitly. If we didn't, there would be two ways to interpret it: interpolated strings are raw strings and escape sequences mostly don't exist (false), or interpolated strings behave like ordinary strings and escape sequences do exist (true).
This also spells out the ambiguity with `\u{`: the escape sequence is defined to take priority, so `\u{` does not terminate the interpolation chunk and start a new expression; instead it must be a well-formed `\u{...}` escape sequence. It is very unlikely that another escape sequence will have this same ambiguity, so we don't need to worry about other cases.
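For illustration, a small snippet (not part of the RFC text) showing both points above: escape sequences working inside interpolated strings, and `\u{...}` being parsed as an escape rather than the start of a new expression.

```lua
-- \n is an ordinary escape sequence inside an interpolated string
print(`one\ntwo: {1 + 1}`)

-- \u{...} is parsed as a Unicode escape; its opening brace does not start
-- a new interpolation expression
print(`smile: \u{263A}`)
```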
Fixes a bug where the environment scope is not passed to the autocomplete typechecker, so environment-related types are not provided.
Previously, `getModuleEnvironment` returned `typeChecker.globalScope` when a module had no environment. That scope was then passed to `typeCheckerForAutocomplete.check()`, so `environmentScope` was no longer nullopt and the autocomplete typechecker used `typeChecker.globalScope` instead of falling back to `typeCheckerForAutocomplete.globalScope`.
This is solved by passing `forAutocomplete` to `getModuleEnvironment` so that it falls back to the autocomplete global scope when no environment is found.
- Fix autocomplete not suggesting globals defined after the cursor (fixes #622)
- Improve type checker stability
- Reduce parser C stack consumption which fixes some stack overflow crashes on deeply nested sources
- Improve performance of bit32.extract/replace when width is implied (~3% faster chess)
- Improve performance of bit32.extract when field/width are constants (~10% faster base64; see the example after this list)
- Heap dump now annotates thread stacks with local variable/function names
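To illustrate the two bit32 entries above, here is a hypothetical example of the call shapes they refer to (implied width vs. constant field/width); the values are arbitrary:

```lua
local v = 0xDEADBEEF

-- implied width: extracts a single bit (width defaults to 1) at position 7
print(bit32.extract(v, 7))

-- constant field/width: extracts the 8 bits starting at position 8
print(bit32.extract(v, 8, 8))

-- the matching constant field/width form for bit32.replace
print(bit32.replace(v, 0x55, 8, 8))
```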
If a very large userdata size is passed to lua_newuserdatadtor, the size computation can overflow, so luaU_newudata ends up allocating the object without raising a memory error. This then results in overwriting part of the metatable pointer of the userdata.
This PR fixes the issue by checking for the overflow and, in that case, passing a size value that causes luaU_newudata to raise a memory error.
Closes #599
Not sure if these context definitions are distinct enough, but they work for my use cases.
I repurposed (a majority of) the existing unit tests for this, but let me know if this should live in separate test cases.
---
* Add autocomplete context to result
* Update tests
* Remove comments
* Fix expression context issues
We make four adjustments in this RFC:
1. `{{` is not allowed. This is likely an attempt at escaping braces carried over from C#, Rust, or Python, where `{{` is valid, but it is not an escape in Luau.
2. We now allow `` `this` `` with zero interpolating expressions.
3. We now allow `` f`this` `` as well (see the example after this list).
4. We explicitly state that `` `this` `` and `` `this {that}` `` are not valid type annotation syntax.
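For illustration, a hypothetical snippet covering adjustments 2 and 3 (a zero-expression interpolated string, and an interpolated string passed directly as a call argument):

```lua
-- adjustment 2: an interpolated string with zero expressions is allowed
local greeting = `hello`

-- adjustment 3: an interpolated string can be passed directly as a call argument
local function shout(s)
    return string.upper(s)
end
print(shout`{greeting} world`)
```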
When considering new standard library functions, we need to strongly justify their existence over implementing them in user library code written in Luau.
This PR attempts to provide a few axes of consideration; ideally a new library function ticks many of the boxes. For example, "used often + more performant than a Luau implementation + clear, unambiguous interface" is an ideal case for a library function, whereas merely accelerating a single specific use case for a single application is unlikely to be a good justification for inclusion.
Fix the description incorrectly saying that parameter `w` specifies an upper range; `w` is actually a width. Proof:
print(bit32.extract(2^32-1, 3, 4)) -- prints 15, not 1.
Also indicate that the position is 0-based, and that the function will error if the selected range exceeds the allowed bounds.
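For illustration, a few hypothetical calls showing the width parameter, the 0-based position, and the out-of-range error:

```lua
-- the position is 0-based: bit 3 of 8 (binary 1000) is set, so this prints 1
print(bit32.extract(8, 3))

-- w is a width: extract 4 bits starting at position 3 (prints 15)
print(bit32.extract(2^32 - 1, 3, 4))

-- the selected range must fit within bits 0..31; this call errors
-- print(bit32.extract(0, 31, 2))
```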
This change adds another file for benchmarking luau-analyze and sets up benchmarks for both non-strict and strict analysis modes, as well as all three optimization levels for compilation performance.
To avoid race conditions on repository updates, we do all of this in the same job in benchmark.yml.
To be able to benchmark both modes from a single file, luau-analyze gains a --mode argument that overrides the default typechecking mode. It's not clear yet whether this should be a hard override on top of the module-specified mode in the future, but it works for now.
Since callgrind allows controlling stats collection from the guest, we can reset the collection right before the benchmark starts. This change exposes that to the benchmark runner and integrates callgrind data parsing into bench.py: when bench.py is run with the --callgrind argument, and the runner was built with callgrind support, we get instruction counts from the run.
We convert instruction counts to seconds using a rate of 10G instructions/second, so, for example, a run that executes 20 billion instructions is reported as 2 seconds. There's no correct way to do this without simulating the full CPU pipeline, but it results in time units on a similar scale to real runs.