This change adds codegen runs to the coverage config and O2/codegen
testing to CI.
Note that we don't run O2 combinations in coverage: it's better to see
gaps in O2 coverage via compiler tests, as these are valuable for
validating codegen intricacies that are difficult to observe from
conformance tests merely passing or failing.
We don't need to run any cachegrind benchmarks in benchmark-dev, since
the benchmark workflow uses our new callgrind setup instead.
This change also removes prototyping filters that we no longer need from
all builds.
Resolves #668
## The problem
Benchmark jobs run concurrently for the different operating systems.
This means that when it comes time to push the benchmark results to [the
assigned benchmark results
repo](https://github.com/luau-lang/benchmark-data), there can be two
different jobs trying to push changes at the same time. In such a case,
one of the pushes will fail and we end up missing some benchmark results
data from the workflow run.
## The solution
Whenever a push fails, we retry the whole sequence: checking out the
benchmark results repo, storing the benchmark results, and pushing them
to [the repo](https://github.com/luau-lang/benchmark-data).
### Note
There are 3 push attempts before giving up.
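For illustration, here is a minimal Python sketch of the retry shape; the actual workflow implements this as GitHub Actions steps, and the file layout and commit message below are hypothetical:

```python
import shutil
import subprocess
import time

MAX_ATTEMPTS = 3  # three push attempts before giving up, as noted above

def push_with_retry(results_file: str) -> None:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # Start each attempt from a fresh checkout so we pick up commits
        # that a concurrently running job may have pushed in the meantime.
        shutil.rmtree("benchmark-data", ignore_errors=True)
        subprocess.run(
            ["git", "clone", "--depth", "1",
             "https://github.com/luau-lang/benchmark-data"],
            check=True,
        )
        # Store the new results in the checkout (layout is hypothetical).
        shutil.copy(results_file, "benchmark-data/")
        subprocess.run(["git", "-C", "benchmark-data", "add", "-A"], check=True)
        subprocess.run(
            ["git", "-C", "benchmark-data", "commit", "-m", "Add benchmark results"],
            check=True,
        )
        # The push fails (non-fast-forward) if another job won the race.
        if subprocess.run(["git", "-C", "benchmark-data", "push"]).returncode == 0:
            return
        time.sleep(attempt)  # brief backoff before retrying from the checkout
    raise RuntimeError(f"push failed after {MAX_ATTEMPTS} attempts")
```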
## TL;DR
This PR retries pushing benchmark results when a push fails (often due
to multiple jobs pushing concurrently).
Co-authored-by: Jamie Kuppens <reshurum@gmail.com>
Co-authored-by: Ignacio Falk <flakolefluk@gmail.com>
This change adds another file for benchmarking luau-analyze and sets up
benchmarks for both non-strict and strict analysis modes, as well as for
all three optimization levels of compilation performance.
To avoid race conditions when updating the results repository, we do all
of this in a single job in benchmark.yml.
To be able to benchmark both modes from a single file, luau-analyze
gains a --mode argument that overrides the default typechecking mode.
We're not sure yet whether this should be a hard override on top of the
module-specified mode, but it works for now.
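As a sketch of how a driver might use the new flag to benchmark one file under both modes (the exact mode names and the benchmark file path here are assumptions for illustration):

```python
import subprocess

# Analyze the same file under both typechecking modes via the --mode
# override; mode spellings and file path are assumed, not confirmed.
for mode in ("nonstrict", "strict"):
    subprocess.run(
        ["luau-analyze", f"--mode={mode}", "bench/other/regex.lua"],
        check=True,
    )
```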
Since callgrind lets the guest control stats collection, we can reset
the collection right before the benchmark starts. This change exposes
that control to the benchmark runner and integrates callgrind data
parsing into bench.py, so that running bench.py with the --callgrind
argument gives us instruction counts from the run, as long as the runner
was built with callgrind support.
We convert instruction counts to seconds using a rate of 10G
instructions/second; there's no correct way to do this without
simulating the full CPU pipeline, but it produces time units on a scale
similar to real runs.
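A minimal sketch of the parsing and conversion side, assuming a default callgrind profile where an `events:` line names the recorded counters (Ir, the instruction count, by default) and a `summary:` line carries their totals; the helper names are illustrative, not bench.py's actual code:

```python
import re

ASSUMED_INSNS_PER_SEC = 1e10  # the 10G instructions/second rate from above

def read_instruction_count(path: str) -> int:
    """Pull the total Ir count out of a callgrind output file."""
    with open(path) as f:
        text = f.read()
    # 'events:' lists the recorded counters; 'summary:' holds their totals.
    events = re.search(r"^events:\s*(.+)$", text, re.M).group(1).split()
    totals = re.search(r"^summary:\s*(.+)$", text, re.M).group(1).split()
    return int(totals[events.index("Ir")])

def instructions_to_seconds(insns: int) -> float:
    # Not a real time estimate without modeling the CPU pipeline, but it
    # keeps reported numbers on the same scale as wall-clock runs.
    return insns / ASSUMED_INSNS_PER_SEC
```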
Changed the GHA workflows to:
- Not run `build` and `release` workflows for PRs that only affect `prototyping/`
- Run `prototyping` workflow when PRs affect `Analysis/**`, `Ast/**`, or the `luau-ast` source files
Attempt to fix coverage builds by using checkout@v2 instead of v1, which
might fix the detached HEAD issue.
On the off chance it doesn't, add extra logging around git specifically.
We keep getting compatibility reports for warnings in various compiler
versions. While we can keep merging PRs to resolve these warnings, it
would be nice if users of other compilers or compiler versions weren't
blocked on us fixing them.
As such, this change disables Werror by default and only enables it when
requested, as is done for test builds in CI.