GitHub has deprecated `v1` and `v2` of `actions/upload-artifact`, which causes occasional CI failures and will affect our ability to make new releases in the future. This PR updates the version used in `release.yml`.
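A minimal sketch of the adjustment; the PR doesn't name the new version, so `v4` is an assumption, and the artifact name and path are illustrative:

```yaml
# release.yml (sketch): move off the deprecated v1/v2 of upload-artifact.
# v4 is an assumption; any non-deprecated major version avoids the failures.
- uses: actions/upload-artifact@v4
  with:
    name: luau-binaries   # illustrative artifact name
    path: build/luau*     # illustrative path
```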
vm.mmap_rnd_bits was recently changed to 32 on GHA, which triggers issues in ASAN builds that spuriously fail on startup. The fix requires a more recent clang/gcc than the agents have available (clang 17; not sure which GCC version), so for now we need to work around this by restricting the ASLR randomness.
See https://github.com/google/sanitizers/issues/1614
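One way to express the workaround as a workflow step; 28 is the entropy value commonly used for this sanitizer issue, and the step name is illustrative:

```yaml
# Sketch: lower ASLR entropy before running ASAN-instrumented binaries.
- name: Restrict ASLR randomness
  run: sudo sysctl -w vm.mmap_rnd_bits=28
```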
GitHub just released support for M1 runners for open source projects via GHA. They can be selected by running on the macos-14 image.
This is very valuable because until now, Luau wasn't tested in OSS CI on Arm64, despite a fairly large volume of AArch64-specific code courtesy of A64 codegen. This change fixes that.
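A minimal sketch of what selecting the new runners looks like; the job name and build steps are illustrative, not the actual workflow contents:

```yaml
# Sketch: an Arm64 job only needs to target the new runner image.
build-arm64:
  runs-on: macos-14   # Apple Silicon (M1) runners
  steps:
    - uses: actions/checkout@v4
    - run: cmake . && cmake --build .
```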
In the future we could also expand codecov reporting, as it seems to work well, but it needs a little bit of cleanup for the build in question, so let's start small.
We need ubuntu-20.04 for coverage analysis (clang versions after 10 don't seem to interact properly with the gcov version used for codecov) and for releases (to produce binaries targeting an earlier glibc).
However, we should still verify that Luau builds on latest, because newer toolchains have stricter standard library headers and/or warnings; without this we're at risk of constantly regressing the build for packaging or external applications.
Note that release.yml can probably just be deleted, but for now we simply adjust it.
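A sketch of the multi-value matrix this implies; the exact OS list is an assumption:

```yaml
# Sketch: pin coverage/release jobs to ubuntu-20.04 while still
# building on the latest image to catch toolchain regressions early.
strategy:
  matrix:
    os: [ubuntu-20.04, ubuntu-latest]
runs-on: ${{ matrix.os }}
```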
Nobody is maintaining this, we haven't really used it, and we're unlikely to start, due to the high degree of noise and the lack of dedicated machines for this setup. Callgrind has worked well enough for us, with additional profiling performed locally by engineers. This isn't perfect, but it doesn't look like we have a path to making it better.
For now, just do this in strict mode. This will help us track performance over time, although the behavior is going to keep changing for now, so it won't be a fully solid metric for a few weeks.
### Problem
ubuntu-latest was updated to 22.04, which removes the clang++-10 we used for coverage stats and produces a pre-compiled binary that requires a glibc upgrade (https://github.com/Roblox/luau/issues/773).
### Solution
Pin to ubuntu-20.04 using multi-value matrix configurations. In the coverage configuration, we use clang++-10 once again.
GitHub fails to find clang++-10. Since we install llvm without a specific version number, we shouldn't reference clang++ with a version number either. I tried installing llvm-10, but that package is not available on GitHub runners (any more?).
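A sketch of the fix: keep the install and the compiler reference version-free so they can't drift apart again. Package names and CMake flags are illustrative:

```yaml
# Sketch: install unversioned toolchain packages, then reference the
# compilers without a version suffix.
- run: sudo apt-get install -y llvm clang
- run: cmake . -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
```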
We will now run luau with --codegen during benchmark runs and collect the data into a separate JSON file. Note that we don't yet have historical data for these runs; it will be backfilled later.
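A sketch of what the extra benchmark pass could look like as a workflow step; the PR only says luau is run with --codegen, so passing that flag through bench.py as shown here, and the JSON file name, are assumptions:

```yaml
# Sketch: a second benchmark pass with native codegen enabled, collected
# into its own JSON file (invocation and file name are hypothetical).
- name: Run benchmarks (codegen)
  run: python bench/bench.py --codegen > bench-codegen.json
```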
This change adds codegen runs to the coverage config and adds O2/codegen testing to CI.
Note that we don't run O2 combinations in coverage; it's better to see gaps in O2 coverage in compiler tests, as these are valuable for validating codegen intricacies that are difficult to observe from conformance tests merely passing or failing.
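A sketch of the added CI test configurations; the test binary's flag spellings are assumptions:

```yaml
# Sketch: exercise the optimized and codegen paths in addition to the
# default run (flag spellings are assumptions).
- run: ./Luau.UnitTest
- run: ./Luau.UnitTest -O2
- run: ./Luau.UnitTest --codegen
- run: ./Luau.UnitTest -O2 --codegen
```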
We don't need to run any cachegrind benchmarks in benchmark-dev, since benchmark uses our new callgrind setup instead. This also removes prototyping filters that we no longer need from all builds.
Resolves #668
## The problem
Benchmark jobs run concurrently for the different operating systems.
This means that when it comes time to push the benchmark results to [the assigned benchmark results repo](https://github.com/luau-lang/benchmark-data), two different jobs can try to push changes at the same time. In such a case, one of the pushes fails and we end up missing some benchmark data from the workflow run.
## The solution
Whenever a push fails, we need to retry the steps leading up to the push
(checking out the benchmark results repo, storing benchmark results,
pushing the results to [a specific
repo](https://github.com/luau-lang/benchmark-data)).
### Note
There are 3 push attempts before the job is marked as failed.
## TL;DR
This PR retries pushing benchmark results when a push fails (often because multiple jobs push concurrently).
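A sketch of the retry as a shell loop inside a single step; the real workflow may structure this differently, and the result file name is hypothetical:

```yaml
- name: Push benchmark results (with retry)
  run: |
    # Sketch: re-run the checkout/store/push sequence up to 3 times so a
    # concurrent push from another job doesn't lose our results.
    for attempt in 1 2 3; do
      rm -rf benchmark-data
      git clone https://github.com/luau-lang/benchmark-data.git
      cp bench-results.json benchmark-data/   # result file name is hypothetical
      if (cd benchmark-data && git add -A && git commit -m "Add benchmark results" && git push); then
        break
      fi
      echo "Push failed on attempt $attempt; retrying"
      sleep 5
    done
```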
Co-authored-by: Jamie Kuppens <reshurum@gmail.com>
Co-authored-by: Ignacio Falk <flakolefluk@gmail.com>
This change adds another file for benchmarking luau-analyze and sets up benchmarks for both non-strict and strict modes for analysis, and for all three optimization levels for compilation performance.
To avoid issues with race conditions on repository updates, we do all of this in the same job in benchmark.yml.
To be able to benchmark both modes from a single file, luau-analyze gains a --mode argument that allows overriding the default typechecking mode. Not sure if we'll want this to be a hard override on top of the module-specified mode in the future, but this works for now.
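A sketch of how both modes can then be benchmarked from one file; the mode value spellings and the benchmarked file path are assumptions:

```yaml
# Sketch: run analysis benchmarks in both modes via the new override.
- run: ./luau-analyze --mode=strict bench/other/LuauAnalyze.lua
- run: ./luau-analyze --mode=nonstrict bench/other/LuauAnalyze.lua
```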
Since callgrind allows controlling stats collection from the guest, we can reset the collection right before the benchmark starts. This change exposes this to the benchmark runner and integrates callgrind data parsing into bench.py, so that we can run bench.py with the --callgrind argument and, as long as the runner was built with callgrind support, get instruction counts from the run.
We convert instruction counts to seconds using a rate of 10G instructions/second; there's no correct way to do this without simulating the full CPU pipeline, but it results in time units on a similar scale to real runs.
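A sketch of the invocation plus the conversion as a worked example; everything beyond the --callgrind flag itself is illustrative:

```yaml
- name: Run benchmarks under callgrind
  run: |
    # Requires a runner binary built with callgrind support.
    # Reported "seconds" = instruction count / 1e10 (10G insns/sec),
    # e.g. 25e9 instructions -> 2.5 seconds.
    python bench/bench.py --callgrind
```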
Changed the GHA workflows to:
- Not run `build` and `release` workflows for PRs that only affect `prototyping/`
- Run the `prototyping` workflow when PRs affect `Analysis/**`, `Ast/**`, or the `luau-ast` source files (sketched below)
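A sketch of the two filter changes, shown as two workflow fragments separated by `---`; the exact `luau-ast` source path is an assumption:

```yaml
# build.yml / release.yml (sketch): skip runs for prototyping-only changes.
on:
  push:
    paths-ignore:
      - 'prototyping/**'
---
# prototyping.yml (sketch): run when the relevant sources change;
# the luau-ast source path below is an assumption.
on:
  push:
    paths:
      - 'Analysis/**'
      - 'Ast/**'
      - 'CLI/Ast.cpp'
```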
Attempt to fix coverage builds by using checkout@v2 instead of v1, which might fix the detached HEAD issue.
On the off chance it doesn't, add extra logging around git specifically.
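A sketch of both parts of the change; the specific git commands used for the extra logging are assumptions:

```yaml
# Sketch: bump the checkout action and add git diagnostics for the
# coverage job (log commands are illustrative).
- uses: actions/checkout@v2
- name: Git diagnostics
  run: |
    git status
    git log -1 --oneline
    git rev-parse --abbrev-ref HEAD
```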