- Added the littlefs license note to the scripts.
- Adopted parse_intermixed_args everywhere for more consistent arg
handling.
- Removed argparse's implicit help text formatting, as it does not
work with parse_intermixed_args and sometimes breaks.
- Used string concatenation for argparse text everywhere; using
backslashed line continuations only works with argparse because it
strips redundant whitespace (see the sketch after this list).
- Consistent argparse formatting.
- Consistent openio mode handling.
- Consistent color argument handling.
- Adopted functools.lru_cache in tracebd.py.
- Moved unicode printing behind --subscripts in tracebd.py, making all
scripts ascii by default.
- Renamed pretty_asserts.py -> prettyasserts.py.
- Renamed struct.py -> struct_.py; the original name conflicts with
Python's built-in struct module in horrible ways.
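A minimal sketch of these argparse conventions (the parser and flags
here are hypothetical, not taken from the actual scripts):

    import argparse

    parser = argparse.ArgumentParser(
        # string concatenation instead of backslashed line
        # continuations; only argparse's own help formatting strips
        # the redundant whitespace continuations would leave behind
        description="Summarize one or more CSV files. "
            "Each file is merged by name.")
    parser.add_argument('csv_paths', nargs='*',
        help="Input *.csv files.")
    # parse_intermixed_args lets flags and positional arguments be
    # freely intermixed on the command-line
    args = parser.parse_intermixed_args()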
With more scripts generating CSV files, this moves most CSV
manipulation into summary.py, which can now handle more or less any
arbitrary CSV file with arbitrary names and fields.
This also includes a bunch of additional, probably unnecessary, tweaks:
- summary.py/coverage.py use a custom fractional type for encoding
fractions; this will also be used for test counts (see the sketch
after this list).
- Added a smaller diff output for size scripts with the --percent flag.
- Added line and hit info to coverage.py's CSV files.
- Added --tree flag to stack.py to show only the call tree without
other noise.
- Renamed structs.py to struct.py.
- Changed a few flags around for consistency between size/summary scripts.
- Added `make sizes` alias.
- Added `make lfs.code.csv` rules.
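A minimal sketch of such a fractional type (summary.py/coverage.py's
actual implementation may differ):

    class Frac:
        def __init__(self, a, b):
            self.a, self.b = a, b

        def __str__(self):
            # render as "hits/total", e.g. "12/16"
            return '%d/%d' % (self.a, self.b)

        def __float__(self):
            # expose the underlying ratio for sorting and percentages
            return self.a/self.b if self.b else 1.0

        def __add__(self, other):
            # fractions accumulate component-wise when merging results
            return Frac(self.a+other.a, self.b+other.b)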
These scripts can't easily share their common logic, but separating
the field details from the print/merge/csv logic should make that
common part much easier to create/modify going forward.
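A minimal sketch of that field/logic separation (the field class here
is hypothetical; the actual classes in these scripts differ):

    class SizeField:
        name = 'size'
        def parse(self, s): return int(s)
        def render(self, v): return '%d' % v
        def merge(self, a, b): return a+b

    # the shared print/merge/csv machinery only touches measurements
    # through this small interface, so adding a new measurement mostly
    # means adding a new field class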
This also tweaked the behavior of summary.py slightly.
A small mistake in test.py's control flow meant the failing test job
would successfully kill all other test jobs, but then humorously start
up a new process to continue testing.
Using errors=replace in Python's utf-8 decoding makes these scripts
more resilient to underlying errors, rather than just throwing an
unhelpfully generic decode error.
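For example (a minimal sketch; actual call sites vary across the
scripts, and the path here is hypothetical):

    with open('results.csv', errors='replace') as f:
        for line in f:
            # malformed utf-8 becomes U+FFFD instead of raising a
            # UnicodeDecodeError
            print(line, end='')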
A full summary of static measurements (code size, stack usage, etc.) can now
be found with:
make summary
This is done through the combination of a new ./scripts/summary.py
script and the ability of existing scripts to merge into existing CSV
files, allowing multiple results to be merged either in a pipeline, or
in parallel with a single ./scripts/summary.py call.
The ./scripts/summary.py script can also be used to quickly compare
different builds or configurations. This is a proper implementation
of a similar but hacky shell script that has already been very useful
for making optimization decisions:
$ ./scripts/structs.py new.csv -d old.csv --summary
name (2 added, 0 removed)           code    stack   structs
TOTAL                      28648 (-2.7%)     2448      1012
Also some other small tweaks to scripts:
- Removed state saving diff rules. This isn't the most useful way to
handle comparing changes.
- Added short flags for --summary (-Y) and --files (-F), since these
are quite often used.
- Added -L/--depth argument to show dependencies for scripts/stack.py;
this replaces calls.py.
- Additional internal restructuring to avoid repeated code.
- Removed incorrect diff percentage when there is no actual size.
- Consistent percentage rendering in test.py.
This required a patch to the --diff flag for the scripts to ignore
a missing file. This enables a useful one-liner for making comparisons
with potentially missing previous versions:
./scripts/code.py lfs.o -d lfs.o.code.csv -o lfs.o.code.csv
function (0 added, 0 removed)     old      new     diff
TOTAL                           25476    25476       +0
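A minimal sketch of the missing-file handling (the function name here
is assumed, not the scripts' actual one):

    import csv

    def collect_diff(path):
        try:
            with open(path) as f:
                return list(csv.DictReader(f))
        except FileNotFoundError:
            # treat a missing previous-results file as empty, so
            # first runs of the comparison one-liner still work
            return []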
One downside: these previous files are easy to delete as part of a
make clean, which limits their usefulness for comparing configuration
changes...
Note this detects loops (recursion) and renders them as infinity.
Currently littlefs does have a single recursive function and you can see
how this infects the full call graph. Eventually this should be removed.
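A minimal sketch of cycle-aware stack measurement (assuming simple
callgraph/frame-size dicts; stack.py's actual implementation differs):

    import math

    def max_stack(graph, frames, name, seen=frozenset()):
        if name in seen:
            # a call cycle implies unbounded recursion, so render
            # the limit as infinity
            return math.inf
        return frames.get(name, 0) + max(
            (max_stack(graph, frames, callee, seen | {name})
                for callee in graph.get(name, [])),
            default=0)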
This is to avoid unexpected script behavior even though data.py should
always return 0 bytes for littlefs. Maybe a check for this should be
added to CI?
Now with -s/--sort and -S/--reverse-sort for sorting the functions by
size.
You may wonder why add reverse-sort, since its utility doesn't seem
worth the cost to implement (these are just helper scripts after all).
The reason is that reverse-sort is quite useful on the command-line,
where scrollback may be truncated and you only care about the larger
entries.
Outside of the command-line, normal sort is preferred.
Fortunately the difference is just the sign in the sort key.
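A minimal sketch of the sign flip (the entry fields here are assumed):

    def sorted_entries(entries, reverse=False):
        # flipping the sign of the size key reverses the size order
        # while keeping the name tiebreak stable
        return sorted(entries, key=lambda e: (
            -e['size'] if reverse else +e['size'],
            e['name']))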
Note this conflicts with the short --summary flag, so that has been
removed.
This was lost in the Travis -> GitHub transition. In serializing some
of the jobs, I missed that we need to clean between tests with
different geometry configurations. Otherwise we end up running
outdated binaries, which explains some of the weird test behavior we
were seeing.
Also tweaked a few script things:
- Better subprocess error reporting (dump stderr on failure; see the
sketch after this list).
- Fixed a BUILDDIR rule issue in test.py.
- Changed test-not-run status to None instead of undefined.
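A minimal sketch of the stderr-on-failure pattern (the command here is
a placeholder):

    import subprocess as sp
    import sys

    proc = sp.Popen(['make', 'test'],
        stdout=sp.PIPE, stderr=sp.PIPE,
        universal_newlines=True, errors='replace')
    stdout, stderr = proc.communicate()
    if proc.returncode != 0:
        # dump the child's stderr so failures aren't silent
        sys.stderr.write(stderr)
        sys.exit(proc.returncode)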
Now littlefs's Makefile can work with a custom build directory
for compilation output. Just set the BUILDDIR variable and the Makefile
will take care of the rest.
make BUILDDIR=build size
This makes it very easy to compare builds with different compile-time
configurations or different cross-compilers.
This meant most of code.py's build isolation is no longer needed, so
I revisited the scripts and cleaned/tweaked a number of things.
Also brought code.py in line with coverage.py, fixing some of the
inconsistencies that were created while developing these scripts.
One change to note was removing the inline measuring logic; I realized
this feature is unnecessary thanks to GCC's -fkeep-static-functions
and -fno-inline flags.
Mostly taken from .travis.yml; the biggest changes were around how to
get the status updates to work.
We can't use a token on PRs the same way we could in Travis, so instead
we use a second workflow that checks every pull request for "status"
artifacts, and creates the actual statuses in the "workflow_run" event,
where we have full access to repo secrets.
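A minimal sketch of the status creation in the "workflow_run" workflow,
using GitHub's commit-status REST endpoint (the repo, sha, and status
fields here are placeholders; in practice they come from the downloaded
artifacts and repo secrets):

    import json
    import urllib.request

    def create_status(repo, sha, token, status):
        req = urllib.request.Request(
            'https://api.github.com/repos/%s/statuses/%s' % (repo, sha),
            data=json.dumps(status).encode('utf-8'),
            headers={'Authorization': 'token %s' % token})
        urllib.request.urlopen(req)

    create_status('owner/repo', '0123abcd'*5, 'TOKEN_FROM_SECRETS',
        {'context': 'test', 'state': 'success', 'description': 'passed'})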