This wasn't implemented correctly anyway, as it would need to recursively
rename directories that may not exist. Things would also get a bit
complicated if only some files in a directory were renamed.
Doable, but not needed for our use case.
For now just ignore any directory components. Though this may be worth
changing if the source directory structure becomes more complicated in
the future (maybe with a -r/--recursive flag?).
Initially I thought the fcrc would be sufficient for all of the
end-of-commit context, since indicating that there is a new commit is as
simple as invalidating the fcrc. But it turns out there are cases that
make this impossible.
The surprising, and actually common, case is that of an fcrc that ends
up containing a full commit. This is common as soon as the prog_size is
big, as small commits are padded to the prog_size at minimum.
.------------------. \
|     metadata     | |
|                  | |
|                  | +-.
|------------------| | |
|   forward CRC  ----------.
|------------------| / |   |
|    commit CRC  ------'   |
|------------------|       |
|      padding     |       |
|                  |       |
|------------------| \   \ |
|     metadata     | |   | |
|                  | +-. | |
|                  | | | +-'
|------------------| / | |
|    commit CRC  ------' |
|------------------|     |
|                  |     /
'------------------'
When the commit + crc is all contained in the fcrc, something silly
happens with the math behind crcs. Everything in the commit gets
canceled out:
crc(m) = m(x) x^(|P|-1) mod P(x)

m ++ crc(m) = m(x) x^(|P|-1) + (m(x) x^(|P|-1) mod P(x))

crc(m ++ crc(m)) = (m(x) x^(|P|-1) + (m(x) x^(|P|-1) mod P(x))) x^(|P|-1) mod P(x)
crc(m ++ crc(m)) = (m(x) x^(|P|-1) + m(x) x^(|P|-1)) x^(|P|-1) mod P(x)
                   <- the inner mod P(x) can be dropped under the outer mod P(x)
crc(m ++ crc(m)) = 0 x^(|P|-1) mod P(x) = 0
                   <- addition is xor in GF(2), so a + a = 0
This is the reason the crc of a message + naive crc is zero. Even with an
initializer/bit-fiddling, the crc of the whole commit ends up as some
constant.
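For example, with the standard CRC-32 this is easy to check; a quick
sketch using Python's binascii (not littlefs's actual crc code):

    import struct, binascii

    # appending a message's little-endian crc32 always produces the
    # same "residue" crc (0x2144df1c for standard crc32), no matter
    # what the message is
    for m in [b'', b'hello', b'littlefs'*42]:
        c = binascii.crc32(m)
        assert binascii.crc32(m + struct.pack('<I', c)) == 0x2144df1c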
So no manipulation of the commit can change the fcrc...
But even if this did work, or we changed this scheme to use two
different checksums, it would still require calculating the fcrc of
the whole commit to know if we need to tweak the first bit to invalidate
the unlikely-but-problematic case where we happen to match the fcrc. This
would add a large amount of complexity to the commit code.
It's much simpler and cheaper to keep the 1-bit counter in the tag, even
if it adds another moving part to the system.
This fixes most of the remaining bugs (except one with multiple padding
commits + noop erases in test_badblocks), with some other code tweaks.
The biggest change was dropping reliance on end-of-block commits to know
when to stop parsing commits. We can just continue to parse tags and
rely on the crc to catch bad commits, avoiding a backwards-compatibility
hiccup. So no new commit tag.
Also renamed nprogcrc -> fcrc and commitcrc -> ccrc and made naming in
the code a bit more consistent.
Previously, forward-looking CRCs were implemented as two new CRC types,
one for commits with a forward-looking CRC, one without. Both contained
the CRC needed to complete the current commit (note that the commit CRC
must come last!):

            [-- 32 --|-- 32 --|-- 32 --|-- 32 --]
   with:    [ crc3 tag | nprog size | nprog crc | commit crc ]
   without: [ crc2 tag | commit crc ]
This meant there had to be several checks for the two possible
structure sizes, messing up the implementation.

But we already have a mechanism for storing optional metadata: the
different metadata tags! So why not use a separate tag for the
forward-looking CRC, separate from the commit CRC?

            [-- 32 --|-- 32 --|-- 32 --|-- 32 --|-- 32 --]
   with:    [ nprogcrc tag | nprog size | nprog crc | commit tag | commit crc ]
   without: [ commit tag | commit crc ]
I wasn't sure this would actually help that much, since there are still
necessary conditions for whether or not a forward-looking CRC is
present, but in the end it simplified the code quite nicely, and
resulted in a ~200 byte code-cost saving.
This change is necessary to handle out-of-order writes found by pjsg's
fuzzing work.
The problem is that it is possible for (non-NOR) block devices to write
pages in any order, or even to write random data in the case of a
power-loss. This breaks littlefs's use of the first bit in a page to
indicate the erase-state.
pjsg notes this behavior is documented in the W25Q here:
https://community.cypress.com/docs/DOC-10507
---
The basic idea here is to CRC the next page, and use this "erase-state CRC" to
check if the next page is erased and ready to accept programs.
.------------------. \   commit
|     metadata     | |
|                  | +---.
|                  | |   |
|------------------| |   |
| erase-state CRC -----. |
|------------------| | | |
|    commit CRC -----|-|-'
|------------------| / |
|     padding      |   |   padding (doesn't need CRC)
|                  |   |
|------------------| \ |   next prog
|      erased?     | +-'
|        |         | |
|        v         | /
|                  |
|                  |
'------------------'
This is made a bit annoying since littlefs doesn't actually store the
page size (prog_size) in the superblock, since it doesn't need to know
the size for any other operation. We can work around this by storing
both the CRC and the size of the next page when necessary.
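Roughly, the erased check looks something like this Python sketch
(bd_read here is a hypothetical block-device read, not the actual C
implementation):

    import binascii

    # decide if the next page is still erased by crc-ing it and
    # comparing against the fcrc stored by the previous commit;
    # fcrc_size/fcrc_crc are the stored size/CRC pair
    def next_page_erased(bd_read, off, fcrc_size, fcrc_crc):
        data = bd_read(off, fcrc_size)
        return binascii.crc32(data) == fcrc_crc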
Another interesting note is that we don't need to store any
bit-tweaking information, since we read the next page every time we
would need to know how to clobber the erase-state CRC. And since we
only read prog_size, this works really well with our caching, as the
caches must be a multiple of prog_size.
This also brings back the internal lfs_bd_crc function, in which we can
use some optimizations added to lfs_bd_cmp.
Needs some cleanup but the idea is passing most relevant tests.
- Added support for negative numbers in the leb16 encoding with an
optional 'w' prefix.
- Changed prettyasserts.py rule to .a.c => .c, allowing other .a.c files
in the future.
- Updated .gitignore with missing generated files (tags, .csv).
- Removed suite-namespacing of test symbols, these are no longer needed.
- Changed test define overrides to have higher priority than explicit
  defines encoded in test ids, so:

    ./runners/bench_runner bench_dir_open:0f1g12gg2b8c8dgg4e0 -DREAD_SIZE=16

  behaves as expected. Otherwise it's not easy to experiment with known
  failing test cases.
- Fixed issue where the -b flag ignored explicit test/bench ids.
Driven primarily by a want to compare measurements of different runtime
complexities (it's difficult to fit O(n) and O(log n) on the same plot),
this adds the ability to nest subplots in the same .svg which try to align
as much as possible. This turned out to be surprisingly complicated.
As a part of this, adopted matplotlib's relatively recent
constrained_layout, which behaves much more consistently.
Also dropped --legend-left, no one should really be using that.
The difference between ggplot's gray and GitHub's gray was a bit jarring.
This also adds --foreground and --font-color for this sort of additional
color control without needing to add a new flag for every color scheme
out there.
- Renamed struct_.py -> structs.py again.
- Removed lfs.csv, instead preferring script-specific csv files.
- Added *-diff make rules for quick comparison against a previous
result, results are now implicitly written on each run.
For example, `make code` creates lfs.code.csv and prints the summary, which
can be followed by `make code-diff` to compare changes against the saved
lfs.code.csv without overwriting.
- Added nargs=? support for -s and -S, now uses a per-result _sort
attribute to decide sort if fields are unspecified.
For long-running processes (testing with >1pls) these logs can grow
into multiple gigabytes; humorously, we never access more than the last
n lines as requested by --context. Piping the stdout with --stdout does
not use additional RAM.
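The fix is just bounded buffering; something like this sketch, which
keeps only the last n lines in memory:

    import collections

    # keep only the last n lines of an arbitrarily large log; a deque
    # with maxlen discards old lines as new ones arrive, so memory
    # stays O(n) no matter how much the process writes
    def tail_lines(stream, n):
        last = collections.deque(maxlen=n)
        for line in stream:
            last.append(line)
        return list(last)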
- Fixed added/removed count in scripts when an entry has no field in
  the expected results.
- Fixed a python-sort-type issue when the by-field is missing in a
  result.
This happens in rare situations where there is a failed mdir
relocation, interrupted by a power-loss, containing the destination of
a directory rename operation, where the directory being renamed
precedes the relocating mdir in the mdir tail-list. This also requires
that a previous directory rename created a cycle at some point.
If this happens, it's possible for the half-orphan to contain the only
reference to the renamed directory. Since half-orphans contain outdated
state when viewed through the mdir tail-list, the renamed directory
appears to be a full-orphan until we fix the relocating half-orphan.
This causes littlefs to incorrectly remove the renamed directory from
the mdir tail-list, causing catastrophic problems down the line.
The source of the problem is that the two different types of orphans
really operate on two different levels of abstraction: half-orphans fix
failed mdir commits, while full-orphans fix directory removes/renames.
Conflating the two leads to situations where we attempt to fix assumed
problems about the directory tree before we have fixed problems with the
mdir state.
The fix here is to separate out the deorphan search into two passes:
one to fix half-orphans and correct any failed mdir commits, restoring
the mdirs and gstate to a known good state, and a second to fix failed
removes/renames.
---
This was found with the -Plinear heuristic powerloss testing, which now
runs on more geometries. The failing case was:
test_relocations_reentrant_renames:112gg261dk1e3f3:123456789abcdefg1h1i1j1k1
l1m1n1o1p1q1r1s1t1u1v1g2h2i2j2k2l2m2n2o2p2q2r2s2t2
Also fixed/tweaked some parts of the test framework as a part of finding
this bug:
- Fixed off-by-one in exhaustive powerloss state encoding.
- Added --gdb-powerloss-before and --gdb-powerloss-after to help debug
state changes through a failing powerloss, maybe this should be
expanded to any arbitrary powerloss number in the future.
- Added lfs_emubd_crc and lfs_emubd_bdcrc to get block/bd crcs for quick
state comparisons while debugging.
- Fixed bd read/prog/erase counts not being copied during exhaustive
powerloss testing.
- Fixed small typo in lfs_emubd trace.
- Changed --(tool)-tool to --(tool)-path in scripts, this seems to be
a more common name for this sort of flag.
- Changed BUILDDIR to not have implicit slash, makes Makefile internals
a bit more readable.
- Fixed some outdated names hidden in less-often used ifdefs.
Added a couple flags to make the script a bit more flexible, and
removed the littlefs-specific defaults, in line with the other scripts,
which aren't really littlefs-specific. (These defaults can be moved to
the littlefs-specific Makefile easily enough.)
The original behavior can be reproduced like so:
./script/changeprefix.py lfs lfs2 --git
- Fixed prettyasserts.py parsing when '->' is in expr
- Made prettyasserts.py failures not crash (yay dynamic typing)
- Fixed the initial state of the emubd disk file to match the internal
state in RAM
- Fixed true/false getting changed to True/False in test.py/bench.py
defines
- Fixed accidental substring matching in plot.py's --by comparison
- Fixed a missed LFS_BLOCK_CYCLES in test_superblocks.toml.
- Changed test.py/bench.py -v to only show commands being run
Including the test output is still possible with test.py -v -O-, making
the implicit inclusion redundant and noisy.
- Added license comments to bench_runner/test_runner
Note that plotmpl.py tries to share many arguments with plot.py,
allowing plot.py to act as a sort of draft mode for previewing plots
before creating an svg.
Based loosely on Linux's perf tool, perfbd.py uses trace output with
backtraces to aggregate and show the block device usage of all
functions in a program, propagating block device operation cost up the
backtrace for each operation.
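The core propagation idea is simple enough to sketch (hypothetical
sample format; the real script does considerably more bookkeeping):

    import collections

    # charge each sample's cost to every function in its backtrace, so
    # a function's total includes bd operations done by its callees
    def propagate(samples):
        totals = collections.defaultdict(int)
        for backtrace, cost in samples:
            # dedup frames so recursion isn't double-counted
            for frame in set(backtrace):
                totals[frame] += cost
        return totals

    samples = [
        (('lfs_file_read', 'lfs_bd_read'), 512),
        (('lfs_file_read', 'lfs_bd_read'), 512),
        (('lfs_file_write', 'lfs_bd_prog'), 256),
    ]
    # lfs_file_read is charged 1024, though the reads happen in
    # lfs_bd_read
    assert propagate(samples)['lfs_file_read'] == 1024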
This, combined with --trace-period and --trace-freq for
sampling/filtering trace events, allows the bench-runner to very
efficiently record the general cost of block device operations with
very little overhead.
Adopted this as the default side-effect of make bench, replacing
cycle-based performance measurements which are less important for
littlefs.
This adds -P/--propagate and -Z/--depth to perf.py for showing recursive
results, making it easy to narrow down on where spikes in performance
come from.
This ended up being a bit different from stack.py's recursive results,
as we end up with different (diminishing) numbers as we descend.
This provides 2 things:
1. perf integration with the bench/test runners - This is a bit tricky
with perf as it doesn't have its own way to combine perf measurements
across multiple processes. perf.py works around this by writing
everything to a zip file, using flock to synchronize. As a plus, free
compression!
2. Parsing and presentation of perf results in a format consistent with
the other CSV-based tools. This actually ran into a surprising number of
issues:
- We need to process raw events to get the information we want; this
  ends up being a lot of data (~16MiB at 100Hz uncompressed), so we
  parallelize the parsing of each decompressed perf file.
- perf reports raw addresses post-ASLR. It does provide sym+off, which
  is very useful, but to find the source of static functions we need
  to reverse the ASLR by finding the delta that produces the best
  symbol<->addr matches (see the sketch after this list).
- This isn't related to perf, but decoding dwarf line-numbers is
really complicated. You basically need to write a tiny VM.
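Here's roughly what that ASLR reversal looks like (a sketch with
hypothetical sample/symbol-table formats):

    import collections

    # perf gives us raw post-ASLR addresses plus sym+off; the delta
    # that explains the most (addr, sym+off) pairs is taken to be the
    # ASLR slide, letting us map raw addresses back to static symbols
    def find_slide(samples, symtab):
        votes = collections.Counter(
            addr - (symtab[sym] + off)
            for addr, sym, off in samples
            if sym in symtab)
        delta, _ = votes.most_common(1)[0]
        return delta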
This also turns on perf measurement by default for the bench-runner, but at a
low frequency (100 Hz). This can be decreased or removed in the future
if it causes any slowdown.
The main change is requiring field names for -b/-f/-s/-S, this
is a bit more powerful, and supports hidden extra fields, but
can require a bit more typing in some cases.
- Changed multi-field flags to action=append instead of comma-separated.
- Dropped short-names for geometries/powerlosses
- Renamed -Pexponential -> -Plog
- Allowed omitting the 0 for -W0/-H0/-n0 and made -j0 consistent
- Better handling of --xlim/--ylim
Instead of trying to align to block boundaries, tracebd.py now just
aliases to whatever dimensions are provided.
Also reworked how scripts handle default sizing. Now using reasonable
defaults with 0 being a placeholder for automatic sizing. The addition
of -z/--cat makes it possible to pipe directly to stdout.
Also added support for dots/braille output which can capture more
detail, though care needs to be taken to not rely on accurate coloring.
Now both scripts also fall back to guessing which fields to use based
on which fields can be converted to integers. This is more fallible and
doesn't work for tests/benchmarks, but in those cases explicit fields
can be used (which is what would be needed without guessing anyway).
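The guessing is just type-sniffing the CSV columns; roughly (with
hypothetical names):

    # a column is usable as a data field if every non-empty entry in
    # it parses as an integer; other columns are treated as keys
    def infer_fields(rows, names):
        def is_int(v):
            try:
                int(v, 0)
                return True
            except ValueError:
                return False
        return [n for n in names
            if any(row.get(n) for row in rows)
            and all(is_int(row[n]) for row in rows if row.get(n))]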
These are really just different flavors of test.py and test_runner.c
without support for power-loss testing, but with support for measuring
the cumulative number of bytes read, programmed, and erased.
Note that the existing define parameterization should work perfectly
fine for running benchmarks across various dimensions:
./scripts/bench.py \
    runners/bench_runner \
    bench_file_read \
    -gnor \
    -DSIZE='range(0,131072,1024)'
Also added a couple basic benchmarks as a starting point.
- Added the littlefs license note to the scripts.
- Adopted parse_intermixed_args everywhere for more consistent arg
handling.
- Removed argparse's implicit help-text formatting, as it does not
  work with parse_intermixed_args and breaks sometimes.
- Used string concatenation for argparse text everywhere; backslashed
  line continuations only work with argparse because it strips
  redundant whitespace.
- Consistent argparse formatting.
- Consistent openio mode handling.
- Consistent color argument handling.
- Adopted functools.lru_cache in tracebd.py.
- Moved unicode printing behind --subscripts in tracebd.py, making all
  scripts ascii by default.
- Renamed pretty_asserts.py -> prettyasserts.py.
- Renamed struct.py -> struct_.py, the original name conflicts with
Python's built in struct module in horrible ways.
The main benefit is small test ids everywhere, though this comes with
the downside of needing longer names to properly prefix and avoid
collisions. But this fits into the rest of the scripts with globally
unique names a bit better. This is a C project after all.
The other small benefit is test generators may have an easier time since
per-case symbols can expect to be unique.
With more scripts generating CSV files this moves most CSV manipulation
into summary.py, which can now handle more or less any arbitrary CSV
file with arbitrary names and fields.
This also includes a bunch of additional, probably unnecessary, tweaks:
- summary.py/coverage.py use a custom fractional type for encoding
fractions, this will also be used for test counts.
- Added a smaller diff output for size scripts with the --percent flag.
- Added line and hit info to coverage.py's CSV files.
- Added --tree flag to stack.py to show only the call tree without
other noise.
- Renamed structs.py to struct.py.
- Changed a few flags around for consistency between size/summary scripts.
- Added `make sizes` alias.
- Added `make lfs.code.csv` rules
This is really more work for the bench runner. With this change,
defines can be manipulated at a rather high level at runtime, which
should be useful for generating benchmarks across various dimensions.
The define grammar in the test_runner is now a bit more powerful,
accepting:
1. A single value: -DN=42
2. A list of values, which get permuted: -DN=1,2,3
3. A range: -DN=range(10)
4. Some combo: -DN=1,2,range(3,0,-1)
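A sketch of how this runner-side grammar can be parsed (not the actual
implementation):

    import re

    # split on commas that aren't inside parentheses, then treat each
    # part as either an integer or a range(...) to permute over
    def parse_define(s):
        values = []
        for part in re.split(r',(?![^(]*\))', s):
            if part.startswith('range'):
                values.extend(eval(part, {'__builtins__': {}, 'range': range}))
            else:
                values.append(int(part, 0))
        return values

    assert parse_define('1,2,range(3,0,-1)') == [1, 2, 3, 2, 1]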
This is more complex in the test .toml defines, which can also be C
expressions:
1. A single value: define=42
2. A single expression: define='42*42'
3. A list: define=[1,2,3]
4. A comma separated string: define='1,2,3'
5. A range: define='42*range(10)'
6. This mess: define=[1,2,'3,4,range(2)*range(2)+3']
This is probably how the test runner should have been implemented in the
first place, but it took a few tries to get here.
This makes it so the test identifier, which is a bit longer now, fully
encodes the state of the defines in the test. This removes the need for
the extra geometry field and allows reproduction of tests with custom
defines at runtime.
The test runner may have already seemed like a solved problem, but these
changes are really to enable repurposing the test runner as a bench
runner.
These are just some minor quality of life improvements:
- Added a "make build-test" alias
- Made test runner a positional arg for test.py since it is almost
always required. This shortens the command line invocation most of the
time.
- Added --context to test.py
- Renamed --output in test.py to --stdout, note this still merges
stderr. Maybe at some point these should be split, but it's not really
worth it for now.
- Reworked the test_id parsing code a bit.
- Changed the test runner --step to take a range such as -s0,12,2
- Changed tracebd.py --block and --off to take ranges
Previously I didn't think this would work without making test.py aware
of the number of implicit defines, which risks being incredibly fragile.
Fortunately it turns out we can defer the actual array size calculation
until the C preprocessor. This simplifies a few things.
Also added a bitmap-based caching layer for the defines. Since the test
defines have been upgraded to callbacks, recursive defines risk
spending a decent amount of time evaluating on every lookup. Some quick
testing shows 408015154 hits to 46160 misses, so that's a good sign.
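The cache itself can be sketched as a bitmap plus a value array (the
real runner is C; names here are hypothetical):

    # one bit per define records whether its slot holds a cached
    # value, so callback-based (possibly recursive) defines are only
    # evaluated once; resetting between permutations just clears the
    # bitmap
    class DefineCache:
        def __init__(self, n):
            self.valid = 0
            self.values = [None]*n

        def lookup(self, i, evaluate):
            if not (self.valid >> i) & 1:
                self.values[i] = evaluate(i)
                self.valid |= 1 << i
            return self.values[i]

        def reset(self):
            self.valid = 0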
Also changed the geometries to be their own leb16-encoded part of the
test identifier. This means any geometry can be captured and reproduced
with just the test identifier. Here are the current test geometries:
./runners/test_runner --list-geometries
geometry              read    prog   erase   count      size  leb16
d,default               16      16     512    2048   1048576  g1gg2
e,eeprom                 1       1     512    2048   1048576  1gg2
E,emmc                 512     512     512    2048   1048576  gg2
n,nor                    1       1    4096     256   1048576  1ggg1
N,nand                4096    4096   32768      32   1048576  ggg1ggg8
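The encoding appears to be little-endian nibbles, 4 bits per character,
where '0'-'9'/'a'-'f' terminate a value and 'g'-'v' mean more nibbles
follow (with, per the earlier tweak, an optional 'w' prefix for
negative values). A decoder sketch consistent with the table above:

    # decode a string of concatenated leb16 values
    def fromleb16(s):
        words, word, shift = [], 0, 0
        for c in s:
            nibble = 'ghijklmnopqrstuv'.find(c)
            if nibble == -1:
                # '0'-'9'/'a'-'f' terminate the current value
                words.append(word | int(c, 16) << shift)
                word, shift = 0, 0
            else:
                # 'g'-'v' encode a nibble with a continuation bit
                word |= nibble << shift
                shift += 4
        return words

    assert fromleb16('gg2') == [512]               # emmc's 512s
    assert fromleb16('ggg1ggg8') == [4096, 32768]  # nand's sizes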
Based on a handful of local hacky variations, this sort of trace
rendering is surprisingly useful for getting an understanding of how
different filesystem operations interact with the underlying
block-device.
At some point it would probably be good to reimplement this in a
compiled language. Parsing and tracking the trace output quickly
becomes a bottleneck with the amount of trace output the tests
generate.
Note also that since tracebd.py runs on trace output, it can also be
used to debug logged block-device operations post-run.
This mostly involved futzing around with some of the less intuitive
parts of Unix's named-pipes behavior.
This is a bit important since the tests can quickly generate several
gigabytes of trace output.