Commit Graph

839 Commits

Author SHA1 Message Date
Christopher Haster 5812d2b5cf Reworked how multi-layered defines work in the test-runner
In the test-runner, defines are parameterized constants (limited
to integers) that are generated from the test suite tomls, resulting
in many permutations of each test.

In order to make this efficient, these defines are implemented as
multi-layered lookup tables, using per-layer/per-scope indirect
mappings. This lets the test-runner and test suites define their
own defines with compile-time indexes independently. It also makes
building of the lookup tables very efficient, since they can be
incrementally populated as we expand the test permutations.

The four current define layers and when we need to build them:

layer                           defines         predefine_map   define_map
user-provided overrides         per-run         per-run         per-suite
per-permutation defines         per-perm        per-case        per-perm
per-geometry defines            per-perm        compile-time    -
default defines                 compile-time    compile-time    -
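
A minimal sketch of this layered lookup, with invented names (the real
test-runner tables are more involved): each layer maps a global define
index through an indirect map to a layer-local slot, and lookup falls
through the layers in priority order. Rebuilding a permutation then only
means repopulating a single layer's tables.

    #include <stdint.h>
    #include <stddef.h>

    #define DEFINE_NONE ((size_t)-1)

    typedef struct define_layer {
        const intmax_t *defines;  // layer-local define values
        const size_t *define_map; // global id -> local slot, or DEFINE_NONE
        size_t count;             // number of global ids this map covers
    } define_layer_t;

    // layers in priority order: user overrides, per-permutation,
    // per-geometry, compile-time defaults
    static intmax_t define_lookup(
            const define_layer_t *layers, size_t layer_count, size_t id) {
        for (size_t l = 0; l < layer_count; l++) {
            if (id < layers[l].count
                    && layers[l].define_map[id] != DEFINE_NONE) {
                return layers[l].defines[layers[l].define_map[id]];
            }
        }
        return 0; // no layer provides this define
    }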
2022-06-06 01:35:00 -05:00
Christopher Haster 64436933e2 Putting together rewritten test.py script 2022-06-06 01:34:57 -05:00
Gerard Marull-Paretas 652f2c5646 zephyr: update include paths to use <zephyr/...>
Zephyr has prefixed all of its includes with <zephyr/...>. While the old
mode can still be used (CONFIG_LEGACY_INCLUDE_PATH) and is still enabled
by default, it's better to be prepared for its removal in the future.
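
For instance (kernel.h here is just a representative Zephyr header, not
necessarily one this commit touched):

    /* before, relying on the legacy include paths: */
    #include <kernel.h>

    /* after: */
    #include <zephyr/kernel.h>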

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2022-05-11 20:26:33 +09:00
Kevin ORourke 6c720dc2bb Fix unused function warning with LFS_NO_MALLOC 2022-04-25 12:12:41 +02:00
Christopher Haster 92a600a980 Added trace and persist flags to test_runner 2022-04-19 02:12:24 -05:00
Christopher Haster 9281ce26a7 More test_runner progress
- Added filtering based on suite, case, perm, type, geometry
- Added --skip, --count, and --every (will be used for parallelism)
- Implemented --list-defines
- Better helptext for flags with arguments
- Other minor tweaks
2022-04-18 15:15:57 -05:00
Christopher Haster 4b0aa6272e Some more minor improvements to the test_runner
- Indirect index map instead of bitmap+sparse array
- test_define_t and test_type_t
- Added back conditional filtering
- Added suite-level defines and filtering
2022-04-18 00:09:01 -05:00
Christopher Haster d683f1c76c Reintroduced test-defines into the new test_runner
This moves defines entirely into the runtime of the test_runner,
simplifying things and reducing the amount of generated code that needs
to be built, at the cost of limiting test-defines to uintmax_t types.

This is implemented using a set of index-based scopes (created by
test.py) that allow different layers to override defines from other
layers, accessible through the global `test_define` function.

layers:
1. command-line overrides
2. per-case defines
3. per-geometry defines
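
A sketch of what that global lookup might look like (illustrative only;
the real test_define and its scope tables live in the generated
test-runner sources):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct test_define_scope {
        const uintmax_t *values; // values indexed by compile-time define id
        const bool *defined;     // whether this layer overrides a given id
    } test_define_scope_t;

    // [0] command-line overrides, [1] per-case, [2] per-geometry;
    // test.py fills these in when generating the runner
    static test_define_scope_t test_define_scopes[3];

    uintmax_t test_define(size_t id) {
        for (size_t layer = 0; layer < 3; layer++) {
            if (test_define_scopes[layer].defined
                    && test_define_scopes[layer].defined[id]) {
                return test_define_scopes[layer].values[id];
            }
        }
        return 0; // no layer overrides this define
    }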
2022-04-17 21:45:47 -05:00
Christopher Haster 56a990336b Created new test_runner.c and test_.py
This tries out a different design for testing. The goals are to make the
test infrastructure a bit simpler, with clear stages for building and
running, and faster, by avoiding rebuilding lfs.c n times.
2022-04-16 13:50:34 -05:00
Christopher Haster 40dba4a556 Merge pull request #669 from littlefs-project/devel
Minor release: v2.5
2022-04-13 22:49:41 -05:00
Christopher Haster 148e312ea3 Bumped minor version to v2.5 2022-04-13 22:47:43 -05:00
Christopher Haster abbfe8e92e Reduced lfs_dir_traverse's explicit stack to 3 frames
This is possible thanks to invoxiaamo's optimization of compacting
renames to avoid the O(n^3) nested filters. Not only does this
significantly reduce the runtime cost of that operation, but it
reduces the maximum possible depth of recursion to 3 frames.

Deepest lfs_dir_traverse before:

traverse with commit
'-> traverse with filter
    '-> traverse with move
        '-> traverse with filter

Deepest lfs_dir_traverse after:

traverse with commit
'-> traverse with move
    '-> traverse with filter
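
The underlying pattern, as a self-contained toy (invented names, not the
littlefs source): recursion with a known maximum depth becomes a
fixed-size frame array plus a loop, so worst-case stack usage is a
compile-time constant.

    #include <stdio.h>

    #define MAX_DEPTH 3 // commit -> move -> filter

    struct frame {
        int pos;   // where to resume at this depth
        int limit; // how much work this depth performs
    };

    static void traverse_bounded(void) {
        struct frame stack[MAX_DEPTH];
        int sp = 0;
        stack[sp] = (struct frame){.pos = 0, .limit = 2};

        while (sp >= 0) {
            struct frame *f = &stack[sp];
            if (f->pos >= f->limit) {
                sp -= 1; // "return" by popping the frame
                continue;
            }
            printf("visiting depth %d, pos %d\n", sp, f->pos);
            f->pos += 1;
            if (sp + 1 < MAX_DEPTH) {
                // "recurse" by pushing a frame instead of calling ourselves
                sp += 1;
                stack[sp] = (struct frame){.pos = 0, .limit = 2};
            }
        }
    }

    int main(void) {
        traverse_bounded();
        return 0;
    }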
2022-04-10 23:27:49 -05:00
Christopher Haster c60c977c25 Merge pull request #658 from littlefs-project/no-recursion
Restructure littlefs to not use recursion, measure stack usage
2022-04-10 23:23:39 -05:00
Christopher Haster 3ce64d1ac0 Merge pull request #666 from invoxiaamo/rename-opti2
Optimization of the rename case.
2022-04-10 22:02:04 -05:00
Christopher Haster 0ced3623d4 Merge pull request #657 from littlefs-project/copyright-update
Update copyright notice
2022-04-10 21:59:27 -05:00
Christopher Haster 5451a6d503 Merge pull request #643 from microist/fix-filebd-windows
Fixes to use lfs_filebd on windows platforms
2022-04-10 21:56:08 -05:00
Martin Hoffmann 1e038c81fc Fixes to use lfs_filebd on windows platforms
There are two issues when using the file-based block device emulation
on Windows platforms:
1. There is no fsync implementation available. This needs to be mapped
   to a Windows-specific FlushFileBuffers system call.
2. The block device file needs to be opened as a binary file (O_BINARY).
   The corresponding flag is not required on Linux.
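
A sketch of both fixes under assumed names (bd_sync and BD_OPEN_FLAGS are
invented; FlushFileBuffers, _get_osfhandle, and O_BINARY are the real
Windows APIs/flags involved):

    #include <fcntl.h>

    #ifdef _WIN32
    #include <windows.h>
    #include <io.h>

    // 1. map fsync onto the Windows FlushFileBuffers system call
    static int bd_sync(int fd) {
        HANDLE h = (HANDLE)_get_osfhandle(fd);
        if (h == INVALID_HANDLE_VALUE) {
            return -1;
        }
        return FlushFileBuffers(h) ? 0 : -1;
    }

    // 2. open the backing file in binary mode, or Windows will translate
    // line endings and corrupt the block device image
    #define BD_OPEN_FLAGS (O_RDWR | O_CREAT | O_BINARY)

    #else
    #include <unistd.h>

    static int bd_sync(int fd) {
        return fsync(fd);
    }

    // O_BINARY is not needed (and not defined) on Linux
    #define BD_OPEN_FLAGS (O_RDWR | O_CREAT)
    #endif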
2022-04-10 21:55:00 -05:00
Christopher Haster f28ac3ea7d Merge pull request #638 from lmapii/master
Removed invalid overwrite for return value.
2022-04-10 21:52:48 -05:00
Christopher Haster a94fbda1cd Merge pull request #632 from robekras/patch-1
Fix lfs_file_rawseek performance issue
2022-04-10 21:52:27 -05:00
Christopher Haster cc025653ed Merge pull request #630 from Johnxjj/dev-johnxjj
add a limit so the cursor cannot be set to a negative number
2022-04-10 14:44:47 -05:00
Christopher Haster bfb9bd2483 Merge pull request #614 from nnayo/fix_no_malloc_2
don't use lfs_file_open() when LFS_NO_MALLOC is set
2022-04-10 14:44:33 -05:00
Christopher Haster f40b854ab5 Merge pull request #584 from colin-foster-in-advantage/block_size_mount_fail
Fail mount when the block size changes
2022-04-10 14:44:24 -05:00
Arnaud Mouiche c2fa1bb7df Optimization of the rename case.
Rename can be VERY time consuming. One of the reasons is the 4-level
recursion depth of lfs_dir_traverse() seen if a compaction happens during
the rename.

lfs_dir_compact()
  size computation
    [1] lfs_dir_traverse(cb=lfs_dir_commit_size)
         - do 'duplicates and tag update'
       [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
           - Reaching a LFS_FROM_MOVE tag (here)
         [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
             - do 'duplicates and tag update'
           [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])
  followed by the compaction itself:
    [1] lfs_dir_traverse(cb=lfs_dir_commit_commit)
         - do 'duplicates and tag update'
       [2] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1])
           - Reaching a LFS_FROM_MOVE tag (here)
         [3] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[1]) <= on 'from' dir
             - do 'duplicates and tag update'
           [4] lfs_dir_traverse(cb=lfs_dir_traverse_filter, data=tag[3])

Yet analysis shows that levels [3] and [4] don't accomplish anything
if the callback is lfs_dir_traverse_filter...

A practical example:

- format and mount a 4KB block FS
- create 100 files of 256 Bytes named "/dummy_%d"
- create a 1024 Byte file "/test"
- rename "/test" "/test_rename"
- create a 1024 Byte file "/test"
- rename "/test" "/test_rename"
This triggers a compaction where lfs_dir_traverse was called 148393 times,
generating 25e6+ lfs_bd_read calls (~100 MB+ of data).

With the optimization, lfs_dir_traverse is now called 3248 times
(589e3 lfs_bd_read calls, ~2.3 MB of data).

=> a 43x improvement...
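
The heart of the shortcut, sketched with minimal scaffolding
(traverse_tag and the stub body are invented; lfs_dir_traverse_filter is
the callback named above): when the running pass is itself the filter,
the nested filter passes [3] and [4] are skipped outright.

    #include <stdbool.h>

    typedef unsigned long lfs_tag_t;
    typedef int (*traverse_cb_t)(void *data, lfs_tag_t tag,
            const void *buffer);

    static int lfs_dir_traverse_filter(void *data, lfs_tag_t tag,
            const void *buffer) {
        (void)data; (void)tag; (void)buffer;
        return 0; // stub; the real filter drops duplicate/outdated tags
    }

    static void traverse_tag(lfs_tag_t tag, bool tag_is_move,
            traverse_cb_t cb) {
        if (tag_is_move && cb == lfs_dir_traverse_filter) {
            // a filter pass nested in a filter pass accomplishes
            // nothing, so levels [3] and [4] are dropped
            return;
        }
        // ... otherwise recurse into the move's 'from' dir as before ...
        (void)tag;
    }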
2022-04-10 13:12:45 -05:00
martin 3b62ec1c47 Updated error handling for NOSPC 2022-04-10 13:00:13 -05:00
xujunjun b898977fd8 Set a limit so the cursor cannot be set to a negative number 2022-04-10 12:57:42 -05:00
Colin Foster cf274e6ec6 Squash of CR changes
- nit: Moving brace to end of if statement line for consistency
- mount: add more debug info per CR
- Fix compiler error from extra parentheses
- Fix superblock typo
2022-04-10 12:53:33 -05:00
Christopher Haster 425dc810a5 Modified robekras's optimization to avoid flush for all seeks in cache
The basic idea is simple: if we seek to a position in the currently
loaded cache, don't flush the cache. Notably this ensures that seek is
always as fast or faster than just reading the data.

This is a bit tricky since we need to check that our new block and
offset match the cache. Fortunately we can skip the block check by
reevaluating the block index for both the current and new positions.

Note this only works when reading; for writing we need to always flush
the cache, or else we will lose the pending write data.
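
A sketch of the check, with invented names and simplified to flat block
arithmetic (the real lfs_file_rawseek tracks more state):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t lfs_off_t;
    typedef uint32_t lfs_size_t;

    // true if a read-mode seek from pos to newpos can keep the cache;
    // write-mode seeks must still flush, or pending data would be lost
    static bool seek_reuses_cache(lfs_off_t pos, lfs_off_t newpos,
            lfs_size_t block_size) {
        // the same block index for both positions implies the cached
        // block still covers the new position, so no reread is needed
        return pos / block_size == newpos / block_size;
    }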
2022-04-10 12:46:51 -05:00
robekras a6f01b7d6e Update lfs.c
This should fix the performance issue when a new seek position falls
within the currently cached data, avoiding unnecessary rereads of file data.
2022-04-09 02:12:18 -05:00
Christopher Haster 9c7e232086 Fixed missing definition of lfs_cache_drop in readonly mode
Interestingly this was introduced by two different PRs which were not tested
together until pre-release testing:

- Fix lfs_file_seek doesn't update cache properties correctly
- Fix compiler warnings when LFS_READONLY defined
2022-03-21 20:29:04 -05:00
Christopher Haster c676bcee4c Merge branch 'bf_lfs_file_seek_readonly' into HEAD 2022-03-20 23:16:15 -05:00
Christopher Haster 03f088b92c Tweaked lfs_file_flush to still flush caches when built under LFS_READONLY
A slight variation on the fix from ondrap
2022-03-20 23:14:34 -05:00
ondrap e955b9f65d Fix lfs_file_seek not updating cache properties correctly in readonly mode by invalidating the cache. 2022-03-20 23:10:11 -05:00
Christopher Haster 99f58139cb Merge pull request #650 from Kongduino/patch-1
Typo
2022-03-20 23:09:41 -05:00
Christopher Haster 5801169348 Merge pull request #635 from mikee47/fix/spelling-errors
Fix spelling errors
2022-03-20 23:09:23 -05:00
Christopher Haster 2d6f4ead13 Merge pull request #620 from XinStellaris/master
fix bug: lfs_alloc will alloc one block repeatedly in multiple splits
2022-03-20 23:09:04 -05:00
Christopher Haster 3d1b89b41a Merge pull request #612 from tniessen/patch-1
Always zero rambd buffer before first use
2022-03-20 23:08:31 -05:00
Christopher Haster 45cefb825d Merge pull request #606 from eclig/improve-config-doc
Specify unit of the size members of the lfs_config struct
2022-03-20 23:07:51 -05:00
Christopher Haster bbb9e3873e Merge pull request #593 from tannewt/patch-1
Indent sub-portions of tag fields
2022-03-20 23:07:32 -05:00
Christopher Haster c6d3c48939 Merge pull request #569 from tniessen/fix-compilation-with-lfs_readonly
Fix compiler warnings when LFS_READONLY defined
2022-03-20 23:06:50 -05:00
Christopher Haster 2db5dc80c2 Update copyright notice 2022-03-20 23:03:52 -05:00
田昕 1363c9f9d4 fix bug: lfs_alloc will alloc one block repeatedly in multiple splits
BUG CASE: Assume there are 6 blocks in littlefs, and blocks 0,1,2,3 are already allocated. Block 0 has a tail pair of {2, 3}. Now we try to write more into block 0.
When writing to block 0, we will split (FIRST SPLIT), thus allocating blocks 4 and 5. Up to now, everything is as expected.
Then we will try to commit in block 4, during which a split (SECOND SPLIT) is triggered again (in our case, some files are large, some are small, and one split may not be enough). Still as expected.
The BUG happens when we try to alloc a new block pair for the second split:
As the lookahead buffer reaches its end, a new lookahead buffer is generated from flash content, and blocks 4 and 5 appear as unused blocks in the new lookahead buffer because they have not been programmed yet. HOWEVER, blocks 4 and 5 were already claimed by the first split! The result is that blocks 4 and 5 are allocated again (this is where things go wrong).

Commit ce2c01f introduced this bug. In that commit, an lfs_alloc_ack was inserted in lfs_dir_split, which causes split to reset lfs->free.ack to the block count.
In summary, this problem exists in versions after 2.1.3.

Solution: don't call lfs_alloc_ack in lfs_dir_split.
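
A simplified model of the failure (not the real allocator; lfs_alloc_ack
and lfs->free.ack are the names from the message above):

    #include <stdint.h>

    struct lfs_free_model {
        uint32_t ack; // blocks the allocator may still scan; an ack
                      // asserts all prior allocations are safely on disk
    };

    static void lfs_alloc_ack_model(struct lfs_free_model *f,
            uint32_t block_count) {
        f->ack = block_count;
    }

    // the bug as a timeline:
    //   1. first split allocates blocks 4 and 5 (not yet programmed)
    //   2. lfs_dir_split acks -> the allocator now trusts flash state
    //   3. second split exhausts the lookahead buffer and rebuilds it
    //      from flash, where blocks 4 and 5 still look erased and unused
    //   4. blocks 4 and 5 are handed out a second time
    // hence the fix: never ack inside lfs_dir_split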
2022-03-20 20:53:48 -05:00
Kongduino 5bc682a0d4 Typo
s/propogated/propagated/
2022-03-20 20:49:45 -05:00
Christopher Haster 8109f28266 Removed recursion from lfs_dir_traverse
lfs_dir_traverse is a bit unpleasant in that it is inherently a
recursive function, but with a strict bound of 4 calls (commit -> filter ->
move -> filter), and efforts to unroll the recursion come at a
significant code cost.

It turns out the best solution I've found so far is to simply create an
explicit stack with an explicit bound of 4 calls (or more accurately,
3 pushed frames).

---

This actually highlights one of the bigger flaws in littlefs right now,
which is that this function, lfs_dir_traverse, takes O(n^2) disk reads
to traverse.

Note that LFS_FROM_MOVE can only occur once per commit, which is why
this code is O(n^2) and not O(n^4).
2022-03-20 04:27:54 -05:00
Christopher Haster fedf646c79 Removed recursion in file read/writes
This mostly just required separate functions for "lfs_file_rawwrite" and
"lfs_file_flushedwrite", since lfs_file_flush recursively invokes
lfs_file_rawread and lfs_file_rawwrite.

This comes at a code cost, but gives us bounded and measurable RAM usage
on this code path.
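
A sketch of the resulting layering, with simplified signatures (the real
functions take lfs_t and lfs_file_t arguments and do actual I/O):

    // innermost variant: assumes any pending cached data is already
    // flushed, so it never needs to call back into flush
    static int lfs_file_flushedwrite(void *file, const void *buffer,
            unsigned size) {
        (void)file; (void)buffer; (void)size;
        return 0; // stub for the actual block writes
    }

    // flush writes back pending data only via the flushed variant --
    // there is no path from here back into the raw functions
    static int lfs_file_flush(void *file) {
        return lfs_file_flushedwrite(file, 0, 0);
    }

    // outermost variant: flush first, then delegate
    static int lfs_file_rawwrite(void *file, const void *buffer,
            unsigned size) {
        int err = lfs_file_flush(file);
        if (err) {
            return err;
        }
        return lfs_file_flushedwrite(file, buffer, size);
    }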
2022-03-20 04:25:24 -05:00
Christopher Haster 84da4c0b1a Removed recursion from commit/relocate code path
lfs_dir_commit originally relied heavily on tail-recursion, though at
least one path (through relocations) was not tail-recursive, and could
cause unbounded stack usage in extreme cases of bad blocks. (Keep in
mind even extreme cases of bad blocks should be in scope for littlefs).

In order to remove recursion from this code path, several changes were
required:

- The lfs_dir_compact logic had to be somewhat inverted. Instead of
  first compacting and then resolving issues such as relocations and
  orphans, the overarching lfs_dir_commit now contains a state-machine
  which after committing or compacting handles the extra changes to the
  filesystem in a single, non-recursive loop

- Instead of fixing all relocations recursively, >1 relocation requires
  deferring to a full deorphan step. This step is unfortunately an
  additional n^2 process. It also required some changes to lfs_deorphan
  in order to ignore intentional orphans created as an intermediary in
  lfs_mkdir. Maybe in the future we should remove these.

- Tail recursion normally found in lfs_fs_deorphan had to be rewritten
  as a loop which restarts any time a new commit causes a relocation.
  This does show that the algorithm may not terminate, but only if every
  block is bad, which will eventually cause littlefs to run out of
  blocks to write to.
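
A skeleton of that non-recursive shape, with invented states (real
transitions depend on disk results; the happy path here is stubbed to
succeed immediately):

    enum commit_state {
        COMMIT,   // try to append to the current metadata pair
        COMPACT,  // out of space in the pair, garbage-collect it
        RELOCATE, // a bad block forced the pair to move
        DEORPHAN, // >1 relocation defers to a full deorphan pass
        DONE,
    };

    static int dir_commit(void) {
        enum commit_state state = COMMIT;
        int relocations = 0;

        while (state != DONE) {
            switch (state) {
            case COMMIT:
                // on success -> DONE; on overflow -> COMPACT;
                // on a bad block -> RELOCATE
                state = DONE;
                break;
            case COMPACT:
                state = COMMIT; // retry the commit after compacting
                break;
            case RELOCATE:
                relocations += 1;
                state = (relocations > 1) ? DEORPHAN : COMMIT;
                break;
            case DEORPHAN:
                // the loop restarts any time a new commit relocates
                state = COMMIT;
                break;
            default:
                state = DONE;
                break;
            }
        }
        return 0;
    }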
2022-03-20 04:24:44 -05:00
Christopher Haster 554e4b1444 Fixed Popen deadlock issue in test.py
As noted in Python's subprocess library:

> This will deadlock when using stdout=PIPE and/or stderr=PIPE and the
> child process generates enough output to a pipe such that it blocks
> waiting for the OS pipe buffer to accept more data.

Curiously, this only became a problem when updating to Ubuntu 20.04
in CI (python3.6 -> python3.8).
2022-03-20 03:44:39 -05:00
Christopher Haster fe8f3d4f18 Changed ./scripts/structs.py to organize by header file
Avoids redundant counting of structs shared in multiple .c files, which
is very common. This is different from the other scripts,
code.py/data.py/stack.py, but this difference makes sense as struct
declarations have a very different lifetime.
2022-03-20 03:41:37 -05:00
Christopher Haster 316b019f41 In CI, determine loop devices dynamically to avoid conflicts with Ubuntu snaps
Introduced when updating CI to Ubuntu 20.04: Ubuntu snaps consume
loop devices, which conflicts with our assumption that /dev/loop0
will always be unused. Changed to request a dynamic loop device from
losetup, though it would have been nice if Ubuntu snaps allocated
from the last device or something.
2022-03-20 03:39:23 -05:00
Christopher Haster 8475c8064d Limit ./scripts/structs.py to report structs in local .h files
This requires parsing an additional section of the dwarfinfo (--dwarf=rawlines)
to get the declaration file info.

---

Interpreting the results of ./scripts/structs.py reporting is a bit more
complicated than for the other scripts: structs aren't used in a consistent
manner, so the cost of a large struct depends on the context in which it
is used.

But that being said, there really isn't much reason to report
internal-only structs. These structs really only exist for type-checking
in internal algorithms, and their cost will end up reflected in other RAM
measurements, either stack, heap, or other.
2022-03-20 03:39:23 -05:00
Christopher Haster 563af5f364 Cleaned up make clean 2022-03-20 03:39:23 -05:00