The previous implementation had a check-then-create race condition where
multiple threads could simultaneously:
1. Check the cache and find no provider (line 122)
2. Create their own providers (lines 126-145)
3. Insert into cache (line 161)
This resulted in multiple credential providers being created when
downloading multiple packages in parallel, as each .narinfo download
would trigger provider creation on its own thread.
Fix by using boost::concurrent_flat_map's try_emplace_and_cvisit, which
provides atomic get-or-create semantics:
- f1 callback: Called atomically during insertion, creates the provider
- f2 callback: Called if key exists, returns cached provider
- Other threads are blocked during f1, so no nullptr is ever visible
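A minimal sketch of the get-or-create pattern (hypothetical names such as
CredentialProvider and providerCache, not the actual code; assumes a Boost
version in which f1 gets write access to the newly inserted element):

#include <boost/unordered/concurrent_flat_map.hpp>
#include <memory>
#include <string>

struct CredentialProvider { /* ... */ };  // hypothetical stand-in

using ProviderPtr = std::shared_ptr<CredentialProvider>;

static boost::concurrent_flat_map<std::string, ProviderPtr> providerCache;

ProviderPtr getOrCreateProvider(const std::string & profile)
{
    ProviderPtr result;
    providerCache.try_emplace_and_cvisit(
        profile,
        nullptr, // placeholder; filled in atomically by f1
        // f1: runs while the freshly inserted element is still locked,
        // so no other thread can observe the nullptr placeholder.
        [&](auto & kv) {
            kv.second = std::make_shared<CredentialProvider>();
            result = kv.second;
        },
        // f2: the key already exists; return the cached provider.
        [&](const auto & kv) { result = kv.second; });
    return result;
}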
This will reduce the load on Hydra. It doesn't make sense to
build two slightly different variants whose only difference is
the nix-perl-bindings and the additional sanitizers.
There are some unfortunate ODR violations that get diagnosed by GCC but not by Clang
for static inline constexpr variables defined inside the class body:
template<typename T>
struct static_const
{
    static JSON_INLINE_VARIABLE constexpr T value{};
};
This can pretty much be ignored. The same problem exists for std::piecewise_construct:
http://lists.boost.org/Archives/boost/2007/06/123353.php
==2455704==ERROR: AddressSanitizer: odr-violation (0x7efddc460e20):
[1] size=1 'value' /nix/store/235hvgzcbl06fxy53515q8sr6lljvf68-nlohmann_json-3.11.3/include/nlohmann/detail/meta/cpp_future.hpp:156:45 in /nix/store/pkmljfq97a83dbanr0n64zbm8cyhna33-nix-store-2.33.0pre/lib/libnixstore.so.2.33.0
[2] size=1 'value' /nix/store/235hvgzcbl06fxy53515q8sr6lljvf68-nlohmann_json-3.11.3/include/nlohmann/detail/meta/cpp_future.hpp:156:45 in /nix/store/gbjpkjj0g8vk20fzlyrwj491gwp6g1qw-nix-util-2.33.0pre/lib/libnixutil.so.2.33.0
Instead of specifying environment variables all the time,
we can embed the __asan_default_options symbol
in all executables / shared objects. This reduces code
duplication.
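For example, a translation unit linked into every binary can define the
symbol like this (the option string shown is only illustrative, not the
exact set used here):

extern "C" const char * __asan_default_options()
{
    // The returned string is parsed by ASan at startup, exactly as if it
    // had been passed via the ASAN_OPTIONS environment variable.
    return "abort_on_error=1:print_summary=1";
}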
This change overrides __assert_fail on glibc/musl
to call std::terminate, for which we have a custom
handler. This ensures that we have more context
to diagnose issues encountered by users in the wild.
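A minimal sketch of such an override, assuming the glibc signature
(musl's differs slightly, and the actual message formatting may too):

#include <cstdio>
#include <exception>

extern "C" [[noreturn]] void __assert_fail(
    const char * assertion, const char * file, unsigned int line, const char * function)
{
    std::fprintf(stderr, "%s:%u: %s: Assertion `%s' failed.\n",
        file, line, function, assertion);
    // Handled by our custom terminate handler, which adds context.
    std::terminate();
}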
This commit adds two key fixes to http-binary-cache-store.cc to
properly support the new curl-based S3 implementation:
1. **Consistent cache key handling**: Use `getReference().render(withParams=false)`
for disk cache keys instead of `cacheUri.to_string()`. This ensures cache
keys are consistent with the S3 implementation and don't include query
parameters, which matches the behavior expected by Store::queryPathInfo()
lookups.
2. **S3 query parameter preservation**: When generating file transfer requests
for S3 URLs, preserve query parameters from the base URL (region, endpoint,
etc.) when the relative path doesn't have its own query parameters. This
ensures S3-specific configuration is propagated to all requests.
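A simplified, hypothetical sketch of the second point, using plain std
types rather than the store's actual URL type:

#include <map>
#include <string>

struct Url
{
    std::string scheme;
    std::string rest; // host + path, irrelevant for this sketch
    std::map<std::string, std::string> query;
};

// If the relative request has no query parameters of its own, inherit the
// base URL's parameters (region, endpoint, ...) so they reach every request.
Url makeS3RequestUrl(const Url & base, Url relative)
{
    if (base.scheme == "s3" && relative.query.empty())
        relative.query = base.query;
    return relative;
}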
I want to separate "policy" from "mechanism".
Now the logic to decide how to build (a policy choice, though with some
hard constraints) lives entirely in the derivation building goal, all in
one spot. Build hook, external builder, or local builder --- the choice
between all three is made in that one spot --- pure policy.
Now, if you want to use the external derivation builder, you simply
provide the `ExternalBuilder` you wish to use, and there is no
additional checking --- pure mechanism. It is the responsibility of the
caller to choose an external builder that works for the derivation in
question.
Also, `checkSystem()` was the only thing throwing `BuildError` from
`startBuilder`. Now that it is gone, we can remove the
`try...catch` around that.
Add a new S3BinaryCacheStore implementation that inherits from
HttpBinaryCacheStore.
The implementation is activated with NIX_WITH_CURL_S3, keeping the
existing NIX_WITH_S3_SUPPORT (AWS SDK) implementation unchanged.
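Conceptually (this is not the actual header, just a sketch of the shape):

#if NIX_WITH_CURL_S3
// The curl-based S3 store reuses the HTTP binary cache machinery and only
// layers S3-specific behaviour (auth, region/endpoint handling) on top.
class S3BinaryCacheStore : public HttpBinaryCacheStore
{
    /* S3-specific overrides go here */
};
#endif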
This code had two issues:
1. Not going through the SourceAccessor means that we can only work
with physical paths.
2. It did not actually check that the file exists (std::ifstream does not do
that by default; a missing file just leaves the stream in a failed state).
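The second point is easy to trip over; a missing file does not throw, the
stream simply fails:

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream f("/does/not/exist");
    if (!f) { // this explicit check is what the old code was missing
        std::cerr << "cannot open file\n";
        return 1;
    }
    // reading from a failed stream would otherwise silently yield nothing
}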
Most of the eval cache logic is flake-independent and lives in libexpr,
but the loading part is not.
`nix-flake` is the right component for this, as the eval cache
isn't exactly specific to the command line.
We now have merge queues for maintenance branches. We still build it
for master so that our installer keeps being updated. In the future,
this part could go into a new workflow instead.