2025-38: A week in conda-forge

To continue giving you a glimpse into my conda-forge work, we’re continuing with the second week of reporting. This week was a bit leaner on my activities here, as I spent my time preparing my PyData Paris 2025 talk.

Still, the talk prompted one PR on conda-forge: I migrated the polars-feedstock to make it cargo-auditable. This has been useful as an example in my talk, as polars statically links a huge number of Rust dependencies. Thus, it is a good case to highlight the existence of Phantom Dependencies (dependencies present in your environment that your package manager doesn’t list or isn’t aware of).

Later this week, I continued working on the jaxlib v0.7.1 PR. Here, we needed to move to clang as the default compiler on all platforms, which meant adjusting the bazel-toolchain feedstock to handle this as well. Furthermore, to work around some build issues, we had to update the bundled gloo version and the abseil-cpp package that we pull in as a conda dependency. Once these dependencies were updated, we got a linker error with abseil-cpp on Linux: a missing symbol named _ZN4absl12lts_202505124CordC1INSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEETnNSt9enable_ifIXsr3std7is_sameIT_S8_EE5valueEiE4typeELi0EEEOSA_. The symbol actually existed in the source code, but was encoded in the binary as _ZN4absl12lts_202505124CordC1INSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEELi0EEEOT_. The noticeable difference is that the first contains the condition std::is_same, whereas the latter doesn’t. As this only happens on Linux, the obvious culprit is the difference in compilers: abseil-cpp itself is built with conda-forge’s standard compiler on Linux (GCC), whereas we moved jaxlib to clang. Clang’s issue tracker has https://github.com/llvm/llvm-project/issues/85656, which reveals that adding -fclang-abi-compat=17 as a compiler/linker flag solves the issue by falling back to the old symbol-naming behaviour.

As part of the move to clang, we replaced not only GCC but also nvcc for compiling CUDA device code. This sadly led to a compilation error for __glibcxx_requires_subscript, which appeared in device code but had no corresponding device implementation:

In file included from /home/ubuntu/.pixi/envs/conda-build/conda-bld/jaxlib_1758038148038/_build_env/bin/../lib/gcc/x86_64-conda-linux-gnu/15.1.0/../../../gcc/x86_64-conda-linux-gnu/15.1.0/include/c++/functional:67:
/home/ubuntu/.pixi/envs/conda-build/conda-bld/jaxlib_1758038148038/_build_env/bin/../lib/gcc/x86_64-conda-linux-gnu/15.1.0/../../../gcc/x86_64-conda-linux-gnu/15.1.0/include/c++/array:210:2: error: reference to __host__ function '__glibcxx_assert_fail' in __host__ __device__ function
  210 |         __glibcxx_requires_subscript(__n);
      |         ^
/home/ubuntu/.pixi/envs/conda-build/conda-bld/jaxlib_1758038148038/_build_env/bin/../lib/gcc/x86_64-conda-linux-gnu/15.1.0/../../../gcc/x86_64-conda-linux-gnu/15.1.0/include/c++/debug/assertions.h:39:3: note: expanded from macro '__glibcxx_requires_subscript'
   39 |   __glibcxx_assert(_N < this->size())
      |   ^
/home/ubuntu/.pixi/envs/conda-build/conda-bld/jaxlib_1758038148038/_build_env/bin/../lib/gcc/x86_64-conda-linux-gnu/15.1.0/../../../gcc/x86_64-conda-linux-gnu/15.1.0/include/c++//x86_64-conda-linux-gnu/bits/c++config.h:658:12: note: expanded from macro '__glibcxx_assert'
  658 |       std::__glibcxx_assert_fail();                                     \
      |            ^

In GCC’s and Clang’s issue trackers we find similar-looking reports: gcc#115740, llvm#95183, and llvm#49727. The fix for this has been merged in llvm#136133. I tried to apply that fix in clangdev-feedstock#383, but sadly that was insufficient, as raised in clangdev-feedstock#384. The correct fix then landed in clangdev-feedstock#385.

While my plan is to make more progress on the Python 3.14 migration in the coming weeks, this week the work focused solely on kicking off the builds for Python 3.14.0rc3.

Sadly, I ran into a bit of a mess with the AWS C stack this week. I had to open aws-sdk-cpp-feedstock#970 manually, as aws-sdk-cpp-feedstock#969 had been closed but the bot did not retry it. Instead, the bot issued PRs to arrow-cpp-feedstock directly (see arrow-cpp-feedstock#1854). This led to a messy situation with the PRs in the arrow-cpp repository. At least [main] Rebuild for aws-c* (Sep ‘25) seemed to have worked fine, but all arrow versions that were already in the archive (i.e. all except the latest) had download issues, and the PRs needed to be restarted. Restarting some failed jobs in [20.x] Rebuild for aws-c* (Sep ‘25), [19.x] Rebuild for aws-c* (Sep ‘25), and [18.x] Rebuild for aws-c* (Sep ‘25) made them pass again. Afterwards, we needed to rebase the PRs (main, 20.x, 19.x, 18.x) for the aws-crt-cpp 0.34.3 migration.

This week, I also engaged in numerous small tasks:

And finally, the list of pull requests that I only reviewed and merged, without any further interaction: