Software rot: Saving science’s digital legacy
Research relies on fragile software. Experts discuss the crisis of “software rot,” and the role of open source and artificial intelligence.
Data librarian Fanny Sébire and Software Heritage Ambassador Bertrand Néron detail their collaboration at the Institut Pasteur. They explain how their complementary skills drive a cultural shift, moving research software from secondary artifact to verifiable scientific output through standardized dual archiving.
Join the movement shaping CodeMeta v4.0. We’re defining the standards for software metadata to improve discovery, trust, and interoperability across the global research ecosystem.
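For readers who have not worked with CodeMeta before, a record is just a small JSON-LD file that lives alongside the code. The sketch below is a minimal example: since v4.0 is still being defined, it uses the published 3.0 context, and the field names are standard schema.org / CodeMeta terms while the project details are invented for illustration.

```python
import json

# Minimal CodeMeta record (sketch). The 3.0 context is used here because the
# v4.0 vocabulary is still under discussion; all values below are placeholders.
codemeta = {
    "@context": "https://w3id.org/codemeta/3.0",
    "@type": "SoftwareSourceCode",
    "name": "my-analysis-tool",                      # hypothetical project name
    "version": "1.2.0",
    "license": "https://spdx.org/licenses/MIT",
    "codeRepository": "https://example.org/my-analysis-tool.git",
    "programmingLanguage": "Python",
    "author": [{"@type": "Person", "givenName": "Ada", "familyName": "Lovelace"}],
}

with open("codemeta.json", "w", encoding="utf-8") as fh:
    json.dump(codemeta, fh, indent=2)
```

Keeping this file in the repository root is what lets aggregators and archives pick up the metadata automatically, which is exactly the discovery and interoperability problem the v4.0 effort is tackling.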
CTO Thomas Aynaud on the SWHID: How the new ISO standard defeats fragile dependencies and guarantees code integrity.
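To make the integrity claim concrete: a content-level SWHID is an intrinsic identifier, derived from the bytes themselves using the same scheme as a git blob hash, so anyone can recompute it and detect tampering. The sketch below shows only that content-level computation; directory, revision, release, and snapshot identifiers build on it through the Merkle structure described in the SWHID specification.

```python
import hashlib

def swhid_for_content(data: bytes) -> str:
    """Compute the content-level SWHID: sha1 over the git-style header
    "blob <length>\\0" followed by the raw bytes."""
    header = b"blob %d\x00" % len(data)
    digest = hashlib.sha1(header + data).hexdigest()
    return f"swh:1:cnt:{digest}"

if __name__ == "__main__":
    # The identifier depends only on the bytes, not on where they are hosted,
    # which is what makes references to archived code tamper-evident.
    print(swhid_for_content(b"hello world\n"))
    # -> swh:1:cnt:3b18e512dba79e4c8300dd08aeb37f8e728b8dad
```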
How Paris-Saclay University, through its Data, Algorithm, and Code Administrator (ADAC) Cédric Mercier, manages institutional research data and code. Read about their strategy and new Software Heritage sponsorship.
The Netherlands eScience Center’s Research Software Directory (RSD) adopted the Software Heritage Identifier (SWHID) to ensure source code is archived for the long term.
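Once a SWHID is recorded in a catalogue like the RSD, anyone can check that the referenced object really is in the archive. A minimal sketch, assuming only the public Software Heritage REST API and its /api/1/resolve/ endpoint; the identifier used here is the illustrative content SWHID from the earlier sketch, and any valid SWHID works the same way.

```python
import json
import urllib.request

API = "https://archive.softwareheritage.org/api/1"

def resolve_swhid(swhid: str) -> dict:
    """Ask the Software Heritage archive what a SWHID points to.
    Returns the JSON description of the object; raises HTTPError
    if the identifier is malformed or the object is not archived."""
    with urllib.request.urlopen(f"{API}/resolve/{swhid}/") as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = resolve_swhid("swh:1:cnt:3b18e512dba79e4c8300dd08aeb37f8e728b8dad")
    print(json.dumps(info, indent=2))
```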
A skills shortage limits the future of tech in Europe. In a white paper from the Eclipse Foundation, Roberto Di Cosmo argues that open source can offer a fix.
Clément Pieyre, Bibliothèque Diderot de Lyon, uses the Olympic rings to symbolize the indispensable role of university libraries in Open Science.
Bastien Guerry, the former French Chief Free Software Officer, discusses his shift to Software Heritage. His core insight: “Policies without products are empty, products without policies are blind.” Read the full interview on building the bridge between policy and infrastructure.
CodeCommons is testing the limits of swh-fuse on large-scale clusters. Preliminary experiments ran on the 10,000-core Kraken cluster, where the system reached a peak of 30,000 file reads per second and sustained 8,000 file writes per second.
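To give a feel for what such figures measure, here is a toy read-throughput probe over a mounted filesystem. It is not the CodeCommons benchmark: the mountpoint path is an assumption (adjust it to wherever swh-fuse, or any filesystem under test, is mounted), and the script simply counts how many files it can read per second.

```python
import time
from pathlib import Path

# Hypothetical mountpoint; replace with the directory where the archive
# filesystem is actually mounted on your machine.
MOUNTPOINT = Path("/mnt/swhfs")

def read_rate(root: Path, limit: int = 10_000) -> float:
    """Read up to `limit` files under `root` and return files read per second."""
    start = time.perf_counter()
    count = 0
    for path in root.rglob("*"):
        if count >= limit:
            break
        if path.is_file():
            path.read_bytes()
            count += 1
    elapsed = time.perf_counter() - start
    return count / elapsed if elapsed > 0 else 0.0

if __name__ == "__main__":
    print(f"{read_rate(MOUNTPOINT):.0f} files read per second")
```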