We need a lister for SourceForge, in order to be able to archive what's there.
Sourceforge uses the Apache Allura forge under the hood to host open source projects.
Unfortunately, the associated REST API does not offer the possibility to list all hosted projects. A ticket was created on the subject a couple of years ago, but no action has been taken so far.
It is nonetheless possible to do full and incremental listing, using sitemaps and the REST API to query project-by-project information. See the specification blueprint by @zack in #735 (closed) below; it was designed in discussion with a SourceForge technical contact.
Below is some information I managed to gather in order to fulfill that task.
**Listing projects on SourceForge**
Two solutions could be used.
The first is to scrape the SourceForge directory at https://sourceforge.net/directory/. This is the solution used by Archive Team; the source code of their scraper (in Ruby) can be found on GitHub: https://github.com/marcroberts/archiveteam-sourceforge-lister. However, this does not seem reliable, as not all pages of the SourceForge directory can be browsed. Currently, there are 18831 pages of SourceForge projects, but trying to browse pages numbered 1000 or greater returns a 500 error (for instance, https://sourceforge.net/directory/?sort=name&page=2000).
The second, as pointed out by //pombreda// on IRC, is to use rsync mirrors of the files made available for download (typically release tarballs) in SourceForge projects: rsync://netix.dl.sourceforge.net/sfmir/, rsync://rsync.mirrorservice.org/downloads.sourceforge.net/. That solution seems better, as it will allow us to list all relevant project names on SourceForge (thus discarding empty projects and those without any releases). Please find below a sample output when using rsync to list projects whose names start with gl.
```
antoine@antoine-X550CC:~$ rsync --list-only rsync://rsync.mirrorservice.org/downloads.sourceforge.net/g/gl/
----------------------------------------------------------------------------
Welcome to the University of Kent's UK Mirror Service.
More information can be found at our web site: http://www.mirrorservice.org/
Please send comments or questions to help@mirrorservice.org.
----------------------------------------------------------------------------
drwxr-xr-x         20,480 2017/07/13 02:27:00 .
lrwxrwxrwx             19 2010/01/05 07:08:57 index-sf.html
drwxr-xr-x          4,096 2016/08/25 07:30:46 gl-117
drwxr-xr-x          4,096 2016/08/25 07:30:46 glabels
drwxr-xr-x          4,096 2016/08/25 07:30:46 gladewin32
drwxr-xr-x          4,096 2017/06/10 02:25:52 gladys
drwxr-xr-x          4,096 2016/08/25 07:30:55 glass-theme
drwxr-xr-x          4,096 2016/08/25 07:30:57 glattony
drwxr-xr-x          4,096 2016/08/25 07:30:59 glaunch
drwxr-xr-x          4,096 2016/08/25 07:31:35 glc-lib
drwxr-xr-x          4,096 2016/08/25 07:31:37 glc-player
drwxr-xr-x          4,096 2016/08/25 07:32:34 glcdtools
drwxr-xr-x          4,096 2016/08/25 07:32:38 glchess
drwxr-xr-x          4,096 2016/08/25 07:32:46 gldirect
drwxr-xr-x          4,096 2016/08/25 07:32:49 gle
drwxr-xr-x          4,096 2016/08/25 07:33:36 glesius
drwxr-xr-x          4,096 2017/06/11 02:28:24 glest
drwxr-xr-x          4,096 2016/08/25 07:33:53 glew
...
```

**Ingesting SourceForge projects into the SWH archive**

Once a list of relevant projects is obtained, some preprocessing has to be done before a project can be ingested into the SWH archive. From a SourceForge project name, its associated metadata can easily be obtained using the public Allura REST API (Allura being the software forge used on SourceForge, see https://allura.apache.org/). For instance, to get the metadata about the glew project: https://sourceforge.net/rest/p/glew.
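As a quick sketch of that lookup (the function names here are hypothetical, not part of any existing lister; only the https://sourceforge.net/rest/p/{project} endpoint comes from the above), fetching a project's metadata needs nothing but the standard library:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen


def project_rest_url(project: str) -> str:
    """Build the Allura REST endpoint URL for a SourceForge project name."""
    return f"https://sourceforge.net/rest/p/{quote(project)}"


def fetch_project_metadata(project: str) -> dict:
    """Fetch and decode the JSON metadata for a project (needs network access)."""
    with urlopen(project_rest_url(project)) as resp:
        return json.load(resp)


# Example (requires network access), e.g. for the glew project:
# metadata = fetch_project_metadata("glew")
```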
The URL of the VCS repository (which can be CVS, SVN, hg, or git) used by the project can be reconstructed from the retrieved metadata.

I found a project on GitHub, released into the public domain, dedicated to retrieving the metadata of open source projects hosted on SourceForge: https://github.com/chpwssn/sourceforge-items/. In particular, the following Python script could be reused by us: https://github.com/chpwssn/sourceforge-items/blob/master/rsync-disco/apiscrape.py.
The scripts and data at https://github.com/chpwssn/sourceforge-items/ look to be exactly what is required: that person (chpwssn) has already identified over 350,000 SVN, Mercurial, and Git repositories on SourceForge, with associated rsync commands for downloading them.
I started looking into this task myself with simple scripts that scraped the directory, but this looks like it's already super close to completion (or essentially already complete, but someone needs to create the SWH bits).
from there, extract the list of project "tools"; they include tools that correspond to VCS, with names like "git", "svn", "cvs"
associated with each VCS tool there is a URL, from which we can build clone/checkout commands (or, equivalently, origin URLs for a full lister). The URL pattern (to be verified) should be {type}.code.sf.net/p/{project}/{mount_point}, where {type} is the VCS type (e.g., svn, git)
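A minimal sketch of that URL reconstruction, assuming the (to be verified) pattern above with an https:// scheme, and assuming the REST response exposes a "tools" list with "name" and "mount_point" fields; the function and sample data are mine, not existing lister code:

```python
# VCS tool names we expect to find in Allura project metadata (assumption).
VCS_NAMES = {"git", "svn", "hg", "cvs", "bzr"}


def vcs_origin_urls(project: str, metadata: dict) -> list:
    """Return one origin URL per VCS tool found in the project metadata,
    following the (to be verified) {type}.code.sf.net/p/{project}/{mount_point}
    pattern."""
    urls = []
    for tool in metadata.get("tools", []):
        if tool.get("name") in VCS_NAMES:
            urls.append(
                f"https://{tool['name']}.code.sf.net/p/{project}/{tool['mount_point']}"
            )
    return urls


# Hypothetical sample of the relevant part of a REST response:
sample = {
    "tools": [
        {"name": "git", "mount_point": "code"},
        {"name": "wiki", "mount_point": "wiki"},  # not a VCS, skipped
    ]
}
print(vcs_origin_urls("glew", sample))
# → ['https://git.code.sf.net/p/glew/code']
```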
I've put a prototype implementation of this (up to the listing of all tool types and URLs included, but with no integration with the swh-lister API) in the snippet repo.
I've run it once, successfully listing all of SourceForge in ~4 hours with 8 parallel threads to query the REST endpoint.
As of that run I've listed 480'711 projects and 402'908 VCS "tools" (see $829 for details), with the following breakdown by VCS type:
182'858 git
145'225 svn
44'493 cvs (read-only)
29'148 hg
1'184 bzr
Other improvements needed are:
//incremental listing//: this is possible by exploiting the <lastmod> value in sitemaps. We have been told by SourceForge that the last modification timestamp is unique per project and that //it is// updated when the VCS is updated. It is therefore possible to be smart and do incremental listing that only lists repositories updated since the last lister run
there are some //subprojects// on SourceForge, although we have been told by SourceForge that they are very rare. We should consider including them too. An example is computerastherapy/ict-framework (note how the "project" here is computerastherapy/ict-framework)
in order to play nice with SourceForge while crawling we should:
set the crawler //user-agent// to something identifying it as coming from Software Heritage
make sure the crawler IP address(es) have a //reverse DNS// entry (ideally pointing to a Software Heritage hostname too)
keep //parallelism// at 8 concurrent workers maximum
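The //incremental listing// idea above can be sketched as follows, assuming standard sitemap XML with one <loc> and <lastmod> per entry; the helper name and the sample sitemap are hypothetical:

```python
import xml.etree.ElementTree as ET
from datetime import date

# Standard sitemap XML namespace.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}


def updated_since(sitemap_xml: str, last_run: date) -> list:
    """Return <loc> URLs whose <lastmod> is strictly after the last lister run."""
    root = ET.fromstring(sitemap_xml)
    urls = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", namespaces=NS)
        # Compare on the date part only; full <lastmod> may carry a time zone.
        if loc and lastmod and date.fromisoformat(lastmod[:10]) > last_run:
            urls.append(loc)
    return urls


# Hypothetical two-entry sitemap fragment:
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://sourceforge.net/p/glew/</loc><lastmod>2021-05-30</lastmod></url>
  <url><loc>https://sourceforge.net/p/glest/</loc><lastmod>2021-01-02</lastmod></url>
</urlset>"""
print(updated_since(sitemap, date(2021, 5, 1)))
# → ['https://sourceforge.net/p/glew/']
```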
It looks like there are projects outside of the /p/ namespace. Just looking at the very first sitemap, I found an /adobe/ namespace (https://sourceforge.net/rest/adobe/manjobi), which implies that we should also consider namespaces other than /p/ when listing.
Note also that a lot of entries are duplicated across the /projects/ and /p/ namespaces, even though both point to the same thing.
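To deduplicate such entries, one option is to normalize URLs to a single namespace before comparison. This is a hypothetical sketch, not existing lister code, assuming project URLs of the form https://sourceforge.net/<namespace>/<project>/:

```python
from typing import Optional
from urllib.parse import urlparse


def project_key(url: str) -> Optional[str]:
    """Return a 'namespace/project' key for a project URL, folding the
    /projects/ namespace into /p/ since both point to the same project."""
    parts = urlparse(url).path.strip("/").split("/")
    if len(parts) < 2 or not parts[0]:
        return None
    namespace = "p" if parts[0] == "projects" else parts[0]
    return f"{namespace}/{parts[1]}"


urls = [
    "https://sourceforge.net/projects/glew/",
    "https://sourceforge.net/p/glew/",          # duplicate of the entry above
    "https://sourceforge.net/adobe/manjobi/",   # non-/p/ namespace is kept
]
print(sorted({key for u in urls if (key := project_key(u))}))
# → ['adobe/manjobi', 'p/glew']
```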
New stats:
317973 distinct projects in the sitemaps (including subprojects)
360 subprojects
356 projects are outside of the normal /p/ namespace, including subprojects
```
Jun 01 08:04:30 saatchi swh[2685155]: INFO:swh.scheduler.celery_backend.runner:Grabbed 1 tasks list-sourceforge-full
```

[3] worker:

```
Jun 01 07:40:21 worker11 python3[1407475]: [2021-06-01 07:40:21,310: INFO/MainProcess] lister@worker11.internal.softwareheritage.org ready.
Jun 01 08:05:24 worker11 python3[1407475]: [2021-06-01 08:05:24,006: INFO/MainProcess] Received task: swh.lister.sourceforge.tasks.FullSourceForgeLister[e29c07ff-b01f-4739-a820-1d326e76ad63]
Jun 01 08:05:27 worker11 python3[1407482]: [2021-06-01 08:05:27,962: WARNING/ForkPoolWorker-4] Project 'https://sourceforge.net/rest/adobe/wiki' does not have any tools
Jun 01 08:05:28 worker11 python3[1407482]: [2021-06-01 08:05:28,338: WARNING/ForkPoolWorker-4] Project 'https://sourceforge.net/rest/adobe/blog' does not have any tools
```