Since the beginning of 2024, the demand for the content created by the Wikimedia volunteer community – especially for the 144 million images, videos, and other files on Wikimedia Commons – has grow…
They can also crawl this publicly accessible social media source for their data sets.
Crawling would be silly. They can simply set up a Lemmy node and subscribe to every other server. An ActivityPub crawler would be much more efficient, since it wouldn't accidentally re-crawl things that haven't changed; it can just read the ActivityPub updates instead.
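For anyone curious, the subscription side is just JSON over HTTP. A minimal sketch of the idea, assuming a hypothetical community at lemmy.example (some servers additionally require HTTP-signature-signed fetches, which this skips):

```python
# Sketch: instead of re-crawling pages, fetch an ActivityPub actor's
# outbox and read only the activities it lists.
# The instance/community URL below is a hypothetical example.
import requests

ACCEPT = "application/activity+json"

def fetch_outbox(actor_url: str) -> dict:
    """Fetch an actor's outbox (an OrderedCollection of activities)."""
    actor = requests.get(actor_url, headers={"Accept": ACCEPT}).json()
    return requests.get(actor["outbox"], headers={"Accept": ACCEPT}).json()

outbox = fetch_outbox("https://lemmy.example/c/technology")
print(outbox.get("totalItems"))
```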
Sure, but we're in the comments section of an article about Wikipedia being crawled, which is silly because they could just download a snapshot of Wikipedia.
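Wikimedia publishes full database dumps at dumps.wikimedia.org, so "download a snapshot" really is one streamed GET. A minimal sketch (the exact filename is an example; check the dumps index for current ones):

```python
# Sketch: grab a snapshot instead of crawling, by streaming one of the
# public database dumps. Filename is an example, not guaranteed current.
import requests

URL = ("https://dumps.wikimedia.org/enwiki/latest/"
       "enwiki-latest-pages-articles.xml.bz2")

with requests.get(URL, stream=True) as resp:
    resp.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as f:
        # Write the dump to disk in 1 MiB chunks to keep memory flat.
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```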