Search engine for the Interplanetary Filesystem. Sniffs the DHT gossip and indexes file and directory hashes.
Metadata and contents are extracted using ipfs-tika, searching is done using Elasticsearch 7, and queueing is done using RabbitMQ. The crawler is implemented in Go; the API and frontend are built using Node.js.
The ipfs-search command consists of two components: the crawler and the sniffer. The sniffer extracts hashes from the gossip between nodes. The crawler extracts data from the hashes and indexes them.
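To illustrate the flow between the two components, here is a minimal, self-contained sketch in Go. The real project passes hashes between sniffer and crawler through RabbitMQ; a buffered channel stands in for the queue here, and the function names and the map-based "index" are illustrative, not the project's actual API.

```go
package main

import "fmt"

// sniff emulates the sniffer: it extracts hashes from observed gossip
// and places them on the queue for the crawler.
func sniff(gossip []string, queue chan<- string) {
	for _, h := range gossip {
		queue <- h
	}
	close(queue)
}

// crawl emulates the crawler: it consumes queued hashes and records them
// as indexed. The real crawler fetches each object and extracts metadata
// and contents before indexing.
func crawl(queue <-chan string, index map[string]bool) {
	for h := range queue {
		index[h] = true
	}
}

func main() {
	queue := make(chan string, 4)
	index := make(map[string]bool)
	go sniff([]string{"QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv"}, queue)
	crawl(queue, index)
	fmt.Println(len(index)) // prints 1
}
```

The queue decouples the two components: the sniffer can keep extracting hashes at gossip speed while crawling proceeds at its own pace.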
Documentation is hosted on Read the Docs, based on files contained in the docs folder. In addition, there’s extensive Go docs for the internal API as well as SwaggerHub OpenAPI documentation for the REST API.
ipfs-search provides daily snapshots of all indexed data using Elasticsearch snapshots. To learn more about downloading and restoring snapshots, please refer to the relevant section in our documentation.
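As a rough illustration, restoring a downloaded snapshot uses the standard Elasticsearch snapshot API. The repository name, filesystem path, and snapshot name below are placeholders; consult the documentation for the actual values:

```sh
# Register the directory holding the downloaded snapshot as a repository
# (the location must be listed in path.repo in elasticsearch.yml):
curl -X PUT 'localhost:9200/_snapshot/ipfs_search' \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/backups/ipfs-search"}}'

# Restore a snapshot from that repository:
curl -X POST 'localhost:9200/_snapshot/ipfs_search/snapshot_name/_restore'
```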
Building a search engine like this takes a considerable amount of resources (money and TLC). If you are able to help out with either of them, mail us at email@example.com or find us at #ipfssearch on Freenode (or #ipfs-search:chat.weho.st on Matrix).
Please read the Contributing.md file before contributing.
For discussing and suggesting features, look at the issues.
A RabbitMQ / AMQP server is required for queueing. To build:

$ go get ./...
$ make
The most convenient way to run the crawler is through Docker. Simply run:

docker-compose up -d

This will start the crawler, the sniffer and all their dependencies. Hashes can also be queued for crawling manually by running ipfs-search add <hash> from within the running container. For example:

docker-compose exec ipfs-crawler ipfs-search add QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv
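For orientation, a Docker Compose setup for this stack would wire the crawler to its dependencies roughly as sketched below. This is not the project's actual docker-compose.yml (which ships with the repository); service names and images are assumptions for illustration only:

```yaml
version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management   # queueing between sniffer and crawler
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
  ipfs:
    image: ipfs/go-ipfs            # IPFS node the sniffer listens to
  ipfs-crawler:
    build: .
    depends_on:
      - rabbitmq
      - elasticsearch
      - ipfs
```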
Automated deployment can be done on any (virtual) Ubuntu 16.04 machine. The full production stack is automated and can be found in its own repository.
Thank you to all our backers! 🙏 [Become a backer]
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]