
Search engine for the Interplanetary Filesystem. Sniffs the DHT gossip and indexes file and directory hashes.

Metadata and contents are extracted using ipfs-tika; searching is done using OpenSearch; queueing is done using RabbitMQ. The crawler is implemented in Go; the API and frontend are built using Node.js.

The ipfs-search command consists of two components: the crawler and the sniffer. The sniffer extracts hashes from the gossip between nodes. The crawler extracts data from the hashes and indexes them.
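The sniffer-feeds-crawler flow described above can be sketched in Go. This is an illustrative model only: it stands in for RabbitMQ with a plain Go channel, and the `Hash`, `sniff`, and `crawl` names are made up for the example, not part of the actual ipfs-search API.

```go
package main

import "fmt"

// Hash is a stand-in for an IPFS content hash.
type Hash string

// sniff models the sniffer: it emits hashes "overheard" on the
// DHT gossip onto the queue, then closes the queue when done.
func sniff(gossip []Hash, queue chan<- Hash) {
	for _, h := range gossip {
		queue <- h
	}
	close(queue)
}

// crawl models the crawler: it consumes hashes from the queue
// and records them in the index (a map standing in for OpenSearch).
func crawl(queue <-chan Hash, index map[Hash]bool) {
	for h := range queue {
		index[h] = true
	}
}

func main() {
	queue := make(chan Hash, 8)
	index := make(map[Hash]bool)

	go sniff([]Hash{"QmExampleHashOne", "QmExampleHashTwo"}, queue)
	crawl(queue, index)

	fmt.Println(len(index)) // prints 2
}
```

In the real system the queue is durable (RabbitMQ), so the sniffer and crawler can run as separate processes and restart independently.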


Documentation is hosted on Read the Docs, based on files contained in the docs folder. In addition, there are extensive Go docs for the internal API as well as SwaggerHub OpenAPI documentation for the REST API.


Please find us on our Matrix channel #ipfs-search:matrix.org.


ipfs-search provides daily snapshots of all indexed data. To learn more about downloading and restoring snapshots, please refer to the relevant section in our documentation.
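Since the indexed data lives in OpenSearch, restoring a snapshot generally means registering a snapshot repository and then calling the restore API. The commands below use OpenSearch's standard snapshot endpoints, but the repository name, filesystem path, and snapshot name are placeholders; consult the documentation for the actual snapshot location and restore procedure.

```sh
# Register a snapshot repository (name and path are placeholders).
curl -X PUT "localhost:9200/_snapshot/ipfs_search_snapshots" \
  -H 'Content-Type: application/json' \
  -d '{"type": "fs", "settings": {"location": "/mnt/snapshots"}}'

# Restore a snapshot into the cluster (snapshot name is a placeholder).
curl -X POST "localhost:9200/_snapshot/ipfs_search_snapshots/<snapshot-name>/_restore"
```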

Contributors wanted

Building a search engine like this takes a considerable amount of resources (money and TLC). If you are able to help out with either, do reach out (see the contact section in this file).

Please read the Contributing.md file before contributing.


For discussing and suggesting features, look at the issues.

External dependencies

  • Go 1.19

  • OpenSearch 2.3.x

  • RabbitMQ / AMQP server

  • NodeJS 9.x

  • IPFS 0.7

  • Redis

Internal dependencies


$ go get ./...
$ make



The most convenient way to run the crawler is through Docker. Simply run:

docker-compose up

This will start the crawler, the sniffer and all of their dependencies. Hashes can also be queued for crawling manually by running ipfs-search add <hash> from within the running container. For example:

docker-compose exec ipfs-crawler ipfs-search add QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv

Ansible deployment

Automated deployment can be done on any (virtual) Ubuntu 16.04 machine. The full production stack is automated and can be found in its own repository.


This project exists thanks to all the people who contribute.


Thank you to all our backers! 🙏 [Become a backer]


ipfs-search is supported by NLNet through the EU’s Next Generation Internet (NGI0) programme.

RedPencil is supporting the hosting of ipfs-search.com.

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]