I would love to see a benchmark of hyper vs ipfs. Anecdotally, hyper is faster and more user-friendly.
Hyper suffers from a branding problem (it's a direct evolution of dat; changing the name didn't help). But it has a strong case for being the best technical solution.
(The problem being: BitTorrent is extremely effective, but torrents are immutable. How do you make a swarm protocol that supports streams and other datasets that change over time?)
Imo (having built several projects on each of them) both have issues from the perspective of anyone wanting to depend on them for anything with real users or customers. But dat/hyper is the only one worth bothering with.
IPFS at this point is a write-off for me. It seemed like it was built up as a project that courted the decentralisation/p2p crowd, then did an ICO off the hype and essentially evaporated. The tech is reasonable, the abstractions and tooling were a really interesting approach that enabled a ton of powerful new things, and they did really well at making it easy to get started and be productive. But I'd never build anything on it again because I fundamentally don't trust them now.
Dat/hyper etc. is a great project and ecosystem. Technologically it's incredibly impressive. The project and ecosystem are themselves decentralised (which is a profound demonstration that the people in this community are true to their stated values). Unfortunately this means it suffers from two major, related problems (which can be framed as selling points, depending on your perspective):
- To build something with hyper you compose small modules. That's an excellent strategy if you can quickly discover which modules to use and how to compose them. There are thousands of tiny, useful (often remarkably elegantly written) modules relevant to hyper that can be composed to do really amazing things - and it's impossible to find them quickly or learn how to compose them except by having hundreds of conversations with people in the community. So it's really not possible to be productive with it unless you're willing to just immerse yourself in it. If you want to build on it as a dependency without participating in the community - good luck.
- Once you've got a project that depends on the ecosystem, it's almost impossible to keep it working and up to date. To find out the current state of the ecosystem (which libraries are the current ones, which dependencies to use for what, what has replaced some previous dependency), you again have to have a lot of conversations, or very actively follow other people's. As a dependency it demands continued investment, and that's probably only worth it (or even possible) if you have a huge amount of spare energy and time (or money to pay other people) to put into it.
In summary: with IPFS the problem is that it's vaporware, and I fundamentally don't believe the project is safe to build important decentralised projects on. With dat/hyper the problem is the opposite - it's a completely decentralised community, and it's very labour-intensive to onboard and then keep up; the meta layers are missing. Neither is currently the right choice if you want a reliable decentralised stack for something critical. But hyper is likely to become it, and it's a great project.