You would be well-advised to read this company's Privacy Policy, especially sections 6-9. It is a wonder they can use the term "Complete privacy" in their website copy without any fear of liability.


Interesting how "private" is in quotes.

Maybe tech journalists, perhaps even their readers, are getting smarter.

So with this app you are letting one company, Everyme, collect and archive all of your private data (to be shared with whom, and used for what purpose?) instead of another, Facebook?

Well, getting away from Facebook is a start. Maybe your data won't be scraped by non-advertisers. But it's still going to be shared with third parties who will try to profit from it. Why is this necessary?

We still have a long way to go.

Peer to peer.

No third party.

Cut out the middleman.

That is the easiest, most efficient and most sensible way to exchange photos and have privacy. It's old, reliable technology that underlies the internet itself. And it's ready and waiting until people's privacy gets abused enough that they start demanding direct links to their friends, instead of always involving third parties, whose motives and deceptive tactics are becoming increasingly well known.


4. Provide a compressed archive of the data the scrapers want and make it available.

No one should have to scrape in the first place.

It's not 1993 anymore. Sites want Google and others to have their data. Turns out that allowing scraping produced something everyone agrees is valuable: a decent search engine. Sites are being designed to be scraped by a search engine bot. This is silly when you think about it. Just give them the data already.

There is too much unnecessary scraping going on. We could save a whole lot of energy by moving more toward a data dump standard.

Plenty of examples to follow. Wikimedia, StackExchange, Public Resource, Amazon's AWS suggestions for free data sources, etc.


One might argue that indexing from a data-dump will lead to search results that are only as up to date as the last dump.

In StackExchange's case, most of these are now a week or more old.

Maybe it's a good idea, but I'm not sure how many would want to dump their data on a daily basis to keep Google updated, when Google can quite easily crawl their sites as and when it needs to.


Have you considered rsync? Dropbox uses it. So lots of people who don't even know what rsync is are now using it. We could all be using it for much more than just Dropbox. And if you have ever used gzip on HTML, you know how well it compresses. The savings are quite substantial. Do you think most browsers are normally requesting compressed HTML?
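
If you want to see the gzip point for yourself, here's a rough Python sketch (the URL is a placeholder; any text-heavy page shows a similar ratio):

    import gzip
    import urllib.request

    # Fetch the raw HTML, then gzip it locally to see how much it shrinks.
    html = urllib.request.urlopen("https://example.com/").read()
    packed = gzip.compress(html)
    print(f"raw: {len(html)} bytes, gzipped: {len(packed)} bytes "
          f"({100 * len(packed) // len(html)}% of original)")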


It could be /data.zip like /robots.txt
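
As a purely hypothetical sketch of that convention (the path is the proposal above, not an existing standard), a crawler could probe for it the way it probes for /robots.txt, e.g. in Python:

    import urllib.request

    def has_data_dump(host: str) -> bool:
        # HEAD request against the proposed well-known path.
        req = urllib.request.Request(f"https://{host}/data.zip", method="HEAD")
        try:
            return urllib.request.urlopen(req, timeout=10).status == 200
        except Exception:
            return False

    print(has_data_dump("example.com"))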


Are they blocking by IP or just DNS? Are you guys typing just domain names, or are you trying the IP addresses too? Add a line to your HOSTS file and see what happens:

    74.113.233.128 vimeo.com www.vimeo.com


Several are claiming that HTTPS still works, which means it's not DNS-level or IP-level blocking. It may be IP+port level; otherwise the only thing left that I can think of is deep-packet inspection. A Linux user in India could easily test this with something like:

    curl -v -H "Host: thepiratebay.se" http://www.google.com/


I got "Access to this site has been blocked as per Court Orders" message when I tried this.

It's not surprising, because a year or so ago Airtel was using deep packet inspection to throttle torrents and file sharing sites. They already have the necessary equipment in place, so they can use it whenever they please.


Excellent, http://74.113.233.128/ works.


Isn't this the crux of the infamous "ad hominem" argument: attacking the person instead of their reasoning?

Even smart people sometimes make stupid arguments.

And good arguments can be made by anyone.

It's very difficult to be right 100% of the time.

It's also quite unusual to be wrong 100% of the time.

Evaluate the reasoning, not the author.

Stupid argument, not stupid person.

Look at what Alsup said to Boies.

Still, this is easier said than done.


All due respect to Google, but I give primary credit for this to Danny Hillis. He was gathering and processing the data for this project years before going public with it and before merging with Google. It's yet another Google acquisition that is probably going to be viewed by many Google users as another amazing Google innovation.

"Standing on the shoulders of giants"

Do they still use that slogan?


Google is more about "burying the giants in piles of money, and standing on that."


"Facebook can now use data on users to serve them ads when they are not on the Facebook website."

"This could generate billions."

Or it could prime even more users for a privacy-respecting alternative.


Google does the same thing. So what? If I'm going to have to look at ads, I'd rather they be relevant. I've actually found Facebook ads genuinely useful on a few occasions. I even applied for a job that I heard about on a Facebook ad once. What I hate is when my friend uses my computer to search for tents, then all I see is tent ads all over the internet for the next month.


lol. I hear ya. (But I think Google and Facebook's data are qualitatively a bit different.)

It's anyone's guess whether this online privacy stuff really means anything to most people. Obviously Google, Facebook, et al. are betting that it's not a big deal.

As users we can only speak for ourselves. This is because we generally don't watch others, looking over their shoulder as they use a computer to see exactly what they do... which raises an interesting question: Does that imply that we are recognising some sort of right to privacy? A lot of effort goes into trying to figure out how others use a computer. But unless it's a study of volunteers, it's not done by just standing behind them and watching.


This seems easily doable without involving a third party that needs to keep a record of every file sent, the time and the sender and recipient.

But at least this is a step in the right direction.

Now, what if she sends an encrypted file consisting of copyrighted materials? RIAA, MPAA, are you reading along?

What are we going to do to prevent that?


I wonder if this could be done as a desktop app that makes a peer-to-peer connection. The peer-to-peer part makes it so that no one has to spend any money relaying the files (as well as your security/privacy point), and that plus the fact that it's a desktop app means no one has to spend any money hosting servers or anything.


The answer is yes. There are multiple (non-VPN) ways to do it, all variations on a common theme. It's been done, multiple times over the past 10 years. But not much effort has gone into making these solutions user friendly and giving them the marketing push of something like this venture.

Obviously if you release something like this you run the risk of triggering the usual "illegal file sharing" issues.

But you absolutely do not need cloud storage to move large files. There are other ways to do it.


> But not much effort has gone into making these solutions user friendly and giving them the marketing push of something like this venture.

This is the key thing. The desktop app I'm imagining is user friendly, and probably at least somewhat pretty. I think the main technical issue is probably the fact that most people are NATed by their routers, so peer to peer is tricky, but I think with uPnP, you could get around that.
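
For what it's worth, a rough sketch of that uPnP step using the third-party miniupnpc Python bindings (this assumes the router actually exposes an IGD service; the port number and description are made up):

    import miniupnpc

    upnp = miniupnpc.UPnP()
    upnp.discoverdelay = 200        # ms to wait for routers to answer
    upnp.discover()                 # find UPnP devices on the LAN
    upnp.selectigd()                # pick the Internet Gateway Device

    # Ask the router to forward external port 52000 to this machine.
    upnp.addportmapping(52000, 'TCP', upnp.lanaddr, 52000,
                        'p2p file transfer (example)', '')
    print("public address:", upnp.externalipaddress(), "- port 52000 mapped")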

I was originally thinking of this as an open source app, but I suppose one could go the paid app route.


"I think the main technical issue is probably the fact that most people are NATed."

It's really not much of an issue as long as at least one peer has a reachable IP. I sometimes wonder how many people are under the impression that NATs are a showstopper. This is simply not true. The showstopper is probably the RIAA and MPAA.

Skype slipped under the radar because they branded themselves as VoIP, not file sharing. But it's really no different. It's peer to peer data exchange.

The NAT only poses a problem when neither peer has a reachable IP, i.e. both are behind NATs, in which case an external "supernode" is needed to introduce them. But that's easy to set up. And it does not need access to packet payloads.
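
To make that concrete, here's a very rough Python sketch of the idea (the rendezvous host and port are made up, and a real client would add retries, keepalives and a relay fallback):

    import socket

    RENDEZVOUS = ("rendezvous.example.net", 9999)   # hypothetical supernode

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"register", RENDEZVOUS)   # our NAT now maps a public port for us

    # The supernode replies with the other peer's public "ip:port";
    # it only ever sees addresses, never the file data itself.
    data, _ = sock.recvfrom(1024)
    ip, port = data.decode().split(":")
    peer = (ip, int(port))

    # Both sides do this at roughly the same time; once each NAT has seen
    # outbound traffic to the other's address, inbound packets get through.
    sock.sendto(b"hello, direct connection", peer)
    reply, addr = sock.recvfrom(1024)
    print("heard from", addr, ":", reply)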

You could do a paid app. But the code to accomplish the job is very simple and has been made public in various forms multiple times.

File sharing copyright concerns, monitoring communications to catch bad guys and all that stuff is what's holding this back, not lack of a solution for connecting through NATs.


> File sharing copyright concerns, monitoring communications to catch bad guys and all that stuff is what's holding this back, not lack of a solution for connecting through NATs.

Why? It's software transferring files between two parties, just like you can do with email and things like Skype and even AIM, and hell, even the service we're talking about. Is there precedent for the people who write such software having legal problems? I'm aware of cases like rapidshare and torrent sites, but for example, are the authors of bittorrent clients also targeted?

> I sometimes wonder how many people are under the impression that NATs are a showstopper. This is simply not true.

Hmm, I'll admit I don't fully understand this, but back in my torrenting days, you always had to forward a port to be reachable. How do you get around that?


If you want some interesting precedent, do some research on "Internet2" and the testimony of the RIAA to legislators.

Do you know what a LAN is? It is an evil invention to share copyrighted works. It must be stopped.

If you want a better understanding, read everything you can find on UDP, Ethernet, firewalls, NAT and encapsulation, in that order. I would suggest not wasting time trying to figure out "pre-packaged" peer to peer software solutions (i.e. all the different approaches people have taken, e.g., aeroFS, Kicksend or whatever). They often include far more complexity than you need to accomplish peer to peer. As such, they won't help you much in understanding the basics: how connections are made.


You are thinking of http://www.aerofs.com/


Hmm... Aerofs seems like dropbox, but peer-to-peer. Not quite the same as what I was thinking of, but very similar.


23andMe has some very close connections to Google.

An affinity for collecting personal information on others runs in the family. ;)


The way you learn to write well is by reading.

And of course it makes a difference what you read.

Learning how to do anything well takes practice.

But you will not become a good writer unless you read.

As for what you should read,

"Garbage in, garbage out."

What do you think are the effects on your writing from reading blogs like "codinghorror"?

If your aim is to be a better written communicator in business, then you should read business correspondence from good sources.


While reading helps to improve your understanding of what makes good writing, I feel writing regularly is more important. You need to learn to foster your own ideas through writing, being careful not to simply echo what others write.

"For ever reading, never to be read." - The Dunciad III


Right. But if you only ever write and rarely ever read, your writing will not be as good. Reading is essential to good writing. The quote you use is encouraging the reader to write, but it also presupposes that the reader is well read ("A lumberhouse of books in every head"). Maybe that's not a coincidence.

Is there a difference in encouraging a "scholiast wit" to write versus encouraging a "dimwitted blogger"?


Yes, I agree. Like all things it's a balance. In looking to refine skills everyone will require tuning in different areas. A programmer may be great at writing code but often fails to spot opportunities to re-use libraries/patterns. Reading others' code and technical articles may assist him/her in developing in that area. Likewise you may be able to tell when a writer lacks a 'lumberhouse of books'.

Being aware of why you are not developing a skill is probably key -- review from others helps to provide insight into potential underdeveloped areas.

