Yes, OP seems to think the blockchain somehow solves the (extremely subjective) issue of valuation, as opposed to simply "did this transaction occur between counterparties?"
No. I support the blockchain and think it will be revolutionary. However, Enron's issues weren't attributable to some discrepancy between credits and debits.
Accounting, surprisingly to folks who've never studied it and think it's simple arithmetic, requires judgment, based on rules that are sometimes grey and subjective. When you add in legal complexities, it becomes... well, more complex and therefore open to manipulation.
For instance, Enron was able to hide many liabilities by marking them (in hindsight) below market, since... there was no market for said liabilities. So it was impossible to value them. Furthermore, it hid other debt in obscure subsidiaries that were only tangentially, legally and fiscally speaking, connected to Enron.
The blockchain confirms, with better accuracy than existing systems, "X transaction occurred between Y and Z partners." It does not confirm "X's assets were appropriately valued and marked as such to a non-existent market, and X is most definitely a legal subsidiary and is overseen by the fiduciary duty of A Holdings, Corp."
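To make that concrete, here's a minimal sketch in Python (the counterparties and figures are hypothetical) of what a hash-chained ledger actually attests to: the integrity of the record, not the honesty of the numbers inside it.

```python
import hashlib, json

# Toy ledger entry. The chain can prove this exact record exists unaltered;
# it says nothing about whether "recorded_value" was marked fairly.
tx = {
    "from": "MegaCorp",                  # hypothetical counterparties
    "to": "OffBalanceSheet SPE No. 7",
    "asset": "broadband capacity swap",
    "recorded_value": 33_000_000,        # self-reported; no market to check it
}
prev_hash = "0" * 64                     # hash of the previous block

block_hash = hashlib.sha256(
    (prev_hash + json.dumps(tx, sort_keys=True)).encode()
).hexdigest()

# Anyone can re-derive the hash and verify the record is intact...
assert block_hash == hashlib.sha256(
    (prev_hash + json.dumps(tx, sort_keys=True)).encode()
).hexdigest()
# ...but no amount of hashing validates the valuation itself.
```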
The blockchain has the potential to be a far more efficient settlement system. But saying it prevents Enron or Madoff is simply not true at all.
The Automated Clearing House (ACH) system, our current means of settling payments, is pretty secure. The bigger threats are someone getting ahold of your credit card or checking account info, which is the same threat level whenever you want to convert your Coinbase balance back into your local currency (or someone getting ahold of your private key).
The blockchain, rather, represents a better clearinghouse merely by being a decentralized platform for accounting for payments instead of a centralized one.
OP says this prevents accounting fraud. Not one iota.
Which only further demonstrates the gap between a 4-yr Comp Sci degree and software development.
Don't get me wrong: I'm a university snob and think software engineers should understand things all the way down to the machine level, but the snobbery in reaction to JS* being taught in 4-yr curricula (a reaction I don't doubt is widely held) is saddening.
*JS wouldn't be my first choice for an actual university course on real-world coding; but I do resent the chasm between applicable and theoretical software engineering skills... though this problem isn't confined to Comp Sci.
University CS courses tend to be heavy on varyingly useful theory, light on practical knowledge. Which, fine, that's like most university degrees even in the most vocational of fields.
So while this dude's "degree" might be substandard (MIT-OCW vs. MIT-MIT), I'd argue their education – at least in terms of applicability and self-direction – is equal, if not superior.
I'm ceaselessly amused by the folks fearing Amazon Echos or Google Homes as though they're constantly listening microphones piping everything to the NSA. As though we don't already carry devices on our person everywhere we go that have mics (smartphones).
Also, while we trade our privacy to Google (or Amazon, or whomever) in exchange for customization and convenience (a social contract I'm generally happy to sign off on), those companies have even more incentive to keep our data safe.
Google is one of the most valuable companies in the world precisely because it, and only it, has the AdSense knowledge (and whatever other knowledge Google collects about me) to target me.
Insurance companies go to Google and say "show our ads to people Googling insurance companies" – that's how Google makes money. It's not as though Google says "here you go State Farm, here's everyone who's been looking up car insurance." Its business model is based on proprietary customer knowledge. It can't give away this data; it's incentivized to limit it to its own ad-targeting tech.
Are there still problems with this model? Sure. If the government decides to subpoena Google on me, they'll turn over my Gmail. But is it a hell of a lot easier to use Google services (e.g. Google Maps knowing where I generally go and what the traffic is like) than to use, say, DuckDuckGo on a VPN (let alone Tor or Tails)? For me, and I'd assume most people, yes.
EDIT: I would also point out that we've long been facing the privacy vs. convenience issue. It used to be that merely signing up for a landline meant getting your phone # listed in the White Pages. Paying utility bills makes your name and home address a matter of public record, ditto real estate transactions involving your name/address – all public records, unless you choose to hire attorneys to set up shell corps for the sake of privacy. Not so expensive to do this now in the age of LegalZoom, etc., but this used to cost quite the pretty penny.
> those companies have even more incentive to keep our data safe.
Then why aren't they doing it, and why aren't they informing us when breaches happen?
I absolutely think providing that data should be voluntary. If you want to send it to them, go ahead! But in many ways it's not. I can't remove my data from those companies, nor can I control what they do with it, and that's a serious problem.
You really don't get it, do you? Most of us aren't worried about Amazon and Google per se, but rather about what happens to our data when (a) they are compromised and (b) government surveillance increases in scope.
There have been countless incidents (and these will only increase in frequency) where people's sensitive information has been stolen and used for blackmail and identity fraud. There is also the increasing use of private data by governments, for example in deciding visa entry or immigration cases. The use cases for criminals and governments are only going to increase in scope and sophistication, and will be applied not just to data collected in the future but to data already collected.
These are all legitimate situations which are completely unprecedented and only possible because of the increased data collection policies of sites like Google or Facebook.
The problem isn't really privacy, it's privacy asymmetry.
Would Facebook agree to make all of their employee web searches public? Would Google? How about all phone traffic? Emails?
Thought experiment: imagine a world where everyone can see what everyone else is doing all of the time.
Assume absolutely no exceptions or restrictions. You can eavesdrop on anyone in the world. Anyone can eavesdrop on you.
How many "I am fine with no privacy" advocates would be happy with this?
It's an extreme thought experiment to highlight how asymmetric the current model is. In the current model privacy is becoming a privilege that is available more and more selectively.
To eliminate the privilege, you either need user controls and permissions for specific profitable use cases, or you need full openness - which I think most people would find terrifying, for all kinds of reasons.
I would have far more respect for no-privacy advocates if they made public a daily ISO of the contents of their computer.
Would they really hold the same position once their identity had been stolen, their credit cards maxed out, and everything they have said taken out of context and made available to their friends, family, boss, and the TSA?
> It's an extreme thought experiment to highlight how asymmetric the current model is. In the current model privacy is becoming a privilege that is available more and more selectively.
Slightly tangential to this topic, your comment reminded me of this short talk titled "Your smartphone is a civil rights issue" by Christopher Soghoian. [1] It truly is a great privilege to be able to control one's privacy in today's world (to whatever extent it is possible).
> I'm ceaselessly amused by the folks fearing Amazon Echos or Google Homes as though they're constantly listening microphones sent to the NSA. As though we don't already carry devices on our person everywhere we go that have mics (smartphones).
> ...
> Are there still problems with this model? Sure. If the government decides to subpoena Google on me, they'll turn over my Gmail.
It seems to me like you've read about the Snowden revelations but don't see any issues with warrantless tapping and mass surveillance. As I said in another comment, privacy is not just about you or me. It's about all humans and the rights that we have granted ourselves in many countries around the world.
This is interesting because while Apple's (comparative) dedication to privacy is endearing, it's a long-term existential threat.
Google knows all about me and its assistant is, usually, great. Amazon has troves of data on what I buy, and I get to yell at Alexa to order more TP as soon as I see we're on the last roll.
Apple knows much less about me and, while I'm still an Apple fan and am tied to iPhones/Macs thanks to iMessage, Siri stinks as a result.
If voice assistants based on machine learning (specifically, personalized voice assistants) are the next big thing, Apple's privacy ethos will separate it from its major tech competitors – either in a great way, or a very negative way.
Totally agree. Lots of commentators here on HN love to get angry about companies like Google collecting so much of their data, which is perfectly justified. However, you have to be willing to accept the consequences of a privacy policy like the one Apple somehow still carries out stringently. The consequences are that any service or product that relies on data, ML, AI, etc. isn't going to work well coming from a privacy-conscious company like Apple, at least not nearly as well as the products from companies like Google that don't value privacy as much. You can't complain about Google taking your data but also complain about Siri being a load of garbage; you can't have it both ways (and I see lots of people here trying to have it both ways).
It will be interesting to see if Apple holds its ground on privacy with the increase of AI/ML driven features and products. Coupled with Apple's closed culture which discourages open research (although they have been improving this), their concern for privacy could put them well behind other companies in this space. Depending on how you look at privacy vs. product, this could be a good thing or bad thing.
Aside from recommendation engines and targeted advertising, what does this deep knowledge of everywhere I've been and everything I've done do to dramatically improve the experience of using the AI?
I think it makes Apple slower at making their assistant good at parsing what you're saying and returning an answer, but that's a problem that benefits from crunching reams of data in general, not so much from knowing everything about you personally.
For one thing, a virtual assistant that works even when I don't have an active internet connection seems like a perk in itself, no? The Siri approach is closer to being there than the Alexa/Google approach.
I think many Apple users would be happy if only Siri could answer factual questions objectively and follow a conversation, content with a minimum of data collection: current location, name of spouse for messaging, etc.
While some may prefer answers informed by knowledge about you, or about people like you, as in "Please recommend me a great movie," Siri currently can't even handle questions like "How do I mix a White Russian?" Google Assistant gives me six steps with a photo and a follow-up suggestion, "What about a Screwdriver?" Siri gets utterly wrecked on knowledge-based questions.
I think the main problem is not privacy-related, but knowledge-base-related. Google is building upon a fricking huge search engine via a knowledge graph, and the sky is the limit for how well an AI can do. Apple is building on what? A shut-down Ping social network, Apple Music listening habits, Wolfram Alpha hopes if all else fails, and a sparse Bing Search API if that fails too?
>While some may prefer a knowledge about you or you via people like you, as in, "Please recommend me a great movie". . .
Not to mention that Apple positioned itself as a brand for people who "Think Different." I don't think people who got taken in by that messaging would be attracted to the prospect of services that can more efficiently pigeonhole them.
I thought Apple figured out a way to create unique IDs and profiles that can't be traced back to the individual person? So they can still find people like you for analytical purposes to tune their models.
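Apple has publicly said it uses differential privacy for this kind of aggregate collection. The classic building block there is randomized response; here's a minimal sketch in Python (illustrative only, not Apple's actual mechanism): each individual report is deniable, yet the population-level rate is recoverable.

```python
import random

def randomized_response(truth: bool, p_truth: float = 0.75) -> bool:
    """Report the true bit with probability p_truth, otherwise a coin flip.
    Any single report is deniable, so no individual is exposed."""
    if random.random() < p_truth:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Invert the noise: observed = p_truth * true + (1 - p_truth) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 100k users, 30% of whom actually use some feature.
users = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(u) for u in users]
print(round(estimate_true_rate(reports), 3))  # ~0.30, recovered in aggregate
```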
Speech recognition and natural language understanding.
If you watched the HomePod reveal you'd know that it sends your speech to the cloud for understanding. Which seems like a pretty clear admission that this can't all be done on-device easily.
The simplest example would be speech recognition. With the exception of keyword-spotting systems like "Hey Siri" or "OK Google," it's currently practically impossible to implement large-vocabulary speech recognition on-device.
Google will happily let you download trained models of <100MB to your phone via Google Translate, which support not only offline recognition of a remarkably wide vocabulary, but will also then translate that input into another language (also offline).
I believe the restriction to keyword spotting has more to do with the always-listening and/or power-efficient nature of the task than with any infeasibility of offline speech recognition.
I'm pretty sure large-vocabulary local speech recognition on devices with less computing power than modern smartphones has existed (products for continuous speech recognition on PC go back to at least Dragon NaturallySpeaking in 1997).
I don't see it as humblebragging (any of us could create a Cook contact / ReCode readers likely know Walt Mossberg's relationship to Apple) but, worse, as a faulty premise.
Odds are, if you have them in your contact database, you already know them; you're not going to want Siri to give you their Wikipedia bio.
Siri has myriad faults, and thankfully someone of Mossberg's stature might push Apple to address them, but this is not one of them.
If you are asking specifically for _who_ somebody is, it should tell you who that person is - not give the contact card, because that card does not tell you who somebody is.
At least, that's what I would expect if I asked who somebody is.
I'm presupposing the contact card has their job position/title, and therefore still answers the question – also presupposing the original (odd) premise of wondering who someone already saved in your phone is.
The Venn overlap between the set of "contacts in your phone" and "people with Wikipedia bios" is likely rather small. Hence why I think it's a faulty premise to complain about Siri defaulting to contact card when these two sets do intersect.
That's not the point either. The point is that if I asked you, a human, "Who is Tim Cook" and you replied "123-555-1212" or "tim@apple.com" I would be a little dumbfounded.
If I'm asking the question - maybe a friend is nearby who doesn't know them and I'm too lazy to explain - then I want to know who they are, not what their contact info is.
I could imagine some scenarios when I ask a human assistant "who is Joe Bloggs" and it would be quite reasonable for them to answer "oh you've met him, he even gave you his business card".
Sure, but only after telling you who Joe Bloggs is, because that's what you asked; you didn't ask "Have I met Joe Bloggs?" An assistant (human or virtual) that doesn't actually help isn't going to keep their job for long.
I don't think anyone's saying that the question "who is x" ever means "give me the contact information for x". They are just arguing as to whether or not Siri's counter-intuitive behavior may be a reasonable response, given that it's a computer and not a human. This is not a hard question for humans.
It's a very hard question for humans. Back in the 70s AI had already worked out that conversations take place in "frames" which include a ton of implied state. It turns out that state is essential to make sense of human conversations, because words and constructs have different meanings in different frames.
Even simple questions like "Who is..." have many different interpretations. A human will understand the context. An AI won't, because you can't derive the context from the words themselves. It's a function of social setting, physical setting, relationship, previous conversations, and so on.
At the moment conversational interfaces are more like a Bash shell with a speech recogniser on the front. The shell needs a precisely formed command and has almost no concept of state or context at all. (I think Siri actually has some, but not much.)
So it's completely unrealistic to expect CIs to be able to do this today. It will only be possible when NLP gets a whole lot more sophisticated and starts tracking context and state - although even that will still be a hard problem, because social state is defined as much by location, physical surroundings, time of day, and custom as by the words being used.
Except the contact card has meta-information like: company they work at, where they're located, etc. (including custom fields you can create).
A contact card could definitely be used for that. If one exists, Siri should give the info from it and then wait to see if the user also wants external information (from Wikipedia or elsewhere).
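Something like this resolution order, as a rough Python sketch (the function and field names are hypothetical, not Siri's actual internals):

```python
def answer_who_is(name: str, contacts: dict, knowledge_base: dict) -> str:
    """Prefer what the user already knows (their contact card),
    then offer external knowledge as an optional follow-up."""
    card = contacts.get(name)
    bio = knowledge_base.get(name)  # e.g. a one-line public bio
    if card:
        summary = f"{name} is {card['title']} at {card['company']}."
        if bio:
            summary += " Want the public bio as well?"
        return summary
    if bio:
        return bio
    return f"I couldn't find anyone named {name}."

contacts = {"Tim Cook": {"title": "CEO", "company": "Apple"}}
kb = {"Tim Cook": "Tim Cook is an American executive, CEO of Apple since 2011."}
print(answer_who_is("Tim Cook", contacts, kb))
```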
I don't think one needs to necessarily choose a side. I'm of the opinion that what Gawker did is wrong, but surreptitiously funding lawsuits to bankrupt the company is a perversion of the justice system.
The tension between privacy and speech is a discussion that needs to be had. Litigating it in a Pinellas County courtroom is an odd way to go about having that debate.
Thiel has now demonstrated that those with deep enough pockets can use the courts to exact revenge in a roundabout way. Yes, Gawker is tawdry, but one can't help pondering a chilling effect here on more worthy stories. Were I a journalist, I'd certainly think twice now before pursuing an investigative piece that might offend a billionaire, given that my own financial livelihood could become fair game.
Consider the way things were before Thiel showed up. Gawker destroyed lives[1] with impunity because their victims didn't have deep enough pockets to fight back in court. Thiel is just leveling the playing field.
You're free to propose that we outlaw the funding of lawsuits by non-parties, as was done historically (google 'maintenance' and 'champerty' if you want more history on that), but you're going to shut down the ACLU and company when you do so. So it's a matter of which is more important to you, really.
> I'm of the opinion that what Gawker did is wrong, but surreptitiously funding lawsuits to bankrupt the company is a perversion of the justice system.
Why is funding a case related to a cause you support a perversion of the justice system? Every time an article about the EFF funding another lawsuit pops up do you cringe in horror?
I mentioned this to someone else who made the same point (albeit with the ACLU instead of the EFF): Hm, that's an interesting point that I'll have to noodle over. Thanks for pointing it out.