I don't know - on the other side of the coin, imagine if no data was collected at all. It would be near impossible for companies to do any sort of troubleshooting or QA.
Taking it a step further, it'd also be significantly more difficult for them to understand their audience and users... which could actually make the experience worse for those users.
Even in the more nefarious case of ad-tech, I'd personally prefer more relevant ads than generic non-relevant ones.
I'm not saying all data collection is fine and well-intentioned, but I also don't think it's necessarily as zero-sum as people think.
I do think that when data gets licensed or shared w/ 3rd parties, it should be clearer how it gets used.
>Even in the more nefarious case of ad-tech, I'd personally prefer more relevant ads than generic non-relevant ones.
Personally, I'm the opposite. Every time I see an obviously targeted ad based on something I've done, watched, said in a text conversation, or written in my emails, it creeps me the hell out.
> I don't know - on the other side of the coin imagine if no data was collected. It would be near impossible for companies to do any sort of troubleshooting or QA.
Good. Companies got by in the pre-internet era by running supervised QA tests with customers in person, or by using special devices modified to allow surveillance. There's no reason this can't be brought back.
Nokia phones used to have a feature called "AutoSMS" which would silently send back crash reports and various usage metrics. Of course, this feature was never included in production releases; it was only enabled for field/user testing. After product launch, feedback was gathered via surveys and crash reports from service centers.
Obviously phones have always had the capability to send covert data, but back then it would have been utterly unacceptable. How times & business models have changed :(
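Purely as a sketch of that kind of build-flag gating (I don't know AutoSMS's internals; every name below is hypothetical, and it's Python just for brevity):

    # Hypothetical sketch of build-flag-gated crash reporting, loosely
    # modeled on the AutoSMS description above. FIELD_TEST_BUILD would be
    # set by the build system; production builds never enable the sender.
    import traceback

    FIELD_TEST_BUILD = False  # True only in field/user-test builds

    def report_crash(exc):
        if not FIELD_TEST_BUILD:
            return  # production builds: no silent phoning home
        payload = "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__))
        send_to_test_lab(payload)  # hypothetical transport (SMS back then)

    def send_to_test_lab(payload):
        # Stub so the sketch is runnable; a real field-test build would
        # queue an SMS or upload to the test lab here.
        print("would send crash report:", payload[:80])

The point being: the gating lives in the build itself, so shipped firmware physically can't phone home, rather than merely promising not to.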
There were still mechanical failures all over in the safety devices. We're drilling much deeper and under much higher pressures now than at the time of Deepwater Horizon.
To imply that deepwater oil production is safe is crazy. Storms happen. Underwater mudslides happen. Pandemics happen. This is why the majors contract out to the offshore drillers: they want nothing to do with it.
That's not exactly accurate. Transocean was the contract driller on Deepwater Horizon, but BP still took the brunt of the liability and the PR backlash. The contract structure actually hasn't changed much in the past 20 years.
^This.. I don't see why people think hoarding cash when capital is cheap is a bad strategy. That view totally ignores the fact that the ease of raising capital, and its cost, are time-dependent. If you had a war chest of cash in 2008/2009, that would have been insanely valuable.
On the other hand, that's timing the market. But as the common wisdom goes, time in the market beats timing the market. Or does it? Do these companies know something that we don't? Is it different when you have piles of money?
This is really good advice. I'd say it generalizes to learning most engineering challenges:
Pick a problem you're interested in and solve it from top to bottom, tweaking all components as you move forward.
Isn't something being open source the opposite of vendor lock-in? Couldn't you just use Databricks and then pretty easily migrate over to your own managed Spark cluster if you so desired?
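As a rough sketch of why that migration seems plausible: the application-level Spark API is the same either way, so a job like this (paths and names made up) should run on Databricks or on a self-managed cluster, with mostly provisioning and job submission changing:

    # Sketch: the same PySpark job runs on Databricks or a self-managed
    # Spark cluster; what differs is how the cluster is provisioned and
    # how the job is submitted, not the application code itself.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("portable-job").getOrCreate()

    df = spark.read.parquet("s3://my-bucket/events/")  # hypothetical path
    daily = df.groupBy("event_date").count()
    daily.write.mode("overwrite").parquet("s3://my-bucket/daily_counts/")

Any lock-in risk would live in the proprietary extras around the edges (notebooks, workflow glue, platform-specific APIs), not in core Spark code like the above.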
Honestly, if you rationally break it down, starting a business rarely makes financial sense once you factor in the risk. If you want to maximize guaranteed lifetime income, having a job w/ a high-demand skill (one that can remain in high demand) is probably the way to go.
Then the question becomes whether to study the efficacy of harm-reduction methods, including bans, age limits, wide educational campaigns, quality monitoring, or other measures.
Well, we have an entire profession of SRE/Systems Eng roles out there that are mostly based on limiting the impact of bad code. Some of the places I've worked with the worst code/stacks had the best safety nets. I spent a while shaking my head wondering how this shit ran without an outage for so long, until I realized that there was a lot of code and process involved in keeping the dumpster fire in the dumpster.
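For a toy illustration of what I mean by a safety net (names and thresholds made up; real setups use proper circuit-breaker libraries and process-level isolation):

    # Crude retry + circuit-breaker wrapper: containment for a flaky
    # dependency, so one dumpster fire doesn't take the whole site down.
    import time

    class SafetyNet:
        def __init__(self, retries=3, backoff=0.5, trip_after=5):
            self.retries = retries
            self.backoff = backoff
            self.trip_after = trip_after  # consecutive failures before tripping
            self.failures = 0

        def call(self, fn, *args, **kwargs):
            if self.failures >= self.trip_after:
                raise RuntimeError("circuit open: dependency disabled")
            for attempt in range(self.retries):
                try:
                    result = fn(*args, **kwargs)
                    self.failures = 0  # success resets the breaker
                    return result
                except Exception:
                    self.failures += 1
                    time.sleep(self.backoff * (2 ** attempt))  # exponential backoff
            raise RuntimeError("retries exhausted")

The code behind the net can be as messy as it likes; the net decides how far a failure propagates.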
Which do you prefer? Some of the best stacks and code I've worked in wound up with stability issues rooted in a long series of changes that weren't simple to rework. By contrast, I've worked in messy code and complex stacks that gave great feedback. In the end, the answer is I want both, but I actually sort of prefer "messy" with well-thought-out safety nets to beautiful code and elegant design with none.
One thing that stands out from both types of stacks I've worked with is that, most of the time, doing things simply the first time, without putting in a lot of work guessing what complications will arise later, tends to produce a stack with higher uptime, even if the code gets messy later.
There are certainly some things to plan ahead for, but if you start with something complex, it will never get simple again. If you start with something simple, it will get more complex as time goes by, but there's a chance the scaling problems you anticipated present a little differently and have a simple fix.
I like to say 'Simple Scales' in design reviews, and I aim to only add complexity when absolutely necessary.
Ah, but that's a lot more big corps being stupid this year than last year. If it were two or three more, that would be normal variation. We're now at something more like 7 or 8 more. The industry didn't get that much stupider in the span of a year.
This is more of an opinion; I'm sure there are plenty of cases where the founders didn't know each other directly but were introduced through common interests, and it worked out just fine (my current company included). Yeah, you probably shouldn't jump right into it, but I don't think it's a bad idea. Also, if you're sticking to your current network, I imagine the chances of finding someone with complementary skills are lower (i.e., I'm an engineer and most of my network is engineers; what I'd really need is more of a business/sales person).