I tried bowling like this when I was a teen, trying to get massive spin on the ball. I never got to a point where I could be consistent with spin and went back to using my thumb. Seeing a pro do it (and be successful) might have encouraged me to keep trying it.
Why not let the market sort it out? Maybe these startups need to take a down round, or be acquired for a fraction of their value?
Maybe the VCs should step in and make bridge loans available if they want to keep their investments? If you believe in your portfolio, why not help them weather the storm? Or are you afraid to be exposed to additional risk? Should the taxpayers take on that risk instead?
Indeed, VCs bridge startups all the time. There are other mechanisms available as well -- hedge funds are willing to pay 80 cents on the dollar for these deposits (indicating that they expect to recover more than that).
I believe the worst thing you can be is a hypocrite. We can disagree about value systems, but if you don’t practice what you preach then I have no reason to respect you.
I don’t care if Twitter decides to allow this kind of information or not, but don’t claim safety and then disregard the safety of others. Don’t claim free speech and then silence speech you don’t like.
Elon is clearly doing what’s best for Elon, and that’s fine too. But I won’t tolerate him pretending he’s doing anything other than putting himself first. And I’ll make my own choices based on that. If I think my interests align with his, I should stick with him. But practically none of our interests align. We don’t share any common problems.
Elon doesn’t think he needs me or anyone else (and he has so much wealth he really doesn’t). I wouldn’t count myself lucky to be in the same lifeboat as him, given the chance he decides I’m dead weight and tosses me overboard.
The lack of empathy in the comments here is worrisome. The manager was clearly failing to manage effectively. Blaming the engineer for not managing up with a manager who didn’t want to listen doesn’t make sense.
This scenario is my biggest fear as a people leader - that ICs are being mismanaged and I won’t be able to see it or intervene before it is too late.
The choice of $ and $$ as delimiters seems rushed, to be frank. Although I’m not a big fan of mixing LaTeX into Markdown, I understand that choice (alternatively you could go with an AsciiMath-like syntax, which mixes far better with Markdown IMO). But $ and $$ make little sense other than that LaTeX does it this way. It would have been easy, e.g., to use $$ for inline math and $$$ + newline or ```math for block math, and that would have gotten rid of many of the warts of mixing Markdown and LaTeX.
In my opinion the familiarity of $ and $$ sacrifices a lot for not much benefit.
Given that the main point of using LaTeX in Markdown is users’ familiarity, $ and $$ are actually the ONLY proper choice. But yes, it leads to problems, which is why I would not use Markdown in the first place, but some Markdown-inspired format that mixes better with $ and $$.
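The clash is easy to see with a naive scanner (hypothetical code, just to illustrate why a bare $ delimiter is ambiguous in ordinary prose):

```python
import re

# A naive inline-math matcher in the style many Markdown math extensions use:
# anything between two $ signs on one line is treated as math.
INLINE_MATH = re.compile(r"\$([^$\n]+)\$")

text = "The fee rose from $5 to $10, while $x^2$ is actual math."

# The dollar amounts produce a false positive: "5 to " is captured as "math"
# alongside the genuine expression "x^2".
print(INLINE_MATH.findall(text))  # ['5 to ', 'x^2']
```

This is exactly why real implementations pile on extra rules (no space after the opening $, no digit right after the closing $, etc.), each of which adds its own corner cases.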
This is why I prefer AsciiDoc. It's consistent because there's only one implementation, and it's less ambiguous and more predictable. It takes a bit longer to learn all the syntax, but it's not difficult, especially if you only use the subset of features that Markdown supports, since AsciiDoc covers most of Markdown's syntax as well. I also much prefer its flexibility with tables compared to Markdown. I just wish there were more parsers/converters beyond the main Ruby one and the transpiled JS one, although I know work is being done on implementations in other languages.
As an example for math/equations, inline math is stem:[sqrt(4)], which defaults to AsciiMath but can be changed with a page attribute. To specify the notation inline, LaTeX is latexmath:[\sqrt{2}] and AsciiMath is asciimath:[sqrt(2)].
For blocks, you replace stem with either latexmath or asciimath to specify the notation.
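For illustration, a block version might look roughly like this (a sketch based on Asciidoctor's STEM support; worth double-checking the exact delimiters against the docs):

```asciidoc
[stem]
++++
sqrt(4) = 2
++++

[latexmath]
++++
\sqrt{2}
++++
```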
True, it's more verbose, but I'd take increased verbosity and standardization over ambiguity and inconsistency, since every Markdown parser and renderer translates things a little differently (which is why there are things like Babelmark[0]). That verbosity also provides consistent, more powerful features like multi-line table cells, table cell spanning, table nesting, sidebars, admonitions, footnotes, tables of contents, image embedding, cross-doc references, LaTeX-like includes, etc., all of which follow a similar inline and block syntax and are clear at a glance.
It's certainly not perfect, but I much prefer its flexibility and consistency to the dozens of Markdown implementations that all do things a little differently, and to needing to drop down into HTML whenever I want something just outside Markdown's capabilities.
I've been a managing editor for scientific journals for a number of years, and I can tell that -- while $ is still popular -- (almost) nobody uses $$ anymore. So I wouldn't say this is "where people already are".
No. This is a very simple and reasonably efficient image format. It is notable for its simplicity and straightforward implementation, as well as its speed compared with PNG.
Video compression is typically not about "take each frame, and make it as small as possible". Video compression revolves around cleverly making use of deltas and segmenting the image so that you can reuse as much data as possible from previous frames/keyframes.
2D image compression is missing a dimension. Much of the savings to be had in a video compression algorithm are going to be based upon the similarity between frames (i.e. across time) rather than within a given frame. Therefore, an algorithm that is designed only for the 2D image is not going to deliver the goods when applied to video. An example would be animated GIF. It's terrible.
I didn't read the spec, but I assume that if a single-picture format doesn't have the concept of making use of reference pictures (so previous or future frames), it's leaving _a lot_ of compression potential on the table.
You can of course just store a sequence of individually compressed frames, as you can with PNG, TARGA, or JPEG, but that's usually not how you'd want to distribute video, due to the huge file sizes.
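The payoff of inter-frame references can be sketched in a few lines (a toy illustration, not a real codec, using zlib as a stand-in for an entropy coder):

```python
import random
import zlib

random.seed(0)

# Two synthetic grayscale 'frames': the second differs from the first
# in only a handful of pixels, mimicking small motion between frames.
WIDTH, HEIGHT = 64, 64
frame1 = bytes(random.randrange(256) for _ in range(WIDTH * HEIGHT))
frame2 = bytearray(frame1)
for _ in range(20):  # change 20 random pixels
    frame2[random.randrange(len(frame2))] = random.randrange(256)
frame2 = bytes(frame2)

# Intra-only: compress each frame independently, as a still-image codec would.
intra = len(zlib.compress(frame1)) + len(zlib.compress(frame2))

# Inter: compress the first frame, then only the per-pixel delta to the second.
delta = bytes((b - a) % 256 for a, b in zip(frame1, frame2))
inter = len(zlib.compress(frame1)) + len(zlib.compress(delta))

# The delta stream is mostly zeros, so it compresses to almost nothing.
print(intra, inter)
```

Real codecs go much further (motion-compensated blocks, multiple reference frames), but the core idea is the same: exploit similarity across time, not just within a frame.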
I'm no expert on this, but I assume it's because every frame would be a keyframe? I.e. the format doesn't support encoding transitions from frame to frame. Decent video support might still be possible if you define video as pixel-perfect animation, i.e. animated GIF, but that would of course not compete with traditional video.
Video is massive. Uncompressed 1080p video at 60fps is about 3 Gb/s (1920x1080 pixels x 24 bits x 60 frames), compared to ~5 Mb/s for compressed video. QOI is great for pictures, where 3-5x compression is about as good as you can do, but video compression is about getting close to 600x compression.
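As a sanity check on the scale, the raw bitrate arithmetic (assuming 24-bit RGB, no chroma subsampling):

```python
# Uncompressed 1080p at 60fps, 24 bits per pixel.
width, height, fps, bits_per_px = 1920, 1080, 60, 24
raw_bps = width * height * fps * bits_per_px  # bits per second

print(raw_bps / 1e9)  # ~2.99 Gb/s of raw pixel data

# Against a typical ~5 Mb/s compressed stream:
compressed_bps = 5e6
print(raw_bps / compressed_bps)  # ~597x compression
```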
I’m not sure I understand the hostility here. Isn’t the point of the web to be open standards that reach consensus?