Maybe I'm old, but I still think a repository should be a repository: sitting on a server somewhere, receiving clean commits with well-written messages, running CI. And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit. Those are just different sets of operations. There's no reason a local copy should have the exact same implementation as a repository; git made a wrong turn in this, let's just admit it.
> And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit.
That's exactly what Git is. You have your own local copy that you can mess about with, and it's only when you sync with the remote that anyone else sees it.
> There's no reason a local copy should have the exact same implementation as a repository, git made a wrong turn in this.
Who is forcing you to keep a local copy in the exact same configuration as upstream? Nothing at all stops you from applying your style to your repos. You're saying that not being opinionated about project structure is a "wrong turn"? I don't think so.
I think most "ground truth" open-source repos do end up operating like this. They're not letting randos push branches willy-nilly and kick off CI. Contributors fork it, work on their own branches, open a PR upstream (hence that name: PULL Request), reviews happen, nice clean commits get merged to the upstream repository that is just being a repository on a server somewhere running CI.
I agree, but I think git got the distributed part (i.e. all nodes the same) right. I also think what you say doesn't take it far enough.
I think it should be possible to assign different instances of the repository different "roles" and have the tooling assist with that. For example: a "clean" instance that will only ever contain fully working commits and can be used in conjunction with production and debugging, and various "local" instances - per feature, per developer, or per something else - that might be duplicated across any number of devices.
You can DIY this using raw git with tags, a bit of overhead, and discipline. Or the GitHub "pull request" model facilitates it well. But either you're doing extra work or you're relying on an external service. It would be nice if it were natively supported instead.
This might seem silly and unnecessary, but consider how you handle security-sensitive branches, or company-internal (proprietary) versus FOSS releases. In the latter case, consider the difficulty of collaborating with the community across the divide.
> I still think a repository should be a repository: sitting on a server somewhere, receiving clean commits with well written messages, running CI. And a local copy should be a local copy: sitting on my machine, allowing me to make changes willy-nilly, and then clean them up for review and commit
This is one way to see things and work and git supports that workflow. Higher-level tooling tailored for this view (like GitHub) is plentiful.
> There's no reason a local copy should have the exact same implementation as a repository
...Except to also support the many git users who are different from you and work in different contexts. Bending git's API to your preferences would make it less useful, harder to use, or not even suitable at all for many others.
> git made a wrong turn in this, let's just admit it.
Nope. I prefer my VCS decentralized and flexible, thank you very much. SVN and Perforce are still there for you.
Besides, it's objectively wrong to call it "a wrong turn" if you consider the context in which git was born and got early traction: sharing patches over e-mail. That is what git was built for. Had it been built your way (first-class concepts coupled to p2p email), your workflow would most likely not be supported and GitHub would not exist.
If you are really as old as you imply, you are showing your lack of history more than your age.
Consider a second-price auction: everyone submits bids, the highest bidder gets the resource, and pays the price submitted by the second-highest bidder. This is incentive-compatible: everyone is incentivized to submit the maximum amount they're willing to pay, no more, no less. Does it matter if the resource is being sold by its original owner or a scalper? No. Who gets the resource and how much they pay depends only on which people wanted the resource and how much. The only loser from the scalper's existence is the original owner, because they sold to the scalper too cheaply.
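The mechanism is simple enough to sketch in a few lines (a toy illustration; the bidder names and amounts are made up). Note that nothing in it depends on who the seller is:

```python
def second_price_auction(bids):
    """Highest bidder wins, but pays the second-highest bid.

    bids: dict mapping bidder name -> bid amount (assumes >= 2 bidders).
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up]

# Whether the seller is the original owner or a scalper, the
# allocation and the price come out the same:
winner, price = second_price_auction({"alice": 120, "bob": 90, "carol": 150})
print(winner, price)  # carol 120
```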
If there are villains in this situation, they aren't those who extract market price: a scarce resource was always going to be sold at market price. If the price is set lower, people will line up in queues and so on, to "burn" an amount of patience and time equal to the price difference in their eyes. Except in a queue all participants end up spending this "burn", so it's strictly more wasteful for society than a market where only the winner pays.
No, the real villains are those who engineer the market so the resource is scarce to begin with. In the case of housing: not landlords, but the people who vote for laws restricting housing construction. In other words, most homeowners. That's the unpleasant conclusion people try to ignore when they blame landlords, price fixing, and so on.
That’s fine as a “second order” rebuttal, but you’re leaving out _third_ order effects which are where all the action is in terms of the unique horribleness of real estate rental.
The world is full of goods that share many of the nasty features that the real estate rental market has. For example, it’s not hard to find goods where:
- The value partially derives from the limited supply
- The supply is artificially limited by forces that the market cannot correct for (either because law prevents the entrance of new competitors, or because would-be competitors are colluding in a cartel that deliberately restricts it)
For instance, taxi medallions and diamonds meet these criteria.
What makes rental housing special is other qualities:
- The vast majority of a rental property’s value derives from its proximity to publicly funded resources which the seller did not create themselves. If your tax dollars pay for a new park, the value of that park is vacuumed up by the landlords near the park. (This is what it IS to be an economic rent… thus the name.)
- Demand at the low end is extremely inelastic. People have to live somewhere if their life is entangled with that city. Compare with diamonds or taxi medallions, which you can opt out of.
- In theory, most landlord-tenant relationships operate on a year-long cadence because it mixes flexibility with predictability. The renter doesn’t have to commit their life to staying in a particular city for multiple years just to please some landlord, and the landlord gets to re-auction the rental rights by re-setting the price once a year, keeping up with the going market rate. However, in practice, most renters end up wanting to stay more than one year, and are not mentally or logistically preparing to move. Thus, a substantial price increase is disruptive. You might be tempted to say that the real problem is that the renter went in blind without guarantees about what they were really getting signed up for, and thus a fix could be to secure much longer leases which schedule the rent increases up front. However, as the lease duration goes up, the chances go up that the renter experiences changes in life circumstance that make it impossible or intolerable to continue renting. Barring the creation of a society of debt prisoners, the landlord will inevitably end up enduring lease breaks. Because the switching cost is uniquely high, this creates a fundamental dilemma: people don’t want to move until they do, yet they need to be prepared to move frequently - unless they secure longer leases, which they can’t realistically promise to honor.
So yes, you have cartel behavior and supply distorted by out-of-band zoning restrictions that the market can’t correct, but those are par for the course. The real anger comes from the fact that a place to live isn’t really a “good” in the first place - everybody needs one, and while a roof over your head and good plumbing is worth _something_, the rent you’re paying is driven primarily by a segment of our society _preventing_ you from being able to live close to the public center unless you pay their troll toll. This is where the perceived injustice comes from. When you layer in the Gordian knot of lease duration, rent increase, and the high switching costs, that’s when people really start to hate you.
I love the kind of science reporting on display in this article! It stays at a consistent, objective level of detail throughout (no "imagine a vector space as a block of jello" or whatever it is that Quanta and other publications are always doing). It allows specialists to understand exactly what's being claimed, and at the same time stays accessible to laypeople. It feels like it's written for the kind of reader that I aspire to be: not necessarily a specialist on every topic under the sun, but someone who has finished high school and is paying attention.
Though I guess writing like this doesn't pay off in the modern world. Most readers don't consistently pay attention when reading, and to be honest, I don't either.
For many publications you could be criticizing, I'd agree with you, but Quanta usually reaches a higher standard, and I feel they deserve credit for that. Here's the Quanta article on the same thing [1]. It goes into much more detail, shows a picture of the perfect sofa, and links to the actual research paper. It's aimed at a level above "finished high school", and I appreciate that; it gives me a chance to learn from the solution to a problem and encourages me to think about it independently.
I agree with you that Quanta doesn't always "allow specialists to understand exactly what's being claimed", which is a problem; but linking to the research papers greatly mitigates that sin.
And here's how they clearly explain the proof strategy.
> First, he showed that for any sofa in his space, the output of Q would be at least as big as the sofa’s area. It essentially measured the area of a shape that contained the sofa. That meant that if Baek could find the maximum value of Q, it would give him a good upper bound on the area of the optimal sofa.
> This alone wasn’t enough to resolve the moving sofa problem. But Baek also defined Q so that for Gerver’s sofa, the function didn’t just give an upper bound. Its output was exactly equal to the sofa’s area. Baek therefore just had to prove that Q hit its maximum value when its input was Gerver’s sofa. That would mean that Gerver’s sofa had the biggest area of all the potential sofas, making it the solution to the moving sofa problem.
I agree that Quanta can be irritatingly stretchy with the metaphors sometimes, but to be fair, "What's the biggest couch you can fit through this hallway corner" is inherently easier to explain to laypeople than like, the Riemann Hypothesis.
i.e., if you apply the zeta function to a complex number and get zero, then that number must have been either a negative even integer or a number whose real part is one half.
What could be simpler than that? Those are all fairly simple concepts, and the definition of the function itself is nothing too exotic. I think any high schooler should be able to understand the statement and compute some values of zeta numerically. I'd like to see a statement about couches written so succinctly with only well-defined terms!
(I'm being intentionally a bit silly, but part of the magic of the Riemann Hypothesis is that it's relatively easy to understand its statement, it's the search for a proof that's astonishingly deep.)
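For the "compute some values numerically" part, a naive partial sum of the series really is high-school-level (with the caveat that it only converges for real part greater than 1; the hypothesis itself lives in the analytic continuation beyond that region, which is where the depth hides):

```python
import math

def zeta_partial(s, terms=200_000):
    """Naive partial sum of the zeta series; only valid for Re(s) > 1."""
    return sum(n ** -s for n in range(1, terms + 1))

# Classic sanity check: zeta(2) = pi^2 / 6
print(zeta_partial(2))   # ≈ 1.6449...
print(math.pi ** 2 / 6)  # ≈ 1.6449...
```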
That's a good point. I do remember doing problems related to extending formulae outside the radius of convergence in my final year before university, but I don't think it's fair to ask for proper complex analysis from 17-year-olds.
Agree, but I wanted more. What is the intuition behind the optimality proof? I realize you cannot summarize a 119-page paper in two paragraphs, but still.
Right, but if it's noticeably hotter than the environment, then that temperature difference could be used to drive a heat engine and get some more useful work. So the knee-jerk response "omg, we see the heat from space? it's gotta be wasteful!" is kind of correct, in theory.
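How much work, in theory? The Carnot limit bounds it (a back-of-the-envelope sketch; the temperatures below are made-up examples, not figures from the thread):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work between two
    reservoirs; temperatures in kelvin."""
    return 1 - t_cold / t_hot

# E.g. exhaust air at 45 °C rejected to a 20 °C environment:
print(carnot_efficiency(318.15, 293.15))  # ≈ 0.0786
```

The closer the heat is to ambient temperature, the less work any engine can extract from it, even in theory.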
Some people are saying "waste heat" in the technical sense of "the heat my industrial process created and I need to get rid of" and others are saying "waste heat" as "heat humans are emitting into space without slapping at least one Carnot engine on it yet".
If the heat being generated were economically worthwhile, the miners would be incentivized to use it to offset their costs. Since they aren't, we can somewhat reasonably assume that it would cost more to recapture than it's probably worth.
Is it the same as flipping every parenthesis to the other side of the number it's adjacent to, and then adding enough parentheses at the start and end?
For example,
(1 + 2) * (3 + 4)
becomes
1) + (2 * 3) + (4
and then we add the missing parentheses and it becomes
(1) + (2 * 3) + (4)
which seems to achieve a similar goal and is pretty unambiguous.
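The transformation can be sketched mechanically (a toy implementation; it assumes operands are plain integers, which is enough for the example above):

```python
import re

def flip_parens(expr):
    """Flip each parenthesis to the other side of its adjacent number,
    then add the parentheses missing at the start and end."""
    # Single left-to-right pass: "(1" becomes "1)" and "2)" becomes "(2".
    flipped = re.sub(
        r"\((\d+)|(\d+)\)",
        lambda m: m.group(1) + ")" if m.group(1) else "(" + m.group(2),
        expr,
    )
    # Balance: prepend "(" whenever depth would go negative,
    # append ")" for whatever depth remains at the end.
    depth, need_open = 0, 0
    for ch in flipped:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                need_open += 1
                depth = 0
    return "(" * need_open + flipped + ")" * depth

print(flip_parens("(1 + 2) * (3 + 4)"))  # (1) + (2 * 3) + (4)
```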
I guess from the inside it feels different: I'll read 99 mind-numbingly bad comments and cut them all slack (in the sense of not replying to them at all), but these 99 instances of benevolence are invisible and count for nothing, because the 100th comment will make me fly into a rage, and that's when I'll actually post something. And unload a bunch of my frustrations from the previous 99, too. The internet selects for extreme reactions.
Altruistic? DARPA is a military agency, and ARPANET was a prototype network designed to survive a nuclear strike. I think the grandparent comment's point is that the innovation was government-funded and made available openly, neither of which depends in the slightest on its being altruistic.
> The CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Datagrams were exchanged on the network using transport protocols that do not guarantee reliable delivery, but only attempt best-effort [..] The experience with these concepts led to the design of key features of the Internet Protocol in the ARPANET project
Keeping with the theme of the thread, CYCLADES was destroyed because of greed:
> Data transmission was a state monopoly in France at the time, and IRIA needed a special dispensation to run the CYCLADES network. The PTT did not agree to funding by the government of a competitor to their Transpac network, and insisted that the permission and funding be rescinded. By 1981, Cyclades was forced to shut down.
> Rumors had persisted for years that the ARPANET had been built to protect national security in the face of a nuclear attack. It was a myth that had gone unchallenged long enough to become widely accepted as fact.
No, the Internet (inclusive of ARPANET, NSFNet, and so on) was not designed to survive a nuclear war. It's the worst kind of myth: one you can cite legitimate sources for, because it's been repeated long enough that even semi-experts believe it.
The ARPANET was made to help researchers and to justify the cost of a mainframe computer:
> It's understandable how it could spread. Military communications during Nuclear War makes a more memorable story than designing a way to remote access what would become the first massively parallel computer, the ILLIAC IV. The funding and motivation for building ARPANET was partially to get this computer, once built, to be "online" in order to justify the cost of building it. This way more scientists could use the expensive machine.
That's a valiant attempt at myth-fighting, but it doesn't fully convince me. For example, one hop to Wikipedia gives this:
> Later, in the 1970s, ARPA did emphasize the goal of "command and control". According to Stephen J. Lukasik, who was deputy director (1967–1970) and Director of DARPA (1970–1975):
> "The goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making."
> That's a valiant attempt at myth-fighting, but it doesn't fully convince me. For example, one hop to Wikipedia gives this:
And in that same Wikipedia section there are 3-4 other people who say otherwise, including Herzfeld, the director who authorized the actual start of the project:
Meanwhile you cherry-pick 1-2 paragraphs at the end, while there are over a dozen that say the opposite. Note that Lukasik was later the director of DARPA, which had a completely different mandate than (no-D) ARPA.
Not to mention the functions are also translated to the other language. I think both of these are the fault of Excel, to be honest. I had this problem long before Google came around.
And it's really irritating when you have the computer read something out to you that contains numbers. 53.1 km reads like you expect but 53,1 km becomes "fifty-three (long pause) one kilometer".
> Not to mention the functions are also translated to the other language.
This makes a lot of sense when you recognize that Excel formulas, unlike proper programming languages, aren't necessarily written by people with a sufficient grasp of English. That's especially true for the more abstract mathematical concepts, which aren't taught in secondary-school English classes but in native-language mathematics classes.
The behaviour predates Google Sheets and likely comes from Excel (whose behavior Sheets emulates/reverse-engineers in many places). And I wouldn't be surprised if Excel got it from Lotus.
Not sure if this is still the case, but Excel used to fail to open CSV files correctly if the locale used a list separator other than ',' – for example ';'.
Sometimes you double-click and it appears to open everything just fine, while silently corrupting, changing, and dropping data without warning or notification, and giving you no way to prevent it.
The day I found that IntelliJ has a built-in tabular CSV editor and viewer was the best day.
Given that the world is about evenly split on the decimal separator [0] (and correspondingly on the thousands-grouping separator), it's hard to avoid. You could standardize on ";" as the argument separator, but "1,000" would still remain ambiguous.
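The ambiguity is easy to see if you interpret the same characters under both conventions (a sketch; real spreadsheet parsing involves more heuristics than this):

```python
def parse_number(text, decimal_sep):
    """Read the same digits under a given locale convention."""
    group_sep = "." if decimal_sep == "," else ","
    return float(text.replace(group_sep, "").replace(decimal_sep, "."))

print(parse_number("1,000", decimal_sep="."))  # 1000.0 (en-US reading)
print(parse_number("1,000", decimal_sep=","))  # 1.0    (de-DE reading)
```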
Aha, in Microsoft Excel they even translate the shortcuts. In the Brazilian version, Ctrl-S is "Underline" instead of "Save". Every sheet of mine ends up with a lot of underlined cells :-)