Block space is always something of a bidding war, even with zero fees. That's because each additional byte added to a block slightly increases the time it takes to propagate that block.
When miners find a block, they race to get it published as quickly as possible to as many nodes as possible. If A finds a block and stuffs it full of transactions, and shortly thereafter B finds a block that includes no transactions, B might actually "win" the race because B can propagate his tiny block much faster than A can propagate his. So each txn added incurs a tiny penalty just by virtue of adding to the payload.
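To put a rough number on that penalty, here is a toy back-of-the-envelope model (my own illustrative assumptions, not anything from the protocol itself): suppose blocks arrive as a Poisson process averaging one per 600 seconds and propagation delay grows roughly linearly with block size. The chance that a competitor publishes while your block is still in flight is then about 1 - exp(-delay / 600).

```python
# Toy orphan-risk model.  ASSUMPTIONS (mine, for illustration only):
# Poisson block arrivals every ~600 s and ~2 s of extra propagation delay
# per MB of block size.  Real-world delays depend heavily on relay networks.
import math

AVG_BLOCK_INTERVAL = 600.0   # mean seconds between blocks
SECONDS_PER_MB = 2.0         # assumed propagation delay added per MB

def orphan_risk(block_size_mb: float) -> float:
    """Approximate probability a competing block appears while ours propagates."""
    delay = block_size_mb * SECONDS_PER_MB
    return 1.0 - math.exp(-delay / AVG_BLOCK_INTERVAL)

for size in (0.0, 0.5, 1.0, 2.0):
    print(f"{size:3.1f} MB block -> orphan risk ~ {orphan_risk(size):.2%}")
```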
> they decide which fork to run
In reality most hashpower is pooled and pools autoswitch between forks depending on which is more popular in the moment. So most people running miners are mining all forks.
> the validation that ensures that consensus is maintained between nodes
That's a terrible misunderstanding. If nodes can reach consensus by simply agreeing on transaction validity, then what purpose do you believe miners serve?
The definition of a node is provided in Section 5 of the white paper mentioned in OP. The logic that explains "why you must mine in order to be a peer" is explained in Section 4.
Non-mining nodes are trivial to Sybil, they are "one-IP-one-vote" per Section 4. Only miners are "one-CPU-one-vote." That is why nonminers (what you call "nodes") are not peers to the system, but rather leeches / relays.
You have had a one-year-long lesson in your misunderstanding of how bitcoin works. And yet here you are, trying to explain how the fork attempts didn't turn out exactly as I say.
> A peer is a node, which is a client that validates all transactions and blocks in the blockchain.
According to the white paper, Section 5, a peer is a miner. That has not changed, regardless of attempts to redefine the paper. To be a peer, you MUST contribute proof of work.
Running a non-mining node gives you a copy of the blockchain data that you can trust is valid according to the rules you used to validate it. It does not make you a peer.
He just said nodes validate the information. That validation doesn't matter to anyone other than the node operator and the users of that node.
The only occasion where propagation matters is when you're relaying a transaction from another full node to a miner (or helping to do so). As long as there is any path to do so, more nodes do not matter.
All miners are already connected together using high speed channels.
You're referring to isolated incidents, which is an invalid comparison.
Instead, if you have information that I have a pattern of murdering lots of women, and my friend has a pattern of murdering lots of men, and you choose to release information about me and not my friend, it immediately suggests that you support killing men but not women.
I don't deny that it may show a bias, but as long as the information about the murders that I release is true why shouldn't it be acted upon?
Further, if the Huffington Post (or name a left-leaning publication, if you believe they are not one) does an article on Trump, and the facts they release are verified, should we not act on them due to the lack of a similar article regarding Obama?
A purely functional statically typed one, like OCaml or Haskell, because there would be so much less to review, and an insane amount of the reviewing work is automatic.
Zero bugs around state, zero bugs around memory mgmt, zero bugs about error conditions not being handled, zero bugs due to some data being expected but not being written, etc.
Well, a couple months back I attended a talk on a Haskell implementation of the Noise protocol.
The programmer admitted that he was a cryptography novice, and in fact a Haskell novice.
As a result the code he wrote is needlessly abstract - for one thing, the guy uses free monads and in turn ropes in Template Haskell as part of his state model. I really have no idea what code he's generating.
The code has other features that make review challenging - for instance he doesn't qualify any of his imports so it's hard to tell where to look for the functions he's implementing.
Maybe you are better at auditing Haskell than I am. As Daniel J. Bernstein writes in various places, one common exploit is to construct an elliptic-curve Diffie-Hellman shared secret from input that isn't a curve point. I really can't tell if the guy is mitigating this attack or not, but here you can have a look:
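For reference, the classic mitigation on a Weierstrass curve is simply to reject any peer-supplied public key that doesn't satisfy the curve equation before doing any scalar multiplication. A toy Python sketch of that check (my own illustration, not the talk's code; secp256k1 parameters are used only as an example):

```python
# Invalid-curve-point check, illustrative only.  Parameters are secp256k1
# (y^2 = x^3 + 7 mod p); the same idea applies to any short Weierstrass curve.
P = 2**256 - 2**32 - 977   # field prime
A, B = 0, 7                # curve coefficients

def on_curve(x: int, y: int) -> bool:
    """Return True only if (x, y) is a valid affine point on the curve."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x + A * x + B)) % P == 0

# A received ECDH public key should pass this check (and not be the point at
# infinity, and ideally lie in the right subgroup) before it is ever
# multiplied by our secret scalar to form the shared secret.
```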
How can tainted evidence be used to establish guilt beyond reasonable doubt, when the entire basis of trust in the collection of the evidence is ostensibly gone?
These are all amazing benefits, but when you're making art, you'll throw all of those out the window if doing so gets you the sound you need.
The problem is thinking that this is either/or. I use a 48 channel ProTools HD system married to gear that's largely 40-70 years old. Best of both worlds.
> There's a reason they never use actual data and measurements to prove their points
Artists use the most advanced measurement devices in the world: a binaural listening device connected to the world's most advanced neural net. The neural net is the best part.
Sorry. There is much more to sound than frequency response, dynamic range, phase shift, and total harmonic distortion -- and moreover, the "optimum" amount of each of these is not fixed, but depends entirely on the material and the taste of the engineer & producer.
You can get great sound "in the box," don't get me wrong, but ultimately there is, as yet, no substitute for the real thing. A saturation plugin is nowhere close.
To accurately simulate this device, you would practically have to simulate it at the molecular level.
> The workflow makes the sound. The lack of automation forces you to make choices. Forces your hand, literally.
You raise an excellent point here. A full-blown DAW is practically without limits: hundreds of tracks, unlimited effects on every track, any signal chain is possible.
But making art is about working within constraints. Most of the best art derives as much - or more - from the constraints as from the capabilities of the devices or instruments used.
However there is a technical reason why these devices do sound different, and that is that they all impart euphonic distortion. Bass sounds "bigger." Treble sounds "clearer." Mixed tracks "glue together." Vocals "pop out." A sense of "depth of field" may be imparted. Technically speaking this is all "distortion" of the original signal.
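To be concrete about what "distortion" means here in technical terms, a soft-clipping waveshaper is the simplest example of this kind of colouration: it adds harmonics that weren't in the source. A minimal toy sketch (my own, not modelled on any particular piece of gear):

```python
# Toy "euphonic distortion": a tanh soft clipper adds odd harmonics to a
# pure tone.  Illustrative only; real hardware is far messier than this.
import numpy as np

fs = 48_000                              # sample rate, Hz
t = np.arange(fs) / fs                   # one second of samples
tone = np.sin(2 * np.pi * 100 * t)       # clean 100 Hz sine

def saturate(x, drive=3.0):
    """Soft clipper: more drive, more harmonic content."""
    return np.tanh(drive * x) / np.tanh(drive)

spectrum = np.abs(np.fft.rfft(saturate(tone)))
for harmonic in (1, 3, 5):               # with a 1 s signal, bin index == Hz
    print(f"{100 * harmonic} Hz level: {spectrum[100 * harmonic]:.1f}")
```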
> However there is a technical reason why these devices do sound different, and that is that they all impart euphonic distortion. Bass sounds "bigger." Treble sounds "clearer." Mixed tracks "glue together." Vocals "pop out." A sense of "depth of field" may be imparted.
Digital advocates are quick to point out that all of those phenomena can be perfectly modelled in the box. We have convolution reverbs and tube amp models and emulation that can be scientifically proven to match the analog gear.
My point is, you _can_ do all that, but will you? You have a world of possibilities, and so the likelihood when working with a digital / software workflow is that you'll stick to the relative strengths of that setup.
Another thing to note about the hardware console interface is the nature of a classic design. Consoles like the Neve or SSL are familiar to engineers; they are instruments with a hand feel. A recording or mixing engineer can go into a studio and get similar results from similar gear. The listener's ear can pick up a certain je ne sais quoi of familiarity from it, what you call "good sound" in another comment, without quite knowing what they're hearing. The same way the Telecaster just "sounds good". It's not better, it's just familiar. Digital workstations are all different and don't achieve the same familiar sound.
> all of those phenomena can be perfectly modelled in the box
Nope.
How are you going to model the interaction between a microphone and the preamp load it's driving, and the compressor that preamp is driving, and the EQ the compressor is driving, and the nonlinear summing bus the EQ is feeding, when all of these are interacting in a live signal chain, and all entering various forms of nonlinear behavior based on the age of components, their tolerances (which vary from box to box), heat, etc.? It might be theoretically possible, but past a certain point, modelling these devices requires modelling physics at the materials level. In point of fact, most digital "simulations" of these sorts of devices are not simulations at all, but approximations that impart similar EQ, dynamics, and harmonic distortion.
I'm a dev by trade, and I've been doing audio for decades too. I used to believe this was all modellable. I think there's a tendency for people who are strong in digital signal processing but naive about what these devices are really doing to the signal to be overconfident in our ability to simulate them in realtime.
You can definitely achieve a reasonable facsimile! But if you want the sound of this console then you're going to have to make one or buy one.
This is all true, but there's a point beyond which it stops being musically/artistically relevant.
I've probably been doing this stuff as long as you have, and I'm not actually sure where that point is any more.
I actually hate the sound of most of the Beatles albums. I think they sound crap by modern standards - tinny, rattly, clogged-up, mid-heavy mixes with no deep bass.
Put them up against a modern trance single mixed ITB and the latter sounds huge, dynamic, cinematic, and infinitely more polished.
Which is better? It depends...
The Pink Floyd albums hit a sweet spot by being musically groundbreaking while also being the first examples of hi-fi multitrack recording in its modern form.
Now I tend to think ITB is fine for electronica and dance, because sometimes you want polish and a slightly unreal shine. But for rock, country and maybe even hiphop the older hardware is going to give you more character, bite, and depth.
Ultimately they're just colours you can use. If you have talent, it doesn't matter if you mix ITB or not.
> I actually hate the sound of most of the Beatles albums. I think they sound crap by modern standards - tinny, rattly, clogged-up, mid-heavy mixes with no deep bass.
Of course the whole point of bands like the Beatles was that they stood "engineering" on its head, as it was understood at the time (actual scientists wearing actual lab coats attempting to capture sound as accurately as possible).
EMI engineers making classical records were trying to create photographic style recordings. The early Beatles records sound, mostly, like you're standing at the Cavern club in front of a late-1950s sound reinforcement system. Photographic.
The Beatles helped to change the idea of making "photographic" records into making records like painting on canvas. Together with the other influential artists of the time (Beach Boys, Pink Floyd, Mike Oldfield, etc) they transformed modern music.
>Put them up against a modern trance single mixed ITB and the latter sounds huge, dynamic, cinematic, and infinitely more polished.
>Which is better? It depends...
I think you make my point here.
If I were making a Crystal Method record, of course I would use a different signal chain than if I were making a Dawes record.
> I actually hate the sound of most of the Beatles albums.
I agree, and I found the 2009 re-releases to be incredibly disappointing given that they didn't even do the most basic repair work on the most obvious glitches and errors. They were marketed as some major improvement, but in reality were a new remastering only. Removing half of a dog turd from my cup of coffee doesn't improve the coffee.
What did you think of the 2015 stereo mixes? (Only on the 2015 re-re-release of "1", unfortunately. If they released a complete box set of new mixes redone in the same fashion, I'd be first in line. And I'm not even a big Beatles fan.)
And a popular way to model it is with a DSP technique called a Volterra series.
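To make that concrete, a truncated second-order Volterra model looks roughly like the sketch below (illustrative Python with made-up kernels, not anyone's actual plugin code): the linear kernel is an ordinary FIR response, and the quadratic kernel couples pairs of past samples, which is where the level- and frequency-dependent "character" lives.

```python
# Truncated (order-2) Volterra series:
#   y[n] = sum_k h1[k] * x[n-k]  +  sum_{k1,k2} h2[k1,k2] * x[n-k1] * x[n-k2]
# Kernels below are made up purely for illustration.
import numpy as np

def volterra2(x, h1, h2):
    """Evaluate a second-order Volterra series with memory length len(h1)."""
    m = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        past = x[max(0, n - m + 1):n + 1][::-1]        # past[0] = x[n], past[1] = x[n-1], ...
        past = np.pad(past, (0, m - len(past)))        # zero-pad near the start of the signal
        y[n] = h1 @ past + past @ h2 @ past            # linear term + quadratic memory term
    return y

h1 = np.array([1.0, 0.2, 0.05])                        # toy linear (FIR) kernel
h2 = 0.1 * np.outer(h1, h1)                            # toy quadratic kernel
x = np.sin(2 * np.pi * np.arange(200) / 50)            # short test tone
y = volterra2(x, h1, h2)
```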
You may want to consider that impolite words are also used about people who consider themselves very knowledgeable about domains where they show no evidence of understanding the fundamentals.