braindeath's comments | Hacker News

Not a particularly informative or well-written one, frankly.

"Also, with more memory, there are simply many more possible features that MIDI 2.0 can try to emulate. More memory should also reduce the chance of the timing between playing a MIDI instrument and digital recording to be slightly off. This should mean music played on MIDI 2.0 instruments will feel more analog, and make it possible for non-keyboard instruments to work better with MIDI."

Eh? What?

"The fact that MIDI 2.0 is bidirectional has two major effects. First, it means that it is backwards compatible, and won’t make the billions of MIDI 1.0 devices already out in the world obsolete."

No, backwards compatibility does not follow from MIDI 2.0 being bidirectional.

"“I think using a MIDI guitar would change the way I make music. The way our brain orients to making music on a guitar is just different to a keyboard layout. I used to have a MIDI guitar instrument, but I don’t have it anymore because I felt like there was a lot of latency and I didn’t really like the results I got. I am hoping [MIDI 2.0] will solve some of the issues I had before.”"

Well, prepare to be disappointed. The problems with digital non-keyboard instruments have little to do with MIDI. In the case of a MIDI guitar, the latency is a matter of physics, not the digital transport: the pitch tracker has to see at least a cycle or two of a low string's vibration before it can even decide which note you played.


No.

The popular C compilers have a feature where they will do additional type checking on the arguments passed to "format" functions like printf. You can mark your own functions with the same attribute so they get checked too.

See the format attribute https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attribute....
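A minimal sketch of what that looks like with GCC or Clang (log_msg is just a made-up example function; the attribute is the GNU extension documented at the link above):

    #include <cstdarg>
    #include <cstdio>

    // Argument 1 is a printf-style format string, the values being formatted
    // start at argument 2 -- the compiler will type-check every call site.
    __attribute__((format(printf, 1, 2)))
    void log_msg(const char *fmt, ...) {
        va_list ap;
        va_start(ap, fmt);
        std::vfprintf(stderr, fmt, ap);
        va_end(ap);
    }

    int main() {
        log_msg("count = %d\n", 42);      // fine
        // log_msg("count = %d\n", "42"); // would warn: %d expects an int
    }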

printf is not an oddball function. Also, type-checking format strings in general does not have to be that complicated; format strings are still widely used in golang, for example.

Of all the security pitfalls of C, the format string design of printf is way down the list. As others have noted, printf is not what makes the C type system weak.


The front-end is in Visual Basic.

The backend is most certainly not in anything close to Visual Basic. (It's MUMPS, using InterSystems Caché as the object store implementation.)


Umm... in any implementation you can already "cast" a std::string to a null-terminated string without copying to a separate buffer (cf. std::string::data() and std::string::c_str()) -- C++11 essentially mandates that: c_str must return "a pointer p such that p + i == &operator[](i) for each i in [0,size()]."

The history of those demonstrates some of the bat-shit insanity of the evolution of C++. Originally data() didn't have to be null-terminated, but then they finally realized that making c_str actually work without this invariant was, in general, nearly impossible. So since C++11, data and c_str are equivalent.

Edit: by impossible I really mean something usable with sane complexity requirements -- that wouldn't make people just immediately discard std::string for something better. This is the same C++ that brought you auto_ptr, and other garbage. C++11 corrected many things, this included.
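For the curious, this is what the post-C++11 contract amounts to in practice (a trivial sketch, nothing beyond what the standard guarantees):

    #include <cassert>
    #include <cstring>
    #include <string>

    int main() {
        std::string s = "hello";

        // Since C++11, data() and c_str() return the same pointer, and the
        // byte at s[s.size()] is guaranteed to be '\0'.
        assert(s.data() == s.c_str());
        assert(s[s.size()] == '\0');

        // So the buffer can be handed to any C-string API without a copy.
        assert(std::strlen(s.c_str()) == s.size());
    }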


It was possible with copy-on-write strings (e.g. libstdc++'s strings until C++11), but C++11 added complexity requirements that outlawed COW strings, because COW is only really useful in very specific circumstances and comes at the cost of making the common case slow.


C++11 was really a new language. It even had a whole new ABI. I'm surprised the transition went smoothly and not like Python 2 to 3.


IMO this is probably in part because you can mix the two more easily? I imagine you could link against C++03 code compiled with a compiler understanding the new ABI?


Yes you could[1], but you couldn't link C++11 object code with a C++03 compiler.

[1] with a macro like -D_GLIBCXX_USE_CXX11_ABI=0 (libstdc++'s dual-ABI switch)


If the C++11 object code does not expose any C++11-specific features in its ABI/API, you should be able to.


There was no need to plug in the NUL until somebody called c_str(). That possibility was lost in C++11. Now, if somebody plugs something else there (which is done, UB notwithstanding), c_str() produces the wrong answer.

NUL-terminated strings were one of the dumber things C++ inherited from C, and code that depends on NUL termination is not a thing to be proud of.


How do you "plugin" the NUL. If there's no space for it, then everytime you want a C string, you need to do an allocation. And why would you put something in it's place - as you said if you modify the c_str that was always UB -- so you pretty much lost all guarantees at that point -- that was just dumb.

Basically the NUL terminator amounts to one byte of wasted space -- which is almost always completely in the noise: if you're using std::string for tiny strings, you're paying at least 24 bytes for the object anyway. There are just very few cases where that one extra byte matters.

I think the overwhelming opinion is that this 1-byte trade-off is better than the overhead of allocation and copying when you want to pass that string around. NUL-terminated strings aren't important in C++ because of C (well, not directly), but because they are essentially the ABI of countless existing libraries and the major operating systems.
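To make that concrete, here's a trivial sketch (file_exists and the filename are made up for illustration): because the terminator is always there, handing a std::string to an OS/C API costs nothing -- no allocation, no copy.

    #include <cstdio>
    #include <string>

    // Pass a std::string straight to a C API that expects a NUL-terminated
    // string. c_str() is just a pointer into the existing buffer.
    bool file_exists(const std::string &path) {
        if (std::FILE *f = std::fopen(path.c_str(), "rb")) {
            std::fclose(f);
            return true;
        }
        return false;
    }

    int main() {
        std::printf("%d\n", file_exists("example.txt"));
    }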

> code that depends on NUL termination is not a thing to be proud of.

Whatever. Code that depends on NUL termination is ubiquitous.


>How do you "plug in" the NUL? If there's no space for it, then every time you want a C string you need to do an allocation.

You have not thought it through. There was always space for a NUL, but nobody needs it to have a NUL in it until somebody calls c_str(). Anyway, that was true until somebody jiggered the Standard, just before 1998, to require a NUL visible to s[s.size()].

>C Code that depends on NUL termination is ubiquitous.

Fixed that for you.


> Anyway, that was true until somebody jiggered the Standard, just before 1998, to require a NUL visible to s[s.size()].

This is not technically true. You could plug in your hated NUL byte in the implementation of operator[].

> C Code that depends on NUL termination is ubiquitous.

No, that's not what I said. It really doesn't matter what language the code underlying all those APIs that expect NUL-terminated strings is written in (a lot of it is C++ too, of course). Windows, macOS, and all POSIX-ish systems have large APIs that consume NUL-terminated strings. NUL-terminated strings are ubiquitous in computing at this point. Sure, blame C from 40 years ago and burn a dmr effigy, I don't care, but that battle was lost long ago.

NUL-terminated strings may be terrible, but the C++ accommodation for them is not -- it's a well-thought-out trade-off. My understanding is that neither Go nor Rust makes this trade-off. golang's FFI and syscall overhead has always been something of a performance disaster anyway. C++ has always had a greater demand to "play nice" with legacy code and systems than either of those.

The overhead of just having the NUL byte is almost always a non-factor. If it really is one, then use a byte vector instead.


Buggering op[] would be the slowest possible way to get the NUL there. Some of us would have liked for string operations not to be as absolutely slow as can be achieved. Evidently that is a minority preference.

If you think anybody is complaining about the extra space for the NUL, what you are barking up is not even a tree.


> Buggering op[] would be the slowest possible way to get the NUL there.

> Evidently that is a minority preference.

What evidence? Pointing out a fact isn't an endorsement.

> If you think anybody is complaining about the extra space for the NUL

Then what exactly are you complaining about? Setting the byte to 0? If not, why are you being so obtuse?

> what you are barking up is not even a tree.

In this thread your tone is repeatedly that of a condescending jerk.


You have presented your credentials.


Are you saying that there should not be an extra 0 byte at the end until the string fills up and c_str() is called? Inconsistent allocations seem like a recipe for difficult-to-debug performance problems.


> Are you saying that there should not be an extra 0 byte at the end until the string fills up and c_str() is called?

No.


> Hard links mostly work as you tell them to. You may need to set things up properly, but it's essentially the same thing being done.

That's bullshit. Hard links are the exact same inode, with all that entails. Reflinking is at the data-block level. Totally different.


How is linking based on file and linking based on block "totally different?"


A hard link is essentially a filename that points to an inode. Creating a hard link just creates another filename associated with the same inode. Once a hard link, always a hard link, until destroyed.

A reflinked file has its own fs metadata, including its own inode, with (initially) shared extents. Those shared extents can have their blocks modified individually and independently, per file. Once there are no more shared blocks, they're not reflinks anymore.
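For the curious, here's a minimal Linux sketch of the two operations (filenames made up, error handling omitted; FICLONE is the clone ioctl that cp --reflink uses on filesystems like btrfs and XFS):

    #include <fcntl.h>
    #include <linux/fs.h>   // FICLONE
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main() {
        // Hard link: one more name for the same inode; metadata and data
        // are shared forever (until one name is unlinked).
        link("a.txt", "b.txt");

        // Reflink: a brand-new inode whose extents initially point at the
        // same blocks; writes to either file are copy-on-write and diverge.
        int src = open("a.txt", O_RDONLY);
        int dst = open("c.txt", O_WRONLY | O_CREAT, 0644);
        ioctl(dst, FICLONE, src);
        close(src);
        close(dst);
    }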


IIUC: If you hard link two files, changing one file changes the other. If you dedupe two blocks, writes are copy-on-write, meaning that changing one block does not change the other.

It's not an intrinsic property of file-dedupe vs block-dedupe. It's just how it's conventionally done.


Hard links are paths pointing to the same inode - the same metadata, the same contents - they are the same file, which just happens to have multiple addresses.

Inodes that happen to share blocks are not the same file, i.e. totally different.


In the US, for home connections (cable, fiber, DSL), pretty much everybody gets a publicly reachable IP address -- the worst case is that some ports are blocked, like 80 or 25. Phones don't get a dedicated IPv4 address.


In other parts of the world that didn't get as many IPv4 addresses as the US, the standard for home connections is that you DON'T get a public IP address; you get a private IP address behind carrier-grade NAT.

And that's really NAT, not a firewall, so nothing is "blocked".

You do get public IPv6 addresses from some ISPs, though.


Finally some truth.


For most people it's dynamic. Mine is dynamic with the PPPoE fibre session.


I have an rpi set up with a cron job that runs every minute to update my domain name to point to home. Works pretty well. At worst you lose the connection for a minute, but usually the IP address only changes when the home connection fails, which takes more than a minute to reset anyway.


Isn't this what the DynDNS protocol and various daemons are for? Why write your own? :P


That's precisely how they work. You install a client that pings their server. They see if there's an IP address change and update the DNS A record.

If your DNS provider has an API, this is probably the very first example in the docs.


Why not? It is pretty simple and very fun! My first project in golang was a program that polled for the machine's IP address and updated an AWS Route53 record.


It's not exactly "write your own". I have a single line in my crontab that just uses curl to hit a URL, and the remote server takes the IP address the request came from and sets the DNS record to that.


Though usually on firmware like OpenWrt the outgoing request is tied to a particular interface going up (and down), as it should be, so it's somewhat more robust and 'correct' than a crontab would be.


Dynamic in theory, but for many people the IP is unchanged for a long time. I remember reading an article that said the average length of time between dynamic IP changes tracked by some company was something like seven months, though I can't find it now.

I have cable with a theoretically dynamic IP, but it's changed once in >4 years.


Can you be sure that during those 4 years it never changed more than once, even for a short time (maybe hours or days), and then reverted back?


That's not really a thing. The pools are large-ish; the chances of winding up back on the same IP after a change are tiny.


Not speaking for all ISPs worldwide.


Maybe by running a dynamic DNS client that keeps logs of IP address changes. Do DDNS clients keep logs? Maybe passive DNS would detect changes.

The point of the comment was that you cannot just assume it never changed unless you were monitoring it continuously.


Well, if you don't notice it, for a home VPN... you won't notice it.


You can usually get companies to unblock the ports, though, if you call customer service.


> Afterwards, you will start to see the shell as a glorious, essential element of our civilization, worth of respect and deserving our careful attention.

Nah.

There should be (and probably is) a term for the phenomenon/trope where you take something that was not carefully designed in the first place (like basically all of Unix), and then down the line you hyper-analyze the hell out of certain bits of it and wax poetic about the few elegant bits that are inevitably there (even BASIC will work, yes) - while conveniently ignoring that the whole is still a steaming pile. Lord knows that's what happened with "Unix" starting in the 90s, and Javascript in the 2000s.


> something that was not carefully designed in the first place (like basically all of Unix)

While I would generally agree that the Unix shell language (of which bash is a superset) and the bash language itself are not the most elegant and well-designed things in the world - and that they should be used in a limited way, since there are better options in most cases - I do have to say that, to me at least, they are significantly better than PowerShell, which, while it clearly had a lot of design work put into it, seems to have been designed by someone who has never used a shell or a terminal and maybe only had some limited interaction with a computer of any kind.


>Lord knows that's what happened with "Unix" starting in the 90s, and Javascript in the 2000s.

What do you mean by this bit?


Unix: a shift from the pragmatic view that Unix is 80% nice and "worse is better" to a mystic longing for a time when everything was a file and you could compose things from well-understood utilities that each did one thing. (Which never existed; that dream was Plan9.)

Javascript: the idea that it's actually a nice language that anyone should use except under duress.


Yeah, unfortunately color perception is one of those things that is a bit too complex for a self-discovery method. While the "symmetry" explanation is satisfying, it really isn't correct at all. Color perception and color matching, within art (where it was useful) and more recently as a science, are complicated and took many years of the best scientific minds to figure out. Sometimes you need authorities on knowledge, and to stand on the shoulders of giants, as it were.

https://web.archive.org/web/20080717034228/http://www.handpr...

The teachers aren't completely "wrong"; they were just conveying a simplification of the history of pigments (also touched on in that article). It is, after all, true that you can mix those colors and get a wide range of colors (including a blacker black than you would with CMY). But any pedagogy that says there is such a thing as "primary" colors that make all colors is necessarily going to be wrong, even if it's CMY.


> If these are the primary colors, why aren't they what printers use? Printers use cyan, magenta, and yellow.

There really is no such thing as "the" primary colors. The school primary colors are just as primary as those used by a printer, and are based on historically widely used pigments. CMY (and usually K) allow for a wider gamut. But even this isn't perfect. There are printing processes with more primaries to get a wider gamut still.

> And the only thing that makes these colors primary to us is that they're the colors that the cones in our eyes perceive.

This isn't quite right. For one, our cones are not monochromatic receptors, and moreover, they overlap! There isn't really just one true red, green, blue used in computer monitors either.

Because of the way our brain perceives colors (metamerism), you can create a wide gamut of colors with "alternative" primaries.


What sort of alternative primaries? Just slightly-adjusted colors that still trigger each of the three cones individually?


You can't trigger cones individually. The cones are responsive to a wide spectrum and they overlap, especially in the case of the L and M receptors. The peak wavelength of the L receptor (the "reddest") is about 580 nm -- that is not red, that's yellow-green.

Color vision and stimulus are not a straightforward mapping of primaries onto cones. If it were that simple, you could trivially render all perceivable colors with 3 chosen primary colors. This is impossible to do. You can mathematically define 3 primaries that cover the entire visible gamut, but they cannot physically exist (complete, but imaginary). Any chosen set of 3 real primaries is a compromise. For subtractive materials it is trickier, which is why photo inkjet printers will use up to 8 primaries.

You might find this informative: https://web.archive.org/web/20080717034228/http://www.handpr.... The section starting with Maxwell and the "3 artist's misconceptions" especially.

But the only general definition of primary is basically just any set of colorants that can be mixed to get a useful gamut. In subtractive materials, this is why you won't see a painter messing around with mixing cyan, magenta, and yellow (better explained in the link).


> The general consensus is that putting big files in git means you're doing something wrong and the problem is with your environment not git.

This is not a "general" consensus. It's a consensus among hardcore proponents of git. I love git; it has made my life better. I still think its large-file support story is shitty/suboptimal, and there are valid use cases where a general-purpose VCS is used to track large binary assets alongside code; git would do well to be that general-purpose VCS. It's a limitation of git. It's not a fatal limitation, and git still has enough benefits (which include availability and mindshare), but it is still an unfortunate limitation and somewhat ironic for a tool born in a world where everything is just a "sequence of bytes".

