I thought this was going to be an interesting article. Then I read
>At first glance, this is sound—even admirable—advice for aspiring business leaders. But a closer look at Klarman’s remarks, as well as the origins and trajectory of his career, suggests a deeply flawed messenger.
And I realized this was just another article which refuses to engage ideas and falls back to ad hominem. I don’t care if the messenger is flawed, or what somebody thinks his or her motivation is for delivering a particular message. I want to hear a critique of the message itself.
I think doing something positive with your life (e.g. advocating for people to make responsible procreation decisions) could give you a net positive existence. I think advocating that people commit suicide isn't going to win you many adherents though (unless you have a really cool story about where you go after suicide, e.g. Heaven's Gate :P).
The revulsion a lot of folks express at one-based indexing is always bizarre to me. I write code in C, Python, and MATLAB. Switching between these is really not that hard. The two indexing models just seem to be convenient/painful for different things.
And yes, perhaps one-based indexing introduces a class of bugs when you need to call into C. But zero-based indexing has the same problem if you need to call into Fortran, and calling Fortran is really common for numerical code.
I agree and I would have been a shrill critic of any 1-based indexing.
Most of the classic linear algebra algorithms are described using 1-based indexing. The last time I needed to use one of these algorithms, I stubbornly tried to translate everything into zero-based indexing, which was more difficult than you'd imagine, and it was hard to have confidence that I had correctly captured the algorithm.
I switched to using a 1-based matrix implementation and everything became trivial. It's not that hard to switch between looping from 0 to n (exclusive) and looping from 1 to n (inclusive).
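To make the loop-convention point concrete, here's a minimal sketch in Python (one of the languages mentioned upthread); the `OneBased` wrapper is hypothetical, standing in for the kind of 1-based vector/matrix implementation described above:

```python
class OneBased:
    """Illustrative 1-based view over a Python list (not a real library)."""
    def __init__(self, data):
        self._data = list(data)
    def __getitem__(self, i):
        if not 1 <= i <= len(self._data):
            raise IndexError(f"1-based index {i} out of range")
        return self._data[i - 1]
    def __len__(self):
        return len(self._data)

v = [10, 20, 30, 40]

# 0-based convention: loop from 0 to n (exclusive).
total0 = sum(v[i] for i in range(0, len(v)))

# 1-based convention: loop from 1 to n (inclusive),
# so textbook pseudocode transcribes directly.
w = OneBased(v)
total1 = sum(w[i] for i in range(1, len(w) + 1))

assert total0 == total1 == 100
```

Both loops visit the same four elements; only the bounds convention changes.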
The issue is whether you want to cater to established conventions (1-based indexing is typical in mathematics, and non-mathematicians likewise count from 1), or to cater to what is most logical.
I suspect the main reason 1-based seems convenient for some things is because of convention. Note that 0 wasn't even really used in mathematics until hundreds of years after our system of counting years was created.
Since our year counting is 1-based, we end up with odd things like "2019" being the "19th year" of the "21st century" as opposed to "2018" being "year 18" of "century 20". I suspect it's also fairly unintuitive that the current century began in the year "2001" rather than "2000".
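The year/century arithmetic above can be written out explicitly (a Python sketch; the function names are just for illustration):

```python
def century_of(year):
    # 1-based counting: years 1-100 are century 1, 101-200 are century 2, ...
    # so the map needs a subtract-then-add adjustment.
    return (year - 1) // 100 + 1

def century_of_zero_based(year):
    # Hypothetical 0-based counting (a year 0, centuries 0-99, 100-199, ...):
    # the map becomes plain integer division.
    return year // 100

assert century_of(2000) == 20   # 2000 is the last year of the 20th century
assert century_of(2001) == 21   # the 21st century begins in 2001
assert century_of_zero_based(2000) == 20
```

The off-by-one adjustment in `century_of` is exactly the "odd things" the comment describes.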
Pretty much all languages designed for mathematics use 1-based indexing. Mathematica, R, Matlab, Fortran, etc. Either people have to think that the designers of these languages all made a mistake, or realize that it makes much more sense for mathematical computing to follow mathematical standards.
Is it possible that mathematics got it slightly wrong? The whole concept of 0 is relatively recent. Plenty of mathematics comes from before its inclusion, so presumably the idea of maintaining convention was there for successive mathematicians too.
It's not about right or wrong; they just work for different things, but programming languages, unlike math or human languages, have to pick one as the default. 1-indexing is good for counting: if I want the first element up to the 6th element, I pick 1:6, which is more natural than 0:5 ("from the 0th to the 5th"). 0-indexing is good for offsets: I'm born in the first year of my life, but I wasn't born as a 1-year-old; I was born as a "0-year-old".
And since pointer arithmetic is based on offset, it wouldn't make sense for C to use anything other than 0-index. But mathematical languages aren't focusing on mapping the hardware in any way, but to map the mathematics which already uses 1-index for vector/matrix indexing. You can see the relation of languages in [1].
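A small Python sketch of the counting-vs-offset distinction (Python itself is 0-based with half-open slices; the address arithmetic is purely illustrative):

```python
data = ['a', 'b', 'c', 'd', 'e', 'f', 'g']

# Counting (1-based): "elements 1 through 6", inclusive on both ends.
# Offset (0-based, half-open): offsets 0 up to 6, i.e. data[0:6].
first_six = data[0:6]
assert first_six == ['a', 'b', 'c', 'd', 'e', 'f']

# Offsets are what pointer arithmetic computes:
# element i lives at base_address + i * element_size,
# so the first element naturally sits at offset 0.
base_address, element_size = 0x1000, 8
addr_of_first = base_address + 0 * element_size
assert addr_of_first == base_address

# Ages work like offsets: during the first year of life (counting),
# your age (offset from birth) is 0.
```

This is why the offset view falls out of C's memory model, while the counting view matches how vectors and matrices are written in math.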
If you want to write generic code for arrays in Julia, you shouldn't use direct indexing anyway, but iterators [2], which let you use arrays with any offset you want according to your problem, and for stuff that is tricky to do with 1-indexing, like circular buffers, the base library already provides solutions (such as mod1()).
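`mod1` is easy to sketch outside Julia too; here is a Python version of the same mapping (the Julia original is `Base.mod1(i, n)`, which maps any integer onto the range 1..n):

```python
def mod1(i, n):
    """Map integer i onto 1..n, the 1-based analogue of i % n."""
    return (i - 1) % n + 1

# Walking forward through a 5-slot circular buffer wraps 6 -> 1, 7 -> 2:
assert [mod1(i, 5) for i in range(1, 8)] == [1, 2, 3, 4, 5, 1, 2]

# Stepping backward from slot 1 wraps to slot n:
assert mod1(0, 5) == 5
```

This is the one place where plain `%` favors 0-based indexing, which is why the 1-based languages ship a helper for it.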
> 1:6 which is more natural than 0:5 (from the 0th to the 5th)
This is again just begging the question. When you want to refer to the initial element as the "1st", it is due to the established convention of starting to count from 1. The point is that the reasoning for starting from 1 might be only that: conventional, not based on some inherent logic.
You start counting with 1 because 0 is a term created later to indicate an absence of stuff to count. If I have one kid, I start counting with the number one; if I have 0 kids, I don't have anything to count.
But then I agree that there is no inherent logic, math is invented and not discovered, and you could define it any way you want. If we all had 8 fingers we would probably use base 8 instead of 10 after all.
Actually we naturally count from 0, because that's the initial value of the counter.
It just so happens that this edge case of 0 things doesn't occur when we actually need to count something. Starting from 1 is kinda like head is a partial function (bad!) in some functional programming languages. Practicality beats purity.
Does it matter if it's wrong? In mathematics it's a pretty standard, if unwritten, convention that, for example, the top left corner of a matrix has the position (1, 1) and not (0, 0). If I read an equation and saw an "a3" in it, I could safely assume that there exist an a1 and an a2, all three of which are constants of some sort. I could safely assume that there does not exist an a0, because this just isn't the convention. And furthermore, when I do encounter a 0 subscript (e.g., v0), it implicitly denotes something special: a reference value or the original starting value. This is different from seeing a 1 subscript, such as v1. For example, the equations
f = v0 + x
f = v1 + x
Those are the same equation, right? Sure, but when I see v1 I'm not really sure what it is or could be, whereas if I see v0 I can assume it may be the initial velocity, which I can look up.
The article may be old, but as of 6 months ago (the last time I tried Julia), the complaints about the JIT were still valid. Typing something into the REPL with a syntax error took tens of seconds to produce an error. Creating an array with 3 elements took over a second. Plotting took forever. It was a very frustrating experience.
As one of the Julia developers: this is quite atypical. We’d like to get a bug report on our GitHub tracker from you if you’re willing to open one. Anecdotally, on my 2018 MacBook Pro, full startup of Julia, compilation and execution of a syntax error, and cleaning everything up takes about 0.8s. (Measured with “time julia -e ‘foo foo foo’”.) That’s not a time to brag about, but it’s an order of magnitude faster than your comment. Your system may be slower about certain things, but tens of seconds is way far out of the distribution of reasonable times.
Creating an array of three numbers is much faster; on my system (subtracting startup time) it’s less than 50ms, and that’s all because of compilation time. After running it once (so as to compile the random number generation and array construction routines) constructing a random array takes ~4ns.
Again, we’d like to see an issue opened in our GitHub tracker to help figure out what’s going wrong. Feel free to open one at https://github.com/JuliaLang/julia
Fast startup, high throughput, high productivity: Choose two.
C/C++/Fortran take fast startup and high throughput, Python takes fast startup and high productivity, and Julia takes high productivity and high throughput.
I used to smoke a bowl almost every night. I quit because it had a number of effects on me I didn’t like: (1) it made me sleep poorly; (2) it made me paranoid/anxious, as in I would become hyper-focused on all the ways my life could hypothetically fall apart; (3) I couldn’t manage to do much besides sit on the couch and watch TV or game; (4) I would basically become paralyzed by overanalysis, to the extent that I couldn’t really talk to people.
Friends of mine who smoke regularly have basically the opposite experience. It really appears to me that weed affects people in really different ways, so it’s hard to make a general statement.
I think HN follows the aesthetic, to a good extent, of a sparse text based doc page. Those kinds of sites rank high in my personal trustworthiness rubric.
The more something has been “designed”, the more the site falls into the untrustworthy category, barring substantial evidence to the contrary.
My field is control systems. Every academic I know, and every paper I’ve read which mentions a software stack, uses matlab/simulink.
Simulink appears to me to have no good alternative (maybe JModelica or something?). There are some Python/Julia alternatives to MATLAB, but the existing control libraries are really pretty limited in comparison.
I’m not sure exactly how dependent particle physics is on ROOT, so direct comparison is difficult.
The Modelica systems are a good alternative, but they don't really exist in high-level languages yet, other than some transpilers which are a little iffy. We are planning to change that with Julia, though, which, unlike Python or R, has enough of an ecosystem to easily build such an open-source tool.