
A better question would be how much simpler does k8s make things compared to all the Rube Goldberg scripts admins have written over the years? A lot.


The problem is that there are many email addresses that are valid but likely belong to abusers. An email address made entirely of * and 200 chars long is valid per the RFC, but clearly not a human.

I settled on < 100 chars and:

`^[\w\.\+\-]+@[\w\-]+\.[\w\-\.]+$`

We'll see how it goes in production :)
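For anyone wanting to poke at it, the pattern above drops into Python's re module unchanged; a quick sketch (the length cap and the helper name are mine):

```python
import re

# The pattern from the comment, verbatim; \w covers [A-Za-z0-9_].
EMAIL_RE = re.compile(r"^[\w\.\+\-]+@[\w\-]+\.[\w\-\.]+$")

def looks_like_email(addr: str) -> bool:
    # Cheap plausibility filter, not RFC validation: the length cap
    # and the regex together reject the "200 chars of *" case.
    return len(addr) < 100 and EMAIL_RE.match(addr) is not None
```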


What value does your system provide by limiting addresses to 100 characters and the given regex?

Why not just allow any input and validate the address by attempting to send to it? It's really the only way to tell if it's a real address.

What abuse can a person bring on your system by having a 200 char email address? That should be nothing in terms of server load.
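In practice, "validate by attempting to send" usually means a double-opt-in confirmation link; a minimal sketch of the token bookkeeping, with the function names and URL invented purely for illustration:

```python
import secrets
from typing import Optional

# token -> address, awaiting click-through on the confirmation link
pending = {}

def register(addr: str) -> str:
    # Unguessable token; the address counts as real only once the
    # link below is actually visited.
    token = secrets.token_urlsafe(32)
    pending[token] = addr
    return "https://example.com/confirm?t=" + token  # hypothetical endpoint

def confirm(token: str) -> Optional[str]:
    # Returns the now-verified address, or None for bogus/replayed tokens.
    return pending.pop(token, None)
```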


Take a rational approach but then provide a human-feedback mechanism for the very small number of edge cases that may crop up?

It shouldn't be "let's automate this and hope it goes well in production", i.e. the Google approach. It should be "let's use common sense and manage failure in a way that doesn't piss off customers".



I wouldn't expect `make clean` to delete stuff it didn't create itself. This one does, though, as it just executes `git clean -f`!


Arguably the simplest Makefile is not having any Makefile and just doing

  make <executable name>
The next step up would be a Makefile that is essentially just a set of sh scripts, like your example. But I think it's supposed to be implied that the blog is specifically talking about medium-sized C projects.


It's simply because they have zero realization; that's not what the title means. It's also evident in the article that they're clearly clueless.

What you're seeing in Thailand is respect for Buddhism itself, and a way to keep people interested in Buddhism. In Buddhism there are levels of realization, and a monk is a zero on that scale. Not sure how else to clarify. There are other titles, normally translated as Venerable, which should indicate at least some modest level of accomplishment. But to cite a monk as having any idea at all is fantasy: just put on a robe, and anyone is suddenly a monk, no education or knowledge required, let alone realization.


I agree anyone can put on a robe, but when people give anyone wearing that robe special treatment, then they have some power and influence.

Anyone in the US can run for a local political position, they are not powerless just because anyone can do it.


I've often used Jenkins for this use case, and really appreciate how it scales to teams too. While it works well, there are lots of pitfalls too: logs filling up disks, lots of configs to tweak. I think you've just gotten past those issues, so it's stable for your use case.


What could possibly go wrong?


Well, a Brazilian priest attempted a similar feat in 2008[1]. He won the 2008 Darwin Award for it [2].

[1] https://gizmodo.com/5022283/sad-ending-flying-priest-found-d...

[2] http://darwinawards.com/darwin/darwin2008-16.html


> Darwin Award

Making fun of people dying. Fucking sick stuff.

It's also logically crap. Often it's people (as in this case) doing things outside the normal, things we as hackers should celebrate.

It's also often people in extreme poverty just trying to make a living. That's the fucking sick part: we rich, educated people make fun of them because we don't have to do dirty things like recycling metal from unexploded ordnance.


Hey, tell me about it. There was a tragic accident at my school, when a structure collapsed and killed several students, because it had insufficient engineering oversight. But, eh, Darwin Awards had to make a joke, so they collectively gave it to all the victims, who were getting up at the crack of dawn to volunteer on a group project, and following all the safety rules they were given.


That's weird, because it directly contradicts most of the Darwin Awards rules: that the people must be mature (well, I don't know what kind of school it was), that they must be the ones responsible for their death (from what you say, the engineering is what killed the students), and that it must be because of "extraordinary misjudgment" on the part of the people both responsible and victims (who are supposed to be the same).

Can you link to your story on their website, in order to contact them and have the award withdrawn, since it breaks the rules?


I dug into it a bit further, and apparently the story is that the Darwin Awards used to be more crowd-sourced, but as a direct result of the incident I'm referring to, they instituted heavy moderation and apologized for any distress caused by their seeming approval of a tasteless article. So it's a little more forgivable than I realized. My opinion gelled back when the story was still in progress, but I didn't hear about the conclusion.

If you want to read the details, just google "aggie bonfire", or "darwin award aggie bonfire".


That is sick.


IIRC it's based on upvotes. And the most upvoted ones are also about criminals meeting prompt demise: http://darwinawards.com/darwin/darwin1993-06.html


Top in 2017 -

Petty criminal, probable drunkard.

Workplace accident, I wouldn't say criminal.

Probable drunkard, not criminal (I also once made an ethernet ladder; yes, it does not work well; no, I didn't use it at a deadly height)

Not criminal, a bit silly. Funny because racism. I'd guess alcohol related.

Not criminal, probably alcohol involved, just a vehicular accident.

1 and 3 I'd also say involved mental illness. True, if you want to be harsh, we don't want the mentally ill or those susceptible to drug abuse to breed. I kinda think killing them off is not great; why not just sterilize them?

Larry Walters, who this story would have been inspired by (not sure why people are saying the movie Up), also got an honorary Darwin Award. He killed himself eventually, so he had issues, I guess. But he was a legend as far as I'm concerned. I'd prefer a world full of him over people making fun of people dying.


I still very much enjoy this one about Ronald Opus: http://www.darwinawards.com/legends/legends1998-16.html


I agree we should respect people and not mock them for making mistakes that they pay the ultimate price for. It doesn't matter that they're doing it for fun instead of some desperate survival need. Early aviators gave us aeroplanes and balloons by taking foolish risks, and many died, but we glorify a few survivors because of their important contributions.

For some reason, many kinds of deaths are protected from mockery by society, but not adventure accidents. Those are fair game and bring out the cruel uncaring side of otherwise seemingly nice people.

If the Darwin awards included death from alcoholism or suicide, any mention of them would be blotted out from the "polite" internet like HN. Somebody will probably complain about me linking mental illness to accident deaths just to enforce the social more that we must not disrespect certain arbitrary groups of people but other groups are fair game.


The problem is that some people engage in needlessly risky activities and then put the people who rescue them at risk. For example, in the UK we regularly have people rescued from mountains who are woefully ill equipped, like this guy:

http://www.bbc.co.uk/news/uk-wales-north-west-wales-41306122

Yet properly equipped mountaineers can get unlucky and die anyway. The difference between death by hypothermia because you climbed a mountain in only your underpants and being well equipped but getting hit by an avalanche? In the first case the coroner returns a verdict of "death by misadventure", because you took unnecessary risks.


It's become a sort of joke in Croatia: tourists in flip-flops trying to hike into the mountains (a famous stereotype is of Czech tourists). Currently, rescue by HGSS (mountain rescue) is free of charge, and they often have to deploy a military helicopter (they don't have their own), which costs the taxpayers a pretty penny. The rescuers are mostly volunteers.

Examples of irresponsible behavior: http://www.dailymail.co.uk/travel/travel_news/article-368792...

https://www.lonelyplanet.com/news/2017/08/08/croatian-mounta...


There are some candidates that died under influence, but they did so in a spectacular way: http://darwinawards.com/darwin/darwin2017-06.html


And they don't check their sources. Several of the most popular "awards" from the early years were fake, e.g. the "JATO rocket strapped to pickup" thing.


Or perhaps based upon a kernel of truth?

https://www.wired.com/2000/08/rocketcar/


I believe the adage is, 'Play stupid games, win stupid prizes.' The Internet is full of terrible things. There's a site that keeps track of spree killing totals and celebrates new high scores. Well, there was. I'm not sure it exists anymore and I'm too lazy to Google.


HRH Prince William ‏@DukeCambridgeUK Condolences from the House of Windsor to the House Organa. RIP HRH Princess Leia.

https://goo.gl/8TqY8Y


That's not a real account...


People can't even tell when a Twitter account is fake (even when the bio clearly says "fictional") and yet somehow people think that we can trust the masses to discern fake news. Ugh.


Terrifying, isn't it?


The most terrifying part (to me), is that those people vote!


Good point. Amaranth is used in Mexican cuisine, such as tortillas, so it's well established already.

I think mass-market appeal needs to be based on dishes people already know, vs. a bowl of random ingredients with whimsical names. Another poster mentioned the Impossible Burger, which seems more realistic.


We've been doing tests in GCE in the 60-80k core range:

What we like:

- slightly lower latency to end users in USA and Europe than AWS

- faster image builds and deployment times than AWS

- fast machines; live migration blackouts are getting better too

- per-min billing (after 10 mins), and lower rates for continued use vs. AWS RIs, where you need to figure out your usage up front

- projects make it easy to track costs w/o having to write scripts to tag everything like in AWS; the downside is project discovery is hard since there's no master account

What we don't like:

- basic lack of maturity; AWS is far ahead here. E.g. we've had 100s of VMs get rebooted w/o explanation, the op log UI forces you to page through results, log search is slow enough to be unusable, billing costs don't match our records for the number of core hours and they simply can't explain them, quota limit increases take nearly a week, support takes close to an hour to get on the phone, and they make you hunt down a PIN to call them

- until you buy premium support (aka a TAM), they limit the number of people who can open support cases; this caused us terrible friction since it's so unexpected, esp. when it's their bugs you're trying to report, and they can mature from fixing them


Sorry to hear about your troubles. Are you running with onHostMaintenance set to terminate, or are you losing "regular" VMs? If you want to ping me with your project id (my username at google), I'd like to investigate. 100s of VM failures is well outside our acceptable range.

Also, if it's been a while since your last quota request, we've drastically improved the turnaround time. All I can say is, your complaints were heard and we've tried to fix it. Keep yelling if something is busted! (And yes, I see the irony of the support ticket statement; out of curiosity which support are you on?)

Disclosure: I work on Compute Engine.


Maybe there is something special for members of the GCE startup program, but for us quota requests take between 1 min and 1 hour, where the same requests on AWS took a few days and endless discussions.

Our whole experience with the folks over at Google has been amazing compared to the poor level we had with AWS.

Granted, we are at a range way lower than yours.


Ditto -- we've had about five quota requests handled within an hour or two. AWS took about a week for each of two requests.


Thanks for sharing your experience. It's really helpful!


I recently had to maintain some new perl code. I didn't think it would be a big deal, but found a number of things I take for granted today that perl hasn't kept up with:

1) The perl cpan module doesn't resolve dependencies

2) The cpan module has parsing errors when passing in a list of CPAN packages

3) You have to manually grep your perl code to see what modules it depends on

4) Module installs take a long time since they can compile and unit test the code; unit tests can even make connections to the internet or try to access databases and fail, so you just have to force them to install

5) Non-interactive installs of CPAN modules require digging in the docs and learning you need to set an env var to enable them

6) CPAN modules aren't used that heavily and can have bugs that would be caught in more widely used libraries. (E.g. the AWS EC2::* modules don't page results from AWS, so result sets can be incomplete, whereas the more widely used boto lib works correctly and is better maintained.)

7) Perl devs don't think twice about shelling out to an external binary (that may or may not be installed)

8) Even if regexs are not needed, inevitably the perl dev will use them since that's the perl hammer, and it's hard to know what the intention is with regexes or what the source data even looks like

9) You have to manually include the DataDumper package to debug data structs

10) You have to manually enable warnings and strict checks; they're not on by default.

Anyhow, I think we've made a lot of progress since the 1990s. :)


A few comments:

* It is often recommended to use cpanminus[1] instead of the CPAN.pm module. But it is up to the distribution you try to install to declare its dependencies correctly. Not doing that is a bug.

* If you use cpanminus you can use the --notest flag to skip tests. But tests are a feature.

* Software has bugs. Reporting them when they are found is how software gets fewer bugs.

* Cpan distributions should not[2] use external binaries (and exceptions should be clearly documented and motivated).

* The ease of use of regexes in Perl is not an argument for not documenting them and (in this case) the document format they are meant to parse.

* There are several different data dumpers. No assumption on the user's preference is made.

* If you use a newer Perl (5.12+) you get strict enabled automatically[3], and also (depending on which version your code requires) some new features. Due to backwards compatibility it is not possible for newer Perls to enable strict or warnings implicitly.

The Perl of today is also vastly improved since the 1990s, hopefully you will come across some modern perl too.

[1] https://metacpan.org/pod/App::cpanminus

[2] https://www.ietf.org/rfc/rfc2119.txt

[3] https://metacpan.org/pod/release/JESSE/perl-5.12.0/pod/perl5...


I think the difference is in other languages I don't have to think about these things any more than I think about what IRQ my sound card is on.

In the CPAN case, if cpanminus is the "good one", then it should be installed by default and CPAN.pm needs to tell you to use that instead or just be deprecated. I don't want 5 choices in package managers, I just want the good one. :)


One factor that sometimes leads to problems in this regard is (as mentioned) backwards compatibility. Pretty much nothing that once has worked can be removed or changed because somewhere mission-critical software depends on it.

Another issue is discoverability. A concrete example is that https://metacpan.org/ is a much better (imho) presentation of cpan than http://search.cpan.org/.

It is the curse of being a very stable language and ecosystem.


> Perl devs don't think twice about shelling out to an external binary

No, most of them do. The Perl ecosystem has a killer feature called CPAN Testers, which lets everyone see which modules work on which systems out of the box. You should always check the cpantesters matrix before choosing a particular dependency.

> Even if regexs are not needed

They got overly complicated over the years, but they are needed. They are DSLs to make things easier when working with strings, i.e. so you wouldn't have to write 20 lines of hard-to-grasp code with bytes.Index(), bytes.HasSuffix(), bytes.TrimRight(), etc., like people do in Go, but a single nice regexp, and therefore reduce your chances of making a mistake in that code.
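To illustrate the trade-off (in Python rather than Go or Perl, purely for brevity): extracting the stem from a made-up "name=<file>.tar.gz" format, manually vs. with one regex.

```python
import re
from typing import Optional

# Hypothetical input format: "name=<file>.tar.gz" with optional
# trailing semicolons; we want the file's stem.

def stem_manual(s: str) -> Optional[str]:
    # Each step is a separate chance to get an offset or edge case wrong.
    s = s.rstrip(";")
    _, sep, value = s.partition("=")
    if not sep or not value.endswith(".tar.gz"):
        return None
    return value[: -len(".tar.gz")]

STEM_RE = re.compile(r"^[^=]+=(.+)\.tar\.gz;*$")

def stem_regex(s: str) -> Optional[str]:
    # One pattern states the entire shape of the input at once.
    m = STEM_RE.match(s)
    return m.group(1) if m else None
```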


> so you wouldn't have to write 20 lines of hard to grasp code with bytes.Index(), bytes.HasSuffix(), bytes.TrimRight(), etc., like people do in Go

Go has regexps, and a very good implementation at that.

Depending on what you do and on the specific code-path, compiling and/or executing a regexp might be slower than manually parsing the string. Go standard library is pretty concerned with performance (much more than Python's or Ruby's, for instance), so it tends to avoid regexps.


It shouldn't be like that; that's the problem. Regular expressions should be compiled into native code and be even faster than a bunch of hand-written bytes.HasSuffix() combinations.


Your previous post said that they are a very useful DSL for Perl so that "people don't have to do like they do in Go".

Both Perl and Go implement regexps, and neither of them compiles them to native code. So I don't get your previous comment at all.

The main difference is that, in Perl, if you ever had to write manual string parsing, it would be much much slower than using regexps as Perl is an interpreted language. So regexps are needed to perform fast string parsing. In Go, you have regexps if you want, or you can go even faster if you feel it's required.


> Both Perl and Go implement regexps, and neither or them compile them to native code. So I don't get your previous comment at all.

Ok, I'll try to explain.

People feel discouraged from using regexps in Go, because they are very slow for many typical parsing and validation cases, and they require an extra step of compilation and all of the additional code complexity associated with that. So people do parsing manually instead, with all of its problems. It's not that they need that performance, almost no one does, but the whole idea behind regular expressions is not working; parsing code is still bad most of the time.


You've made me curious: is there a language out there which does this, i.e. compiles regexes down to native code that is then as fast as or faster than hand-coded bytes.HasSuffix(...) calls?


I found this with a bit of searching and clicking around on Stackoverflow: https://www.colm.net/open-source/ragel/ (via http://stackoverflow.com/a/15608037).

I didn't look long enough to know if there's an easy way to convert a regular expression to Ragel syntax.


> Go has regexps, and a very good implementation at it.

In my experience, porting code from Perl to Go, Go's regexp package is vastly inferior to Perl's in multiple areas: speed, memory, unicode handling (e.g. \b works ASCII-only in Go), etc. For example, for some large regexps handling URL blacklists, reduced programmatically with Perl's awesome regexp assembly tools, I had to rely on PCRE in the end; Go just could not cope with that (not even the C++ re2). I do avoid regexps, regexps are usually best avoided, and all that, but there are areas in which they are by far the best option. In those areas, I postulate from my own experience that Perl's implementation is king. Speed, memory usage, Unicode.


> (not even the c++ re2)

Did you try using RE2's "set" functionality?


No, I did not get that far; it would've meant a larger rewrite of the ecosystem, since the data files were created by other tools, already in "alternate form" [1], and needing to be used by other programs as well. I stopped trying to load them with re2 (both Go and C++) after glancing over all those gigabytes of RSS, while Perl kept them in the 200-300 MB range. PCRE was a good compromise at the time, but with other tradeoffs, because C libs seem to be frowned upon in the Go community, i.e. semi-official voices arguing how best to avoid them. :/ (E.g. blocking inside C isn't under the gomaxprocs limit, costly overhead crossing the C boundaries, static binary troubles, less portability, and so on.)

[1]

    perl -MRegexp::Assemble -E'my @list = qw< foo fo0z bar baz >; my $rx = Regexp::Assemble->new->add( @list )->re; say $rx'
    (?^:(?:fo(?:0z|o)|ba[rz]))
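For comparison, a naive sketch of the same assembly idea in Python; unlike Regexp::Assemble it does no trie compression, it just escapes and joins the alternatives:

```python
import re

def assemble(words):
    # Escape each word, longest first so a longer alternative isn't
    # shadowed by a shorter prefix; Regexp::Assemble additionally
    # compresses the result into a trie, which this sketch skips.
    alts = sorted((re.escape(w) for w in words), key=len, reverse=True)
    return re.compile("(?:" + "|".join(alts) + ")")

rx = assemble(["foo", "fo0z", "bar", "baz"])
print(rx.pattern)  # (?:fo0z|foo|bar|baz)
```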


cpantesters looks very useful. [1]

I wonder if there's anything like that for Python and Ruby.

[1]: for example, http://cpantesters.org/author/D/DAMOG.html


Less code is generally better. But I've noticed a lot of folks still using ^ or $ when what they really mean is \A or \z


What's the difference? ^ and $ are basically all I remember from when I read Mastering Regular Expressions.


\A and \z always match beginning/end of the string.

^ and $ can be changed to mean beginning/end of each line in the string with the /m flag.
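The same distinction can be demonstrated in Python, which spells the absolute end-anchor \Z instead of Perl's \z:

```python
import re

text = "first line\nfoo"

# Without re.M, ^ and $ anchor only at the start/end of the string,
# so "foo" on the second line is not found.
print(re.search(r"^foo$", text))          # no match (None)

# With re.M, ^ and $ also anchor at every line boundary.
print(re.search(r"^foo$", text, re.M))    # matches "foo" on line 2

# \A and \Z always mean the whole string's start/end, even under re.M.
print(re.search(r"\Afoo\Z", text, re.M))  # still no match (None)
```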


>8) Even if regexs are not needed, inevitably the perl dev will use them since that's the perl hammer, and it's hard to know what the intention is with regexes or what the source data even looks like

I'm going to disagree with this one. There are lots of things in any language where it can be hard to see, at a glance, what the intention of the programmer was. That's why we have comments: you're supposed to comment your blocks of code so that someone else can look at them and understand what they're supposed to do.

Unfortunately, as far as I can tell by looking at other people's code, I appear to be one of the only programmers on the planet who actually uses comments....


Ideally the code itself should communicate that intent. And comments can become obsolete as code changes. Hence the movement to reduce comments to only what's necessary.


1. What? (Anyway, use cpanminus these days.)

2. Again, what?

3. Nope, there are a variety of tools available. Try `cpanm Perl::PrereqScanner::App` followed by `scan-perl-prereqs .`

4. Yeah, you can skip test runs with `cpanm --notest`; you really want to? As for the subsequent complaint, you're clearly having an experience I don't have.

5. Again, see cpanm.

6. Can't comment on this one.

7. Umm, that's a code smell. From cpan that outcome is rare.

8. You use regexes when you need certain kinds of things done fast. Don't forget the `/x` flag to ensure a non-trivial regex is documented.

9. Actually, I spend most of my time in the perl debugger. Older perl codebases do suffer from the magic payload pattern quite a lot; modern perl, less so.

10. Yeah, I agree one should probably have to explicitly turn off warnings and strict, but whatever.

Anyway I agree, perl has made huge progress since the 1990s. I also agree there's a problem with discoverability in some parts of the cpan ecosystem. Be sure to read the Modern Perl book next time you need to do some perl work. You ought to be pleasantly surprised. Personally with the Moo(se)? family of modules, I enjoy having a multiparadigm language with reasonable optional runtime typing to keep me sane. My biggest complaint is the reference counted garbage collection.


> 1) The perl cpan module doesn't resolve dependencies

What? CPAN absolutely does.

> 2) The cpan module has parsing errors when passing in a list of CPAN packages

Both from the command line and in CPAN itself, I can install a list of modules like so:

    cpan Data::Dumper Devel::Confess
    
    install Data::Dumper Devel::Confess
> 3) You have to manually grep your perl code to see what modules it depends on

Or you can use a CPAN module for that.

> 4) Module installs take a long time since they can compile and unit test the code

Or you just install them like this, if you're confident in your system:

    install Data::Dumper Devel::Confess
> 5) Non-interactive installs of CPAN modules requires digging in the docs

Non-interactive installs should be using your operating system's package manager, unless you have a special use-case, in which some doc digging is fine.

> 6) CPAN modules aren't used that heavily and can have bugs that would be caught in wider used modules.

You mean "Some CPAN modules".

> 7) Perl devs don't think twice about shelling out to an external binary (that may or may not be installed)

Again, some.

> 8) Even if regexs are not needed, inevitably the perl dev will use them since that's the perl hammer

Eh, fair enough.

> 9) You have to manually include the DataDumper package to debug data structs

    Data::Dumper was first released with perl 5.005
> 10) You have to manually enable warnings and strict check, it's not on by default.

Same in JS, and similar with other languages.

> Anyhow, I think we've made a lot of progress since the 1990s. :)

Not really sure; the trolling culture seems to still be the same as back then.


Regarding the module dependency woes, check out Carton (https://metacpan.org/pod/Carton).

