
As long as it also applies to juice. It's usually the same amount of "bad sugar" but since it says "orange" on the bottle, people think it's healthy...


Or even some "coffee" drinks.

Black coffee? Pretty healthy, just water and caffeine. 0 calories. 0g sugar. 0g sodium.

A Starbucks "White Chocolate Mocha"? 530 calories, 320mg of sodium, 69g of sugar per 20 oz "Venti" serving. That's twice as many calories as a 20 oz Coke, with twice as much sodium, and higher sugar levels.

Nobody is defending soda here. Just pointing out that this good idea needs to be applied consistently, or people will switch from "unhealthy" soda to "healthy" coffee drinks or "healthy" fruit juices, both of which can have scary-high sugar too.

I like the "color code" system they have in Europe[0]. You buy a soda and it has a red warning on the sugar/salt levels.

[0] https://www.nutrition.org.uk/images/cache/537bf3c6516df64581...


The "soda tax" in Philly doesn't apply to Starbucks, there was some suggestion that it may of been purposefully excluded due to lobbying efforts though I haven't looked too deeply.

It looks like Seattle has a similar exemption: https://www.seattletimes.com/seattle-news/politics/fancy-lat...

Exemptions like these give me pause. I can see the point of such a law (though I'm libertarian and not in favor), but when exemptions like that are made I find it hard to take it seriously. If the state were concerned with sweetened beverages, it would apply the same measures to all of them. Taxing or applying warning labels to soda alone strikes me as a classist move, as if the lower classes are too dumb to read the nutrition information.


One cup of black coffee has <5 calories according to the internet, not 0. It's basically 0, but I think it's important to be precise when it comes to diet.


Most sources list it as 0 or 1 calories. Additionally, considering that caffeine may increase your RMR ("resting metabolic rate") by 3% or more[0][1], if you want to be "precise" it may be more accurate to say that black coffee contains negative calories.

So how precise do you need it to be? 0, 1, <5, or -1, -5 calories? Or is this just pedantic nitpicking?

[0] https://www.ncbi.nlm.nih.gov/pubmed/7486839

[1] https://www.ncbi.nlm.nih.gov/pubmed/2912010


>it may be more accurate to say that black coffee contains negative calories.

Calorie has a pretty specific definition. What makes you feel like the scientific definition doesn't work for you?


5 cals can really be rounded down to zero. It's important to also focus on the big picture.


No, actually if one is making a decision whether or not to drink black coffee while fasting, it is interesting and actionable data.

I do in fact drink black coffee in a fasted state and if it had 5 calories (which I don't believe it does) I would choose not to ...


5 calories will not take your body out of its fasting state... unless you're being religious about it


This makes me think of premature optimization. There is such a thing as energy homeostasis in biology. Most things about diet are going to be imprecise, and the target system is adapted to imprecision.


Just imagine the calories in a Double Ristretto Venti Half-Soy Nonfat Decaf Organic Chocolate Brownie Iced Vanilla Double-Shot Gingerbread Frappuccino Extra Hot With Foam Whipped Cream Upside Down Double Blended, One Sweet'N Low and One Nutrasweet, and Ice.

This is the longest possible order you could order at Starbucks. Once you have ordered this, you cannot order anything larger.


I don't think you can combine "extra hot" and "ice."

However, you could probably add "half syrup" modifiers or something. "In my personal reusable insulated Starbucks mug" has a lot of characters in it.

Also I don’t see the words nitro or blonde in there.

Remember the poor person at the counter can accept more modification requests than the app.

Keep working on it. :-)



The situation is more nuanced and complicated with fruit juice. Too much is probably bad for you, but moderate amounts have been shown to have health benefits. In poorer areas, many can afford juice but not whole fruit.

https://www.cambridge.org/core/journals/british-journal-of-n...


Same as milk: milk (even the "fat-free" kind) has a ton of sugar. Should we also place a warning on milk?

We should also start labeling fruits. Because sugar.

If you ask me, this is a bit ridiculous. If you don't get that soda/soft drinks aren't exactly good for you, a warning is not going to help you.


> As long as it also applies to juice.

Agreed! Also would like to see full health warnings for each of the non-sugar sweeteners, colors, preservatives, etc.; anything with studies showing it's bad.


Flavored (and sweetened) milk drinks have already been given a special exemption from the rules, with no particular reason stated.

I'd like to be able to debate the fundamental merits of labelling and public-interest laws like this. But in practice I just end up opposing them because they're reliably full of stupid loopholes and special-interest favoritism.


This is a situation where we can rank preferences:

1. Worst case scenario: no sugary drinks are labeled.
2. Partial progress: some sugary drinks are labeled (soda).
3. Full progress: all sugary drinks are labeled.

I wouldn't try to slow the momentum by fighting 2 if 3 is the ultimate goal.


I have no objection to labeling juice based on sugar content. But you can't really state that a particular juice is "healthy" or "unhealthy". It depends on the quantity.


And what's your objection? Zork good, Fortnite bad?


I didn't stop at that sentence, but I can understand the latter sentiment. Zork was a revolutionary game, Fortnite is a cartoon shooter. Other than both games being popular there is simply no comparison between the two in my mind.


Article author here. I get why people trip over that comparison. Unfortunately it wasn't added by me, nor was it part of the original draft. The comparison is (somewhat) obviously intended to make it clear that it was a super-popular game.

That said, I'm glad that people like the rest of the article :)


Not the OP, but Zork was easy to put down and pick up later, and it wasn't multiplayer. It's the comparison that ignores something really interesting.


Also not the OP, but the OP is probably talking about its popularity and novelty.

Also, Zork wasn't easy to put down and pick up later.

It was also multi-player in a different way. Instead of being competitive multi-player, it was cooperative multi-player, as in you had friends over and you'd solve it together.


This seems... Unsustainable?


Why is it unsustainable? After enough time, most people who would use it for free already are. Now, the goal is to either get rid of the free users or convert them to paid users.


Really curious how shutting down every service works as a business model.


> Really curious how shutting down every service works as a business model

They don't shut down every service, just the ones that aren't performing from a business perspective.

Which, I would think, works pretty well as a business model.


These services aren't Google's business model; ads are.


It is almost like their products are just PR puff pieces.


I should really move away from Google Calendar! I keep wanting to do it and never get around to it. It's highly doubtful they'll kill Calendar (GSuite), but I also know they'll find a way to.


Probably better than keeping products that lose money around?


Hi, psychology student here. It's not very good in a psychometric sense, but its validity problems aren't really a concern when it comes to idiographic data, i.e. "helpful tools for getting to know the people with whom she was creating a new company".

It's not the devil we memed it into; there are better tools for categorizing and rejecting candidates based on objective-ish traits, but that wasn't the purpose according to the article.


Out of (genuine) interest, which alternative tools do you recommend?


The MMPI-2 is the big reference right now. Legally you'll probably need a licensed psychologist to administer it (depending on your state/country), but that would be true of most useful tests.


Yeah "just" a cron job except the implementation changes several times a year. Somehow this automated process was more time-consuming than the previous, manual one.


Many cloud providers will make this process pretty much entirely automated. But let's say you don't want to do that: when is the last time the way you run caddy changed? Or the last time python-certbot-nginx changed?
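
For reference, the whole "cron job" in question is typically a one-liner. A sketch, assuming the stock certbot CLI on a systemd distro (paths and the reload command vary by setup):

    # certbot only renews certs within ~30 days of expiry, so a daily run is cheap
    0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"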


This was a few years ago, so things may have changed by now. But as they say, once bitten twice shy, and the wisdom of "just cron it" doesn't work with highly experimental tools like LE was for what I estimate to be the majority of its lifetime.


I'm sure there's a way to make your LE experience consistently suck, but the way to run caddy for a static website has been the same for about as long as caddy has had support for automatic HTTPS, and that's also true for python-certbot-nginx. But more importantly: we can argue about what it was 4 years ago, or we can just observe that it's really easy now.


A tool not working well or being "experimental" does not dismiss the premise that frequently run automated tools are better than infrequently run manual tasks, when those manual tasks can take down your infrastructure if done improperly, missed, or forgotten.

All its being new means is that, depending on your risk tolerance, you need to decide whether updates to the software need testing, whether you need to invest in your own solution, or whether to just wait until it matures and keep the old process until then.

Waiting doesn't invalidate the premise either. It just means you lack the resources to implement it safely and that's ok.


For the life of me I can't parse that title.


China has contracted with Private Space to pick up entire enterprises and send them into orbit, thus making the development of a space station in orbit irrelevant :-) (ok it's Friday alright?)

It is a pretty convoluted title. Perhaps they could have said, "China's commercial space companies are now at the 'blowing up rockets' stage," which is the stage just before the one where the rocket gets all the way into orbit.

It is kind of a weird concept in a purely Communist country, insomuch as the idea of commercial space in an open-market country is to capture market capital and prioritize it toward space launch capability. What is harder is that you can't really sell orbital rockets to third parties (the pesky NPT gets in the way, and it is generally frowned upon). I would have expected them to go with something similar to the setup Russia has with URSC[1].

[1] https://en.wikipedia.org/wiki/United_Rocket_and_Space_Corpor...


Not a radio engineer, but I think it's the mesh size of the Faraday cage relative to the wavelength that matters, not the overall size. Basically, if the grid openings are much smaller than the wavelength (a solid sheet of metal being the extreme case), the waves can't get through; if the grid is too loose relative to the wavelength, the waves just pass right through.

That's what I've been told by fellow meshnet hobbyists anyhow ;)
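
As a back-of-the-envelope illustration of the wavelength dependence (this uses the common "openings much smaller than the wavelength" heuristic, roughly lambda/10; a rule of thumb, not a proper RF design formula):

    C = 299_792_458  # speed of light, m/s

    def max_mesh_opening(freq_hz, safety_factor=10):
        """Largest mesh opening (m) under the lambda/safety_factor rule of thumb."""
        return (C / freq_hz) / safety_factor

    # 2.4 GHz Wi-Fi: wavelength ~12.5 cm, so openings should be under ~1.25 cm
    print(max_mesh_opening(2.4e9))  # ~0.0125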


And Coke kills people (union leaders) in South America as a cost of doing business. So yeah.


Agreed. I remember thinking "what don't I get? Why do we need getters and setters?". After some years (and discovering Python), I realized there's nothing to get; it's just ridiculous overengineering 95% of the time. Same goes for a lot of stuff in OO. I attribute it to the corporate mindset it seems to thrive in, but I could be wrong.


The important thing is restricting your public interface, hiding implementation details, and thinking about how easy your code (and code that uses it) will be to change later. It's not an OO vs anything thing.

When you want a value from a module/object/function/whatever, whether or not it's fetched from a location in memory is an implementation detail. Java and co provide a short syntax for exposing that implementation detail. Python doesn't: o.x does not necessarily mean accessing an x slot, and you aren't locking yourself into any implementation by exposing that interface as the way to get that value. It's more complicated than Java or whatever, here, but it hides that complexity behind a nice syntax that encourages you to do the right thing.

Some languages provide short syntax for something you shouldn't do and make you write things by hand that could be easily generated in the common case. Reducing coupling is still a good idea.
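
A minimal Python sketch of that point (class and attribute names invented): callers write o.total either way, so the implementation underneath stays free to change:

    # Version 1: "total" is a plain attribute, i.e. a slot.
    class OrderV1:
        def __init__(self, items):
            self.items = items
            self.total = sum(items)

    # Version 2: same o.total interface, now computed on demand.
    class OrderV2:
        def __init__(self, items):
            self.items = items

        @property
        def total(self):
            return sum(self.items)

    # Callers can't tell the difference:
    for order in (OrderV1([1, 2, 3]), OrderV2([1, 2, 3])):
        print(order.total)  # 6 both times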


> The important thing is restricting your public interface

That is the important thing sometimes. At other times the important thing is to provide a flexible, fluent public interface that can be used in ways you didn't intend.

It really depends on what you're building and what properties of a codebase are most valuable to you. Encapsulation always comes at a cost. The current swing back towards strong typing and "bondage and discipline" languages tends to forget this in favour of its benefits.


> At other times the important thing is to provide a flexible, fluent public interface that can be used in ways you didn't intend.

That scares me. How do you maintain and extend software used in ways you didn't intend?

Quality assurance would be challenging.


> That scares me.

It scares you because you're making some assumptions:

1. You assume that I'm writing software that I expect to use for a long period of time.

2. Even if I plan to use my software for an extended period of time, you're assuming that I want future updates from you.

Let me give you an example of my present experience where neither of these things is true. I'm writing some code to create visual effects based on an API provided by a 3rd party. Potentially, once I render the effects (or for interactive applications, once I create a build), my software has done its job. Even if I want to archive the code for future reuse, I can pin it to a specific version of the API. I don't care if future changes cause breakage.

And going even further - if neither of these conditions apply the worst that happens is that I have to update my code. That's a much less onerous outcome than "I couldn't do what I wanted in the first place because the API had the smallest possible surface area".

I'll happily trade future breakage in return for power and flexibility right now.


Maybe instead of "restrict" it would be better to say "be cognizant of." If you want to expose a get/set interface, that's fine, but doing it with a public field in Java additionally says "and it's stored in this slot, and it always will be, and it will never do anything else, ever." I don't see what value that gives in making easy changes for anyone. I don't see why that additional declaration should be the default in a language.

You get into the same issue with, e.g., making your interface be that you return an array, instead of a higher-level sequence abstraction like "something that responds to #each". By keeping a minimal interface that clearly expresses your intent, you can easily hook into modules specialised on providing functionality around that intent, and get power and flexibility right now in a way that doesn't hamstring you later. Other code can use that broad interface with your minimal implementation. Think about what you actually mean by the code you write, and try to be aware when you write code that says more than that.
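
A toy Python sketch of that idea (names invented): promise only "something iterable" instead of a concrete array, and the implementation stays free to change later:

    from typing import Iterable

    def squares(n: int) -> Iterable[int]:   # promises only "something you can iterate"
        return (i * i for i in range(n))    # free to be a generator, a list, a tuple...

    for sq in squares(4):
        print(sq)  # 0, 1, 4, 9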

I think it's interesting that you associate that interface-conscious viewpoint with bondage and discipline languages. I mostly think of it in terms of Lisp and Python and languages like that where interfaces are mostly conceptual and access control is mostly conventional. If anything, I think stricter type systems let you be more lax with exposing implementations. In a highly dynamic language, you don't have that guard rail protecting you from changing implementations falling out of sync with interfaces they used to provide, so writing good interfaces and being aware of what implementation details you're exposing becomes even more crucial to writing maintainable code, even if you don't have clients you care about breaking.

Of course all this stuff goes out the window if you're planning to ditch the codebase in a week.


I don’t think I’ve ever seen a useful “Getter” abstraction...


  // Derived value: computed from other fields, no stored "area" slot.
  getArea() {
    return this.width * this.height;
  }

  // Lazy initialization: the getter hides when and how the icon is loaded.
  getIcon() {
    if (this.icon == null) {
      this.icon = loadIcon(); // hypothetical loader, stands in for real I/O
    }
    return this.icon;
  }


Those aren't abstractions... Also, I'm not arguing that you can't contrive an abstraction around a getter; I'm arguing that it isn't useful to do so (so please spare me contrived examples!).


You're always using a getter. It's just a question of what syntax your language provides for different ways of getting values, and how much they say about your implementation.

Most people don't have a problem with getters and setters, they have a problem with writing pure boilerplate by hand. Languages like Python and Lisp save you from the boilerplate and don't provide a nicer syntax for the implementation-exposing way, so people don't generally complain about getters and setters in those languages, only in Java and C++ and things.


You misunderstood my post. I said I haven't seen a useful getter abstraction. Not all data access is via a method, nor is it always abstract.

I specifically object to the useless abstraction, not the boilerplate (boilerplate is cheap).


I think we're coming at it from different angles. My point is that there shouldn't be any abstraction to write, and it should just be the way the language works. Primitive slot access in Java is not just a get/set interface, it's a get/set interface that also specifies implementation characteristics and what the code will be capable of in the future. It should be in the language so that you can have primitive slots, but it shouldn't be part of the interface you expose for your own modules, because adding pointless coupling to your code does nothing but restrict future changes. Languages should not provide an easy shortcut for writing interfaces like that.

I don't view it as a useless abstraction, because I view it as the natural way of things. I view specifying that your get/set implementation is and always will be implemented as slot access to be uselessly sharing implementation details that does nothing but freeze your current implementation strategy.

I think a better question is when that abstraction gets in your way. When does it bother you that nullary functions aren't reading memory locations? Why do you feel that's an essential thing to specify in your public interface, as a default? There's nothing stopping you from writing code in Python and mentally modelling o.x as slot access, because it follows the interface you want from it.

If you only care because it's something extra you have to do, then that's what I meant by boilerplate. I think it's a misfeature of Java's that it presents a model where that's something extra you have to do.


> My point is that there shouldn't be any abstraction to write, and it should just be the way the language works.

I understand your point, but I think you misunderstand what "abstraction" means. "abstraction" doesn't mean "function" (although functions are frequently used to build abstractions), and if you have "dynamic properties" (or whatever you'd like to call them) a la Python, then you're still abstracting. My point is that abstracting over property access (regardless of property-vs-function syntax) is not useful, or rather, I'm skeptical that it's useful.

> I think a better question is when that abstraction gets in your way. When does it bother you that nullary functions aren't reading memory locations? Why do you feel that's an essential thing to specify in your public interface, as a default? There's nothing stopping you from writing code in Python and mentally modelling o.x as slot access, because it follows the interface you want from it.

I think this is a good question, because it illustrates a philosophical difference--if I understand your position correctly, you'd prefer to be as abstract as possible until it's problematic; I prefer to be as concrete as possible until abstraction is necessary. There's a lot of mathematical elegance in your position, and when I'm programming for fun I sometimes try to be maximally abstract; however, when I'm building something and _working with people_, experience and conventional wisdom tells me that I should be as concrete and flat-footed as possible (needless abstraction only makes it harder to understand).

To answer your question, that abstraction gets in your way all the time. The performance difference between a memory access (especially a cache-hit) and an HTTP request is several orders of magnitude. If you're doing that property access in a tight loop, you're wasting time on human-perceivable timescales. While you can "just be aware that any given property access could incur a network call", that really sucks for developers, and I see them miss this all the time (I work in a Python shop). We moved away from this kind of "smart object" pattern in our latest product, and I think everyone would agree that our code is much cleaner as a result (obviously this is subjective).

TL;DR: It's useful to have semantics for "this is a memory access", but that's unrelated to my original point :)
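
To sketch the tight-loop failure mode from above (all names hypothetical; the sleep stands in for the network call):

    import time

    class SmartRecord:
        @property
        def owner(self):
            time.sleep(0.05)  # stands in for an HTTP request to another service
            return "alice"

    records = [SmartRecord() for _ in range(100)]
    start = time.time()
    names = [r.owner for r in records]    # reads like cheap attribute access...
    print(f"{time.time() - start:.1f}s")  # ...but takes ~5 seconds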


It's frustrating to read this thread and your comment kind of crystallized this for me so I'll respond to you.

Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO. This is a getter that you almost certainly use constantly.

Please try to consider your statements and potential counterfactuals before spraying nonsense into the void.


> Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO.

Er, aside from C and ASM, few non-OO languages require that kind of manual effort. That's not a triumph of OO, it's a triumph of using just about any language that has an approach to memory management above the level of assembly.


> Please try to consider your statements and potential counterfactuals before spraying nonsense into the void.

My claim was that getter abstractions as described by the GP (abstracting over the “accessed from memory” implementation detail) are not useful. Why do you imagine that your array length example is a reasonable rebuttal?


It's not the length of the array. It's using things like array[20]. Yes, that exists pre-OO and outside of OO, but it's the foundational aspect of OO and one of the strongest use cases.

Sorry for the way I communicated- I was tired and should have reconsidered.


> Sorry for the way I communicated- I was tired and should have reconsidered.

No worries, it happens. :)

> It's not the length of the array. It's using things like array[20]. Yes, that exists pre-OO and outside of OO, but it's the foundational aspect of OO and one of the strongest use cases.

I'm not sure what you're getting at then. Indexing into an array? Are you making a more general point than arrays? I'm not following at all, I'm afraid.


I think my argument is basically that arrays are effectively object oriented abstractions in most languages.

You aren't responsible for maintaining any of the internal details; it just works like you want it to. My example was the element getter, array[20] (since you had specifically called out useless getters), but it applies equally well to inserting, deleting, capacity changes, etc.


> I think my argument is basically that arrays are effectively object oriented abstractions in most languages.

I think I see what you mean, although I think it's worth being precise here: arrays can be operated on via functions/methods. This isn't special to OO; you can do the same in C (the reason it's tedious in C is that it lacks generics, not because it lacks some OO feature) or Go or Rust or Lisp.

These functions aren't even abstractions, but rather they're concrete implementations; however, they can implement abstractions as evidenced by Java's `ArrayList<T> implements List<T>`.

And to the extent that an abstract container item access is a "getter", you're right that it's a useful abstraction; however, I don't think that's what most people think of when they think of "getter" and it falls outside the intended scope of my original claim.


Watch your tone!


> Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO.

I've used arrays in countless OO and non-OO programming languages, and I do not recall ever having to manually calculate the size of objects contained therein – what are you talking about? Only C requires crap like that, but precisely because it doesn't have first class arrays.


Downvoters, care to elaborate on what you think is wrong with the above? Literally even Fortran can do better than

   size_t len_a = sizeof(a)/sizeof(a[0]);  /* element count; only valid for a true array, not a pointer */
or

   my_pseudo_foo_array = (foo*) malloc(len * sizeof(foo));  /* manual size bookkeeping on every allocation */


You're not wrong. Even BASIC was better than this.


HTTP GET :-)


> Python doesn't: o.x does not necessarily mean accessing an x slot

C# also 'fixes' that. o.x could be a slot or it could be a getter/setter.


Initially seen in languages like Eiffel and Delphi.


I have basically no experience with Java. But in C# I think the above is what's behind stuff like

fooobj.events += my_eventhandler;


It is, but those languages did it about 6 years before C# came into existence.

Which isn't surprising, given that Delphi took the idea from Eiffel (the two share the same Pascal influence) and was designed by Anders.


Not to mention Anders and C#.


I get that "what don't I get?" feeling all the time. Overengineering is basically an epidemic at this point, at least in the JS/front-end industry.

My guess is there's a correlation between overengineering and career success, which drives it. Simple, 'KISS'-style code is the easiest to work with, but usually involves ditching less essential libraries and sticking more to standards, which looks crap on your resume. Most interviewers are more interested in whether you can piece together whatever stack they're using than in whether you can implement a tough bit of logic and leave great documentation for it; so from a career perspective there's zero reason for me to go for a (relatively) simple 100-line solution to a tough problem when I can instead go for a library that solves 100 different use cases and has 10k lines of documentation that future devs have to wade through. The former might be the 'best' solution for maintainability, but the latter will make me appear to be a better engineer, especially to non-technical people, of whom there are far too many on the average team.


Thanks for that. That resonates a lot with me. It makes me feel better realizing that I'm not alone in thinking that.

Recent writings by Joe Armstrong also resonate with me in the same way.


Well, it depends on what you are doing. I designed some systems that were too complex and some that were too simple and couldn't grow as a result. So, with experience, one will hopefully see that supposed overengineering is sometimes only overengineering until you actually need that specific flexibility in a growing system. And there is little substitute for experience to know which is which.


In the original JavaBeans spec, getters and setters served two purposes:

1. By declaring a getter without a setter, you could make a field read-only.

2. A setter could trigger other side effects. Specifically, the JavaBeans spec allowed for an arbitrary number of listeners to register callbacks that trigger whenever a value gets changed.

Of course, nobody actually understood or correctly implemented all this, and it all got cargo culted to hell.
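
For what it's worth, both purposes are easy to sketch outside Java; a rough Python analogue (listener protocol simplified, names invented):

    class Bean:
        def __init__(self, name):
            self._name = name
            self._listeners = []

        @property
        def name(self):         # a getter without a setter would be read-only
            return self._name

        @name.setter
        def name(self, value):  # the setter doubles as a side-effect hook
            old, self._name = self._name, value
            for listener in self._listeners:
                listener("name", old, value)

        def add_listener(self, callback):
            self._listeners.append(callback)

    b = Bean("foo")
    b.add_listener(lambda field, old, new: print(f"{field}: {old} -> {new}"))
    b.name = "bar"  # prints "name: foo -> bar"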


Finally someone mentions using getters to create read-only fields. Objects are the owners and guardians of their own state. I don't see how that's possible without having (some) state-related fields that can only be read from the outside.


Pretty obvious to readers of "Object-Oriented Software Construction" by Meyer.

A big problem is cargo culting without reading the CS references.


IME the thing with getters and setters is that everyone is doing it (inertia) and that other options either suck (syntactically) or break the "everything is a class" constraint.

Ruby is far from being my favorite language, but I like how Structs "solve" the getter/setter problem in it:

    MyStruct = Struct.new(:field_one, :field_two)
It doesn't clutter your code with multiple lines of boilerplate, and it returns a regular class for you to use, not breaking the "everything is a class" constraint.
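
For comparison, Python's dataclasses scratch a similar itch: one short declaration, and you still get a regular class:

    from dataclasses import dataclass

    @dataclass
    class MyStruct:
        field_one: int
        field_two: str

    s = MyStruct(1, "two")
    print(s.field_one)  # plain attribute access, no hand-written boilerplate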


In my opinion OO design is valuable in extremely large code bases and/or code bases that will likely exist for decades and go through multiple generations of significant refactoring.

With respect to your setters and getters question, particularly in regards to Python... The @property feature in Python is just a specific implementation of the setters/getters OO design principle. I can easily be convinced that typing foo.x is better than foo.getX(), but I have a hard time having a strong emotional reaction to one vs. the other if the language allows them to have the same benefits.


I feel like what happened to agile development happened to OOP: people morphed it into something it was never meant to be.


Yeah, somewhere it stopped being about modeling your problem and became a code organization technique. There was an incredible effort to formalize different modeling techniques/languages, but it's dried up.

It seems to be what we do; I'd say FP is in the same place. My CS program was heavily built around the ML family of languages, specifically Standard ML, with the algebraic types, functions, pattern matching (on your types), etc. That "functional programming" seems to be a radically different thing from what people do in JS or Erlang and call by the same name. It all comes around, I guess; static types were pretty gauche 10-15 years back, and now how many folks are using TypeScript to make their JS better?


Evolutionary design is that way in general really. Your intentions never matter - just what it can be used for.


Either you tell your objects what to do, which means they have mutable state, which means you are programming in an imperative way.

Or you get values from your objects. You need getters for this, but you can guarantee immutability and apply functional programming principles to your code.

You can't have your cake and eat it too. At the end of the day, you need values.


It's harder to write simple code because that requires a crystallized understanding of the problem. You can start banging out FactoryManagerFactories without having the faintest idea of the core problem at hand. Maybe some of the silliest OO patterns are like finger warmup for coders? Unfortunately that stuff still ends up sticking to the codebase.


Getters and setters make more sense in languages where you can't override attribute lookup.
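
In Python, for example, failed attribute lookups can be intercepted wholesale; a toy sketch (the "loaded:" value stands in for real work like reading a config file):

    class LazyConfig:
        def __getattr__(self, name):
            # Called only when normal attribute lookup fails; fabricate the
            # value on first access and cache it as a real attribute.
            value = f"loaded:{name}"
            setattr(self, name, value)
            return value

    cfg = LazyConfig()
    print(cfg.database_url)  # looks like a plain field to the caller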

