Hacker News | yoodenvranx's comments

The funny thing is that Chrome re-added icons to their menu on Android 1-2 months ago


Yeah, that product is pretty much dead on arrival in Germany

Such emails would look _very_ sketchy to most Germans


As a native German I have to admit that the problem with .kz only occurred to me after you mentioned it. I'm not sure if I should feel good or bad about this.


If you go to /r/amd you will see that basically everybody over there hates userbenchmark.com because it's intel-biased garbage.

Just one of many threads from the last few weeks:

https://www.reddit.com/r/Amd/comments/fyhl1g/tim_from_hardwa...


The specific issue is that userbenchmark's CPU weighting seems very coupled to Intel's ideas about how many CPU cores there should be, or is at least firmly stuck five years in the past. So last year the weights were 40% for single-core performance, 58% for quad-core, and 2% for "multi-core".
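As a toy illustration (all scores below are hypothetical, purely to show how the weighting behaves, not real benchmark results), a composite built with those weights can rank a strong quad-core above a chip with 2.5x its multi-core throughput:

```python
# Hypothetical per-category scores (NOT real measurements),
# normalized so higher is better.
chips = {
    "quad_core_chip": {"single": 100, "quad": 100, "multi": 40},
    "eight_core_chip": {"single": 95, "quad": 98, "multi": 100},
}

def composite(scores, weights):
    """Weighted sum, the way an 'effective speed' number might be built."""
    return sum(scores[k] * w for k, w in weights.items())

# The weighting described above: 40% single, 58% quad, 2% multi.
weights = {"single": 0.40, "quad": 0.58, "multi": 0.02}

for name, scores in chips.items():
    print(name, round(composite(scores, weights), 1))
# quad_core_chip comes out ahead (98.8 vs 96.8) despite the huge multi-core gap,
# because multi-core contributes only 2% of the total.
```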

The idea was that this is supposed to be what games care about, but it isn't. Modern games have issues with even 6c/6t CPUs, such as the horrible 1% lows on the 9600K in Far Cry 5: https://www.gamersnexus.net/hwreviews/3407-intel-i5-9600k-cp...

It looks like Userbench has since adjusted to weight up to 8 threads of performance? Which is maaaaybe less trash if all you care about is gaming. But the Core i5 series still tops charts on userbench despite reviewers no longer recommending the i5's due to performance inconsistency.

They even make ludicrous claims, such as that the 9100F is perfectly fine for gaming and even 10% better than a 2700X. They seem to be basing this entirely on older games or games specifically built for as broad a userbase as possible (e.g., CSGO, Fortnite & Overwatch). Meanwhile actual reviews say things like "The quad-core Core i3-9100F was hopeless in Battlefield V, pretty bad in Assassin’s Creed: Odyssey, fairly useless in The Division 2, and weak in Shadow of the Tomb Raider." https://www.techspot.com/review/1983-intel-vs-amd-budget-cpu...

So even if you're an Intel fan, userbench is still a terrible way to pick a CPU.


HWUB is not known for being particularly even-handed in their editorial positions. They tend to 'beg the question' by picking game suites that produce the outcome they want to discuss, and tend to over-reach on the conclusions.

"fairly useless" here is over 60 fps average in the heavy titles and 90-110 fps in the multiplayer titles, with a similar ratio of minimums as the 1600 AF (so no more or less prone to stutter). And that's with them loading the dice by picking the absolute most thread-heavy games they could find; in most games the 9100F does comparatively much better than that.

And the reality is that Zen1 and Zen+ actually are pretty weak in gaming. Zen2 made a ~30% improvement over Zen1 in gaming performance (much better than the "average" gains for other workloads), and it's still 10-15% behind the fastest Intel processors. Zen1 especially was hot garbage in gaming, those thread-heavy titles aren't representative of its average performance. About all you can say is that it aged better than the 4Cs that Intel had on the consumer platform at the time (or the 8100/9100F/etc that followed), an OC'd 8700K lays a smackdown on it and an OC'd 5820K remains extremely viable even today.

I'm not going to defend userbenchmark's composite scores, but gaming performance does heavily depend on per-core performance even today. Having 8 faster cores is still more desirable for gaming than 16 slower cores. And single-core performance is a good analogue of "per-core performance" so this number remains very relevant.


> "fairly useless" here is over 60 fps average in the heavy titles and 90-110 fps in the multiplayer titles

You missed the point. The point was UB claimed the 9100F was 10% faster than a 2700X. In reality the 2700X absolutely massacres the 9100F in gaming performance. Higher average FPS, higher min FPS, etc...

Even the 1600AF trivially beats the 9100F.

> with a similar ratio of minimums as the 1600 AF (so no more or less prone to stutter).

1600 AF in Battlefield V: 126 average, 91 1% lows

9100F in Battlefield V: 116 average, 49 1% lows

That's not a similar ratio at all.

> Zen2 made a ~30% improvement over Zen1 in gaming performance

No it didn't. You're massively misrepresenting (or mis-remembering) Zen1's gaming performance.

https://tpucdn.com/review/amd-ryzen-7-3700x/images/relative-...

3.6ghz/4.4ghz boost 3700X is ~11% faster than the 3.6ghz/4ghz boost 1800X in 1080p gaming.

Even at 720p it's a 15% gap between those two, not 30% https://tpucdn.com/review/amd-ryzen-7-3700x/images/relative-...

> Zen1 especially was hot garbage in gaming, those thread-heavy titles aren't representative of its average performance.

No it wasn't. It lost to the equivalent Intel CPU, but it was far from bad. You could easily pair a Zen1 CPU with just about any GPU and never see a significant bottleneck. The exception being the absolute top-end. And, critically, if you had an older Intel quad core, like a 7600K, the Zen1/Zen+ CPUs were still an upgrade in gaming performance.

See for example at 1440p the gap between Zen1 & Zen2 & Intel being almost nonexistent even with a 2080 Ti: https://tpucdn.com/review/amd-ryzen-7-3700x/images/relative-...

> Having 8 faster cores is still more desirable for gaming than 16 slower cores.

Of course, but you still need enough cores to avoid stuttering. Which means...

> And single-core performance is a good analogue of "per-core performance" so this number remains very relevant.

Is not correct at all. Single-core performance isn't an analogue of anything these days. You need a minimum number of cores and good single-core performance.

And it's not just HWUB with these conclusions that an i5 is no longer sufficient. Gamersnexus has the same recommendations: "In more games each year, we’re noticing the cut-down Core i5 exhibiting high frametime variability that counteracts its fleeting performance superiority with unreliable, stuttery behavior. The AMD R5 3600 is more reliable and consistent in its performance across all games we’ve tested, making it the better gaming option." ( https://www.gamersnexus.net/guides/3533-best-cpus-of-2019-ro... )


> So last year the weights were 40% for single-core performance, 58% for quad-core, and 2% for "multi-core".

And for reference, they used to have it at 30% single core, 60% quad core, 10% multi core. But that didn't advantage Intel enough, or something.


> Modern games have issues with even 6c/6t CPUs

Any idea why?


It's really specific to Far Cry 5, and it produces some quite strange results that people are putting way too much weight on.

0.1% lows really tank in FC5 on processors without SMT; for example a 5.2 GHz 9600K has less than half the 0.1% FPS of a stock Pentium G5600 2C4T processor. In other words it's stuttering on the 9600K but running OK on the G5600.

https://www.gamersnexus.net/images/media/2018/cpus/2600k/int...

4C4T processors (R3 1200) do OK, but that one is AMD, so it isn't clear whether it's specifically something the engine is doing wrong around Intel processors, or if there's some hardcoded assumption that if there are 6+ cores then SMT must be available, or what.

But I mean, this specific game is not evidence that "6C6T is no longer sufficient for gaming", it's just a badly programmed game that has something going wrong under the hood on 6C6T processors.


The actual scores themselves are useful though. When it says "quad core: x% faster" or "multi core FP: y% faster" that is actually fairly accurate (as accurate as a synthetic can be). People just don't like the way userbenchmark weights these numbers in the composite score ("effective speed").

It's still a very useful site for comparing niche hardware that will never get a true review - how does a J5005 compare to a i5 750? How does a Xeon E5-1650 compare to a Ryzen 1600? Probably not going to ever be directly tested. The only alternatives are things like Passmark that are much less accurate. UserBenchmark lets you compare against all kinds of niche or rare hardware at will, that's an incredibly valuable resource. Some people are just so butthurt about the "effective speed" composite scores that they can't bring themselves to scroll past a single line, which is a little ridiculous.

Generally r/AMD constantly gets their panties in a bunch about something or other, it's constant conspiracies about how this or that is a NVIDIA or Intel backed conspiracy. Don't take them too seriously.

At times they have sent death threats because they didn't like the conclusion of a review. After the initial Ryzen launch they decided that Steve from GamersNexus (among others) was an Intel shill and started threatening his family. iirc there have been other "incidents" as well.

https://www.reddit.com/r/Amd/comments/5xkw1b/gamersnexus_rec...

(most of those "removed" posts are people justifying it because Steve is an Intel shill who put out a "biased review")

They really take the whole fanboy thing to a whole new level. It is practically a uniquely toxic subreddit, even among other "brand" subreddits, more like a sports team sub or something.


Interesting, I was not aware of this.

As it happens I am awaiting delivery of a Ryzen 5 3600, good to know it's likely to be even better than userbenchmark suggested!


The day Youtube decides to ban the LockpickingLawyer channel for "showing users how to bypass secure doors" will be the day I uninstall and block Youtube on all my devices.


If computers were still ENIAC-sized, then… well, you see where I am going


I want to have both options because traditional folders and a tag based file system solve different problems.

Let's say I go on vacation with my dog and take pictures. After I am home again I want to sort the pictures, but then I have a problem: the pictures in which you can see my dog belong in the '2019 vacation to Bavaria' collection _and_ also in the 'Best pictures of my dog' collection.

I'd love to have some sort of universal file-database where I can store all my "final" images and then create collections by adding tags.
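A minimal sketch of what I mean, using Python's sqlite3 as the file database (the table, file, and tag names here are invented for illustration):

```python
import sqlite3

# In-memory DB for illustration; a real store would live on disk.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE files (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
    CREATE TABLE tags  (file_id INTEGER, tag TEXT, UNIQUE (file_id, tag));
""")

def add_file(path, tags):
    cur = db.execute("INSERT INTO files (path) VALUES (?)", (path,))
    db.executemany("INSERT INTO tags VALUES (?, ?)",
                   [(cur.lastrowid, t) for t in tags])

def collection(tag):
    rows = db.execute("""SELECT path FROM files
                         JOIN tags ON tags.file_id = files.id
                         WHERE tag = ?""", (tag,))
    return [r[0] for r in rows]

# One photo can live in two collections -- no copies, no duplicate folders.
add_file("IMG_0042.jpg", ["2019 vacation to Bavaria", "Best pictures of my dog"])
add_file("IMG_0043.jpg", ["2019 vacation to Bavaria"])

print(collection("Best pictures of my dog"))  # ['IMG_0042.jpg']
```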


In macOS (for some number of years now) you store files in a standard folder hierarchy, and you can add tags to files. With or without tags, you can use Spotlight (cmd-space) to quickly find files.


Yes, I know that there are approaches which use the normal file system.

But what I want is different:

I want a universal "DB for binary files" where I can store binary data and all its metadata.

Then I can use this DB to build an app for picture galleries, music collections and tons of other things.

This DB should also support:

* automatic checksumming so that I can detect data corruption

* Some sort of version history so that I can store multiple versions of a file

* Built-in replication which I can use to see the same data (or parts of it) on all my devices
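The first two bullets can be sketched in a few lines with sqlite3 and hashlib (all names are invented for illustration; replication is left out):

```python
import hashlib
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE versions (
                  name TEXT, version INTEGER, sha256 TEXT, data BLOB,
                  PRIMARY KEY (name, version))""")

def store(name, data):
    """Store a new version of a file; the checksum is computed automatically."""
    digest = hashlib.sha256(data).hexdigest()
    (latest,) = db.execute(
        "SELECT COALESCE(MAX(version), 0) FROM versions WHERE name = ?",
        (name,)).fetchone()
    db.execute("INSERT INTO versions VALUES (?, ?, ?, ?)",
               (name, latest + 1, digest, data))

def fetch(name, version=None):
    """Return the requested (or latest) version, verifying its checksum."""
    if version is None:
        row = db.execute("""SELECT sha256, data FROM versions
                            WHERE name = ? ORDER BY version DESC LIMIT 1""",
                         (name,)).fetchone()
    else:
        row = db.execute("""SELECT sha256, data FROM versions
                            WHERE name = ? AND version = ?""",
                         (name, version)).fetchone()
    digest, data = row
    if hashlib.sha256(data).hexdigest() != digest:
        raise IOError(f"corruption detected in {name!r}")
    return data

store("dog.jpg", b"v1 bytes")
store("dog.jpg", b"v2 bytes")
print(fetch("dog.jpg"))             # latest version: b'v2 bytes'
print(fetch("dog.jpg", version=1))  # older version:  b'v1 bytes'
```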


Is this common enough a use case that it ought to be a file system feature?

Plenty of applications do just this today by using the file system plus an index in SQLite. Is that method insufficient?


You asked: "Is this common enough a use case?"

Then you answered your own question: "Plenty of applications do just this today".

So yes, it is a common enough case that it could/should be built into the OS.


Well, OS could certainly offer some support, like file change detection, but the main indexing is often too application specific.

Photo album apps want to do face recognition; music players want BPM detection. Should those be done by the OS? I do not think so.


Sounds a lot like ZFS.


Exactly - ZFS offers everything on the bullet list.


I think the old BeOS supported what you are describing.


And Haiku, its open-source successor, does also.


I think we'll start to see that happening as machine learning is used to tag the files. See Google Photos for example, where you can basically use search in place of any sorting.


I think most of the photo organizers offer this? I remember using digiKam back around 2006 or so, and it already had this feature.

I think tags have limited scope. They are great for photos. They are OK for music, but strictly in addition to the main hierarchy. You could use them for text docs, but folders + full-text search is much better. And file-level tags are completely useless for code/programming.


If you have a rich set of tags, you can have more than one main hierarchy.

Artist sort, year sort, genre sort, etc. Genre is really hard though.

There isn't much reason to use tags as the only way of organizing things though, they work great as views.


Agree, views, but not main organizing principle.

For example, my music collection is big and diverse, and both "year sort" and "album sort" are kinda useless now, because there are actually multiple disjoint subsets. There is no point ever in showing me audiobooks for year 2010 and regular music for year 2010. I always only want a subset of it.

This is what I meant "strictly in addition to main hierarchy" -- let me keep my folders, and maybe when I want to go deep enough, I want to browse by tag. But even then it would not be a hashtag-like tags that the original page refers to.


You can achieve this in existing systems with a hardlink.


A hardlink is a (poor, partial) implementation of a tag in the file system. A tag will recover a collection of all the elements filed under the same label, and a link can't do that.


I think hard links could. To take the up-thread example, you have one folder called "Vacation2018", and another called "BestDogPics". The photo of your dog on vacation lives in both folders, but hardlinked together.
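A quick sketch of that with Python's os.link (folder and file names are taken from the example above; this assumes a filesystem that supports hardlinks, and both names must be on the same filesystem):

```python
import os
import tempfile

# Work in a scratch directory so the demo is self-contained.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Vacation2018"))
os.makedirs(os.path.join(root, "BestDogPics"))

original = os.path.join(root, "Vacation2018", "dog_on_beach.jpg")
with open(original, "wb") as f:
    f.write(b"...jpeg bytes...")

# Hard-link the same inode into the second "collection".
linked = os.path.join(root, "BestDogPics", "dog_on_beach.jpg")
os.link(original, linked)

# Both directory entries point at the same data: one file, two folders.
print(os.path.samefile(original, linked))   # True
print(os.stat(original).st_nlink)           # 2
```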


I think the key thing here is that files can indeed be categorized in different ways, and could - and should - exist simultaneously in different collections with different structures.

I therefore think it'd be worth separating the concept of a "filesystem" from a "folder system" or "index system". That is: keep the file storage itself flat (e.g. in a relational database table), then have different categorical "views" that could be relational and/or hierarchical pointing to these files. Naturally, those collections will have their own sets of metadata for that file.

So for example, you have a file named "img-8675309.png" in your camera's storage. The operating system presents a view of said camera storage, in the form of a flat list of files and some basic metadata like creation date (plus perhaps camera-specific metadata if, say, the camera driver is the thing generating the view). You could then open up views for your 2019 Bavaria vacation ("Vacations" → "Bavaria 2019" → "Photos") and your dog ("Pets" → "Fido" → "Photos"), set a sorting field for each view (for the vacation, probably chronological; for your dog, by however you define "best"), drag/drop the camera file into those views (in the latter case, maybe even drag it into the spot where you want it to show up ranking-wise), and the operating system would then add references to that file automatically (almost certainly copying it into a local cache) in both your opened views and potentially in some system-maintained views (e.g. "Local Files" → "Photos").

One of the slick things here is that file access could be entirely transparent to how those files are stored. For example, those views will of course include your device's internal storage, but might also include external devices (like the camera in the above example) or even remote services (like, say, your social media account). If you accidentally delete your prized Fido photo on your local machine, the "Pets" → "Fido" → "Photos" view could still have a reference to the copy on the camera, or a copy in your social media posts, or a copy in the system backup that automatically ran last Sunday, and thus retrieve it and re-cache it locally (or prompt you to plug your camera or your external USB drive back in so it can check there).
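A compressed sketch of the flat-table-plus-views idea (the schema, paths, and file names are invented for illustration):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Flat storage: every file is just a row, with no inherent location.
    CREATE TABLE files (id INTEGER PRIMARY KEY, name TEXT, created TEXT);
    -- A view entry files one file under one hierarchical path, with a rank.
    CREATE TABLE view_entries (path TEXT, file_id INTEGER, rank INTEGER);
""")

fid = db.execute("INSERT INTO files (name, created) VALUES (?, ?)",
                 ("img-8675309.png", "2019-08-14")).lastrowid

# The same file appears in two unrelated hierarchies.
db.executemany("INSERT INTO view_entries VALUES (?, ?, ?)", [
    ("Vacations/Bavaria 2019/Photos", fid, 1),
    ("Pets/Fido/Photos", fid, 1),
])

def view(path):
    rows = db.execute("""SELECT name FROM view_entries
                         JOIN files ON files.id = file_id
                         WHERE path = ? ORDER BY rank""", (path,))
    return [r[0] for r in rows]

print(view("Pets/Fido/Photos"))  # ['img-8675309.png']
```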


I would like both as well. I can see value in tags but I am thinking from a developer perspective, I'm not sure how I would target a specific file in my code without a tree structure? Maybe I just haven't thought about this enough but it seems necessary.


My personal nightmare:

Working in a company where I have only one monitor and a PC without SSDs. And I am not allowed to bring my own mouse/keyboard. And I don't have admin rights for the OS.


I totally agree with the general idea. Especially the mouse.

However, since I bought my 27” 4k monitor I have pretty much stopped using extra ones. Unless I have to for something very specific, like debugging an app which outputs to multiple monitors.


I am looking for a DIY kit for a split ergonomic keyboard _with_ an attached num block.

Does anyone know if such a kit exists? There are a few kits for split ergonomic layouts but none of them seem to include the num block.


I recommend going with a separate numpad: https://www.reddit.com/r/MechanicalKeyboards/wiki/numpads

Once you're willing to separate them, the options open WAY up.


There is no such kit. In this day and age of hand-wired keyboards and 3D printers, though, you can probably make it.

I use an Ergodox with one of the bottom keys set to switch modes while pressed; in that mode m,.jkluio translates to 123456789. It works fine for me.

The Ergodox has too many keys, so using some of them to switch layers is practical.


No. But a keypad PCB is 5-10 USD and well worth it.

I need it for my daily work and I can't really map it well on my 40%.


Because there is generally no need for one. You can add a num block to any part of the keyboard and switch to it with one key press (hold or tap). You can also get a separate, programmable num block in addition if you really want one.


On my ergo I use the home row for the numbers. It's way faster because you can use 8 fingers for 10 numbers instead of 3 fingers for 10.


Of course there is no need for them, but I am just used to using one and I would miss it.


I wonder if at that point users aren’t just emulating a ten key with some mode switch, or just building a separate ten key?


> I'm starting to think that comment sections on news sites are a really bad idea

I have no idea how it is in other countries, but in Germany the comment sections of every single newspaper are _horrible_, especially on articles about refugees, climate change and similar topics.


That is sadly the way anywhere mainstream without heavy moderation. The scary thing about Youtube comments isn't that they are extra stupid, because they aren't: they are the norm, and essentially what happens when a representative sample of the population is available.


Just because the majority of the population has the ability to comment doesn't mean the majority participates. Voluntary participation like that tends to attract people with strong opinions on the position under debate - especially people who disagree with the position upheld by the article and especially people who feel like theirs is the minority opinion.


I suspect you're defining horrible comments as, "comments expressing views I don't like and wish didn't exist". Germany has particularly extreme and unusual policies regarding migrants/asylum seekers/refugees, climate change and a few other such things. Is it any surprise that extreme views in one direction trigger extreme views in the other?


> Your post seems odd. The autobahn is generally regarded as some of the best highway experience there is when it comes to other road users

I drive on the Autobahn both in Germany and the Netherlands and I can tell you that driving in Germany is 100 times more stressful compared to NL! As soon as you cross the border into Germany everyone channels their inner race car driver and it becomes stressful. Driving in NL is much more relaxed and laid back.


Well, driving in most countries other than Germany is more relaxing because everyone goes at the same low speed.

You can have the same experience on a German Autobahn by staying in the right lane. You'll go at the same low speed as the trucks, nice and cozy, straight, without much worries or concentration needed.

Now, if you want to go faster, you do have the option (unlike in other countries), but then you'll have to be a bit more careful and dynamic, because it is quite possible that a car approaches you from behind faster than you approach a stationary target (if the car goes 250 km/h while you go 120 km/h, say). Of course that requires keeping an eye on not only the traffic in front, but also the traffic behind, and getting out of the way when required, so you have to change lanes quite frequently.

It is somewhat more stressful (if you choose to leave the slow lane), but that's the price you pay.

Furthermore, after a while, all this becomes second nature, and rather stress-free, as long as everyone follows the rules. (What remains stressful are drivers not following the rules, e.g. not signalling before changing lanes, or changing into your lane when you're about to overtake them, etc.)


> Of course that requires keeping an eye on not only the traffic in front, but also the traffic behind, and getting out of the way when required, so you have to change lanes quite frequently.

This hits the nail on the head. I know people who complain intensely about Autobahn driving and when I ride with them all of them have very poor viewing technique ("Blicktechnik") and are the kind of driver that's rather oblivious to anything that happens on the road that isn't squarely in front of them. Of course, with these predispositions, just things like merging or changing lanes become stressful each time. You have to be able to do both on the Autobahn, so Autobahn = very stressful for them.


Trucks go at 90 km/h IIRC, so going in the right lane with them only makes sense if you’re eco-conscious (or stingy) and want to burn as little fuel as possible. Here in Poland the speed limit is 140 km/h, and most people either obey that or go a little over, so you can have a relatively smooth and stress-free drive even in the left lane.


Since we're talking about stress, let's talk about rechts vor links in residential areas... =)

http://crankydriver.com/word/rechts-vor-links/


I have the opposite experience: as soon as I go to NL, I constantly have to watch out for drivers cutting in and watch the speed limit carefully, and the road is more congested. It's also sometimes too "boring" to drive; you have to watch out not to fall asleep. Driving from Poland to the Netherlands, I can say the Polish autobahn is the best: the fewest roadworks, the fewest slowdowns to 80, the fewest people cutting you off when you try to pass, and the highest average speed (a constant 140 km/h). The only disadvantage: driving once through the whole of Poland costs as much as a yearly vignette in Switzerland.


Getting a ticket for 4 km/h over the limit in NL should fix that relaxed feeling for you.


Yeah, 4 km/h over results in a slap on the wrist in Germany. Traffic fines get expensive quickly in the Netherlands! A simple parking ticket is ~95 euros >.<


I think this article is about the study about that topic which started it all:

https://www.nytimes.com/1998/07/14/science/praise-children-f...


Thanks for the link.

''Praising children's intelligence, far from boosting their self-esteem, encourages them to embrace self-defeating behaviors such as worrying about failure and avoiding risks,''

This is arguably good advice regarding 5th graders. For adult software engineers, however, go ahead and praise them for being smart, since worrying about software failure and avoiding software risks is great.


The use of risk in both circumstances is probably too broad to make an assertion.

