Hacker News | learn_more's comments

14% overestimates it, because the user isn't clicking with uniform randomness; their clicks are roughly normally distributed about the center of the line.

>In total the thickness went down from 7 to 6 pixels, which is a 14% decrease, making it 14% more likely to miss it.

Pedantic, but the chance of a miss is actually less than 14% higher, since the user's click location is not uniformly random over the thickness area; it's biased toward the center (roughly normally distributed).
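A quick numeric sketch of this point, under purely illustrative assumptions (the user aims at the center of the original 7 px band, and clicks land normally distributed around that aim point with σ = 3 px; both numbers are made up for the example):

```python
import math

def hit_prob(half_width, sigma):
    # P(|X| <= half_width) for X ~ Normal(0, sigma^2), via the error function
    return math.erf(half_width / (sigma * math.sqrt(2)))

sigma = 3.0                 # assumed click spread in pixels (illustrative)
p7 = hit_prob(3.5, sigma)   # 7 px border, aiming at its center
p6 = hit_prob(3.0, sigma)   # 6 px border
drop = (p7 - p6) / p7       # relative decrease in hit probability
print(f"hit(7px)={p7:.3f}  hit(6px)={p6:.3f}  relative drop={drop:.1%}")
```

With these assumptions the hit probability drops by roughly 10%, less than the 14.3% (1/7) you'd get if clicks were uniform over the band, because the pixel removed sits in the low-density tail of the click distribution. A different σ changes the exact figure, but any center-peaked distribution gives a drop below the uniform case.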


Equally pedantic: you don't know the distribution, so the chance could be higher.

The reduction was specifically to the in-window side of the edge, so it's definitely greater than 14%.

Interesting, I've always approached from the outside in.

I approach from whatever side the mouse happens to be on...

Never thought about it before, but after playing with it a while I notice I tend to approach from the right, which means moving outward if I'm inside near the right edge. I think this is because my positioning accuracy seems to be higher moving leftward than rightward...

We can safely assume they're more likely to be close to the edge they're trying to grab than some random location on the window

Aim wider: why window and not screen?

Yeah, and not to mention the increased likelihood that click events the user intends for the application will get through successfully, rather than being stolen by the window manager.

Technically correct is the best kind of correct.

I had similar thought but didn't want to be that guy.

My take is that sometimes we get paid to be that guy; precision has its place and value.

We get lost when being right is seen as having value in itself, instead of as improving clarity and precision where a specific context needs it.


No furniture in front of the window.


See https://Schematix.com

It has a graph query capability, which works like a reasoning engine. Beyond that, it can run simulations on the models.

Multi user, version control with branch and merge, time shifting, etc.

It's for diagramming whole environments, not just discrete diagrams.


This is incredibly cool! I've been using Neo4j to hack something like this together but the UI on this is amazing. I honestly believe that tools like this should be as fundamental to IT as double entry bookkeeping is to accounting and CAD is to engineering and blueprints are to construction.


How many died waiting for approval of cures we did find?


How many have died because we didn't care enough to try harder to cure these terrible diseases? Tens of millions!

If you have a disease and want to find clinical trials for unapproved drugs, they are available.

https://clinicaltrials.gov/

https://clinicaltrials.gov/study/NCT05855200?locStr=New%20Yo...


Far fewer than if we didn't have a proper system in place. We literally have miracle cures for things like measles, polio, and smallpox, yet people are far too stupid to take them. You think just letting people try whatever they want on people in a desperate situation is going to lead to good things?


If you like diagrams that are also interactive via topological graph queries, see:

https://schematix.com/video/depmap/


Kinda cool-looking product. I like its perspective a lot, especially the 'impact' view you can do. The physical view is something unique as well.



You might be interested in:

https://schematix.com/video/depmap

I'm the founder. It's a tool for interacting with deployment diagrams like you mentioned in your article.

We have customers who also model state machines and generate code from the diagrams.


> Schematix provides diagrams as a dynamic resource using its API. They aren't images you export, they don't end up in My Documents. This isn't Corel Draw. In Schematix, you specify part of your model using a graph expression, and the system automatically generates a diagram of the objects and relations that match. As your Schematix model changes, the results of the graph expression may change, and thus the visual diagram will also change. But the system doesn't need you to point and click for it. Once you've told it what you want, you're done.

What an interesting tool! It's rare to see robust data models, flexible UX abstractions for dev + ops, lightweight process notations, programmatic inventory, live API dashboards and a multi-browser web client in one product.

Do you have commercial competitors? If not, it might be worth doing a blog post and/or Show HN on OSS tooling (e.g. Netbox inventory, netflow analysis of service dependencies) which offers a subset of Schematix, to help potential customers understand what you've accomplished.

Operational risk management consultants in the finance sector could benefit from Schematix, https://www.mckinsey.com/capabilities/risk-and-resilience/ou.... Lots of complexity and data for neutral visualization tooling.


Schematix is somewhat unique. Direct competitors? -- not exactly, but IT asset managers, DCIM, BC/DR tools, and CMDBs are all competitors to some degree.

Some of our best users are professional consultants who use us for a project, which often introduces us to a new customer.

A Show HN would certainly be in order. Thanks for the thoughts!


Do your blog posts have individual URLs? I would like to share a specific post, rather than the cumulative log.


The subject matter of the example used to demonstrate the interface is unnecessarily complex. It could be simple and still stimulate a complex discussion; that would make the interface easier to understand.


I agree. We have access to five complex discussions, and unfortunately, the ones with simple topics are not available to the public.


I think he was describing the fact that they already operate within a decision framework they understand. Implicit in the results of a particular test is the fact that a particular observation was made that suggested ordering that test.

If they get results from a test without the compelling observation, they're operating outside their well-established statistical framework, and they can't confidently evaluate the meaningfulness of the test results.

To me, this doesn't mean the extra information is bad or unhelpful; it's just that they are not yet calibrated to use it properly.

I've heard this sentiment from medical professionals before and this was my conclusion.
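The prior-dependence being described can be sketched with a toy Bayes calculation. All the numbers here are hypothetical (sensitivity 0.90, specificity 0.95, and two made-up priors), chosen only to show why the same positive result means less without a prompting observation:

```python
def ppv(prior, sensitivity=0.90, specificity=0.95):
    """Positive predictive value: P(disease | positive test), by Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Test ordered because of a compelling observation: higher prior probability.
print(f"prior 20%: positive test -> {ppv(0.20):.0%} chance of disease")
# Same test run incidentally, with no prompting observation: low prior.
print(f"prior  1%: positive test -> {ppv(0.01):.0%} chance of disease")
```

With these illustrative numbers, the positive result implies roughly an 82% chance of disease when the test was prompted by symptoms, but only about 15% when it was incidental; the test is identical, yet its meaning depends on the context that led to ordering it.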


That makes sense. Explainability would be a big issue/requirement for any attempted automated decision framework. I don't know that I would want my doctor to order tests based on the output of some app without understanding why they're ordering them.

