Hacker News | SpaceNoodled's comments

You seriously underestimate how myopic people can be. Not everyone is a sophisticated and socially aware HN commenter like you and me.

I remember feeling like a professional the first time I read an issue of Dr. Dobb's that I picked up at an airport in the '90s.

My first article was published in Dr Dobb’s in 1992!

DDJ and Creative Computing were by far my favorite computer magazines that I looked forward to every month.

The DDJ editor Ray Valdez was kind enough to (without me even asking) let me keep the copyright to the article about pie menus that I wrote for the Dec 1991 UI issue.

The Design and Implementation of Pie Menus: They’re Fast, Easy, and Self-Revealing.

Originally published in Dr. Dobb’s Journal, Dec. 1991, cover story, user interface issue.

https://donhopkins.medium.com/the-design-and-implementation-...

https://news.ycombinator.com/item?id=5616247


I have some bad news for you

Ah, but it's no longer "inexplicably!"

I think their point is that the challenge becomes more enjoyable than tedious.

It has all the same working features as GitHub

Is there nothing like HIPAA there or what?

Very little protection. The entire medical records of a significant percentage of the NZ population were stolen recently and put up for sale online. Zero consequences for the medical practices that adopted the hacked software.

Interesting, a person was telling me recently that NZ privacy laws were quite strong. Perhaps a different category.

https://news.ycombinator.com/item?id=44564349


The laws are strong; the policing is not. At least not for medical data.

Many AI companies, including Azure with their OpenAI hosting, are more than willing to sign privacy agreements that allow processing sensitive medical data with their models.

The devil is in the details. For example, OAI does not have regional processing for AU [0], and their ZDR does not cover files [1]. Anthropic's ZDR [2] also does not cover files, so as a patient/consumer you really need to be careful to ensure that your health data, or other sensitive data, being processed by SaaS frontier models is not contained in files. That is asking a lot of the medical provider, who would need to know how their systems work. They won't, which is why I will never opt in.

[0] https://developers.openai.com/api/docs/guides/your-data#whic...

[1] https://developers.openai.com/api/docs/guides/your-data#stor...

[2] https://platform.claude.com/docs/en/build-with-claude/zero-d...


Azure OpenAI is not the same as paying OpenAI directly. While you may not be able to pay OpenAI for them to run models in Australia, you can pay Azure: https://azure.microsoft.com/en-au/pricing/details/azure-open...

The models are licensed to Microsoft, and you pay them for the inference.


There is no way to upload files as part of the context with Azure deployments; you have to use the OAI API [0]. Without an architecture diagram of the solution, I am not going to trust it, given the known native limitations of Azure's OAI implementation.

[0] https://github.com/openai/openai-python/issues/2300
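One common workaround for the missing file-upload support is to inline the document text into the chat messages themselves, so nothing is ever sent as a file object (which also sidesteps the ZDR file caveats discussed above). A minimal sketch, assuming the `openai` Python client; the helper name `build_text_only_messages` and the sample strings are illustrative, not a real API:

```python
# Hypothetical sketch: pack document text into the prompt instead of
# uploading it as a file, so an Azure OpenAI deployment (which lacks
# file upload) can still process it as plain chat context.

def build_text_only_messages(question: str, document_text: str) -> list[dict]:
    """Build a chat payload with the document inlined as message text.

    No file objects are attached; everything travels as strings.
    """
    return [
        {"role": "system",
         "content": "Answer using only the provided document."},
        {"role": "user",
         "content": f"Document:\n{document_text}\n\nQuestion: {question}"},
    ]

messages = build_text_only_messages(
    question="What does the report conclude?",
    document_text="(redacted sample text)",
)

# With real credentials, this payload would go to an Azure deployment:
#   from openai import AzureOpenAI
#   client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
#                        api_key="<key>", api_version="2024-06-01")
#   client.chat.completions.create(model="<deployment-name>", messages=messages)
```

The trade-off is context-window size: inlining only works for documents that fit in the model's token budget, whereas the file-based APIs handle larger inputs server-side.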


When all you've got is AI, every problem looks like ... uh, whatever hole an LLM's output goes into. A garbage can, ideally.

AI seems great when you have no way of truly validating its output.


That's only because we're trying to not be too condescending.


That would imply that the majority of this AI hype is just additional bluster, and likely wouldn't improve anything in any significant manner.

