Hacker News | gvv's comments

"Claude build me a Claude Cowork clone, make no mistakes"


Price seems to be around 3.9k EUR https://preorderspark.com/


Any idea if or when it will be available in EU? https://apps.apple.com/us/app/sora-by-openai/id6744034028

edit: as per usual, it's not available in the EU yet...


Nice, super useful for debugging API responses. Would be nice to be able to use it as a VS Code extension!


> Would be nice to be able to use it as a VS Code extension!

I've added support to use jd as a Git diff engine: https://github.com/josephburnett/jd?tab=readme-ov-file#use-g.... Can you configure VS Code to use a custom command to show diffs?
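For the Git side, here's a minimal sketch of registering jd as a custom diff driver; the `--git-diff-driver` flag is taken from the linked README, so verify it against your installed version:

```shell
# Register a custom Git diff driver named "jd".
# (--git-diff-driver flag per jd's README; verify against your installed version.)
git config --global diff.jd.command "jd --git-diff-driver"

# Route JSON files through that driver via .gitattributes.
echo '*.json diff=jd' >> .gitattributes
```

After this, `git diff` on tracked `*.json` files goes through jd instead of the built-in line diff.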


I've had the most success using PDFMinerLoader

(https://api.python.langchain.com/en/latest/document_loaders/...)

It deals pretty well with PDFs containing a lot of images.
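For context, a minimal sketch of the loader in use; it assumes the `langchain-community` package (plus `pdfminer.six`) is installed and that a local `example.pdf` exists, so treat the path as a placeholder rather than something runnable as-is:

```python
# Sketch: load a PDF into LangChain Document objects via pdfminer.six.
# Assumes `pip install langchain-community pdfminer.six`; example.pdf is a placeholder.
from langchain_community.document_loaders import PDFMinerLoader

loader = PDFMinerLoader("example.pdf")
docs = loader.load()

# Each Document carries the extracted text plus source metadata.
for doc in docs:
    print(doc.metadata["source"], len(doc.page_content))
```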


this is hackernews?


Yes, I found it interesting.


Spot on


Nice job! The horrors GPT-4 must endure to watch ads, truly inhumane


Cost of doing business


Great, but wth is wrong with it? https://i.imgur.com/gWIilWU.png


I suspect they trained it on old stories to which they added this caveat, and now “once upon a time” has become tightly coupled to the caveat in the model.


Yes, we wouldn't want to produce output that perpetuates harmful stereotypes about people who live in gingerbread houses; dangerously over-estimates the suitability of hair for safely working at height; or creates unrealistic expectations about the hospitality of people with dwarfism.

I wonder if this sort of behaviour was more nuanced in the initial model, and something like quantisation has degraded the performance?


In fairness, there are lots of things in old tales we may not want an LLM to take literally.

For instance, unlike kids, at training time an LLM isn't going to ask “It's not very nice for the parents to abandon their children in the forest, is it?”.

I know conservatives are easily triggered by such caveats, but at the same time, they are literally banning books from libraries ¯\_(ツ)_/¯


My guess is that they accidentally trained it to object to the past for being racist, i.e. "once upon a time" promotes "outdated" attitudes.


Ya, to me this is an immediate disqualification. They're building the political commissars into the tech, and they're actual nonsense political correctness rules. Instead of blocking actual racism etc. they block "once upon a time"?

Throw it in the trash, it's worthless.


AI models absorb all kinds of racist/sexist/hateful speech, so they have to be neutered or they'll end up like that Microsoft AI (Tay) that started spouting Nazi lingo after a day or two of training because of trolls.

Apparently AI companies can't be bothered to filter out the harmful training data, so you end up with this warning every time you reference something even remotely controversial. It paints a bleak future if AI companies keep producing these self-censoring AIs rather than fixing the problem with their input.

