Hacker News

Maybe others' experiences are different, but I find smaller models work just as well for "reductive" tasks.

Dolly sucks for generating long-form content (not very creative), but if I need a summary or classification, it's quicker and easier to spin up dolly-3b than vicuna-13b.

I suspect OpenAI routes prompts to particular models based on similar logic.
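That kind of routing could be as simple as a heuristic over the prompt. A minimal sketch, where the keyword list, function name, and model names are all illustrative assumptions (not anything OpenAI has published):

```python
# Hypothetical prompt router: send "reductive" tasks (summarize, classify,
# extract) to a small model and open-ended generation to a larger one.
# Keywords and model names here are assumptions for illustration only.

REDUCTIVE_KEYWORDS = ("summarize", "summary", "classify", "extract", "label")

def pick_model(prompt: str) -> str:
    """Route a prompt to a model tier via a crude keyword heuristic."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in REDUCTIVE_KEYWORDS):
        return "dolly-v2-3b"   # small: fast and cheap, fine for reduction
    return "vicuna-13b"        # large: better at creative generation

print(pick_model("Summarize this article in two sentences."))  # dolly-v2-3b
print(pick_model("Write a short story about a lighthouse."))   # vicuna-13b
```

A real system would presumably use a learned classifier rather than keywords, but the cost/quality trade-off is the same: don't pay 13B-scale inference for a task a 3B model handles.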


