
Not a joke and not everybody is jumping on "AI via API calls", luckily.

As more models are released, it becomes possible to integrate them directly into some stacks (such as Elixir) without "direct" third-party reliance (except that you still depend on a model, of course).

For instance, see:

- https://www.youtube.com/watch?v=HK38-HIK6NA (in "LiveBook", but the same code would go inside an app, in a way that is quite easy to adapt)

- https://news.livebook.dev/speech-to-text-with-whisper-timest... for the companion blog post
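To give a sense of what "no third-party API" looks like in practice, here is a minimal sketch of local speech-to-text with Bumblebee in Elixir. The model name, file path, and supervision details are illustrative assumptions, not taken from the linked material:

```elixir
# Load a Whisper checkpoint from the Hugging Face hub once, at startup.
# "openai/whisper-tiny" is just an example; larger checkpoints trade
# speed for accuracy.
{:ok, model_info} = Bumblebee.load_model({:hf, "openai/whisper-tiny"})
{:ok, featurizer} = Bumblebee.load_featurizer({:hf, "openai/whisper-tiny"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "openai/whisper-tiny"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "openai/whisper-tiny"})

# Build a serving that runs inference in-process (compiled with EXLA).
serving =
  Bumblebee.Audio.speech_to_text_whisper(
    model_info,
    featurizer,
    tokenizer,
    generation_config,
    defn_options: [compiler: EXLA]
  )

# In a real app you would start this under a supervisor, e.g.:
#   {Nx.Serving, serving: serving, name: MyApp.WhisperServing}
# and call Nx.Serving.batched_run(MyApp.WhisperServing, input).
# For a one-off run (as in a LiveBook cell):
Nx.Serving.run(serving, {:file, "sample.mp3"})
```

The same code works in LiveBook and inside a regular application; the only difference is whether the serving is started ad hoc or under the supervision tree.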

I have already seen more than a few people running SaaS apps complaining on Twitter about AI downtime :-)

Of course, this approach also comes with a (maintenance) cost, as external dependencies do, as I described here:

https://twitter.com/thibaut_barrere/status/17221729157334307...



Yes, sooner or later this is going to be the future of GPT in applications: the models will be embedded directly within the applications.

I'm hoping for more progress in the performance of vectorized computing, so that both model training and inference become cheaper. If that happens, I am hopeful we will see a lot of open source models that can be embedded into applications.


The average consumer and business user doesn't use an API directly.

It can be easy to lose sight of that.




