Yes, Promptless already has the ability to be triggered from Slack channels (e.g. Slack Connect support channels with customers). Discord likely coming later this week or early next week!
Yes, obviously! I began a blog post on the issues with Trusted Computing but ended up first writing about how Trusted Computing works, since most explanations I could find bury the underlying motivations behind technical jargon.
BTW: Here's a more performant version (fewer tokens) https://preview.promptjoy.com/apis/jNqCA2 that uses a smaller example but will still generate pretty good results.
- if the only reason you're using v4 over v3.5 is to generate JSON, you can now use this API and downgrade for faster and cheaper API calls.
- malicious user input may break your JSON (e.g. by asking GPT to include comments around the JSON, as another user suggested); this may or may not be an issue for you (e.g. if one user can influence other users' experience)
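One cheap defense against that failure mode, as a sketch (the function name is my own, not from any library): parse the model output with a strict JSON parser and reject anything that isn't pure JSON, so injected commentary fails loudly instead of silently corrupting downstream code.

```python
import json

def parse_model_json(raw: str) -> dict:
    """Parse model output as strict JSON, rejecting surrounding text.

    json.loads fails on anything that isn't pure JSON, so output a user
    tricked the model into wrapping in comments gets rejected rather
    than mis-parsed.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"model output was not valid JSON: {e}") from e

# clean output parses fine
assert parse_model_json('{"sentiment": "positive"}') == {"sentiment": "positive"}

# injected commentary around the JSON is caught
try:
    parse_model_json('// as requested\n{"sentiment": "positive"}')
except ValueError:
    pass  # rejected, as intended
```

You could also retry the call on failure; the point is just to never trust the raw string.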
I would recommend creating a simplified JSON schema for the slides (say: a presentation is an array of slides; each slide has a title, a body, an optional image, and an optional diagram; each diagram is one of pie, table, ...).
Then use a library to generate the pptx file from the generated content.
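A minimal sketch of that schema using stdlib dataclasses (the field names here are my own illustrative assumptions, not a fixed spec):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Diagram:
    kind: str                          # one of "pie", "table", ...
    data: dict = field(default_factory=dict)

@dataclass
class Slide:
    title: str
    body: str
    image: Optional[str] = None        # e.g. an image URL or search query
    diagram: Optional[Diagram] = None

@dataclass
class Presentation:
    slides: List[Slide]

# Example of the shape you'd ask the model to produce:
deck = Presentation(slides=[
    Slide(title="Q3 Results", body="Revenue up 12%",
          diagram=Diagram(kind="pie", data={"A": 60, "B": 40})),
])
```

You'd then parse the model's JSON into this structure and walk it to emit slides with a pptx library (python-pptx, for instance).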
It seems to me that a Transformer should excel at Transforming, say, text into pptx or pdf or HTML with CSS etc.
Why don't they train it on that, so I don't have to sit there with manually written libraries? It can easily transform HTML to XML or plain-text bullet points, so why not the other formats?
I don't think the name "Transformer" is meant in the sense of "transforming between file formats".
My intuition is that LLMs tend to be good at things human brains are good at (e.g. reasoning), and bad at things human brains are bad at (e.g. math, writing pptx binary files from scratch, ...).
Eventually, we might get LLMs that can open PowerPoint and quickly design the whole presentation using a virtual mouse and keyboard but we're not there yet.
I believe functions do count toward token usage in some way, but apparently more efficiently than pasting raw JSON schemas into the prompt. Either way, the token usage seems to be far lower than with previous alternatives, which is awesome!