This isn't a claim that no one understands how these models are architected, nor that SDPA / matrix multiplication isn't understood by those who create these systems.
What's being said is that the result of training and the way in which information is processed in latent space is opaque.
There are strategies to dissect a model's inner workings, but this is an active area of research and still incomplete.
Whatever comes out of any LLM will directly depend upon the data you fed it and which answers you reinforced as correct. There is nothing unknown or mystical about it.
The same could be said of people, which reveals the emptiness of this idea. Knowing the process at the mechanism level says nothing about the outcome. Some people output German, some English. Its sub-mechanisms are plastic and emergent.
storage is cheap, but if you wanted to improve this:
1. find a way to dedup media
2. ensure content blockers are doing well
3. for news articles, put them through readability and store the markdown instead (rough sketch after this list). if you wanted to be really fancy, you could instead attempt to programmatically create a "template" of sites you've visited with multiple endpoints, so the style is retained but you're not storing it with every page. alternatively a good compression algo could do this, e.g. a directory like /home/andrew/archive/boehs.org.tar.gz where all the boehs.org pages you visited are saved inside the tar
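For points 1 and 3, a rough sketch of what this could look like in Python. It assumes the third-party readability-lxml and markdownify packages are installed; the paths and layout are purely illustrative.

```python
# Sketch of points 1 and 3: content-hash dedup for media, and
# readability -> markdown for articles. Assumes the readability-lxml
# and markdownify packages are installed; paths are illustrative.
import hashlib
import os
from pathlib import Path

from readability import Document      # readability-lxml
from markdownify import markdownify   # HTML -> markdown

def dedup_media(media_dir: Path) -> None:
    """Replace byte-identical media files with hardlinks to one canonical copy."""
    seen: dict[str, Path] = {}
    for f in media_dir.rglob("*"):
        if not f.is_file():
            continue
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        if digest in seen:
            f.unlink()
            os.link(seen[digest], f)   # hardlink back to the first copy we saw
        else:
            seen[digest] = f

def article_to_markdown(html: str) -> str:
    """Strip boilerplate with readability, then convert the main content to markdown."""
    main = Document(html).summary()    # readable article HTML only
    return markdownify(main)
```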
How do you manage those? Do you have a way to search them, or a specific way to catalogue them, which will make it easy to find exactly what you need from them?
KaraKeep is a decent self-hostable app that supports receiving SingleFile pages by pointing the SingleFile browser extension at the KaraKeep API. This allows me to search archived pages (plus auto-summarization and tagging via LLM).
I don't get it, though. How does that help him search files on his local file system? Or is he syncing an index of his entire web history to his mobile device?
GP is using the SingleFile browser extension, which lets him download an entire page as a single .html file. But SingleFile can also send that page to Karakeep directly instead of downloading it to his local file system (if he's hosting Karakeep on a NAS on his network). He can then use the mobile app or the Karakeep web UI to search and view the archived page. Karakeep does the indexing (including auto-tagging via LLM).
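For anyone who wants that flow without relying on the extension's built-in integration, here's a minimal sketch of pushing a saved page to a self-hosted Karakeep instance. The endpoint path and payload shape below are assumptions for illustration only; check Karakeep's actual API documentation before using anything like this.

```python
# Rough sketch of pushing a SingleFile-saved page to a self-hosted
# Karakeep instance. NOTE: the endpoint path and payload are illustrative
# assumptions, not Karakeep's documented API.
import requests

KARAKEEP_URL = "http://nas.local:3000"   # wherever the NAS hosts it (assumption)
API_KEY = "..."                          # created in the Karakeep settings UI

def archive_page(html_path: str, source_url: str) -> None:
    with open(html_path, "r", encoding="utf-8") as f:
        html = f.read()
    resp = requests.post(
        f"{KARAKEEP_URL}/api/v1/bookmarks",             # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"url": source_url, "htmlContent": html},  # hypothetical payload
    )
    resp.raise_for_status()
```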
Thanks. I didn't know about this and it looks great.
A couple of questions:
- do you store them compressed or plain?
- what about private info like bank accounts or health insurance?
I guess for privacy one could train oneself to use private browsing mode.
Regarding compression: for thousands of files, don't all those per-file compression headers add up? Wouldn't there be space savings in having a global compression dictionary and only storing the encoded data?
Can’t speak to your other issues, but I would think the right file system will save you here. Hopefully someone with more insight can add color, but my understanding is that file systems like ZFS were built specifically for use cases like this, where you have a large set of data you want to store space-efficiently. Rather than using a global compression dictionary, ZFS compresses at the block level: it looks at the bytes on disk as they're written and compresses each block independently.
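To make the "global dictionary" idea from the question concrete: this is roughly what zstd's trained dictionaries do. A minimal sketch, assuming the python-zstandard package and a directory of saved pages:

```python
# The shared dictionary holds the boilerplate common to many pages,
# so each compressed page only stores what is unique to it.
from pathlib import Path
import zstandard as zstd

pages = [p.read_bytes() for p in Path("archive").glob("*.html")]

# Train one shared dictionary (here 100 KB) on the saved pages.
shared_dict = zstd.train_dictionary(100 * 1024, pages)

compressor = zstd.ZstdCompressor(level=19, dict_data=shared_dict)
frames = [compressor.compress(p) for p in pages]

# Decompression needs the same dictionary.
decompressor = zstd.ZstdDecompressor(dict_data=shared_dict)
assert decompressor.decompress(frames[0]) == pages[0]
```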
By default, singlefile only saves when you tell it to, so there's no worry about leaking personal information.
I haven't put in the effort to make a "bookmark server" that accomplishes what SingleFile does but hosted on the internet, because SingleFile already works so well.
I was considering a similar setup, but I don't really trust extensions. I'm curious:
- Do you also archive logged-in pages, infinite scrollers, banking sites, fb, etc.?
- How many entries is that?
- How often do you go back to the archive? Is stuff easy to find?
- Do you have any organization or additional process (e.g. bookmarks)?
Are you automating this in some fashion? Is there another extension you've authored, or something similar, that invokes SingleFile on each new page load?
I've had such issues with them in the past too, yeah. I never figured out the root cause. But in recent times I haven't had issues, for whatever that's worth. (I also haven't really tried to open many of the old files either.)
By default, we do not train on any inputs or outputs from our products for business users, including ChatGPT Team, ChatGPT Enterprise, and the API. We offer API customers a way to opt-in to share data with us, such as by providing feedback in the Playground, which we then use to improve our models. Unless they explicitly opt-in, organizations are opted out of data-sharing by default.
The business bit is confusing; I guess they see the API as a business product. Either way, they do not train on API data.
Where does DeepSeek say that about API usage? Their privacy policy says they store all data on servers in China, and their terms of use says that they can use any user data to improve their services. I can’t see anything where they say that they don’t train on API data.
> Services for businesses, such as ChatGPT Team, ChatGPT Enterprise, and our API Platform
> By default, we do not train on any inputs or outputs from our products for business users, including ChatGPT Team, ChatGPT Enterprise, and the API.
So on the API they don't train by default; for the other paid subscriptions they mention you can opt out.
Look at it from an algorithmic perspective. In computer science, many algorithms take a non-constant number of steps to execute. However, in transformer models there are a limited number of decoder blocks and a limited number of FFN layers in each block. This puts a theoretical upper bound on the complexity of the algorithms a decoder network can execute in a single token-generation pass.
This explains why GPT-4 cannot accurately perform large-number multiplication or decimal exponentiation. [0]
This example extends to natural language generation in general. While some answers can be retrieved or generated immediately by a "cache" or algorithm that exists in latent space, other tokens come out with better quality when their latent-space algorithm is executed over multiple steps.
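A deliberately crude, back-of-the-envelope version of that argument. The layer count is an assumed round number for a large decoder-only model; the point is only that the per-pass budget is fixed while the work for schoolbook multiplication grows with operand size:

```python
# One forward pass gives a fixed amount of computation per generated token,
# while schoolbook multiplication, executed step by step, needs roughly
# n^2 digit operations for n-digit operands. Past some size the work has to
# be spread over many generated tokens (scratchpads / chain of thought).
N_LAYERS = 96  # sequential steps available in one forward pass (assumption)

def schoolbook_steps(n_digits: int) -> int:
    # n partial products, each with up to n digit-multiplies plus carries.
    return n_digits * n_digits

for n in (2, 4, 8, 16, 64):
    steps = schoolbook_steps(n)
    verdict = "plausible in one pass" if steps <= N_LAYERS else "needs intermediate tokens"
    print(f"{n:>3}-digit operands: ~{steps} digit operations -> {verdict}")
```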
> Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
This paper suggests that a large language model should "think ahead" by predicting not only the next token but also a "supporting thought." The approach involves generating all tokens simultaneously, allowing for a single forward pass that produces both the next token and a supporting thought, which might consist of, for example, 16 tokens.
This supporting thought influences the model's prediction. The process is then extended to multiple supporting thoughts by ingeniously masking cross-attention between thoughts to ensure their independence. So in essence we can fill all the remaining context with supporting thoughts and benefit from all of them in the same single forward pass.
The supporting thoughts themselves are trained with the objective of maximizing the probability of a longer sequence ahead, using RL. So they are optimized for the longer term, instead of the myopic next-token prediction task.
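A small sketch of the masking idea described above, building that kind of attention mask with numpy; the sizes are made up for illustration:

```python
# Thought tokens may attend to the shared prefix and to earlier tokens of
# their *own* thought, but never to tokens of other thoughts, so many
# thoughts can be generated in the same forward pass.
import numpy as np

prefix_len = 5      # tokens of ordinary context
n_thoughts = 3      # parallel supporting thoughts
thought_len = 4     # tokens per thought
total = prefix_len + n_thoughts * thought_len

mask = np.zeros((total, total), dtype=bool)  # True = attention allowed

# Ordinary causal attention inside the prefix.
for i in range(prefix_len):
    mask[i, : i + 1] = True

# Each thought: causal attention over the prefix plus its own tokens only.
for t in range(n_thoughts):
    start = prefix_len + t * thought_len
    for i in range(start, start + thought_len):
        mask[i, :prefix_len] = True          # see the shared context
        mask[i, start : i + 1] = True        # see earlier tokens of this thought

print(mask.astype(int))
```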
Very interested in the expansion of RL for transformers, but I can't quite tell what this project is.
Could you please add links to the documentation in the README, where it states "It includes detailed documentation"?
Also, maybe DPO should use the DDPG acronym instead, so your repo's Deterministic Policy Optimization isn't confused with trl's Direct Preference Optimization.
> I’ve done some investigation into this. In a well trained model, if you plot the intermediate output for the last token in the sequence, you see the values update gradually layer to layer. In a model that produces repeating sequences I almost always see a sudden discontinuity at some specific layer. The residual connections are basically flooding the next layer with a distribution of values outside anything else in the dataset.
> The discontinuity is pretty classic overfitting. You’ve both trained a specific token to attend primarily to itself and also incentivized that token to be sampled more often. The result is that if that token is ever included at the end of the context the model is incentivized to repeat it again.
...
> Literally just plotting the output of the layer normalized between zero and one. For one token in mistral 7B it’s a 4096 dimension tensor. Because of the residual connections if you plot that graph for every layer you get a really nice visualization.
> Edit: Here's my visualization. It’s a simple idea but I've never personally seen it done before. AFAIK this is a somewhat novel way to look at transformer layer output.
This is nearly identical to the overfitting example in the repo, only really representing a binary, but it's a good start. Perhaps some transformations can be applied to help further?
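For anyone who wants to try the plot described in the quoted comments, here's a rough sketch using Hugging Face transformers hidden states. The model name follows the comment (Mistral 7B), and the normalization is a simple per-layer min-max; the input text is arbitrary.

```python
# For the last token in the sequence, take the hidden state after every
# layer, min-max normalize it to [0, 1], and stack the layers into one image.
import torch
import matplotlib.pyplot as plt
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name, output_hidden_states=True, torch_dtype=torch.float16
)

inputs = tok("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# hidden_states: tuple of (n_layers + 1) tensors of shape [1, seq, hidden]
last_token = torch.stack([h[0, -1].float() for h in out.hidden_states])

# Min-max normalize each layer's vector to [0, 1].
mins = last_token.min(dim=1, keepdim=True).values
maxs = last_token.max(dim=1, keepdim=True).values
norm = (last_token - mins) / (maxs - mins)

plt.imshow(norm.numpy(), aspect="auto", cmap="viridis")
plt.xlabel("hidden dimension")
plt.ylabel("layer")
plt.show()
```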
Not a materials science expert, but per their paper they use DFT to verify stability, then feed the verification results back in to improve the model.
>candidate structures filtered using GNoME are evaluated using DFT calculations with standardized settings from the Materials Project. Resulting energies of relaxed structures not only verify the stability of crystal structures but are also incorporated into the iterative active-learning workflow as further training data and structures for candidate generation
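A schematic of that active-learning loop, just to make the flow explicit. Every function here is a stand-in stub, not the actual GNoME pipeline.

```python
# The model proposes candidate structures, a filter keeps promising ones,
# DFT verifies stability, and the verified results (stable or not) go back
# into the training set before the model is retrained.
def active_learning_round(model, training_set, generate, filter_candidates, run_dft):
    candidates = generate(model)                      # propose crystal structures
    promising = filter_candidates(model, candidates)  # GNoME-style screening
    results = [run_dft(c) for c in promising]         # expensive verification step
    training_set.extend(results)                      # verified energies become new labels
    model = model.fit(training_set)                   # retrain on the enlarged set
    return model, training_set
```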