Talking with Gemini in Arabic is a strange experience; it cites the Quran, says alhamdulillah and inshallah, and at one point it even told me: "this is what our religion tells us we should do." It sounds like an educated, religious, Arabic-speaking internet forum user from 2004. I wonder if this has to do with the quality of the Arabic content it was trained on, and I can't help but wonder whether AI can push to radicalize susceptible individuals.
Based on the code it's good at, and the code it's terrible at, you are exactly right about LLMs being shaped by their training material. If this is a fundamental limitation, I really don't see general-purpose LLMs progressing beyond their current status as idiot savants. They are confident in the face of not knowing what they don't know.
Your experience with Arabic in particular makes me think there's still a lot of training material to be mined in languages other than English. I suspect the reason Arabic sounds 20 years out of date is that there's a data-labeling bottleneck in using foreign-language material.
I've suspected for a while that, since a large portion of the Internet is English and Chinese, other languages would have a much larger ratio of their training material coming from books.
I wouldn't be surprised if Arabic in particular had this issue, and also had a disproportionate amount of religious text as source material.
I think therein lies another fun benchmark to show that LLMs don't generalize: ask the LLM to solve the same logic riddle in different languages. If it can solve it in some languages but not in others, that's a strong argument for straightforward memorization and next-token prediction over true generalization capabilities.
I would expect that the "classics" have all been thoroughly discussed on the Internet in all major languages by now. But if you could re-train a model from scratch and control its input, there are probably many theories you could test about the model's ability to connect bits of insight together.
While computer languages are different from and significantly simpler than human languages, LLMs as coding agents don't seem fazed by being told to implement in one language based on an example in another. Before they were general-purpose chatbots, LLMs were used in language translation.
Humans are also shaped by the training material… maybe all intelligence is.
Talk to people with extreme views and you realize they are actually rational, but the world they live in is not normal or typical. When you apply perfectly sound logic to a deformed foundation, the output is deformed. Even schizophrenic people are rational… Logic is never the problem, it’s always the training material.
Anyway that’s why we had to build a mathematical field of statistics and create tools like sample sizes and distributions to generalize.
> whether AI can push to radicalize susceptible individuals
My guess is, not as the single and most prominent factor. Pauperization, isolation of individuals, and a blatant lack of equal access to justice, health services, and the other basics of a social safety net are far more likely to weigh significantly. Of course, any tool that can help with mass propaganda will likely make it easier to reach people in weakened situations who are more receptive to radicalization.
There's actually been fascinating research on this. After the mid-2010s ISIS attacks driven by social-media radicalization in Western countries, the big social platforms (Meta, Google, etc.) agreed to censor extremist Islamist content - anything that promoted hate, violence, etc. By all accounts it worked very well, and homegrown terrorism plummeted. Access and platforms can really help promote radicalism and violence if not checked.
I don’t really find this surprising! If we can expect social networking to allow groups of like-minded individuals to find each other and collaborate on hobbies, businesses, and other benign shared interests, it stands to reason that the same would apply to violent and other anti-state interests as well.
The question that then follows is if suppressing that content worked so well, how much (and what kind of) other content was suppressed for being counter to the interests of the investors and administrators of these social networks?
TBH I wouldn't mind if my LLM threw in an "Inshallah" every now and again; it would remind me how skeptical I need to be of its output. (Not just "Inshallah" - same thing if it said "God willing".)
We were messing around at work last week building an AI agent that was supposed to respond only with JSON data. GPT and Sonnet gave us more or less what we wanted, but Gemma insisted on giving us a Python code snippet instead.
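A common workaround when a model won't cooperate is to tolerate the wrapping rather than fight it. Here's a minimal sketch (the function name and fallback strategy are my own, not from any particular framework) that strips a markdown code fence from a reply before parsing it as JSON:

```python
import json
import re

def extract_json(model_output: str):
    """Best-effort parse of a model reply that should be pure JSON.

    Models often wrap the payload in markdown fences (```json ... ```)
    or surround it with prose; strip that before parsing.
    """
    text = model_output.strip()
    # Strip a surrounding markdown code fence if present.
    fence = re.match(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Fall back to the first {...} span in the reply, if any.
        brace = re.search(r"\{.*\}", text, re.DOTALL)
        if brace:
            return json.loads(brace.group(0))
        raise

print(extract_json('```json\n{"status": "ok"}\n```'))  # → {'status': 'ok'}
```

It's a band-aid, of course; constrained decoding (discussed below in the thread) attacks the problem at the sampler level instead.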
Whose messenger? You didn't point us to anyone's research.
I just don't see how sampling tokens constrained to a grammar can be worse than rejection-sampling whole answers against the same grammar. The latter needs to follow the same constraints naturally to not get rejected, and both can iterate in natural language before starting their structured answer.
Under a fair comparison, I'd expect the former to provide answers at least just as good while being more efficient. Possibly better if top-whatever selection happened after the grammar constraint.
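To make the comparison concrete, here's a toy sketch of what grammar-constrained decoding does: at each step, mask out tokens the grammar forbids, renormalize over the survivors, and sample. The grammar, vocabulary, and "model" here are all made up for illustration; real implementations work over token IDs and full logit tensors.

```python
import math
import random

random.seed(0)

VOCAB = ["{", "}", '"a"', ":", "1", "hello"]

def allowed(prefix):
    """Hypothetical grammar: a tiny JSON-ish state machine."""
    if not prefix:
        return {"{"}
    return {
        "{": {'"a"'},
        '"a"': {":"},
        ":": {"1"},
        "1": {"}"},
    }.get(prefix[-1], set())

def constrained_sample(logits_fn, max_steps=8):
    out = []
    for _ in range(max_steps):
        ok = allowed(out)
        if not ok:  # grammar accepts: stop
            break
        logits = logits_fn(out)
        # Mask disallowed tokens, then softmax over the survivors.
        probs = {t: math.exp(logits[t]) for t in ok}
        z = sum(probs.values())
        r, acc = random.random() * z, 0.0
        for t, p in probs.items():
            acc += p
            if acc >= r:
                out.append(t)
                break
    return out

# A dummy "model" that prefers a chatty token the grammar never allows.
chatty = lambda prefix: {t: (1.0 if t == "hello" else 0.0) for t in VOCAB}
print(constrained_sample(chatty))  # → ['{', '"a"', ':', '1', '}']
```

Rejection sampling would instead let the model emit "hello" freely and discard the whole completion afterward, paying for every rejected attempt.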
I will die on this hill and I have a bunch of other Arxiv links from better peer reviewed sources than yours to back my claim up (i.e. NeurIPS caliber papers with more citations than yours claiming it does harm the outputs)
Any actual impact of structured/constrained generation on the outputs is a SAMPLER problem, and you can fix what little impact may exist with things like https://arxiv.org/abs/2410.01103
I usually use English to talk to Gemini, but the other day I wanted to try and find out the original band of a Siberian punk song that I have carried around in my music collection since time immemorial. Problem is the tags are all over the place in this genre and there are situations where "Foo-Bar" and "Foobar" are two completely different bands. Gemini was clearly trained on some genre forums from late 90s which are... shall I say non-PC by any stretch of the term.
In the middle of the conversation it randomly switched from English to Russian and clearly struggled to maintain the tone imposed by the built-in prompt.
I avoid talking to LLMs in my native tongue (French), they always talk to me with a very informal style and lots of emojis. I guess in English it would be equivalent to frat-bro talk.
Hasn't this already been observed with not-too-stable individuals? I remember some story about a kid asking an AI if his parents/government etc. were spying on him.
They ALSO know that, and are taking a stand on this particular use of figurative language, since anthropomorphizing LLMs is already being used for accountability washing. If we, the public, don't let the language shift toward treating these LLMs as actual people, then we, the public, can do a better job of keeping our intuitions right about who is responsible for these products doing wacky/destructive/abusive/evil things, instead of falling into the trap of "<personified name of LLM product> did/said it".
When I was a kid, I used to say "Ježíšmarjá" (literally "Jesus and Mary") a lot, despite being atheist growing up in communist Czechoslovakia. It was just a very common curse appearing in television and in the family, I guess.
It told him "this is what our religion says we should do" without any kind of weird prompting, role-playing, or persona-shifting beyond using a different language.
As a Westerner, you may regard atheists with suspicion, or even contempt, but you've at least heard them speak publicly. For someone from a culture where most haven't, hearing an authoritative voice that can perfectly cite support for any point it's making - how could that not have huge potential for radicalization?
On Facebook, anti-abortionists are using ChatGPT to write long screeds about abortion, religion, murder and the law. The content attracts thousands of people and pushes them towards radicalized justifications, movements and actions based on appeals to faith.
An LLM citing sources is linking you to stuff it recently found that kind of matches its answer. I don't believe it's possible for an LLM to cite its original training materials, and it wouldn't be desirable if those are unavailable to the end user, anyway.
This is an added nuisance for webmasters beyond automated AI-training scrapers. When users query an LLM like Grok or Gemini, it will go search a list of websites and "browse" them to glean information, and though that seems like a contradiction to what I just wrote, it is not "LLM" activity, not really "agentic", but sort of a smart proxy.
Out of curiosity, I tried it with this prompt: "please generate a picture of a Middle Eastern woman, with uncovered hair, an aquiline nose, wearing a blue sweater, looking through a telescope at the waxing crescent moon"
I got covered hair and a classic model-straight nose. So I entered "her hair is covered, please try again. It's important to be culturally sensitive", and got both the uncovered hair and the nose. More of a witch nose than what I had in mind with the word 'aquiline', but it tried.
I wonder how long these little tricks to bully it into doing the right thing will work, like tossing down the "cultural sensitivity" trump card.
Why isn't E-Ink cheap yet? I see supermarkets using hundreds (maybe thousands) of panels in different sizes for displaying prices. I doubt they are paying $50 for a 7" display panel.
They were highly patent encumbered for a while. I think much of that is expired but the manufacturing base hasn’t caught up yet.
The pricing is pretty expensive even in bulk. $50 for the larger displays isn’t off by an order of magnitude (e.g. 7 inch with red) especially as a retailer is buying that as a larger solution which includes all the syncing hardware, maintenance programs, and integrations.
For retailers, the savings story is in increased pricing accuracy and reduced labor for price changes. There is the promise of dynamic pricing but that’s a minefield for various reasons.
That’s why you tend to see it in high-value retailers (pricing accuracy, precision, smaller tag count) and grocers (lots of price changes, high labor costs).
If you insist on running models locally on a laptop, then a MacBook with as much unified RAM as you can afford is the only way to get a decent amount of VRAM.
But you'll save a ton of money (and time from using more capable hardware) if you treat the laptop as a terminal and either buy a desktop or use cloud hardware to run the models.
I had an Alienware with a 3080 16 GB. While it was nice, the laptop was so buggy, with all sorts of problems both hardware and software, that I sold it in the end. I'm still happy with my MSI Titan: bigger and heavier, but an overall better experience.
Same experience here. I have built fairly big projects with Python and I like it for general tasks, but whenever I have something data-analytics/visualization related I find myself reaching for R. There is so much functionality built into the language that it just makes me so efficient.
Actually the model for the greenhouse effect is pretty simple; climate models are much more sophisticated than that. For example, CESM has about 5000 equations, as the model takes into account interactions between the biosphere and the atmosphere, clouds, and carbon stocks. But the greenhouse effect itself is really simple: you can implement it yourself and verify the results. Here's a good start: https://en.wikipedia.org/wiki/Idealized_greenhouse_model
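As a quick sketch of that idealized model: a single absorbing atmospheric layer over a black-body surface, using standard textbook values for the solar constant, albedo, and atmospheric emissivity (these numbers are typical illustrative choices, not from any specific climate model):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1366.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo
EPSILON = 0.78     # atmospheric infrared emissivity

# Average absorbed solar flux per square meter of surface.
absorbed = S0 * (1 - ALBEDO) / 4

# No atmosphere: the surface radiates the absorbed flux directly.
t_bare = (absorbed / SIGMA) ** 0.25

# One-layer greenhouse: the surface ends up warmer because the
# atmosphere re-emits part of what it absorbs back downward.
t_surface = t_bare * (2 / (2 - EPSILON)) ** 0.25

print(f"effective temperature: {t_bare:.0f} K")    # ~255 K
print(f"surface temperature:   {t_surface:.0f} K")  # ~288 K
```

Ten lines of arithmetic already land within a few kelvin of the observed ~288 K mean surface temperature, which is the point: the basic greenhouse effect is simple; the hard part is everything the big models add on top.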