msvana's comments | Hacker News

This reminds me of the idea that LLMs are simulators. Given the current state (the prompt + the previously generated text), they generate the next state (the next token) using rules derived from training data.

As simulators, LLMs can simulate many things, including agents that exhibit human-like properties. But LLMs themselves are not agents.

More on this idea here: https://www.alignmentforum.org/posts/vJFdjigzmcXMhNTsx/agi-s...
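The simulator framing can be sketched with a toy bigram model, a deliberately crude stand-in: the "training data" and the counting "rules" below are my own illustration of the state-transition idea, not how a transformer actually works.

```python
# Toy "LLM as simulator": the state is the prompt plus everything generated
# so far; each step appends one token, chosen by rules derived from training
# data (here, just bigram counts).
import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat slept".split()

# "Training": record which token follows which.
rules = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    rules[prev].append(nxt)

def simulate(prompt, steps=5, seed=0):
    random.seed(seed)
    state = prompt.split()
    for _ in range(steps):
        candidates = rules.get(state[-1])
        if not candidates:
            break  # no rule for this state; the simulation halts
        # Next state = current state + one sampled token.
        state.append(random.choice(candidates))
    return " ".join(state)

print(simulate("the cat"))
```

The simulator itself has no goals; "agent-like" behaviour would only appear in the trajectories it rolls out, which is the point of the linked post.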

This perspective makes a lot of sense to me. Still, I wouldn't avoid anthropomorphization altogether. First, in some cases, it can be a useful mental tool for understanding some aspects of LLMs. Second, there is a lot of uncertainty about how LLMs work, so I would stay epistemically humble. The second argument cuts both ways: for example, it's equally bad to claim that LLMs are 100% conscious.

On the other hand, if someone argues against anthropomorphizing LLMs, I would avoid phrasing it as: "It's just matrix multiplication." The article demonstrates why this is a bad idea pretty well.


I had this exact analogy in mind when writing point no. 3


Thanks, sounds kinda similar to my first point.


Problem no. 2 (understanding user intent) is relevant not only to writing SQL but to software development in general. Follow-up questions are something I've had in mind for a long time. I wonder why this isn't the default behavior for LLMs.


At the beginning, the article mentions correlation with language skills AND problem-solving. Focusing only on language skills in the second half is misleading. According to the abstract of the original paper, problem solving and working memory capacity were FAR MORE important.

Also, the article doesn't mention "math skills". It talks about numeracy, which a cited paper defines as "the ability to understand, manipulate, and use numerical information, including probabilities". That's only a very small part of mathematics. I would even argue that mathematics involves a lot of problem solving, and since problem solving is a good predictor, math skills should be a good predictor too.


Going further, it seems like Language Aptitude was primarily significant in explaining variance in learning rate, measured by how many Codecademy lessons they completed in the allotted time, but wasn't explanatory for learning outcomes based on writing code or answering multiple-choice questions.

Seeing as Codecademy lessons are written in English, I would think this may just be a result of participants with higher Language Aptitude being faster readers.

I do think that language skills are undervalued for programming, if only for their impact on your ability to read and write documentation or specifications, but I'm not sure this study demonstrates that link in a meaningful way.


Hmm, this gave me an interesting project idea: a coding assistant that talks shit about your lack of skills and low code quality.


Hopefully you'll forgive my ignorance, but this is the first time I hear about DuckDB. What space does it occupy in the DBMS landscape? What are its use cases? How does it compare to other DBMS solutions?


Hi, DuckDB devrel here. DuckDB is an analytical SQL database in the form factor of SQLite (i.e., in-process). This quadrant summarizes its space in the landscape:

https://blobs.duckdb.org/slides/goto-amsterdam-2024-duckdb-g...

It works as a replacement for, or complement to, dataframe libraries thanks to its speed and (vertical) scalability. It's lightweight and dependency-free, so it also works well as part of data processing pipelines.


Hello, I'd love to use this but I work with highly confidential data. How can we be sure our data isn't leaking with this new UI? What assurances are there on this, and can you comment on the scope of the MotherDuck server interactions?


I have a few thoughts after reading this:

- I've started to see LLMs as a kind of search engine. I can't say they are better than traditional search engines: on one hand, they are better at personalizing the answer; on the other hand, they hallucinate a lot.

- There is a view that new scientific knowledge is made by connecting existing dots. Maybe LLMs can assist with this by helping scientists discover relevant dots to connect. But as the author suggests, this is only part of the job. To find the correct ways to connect the dots, you need to ask the right questions, examine the space of counterfactuals, etc. LLMs can be a useful tool, but they are not autonomous scientists (yet).

- As someone developing software on top of LLMs, I am slowly coming to the conclusion that human-in-the-loop approaches work better than fully autonomous agents.


Instead of connecting language with physical existence, or entities, it's connecting tokens. An LLM may be able to describe scenes in a video, but a world model would tell you that said video is a deepfake because of some principle like conservation of energy and mass, informed by experience, assumptions, inference rules, etc.


Well, I guess the author of the blog post agrees, since he talks about "its demise in the 1980s". Probably a lot has changed since then, and honestly I'm kinda curious about it.


Eh, once Ma Bell got broken up, not much of interest happened. My dad worked out of their Naperville office until it closed down. By that point it was less R&D and more "how do we make money".


Probably in large part because when you're no longer a monopoly, you can't soak the consumer for enough extra margin to spend many person-years of time and money going nowhere most of the time.

That's putting it harshly, of course, but it's probably notable that the only places you might find something approaching a team or department like the Bell Labs of old are large incumbents like Apple or Microsoft. If you're a smaller competitor or in a highly competitive space, you don't have the luxury of spending large chunks of money and people on R&D that may never produce anything useful or salable. In theory you might be able to get something like this out of academia, but then you run into the publish-or-perish mindset.

I wonder if one way state and federal governments could encourage development in towns that are dying as the world consolidates and small towns lack opportunity would be to subsidize these sorts of non-competitive R&D spaces in those otherwise undesirable areas. A multi-pronged subsidy: to the workers (discounted home loans, dedicated public transport), to local industry (grants or loans to builders to build homes and infrastructure), and to the companies themselves (tax incentives, short-term subsidizing of salaries, etc.). In exchange, the public and the government get the results of the research, perhaps under reduced-term patents or special licensing deals.


I grew up across a cornfield from the Naperville office, so our then-new subdivision included dads and moms who worked there on engineering topics.

One thing unmentioned that we benefited from as kids in the 80s was the super kick-ass technical bookstores in Naperville and Wheaton.

I was sad seeing the big red zero logo of Lucent on the new building across the street, and the withering of the place.


I don't think using your real name helps that much. Most people on Facebook use their real name and still many have no issues with insulting other people or even supporting violence against individuals or groups of people.


I think it's more distance and the lack of consequences that cause toxic behaviour. People will be nicer online to their friends, relatives and coworkers than to strangers, since hurting or alienating those people will have actual effects on their day-to-day life.

Meanwhile, the average person on Reddit/Twitter/YouTube/whatever is someone you're very unlikely to ever deal with beyond a few passing comments online. So sadly, many people don't care much about being particularly civil towards them, since they're basically a nonentity in the asshole's life. Regardless of whether that person has a real name or photo attached to their account.


Yeah, I'm interested in the psychology of that. Perhaps it's a little like the Stanford Prison Experiment. But I also suspect there's something fundamentally different. I don't know; I'm no expert on this.


It's not a panacea by any means, but the level of abuse people get on Facebook doesn't seem quite the same as on sites like Reddit or Twitter. Part of that may be down to the structure of the site, but you don't seem to get the huge mobs hurling death threats and abuse at someone on Facebook.

