Hacker News

OK somehow I've become blasé with text and image generation. (I shouldn't be, but hey)

What impressed me was the possibility that you used AI to produce a responsive HTML structure and somehow assembled all the assets... but from what I can tell this was still done manually, even if using a template.

It's funny how I am no longer amazed by these generated images. They are amazing. Yet I am still skeptical AI can assemble assets and html from scratch :)



>Yet I am still skeptical AI can assemble assets and html from scratch :)

the skepticism and the general dismissive tone on HN baffles me.

you're talking to a computer in English ffs and it not only understands you, it goes away and does useful work.

literal science fiction 5 years ago, and now it's somehow: yawn <insert random nitpick>????

talk about hard to impress!


> you're talking to a computer in English ffs and it not only understands you, it goes away and does useful work.

Google has “understood” English queries for a while now in the sense that you can type a question in English into the search bar and it goes off and does “useful work”.

The problem with these AI implementations is that it goes off and does work, but not always useful work. In many cases it’s wrong. In many cases the result is below entry-level.

In those times where it can pass a structured test like a bar exam, then absolutely praise it and get curious about how it can add value to people’s lives.

But in this instance, it couldn’t pass a CS 101 or Design 101 class. It’s on its way, but these claims of being able to use AI to create websites are premature.

At this pace we might be there in a few years or even months, but right now the quality is lacking from any demo I’ve seen and in my own attempts as a designer to use it in the design process.


you've got to be joking mate.

>Google has “understood” English queries

you can't, with a straight face, be comparing a Google search to what ChatGPT is doing. C'mon man.

>In many cases it’s wrong. In many cases the result is below entry-level.

Once again I'm baffled by this dismissal. I don't think you actually understand what you're seeing with ChatGPT. There is very interesting emergent behaviour occurring. Trying to explain the behaviour is literally cutting edge research.

>but right now the quality is lacking

the technology was literal science fiction 3 years ago, what it can do is astonishing. If you're not jaw on the floor astonished then you simply don't understand what kind of a leap this represents.


Google has understood English queries for years. Lately it has even been highlighting the answers you are looking for in the websites it shows. ChatGPT is different, but not far off.


Google doesn't do open ended reasoning.

It can't refine the tasks you give it based on a long conversation.

It can't take elaborate what-if scenarios and role play. Reasoning about multiple actors in interesting situations is light years ahead of Google.

Google can't write code, or translate code to a human language explanation of an algorithm.

There is more than a couple orders of magnitude difference in what is happening!


Neither does ChatGPT. It just appears that it does to a machine (human brain) that can do it and assess if others do it too.


Google has always had a hard time understanding simple constructs like "not" and "from" and "to". ChatGPT seems to actually get what I'm saying most of the time.


> you can't, with a straight face, be comparing a Google search to what ChatGPT is doing. C'mon man.

I’m comparing the utility of the output

> Trying to explain the behaviour is literally cutting edge research.

Here’s a good summary:

What Is ChatGPT Doing … and Why Does It Work? It’s Just Adding One Word at a Time

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-...

> If you're not jaw on the floor astonished then you simply don't understand what kind of a leap this represents.

Its conversational abilities are truly impressive. But its ability to create websites is lacking, which is what this article is about.


The advancement with these LLMs lies in the fact that they can effectively learn to recognize patterns within “large-ish” input text sequences and probabilistically generate a likely next word given those patterns.

It’s a genuine advancement. However it is still just pattern matching. And describing anything it’s doing as “behavior” is a real stretch given that it is a feed-forward network that does not incorporate any notion of agency, memory, or deliberation into its processing.
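To make "probabilistically generate a likely next word given those patterns" concrete, here is a toy sketch: a bigram model, the simplest possible version of the idea (the corpus and all counts are made up for illustration; real LLMs condition on long contexts with a neural network, not a count table):

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- pattern matching over text,
# reduced to its most basic form.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it
    # followed `prev` in the training text.
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # one of: cat, mat, fish
```

The disagreement in this thread is essentially about how much the jump from this count table to a trillion-parameter network changes the nature of what is being done.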


You are comparing systems that generate completion text based on statistics and correlation with a system that now models actual complex functional relationships between millions of concepts, not just series of letters or words.

The difference is staggering.

It comes about because of the insane number of computational iterations (not required for normal statistical completion) that map vast numbers of terabytes of data into a set of parameters constrained to work together in a particular way (layers of alternating linear combinations followed by non-linear compressions), which forces functional relationships to be learned in order to compress the information enough to work.
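For readers unfamiliar with the phrase, "layers of alternating linear combinations followed by non-linear compressions" can be sketched in a few lines (weights here are arbitrary made-up numbers, and real networks use millions of units, not two):

```python
import math

def layer(x, W, b):
    # One "linear combination followed by a non-linear compression":
    # y_i = tanh(sum_j W[i][j] * x[j] + b[i])
    return [math.tanh(sum(w * xj for w, xj in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Two stacked layers; tanh squashes each output into (-1, 1).
x = [1.0, -2.0]
h = layer(x, W=[[0.5, 0.1], [-0.3, 0.8]], b=[0.0, 0.1])
y = layer(h, W=[[1.0, -1.0]], b=[0.2])
```

The squashing step is the "compression": information has to be packed into a bounded range at every layer, which is what forces the parameters to encode relationships rather than raw data.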

It is a profound difference both in methodology and results.


It's modeling patterns found across the massive corpus of textual training input it has seen -- not the true concepts related by the words as humans understand them. If you don't believe me then ask ChatGPT some bespoke geometry-related brain teasers and see how far it gets.

I want to be clear that the successful scale-up of this training and inference methodology is nonetheless a massive achievement -- but it is inherently limited by the nature of its construction and is in no way indicative of a system that exudes agency or deliberative thought, nor one that "understands" or models the world as a human would.


> [...] no way indicative of a system that exudes agency or deliberative thought, nor one that "understands" or models the world as a human would.

Certainly not - its architecture doesn't model ours. But it has taken a huge step forward in our direction in terms of capabilities, from early to late 2022.

As its reasoning gets better, simply a conversation with itself could become a kind of deliberative thought.

Also, as more data modalities are combined (text with video and audio, human-generated material and recordings of the natural world, etc.), and as math is included more systematically, its intuition about solving bespoke geometry problems, and other kinds of problems, is likely to improve.

Framing a problem is a lot of the solving of a problem. And we frame geometry with a sensory driven understanding of geometry that the current ChatGPT isn't being given.


>However it is still just pattern matching

the visual cortex in your brain is also "just a pattern matching" system. guess it's not very impressive by your standard.

This[1] isn't my example (it's from another HN user), but if you work as a programmer and you're not absolutely jaw on the floor astonished by this example then I don't know what to say.

Explaining[2] the emergent behaviour is literally cutting edge research. Hand waving this behaviour away as just "probabilistically generating a likely next word" is ignorant.

It's amazing in similar ways to Conway's Game of Life.

[1] https://imgur.com/HOEnxYb

[2] https://ai.googleblog.com/2022/11/characterizing-emergent-ph...


I'm arguing against the notion that these LLMs exhibit "emergent behaviour" as you stated. I don't believe they do, as the term is commonly understood. Emergent behavior usually implies the exhibition of some kind of complexity from a fundamentally simple system. But these LLMs are not fundamentally simple, when considered together with the vast corpus of training data to which they are inextricably linked.

The emergent behavior of Conway's Game of Life arises purely out of the simple rules upon which the simulation proceeds -- a fundamental difference.


did you read the article?

emergent behavior in this context is defined as: "emergent abilities, which we define as abilities that are not present in small models but are present in larger models"

>The emergent behavior of Conway's Game of Life arises purely out of the simple rules upon which the simulation proceeds -- a fundamental difference.

this is a meaningless distinction.


> emergent behavior in this context is defined as: "emergent abilities, which we define as abilities that are not present in small models but are present in larger models"

Then I don't know why you brought up Game of Life because it obviously has nothing to do with this alternative definition of emergent behavior.

> this is a meaningless distinction.

It's meaningful with respect to the claim that LLMs exhibit emergent behavior in the same way in which Game of Life does.


>It's meaningful with respect to the claim that LLMs exhibit emergent behavior in the same way in which Game of Life does.

I said it's amazing in __similar__ ways to Conway's Game of Life.

i.e. a system which behaves in unexpected ways (emergent abilities) and is greater than the sum of its parts.


A propos of [1]

1. Item 3: The ocean is full of floating objects, and it would be hard to see the duck among them? 2. Item 2: is structured as non sequitur, takes a long time because there are many hazards?

I am impressed that you find it impressive. It is plausible-sounding, and I find that disturbing, but it is not useful (and the text prediction paradigm seems a dead end in terms of formulating anything more than plausible sounding)


> the visual cortex in your brain is also "just a pattern matching" system

I can think and a pattern matching system can't.

> you're not absolutely jaw on the floor

Not at all. Autocomplete will autocomplete.


What is thinking?


You are asking the right question.


Technically, it's not pattern matching. It's estimating conditional probabilities and sampling from them (and under the hood, building blocks like QKV attention, aka a probabilistic hashmap, and the optimization used decide what it does anyway, ignoring any theory behind it).
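The "probabilistic hashmap" framing of attention can be shown in miniature. A hashmap matches a query against keys exactly; scaled dot-product attention instead scores the query against every key, softmaxes the scores into a probability distribution, and returns the probability-weighted average of the values (toy vectors below are illustrative; real models do this with learned projections across many heads):

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(q, keys, values):
    # Score the query against every key (scaled dot product),
    # then return the softmax-weighted average of the values.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

A query that strongly matches one key retrieves (almost exactly) that key's value, which is the "hashmap" limit; in between, it blends values, which an actual hashmap cannot do.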


This technology was not literally science fiction three years ago. The initial gpt3 beta was released in June 2020.


what's your point?


curious what the emergent behavior is. it's a massive hash lookup table at a scale never accomplished before, and then you can look things up recursively and such.

then follows whether humans/consciousness is not also just a hash lookup.

humans have the emergent property of consciousness, so philosophy aside, what's currently emergent in chatgpt?



thanks for earnest response!


> the skepticism and the general dismissive tone on HN baffles me.

I think that it comes from having high expectations. It took me four or five questions before the OpenAI chat started to write lies and half-truths. When talking with it I feel like a teacher trying to get the correct answer from a student that does not get it. I give it cues, tips, bound the answer, and yet it manages to get things wrong.

So, the worst part of the OpenAI chat is that it gets very boring to work with after the first awe.

It is a great piece of technology, and it is impressive. But as a product it is dull. And I need to double-check its answers with Google each time, so why bother?

Using Chat AI for Bing is like using blockchain for cryptocurrencies. It is a solution looking for a problem. Bing is the wrong answer, people should accept that and look for better uses for it. They are out there for sure.


If you find it remarkable, you're either easily impressed or you don't know about its shortcomings.

Likely it'll remain an engine for parlor tricks and infinite "demos" for the foreseeable future; the many critical mistakes it makes render it impractical (you can't trust any information it outputs without verifying yourself that it's not a hallucination). After all, the more text you autocomplete, the more text you need to check.

You're making it sound as if it's something comparable to the development of the Internet in its impact.


I'll take your word for it, but pesky parlor tricks have been persisting in impressing people for a good few centuries now.


There are tricks and there are transformative changes, but they are not the same. In this case the flashy trick is based on some interesting developments, but IMO it's unclear whether they will lead to transformative changes.


Oh hai. Did you not notice the smiley face at the end of my sentence?

Less about any dismissive tone I may have, and more about your terse jumping-in defending computerz.


It can do it; the challenge is the amount of markup. I have a system that can do it given a template: you generate or manipulate the JSON content using the AI, but the template itself doesn't really fit in memory (the context window). Something like pug.js would fit for a lot of templates, but that's a security issue.
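A rough sketch of the split the comment above describes: keep a fixed template out of the model's context and feed it only the small JSON payload. All names are illustrative, and Python's stdlib `string.Template` stands in for a real template engine like pug.js:

```python
import json
from string import Template

# The hand-written template stays fixed and never has to fit in the
# model's context window; only the JSON content does.
page = Template("""<article>
  <h1>$title</h1>
  <p>$body</p>
</article>""")

# Pretend this JSON string came back from the model (hypothetical output).
model_output = '{"title": "Hello", "body": "Generated content."}'
content = json.loads(model_output)

html = page.substitute(content)
print(html)
```

The security concern mentioned above is why one would use a substitution-only template here: engines like pug.js can execute arbitrary code embedded in the template, which is dangerous if the template (or the data) is model-generated.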


Ask Bing Chat to write you some HTML snippets, say of the current news headlines arranged like a newspaper. You might be surprised.



