Appreciate the effort, but these demos have really quickly become so boring. If you can design an entire website with the AI tools, it's almost certainly a website you didn't need to create in the first place. Either it has already essentially been made by somebody who actually understood and cared about the code, or it's such a trivial mix of stuff you're better off making a Squarespace site or whatever. At least that is hardened and tested as a product.
Like, we should maybe give pause to the fact that this is at least the 20th demo specifically about "making a website." If this technology is so powerful, saves so much time, democratizes the profession so much, etc why are the demos all the same? Either people are unimaginative, which they are not, or perhaps this is pretty much what we can get from this in general.
Coding involves care and inspiration, and should not necessarily be reinventing the wheel again and again, even if you're getting the AI to do the reinvention for you.
But beyond any principled thoughts, I'm just not seeing the returns here that people are trying to get me to see. Lots of laymen have been making wonderful websites for 30 years. It's cool the computer knows what CSS people have been using for those decades, but it only makes brittle, uninspired things that only incidentally work.
I'd really like to know, beyond the hype, if any serious engineers have actually used this stuff for an extended amount of time. I've tried and tried to use it and convince myself it's helping me, but it just leads to so much frustration with the hallucinations and misunderstandings and schizophrenic comprehension of its own context.
give it a rest mate. Most coding is garbage generic gluing of boilerplate and API calls.
Aligning buttons and agonizing over 18px vs 20px padding is a monumental waste of human capital. 99.999% of coding is basically an interface to a DB (fetch some data, display it, maybe do some basic arithmetic and then save the data).
A machine will do all that drudgery better and quicker. We're not there yet, but the writing is on the wall.
>and should not necessarily be reinventing the wheel again and again
You are definitely right that most code is banal and glue, and that is indeed why the models, or this one at least, seems to be best suited to regurgitate it and change the names of your variables.
I guess I still don't see your main point though. There are plenty of tested and compliant style frameworks out there that have figured this stuff out, and they will do it better than whatever Stack Overflow example the AI gives you. If you don't have time to really think about and learn deep or even basic web design, such that you don't spend so much time on your padding, there is a wealth of frameworks and products and tools and templates that solve that problem with intentionality and some basic assurances. From the highest level of Squarespace to free coding courses, there is an entire spectrum to meet you halfway.
It just sounds to me like maybe you are burnt out, or maybe you don't like coding stuff as much as others. Which is fine! And if this stuff helps you do your drudgery, that's fine too. But I would just say that you could work even smarter than this (presumably you are using it).
I would wager that using the right dumb tools will in the long run save you more time and enthusiasm than all this, and you will make better stuff, even if it's "boring" or bad to you. It's a wager I could see losing, at least on the time part, but I'd still make it.
I fully agree that something like this will help keep the bullshit glue we make for our bosses pumping, but is that really something to be so excited for as people are?
I'm getting tired of intelligent people dumping on one of the most incredible advances our field has ever seen.
I was cynical of everything - smartphone incrementalism, the slow pace of biology developments, social media. It didn't look like the hope and promise of the 1950s and '60s, jet packs and robots, would ever pan out. You'll probably find historical comments I've made on HN criticizing Ray Kurzweil and his like for their "bullshit Singularity" (as I used to call it). I firmly believed we had plateaued as a society.
With everything going on in AI/ML, I feel wrong about my assessment and finally have hope for the future. This is the automation of automation. Second order.
Please imagine what this means in the fullness of time. You might think HTML / CSS generation is a toy - it is. But what's coming will wash away everything you know. And it'll happen fast. People are plugging themselves into this tech and starting to dig away at existing problems and markets. It'll happen more quickly than with tech and the internet because there are more of us now.
> these demos have really quickly become so boring.
I'm about to put CNN/NBC/etc. out of business, and I'm just getting started: https://fakeyou.com/news
I can think of an AI/ML way to destroy almost every incumbent business out there. You should be applying yourself to that right now.
These are just the cards I'm willing to show now.
> It just sounds to me like maybe you are burnt out, or maybe you don't like coding stuff as much as others.
Maybe they don't like churning butter. The point isn't to write code, it's to create and solve problems.
This is the biggest opportunity of our lifetimes staring us in the face. Don't sleep on this.
The “GPT is going to wash away everything” discourse sounded familiar here on HN, and of course, I had read it multiple times in your posts, bordering on spam.
I don’t know if your tech is going to wash away anything, and I do wish you good luck, but CNN has been digging their own grave for the best part of a decade and you claiming the credit for taking them down is some of the worst hubris I’ve read on HN.
As for the applications of LLM I’ll take François Chollet’s view that they may be more limited than many people are claiming.
> You'll probably find historical comments I've made on HN criticizing Ray Kurzweil and his like for their "bullshit Singularity" (as I used to call it).
> I feel wrong about my assessment and finally have hope for the future
> These are just the cards I'm willing to show now.
When you're showing me one flush card, the other one is a rag and you're trying to get me to fold. If you had the flush, you could show me both cards and get me to fold for sure.
AI can give anyone the power to make music, art, film, and media without the time, talent, and institutional capital investments.
AI can automate people's jobs and reduce cost structures to accomplishing work and business objectives.
That's the difference, and the evaluation criteria are easily decided by the free market. Did it do a good job? Will it get hired again to repeat similar work? Are more customers adopting it? That's all you need to judge its success or relevancy.
Until AI can give us a just society where people are fed, clothed, medically treated and housed globally, I'm not tripping over myself calling it the future.
It's just more of the same, just faster and bigger. Sure, that's something, but it doesn't give me much hope by anything it has proven so far.
It's cool how stuff like this can be made so trivially now. It has the potential to be quite useful, even though it is very simple to copy. What you have here is essentially the same as what is in the article, which was created by someone with no programming experience.
Steps to reproduce:
1. Generate a face
2. Pass face to animator API
3. Feed text to voice api
I think we should generate less CRUD, competing standards, endless new languages, frameworks, databases, Linux distros, and so on. I don't see why we should automate this. We could get far greater gains in productivity by less duplicated effort.
While I largely don't disagree with you, what a really nasty way to say it, no? It's true that a lot of what happens is boilerplate API calls and copy-pasting of solved problems, and AI is making that happen a lot faster. However, the parent comment also has some good points. If you work for a large company, most of your day is spent understanding domain-specific problems and then executing on that. It would be nice to see something more exciting, because this is the Nth version of "AI made a thing" I have seen.
I'm curious what you think is worthy of humans to do?
Often in life I find myself reminding myself that everything is just something to do.
Serious question: what do you then distinguish as enlightened pursuits vs. not, for lack of a better word?
Edit to add: I painted my family's garage in my teens, manually. I'm in my thirties now, and when I go to my family home I see the painted garage. It never leaves me, the value.
I'm disappointed with how unsophisticated modern web development is. Our tools and processes are extremely fragile and error-prone, and frequently produce bug-ridden junk. So much so that we feel the need to rewrite entire apps every n years, if the app hasn't already died under its own weight by then.
The output we produce is 99% predestined by the tooling/framework we are placed in front of. And then we claim to be craftsmen.
To that extent nothing is worthy of humans to do. Nothing we do is truly unique; it only seems unique because we have a poor understanding of it. Might as well not exist or eat or poop, because that's not worthy.
I do get the sentiment, I will say. When I first got a job as a SWE, all I did was CRUD apps, and they are all fairly similar. Then I worked on WordPress for a while and it felt even worse. For about six years now I've personally moved on and work on some more exciting stuff. A lot of that, though, is still just DB read-write and display at the end of the day, and that's fine imo. I am very skeptical about AI, but if I take the most positive spin, it automates away the boring parts? That could be good.
After reading the replies, I'm understanding that drudgery has to do with lack of agency.
Can't websites be art too? A mindless CRUD app is framed as such if BigCo tells us to do it for the billionth time.
But if it's our own agency that compels us, even though it's still just glue and CRUD, our point of view makes it different for us, and we still have the potential to express ourselves in some way, some tiny way.
So it's not that websites as a thing are somehow un-enlightened, lowly work fit for the robots.
No, it seems there's Art in anything; art is an expressed point of view. I do agree that human value comes down to Art.
> i'm curious what you think is worthy of humans to do?
It depends on what you value. If you value efficiency, automation, thinking deeply about new problems, then gen AI will likely free you from what you consider tedium. However, that’s not to say that those are the only valuable characteristics. Painting your family home is a noble, rewarding thing. If you enjoy the work, then it’s worthy.
This is fun for me, not drudgery. Painting the garage sounds like a hellish nightmare. Maybe they could automate that someday, so I can be free to move pixels around.
I love how every person on the VR website is wearing a completely different headset. It's just haphazardly throwing together vague associations of "VR." It doesn't understand anything.
It's an amazing achievement of AI research, but we're not anywhere close to AI replacing humans in any kind of engineering discipline.
A giant leap for AI but a step backward for man. We're gonna see code quality decline due to people thinking they can let the AI do it. Most importantly, we'll probably see an uptick in vulnerabilities.
He could've made it the same headset using one of the standard tools for defining a specific object to get consistency across images (Textual Inversion, DreamBooth, hypernetworks, LoRA etc etc). I'm sure Midjourney supports something for that purpose, he just didn't use it.
Here is my example. In the last two weeks I have rewritten, from scratch, one of my genomics pipelines which can analyse 1000 clinical exomes in a few hours to return candidate genetic causes of rare disease.
* 42 scripts/programs.
* 3694 lines of code.
* Several languages.
* Process raw fastq data up to germline variant calling.
* Using high-performance computing cluster.
* Process vcf data into clinical variant interpretation.
* Options for custom filtering strategies.
* Statistical analysis and logs.
* Replaced several mainstream tools that are difficult to manage.
Final results:
1. This work was ~6-10x faster.
2. I probably would not have rewritten this better version as it would have taken too long.
3. Instead of getting stuck at any difficult impasse I can get context-specific alternatives and testable example code.
Only ChatGPT. I usually state bullet points and pseudocode and ask for an answer, or give it my current code with a request to fix the error. With good requests I usually get a working answer, but it sometimes needs careful checking.
How would you estimate the work breaks down between you and chatgpt as a rough percentage? It sounds like you're still doing a lot of the coding, obviously lots of prompt work, analysing and understanding the results, etc.
Because the new-developer market is huge. There are more beginner/intermediate developers than seniors by a large margin. It gets the clicks.
As an anecdote, our technical blogs covering vital industry techniques and news get like 10% of the readers our introduction blogs do. I'd imagine the best-selling programming books are the ones aimed at beginners too.
While such examples are no doubt impressive, I haven't seen these AI tools do anything that couldn't otherwise be achieved by following a simple online tutorial. Bootstrap a fresh project, write hello world or fizzbuzz-equivalent code in a random programming language, generate a landing page...all of these tasks have long been commodified and aren't exactly worth big money. So I'm not scared for my job just yet.
"But one day they will..."
Maybe, maybe not. AI could take over the world, or it could easily be that we are seeing the very peak of what generative models can do. It is impossible to extrapolate the current state of the tech 5/10/20 years into the future and make predictions based on where you think it will be. Look at self-driving tech for a close comparison: research has been plateauing for 5+ years, and the tech is even going backward now due to safety concerns.
That's not true about self-driving research plateauing. Tesla, Waymo and Cruise have all made significant progress in the last five years.
What other categories of high tech reached their absolute peak right when everyone started to notice them? We can extrapolate because there are clear trends. And a long history of inventing new things when we hit a bottleneck.
It's like someone looking at a car in the early 1900s and saying "this may be the peak of automotive technology, who knows".
> What other categories of high tech reached their absolute peak right when everyone started to notice them?
Just in the last decade – crypto/blockchain/web3, VR/AR, metaverse, smart glasses, 3D TVs, foldable phones, modular phones, chatbots, hyperloop, package delivery robots/drones.
Every single one of these areas had all the top tech talent behind it, billions in VC funding, flashy demos and outsized public hype, but ultimately they all fizzled out due to technology barriers or lack of demand or a real use case.
Actual world-changing innovations may seem obvious in hindsight, but every grand prediction and bet about the future of tech is a lot more likely to fail than succeed.
> It's like someone looking at a car in the early 1900s and saying "this may be the peak of automotive technology, who knows".
And that may have been a fair assessment for the time without the benefit of a crystal ball. Someone else who said the same thing when looking at blimps was 100% correct. In fact if you asked people of that time where the future of transportation would go, a large majority would have said "flying cars", because that was the obvious technological evolution at the time just as self driving tech is for us right now.
Blockchain technology never stopped continuing to improve. Same with VR/AR, the hardware keeps getting more comfortable and more capable. Package delivery drones have yet to be deployed at a large scale but are gradually increasing usage.
Arguably there have been many successful metaverses, including things like Second Life, World of Warcraft, Minecraft and Roblox. The VR part of the metaverse hasn't taken off yet due to comfort, expense and poor mixed-reality capabilities. And we are still waiting for large-scale open metaverses not controlled by a company. That probably will come eventually, once we get better hardware, wider adoption, and more people who understand the concept of the metaverse.
To the point, cars still have 4 wheels for the same reason wagons had four wheels before cars existed: 3 wheels are much less stable, 5 adds significant complexity, and nothing other than wheels would be as efficient.
Hovercraft, helicopters, planes, ... all have no wheels (for transportation).
I wasn't meaning to be specific about the 4 wheels; I was trying to point out that innovation happens when necessity demands it. Why do we have batteries in cars even though the very first cars[1] were electric?
>>> Ironically cars have basically not changed since the 1900s: 4 wheels, a steering wheel and a polluting oil-based engine.
>> What would you use to replace the 4 wheels
> Hover crafts, helicopters, planes
None of those are "cars"
Computer technology has not been capable of self-driving until now. Some would say it still isn't.
Likewise, battery tech has been practically useless for cars for over 80% of the time cars have existed. Sure, electric vehicles held the land speed record in 1899 -- when the record was 105km/hour. The practical range of an electric vehicle was less than 100 miles as recently as what, fifteen years ago?
In the past 100 years the top speed of everyday vehicles has increased by 2-3 times, the mileage has doubled, and vehicle comfort, safety, and reliability have increased by an insane amount. To say that no progress has been made is absurdly mistaken.
They have changed. The basic utility is to go from point A to point B. They now do that about 3-10 times faster and more comfortably and reliably than they did when they started.
If generative AI gets 4 times better, then that will matter a lot.
I would like to think that the Apollo missions were to manned space exploration as Marco Polo's journeys were to inter-continental exploration. In other words, an immensely difficult undertaking which was ahead of its time, but served as a proof of concept to inspire future generations.
I'm not particularly an Elon fanboy, but I believe that Starship or another reusable rocket system will eventually lead to extensive and cost-effective human exploration of the Moon and Mars over the coming decades.
OK, somehow I've become blasé about text and image generation. (I shouldn't be, but hey.)
What impressed me was the possibility that AI was used to produce a responsive HTML structure and somehow assemble all the assets... but from what I can tell this was still done manually, even if using a template.
It's funny how I am no longer amazed by these generated images. They are amazing. Yet I am still skeptical AI can assemble assets and HTML from scratch :)
> you're talking to a computer in English ffs and it not only understands you, it goes away and does useful work.
Google has “understood” English queries for a while now in the sense that you can type a question in English into the search bar and it goes off and does “useful work”.
The problem with these AI implementations is that it goes off and does work, but not always useful work. In many cases it’s wrong. In many cases the result is below entry-level.
In those times where it can pass a structured test like a bar exam, then absolutely praise it and get curious about how it can add value to people’s lives.
But in this instance, it couldn’t pass a CS 101 or Design 101 class. It’s on its way, but these claims of being able to use AI to create websites are premature.
At this pace we might be there in a few years or even months, but right now the quality is lacking from any demo I’ve seen and in my own attempts as a designer to use it in the design process.
you can't, with a straight face, be comparing a Google search to what ChatGPT is doing. C'mon man.
>In many cases it’s wrong. In many cases the result is below entry-level.
Once again I'm baffled by this dismissal. I don't think you actually understand what you're seeing with ChatGPT. There is very interesting emergent behaviour occurring. Trying to explain the behaviour is literally cutting edge research.
>but right now the quality is lacking
the technology was literal science fiction 3 years ago, what it can do is astonishing. If you're not jaw on the floor astonished then you simply don't understand what kind of a leap this represents.
Google has understood English queries for years. Lately it has even been highlighting the answers you are looking for in the websites it shows. ChatGPT is different, but not far off.
Google has always had a hard time understanding simple constructs like "not" and "from" and "to". ChatGPT seems to actually get what I'm saying most of the time.
The advancement with these LLMs lies in the fact that they can effectively learn to recognize patterns within “large-ish” input text sequences and probabilistically generate a likely next word given those patterns.
It’s a genuine advancement. However, it is still just pattern matching. And describing anything it’s doing as “behavior” is a real stretch, given that it is a feed-forward network that does not incorporate any notion of agency, memory, or deliberation into its processing.
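The "probabilistically generate a likely next word" step the parent describes can be illustrated with a toy sketch. The vocabulary and logit values here are invented for illustration; a real LLM scores tens of thousands of tokens per step and derives the logits from the full context window:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, logits, temperature=1.0):
    """Sample the next token from temperature-scaled probabilities."""
    probs = softmax([l / temperature for l in logits])
    return random.choices(vocab, weights=probs, k=1)[0]

# Toy "model": given some context like "the cat", it scores candidates.
vocab = ["sat", "ran", "quantum"]
logits = [4.0, 2.0, -3.0]   # "sat" far likelier than "quantum"
probs = softmax(logits)
```

Generation is just this step in a loop: sample a token, append it to the context, rescore.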
You are comparing systems that generated completion text based on statistics and correlation with a system that now models actual complex functional relationships between millions of concepts, not just series of letters or words.
The difference is staggering.
It comes about because of the insane level of computational iterations (that are not required for normal statistical completion) mapping vast numbers of terabytes of data into a set of parameters constrained to work together in a way (layers of alternating linear combinations followed by non-linear compressions) that requires functional relationships to be learned in order to compress the information enough to work.
It is a profound difference both in methodology and results.
It's modeling patterns found across the massive corpus of textual training input it has seen -- not the true concepts related by the words as humans understand them. If you don't believe me then ask ChatGPT some bespoke geometry-related brain teasers and see how far it gets.
I want to be clear that the successful scale-up of this training and inference methodology is nonetheless a massive achievement -- but it is inherently limited by the nature of its construction and is in no way indicative of a system that exudes agency or deliberative thought, nor one that "understands" or models the world as a human would.
> [...] no way indicative of a system that exudes agency or deliberative thought, nor one that "understands" or models the world as a human would.
Certainly not - its architecture doesn't model ours. But it has taken a huge step forward in our direction in terms of capabilities, from early to late 2022.
As its reasoning gets better, simply a conversation with itself could become a kind of deliberative thought.
Also, as more data modalities are combined, text with video and audio, human generated and recordings of the natural world, etc., more systematic inclusion of math, its intuition about solving bespoke geometry problems, and other kinds of problems, are likely to improve.
Framing a problem is a lot of the solving of a problem. And we frame geometry with a sensory driven understanding of geometry that the current ChatGPT isn't being given.
The visual cortex in your brain is also "just a pattern matching" system. Guess it's not very impressive by your standards.
This[1] isn't my example (it's from another HN user), but if you work as a programmer and you're not absolutely jaw on the floor astonished by this example then I don't know what to say.
Explaining[2] the emergent behaviour is literally cutting edge research. Hand waving this behaviour away as just "probabilistically generating a likely next word" is ignorant.
It's amazing in similar ways to Conway's Game of Life.
I'm arguing against the notion that these LLMs exhibit "emergent behaviour" as you stated. I don't believe they do, as the term is commonly understood. Emergent behavior usually implies the exhibition of some kind of complexity from a fundamentally simple system. But these LLMs are not fundamentally simple, when considered together with the vast corpus of training data to which they are inextricably linked.
The emergent behavior of Conway's Game of Life arises purely out of the simple rules upon which the simulation proceeds -- a fundamental difference.
emergent behavior in this context is defined as: "emergent abilities, which we define as abilities that are not present in small models but are present in larger models"
>The emergent behavior of Conway's Game of Life arises purely out of the simple rules upon which the simulation proceeds -- a fundamental difference.
> emergent behavior in this context is defined as: "emergent abilities, which we define as abilities that are not present in small models but are present in larger models"
Then I don't know why you brought up Game of Life because it obviously has nothing to do with this alternative definition of emergent behavior.
> this is a meaningless distinction.
It's meaningful with respect to the claim that LLMs exhibit emergent behavior in the same way in which Game of Life does.
1. Item 3: The ocean is full of floating objects, and it would be hard to see the duck among them?
2. Item 2: is structured as a non sequitur; takes a long time because there are many hazards?
I am impressed that you find it impressive. It is plausible-sounding, and I find that disturbing, but it is not useful (and the text prediction paradigm seems a dead end in terms of formulating anything more than plausible sounding)
Technically, it's not pattern matching. It's estimating conditional probabilities and sampling from them (and under the hood, building blocks like QKV attention, aka a probabilistic hashmap, and the optimization used decide what it does anyway, ignoring any theory behind it).
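The "probabilistic hashmap" framing of QKV attention can be made concrete with a minimal single-head sketch. These are toy hand-picked vectors with no learned projections; only the soft-lookup mechanics are shown:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(query, keys, values):
    """Soft lookup: score the query against every key, then return a
    probability-weighted average of the values. Unlike a real hashmap,
    every value contributes, in proportion to how well its key matches."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# A query that strongly matches the first key mostly retrieves the first value.
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0], [20.0]]
out = attention([5.0, 0.0], keys, values)  # close to [10.0]
```

A transformer stacks many of these lookups, with learned projections producing the queries, keys, and values.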
> the skepticism and the general dismissive tone on HN baffles me.
I think that it comes from having high expectations. It took me four or five questions before the OpenAI chat started to write lies and half-truths. When talking with it I feel like a teacher trying to get the correct answer from a student who doesn't get it. I give it cues, tips, bounds on the answer, and yet it manages to get things wrong.
So the worst part of the OpenAI chat is that it gets very boring to work with after the first awe.
It is a great piece of technology; it is impressive. But as a product it is dull. And I need to double-check its answers with Google each time, so why bother?
Using Chat AI for Bing is like using blockchain for cryptocurrencies. It is a solution looking for a problem. Bing is the wrong answer, people should accept that and look for better uses for it. They are out there for sure.
If it's remarkable you're easily impressed or you don't know about its shortcomings.
Likely it'll remain an engine for parlor tricks and infinite "demos" for the foreseeable future; the many critical mistakes it makes render it impractical (you can't trust any information it outputs without verifying yourself that it's not a hallucination again). After all, the more text you autocomplete, the more text you need to check.
You're making it sound as if it's something comparable to the development of the Internet in its impact.
There are tricks and there are transformative changes but they are not the same. In this case the flashy trick is based on some interesting developments but IMO it's unclear whether they will lead to transformative changes
It can do it; the challenge is the amount of markup. I have a system that can do it given a template. You can generate or manipulate the JSON with the content using the AI, but the template doesn't really fit in memory. Something like pug.js would fit for a lot of templates, but that's a security issue.
I'm still baffled how so many people are willing to gloss over all the heavy lifting the user has to do to glue all this together. Organizing a project is still and always has been the majority of the work!
It'd probably take them a lot longer to learn to make the images than to organize a project (which is something they already know how to do). Doubly so for the video clip.
Plus this was mostly copy/paste - that's not heavy lifting!
That "heavy lifting" of stitching, isn't usually where most of the time is spent.
Most projects involve vast time sucks related to many of the parts. All the subtasks that become mini projects of their own. Like creating even one original piece of artwork that you really like.
In this case, the mini projects got automated, transforming the main project into a mini project.
The quality/time improvement here is significant, even for a little project like this.
It's so close to being decent but if you lack the design sensibilities to see it through and guide the process, it's kind of a mess.
People think AI is going to bridge some talent disparity between designers and developers vs non-technical types (and it probably will someday). But currently, AI is just acting like an infinity gauntlet with the designers and developers still holding all the stones.
As per the article the AI tools did not design or create the website. The website was created using a visual building tool (webflow).
Various AI tools were used for the website copy, stock images, picking of fonts, 80s color scheme, intro video and the JavaScript required to embed the video.
"I haven’t found a workaround for this yet but I can imagine this would be a game changer if future improvements allow precise blending placement from one image to another."
ControlNet allows you to sketch or use a pre-existing sketch to then use as the basis for image generation. Allows for very, ahem, /stable/ variations between different sub-prompts.
It's entertaining to have an AI create a fantasy web page for a fantasy product that's clearly not serious. Entertaining, but I see it sooooo much now that it's kind of wearing off.
However, I'd like to see AI create a web page for a product that's appealing and plausible. That would be closer to telling us whether the AI is a human replacement or not.
I used to run a design agency catering to tech startups. I’ve tried dozens of times to produce something that looks better than a free template for a personal site or a tech startup and came up well short every time.
The author says he was slow to get into AI because his skepticism kept him from falling for the hype behind new technologies (like NFTs), but according to his past posts and Twitter he clearly also fell for the hype behind those very technologies.
It seems the author will jump on any tech bandwagon, so take this writing with a grain of salt.
(This isn’t a comment on AI, just on the credibility of the author)
Every time there is a post about AI, there are a fair number of people commenting on the current status of AI. Yes, I get that it looks bad and needs heavy human interaction now, but what about in 1 year, or in 5 years? It's the acceleration of AI progress that we should all be worried about.
I’ve gotten a (very) basic website done via ChatGPT in about five minutes. Very static, but you can ask it to do fancier stuff and it seems to work all right.
It can do basic rails stuff. But I do wonder what options there are to train it on your codebase for even a small project so it can just know what files to edit and that to name things.
A lot of times just giving it the description of the task and the directory listing is enough, for simple tasks. Beyond that, you can also use embeddings to search for relevant code, which OpenAI also has an API for.
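A minimal sketch of that retrieval idea, with a toy bag-of-words vector standing in for a real embeddings API (such as OpenAI's); all file names and helper names here are hypothetical:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "vector"; a real system would call an
    # embeddings API (e.g. OpenAI's) here instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def relevant_files(task, files, top_n=2):
    # Rank repo files by similarity to the task description, so only
    # the most relevant ones get pasted into the model's context.
    task_vec = embed(task)
    return sorted(files, key=lambda f: cosine(task_vec, embed(files[f])),
                  reverse=True)[:top_n]

files = {
    "config/routes.rb": "Rails.application.routes.draw do resources :posts end",
    "app/models/post.rb": "class Post < ApplicationRecord validates :title, presence: true end",
    "db/schema.rb": "create_table :posts do |t| t.string :title t.text :body end",
}
print(relevant_files("add a validation to the post title", files))
# -> ['app/models/post.rb', 'db/schema.rb']
```

The same shape works with real embeddings: precompute a vector per file, embed the task, and send only the top hits to the model.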
I’m always amazed at the dismissive comments recent AI advancement generates: it’s not really creation, a human would do much better, it’s getting old, etc
Think about it for a second: a machine understood your request in plain English and executed it adequately, instantly and free.
If you can’t think how that will affect every aspect of UX and creative job I don’t know what to tell you.
And that’s not even touching on the philosophical questions like a LLM suddenly displaying personality, like Sydney, and mostly passing the Turing test.
The floor is shifting beneath our feet. This is for real.
I was thinking about this recently when reading that OpenAI is piping some textual prompts into IPython based on rules, for example when someone asks for an answer that requires math. I could see ChatGPT piping to "itself" in the background, potentially with specialized AIs that figure out better prompts from your prompt, among other obvious specialized ones like piping to image-generation models and so on. You'd obviously also use an AI to choose what to pipe to. I agree with you: eventually it'll be AI all the way down.
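A toy sketch of that dispatch idea; everything here is hypothetical, and a crude keyword/regex check stands in for the model the comment imagines doing the routing:

```python
import re

def handle_math(prompt):
    # Stand-in for "piping into IPython": pull out a plain arithmetic
    # expression and evaluate it. The regex restricts eval's input to
    # digits, whitespace, parentheses, and arithmetic operators.
    expr = re.search(r"\d[\d\s+\-*/().]*", prompt)
    return eval(expr.group()) if expr else None

def handle_image(prompt):
    return f"[route to image-generation model: {prompt!r}]"

def handle_chat(prompt):
    return f"[route to general chat model: {prompt!r}]"

def route(prompt):
    # Hypothetical dispatcher; a real system would use another model
    # to decide which specialized backend gets the prompt.
    if re.search(r"\d\s*[-+*/]\s*\d", prompt):
        return handle_math(prompt)
    if re.search(r"\b(draw|image|picture)\b", prompt.lower()):
        return handle_image(prompt)
    return handle_chat(prompt)

print(route("what is 17 * 3?"))  # -> 51
```

The point is only the shape: one classifier step in front, specialized handlers behind it, and nothing stopping those handlers from themselves being models.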
It’s going to be an ecosystem of intelligence, just like it already is today. The notion of “an intelligence” is going to go away.
The singularity is real and AGI is here, but it isn't like some runaway single agent that self-improves and takes over the world. Just wait till it all starts talking to each other, training each other, using each other, all in convergence with humans. It's very real. And very weird.
Although I wouldn't have employees, I would have a variety of people on Fiverr that I would interact with and pay for a month or two, probably spending $1,000 total. Most of the tasks would be $50-$100, while one or two would be the lion's share of the costs.
But even at those low prices the biggest thing for me is time
going back and forth on an 8-day delivery, the logo designer bottlenecking the website designer and explainer video creator
that alone makes the AI tools worth it to me: not having to consider contracting with 5 people at all, and getting so many iterations in a few minutes or hours
it is very likely that different specialists will just be on fiverr, delivering complex things for $20 in a matter of hours, instead of a craft for $350 one week later
I honestly think this is the biggest evolution of our field in our lifetimes (I was here for early web). I'm so excited and have already been building incredible stuff, and I'm saying that as someone who has had a decent career at top employers.
This will be for real, when I can run this stuff at home.
Stable Diffusion is there. But GPT-3 still has to be run on my behalf by a company that could pull the rug at any moment, or add more stuff that they don't approve of to their TOS. (You already can't use it for "sexy" content. Want help writing a love letter? Oh no, you're banned.)
The "lighter" versions of these text models that can be run at home aren't good enough yet.
You've alluded to my main concern regarding ML models, especially large GPT-3-alikes. Less that they could pull the rug, but rather the more general concern of companies "renting-out" access to these valuable tools that we will likely become dependent upon.
Based on Microsoft's relationship with OpenAI, it wouldn't be surprising to me if they also wish to offer access to these AI models via Azure for example as a SaaS offering. Then you can imagine some small start-ups with Billy and Jimmy making some other SaaS product that is based on this very API. It would be a clever way to extract revenue from supposedly novel products whilst controlling and shaping the game.
There's a huge difference between utility software, especially utilities that are inherently network-based like "a search engine", and application software used for content creation.
If Google Docs suddenly disappeared, everyone would just continue as usual, except that they now use MS Word or LibreOffice instead. If GPT-3 disappeared, what's the alternative?
I feel like everyone is just playing devil's advocate. Almost every developer I know wants a native API or the option of taking things out of the cloud. It's why PaaS died and Docker as a service took over.
There are open models, eg. Bloom. Same with Stable Diffusion (open) and Dall-E. Creating these isn’t impossibly hard. Some ML ops company showed how you could train a GPT-like model for like $250k or something.
Me with not much background experience in programming not knowing what RoR stood for.
Me asking ChatGPT:
In this context, RoR most likely refers to Ruby on Rails, which is a web application framework written in the Ruby programming language. The second comment is suggesting that people had similar dismissive comments about Ruby on Rails (RoR) when it was first introduced, but it has since become a widely used and respected framework in web development. The comment implies that similar trends may apply to AI tools used for web design.
I could never have done that. Never bought into the crypto hype, even though I still find the blockchain very interesting. Never had a cent in crypto currency or NFT.
In fact, I’m extremely averse to hype, perhaps almost to a pathological degree.
And yet, I can’t remember when I was more scared and excited about some tech. Take that as you wish.
I agree, I'm very hype-averse, and have moaned on here before about crypto-nonsense (and previous AI/AGI BS), but this is genuinely different.
It's a shame that people who are rightfully sceptical - precisely because of the hype-merchants and snake-oil salesmen of the past decade or so and their various tech "advances" - have misjudged the quantum leap we are witnessing, and will duly fall behind or paint themselves into a luddite corner because of it.
> fall behind or paint themselves into a luddite corner because of it
I am not too concerned about that. It was easy to dismiss the blockchain/nft/web3 hype because there really is nothing there. No matter how hard the snake-oil salesmen tried, they were unable to create a killer app over the course of more than a decade.
I am likewise on the skeptical end of the AI hype, but I am using ChatGPT daily to do tedious stuff for me. The utility is so obvious and AI will become so ubiquitous that it will be impossible to ignore.
This seems to be a really common sentiment, and I find it very strange. Why do people think that the level of enthusiasm and hype for a technology is predictive of its likelihood to fizzle? Both real, world-changing tech and overhyped tech will create enthusiasm. You have to look deeper than the tone of voice people are using to talk about a technology to understand its potential.
> This seems to be a really common sentiment, and I find it very strange.
The pattern: new thing -> decide if good or bad -> hype or attack.
Waiting to see how it evolves and gathering pros or cons is boring. Using it? Boring. Thinking? Better see if there's some expert you trust that has some opinion already. Readiness is all.
For some people, access to decentralized financial products can be a matter of life or death - they live in jurisdictions with limited property rights, and loss of property can translate to lack of food, medicine, and housing.
LLMs mostly provide a way for companies to increase their efficiency. Currently anything an LLM can do a staff creative worker in the correct field can do.
> I’m going to use AI tools to design an entire website!
Proceeds to use AI for ideas only and manually edits everything.
Generative AI tools are extremely powerful. But I’ve been recently getting people asking, “Did you use AI to write that?” for extremely complex, innovative solidity smart contracts and web3 UX design. The hype is so far ahead of reality that no matter how useful gpt4 is once it comes out, people are going to call it a failure.
As a designer none of these make me worried about my skill set, they all look pretty bad with the slightest amount of attention.
The real concern is how ugly a world the capitalists are willing to accept to cut labor costs, and we know the answer: just look at what capitalism did to architecture. Dogshit buildings everywhere, designed by contractors to avoid skilled labor. What an ugly time to be alive.
You're seeing the results of amateurs adapting unspecialized tools. They're using minimal effort and still deriving head-turning results.
A great analogy might be all the folks making Geocities websites back in '99. Look at how far we've come from that. Can you even fathom where we'll be in five years? (Or even just this fall, given the pace of research and new startups optimizing models/workflows?)
These tools are going to be specially purposed for every domain soon. VC is going to pour into every possible optimization niche, and talented teams will build specially purposed tools that put entire careers on easy mode.
This is good. We shouldn't want to write HTML any more than we should want to churn butter.
The job of designer will change to incorporate the new workflows. But everything else will also change. Websites of 2000-2020 will look as dated as magazine ads. New websites will be rich and interactive as never before.
This is the biggest boom and shakeup of our industry perhaps ever. Get excited by the opportunities. You're not going to have to work like a caveman designer anymore. You'll be a year 2080 designer before the decade ends.
> Websites of 2000-2020 will look as dated as magazine ads. New websites will be rich and interactive as never before.
Why? Do we need any of that aside from specialized tools (e.g. data intensive) for which it actually makes sense to use tried and proven tech?
I'm sure new things are going to come out of AI; they're already becoming obvious. But using AI to build things we already know how to build, just with more complexity, is bound not to go anywhere useful in my opinion.
I'm truly amazed at this opinion which seems to be prevalent on this thread. The models are tools that already have the ability to accelerate your work now, and will massively improve over the next decade. Comparing it to things we already know how to do provides the ability to benchmark and see how much value it brings to the table. Being on the forefront, exploring, and hacking on side projects has always been the best way to understand new stuff.
"Using cars to drive down roads we already know how to traverse with horses and just add more complexity is bound to not go anywhere useful in my opinion."
To fit better with your analogy: how about using AI to invent a vehicle without the tools and expertise of engineers who already have experience building cars? Let the AI figure out how many doors it needs, how many wheels, how an engine works and all that, because why not? Maybe it will come up with a better idea; screw any experience and history we have building anything.
This page isn't something the AI built, the AI built specifically defined pieces of it and the author assembled those into the page using Webflow. I feel like their claim of "no-code" is a bit suspect. Embedding an AI generated block of code into a Webflow site has nothing to do with AI generating a website. If I add a block of AI generated code to a pull request that isn't "Using AI to write an entire PR".
AI still feels like self-driving cars to me. People saw some shadow of success there and said "by 2020 cars won't even have steering wheels" but who knows now if we will ever see them. I could be totally wrong, but my prediction is we see a ton of cool stuff but getting that last 5% down to where we can actually trust AI to run the show will not happen for a long time if ever.
I mean, that's been the trend line of every paradigm, hasn't it? Quality & craftsmanship being sacrificed to scale & automation.
I'd say the good part of AI coding is the democratization of programming. It'll be shittier than hand-crafted stuff but more broadly available and empowering more people to make their own stuff.
>Dogshit buildings everywhere designed by contractors to avoid skilled labor.
This seems to be the opinion of everyone on HN, but I never encounter this in real life (and do not agree with it myself). Where I live (mainly London, but I've spent a bit of time in some US cities as well) the buildings being built/applying for planning permission look pretty good. It's the ones built in the late 20th century that are hideous.
I doubt profit-hungry capitalists are your main concern; it's your average struggling startup or entrepreneur.
At a startup I worked for we spent about £10,000 just on our brand and website design. That was a big chunk of our budget and I'm genuinely not sure that would be needed anymore. I think I could probably prompt AI to design a site and logo roughly as good.