the__prestige's comments

In other words, they managed to successfully recreate most people's experience on Upwork.


In other words, they managed to fake it until they make it! Like most visionaries in Silicon Valley: lie now, tweet about it, promote it through fake influencers with their mouths open on YouTube, get that VC money without any due diligence, hire smart people, and force them to do it!


You are supposed to make it before you get caught faking it.


a) Taking notes

b) On the topic of notes: what are the odds of this being your first comment, on a 2-year-old account, and of me taking note? A little sus.


Underrated


TFA mentions 5 use cases that are supposedly "within the coming decade" and uses terms such as "a few million qubits", "more qubits than are currently available", and "within reach", basically implying that there aren't any use cases possible now. This seems to reinforce skepticism[1] expressed by other researchers that practical uses of quantum computing will very likely be different from what we thought possible before; quote: "big compute" problems on small data, not big data problems.

As a newbie, I would seriously like to know: are there ANY known real-world applications of quantum computing that are possible today?

[1] https://cacm.acm.org/research/disentangling-hype-from-practi...


TFA talks about a tender offer, which allows employees to sell their shares at almost a 3x valuation compared to earlier this year. This already is a "massive payday".


The only time that 3x trade would be a good deal is if you think that's the best you're going to do. If you think you're going to be the next Amazon/Facebook/Google, then selling is foolish. The MS deal may limit or wreck that possibility.


In what world do you see OpenAI's *technology* being behind massively profitable products? What is their moat?


OpenAI trains on my chats if I enable history. I absolutely need history so that I can use it as an intelligent diary or note taker. I absolutely don't want it to train on my data.

On the other hand, they don't train on API usage. So something like this is very interesting.

Question: does LibreChat support importing OpenAI history?


Based on their definitions (https://imgur.com/a/Ta848Lu)

- We are already at Level 1+ with GPT-4, but these are basically assistants and not truly AGI.

- Level 2 "Competent level" is basically AGI (capable of actually replacing many humans in real world tasks). These systems are more generalized, capable of understanding and solving problems in various domains, similar to an average human's ability. The jump from Level 1 to Level 2 is significant as it involves a transition from basic and limited capabilities to a more comprehensive and human-like proficiency.

However, the exact definition is tautological - capabilities better than 50% of skilled adults.

So IMO the paper basically states, in a lot of words, that we are not at AGI, and it restates the common understanding of AGI as "Level 2 Competent", but doesn't otherwise really add to our understanding of AGI.


AGI now means many different things to many different people. I don't think there's really any "common definition" anymore.

For some, it's simply Artificial and Generally Intelligent (performs many tasks, adapts). For some, it might mean that it needs to do everything a normal human can. For some, non-biological life axiomatically cannot become AGI. For some, it must be "conscious" and "sentient".

For some, it might require literal omniscience and omnipotence and accepting anything as AGI means, to them, that they are being told to worship it as a God. For some, it might mean something more like an AI that is more competent than the most competent human at literally every task.

For some, acknowledging it means that we must acknowledge it has person-like rights. For some, it cannot be AGI if it lies. For some, it cannot be AGI if it makes any mistake. For some, it cannot be AGI until it has more power than humans. These definitions and implications partially or wholly conflict with each other, but I have seen different people say that AGI means each one of them.


I've got a much simpler definition: an AGI should be able to produce a better version of itself.

I'm not saying this would necessarily lead to the technological singularity: maybe it's somehow a dead end. Maybe the "better version of itself, which itself shall build a better version of itself" will be stuck at some point, hitting some limit that'd still make it less intelligent than the most intelligent humans. That I don't know.

But what I know is that an AI that is incapable of producing a better version of itself is less intelligent than the humans who created it in the first place.


I actually really like this definition and will be giving it some thought. But right off the bat, that’s not how most people will see it - and so while this definition is certainly thought-provoking and useful, it doesn’t specify much that’s relatable to other tasks and therefore I think will always be a niche definition.

An AI that can make a better version of itself may not be able to communicate in any human language for example; and that is now a de facto requirement for most people to see something as AI I think.


> I'm not saying this would necessarily lead to the technological singularity:

You kinda are though: if it hits a limit and can no longer make a better version of itself, then your definition means the final one in the sequence isn't an AGI even though its worse parent is.

> But what I know is that an AI that is incapable of producing a better version of itself is less intelligent than the humans who created it in the first place.

Neither necessary nor sufficient:

(1) they are made by teams of expert humans, so an AGI could be smarter than any one of them and still not as capable as the group (kinda like how humans are smarter than evolution, but not smart enough to make a superhuman intelligence even though evolution made us by having lots of entities and time)

(2) one that can do this can still merely be a special-purpose AI that's no good at anything else (like an optimising compiler told to compile its own source code)

(3) what if it can only make its own equal, being already at some upper limit?


> an AGI should be able to produce a better version of itself.

So humans do not have GI?


They do. 99% of what you think of as human intelligence is social, and has been obtained by previous generations and passed to the person. In a sense, we are hugely overfitted on distilled knowledge; actual biological capabilities are much less impressive.


As a somewhat narrow counter-example, genetic algorithms are able to produce better versions of themselves but do not qualify as AGI.


Are there any examples of genetic algorithms producing genetic algorithms outside of nature?

The classic synthetic way is a genetic algorithm producing increasingly better outputs.
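
To make the distinction concrete, here's a toy sketch (my own illustration, not from the thread; the objective and all parameters are made up): a genetic algorithm that evolves candidate solutions plus a self-adapting per-individual mutation rate. What improves over the generations are the outputs and a tuning knob, not the algorithm itself.

    import random

    def fitness(x):
        return -(x - 3.14) ** 2        # hypothetical objective: peak at 3.14

    def evolve(pop_size=50, generations=100):
        # each individual is (value, mutation_rate)
        pop = [(random.uniform(-10, 10), random.uniform(0.01, 1.0))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda ind: fitness(ind[0]), reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for value, rate in survivors:
                # self-adaptation: the mutation rate itself mutates
                new_rate = max(1e-3, rate * random.uniform(0.8, 1.2))
                children.append((value + random.gauss(0, new_rate), new_rate))
            pop = survivors + children
        return max(pop, key=lambda ind: fitness(ind[0]))

    print(evolve())   # typically lands near (3.14, <small mutation rate>)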


Okay but in reality an AGI is just an agent that can learn new things and reapply existing knowledge to new problems. Generally intelligent, it doesn’t have to mean anything more nor does it imply godlike intelligence.

Everything you just mentioned seems to be some philosophy of sentience or something. A few years ago when ANNs became popular for everything, general intelligence just meant "can do things it wasn't explicitly trained on".


This definition is itself tautological and also quite flawed. For example, at what point in this machine's development has it attained AGI? What if it learns to, or is taught to, stop learning? What if the machine is not capable of, e.g., math? What kind of knowledge is legitimate vs illegitimate? In many ways the concept of AGI masks a fundamental social expectation that the machine obey standards and only adopt the "correct" knowledge. This is why, e.g., instruction tuning or RLHF was such a leap for the perception of intelligence: the machines obeyed a social contract with users that was designed into them.


This sounds a lot like "If we throw out everyone else's definition, then my definition is the obviously correct one".

Can you give any reason why your definition is correct, and/or why all those others should be dismissed?


This is one of the problems we've had with intelligence for a very long time. We've not been able to break it down well into distinctive pieces for classification. You either have all the pieces of human intelligence, or you're not intelligent at all.


> capabilities better than 50% of skilled adults.

That sets the bar unreasonably high in my opinion. Almost all of humanity does not have skills 'better than 50% of skilled adults' by definition, and those people definitely qualify as generally intelligent.


It's also rather vague, and at least on my first-pass skim I'm not seeing them define what it means to be skilled or unskilled. So I'm not sure the metric is even meaningful without this, because it's not like you're unskilled at driving a car one day and "skilled" the next. Does that mean anyone with a driver's license? Does that mean a professional driver? Are we talking taxi driver, NASCAR, rally racing, F1? What? Skills are on a continuous distribution, and our definitions of skilled vs unskilled are rather poorly defined and typically revolve around employment rather than capabilities.

I hope I just missed it, because the only clarification I saw was this example:

> e.g., “Competent” or higher performance on a task such as English writing ability would only be measured against the set of adults who are literate and fluent in English


That's the whole problem with all these definitions: they are rooted in very imprecise terms whose meaning seems to depend on the beholder of the prose.


I don't see the issue. They listed different levels and that's one of them. Level 1 compares to unskilled humans.


Pretty much every adult has some kind of skill, so this means half of (adult) humanity has skills better than 50% of skilled adults.


> Based on their definitions (https://imgur.com/a/Ta848Lu)

Wow, what a massive jump between Level 0 and Level 2. They state that their goal is to help facilitate conversation within the community, but I feel like such a gap very obviously does not help. The arguments are specifically within these regions, and we're less concerned about arguing whether a 50th-percentile general AI is AGI vs a 99th-percentile one. It's only the hype people (e.g. X-risk and Elon (who has no realistic qualifications here)) who are discussing those levels.

I know that whatever you define things as, people will be upset and argue, but the point of a work like this is to make something we can all point to and refine from, even if it's messy. But with this large a gap it makes such refinement difficult, and I do not suspect we'll be using these terms as we move forward. After all, with most technology, progress is typically slow before accelerating (since there's a momentum factor). A lot of the disagreement in the community is about whether we're in the bottom part of the S-curve or at the beginning of the exponential part. I have opinions, but no one really knows, so that's where we need flags placed to reduce stupid fights (fights that can be significantly reduced by recognizing that we're not using the same definitions while assuming the other person is).


> Level 2 "Competent level" is basically AGI (capable of actually replacing many humans in real world tasks).

I still find that a weird definition for two reasons:

1. All the narrow systems already have replaced humans in many real-world tasks.

2. I'm dubious there is much difference between a level 2 and level 4 general AI. And outperforming a % of humans seems like an odd metric, given that humans have diverse skill sets. A more sane metric would be Nth percentile for M% of tasks human workers perform in 2023.


> The jump from Level 1 to Level 2 is significant

That's a speculative claim because we don't really know what's involved. We could be one simple generalization trick away, which wouldn't be very significant in terms of effort, just effect.


> However, the exact definition is tautological - capabilities better than 50% of skilled adults.

It's clear that they mean human capabilities unaugmented by AI.


Can someone please ELI5 why this is true?


A full circle is 360° and a curved piece makes a turn of 30°, so you need 360° / 30° = 12 pieces turning in the same direction to make a full circle.

Every time you use a piece turning the other way, you need to add an extra piece turning the way you want to complete the circle, so the difference between the directions has to be 12.

Note, however, that not every track with exactly 12 more pieces turning one way than the other necessarily makes a complete circle; straight pieces can cause the ends not to match up.
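
If it helps to see the counting argument mechanically, here's a toy sketch (my own simplified model, not from the puzzle): treat each curve as a ±30° turn, approximate every piece as a unit step taken at the current heading, and check whether the sequence returns to the starting heading and position. The curve-count difference only closes the heading; the position check is where straight (or extra) pieces can break things.

    import math

    def closes(pieces, curve_angle=30.0):
        # 'L'/'R' are 30-degree curves, 'S' is straight; piece lengths are simplified to 1
        heading, x, y = 0.0, 0.0, 0.0
        for p in pieces:
            if p == 'L':
                heading += curve_angle
            elif p == 'R':
                heading -= curve_angle
            rad = math.radians(heading)
            x += math.cos(rad)
            y += math.sin(rad)
        heading_ok = math.isclose(heading % 360.0, 0.0, abs_tol=1e-9)
        position_ok = (math.isclose(x, 0.0, abs_tol=1e-9)
                       and math.isclose(y, 0.0, abs_tol=1e-9))
        return heading_ok, position_ok

    print(closes('L' * 12))                  # (True, True): the basic circle
    print(closes('L' * 13 + 'R'))            # difference is still 12, heading closes...
    print(closes('L' * 6 + 'S' + 'L' * 6))   # ...but the endpoints no longer meet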


It feels like this reply is posted by Emad Mostaque


The question is whether people have attempted quantization (the int8 / GGML / GPTQ approaches) and whether the "flattening" of the distribution due to a larger denominator results in better quantization behavior. You'd have to specifically try quantization with and without the +1 to understand the advantage. OP argues that the advantage could be significant.
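
For anyone who wants to poke at this, here's a minimal sketch of what I'm assuming the "+1" refers to, i.e. adding 1 to the softmax denominator (the "softmax1" / quiet-attention proposal), with made-up logit values. It's only meant to show the flattening effect, not a real quantization experiment.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def softmax_plus_one(x):
        # the "+1" behaves like an extra logit pinned at 0
        m = max(0.0, x.max())
        e = np.exp(x - m)
        return e / (np.exp(-m) + e.sum())

    logits = np.array([-4.0, -3.5, -5.0])    # hypothetical attention logits
    print(softmax(logits))            # forced to sum to 1 -> one weight stays large
    print(softmax_plus_one(logits))   # can all stay near zero -> fewer outliers to quantize

Whether that actually translates into better int8/GPTQ behavior is exactly the empirical question raised above.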



Hospitals were overflowing with patients. Millions died because they did not get timely treatment. In countries such as India, there wasn't even enough room to cremate bodies. Home quarantining "flattened the infection curve" enough to make this more manageable. So no, it wasn't all pointless.


In the United States, hospitals were not overflowing with patients and the curve was not flattened. It followed roughly the same curve and wave pattern as the 1918 pandemic. Hospitals were furloughing staff in massive numbers. Ohio tracked and published hospitalization rates and I followed them daily. There was never 100% capacity usage.


Not true for all of the US. In the SF Bay Area, at least, we had 100% occupancy rates for significant periods even after home quarantining was established.


I have been doing some digging and unfortunately cannot find data prior to mid-2020 for most areas. This site https://data.statesmanjournal.com/covid-19-hospital-capacity... has information pertaining to capacity since then, for an incredible number of hospitals.

Looking at the data for the available time period, I am not able to confirm your statement.


I don't see any history on the site. How are you getting history?

The fact that hospitals did run out of capacity in many places in the world is indication enough to me that control measures were important, and hospitals having some capacity is hardly a measure of safety.


In the left column, if you click on the hospital name, it will show the history.

