It's an incredible abuse of power to intentionally mark innocent students' correct answers wrong just to solve your own problem - one you may very well be responsible for.
Knowing the way a lot of professors act, I'm not surprised, but it's always disheartening to see how many behave like petty tyrants who are happy to throw around their power over the young.
If you cheat, you should get a zero. How is this controversial.
Since high school, the expectation has been that you show your work. I remember my high school calculus teacher didn't even LOOK at the final answer - only the work.
The nice thing was that if you made a trivial mistake, like adding 2 + 2 = 5, you got 95% of the credit. It worked out to be massively beneficial for students.
The same thing continued in programming classes. We wrote our programs on paper. The teacher didn't compile anything. They didn't care much if you missed a semicolon, or called a library function by a wrong name. They cared if the overall structure and algorithms were correct. It was all analyzed statically.
I understand both that this is valuable AND that this is how many (most?) education environments are supposed to work, but 2 interesting things can happen with the best & brightest:
1. they skip what are to them the obvious steps (we all do as we achieve mastery) and then get penalized for not showing their work.
2. they inherently know and understand the task but not the mechanized minutiae. Think of learning a new language. A diligent student can work through the problem and complete an a->b translation, then go the other way, and repeat. Someone with mastery doesn't do this; they think within one language and then only pass the contextual meaning back and forth when explicitly required.
"showing your work" is really the same thing as "explain how you think" and may be great for basics in learning, but also faces levels of abstraction as you ascend towards mastery.
It's like with the justice system: if you have to choose between the risk of jailing an innocent person and the risk of letting a guilty person go free, you choose to let a guilty person go free. All the time.
Unless you're 100% sure that a student cheated, you don't punish them. And you don't ask them to prove they're innocent.
That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.
People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though. It's why you see people shopping around until they find a therapist who will tell them what they want to hear, or why you see people opt to raise dogs instead of kids.
You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.
And the prompt / context is going to leak into its output and affect what it says, whether you want it to or not, because that's just how LLMs work, so it never really has its own opinions about anything at all.
> But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.
This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee that's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do so it's just another form of sycophancy.
To state things in a different way - it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses).
It's not meaningless. What do you do with a person who contradicts you or behaves in a way that is annoying to you? You can't always just shut that person up or change their mind or avoid them in some other way, can you? And I'm not talking about an employment relationship. Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person. You have a thinking and speaking subject in front of you who looks into the world, evaluates the world, and acts in the world just as consciously as you do.
Sociologists refer to this as double contingency. The nature of the interaction is completely open from both perspectives. Neither party can assume that they alone are in control. And that is precisely what is not the case with LLMs. Of course, you can prompt an LLM to snap at you and boss you around. But if your human partner treats you that way, you can't just prompt that behavior away. In interpersonal relationships (between equals), you are never in sole control. That's why it's so wonderful when they succeed and flourish. It's perfectly clear that an LLM can only ever give you the papier-mâché version of this.
I really can't imagine that you don't understand that.
> Of course, you can simply replace employees or employers. You can also avoid other people you don't like. But if you want to maintain an ongoing relationship with someone, for example, a partnership, then you can't just re-prompt that person.
You can fire an employee who challenges you, or you can re-prompt an LLM persona that doesn't. Or you can choose not to. Claiming that this power - even if unused - makes everyone a sycophant by default is a very odd use of the term (to me, at least). I don't think I've ever heard anyone use the word in such a way before.
But maybe it makes sense to you; that's fine. Like I said previously, quibbling over personal definitions of "sycophant" isn't interesting and doesn't change the underlying point:
"...it's possible to prompt an LLM in a way that it will at times strongly and fiercely argue against what you're saying. Even in an emergent manner, where such a disagreement will surprise the user. I don't think "sycophancy" is an accurate description of this, but even if you do, it's clearly different from the behavior that the previous poster was talking about (the overly deferential default responses)."
So feel free to ignore the word "sycophant" if it bothers you that much. We were talking about a particular behavior that LLMs tend to exhibit by default, and ways to change that behavior.
I didn't use that word, and that's not what I'm concerned about. My point is that an LLM is not inherently opinionated and challenging just because you've put it together accordingly.
> I didn't use that word, and that's not what I'm concerned about.
That was what the "meaningless" comment you took issue with was about.
> My point is that an LLM is not inherently opinionated and challenging just because you've put it together accordingly.
But this isn't true, any more than claiming "a video game is not inherently challenging just because you've put it together accordingly." Just because you created something or set up the scenario doesn't mean it can't be challenging.
I think they have made clear what they are criticizing. And a video game is exactly that: a video game. You can play it or leave it. You don't seem to be making a good faith effort to understand the other points of view being articulated here. So this is a good point to end the exchange.
> And a video game is exactly that: a video game. You can play it or leave it.
No one is claiming you can't walk away from LLMs, or re-prompt them. The discussion was whether they're inherently unchallenging, or if it's possible to prompt one to be challenging and not sycophantic.
"But you can walk away from them" is a nonsequitur. It's like claiming that all games are unchallenging, and then when presented with a challenging game, going "well, it's not challenging because you can walk away from it." This is true, and no one is arguing otherwise. But it's deliberately avoiding the point.
> This seems tautological to the point where it's meaningless. It's like saying that if you try to hire an employee that's going to challenge you, they're going to always be a sycophant by definition. Either they won't challenge you (explicit sycophancy), or they will challenge you, but that's what you wanted them to do so it's just another form of sycophancy.
I think this insight is meaningful and true. If you hire a people-pleaser employee, and convince them that you want to be challenged, they're going to come up with either minor challenges on things that don't matter or clever challenges that prove you're pretty much right in the end. They won't question deep assumptions that would require you to throw out a bunch of work, or start hard conversations that might reveal you're not as smart as you think; that's just not who they are.
Even "simply following directions" is something the chatbot will do, that a real human would not -- and that interaction with that real human is important for human development.
>> That's the default chatbot behavior. Many of these people appear to be creating their own personalities for the chatbots, and it's not too difficult to make an opinionated and challenging chatbot, or one that mimics someone who has their own experiences. Though designing one's ideal partner certainly raises some questions, and I wouldn't be surprised if many are picking sycophantic over challenging.
> You can make an LLM play pretend at being opinionated and challenging. But it's still an LLM. It's still being sycophantic: it's only "challenging" because that's what you want.
Also: if someone makes it "challenging" it's only going to be "challenging" with the scare quotes; it's not actually going to be challenging. Would anyone deliberately, consciously program in a real challenge and put up with all the negative feelings a real challenge would cause and invest that kind of mental energy for a chatbot?
It's like stepping on a thorn. Sometimes you step on one and you've got to deal with the pain, but no sane person is going to go out stepping on thorns deliberately because of that.
> and it's not too difficult to make an opinionated and challenging chatbot
Funnily enough, I've saved instructions for ChatGPT to always challenge my opinions with at least 2 opposing views; and never to agree with me if it seems that I'm wrong. I've also saved instructions for it to cut down on pleasantries and compliments.
Works quite well. I still have to slap it around for being too supportive / agreeing from time to time - but in general it's good at digging up opposing views and telling me when I'm wrong.
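For anyone curious what that looks like outside ChatGPT's settings page, here's a minimal sketch of wiring similar "challenge me" instructions in as a system prompt with the OpenAI Python SDK - the exact wording and the model name are illustrative assumptions, not the setup described above:

    from openai import OpenAI

    # Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the
    # environment. Prompt wording and model name are illustrative only.
    client = OpenAI()

    CHALLENGE_INSTRUCTIONS = (
        "Always challenge my opinions with at least two opposing views. "
        "Never agree with me if it seems that I'm wrong. "
        "Cut down on pleasantries and compliments."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; substitute whichever model you use
        messages=[
            {"role": "system", "content": CHALLENGE_INSTRUCTIONS},
            {"role": "user", "content": "Static typing is always worth the overhead."},
        ],
    )

    print(response.choices[0].message.content)

The nice part of doing it via the API rather than the UI is that the instructions live alongside your code, so it's easy to tweak the wording whenever the model drifts back into being too agreeable.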
>People opting for unchallenging pseudo-relationships over messy human interaction is part of a larger trend, though.
I don't disagree that some people take AI way too far, but overall, I don't see this as a significant issue. Why must relationships and human interaction be shoved down everyone's throats? People tend to impose their views on what is "right" onto others, whether it concerns religion, politics, appearance, opinions, having children, etc. In the end, it just doesn't matter - choose AI, cats, dogs, family, solitude, life, death, fit in, isolate - it's just a temporary experience. Ultimately, you will die and turn to dust like around 100 billion nameless others.
I lean toward the opinion that there are certain things people (especially young people) should be steered away from because they tend to snowball in ways people may not anticipate, like drug abuse and suicide: situations where they wind up much more miserable than they realize, not understanding that the various crutches they've adopted to hide from pain/anxiety have kept them from happiness (this is simplistic, though; many introverts are happy and fine).
I don't think I have a clear-enough vision on how AI will evolve to say we should do something about it, though, and few jurisdictions do anything about minors on social media, which we do have a big pile of data on, so I'm not sure it's worth thinking/talking about AI too much yet, at least as it relates to regulating for minors. Unlike social media, too, the general trajectory for AI is hazy. In the meantime, I won't be swayed much by anecdotes in the news.
Regardless, if I were hosting an LLM, I would certainly be cutting off service to any edgy/sexy/philosophy/religious services to minimize risk and culpability. I was reading a few weeks ago on Axios of actual churches offering chatbots. Some were actually neat; I hit up an Episcopalian one to figure out what their deal was and now know just enough to think of them as different-Lutherans. Then there are some where the chatbot is prompted to be Jesus or even Satan. Which, again, could actually be fine and healthy, but if I'm OpenAI or whoever, you could not pay me enough.
I've watched people using dating apps, and I've heard stories from friends. Frankly, AI boyfriends/girlfriends look a lot healthier to me than a lot of the stuff currently happening with dating at the moment.
Treating objects like people isn't nearly as bad as treating people like objects.
If there's a widespread and growing heroin epidemic that's already left 1/3 of society addicted, and a small group of people are able to get off of it by switching to cigarettes, I'm not going to start lecturing them about how it's a terrible idea because cigarettes are unhealthy.
Is it ideal? Not at all. But it's certainly a lesser poison.
> If there's a widespread and growing heroin epidemic that's already left 1/3 of society addicted, and a small group of people are able to get off of it by switching to cigarettes, I'm not going to start lecturing them about how it's a terrible idea because cigarettes are unhealthy.
> Is it ideal? Not at all. But it's certainly a lesser poison.
1. I do not accept your premise that a retreat into solipsistic relationships with sycophantic chatbots is healthier than "the stuff currently happening with dating at the moment." If you want me to believe that, you're going to have to be more specific about what that "stuff" is.
2. Even accepting your premise, it's more like online dating is heroin and AI chatbots are crack cocaine. Is crack a "lesser poison" than heroin? Maybe, but it's still so fucking bad that whatever relative difference exists is meaningless.
> If you want me to believe that, you're going to have to be more specific about what that "stuff" is.
not the person you were talking to but I think for well over 50% of young men, dating apps are simply an exercise in further reducing one's self worth.
> not the person you were talking to but I think for well over 50% of young men, dating apps are simply an exercise in further reducing one's self worth.
I totally get that, but dating apps != dating. If dating apps don't work, do something else (that isn't a chatbot).
If tech dug you into a hole, tech isn't going to dig you out. It'll only dig you deeper.
Tell that to a world that had devices put in front of them at a young age, where dating is Tinder.
> If tech dug you into a hole, tech isn't going to dig you out. It'll only dig you deeper.
There are ways to scratch certain itches that insulate one from the negative effects that typically come from the traditional IRL ways of doing so. For people already scarred by mental health issues (possibly in part due to "growing up" using apps), the immediate digital itch scratch is a lot easier, with more predictable outcomes than the arduous IRL path.
> Tell that to a world that had devices put in front of them at a young age, where dating is Tinder.
Their ignorance has no bearing on this discussion.
> There are ways to scratch certain itches that insulate one from the negative effects that typically come from the traditional IRL ways of doing so. For people already scarred by mental health issues (possibly in part due to "growing up" using apps), the immediate digital itch scratch is a lot easier, with more predictable outcomes than the arduous IRL path.
It's pretty obvious that kind of twisted thinking is how someone arrives at "an AI girlfriend sounds like a good idea."
But it doesn't back up the claim that "AI girlfriends/boyfriends are healthier than online dating." Rather it points to a situation where they're the unhealthy manifestation of an unhealthy cause ("people already scarred by mental health issues (possibly in part due to "growing up" using apps)").
It very well might be genuine surprise. Most people from other countries have an extremely hard time understanding why most U.S. cities allow people to openly break the law in front of authorities with zero consequences.
The U.S. is a pretty far outlier in this regard. It's strange how many people in the U.S. don't realize this at all, and become appalled when foreigners are shocked by the way things are done in U.S. cities.
Well, now I think it might be genuine ignorance, because you managed to read my pretty clear comment ("everyone is mentioning US cities, so obviously they're talking about the US") and contort it into whatever you're on about.
Once might be a coincidence, twice might be me overestimating how carefully people read other comments before jumping into conversations.
All social media (including HN) is horrible in some ways. And they all suffer from too many people being overly credulous to random comments.
But the problem with over credulity goes far beyond social media. I've gotten strong push back for telling people they shouldn't trust Wikipedia and should look at primary sources themselves.
> I've gotten strong push back for telling people they shouldn't trust Wikipedia and should look at primary sources themselves.
Yeah, but basically nobody is capable of evaluating those sources themselves, outside of very narrow topics.
Reading a Wikipedia page about Cicero? Better make sure you can read Latin and Greek, and also have a PhD in Roman history and preferably another one in Classical philosophy, or else you will always be stuck with other people's translations and interpretations. And no, reading a Loeb translation from the 1930s doesn't mean you will fully understand what he wrote, because so much of it hinges on specific words and what those words meant in the context they were written, and how you should interpret whole passages and how those passages relate to other authors and things that happened when he was alive and all that fun stuff.
And that's just one small subject in one discipline. Now move on to an article about Florence during the Renaissance and oh hey suddenly there are yet another couple of languages you should learn and another PhD to get.
> For those who don't attend the prestigious universities with large endowments, average in-state state-run University tuition is under $10K, though again a large percentage of students receive some form of aid or grants to bring that number down even further.
This is an extremely important point that keeps getting ignored. People keep comparing _public_ schools in Europe to _private_ schools in America.
To further your point, just about every place has a community college where you can do the first two years of your education for about half the price of the state school. The total tuition for this route (2 years at community college, 2 years at a state school) is going to average just under $30,000 for 4 years. Which is definitely in the "work your way through college" range.
And this is before any financial assistance, which the majority of students receive.
Foreigners keep talking about how crazy expensive college is in the U.S., but they're likely misled by people who took out large loans to go to extremely expensive private colleges. There's an easy way to stop this kind of debt - don't allow federal loans for private institutions. But no one is really interested in stopping it.
>People keep comparing _public_ schools in Europe to _private_ schools in America
Not necessarily the case. In Sweden, private schools are paid for by the government, assuming they have been approved by the CSN (central agency for study support, rough translation).
I don't know how that works in the rest of Europe, because I've never studied outside of Sweden. But in Sweden it doesn't really matter if the school is private or public. The only instance you have to pay yourself is if the school isn't sufficiently good to pass muster.
Also, again in Sweden at least, but possibly other parts of Europe as well, it's not just that the tuition is effectively $0: the government will pay any student enrolled in higher education a monthly support. Back in my day it was 10k SEK per month (roughly 1,000 USD), no strings attached. Not sure how it currently stands, but I imagine it hasn't changed much.
This money is meant to ease the burden on students, so that they can put more focus on studies.
"Working your way through collage" over here means you'll have a 20% job to pay for your cost of living minus the 10k SEK mentioned above.
The difference in cost of study is quite real, even taking your comment into account.
People tend to do this justification behavior where they claim their dopamine hits are good for them/their health/society, when in actuality it's detrimental.
Almost no political junkie I know has changed their view on Trump over the past decade. They'll spend hours a day, sometimes hours a week, focused on him, but it ends up absolutely having no positive impact on their selves or their lives (usually a large negative impact).
Then I ask them about their local politicians, where they stand on certain issues, what their record is, what's been happening with their local government - and they have absolutely no clue. They can't even recall who was running in the previous local primary, or why they voted for who they voted for.
They're wasting countless hours on Trump and national politics because it feels good. Then they won't even spend a fraction of that time learning about things that could actually make an important difference in their voting, because it's too boring for them. Even worse, many people will try to pass off these actions as being virtuous or being informed.
Um, I'm not from the US, so my comment was more general than that.
Politics exceeds politicians and specific partisan things. Politics shapes your life and that of your loved ones.
It's not simply about arguing online about stuff.
In my opinion one should be informed about local, national and world politics. Also history. What happens in the US unfortunately impacts my country (currently very directly; you are about to bail out Argentina, my country, just because Trump likes our president), so I'm paying attention.
>In my opinion one should be informed about local, national and world politics. Also history. What happens in the US unfortunately impacts my country (currently very directly; you are about to bail out Argentina, my country, just because Trump likes our president), so I'm paying attention.
What good does "paying attention" serve? Are you standing ready to send Trump a well timed tweet to get him on your side? Or maybe boycott US products? That's the problem with the 24/7 news cycle. There's "breaking news" happening all the time, and glued to your screen to stay "informed", but what does that actually do?
Moreover, the OP isn't even against staying informed. He specifically points out the contrast between being glued to some national issue that has no impact on his life and not being informed at all about any local issue that actually impacts his life.
I don't understand this position. What good does knowing anything about anything serve? What good does reading about history do?
I like being informed about the world and matters that affect me. Trump extending a lifeline to my disastrous government has implications for my life in our upcoming elections, and possibly beyond (they are saying the bailout comes with draconian "conditions"). I also care about more indirect ramifications and what it means for our sovereignty.
I like being informed about the world.
> He specifically points out the contrast between being glued to some national issue that has no impact on his life and not being informed at all about any local issue that actually impacts his life.
You can and should be informed about both. There are no issues with absolutely zero impact on your life. Maybe they won't have an impact now, immediately and in a way that you notice, but in the longer term they will. Even as a trend for your nation.
Everything in life is political (just not necessarily about political parties; not sure why people conflate the two things).
PS: I've never used TikTok; I'm arguing out of principle. I do use Facebook and Instagram though. I swore off Twitter even before the Musk era, so I wouldn't know what it's like now (I imagine not good).
How much time do you think people should invest in staying informed about politics?
The upthread discussion was about being glued to the 24/7 news cycle, which at least in the US focuses mostly on national political drama. If you're suggesting that people should spend most of their limited attention budget following that news cycle, then they won't have attention left for much else.
I don't think anyone in this thread would say that spending, say, 15 minutes a day getting caught up on political happenings is a bad thing. It only becomes harmful when it sucks up all of your attention (as it does for political junkies).
And Reddit's far more tightly censored than TikTok. Most subs won't even allow open discussion of certain hotly debated topics because the Reddit admins have threatened to shut them down (and shut down subs that didn't tightly censor discussion in the past). Twitter used to be pretty tightly censored as well. Right now there's a huge drama on Bluesky because many people want those who don't agree with them politically banned.
That's one of the things that's tiring about these debates. Too many people only view "free speech" as a rhetorical cudgel, using it to hit "the other side" when it's convenient, then immediately discarding it and going back to "freedom of speech doesn't mean freedom from consequences!" when it's not.
True, though very few (this says "over 1,500"[1]). And from everything I've seen, Spot appears to be a very expensive solution in search of a problem.
They also have Handle, a slow-moving robot on wheels with an arm for moving boxes. No idea how many have been sold, but it seems to be even fewer than Spot.
The robot (BigDog) in that video shows numerous capabilities that Spot still can't do (climbing over terrain like that, being able to respond to a kick like that, the part on the ice, etc.). Even 16 years later.
This only highlights the fact that making a cool prototype do a few cool things on video is far, far easier than making a commercial product that can consistently do these things reliably. It often takes decades to move from the former to the latter. And Figure hasn't even shown us particularly impressive things from its prototypes yet.
It's an unfair comparison. Yes, they're both 4 legged 'dogs', but they use radically different design criteria -- design criteria that the BigDog was used to refine.
I'm not surprised that a Honda Civic can't navigate the Dakar Rally route.