
It's nice that you think it's clear and responsive, but I think it [1] needs to be validated by an expert in both the material and education. Or we need some way to show that people have actually learned the topic. People sometimes prefer explanations that are intuitive and familiar but not accurate.

Meanwhile, there are math education resources like IXL that may cost a little money, but whose lessons and practice problems are fully curated by human experts (AFAICT). I'm not saying these resources are perfect either, but as a mathematician who has experimented a lot with LLMs, including in supposed tutoring modes, I find they make a lot of mistakes and take a lot of shortcuts that should materially decrease their effectiveness as tutors.

[1] LLM-based tutoring (edit: footnote added to clarify)



That's exactly what Math Academy is: I'm operating with a grounded set of correct, validated content, and using LLMs to (1) fill in more conceptual explanation and (2) check where I went off the rails when I get things wrong. You can't play the "hallucination" card here. An LLM can reliably do partial fraction decomposition, spot and solve an ODE that admits direct integration, calculate an arc length, invert a matrix, or resolve a gnarly web of trig identities. If you say a current frontier model can't do this --- and do it from OCR'd screencaps! --- I'll respond that you haven't tried.

I can't think of a single instance where o4 or GPT-5 got one of these problems wrong. They see maybe 6-12 of them per day from me. I've been doing this since February.


That's very interesting. Maybe you are doing this the right way, and my concern as a math educator is for the people who may struggle to stay on the straight and narrow, or know what the straight and narrow is in this brave new world.

Where I see deficiencies is not so much in the calculations. When a problem class has a solution algorithm and 10,000 worked examples online, I'm not too surprised that the LLM generalizes pretty reliably to that problem class.

The problem I find is more when it's tricky, out-of-distribution, not entirely on the "happy path" of what the 10,000 examples are about. In that case, LLM responses quickly become irrelevant, illogical, and Pavlovian. It's the math version of messing up the surgeon riddle when presented with a minor variation that is logically very easy, but isn't the popular version everyone talks about [1].

[1] https://www.thealgorithmicbridge.com/p/openai-researchers-ha...


The International Mathematical Olympiad challenges should be pretty safely out of distribution. Gemini and OpenAI's best research models both scored gold on that this year.


When they make a model with those abilities publicly available, I'll happily experiment with it, and I'd anticipate reporting that it is a lot better than what I experienced in the past.


The Gemini one is out now but expensive:

> Gemini Deep Think, our SOTA model with parallel thinking that won the IMO Gold Medal, is now available in the Gemini App for Ultra subscribers!!

https://twitter.com/OfficialLoganK/status/195126226151265943...


No, we're not going to move the goalposts here. By positing a sufficiently misguided user of a piece of technology, you can tweak any argument so that the thread goes nowhere and nobody can update their mental models. I'm saying: LLMs are quite good at math tutoring, in many ways probably significantly better than human tutors (they're tireless, can explain any concept 50 different ways, and can rattle off individualized problem sets in seconds). I made that claim, and you pushed back saying that anything I saw "needed to be validated by an expert". You even suggested I'm an unreliable narrator because I'm the one doing the studying. No, to all of this.


What makes you think https://www.mathacademy.com/faq hasn't been evaluated by experts?

That appears to be their whole thing, and they've been in business for longer than LLMs have been around.


I think before that question is worth asking, we have to know whether that FAQ even says anything about LLM-based tutoring. After a few minutes of research, I can't find any evidence that Math Academy offers LLM-based tutoring.


This was linked from the homepage: https://www.mathacademy.com/how-our-ai-works

But more importantly, if tptacek says they use LLMs and is a user of the platform, that's good enough for me.


I'm using LLMs alongside Math Academy. Math Academy uses machine learning generally (so they now market it as "AI"), but it's not transformer-style generative AI; as I understand it, it's just driving their underlying spaced repetition system (which is interleaved through lots of different units).

In the scenario I'm discussing, Math Academy's content is a non-generative source of truth, against which I've benchmarked GPT-5 and o4-mini.


Everything described there sounds like old-school adaptive algorithms. I don't see anything about generative AI or LLMs.

I asked Google if MA does LLM tutoring and got back this answer:

> Math Academy does not offer Large Language Model (LLM) tutoring. While the company advertises itself as "AI-powered," this is in reference to a machine-learning-based adaptive learning system, not an interactive LLM tutor.

And here is a HN comment that indicates LLMs are a complement to MA, not part of it: https://news.ycombinator.com/item?id=43281240


You're right, I may have misinterpreted what tptacek said: he said he was using LLMs and that he was using Math Academy, but I interpreted that as "Math Academy includes LLM features". Actually it's equally likely he's using Math Academy and having LLMs tutor him on the side.

(Confirmed I got this wrong: https://news.ycombinator.com/item?id=45439001)


You're confused. Math Academy isn't LLM-based. I use an LLM alongside it.


I think the parent was clearly referring to LLM use, not Math Academy.


I agree that LLM output needs to be validated to be valuable, but math (unless it's at quite a high level, I suppose) seems like one of the areas with the most potential for validation without requiring an expert to check everything.

If you're working on educational math problems with provided solutions, you can validate against the solutions. If you're working with proofs, you can evaluate them in a proof checker. Or you can run the resulting math expressions through a calculator.


There is a bit of oversimplification here.

Understanding whether the student has actually learned is a competency question; in math it's mostly "show your work" and/or "did you get the right answer?"

The continued top-down attempts to boil the whole sea with LLMs are part of the current problem.

They're getting pretty good at focused tutoring, though.

For students, models set up to tutor are too often trying to boil a sea (all of education) instead of a kiddie pool. The reality is that it increasingly looks like K-6, if not K-12, students can be supported.

If we look at the EdTech space from the bottom up, namely learner-centric, there is both a real need and opportunity.

For school-age students, math has largely not changed in hundreds of years and doesn't change often. Either you understand it or you have to put in the work.

There's no shortage of human-created written teaching resources. A teacher could create their own tutoring assistant based on their own explanations.

Alternatively, an open-source textbook could be used as input. There's a reason training or fine-tuning on books has caused lawsuits: it can increase accuracy many-fold.

Teachers are burdened with repetitive marking; there's definitely a place for personalized marking tools.

We know LLMs respond differently to different input. Their superpower is being able to regenerate an input in many different ways, which can include personalization.

Just because one has experimented with LLMs and come up short doesn't mean there isn't a way to get good results from them; we may simply not have figured out how yet.

If examples of the chat logs or prompts that did or didn't work can be shared, it helps us have the conversation without the subjectivity.

Mathematics is a great lens for seeing that folks are trying to get non-deterministic software to behave like all the deterministic software we've had before, instead of finding the places where non-deterministic strengths can shine.

It’s not all or nothing, or one or the other.


> I think it needs to be validated by an expert in both the material and education

LLMs getting it wrong is terrible when it matters, but I also don't think it's a huge problem when they're acting as an additional resource for learning. Here the parent is using a paid lesson plan and using an LLM for a little more explanation. It's similar to web searching a topic: sometimes you get a hit, sometimes you don't.

Asking LLMs for numeric examples of complex maths sometimes fails. It's easy to spot and no great loss. When it works, though, it's extremely helpful to follow through.


Not sure the condescending tone is really necessary. I'd agree with you if the parent comment had said they asked an LLM to create a math curriculum and problems for them. But they're using an established app created by a math major and then using LLMs to ask questions. It's easier to validate the responses you get back in those cases.


I think students are not a reliable source of information about the effectiveness of LLM tutoring. There is no 100% nice way to say this, but I did my best. You're free to disagree, but I think the tone criticism is off-base.


I agree with you completely. People mistake the impression of learning for learning itself all too easily. This is why we have examinations and other tests of mastery, after all. I think using LLMs to generate exams or supplementary material is great, but using them to develop the accurate understanding that actually turns into long-term retention seems dubious to me.


We found our way to "No True Math Student". I love it!


It's interesting how people insist math requires expert validation when it's literally the most self-validating subject there is. The instinct to gatekeep even something as mechanistically checkable as algebra says more about insecurity in education than it does about rigor.


Wanting an actual check on the device that is notorious for making things up is gatekeeping now?


You're projecting a bad-faith use case that the original commenter never described. They're using it in an exploratory and iterative way, not a deferential one.


If you're using it for education it is by definition deferential.


No it isn't. Again, what's happening here I think is that this thread doesn't understand what Math Academy is. It's not an LLM. I'm using the LLM alongside it.


"5.11 or 5.9 which number is greater?" was a meme query a few months ago to ask an LLM as it would confidenly prove how 5.11 is greater - so yes, we do need expert validation!


A very, very big problem we have with LLM discourse is that LLMs have changed radically since the beginning of last year. If you're making an argument about modern foundation models based on the idea that they can't generate reliably correct answers to whether 5.11 is greater than 5.9, your mental model is completely out of date.

You don't have to believe me on this, just your own lying eyes. Go try this for yourself right now: ask it for dy/dx of h(x)/g(x), where h(x) is x^3 + 1 and g(x) is -2e^x. That's a random Math Academy review problem I did last night that I pulled out of Notes.app. Go look.


I think you're misreading the situation. The original commenter isn't outsourcing thinking; they're using the tool to probe and test ideas, not to blindly accept end results, which LLMs are (currently) not to be trusted to provide.



