Hacker News

"Python script to extract a list of all Elastic IP's from all regions, from multiple AWS accounts."

ChatGPT4 gave me a solid answer hitting all the points I wanted. Phind didn't get the account handling correct, didn't address regions, and didn't handle pagination.
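For reference, a minimal sketch of what such a script could look like (not the answer either model produced). The profile names are placeholders for whatever accounts are in your ~/.aws/config; note that EC2's DescribeAddresses returns the full list in one call, so the pagination concern here is mostly about iterating regions and accounts:

```python
def iter_elastic_ips(profile):
    """Yield (region, address dict) for every Elastic IP in every region of one account."""
    import boto3  # imported lazily so format_address can be tested without boto3 installed
    session = boto3.Session(profile_name=profile)
    ec2 = session.client("ec2", region_name="us-east-1")
    for region in ec2.describe_regions()["Regions"]:
        name = region["RegionName"]
        regional = session.client("ec2", region_name=name)
        # describe_addresses is not paginated; it returns all addresses in the region.
        for addr in regional.describe_addresses()["Addresses"]:
            yield name, addr

def format_address(profile, region, addr):
    """Render one Elastic IP as a tab-separated line."""
    return f"{profile}\t{region}\t{addr.get('PublicIp', '-')}"

def main():
    # Hypothetical profile names -- substitute your own accounts.
    for profile in ["prod", "staging"]:
        for region, addr in iter_elastic_ips(profile):
            print(format_address(profile, region, addr))

# main()  # uncomment to run against your AWS accounts
```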

"Write a python based script that uses boto3 to query AWS Route53. It should print a list of every record for a given hosted zone ID."

ChatGPT4 did exactly as requested, with pagination, and even smartly decided to use "input" so I could give it a zone ID at run time. Phind didn't handle pagination or do ANY error handling. It was also slower than ChatGPT4 at generating the answer, and the code wasn't in a single copy-pasteable block.

ChatGPT's solution worked without modification. Copy-Paste. Run.
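For the second prompt, one way the correct shape could look: boto3 ships a built-in paginator for list_resource_record_sets, which covers the pagination Phind reportedly missed. This is a hedged sketch, not the answer ChatGPT produced:

```python
def iter_route53_records(zone_id):
    """Yield every record set in the hosted zone, following pagination."""
    import boto3  # imported lazily so format_record can be tested without boto3 installed
    client = boto3.client("route53")
    paginator = client.get_paginator("list_resource_record_sets")
    for page in paginator.paginate(HostedZoneId=zone_id):
        for record in page["ResourceRecordSets"]:
            yield record

def format_record(record):
    """Render one record set as a tab-separated line."""
    values = [r["Value"] for r in record.get("ResourceRecords", [])]
    if "AliasTarget" in record:
        values.append("ALIAS " + record["AliasTarget"]["DNSName"])
    return "\t".join([record["Name"], record["Type"], ",".join(values)])

def main():
    zone_id = input("Hosted zone ID: ")  # prompt at run time, as the comment describes
    for record in iter_route53_records(zone_id):
        print(format_record(record))

# main()  # uncomment to run against your hosted zone
```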



Just worked well for me: https://www.phind.com/search?cache=g9y2uizgjwcn378aovb65v92

We do have issues with consistency sometimes -- please try regenerating if that is the case.


"We do have issues with consistency sometimes" That's a strange statement. Having issues with consistency means that sometimes the output is wrong. What does it mean to have issues with consistency sometimes ? You're either consistent or you're not.


There's a difference between models that are incompetent and aren't capable of getting the right answers ever and models that are capable of getting the right answer but may not do so every time. The Phind Model is in the latter camp.

Consistency issues can be caused by a wide range of factors from inference hyperparameters to prompting.


I meant that saying "something is inconsistent sometimes" is weird because inconsistency implies "sometimes".


Your example didn’t include pagination.



