
80% of being a good security engineer is knowing the big picture: all the parts and how they work together. The correlation an LLM produces has no value if it's not actionable. You are the one who determines the weights, values, and features that matter. I'd be very curious how you currently account for scheduled outages, unscheduled outages, new deployments, upgrades of existing systems, spinning instances up and down for testing, laptop device swap-outs, and traffic in different silos. How are you baselining normal communications and session timing between services or across protocols? If you are in the cloud, is baselining done per service (HTTP, DNS, DB, etc.)? I could see different weights being constructed to represent defense-in-depth, but this seems like a constant stream of work on top of investigating alerts and feeding true/false positives back into the system.
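To make the baselining question concrete, here is a minimal sketch of the idea: group session durations by (service, protocol), compute a per-group baseline, and flag sessions that deviate from it. The record fields (`service`, `protocol`, `duration`) and the z-score cutoff are assumptions for illustration, not any particular product's approach.

```python
# Hypothetical sketch: baseline session timing per (service, protocol)
# and flag outliers by z-score. Field names are assumed.
from collections import defaultdict
from statistics import mean, stdev

def baseline(flows):
    """Group session durations by (service, protocol); return mean/stdev."""
    groups = defaultdict(list)
    for f in flows:
        groups[(f["service"], f["protocol"])].append(f["duration"])
    return {k: (mean(v), stdev(v)) for k, v in groups.items() if len(v) > 1}

def anomalies(flows, base, z=3.0):
    """Flag sessions more than z standard deviations from their baseline."""
    flagged = []
    for f in flows:
        key = (f["service"], f["protocol"])
        if key in base:
            mu, sigma = base[key]
            if sigma > 0 and abs(f["duration"] - mu) > z * sigma:
                flagged.append(f)
    return flagged
```

Even this toy version shows the maintenance problem: every deployment, upgrade, or test instance shifts the distributions, so the baselines need constant re-learning or manual exception handling.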

Entry-level cybersecurity isn't a thing, which is why it isn't working out: you typically need prior dev, ops, devops, SRE, sysadmin, etc. experience. The talent shortage exists because you can't do an undergrad in cybersecurity and somehow pick up the operational knowledge that develops your skills for understanding and troubleshooting how systems, networks, and applications function together. Cybersecurity as it stands is, as you mention and in my experience, best as a specialization on top of computer science. Even the CISSP requires working experience in the field.

The one thing I think you are overlooking is that you have the experience of how everything works together. That is what makes a tool like ChatGPT, or some other analyzer where you can ask the "right" questions, useful: through experience you have the mental maps and models of which questions to ask. So while the security analyst job might go away, you are back at the original problem of developing security engineers who know the architecture, flows, daily expectations, etc., and having an LLM buddy is not going to turn a security analyst into a cybersecurity engineer overnight.



> The correlation an LLM produces has no value if it's not actionable.

For security, there are two parts to this:

- correlation within detection engines, i.e. what CrowdStrike does: CS and its peers are already doing what you describe (baselining normal system and identity behaviors). It's still hit-or-miss, but noticeably better than a few years ago, and I think the current AI era will push it further. These tools have already removed the need for several sec eng hires.

- correlation across logs, i.e. an incident is happening, the team is under time pressure and stress, and usually this means an IR team putting together ad hoc search queries. LLMs, since many of them seem to have indexed query-language docs and much of the public documentation for AWS, O365, etc., are an almost invaluable tool here. It's hard to overstate how much they speed up security work, both pre-incident prep and in-incident IR.
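As a concrete example of the kind of ad hoc correlation an IR team throws together (and that an LLM can draft in seconds): flag source IPs with a burst of failed logins followed by a success. The log schema here (`ts`, `src_ip`, `result`) and thresholds are hypothetical, just a sketch of the pattern.

```python
# Hypothetical IR sketch: correlate repeated auth failures with a later
# success from the same source IP. Log field names are assumptions.
from datetime import datetime, timedelta

def suspicious_logins(auth_logs, window_minutes=10, fail_threshold=5):
    """Flag IPs with >= fail_threshold failures followed by a success
    within window_minutes: a crude brute-force-then-compromise pattern."""
    events = sorted(auth_logs, key=lambda e: e["ts"])
    fails = {}  # src_ip -> list of failure timestamps
    window = timedelta(minutes=window_minutes)
    hits = []
    for e in events:
        ts = datetime.fromisoformat(e["ts"])
        ip = e["src_ip"]
        if e["result"] == "fail":
            fails.setdefault(ip, []).append(ts)
        elif e["result"] == "success":
            recent = [t for t in fails.get(ip, []) if ts - t <= window]
            if len(recent) >= fail_threshold:
                hits.append({"src_ip": ip, "success_at": e["ts"],
                             "recent_failures": len(recent)})
    return hits
```

In practice this would be a SIEM query (SPL, KQL, etc.) rather than a script, but the correlation logic is the same, and that translation is exactly what the LLM is good at.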

> where you can ask the "right" questions a useful tool because...

Yes, this specifically is one of the great value-adds currently: gaining context much quicker than the usual pace. For security incidents, and for the self-build use cases that security engineers often run into, this aspect alone is enough to be a huge value-add.

And I agree, it will exacerbate the existing version of this, which is my point on replacing analysts:

> you are back at the original problem of developing security engineers...

This is already a problem, and LLMs help fix the immediate generation's version of it. It's hard to find good sec engs to fill the developmental sec eng roles, so those roles get handed to LLMs instead. Where that leads, I don't know. But it is certainly happening.



