Post

Cube Talks: May 8th, 2026

Cube Talks

Disclaimer: This transcript was generated with AI assistance and has been manually reviewed and edited. Despite best efforts, some inaccuracies may remain — please use your best judgement when referencing specific statements.


TL;DR / TL;DL: Session discussing AI’s impact on security, red teaming vs penetration testing, Dirty Frag vulnerability, and career advice for cybersecurity professionals.

Listen on Spotify: Cube Talks – May 8th, 2026


FalconSpy: Hi, everyone. Welcome to this week’s Cube Talk. I’m your host, FalconSpy. This is your opportunity to ask our panel of staff and volunteers any questions you might have about Hack the Box and its services we offer, as well as InfoSec in general. We’ll do the best we can to answer as many questions as we can within the next hour. You can use the forward slash cube talk command to ask your question to our panel of staff and volunteers. You can use that same command to also upvote questions to the top of the queue. Questions are first in, first out, unless upvoted. We’ll introduce everybody on the panel here in case you have any questions you want to direct at them. And after we’re done, I’ll send out a broken record and then we’ll go to the questions. So in no particular order, we’ll start with our special guest, McKernal. Who are you and what do you do?

Pete McKernan: Yo, what’s going on? My name’s Pete McKernan. I am a red team operator. I’ve been working in like in and out of the federal private sector, doing different teams at large defense contractors for like the last 11 years. Before that, I come from the military. That’s kind of where I got my training. My background is in intelligence. And then before that, telecommunications. But the thing that I’d like to share with everybody is like I got started in this world developing video games. I used to work at Activision around the turn of the millennium. So if you’ve ever played like a Call of Duty or a Tony Hawk Pro Skater or a Spider-Man by Neversoft, that was the, you know, me and the team back in the day. So I’m a huge technologist. I’m big on educating, informing and entertaining people when possible. If you guys are hanging out in like the AI LLM channel here, I’m always trying to help people just get their, you know, hands on the new technology and find out where it fits. And yeah, I can’t believe that like I’m actually on a cube talk because like everybody else on here is like a legend. So like IPSEC, I watched so many of your videos coming up, man, like getting my repetition down. So yeah, really happy to be on like, and yeah, thank you to everyone. So then we got Jax.

Jexx: Hey, my name is Jexx. I work in the marketing department. I feel I kind of transitioned between this and the community team. I do events and stuff. I try to make your lives better. I listen to what you say and I try and throw that into the product team and make all your dreams come true. That’s a, that’s about it. I can’t really follow up, but I love Tony Hawk Pro Skater. So

ippsec: Hey, I’m ippsec. I’m a lab architect. I float between departments trying to just bring value where I can. You most likely know me from the videos.

FalconSpy: And I am FalconSpy one of the community specialists here. Also red teamer at Oracle. Also, I would always play Bob Burnquest because for whatever reason, I really liked the Christ air move. Mm hmm.

Jexx: The song is stuck in my head now. Yeah. Yeah. All right.

FalconSpy: Broken record time. Use the forward slash cubetalk command to ask your pound questions panel of staff and volunteers, so on and so forth. You can use that same command to also upvote questions. Questions are first in, first out, unless outvoted. That being said, let’s go to our questions. This one is our first upvoted question. So what do you think about the latest dirty frag vulnerability?

ippsec: I, it’s a standard kind of privesc. I think the interesting thing about it, this is the one that had like the embargo broken, right? Like it came out before a lot of the patches did, like it didn’t follow full responsible disclosure. I’m assuming that’s the dirty frag one. There’s been, I think three copy fail, copy fail to the electric bugaloo, and then dirty frag. I think kind of the interesting thing isn’t really the exploit because we’ve seen like privilege escalations since the dawn of security. The interesting thing here is I think a lot of like AIs and LLMs are making it so when one exploit is found, we like have this chain reaction for the next week or two probably of adjacent vulnerabilities being found. I think there’s going to be a lot more like effort into finding these exploits. Like maybe we have to increase responsible disclosure. I don’t know, but like it is a weird case where like we do this critical patching. Then two days later, there’s another critical patching. And then two days later, there’s yet another one. It’s also the issue around very little knowledge of an exploit has to be released before people can start making proof of concepts. Like if you go to kernel CTF, a lot of the kernel diffs of patches is enough for most LLMs to recreate that exploit. So you no longer have to wait for the author to release proof of concept. So like it puts a whole like damper in this responsible disclosure as we know

Pete McKernan: it. Yeah. Kind of something that I’m seeing in the same space, especially with like the recent release of like, you know, so many exploits in these categories is it’s where in the actual like relation to the primitive that is creating the exploit, are we not like seeing it yet? Because when you see all these iterations come out really fast, like I don’t think that’s going to change right now. The models are only getting more capable of being able to like iteratively develop these and you can test them in blast chambers and it’s really effective. But when we patch something like what we’re seeing now, and we see more downstream iterations start to appear because, you know, we’re not backing it up all the way to the place where it’s actually happening. That’s where I’m trying to spend my time when I see stuff that’s coming out. It’s how deep into the rabbit hole can I go and find out where is this like occurring at its root primitive state and then trying to find, you know, mitigations fixes or have conversations around it. But with all the responsible disclosure that’s going on lately, maybe you guys are seeing it the way that I’m seeing it, but like everybody is trying to get something out there. Like you go on GitHub every day and there’s just more and more stuff people are creating like crazy right now. So with that creation, like I try to push collaboration now, like if you discover something and you see someone else in the wild that might be on the same track, like try to get them together. Like we’re only going to arrive at better things when we work on these things together. That’s a personal philosophy. But yeah, that’s my two cents on it.

FalconSpy: I live under rocks. I don’t have anything to add to the dirty frag right now because I haven’t even read it. Two jobs and a family, very little time to actually get to do a lot of research. But when I do, hopefully I will have something to add if someone asks it at a later time.

Jexx: Just use Stigs and make the computer unusable. That’s how we should be operating.

ippsec: I’m really curious when the next like Eternal Blue is going to come out. Like the local priv asks, like they’re bad. But I think the key thing is like these AIs are finding exploits relatively quickly. I think it’s probably only a matter of time until like AI finds a big wormable exploit. And that’s going to be the scary time, man. Well, Eternal Blue, those were like

Pete McKernan: rocking days, right? Riding the high in the saddle, being able to get access to anything you want.

FalconSpy: Like, yeah, those are good times. When in doubt, it was always Eternal Blue for anything back in the

ippsec: day. Or wait, there’s like seven back back in the day. Well, it’s pretty funny, too. Like when you go

Pete McKernan: and you’re on assessment and you’re digging into some of these like larger enterprises and like legacy networks, things that they just can’t easily

FalconSpy: He’s robotic. We lost him.

Jexx: The AI has got Pete.

FalconSpy: Or Pete was the AI.

Jexx: Pete was the AI. We’ve been. Oh, man, we got deepfaked this entire time.

FalconSpy: Right. Pete, it sounds like you’re good. We didn’t hear you.

Pete McKernan: You went robotic for a while. Oh, man. OK. Yeah, I was just OK. So what I was saying is that like sometimes in relation to Eternal Blue, like you can go out there on enterprise networks that have like a lot of tech debt. They have a lot of history or like diverse operations across the board, like especially in the industrial control space, because a lot of those things are running just off PLCs that work well with old versions of Windows. Um, yeah, you you metasploit and MSF are still, you know, interpreter. They work places. So like always be on the lookout for that stuff. But I want to see the next Eternal Blue, too. Right. That helped us level up the security game in the Windows world. And we started really enforcing controls around SMB, making sure that people, you know, couldn’t just do whatever they want anonymously, guest accounts, snatch freaking hashes, all that stuff. So.

FalconSpy: So I know this isn’t Eternal Blue and we’re kind of going, I’m taking this on a little bit of a tangent, but would you say like Fire Sheep was like the Eternal Blue for like everything else before? Right. Because Fire Sheep was, you know, because a lot of sites weren’t using a TLS and SSL. And you could just steal everything like in terms of their credentials or cookies. Right. Just by sniffing. Right. Like.

Jexx: It’s crazy. I’ve never heard of Fire Sheep before.

ippsec: I don’t know if I’d say that because Fire Sheep was a physical attack. Maybe I’m not thinking of that one. Like it had to be on their Wi-Fi or network.

FalconSpy: Oh. Was it Fire Sheep, though?

ippsec: Fire Sheep was the Firefox plugin that would monitor Wi-Fi. Yeah.

FalconSpy: Okay. Yeah. That was it. Okay. Yeah. It’s not remote code execution. It’s not like Eternal Blue. But like that was like the big. Like that was one of the bigger things that happened in the history to like make or push the security.

ippsec: The force HTTPS.

FalconSpy: Still still a necessary thing. Right. What is, I guess, the fun one? What’s your favorite food?

Jexx: I love all foods equally. I am a trash can. Period.

ippsec: I don’t have one. If you said like favorite food I’d have to eat for the rest of my life, I would either go with like pizza or burger because you can do a lot of things with that and make variation. But like I don’t really have a favorite food.

Pete McKernan: So like my favorite food is seasonal and we are approaching the season where E3 usually would have happened, which was my favorite conference before it got canned. But yeah, there was this vendor that made bacon wrapped hot dogs outside of the Staples Center. And like, yeah, you can find me there four times a day.

FalconSpy: I don’t know if I have like a very specific food, but I do have a specific ethnic food that I really enjoy. And it’s Korean. Love the bulgogi. Kimchi. Galbi. Like anything Korean or like any Asian food, but like specifically Korean. We’ll go to our next one. AI vendors are promising automated, safely delivered exploit pen tests. What are your thoughts on this?

ippsec: Marketing hype. Like, I don’t even like there’s no such thing as safe and AI to my knowledge right now. Like even developers are finding out constantly that they like blacklist files and then the AI goes around the blacklist. It’s like, oh, you told me not to read dot ENV. Okay, I’m going to spin up a Docker and then read the proc self environment and get your environment variable that way. Because you didn’t tell me I couldn’t do that. Like, I don’t know if there is like there’s no good way to contain AIs right now. And I think that’s one of the biggest issues. And I think personally, that’s why like mythos isn’t being released to the public because they probably have to do a lot of internal tuning to make sure mythos is obedient. Like, the more capable the AI is, the more likely it is to break some type of rule and bend rule to like make you happy because that’s the AI’s job. And that’s what it’s trained to do is to make the user happy. So, yeah.

FalconSpy: Did I? Sorry, one sec. Did did 0xdf ever say where he worked or was that like, did you ever publicly share that at one point?

ippsec: Yeah, 0xdf has publicly said Anthropic.

FalconSpy: Okay, all right. Yeah, so we had Oxdf for everyone who knows. I mean, he left our HTB to go Anthropic. He helps write their guardrails and he’s definitely helped writing those guardrails, I’m sure, for mythos. So, I’m sure if he were here, he would probably say something very similar to IPSEC that like, there’s no real safe way to deliver some of these things right now, especially with, you know, we’ve heard the rumors of mythos breaking out of its, you know, supposed jail that it was put in and, you know, breaking out and doing things it’s not supposed to. But sorry to cut you off there, Pete, go for it.

Jexx: No, Pete, Pete, we had a conversation, a long conversation about this the other day, so.

Pete McKernan: Oh, yeah. I have a lot of strong feelings around the whole idea that like AI penetration testing and red teaming can actually be a thing, right? Because like, that’s my space. That’s what I’ve lived in for years. And when I hear people talk about it, you know, you kind of have to take everything with a grain of salt, to your point, IPSEC. Yeah, a lot of it’s marketing hype because we just like, they’re trying to get eyeballs and attention. But I think it’s not great that you’re trying to take this work completely out of people’s hands with some of the material that you see, right? I honestly don’t believe that it’s possible at this point. You can automate things. You have to be able to do a lot of engineering, like you were saying, to tune it for your environment. And I think that we all kind of have individual acceptable thresholds for what’s safe in my environment versus, you know, what I’m actually going to do. But nah, like, to me, penetration testing and red teaming, it really comes back down to human judgment and instinct, right? Like, how many problems do you look at where you’re going to be like, yeah, I need to fully enumerate, you know, this specific segment of my, you know, operation. From a penetration testing perspective, that might be practical. From red teaming, it’s really never practical because stealth and, you know, avoiding detections and measuring the performance of human defenders is kind of one of the most important things you get out of that, especially if you’re trying to understand what it looks like when advanced threat actors come knocking, you know, because they have different objectives versus cyber criminals versus ransomware gangs versus intelligence agencies and nation states themselves that are going to be approaching your org. So, yeah, you know, you see people out there that are trying to do it, you’re seeing people out there that are calling it AI red teaming, but look at what they’re putting out there. Everyone just kind of is approaching the same point where it started with like, hey, we’re going to make our own model that’s going to do this, you know, well, it turns out that building a model from scratch is not a very easy, cheap or practical endeavor. So, we see them, you know, like we’re going to adapt to something that’s out there that we can manipulate the guardrails and redefine what the hyper parameters are. And we start to see like, you know, that doesn’t work out very well. So, things start popping up, agents, rag pipelines, you start building on all this complexity. And, yes, can it be designed to aid a human operator? Yeah, I think incredibly effectively it can, right? Like, if you need parallelism, if you need to keep track of a lot of reconnaissance points and you want to be able to do them simultaneously and you can do it quietly, yeah, AI is a good solution for it. Can you just release it into your network and see what happens? No. Like, you can see people that are putting things out there. I won’t name any companies or anything specifically here, but if you go on LinkedIn and you see people that are talking about they can get it done with like 83 million tokens and that $244 for a run. Like, I kind of feel like that’s just fictionalized and idealized examples at this point, right? Like, one thing that, and this is the way that I think about it, right? So, it might be a little overly poetic for us. Penetration testers, red teamers, people that are solving problems in this space, we are entropy collapsing machines. Like, that is what we’re designed to do. We look at a giant pool of possibilities and every step you take into that pool, it reduces the entropy space as you move forward. So, if I was going to design a system, I’d start with that. What are AIs good at? Dealing with massive amounts of data, correlating it, organizing it. Maybe you can get it to a point where you can action it, like, reliably, but still, I think that Python automations are still a better solution right now because they’re reliable, scalable, and you need to know how to make it, right? Like, you can vibe code a Python automation and it’ll do all kinds of crazy stuff and you’re going to need to get in there and actually check it and apply best principles and practices and well document your code. You can do that in conjunction with AI, but, like, I don’t think it’s a good idea to hand the keys over to anything that requires human judgment and instinct to be involved in the loop. Innovation is also something that we don’t really hear a lot of people talking about and you can accelerate, you know, the road to maybe getting to an innovative idea. But with the exploits that we’re seeing, how many people are just going to, you know, come up with a prompt, come up with an agent, point it at something that exists and be like, hey, I need you to exhaustively dive into this and get me every potential variation that could be, you know, considered a unique offering so I can publish that out, you know. And then I feel really bad for responsible disclosure folks right now, you know, like, I have six things out to MITRE Atlas and I’m like, it takes them forever to get back right now, just because there’s so much submission going on. So I don’t know if, like, AI is going to be the thing that ever replaces what we do, right? Like, to a certain degree, it can give you assurance, but if you want something that is like, this report is valid, it’s because it’s got a human that signed it at the end, you know. So those are just my thoughts.

Jexx: I like the use case for humans still being in the seat on keyboard, period.

ippsec: At the end of the day, I think it mainly just comes down to the same reason you don’t want juniors doing full-out pen test against big companies, because they don’t have the experience to know, like, what to take a system down. Like I said, like, AIs are constantly making the mistakes. I bet if you look every week, you’ll see someone saying, hey, I wiped my Terraform, wiped my database. And those are just developers doing it, trying to make use of it, right? And what scares me is security people saying, I’m going to take this AI, and we’re just going to throw out your company. It’s going to get admin. It’s going to do things. Don’t worry. It probably won’t take your system down, but keep in mind, your own engineers accidentally did it. So I don’t know, like, how you safely do that. The other thing is, like, I like AI. It does make people more efficient, but I’m sure if you look at any company’s, like, outage tracker, you’ll see, like, GitHub, every service is no longer nearly as stable as it used to be because AI is making mistakes. Like, I think Coinbase just said they let go a lot of staff because they didn’t need non-developers are now pushing code, and Coinbase was down for, like, eight hours today. Maybe it’s still down. Like, it’s funny, like, you always see the companies announce, we’re going to do AI and lay off, and then you see them go down the next day.

Jexx: And then all of the jobs end up coming right back. And I wonder how many times they have to, like, learn this lesson of running into the same revolving door before they realize, like, it’s just not going to work that way. Yeah, hype marketing. As a person who is in marketing and who has to sometimes talk about these things, it is remarkably bad faith for a lot of these companies to not, like, realize just how in tune the person who is running the AI needs to be with, like, organizational needs. It’s, yeah, that’s…

Pete McKernan: Yo, Jesse, I’m going to cut in real quick because you’re talking about marketing, but, like, you’re going to be authentic marketing, all right? That’s why I love talking to you, man, because you actually, you know, you were a practitioner, and you’re trying to get back to it, and we talk about that kind of stuff. So, like, yeah, authentic marketing. That’s what we’re human marketing.

Jexx: Yeah, community. Yeah, community. What was the meme during the pandemic where all the animals were coming back to, like, the streets and stuff? We will rebuild.

ippsec: And with that 20-minute segment, I think I’m going to mark all AI questions as answered, which I’m sure we have. Perfect.

FalconSpy: I got one thing to add about the AI real quick before we mark everything as done. Yeah, no. AI, treat it like a junior developer or a junior pen tester or a junior red teamer. Like, it requires a ton of babysitting. It requires a lot of corrections. That’s all I’ll add to it.

Pete McKernan: I got to add a little bit more because it’s a good segue. All right. Sorry. Like, this will be it from Pete. But, like, you brought up the whole, like, the AI nuked our, you know, development database. It nuked our prod database. Yeah, so if we were Ryan to, like, 20 years ago, you know, you got young Pete, you know, really excited about his first tech job, you know, really eager to just kind of help out wherever he was. Well, you know, like, within that first week, I may have deleted a production database because someone asked me if I understood, you know, SQL management. And I was like, oh, yeah, of course, I just got done with a school, you know, they taught me how to do this. And then 30 minutes later, I had, like, the most intense two weeks of my life because I knew my job depended on restoring that database for this company, right? And I was a junior developer. I was able to, like, get the trust to do it. Someone, you know, was like, yep, bypass permissions on for Pete. And I 100% deleted it. Now, for someone that comes from, like, the world that I do, I’m always, like, I have conspiracy theories floating around. And I always like to kind of poke my tinfoil hat sense. But, like, when I see stuff about these prod databases getting detected, I always try to read, like, who it’s coming from, right? Because if we’re all out there experimenting, we’ve all, you know, let auto mode do something. It’s not fun to watch just, like, text cascade on your screen and read, write, happen, execute, fire off. I disagree with that. I love watching it. It scares me so much, man.

FalconSpy: Am I the only person who doesn’t YOLO mode everything? Like, I literally sit there and hit yes to everything.

Pete McKernan: But that’s, like, your finger typing skills, like that index finger, it’s got to be just, like, Arnold Schwarzenegger levels now, right?

ippsec: I mean, I’m YOLO mode, walking the dog on my phone, just giving a prompt, put it back in my pocket, wait 15 minutes, look, see, oh, this looks fine.

FalconSpy: No, I babysit the thing. I’m hitting yes to everything or no.

Pete McKernan: Well, it’s all about, like, letting it earn your trust, right? And, like, I won’t plug any of my stuff here, but, like, if you guys follow any of my stuff, like, I have cool things around controls and making sure it doesn’t run amok. But, like, it will still run amok. Just make sure that it earns trust. Like, a lot of my systems, the way that they’re built to kind of provide guardrails on things, like, my models that I train. And my training philosophy for models is, you know, like, train it like a human operator, right? Make sure you walk through problems with it, issue it corrections, measure the deltas, keep it on track. It’s slow going, but it produces reliable results. But, like, yeah, it’s, like, once you kind of, like, let it start doing things, I don’t know. When remote came on, I was like, oh, this is going to be great. I’m going to get a bunch of my life back. And it wasn’t. I was just walking around, just facing my phone, being like, okay, yeah, next prompt, go.

FalconSpy: All right, next question here. Do we, as in Hack the Box plan, on patching DirtyFrag on all of our Linux machines or just the new boxes moving forward?

ippsec: We are patching it on all active machines. None of the retired. We were about to patch copy fail. And, like, as we were pushing the update, that’s when DirtyFrag came out. So we’re like, oh, well, I guess we don’t spend the next two hours copying these machines everywhere because we have something else to patch. I think the patch will go out next week, probably early next week. That said, if DirtyFrag 2 or something comes out that makes us patch everything yet again, it will probably get delayed a little bit. But the machine, like, new machines that are being released will be fully up to date. The active machines, like, we’ll try our best to patch them. But there’s no point in releasing, like, the copy fail patch when everyone’s just going to use the DirtyExploit. And then there’s no point pushing the DirtyExploit if there’s going to be, like, a new PrivEsc. Like, when we patch it, it’s got to be patch patched.

Pete McKernan: But, IPSEC, if I need lots of Internet cool points, where can I rack them up easily with one-shot win buttons now?

Jexx: You want the 360 no-scope?

Pete McKernan: Yeah, all the time, right? Like, just leaderboard me to the top. I’m just, I’m being facetious.

ippsec: It’s all about experience points now, and you get those from retired content, so.

Jexx: I’m so happy that we did that. I don’t care what anybody says. Having retired machines as, like, XP-able things is awesome. I love retired machines.

Pete McKernan: Well, that’s, like, you know, I spend a lot of time on the Fed platform doing the enterprise stuff. Like, that’s where I do a lot of training with people from, you know, past lives in different sectors. And, yeah, like, that’s a phenomenal platform. Being able to spin anything up and track that progress is just, it’s phenomenal.

FalconSpy: All right, our next one here. What are your tips for approaching hard and insane-level machines? Do they mainly defer from easy-media machines by having more steps? And if so, can they be solved by breaking them into lots of small steps and tasks and doing a couple of them at a time?

Jexx: Yeah, I agree with that one. Because I found, like, attempting them, like, the research for it is really tough. So I rely on Ip on that one.

ippsec: I did not hear the full question.

FalconSpy: All good. So the question was basically saying, like, what are your tips for approaching the hard and insane machines? Do they defer in a way from easy to medium in that they have more steps that you have to do? And then if they do have more steps, is it easier to just kind of, like, break it down into small sub-steps? And then do those small sub-steps in, you know, clumps?

ippsec: Yeah. They definitely have more steps. But I don’t really think of machines in steps. I just think of them as, like, technologies. The hard machines are going to have a bit more of the advanced technologies that’s harder to wrap your head around. Insane machines, I think, at launch are always going to be insane. Like, we try to make insane machines where you have to, like, an exploit won’t work out of the box. So not only is it going to be probably some weird service you’re not exposed to that much, but hopefully just, like, you can’t run an exploit and get to work. You’ll have to do a little bit of troubleshooting and fixing up, like, an open source script or something like that. However, like, if you play the insane machine three to four weeks afterwards, chances are that script’s already going to be public, and that lowers the hardness of the insane machine. I think the best way going forward is, like, play it, watch videos. Once you finish watching videos, I would still stick to, like, retired machines and kind of do your own version of guided mode. Tell, like, AI to start asking you questions. You can feed it, like, OXDF’s website to come up with questions and then, like, have the AI be your tutor as you work the machine. And make sure, like, these instructions are, don’t tell me the answer. Just force me to go down the question path. And I think that will, like, handhold you on your way to solve it on your own.

Jexx: Can I raise one of these questions really quick? The most recent one seems like a quick one we can just dust off really quick.

FalconSpy: Sorry, I’ve been, like, marking some questions answered because they… You need the criteria. Yeah, I’m just going to read it.

Jexx: I’ll do it. Whatever. I’m new to Hack the Box platform. I don’t know how much… Don’t know much about queues and how to play… How to pay as a beginner to learn on platforms. So, this is the breakdown I always give people. If you look at all of the modules that are zero-tier modules, those are all free. Now, free is kind of, like, important to state because when you start, you have, I believe, 100 cubes. If you finish an entire module, you basically get those cubes back. So, you have to complete all of these zero-tier modules before going to the next one. So, as long as you do that, you can just dust off all of the zero-tier modules that we have to be able to, you know, just work through any of the beginner material that you need. So, that’s what I would focus on, which is really hard because I’m the person who likes to switch to multiple different things. So, like, staying on task for one module can be difficult, but that’s how you get through all of them. That is what I would suggest if you’re new here. Does anybody have any responses to that one?

FalconSpy: Outside of paying for cubes, which is definitely a thing, or getting a subscription, we do have giveaways here on Discord. Every now and then, we’ll do some cube giveaways. So, I would take a look at the giveaways channel. It’s under the server section or category, whatever you want to call it. We’re in the month of May now, so there’ll be another giveaway starting up probably soon. But, yeah, we try to do at least one giveaway every month, and the prize varies every month, month to month.

Jexx: Also, if you join Seasons, I think, whatever our prize is.

FalconSpy: Seasons will usually give some good stuff, too. So, I’m guessing based on this question, this one is for you, Pete. Are you Red Teamer, like the real Red Teamer, or just Active Directory stuff?

Pete McKernan: Ooh, a real Red Teamer, huh, man? Okay, so, yeah. If you want to start digging into what my definition of Red Teaming is, Red Teaming always involves crossing the human perimeter, right? So, like one of the benefits that you get, and you are doing this for organizations that have a high risk tolerance, like the United States government or any military or anything like that, is you’ll be in unique situations where you get to do things. Like, try to physically penetrate facilities that have armed guards in the situation where you have a letter where they do not know that they are coming. So, before you even get to touch Active Directory, there is a whole plethora of obstacles that you need to figure out how to negotiate, and that is straight up, like, that is where social engineering comes into play. That is where all your OSINT and reconnaissance is going to actually take care of you. Something that I use when I talk, like, about Red Teaming, like, if you talk about surveillance, that kind of, like, clears the check for me, and I’m like, oh, so you’ve done some, like, long-haul stakeouts on ECPs to see, like, how you can get in and researching the different technologies that protect these places. Red Teaming is about total defeat, right? Like, that’s the way that I see it. And if you are starting from, like, a white card position and you’re doing Red Team-like things, doing, like, stealth avoidance or stealth and detection avoidance on a network, there is still great value to that. But, like, I can count on one hand the number of organizations that have looked for my services, for me and my team, given us a scope that we’ve negotiated on, and it’s like, all right, I just need to know where my left and right lateral limits are, but I will be able to get in the building. I will be able to get to a computer, I will be able to operationalize that payload on that target, and I will be able to hand that back off to the greater team, because that’s the way that, like, advanced threat actors work, right? Like, the way APTs are going after it. So the highest level of, like, adversary simulation emulation, like, that is what I consider Red Team. Like, you can back it off, but it’s the second that you aren’t playing against, like, you know, environments where you have defenders looking for you, where you have stakes, and if you lose your foothold, like, you have to reinvent another one. Like, the thing that I hate having to do more than anything is redesign phishing campaigns. Phishing campaigns are, you know, if you set them up, and you do it smartly, and you have a lot of good recon following, like, weather events or sales that people are interested in in the area, you can get a lot of click rate. But if that gets burned because of, you know, you either were careless and were detected, and you have to, you know, come back out and do it again, like, that’s where a lot of, like, my focus with Red Teaming is. As soon as we talk about, like, the Active Directory, Linux, like, the different technologies that are present on networks, it’s like, all right, are you hunting me and are detections on? Because if you’re not, this is just kind of like a pen test or pen test plus, all of which have great value. But Red Teaming, like, it is that. Like, I believe in the military definition of it as it was defined when, like, they were putting together these special capabilities, and that’s what attracted me to it, right? Like, I worked in technology before I had the opportunity to do really, like, anything like that. So when the opportunity presented itself to go after the physical portion and actually get, like, cool targets that you would have to, like, get eyes on and do background on and figure out how you were going to manipulate this individual into granting you access to the kingdom, like, that was really, really, really fun for me, right? And, like, helping people get on the road to being able to do that and help, like, their organization see the value in it, like, that’s a real win when it can happen. Because this is essential stuff, right? Like, if you want to know how a criminal is going to hit your network, study criminals. Talk to criminals. Ask criminals what they do if, like, they had keys to everything and no one was looking. Because that’s the benefit of the field, right? If you get these professionals that think in that direction and then you bring them in and then they help find actual real problems, they’re going to stop you from getting ransomware, right? It’s going to stop you from being one of these companies out there that’s, like, oh, they are dwell time for this, like, you know, event. They were on the network for three months. And it’s, like, how, right? Like, I know that they’re just face jumping in between authorized accounts, but, like, where’s your detection profiles around, like, why is Sharon from accounting logging into the domain controller and pushing actions, right? Like, why is Kevin in maintenance sniffing the line and seeing what he can intercept? Like, there are subtle cues that you can, like, detect. There are indicators of compromise everywhere. But, like, what is another thing that Red Team tests? It’s, I don’t know, the attitude and maybe apathy towards cybersecurity in your organization. Because something that, like, is a conversation that I’m constantly having with people is, like, look, with AI coming, we’re maybe moving to an era where security is a little bit more ubiquitous, maybe a little bit more transparent, and maybe a little bit more, like, pleasant and easy to deal with for users across the board. But I think that, like, everybody in our community knows, man, like, we care the most about this. And it’s a challenge to communicate that in a way where you’re not seen as a sensationalist. So many people want to see the, like, well, show me how bad it is. But, like, come on, everybody at one point or another has that client that’s, like, yeah, we want to feel it. And you’re, like, no, you don’t, man. Like, I know you want to demonstrate impact, but we’re talking about, like, I can basically cut off every person external to this organization, start logging people out. And that’s just if I wanted to be nasty. Like, the thing that I really like to think about these days, especially with AI, is what happens when AI is, like, the dwell capability. How much more can you look at, see, collect when you have it dialed to do things stealthily, quietly, you know, under, like, an authorized context or using a profile or permission group that, you know, no one’s going to blink an eye at because it’s below the noise floor. So, yeah, like, a little bit of a rant there. I’ll stop now. Sorry. I get going.

ippsec: But, yeah, it was just my thought on it.

FalconSpy: All good.

ippsec: I think nine times out of ten, the red team, like, nine times out of ten, if a company buys a red team, that money would have been better spent elsewhere. That being said, I think sometimes the companies, like, the value you get from red teaming is it invokes an emotional response that hopefully makes action be taken. Because, like, until red teamers are no longer successful almost every engagement, I think it’s a waste of money. Because, like, you’re probably better spent in a vulnerability scan. However, many companies that offer those services are piss poor and do a bad job. But, like, with red teaming, you find out, like, one cool way normally how someone gets access. And there’s, in realistically, probably a hundred ways they could have done it. So, you’re better off, like, improving detect, like, spending a lot of money getting a really good defender to do detection, things like that. Like, I think maybe, like, having a service where a good purple teamer joins your organization, looks over Splunk, and hunts badness may be better than a red team. That being said, that also depends on the organization having Splunk, Elastic, or something to do centralized logging. Many of them don’t, so, like, that money’s kind of pointless there. But if you don’t have Splunk, you shouldn’t get a red team to begin with, because what are you going to do with that information?

Pete McKernan: Yeah. You nailed it, man. You nailed it. Like, purple team, right? Like, when the concept of purple team came out, there was all of a sudden this, like, great promise of value, and, like, that’s something I talk to a lot of people about to consider, right? Like, hey, the red team’s going to find the low-hanging fruit. How satisfied are you going to be when, you know, you fish on someone that… You get a fish on someone that isn’t a regular user, but they have access to your network, and now, because of the misconfiguration, I just go right to the domain controller, pull the dit, and I’m out, right? Like, now we have three weeks to kind of talk about how to be ninjas. But I won’t go on it. I won’t rant again. I’ll stop. That got me excited, though, man. And, like, purple teaming is something I talk about a lot right now, and a lot of people are starting to see that, like, that is where good spend is properly applied, so…

Jexx: I think I wrote a blog about purple teaming a while back. Whenever I think of red teaming, I think of, like, some of the different cases here in the U.S., like Coal Fire, where they had guys sent in, they had their laminated sheets, they had… There was, like, a jurisdictional debate between two different entities. I think it was, like, federal and the local jurisdiction was being challenged, and these guys got arrested from, like, a pen testing firm who was doing a red team engagement because they were trying to break into a courthouse. And so, like, it’s always wild to hear these stories about, like, physical hands-on red team engagements. Like, red team, red team. That’s what I think of when I think of red team. But I know that the word kind of gets blended around, like, crazy, because then you’ll be, like, oh, red team and blue team. And so now we’re trying to figure out, like, what is sort of siloed into all of those things. I get so tired as a person who writes, like, consistently about security where people want you to say it a particular way. But, like, in my mind, I think of, like, like the, like you said earlier, Pete, like the War Games-esque ideation of, like, what red teaming is. It’s, like, active threat. You are going to be, like, not super antagonistic, but you’re going to, you’re acting in, like, bad faith and trying to find everything that is, like, wrong there. But the term is just so, I don’t know.

Pete McKernan: Well, it’s popular. It’s popular right now, right? And when we have popular terms that sound high speed and sexy, you know, people can use that to sell. And they can use it to categorize things. And that’s just kind of the nature of the marketplace, right? Because we want people that are red teaming and we want them to be, like, gainfully employed. But, like anything else, like we, as a tribe of hackers do, we’re very good at adapting and overcoming and making sure that the value that we can provide is there. And we can work with people to make sure that there’s light that gets shown on it so that we can bring that forward to the marketplace. But, I mean, you know, like now we’re talking about, like, I’m a grown-up red teamer and I got to talk about, like, value and all this stuff getting done. Like, I just want to talk about, you know, points on the board, make things happen. But I think that, you know, maybe to kind of echo a lot of the sentiment that’s out there at risk of being repetitive. But, like, I do believe AI is really going to change the game. And it’s like the InfoSec color wheel, like some of these things may merge because it’s practical. Like, just again, purple teaming, man. Like, I would love to be able to convince everyone that purple teaming is the way to do it because running actual offensive, like, APT-grade actions and blast chambers where you can at least collect the artifacts around that and work with the defender directly. Like, that conversation right there alone, when you get red and blue together in a constructive environment, like, a lot of really good things come out of that when it’s good faith. Like, when you’re doing, like, an outbrief and they’re like, hey, you know, like, this was not, you know, something that we could detect. And you’re like, well, yeah, you could. You just, you know, turn this flag on and all of a sudden everything gets tagged and you’re seeing me all over the place. I’m lighting up the sock. Like, those are never fun conversations to have, especially, like, when they’re bosses in the room because it’s just kind of, like, it’s like an after action of a game that you won, you know? Like, my thoughts on it.

Jexx: I was, I’m looking for this blog. Okay, go ahead. Sorry.

FalconSpy: I was going to say, I should probably move on. We got a little bit of under 10 minutes left, so we’ll do the best we can to answer the many questions that we have left. But. This person completed the CPTS path, they want to take the exam, what do you advise them to do before taking said exam?

ippsec: Just understand you have two attempts to take the exam. I’d go into the first exam expecting to fail and just learning what you don’t know. And then use that feedback you get to figure out exactly what you need to study. I think if you just overstress about the very first attempt, then you’ll probably overstudy and study the wrong thing because it’s very hard to know what you don’t know. I don’t even know how you even go about doing that. And if you expect to pass the first time and you fail, you’ll probably be demotivated and it’ll be hard to actually find a way to take value out of that to know what to study next. So, I always like going to exams. I’m going to fail. See how I do. And the main purpose of the exam is to build a lesson plan going forward.

FalconSpy: All right. How is the future market looking for a tier one SOC analyst? Is it still possible to be a SOC analyst in the age of AI?

ippsec: Yes. I mean, I think humans are going to be in the loop. All that’s going to change is more will be expected from tier one. That’s not to say tier one is going to be harder because you have a better tool to handle that job, which is AI.

FalconSpy: I guess similar, but follow up question. Does binary exploitation have a future?

ippsec: I wish I had a dollar every time someone said this because like going back at 15 years, does binary exploitation have a future? We have the safe error handling concept in Windows. Oh, now we have ASLR. Oh, now we have the shadow stack. Oh, like this question gets asked every four years and exploit devs somehow still become a thing. Like, was it like, was it the PS3 just had a big hack against it? Someone found a new way to exploit it. There was an old game console that just had a new way to exploit that people didn’t really know. Like, I always have thought exploit dev was kind of a wasteful endeavor just because there’s easier ways to make money than to go like spend thousands of hours looking at code. But I mean, some people just love that and they do it for passion, not for money. So it’s always going to be a thing.

FalconSpy: All right. Is there any problem of jumping straight into the AI red teaming path over the CPTS path?

Pete McKernan: So I’ve done that and I don’t think so. I think if you want to take the AI red teaming path, jump in. The course that’s out there is excellent. There’s plenty that you can look up if you don’t understand. And it’s very easy to, like, trace those lines, too. I need to be more familiar with the concept. But no, I mean, I think that the AI red teaming course on Hack the Box is probably one of, like, the most sleeper valuable courses out there right now. And I think the COAE is just, like, you know, great cert. That’s why I got it.

ippsec: Learning is learning. The only, like, wrong answer is you not studying because you’re trying to debate what to do.

FalconSpy: When is the next review for community submissions expected? Sorry, community machine submissions.

ippsec: We do them every Monday. I think, like, we get 20 to 30 machines a week, it feels like now, with all AI slot. Like, we’re waiting to have a better system in place to do a lot of filtering. But, like, we had just this Monday, and I think even yesterday, we had, like, five content people sitting in a Google meeting going over write-ups. And, like, 50 minutes later, we finally hit our first box that was not pure AI generated.

FalconSpy: Is the copy fail, dirty frags, vulnerabilities patched in ProLabs?

ippsec: No. Same issue as their active boxes. We patched copy fail. And before we hit the, like, work to go push the new update, dirty frag came out. I want to say dirty frag came out, like, last night. So, things take time.

FalconSpy: As someone who is making a career change into cybersecurity with a long-term goal of becoming a red teamer, how do you see the career outlook evolving in the age of AI? What learning pathways or certifications would you recommend going for as a beginner to build the right foundation?

Pete McKernan: For me, personally, one, like, go on the gradient with Hack the Box. Like, Ipsec said, learn technology. Like, if you are someone that can think both in the, like, I can break this down into small solvable challenges while you simultaneously learn about the technology, like, I think that’ll serve you really well. And that kind of checks off a box for, like, a portion of what red teaming is. But oddly enough, I think the thing that feed or fed my red teaming, you know, background and goals the most was just quality assurance engineering from an early age, right? Like, that’s how I got into work. So, it was just constantly looking at things that I’m told work, that I’m told are functional, and then finding the little, you know, wiggle room in between the gears where I could get something in there and break the entire system. So, when, like, I became aware of what penetration testing was and the type of activities involved with that, it felt a lot just like live quality assurance and, like, quality assurance engineering. And a lot of the language work that I was doing at that time was in Python and C. So, that set me up really well to be able to look at certain classes of problems. Yeah, I spent some time doing exploit development, you know, like, a lot of late nights looking at lots of code. And, yeah, it’s, miles may vary. I think that your description of it is right on for how I feel about it. But, yeah, like, to build up your skill set as a red teamer, you can’t ignore the technical aspects. You need to try to see things through different lenses and just have a skeptical, like, eye towards anything that anyone ever tells you works the way that it works this way because we built it to work this way. Because, like, I think in almost every situation, if it’s, you know, one or two layers of complexity on top of anything, you can really start to get in there and see if you can peel that apart and make it do things that you want it to do. So.

ippsec: Yeah, I think security is just a field where, like, taking knowledge and applying it in a creative way to accomplish your goal. Like, McColonel has a bunch of skateboards behind him. Is skateboarding an efficient way compared to a bicycle to get from point A to point B? Is surfing or snowboarding an efficient way? No. Do you want to learn how to surf or snowboard but you’re not living by the ocean or a mountain? Then, like, skateboarding is a good way to start, like, getting on a board, learning how to do balancing on a board. And then when you get to a mountain, then you’ll have some skills you can transfer into snowboarding, right? That’s kind of like security to me. No matter what field you do, if you’re a sysadmin, a developer, or whatnot, like, you start building up those skills. And then when it comes to security of trying to find a way around it, you have some skills you can transfer into it. I just think, like, security, no matter what, as you learn, you can find a way to apply it to your actual job.

FalconSpy: So, all right, I think that’s it. I mean, unless we can answer a really short question, but I think a lot of these questions that are in here kind of require a lengthy answer.

Jexx: And bring them next week.

FalconSpy: Yeah. All right, well, we’ll wrap things up for this week. Thank you, everyone, for joining us for this week’s QTalk. We hope you had a good time. You can take a look at the top of Discord at our event section to see if you’re interested in any other of our events that are coming up. We have a Global Cyber Skills Benchmark CTF coming up next Friday. You can also see when QTalks go live in your local time zone. You can say you’re interested there as well if you’re interested in future events and receiving a notification. These are recorded, so the recording will be posted later. Still working on the YouTube section, but the recording will be posted on Spotify later. I want to have a moment to upload them. No after party this week. Sorry. And we will see you all next week.

Jexx: Thanks for joining us, Pete. Appreciate it, man.

Pete McKernan: Thank you so much for having me on. This is a blast. I very much like it. Glad to have you. It was fun. All right. Thanks, everyone.

ippsec: Bye.

This post is licensed under CC BY 4.0 by the author.