this is a huge problem for cybersecurity...


    Summary

    In this video, Low Level examines a critical issue in cybersecurity: the misuse of AI in bug bounty programs. Using the fabricated "HTTP3 stream dependency cycle exploit" reported against curl as a case study, he explains how AI-generated bug reports can mislead security engineers, exhausting triage resources and letting real issues slip through the cracks. This AI-driven "denial of service" on maintainers underscores the need to carefully vet AI outputs in cybersecurity work.

      Highlights

      • AI-driven bug reports are creating noise in cybersecurity, making it challenging to identify real threats. 🔍
      • Fake bug reports, termed "AI slop," exhaust security resources and complicate threat assessment. 🛠️
      • Daniel Stenberg demands transparency about AI usage in bug submissions. 📢
      • There's a risk of security exhaustion, with AI reports swamping genuine concerns. ⚡
      • The cybersecurity community is in a precarious balance with AI tools—handle with care. 🚦

      Key Takeaways

      • AI misuse in bug bounty programs is causing cybersecurity concerns. 🤖
      • AI-generated bug reports can be misleading, risking genuine oversight. 🚨
      • Security resources are at risk of being overwhelmed by AI noise. 📉
      • AI's capability in security is both promising and troubling—use it wisely. ⚖️
      • Security engineers face unique challenges amid AI advancements. 💻

      Overview

      In this eye-opening discussion, Low Level illuminates a pressing issue in cybersecurity: the misuse of AI in bug bounty programs. This talk revolves around the specific "HTTP3 stream dependency cycle exploit," illustrating how AI's intervention can sometimes be more of a hindrance than a help. While AI has the potential to streamline bug discovery, its unchecked application leads to a deluge of erroneous reports, stretching security teams thin and threatening to obscure genuine vulnerabilities.

        The speaker goes on to highlight the specific challenges posed by these AI-generated submissions. These reports, often referred to as "AI slop," can overwhelm security personnel who must sift through layers of false alarms to uncover legitimate threats. Famous programmer and maintainer of curl, Daniel Stenberg, has voiced his frustration, now requiring bug reports to disclose AI involvement. This transparency is crucial as it helps manage the influx of AI-induced noise, ensuring that real issues aren't overlooked.

          Despite these challenges, the talk is hopeful about the potential for AI in cybersecurity, albeit with caution. AI can process vast amounts of data quickly, which could eventually be harnessed to improve security practices. However, as Low Level warns, relying on AI without solid oversight invites significant security oversights. The discussion concludes by urging security researchers who use AI to remain vigilant and cross-verify its outputs (for example, by confirming that the functions a report names actually exist in the target codebase, as in the sketch below) to avoid contributing to this "AI slop."
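
          One cheap way to do that vetting, sketched below in Python: before spending any triage time, check whether the functions a report names exist anywhere in the target source tree. This is the check Daniel Stenberg performed by hand; the repository paths and the script itself are illustrative assumptions, not tooling shown in the video.

              import pathlib

              def symbol_exists(src_root, symbol):
                  """Return True if `symbol` appears in any C source or header under src_root."""
                  for path in pathlib.Path(src_root).rglob("*.[ch]"):
                      try:
                          if symbol in path.read_text(errors="ignore"):
                              return True
                      except OSError:
                          continue  # unreadable file; skip it
                  return False

              # The report claimed a crash in this function; it exists in neither ngtcp2
              # nor nghttp3, which is what exposed the submission as AI slop.
              claimed = "ngtcp2_http3_handle_priority_frame"
              for repo in ("./ngtcp2", "./nghttp3"):  # local checkouts (assumption)
                  verdict = "contains" if symbol_exists(repo, claimed) else "does NOT contain"
                  print(repo, verdict, claimed)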

            Chapters

            • 00:00 - 00:30: Introduction to AI and Cybersecurity The chapter introduces the impact of AI on cybersecurity. It notes the ongoing debate over AI's societal effects, both positive and negative, then narrows to a concrete case where AI causes harm: the "HTTP3 stream dependency cycle exploit" reported on the bug bounty platform HackerOne.
            • 00:30 - 01:00: HackerOne and Bug Bounties The chapter discusses the financial incentives in bug bounty programs, using curl's program on the HackerOne platform as the example. A critical bug in curl can pay up to $9,200, and over $16,000 has been paid out from curl's program alone in its roughly six years on HackerOne. A particular report from a researcher known as Evil Gen X is highlighted: a seemingly legitimate exploit that uses stream dependency cycles in HTTP/3, leading to memory corruption and possibly a denial-of-service attack (the cycle sketch after this chapter list illustrates what such a dependency cycle looks like).
            • 01:00 - 02:00: The Novel Exploit The chapter titled 'The Novel Exploit' describes the claimed vulnerability in curl, the widely used web querying tool: a malicious server could supposedly gain code execution on clients using HTTP/3, the newer version of HTTP that runs over the QUIC protocol with a new framing format. The chapter also walks through setting up a server environment using aioquic (see the aioquic sketch after this list).
            • 02:00 - 03:00: Discrepancy in the Bug Report In the chapter titled 'Discrepancy in the Bug Report,' the text describes a technical investigation of the claimed curl crash. The reporter says they set their core dump limit to unlimited and used GDB to debug (the core-dump sketch after this list mirrors that setup). The crash was traced to a function in ngtcp2, a library that implements QUIC for HTTP/3 traffic. The report states that the register R15, which it describes as the return address, had been overwritten, presenting this as evidence of execution control, and thus potential code execution in curl. The chapter outlines what such a bug would mean for curl's stability and security.
            • 03:00 - 04:00: AI Hallucinations in Bug Reports The chapter 'AI Hallucinations in Bug Reports' examines how AI systems can inject inaccurate or misleading information into bug reports. At first, nothing seems wrong: the environment for reproducing the bug and the method of execution are clearly described. The concerns begin in the ticket's comments, where a triager notes that the supplied patch file does not apply cleanly against the main branch and asks the reporter to confirm that their starting assumptions match before analysis begins, highlighting how much triage depends on clear setup and communication.
            • 04:00 - 05:00: Responses from Daniel Stenberg This chapter examines the style and structure of the responses on the ticket. The thread shifts from a human, conversational tone to a noticeably robotic one, suggesting AI-generated content. At this pivotal point, Daniel Stenberg enters the conversation.
            • 05:00 - 06:00: Challenges in Bug Triage The chapter titled 'Challenges in Bug Triage' covers the difficulties of triaging bug reports. The speaker references an earlier video about Daniel Stenberg, the primary maintainer of curl, who has written extensively on writing safe C code. The reported bug allegedly results from stack recursion in the ngtcp2 or nghttp3 code, but the maintainer points out that no function with the reported name exists in either library and asks for clarification.
            • 06:00 - 07:00: Implications of Overloaded Security Resources The chapter titled 'Implications of Overloaded Security Resources' discusses how a non-existent function was hallucinated, by either the researcher or an AI tool, producing a false crash report claiming a stack overflow via recursion. The incident illustrates how convincing AI-generated misinformation can be: the narrator admits he initially believed the report, underscoring how hard it has become to separate AI fabrications from reality.
            • 07:00 - 08:00: Incentives and Malicious Use Cases The chapter titled 'Incentives and Malicious Use Cases' builds on the moment when Daniel Stenberg confirms that a function named in the bug submission does not exist in the codebase. Infuriated by the AI-generated inaccuracies, he posts new rules on LinkedIn for submitting security reports to HackerOne, requiring reporters to disclose whether they used AI tools to find or write up the bug. The episode illustrates the tensions AI introduces into software bug reporting.
            • 08:00 - 09:00: Current Limitations of AI in Security The chapter explores AI's current limitations in security work. Security engineers are responsible for identifying software vulnerabilities, writing mitigations, and ensuring such issues don't recur. A threshold has been reached: submissions deemed low-quality 'AI slop' now result in an instant ban for the reporter, reflecting justified caution against relying on AI output without thorough human oversight in roles that demand nuanced judgment.
            • 09:00 - 10:00: Advice for Security Researchers Using AI The chapter discusses the strain AI-generated reports place on the security industry. The core problem is that security resources, the people and processes that triage reports, do not scale linearly with report volume: if submissions increase 2x, 3x, or 4x, there are not enough people to triage each one, determine whether the issue is real, and fix it. Daniel highlights a growing concern that over-reliance on AI tools erodes trust and clogs the handling of genuine vulnerabilities.
            • 10:00 - 11:00: Learning Rust Programming The chapter closes out the discussion of false bug reports, which act as a denial of service against the security community: reviewers' time gets exhausted, leaving too few people to examine legitimate submissions, a dangerous outcome requiring careful management and prioritization of review processes. It then pivots to the speaker's Rust course.
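
            The cycle sketch referenced above is a minimal toy model in Python (an illustrative assumption, not curl or ngtcp2 code) of what a stream dependency cycle is and why naively recursing over one blows the stack, which is exactly the kind of crash the report claimed.

                # Toy model of HTTP/3-style stream priorities: each stream may depend on a
                # parent stream. A malicious peer could declare 3 -> 5 -> 7 -> 3, a cycle.
                deps = {3: 5, 5: 7, 7: 3}

                def naive_priority_walk(stream_id):
                    """Walk up the dependency chain recursively; loops forever on a cycle."""
                    parent = deps.get(stream_id)
                    if parent is None:
                        return stream_id                # reached the root stream
                    return naive_priority_walk(parent)  # unbounded recursion on a cycle

                def safe_priority_walk(stream_id):
                    """The same walk with the cycle detection a real implementation needs."""
                    seen = set()
                    while stream_id in deps:
                        if stream_id in seen:
                            raise ValueError(f"dependency cycle at stream {stream_id}")
                        seen.add(stream_id)
                        stream_id = deps[stream_id]
                    return stream_id

                try:
                    naive_priority_walk(3)              # Python's stand-in for a stack overflow
                except RecursionError:
                    print("naive walk overflowed the stack")

                try:
                    safe_priority_walk(3)               # rejects the cycle instead of crashing
                except ValueError as err:
                    print("safe walk detected:", err)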
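
            The aioquic sketch: the garbled 'AIOQ' in the summaries and transcript almost certainly refers to aioquic, a Python implementation of QUIC and HTTP/3. Below is a minimal sketch of standing up a local QUIC endpoint with it, assuming self-signed certificate files; a full HTTP/3 test server (such as aioquic's bundled http3_server.py example) would layer an HTTP/3 handler on top of this.

                import asyncio

                from aioquic.asyncio import serve
                from aioquic.quic.configuration import QuicConfiguration

                async def main():
                    # Server-side QUIC configuration advertising HTTP/3 via ALPN.
                    configuration = QuicConfiguration(is_client=False, alpn_protocols=["h3"])
                    # Self-signed cert/key paths are placeholders for a local test setup.
                    configuration.load_cert_chain("ssl_cert.pem", "ssl_key.pem")

                    # Listen for QUIC connections on localhost:4433.
                    await serve("127.0.0.1", 4433, configuration=configuration)
                    await asyncio.Future()  # keep the server running

                if __name__ == "__main__":
                    asyncio.run(main())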
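
            The core-dump sketch: the report's debugging steps (core dump limit set to unlimited, crash inspected under GDB) can be mirrored from Python on Linux. The curl flags, URL, and binary path below are assumptions based on the described setup, not commands taken from the report.

                import resource
                import subprocess

                # Equivalent of `ulimit -c unlimited`; the limit is inherited by children.
                resource.setrlimit(resource.RLIMIT_CORE,
                                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

                # Point an HTTP/3-capable curl build at the local aioquic test server.
                subprocess.run(["curl", "--http3-only", "--insecure",
                                "https://127.0.0.1:4433/"])

                # If curl crashes, a core file is written; inspect it with, e.g.:
                #   gdb /usr/local/bin/curl core
                # then `info registers` to check claims like "R15 is all 0x41 bytes".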

            this is a huge problem for cybersecurity... Transcription

            • 00:00 - 00:30 This is a big deal when it comes to cybersecurity. There's a lot of conversation going on nowadays about the use of AI, whether it be good for society or bad for society. I want to talk about a very concrete example of where AI is bad, and the trend being shown here is not great. It all starts with this HTTP3 stream dependency cycle exploit that was reported on HackerOne. If you don't know what HackerOne is, it's a bug bounty platform. Organizations can register on HackerOne, and then researchers can submit bug findings for
            • 00:30 - 01:00 money. Basically, if you find a critical bug in curl, for example, you can get up to $9,200, and they do pay, right? This has paid out $16,000 so far just from curl alone, and they've only been on HackerOne for about 6 years. So, money is being paid. This report came from a researcher called Evil Gen X, whose whole background we'll get into in a minute. And they describe what seems like a pretty legit exploit: a novel exploit leveraging stream dependency cycles in HTTP/3, resulting in memory corruption and potential denial
            • 01:00 - 01:30 of service. That's pretty bad. The attack surface here being: if someone is using curl, you know, a very commonly used tool for querying a web server and getting a response, and they use a newer version of HTTP, HTTP/3, which uses QUIC and a new framing protocol format, there's supposedly a vulnerability in curl that the server could use to get code execution on the client. That's a pretty big deal. So they go through how to set up the server using aioquic. They give a pretty good breakdown of how to set up the environment. They
            • 01:30 - 02:00 go into the crash that they found. Supposedly they set their core dump limit to unlimited, set up GDB on curl, and said, wow, look, we have a crash in this function, ngtcp2_http3_handle_priority_frame, in ngtcp2, a library used to implement QUIC for HTTP/3. And they say that R15 is set to all A's, R15 being, per the report, the return address in the ARM architecture. This happening would mean there is a bug that gives you code control over curl.
            • 02:00 - 02:30 And so far, you know, as I'm reading this, not a lot of alarms are going off. It seems like they give a good way to reproduce the environment. They give a good way to do the exploit. And then they have a crash that happens at PC. Okay, so what's the problem? If you read the comments on this ticket: as a start, the patch file supplied does not apply, at least against the main branch of aioquic. "Before we start analysis, I want to make sure that starting assumptions are the same. Can you explain where you want send cyclic priority to be injected?" This is where
            • 02:30 - 03:00 things get a little hairy, right? If you read the style of this response, if you read the structure of it, they went from very human, very normal, what you would expect from a bug report thread where humans are talking, and then the reply goes very robotic in this issue summary: "What is cyclic dependency?" And it starts to smell a little more like AI. This is where Daniel Stenberg hops
            • 03:00 - 03:30 in. I did a video about him and his coding principles previously. Again, he is the maintainer, the primary owner of curl. He has a whole blog post he wrote about how to write safe C. Now he steps in. Notice that the bug is supposedly due to a stack recursion in ngtcp2_http3_handle_priority_frame. That is what the researcher reported. "There is no function named like this in the current ngtcp2 or nghttp3. Please clarify what you are talking about,
            • 03:30 - 04:00 which versions of this library did you find the problem in," etc., etc., etc. "I call this AI slop," and he closes the ticket. Somebody, be it the researcher themselves or, more likely, the AI the researcher is using, hallucinated a function and created a crash report for a stack overflow via recursion in a function that doesn't exist. We live in truly crazy times that this is even possible. And until this comment in the thread, I was bought in. I was
            • 04:00 - 04:30 like, "Oh, okay. That's a little weird that he typed like that, but okay, whatever." And then, you know, Daniel comes in and he's like, "Hey man, by the way, the function you are talking about does not exist in the codebase. So what's the plan?" Now, he is so infuriated by this that he actually goes on LinkedIn to post his new rules for submitting bugs to HackerOne. Every reporter submitting security reports on HackerOne for curl now needs to answer this question: did you use an AI to find the problem or generate the submission?
            • 04:30 - 05:00 If they did, they can expect a stream of proof-of-actual-intelligence follow-up questions. "We will now ban every reporter instantly who submits reports we deem AI slop. A threshold has been reached. We are effectively being DDoSed." Here is the issue that's happening in security. My day job is security engineering. My job is literally to find bugs in software, write up mitigations, and make sure that bugs don't happen in software. Security resources, like the people whose job it is to take bug reports, triage them, and
            • 05:00 - 05:30 fix them, do not scale linearly with the number of reports. Meaning, if there were a 2x, 3x, 4x increase in the number of reports being generated, there are not enough security people or security processes to look at all these reports and make them go away: to triage them, figure out the source of the bug, determine whether the issue is real or a non-issue, and if it's real, find the fix. What Daniel is describing here is a very scary thing I see happening in the world of AI-powered researchers, where a lot of trust
            • 05:30 - 06:00 is being put into these AI engines, right? We have people submitting reports that make claims about bugs that don't even exist, or that somehow find crashes in functions that aren't real. This is a denial of service of the security community that can lead to one of two very dangerous outcomes. The first one being we could just completely exhaust the community, right? We could create a scenario where there are just not enough people to review all of the bug reports
            • 06:00 - 06:30 and to fix all of the bugs that are reported. Or we could create a scenario where people are reporting both legitimate and illegitimate bugs, and legitimate bugs slip through the cracks because, eh, that one's just AI slop. This is part of my gripe with the whole bug bounty community. Obviously, the bug bounty community is a net positive, and it's a good thing that people are getting compensated for finding bugs in software. In a perfect world, people would just go find bugs for free, companies wouldn't have to pay them out, we would
            • 06:30 - 07:00 submit all the bugs we find, and there would be no more bugs, for no money. Okay, that's not the world we live in. We live in a capitalist society where people rightfully want to be compensated for their time. So when you have to spend time on something, you would like to receive some kind of money, or some kind of compensation, for it. And that's why bug bounty payouts exist. Now, there's this weird incentive structure where you are going to look for bugs and try to find bugs at almost any cost, just so you can get that compensation. Like, you know, $9,000 is a huge chunk of change for anybody, right? If you can get one of
            • 07:00 - 07:30 these a month, if you know how to do that regularly, call me, because that's impressive. But, you know, nine grand is a lot of money. So we have this weird incentive structure now where people are going to try to submit bugs at whatever rate they can, be they slop or not, to try to get lucky and hit this jackpot. And so, as a result, I'm not surprised this is happening. It's kind of just a product of the system we built. Now, that's assuming it's a non-malicious submission, right? You also have to consider the scenario where this account is maybe a test account for some kind of Jia Tan-esque attack, where they're
            • 07:30 - 08:00 testing the waters to see, hey, does the security community notice when I submit a very well-formed but fake AI submission? And if they don't, how many of these can I submit and get away with? Even though I'm not getting paid, maybe I'm causing the security community to spend more time than they should on fake submissions, so that when I find a real bug, or when a real bug is contributed by another puppet account, there are so many resources being spent on the fake ones that
            • 08:00 - 08:30 they're not going to find the real one. We still have not seen a single valid security report done with AI help. So if you are concerned that AI is taking your job as a programmer, a researcher, etc., I don't think we're there yet. Now, I do think this will change. I think eventually there is going to be a place where AI can find bugs in software, either through source code or reverse engineering, because AIs are very good at processing a lot of data at once. That's kind of the one issue that humans have, right? We can't just look at a thousand lines of code and build a graph in our
            • 08:30 - 09:00 heads, right? The AI is much better at this. However, they're not yet good enough to meaningfully find bugs. I have a couple of takeaways from this. First of all, if you are a security researcher who is using AI in your workflows, good on you. I think that's a good thing you should be doing. It can help you scale the power you have as an individual. But don't forget that an AI does and will make mistakes. Check your math. Make sure you check the AI's math. Check your own math. And don't just trust the computer to make the right choices. That's how we're going to have the vibe apocalypse,
            • 09:00 - 09:30 personally, where code is going to get significantly worse and we're going to have another WannaCry malware come out because someone vibe-coded a network-facing service, and then we all have to deal with it. And two, if you're on the malicious side of this, where you're submitting these AI reports just to cause Daniel and friends to get plugged up and not be able to do their jobs: cut it out. And wow, look at that. By the way, if you want to learn to program in the world's safest language, Rust, my Rust 101: Foundations of Rust course started recently on Low Level Academy. Get in there. Learn the basics of Rust. Learn how to code in Rust.
            • 09:30 - 10:00 Learn why Rust is not that scary. In the course, we go through and compare Rust to C, and I'll teach you the basics of how to write memory-safe code in a language that is taking the world by storm and getting more popular every day. The courses are on sale temporarily, so get them while you can. Anyway, that's it for now. Thanks for watching. I appreciate it, guys. If you're new here, hit that sub button. I do videos like this all the time. I love you.