I wrote this blog to help organisations better prepare for and run successful bug bounty programs. It draws on my personal experience as a program owner of both well-run and badly run programs, as well as time spent on the other side of the fence as a bug bounty hunter.
This blog ended up being a lot longer than I thought it would be. I hope it’s a worthwhile read, especially for those of you who are considering running or already run a bug bounty program. At the very worst it might help you get to sleep at night 🙂
What is a Bug Bounty Program?
One recurring theme I’ve heard over the years is that bug bounties are essentially commission-only penetration tests – and to an extent, that’s true, but it’s also an over-simplification (I intend to write another blog about the differences between penetration tests and bug bounties).
Security researchers find vulnerabilities in your applications and responsibly disclose them to you; in return, you pay them a reward.
Sounds good, right? You get more security eyes on your application all of the time, your application becomes more secure as vulnerabilities are discovered and fixed, and researchers get paid for their work! What’s not to like? Well, if your organisation isn’t ready for a bug bounty program then things could get quite messy. Let’s explore.
Is your organisation ready for a Bug Bounty program?
When I say ready, I mean several things: the maturity of the security function within the organisation, the resources made available to get vulnerabilities remediated, buy-in from management, and so on. These are all important things to consider.
Let’s start with security maturity. Organisations with a low maturity level are those that see security as a necessary evil (or compliance checkbox) rather than seeing it as an integral part of the business.
An organisation with a low security maturity is unlikely to be following a Secure Software Development Lifecycle framework or best practices in general. Time and resources probably aren’t made available to help developers write proper, robust, secure code. Penetration tests are probably scheduled at arbitrary points in the year to fulfil a compliance requirement. So on and so forth. In an organisation where security is treated as a second-class citizen, the reality is that the systems, infrastructure and applications are likely to contain significantly more security vulnerabilities than those of an organisation with a higher maturity level.
In this scenario, launching a bug bounty program probably isn’t a good idea, as you’d likely be spending thousands, if not tens of thousands, on vulnerabilities you may already know about.
Next let’s talk about resources. If you can’t get the resources or prioritisation to get a vulnerability fixed on a normal day then chances are, you’re not going to get the resources to get them fixed when 8 submissions come through in the same week. What you want to avoid is a scenario where you now have two growing backlogs – one in $favouriteTicketingSystem and the other in the bug bounty platform.
Lastly, I’ll briefly mention buy-in. To put it bluntly, if you have no buy-in from management for a bug bounty program then there’s very little value in spending the time, effort and money trying to start up a program that will likely fail before you’ve had the chance to hit go. It’s hard to demonstrate good value for money when those you are demonstrating to were never bought-in to the value of the program in the first place.
Choosing a Provider
There are a number of bug bounty platforms you can choose from. The two main ones are HackerOne and Bugcrowd. You have some smaller ones like Synack that are also worth investigating. Definitely do your own research here.
“Why do I need to choose a platform?” I hear you ask. The platforms help you ‘advertise’ your program and they also do the hard work of connecting the most appropriate researchers to your program.
When evaluating providers, make sure you ask questions like:
What are your SLAs for triaging submissions?
Can I filter the type of researchers I want in my program?
Do you have a customer success team near me?
(The customer success team are usually the ones responsible for managing the program and adding researchers. If they’re not in or around your timezone then it could take them a while to add researchers – this can be frustrating)
Do you have an Application Security Engineer (ASE) near me?
(again, it could take a while for someone to triage submissions and/or reply to your comments if there are no ASEs in or around your timezone)
Do you take any fees from the researcher reward pot?
Are you able to make yourself available on $favouriteCommunicationTool?
(it can be very useful to have someone from the provider on Slack – especially at pre-launch and some time after post-launch)
Public or Private?
Your program can be one of two types – private or public. It’s an easy one to answer.
Do you want to slowly ramp-up the number of researchers with a specific skill-set over time so that you can familiarise yourself with the process and ensure you’re not being overwhelmed with submissions?
Do you want to open the floodgates and let all hell break loose?
I strongly recommend going down the private program route if you’re starting a new program. Even if you think your organisation is very mature and your app has been through countless threat models, code reviews, pentests, etc, it’s still a good idea to ease yourself into the platform and process. You can ramp up the number of researchers as fast as you want so there’s very little benefit going public from the get-go.
Do you need to seek permission from 3rd parties?
If your in-scope applications/services are hosted by or link to a third party such as AWS, Azure, Heroku, Sentry, Crashlytics, etc then it would be a good idea to research whether you need to seek approval before starting a bug bounty program.
Generally speaking, third parties don’t require notice as long as you explicitly call out their services as out of scope in your program scope.
For example, you don’t need to ask AWS for permission as long as the researchers are testing for application vulnerabilities and aren’t trying to DoS the platform.
Writing the program scope
Every bug bounty program needs a program scope. This gives the researchers context to the application/service they will be testing. More importantly, it’s where you, the program owner define what is in and out of scope. Crucially, this is also where you define acceptable and unacceptable testing. A scope should contain the following information:
- Brief introduction and context – is it a mobile app, web app or some other application or service? What does it do? Why is it important?
- Access and credentials – provide clear guidance to the researchers as to how they can access the items in scope. Can they signup on their own or do you need to provision accounts for them in advance?
- Guidelines – make clear what is and isn’t acceptable research
- Focus areas – what are the most critical functions of your app that could do with more attention? For example, authentication and authorisation. Bullet-point these focus areas to make them easier for the researchers to ingest
- Bonus: Set up a separate Slack Workspace and include an invite link in the scope. It can be very useful to have researchers, program owners and bug bounty Application Security Engineer all in one place
- Bonus: Offer a percentage reward uplift for submissions which are comprehensive and well articulated – I’ve found this technique works
- Don’t: Don’t put barriers in front of researchers unnecessarily. My biggest pet peeve is when programs force researchers to go through proxies for public facing assets. Your domain is on the public internet and is probably being attacked by bad guys this very moment – why raise the bar for the good guys to start their bug hunting?
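Putting those elements together, a minimal scope document might look something like the sketch below. Every detail here – the app name, domains, reward figures and uplift percentage – is hypothetical; adapt it to your own program:

```markdown
# ExampleApp Bug Bounty – Program Scope (illustrative)

## Introduction
ExampleApp is our customer-facing web application at https://app.example.com.
It handles account sign-up, billing and document storage.

## In scope
- https://app.example.com (web application)
- https://api.example.com (REST API)

## Out of scope
- Third-party services we link to (analytics, status page, crash reporting)
- Denial of Service of any kind
- Social engineering and physical attacks

## Access and credentials
Self sign-up is enabled at https://app.example.com/signup – please include
your researcher handle in the email address you register with.

## Focus areas
- Authentication and authorisation (role separation between user types)
- Billing and payment workflows

## Communication
Join our dedicated Slack workspace: <invite link>

## Rewards
P1: $5,000 · P2: $1,500 · P3: $500 · P4: $100
Comprehensive, well-articulated reports may receive up to a 10% uplift.
```

Note how the out-of-scope section explicitly names third-party services and DoS, and how the focus areas are short bullets rather than prose.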
Selecting your researchers
It is important to think about the type of researchers you want in your program; their skill set, rank, average severity rating, activity in the last 90 days, etc.
There are several hundred thousand researchers on Bugcrowd and HackerOne combined. The reality is that probably only 5 to 10% (if that) of those researchers are actually active and represent the researchers who find the good stuff.
My advice is to make your requirements very clear about the type of researchers you want in your program. Don’t be afraid of saying something like:
“I want 20 researchers who rank in the top 500 with an average severity rating below 3”
Remember, it’s your program and you’re the one paying for the service.
It’s also worth mentioning the potential pitfalls of restricting researchers based on country of origin.
I’ve run a program where we only allowed researchers from the UK; not only was this move incredibly short sighted, it significantly reduced the researcher talent pool (something to do with the legal team and the Slavery Act – don’t ask). I would strongly recommend against restricting researchers to a single country unless you have an actual legitimate reason to do so (you probably don’t).
That said, I strongly recommend carefully choosing your researcher selection criteria, otherwise you could end up with poor-quality researchers and reports.
Post program launch
So you’ve launched your program – amazing!
At this point, you will have been assigned an Application Security Engineer (ASE). This person will be responsible for triaging submissions to ensure:
- The submission is in scope
- The submission is not a duplicate
- The submission maps against a vulnerability rating taxonomy/baseline – e.g., this SQL injection vulnerability is a P1 (critical)
- The submission contains enough information for the vulnerability to be reproduced
After all of the above is complete – this is the point at which you take over.
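To make the taxonomy step concrete, here is a minimal sketch in Python of how a priority rating might map to a payout, including the report-quality uplift mentioned earlier. The tier amounts and the 10% uplift are entirely hypothetical – set your own based on budget and asset criticality:

```python
# Hypothetical reward tiers keyed to Bugcrowd-style priority ratings (P1-P4).
# These dollar amounts are illustrative only.
REWARD_TIERS = {
    "P1": 5000,  # critical, e.g. SQL injection, RCE
    "P2": 1500,  # high
    "P3": 500,   # medium
    "P4": 100,   # low
}

def reward_for(priority: str, uplift_pct: int = 0) -> int:
    """Return the payout for a triaged submission, with an optional
    percentage uplift for comprehensive, well-articulated reports."""
    base = REWARD_TIERS[priority]
    return base + (base * uplift_pct) // 100

print(reward_for("P1"))      # → 5000 (critical finding, no uplift)
print(reward_for("P2", 10))  # → 1650 (high finding with a 10% quality uplift)
```

Publishing a table like this in your scope sets researcher expectations up front and makes payout decisions far less contentious.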
An important note – you don’t need to wait for an ASE to do all of that – if a submission comes through and you’re able to look at it – feel free to handle it yourself. If I saw a submission come through with the title “Customer data all over pastebin” then you can guarantee I’d be all over that like a fat kid on cake.
Below I’ll summarise some do’s and don’ts you should think about and consider after your program has launched.
- Do: when you and/or the ASE confirm the validity of a submission, pay the researcher as soon as possible
- Do: ask the researcher to re-test when you have fixed the vulnerability. If the researcher finds another way to exploit the vulnerability then reward them for it
- Do: I’ve mentioned this before but it can be very useful to have a dedicated Slack Workspace just for your program. I’m a big believer in creating channels of communication (excuse the pun) – the big platforms have no built-in mechanism for program owners and researchers to discuss the program, ask questions about the applications/systems in scope, etc.
- Do: ask the assigned ASE to mediate on your behalf if there is some tension between you and the researcher or if you cannot agree on some detail. Respect the ASE’s opinion; they’ve been doing this a lot longer than you have.
- Do: pay for the worst possible outcome of a reported vulnerability. E.g., if a researcher finds credentials for a test box, but those credentials would have allowed them to pivot onto production systems and reach customer data, then you should reward them as if they had found the customer data.
- Don’t: don’t delay payment for nonsensical reasons like “we’re not going to pay you until we’ve fixed it” – it drives researchers away. Once you’ve confirmed that a submission is valid, pay up.
- Don’t: if a submission is out of scope, don’t dismiss it right away. It could be a valid issue in another area of your infrastructure that you care about. Pay for these findings, even if they are out of scope, but politely remind the researchers to stay in scope
I’ve read too many horror stories of programs dismissing critical findings just because they are out of scope – all you do is encourage the researcher to stop working on your program or worse still, publicly disclose the out of scope finding
- Don’t: don’t be cheap. Don’t downgrade ratings just so you can pay the researcher less. Researchers who are paid on time and treated well will be motivated to hunt for more vulnerabilities which is what you want. Build those relationships.
If I were to summarise this monstrosity of a blog post into a few bullet points, it would be these:
- Make sure you have the support from your leadership team to run a Bug Bounty Program and ensure you’ve communicated its launch to the relevant parties
- Write a good scope – the do’s and the don’ts
- Start small (in terms of scope and number of researchers) and grow organically
- Pay the researchers in a timely manner
I hope this blog post was useful. Feel free to leave comments if you have any questions or suggestions!