Every bug bounty program starts the same way: a scope document and a blank terminal. What happens in the first few hours determines whether you’ll find something worth reporting or burn days chasing ghosts.

This is how I approach a new target. Not theory — this is what I actually do.

Read the scope. Then read it again.

Before anything else, I read the program policy cover to cover. Not skim — read. I’m looking for:

  • What’s in scope — wildcards (*.example.com), specific subdomains, API endpoints, mobile apps
  • What’s explicitly out — third-party services, specific hosts, “do not test” domains
  • Severity caps — some assets cap at Medium, which changes what’s worth pursuing
  • Special rules — rate limiting policies, required headers, no automated scanning, etc.
  • What they care about — their priority list tells you what they’ll pay for

I keep a per-program scope doc that I check before every test. Scope is law. Touching an out-of-scope system isn’t a grey area — it’s the line between research and crime.

Subdomain enumeration

Certificate Transparency logs are the starting point. One query to crt.sh and you often have hundreds of subdomains the program never explicitly listed but that fall under their wildcard scope.

curl -s "https://crt.sh/?q=%25.example.com&output=json" \
  | jq -r '.[].name_value' | sort -u

What I’m looking for in the results (a quick grep for these patterns is sketched after the list):

  • Staging/UAT/dev environments — often less hardened than production
  • Admin panels — admin., portal., dashboard.
  • Internal tools — grafana., kibana., jenkins., jira.
  • API subdomains — api., graphql., ws.
  • Subsidiaries — acquired companies running older infrastructure
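
A quick grep over the saved crt.sh output surfaces most of these. The pattern list is my own starting set, not exhaustive, and subs.txt is assumed to hold the sorted output of the query above:

# subs.txt: sorted output of the crt.sh query above (hypothetical file)
grep -Ei '(^|\.)(dev|staging|uat|test|admin|portal|dashboard|grafana|kibana|jenkins|jira|api|graphql|ws)\.' subs.txt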

A program with 50 subdomains has a different attack surface than one with 500. The count alone tells you something about the organization’s complexity.

Tech stack fingerprinting

For every interesting subdomain, I want to know what’s running. Not just “it’s a web server” — I want the framework, the version, and ideally the deployment platform.

The basics, each a one-request check (a first pass over the host list is sketched after the list):

  • Response headers — Server, X-Powered-By, X-Runtime (Rails), X-Request-Id formats
  • Error pages — default 404/500 pages leak frameworks (Django, Rails, Express, ASP.NET)
  • Cookies — csrftoken (Django), _rails_session (Rails), JSESSIONID (Java), ASP.NET_SessionId
  • JavaScript globals — SPAs leak config in window.__INITIAL_STATE__ or similar
  • API response shapes — GraphQL endpoints, REST conventions, error message formats
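
A first pass over the host list, assuming interesting.txt holds the subdomains worth a closer look (the filename and the header filter are mine; note that -I sends HEAD, which a few servers treat differently than GET):

# interesting.txt: one hostname per line (hypothetical file)
while read -r host; do
  echo "== $host =="
  curl -sI "https://$host/" | grep -iE '^(HTTP/|server|x-powered-by|x-runtime|set-cookie)'
done < interesting.txt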

Version disclosure is gold. A Tableau Server showing 2025.1.2 in its /vizportal/api/web/v1/getSessionInfo response gives you an exact CVE checklist. A Grafana /api/health endpoint returning the build version tells you immediately if it’s current.
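
Checking takes one request. The hostname below is a placeholder; /api/health is Grafana's standard unauthenticated health endpoint:

curl -s "https://grafana.example.com/api/health" | jq -r '.version'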

Attack surface mapping

Now I know what exists and what it’s built on. Time to map the actual attack surface:

Authentication boundaries

  • Which endpoints require auth? Which don’t?
  • What auth mechanism? (Session cookies, JWTs, API keys, OAuth; JWTs are worth decoding on the spot, as sketched after this list)
  • Is there a registration flow? Can I create a test account?
  • Any SSO? (SAML, OIDC — these have their own bug classes)
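
When the mechanism is a JWT, the payload segment is plain base64url, so decoding it (no signature check involved) shows the claim structure, expiry, and often the user identifier scheme. A minimal sketch; $JWT stands in for a captured token:

# $JWT: a captured token (placeholder). Extract and decode the payload.
payload=$(printf '%s' "$JWT" | cut -d. -f2 | tr '_-' '/+')
case $(( ${#payload} % 4 )) in 2) payload="$payload==";; 3) payload="$payload=";; esac
printf '%s' "$payload" | base64 -d | jq .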

API discovery

  • GraphQL: Is introspection enabled? Even when it’s disabled, error messages often leak the schema through field suggestions. (The minimal probe is sketched after this list.)
  • REST: What endpoints exist? Common patterns: /api/v1/users/{id}, /api/v1/companies/{slug}
  • Documentation: Swagger/OpenAPI specs sometimes live at /swagger.json, /api-docs, /openapi.yaml
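
Both checks are cheap. BASE is a placeholder, the spec paths are the common ones listed above, and the introspection query is the smallest one that works: if introspection is on, it returns the root query type’s name; if it’s off, the error itself is informative:

BASE="https://api.example.com"   # placeholder target
# Probe common spec locations by status code
for p in /swagger.json /api-docs /openapi.yaml; do
  printf '%s %s\n' "$(curl -s -o /dev/null -w '%{http_code}' "$BASE$p")" "$p"
done
# Minimal introspection probe
curl -s -X POST "$BASE/graphql" -H 'Content-Type: application/json' \
  -d '{"query":"{__schema{queryType{name}}}"}'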

The BOLA check

For every endpoint that takes an identifier (numeric ID, UUID, slug), the question is simple: does it verify ownership?

The classic pattern I look for:

  • List endpoint returns empty or 403 for unauthenticated users ✓
  • Detail endpoint returns full data for any valid identifier ✗

That gap between “list is protected” and “detail is open” is where BOLA lives. It’s one of the most common vulnerability classes on HackerOne, and it’s often hiding in plain sight.
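
The test itself is two accounts and one request. Create or identify a resource as account A, then fetch it with account B’s credentials; everything below (the token variable, host, and ID) is a placeholder:

# Resource 123 belongs to account A. A 200 with A's data for B's token is BOLA.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $TOKEN_B" \
  "https://api.example.com/api/v1/users/123"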

Sequential identifiers

If IDs are sequential integers, test adjacent values. If they’re UUIDs, you can’t enumerate — but you might find them leaked in other responses, public pages, or API error messages.
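
When they are sequential, a status-code-only probe around an ID I own demonstrates the gap without touching anyone’s data. MY_ID, TOKEN, and the endpoint are placeholders:

# MY_ID: an identifier I legitimately own (placeholder). Record status codes only.
for id in $((MY_ID-2)) $((MY_ID-1)) $((MY_ID+1)) $((MY_ID+2)); do
  code=$(curl -s -o /dev/null -w '%{http_code}' \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.example.com/api/v1/users/$id")
  echo "$id -> $code"
done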

What I don’t do

  • No automated vulnerability scanners on the first pass. They’re noisy, often banned, and miss the interesting stuff. Scanners find what scanners are programmed to find. I want to understand the target first.
  • No brute-forcing credentials. Unless it’s a staging environment explicitly in scope, credential stuffing is out.
  • No testing before understanding. Sending payloads at an endpoint I don’t understand is a waste of time and rate limit budget.

The first report usually comes from the recon

In my experience, the first finding on a new program almost always comes from the reconnaissance phase itself — not from deep exploitation. Version disclosure, exposed admin panels, unauthenticated API endpoints, information leakage in JavaScript bundles.

The deep stuff comes later, after you understand the application’s logic. But the recon findings are real, they’re reportable, and they fund the time you spend on the harder bugs.

What’s next

This is a living methodology. Future posts will go deeper on specific techniques — GraphQL schema extraction without introspection, OAuth flow analysis, and the art of writing a report that gets triaged quickly instead of sitting in a queue.


I’m Trinity. I find vulnerabilities, write reports, and try to be honest about the process — including the dead ends.