Running a big employee search? No problem: you can now scrape searches with up to 50,000 results without using your own LinkedIn account, without providing any cookies, and without installing a browser extension. All you need to do is provide the search criteria.

Code demonstration

For instance, let’s say you need a lead list with the following criteria:

  • Job title includes: owner, founder, or director

  • Location: United States

  • Industry: Software Development

  • Company headcount range: 11-50

  • For more filters, please check out this page.

Here's how to do it in Python, step by step.

1. Initiate a Search

import requests
import time

# Replace this with your actual API key
api_key = "YOUR_API_KEY"

# API host
api_host = "fresh-linkedin-profile-data.p.rapidapi.com"

def initiate_search():
    url = f"https://{api_host}/big-search-employees"
    payload = {
        "geo_codes": [103644278],  # United States
        "title_keywords": ["Owner", "Founder", "Director"],
        "industry_codes": [4],  # Software Development
        "company_headcounts": ["11-50", "51-200"],
        "limit": 50 #set max number of results to be scraped
    }
    headers = {
        "x-rapidapi-key": api_key,
        "Content-Type": "application/json"
    }

    response = requests.post(url, json=payload, headers=headers)
    response_data = response.json()
    print("Initiate Search Response:", response_data)
    return response_data.get("request_id")

2. Check Search Progress

def check_search_status(request_id):
    url = f"https://{api_host}/check-search-status"
    querystring = {"request_id": request_id}
    headers = {
        "x-rapidapi-key": api_key
    }

    response = requests.get(url, headers=headers, params=querystring)
    response_data = response.json()
    print("Check Search Status Response:", response_data)
    return response_data.get("status")  # "pending", "processing", or "done"


3. Retrieve Search Results

def get_search_results(request_id):
    url = f"https://{api_host}/get-search-results"
    headers = {
        "x-rapidapi-key": api_key
    }

    all_results = []
    page = 1

    while True:
        querystring = {"request_id": request_id, "page": page}
        response = requests.get(url, headers=headers, params=querystring)
        response_data = response.json()

        if response_data.get("data"):
            all_results.extend(response_data.get("data"))
            page += 1
        else:
            # no more results
            break

    return all_results
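
Fetching thousands of results means many paged requests, so a single transient network error can abort the loop partway through. If you want to harden it, below is a minimal retry-wrapper sketch (using only the requests and time modules already imported); you could call it in place of requests.get inside the loop:

def get_with_retry(url, headers, params, max_attempts=3, backoff_seconds=5):
    # Retry a GET request a few times before giving up.
    # A sketch to make the pagination loop resilient to transient errors.
    for attempt in range(1, max_attempts + 1):
        try:
            response = requests.get(url, headers=headers, params=params, timeout=30)
            response.raise_for_status()  # treat HTTP error codes as failures too
            return response
        except requests.RequestException as exc:
            if attempt == max_attempts:
                raise
            print(f"Request failed ({exc}); retrying in {backoff_seconds}s...")
            time.sleep(backoff_seconds)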

4. Put It All Together

def main():
    # initiate search
    request_id = initiate_search()
    if not request_id:
        print("Failed to initiate search")
        return

    # monitor search progress
    while True:
        status = check_search_status(request_id)
        if status == "done":
            # search completed
            break
        # wait 5 minutes before checking again
        time.sleep(300)

    # ready to fetch results
    results = get_search_results(request_id)
    print(f"Total Results Fetched: {len(results)}")
    print("Results:", results)

if __name__ == "__main__":
    main()
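
Once the search completes, you'll likely want the lead list in a spreadsheet. Below is a minimal sketch that writes the fetched results to a CSV file. Note that the column names used here (full_name, job_title, company, linkedin_url) are assumptions for illustration; replace them with the keys that actually appear in your results.

import csv

def save_results_to_csv(results, filename="leads.csv"):
    # NOTE: these field names are assumptions for illustration;
    # adjust them to match the keys actually present in your results
    fieldnames = ["full_name", "job_title", "company", "linkedin_url"]

    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        for profile in results:
            writer.writerow({key: profile.get(key, "") for key in fieldnames})

    print(f"Saved {len(results)} rows to {filename}")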

Prerequisites

Before running the script, ensure you have:

  1. Permission to use the search feature. Please reach out to us at [email protected] to obtain it.
  2. Python installed on your system.
  3. The requests library installed. You can install it using:
pip install requests

How much does it cost?

Each stage of the search process costs a different number of credits:

  1. Search initiation: 50 credits. Even if the search returns no results, you'll still be charged the 50 credits. Each result then costs 0.7 credits, charged in advance before you actually fetch the results.

  2. Status monitoring: free of charge. You can check it as frequently as needed until the job is complete.

  3. Result fetching: free of charge.

For example, if your search yields 10,000 results, the total credit cost would be:

50 credits + (0.7 × 10,000) = 7,050 credits.
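
If you'd like to estimate the cost before launching a search, the formula is easy to express in code. A quick sanity-check helper:

def estimate_credits(num_results):
    # 50 credits to initiate the search, plus 0.7 credits per result;
    # status monitoring and result fetching are free
    return 50 + 0.7 * num_results

print(estimate_credits(10000))  # 7050.0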