Running a big employee search? No problem: you can now scrape searches with up to 50,000 results without using your own LinkedIn account, without providing any cookies, and without a browser extension. All you need to do is provide the search criteria.
Below is how to do it in Python, step by step.
1
Initiate a Search
```python
import requests
import time

# Replace this with your actual API key
api_key = "YOUR_API_KEY"

# API base URL
api_host = "web-scraping-api2.p.rapidapi.com"

def initiate_search():
    url = f"https://{api_host}/lead-search-at-scale"
    payload = {
        "geo_codes": [103644278],  # United States
        "title_keywords": ["Owner", "Founder", "Director"],
        "industry_codes": [4],  # Software Development
        "company_headcounts": ["11-50", "51-200"],
        "limit": 50  # set the maximum number of results to be scraped
    }
    headers = {
        "x-rapidapi-key": api_key,
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    response_data = response.json()
    print("Initiate Search Response:", response_data)
    return response_data.get("request_id")
```
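To try this, call the function and keep the returned request_id; you will need it for the status-check and result-fetch steps. The snippet below is a minimal usage sketch built on the function above.

```python
# Start the search and hold on to the request_id for the follow-up steps.
request_id = initiate_search()

if request_id:
    print("Search started, request_id:", request_id)
else:
    print("No request_id returned; check the response printed above for errors.")
```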
Each stage of the search process costs a different number of credits:
Search initiation: 50 credits. You are charged these 50 credits even if the search returns no results. Each result then costs 1 credit, charged in advance before you actually fetch the results.
Status monitoring: free of charge. You can check the status as frequently as needed until the job is complete (a minimal polling sketch follows this list).
Result fetching: free of charge.
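Because status checks are free, a simple polling loop is the natural pattern here. The sketch below is deliberately generic: the check_status callable, the "completed" status value, and the polling interval are assumptions, since the actual status endpoint is not shown in this step.

```python
import time

def wait_until_complete(check_status, interval_seconds=30, timeout_seconds=3600):
    # Minimal polling sketch. check_status is assumed to be a small wrapper
    # around the API's status call for your request_id, returning a string.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        status = check_status()
        print("Current status:", status)
        if status == "completed":     # assumed terminal status value
            return True
        time.sleep(interval_seconds)  # status checks are free, so poll as often as needed
    return False
```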
For example, if your search yields 10,000 results, the total credit cost would be: 50 credits + (1 × 10,000 credits) = 10,050 credits.
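If you want to estimate the cost up front, the arithmetic is easy to put in a helper; the function below simply illustrates the formula above (the name is ours, not part of the API).

```python
def estimate_credit_cost(result_count: int) -> int:
    # Total cost = 50 credits for initiation + 1 credit per result,
    # charged in advance of fetching.
    return 50 + 1 * result_count

print(estimate_credit_cost(10_000))  # 10050
```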