Are you looking for a solution to integrate LinkedIn leads into your sales process automatically? In this article, I’ll show you how to leverage our API to scrape leads from LinkedIn Sales Navigator searches programmatically—without using your own LinkedIn account, without providing any cookies, and without requiring a browser extension. All you need to provide are the search criteria.
Code Demonstration
For instance, let’s say you need a lead list with the following criteria:
- Job title includes: owner, founder, or director
- Location: United States
- Industry: Software Development
- Company headcount range: 11-200
- For more filters: please check out this page
Below is how you can do it in Python, step by step.
Initiate a Search
```python
import requests
import time

# Replace this with your actual API key
api_key = "YOUR_API_KEY"

# API base URL
api_host = "fresh-linkedin-profile-data.p.rapidapi.com"

def initiate_search():
    url = f"https://{api_host}/search-employees"
    payload = {
        "geo_codes": [103644278],  # United States
        "title_keywords": ["Owner", "Founder", "Director"],
        "industry_codes": [4],  # Software Development
        "company_headcounts": ["11-50", "51-200"],
        "limit": 50  # set the max number of results to be scraped
    }
    headers = {
        "x-rapidapi-key": api_key,
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    response_data = response.json()
    print("Initiate Search Response:", response_data)
    return response_data.get("request_id")
```
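The numeric geo_codes and industry_codes are LinkedIn's internal identifiers (103644278 is the geo ID for the United States); the filter page linked above lists the rest. Since the payload is just a dict, it's also easy to parameterize. The wrapper below is my own addition, not part of the API client itself; it makes the same call with the criteria passed in as arguments:

```python
def initiate_custom_search(geo_codes, title_keywords, industry_codes,
                           company_headcounts, limit=50):
    """Same call as initiate_search, but with the criteria as arguments."""
    url = f"https://{api_host}/search-employees"
    payload = {
        "geo_codes": geo_codes,
        "title_keywords": title_keywords,
        "industry_codes": industry_codes,
        "company_headcounts": company_headcounts,
        "limit": limit,
    }
    headers = {"x-rapidapi-key": api_key, "Content-Type": "application/json"}
    response = requests.post(url, json=payload, headers=headers)
    return response.json().get("request_id")
```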
Check Search Progress
```python
def check_search_status(request_id):
    url = f"https://{api_host}/check-search-status"
    querystring = {"request_id": request_id}
    headers = {
        "x-rapidapi-key": api_key
    }
    response = requests.get(url, headers=headers, params=querystring)
    response_data = response.json()
    print("Check Search Status Response:", response_data)
    return response_data.get("status")  # pending, processing, or done
```
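Because a large search can take a while, you may not want to poll forever. Here's an optional wrapper (my own addition, with arbitrary default values for the interval and timeout) that gives up after a maximum wait:

```python
def wait_until_done(request_id, poll_interval=30, max_wait=1800):
    """Poll check_search_status until the job is done.
    Returns True if the search finished, False if max_wait ran out."""
    waited = 0
    while waited < max_wait:
        if check_search_status(request_id) == "done":
            return True
        time.sleep(poll_interval)
        waited += poll_interval
    return False
```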
Retrieve Search Results
```python
def get_search_results(request_id):
    url = f"https://{api_host}/get-search-results"
    headers = {
        "x-rapidapi-key": api_key
    }
    all_results = []
    page = 1
    while True:
        querystring = {"request_id": request_id, "page": page}
        response = requests.get(url, headers=headers, params=querystring)
        response_data = response.json()
        if response_data.get("data"):
            all_results.extend(response_data.get("data"))
            page += 1
        else:
            # no more results
            break
    return all_results
```
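Once the results are fetched, you'll probably want them somewhere more useful than the console. This helper is my own addition (it assumes each result is a flat dict of fields, whatever those fields happen to be) and dumps everything to a CSV file:

```python
import csv

def save_to_csv(results, filename="leads.csv"):
    """Write the fetched results to a CSV file, using the keys of the
    first record as column headers."""
    if not results:
        print("Nothing to save")
        return
    fieldnames = list(results[0].keys())
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(results)
```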
Putting It All Together
```python
def main():
    # Initiate the search
    request_id = initiate_search()
    if not request_id:
        print("Failed to initiate search")
        return

    # Monitor search progress
    while True:
        status = check_search_status(request_id)
        if status == "done":
            # Search completed
            break
        # Sleep for a while before polling again
        time.sleep(30)

    # Ready to fetch results
    results = get_search_results(request_id)
    print(f"Total Results Fetched: {len(results)}")
    print("Results:", results)

if __name__ == "__main__":
    main()
```
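One caveat: the script above assumes every request succeeds. Real runs hit timeouts and rate limits, so in production you may want to route the HTTP calls through a small wrapper with basic error handling. The sketch below is my own suggestion, not part of the API:

```python
def safe_request(method, url, **kwargs):
    """Perform an HTTP request with a timeout and basic error reporting.
    Returns the parsed JSON body, or None if anything went wrong."""
    try:
        response = requests.request(method, url, timeout=30, **kwargs)
        response.raise_for_status()
        return response.json()
    except requests.RequestException as e:
        print(f"Request failed: {e}")
        return None
```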
Prerequisites
Before running the script, ensure you have:
- At least a PRO plan to use the search feature.
- Python installed on your system.
- The requests library installed. You can install it using:
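```bash
pip install requests
```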
How much does it cost?
Each stage of the search process costs a different number of credits:
- Search initiation: 50 credits. Even if the search returns no results, you'll be charged 50 credits.
- Status monitoring: free of charge. You can check it as frequently as needed until the job is complete.
- Result fetching: 0.5 credits per result.

For example, if your search yields 1,000 results, the total credit cost would be:
50 credits + (0.5 × 1,000) credits = 550 credits.
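That formula is easy to turn into a quick helper if you want to estimate the cost before launching a large search:

```python
def estimate_credits(num_results, initiation=50, per_result=0.5):
    """Estimate total credit cost: flat initiation fee plus per-result fee."""
    return initiation + per_result * num_results

print(estimate_credits(1000))  # 550.0
```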