Automate Job Applications Without Selenium

Use Unbrowse to interact with job boards like LinkedIn, Indeed, and Glassdoor through their shadow APIs. Skip the brittle Selenium scripts and call the real endpoints directly.

Lewis Tham
April 3, 2026

Job searching is one of the most tedious processes on the internet. You visit the same 5 job boards daily, search the same keywords, scroll through the same results, and copy-paste your information into the same form fields. Naturally, developers try to automate this. And naturally, Selenium is the first tool they reach for.

It is also the worst tool for the job.

Why Selenium Fails for Job Automation

Job boards are specifically designed to detect and block automated browsers. LinkedIn, Indeed, and Glassdoor all employ sophisticated bot detection:

  • Browser fingerprinting: Selenium-driven browsers have detectable characteristics
  • Rate limiting: Automated navigation patterns trigger throttling
  • CAPTCHAs: Triggered by behavior that looks non-human
  • Session invalidation: Accounts get flagged or banned
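To make "browser fingerprinting" concrete: WebDriver-controlled browsers expose a standard `navigator.webdriver` flag, and headless sessions historically shipped with telltale gaps like an empty plugin list. The sketch below shows the shape of a client-side check a job board might run; the specific signals are real, but the function and its two-signal threshold are hypothetical, not any board's actual detector.

```javascript
// Sketch of a client-side automation check. `nav` is a navigator-like object;
// the signal list and threshold are illustrative assumptions.
function looksAutomated(nav) {
  let signals = 0;
  // WebDriver-controlled browsers (Selenium included) expose this flag
  // per the WebDriver spec; it is one of the first things detectors read.
  if (nav.webdriver) signals += 1;
  // Early headless Chrome registered no plugins at all.
  if (nav.plugins && nav.plugins.length === 0) signals += 1;
  // An empty languages list is another common headless tell.
  if (nav.languages && nav.languages.length === 0) signals += 1;
  return signals >= 2;
}

// A WebDriver-driven headless session trips several signals at once,
// while an ordinary browser profile trips none.
const automated = looksAutomated({ webdriver: true, plugins: [], languages: [] });
const human = looksAutomated({ webdriver: false, plugins: [{}], languages: ["en-US"] });
```

Real detectors layer dozens of such signals (canvas fingerprints, timing patterns, mouse telemetry), which is why patching any single one rarely keeps a Selenium script alive for long.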

A typical Selenium job search script looks like this:

# The old way: fragile, detectable, maintenance-heavy
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.linkedin.com/jobs/search/?keywords=senior+engineer")

# Wait for dynamic content to load
wait = WebDriverWait(driver, 10)
job_cards = wait.until(EC.presence_of_all_elements_located(
    (By.CSS_SELECTOR, ".jobs-search-results__list-item")
))

for card in job_cards:
    title = card.find_element(By.CSS_SELECTOR, ".job-card-list__title").text
    company = card.find_element(By.CSS_SELECTOR, ".job-card-container__company-name").text
    # ... 50 more lines of fragile selectors

This breaks every time LinkedIn updates its CSS classes, which happens frequently and, by most accounts, at least partly to frustrate scrapers.

The Shadow API Approach

Behind every job board's search page is a structured API. When LinkedIn renders job search results, it calls an internal endpoint that returns JSON with job titles, companies, locations, salary ranges, and application URLs. The HTML is just a rendering layer.
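To see why this matters, compare what the endpoint returns with what a scraper has to reconstruct from HTML. The payload below is a hypothetical sketch of such a response (real internal endpoints differ per board and change without notice); the point is that the fields arrive already structured, so "parsing" collapses to a plain mapping:

```javascript
// Hypothetical shape of an internal search response. Field names here are
// illustrative assumptions, not LinkedIn's actual schema.
const samplePayload = {
  metadata: { total: 1532, start: 0, count: 25 },
  elements: [
    {
      title: "Senior Software Engineer",
      companyName: "Acme Corp",
      location: "San Francisco, CA",
      salaryRange: { min: 180000, max: 240000, currency: "USD" },
      applyUrl: "https://example.com/apply/123",
    },
  ],
};

// Extracting listings from JSON replaces dozens of fragile CSS selectors.
function toListings(payload) {
  return (payload.elements || []).map((e) => ({
    title: e.title,
    company: e.companyName,
    location: e.location,
    salary: e.salaryRange ? `${e.salaryRange.min}-${e.salaryRange.max}` : null,
    url: e.applyUrl,
  }));
}
```

When the board redesigns its page, the HTML changes but payloads like this tend to stay stable far longer, since the board's own frontend depends on them.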

Unbrowse discovers these APIs and lets your agent call them directly:

import { Unbrowse } from "@unbrowse/sdk";

const unbrowse = new Unbrowse();

async function searchJobs(query, location) {
  const result = await unbrowse.resolve({
    intent: `search for ${query} jobs in ${location}`,
    url: `https://www.linkedin.com/jobs/search/?keywords=${encodeURIComponent(query)}&location=${encodeURIComponent(location)}`,
  });

  return result.data;
}

const jobs = await searchJobs("senior software engineer", "San Francisco");
console.log(`Found ${jobs?.results?.length || 0} jobs`);

No browser launch. No CSS selectors. No bot detection. The shadow API returns the same structured data that LinkedIn's frontend consumes.

Multi-Board Job Search Agent

The real power comes from searching multiple job boards simultaneously:

import { Unbrowse } from "@unbrowse/sdk";

const unbrowse = new Unbrowse();

async function multiSearch(query, location) {
  const boards = [
    {
      name: "LinkedIn",
      url: `https://www.linkedin.com/jobs/search/?keywords=${encodeURIComponent(query)}&location=${encodeURIComponent(location)}`,
    },
    {
      name: "Indeed",
      url: `https://www.indeed.com/jobs?q=${encodeURIComponent(query)}&l=${encodeURIComponent(location)}`,
    },
    {
      name: "Glassdoor",
      url: `https://www.glassdoor.com/Job/jobs.htm?sc.keyword=${encodeURIComponent(query)}&locT=C&locKeyword=${encodeURIComponent(location)}`,
    },
  ];

  const results = await Promise.all(
    boards.map(async (board) => {
      const result = await unbrowse.resolve({
        intent: `find ${query} job listings in ${location}`,
        url: board.url,
      });
      return {
        board: board.name,
        jobs: result.data?.results || result.data?.jobs || [],
      };
    })
  );

  return results;
}

const allJobs = await multiSearch("machine learning engineer", "New York");

for (const board of allJobs) {
  console.log(`\n${board.board}: ${board.jobs.length} results`);
  for (const job of board.jobs.slice(0, 3)) {
    console.log(`  - ${job.title} at ${job.company}`);
  }
}

All three boards are searched in parallel. With cached routes, the entire search completes in under 2 seconds.
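One caveat about the `Promise.all` version above: it rejects as soon as any single board errors, discarding the results that did succeed. If Glassdoor times out, you lose LinkedIn and Indeed too. A failure-tolerant sketch uses `Promise.allSettled` instead; here `fetchBoard` is a stand-in parameter for the per-board resolve call, so the pattern is shown without the SDK:

```javascript
// Failure-tolerant multi-board search. `fetchBoard` is any async function
// that takes a board descriptor and returns an array of jobs; a failed
// board yields an empty list plus an error note instead of killing the run.
async function multiSearchSettled(boards, fetchBoard) {
  const settled = await Promise.allSettled(
    boards.map(async (board) => ({ board: board.name, jobs: await fetchBoard(board) }))
  );
  // allSettled preserves input order, so index i matches boards[i].
  return settled.map((s, i) =>
    s.status === "fulfilled"
      ? s.value
      : { board: boards[i].name, jobs: [], error: String(s.reason) }
  );
}
```

For a daily tracker this is usually the right trade: a partial result set you can act on beats an all-or-nothing failure.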

Building a Job Tracker

Once you can search efficiently, build a tracker that monitors new listings and avoids showing you duplicates:

import { Unbrowse } from "@unbrowse/sdk";
import { writeFileSync, readFileSync, existsSync } from "fs";

const unbrowse = new Unbrowse();
const STATE_FILE = "./job-tracker.json";

function loadState() {
  return existsSync(STATE_FILE)
    ? JSON.parse(readFileSync(STATE_FILE, "utf-8"))
    : { seen: {}, saved: [], lastRun: null };
}

function saveState(state) {
  writeFileSync(STATE_FILE, JSON.stringify(state, null, 2));
}

async function findNewJobs(query, location) {
  const state = loadState();

  const result = await unbrowse.resolve({
    intent: `find recent ${query} job listings in ${location}`,
    url: `https://www.indeed.com/jobs?q=${encodeURIComponent(query)}&l=${encodeURIComponent(location)}&sort=date`,
  });

  const listings = result.data?.results || result.data?.jobs || [];
  const newJobs = [];

  for (const job of listings) {
    const id = job.id || job.url || `${job.title}-${job.company}`;
    if (!state.seen[id]) {
      state.seen[id] = {
        firstSeen: new Date().toISOString(),
        title: job.title,
        company: job.company,
        url: job.url,
      };
      newJobs.push(job);
    }
  }

  state.lastRun = new Date().toISOString();
  saveState(state);

  return newJobs;
}

const newListings = await findNewJobs("AI engineer", "Remote");
console.log(`${newListings.length} new jobs found since last check`);
for (const job of newListings) {
  console.log(`  NEW: ${job.title} at ${job.company}`);
}
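One housekeeping note: the `seen` map above only ever grows, so after months of daily runs the state file accumulates every listing you have ever encountered. A small pruning pass, run before `saveState`, keeps it bounded. The 60-day cutoff below is an arbitrary choice; old listings are long closed by then anyway.

```javascript
// Drop `seen` entries older than maxAgeDays so job-tracker.json stays small.
// `now` is injectable for testing; it defaults to the current time.
function pruneSeen(seen, maxAgeDays = 60, now = Date.now()) {
  const cutoff = now - maxAgeDays * 24 * 60 * 60 * 1000;
  const pruned = {};
  for (const [id, entry] of Object.entries(seen)) {
    // Keep only entries first seen on or after the cutoff.
    if (new Date(entry.firstSeen).getTime() >= cutoff) pruned[id] = entry;
  }
  return pruned;
}
```

Calling `state.seen = pruneSeen(state.seen)` just before `saveState(state)` in `findNewJobs` would wire this in.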

Enriching Job Data

Shadow APIs often return more data than what is visible on the page. You can enrich job listings with company details:

async function enrichJob(jobUrl) {
  const details = await unbrowse.resolve({
    intent: "get full job description, requirements, salary range, and company info",
    url: jobUrl,
  });

  return {
    description: details.data?.description,
    requirements: details.data?.requirements || details.data?.qualifications,
    salary: details.data?.salary || details.data?.compensation,
    benefits: details.data?.benefits,
    companySize: details.data?.companySize,
    posted: details.data?.postedDate,
  };
}

Job board shadow APIs frequently include salary range data, company size, remote policy, and other fields that may not be prominently displayed in the UI.
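The `||` chains in `enrichJob` hint at a real annoyance: each board names the same field differently. Rather than scattering fallbacks through the pipeline, a small normalizer can coalesce the variants once. The alternative key names below are examples of the kind of variance you see, not an exhaustive or verified mapping:

```javascript
// Return the first present, non-null value among candidate keys.
function pick(obj, keys) {
  for (const k of keys) {
    if (obj && obj[k] !== undefined && obj[k] !== null) return obj[k];
  }
  return null;
}

// Coalesce board-specific field names into one uniform job shape.
// The key lists are illustrative assumptions, not a complete mapping.
function normalizeJob(raw) {
  return {
    title: pick(raw, ["title", "jobTitle", "position"]),
    company: pick(raw, ["company", "companyName", "employer"]),
    salary: pick(raw, ["salary", "compensation", "salaryRange"]),
    remote: pick(raw, ["remote", "workType", "workplaceType"]),
    posted: pick(raw, ["postedDate", "posted", "datePosted"]),
  };
}
```

With a normalizer at the boundary, everything downstream (filters, scoring, the tracker) can assume a single schema.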

Smart Filtering

Use the structured data to build intelligent filters that go beyond keyword matching:

async function smartFilter(jobs, preferences) {
  const enriched = [];

  for (const job of jobs) {
    if (!job.url) continue;

    const details = await unbrowse.resolve({
      intent: "get salary range, remote policy, and required experience",
      url: job.url,
    });

    const salary = details.data?.salary;
    const remote = details.data?.remote || details.data?.workType;
    const experience = details.data?.experienceYears;

    // Apply preference filters
    if (preferences.minSalary && salary?.min < preferences.minSalary) continue;
    if (preferences.remote && !remote?.toLowerCase().includes("remote")) continue;
    if (preferences.maxExperience && experience > preferences.maxExperience) continue;

    enriched.push({
      ...job,
      salary,
      remote,
      experience,
      matchScore: calculateMatchScore(details.data, preferences),
    });
  }

  return enriched.sort((a, b) => b.matchScore - a.matchScore);
}

function calculateMatchScore(data, preferences) {
  let score = 0;
  if (data?.salary?.min >= (preferences.minSalary || 0)) score += 30;
  if (data?.remote?.toLowerCase().includes("remote")) score += 25;
  // Only award the skill bonus when a skill is actually specified:
  // String.includes("") is true for every string, so the old
  // `includes(preferences.mustHaveSkill || "")` matched everything.
  if (
    preferences.mustHaveSkill &&
    data?.description?.toLowerCase().includes(preferences.mustHaveSkill.toLowerCase())
  ) {
    score += 20;
  }
  return score;
}
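The same scoring idea can be restated as a table of weighted predicates, which makes it easier to add criteria without growing a chain of `if` statements. This is a self-contained sketch, not part of the Unbrowse SDK; the weights mirror the ones above, and the field names assume the enriched shape from earlier:

```javascript
// Declarative match scoring: each rule is [weight, predicate].
// Weights and predicates are assumptions mirroring the post's example.
function scoreJob(data, prefs) {
  const rules = [
    [30, (d) => (d.salary?.min ?? 0) >= (prefs.minSalary || 0)],
    [25, (d) => (d.remote || "").toLowerCase().includes("remote")],
    [20, (d) => !!prefs.mustHaveSkill &&
      (d.description || "").toLowerCase().includes(prefs.mustHaveSkill.toLowerCase())],
  ];
  // Sum the weights of every rule the job satisfies.
  return rules.reduce((sum, [weight, test]) => sum + (test(data) ? weight : 0), 0);
}
```

Adding a new preference (say, a company-size band) then means appending one `[weight, predicate]` pair rather than editing control flow.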

The Selenium vs. Unbrowse Comparison

| Aspect | Selenium/Playwright | Unbrowse |
| --- | --- | --- |
| Setup | WebDriver + browser binaries | npm install @unbrowse/sdk |
| Bot detection | Frequently blocked | Uses real API patterns |
| Speed per search | 5-15 seconds | 100-500ms (cached) |
| Maintenance | Weekly selector fixes | Zero |
| Data format | Raw HTML to parse | Structured JSON |
| Multi-board search | Sequential, slow | Parallel, fast |
| Account risk | Flagging/bans common | Minimal (API-level access) |

Application Tracking Integration

Combine job discovery with application tracking to build a complete job search pipeline:

async function trackApplication(job) {
  const state = loadState();

  state.saved.push({
    ...job,
    status: "interested",
    addedAt: new Date().toISOString(),
    notes: "",
  });

  saveState(state);
}

// Mark jobs you want to apply to
const newJobs = await findNewJobs("senior engineer", "Remote");
for (const job of newJobs) {
  if (job.salary?.min >= 180000) {
    await trackApplication(job);
    console.log(`Saved: ${job.title} at ${job.company}`);
  }
}
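Once jobs are saved with a status, they tend to move through stages: interested, applied, interviewing, and so on. A tiny helper that validates transitions keeps the tracker file consistent. The stage list below is a suggested extension of the `"interested"` status used above, not something the post's tracker requires:

```javascript
// Suggested status vocabulary extending the tracker's "interested" state.
const STAGES = ["interested", "applied", "interviewing", "offer", "rejected"];

// Return a copy of the entry moved to `next`, stamping the update time.
// Rejects unknown statuses so typos never end up in the state file.
function advanceStatus(entry, next) {
  if (!STAGES.includes(next)) {
    throw new Error(`unknown status: ${next}`);
  }
  return { ...entry, status: next, updatedAt: new Date().toISOString() };
}
```

Pairing this with `loadState`/`saveState` gives a minimal but complete pipeline: discover, dedupe, save, and advance each application through its lifecycle.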

Getting Started

git clone --single-branch --depth 1 https://github.com/unbrowse-ai/unbrowse.git ~/unbrowse
cd ~/unbrowse && ./setup --host off

Or install the SDK:

npm install @unbrowse/sdk

Start by searching one job board with a single query. Unbrowse discovers the shadow API on the first run. After that, every search is a direct API call — fast, reliable, and invisible to bot detection.

Your job search should be as automated as the rest of your workflow. Stop fighting Selenium and start calling the APIs that job boards use internally.