Most scraping projects do not fail because the parser is wrong. They fail because the team chose a full browser when a simple request would do, or because the automation has no plan for login state, downloads, consent banners, and UI changes. That is the main reason old PhantomJS-era notes feel thin today: modern sites are more dynamic, and the maintenance cost matters as much as the first successful run.
If you are evaluating scraping or browser automation for reporting, QA, lead capture checks, partner portals, or routine back-office tasks, the practical question is not which library is trendy. It is: what is the lightest tool that will still survive real production conditions?
Start With The Lightest Approach That Works
If the data is already present in the raw HTML, use standard HTTP requests and parsing first. That route is faster, cheaper to run, and much easier to support. Move to a real browser only when the page depends on JavaScript rendering, authenticated sessions, multi-step forms, infinite scroll, or user actions such as clicks, uploads, and downloads.
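As a rough sketch of that lighter route, assuming Node 18+ (for built-in fetch) and the cheerio parser; the URL and selector below are placeholders for illustration:

```ts
import * as cheerio from "cheerio";

// Plain HTTP request: no browser, no JavaScript execution.
const res = await fetch("https://example.com/pricing");
if (!res.ok) throw new Error(`Request failed: ${res.status}`);

const $ = cheerio.load(await res.text());

// If the data is already in the raw HTML, a CSS selector is all you need.
const prices = $(".price")
  .map((_, el) => $(el).text().trim())
  .get();

console.log(prices);
```

If a sketch like this returns the data you need, there is usually no reason to involve a browser at all.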
That distinction matters for business teams because it affects both cost and reliability. A browser-driven workflow is justified when you need to:
- log into a portal and carry state across multiple pages
- extract data from interfaces that render content after page load
- trigger exports, screenshots, PDFs, or other browser-side actions
- validate that a real user journey still works after a site change
Retire PhantomJS For New Work
PhantomJS still appears in old answers and snippets, but its own website says development is suspended. That makes it a poor foundation for new client work. If your goal is modern browser behaviour, security updates, and fewer surprises around JavaScript-heavy sites, do not start a fresh build on PhantomJS in 2026.
Choose The Tool By The Workflow
Playwright: the default for most new automation
If I am setting up a new browser automation workflow today, Playwright is the default choice. It supports Chromium, WebKit, and Firefox, and it runs headless or with the browser visible for debugging. More importantly, its action model is built around waiting for the page to be ready, which reduces a lot of the brittle timing code that used to make scripts flaky.
```bash
npm init playwright@latest
```

That makes Playwright a strong fit for operational automations, portal logins, form completion, export flows, and scraping jobs that need to click, type, paginate, or capture files in a controlled way. If a team needs one modern starting point and does not have a hard reason to pick something else, this is usually it.
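A minimal sketch of what that waiting model looks like in practice; the portal URL, field labels, and heading text below are hypothetical stand-ins:

```ts
import { chromium } from "playwright";

// Playwright actions auto-wait for elements to be visible and actionable,
// so there is no need for manual sleeps between steps.
const browser = await chromium.launch({ headless: true });
const page = await browser.newPage();

await page.goto("https://portal.example.com/login");

// Role- and label-based locators tend to survive redesigns better
// than deep CSS chains.
await page.getByLabel("Email").fill("reports@example.com");
await page.getByLabel("Password").fill(process.env.PORTAL_PASSWORD ?? "");
await page.getByRole("button", { name: "Sign in" }).click();

// Continue only once the post-login page has actually rendered.
await page.getByRole("heading", { name: "Dashboard" }).waitFor();

await browser.close();
```

Switching `headless: true` to `false` in `launch()` gives you the headed debugging mode discussed later.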
Puppeteer: a good Chrome-first choice
Puppeteer is still a solid option when your team already works in Node and the automation is mainly Chrome or Chromium based. Current Puppeteer installation is simpler than many older guides suggest because it downloads a compatible Chrome for Testing browser for you.
```bash
npm i puppeteer
```

I usually reach for Puppeteer when the job is narrowly Chrome-focused: internal admin tooling, scripted screenshots, PDF generation, or small utilities that do not need broader browser coverage. If you already know the Chrome ecosystem well, Puppeteer can still be an efficient choice.
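A minimal sketch of that kind of Chrome-focused utility, here a scripted PDF and screenshot export; the URL and output paths are placeholders:

```ts
import puppeteer from "puppeteer";

// Puppeteer downloads a matching Chrome for Testing build on install,
// so launch() works without pointing at a locally installed Chrome.
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto("https://example.com/report", { waitUntil: "networkidle0" });

// Browser-side actions like PDFs and screenshots are where a real
// browser earns its keep over plain HTTP requests.
await page.pdf({ path: "report.pdf", format: "A4" });
await page.screenshot({ path: "report.png", fullPage: true });

await browser.close();
```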
Chromote: the pragmatic R route
If the surrounding workflow already lives in R, chromote is the modern place to start. It is an R implementation of the Chrome DevTools Protocol, works with Chrome-based browsers, and includes convenience methods for common tasks. It also powers rvest::read_html_live(), which is useful when a page only becomes scrapeable after JavaScript has rendered it.
install.packages("chromote")
library(chromote)
b <- ChromoteSession$new()That combination is often enough for reporting pipelines and research tasks where the team wants to stay in R instead of introducing a separate Node service just to reach a dynamic page.
If you already run Selenium or RSelenium in-house, keep it for existing WebDriver-based workflows. I would just be careful about using an older stack by default for a fresh small-to-medium build unless there is a real browser-policy or infrastructure reason to do so.
Implementation Rules That Save Time Later
- Use browser automation only where it adds real value. If a normal request returns the same data, take the simpler route.
- Wait on application state, not arbitrary sleeps. A script built around sleep(5) will eventually break on a slow network, a new modal, or a slightly different response time; see the sketch after this list.
- Prefer stable locators over brittle CSS chains. Labels, roles, test IDs, predictable URLs, and meaningful element names usually survive redesigns better than deep selectors.
- Keep headed mode available. Watching the browser once is often the fastest way to diagnose a failing automation.
- Separate browser work from data work. Use the browser to get through login, rendering, or export steps, then switch back to normal parsing and processing as soon as you can.
- Design for consent prompts, expired sessions, and rate limits from day one. Those are common operational problems, not edge cases.
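To make the first two rules concrete, here is a sketch contrasting the two approaches using Playwright; the selectors and button name are hypothetical:

```ts
import type { Page } from "playwright";

// Brittle: assumes the results always arrive within five seconds.
async function scrapeWithSleep(page: Page) {
  await page.click("#search");
  await page.waitForTimeout(5000); // the sleep(5) anti-pattern
  return page.locator(".result").allTextContents();
}

// Robust: waits on application state instead of the clock.
async function scrapeOnState(page: Page) {
  await page.getByRole("button", { name: "Search" }).click();
  // Proceed only once at least one result is actually rendered.
  await page.locator(".result").first().waitFor({ state: "visible" });
  return page.locator(".result").allTextContents();
}
```

The second version also fails fast with a clear timeout error if the results never appear, instead of silently scraping an empty page.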
What I Would Recommend In Practice
For a new client build, I would normally start with a small proof of concept in Playwright. If the target turns out to be mostly static, I would simplify it down to direct requests and parsers. If the organisation is strongly R-based, I would test whether chromote or rvest::read_html_live() is enough before introducing a heavier browser stack.
The goal is not to use the most advanced tool. The goal is to end up with an automation workflow your team can rerun next month without babysitting it. If you need help choosing the stack, tightening an unreliable scraping flow, or turning one-off scripts into something an operations team can actually own, Greg can help scope and structure the work.
Need help with this kind of work?
Need help choosing the right automation stack? Get in touch with Greg.