If you run a revenue site, client portal, or campaign landing page, "port 443 is open" is not enough. A useful Nagios web check should tell you that HTTPS answers, the expected virtual host is served, intended redirects still work, and the page or endpoint returns something that proves the application is alive. That is where check_http still earns its place: it is simple, readable, and already installed in many Nagios environments.
The original one-liner on this page is still directionally right. Checking for SSL, following redirects, and matching a known string is often better than doing a bare TCP or ICMP probe. The main improvement is to make the check more deliberate: choose the right path, choose a stable string, and set thresholds that reflect business impact rather than server optimism.
Use check_http to confirm the right thing
A strong HTTP check answers a business question, not just a technical one. For most teams, that question is: "Can a real user reach the correct page quickly enough?" In practice, that means checking the correct hostname with -H, using --ssl for HTTPS, following known redirects with --onredirect=follow, and confirming expected content with --string or --regex.
- Use the real hostname, especially on shared or virtual-hosted servers.
- Prefer a stable path such as /health, /status, or /login over a noisy home page if you control the application.
- Match a stable marker like the product name, page title, or a known response string, not a random bit of changing content.
- Set -w and -c so the alert lines up with what your team or clients would actually consider slow.
A practical baseline command
/usr/local/nagios/libexec/check_http -H example.com --ssl --onredirect=follow --string 'example.com' -w 5 -c 10

This is the cleaned-up version of the original idea. It checks the host header example.com, connects over HTTPS, follows redirects, looks for the string example.com in the body, warns after 5 seconds, and goes critical after 10. That is a reasonable starting point for marketing sites, brochure sites, and small client sites.
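Wired into Nagios, that command usually sits behind an object definition rather than being pasted into each service. A minimal sketch, assuming the stock plugin path; the command_name, host_name, and service description here are hypothetical, not taken from any real environment:

```cfg
# Hypothetical definitions; adjust names and thresholds to your setup.
define command {
    command_name    check_https_content
    command_line    /usr/local/nagios/libexec/check_http -H '$ARG1$' --ssl --onredirect=follow --string '$ARG2$' -w 5 -c 10
}

define service {
    use                     generic-service
    host_name               webserver1
    service_description     HTTPS content
    check_command           check_https_content!example.com!example.com
}
```

Keeping the hostname and match string as arguments lets one command definition cover many client sites.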
That said, matching the domain name is only a fallback. Many modern apps render minimal shell HTML, localize content, or serve generic cached pages where the domain may not appear reliably. If you control the application, a purpose-built endpoint is better:
/usr/local/nagios/libexec/check_http -H example.com -u /health --ssl --string 'ok' -w 3 -c 8

If the response is more variable, switch from --string to -r or -R and match a stable pattern instead of an exact phrase. If DNS, CDN, or load-balancer behavior makes troubleshooting harder, combine -H with -I so you test a specific address while still sending the correct host header.
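Whichever flags you settle on, remember that Nagios reads the plugin's exit code, not its text: 0 is OK, 1 WARNING, 2 CRITICAL, and 3 UNKNOWN. A minimal shell sketch of that convention; run_check is a hypothetical helper, and the placeholder commands stand in for a real check_http invocation:

```shell
# Map a Nagios plugin exit code to its state name.
# run_check is illustrative; in practice "$@" would be a check_http call.
run_check() {
    "$@" >/dev/null 2>&1
    case $? in
        0) echo OK ;;
        1) echo WARNING ;;
        2) echo CRITICAL ;;
        *) echo UNKNOWN ;;
    esac
}

run_check true              # exit code 0 -> OK
run_check sh -c 'exit 2'    # exit code 2 -> CRITICAL
```

This is also why wrapper scripts around check_http must preserve the plugin's exit code instead of swallowing it.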
Redirects, certificates, and common blind spots
Redirect behavior matters because a site can be up and still be wrong. If HTTP should always move to HTTPS, --onredirect=follow is sensible. If a redirect would actually signal a mistake, set redirect handling to warning or critical instead of following it blindly. The check should reflect the intended user journey.
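For example, if example.com should serve its page directly and any redirect would indicate a misconfiguration, the same plugin can alert on that; the thresholds here are illustrative:

```shell
# Go CRITICAL if the site starts redirecting where it should not
/usr/local/nagios/libexec/check_http -H example.com --ssl --onredirect=critical -w 5 -c 10
```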
For certificate expiry alone, a focused check is still useful:
/usr/local/nagios/libexec/check_http -H example.com -C 21,7

That warns when the certificate has fewer than 21 days left and goes critical below 7 days. The important caveat is that a passing check_http result should not be treated as proof of full modern TLS validation. Current manpages still call out limits around hostname and CA-chain verification in check_http, so if strict certificate validation matters, plan to use check_curl for that job.
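When debugging a certificate alert, it can also help to read the expiry date out-of-band. A sketch using openssl, which needs network access to the host; example.com is the same placeholder as above:

```shell
# Print the certificate's notAfter date as seen from this monitoring host
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
  | openssl x509 -noout -enddate
```

Comparing this against what check_http reports can quickly rule out a stale cache or a different certificate behind a load balancer.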
Current reality: new checks should lean toward check_curl
This is the part worth updating in older notes. The current Monitoring Plugins documentation marks check_http as deprecated and recommends check_curl as the drop-in replacement. If you have stable legacy definitions, you do not need to rewrite everything immediately. But for new checks, refreshes, or environments where TLS behavior matters, check_curl is the safer direction.
/usr/local/nagios/libexec/check_curl -H example.com --ssl -D --onredirect=curl --string 'example.com' -w 5 -c 10

That example stays close to the older pattern but adds certificate and hostname verification with -D and uses libcurl-backed redirect handling. It is a better default when you are standardizing checks across multiple client sites, agency retainers, or environments where inconsistent packaging turns into operational noise.
Quick log triage still matters
The original grep note is still useful when you are reviewing logs by hand. This command remains valid:
grep -i -e 'critical' -e 'invalid' -e 'warning' logfile.log

On GNU grep, a shorter equivalent is:

grep -Ei 'critical|invalid|warning' logfile.log

Use it for first-pass triage, not as your whole monitoring strategy. The real goal is to turn recurring log patterns into explicit checks, clear thresholds, and alerts that someone can act on quickly.
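The natural next step is to promote that grep into a check that speaks Nagios exit codes. A minimal sketch; check_log_patterns is a hypothetical helper, and the pattern list simply mirrors the grep above:

```shell
# Return CRITICAL (2) if any triage pattern appears in the log, OK (0) otherwise.
check_log_patterns() {
    # -E extended regex, -i case-insensitive, -c count matching lines
    matches=$(grep -Eic 'critical|invalid|warning' "$1" 2>/dev/null)
    if [ "${matches:-0}" -gt 0 ]; then
        echo "CRITICAL - $matches suspicious line(s) in $1"
        return 2
    fi
    echo "OK - no suspicious lines in $1"
    return 0
}
```

In a real setup, the pattern list, the log path, and the warning/critical split would come from your team's triage rules rather than a hard-coded regex.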
If you need help turning a handful of ad hoc server checks into a monitoring setup your team can actually trust across sites, clients, and handovers, Greg can help define the checks, thresholds, and operational ownership so alerts become useful instead of noisy.
Need help with this kind of work?
Talk to Greg about turning ad hoc site checks into dependable monitoring across client and production environments. Get in touch with Greg.