HTTP/2 is no longer the newest version of HTTP. HTTP/3 exists now, and many edge platforms support it. Even so, the day-to-day operational questions for most teams are still about h2, h2c, and http/1.1: what should be enabled, where does it help, and how do you verify what is really happening on a live stack?
If you run a business website, manage an origin behind a CDN, or support client delivery across multiple hosting layers, the right answer is rarely "use the newest thing everywhere." The better answer is to choose the protocol that fits each hop, keep fallbacks simple, and make testing easy for the next person who has to maintain it.
What These Names Actually Mean
h2 means HTTP/2 over TLS. For public HTTPS traffic, this is the normal production default. It is negotiated during the TLS handshake via ALPN, and for browser-facing sites it is the practical baseline when you want modern performance without unusual deployment risk.
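You can watch that ALPN negotiation happen directly. As a sketch, assuming an OpenSSL build with ALPN support (1.0.2 or later) and using example.com as a stand-in hostname:

```shell
# Offer h2 and http/1.1 via ALPN and print what the server selects.
# A server with HTTP/2 enabled typically answers "ALPN protocol: h2".
echo | openssl s_client -alpn h2,http/1.1 -connect example.com:443 2>/dev/null \
  | grep -i 'ALPN'
```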
h2c means HTTP/2 over cleartext TCP. That makes it a niche tool, not a general recommendation. It can still be useful inside controlled networks, between trusted proxies and services, or in lab environments where you control both ends. What matters is that you choose it intentionally. The old upgrade-style path around h2c was never widely adopted and is now marked obsolete in the current HTTP/2 RFC, so it is not something to build public-web strategy around.
http/1.1 is not a failure state. It is still a valid, widely supported protocol, and in some environments it remains the simplest operational choice. Older upstreams, certain appliances, debug tooling, and legacy integrations can still behave more predictably with HTTP/1.1 than with forced HTTP/2 everywhere.
Choose the Default Based on the Traffic Pattern
For public websites, a sensible default is usually h2 with http/1.1 fallback. That gives you modern browser performance while keeping compatibility straightforward. If your platform also supports HTTP/3, treat that as an extra benefit at the edge, not a reason to neglect your HTTP/2 and HTTP/1.1 behavior.
For internal services, reverse proxy hops, or service meshes, the decision is more practical than ideological. If you fully control both ends and want the connection reuse and multiplexing that HTTP/2 brings, h2c can be reasonable. If simple troubleshooting matters more, or if your traffic is low-volume and stable, HTTP/1.1 may still be the better trade.
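If you do choose h2c for an internal hop, the server side can stay small. A minimal sketch for NGINX, assuming version 1.25.1 or later (where the standalone `http2 on;` directive exists) and an illustrative upstream name:

```nginx
# Internal-only listener: cleartext HTTP/2 (h2c) between trusted hops.
server {
    listen 8080;        # no "ssl" -> cleartext; keep this off public interfaces
    http2 on;           # accepts HTTP/1.1 and prior-knowledge HTTP/2 on this port

    location / {
        proxy_pass http://app_backend;   # illustrative upstream name
    }
}
```

The point of the sketch is the separation: the cleartext HTTP/2 listener lives on an internal port you control, never on the public 443.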
For agencies and operations teams, one mistake shows up again and again: assuming the browser protocol and the origin protocol are the same thing. They often are not. A CDN may speak HTTP/2 or HTTP/3 to the visitor and HTTP/1.1 to the origin. That is not automatically a problem. The real question is whether the handoff points are measured, understood, and fit for the workload.
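One way to make that handoff visible, as a sketch: compare what the visitor-facing edge negotiates with what the origin speaks when you bypass the CDN. The origin IP below (203.0.113.10) is a placeholder for your real origin address:

```shell
# What does the edge negotiate with clients?
curl -sS -o /dev/null -w 'edge:   %{http_version}\n' https://example.com

# What does the origin speak when you bypass the CDN?
# --resolve pins the hostname to the origin IP so TLS and SNI still match.
curl -sS -o /dev/null -w 'origin: %{http_version}\n' \
  --resolve example.com:443:203.0.113.10 https://example.com
```

If the two numbers differ, that is not a bug by itself; it is simply the handoff you now know about.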
Practical Server Patterns
On Apache, the common TLS pattern is simple:
Protocols h2 http/1.1

If you truly need cleartext HTTP/2 in a controlled environment, Apache also documents this broader pattern:
Protocols h2 h2c http/1.1

Use that second form only when you know why h2c is needed. Apache also notes an important caveat: upgrade-based switching to HTTP/2 is only accepted for requests without a body, so POST and PUT requests with content will not trigger that upgrade path. In other words, do not assume an API workload will quietly "upgrade itself" just because the module is enabled.
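You can see that caveat for yourself in a lab. With curl, passing --http2 against a plain http:// URL attempts the Upgrade: h2c path; a hedged sketch against a placeholder internal host:

```shell
# Bodyless GET: curl sends "Upgrade: h2c", and Apache may switch to HTTP/2.
curl -sv --http2 http://internal-apache/ -o /dev/null 2>&1 | grep -iE 'upgrade|^< HTTP/'

# POST with a body: the upgrade path is not taken, so expect HTTP/1.1 here.
curl -sv --http2 -d 'k=v' http://internal-apache/api -o /dev/null 2>&1 | grep -iE '^< HTTP/'
```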
On NGINX, the current HTTP/2 enablement pattern is also direct:
server {
    listen 443 ssl;
    http2 on;
}

What is worth avoiding is stale copy-paste configuration. Current NGINX docs mark older HTTP/2 server-push directives such as http2_push as obsolete. If your stack still contains old push-related snippets, that is a good sign the configuration deserves a cleanup instead of another round of inherited tuning.
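A quick audit for push-era leftovers might look like this; the config path is the common Debian/Ubuntu default and may differ on your system:

```shell
# Find obsolete HTTP/2 push directives and the old listen-flag syntax.
grep -rnE 'http2_push|listen .*http2' /etc/nginx/ \
  || echo 'no stale HTTP/2 snippets found'

# Always validate before reloading.
nginx -t
```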
How to Verify What a Live Site Is Negotiating
Do not guess from marketing pages, control panels, or old screenshots. Test the live endpoint.
curl -I -sS -o /dev/null -w '%{http_version}\n' https://example.com
curl --http1.1 -I https://example.com
curl --http2 -I https://example.com
curl --http2-prior-knowledge http://internal-service:8080/health

The first command tells you which HTTP version was actually used. The next two force HTTP/1.1 or HTTP/2 so you can compare behavior. The final command is useful only when you already know the cleartext endpoint is meant to speak HTTP/2 directly. If your local curl build does not support these options, run curl -V and confirm HTTP/2 support is present before blaming the server.
This is also where many performance discussions become more honest. If the edge is already serving h2, but the origin is slow, the protocol is not the main problem. If the CDN is negotiating cleanly, but cache headers or asset weight are poor, changing transport alone will not save the page.
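To keep those discussions grounded in numbers, a minimal sketch compares total transfer time under each forced protocol. Single samples are noisy, so treat this as a smoke test, not a benchmark:

```shell
# Compare wall-clock transfer time per protocol (one sample each).
for flag in --http1.1 --http2; do
  t=$(curl -sS -o /dev/null -w '%{time_total}' "$flag" https://example.com)
  echo "$flag: ${t}s"
done
```

If the two times are close, the transport is not where your page time is going.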
Mistakes Worth Avoiding
- Treating h2c as a default for public sites instead of a controlled-environment tool.
- Assuming HTTP/2 is the latest protocol and writing plans that ignore HTTP/3.
- Believing every hop in the chain must use the same protocol to be "correct."
- Relying on old upgrade or server-push recipes without checking current docs.
- Optimizing protocol choice before fixing backend latency, caching, and asset delivery.
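A protocol review across hops can also be scripted. This sketch loops over whatever hostnames represent your layers (the names here are placeholders) and prints the negotiated version for each:

```shell
# Print the negotiated HTTP version for each hop's hostname.
# Replace the list with your real edge, load balancer, and origin names.
for host in example.com origin.example.com; do
  v=$(curl -sS -o /dev/null -w '%{http_version}' "https://$host" 2>/dev/null) \
    || v='unreachable'
  printf '%-24s %s\n' "$host" "$v"
done
```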
If your stack spans hosting, CDN, load balancer, reverse proxy, and application servers, the real value is not just enabling a flag. It is knowing which layer is responsible for performance, compatibility, and operational complexity. That is where a short protocol review can prevent a lot of wasted effort.
Need help with this kind of work?
Need a clearer plan for CDN, proxy, and origin protocol choices? Greg can help simplify the stack and reduce guesswork. Get in touch with Greg.