Java 11: Standardized HTTP Client API

One of the features to be included in the upcoming JDK 11 release is the standardized HTTP client API, which aims to replace the legacy HttpURLConnection class that has been present in the JDK since its very early years. The problems with the old API are described in the enhancement proposal (JEP 321): chiefly, it is now considered outdated and difficult to use.

The new API supports both HTTP/1.1 and HTTP/2. The newer version of the HTTP protocol is designed to improve the overall performance of sending requests by a client and receiving responses from the server. This is achieved by introducing a number of changes, such as stream multiplexing, header compression, and push promises. In addition, the new HTTP client also natively supports WebSockets.

A new module named java.net.http that exports a package of the same name is defined in JDK 11, which contains the client interfaces:

module java.net.http {
    exports java.net.http;
}


You can view the API Javadocs here (note that since JDK 11 is not yet released, this API is not 100 percent final).

The package contains the main types HttpClient, HttpRequest, HttpResponse, and WebSocket, along with nested interfaces such as HttpRequest.BodyPublisher, HttpResponse.BodyHandler, and HttpResponse.BodySubscriber.

BodyPublisher is a subinterface of Flow.Publisher, which was introduced in Java 9. Similarly, BodySubscriber is a subinterface of Flow.Subscriber. This means these interfaces are aligned with the reactive streams approach, which is well suited to sending requests asynchronously over HTTP/2.

Implementations for common types of body publishers, handlers, and subscribers are pre-defined in factory classes BodyPublishers, BodyHandlers, and BodySubscribers. For example, to create a BodyHandler that processes the response body bytes (via an underlying BodySubscriber) as a string, the method BodyHandlers.ofString() can be used to create such an implementation. If the response body needs to be saved in a file, the method BodyHandlers.ofFile() can be used.
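
For example, here is a minimal sketch (assuming an httpClient and a request built as in the examples below) that streams the response body straight to a file:

// Save the response body to a file instead of loading it into memory as a String
// (httpClient and request are assumed to be built as in the examples below)
HttpResponse<Path> fileResponse = httpClient.send(
               request, BodyHandlers.ofFile(Path.of("response.html")));
logger.info("Response body saved to: " + fileResponse.body());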

Code Examples

Specifying the HTTP Protocol Version

To create an HTTP client that prefers HTTP/2 (which is the default, so the version() call can be omitted):

HttpClient httpClient = HttpClient.newBuilder()
               .version(Version.HTTP_2)  // this is the default
               .build();


When HTTP/2 is specified, the first request to an origin server attempts to use it. If the server supports the new protocol version, the response is sent using that version, and all subsequent requests/responses to that server use HTTP/2. If the server does not support HTTP/2, HTTP/1.1 is used instead.
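
To confirm which protocol version was actually negotiated for a given exchange, the response can be inspected. Here is a minimal sketch, assuming a request built as in the examples that follow:

// The response reports the protocol version actually used for this exchange
HttpResponse<String> response = httpClient.send(request, BodyHandlers.ofString());
logger.info("Negotiated version: " + response.version());  // HTTP_2 or HTTP_1_1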

Specifying a Proxy

To set a proxy for the request, the builder method proxy is used to provide a ProxySelector. If the proxy host and port are fixed, the selector can be hardcoded:

HttpClient httpClient = HttpClient.newBuilder()
               .proxy(ProxySelector.of(new InetSocketAddress(proxyHost, proxyPort)))
               .build();


Creating a GET Request

The HTTP request methods have corresponding builder methods named after them. In the example below, GET() is optional:

HttpRequest request = HttpRequest.newBuilder()
               .uri(URI.create("https://http2.github.io/"))
               .GET()   // this is the default
               .build();


Creating a POST Request With a Body

To create a request that has a body in it, a BodyPublisher is required in order to convert the source of the body into bytes. One of the pre-defined publishers can be created from the static factory methods in BodyPublishers:

HttpRequest mainRequest = HttpRequest.newBuilder()
               .uri(URI.create("https://http2.github.io/"))
               .POST(BodyPublishers.ofString(json))
               .build();
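
Note that the client does not derive a Content-Type header from the body publisher. Assuming the body is JSON, the header can be set explicitly on the builder; a minimal sketch:

HttpRequest request = HttpRequest.newBuilder()
               .uri(URI.create("https://http2.github.io/"))
               .header("Content-Type", "application/json")  // not inferred from the BodyPublisher
               .POST(BodyPublishers.ofString(json))
               .build();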


Sending an HTTP Request

There are two ways of sending a request: either synchronously (blocking until the response is received) or asynchronously. To send in blocking mode, we invoke the send() method on the HTTP client, providing the request instance and a BodyHandler. Here is an example that receives a response representing the body as a string:

HttpRequest request = HttpRequest.newBuilder()
               .uri(URI.create("https://http2.github.io/"))
               .build();

HttpResponse<String> response = httpClient.send(request, BodyHandlers.ofString());
logger.info("Response status code: " + response.statusCode());
logger.info("Response headers: " + response.headers());
logger.info("Response body: " + response.body());


Asynchronously Sending an HTTP Request

Sometimes, it is useful to avoid blocking until the response is returned by the server. In this case, we can call the method sendAsync(), which returns a CompletableFuture. A CompletableFuture provides a mechanism to chain subsequent actions to be triggered when it is completed. In this context, the returned CompletableFuture is completed when an HttpResponse is received. If you are not familiar with CompletableFuture, this post provides an overview and several examples that illustrate how to use it.

httpClient.sendAsync(request, BodyHandlers.ofString())
          .thenAccept(response -> {
              logger.info("Response status code: " + response.statusCode());
              logger.info("Response headers: " + response.headers());
              logger.info("Response body: " + response.body());
          });


In the above example, sendAsync would return a CompletableFuture<HttpResponse<String>>. The thenAccept method adds a Consumer to be triggered when the response is available.
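
Since sendAsync() returns immediately, the calling thread (for example, main) may need to keep a reference to the returned future and wait on it later. A minimal sketch of that pattern:

CompletableFuture<Void> done = httpClient.sendAsync(request, BodyHandlers.ofString())
               .thenAccept(response -> logger.info("Response status code: " + response.statusCode()));
// ... other work can proceed here ...
done.join();  // block only when the completion actually needs to be awaited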

Sending Multiple Requests Using HTTP/1.1

When loading a web page in a browser using HTTP/1.1, several requests are sent behind the scenes. A request is first sent to retrieve the main HTML of the page, and then several requests are typically needed to retrieve the resources referenced by the HTML, e.g. CSS files, images, and so on. To do this, several TCP connections are created to support the parallel requests, due to a limitation in the protocol where only one request/response can occur on a given connection. However, the number of connections is usually limited (most tests on page loads seem to create six connections). This means that many requests will wait until previous requests are complete before they can be sent. The following example reproduces this scenario by loading a page that links to hundreds of images (taken from an online demo on HTTP/2).

A request is first sent to retrieve the HTML main resource. Then, we parse the result, and for each image in the document, a request is submitted in parallel using an executor with a limited number of threads:

ExecutorService executor = Executors.newFixedThreadPool(6);

HttpClient httpClient = HttpClient.newBuilder()
        .version(Version.HTTP_1_1)
        .build();

HttpRequest mainRequest = HttpRequest.newBuilder()
        .uri(URI.create("https://http2.akamai.com/demo/h2_demo_frame.html"))
        .build();

HttpResponse<String> mainResponse = httpClient.send(mainRequest, BodyHandlers.ofString());
String responseBody = mainResponse.body();

List<Future<?>> futures = new ArrayList<>();

// For each image resource in the main HTML, send a request on a separate thread
responseBody.lines()
            .filter(line -> line.trim().startsWith("<img height"))
            .map(line -> line.substring(line.indexOf("src='") + 5, line.indexOf("'/>")))
            .forEach(image -> {
                Future<?> imgFuture = executor.submit(() -> {
                    HttpRequest imgRequest = HttpRequest.newBuilder()
                            .uri(URI.create("https://http2.akamai.com" + image))
                            .build();
                    try {
                        HttpResponse<String> imageResponse = httpClient.send(imgRequest, BodyHandlers.ofString());
                        logger.info("Loaded " + image + ", status code: " + imageResponse.statusCode());
                    } catch (IOException | InterruptedException ex) {
                        logger.error("Error during image request for " + image, ex);
                    }
                });
                futures.add(imgFuture);
            });

// Wait for all submitted image loads to be completed
futures.forEach(f -> {
    try {
        f.get();
    } catch (InterruptedException | ExecutionException ex) {
        logger.error("Error waiting for image load", ex);
    }
});


Below is a snapshot of TCP connections created by the previous HTTP/1.1 example:

[Screenshot: TCPView showing the multiple TCP connections opened by the HTTP/1.1 example]

Sending Multiple Requests Using HTTP/2

Running the scenario above but using HTTP/2 (by setting version(Version.HTTP_2) on the client builder), we can see that similar latency is achieved with only one TCP connection, as shown in the screenshot below, hence using fewer resources. This is achieved through multiplexing, a key feature that enables multiple requests to be sent concurrently over the same connection in the form of multiple streams of frames. Each request/response is decomposed into frames, which are sent over a stream; the client is then responsible for assembling the frames into the final response.

[Screenshot: TCPView showing the single TCP connection used by the HTTP/2 example]

If we increase the level of parallelism by allowing more threads in the custom executor, the latency drops noticeably, since more requests are sent in parallel over the same TCP connection.

Handling Push Promises in HTTP/2

Some web servers support push promises. Instead of the browser having to request every page asset, the server can guess which resources the client is likely to need and push them to the client. For each resource, the server sends a special request, known as a push promise, in the form of a frame to the client. The HttpClient has an overloaded sendAsync method that allows us to handle such promises by either accepting or rejecting them, as shown in the example below:

httpClient.sendAsync(mainRequest, BodyHandlers.ofString(), new PushPromiseHandler<String>() {

    @Override
    public void applyPushPromise(HttpRequest initiatingRequest, HttpRequest pushPromiseRequest, Function<BodyHandler<String>, CompletableFuture<HttpResponse<String>>> acceptor) {
        // invoke the acceptor function to accept the promise
        acceptor.apply(BodyHandlers.ofString())
                .thenAccept(resp -> logger.info("Got pushed response " + resp.uri()));
    }
});


Pushed resources can improve performance: the server sends, along with the response to the initial request, resources that the client would otherwise have to request explicitly, saving those extra round-trips.
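
Alternatively, the static factory method PushPromiseHandler.of() accepts each push and collects the pushed responses into a provided map; a minimal sketch:

// Accept every push promise and gather the pushed responses into a concurrent map
ConcurrentMap<HttpRequest, CompletableFuture<HttpResponse<String>>> pushedResponses = new ConcurrentHashMap<>();

httpClient.sendAsync(mainRequest, BodyHandlers.ofString(),
               PushPromiseHandler.of(pushRequest -> BodyHandlers.ofString(), pushedResponses));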

WebSocket Example

The HTTP client also supports the WebSocket protocol, which is used in real-time web applications to provide client-server communication with low message overhead. Below is an example of how to use an HttpClient to create a WebSocket that connects to a URI, sends messages for one second, and then closes its output. The API also makes use of asynchronous calls that return CompletableFuture:

// Any ExecutorService works here; a single-threaded one is assumed for this snippet
ExecutorService executor = Executors.newSingleThreadExecutor();

HttpClient httpClient = HttpClient.newBuilder().executor(executor).build();
WebSocket.Builder webSocketBuilder = httpClient.newWebSocketBuilder();
WebSocket webSocket = webSocketBuilder.buildAsync(URI.create("wss://echo.websocket.org"), new WebSocket.Listener() {
    @Override
    public void onOpen(WebSocket webSocket) {
        logger.info("CONNECTED");
        webSocket.sendText("This is a message", true);
        Listener.super.onOpen(webSocket);
    }

    @Override
    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
        logger.info("onText received with data " + data);
        if(!webSocket.isOutputClosed()) {
            webSocket.sendText("This is a message", true);
        }
        return Listener.super.onText(webSocket, data, last);
    }

    @Override
    public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
        logger.info("Closed with status " + statusCode + ", reason: " + reason);
        executor.shutdown();
        return Listener.super.onClose(webSocket, statusCode, reason);
    }
}).join();
logger.info("WebSocket created");

Thread.sleep(1000);
webSocket.sendClose(WebSocket.NORMAL_CLOSURE, "ok").thenRun(() -> logger.info("Sent close"));


Conclusion

The new HTTP client API provides a standard way to perform HTTP network operations, with support for modern web features such as HTTP/2, without the need to add third-party dependencies. The full code of the above examples is available here. If you enjoyed this post, feel free to share it!