A non-blocking NIO Channel and a blocking InputStream have an inevitable impedance mismatch. Because an InputStream is supposed to block on every read operation unless there’s data available in the buffer, no InputStream-based decoder implementation can be used with a non-blocking NIO application right away.
A common workaround is to prepend a length field to each message so you can wait until a whole message has arrived before calling InputStream.read(). However, this makes your NIO application incompatible with a legacy blocking I/O application, because the legacy application doesn’t prepend a length field at all. You might be able to modify the legacy application to prepend a length field, but we know that’s not always an option. We need something to fill this gap between the two I/O paradigms.
ObjectInput/OutputStream-based blocking I/O network applications are the most common case, because they were considered the easiest quick-and-dirty solution for exchanging Java objects on an intranet. It’s as simple as wrapping a Socket’s InputStream with an ObjectInputStream (i.e. in = new ObjectInputStream(socket.getInputStream());).
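For illustration, such a legacy blocking peer typically looks something like the following sketch (the class name, host, and port here are made up; only the ObjectInputStream-over-Socket pattern matters):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.Socket;

public class LegacyObjectClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 8080);
        // The output stream is created and flushed first so both peers
        // can read each other's stream header without deadlocking.
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        out.flush();
        ObjectInputStream in = new ObjectInputStream(socket.getInputStream());

        out.writeObject("Hello");       // blocks until the object is written
        out.flush();
        Object reply = in.readObject(); // blocks until a whole object arrives
        System.out.println(reply);

        socket.close();
    }
}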
How can we implement a NIO network application that is interoperable with those legacy applications without any modification? It was considered to be impossible… until today!
I’ve just released a new milestone of Netty which addresses the issue described above. It provides CompatibleObjectEncoder and CompatibleObjectDecoder, which retain interoperability with legacy ObjectInput/OutputStream-based socket applications.
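Wiring them up should be no more involved than any other Netty codec; roughly something like the sketch below, assuming the Netty 3-style ChannelPipelineFactory API and the org.jboss.netty.handler.codec.serialization package (check the API docs of the milestone you use for the exact names; MyObjectHandler is a placeholder for your own business logic handler):

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.serialization.CompatibleObjectDecoder;
import org.jboss.netty.handler.codec.serialization.CompatibleObjectEncoder;

public class CompatibleObjectPipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // Decode the frames written by the legacy peer's ObjectOutputStream ...
        pipeline.addLast("decoder", new CompatibleObjectDecoder());
        // ... and encode our replies so a legacy ObjectInputStream can read them.
        pipeline.addLast("encoder", new CompatibleObjectEncoder());
        // Placeholder for the application's own handler.
        pipeline.addLast("handler", new MyObjectHandler());
        return pipeline;
    }
}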
You will also find that you can do the same for any kind of InputStream implementation with Netty’s ReplayingDecoder with a fairly small amount of effort, which means you can shift your complicated blocking-protocol client or server to the more scalable non-blocking paradigm while retaining most of the legacy code.
The excitement of ReplayingDecoder doesn’t stop here. It also allows you to implement a non-blocking decoder in a blocking paradigm. In a non-blocking paradigm, you always have to check whether there’s enough data in the buffer, like the following:
public boolean decode(ByteBuffer in) {
    if (in.remaining() < 4) {
        return false;
    }

    // Read the length header.
    int position = in.position();
    int length = in.getInt();
    if (in.remaining() < length) {
        in.position(position);
        return false;
    }

    // Read the body.
    byte[] data = new byte[length];
    in.get(data);
    ...
    return true;
}
With ReplayingDecoder, you don’t need to check the availability of the input buffer at all:
public void decode(ByteBuffer in) {
    // Read the length header.
    int length = in.getInt();

    // Read the body.
    byte[] data = new byte[length];
    in.get(data);
    ...
}
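For reference, the same decoder written against Netty’s actual API would look roughly like this. This sketch assumes the Netty 3-style ReplayingDecoder from org.jboss.netty.handler.codec.replay; the exact decode() signature may differ slightly between milestones:

import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.handler.codec.replay.ReplayingDecoder;
import org.jboss.netty.handler.codec.replay.VoidEnum;

public class IntegerHeaderFrameDecoder extends ReplayingDecoder<VoidEnum> {
    @Override
    protected Object decode(ChannelHandlerContext ctx, Channel channel,
                            ChannelBuffer buffer, VoidEnum state) throws Exception {
        // No availability checks: ReplayingDecoder replays this method
        // from the beginning once more data arrives.
        int length = buffer.readInt();     // read the length header
        return buffer.readBytes(length);   // read the body as a frame
    }
}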
How could this work? ReplayingDecoder uses a sort of continuation technique. When there isn’t enough data in the buffer, it automatically rewinds the buffer position to the beginning and calls decode() again (i.e. replays the decode) once more data is received from the remote peer.
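To make the continuation idea concrete, here is a toy sketch of how such a replaying caller could be built on top of a plain ByteBuffer, relying on the fact that getInt() and get(byte[]) throw BufferUnderflowException when too few bytes remain. This is only an illustration of the idea, not Netty’s actual implementation, and all the names are invented:

import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class ReplayingCaller {

    public interface Decoder {
        void decode(ByteBuffer in); // the check-free decode() shown above
    }

    private final ByteBuffer cumulation = ByteBuffer.allocate(8192);

    public void dataReceived(ByteBuffer input, Decoder decoder) {
        cumulation.put(input);   // append the newly received bytes
        cumulation.flip();
        while (cumulation.hasRemaining()) {
            int checkpoint = cumulation.position();
            try {
                decoder.decode(cumulation);          // may hit the end of the buffer
            } catch (BufferUnderflowException replay) {
                cumulation.position(checkpoint);     // rewind to the last checkpoint ...
                break;                               // ... and replay once more data arrives
            }
        }
        cumulation.compact();    // keep undecoded bytes for the next call
    }
}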
You might think this is pretty inefficient, but it turns out to be very efficient in most cases. Higher throughput means a lower chance of replay, because we receive more than one message per input buffer (often dozens), so most messages are decoded in one shot without a replay. On a slow connection it will be less than optimal, but you won’t see much difference because the connection itself is already the bottleneck. Just compare the code complexity of the two paradigms. I’d definitely go for the latter.