1. Introduction
The previous article, *Spring WebFlux Source Analysis (1): Service Startup Flow*, gave a broad analysis of the path from service startup to a request being passed up to the handler, but did not examine the underlying server implementation in detail. As we know, Spring WebFlux is backed by an embedded Netty server by default; this article focuses on how that underlying Netty server is started and how it serves requests.
2. Netty Server Startup and Serving
2.1 Configuring and Starting the Netty Server
In *Spring WebFlux Source Analysis (1): Service Startup Flow* we saw that once the Spring container finishes initialization, the framework calls the relevant method to start the `NettyWebServer` instance. This article picks up the analysis from that point.

Step 1. The Netty server startup flow begins when `NettyWebServer#start()` is called. The method is quite simple: its only real logic is the call to `NettyWebServer#startHttpServer()`, whose key flow has two parts:
- First, `this.httpServer.handle(this.handlerAdapter)` wraps the upper layer's request handler (the `DispatcherHandler`, via `this.handlerAdapter`) into an `HttpServerHandle` object and returns that object. Note that `this.httpServer` is an `HttpServerBind` instance, and the newly created `HttpServerHandle` stores it in its internal `source` field.
- Next, the inherited method `HttpServer#bindNow()` is called on the `HttpServerHandle` object to begin binding the server to its listening port.

```java
@Override
public void start() throws WebServerException {
    if (this.disposableServer == null) {
        try {
            this.disposableServer = startHttpServer();
        }
        catch (Exception ex) {
            ChannelBindException bindException = findBindException(ex);
            if (bindException != null) {
                throw new PortInUseException(bindException.localPort());
            }
            throw new WebServerException("Unable to start Netty", ex);
        }
        logger.info("Netty started on port(s): " + getPort());
        startDaemonAwaitThread(this.disposableServer);
    }
}

private DisposableServer startHttpServer() {
    if (this.lifecycleTimeout != null) {
        return this.httpServer.handle(this.handlerAdapter)
                .bindNow(this.lifecycleTimeout);
    }
    return this.httpServer.handle(this.handlerAdapter).bindNow();
}
```
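The `bindNow()` call used above blocks the calling thread until the asynchronous bind completes or a timeout elapses. As a rough analogy only (hypothetical names, substituting `CompletableFuture` for Reactor's `Mono#block(Duration)`), the blocking wrapper can be sketched like this:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: a blocking bindNow() wrapping an async bind,
// analogous to HttpServer#bindNow(Duration) calling bind().block(timeout).
class BlockingBind {

    // stand-in for the asynchronous bind; pretend the server binds on another thread
    static CompletableFuture<String> bindAsync() {
        return CompletableFuture.supplyAsync(() -> "bound:8080");
    }

    // block the caller, translating a timeout into the same kind of error
    // message the Reactor code produces
    static String bindNow(long timeoutMs) {
        try {
            return bindAsync().get(timeoutMs, TimeUnit.MILLISECONDS);
        }
        catch (TimeoutException e) {
            throw new IllegalStateException("server couldn't be started within " + timeoutMs + "ms");
        }
        catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(bindNow(1000));
    }
}
```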
Step 2. The creation of the `HttpServerHandle` object is straightforward, so we skip it; this section focuses on the subsequent binding, namely `HttpServer#bindNow()`. This method is really just an entry point: as the source shows, it ultimately calls `HttpServer#bind()`, whose key flow clearly splits into two parts:
- Call `HttpServerHandle#tcpConfiguration()` to complete the TCP configuration and return a `TcpServer` instance.
- Call `HttpServer#bind()` with that `TcpServer` instance as the argument to start the server.

```java
public final DisposableServer bindNow(Duration timeout) {
    Objects.requireNonNull(timeout, "timeout");
    try {
        return Objects.requireNonNull(bind().block(timeout), "aborted");
    }
    catch (IllegalStateException e) {
        if (e.getMessage().contains("blocking read")) {
            throw new IllegalStateException("HttpServer couldn't be started within " + timeout.toMillis() + "ms");
        }
        throw e;
    }
}

public final Mono<? extends DisposableServer> bind() {
    return bind(tcpConfiguration());
}
```
Step 3. The implementation of `HttpServerHandle#tcpConfiguration()` looks trivial: it simply calls methods on the `source` field saved when the instance was created in step 1. That `source` field holds the `HttpServerBind` instance, so what is actually invoked here is `HttpServerBind#tcpConfiguration()`, followed by `bootstrap()` on the object it returns.

This illustrates a chaining idea that does not rely on inheritance. The creation order is `HttpServerBind --> HttpServerHandle`, and the two are sibling classes: the newly created `HttpServerHandle` keeps the `HttpServerBind` that created it in its `source` field, so it can invoke `HttpServerBind` methods through that reference. Class behavior is thus extended horizontally, and we will run into exactly the same pattern again below.

```java
protected TcpServer tcpConfiguration() {
    return source.tcpConfiguration().bootstrap(this);
}
```
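The `source` chaining described above can be reduced to a toy model. All class names below are hypothetical stand-ins for the `HttpServerBind`/`HttpServerHandle` pair; the only point is how a sibling class extends behavior through a saved `source` reference instead of inheritance:

```java
// Hypothetical minimal model of the "source" chaining: each wrapper keeps a
// reference to the object that created it and delegates through it.
class SourceChain {

    interface Server {
        String configure();
    }

    // plays the HttpServerBind role: creates the wrapper and hands itself over
    static class BaseServer implements Server {
        public String configure() {
            return "tcp";
        }

        Server handle(String handler) {
            return new HandleServer(this, handler);
        }
    }

    // plays the HttpServerHandle role: stores its creator as "source"
    static class HandleServer implements Server {
        final Server source;
        final String handler;

        HandleServer(Server source, String handler) {
            this.source = source;
            this.handler = handler;
        }

        public String configure() {
            // horizontal extension: delegate to source, then add own behavior
            return source.configure() + "+" + handler;
        }
    }

    public static void main(String[] args) {
        Server s = new BaseServer().handle("dispatcher");
        System.out.println(s.configure());
    }
}
```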
Step 4. `HttpServerBind#tcpConfiguration()` is very short: it just returns a `TcpServer` instance. This `TcpServer` was assigned when the `HttpServerBind` object was created through its no-arg constructor, and it ultimately comes from the static factory method `TcpServer#create()`.

```java
// This declaration lives in HttpServer, the parent class of HttpServerBind
static final TcpServer DEFAULT_TCP_SERVER = TcpServer.create();

// The following code is in HttpServerBind
final TcpServer tcpServer;

HttpServerBind() {
    this(DEFAULT_TCP_SERVER);
}

@Override
protected TcpServer tcpConfiguration() {
    return tcpServer;
}
```
Step 5. The static method `TcpServer#create()` is equally simple: it returns the `TcpServerBind` singleton.

```java
public static TcpServer create() {
    return TcpServerBind.INSTANCE;
}
```
Step 6. Following the trail, `TcpServerBind.INSTANCE` is created by the no-arg constructor of `TcpServerBind`, and here we finally meet `ServerBootstrap`, the server-side configuration class of the Netty framework. Only a few configuration options are set on the `ServerBootstrap` at this point; nothing elaborate happens. Readers unfamiliar with the role of `ServerBootstrap` may want to read *Netty Source Analysis (2): Server Startup Flow* for an introduction; we will not expand on it here.

```java
static final TcpServerBind INSTANCE = new TcpServerBind();

final ServerBootstrap serverBootstrap;

TcpServerBind() {
    this.serverBootstrap = createServerBootstrap();
    BootstrapHandlers.channelOperationFactory(this.serverBootstrap, TcpUtils.TCP_OPS);
}

ServerBootstrap createServerBootstrap() {
    return new ServerBootstrap().option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
            .option(ChannelOption.SO_REUSEADDR, true)
            .option(ChannelOption.SO_BACKLOG, 1000)
            .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
            .childOption(ChannelOption.SO_RCVBUF, 1024 * 1024)
            .childOption(ChannelOption.SO_SNDBUF, 1024 * 1024)
            .childOption(ChannelOption.AUTO_READ, false)
            .childOption(ChannelOption.SO_KEEPALIVE, true)
            .childOption(ChannelOption.TCP_NODELAY, true)
            .childOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 30000)
            .localAddress(InetSocketAddressUtil.createUnresolved(NetUtil.LOCALHOST.getHostAddress(), DEFAULT_PORT));
}
```
Step 7. With the `TcpServerBind` object created, we return to the second key point of step 3. We now know that `source.tcpConfiguration()` returns a `TcpServerBind`, so `source.tcpConfiguration().bootstrap(this)` actually invokes `TcpServerBind#bootstrap()`, which is in fact implemented by its parent class `TcpServer`, as shown below. The logic is plain: create a `TcpServerBootstrap` object whose job is to hold the upper layers' server configuration. Two constructor arguments deserve attention:
- `this`: easy to understand, this is the instance the method belongs to, i.e. the `TcpServerBind` object. It will be stored in the new `TcpServerBootstrap`'s internal `source` field, the same chaining idea mentioned in step 3.
- `bootstrapMapper`: this one needs particular attention. It is the argument passed into `bootstrap()`, and from the preceding steps it is clearly the `HttpServerHandle` instance. Viewed globally, `bootstrapMapper` is the abstraction of the actual server-configuration logic; this function will be invoked later to configure the server.

```java
// This method is in TcpServer
public final TcpServer bootstrap(Function<? super ServerBootstrap, ? extends ServerBootstrap> bootstrapMapper) {
    return new TcpServerBootstrap(this, bootstrapMapper);
}

// The constructor of TcpServerBootstrap
TcpServerBootstrap(TcpServer server,
        Function<? super ServerBootstrap, ? extends ServerBootstrap> bootstrapMapper) {
    super(server);
    this.bootstrapMapper = Objects.requireNonNull(bootstrapMapper, "bootstrapMapper");
}
```
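The role of `bootstrapMapper` is easier to see with a small sketch: server configuration is just a `Function<ServerBootstrap, ServerBootstrap>`, so mappers can be stored now and applied later, and several of them can be stacked. The `Bootstrap` class below is a hypothetical stand-in for Netty's `ServerBootstrap`, not the real API:

```java
import java.util.function.Function;

// Hypothetical sketch of how bootstrapMapper works: configuration logic is a
// plain Function that can be kept aside and applied to the bootstrap later.
class MapperDemo {

    // stand-in for Netty's ServerBootstrap; records what was configured
    static class Bootstrap {
        final StringBuilder log = new StringBuilder();

        Bootstrap option(String o) {
            log.append(o).append(';');
            return this;
        }
    }

    static String configure() {
        // two deferred configurers, kept as functions rather than run eagerly
        Function<Bootstrap, Bootstrap> httpConfig = b -> b.option("httpCodec");
        Function<Bootstrap, Bootstrap> observerConfig = b -> b.option("childObserver");

        // applied only when the bootstrap finally exists, innermost first
        return httpConfig.andThen(observerConfig).apply(new Bootstrap()).log.toString();
    }

    public static void main(String[] args) {
        System.out.println(configure());
    }
}
```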
Step 8. With the above analysis done, back to step 2: the `TcpServer` object returned by the first key point of step 2 is a `TcpServerBootstrap` instance, so we move on to the second key point, the `HttpServer#bind()` method. This method is abstract and implemented by subclasses; at first glance the call should land in `HttpServerHandle#bind()`, but `HttpServerHandle` does not implement it, so the implementation comes from its parent class, `HttpServerOperator#bind()`. The source is below; seeing the `source` reference, there is no doubt the call goes to `HttpServerBind#bind()`:

```java
@Override
protected Mono<? extends DisposableServer> bind(TcpServer b) {
    return source.bind(b);
}
```
Step 9. The implementation of `HttpServerBind#bind()` is shown below; again there are two key points:
- `delegate.bootstrap(this)` uses the `TcpServerBootstrap` instance passed in to create a new `TcpServerBootstrap` object. This part is exactly the same as step 7, except for the two constructor arguments of the new `TcpServerBootstrap`:
  - `this`: here too it is the instance the called method belongs to, which has now become the `TcpServerBootstrap` object; it is stored in the new `TcpServerBootstrap`'s internal `source` field.
  - `bootstrapMapper`: from the context, the argument at this call site is the `HttpServerBind` object itself.
- The step above returns a newly created `TcpServerBootstrap` instance, so `delegate.bootstrap(this).bind()` is really a call to `TcpServerBootstrap#bind()`.

Note that after these steps the created objects form the following one-way chain; the parentheses show the actual constructor arguments of each `TcpServerBootstrap`, which matter a great deal during the Netty server configuration below:

`TcpServerBind --> A: TcpServerBootstrap(TcpServerBind, HttpServerHandle) --> B: TcpServerBootstrap(A, HttpServerBind)`

```java
public Mono<? extends DisposableServer> bind(TcpServer delegate) {
    return delegate.bootstrap(this)
            .bind()
            .map(CLEANUP_GLOBAL_RESOURCE);
}
```
Step 10. `TcpServerBootstrap#bind()` is actually the implementation in its parent class, `TcpServer#bind()`, and it has two key steps:
- Call the abstract method `TcpServer#configure()` to complete the Netty server configuration.
- Call the abstract method `TcpServer#bind()` to bind the Netty server to its listening port and finish startup.

```java
public final Mono<? extends DisposableServer> bind() {
    ServerBootstrap b;
    try {
        b = configure();
    }
    catch (Throwable t) {
        Exceptions.throwIfFatal(t);
        return Mono.error(t);
    }
    return bind(b);
}
```
Step 11. First consider `TcpServer#configure()`. Since we initiated the binding through a `TcpServerBootstrap` object, the implementation invoked must be `TcpServerBootstrap#configure()`, shown below. The key flow clearly has two parts:
- `source.configure()` calls `configure()` on the object held in this `TcpServerBootstrap`'s `source` field.
- `bootstrapMapper.apply()` calls `apply()` on this `TcpServerBootstrap`'s `bootstrapMapper` function to configure the server.

Looking back at the one-way chain from step 9 makes the execution order here immediately clear; from the outermost call to the innermost, the dependency is:

`HttpServerBind#apply() --> HttpServerHandle#apply() --> TcpServerBind#configure()`

```java
@Override
public ServerBootstrap configure() {
    return Objects.requireNonNull(bootstrapMapper.apply(source.configure()), "bootstrapMapper");
}
```
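The recursive `configure()` call can be modeled in a few lines. The names below are hypothetical stand-ins; the point is that each wrapper first delegates to its `source` and only then applies its own mapper, so the innermost `TcpServerBind#configure()` runs first and `HttpServerBind`'s mapper runs last:

```java
import java.util.function.Function;

// Hypothetical model of the recursive configure() chain built in step 9:
// TcpServerBind --> A(handle mapper) --> B(bind mapper).
class ConfigureChain {

    interface Tcp {
        String configure();
    }

    // TcpServerBind role: produces the raw ServerBootstrap configuration
    static Tcp base = () -> "serverBootstrap";

    // TcpServerBootstrap role: configure() = mapper.apply(source.configure())
    static Tcp wrap(Tcp source, Function<String, String> mapper) {
        return () -> mapper.apply(source.configure());
    }

    static String resolve() {
        Tcp a = wrap(base, cfg -> cfg + "->handleApply");   // HttpServerHandle#apply
        Tcp b = wrap(a, cfg -> cfg + "->bindApply");        // HttpServerBind#apply
        return b.configure();                               // unwinds innermost first
    }

    public static void main(String[] args) {
        System.out.println(resolve());
    }
}
```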
Step 12. `TcpServerBind#configure()` is trivial: it just returns a clone of the Netty `ServerBootstrap` configuration object created in step 6.

```java
@Override
public ServerBootstrap configure() {
    return this.serverBootstrap.clone();
}
```
Step 13. The logic of `HttpServerHandle#apply()` is not complicated either:
- The utility method `BootstrapHandlers#childConnectionObserver()` looks up the connection observer, a `ConnectionObserver` object, configured on the server-side `ServerBootstrap`.
- `observer.then(this)` adds the current `HttpServerHandle` to the observer list inside a `CompositeConnectionObserver`, and the overloaded `BootstrapHandlers.childConnectionObserver()` stores this combined `ConnectionObserver` back into the `ServerBootstrap`. This step is crucial: through it, the upper layer's request handler is saved into the Netty server configuration as a listener.

```java
@Override
public ServerBootstrap apply(ServerBootstrap b) {
    ConnectionObserver observer = BootstrapHandlers.childConnectionObserver(b);
    BootstrapHandlers.childConnectionObserver(b, observer.then(this));
    return b;
}
```
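The `observer.then(this)` composition can be sketched with a toy model. The `Observer` interface below is a hypothetical stand-in for Reactor Netty's `ConnectionObserver`; it only illustrates how two observers are folded into one composite that notifies both:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of ConnectionObserver composition via then().
class ObserverDemo {

    interface Observer {
        void onStateChange(String state);

        // fold this observer and the next one into a single composite
        default Observer then(Observer next) {
            return state -> {
                this.onStateChange(state);
                next.onStateChange(state);
            };
        }
    }

    static List<String> fire() {
        List<String> seen = new ArrayList<>();
        Observer existing = s -> seen.add("existing:" + s);
        Observer handle = s -> seen.add("handle:" + s);   // HttpServerHandle role

        // both observers are notified of the same state change
        existing.then(handle).onStateChange("CONFIGURED");
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(fire());
    }
}
```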
Step 14. The body of `HttpServerBind#apply()` is fairly long, but the overall flow is clear:
- First, configure the Netty server's boss and worker (main/sub Reactor) event-loop groups through the `ServerBootstrap`; readers unfamiliar with this part should see *Netty Source Analysis (2): Server Startup Flow*.
- Then, based on the HTTP protocol version in the `ServerBootstrap` configuration, select an initializer class for the handlers of the Netty server's child (worker-side) channels, and use the utility method `BootstrapHandlers.updateConfiguration()` to update the child-channel handler configuration with it. Taking HTTP/1.1 as an example, the `Http1Initializer` is stored as the `PipelineConfiguration.consumer` property and finally added to the handler list, a `BootstrapPipelineHandler` object.

```java
@Override
public ServerBootstrap apply(ServerBootstrap b) {
    HttpServerConfiguration conf = HttpServerConfiguration.getAndClean(b);

    SslProvider ssl = SslProvider.findSslSupport(b);
    if (ssl != null && ssl.getDefaultConfigurationType() == null) {
        if ((conf.protocols & HttpServerConfiguration.h2) == HttpServerConfiguration.h2) {
            ssl = SslProvider.updateDefaultConfiguration(ssl, SslProvider.DefaultConfigurationType.H2);
            SslProvider.setBootstrap(b, ssl);
        }
        else {
            ssl = SslProvider.updateDefaultConfiguration(ssl, SslProvider.DefaultConfigurationType.TCP);
            SslProvider.setBootstrap(b, ssl);
        }
    }

    if (b.config().group() == null) {
        LoopResources loops = HttpResources.get();

        boolean useNative = LoopResources.DEFAULT_NATIVE
                || (ssl != null && !(ssl.getSslContext() instanceof JdkSslContext));

        EventLoopGroup selector = loops.onServerSelect(useNative);
        EventLoopGroup elg = loops.onServer(useNative);

        b.group(selector, elg)
         .channel(loops.onServerChannel(elg));
    }

    // remove any OPS since we will initialize below
    BootstrapHandlers.channelOperationFactory(b);

    if (ssl != null) {
        if ((conf.protocols & HttpServerConfiguration.h2c) == HttpServerConfiguration.h2c) {
            throw new IllegalArgumentException("Configured H2 Clear-Text protocol with TLS. " +
                    "Use the non clear-text h2 protocol via HttpServer#protocol or disable TLS " +
                    "via HttpServer#tcpConfiguration(tcp -> tcp.noSSL())");
        }
        if ((conf.protocols & HttpServerConfiguration.h11orH2) == HttpServerConfiguration.h11orH2) {
            return BootstrapHandlers.updateConfiguration(b,
                    NettyPipeline.HttpInitializer,
                    new Http1OrH2Initializer(conf.decoder.maxInitialLineLength,
                            conf.decoder.maxHeaderSize,
                            conf.decoder.maxChunkSize,
                            conf.decoder.validateHeaders,
                            conf.decoder.initialBufferSize,
                            conf.minCompressionSize,
                            compressPredicate(conf.compressPredicate, conf.minCompressionSize),
                            conf.forwarded,
                            conf.cookieEncoder,
                            conf.cookieDecoder));
        }
        if ((conf.protocols & HttpServerConfiguration.h11) == HttpServerConfiguration.h11) {
            return BootstrapHandlers.updateConfiguration(b,
                    NettyPipeline.HttpInitializer,
                    new Http1Initializer(conf.decoder.maxInitialLineLength,
                            conf.decoder.maxHeaderSize,
                            conf.decoder.maxChunkSize,
                            conf.decoder.validateHeaders,
                            conf.decoder.initialBufferSize,
                            conf.minCompressionSize,
                            compressPredicate(conf.compressPredicate, conf.minCompressionSize),
                            conf.forwarded,
                            conf.cookieEncoder,
                            conf.cookieDecoder));
        }
        if ((conf.protocols & HttpServerConfiguration.h2) == HttpServerConfiguration.h2) {
            return BootstrapHandlers.updateConfiguration(b,
                    NettyPipeline.HttpInitializer,
                    new H2Initializer(conf.decoder.validateHeaders,
                            conf.minCompressionSize,
                            compressPredicate(conf.compressPredicate, conf.minCompressionSize),
                            conf.forwarded,
                            conf.cookieEncoder,
                            conf.cookieDecoder));
        }
    }
    else {
        if ((conf.protocols & HttpServerConfiguration.h2) == HttpServerConfiguration.h2) {
            throw new IllegalArgumentException("Configured H2 protocol without TLS. " +
                    "Use a clear-text h2 protocol via HttpServer#protocol or configure TLS " +
                    "via HttpServer#secure");
        }
        if ((conf.protocols & HttpServerConfiguration.h11orH2c) == HttpServerConfiguration.h11orH2c) {
            return BootstrapHandlers.updateConfiguration(b,
                    NettyPipeline.HttpInitializer,
                    new Http1OrH2CleartextInitializer(conf.decoder.maxInitialLineLength,
                            conf.decoder.maxHeaderSize,
                            conf.decoder.maxChunkSize,
                            conf.decoder.validateHeaders,
                            conf.decoder.initialBufferSize,
                            conf.minCompressionSize,
                            compressPredicate(conf.compressPredicate, conf.minCompressionSize),
                            conf.forwarded,
                            conf.cookieEncoder,
                            conf.cookieDecoder));
        }
        if ((conf.protocols & HttpServerConfiguration.h11) == HttpServerConfiguration.h11) {
            return BootstrapHandlers.updateConfiguration(b,
                    NettyPipeline.HttpInitializer,
                    new Http1Initializer(conf.decoder.maxInitialLineLength,
                            conf.decoder.maxHeaderSize,
                            conf.decoder.maxChunkSize,
                            conf.decoder.validateHeaders,
                            conf.decoder.initialBufferSize,
                            conf.minCompressionSize,
                            compressPredicate(conf.compressPredicate, conf.minCompressionSize),
                            conf.forwarded,
                            conf.cookieEncoder,
                            conf.cookieDecoder));
        }
        if ((conf.protocols & HttpServerConfiguration.h2c) == HttpServerConfiguration.h2c) {
            return BootstrapHandlers.updateConfiguration(b,
                    NettyPipeline.HttpInitializer,
                    new H2CleartextInitializer(conf.decoder.validateHeaders,
                            conf.minCompressionSize,
                            compressPredicate(conf.compressPredicate, conf.minCompressionSize),
                            conf.forwarded,
                            conf.cookieEncoder,
                            conf.cookieDecoder));
        }
    }
    throw new IllegalArgumentException("An unknown HttpServer#protocol configuration has been provided: " +
            String.format("0x%x", conf.protocols));
}
```
Step 15. With the above processing the Netty server configuration is complete, and we return to the second key point of step 10, the call to `TcpServer#bind()`. Since we initiated the binding through a `TcpServerBootstrap` object, the call should be `TcpServerBootstrap#bind()`; however, `TcpServerBootstrap` does not override `bind()` and simply inherits `TcpServerOperator#bind()`. Its implementation is shown below: the binding is completed through `source.bind(b)`, and from the chain in step 9 we know the call finally lands in `TcpServerBind#bind()`.

```java
@Override
public Mono<? extends DisposableServer> bind(ServerBootstrap b) {
    return source.bind(b);
}
```
Step 16. The implementation of `TcpServerBind#bind()` follows; here the Netty server finally starts. Its key flow:
- The utility method `BootstrapHandlers.childConnectionObserver()` retrieves the connection observer configured in step 13.
- `new ChildObserver(childObs)` wraps that observer/listener into a `ChildObserver` object.
- The utility method `BootstrapHandlers.finalizeHandler()` performs the final assembly of the handler configuration for the Netty server's child (worker-side) channels.
- `bootstrap.bind()` starts the Netty server.

```java
@Override
public Mono<? extends DisposableServer> bind(ServerBootstrap b) {
    SslProvider ssl = SslProvider.findSslSupport(b);
    if (ssl != null && ssl.getDefaultConfigurationType() == null) {
        ssl = SslProvider.updateDefaultConfiguration(ssl, SslProvider.DefaultConfigurationType.TCP);
        SslProvider.setBootstrap(b, ssl);
    }

    if (b.config().group() == null) {
        TcpServerRunOn.configure(b, LoopResources.DEFAULT_NATIVE, TcpResources.get());
    }

    return Mono.create(sink -> {
        ServerBootstrap bootstrap = b.clone();

        ConnectionObserver obs = BootstrapHandlers.connectionObserver(bootstrap);
        ConnectionObserver childObs = BootstrapHandlers.childConnectionObserver(bootstrap);
        ChannelOperations.OnSetup ops = BootstrapHandlers.channelOperationFactory(bootstrap);

        convertLazyLocalAddress(bootstrap);

        BootstrapHandlers.finalizeHandler(bootstrap, ops, new ChildObserver(childObs));

        ChannelFuture f = bootstrap.bind();

        DisposableBind disposableServer = new DisposableBind(sink, f, obs, bootstrap);
        f.addListener(disposableServer);
        sink.onCancel(disposableServer);
    });
}
```
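The `Mono.create(sink -> ...)` pattern above adapts Netty's callback-style `ChannelFuture` into a reactive completion signal, the way `DisposableBind` is registered as a listener on the future. A rough analogy, using `CompletableFuture` as a stand-in for the sink (hypothetical names, not the Reactor API):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: bridging a callback-based bind result into a promise,
// analogous to f.addListener(disposableServer) completing the Mono sink.
class SinkBridge {

    // stand-in for Netty's ChannelFuture: accepts a completion callback
    interface BindFuture {
        void addListener(Runnable onComplete);
    }

    static CompletableFuture<String> bridge(BindFuture f) {
        CompletableFuture<String> sink = new CompletableFuture<>();
        // the listener fires when the bind finishes and completes the sink
        f.addListener(() -> sink.complete("server started"));
        return sink;
    }

    public static void main(String[] args) {
        // a fake future that invokes its listener immediately
        CompletableFuture<String> result = bridge(Runnable::run);
        System.out.println(result.join());
    }
}
```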
Step 17. The implementation of `BootstrapHandlers#finalizeHandler()` follows; there is no complex logic, and the key points are:
- Use the listener passed in, together with the `BootstrapPipelineHandler` object from step 14, to create a `BootstrapInitializerHandler` instance.
- `b.childHandler()` installs that new object as the handler for the child (worker-side) channels.

```java
public static void finalizeHandler(ServerBootstrap b,
        ChannelOperations.OnSetup opsFactory,
        ConnectionObserver childListener) {
    Objects.requireNonNull(b, "bootstrap");
    Objects.requireNonNull(opsFactory, "ops");
    Objects.requireNonNull(childListener, "childListener");

    BootstrapPipelineHandler pipeline = null;
    ChannelHandler handler = b.config().childHandler();
    if (handler instanceof BootstrapPipelineHandler) {
        pipeline = (BootstrapPipelineHandler) handler;
    }

    b.childHandler(new BootstrapInitializerHandler(pipeline, opsFactory, childListener));
}
```
Step 18. The `BootstrapInitializerHandler` class extends Netty's `ChannelInitializer` and overrides its `initChannel()` method, which will be triggered during the Netty server's startup process.

```java
static final class BootstrapInitializerHandler extends ChannelInitializer<Channel> {

    final BootstrapPipelineHandler pipeline;
    final ConnectionObserver listener;
    final ChannelOperations.OnSetup opsFactory;

    BootstrapInitializerHandler(@Nullable BootstrapPipelineHandler pipeline,
            ChannelOperations.OnSetup opsFactory,
            ConnectionObserver listener) {
        this.pipeline = pipeline;
        this.opsFactory = opsFactory;
        this.listener = listener;
    }

    @Override
    protected void initChannel(Channel ch) {
        if (pipeline != null) {
            for (PipelineConfiguration pipelineConfiguration : pipeline) {
                pipelineConfiguration.consumer.accept(listener, ch);
            }
        }
        ChannelOperations.addReactiveBridge(ch, opsFactory, listener);

        if (log.isDebugEnabled()) {
            log.debug(format(ch, "Initialized pipeline {}"), ch.pipeline().toString());
        }
    }
}
```
2.2 How Netty Serves Requests
The previous section analyzed how Spring WebFlux configures and starts its embedded Netty server; this section analyzes how the Netty server serves the upper layers.
Step 1. Picking up from the previous section, `BootstrapInitializerHandler#initChannel()` is triggered during Netty channel initialization; readers who want a thorough understanding of that process should still read *Netty Source Analysis (2): Server Startup Flow*. We do not expand on it here and only analyze the method itself. `BootstrapInitializerHandler#initChannel()` does exactly two things:
- Iterate over the `BootstrapPipelineHandler` created in step 14 of the previous section, take out each `PipelineConfiguration` element it holds, and call the `accept()` method of the `PipelineConfiguration.consumer` object. From the earlier analysis, this is where `Http1Initializer#accept()` gets called.
- `ChannelOperations.addReactiveBridge()` adds one more handler to Netty's `ChannelPipeline` handler chain.

```java
@Override
protected void initChannel(Channel ch) {
    if (pipeline != null) {
        for (PipelineConfiguration pipelineConfiguration : pipeline) {
            pipelineConfiguration.consumer.accept(listener, ch);
        }
    }
    ChannelOperations.addReactiveBridge(ch, opsFactory, listener);

    if (log.isDebugEnabled()) {
        log.debug(format(ch, "Initialized pipeline {}"), ch.pipeline().toString());
    }
}
```
Step 2. The implementation of `Http1Initializer#accept()` follows; its main work is adding handlers to Netty's `ChannelPipeline` handler chain. The one to focus on here is the last handler added, `HttpTrafficHandler`: note that the upper layer's listener is passed in when this handler is constructed. It will be analyzed in detail below.

```java
public void accept(ConnectionObserver listener, Channel channel) {
    ChannelPipeline p = channel.pipeline();
    p.addLast(NettyPipeline.HttpCodec, new HttpServerCodec(line, header, chunk, validate, buffer));

    if (ACCESS_LOG) {
        p.addLast(NettyPipeline.AccessLogHandler, new AccessLogHandler());
    }

    boolean alwaysCompress = compressPredicate == null && minCompressionSize == 0;
    if (alwaysCompress) {
        p.addLast(NettyPipeline.CompressionHandler, new SimpleCompressionHandler());
    }

    p.addLast(NettyPipeline.HttpTrafficHandler,
            new HttpTrafficHandler(listener, forwarded, compressPredicate, cookieEncoder, cookieDecoder));
}
```
Step 3. Back to the second key point of step 1: the implementation of `ChannelOperations.addReactiveBridge()` is also simple. It just creates a `ChannelOperationsHandler` object and appends it to Netty's `ChannelPipeline` handler chain. Note that the `ChannelPipeline` is a doubly linked list, i.e. ordered, so the `HttpTrafficHandler` sits before the `ChannelOperationsHandler` in the chain.

```java
public static void addReactiveBridge(Channel ch, OnSetup opsFactory, ConnectionObserver listener) {
    ch.pipeline()
      .addLast(NettyPipeline.ReactiveBridge, new ChannelOperationsHandler(opsFactory, listener));
}
```
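The ordering just described matters because inbound events flow through the pipeline in insertion order. A hypothetical minimal pipeline (toy classes, not Netty's API) shows how a handler passes the message on to the next one, the way `ctx.fireChannelRead()` moves data from `HttpTrafficHandler` to `ChannelOperationsHandler`:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal pipeline: handlers kept in insertion order, each one
// deciding whether to forward the message via fireChannelRead().
class PipelineDemo {

    interface Handler {
        void channelRead(Context ctx, String msg);
    }

    static class Context {
        final List<Handler> handlers = new ArrayList<>();
        int index = -1;

        void fireChannelRead(String msg) {
            if (++index < handlers.size()) {
                handlers.get(index).channelRead(this, msg);
            }
        }
    }

    static List<String> run(String msg) {
        List<String> trace = new ArrayList<>();
        Context ctx = new Context();
        // HttpTrafficHandler role: handles the message, then forwards it
        ctx.handlers.add((c, m) -> {
            trace.add("traffic:" + m);
            c.fireChannelRead(m);
        });
        // ChannelOperationsHandler role: added after, so it runs second
        ctx.handlers.add((c, m) -> trace.add("operations:" + m));

        ctx.fireChannelRead(msg);
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run("GET /"));
    }
}
```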
Step 4. When the Netty server receives data, the handlers along the pipeline are triggered in turn to process it; concretely, each handler's `channelRead()` method is invoked. We first look at `HttpTrafficHandler#channelRead()`. The method body is long, but the truly key code is shown below:
- Create an `HttpServerOperations` object and call its `HttpServerOperations#bind()` method, which stores the object on the `Channel` as a key-value pair under the key `ReactorNetty.CONNECTION`. Note that the upper layer's listener mentioned in step 2 is passed in when the `HttpServerOperations` object is created; this is the crucial step by which the whole Netty stack serves the upper layer.
- `ctx.fireChannelRead(msg)` wakes the next handler in the chain to process the data that was read, which triggers `ChannelOperationsHandler#channelRead()`.

```java
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    // read message and track if it was keepAlive
    ......

    new HttpServerOperations(Connection.from(ctx.channel()),
            listener,
            compress,
            request,
            ConnectionInfo.from(ctx.channel(), readForwardHeaders, request),
            cookieEncoder,
            cookieDecoder)
            .chunkedTransfer(true)
            .bind();

    ctx.fireChannelRead(msg);
    return;

    ......
```
Step 5. The implementation of `ChannelOperationsHandler#channelRead()` is fairly concise. The key operation is `ChannelOperations.get()`, which fetches the object stored on the `Channel` under the key `ReactorNetty.CONNECTION`, i.e. the `HttpServerOperations` object from step 4, and then calls its `HttpServerOperations#onInboundNext()` method.

```java
@Override
final public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg == null || msg == Unpooled.EMPTY_BUFFER || msg instanceof EmptyByteBuf) {
        return;
    }
    try {
        ChannelOperations<?, ?> ops = ChannelOperations.get(ctx.channel());
        if (ops != null) {
            ops.onInboundNext(ctx, msg);
        }
        else {
            if (log.isDebugEnabled()) {
                String loggingMsg = msg.toString();
                if (msg instanceof DecoderResultProvider) {
                    DecoderResult decoderResult = ((DecoderResultProvider) msg).decoderResult();
                    if (decoderResult.isFailure()) {
                        log.debug(format(ctx.channel(), "Decoding failed: " + msg + " : "), decoderResult.cause());
                    }
                }
                if (msg instanceof ByteBufHolder && ((ByteBufHolder) msg).content() != Unpooled.EMPTY_BUFFER) {
                    loggingMsg = ((ByteBufHolder) msg).content().toString(Charset.defaultCharset());
                }
                log.debug(format(ctx.channel(), "No ChannelOperation attached. Dropping: {}"), loggingMsg);
            }
            ReferenceCountUtil.release(msg);
        }
    }
    catch (Throwable err) {
        Exceptions.throwIfFatal(err);
        exceptionCaught(ctx, err);
        ReferenceCountUtil.safeRelease(msg);
    }
}
```
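The `bind()`/`get()` pair used by the two handlers amounts to stashing a per-connection object on the channel under a well-known key and fetching it back later. A hypothetical sketch (toy classes, not Netty's attribute API) of the hand-off between `HttpTrafficHandler` and `ChannelOperationsHandler`:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of ChannelOperations bind()/get(): the per-connection
// operations object is stored on the channel under a well-known key.
class AttributeDemo {

    // plays the role of the ReactorNetty.CONNECTION attribute key
    static final String CONNECTION_KEY = "reactor.netty.connection";

    // stand-in for a Netty Channel with its attribute map
    static class Channel {
        final Map<String, Object> attrs = new HashMap<>();
    }

    static void bind(Channel ch, Object ops) {
        ch.attrs.put(CONNECTION_KEY, ops);
    }

    static Object get(Channel ch) {
        return ch.attrs.get(CONNECTION_KEY);
    }

    public static void main(String[] args) {
        Channel ch = new Channel();
        bind(ch, "HttpServerOperations");   // done by HttpTrafficHandler#channelRead
        System.out.println(get(ch));        // done by ChannelOperationsHandler#channelRead
    }
}
```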
Step 6. `HttpServerOperations#onInboundNext()` is the last and most critical step by which the Netty server serves the upper layers; with it, the Netty serving flow is complete. As the source shows, when the message being processed is an `HttpRequest`, the listener's `ConnectionObserver#onStateChange()` callback fires, which ultimately calls back into the `HttpServerHandle#onStateChange()` method from step 13 of the previous section, handing the request up to the upper layer for processing.

```java
@Override
protected void onInboundNext(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof HttpRequest) {
        listener().onStateChange(this, ConnectionObserver.State.CONFIGURED);
        if (msg instanceof FullHttpRequest) {
            super.onInboundNext(ctx, msg);
        }
        return;
    }
    if (msg instanceof HttpContent) {
        if (msg != LastHttpContent.EMPTY_LAST_CONTENT) {
            super.onInboundNext(ctx, msg);
        }
        if (msg instanceof LastHttpContent) {
            onInboundComplete();
        }
    }
    else {
        super.onInboundNext(ctx, msg);
    }
}
```