Solution of Spring Cloud Gateway Memory Overflow

  • 2021-10-27 07:44:35
  • OfStack

Troubleshooting a memory overflow in Spring Cloud Gateway

Environment configuration:

org.springframework.boot: 2.1.4.RELEASE
org.springframework.cloud: Greenwich.SR1

Accident record:

Because the gateway was losing the RequestBody, I adopted a common workaround found online and solved the problem as follows:


@Bean
public RouteLocator tpauditRoutes(RouteLocatorBuilder builder) {
    return builder.routes().route("gateway-post", r -> r.order(1)
       .method(HttpMethod.POST)
       .and()
       .readBody(String.class, requestBody -> {return true;}) // the key point is here
       .and()
       .path("/gateway/**")
       .filters(f -> {f.stripPrefix(1);return f;})
       .uri("lb://APP-API")).build();
}

In the test environment, the Spring Cloud Gateway work was complete, so we started load testing.

A standard gradient load test was used, ramping up to a peak of 400 concurrent users. After two rounds of roughly 10 minutes each, no anomalies appeared.

At lunch time, the test was left running for an hour.

When we came back, the system was reporting the following exception:


2019-08-12 15:06:07,296 1092208 [reactor-http-server-epoll-12] WARN  io.netty.channel.AbstractChannelHandlerContext.warn:146 - An exception '{}' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 503316487, max: 504889344)
 at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
 at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
 at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
 at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
 at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
 at io.netty.buffer.PoolArena.allocate(PoolArena.java:214)
 at io.netty.buffer.PoolArena.allocate(PoolArena.java:146)
 at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324)
 at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
 at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
 at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
 at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
 at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:72)
 at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:793)
 at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:382)
 at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
 at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
 at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:315)
 at io.

The immediate reaction was to start monitoring the JVM, shrink the JVM heap and raise the concurrency so the problem would reproduce faster, then restart the project and re-run the test.

The project startup parameters are as follows:


java -jar -Xmx1024M /opt/deploy/gateway-appapi/cloud-employ-gateway-0.0.5-SNAPSHOT.jar
# modified to:
java -jar -Xmx512M /opt/deploy/gateway-appapi/cloud-employ-gateway-0.0.5-SNAPSHOT.jar

The heap was reduced to speed up reproduction. About 3 minutes later the problem recurred, and at the same time the JVM performed a Full GC:


      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT   
 275456.0 100103.0  484864.0   50280.2  67672.0 64001.3 9088.0 8463.2    501   11.945   3      0.262 
 275968.0 25072.3   484864.0   47329.3  67672.0 63959.4 9088.0 8448.8    502   11.970   4      0.429 

So a Full GC did occur when the problem appeared, but OU (old-generation usage) was nowhere near the level that should trigger one.

Combined with the words "direct memory" in the log, this pointed to the JVM's off-heap memory.

-XX:MaxDirectMemorySize sets the JVM's direct (off-heap) memory limit; when the memory allocated through DirectByteBuffer reaches this size, a Full GC is triggered.

This value has an upper bound: the default is 64M, and the maximum is sun.misc.VM.maxDirectMemory().
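You can also watch this pool from inside the process: the JDK exposes the direct-buffer pool through `BufferPoolMXBean`, which is a convenient way to observe off-heap usage growing during a load test. A minimal sketch (the class name is mine, not from the article):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Illustrative probe: read the JVM's "direct" buffer pool usage at runtime.
public class DirectMemoryProbe {
    public static long directMemoryUsed() {
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed(); // bytes of direct memory in use
            }
        }
        return -1L; // pool not found (should not happen on a standard JVM)
    }

    public static void main(String[] args) {
        System.out.println("direct memory used: " + directMemoryUsed() + " bytes");
    }
}
```

Logging this value periodically during the test would have shown the direct pool climbing toward the limit long before the `OutOfDirectMemoryError`.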

Putting all the evidence together, this points to an overflow in off-heap memory usage.

The error comes from the Netty framework. Add the following JVM options to enable Netty's leak-detection logging:


-Dio.netty.leakDetection.targetRecords=40 # upper limit on recorded access records
-Dio.netty.leakDetection.level=advanced   # leak-detection level

The project started without any problems. Once the load test began, the service reported the following:


2019-08-13 14:59:01,656 18047 [reactor-http-nio-7] ERROR io.netty.util.ResourceLeakDetector.reportTracedLeak:317 - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
#1:
	org.springframework.core.io.buffer.NettyDataBuffer.release(NettyDataBuffer.java:301)
	org.springframework.core.io.buffer.DataBufferUtils.release(DataBufferUtils.java:420)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:208)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:59)
	org.springframework.core.codec.AbstractDataBufferDecoder.lambda$decodeToMono$1(AbstractDataBufferDecoder.java:68)
	reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:101)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onSubscribe(MonoCollectList.java:90)
	reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70)
	reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
	reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
	reactor.core.publisher.MonoFilterFuseable.subscribe(MonoFilterFuseable.java:44)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:56)
	reactor.core.publisher.MonoSubscriberContext.subscribe(MonoSubscriberContext.java:47)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
	reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
	reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:372)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:196)
	reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:337)
	reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:333)
	reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:453)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	java.lang.Thread.run(Unknown Source)
#2:
	io.netty.buffer.AdvancedLeakAwareByteBuf.nioBuffer(AdvancedLeakAwareByteBuf.java:712)
	org.springframework.core.io.buffer.NettyDataBuffer.asByteBuffer(NettyDataBuffer.java:266)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:207)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:59)
	org.springframework.core.codec.AbstractDataBufferDecoder.lambda$decodeToMono$1(AbstractDataBufferDecoder.java:68)
	reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:101)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onSubscribe(MonoCollectList.java:90)
	reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70)
	reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
	reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
	reactor.core.publisher.MonoFilterFuseable.subscribe(MonoFilterFuseable.java:44)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:56)
	reactor.core.publisher.MonoSubscriberContext.subscribe(MonoSubscriberContext.java:47)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
	reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
	reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:372)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:196)
	reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:337)
	reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:333)
	reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:453)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	java.lang.Thread.run(Unknown Source)
#3:
	io.netty.buffer.AdvancedLeakAwareByteBuf.slice(AdvancedLeakAwareByteBuf.java:82)
	org.springframework.core.io.buffer.NettyDataBuffer.slice(NettyDataBuffer.java:260)
	org.springframework.core.io.buffer.NettyDataBuffer.slice(NettyDataBuffer.java:42)
	org.springframework.cloud.gateway.handler.predicate.ReadBodyPredicateFactory.lambda$null$0(ReadBodyPredicateFactory.java:102)
	reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:46)
	reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
	reactor.core.publisher.MonoFilterFuseable.subscribe(MonoFilterFuseable.java:44)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:56)
	reactor.core.publisher.MonoSubscriberContext.subscribe(MonoSubscriberContext.java:47)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
	reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
	reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:372)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:196)
	reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:337)
	reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:333)
	reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:453)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	java.lang.Thread.run(Unknown Source)

In record #3 there is a familiar class: ReadBodyPredicateFactory. Remember the readBody configuration we used at the beginning?

This is the class that writes cachedRequestBodyObject.

Let's trace the readBody source code:


 /**
  * This predicate is BETA and may be subject to change in a future release. A
  * predicate that checks the contents of the request body
  * @param inClass the class to parse the body to
  * @param predicate a predicate to check the contents of the body
  * @param <T> the type the body is parsed to
  * @return a {@link BooleanSpec} to be used to add logical operators
  */
 public <T> BooleanSpec readBody(Class<T> inClass, Predicate<T> predicate) {
  return asyncPredicate(getBean(ReadBodyPredicateFactory.class)
    .applyAsync(c -> c.setPredicate(inClass, predicate)));
 }

ReadBodyPredicateFactory.applyAsync() performs the asynchronous call, and the following frame in the error log


org.springframework.cloud.gateway.handler.predicate.ReadBodyPredicateFactory.lambda$null$0(ReadBodyPredicateFactory.java:102)

points into that method. Line 102 of the source reads:


Flux<DataBuffer> cachedFlux = Flux.defer(() -> 
 Flux.just(dataBuffer.slice(0, dataBuffer.readableByteCount()))
);

Here Spring Cloud Gateway cuts a new DataBuffer out of the original via dataBuffer.slice(), but according to Netty's leak detector, the original dataBuffer is never released.
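The contract being violated can be sketched with a tiny, hypothetical reference-counting model (a stand-in for Netty's ByteBuf, not Netty code): a slice shares its parent's counter, so every retain() needs a matching release() before the memory goes back to the pool.

```java
// Hypothetical model of Netty-style reference counting (NOT Netty's ByteBuf,
// just an illustration of its contract): a slice shares its parent's counter,
// so each retain() must be paired with a release(), or the underlying memory
// is never freed -- exactly the leak the detector reported.
class RefCountedBuffer {
    private int refCnt = 1;                 // buffer starts owned once

    RefCountedBuffer retain() { refCnt++; return this; }

    /** Decrements the count; returns true when the buffer is actually freed. */
    boolean release() { return --refCnt == 0; }

    /** A slice is a view: it shares this buffer's reference count. */
    RefCountedBuffer slice() { return this; }

    int refCnt() { return refCnt; }
}

public class LeakDemo {
    public static void main(String[] args) {
        RefCountedBuffer body = new RefCountedBuffer();   // refCnt = 1
        body.retain();                                    // gateway retains: 2
        RefCountedBuffer cached = body.slice();           // view, still 2
        cached.release();                                 // decoder releases: 1
        // Without another release(), refCnt stays at 1 forever -> leak.
        System.out.println("leaked refCnt = " + body.refCnt());
        body.release();                                   // the missing release
        System.out.println("after fix refCnt = " + body.refCnt());
    }
}
```

This mirrors the gateway code path: DataBufferUtils.retain() bumps the count to 2, the StringDecoder releases once, and nothing ever performs the second release.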

The key error line, easily overlooked among all the log output, is:

ERROR io.netty.util.ResourceLeakDetector.reportTracedLeak:317 - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.

Having found the problem, the next step is to fix it. First attempt: modify the source code.


@Override
@SuppressWarnings("unchecked")
public AsyncPredicate<ServerWebExchange> applyAsync(Config config) {
    return exchange -> {
        Class inClass = config.getInClass();

        Object cachedBody = exchange.getAttribute(CACHE_REQUEST_BODY_OBJECT_KEY);
        Mono<?> modifiedBody;
        // We can only read the body from the request once, once that
        // happens if we
        // try to read the body again an exception will be thrown. The below
        // if/else
        // caches the body object as a request attribute in the
        // ServerWebExchange
        // so if this filter is run more than once (due to more than one
        // route
        // using it) we do not try to read the request body multiple times
        if (cachedBody != null) {
            try {
                boolean test = config.predicate.test(cachedBody);
                exchange.getAttributes().put(TEST_ATTRIBUTE, test);
                return Mono.just(test);
            } catch (ClassCastException e) {
                if (LOGGER.isDebugEnabled()) {
                    LOGGER.debug("Predicate test failed because class in predicate "
                            + "does not match the cached body object", e);
                }
            }
            return Mono.just(false);
        } else {
            // Join all the DataBuffers so we have a single DataBuffer for
            // the body
            return DataBufferUtils.join(exchange.getRequest().getBody()).flatMap(dataBuffer -> {
                // Update the retain counts so we can read the body twice,
                // once to parse into an object
                // that we can test the predicate against and a second time
                // when the HTTP client sends
                // the request downstream
                // Note: if we end up reading the body twice we will run
                // into
                // a problem, but as of right
                // now there is no good use case for doing this
                DataBufferUtils.retain(dataBuffer);
                // Make a slice for each read so each read has its own
                // read/write indexes
                Flux<DataBuffer> cachedFlux = Flux
                        .defer(() -> Flux.just(dataBuffer.slice(0, dataBuffer.readableByteCount())));

                ServerHttpRequest mutatedRequest = new ServerHttpRequestDecorator(exchange.getRequest()) {
                    @Override
                    public Flux<DataBuffer> getBody() {
                        return cachedFlux;
                    }
                };
                // add the following line
                DataBufferUtils.release(dataBuffer);

                return ServerRequest.create(exchange.mutate().request(mutatedRequest).build(), messageReaders)
                        .bodyToMono(inClass).doOnNext(objectValue -> {
                            exchange.getAttributes().put(CACHE_REQUEST_BODY_OBJECT_KEY, objectValue);
                            exchange.getAttributes().put(CACHED_REQUEST_BODY_KEY, cachedFlux);
                        }).map(objectValue -> config.predicate.test(objectValue));
            });

        }
    };
}

The Spring Cloud Gateway version configured here is 2.1.1. After applying the change above and restarting the test, the problem no longer reproduced and the gateway ran normally.

For the same problem you can also upgrade Spring Cloud Gateway: in the official 2.1.2 release this code has been refactored, and after the upgrade the tests pass completely normally.
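With Maven, the upgrade amounts to bumping the gateway dependency; the coordinates below are the standard Spring Cloud starter (verify the version against your release train, e.g. Greenwich.SR2 pulls in gateway 2.1.2.RELEASE):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-gateway</artifactId>
    <!-- 2.1.2.RELEASE contains the refactored ReadBodyPredicateFactory -->
    <version>2.1.2.RELEASE</version>
</dependency>
```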
