Writing a high-performance C++ server: a tutorial

  • 2020-09-28 09:02:56
  • OfStack

I'll show you how to write an Echo server using modern C++, the equivalent of "Hello World" for distributed systems development. The server returns each received message unchanged. We also need a client that can send messages to our server; its source code appears later in this article.

Wangle is a client-server (C/S) application framework for building modern, event-driven, asynchronous C++ services. The most basic abstraction in Wangle is the Pipeline. Once you understand this abstraction, it becomes easy to write complex modern C++ services. Another important concept is the Service, which can be thought of as a higher-level version of a Pipeline, but it is beyond the scope of this article.

Pipeline

The pipeline is the most important and powerful abstraction in Wangle, giving users the freedom to customize how requests and responses are handled. A pipeline is a nested series of request/response handlers. The closest real-world analogy I could find is a factory production line: the line works sequentially, and each worker takes an object, makes exactly one modification, and passes it on to the next worker until the product is finished. The metaphor is imperfect, though, because products on an assembly line flow in only one direction, whereas a pipeline can also move data in the opposite direction, as if it could disassemble the finished product back into raw materials.

A Wangle handler can control the flow of data in both the upstream and downstream directions. By chaining handlers together, you get a flexible way to assemble raw data into the types you want, or to break down the data you already have.

Our server's pipeline will contain roughly the following handlers:

1. Handler 1 ("upstream" and "downstream" below refer to a handler's input and output directions, depending on its position in the pipeline). Upstream: reads the binary stream received from the socket into a zero-copy byte buffer and passes it to handler 2. Zero copy skips the context switch between application context and kernel context and avoids the CPU redundantly copying the buffer (see Wikipedia for details). Downstream: receives a zero-copy byte buffer and writes its contents to the socket.

2. Handler 2. Upstream: receives the byte buffer from handler 1, decodes it into a string object, and passes it to handler 3. Downstream: receives a string object from handler 3, encodes it into a zero-copy byte buffer, and sends it to handler 1.

3. Handler 3. Upstream: receives the string object from handler 2, then sends it back down the pipeline to be written to the client. Downstream: receives the string object and passes it to handler 2.

One thing to note is that each handler should do one thing, and one thing only. If you have a handler doing multiple tasks, such as decoding strings directly from raw binary, you need to learn to break it up. This is important for the maintainability and extensibility of your code.

Also, yes, handlers are not thread-safe, so don't casually use any data in them that isn't protected by a mutex or an atomic. If you really need a thread-safe environment, Folly provides lock-free data structures; since Wangle already depends on Folly, you can easily pull them into your project.

If you don't quite understand all the steps, don't worry; it will become clearer when you see the concrete implementation below.

Echo Server

Below is the server implementation; I assume you already have Wangle installed. Note that Wangle is currently unavailable on Mac OS, so I recommend installing Ubuntu in a virtual machine to use it.

This is the echo handler: it receives a string, prints it to stdout, and sends it back down the pipeline. Note that the delimiter in the write call must not be omitted, because the pipeline frames and decodes the byte stream by line.


// the main logic of our echo server; receives a string and writes it straight
// back
class EchoHandler : public HandlerAdapter<std::string> {
 public:
  virtual void read(Context* ctx, std::string msg) override {
    std::cout << "handling " << msg << std::endl;
    write(ctx, msg + "\r\n");
  }
};

EchoHandler is actually the last handler in our pipeline. Now we need to create a PipelineFactory to control all requests and responses.


// where we define the chain of handlers for each message received
class EchoPipelineFactory : public PipelineFactory<EchoPipeline> {
 public:
  EchoPipeline::Ptr newPipeline(std::shared_ptr<AsyncTransportWrapper> sock) {
    auto pipeline = EchoPipeline::create();
    pipeline->addBack(AsyncSocketHandler(sock));
    pipeline->addBack(LineBasedFrameDecoder(8192));
    pipeline->addBack(StringCodec());
    pipeline->addBack(EchoHandler());
    pipeline->finalize();
    return pipeline;
  }
};

The insertion order of the handlers in the pipeline deserves careful attention, because they are invoked sequentially. Here we have four handlers:

1. AsyncSocketHandler. Upstream: reads the binary stream from the socket into a zero-copy byte buffer. Downstream: writes the byte buffer's contents to the underlying socket.

2. LineBasedFrameDecoder. Upstream: receives the byte buffer and splits the data into lines. Downstream: sends the byte buffer on to AsyncSocketHandler.

3. StringCodec. Upstream: receives a byte buffer, decodes it into a std::string, and passes it to EchoHandler. Downstream: receives a std::string, encodes it into a byte buffer, and passes it to LineBasedFrameDecoder.

4. EchoHandler. Upstream: receives a std::string object and writes it back down the pipeline, returning the message to the client. Downstream: receives a std::string object and forwards it to StringCodec.

Now all we need to do is attach the pipeline factory to a ServerBootstrap and bind a port, and we've done pretty much everything.


#include <gflags/gflags.h>

#include <wangle/bootstrap/ServerBootstrap.h>
#include <wangle/channel/AsyncSocketHandler.h>
#include <wangle/codec/LineBasedFrameDecoder.h>
#include <wangle/codec/StringCodec.h>

using namespace folly;
using namespace wangle;

DEFINE_int32(port, 8080, "echo server port");

typedef Pipeline<IOBufQueue&, std::string> EchoPipeline;

// the main logic of our echo server; receives a string and writes it straight
// back
class EchoHandler : public HandlerAdapter<std::string> {
 public:
  virtual void read(Context* ctx, std::string msg) override {
    std::cout << "handling " << msg << std::endl;
    write(ctx, msg + "\r\n");
  }
};

// where we define the chain of handlers for each message received
class EchoPipelineFactory : public PipelineFactory<EchoPipeline> {
 public:
  EchoPipeline::Ptr newPipeline(std::shared_ptr<AsyncTransportWrapper> sock) {
    auto pipeline = EchoPipeline::create();
    pipeline->addBack(AsyncSocketHandler(sock));
    pipeline->addBack(LineBasedFrameDecoder(8192));
    pipeline->addBack(StringCodec());
    pipeline->addBack(EchoHandler());
    pipeline->finalize();
    return pipeline;
  }
};

int main(int argc, char** argv) {
  google::ParseCommandLineFlags(&argc, &argv, true);

  ServerBootstrap<EchoPipeline> server;
  server.childPipeline(std::make_shared<EchoPipelineFactory>());
  server.bind(FLAGS_port);
  server.waitForStop();

  return 0;
}

So far we've written only about 48 lines of code, and we have a complete high-performance asynchronous C++ server.

Echo Client

The implementation of the echo client is very similar to our server:


// the handler for receiving messages back from the server
class EchoHandler : public HandlerAdapter<std::string> {
 public:
  virtual void read(Context* ctx, std::string msg) override {
    std::cout << "received back: " << msg;
  }
  virtual void readException(Context* ctx, exception_wrapper e) override {
    std::cout << exceptionStr(e) << std::endl;
    close(ctx);
  }
  virtual void readEOF(Context* ctx) override {
    std::cout << "EOF received :(" << std::endl;
    close(ctx);
  }
};

Note that we have overridden both the readException and readEOF methods; there are other methods that can be overridden as well. If you need to handle a particular event, just override the corresponding virtual function.

This is the client-side pipeline factory, which mirrors our server's structure. The only handler not present in the server code is EventBaseHandler, which ensures that we can write data from any thread.


// chains the handlers together to define the response pipeline
class EchoPipelineFactory : public PipelineFactory<EchoPipeline> {
 public:
  EchoPipeline::Ptr newPipeline(std::shared_ptr<AsyncTransportWrapper> sock) {
    auto pipeline = EchoPipeline::create();
    pipeline->addBack(AsyncSocketHandler(sock));
    pipeline->addBack(
        EventBaseHandler()); // ensure we can write from any thread
    pipeline->addBack(LineBasedFrameDecoder(8192, false));
    pipeline->addBack(StringCodec());
    pipeline->addBack(EchoHandler());
    pipeline->finalize();
    return pipeline;
  }
};

The full client code is shown below:


#include <gflags/gflags.h>
#include <iostream>

#include <wangle/bootstrap/ClientBootstrap.h>
#include <wangle/channel/AsyncSocketHandler.h>
#include <wangle/channel/EventBaseHandler.h>
#include <wangle/codec/LineBasedFrameDecoder.h>
#include <wangle/codec/StringCodec.h>

using namespace folly;
using namespace wangle;

DEFINE_int32(port, 8080, "echo server port");
DEFINE_string(host, "::1", "echo server address");

typedef Pipeline<folly::IOBufQueue&, std::string> EchoPipeline;

// the handler for receiving messages back from the server
class EchoHandler : public HandlerAdapter<std::string> {
 public:
  virtual void read(Context* ctx, std::string msg) override {
    std::cout << "received back: " << msg;
  }
  virtual void readException(Context* ctx, exception_wrapper e) override {
    std::cout << exceptionStr(e) << std::endl;
    close(ctx);
  }
  virtual void readEOF(Context* ctx) override {
    std::cout << "EOF received :(" << std::endl;
    close(ctx);
  }
};

// chains the handlers together to define the response pipeline
class EchoPipelineFactory : public PipelineFactory<EchoPipeline> {
 public:
  EchoPipeline::Ptr newPipeline(std::shared_ptr<AsyncTransportWrapper> sock) {
    auto pipeline = EchoPipeline::create();
    pipeline->addBack(AsyncSocketHandler(sock));
    pipeline->addBack(
        EventBaseHandler()); // ensure we can write from any thread
    pipeline->addBack(LineBasedFrameDecoder(8192, false));
    pipeline->addBack(StringCodec());
    pipeline->addBack(EchoHandler());
    pipeline->finalize();
    return pipeline;
  }
};

int main(int argc, char** argv) {
  google::ParseCommandLineFlags(&argc, &argv, true);

  ClientBootstrap<EchoPipeline> client;
  client.group(std::make_shared<IOThreadPoolExecutor>(1));
  client.pipelineFactory(std::make_shared<EchoPipelineFactory>());
  auto pipeline = client.connect(SocketAddress(FLAGS_host, FLAGS_port)).get();

  try {
    while (true) {
      std::string line;
      std::getline(std::cin, line);
      if (line == "") {
        break;
      }

      pipeline->write(line + "\r\n").get();
      if (line == "bye") {
        pipeline->close();
        break;
      }
    }
  } catch (const std::exception& e) {
    std::cout << exceptionStr(e) << std::endl;
  }

  return 0;
}

The program uses a while loop to keep reading user input, and relies on calling .get() to wait synchronously until the request has been answered.

That concludes this tutorial on writing a high-performance C++ server; for more on high-performance servers in C++, see the related articles on this site.

