How to set the number of IO threads for the client and server when using the gRPC async callback API?
I’m currently using the gRPC async callback API, mainly for streaming RPCs. How can I set the number of IO threads? I can’t find any related configuration options. https://grpc.io/docs/languages/cpp/callback/
Why are callback APIs preferred over sync APIs for a GRPC C++ client?
The gRPC documentation for C++ recommends the callback API over the other APIs for most RPCs:
https://grpc.io/docs/guides/performance/#c
Could Grpc.Dotnet.Client hang during stream sending due to request blocking?
The client sends streams successfully, but sometimes a thread hangs forever under heavy gRPC service load. Setting a timeout/deadline doesn’t help. The async request is blocked with GetAwaiter().GetResult().
grpc 1.62 performance regression compared with 1.55
Recently we have been running performance tests on a gRPC server receiving unary RPC requests, on a 48-core server machine with an E5-2650 CPU @ 2.20 GHz. After upgrading gRPC from 1.55 to 1.62, we see a fairly large performance drop:
Throughput results
gRPC 1.55 performance drop when client send rate increase
We have been working on a pub/sub project using gRPC in C++: we publish messages to the gRPC server with unary RPCs and receive them via a server-streaming RPC. We implemented the server in the CQ-based async style, like the QPS test server in the gRPC repo, and the client issues unary RPCs at a fixed interval. We run the test on a 48-core server machine with an E5-2650 CPU @ 2.20 GHz.
perf test repo
gRPC Constructor for CallOptions C#
Is there any behavioral difference between the following two CallOptions?