gRPC you say?
Think of grpc_pass as proxy_pass’ faster, cooler cousin. Because gRPC rides on HTTP/2, grpc_pass gives nginx an HTTP/2-native ingest point for connections… Upon discovering this, I ended up making some adaptations to our previous project, the Plex CDN.
gRPC is strictly HTTP/2, which is essentially why you can’t proxy HTTP/1.x traffic through grpc_pass; so we had to create a separate location block and route websockets through there with plain proxy_pass.
First, we start out with the basics of our nginx config: we define an upstream, set up SSL, then add the Google PageSpeed module (ngx_pagespeed) to the stack… Here is where it starts to get interesting: PageSpeed actually allows us to optimize images and use Redis to cache objects, despite us not using proxy_cache.
The gRPC location block, at least in my testing, does fairly well with a 90 second read/send timeout and keepalive turned off. Instead of https:// we’ll be using grpcs://, which means gRPC over TLS and, by extension, HTTP/2-only comms between our server and our CDN.
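The upstream and SSL basics aren’t shown below, so here’s a minimal sketch of what they might look like. The upstream name application matches the grpc_pass/proxy_pass targets used later; the backend address, port, and certificate paths are hypothetical placeholders:

upstream application {
    server 10.0.0.10:32400;  # hypothetical backend address/port
}

server {
    listen 443 ssl http2;  # http2 is required for grpc_pass to work
    server_name app.cdn.whatever;
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;  # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;    # hypothetical path
}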
In our HTTP Block:
pagespeed on;
pagespeed FileCachePath "/var/cache/pagespeed/";
pagespeed FileCacheSizeKb 204800;
pagespeed FileCacheCleanIntervalMs 3600000;
pagespeed FileCacheInodeLimit 500000;
pagespeed LRUCacheKbPerProcess 8192;
pagespeed LRUCacheByteLimit 32768;
pagespeed DefaultSharedMemoryCacheKB 400000;
pagespeed ShmMetadataCacheCheckpointIntervalSec 300;
pagespeed HttpCacheCompressionLevel 9;
pagespeed RedisServer "127.0.0.1:6379";
pagespeed EnableCachePurge on;
pagespeed InPlaceResourceOptimization on;
In our server block:
pagespeed on;
pagespeed Domain app.cdn.whatever;
pagespeed Domain app.whatever;
pagespeed RewriteLevel OptimizeForBandwidth;
pagespeed InPlaceResourceOptimization on;
pagespeed FetchHttps enable,allow_self_signed;
location / {
grpc_pass grpcs://application;
grpc_socket_keepalive off;
grpc_ssl_name $og_host;
grpc_ssl_server_name on;
grpc_ssl_session_reuse on;
}
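The 90 second read/send timeouts mentioned earlier aren’t explicit in the block above; nginx’s gRPC equivalents, if you want to set them, look like this:

grpc_read_timeout 90s;
grpc_send_timeout 90s;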
Now we move our websockets over to proxy_pass:
location /:/websockets {
proxy_set_header Accept-Encoding "";
proxy_ssl_verify off;
proxy_http_version 1.1;
proxy_read_timeout 86400;
proxy_pass https://application;
proxy_ssl_name $og_host;
proxy_ssl_server_name on;
proxy_ssl_session_reuse on;
}
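Both location blocks reference $og_host, which isn’t defined anywhere above. One way to populate it is a map in the http block that preserves the original Host header; this is just a sketch, and the variable name is simply what this config expects:

map $http_host $og_host {
    default $http_host;  # carry the client's Host header through to the upstream SNI
}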
From here, we add our pagespeed blocks:
location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" {
add_header "" "";
}
location ~ "^/pagespeed_static/" { }
location ~ "^/ngx_pagespeed_beacon$" { }
Typically I’ve seen the main improvement in TTFB, or Time to First Byte, which makes a huge difference in resource loading speeds and leaves the web application feeling more responsive in modern browsers.