React 18 server-side rendering performance degradation with `renderToPipeableStream`
Hello!

When switching from `renderToString` to `renderToPipeableStream`, I ran load tests on the application and found a drop in server throughput from 50 to 15 RPS, along with an increase in response times.

When profiling the CPU, I see a large overhead from the internal work of the stream, specifically the `Writable.write` and `Writable.uncork` methods. Together these calls take more than twice as much CPU time (about 50-60 ms) as rendering my test page (about 15-20 ms).

Also, I don't want to stream the HTML to the client, since that approach has some disadvantages. So I have to buffer the data, which slows the application down a bit more.
CPU profiler in production mode:
CPU profiler in development mode:
My custom Writable stream with buffering:

```js
import { Writable } from 'stream';

// Buffers all incoming chunks and exposes the full HTML
// string once the stream has finished.
class HtmlWritable extends Writable {
  chunks = [];
  html = '';

  getHtml() {
    return this.html;
  }

  _write(chunk, encoding, callback) {
    this.chunks.push(chunk);
    callback();
  }

  _final(callback) {
    this.html = Buffer.concat(this.chunks).toString();
    callback();
  }
}
```
And the rendering flow:

```js
import { renderToPipeableStream } from 'react-dom/server';

new Promise((resolve, reject) => {
  const htmlWritable = new HtmlWritable();

  const { pipe, abort } = renderToPipeableStream(renderResult, {
    onAllReady() {
      pipe(htmlWritable);
    },
    onError(error) {
      reject(error);
    },
  });

  htmlWritable.on('finish', () => {
    resolve(htmlWritable.getHtml());
  });
});
```
Yes, my team maintains an SSR meta-framework, so we have tested HTML streaming a couple of times and have some general (non-React-specific) feedback:

- Inconsistent behaviour between browsers when loading a page and during MPA transitions:
  - Safari behaves unpredictably regardless of rendering type and may flash a white screen.
  - Firefox shows no white screen when streaming; it waits for the full page to load.
  - Chrome shows a white screen if the stream responds for more than 5 seconds, or if the stream has already sent the `body` tag and then starts streaming a new page. This is an actual problem for us, because we have a lot of different applications on the same domain.
- There is no way to perform a server redirect after the first byte has been sent to the client.
- It is also not possible to update meta tags once they have been sent to the client.
- There is no clear answer as to how this affects SEO.
About the perf numbers you mentioned: are those measured with the buffering `Writable` you showed above, or with native Node.js streams?

It sounds like you're saying that your buffering `Writable` is slow, since that's where the bulk of the time is? Do you want help writing a faster one? Or are you saying that it's faster with that buffering than with native Node.js streams?

Even if you want to buffer the output, do you actually need it converted back to a string, or can it stay as binary?

What numbers do you get if you run the same benchmark with your custom `HtmlWritable` with buffering? Similarly, if you run React 18 with `renderToString`? That would narrow down how much of the perf issue is in the writable vs. the rest.
I don't doubt that native Node.js streams have a lot of overhead in the `_write` calls; there's a lot of stuff going on in there. In theory they wouldn't have to, but they're not optimized for lots of small calls.

Web Streams have even more overhead per chunk call, so we added our own buffering to avoid enqueueing so often.

We could do the same for Node streams, but there's a question of principle there: if we do that, we remove optimization opportunities for user space. There's nothing inherent in the Node stream API that forces it to have so much overhead. So should the buffering be our responsibility or the responsibility of the recipient stream?

One option would be an intermediate stream API, like the one you have, that buffers for a bit and then periodically flushes to the underlying stream, just like our Web Streams buffering does.