The big challenge was to get Ember working with server-side rendering (SSR) in a stable way. With FastBoot 1.0 recently released, that has been accomplished. Now I'd like to see a focus on server-side rendering performance.

Time to first byte (TTFB)

If you are unfamiliar with this term, it is simply how long it takes the server to start responding with the first byte of a request. With advances in technology like HTTP/2, TTFB has become less of an issue for asset loading because you can load all of your assets in parallel; with HTTP/1.1 you had to wait for each asset to finish being received before requesting the next. Still, as you can imagine, the longer it takes to even start receiving an asset, let alone to receive the entire asset, the more that time accumulates.

TTFB becomes an issue for any FastBoot-enabled web app because the index.html it serves contains all of the rules on what to download next, and the browser cannot download in parallel assets it hasn't yet been instructed to fetch. So while the actual transfer time of the index.html that FastBoot returns may be trivial, especially when gzipped, the latency before you start receiving the file can really add to the overall impression of your application's performance.

Let's look into why this is, how much latency FastBoot adds, and some strategies for dealing with it. When your browser requests a URL, it hits the FastBoot server. FastBoot then starts a new Ember app (more on this later), routes based upon the requested URL, may make one or more external requests via Ember Data to an API, renders the page, and returns it. This is a typical render cycle for any server-side application (with the exception of instantiating a new app) and shouldn't be looked at in any other way. We must incur this cost if we want to render a unique page for the given URL. What is less understood at this point is what we probably shouldn't be incurring the cost of, and that comes back to the Ember app being instantiated on each request. In my opinion, the community is still very early in the process of learning how best to use and optimize Ember server-side.

So how much latency does FastBoot add? The answer to this question is, as it is with most things in life, "it depends". For example, is this the first request to the URL? Is it the 10th request? Are you making external requests, and if so, how many? To help inform this, let's look at the most basic use case: a first-time (cold) request to a newly generated Ember app. That comes in at about 100ms on my machine, and the 10th request brings it down to around the 10ms mark. We don't have any caching set up, and FastBoot is not written to boost performance on additional renders, so without getting into the source we can assume that Node (or Express) is making some run-time optimizations based upon code it sees more frequently. FastBoot also does a one-time initialization of your app, leaving it in a pre-instantiated state that has gotten some of the work out of the way. This is why pushing work out of instance-initializers and into initializers is important if you don't need app state and want to avoid a repeated cost on every request.

This is all before you have actually written any business logic in your app. On this site, with a warm FastBoot instance (10+ requests for the given URL), fetching index.html averages out above 500ms, which is especially notable considering we are not doing that much: we make very few network requests and don't have many complex components to render. The TTFB can directly impact the time it takes to render your app, and crawlers like Google take this time into consideration as part of their ranking, so it's important to try to reduce it as much as possible.

Some strategies to improve FastBoot performance

The nice part about getting Ember rendering on the server is that anything you do to improve the performance of the app in the browser will also improve the performance of the app on the server.
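The per-request cycle described above can be sketched in plain JavaScript. To be clear, none of the names below are FastBoot's actual API; they are hypothetical stand-ins that just mirror the steps (instantiate a new app, route on the URL, fetch data, render HTML):

```javascript
// Illustrative model of FastBoot's per-request work; these functions are
// stand-ins for the steps, not FastBoot's real API.
function instantiateEmberApp() {
  // Paid on EVERY request: a fresh app instance is booted.
  return {
    routeFor(url) {
      // Route purely on the requested path, as the router would.
      return url === '/posts' ? { name: 'posts' } : { name: 'index' };
    },
    fetchModel(route) {
      // Stands in for one or more Ember Data requests to an API.
      return route.name === 'posts' ? ['post-1', 'post-2'] : [];
    },
    renderToHtml(route, model) {
      return `<html><body data-route="${route.name}">${model.join(',')}</body></html>`;
    },
  };
}

function handleFastbootRequest(url) {
  const app = instantiateEmberApp();      // new app per request
  const route = app.routeFor(url);        // route based on the URL
  const model = app.fetchModel(route);    // external data, if any
  return app.renderToHtml(route, model);  // unique HTML for this URL
}

const html = handleFastbootRequest('/posts');
console.log(html); // <html><body data-route="posts">post-1,post-2</body></html>
```

Everything except the first line of `handleFastbootRequest` is the same work any server-side framework does per request; the per-request `instantiateEmberApp()` call is the cost that is specific to booting an Ember app for each URL.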
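The initializer advice can be made concrete with a small simulation. This is a plain-Node sketch, not Ember's API: `bootApp` and `handleRequest` are hypothetical stand-ins for what Ember and FastBoot do internally, but the two object shapes mirror the `app/initializers` and `app/instance-initializers` conventions, and the counters show where the repeated cost lands:

```javascript
let bootWorkRuns = 0;
let perRequestRuns = 0;

// app/initializers/parse-routes.js -- an initializer runs ONCE, at app boot.
const parseRoutes = {
  name: 'parse-routes',
  initialize(application) {
    bootWorkRuns += 1; // expensive, state-free setup belongs here
  },
};

// app/instance-initializers/session.js -- an instance-initializer runs on
// EVERY FastBoot request, because each request gets a new app instance.
const session = {
  name: 'session',
  initialize(appInstance) {
    perRequestRuns += 1; // only per-request, stateful work belongs here
  },
};

// Hypothetical stand-in for the app/FastBoot lifecycle.
function bootApp() {
  parseRoutes.initialize({}); // one-time cost at boot
  return {
    handleRequest() {
      session.initialize({}); // repeated cost per request
    },
  };
}

const app = bootApp();
app.handleRequest(); // request 1
app.handleRequest(); // request 2

console.log(bootWorkRuns, perRequestRuns); // 1 2
```

After two requests the initializer has still run only once, while the instance-initializer has run twice; any work that does not need per-request app state is cheaper in the former.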