JS and CSS bundled and served as needed in a Block...
# help-with-other
s
I feel like I've been dancing around this problem for too long and I'm missing something very obvious. The site has hundreds of pages, each with a varying number of picked block list modules / blocks. Some of these need third party libraries and chunky amounts of JS and/or CSS - think sliders, video blocks, popups etc - and some pages have multiple sliders. Traditionally I'd output all the required CSS and JS in minified bundles, built out in gulp, and move on with my life. However Google Lighthouse likes to tell me (as do the SEO audits) that my bloated bundles have lots of unused CSS and JS on each page (which of course they do!). I've just been playing with Vite and it looked promising, but I can't see how I could have a dynamic bundle / dynamically load components (or whatever) to cover the above use case. So how does everyone in 2024 cover this? Code splitting with bundled goodness, loaded dynamically - a happy Lighthouse without 20-30 conditionally included CSS and JS files?
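For what it's worth, the "dynamically load components" part can be sketched with native dynamic `import()`, which Vite turns into per-module code-split chunks. The `modulesFor` helper, the `/modules/` path and the block-type names below are all made up for illustration, not anything from an actual setup:

```javascript
// Sketch: work out the unique module URLs needed for the block types
// actually present on a page, so only those chunks get fetched.
function modulesFor(blockTypes) {
  return [...new Set(blockTypes)].map((t) => `/modules/${t}.js`);
}

// In the browser you'd then lazy-load each one, e.g.:
//   for (const url of modulesFor(typesOnPage)) {
//     const mod = await import(url); // Vite code-splits each of these
//     mod.init?.();
//   }
```

With a bundler like Vite each dynamic `import()` becomes its own chunk, so pages without a slider never download the slider code.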
s
Maybe you should go on a JS diet instead? These days browsers can do a lot without javascript, eg. CSS scroll-snap for sliders, details/summary element for collapsible content, dialog element for popups.
s
Just moves the issue onto the CSS to be fair! I'd love for these sites to be less heavy on the browser - unfortunately nobody told the designers. So I'm stuck with needing to serve custom JS and CSS per block / module, and trying to keep bundles relevant to the page while reducing the number of HTTP requests. I'm coming to the conclusion it's worth writing something custom:
1) Use a build tool to output all SASS & JS as minified per-module CSS and JS files in /wwwroot.
2) Have a custom controller so that if /bundles/js/mods-xxxx.js or /bundles/css/mods-xxxx.css is requested, it serves up a concatenated bundle unique to the page.
3) xxxx contains a number (and a unique build prefix) which, using bit operations, switches an array of modules on / off. So 7 = 111 in binary, meaning the first three modules and nothing else; 8 = 1000 means the fourth and nothing else, etc. Cache / write these to disk so they are only built once for each unique string.
This seems like it would work, and be fairly performant if I do it right. My question again: I feel like I'm solving something that must be quite common for others. What's the "right" way of doing this before I start hacking out some custom fun?
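The bit-switch step can be sketched in a few lines — the `MODULES` registry here is hypothetical (whatever the order is, it has to stay stable between builds, hence the unique build prefix):

```javascript
// Sketch of the bitmask idea: bit i of the number switches module i on.
// MODULES is a hypothetical, build-ordered registry of module names.
const MODULES = ["slider", "video", "popup", "accordion", "map"];

function encodeBundle(names) {
  return names.reduce((mask, n) => mask | (1 << MODULES.indexOf(n)), 0);
}

function decodeBundle(mask) {
  return MODULES.filter((_, i) => (mask >> i) & 1);
}

// decodeBundle(7) -> first three modules (7 = 0b111)
// decodeBundle(8) -> only the fourth     (8 = 0b1000)
```

One caveat with a plain integer: JavaScript bitwise operators work on 32-bit values, so past 32 modules you'd need BigInt or a hex string per 32-module chunk.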
And if I'm getting to this point - why not just inject the CSS / JS straight into the page? Does Google Lighthouse et al dissuade or like this? Fewer HTTP requests but bigger HTML.
To answer my own final question - this would mean bigger HTML page deliveries and wouldn't utilise any browser caching across pages that are served the same CSS and JS. So it depends on the number of unique pages. As my sites have any combination of modules from a library of 50, I think it might be better injected.
d
Not sure if it helps, but I tend to minify my Sass using Dart Sass and then use Smidge to bundle my JS and CSS. You could use PostCSS + PurgeCSS to strip out unused CSS, but the process seems more trouble than it's worth. I get 4 x 100 Lighthouse scores on most of my sites, but as you say, as soon as clients start adding third party code we have little control over the output.
s
How much custom CSS/JS do you need per block? A rule of thumb: if the JS/CSS you are loading is below 4KB, it's better to inline it (make sure to only inline it once though). Smidge can help you with loading block-specific CSS/JS too, but you might end up with the user loading numerous different bundles that all contain 90% of the same code. Probably better to just have a block-a.js/css, block-b.js/css loaded with the different blocks.
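The 4KB rule of thumb is easy to mechanise at render time — a sketch, where the threshold is just the heuristic from this thread, not any standard, and `cssTag` is a made-up helper name:

```javascript
// Sketch: inline a stylesheet if it's small, otherwise link to it.
// 4 KB is the rule-of-thumb threshold from the discussion above.
const INLINE_LIMIT = 4 * 1024;

function cssTag(href, contents) {
  return Buffer.byteLength(contents, "utf8") <= INLINE_LIMIT
    ? `<style>${contents}</style>`
    : `<link rel="stylesheet" href="${href}">`;
}
```

The same shape works for JS with `<script>` vs `<script src>`, with the "only inline it once" caveat handled by a render-once check.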
d
Personally I tend to compile third party CSS into my global stylesheets so it is available throughout the site. I also try and only install what is being used.
s
Also, limiting the number of HTTP requests isn't really needed if you are running at least HTTP/2, which pretty much everyone is today.
s
I like the 4KB limit - I had in mind to inline small ones. @Dean Leigh - you're totally right about the third party stuff. They go wild with GTM, add all sorts of antiquated, poorly performing scripts, then beat me up for 100KB of CSS that is "unused" - despite the fact the browser has now cached it and it will most likely actually be used in the next bit of the site. But you can't reason with automated scores, can you?
I've seen people test 10-15 requests and say it had a detrimental effect on Lighthouse and page performance, but in theory I agree with this.
d
I hear you and feel your pain!
s
I actually think the Google Lighthouse score is incredibly simplistic and completely flawed. But they point at Google, look at me and, quite fairly, trust the billion $ entity!!
I'll do some experimenting with a few approaches. If I make a breakthrough I'll obviously update you all. TBH it's nice just to hear that everyone is making these tradeoffs and facing the same challenges.
My current thinking is to just have a core CSS and JS, a module-library one, and then break anything big (like the checkout / user area) into separate entries / modules, and soak up a few KB of criticism here and there. I was really hoping for a Vite-style entry point that I could inject the required modules into and have it do some tree-shaking magic dynamically. I guess I was hoping for too much.
s
For CSS we use tailwindcss (I know it's quite divisive in the industry, so not trying to start a discussion about that); it actually keeps both the bundle size and the amount of unused CSS down. We very rarely see CSS bundles above 100KB. For JS dependencies I tend to go the CDN route and include just what the library needs (but self-hosting using eg. https://24days.in/umbraco-cms/2022/static-assets-taghelper/). I only include JS files where needed. Eg. if I am using a slider library, the inclusion of the library's files is done in the partial views containing the slider, so pages without a slider don't load the slider library. I have a tag helper I can use for rendering stuff only once on the page, eg.
`<umb-once key="includesliderlibrary"><script src="cdn-url.to.sliderlibrary.com/min.js" umb-self-host></script></umb-once>`
Then the block containing the slider can be included several times on the page, but the script tag only gets added with the first block.
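For anyone not on a stack with that tag helper, the render-once pattern itself is tiny — a client-side sketch, where `makeOnce` is a made-up name and the usage lines are illustrative:

```javascript
// Sketch of the render/load-once pattern: run a function at most once
// per key and cache its result, so repeated blocks share one inclusion.
function makeOnce() {
  const seen = new Map();
  return (key, fn) => {
    if (!seen.has(key)) seen.set(key, fn());
    return seen.get(key);
  };
}

// Usage sketch in the browser (loadScript is hypothetical):
//   const once = makeOnce();
//   once("slider-lib", () => loadScript("cdn-url.to.sliderlibrary.com/min.js"));
//   once("slider-lib", () => loadScript("...")); // no-op, already loaded
```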
With these techniques, css and js is rarely something you need to optimise. You'll likely have images, videos etc. (GTM and editors misusing it too!) that are far bigger problems for your pagespeed.
d
AFAIK Tailwind uses PostCSS + PurgeCSS to strip out unused CSS but you can use them with any CSS.
s
not anymore, it looks through your source files looking out for which classes it should generate
d
Good suggestions re: JS @skttl
s
PurgeCSS can be a difficult beast, especially if you like to concatenate dynamic class names, eg. class="my-block my-block--@theme", where @theme is the dynamic part.
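For that specific case PurgeCSS does have a `safelist` option — a config sketch, where the paths are made up and the regex matches the `my-block--@theme` example above:

```javascript
// purgecss.config.js — sketch of safelisting dynamically-built class names.
// The regex keeps every my-block--* modifier even though the full class
// name never appears verbatim in the templates.
module.exports = {
  content: ["./Views/**/*.cshtml", "./src/**/*.js"],
  css: ["./dist/site.css"],
  safelist: {
    standard: [/^my-block--/],
  },
};
```

It still means maintaining a list of patterns by hand, which is part of why it can feel like more trouble than it's worth.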
d
It does, but then it will not optimise 3rd party CSS:
> By default, Tailwind will only remove unused classes that it generates itself, or has been explicitly wrapped in a @layer directive. It will not remove unused styles from third-party CSS you pull in to your project, like a datepicker library you pull in for example.
I think as @SiempreSteve says - we can optimise 1kb of CSS then the client will add 0.5mb of code via GTM 🙂
s
Exactly, and optimising 3rd party CSS is often not trivial. You can easily hit the problem I mentioned with PurgeCSS. If you think your third party library's CSS needs optimising, you are probably better off working with the author to actually optimise the CSS 🙂
m
Inlining... but then you have to generate a `nonce` for your CSP 😉
m
if only there was a package that can do that for you