The Web must be fast, optimized, and efficient. But building a high-performance website is not easy once you consider the ecological cost of the resulting code or of the additional servers it adds to your infrastructure. Web performance is not green, but it could be.
During this talk, I’ll explain the ecological impacts of digital technology, and those added specifically by some web performance optimizations, so that you know how to address them. Together, let’s build a fast, efficient and responsible web.
Thanks to these different areas of expertise, I have come to grips with the performance and architecture issues of many projects, and taken a serious look at responsible design, factoring in the eco-design of web and native applications and every aspect of responsible digital design.
A talk giving my opinion on why frontend performance matters and its huge impact on users, from the perspective of government services. The talk focuses on the UK central government’s website, GOV.UK. It covers the history of the Government Digital Service and GOV.UK, who our users are, and the devices they use.
There’s a focus on what happens if web performance is poor for government services and what this means for users who rely on them. It also covers what we’ve done in the past, what we’re doing now, and our plans for the future of frontend web performance across government.
The target audience is anyone interested in web performance who would like a government perspective on the subject. It’s a sector with no competition: if a user can’t find the information they need quickly and easily, there really is no alternative.
Matt Hobbs is Head of Frontend Development at the Government Digital Service. An experienced frontend developer, he is passionate about using his skills to build accessible and performant user interfaces. Matt makes a point of keeping on top of the latest technology and tools, is interested in all aspects of interface development, and is a keen advocate for best practice.
Compressing an image is already difficult; doing it at large volume and low cost is a real challenge.
Together we will cover the main principles of optimization and the techniques that follow from them: resizing according to content, compressing according to perceived quality, and choosing an image according to the screen’s capabilities, browser support, and even the quality of the network connection. Finally, we will see how to apply these techniques to large-scale services such as Akamai and Cloudinary, based on our experience with this topic at Fasterize.
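The selection logic described above can be sketched as a small pure function. This is a minimal, illustrative sketch only: the breakpoints and quality values below are assumptions for the example, not Fasterize’s actual rules.

```typescript
// Sketch: choose an image variant from layout size, pixel density and
// network quality. Breakpoints and quality values are illustrative.
type EffectiveType = "slow-2g" | "2g" | "3g" | "4g";

interface ImageVariant {
  width: number;   // intrinsic width of the variant to request
  quality: number; // compression quality (0-100)
}

function pickImageVariant(
  cssWidth: number,            // layout width of the <img> in CSS pixels
  devicePixelRatio: number,    // e.g. window.devicePixelRatio
  effectiveType: EffectiveType // e.g. navigator.connection?.effectiveType
): ImageVariant {
  // On slow connections, don't pay for extra pixels: cap the density at 1x
  // and lower the compression quality.
  const slow = effectiveType === "slow-2g" || effectiveType === "2g";
  const dpr = slow ? 1 : Math.min(devicePixelRatio, 2); // >2x is rarely visible
  // Snap the target width up to a fixed bucket so variants stay cacheable.
  const buckets = [320, 640, 960, 1280, 1920];
  const target = cssWidth * dpr;
  const width = buckets.find((b) => b >= target) ?? buckets[buckets.length - 1];
  return { width, quality: slow ? 60 : 80 };
}
```

Bucketing widths rather than resizing to the exact pixel keeps the number of generated variants, and therefore the CDN cache footprint, bounded.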
We managed to reduce the load time of our app from 9 to 2 seconds using several methods: bundle analysis, code splitting, and dependency deduplication. During this talk, I’ll cover the following topics:
Deduping your dependencies
Choosing the right dependencies
Code splitting vs. bundle splitting
I will illustrate those topics with their concrete impact on our web application’s performance.
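To make the deduplication idea concrete, here is a deliberately naive sketch of what “deduping your dependencies” means conceptually: collapsing several semver-compatible requirements (same major version) onto a single installed copy, the way tools like `npm dedupe` do at install time. The version parsing below is simplified for illustration and is not how a real package manager resolves ranges.

```typescript
// Sketch of dependency deduplication: keep one copy per major version line,
// since semver-compatible callers can share the highest version in that line.
function dedupeVersions(required: string[]): string[] {
  const byMajor = new Map<number, string>();
  for (const v of required) {
    const [major] = v.split(".").map(Number);
    const current = byMajor.get(major);
    // Keep the highest version within each major line.
    if (!current || compare(v, current) > 0) byMajor.set(major, v);
  }
  return [...byMajor.values()];
}

function compare(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}
```

Each duplicate copy removed is code the browser never has to download or parse, which is why deduplication shows up directly in bundle-analysis reports.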
Webperf optimization project: what are we getting into? #
Are you about to launch, or in the middle of launching, a Web Performance optimization project? Let’s walk through the different maturity stages of a webperf project in a company, through feedback from OUI.sncf but also from other e-commerce sites that have taken the same approach. During this presentation, we will discuss:
The preconditions for launching a webperf project (what triggers or arguments can make you aware of the need to launch such a project?);
Preparation (who to surround yourself with, how to organize yourself?);
Skills development (how to approach the learning phase, set objectives, review your monitoring);
Obstacles (what posture to adopt in cases of divergent interests);
The maturity phase (raising awareness across teams, identifying advocates who can spread the culture);
And of course, the end (when can we say that an optimization project is finished?).
Chaperones and curfews: minimising 3rd party impact #
Every year websites get heavier – but the majority of growth isn’t coming from code written at the organisations running them… it’s coming from 3rd parties. Long gone are the days when it was viable to build everything internally, but their impact on performance is already getting a little out-of-hand. We don’t want to be the fun police throwing everyone out, but we can start to moderate the party. In this talk we won’t be looking to remove 3rd parties altogether – nobody likes having their toys taken away; instead we’ll be taking the practical approach of accepting that 3rd parties aren’t going anywhere and look at what strategies we can employ for maintaining performance and safeguarding against slowdowns, outages and abuse.
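One of the “curfew” tactics the talk alludes to can be sketched as a generic helper: give any third-party request a hard deadline so a slow tag cannot stall the page indefinitely. This is generic Promise plumbing under assumptions of my own, not code from the talk.

```typescript
// Sketch: race a third-party request against a deadline, falling back to a
// safe default if the third party is slow or down.
function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  const deadline = new Promise<T>((resolve) =>
    setTimeout(() => resolve(fallback), ms)
  );
  // Whichever settles first wins; a stalled third party can no longer
  // block the code path that awaits it.
  return Promise.race([work, deadline]);
}
```

The same deadline idea applies at other layers too, for example `timeout` attributes on font loaders or resource hints that deprioritize non-critical tags.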
With over 15 years experience developing for the web, and an undying passion for web performance, Ryan Townsend is the CTO of Shift Commerce – a SaaS e-commerce platform for ambitious retailers.
His pragmatic and performance-led outlook means that—on the rare occasion that he does wear a shirt—his sleeves stay firmly rolled up: even as an exec, his favourite place is right there in the thick of things with his team.
Outside of the office, you’ll usually find him picking up heavy objects in the gym or falling off his mountain bike down the side of a hill.
Two years ago, L’Équipe was one of the first in France to launch a Progressive Web App for its mobile website. The web performance gains led us to fully revamp the desktop version as a PWA as well. I’ll share feedback and results, and detail the actions we implemented to optimize web performance.
HTTP, or HyperText Transfer Protocol, is the protocol of the Web. The announcement of HTTP/3 in early November 2018 surprised many people: less than 4 years separated HTTP/3 from HTTP/2, whereas 18 years had passed between HTTP/1.1 and HTTP/2.
Nevertheless, this version genuinely complements the work done in HTTP/2, especially on latency issues.
Latency is certainly the number one enemy of Web Performance. It can be found at all levels: front-end, back-end, network protocol, hardware, etc. Latency must therefore be tackled in a comprehensive way to provide users with the best possible experience.
This talk will go back over the reasons for this third version of the Web protocol, what it brings, what it changes, and what web developers can expect from it.
It will also be an opportunity to present the challenges that will arise when implementing HTTP/3.
The target audience is anyone with a technical background working around the web (developer, devops, network engineer).
The main lines of the intervention will be:
the notions of latency and bandwidth
the latency problem related to TCP + TLS
the introduction of the QUIC transport protocol to solve these problems
the limitations TCP imposes on HTTP/2
the challenges regarding the implementation of HTTP/3
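The latency argument in the outline above can be made concrete with a back-of-the-envelope model of connection setup cost. The round-trip counts are the textbook values for a fresh connection: a TCP handshake (1 RTT) plus a TLS 1.3 handshake (1 RTT) must complete before the first HTTP request, and TLS 1.2 adds one more, whereas QUIC combines transport and TLS setup into a single round trip. This is a simplification that ignores 0-RTT resumption.

```typescript
// Back-of-the-envelope model of connection setup latency per protocol stack.
type Protocol = "tcp+tls1.2" | "tcp+tls1.3" | "quic";

function setupLatencyMs(rttMs: number, protocol: Protocol): number {
  // Round trips spent on handshakes before the first request can be sent.
  const roundTrips = { "tcp+tls1.2": 3, "tcp+tls1.3": 2, quic: 1 }[protocol];
  return roundTrips * rttMs;
}
```

On a 100 ms mobile round trip, moving from TCP+TLS 1.2 to QUIC removes 200 ms before the first byte of the response can even be requested, which is why latency, not bandwidth, motivates the new transport.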
Benoit Jacquemont dived into the web in 2000 after a first experience working on payment software. From Java to PHP, from Oracle to MySQL, and from management applications to eCommerce sites, he worked on a wide range of projects, always with a strong open-source dimension, within the Smile group. He became its CTO in 2009, when the company had nearly 450 employees. In January 2013, he co-founded Akeneo with Frédéric de Gombert, Nicolas Dupont and Yoav Kutner, beginning his startup adventure as Akeneo’s CTO.
Feedback from 4 years of experience with a Synthetic Monitoring tool at SeLoger #
A few years ago, SeLoger invested heavily in the application performance of its websites. This led to new jobs, tool purchases, and the creation of a custom-made Synthetic Monitoring solution based on WebPageTest. Today we have migrated to a commercial product.
During this conference, you will first discover why and how we created a Synthetic Monitoring solution ourselves. We will answer the following questions:
Why didn’t we pick an existing tool from the market?
How did we design it?
How and why did we improve it?
How did we use it internally to create a culture and a web-performance-oriented standard?
Then, in a second phase, why and how we carried out the migration:
Why abandon a custom solution?
What are the impacts on our projects?
What choices did we make throughout the process?
What have we learned?
What are the pitfalls to avoid?
What compromises have we accepted?
I will start by presenting the context at SeLoger and our former Synthetic Monitoring solution. Then we will go through the selection process we put in place.
How PagesJaunes climbed into the top 10 of the webperf ranking in France #
This feedback chronologically retraces all the work done by my team over the past year to improve the webperf of www.pagesjaunes.fr. I will review everything that was implemented: the successes but also the failures.
Head of the architecture department at www.pagesjaunes.fr. Specialist in the development of high visibility web applications. Certified ScrumMaster. Specializations: Java, Spring, Continuous Delivery, Webperf, Security, Git, Maven, Scrum, Kanban, Lean, Management 3.0
The performance team at the Wikimedia Foundation is responsible for measuring the performance of Wikipedia. That’s easy, right? Set up monitoring and you’re ready to go? In theory it is easy, but in practice we have run into some problems.
In this talk, we will look at how we do performance testing at the Wikimedia Foundation using synthetic monitoring tools and how it works together with our real user measurements. We will talk about the setup, cover some case studies where we found regressions, and go through the lessons we learned from our mistakes.
Peter works in the performance team of the Wikimedia Foundation, where he measures the performance of Wikipedia using synthetic testing and RUM. Before joining the Foundation he worked as a web performance consultant, helping some of the biggest Swedish brands. Peter also created the popular open source web performance tool sitespeed.io.
Thanks to client-side APIs like NavigationTiming, we can collect a lot of information about the real performance experienced by users. However, the organic nature of this data introduces many pitfalls when it comes to interpreting it.
We will look at best practices, real life examples from Wikipedia’s production traffic and recent research we’ve done on this subject. We will attempt to answer which of these metrics matter the most. Finally, we will take a tour of exciting new RUM metrics being worked on at the W3C Web Performance working group and share our experience testing them on Wikipedia with Google Origin Trials.
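As a small taste of the kind of data involved, here is a sketch of deriving a few classic RUM metrics from a Navigation Timing entry. The field names match the W3C Navigation Timing Level 2 entry returned by `performance.getEntriesByType("navigation")`; only a small subset of the entry is modeled here, and the metric choices are illustrative rather than Wikipedia’s actual dashboard set.

```typescript
// Subset of a PerformanceNavigationTiming entry (all values in ms,
// relative to navigation start).
interface NavTiming {
  startTime: number;               // navigation start (0 for the page itself)
  responseStart: number;           // first byte of the response received
  domContentLoadedEventEnd: number;
  loadEventEnd: number;
}

function rumMetrics(t: NavTiming) {
  return {
    // Time to first byte: network plus server time before anything renders.
    ttfb: t.responseStart - t.startTime,
    domContentLoaded: t.domContentLoadedEventEnd - t.startTime,
    pageLoad: t.loadEventEnd - t.startTime,
  };
}
```

In the browser, the input would come from `performance.getEntriesByType("navigation")[0]`; the interpretation pitfalls the talk covers start once millions of such samples, from wildly different devices and networks, are aggregated.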
After more than 10 years as a full stack engineer, Gilles Dubuc joined the Wikimedia Foundation in 2014. He is one of the founding members of the Foundation’s performance team, created in 2015, which he now leads as engineering manager. The goal of the Wikimedia Performance Team is to make Wikipedia and its sister projects the fastest websites possible.
Reduce the weight of a mail client UI via CSS / SVG #
I’m currently working on v4 of ProtonMail. The current version is not always easy to work with: technical debt is inevitable (performance debt as well), and so are heavy CSS and a lack of consistency…
A revamp is an ideal opportunity to rethink the CSS architecture and make the right decisions to improve performance: significant gains require groundwork, and some choices that may appear trivial can prove powerful when it comes to scaling and going worldwide.
I’m sharing feedback on this long-term work, tips such as SVG (because the most efficient requests are the ones… we never make :)), and above all how to design a CSS architecture that drastically reduces its weight, so you can approach a performance budget or critical CSS with perfect equanimity.
You haven’t heard the last from HTML/CSS developers!
Nicolas is a soldier who has been peacefully moving up to the front(-end) for more than 15 years. He worked in a Swiss web agency for about 10 years, and is currently working as a quark on UI and CSS, among other things… at ProtonMail.
Head of the OpenWeb group, speaker at events such as Paris-Web, Codeurs en Seine, Sud Web… He’s also the editor of the micro-framework CSS Röcssti, Opquast certified Quality Expert, editor of scripts available via the Van11y project – with some ARIA inside –, and he published more than 50 articles about CSS conception, security, Web quality… He’s sometimes kindly nicknamed “the Web quality Swiss-German”, due to his great score at the Opquast certification. And maybe you won’t believe it, but he considers this nickname as a compliment!
Fast to code. Fast to show. It doesn’t have to be an either/or situation. All sites should be fast to load, responsive, internationalizable, secure, and performant. It makes sense to use every available tool to implement features quickly, but some of those tools negatively impact usability. In this session we address the issues of creating sites without compromising any under-the-hood requirements.