Browser Compatibility Explained

We have more options than ever before when it comes to choosing a browser for everyday use. But have you ever wondered why a website behaves a certain way in one browser and differently in another?

Modern web browsers have gotten good, but there are still differences between them that are important for anyone developing (or even using) websites to understand. In addition to features like bookmark syncing, tab management, and extensions, the underlying technologies that power each browser tend to have notable differences. Buttons may be styled differently, newer image formats might not render properly, and JavaScript may break in unexpected ways.

Part of this is due to the existence of many independent browser vendors, each with their own vision for how the web should operate. Google Chrome is by far the most widely used – accounting for over 66% of the market share as of September 2020 – but not everyone viewing your website is going to be using the same browser or platform, and those who are might not be on the same versions. This simple fact is the root cause of most compatibility issues.


The Evolution of Standards

Web standards are largely determined by an international organization led by Tim Berners-Lee and Jeffrey Jaffe called the World Wide Web Consortium, or W3C. Made up of over 400 member organizations, including Mozilla, Google, Apple, and Microsoft, this nonprofit is responsible for maintaining the standards that govern the modern web. This includes the HTML, CSS, XML, and SVG specifications, accessibility standards, and protocols such as HTTP.

The JavaScript language, on the other hand, is maintained by Ecma International and its TC39 committee, although browser APIs like the DOM are still governed by the W3C. New versions of ECMAScript / JavaScript are typically released on a yearly basis, with ECMAScript 2020 (also known as ES11) being the latest release.
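For example, two widely used additions that shipped with ES2020 are optional chaining and nullish coalescing (the user object below is purely illustrative):

```javascript
// A hypothetical object where nested properties may be missing
const user = { profile: { name: "Ada" } };

// Optional chaining: safely reach into nested properties that may not
// exist, yielding undefined instead of throwing a TypeError
const city = user.profile?.address?.city;

// Nullish coalescing: fall back only when the value is null or undefined
const displayCity = city ?? "Unknown";
```

Browsers that haven't yet shipped these features will throw a syntax error on this code, which is exactly the kind of gap developers have to account for.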

Once these organizations approve new specs for the web, it’s left up to browser vendors to implement support for these new features and release an update to their users. This process can take anywhere from weeks to years, depending on the priority of the new addition.

Nowadays most browsers push automatic updates on a regular basis without users even noticing, but for some vendors updates are more nuanced. Safari, for example, typically bundles its updates alongside macOS system updates, leading to a much slower adoption rate since this relies on users updating their entire operating system to receive new features.

So what exactly is being “updated”? The answer usually involves changes across multiple distinct engines that each take on specialized responsibilities.


Browser Engines

The browser engine is essentially the heart of the browser. It can almost be thought of as the browser’s operating system, since it powers all of the browser’s main operations. There are a handful of these engines, each with their own quirks – Chromium browsers (such as Chrome and Edge) run on Blink, Safari runs on WebKit, and Firefox runs on Gecko.

Most of the variation between browsers is derived from differences in the way their browser engines operate. This engine is in charge of everything from the browser UI to page navigation and networking. It determines how memory and CPU are utilized, the way pages are loaded, and how resources are cached locally.

Tightly coupled to the browser engine is the rendering / layout engine, which is in charge of parsing HTML and CSS then painting the results to your screen. This is arguably the browser’s most important job, as it’s responsible for translating markup into a visual experience that can be seen and interacted with.

Issues sometimes arise when a CSS property or HTML tag is supported in one rendering engine but not in another. This can be due to a vendor being slow to implement the new property, or a more fundamental disagreement about whether the property should exist at all.
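One common way developers cope with this is runtime feature detection. A minimal sketch using the CSS.supports() API (the typeof guards are an added precaution so the function degrades safely in environments where the API doesn’t exist):

```javascript
// Returns true only when the current engine can confirm support for a
// given CSS property/value pair; false anywhere detection isn't possible.
function cssSupports(property, value) {
  return (
    typeof CSS !== "undefined" &&
    typeof CSS.supports === "function" &&
    CSS.supports(property, value)
  );
}

// In a browser, code might branch on the result, e.g.:
// if (cssSupports("display", "grid")) { /* grid layout */ }
// else { /* float- or flexbox-based fallback */ }
```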

For a deep dive into the process by which web pages are rendered, check out this blog post.



JavaScript Engines

All browsers also need a specialized engine to interpret and execute JavaScript code. This engine takes JavaScript files and translates them into bytecode (and often optimized machine code) that the computer can execute.

Each browser implements their own version of these JavaScript engines – Chromium uses V8, Safari uses JavaScriptCore, and Firefox uses SpiderMonkey. The speed and popularity of V8 in particular has led to its adoption in runtimes like Node.js and Deno as well.

JavaScript engines are constantly evolving to become more performant as both app complexity and hardware processing power increase. They often apply different optimizations and compilation techniques (such as just-in-time compilation) to speed up parsing and execution.

Arguably more important than speed however, is support for modern syntax and features. It’s crucial for browsers to stay current with the JavaScript community and what new features developers are actively using.

Unlike rendering engines, which are typically able to fail silently and continue rendering the page, JavaScript will halt the entire script when the interpreter encounters an uncaught error. This risk has led to the rise of polyfills, which patch missing APIs at runtime, and transpilers such as Babel, which convert newer syntax into code that older JavaScript interpreters can understand.
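A minimal sketch of the polyfill pattern, using Array.prototype.flat as an example (this is illustrative, not spec-complete – real polyfills such as those from core-js handle many more edge cases):

```javascript
// Define the method only when the engine doesn't already provide it,
// so native implementations are never overridden.
if (!Array.prototype.flat) {
  Object.defineProperty(Array.prototype, "flat", {
    configurable: true,
    writable: true,
    value: function flat(depth = 1) {
      return depth > 0
        ? this.reduce(
            (acc, v) => acc.concat(Array.isArray(v) ? v.flat(depth - 1) : v),
            []
          )
        : this.slice();
    },
  });
}
```

With this in place, code written against the newer API runs the same way whether or not the underlying engine ships it natively.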


User Agent Stylesheets

Each browser has its own default stylesheet. These are CSS rules that the browser falls back on in the event that the developer doesn’t explicitly define styles for certain elements. These include defaults for visual aspects such as margins, borders, and font sizes.

While these values carry the lowest specificity and are easily overridden by any other CSS rules, issues can arise when certain properties are overlooked – leaving the decision up to the browser. Many developers choose to “reset” these default styles in order to start off with a blank slate and have more predictable control over the appearance of their site.

Furthermore, browsers can change their default stylesheet at any point through updates (as Google recently did with form controls in Chrome 83), potentially altering the appearance of your site without warning.




Platform Restrictions

The hardware that browsers run on can also play a defining role in the browsing experience and its potential limitations. For example, users browsing a website on a brand new laptop will generally have the benefit of a modern processor and plenty of available memory, whereas older devices may struggle with those same websites.

Not all browsers are available on all operating systems either. Safari is not built to work on non-Apple devices, and Microsoft does not offer a macOS version of Internet Explorer or (until recently) Edge. While these restrictions are largely business decisions rather than technical ones, they reduce the options a user has when choosing a browser.

When it comes to mobile devices, the version of a browser you download from the App Store or Google Play Store is not necessarily the same as its desktop counterpart. On iOS, although Apple allows third-party browsers in the App Store, it requires them all to use its WebKit engine. This lets Apple exercise tighter control over the way websites are rendered on its devices. Because of this, an app like Chrome on iOS behaves much more like Safari than like desktop Chrome.


The Decline of Internet Explorer

The topic of cross-browser compatibility cannot be discussed without mentioning Internet Explorer. The 20-plus-year-old browser reached its end of life in 2016, when Microsoft shifted its focus to Edge. Since then, IE has only received critical security updates, and there’s a dwindling community of developers willing to continue supporting the browser.

While its user base continues to shrink (1.19% as of September 2020), IE remains heavily relied upon by enterprise websites that were built for older versions of the browser and haven’t been updated since. This has led new versions of Windows to continue shipping with IE pre-installed, even as users are encouraged to use Edge. Microsoft Principal Program Manager Chris Jackson has even gone so far as to call IE a “compatibility solution”, pointing out the perils of using it as your default browser.

Up until recently, users were forced to use IE in order to access these older enterprise applications. However, Microsoft’s new Chromium build for Edge now offers an “IE Mode” (Edge 77 and later) that switches to using Internet Explorer 11’s Trident MSHTML engine for legacy sites that still require it.

It’s important to ensure a good experience regardless of which browser, version, or platform your customers decide to use. For developers, this means using cross-browser testing tools such as CrossBrowserTesting or BrowserStack to find bugs before your customers do. Being aware of these differences allows you to make informed decisions about the needs and considerations for the audience you’re building for.

The developers here at Something Digital are highly skilled at ensuring a smooth browsing experience across a wide range of browsers and devices. For any questions, please reach out to us.

How Webpages are Rendered

What does it mean to “render” a webpage? This may sound like a simple question, but when you dive into the technical details you begin to realize how much work a browser does in an incredibly short amount of time. Knowing more about this process will allow you to make better decisions when it comes to optimizing the performance of your site.

While external factors often come into play, much of the responsibility for laying the foundation of a smooth experience rests on the shoulders of the developer. Here at Something Digital, we’ve found that many people don’t have a full understanding of the process a webpage takes to go from files on a server to a complete page in your browser – so it may be beneficial to gain a better understanding of what’s going on behind the scenes.

From a technical standpoint, the process for loading a webpage can be broken into four stages: navigation, parsing, rendering, and interaction. Let’s break each one of those steps down in detail.


Navigation

First, the browser needs to retrieve all the necessary files from a remote server to create the initial page. This step is limited by factors such as the end user’s internet speed and network latency. It is for this reason that data centers are spread all over the world, and CDNs are often used to deliver content from whichever of several locations is geographically closest to the user.

Files are requested over a protocol called HTTP, or HyperText Transfer Protocol. Although created in the late 80s, this standard continues to evolve and the newest version called HTTP/2 improves upon HTTP/1.1 in a number of ways. One such advantage is that it now allows the client’s browser to make requests for multiple assets (images, stylesheets, JavaScript, etc.) concurrently over a single TCP connection, whereas with HTTP/1.1 this required opening separate connections if you wanted to transfer data in parallel.
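The practical effect can be sketched in JavaScript. Here loadAsset is a hypothetical stand-in for a real network request so the snippet is self-contained; in a browser over HTTP/2, all three requests could share a single connection:

```javascript
// Simulates fetching a resource; in a real page this would be fetch()
const loadAsset = (url) =>
  new Promise((resolve) => setTimeout(() => resolve(url + " loaded"), 10));

async function loadPageAssets() {
  // All three requests are issued concurrently rather than queuing
  // behind one another
  return Promise.all([
    loadAsset("/styles.css"),
    loadAsset("/app.js"),
    loadAsset("/hero.jpg"),
  ]);
}
```

The browser still decides how requests map onto connections; the point is that with HTTP/2 this parallelism no longer requires opening one TCP connection per asset.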

“Time to first byte” (or TTFB) is a common measurement of the time it takes the browser to receive the first byte of a response, and should ideally occur in a half second or less.


Parsing

Once the browser has all the necessary files, it can begin interpreting them. This is when the files are read and the structure of the website begins to take shape through what’s known as lexical analysis and syntax analysis.

The lexer breaks the code up into “tokens” that can be easily processed, stripping out whitespace and other unnecessary characters. Each token is then passed to the syntax analyzer, which applies language-specific syntax rules and adds it to the parse tree. If syntax errors are found, this is where an exception will be thrown.
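As a toy illustration of the lexing step (real engine lexers handle far more token types and edge cases than this), here is a regular-expression tokenizer for a tiny fragment of JavaScript-like code:

```javascript
// Splits source text into tokens, discarding whitespace. Only understands
// identifiers, integers, and a few punctuation characters.
function tokenize(src) {
  const tokens = [];
  const re = /\s*([A-Za-z_]\w*|\d+|[=+;()])/g;
  let m;
  while ((m = re.exec(src)) !== null) tokens.push(m[1]);
  return tokens;
}
```

A syntax analyzer would then consume this token stream and check it against the grammar, e.g. rejecting `let = x 42;` even though it lexes cleanly.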

Once lexical analysis and syntax analysis are complete, HTML and XML elements are used to create the Document Object Model (or DOM) – a series of element and text nodes organized in a tree-like structure. JavaScript can then use these DOM nodes to manipulate the document’s contents.
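A rough in-memory picture of what the parser builds for the markup `<p>Hello <b>world</b></p>` – the node shape here is a simplification for illustration, not the real DOM interface:

```javascript
// Element nodes carry a tag and children; text nodes carry a string value
const dom = {
  type: "element", tag: "p",
  children: [
    { type: "text", value: "Hello " },
    { type: "element", tag: "b",
      children: [{ type: "text", value: "world" }] },
  ],
};

// Scripts work by walking this tree, e.g. collecting the visible text
function textContent(node) {
  if (node.type === "text") return node.value;
  return node.children.map(textContent).join("");
}
```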

Similar to the DOM, the CSS Object Model (or CSSOM) is also constructed at this stage, allowing JavaScript to read and modify CSS rules dynamically. Stylesheets are interpreted from top to bottom, with later and more specific rules overriding earlier ones – the “cascade” that gives CSS its name.

JavaScript and CSS are “render blocking” resources, meaning they can negatively affect load time by preventing the rest of the page from being parsed until they’re finished being executed. If there’s any inline JavaScript or CSS embedded in the HTML document, it will be parsed synchronously. Since this can have a huge impact on overall load time, loading non-critical scripts asynchronously or using the defer / async attributes is generally a good practice.


Rendering

Rendering is the multi-step process in which the content of the page begins to become visible to the user. This can be a relatively expensive task for the browser to perform, depending on the complexity of the styles and animations being rendered.

First the DOM and CSSOM are combined to create the “Render Tree” by traversing the DOM nodes and finding the appropriate CSSOM rules that apply to them. This only includes nodes that will occupy space in the layout, so if an element has display: none, it will be omitted from this tree.

Next, the layout stage computes the exact size and position of each node within the layout by creating a box model, and reserves that space on the page. This is also commonly referred to as “reflow”.
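As a simplified sketch of that box-model arithmetic (content-box sizing, hypothetical pixel values, ignoring the many additional rules a real layout engine applies):

```javascript
// Horizontal space a box occupies: content width plus padding, border,
// and margin, each applied on both the left and right sides.
function boxWidth({ content, padding = 0, border = 0, margin = 0 }) {
  return content + 2 * (padding + border + margin);
}

// A 200px-wide element with 10px padding, 1px border, and 5px margin
// reserves 232px of horizontal space in the layout.
```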

Content is then displayed to the screen through a process called “painting”. More complicated styles such as drop shadows are more memory intensive to compute and render, and may take longer to be painted than a solid color. The first meaningful paint represents the moment when the user is able to see meaningful content on your webpage for the first time.

Since parts of the page may have been drawn to different layers, compositing is the process in which the GPU is used to “flatten” these layers, ensuring the order of elements on the page remains correct.

Actions such as adding / removing elements from the DOM, changing inline styles, or resizing the window will cause additional reflow, since the browser needs to perform the above process again to calculate the new positions of elements. This is a user-blocking operation and should be avoided as much as possible as it can lead to an unpleasant user experience.

As images begin to be displayed, smaller, more compressed images will typically appear first. Newer image formats (such as WebP) also promise to reduce load times by encoding images more efficiently, although not all formats are supported by every browser.


Interaction

Finally, the interaction step is when the user can begin browsing and using the page. A page is considered “fully interactive” when all previous steps have completed and users can scroll, type, and interact with elements on the page.

First CPU Idle represents the point at which the page is minimally interactive – meaning it has loaded enough to be able to handle a user’s input. Most, but not all, of the UI is interactive, and the page responds to user input in a reasonable amount of time.

As animations begin to run, 60 frames per second is usually the target for a smooth frame rate – anything less starts to become noticeable as “jank”. At lower frame rates, scrolling can appear choppy or become unresponsive altogether, while a consistently high frame rate is a good indicator of a site’s overall responsiveness.
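That 60fps target translates into a per-frame time budget, which is easy to compute:

```javascript
// At 60fps the browser has roughly 16.7ms per frame to run script,
// recalculate styles, lay out, paint, and composite.
const FRAME_BUDGET_MS = 1000 / 60;

// If a frame's work overruns the budget, frames get dropped and the
// animation reads as jank:
function isJanky(frameTimeMs) {
  return frameTimeMs > FRAME_BUDGET_MS;
}
```

A frame that takes 33ms, for instance, effectively halves the animation to 30fps – exactly the choppiness described above.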

The process a browser takes to construct a webpage is far more complex than one might think. With each step comes more opportunities for developers to make decisions that can provide a smoother user experience and have fundamental impacts on increasing conversion rates.

Something Digital’s developers are passionate about getting your online store to be as fast as it can be. If that sounds like something you’d be interested in, reach out.