The Web browser is the most-used kind of software in the world, having become the de facto way that people access the Internet. Today, virtually all computing tasks can be completed in the browser.
Testing browsers can veer from incredibly complex to shockingly simple, depending on what you’re looking for and why. At CNET, we prefer a holistic approach to browser benchmarking, looking at a combination of tests that benchmark general browser behavior, as well as several “real-world” tests that look at browser performance in common scenarios.
Note about mobile testing: We are still finalizing our standards for mobile browser testing, and will update this post as soon as they’re ready. For now, the following procedures apply only to desktop browsers.
Is your favorite browser on our test list?
Unless your favorite browser is some obscure remixed version of Netscape, chances are we test it. However, browser testing is further complicated by the fact that two of the five major browsers, Firefox and Chrome, update on a six-week release cycle. Sometimes those updates bring dramatic changes, but often they don't. Because of the sadly human limitations of your humble editors, CNET will not be testing all browsers simultaneously.
Instead, we will conduct quarterly tests for the most-used and best-known browsers, and biannual tests for a wider range of competitors. Also, tests are staggered according to platform, so Windows browsers are not tested simultaneously with Mac, Android, or iOS browsers.
Desktop browsers tested, both Windows and Mac unless otherwise noted:
- Internet Explorer 9 (Windows 7 only)
- Internet Explorer 10 (Windows 8 only)
- Safari (Mac only)
Desktop browsers tested biannually will include:
- Avant (Windows only)
How we test desktop browsers
We run each of the following tests three times, restarting the computer before each run so that the browser starts "cold." After launching the browser, we wait 30 seconds to ensure that any background processes have finished. Then we average the results of the three runs.
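In script form, the run-three-times-and-average procedure looks something like this. This is a sketch of the general technique, not CNET's actual harness; the `timed_run` and `average_of_runs` names, and the callable-based interface, are our own:

```python
import statistics
import time

def timed_run(action):
    """Time a single invocation of `action` (e.g. a browser launch), in seconds."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

def average_of_runs(action, runs=3, settle_seconds=0):
    """Run `action` `runs` times, pausing `settle_seconds` between runs
    so background work can finish, and return the mean timing."""
    timings = []
    for _ in range(runs):
        timings.append(timed_run(action))
        time.sleep(settle_seconds)
    return statistics.mean(timings)
```

In a real harness, `action` would launch the browser under test (and the machine would be rebooted between runs rather than merely pausing), but the averaging logic is the same.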
The Acid3 test from the Web Standards Project checks browser compliance with accepted Web standards. It's slightly outdated, since it doesn't look at HTML5, which has yet to be finalized, but it remains a good way to establish a baseline. Browsers that don't hit 100 out of 100 on the Acid3 are behind the times in a fundamental, crucial way.
The HTML5 Test assigns points for each HTML5 feature the browser supports, out of a total of 500. This is a rough gauge of how forward-looking the browser is.
JSGameBench, GUImark3 (gaming test, text test), and Microsoft FishIE Tank look at HTML5 Canvas performance in several different game-like environments. Canvas is an important part of HTML5 to test because it creates all the nifty 2D images and shapes that can move across your screen.
Each of these three tests uses a different standard. FishIE Tank, for example, lets the tester set the number of fish on the screen. The test then shows how many frames per second the browser can render them at.
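The arithmetic behind a frame-rate score is simple: frames rendered divided by elapsed time. A minimal illustration (our own helper for clarity, not code from FishIE Tank or the other benchmarks):

```python
def frames_per_second(frame_times):
    """Average FPS given a list of per-frame render times, in seconds."""
    total = sum(frame_times)
    return len(frame_times) / total if total > 0 else 0.0

# 50 frames that each took 20 ms to draw: 50 frames / 1.0 s = 50 fps
```

Higher is better; adding more fish (more draw work per frame) lengthens each frame time and drags the FPS number down.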
Microsoft Chalkboard runs a series of timed tests on HTML5 panning, zooming, and scaling. Faster is better.
Microsoft setImmediate Sorting looks at the intersection of HTML5 and power consumption. Currently, only IE 10 supports the setImmediate API, but the test’s HTML5 sub-test gives an idea of how your browser drains power.
Facebook Ringmark checks HTML5 feature support, geared for the needs of the mobile browser. Nevertheless, it works well on desktops and provides a good and rare point of comparison between desktop and mobile.
We also perform four “real-world” tests to see how the browser performs under in-use conditions. These tests look at specific browser behaviors that you’re likely to encounter: startup from cold boot, memory used while open, shutdown time, and wake from sleep.
Like the benchmark testing, each test is performed three times and then averaged. Unlike those tests, which are performed with only the tab running the test open, our real-world tests are conducted twice: once with five tabs open, and once with 50. This is to replicate the real-world scenario of keeping many tabs open simultaneously, something that many people do (even if you don't).
Tabs are chosen according to categories based on realistic use cases: search engine, streaming media, news site, gaming site, and Web mail. The five tabs we open in the less-intensive test are: Google.com, CNET.com, Outlook.com, YouTube.com, and Pandora.com. We’re not going to list all 50 sites here for space considerations.
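Scripting the five-tab setup is straightforward with Python's standard `webbrowser` module. This is a sketch of how the setup could be automated, not CNET's actual tooling; `open_test_tabs` and the `dry_run` flag are our own:

```python
import webbrowser

# The five sites used in the less-intensive test.
TEST_TABS = [
    "https://www.google.com",
    "https://www.cnet.com",
    "https://www.outlook.com",
    "https://www.youtube.com",
    "https://www.pandora.com",
]

def open_test_tabs(urls=TEST_TABS, dry_run=True):
    """Open each URL in a new tab of the default browser.

    With dry_run=True, just return the list of URLs that would be opened,
    which is handy for checking the setup without launching anything."""
    for url in urls:
        if not dry_run:
            webbrowser.open_new_tab(url)
    return list(urls)
```

The 50-tab variant would just pass a longer list; the mechanics are identical.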
Check back soon for updates on how CNET tests mobile browsers.