Gone in 60 Seconds (or 8, or 2, or 400 ms)…

Note: This article was originally posted on Loadtester.com, and has been migrated to the Northway web site to maintain the content online.

According to Robert B. Miller[1], here are acceptable response times for various actions:

  • One tenth of a second (0.1) is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
  • One second (1.0) is about the limit for the user’s flow of thought to remain uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
  • Ten seconds (10.0) is about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users turn to other tasks while waiting for the computer to finish. Getting a new page within 10 seconds, while annoying, at least means that the user can stay focused on navigating the site.
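
To make these limits concrete, here is a minimal sketch that buckets a measured response time against Miller's three thresholds. The thresholds come from the list above; the function and category names are my own illustration, not part of Miller's paper.

```python
# Illustrative only: bucket a measured response time against Miller's limits.
# Thresholds come from the list above; the function and category names are
# just one possible way to express them.

def perception_bucket(response_time_s: float) -> str:
    """Classify a response time (in seconds) by how a user is likely to perceive it."""
    if response_time_s <= 0.1:
        return "instantaneous"        # feels like direct manipulation
    if response_time_s <= 1.0:
        return "noticeable delay"     # flow of thought stays uninterrupted
    if response_time_s <= 10.0:
        return "attention strained"   # user stays focused on the dialogue, barely
    return "attention lost"           # user turns to other tasks

if __name__ == "__main__":
    for t in (0.05, 0.4, 3.0, 12.0):
        print(f"{t:>5.2f} s -> {perception_bucket(t)}")
```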

Sounds like a great gauge to go by for the newly developed, highly scalable infrastructure of today, doesn’t it? Actually, this was written in 1968! Ten seconds was a long time back then, and it’s a long time now, when we are talking about user interaction with computers. There are some things the designers and hosts of a web application can control. You can make sure that you optimize your code, eliminate round-trip calls, and performance test and tune throughout the development lifecycle.

By doing this, you have a much more confident picture of what to expect in the production environment. Other things, like latency from dial-up modems and backbone delays on a major ISP, you just can’t control. You cannot make the Internet speed up. If you plan ahead and prepare for those worst cases where the user is under conditions you cannot control, you have a huge head start. Unfortunately, even large companies don’t always understand this. I’ve seen development shops put an application into production with web page response times of 60 seconds because they were willing to live with something fundamentally flawed in their application; correcting it would have taken too long to get to market. “Throw hardware at it. Get a bigger pipe. We don’t have time.” Heard this before?

Many of you have heard about the “8 second rule” for web applications: the longer a web page takes to render beyond 8 seconds, the higher the rate and frequency of user abandonment. The levels of satisfaction, tolerance, and frustration obviously depend on the patience and character of the user[2].

What will not change is that a web page that is fast on an internal LAN will still take 6 seconds longer over a 28.8 modem connection because of latency. You can’t control the “last mile”. For internet-facing web sites to overcome this, you need to test under those worst-case conditions. I suggest using WAN emulation technology inside a pristine lab environment, using load generators outside the firewall, or using a cloud-based solution to get the performance of actual users out on the Internet.
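
To see where a figure like “6 seconds longer” can come from, here is a rough back-of-envelope sketch. The round-trip count and RTT values are assumptions of mine, not measurements from the article; the point is only that sequential round trips multiplied by last-mile latency add up to whole seconds before bandwidth even enters the picture.

```python
# Back-of-envelope sketch (illustrative assumptions, not measurements):
# the latency penalty alone is roughly (sequential round trips) x (extra RTT).

def latency_penalty_s(round_trips: int, slow_rtt_s: float, fast_rtt_s: float) -> float:
    """Extra page load time attributable purely to round-trip latency on the slower link."""
    return round_trips * (slow_rtt_s - fast_rtt_s)

ROUND_TRIPS = 20     # assumed sequential request/response exchanges for one page
LAN_RTT = 0.002      # ~2 ms round trip on an internal LAN (assumed)
MODEM_RTT = 0.300    # ~300 ms round trip on a 28.8 kbps dial-up link (assumed)

extra = latency_penalty_s(ROUND_TRIPS, MODEM_RTT, LAN_RTT)
print(f"Latency alone adds roughly {extra:.1f} seconds for the dial-up user")  # ~6 s
```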

To pull this kind of scenario off, there must be a champion at the management level committed to performance as an investment. These tools aren’t cheap, but they provide the kind of value and reassurance that in many cases you cannot buy. If management isn’t committed to performance now, don’t worry; it usually takes only one time getting bitten by a client to create that commitment. If you are at the management level, are you willing to bet the business that your users won’t be gone in 60 seconds? I’m betting they will!

05/27/2012 UPDATE: In 2009, Forrester Research (on behalf of Akamai) reported that web pages should load within 2 seconds, down from 4 seconds in 2006. In 2012, Google released information indicating that 400 MILLISECONDS is too long.

[1] R. B. Miller, “Response Time in Man-Computer Conversational Transactions,” Proceedings of the AFIPS Fall Joint Computer Conference, 1968.

[2] Peter Sevcik, “Understanding How Users View Application Performance,” Business Communications Review, July 2002, pp. 8–9.