On 06/01/12 03:56, Subhranath Chunder wrote:
> With that in mind, how should we measure response complexity?
> Any particular parameter, scale? Probably I can measure against
> that, and share the numbers to shed more light on how many
> requests can be handled with a particular hardware config.
There are a pretty small number of bottlenecks:
1) CPU
1a) computation (a CPU-bound problem)
1b) algorithm (usually manifests as a CPU problem)
2) I/O
2a) disk
2b) network
2c) memory
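As a rough way to tell these apart, Python's built-in cProfile shows where the time actually goes (the handler function here is a hypothetical stand-in for the code under test):

    import cProfile
    import pstats

    def handler():
        """Hypothetical stand-in for the view or job being measured."""
        sum(i * i for i in range(10**6))  # pretend CPU-bound work

    # Profile the call and print the ten functions with the most cumulative
    # time; lots of time in your own code suggests CPU/algorithm, lots of
    # time in socket/file calls suggests I/O.
    cProfile.run("handler()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)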
Most of them can be mitigated through an improvement in algorithm or
by adding an appropriate caching layer. There's also perceived
performance: spawning off asynchronous tasks (such as via Celery)
can give the user the feel of "I've accepted your request and I'm
letting you know it will take a little while to complete".
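A minimal sketch of that pattern, assuming Celery is already wired into the project (the task and view names are made up for illustration):

    from celery import shared_task
    from django.http import HttpResponse

    @shared_task
    def generate_report(report_id):
        ...  # hypothetical slow, CPU- or I/O-heavy work

    def request_report(request, report_id):
        # Queue the work and return right away; the user gets a fast
        # "accepted" response instead of waiting on the job itself.
        generate_report.delay(report_id)
        return HttpResponse("Accepted; check back shortly.", status=202)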
In any testing that I do, I try to determine where the bottleneck
is, and if that can be improved: Did I choose a bad algorithm? Am
I making thousands of queries (and waiting for them to round-trip to
the database) when a handful would suffice? Am I transmitting data
that I don't need to or that I could have stashed in a cache
somewhere? Am I limited by the CPU/pipe/disks on my machine?
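For the query and cache questions, a couple of Django-flavored sketches (Book/Author and the cache key are made-up examples):

    from django.core.cache import cache
    from myapp.models import Book  # hypothetical model with a FK to Author

    # Thousands of round-trips: one query for the books, then one per author.
    titles = [(b.title, b.author.name) for b in Book.objects.all()]

    # A handful: select_related() pulls the authors in via a JOIN.
    titles = [(b.title, b.author.name)
              for b in Book.objects.select_related("author")]

    # Data that could have been stashed in a cache somewhere:
    report = cache.get("expensive-report")
    if report is None:
        report = build_expensive_report()  # hypothetical slow function
        cache.set("expensive-report", report, 60 * 15)  # keep for 15 minutes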
There's no single number to measure the complexity, but often
there's an overriding factor that can be found and addressed, at
which point another issue may surface.
-tkc