First, it can impose limits on maximum and average DRM server response times.
Once the CPU utilization on the application server approaches and hits the 100% mark, any increase in the number of users only results in poorer response times.
This would cause major performance degradation, and lead to longer response times from the server.
The index is a value between 0 and 100, where 100 indicates a lightly loaded server (fast response times), and 0 is a heavily loaded server (slow response times).
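As a hedged illustration of such a 0-100 index (this is not the actual Domino Server Availability Index formula), the following Java sketch maps an observed average response time onto that scale against an assumed lightly-loaded baseline and an assumed saturation threshold; both thresholds and the linear scaling are illustrative assumptions.

```java
// Minimal sketch, not the Domino formula: maps an observed average response
// time to a 0-100 availability-style score relative to assumed thresholds.
public final class AvailabilityIndex {

    /**
     * Returns 100 when the observed time is at or below the lightly-loaded
     * baseline, 0 when it is at or above the saturation threshold, and a
     * linear interpolation in between.
     */
    public static int compute(double observedMillis,
                              double baselineMillis,
                              double saturationMillis) {
        if (observedMillis <= baselineMillis) {
            return 100;
        }
        if (observedMillis >= saturationMillis) {
            return 0;
        }
        double fraction = (observedMillis - baselineMillis)
                / (saturationMillis - baselineMillis);
        return (int) Math.round(100 * (1.0 - fraction));
    }

    public static void main(String[] args) {
        // Example: assumed baseline 50 ms, saturation 2000 ms, observed 500 ms.
        System.out.println(compute(500, 50, 2000)); // prints 77
    }
}
```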
By using these workloads, we can determine the capacity and response times for a simulated number of users running a particular workload on the Domino server.
Actual production response times will vary by machine, amount of memory, network load and speed, Web server load, and the processing time consumed by other applications.
This will, of course, be at the expense of increasing average response times between the Rational Requirements Composer client and server, because requests are given a longer time to complete.
Because most performance test tools are optimized to measure server response times, this leaves a gap in both the functional and performance testing of Web 2.0 applications that use these new technologies.
Servlet response time: helps compare and contrast the response times observed on the application server against those measured on the load test clients.
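A minimal sketch of how such server-side servlet response times could be captured for comparison with client-side measurements: the Filter below uses the standard javax.servlet API; logging to stdout (rather than a metrics registry) is an illustrative assumption to keep it self-contained.

```java
// Hedged sketch: a servlet Filter that records per-request server-side
// response times so they can be compared with load test client timings.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ResponseTimeFilter implements Filter {

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        long start = System.nanoTime();
        try {
            chain.doFilter(request, response);   // run the servlet itself
        } finally {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            // In practice this would feed a metrics registry; stdout keeps
            // the sketch self-contained.
            System.out.println("servlet response time: " + elapsedMillis + " ms");
        }
    }

    @Override
    public void destroy() { }
}
```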
This can reduce response times on the middle-tier server, and also improve the scalability of the system by reducing the number of connections and calls to the Library server.
Depending upon the load placed on the server and the relative size of the entity bean that has been requested, queries on entity beans can have sub-par response times.
GC in the server caused variation in response times and, in the event of a large GC, could cause cache clients (L1s) to fail over to a backup Terracotta server.
Figures 2c and 2d show server response times for the largest data environments, the 250k and 500k repositories, under a variety of user loads (100, 200, 300, and 500).
The StackExchange team takes performance very seriously; for instance, you can see how StackOverflow uses caching at three different levels to improve response times and reduce server load.
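As a hedged sketch of the general idea of multi-level caching (not StackOverflow's actual implementation), the code below checks a fast in-process cache first, then a shared cache, and only then hits the backing store; the SharedCache and Database interfaces are illustrative assumptions.

```java
// Hedged sketch of multi-level caching to reduce server load: L1 in-process,
// L2 shared cache, L3 backing store. Interfaces are illustrative assumptions.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MultiLevelCache {

    interface SharedCache { String get(String key); void put(String key, String value); }
    interface Database   { String load(String key); }

    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final SharedCache sharedCache;
    private final Database database;

    public MultiLevelCache(SharedCache sharedCache, Database database) {
        this.sharedCache = sharedCache;
        this.database = database;
    }

    public String get(String key) {
        // Level 1: in-process cache, cheapest lookup.
        String value = localCache.get(key);
        if (value != null) {
            return value;
        }
        // Level 2: shared cache (e.g. an out-of-process store).
        value = sharedCache.get(key);
        if (value == null) {
            // Level 3: the backing store itself.
            value = database.load(key);
            if (value != null) {
                sharedCache.put(key, value);
            }
        }
        if (value != null) {
            localCache.put(key, value);
        }
        return value;
    }
}
```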
Determining the server configuration that ensures adequate response times under the expected server load is a well-documented and well-studied science.
Note that the response times that are taken into account are server-based and do not include any consideration for network time.
Response times are reduced and the scalability of the Document Manager middle-tier server is greatly increased since it is now doing only the work required to actually perform the user's request.
This improves the response times, and also the scalability of both the middle-tier server and the Library server.