
Designing Reliable LTE Services,
Carolyn Johnson,
200 S Laurel Ave, Bldg B, Middletown, NJ,
conference proceedings, 2011.
[BIB]
{Many services today are rapidly evolving their basic hardware, software, and service capabilities and functionalities under marketplace pressure to deliver more value to both the service provider and the service user. Maintaining high levels of reliability and performance under such rapid rates of change poses many challenges for reliability and performance engineering at both equipment vendors and service providers. The introduction of LTE for wireless applications, with advances in the radio layer and packet core combined with the IMS application layer, is used as an example to discuss performance-related issues and possible approaches to managing service reliability under rapid change.
To set the background, we note that LTE services are evolving toward IP Multimedia Subsystem (IMS) based solutions as a means for LTE to converge voice and data services. IMS provides an architectural framework for multimedia applications, including Voice over IP, presence-based services, video streaming, and other applications. LTE and IMS require the introduction of many new and complex technologies, and these newer technologies will need to interact seamlessly with the existing deployed technologies.
Our primary objective is to discuss some of the challenges such an evolution raises; secondarily, we present an initial application of a Service Reliability methodology for assessing service reliability, taking into consideration redundancy structures, failure analysis, and expected reliability metrics.
}
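The reliability-assessment ingredients the abstract names (redundancy structures and expected reliability metrics) can be illustrated with a small calculation. The sketch below uses the standard steady-state availability formulas for a simplex element and an N-way redundant group; the MTBF/MTTR figures are invented examples, not values from the paper.

```python
# Hypothetical illustration of a standard availability calculation for
# redundancy structures; the MTBF/MTTR figures are invented examples.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single (simplex) element."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def parallel_availability(a: float, n: int = 2) -> float:
    """Availability of n redundant elements where any one suffices
    (e.g. 1+1 active/standby): the group fails only if all n fail."""
    return 1.0 - (1.0 - a) ** n

def annual_downtime_minutes(a: float) -> float:
    """Expected downtime per year implied by an availability figure."""
    return (1.0 - a) * MINUTES_PER_YEAR

simplex = availability(mtbf_hours=10_000, mttr_hours=4)  # one element
duplex = parallel_availability(simplex, n=2)             # 1+1 redundancy

print(f"simplex availability: {simplex:.6f}")
print(f"duplex availability:  {duplex:.9f}")
print(f"simplex downtime/yr:  {annual_downtime_minutes(simplex):.1f} min")
print(f"duplex downtime/yr:   {annual_downtime_minutes(duplex):.4f} min")
```

Comparing the two downtime figures shows why redundancy structures dominate the expected reliability metrics of such an assessment.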
Method And Apparatus For Throttling Requests To A Server,
Tue Dec 28 18:10:14 EST 2004
The present invention provides a throttling system that can throttle incoming requests to a server and that includes a variable-sized buffer for holding incoming calls prior to processing by the server. The number of requests held in a queue by the buffer can depend on the overload status of the server. If the server is not overloaded, the number of requests held in the buffer can be large, such as the full capacity of the buffer. Alternatively, if the server has been overloaded for a predetermined amount of time, the number of requests held in the buffer can be decreased, such as to only a portion of the full capacity. Any requests that arrive once the buffer is at its current capacity can be discarded or blocked. Accordingly, reducing the buffer size in an overloaded state yields superior delay performance without increasing the rate at which requests to the processor are blocked.
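The mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and parameter names (`AdaptiveBuffer`, `reduced_fraction`, `overload_window`) are hypothetical, and overload detection is reduced to an externally supplied flag.

```python
from collections import deque

class AdaptiveBuffer:
    """Sketch of a request buffer whose effective size shrinks once the
    server has been overloaded for a sustained period (hypothetical names;
    a simplification of the throttling scheme described in the abstract)."""

    def __init__(self, capacity: int, reduced_fraction: float = 0.25,
                 overload_window: float = 5.0):
        self.capacity = capacity                # full buffer size
        self.reduced = max(1, int(capacity * reduced_fraction))
        self.overload_window = overload_window  # seconds before shrinking
        self.overload_since = None              # when overload began
        self.queue = deque()

    def set_overloaded(self, overloaded: bool, now: float) -> None:
        """Record the server's overload status (detection is external here)."""
        if overloaded and self.overload_since is None:
            self.overload_since = now
        elif not overloaded:
            self.overload_since = None

    def _limit(self, now: float) -> int:
        # Shrink only after overload has persisted for the whole window,
        # mirroring the "predetermined amount of time" in the abstract.
        if (self.overload_since is not None
                and now - self.overload_since >= self.overload_window):
            return self.reduced
        return self.capacity

    def offer(self, request, now: float) -> bool:
        """Queue the request, or discard it (return False) if the
        current effective buffer size has been reached."""
        if len(self.queue) >= self._limit(now):
            return False  # request discarded/blocked at the buffer
        self.queue.append(request)
        return True
```

Under this sketch, requests queued beyond the reduced limit are rejected early instead of waiting in a long queue, which is how the shorter buffer trades no extra blocking at the processor for lower queueing delay.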