What is the function of IP QoS?
Abstract

IP networks are gradually becoming the basic communication platform, and more and more value-added services, especially multimedia services, will run over IP. How to guarantee the quality of service of these services on IP networks has therefore become a key issue for the development of new services. This paper introduces the main architectures currently used to realize QoS on IP networks, focusing on the principles of DiffServ, the mainstream architecture, and on the QoS capabilities of a new generation of service-oriented network devices represented by the Alcatel 7750 SR service router.

Keywords: IP; QoS; differentiated services; 7750 SR

With the rapid development of the Internet worldwide, IP technology has gradually become a broad and universal network platform. Its economy, flexibility and ability to support a variety of services are unmatched by the original circuit-switched networks. However, traditional IP technology can only forward packets in a best-effort way: it transmits packets as quickly as its own capacity allows, without any guarantee of throughput, delay, delay jitter or packet loss rate, and transmission losses are left for the end systems to handle.

This best-effort transmission mode used to be adequate, because most traditional IP-based applications (such as Telnet and FTP) can tolerate large delay and delay jitter. However, the situation is changing rapidly. New services such as IP telephony and video are coming into wide use. These multimedia services need large bandwidth and have strict timing requirements; moreover, the exponential growth in the number of Internet users leads to ever more serious network delay and congestion.

Although expanding the capacity of network nodes and links is certainly part of the solution, simply adding bandwidth wherever problems appear is not enough, because temporary, bursty congestion on the Internet can never be eliminated completely. The new generation of the Internet must be able to provide different levels of protection for particular applications and users, that is, to realize quality of service (QoS) on IP networks. Combined with Service Level Agreements (SLAs), this allows IP service providers to actually make a profit by offering differentiated services to their customers.

IP QoS models

At present, the IETF has defined a number of models and mechanisms to realize QoS over IP. The main models are as follows:

I. Relative Priority Marking Model [1]

The relative priority marking model is the earliest QoS model. The terminal application or a proxy sets a relative priority for its data stream and marks the corresponding packet header, and the network nodes then forward the packets according to that marking. This model is very simple to implement, but it is coarse-grained and lacks more advanced QoS processing (such as metering, policing and shaping), so it cannot provide detailed and diverse QoS guarantees. Technologies that adopt this model include IPv4 precedence (RFC 791); token ring priority (IEEE 802.5) and the Ethernet traffic class (IEEE 802.1p) also follow this architecture.

II. Integrated Services Model (IntServ) [2]

Its design idea is to define a series of extensions on top of the best-effort service model that can provide application-level QoS for each network connection; a signaling protocol is used to create and maintain per-flow state in every router along the path in order to meet the requirements of the corresponding network service.

IntServ can clearly distinguish and guarantee the quality of service of each individual flow, providing the finest-grained QoS differentiation a network can offer. However, it is problematic to implement in an IP core network, because IntServ requires every network node to perform a considerable amount of computation and processing for each flow: end-to-end signaling and the associated state needed to distinguish each flow, tracking and accounting of resource usage, policy control and traffic scheduling. As the number of IntServ flows grows, the processing and storage of IntServ signaling rapidly consumes router resources and greatly increases the complexity of network management, so the scalability of this model is poor. The main technology adopting this model today is MPLS-TE (RSVP-TE); ATM and Frame Relay are other typical examples.

III. Differentiated Services Model (DiffServ) [3]

Compared with IntServ, which acts on each individual flow, the DiffServ architecture divides traffic into a limited number of differentiated service classes (at most 64). The service class of a traffic flow is indicated by the Differentiated Services Code Point (DSCP) in its IP header. In a DiffServ network, each router forwards packets according to the DSCP field, applying the corresponding per-hop behavior (PHB).
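As a rough illustration of where the DSCP sits, the following Python sketch extracts and rewrites the 6-bit DSCP inside the 8-bit ToS/Traffic Class byte of an IP header. The function names and example values are illustrative only; they are not taken from any particular device.

```python
# Minimal sketch: the DSCP occupies the upper 6 bits of the IPv4 ToS byte;
# the lower 2 bits are used for ECN. Names here are illustrative only.

def dscp_from_tos(tos: int) -> int:
    """Extract the 6-bit DSCP from an 8-bit ToS/Traffic Class byte."""
    return (tos >> 2) & 0x3F          # 64 possible DiffServ code points

def tos_with_dscp(tos: int, dscp: int) -> int:
    """Rewrite the DSCP while preserving the 2 ECN bits."""
    return ((dscp & 0x3F) << 2) | (tos & 0x03)

# Example: ToS byte 0xB8 corresponds to DSCP 46 (commonly used for EF/VoIP).
tos = 0xB8
print(dscp_from_tos(tos))             # -> 46
print(hex(tos_with_dscp(0x00, 46)))   # -> 0xb8
```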

Although DiffServ cannot guarantee a distinct quality of service for every individual flow, its class-based approach means that no signaling protocol is needed to establish and maintain per-flow state on each router. This saves router resources, so the scalability of the network is much better. In addition, DiffServ can be used not only in pure IP networks but also in MPLS networks, by mapping the DSCP to MPLS labels and to the EXP field in the label header.

The DiffServ architecture is divided into two layers: an edge layer and a core layer.

The edge layer completes the following tasks:

-Traffic identification and filtering: when user traffic enters the network, edge devices first identify it according to predefined rules and filter out illegitimate traffic, then map it to different service classes according to information contained in the packets (such as source/destination address, port number or DSCP).

-Traffic policing and shaping: once the user's traffic has been mapped to a service class, the edge device polices and shapes it according to the QoS parameters in the SLA signed with the user, such as CIR (Committed Information Rate) and PIR (Peak Information Rate), so that the traffic admitted into the network never exceeds what the SLA allows (see the token-bucket sketch after this list).

-Traffic re-marking: according to the assigned service class, edge devices set the class marking in the packets, such as the DSCP field in the IP header or the EXP field in the MPLS header, so that core devices can identify and process them.
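To make the CIR/PIR policing step more concrete, here is a simplified two-rate token-bucket meter in Python, in the spirit of a two-rate three-color marker. It is only a sketch; the class name, parameter values and marking colors are assumptions for the example, not the algorithm of any particular edge device.

```python
import time

class TwoRateMeter:
    """Simplified two-rate meter: marks packets green (within CIR),
    yellow (above CIR but within PIR), or red (above PIR). Illustrative only."""

    def __init__(self, cir_bps: float, pir_bps: float, burst_bytes: float):
        self.cir = cir_bps / 8.0          # committed rate, bytes/s
        self.pir = pir_bps / 8.0          # peak rate, bytes/s
        self.c_tokens = burst_bytes       # committed bucket
        self.p_tokens = burst_bytes       # peak bucket
        self.burst = burst_bytes
        self.last = time.monotonic()

    def mark(self, pkt_len: int) -> str:
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        # Refill both buckets, capped at the burst size.
        self.c_tokens = min(self.burst, self.c_tokens + elapsed * self.cir)
        self.p_tokens = min(self.burst, self.p_tokens + elapsed * self.pir)
        if pkt_len > self.p_tokens:
            return "red"                  # above PIR: drop or remark
        self.p_tokens -= pkt_len
        if pkt_len > self.c_tokens:
            return "yellow"               # above CIR: out-of-profile
        self.c_tokens -= pkt_len
        return "green"                    # within CIR: in-profile

meter = TwoRateMeter(cir_bps=2_000_000, pir_bps=10_000_000, burst_bytes=15_000)
print(meter.mark(1500))                   # first packet fits the burst -> "green"
```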

Compared with the edge layer, the work of the core layer is much simpler. Core devices mainly inspect the relevant QoS fields in each packet according to the preset QoS policy and apply the corresponding QoS treatment. This hierarchical structure forms a QoS architecture of "intelligent edge + simple core", which improves both the scalability of the network and the flexibility of QoS processing.
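The core layer's work can thus be reduced to little more than a table lookup: read the DSCP and apply the per-hop behavior configured for it. A minimal sketch follows; the table contents, class names and scheduling labels are illustrative assumptions, not a vendor default.

```python
# Hypothetical DSCP -> per-hop-behavior table for a core node (illustrative).
PHB_TABLE = {
    46: ("EF", "strict-priority"),    # expedited forwarding, e.g. VoIP
    10: ("AF11", "weighted"),         # assured forwarding
    0:  ("BE", "best-effort"),        # default class
}

def lookup_phb(dscp: int):
    """Return (class, scheduling behavior) for a DSCP, defaulting to best effort."""
    return PHB_TABLE.get(dscp, PHB_TABLE[0])

print(lookup_phb(46))   # ('EF', 'strict-priority')
print(lookup_phb(20))   # unconfigured code point falls back to ('BE', 'best-effort')
```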

How IP network devices implement differentiated services

Thanks to the flexibility and scalability of DiffServ, almost all IP network devices now support the DiffServ architecture.

Supporting the DiffServ architecture on a network device generally requires the following functions:

-Multi-condition traffic differentiation

Multi-condition traffic differentiation means assigning traffic to forwarding classes according to information carried in the received customer traffic and predefined differentiation rules. The differentiation rules are similar in format to access control lists (ACLs); each rule contains a set of match conditions and the corresponding forwarding class. When customer traffic matches the conditions of a rule, it is assigned to that forwarding class. The match conditions can be physical ports, VLANs, various IP fields or various MAC fields.
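These ACL-like rules can be pictured as an ordered list of match conditions, each tied to a forwarding class, evaluated until the first match. The sketch below is purely illustrative; the field names, rule contents and class names are made up for the example.

```python
import ipaddress

# Each rule: a dict of match conditions plus the forwarding class to assign.
RULES = [
    {"match": {"dst_port": 5060, "protocol": "udp"}, "fc": "EF"},   # e.g. VoIP signaling
    {"match": {"src_ip": "10.1.0.0/16"},             "fc": "AF"},
    {"match": {},                                    "fc": "BE"},   # default rule
]

def classify(pkt: dict) -> str:
    """Return the forwarding class of the first rule whose conditions all match."""
    for rule in RULES:
        ok = True
        for field, want in rule["match"].items():
            if field == "src_ip":
                ok = ipaddress.ip_address(pkt["src_ip"]) in ipaddress.ip_network(want)
            else:
                ok = pkt.get(field) == want
            if not ok:
                break
        if ok:
            return rule["fc"]
    return "BE"

print(classify({"src_ip": "10.1.2.3", "dst_port": 80, "protocol": "tcp"}))  # -> "AF"
```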

-Traffic marking and forwarding class mapping

Traffic sorted by the differentiation rules is mapped to different forwarding classes. DiffServ defines several standard forwarding classes:

-Expedited Forwarding (EF) class

The expedited forwarding class has the highest forwarding priority. The device must ensure that traffic of other forwarding classes cannot affect the delay and jitter of expedited forwarding traffic, so this class is typically used for network control traffic and jitter-sensitive traffic such as VoIP.

-Assured Forwarding (AF) class

The assured forwarding class is very similar to Frame Relay QoS: it provides the parameters CIR (Committed Information Rate) and PIR (Peak Information Rate) for customer traffic. When customer traffic stays below the CIR it is marked "in-profile", and when it exceeds the CIR it is marked "out-of-profile". With this distinction, when congestion occurs in the network, out-of-profile traffic is discarded before in-profile traffic.

-Best-Effort (BE) class

Best effort is the lowest-priority forwarding class. Best-effort traffic is processed only after the traffic of the expedited and assured forwarding classes has been forwarded.

Once the forwarding class of the traffic has been determined, the device marks the traffic accordingly, so that downstream network devices can identify it, treat it in the same way and thereby apply a uniform QoS policy. The marking fields defined in the DiffServ standards are the DSCP field in the IP header and the EXP field in the MPLS header.

-Queuing and scheduling

The per-class forwarding treatment in DiffServ is realized by queuing and scheduling. A queue is a logical concept: in practice it is a buffer in the device's high-speed memory operating on a first-in-first-out basis. A system usually has multiple queues, one per forwarding class; when a packet has been assigned to a forwarding class it is stored in the corresponding queue, and the system then schedules it according to the class and its configured parameters (PIR, CIR). Different forwarding classes often use different scheduling algorithms. The expedited forwarding class uses strict-priority scheduling, that is, packets in the expedited forwarding queue are always scheduled first to guarantee their highest priority. The assured forwarding classes use a weighted round-robin style of scheduling: in-profile traffic is scheduled first, then out-of-profile traffic, so that every assured forwarding queue can be served according to its CIR and PIR.
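As a rough sketch of this scheduling idea, the Python fragment below serves one EF queue with strict priority and two AF queues in weighted round-robin with an assumed 3:1 weight. It deliberately ignores CIR/PIR accounting and in-/out-of-profile ordering, and the queue names and weights are illustrative, not how any specific router implements its scheduler.

```python
from collections import deque
from itertools import cycle

# Illustrative queues: one expedited (EF) queue served with strict priority,
# two assured-forwarding queues served weighted round-robin (weights 3:1).
ef_queue = deque()
af_queues = {"af1": deque(), "af2": deque()}
wrr_order = cycle(["af1"] * 3 + ["af2"])     # service order: af1, af1, af1, af2, ...

def schedule_one():
    """Return the next packet to transmit, or None if all queues are empty."""
    if ef_queue:                              # strict priority: EF always goes first
        return ef_queue.popleft()
    for _ in range(4):                        # walk one full WRR cycle (3 + 1 slots)
        name = next(wrr_order)
        if af_queues[name]:
            return af_queues[name].popleft()
    return None

ef_queue.append("voip-pkt")
af_queues["af1"].extend(["data-1", "data-2"])
af_queues["af2"].append("bulk-1")
print(schedule_one())   # -> 'voip-pkt' (EF served before everything else)
print(schedule_one())   # -> 'data-1' (af1 gets 3 slots for every 1 slot of af2)
```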

-Congestion control

When a queue's buffer is full, the system is congested. Large numbers of newly arriving packets are then discarded until the data sources detect the losses and TCP reduces its sending rate through its window-based mechanism, which relieves the congestion and allows forwarding to resume. But as the source rates increase again, the system becomes congested and drops packets again, and this oscillation has a considerable impact on overall network performance. For this reason a congestion control mechanism is usually introduced into the DiffServ architecture; the common algorithms are RED and WRED. RED, random early detection, randomly discards a small fraction of packets before congestion actually occurs, so that TCP senders reduce their transmission rates. The drop probability increases with the occupancy of the queue buffer, which avoids massive packet loss.
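The core of RED can be sketched in a few lines: the drop probability grows linearly with the averaged queue occupancy between a minimum and a maximum threshold. The thresholds and maximum probability below are illustrative values chosen for the example, not recommendations.

```python
import random

# Illustrative RED parameters: start dropping at 40% occupancy,
# drop everything above 80%, with at most 10% drop probability in between.
MIN_TH, MAX_TH, MAX_P = 0.4, 0.8, 0.1

def red_drop(avg_occupancy: float) -> bool:
    """Decide whether to drop an arriving packet given average queue occupancy (0..1)."""
    if avg_occupancy < MIN_TH:
        return False                            # queue lightly loaded: never drop
    if avg_occupancy >= MAX_TH:
        return True                             # queue nearly full: always drop
    # Linear ramp between the two thresholds.
    p = MAX_P * (avg_occupancy - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

print(red_drop(0.3))   # False
print(red_drop(0.9))   # True
# WRED simply uses different (MIN_TH, MAX_TH, MAX_P) sets per drop precedence,
# e.g. dropping "out-of-profile" packets earlier than "in-profile" ones.
```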

-MPLS differentiated services

With the wide deployment of MPLS technology, the IETF MPLS working group has defined two methods for mapping the IP DiffServ classes onto MPLS LSPs:

-E-LSP

E-LSP maps the IP DiffServ classes using the EXP field in the MPLS header. It is relatively simple, but the EXP field is only 3 bits long, so it can represent at most 8 classes.

-L-LSP

L-LSP uses not only the EXP field but also the MPLS label itself for the mapping, which greatly increases the number of classes that can be represented, at the cost of consuming many of the limited MPLS label resources.
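Because the EXP field is only 3 bits, an E-LSP can carry at most 8 classes, so the 6-bit DSCP space has to be folded down. One simple illustrative convention, assumed here only for the example (the actual mapping is configured per network), is to reuse the top three DSCP bits:

```python
# Illustrative DSCP -> 3-bit EXP mapping for an E-LSP:
# reuse the top three DSCP bits (the old IP precedence bits).
def dscp_to_exp(dscp: int) -> int:
    return (dscp >> 3) & 0x7     # 64 DSCP values folded into 8 EXP values

print(dscp_to_exp(46))   # EF  (DSCP 46) -> EXP 5
print(dscp_to_exp(10))   # AF11 (DSCP 10) -> EXP 1
print(dscp_to_exp(0))    # best effort    -> EXP 0
```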

It is worth mentioning that another way to support QoS with MPLS is traffic engineering, namely RSVP-TE. While an LSP is being established, RSVP-TE can reserve bandwidth at the nodes along its path. As an IntServ-style mechanism, it is usually combined with the DiffServ architecture to guarantee the bandwidth of the trunk links carrying the service.

As more and more multimedia services run over IP networks, higher QoS requirements are placed on them, while traditional IP routers and switches can only provide simple connectivity. Although most of them also support the DiffServ architecture, their traditional hardware architecture and technology prevent them from providing full differentiated-service support. For example, many traditional devices support only a handful of queues per physical port, which cannot meet the needs of large-scale service development; some devices suffer a large drop in forwarding performance once the DiffServ function is enabled; and some devices support only simple rate limiting without forwarding classes, or support forwarding classes but cannot flexibly share the available bandwidth among them. These limitations greatly restrict the development of new IP services.

For this reason, Alcatel Shanghai Bell has launched a new generation of IP products, including the 7750 SR IP service router and the 7450 ESS Ethernet service switch. Fundamentally different from traditional equipment, the Alcatel 7750 SR and 7450 ESS are designed from the ground up for new IP/MPLS services, and both product lines have a strong, complete service-based QoS system. Take the 7750 SR as an example:

The 7750 SR supports service-based QoS policies: a dedicated QoS policy can be defined for each service instance (such as each VPN service) on the 7750 SR. At the service access port (where multiple users or services share the same physical port), every application flow of every user can be shaped independently on input and output; each application flow gets its own buffer queue, and each queue can be configured with its own shaping parameters such as CIR, PIR and MBS. The 7750 SR also supports industry-leading hierarchical QoS scheduling. Each line card on the 7750 SR supports 32,000 queues, far more than traditional equipment. Unicast packets and multicast/broadcast packets can be handled in separate queues, preventing broadcast or multicast traffic from crowding out the resources of unicast traffic. Moreover, customer traffic can receive QoS treatment not only at the ingress but also at the egress, which greatly increases the flexibility of QoS policies.

The QoS system of the 7750 SR consists of three main parts: traffic classification, buffer management and traffic scheduling.

1. User traffic is divided into different service classes according to a predefined classification policy. The 7750 SR supports a powerful and flexible classification policy which can classify user traffic according to the following information:

IP ACL: Src/Dest IP address/range, Src/Dest port/range, IP fragment, protocol type, IP precedence, DSCP.

MAC ACL: 802.1p, Src/Dest MAC address/mask, Ethernet type value, 802.2 LLC SSAP/DSAP/SNAP value/mask.

MPLS: EXP field (E-LSP).

2. Each service class is assigned a dedicated queue, and each queue has its own configurable QoS parameters.

CIR (Committed Information Rate): when the dequeue rate of a queue is below the CIR, the queue's traffic is marked "in-profile"; when the dequeue rate exceeds the CIR it is marked "out-of-profile". Within the same class, in-profile traffic is scheduled ahead of out-of-profile traffic.

PIR (Peak Information Rate): when the dequeue rate of a queue exceeds the PIR, the system stops scheduling packets from that queue.

3. Queues are allocated from the buffer pool on each line card, and each queue has two configurable parameters governing buffer allocation.

CBS (Committed Burst Size): the guaranteed queue depth. Once the queue depth exceeds the CBS, the system can no longer guarantee that newly arriving packets will be allocated buffer space.

MBS (Maximum Burst Size): the maximum queue depth. When the queue depth exceeds the MBS, newly arriving packets are discarded. This parameter prevents a queue whose arrival rate exceeds its PIR for a long time, and which therefore cannot be drained by the scheduler, from growing indefinitely and consuming the buffer resources needed by other packets.

4. The scheduler controls dequeue scheduling between queues.

The 7750 SR supports the most advanced hierarchical queue scheduling technology in the industry today. It can not only control the total bandwidth of a single service or of multiple services, but also further subdivide the QoS of each service within that total bandwidth, truly realizing a powerful and flexible SLA guarantee. Hierarchical scheduling builds a multi-level logical scheduler: an upper-level scheduler controls the total bandwidth of a group of lower-level schedulers and allocates the lower-level schedulers' CIR and PIR appropriately according to their level and weight. A practical application looks like this:

A user and the operator sign an SLA with a total bandwidth of 10M covering three kinds of traffic: voice with CIR = PIR = 2M, video with CIR = PIR = 2M, and Internet access with CIR = 0 and PIR = 10M. Without hierarchical scheduling, if all three kinds of traffic burst to their maximum at the same time, their combined demand exceeds the 10M total and the aggregate cannot be controlled. With hierarchical scheduling, a two-tier scheduler can be set up on the router: the first tier is responsible for the separate QoS guarantee of each service flow, and the second tier ensures that the three flows together never exceed 10M at any time. When there is no voice traffic but 2M of video traffic, the Internet traffic can burst to 8M, so each service's quality is guaranteed individually while the 10M total bandwidth is also respected.
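The bandwidth arithmetic in this example can be restated as a small two-pass calculation: honor each service's committed rate first, then share whatever remains of the 10M cap up to each service's peak rate. The code below is an illustrative sketch of that reasoning with the numbers from the scenario above; it is not the 7750 SR's scheduling algorithm, and the function and field names are assumptions.

```python
# Illustrative two-level allocation for the SLA above (rates in Mbit/s).
PARENT_CAP = 10
services = {
    "voice":    {"cir": 2, "pir": 2,  "demand": 0},   # no voice traffic right now
    "video":    {"cir": 2, "pir": 2,  "demand": 2},   # 2M of video
    "internet": {"cir": 0, "pir": 10, "demand": 10},  # bursting as much as possible
}

def allocate(services, cap):
    """First satisfy each service's CIR, then share the leftover up to each PIR."""
    alloc = {}
    remaining = cap
    for name, s in services.items():                      # pass 1: committed rates
        got = min(s["cir"], s["demand"], remaining)
        alloc[name] = got
        remaining -= got
    for name, s in services.items():                      # pass 2: excess up to PIR
        extra = min(s["pir"] - alloc[name], s["demand"] - alloc[name], remaining)
        alloc[name] += extra
        remaining -= extra
    return alloc

print(allocate(services, PARENT_CAP))
# -> {'voice': 0, 'video': 2, 'internet': 8}: Internet bursts to 8M, total stays at 10M
```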

Concluding remarks

With the growth of the Internet and the diversification of value-added services, QoS guarantees on the new IP Internet will show ever greater strategic and economic significance. We believe that the 7750 SR and 7450 ESS, which Alcatel Shanghai Bell has designed for exactly these requirements, can help operators build a new generation of profitable IP networks.

References

[1] Almquist, P., "Type of Service in the Internet Protocol Suite", RFC 1349, July 1992.

[2] Braden, R., Clark, D. and Shenker, S., "Integrated Services in the Internet Architecture: an Overview", RFC 1633, June 1994.

[3] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z. and Weiss, W., "An Architecture for Differentiated Services", RFC 2475, December 1998.