These companies said that while the first benefit is very simple, the secondary benefit is more complex.
As the overall capacity of the network increases, congestion decreases. The idea that faster LAN connections to users and servers will simply result in more traffic and more congested trunks is outdated. "Applications determine traffic. The network does not suck data from the interface; the application pushes data," one CIO explained.
Faster connections reduce congestion, which cuts user complaints and allows alternative routes to be used without added delay or loss. Packet loss, interruptions, and even latency frustrate users, and resolving the resulting complaints can be a significant part of operating costs. Complicating matters, network speed affects the quality of the user and application experience in more ways than congestion alone.
When data packets pass through a switch or router, they are exposed to two sources of delay: one is congestion and the other is "serialization delay." This seemingly complex term simply means that a device cannot forward a packet until it has received that entire packet, so there is a delay while the packet is clocked in off the wire. That delay is determined by the speed of the connection over which the packet arrives, so a faster interface always yields lower serialization delay, and the latency a given packet experiences is the sum of the serialization delay at each interface it passes through.
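To make the arithmetic concrete, the sketch below (a hypothetical illustration, not from the article) computes per-hop serialization delay as packet size divided by link speed and sums it across every interface on the path; the packet size and link speeds are assumed values.

```python
# Minimal sketch of serialization delay: the time to clock one packet
# onto or off of a link, summed across every hop on the path.
# Packet size and link speeds are illustrative assumptions.

def serialization_delay(packet_bytes: int, link_bps: float) -> float:
    """Seconds needed to serialize one packet onto a link of the given speed."""
    return (packet_bytes * 8) / link_bps

def path_serialization_delay(packet_bytes: int, link_speeds_bps: list) -> float:
    """Total serialization delay: per-hop delays simply add up along the path."""
    return sum(serialization_delay(packet_bytes, bps) for bps in link_speeds_bps)

if __name__ == "__main__":
    packet = 1500                 # a full-size Ethernet frame, in bytes
    path_1g = [1e9] * 4           # four hops at 1 Gbps
    path_10g = [10e9] * 4         # the same path upgraded to 10 Gbps

    print(f"4 hops @  1 Gbps: {path_serialization_delay(packet, path_1g) * 1e6:.1f} microseconds")
    print(f"4 hops @ 10 Gbps: {path_serialization_delay(packet, path_10g) * 1e6:.1f} microseconds")
```

With these assumed numbers, a 1,500-byte packet costs 12 microseconds per 1 Gbps hop (48 microseconds over four hops) but only 1.2 microseconds per 10 Gbps hop, which is why faster interfaces improve latency even on an uncongested network.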
Application design, component costs, and AI are reshaping how network capacity is viewed
You may be wondering why it is only now, decades later, that companies are starting to pay attention to capacity as a way of solving these problems. The answer lies on both the supply side and the demand side.
On the demand side, the increasing componentization of applications across the data center and the cloud, including splitting component hosting across clouds, has dramatically increased the complexity of application workflows. Monolithic applications have a simple workflow: input, processing, and output. Componentized applications must move messages between components, and each of these movements rides on a network connection, tying the network more tightly to application availability and performance. Complex workflows also make it more difficult to determine what went wrong and how to fix it. Finally, every component interface adds serialization delay that eats into the delay budget inherent in every application, as the sketch below illustrates.
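As a rough illustration of that last point, the hypothetical sketch below walks a componentized request through a series of inter-component network hops and tracks how much of an assumed end-to-end delay budget those hops consume. The component names, per-hop latencies, and 100 ms budget are all invented for the example, not taken from the article.

```python
# Hypothetical example: each component-to-component hop in a workflow
# consumes part of the application's end-to-end delay budget.

DELAY_BUDGET_MS = 100.0  # assumed end-to-end latency target

# Each inter-component message: (hop description, assumed network latency in ms)
workflow_hops = [
    ("frontend -> auth service", 4.0),
    ("auth service -> api gateway", 3.5),
    ("api gateway -> pricing component (other cloud)", 22.0),
    ("pricing component -> inventory DB", 6.0),
    ("inventory DB -> api gateway", 6.5),
    ("api gateway -> frontend", 4.0),
]

spent = 0.0
for hop, latency_ms in workflow_hops:
    spent += latency_ms
    print(f"{hop:48s} +{latency_ms:5.1f} ms  (cumulative {spent:6.1f} ms)")

remaining = DELAY_BUDGET_MS - spent
print(f"\nNetwork hops consumed {spent:.1f} ms, leaving {remaining:.1f} ms "
      f"of the {DELAY_BUDGET_MS:.0f} ms budget for actual processing.")
```

In this invented workflow the network hops alone consume 46 ms of a 100 ms budget before any component does useful work; the more components and clouds a workflow spans, the less of the budget remains for processing.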
Source: www.itworld.co.kr