From Surf Wiki (app.surf) — the open knowledge base

TCP pacing


In computer networking, TCP pacing refers to a set of techniques that make the packet transmission pattern generated by the Transmission Control Protocol less bursty. Where buffers in switches and routers may be insufficient, pacing is intended to avoid packet loss caused by buffer exhaustion in network devices along the path. It can be carried out by the network scheduler.

Bursty traffic can lead to higher queuing delays, more packet losses, and lower throughput. However, TCP's congestion control mechanisms have been observed to produce bursty traffic on high-bandwidth, highly multiplexed networks. TCP pacing has been proposed as a solution: it spaces data transmissions evenly across a round-trip time. https://homes.cs.washington.edu/~tom/pubs/pacing.pdf
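The core calculation can be sketched as follows. This is a minimal illustration of the idea, not any particular implementation; the RTT, window, and segment sizes used in the example are assumed values chosen only for demonstration.

```python
# Sketch of the pacing idea: instead of sending a full congestion window
# as one back-to-back burst, space the packets evenly across one RTT.

def pacing_interval(rtt_s: float, cwnd_bytes: int, mss_bytes: int) -> float:
    """Delay between packet transmissions so that one congestion window
    of data is spread evenly over one round-trip time."""
    packets_per_rtt = cwnd_bytes / mss_bytes
    return rtt_s / packets_per_rtt

def paced_send_times(rtt_s: float, cwnd_bytes: int, mss_bytes: int) -> list[float]:
    """Departure time (seconds) of each packet in one paced window."""
    interval = pacing_interval(rtt_s, cwnd_bytes, mss_bytes)
    n_packets = cwnd_bytes // mss_bytes
    return [i * interval for i in range(n_packets)]

# Illustrative example: a 100 ms RTT and a 10-packet window of
# 1460-byte segments give one departure every 10 ms.
times = paced_send_times(rtt_s=0.100, cwnd_bytes=14600, mss_bytes=1460)
```

Equivalently, the sender transmits at a rate of roughly one congestion window per round-trip time, rather than releasing the whole window the moment acknowledgments arrive.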

Info: Wikipedia Source

This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.


