To calculate how long it would take to transfer a certain amount of data over a given network connection, we can use the formula:
Time = Data Volume / Data Rate
In this example, we will see that it would take around 22.76 hours to transfer 100 terabytes over a 10 Gbps circuit.
First, let’s convert everything to the same units.
Data volume:
100 Terabytes (TB) = 100 * 1024 Gigabytes (GB)
= 100 * 1024 * 1024 Megabytes (MB)
= 100 * 1024 * 1024 * 8 Megabits (Mb)
= 838,860,800 Megabits
Data rate:
10 Gigabits per second (Gbps) = 10 * 1024 Megabits per second (Mbps)
= 10,240 Mbps
(Strictly speaking, network rates use decimal prefixes, so 10 Gbps = 10,000 Mbps; 1024 is used here to stay consistent with the binary storage conversion above. The two conventions differ by about 2.4%.)
Time:
Time = 838,860,800 Mb / 10,240 Mbps = 81,920 seconds = approximately 22.76 hours
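As a quick sketch, the arithmetic above can be wrapped in a small Python helper (the function name and the binary/decimal switch are my own additions):

```python
def transfer_time_hours(terabytes, gbps, binary=True):
    """Estimate ideal transfer time, ignoring latency and overhead.

    binary=True uses 1024-based conversions, as in the worked example;
    binary=False uses decimal (SI) units throughout.
    """
    k = 1024 if binary else 1000
    megabits = terabytes * k * k * 8   # TB -> GB -> MB -> Mb
    rate_mbps = gbps * k               # Gbps -> Mbps
    seconds = megabits / rate_mbps
    return seconds / 3600

print(transfer_time_hours(100, 10))                 # -> 22.755...
print(transfer_time_hours(100, 10, binary=False))   # -> 22.222...
```

Either convention puts the answer in the same ballpark of roughly 22–23 hours.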
However, the latency of the connection also plays a role. The impact of latency (25 ms in your case) is more complicated to calculate, as it depends on factors such as packet size, the transmission protocol in use, and the TCP window size. TCP, for example, limits how much unacknowledged data can be in flight at once, so on a high-latency link the sender may sit idle waiting for acknowledgements, reducing overall throughput. For large transfers over a well-tuned connection with a reasonably small latency like 25 ms, though, the impact of latency is less significant, and the total transfer time should remain in the same ballpark as the estimate above.
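A common way to reason about latency's effect is the bandwidth-delay product: a single TCP stream can only keep the pipe full if its window is at least bandwidth × round-trip time. A rough sketch (helper names are illustrative):

```python
def bdp_bytes(gbps, rtt_ms):
    """Bandwidth-delay product: bytes that must be 'in flight' to fill the pipe."""
    return (gbps * 1e9 / 8) * (rtt_ms / 1000)

def max_throughput_gbps(window_bytes, rtt_ms):
    """Single-stream TCP throughput cap: window size / RTT."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e9

# A 10 Gbps path with 25 ms RTT needs ~31.25 MB of window to stay full:
print(bdp_bytes(10, 25))                   # 31250000.0 bytes
# With only a 64 KiB window (no window scaling), throughput collapses:
print(max_throughput_gbps(64 * 1024, 25))  # ~0.021 Gbps
```

This is why long, fat pipes need large (scaled) TCP windows, or multiple parallel streams, to approach line rate.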
OTHER FACTORS
There are several factors that can affect the speed of a data transfer over a network, including but not limited to:
- Bandwidth: This is the maximum rate of data transfer across a given path. Higher bandwidth means more data can be transferred in a given time.
- Latency: This is the delay that occurs in data communication over a network. Higher latency means data takes longer to travel from source to destination, and this can particularly affect protocols that require round-trip confirmations before sending more data.
- Packet loss: This refers to packets of data that are sent but do not arrive at their destination. Packet loss can slow down a transfer because the missing data needs to be retransmitted.
- Network congestion: This refers to a network state where a node or link carries so much data that it may degrade network quality of service. High levels of network traffic can slow down a data transfer.
- Transmission protocol: Different transmission protocols handle data transmission differently. For example, TCP (Transmission Control Protocol) is designed for reliable transmission at the cost of speed, whereas UDP (User Datagram Protocol) prioritizes speed over reliability.
- Overhead: This refers to non-data information that accompanies the data for purposes such as addressing, error checking, etc. Higher overhead can reduce the actual data throughput.
- Physical medium: The physical transmission medium (e.g., copper wire, optical fiber, wireless) also influences the transfer speed. Some mediums are capable of transmitting data faster than others.
- Distance: The farther the data has to travel, the higher the latency, especially if the data has to pass through many switches, routers, or other network devices along the way.
- Hardware performance: The performance of the sending and receiving devices, including their network interface cards, CPU speed, memory, etc., can also influence the data transfer speed.
Understanding these factors can help in diagnosing and improving network performance issues.
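To get a feel for how two of these factors (latency and packet loss) interact, a rough rule of thumb is the Mathis et al. approximation for loss-limited single-stream TCP throughput, roughly MSS/RTT × 1.22/√p. The sketch below is an illustration of that approximation, not a precise model of any real network:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Rough upper bound on single-stream TCP throughput under random loss
    (Mathis et al. approximation: MSS/RTT * 1.22/sqrt(p))."""
    rtt_s = rtt_ms / 1000
    bps = (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))
    return bps / 1e6

# 1460-byte MSS, 25 ms RTT, 0.01% packet loss:
print(mathis_throughput_mbps(1460, 25, 0.0001))  # ~57 Mbps
```

Even a tiny loss rate caps a single stream far below the 10 Gbps line rate, which is why bulk transfers over lossy long-haul paths often rely on parallel streams or loss-tolerant protocols.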