What is the Network Impact on Node Performance?

I am running a node as an experiment over a StarLink connection on my home network.

I’m running some rudimentary tests to see how much the environment differs and how those variables impact node performance.

To start, I’m running a simple ping command: ping 1.1.1.1 -c 20

What I’m observing is that on the home network (machine wired over Cat 6, uplink via StarLink), latency is around 100-140 ms (rtt min/avg/max/mdev = 105.336/117.207/138.129/9.603 ms).

On my nodes in a datacenter, latency is much lower, around 2-3 ms (rtt min/avg/max/mdev = 2.601/2.686/2.764/0.056 ms).
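
For anyone who wants to reproduce the comparison, here’s a rough helper wrapped around the same ping command. It’s only a sketch: it assumes Python 3 and Linux iputils ping, whose summary line looks like the ones quoted above.

```python
#!/usr/bin/env python3
"""Rough helper to compare RTT between environments using the system ping.

Assumes Linux iputils ping (the "rtt min/avg/max/mdev = ..." summary line).
Target and count match the test above.
"""
import re
import subprocess


def ping_stats(target: str = "1.1.1.1", count: int = 20) -> dict:
    """Run ping and parse the rtt summary line into a dict of floats (ms)."""
    out = subprocess.run(
        ["ping", "-c", str(count), target],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", out)
    if match is None:
        raise RuntimeError("could not find rtt summary in ping output")
    return dict(zip(("min", "avg", "max", "mdev"),
                    (float(v) for v in match.groups())))


if __name__ == "__main__":
    stats = ping_stats()
    print(f"rtt avg={stats['avg']} ms, mdev={stats['mdev']} ms")
```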

From what I understand, the master clock tick (the interval between proving a piece of data on your node) is ~10 seconds. If that’s correct, latency shouldn’t be a factor in a node’s performance as long as the proof is generated within 10 s minus (inbound latency + outbound latency).
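
To put rough numbers on that, here’s the back-of-the-envelope budget I have in mind. The 10 s tick is just my understanding of the interval, not a confirmed spec, and the latencies are the measured averages quoted above.

```python
# Back-of-the-envelope timing budget per master clock tick.
# Assumption: proofs are due every ~10 s, and "latency in + latency out"
# is approximated by the full measured round-trip average.

TICK_S = 10.0                       # assumed master clock interval (seconds)


def proof_budget(rtt_avg_ms: float) -> float:
    """Seconds left for proof generation after inbound + outbound latency."""
    return TICK_S - rtt_avg_ms / 1000.0


print(f"StarLink   budget: {proof_budget(117.207):.3f} s")  # ~9.883 s
print(f"Datacenter budget: {proof_budget(2.686):.3f} s")    # ~9.997 s
```

So even at ~140 ms RTT the remaining budget per tick is still well over 9.8 s, which is why I don’t expect latency to matter much here.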

Proof data is also generally fairly small, so while huge bandwidth (e.g. gigabit) may be useful for large server farms, a smaller connection, say 100-300 Mbps up/down, should be sufficient for a few nodes. That assumes you only get one proof per interval/master clock tick, though.
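
A similar sanity check for bandwidth, with the caveat that the proof size here is a pure placeholder assumption since I don’t know the actual payload size:

```python
# Rough transmit-time check: how long does one proof-sized payload take to
# upload at different link speeds? The 100 KB proof size is a placeholder
# assumption, not a measured value, and protocol overhead is ignored.

PROOF_BYTES = 100 * 1024            # assumed proof payload size


def transmit_time_ms(link_mbps: float, payload_bytes: int = PROOF_BYTES) -> float:
    """Milliseconds to push the payload at the given link speed."""
    bits = payload_bytes * 8
    return bits / (link_mbps * 1_000_000) * 1000


for mbps in (100, 300, 1000):
    print(f"{mbps:>4} Mbps -> {transmit_time_ms(mbps):.2f} ms per proof")
```

Even at 100 Mbps that works out to single-digit milliseconds per proof, so bandwidth only seems to matter if you’re pushing many proofs per tick.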

So the question is: is the above an accurate understanding?