Sender | Message | Time |
---|---|---|
25 Sep 2024 | ||
ufm | result of first ping is simply ignored. | 19:08:51 |
Arceliar | We can't send the packet right away, we have to do some work to get the route set up and exchange keys, so things have to be cached and resent after the setup is finished. We don't cache things for very long, since packets can be quite large, so it may just mean that the setup takes too long to finish and we've already dropped the packet to save memory. | 19:08:57 |
Arceliar | (the other node has to do most of the same work before they can send the pong back, so there's about 4 round trips involved for the first ping, and only 1 round trip for the rest) | 19:10:04 |
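The round-trip arithmetic above can be sketched as a back-of-the-envelope latency model. This is illustrative only: the function name and the 50 ms base RTT are assumptions for the example, not Yggdrasil code; the 4-vs-1 round-trip counts come from the message above.

```python
# Toy model: the first ping to a node pays for route lookup and key
# exchange (~4 round trips per the discussion above); later pings on
# the established session pay only 1 round trip.
def ping_latency_ms(base_rtt_ms: float, first: bool) -> float:
    round_trips = 4 if first else 1
    return round_trips * base_rtt_ms

print(ping_latency_ms(50.0, first=True))   # 200.0 (first ping)
print(ping_latency_ms(50.0, first=False))  # 50.0  (subsequent pings)
```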
ufm | Sorry, but I don't understand what you're trying to convince me of. That losing 30% of packets (on the 30-day graph you seem to like so much) is normal? Considering both I and you (or whoever owns the site with the network tree) are on good public peers? | 19:17:01 |
ufm | * Sorry, but I don't understand what you're trying to convince me of. That losing 30% of packets (on the 30-hour graph you seem to like so much) is normal? Considering both I and you (or whoever owns the site with the network tree) are on good public peers? | 19:19:18 |
Arceliar | I'm saying losing the first packet at a high rate isn't very surprising. There's a lot that can go wrong with the first packet, before the route is found and the session is set up. ping6 -c3 $yggaddr and then checking if at least 1 ping got through would probably show much lower packet loss. In the real world, most traffic isn't one-and-done, there's some kind of ongoing connection, so 30% packet loss on a single ping doesn't reflect real world performance for most applications | 19:19:24 |
Arceliar | DNS is a good example of an exception where dropping 1 packet hurts | 19:19:48 |
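The `ping6 -c3` check suggested above can be quantified. Assuming each first-contact probe is dropped independently (an independence assumption, made only for this sketch) and using the 30% loss figure from the discussion, the chance that all three probes fail falls to under 3%:

```python
# Sketch under an independence assumption: probability that ALL probes
# to a not-yet-sessioned node are lost, given a per-probe loss rate.
# The 0.3 figure is the 30% first-packet loss rate mentioned above.
def apparent_loss(per_probe_loss: float, n_probes: int) -> float:
    return per_probe_loss ** n_probes

print(round(apparent_loss(0.3, 1), 3))  # 0.3   (single ping)
print(round(apparent_loss(0.3, 3), 3))  # 0.027 (ping6 -c3)
```

This is why a one-and-done probe overstates the loss most applications would actually see, while a single-packet protocol like DNS still feels the full 30%.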
ufm | I repeat once again - the results of the first ping are ignored. Completely. | 19:20:20 |
Arceliar | If I can find a way to improve it I will, but I'm not sure what part of the setup is either going wrong or taking too long | 19:20:24 |
Arceliar | Ah, OK, I misunderstood. Then 30% from the second ping is more concerning | 19:20:47 |
ufm | And it's not just pings. When accessing this site, I sometimes really notice delays. Even up to timeout (though that happens very rarely). | 19:24:08 |
Arceliar | Do you see these delays when first opening the site, or every time you open a new page? It's a similar story there, the first packet (or maybe a few) could be lost while the session is being set up, so it may need to wait for TCP to retry before the connection opens and the request gets through | 19:25:29 |
Arceliar | Actually, one thing it may be worth checking is to reduce the time between pings in your graph to 30 seconds. Sessions, paths, and even the addr->key map get cleaned up after a minute or two without being used, so pinging more frequently than that would prevent ever needing to redo the handshake or do a lookup (unless one of the nodes moved on the tree, which would force another lookup). If 30 seconds is reliable, then it's probably something going wrong in the handshake step. If 30 seconds is still unreliable, then it's probably something else (maybe just the tree is flapping too much) | 19:28:45 |
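The idle-timeout behaviour described above can be modelled as a toy cache: state is dropped after some idle period, so probes spaced more tightly than the timeout never trigger a fresh handshake. The 60-second timeout here is an assumption standing in for "a minute or two"; this is not Yggdrasil's actual code.

```python
# Toy model of idle expiry: touching a key after more than `timeout`
# seconds of inactivity counts as needing a fresh setup (handshake).
class IdleCache:
    def __init__(self, timeout: float):
        self.timeout = timeout
        self.last_used: dict[str, float] = {}

    def touch(self, key: str, now: float) -> bool:
        """Record use; return True if this use required a fresh setup."""
        fresh = (key not in self.last_used
                 or now - self.last_used[key] > self.timeout)
        self.last_used[key] = now
        return fresh

cache = IdleCache(timeout=60)
# Probes every 30 s for 5 minutes: only the first needs a handshake.
handshakes_30s = sum(cache.touch("peer", t) for t in range(0, 300, 30))

cache = IdleCache(timeout=60)
# Probes every 120 s: every probe finds the session already expired.
handshakes_120s = sum(cache.touch("peer", t) for t in range(0, 300, 120))

print(handshakes_30s, handshakes_120s)  # 1 3
```

Which is why a graph that pings once every few minutes re-pays the setup cost on every sample, while a 30-second interval would isolate the handshake as the suspect.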
ufm | Arceliar: 🤦I don't know how else to put this... I've been using Yggdrasil for years. I'm the CTO of an Internet Service Provider. I know a bunch of scary terms like BGP, OSPF, STP, EtherChannel, and the like. Let me share a secret with you — I even know how to configure Cisco devices. :) Could you please stop assuming I'm inexperienced and stop asking strange questions? :) | 19:29:52 |
ufm | Arceliar: > maybe just the tree is flapping too much BTW, my public node is the root of the tree. Maybe it's because of this? | 19:31:22 |
Arceliar | In reply to @ufm:twinkle.lol: Sorry, I don't remember everyone's level of experience or familiarity with ygg (especially internal implementation details), so I usually assume it's better to say too much than too little (that also helps other people here to learn or understand the situation if they read the scrollback) | 19:33:25 |
Arceliar | In reply to @ufm:twinkle.lol: That's... an interesting point. | 19:33:39 |
Arceliar | I don't think being the root should matter, but if there's any special case where things go wrong, then the root would be a good candidate for that special case. | 19:35:01 |
Arceliar | I'll try to run some tests when I get home, to see if I can reproduce similar issues with packet loss. I haven't seen anything that bad in the past, but I haven't run tests pinging the root, so maybe I missed something | 19:38:30 |
Arceliar | Actually, I have seen performance that bad, but only during the v0.3.X days (and I think that was from people peering ygg over ygg) | 19:40:03 |
neilalexander | It’s possible the link cost code may change some things too, as paths that normally get selected today may become completely ignored on the next version and vice versa | 19:41:14 |
ufm | neilalexander: I believe that direct connectivity between public peers and connectivity between public peers through regular nodes should have different weights. Public peers are generally more reliable and stable. That's exactly why I want the ability to manually control the link cost. | 19:49:09 |
ufm | Ideally, a regular node shouldn't fear that all the internet traffic will route through it if it connects to multiple public peers (except in a situation where it's the only link). And the entire network shouldn't be destabilized just because a regular node on LTE connects to all the public peers. | 19:55:08 |
30 Sep 2024 | ||
ivan | Looks interesting https://awala.network | 09:47:34 |
ivan | Also an overlay network | 09:47:48 |
Parnikkapore 😁 | because the embed isn't clear: | 10:16:59 |
Parnikkapore 😁 | Reading list: | 10:19:29 |
neilalexander | It's also not clear whether or not it expects full-mesh reachability when internet is available | 10:30:46 |