| Sender | Message | Time |
|---|---|---|
22 Apr 2021 | ||
Calvin Xiao | Simple perf shows | 18:34:15 |
Calvin Xiao | Only when using TCP and the :remote_address option is not :value or :header | 18:36:05 |
Calvin Xiao | I might need to read some MPTCP stuff, https://en.wikipedia.org/wiki/Multipath_TCP | 18:38:13 |
23 Apr 2021 | ||
Calvin Xiao | We do need to reset @peerip it unless client.remote_addr_header is set | 08:54:30 |
Calvin Xiao | * We do need to reset @peerip it if client.remote_addr_header is set | 08:54:48 |
Calvin Xiao | * We do need to reset @peerip if client.remote_addr_header is set | 08:54:58 |
nateberkopec | This happens a lot for cellular connections as they change cell towers | 13:02:00 |
nateberkopec | (that's my understanding anyway) | 13:02:06 |
Calvin Xiao | In this case, there will be a new connection. | 13:21:18 |
msp-greg | 'a new connection' -> 'a new connection/socket/client'? | 13:30:16 |
Calvin Xiao | Yes, if client changes the IP, the old connection will be dead. | 13:34:33 |
Calvin Xiao | Sorry, will be closed. | 13:34:57 |
Calvin Xiao | I think a socket connection is identified with (local address, local port, server address, server port) tuple. | 13:40:23 |
Calvin Xiao | I telnet to a server three times, it shows: | 13:40:51 |
Calvin Xiao | nateberkopec: All tests pass in my local dev. 2 macOS jobs failed in my forked repo's GitHub Actions, then only 3 Windows jobs failed in the pull request checks. Are they using different tests? | 14:41:00 |
Calvin Xiao | Sometimes pushing a new commit not related to the broken test case will pass all the tests. Is it always like this for you guys? | 14:42:30 |
msp-greg | Current test suite is a PITA. Intermittent failures are common. Long term, I've been working on an update, mostly to the current Integrations tests. A main change is creating hundreds (or more) clients, in the hope that odd race/timing issues are more likely to appear with a higher qty of clients. Also, response errors almost never appear locally, but do appear on GitHub Actions (GHA). Any CI system has one or processes running 'above' the CI code, and that is both good and bad, compared to local testing... | 14:58:27 |
Calvin Xiao | I see, thanks for explaining. | 15:03:24 |
msp-greg | History.md - currently, changes are grouped by 'bug', 'feature', and 'refactor'. Should 'performance' be added? People that actually read History.md will see what changes were made that could help them? And that the maintainers are actively looking at it... | 15:03:52 |
msp-greg | * Current test suite is a PITA. Intermittent failures are common. Long term, I've been working on an update, mostly to the current Integrations tests. A main change is creating hundreds (or more) clients, in the hope that odd race/timing issues are more likely to appear with a higher qty of clients. Also, response errors almost never appear locally, but do appear on GitHub Actions (GHA). Any CI system has one or more processes running 'above' the CI code, and that is both good and bad, compared to local testing... | 15:13:31 |
msp-greg | With CI, sometimes parallel testing can cause issues. Generally, they're more intermittent than what we see with Puma CI. So, running CI again with the same seed value may be helpful. With that, it's helpful to know the GHA workflow syntax, so one can remove passing jobs. | 15:17:57 |
Calvin Xiao | Can I rerun a test on GHA without a new commit? | 15:28:09 |
nateberkopec | In reply to @msp-greg:matrix.org: Sure, a perf header would be nice | 15:28:51 |
nateberkopec | In reply to @calvinxiao:matrix.org: not without commit access, I think | 15:29:03 |
nateberkopec | I can hit "re-run" in the GHA UI but you can't | 15:29:17 |
nateberkopec | I think Greg has his own fork of Puma on GitHub for this reason | 15:29:30 |
nateberkopec | because he can hit "re-run" on his personal fork | 15:29:37 |
msp-greg | Any maintainer can re-run a workflow run in puma/puma. I have my own fork because I make typos (RuboCop), don't want to run tests in upstream when I'm just starting on things, etc. For instance, the new test framework hopefully will fix the intermittent issues. The only way to test that is to run it a lot... I'm using it less with WSL2/Ubuntu, since I have fork, UNIXSockets, etc locally. | 15:34:40 |
msp-greg | Calvin Xiao: I wouldn't be concerned about failing CI. All the maintainers can check if the failures are related to the PR. But, it is a PITA. | 15:35:56 |
Calvin Xiao | Got it. | 15:36:28 |
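
The `@peerip` discussion above (reset it only if `client.remote_addr_header` is set) can be sketched as follows. This is a hypothetical illustration, not Puma's actual implementation: the `Client` class, its constructor, and the `peerip` method below are invented for this sketch. The idea is that a configured remote-address header (e.g. `X-Forwarded-For`) can change per request even on a reused connection, so the value must not be cached, whereas the socket's own peer address is stable for the connection's lifetime and is safe to cache.

```ruby
require "socket"

# Hypothetical sketch (not Puma's code) of per-request vs cached peer IP.
class Client
  def initialize(socket, remote_addr_header: nil)
    @socket = socket
    @remote_addr_header = remote_addr_header
    @peerip = nil
  end

  # env is a Rack-style request env hash.
  def peerip(env = {})
    if @remote_addr_header
      # The header can differ on every request, so never cache it;
      # fall back to the socket's peer address if it is absent.
      env[@remote_addr_header] || @socket.peeraddr[3]
    else
      # Plain TCP: the peer address cannot change mid-connection,
      # so caching across requests is safe.
      @peerip ||= @socket.peeraddr[3]
    end
  end
end
```

This matches the conversation's conclusion: the cached value only needs resetting between requests when a remote-address header is in play.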
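
The "telnet three times" observation about connection identity can be reproduced in a few lines of Ruby. This is a minimal sketch: it opens three sockets to a local throwaway server and prints each connection's 4-tuple, showing that only the local (ephemeral) port differs between connections to the same server.

```ruby
require "socket"

# A connection is identified by the 4-tuple
# (local address, local port, remote address, remote port).
server = TCPServer.new("127.0.0.1", 0)   # bind to any free port
port   = server.addr[1]

clients = Array.new(3) { TCPSocket.new("127.0.0.1", port) }

tuples = clients.map do |c|
  _fam, lport, _lhost, lip = c.addr      # local end of the socket
  _fam, rport, _rhost, rip = c.peeraddr  # remote (server) end
  [lip, lport, rip, rport]
end

# Same local IP, remote IP, and remote port; three distinct local ports.
tuples.each { |t| puts t.inspect }

clients.each(&:close)
server.close
```

A strict 5-tuple also includes the protocol (TCP here); within one protocol, the 4-tuple is what distinguishes the three telnet sessions.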