Conversation
Does it fix it, or does it mask an underlying problem that would still exist if the payload were much larger?
I think the issue here is partly that data is read from a server (which is sending more than is read) and the socket is then closed; this results in an RST, but the stack keeps the underlying resources open.

On 1 Oct 2024, at 13:34, megacct wrote:
FYI, I've been running with the following settings for a few months without issue. I average 9mbps (11 peak) on large downloads.
"lwip.memp-num-tcp-seg": 32,
"lwip.memp-num-tcpip-msg-inpkt": 16,
"lwip.socket-max": 12,
"lwip.tcp-mss": 1460,
"lwip.tcp-socket-max": 10,
These links were useful:
https://lwip.fandom.com/wiki/Maximizing_throughput
https://lwip.fandom.com/wiki/Tuning_TCP
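The partial-read-then-close scenario described above can be sketched in desktop Python (not lwIP; ports, sizes, and function names here are illustrative). The client reads only part of a large response and then closes the socket; because unread data is still queued, its TCP stack answers with an RST rather than an orderly FIN, and the server's blocked send fails:

```python
import socket
import threading

def run_server(port, payload_size, results):
    """Send a payload larger than the peer will read, then observe the RST."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        # Blocks once the send buffers fill, since the client stops
        # reading after the first 1 KB.
        conn.sendall(b"x" * payload_size)
        results.append("sent")
    except (ConnectionResetError, BrokenPipeError):
        # The client's close() with unread data makes its TCP stack
        # send an RST instead of a clean FIN; the blocked send fails.
        results.append("reset")
    finally:
        conn.close()
        srv.close()

def run_client(port):
    """Read only part of the response, then close the socket."""
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))
    c.recv(1024)   # partial read: unread data stays queued in the kernel
    c.close()      # close with unread data -> RST, not an orderly shutdown
```

On a desktop OS the resources are reclaimed once the error surfaces; the report above suggests the embedded stack was not releasing them in the equivalent situation.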
I only see an upside to merging this PR. Am I missing a downside?

Given the memory of the target devices, I'd agree. It's a better default.
👍 I'll add this to the changes to be included in the next 4.3.x release. Since it doesn't really fix #937 I was unsure whether to merge it, but I agree that 1460 is a better default. I will also try to do some testing with the other settings suggested by @schnoberts1 and update this PR.
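For anyone wanting to try the quoted settings, overrides like these would typically go under `target_overrides` in an application's `mbed_app.json` (assuming an Mbed OS project; the fragment below merely restates the values quoted earlier in the thread):

```json
{
    "target_overrides": {
        "*": {
            "lwip.memp-num-tcp-seg": 32,
            "lwip.memp-num-tcpip-msg-inpkt": 16,
            "lwip.socket-max": 12,
            "lwip.tcp-mss": 1460,
            "lwip.tcp-socket-max": 10
        }
    }
}
```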
fixes #937