RL317 got a reaction from East in Traffic Prioritization Settings: Modern Warefare (BO4)
He snaps to target because he's a very high-sens player with a ton of experience. With his old setup and desk he was forced to run 4k DPI and high sens, with his mouse on his keyboard shelf because his keyboard covered half of his small mouse pad. He's actually halved his sens since moving from MW3 to MW because he's got a new setup lol. I hope the video clip I'm attaching from my phone goes through, because it shows that old setup he had to git gud with after upgrading from a laptop.
He was shadow banned because the system is automated and depends on how many noobs report you. Infinity Ward contacted him, watched his streams and videos, and re-added him to regular matchmaking rather than hardware banning him because nothing fishy was found. In fact he was shadow banned again less than three hours after being contacted by them, purely because of this ridiculous report system LOL
He had the wired G502 himself for years but Logitech sponsored him to try out the "cut the cord" challenge with the Lightspeed and he agrees there's little, if any, difference between the two
RL317 got a reaction from tehwayne in Traffic Prioritization Settings: Modern Warefare (BO4)
Yeah this typically only used to be an issue when I forwarded ports manually to a specific IP, and then of course an IP change would affect the NAT. But I can't explain why it happens on UPnP too lol. Surely it should refresh and release depending on which IP needs said ports? ¯\_(ツ)_/¯ UPnP has been buggy on DumaOS though so I went back to old firmware after a while.
RL317 reacted to N3CR0 in Traffic Prioritization Settings: Modern Warefare (BO4)
70k bans... and only 10% of them actually did anything wrong.
I got my name reset and a 7 day ban just coz some snowflake thought my name was offensive which it wasn't, yet I see nasty and racist names on a daily basis.
RL317 got a reaction from tehwayne in Please read this everyone now for cod mw2019
I guess hyper lane can't help my broken internet even if I've got low stable pings lmao https://youtu.be/a9RdMILcVKg
Edit: why would anyone hit dislike but not say anything? All I've tried to do on this thread is help people. Weirdo 🐒
RL317 got a reaction from purpleandgold33 in Please read this everyone now for cod mw2019
In this case 3074 source and 30000-60000 destination would be upstream/outgoing traffic, whereas 30000-60000 source and 3074 destination would be downstream/incoming traffic. That's what this setup would cover in full:
The classified games setup automatically detects game traffic but we don't really know what that includes, and it could well be prioritising less important ports and giving them equal priority to more vital ones. This method just allows us to have full control.
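To make the direction logic concrete, here's a minimal sketch (the port numbers are the ones from this post; real games vary) of classifying a packet as upstream or downstream from its source and destination ports:

```python
# Hypothetical sketch: classify CoD traffic direction by port.
# Assumes the console uses UDP 3074 locally and the server uses
# an ephemeral port in 30000-60000, as described above.
GAME_PORT = 3074
SERVER_RANGE = range(30000, 60001)

def direction(src_port: int, dst_port: int) -> str:
    """Return 'upstream', 'downstream', or 'other' for a packet."""
    if src_port == GAME_PORT and dst_port in SERVER_RANGE:
        return "upstream"    # console -> server (outgoing)
    if src_port in SERVER_RANGE and dst_port == GAME_PORT:
        return "downstream"  # server -> console (incoming)
    return "other"

print(direction(3074, 44000))   # upstream
print(direction(44000, 3074))   # downstream
```

Matching on both ends like this is exactly why the two rules above cover the traffic in full: one rule per direction.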
RL317 got a reaction from JOE1305 in PS4 SDD broken?
I haven't really played BO4 because of connection issues but the game seems a laggy mess either way. The menus are even worse than BO3's...
I tried to play on a fairly new PS4 Pro with a brand new Samsung 860 SSD and the menus are disgusting, it takes forever to pull up the share menu to save a clip and my PS4 sounds like a jet trying to take off. I don't see any of those issues in other games...
Maybe your fan could use a clean though. I cleaned my PS4 Pro out (I think the method for accessing the fan is a little different) and because there was only a little dust inside, it didn't seem to make much difference. That could be your problem if the console hasn't been cleaned out after five years.
Edit: sorry, I was sleep deprived and failed to notice you'd already done that LOL
RL317 reacted to Zippy in Please read this everyone now for cod mw2019
Thanks Jack for the kind words! I think something like this would become a very useful tool. We could even take it a bit further and not only show which ports are being used and their protocol (UDP/TCP), but also how much data each port is sending and receiving. This would come in really handy for targeting a port, because when playing a game the port that is transferring a lot of data is likely the key port. We could then prioritize that port above all others, or even prioritize on a scale: Highest, High, Medium, Lowest. This type of feature would become a dominating tool, I feel. The reasoning behind my idea is that it would give the player/user more control over exactly what is going on. Not to mention, the high priority port number one player sees may not be the same port number another player sees, so this would really help in that situation, which (believe it or not) is very common. Also, say the developer of a game makes a change on their end so that the high priority port number we were used to seeing is no longer the common port: for whatever reason they used to use port 34000, but all of a sudden a new port, 43000, pops in. We would catch this with this feature and make the needed adjustments. Any time we can give our ports the proper prioritization, the result is optimum performance, which really is what we are all looking for: an edge in a particular game. Here is just an example of what this would look like:
UDP 30001 - 280kbps ... Highest
TCP 443 - 20kbps ... Lowest
UDP 3074 - 45kbps ... High
Granted this is just an example and there would likely be more ports, but it gives you an idea of what it would look like. The Highest to Lowest setting could be just a drop-down box for each port, and we could prioritize as needed based on what a particular port is doing. Really, we likely only need to prioritize one or two ports; the rest would be general traffic not critical to in-game performance. If there is anyone that could put something like this together, it's you guys over there at NetDuma! I'm sure you could put a good twist on this and make it as user friendly as need be, but it is fairly straightforward yet could become a dominating feature. Thanks again!
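As a rough model of the feature being described here (purely hypothetical, there is no such DumaOS API; this just sketches the idea), per-port throughput counters with a priority label might look like:

```python
# Hypothetical sketch of a per-port monitor with priority labels.
# Throughput figures are the invented example values from the post.
from collections import defaultdict

PRIORITIES = ["Highest", "High", "Medium", "Lowest"]

class PortMonitor:
    def __init__(self):
        self.kbps = defaultdict(float)   # (proto, port) -> throughput
        self.priority = {}               # (proto, port) -> label

    def record(self, proto, port, kbps):
        self.kbps[(proto, port)] += kbps

    def set_priority(self, proto, port, label):
        assert label in PRIORITIES
        self.priority[(proto, port)] = label

    def table(self):
        # Busiest ports first: the busy one is likely the key game port.
        rows = sorted(self.kbps.items(), key=lambda kv: -kv[1])
        return [(proto, port, rate, self.priority.get((proto, port), "Medium"))
                for (proto, port), rate in rows]

mon = PortMonitor()
mon.record("UDP", 30001, 280)
mon.record("TCP", 443, 20)
mon.record("UDP", 3074, 45)
mon.set_priority("UDP", 30001, "Highest")
mon.set_priority("TCP", 443, "Lowest")
mon.set_priority("UDP", 3074, "High")
for proto, port, rate, label in mon.table():
    print(f"{proto} {port} - {rate:.0f}kbps ... {label}")
```

Sorting by throughput puts the likely key game port at the top, which matches the idea that the busiest port is the one worth prioritising.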
RL317 reacted to NoMoreMyFriend-S in Preemptive versus Reactive
Played the shit out of IW before I moved to Thailand. From here you connected to servers in Japan or Australia, same for WW2. It was a drama. BUT: still more consistent than the shite I face with BO4 having a server right in front of me in Singapore (27-32 ms instead of 120+). I guess the game code is just shite and too many folks in the area are connecting from their McDonald's hotspot....
RL317 reacted to NoMoreMyFriend-S in Preemptive versus Reactive
You are describing my life with this thread.
No clue why I have lag spikes; most games are unplayable on BO4. I have fibre to the home with 200/50 beside my PlayStation and R1, and it is frustrating the hell out of me. I spend more time tweaking than playing, exchanging cables and devices, hard drives, etc., and still get shit results.
The original R1 firmware seems to work better than DumaOS on the R1
but overall, unplayable for me.
about to give up
RL317 got a reaction from NoMoreMyFriend-S in Preemptive versus Reactive
I signed up two months ago so no, I just missed it. Typical
It's bad a good 99% of the time, no exaggeration. It'll take me 100 or even more games just to find one on the same server (or even one with a higher ping) that randomly gives me a great connection, and if I'm lucky it'll last two or three games before going back to weeks of nonsense. For the most part I get on expecting the same and just play 15 or 20 minutes before I get off again. If I'm feeling really stubborn (like "why the fuck shouldn't I be allowed to play? I'm gonna sit here for a while") I'll just be testing connections for an hour or more.
I can only think of a few days since February of LAST YEAR where I got more than one smooth responsive game during a session. There was the 8th of March this year (5 solid games, shite the next day), the 21st of April (2 hours of constant good connections, shite the next day), and then a few weeks ago on DumaOS. It was awful to start with, then I tried this new setup messing around with bandwidth allocation to different devices rather than global bandwidth control and I got four smooth games one day on 7ms to the UK server, and the first six games of the next day before it went back to one bar jokes. It's gotten even worse since then.
The only other time I've had decent connections in that time was on the 17th and 19th of May this year, where I set my filter in the north of Scotland when connecting to online services and avoided servers, then disabled the geofilter and got player hosted lobbies. In two games I pulled host and it was incredible. In the rest I connected to random hosts around Europe and on anything up to the 60ms I got to a host in Denmark, it played better than 99% of my games on servers, including the UK one at 7ms. I distinctly remember one game where there was a host migration and suddenly there were hexagon (packet loss) icons on screen for a full five minutes, and the connection was still better than most I've played on 7ms. Unfortunately it would take me up to 30 minutes just to get a lobby that was populated enough for me to start a game, so it's not exactly a reliable method of forcing decent games; my attempts to do this in the week following those games all failed to find a game.
This was from 8th of March. My jaw hit the floor when I started playing, and most of my time was spent laughing because it was so unbelievable
Naturally the next day it was back to desync dogshit 🤣 what's really weird is that I'd been using the "typical" R1 setup that day - 70/70 preemptive, CoD Ultimate profile, PSN in hyper lane and auto everything else. That setup had never given me these results before. I've tried literally hundreds of different settings - optimal or otherwise, and tweaked one at a time or as a combo of new settings - and never attributed any brief moments of success to them because those good rare games are truly random.
Actually thinking back to February 2017... I think that's when I got my R1. November 2016 was awful on IW, but December through February were comparatively great (say around a third of games were on great connections) while I was using just my BT Smart Hub. Most would come to the conclusion that the R1 is the problem, yet I've had great games with the R1 too. Clearly my network is capable of giving me these great connections because they do sometimes (well, seldom) appear, and I've spent so long on bad ones I can detect them straight away, in my first gunfight of the game. The problem is there's nothing in my settings that would logically affect my usual connection performance, so I just get on confused and get off confused - whether the game plays well or not.
RL317 got a reaction from Zippy in Please read this everyone now for cod mw2019
Apologies for the delay in replying. My Wireshark capture was saved on my old laptop, and I simply forgot to fish it out and load it up 😂
I've been skimming through and the majority of the 7246 packets from this capture showed UDP (86%) with some TCP (13%) and the rest was DNS, ICMP, DHCP and TLS. Most UDP involved 3074 on one end (40040 on the other), but some UDP traffic was 3074 to 3074 - this was between players. I've been looking up some of these IPs from the single game I captured and I've seen results from ISPs (in fact the two I found were American and Omani - NICE matchmaking). Then there was TCP 3074 (me) to 64755 (Demonware in Ireland - the backend servers).
The rest of it involved TCP 443, and a few Google searches show IPs registered to Amazon AWS and Akamai, which I think have something to do with PSN. There were too many ports in communication with 443 to mention (often 60000-65535), and 443 appeared both on my end and the server end depending on what I was communicating with.
Interestingly I see a lot of large packets here. TCP 3074 sent me a lot of 1494 byte packets while TCP 443 from many different sources sent me 1506 byte packets (WTF?). The largest game packets (over UDP 3074), both to and from me, were 1322 bytes. I guess that counters the claim that game packets are always tiny lol
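For anyone curious how a breakdown like the 86%/13% split above is produced, here's a minimal sketch using invented (protocol, length) pairs in place of a real capture (a real run would read packets from the pcap instead):

```python
# Minimal sketch of tallying a capture's protocol mix, as in the
# 86% UDP / 13% TCP breakdown above. Packets here are invented
# (protocol, length) pairs rather than a real Wireshark capture.
from collections import Counter

packets = [("UDP", 1322)] * 86 + [("TCP", 1494)] * 13 + [("DNS", 80)]

counts = Counter(proto for proto, _ in packets)
total = sum(counts.values())
for proto, n in counts.most_common():
    print(f"{proto}: {n} packets ({100 * n / total:.0f}%)")
```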
What's sad about checking all this is I've been doing my own testing with custom single hyper lane rules and I'd already been prioritising 3074 (UDP alone and then both) as well as 443, because I found out about the latter from a couple of people here on the forum. I had everything else set up optimally (70/70 QoS), small filter radius to guarantee games at either 7ms or 13ms, and games played like shit as usual. I had two random CoD games with randomly superb hit detection, and nothing was out of the ordinary in those games. My setup was the same, the server was the same... and the next game it would go back to awful again.
I also used the network monitor to find the ports in use on the server end. When I connected to online services, many would pop up at once like this:
... yet when I spawned into a game, only 3074 and one other port (on the server end) would show up:
So I tried adding just UDP 3074 both and UDP 37350 both to the hyper lane, but it made no difference LOL
Today I got sick of seeing disgusting lag on a flat 7ms to my local server (according to the R1), and I found out that Battlefield games have different latency detection methods: the scoreboard shows ping results, and the network graph shows UDP latency for your game traffic or, in other words, your true game latency. I installed BF4, found a local empty server (based in Amsterdam) and pinged it on the geofilter. I got a stable 14ms that moved about 0.3ms in either direction. I spawned into the server on BF4, opened up the network graph and got nothing like what the R1 showed. While the R1 stayed at 14ms pretty much dead on, the in game latency fluctuated between 6 and 41ms, and spikes in game were never matched by spikes on the geofilter. Must be why I'm seeing random hundreds of milliseconds of delays on hitmarkers in CoD ¯\_(ツ)_/¯ HaPpY dAyS
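The mismatch described here, a rock-steady geofilter ping against in-game latency swinging between 6 and 41 ms, is essentially jitter that a stable average hides. A quick sketch with invented samples shaped like the ones in the post:

```python
# Two invented latency traces: a steady ICMP-style ping and a
# jittery in-game UDP trace, loosely matching the post's numbers.
# A similar-looking mean can hide a wildly different spread.
import statistics

icmp = [14.0, 13.8, 14.2, 14.1, 13.9, 14.0]
udp = [6.0, 41.0, 12.0, 35.0, 9.0, 38.0]

for name, trace in [("ICMP", icmp), ("UDP", udp)]:
    print(f"{name}: mean {statistics.mean(trace):.1f} ms, "
          f"stdev {statistics.stdev(trace):.1f} ms")
```

The standard deviation (or any jitter measure) is what actually separates the two traces; a single ping figure like the geofilter's 14 ms can't.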
RL317 got a reaction from Scrizzy in Please read this everyone now for cod mw2019
My pleasure. Most will prefer to just pick a preset and forget about it, but I know there are quite a few tweakers here on the forum lol. I was only inspired to take a deeper look into it once I read Scrizzy's old post asking for help with customising rules for hyper lane, and once I learned more about how networking works on CoD I learned how to find out what was going on in our games with Wireshark.
I don't think I'd have gone to such lengths if it wasn't for A7Legit swearing by hyper lane making all the difference to the quality of his games! Of course if you can find just one or two ports that are vital to the experience we have in games, it makes sense to place a higher priority on those than any others.
If anyone here wants to do the same kind of testing, it's a piece of cake if you're on PC. If you're on console, you just need to bridge your connection from the R1 to the console through a PC: go to the Network & Sharing Centre, click change adapter settings, left click one of your ethernet adapters (or WiFi if you're using both), then hold shift and right click the other to get the option to "bridge connections". Then you just download and open Wireshark, wait for it to scan for interfaces, and click on whichever connection leads to the console in use. It's a pretty cool trick for those like me who enjoy in-depth testing.
Before I'd even heard of Netduma, I used this in AW to figure out why connections were atrocious when Sledgehammer Games claimed "ping is king". Wireshark proved I was spending most of my time connecting to the Seattle server on the west coast of the US, which speaks volumes when you live just outside London. Of course if I had a Netduma, it would have been a tad easier getting that proof
RL317 reacted to GHOST-1-EC in Please read this everyone now for cod mw2019
I like the fact that you've continued to tinker with the setup, as I have as well. I've had the best gameplay ever since disabling DumaOS classified games and adding the custom ports. What I have noticed is that randomly I'll experience a dip in the silky smooth experience (which I now cannot live without) when my 'packets sent' exceed my 'packets received'.
I must add that my bufferbloat is set to 'when high priority traffic is detected' with an 80/80 split.
This has been my gun setup irrespective of what the bufferbloat ratings were on dslreports or pingplotter to the respective servers that i play on.
Why I pay little attention to DSLReports or PingPlotter in this case is because, when I had excellent ratings and graphs on those two platforms, my gameplay experience was nowhere close to where it is now without using them as a benchmark.
So the question I have now is: could that reduction in packets received be countered with anti-bufferbloat settings on the QoS side?
RL317 got a reaction from Zippy in Please read this everyone now for cod mw2019
Yes it already covers those. The main problem for those who want to fine tune their setup is it would cover 1024:65535 source and destination, and UDP only. This causes two potential problems: a less important port being given identical priority to the main port being used for a game, and the inability to prioritise TCP should you need to. In fact @A7Legit once did some testing and found certain games use TCP 3074 to communicate with other players and the Demonware server. Of course the preset is great for a "set and forget" approach but it may not suit players in many different games.
Upload and download depends on the ports in use. If you're playing CoD and sending from 3074 to port 44000, for example, that's uploading. When you're receiving packets from 44000 that's downloading. Both are vital but, if anything, download is more important since there's zero advantage to your downstream traffic being caught in a queue; you'll simply be behind the action, trying to play catch up and dying before you see it in an FPS game for example, which affects your actions (upstream traffic) since they'll be disregarded. At least if your upstream traffic doesn't have priority you get more time to react in games where you're moving lol
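One way to reason about a one-direction rule is as a simple match on (protocol, source port, destination port). A hypothetical sketch, using the ranges from this thread, of what a download-only rule should and shouldn't match:

```python
# Hypothetical model of a one-direction traffic-prio rule:
# match incoming packets only (server ephemeral port -> UDP 3074).
# This models the intent, not how DumaOS actually implements it.
def matches(rule, proto, src, dst):
    r_proto, src_range, dst_range = rule
    return proto == r_proto and src in src_range and dst in dst_range

download_only = ("UDP", range(30000, 60001), range(3074, 3075))

print(matches(download_only, "UDP", 44000, 3074))  # True  (incoming)
print(matches(download_only, "UDP", 3074, 44000))  # False (outgoing)
```

Under this model an outgoing packet should never match a download-only rule, which is why seeing upload packets prioritised by such a rule (as described below) is so confusing.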
The worrying thing is I've been doing some testing trying to prioritise downstream only and it doesn't really seem to work. I had a rule with destination UDP 3074, source UDP 30000-60000 only yesterday and was seeing upload packets being prioritised. In fact there were far more upload packets being prioritised than down, and sometimes it wouldn't kick in at all.
Frankly that's impossible since UDP 3074 is never present on the server side, and it's pretty frustrating not knowing exactly how traffic prio is doing what it does.
The reason I want to test this is because classified games and the PSN preset do nothing for my experience in games. I've only had brief success trying custom rules.
RL317 got a reaction from Netduma Jack in What am i looking at here?
Hmm that's not good. Assuming there's nothing wrong with the testing device (the port could be fucked tbh) the modem might be the problem. My old laptop typically showed higher latency on the first hop when testing on PingPlotter for some reason.
At least we've ruled out the R1 because modems are typically easier and cheaper to replace. It could be a line issue too though.
Do you have any other modems lying around that you could test with? Going backwards one step at a time like this is good - we can rule out variables and get to the source of the problem. If a different modem doesn't help you can send the test results to your ISP and tell them to sort themselves out lol
RL317 got a reaction from Netduma Admin in Finally got bridge mode working with TP Link + R1 (but not sure how...)
I've used a variety of setups with my R1 over the last few years. I've used ISP combos which usually don't have bridge mode or any ability to run as a pure modem, I've used a TP Link model which can run as a modem only, combo or router only, and I've used the Huawei HG612 in bridge mode only since it doesn't have WiFi and wouldn't be worth using in router mode. I've also been with Virgin Media whose Superhub let me use bridge mode with no other configuration - it was as close to "plug and play" as you could get.
A few months ago I moved from BT to TalkTalk - an identical service except for the fact that BT uses PPPoE (1492 MTU and requires login info) and TalkTalk uses IPoE (1500 MTU and "plug and play"). The BT Hub didn't have bridge mode so I could use that as a combo behind the R1, or I could use a pure modem behind the R1 and enable PPPoE on the R1 to get online. When I first moved to TalkTalk I tried out all possible setups and since it's not PPPoE you should just be able to connect the R1 to a pure modem and be on your way, just like I could on Virgin Media cable with the Superhub in bridge mode. This was the case with the HG612 (set to IPoE_bridged or whatever it's called) where the R1 would now be the only device handling NAT etc, with its own public WAN IP. If I set up the TP Link in bridge mode, as I could easily do with the R1 before, I wouldn't get a connection on the R1 and I'd be stuck with either a local IP or none at all. TalkTalk doesn't use PPPoE but I tried enabling that on old firmware and DumaOS, with the fields left empty, but no bueno...
Today I decided to see if I could figure it out, since the TP Link is my favourite "modem" and it runs better when in bridge mode. I inadvertently found a fix when I came across a post on this forum from a guy trying to work out a different problem on TalkTalk, where he could already use bridge mode on the other combo TalkTalk provides (mine is the new WiFi Hub):
In the post he mentions a guide saying he should "disable DHCP in the LAN", which seemed weird to me because I'd have assumed DHCP would be disabled in bridge mode. I was extra confused because I didn't need to do this on the TP Link when running it in bridge mode. Anyway I decided to try it myself after enabling bridge mode on the TP Link and, lo and behold, I now have it working just fine with TalkTalk and the R1. As expected I don't need to do anything else on the R1 since TalkTalk is IPoE and not PPPoE. I just rebooted the R1 once and the old local WAN IP was replaced with a public one - something I could only achieve in bridge mode with the HG612. The only downside is I can no longer access my TP Link GUI; on BT I could simply disable PPPoE on the R1 and I'd be able to access it through an R1 WiFi connection. Not that big a deal really lol
Bit of an essay but I'm half thinking out loud, half wondering if any of you know why this only works if I disable DHCP on the TP Link when it's already in bridge mode. I wanted to write this as a kind of guide in case anyone else has any issues themselves with such a setup. Someone browsing the forum or even searching for answers on Google might get some use out of this if they stumble across it.
Cheers and Happy New Year
RL317 got a reaction from BIG__DOG in Finally got bridge mode working with TP Link + R1 (but not sure how...)
Hey. I know they're different to an extent (IPoE vs PPPoE) but my confusion lies in what I had to do to get it working; the differences between the two ISPs should be irrelevant in this case. The VM Superhub in bridge mode was plug and play with the R1, and while I had to enter PPPoE info on the R1 with BT I didn't have to disable DHCP. I mean, if it's in bridge mode it's not handling DHCP anyway right? How very brain bending 😂
RL317 got a reaction from GHOST-1-EC in Please read this everyone now for cod mw2019
As far as I know 9307 relates to party chat and 9308 is just what you need to connect to PSN in the first place - perhaps prioritising 9308 could help with the overall experience on any game. Let me know how it goes for you 👍
Tip: remember to use the tag feature on Wireshark to make it easier to sift through packets. I typically just type in UDP since I'd be looking for stuff relevant to CoD, but I have also used it to check info on ICMP pings after using PingPlotter before.
When you reach the first screen of Wireshark, before you double click an adapter to start a capture, check the IP of that connection. If you're bridging, it shouldn't be 192.168.88.x like most devices connected to an R1 would be (does the XR500 use the 192.168.88.x range too?), but it will still show up as a local IP. Mine here was 192.168.137.136:
If you identify that and then the server you're connecting to, you can click either source or destination at the top and order your packets so that only one direction of traffic is visible at a time, just to keep things tidy and easy to flick through. You'll notice I also have the UDP filter applied (make sure to hit the arrow just to the left of "expression" at the end of the bar to apply).
Another tip: at the top left you'll see a blue fin or a red stop icon; one will be greyed out depending on whether the capture is running or has been stopped. When it's running you'll see a total number of packets captured at the bottom right of the screen, and which percentage of those are visible if you have a filter applied. If you stop the capture and delete the UDP filter to display everything again, the packet counter on the bottom right will now show you if you've had any packet loss at any point during the capture
RL317 reacted to GHOST-1-EC in Please read this everyone now for cod mw2019
I've added ports 9307 and 9308, as both Wireshark and the XR500's UPnP reported these in use.
Even the previously listed servers that I had tagged as bad, and even tried blocking/denying (which never worked from day 1), are now giving me a lot better gameplay.
I don't want to jinx anything but this workaround will help a lot of guys.
Cheers to even better connections and gameplay.