lllRL last won the day on December 30 2018

lllRL had the most liked content!

About lllRL

  • Rank
    Forum Legend
  • Birthday November 14

Profile Information

  • Interests
    Hi :) https://www.youtube.com/c/RL1411

  1. Question on dns R1 duma OS
    Yup, red is on. There are problems with entering the DNS and MTU either way, though, and the same goes for UPnP working - everything has to be set up on the old firmware before upgrading for it to stick, for some reason. And if you do that, then decide you want to port forward and disable UPnP, the latter will remain enabled 🤨
  2. I upgraded back to DumaOS today and my first game on 7ms was crispy lol. I don't know what's happening anymore 😭
  3. I tried again today. Same old nonsense... forced one single server and I got 2 games that were normal in around 2 hours. The weird lag is actually getting worse... it's like playing on a one bar. Had PingPlotter running alongside (obviously my PS4 had the priority with hyper lane) and it was flawless, yet in game I'm getting hitmarkers after turning 180 degrees and running away, dying half a second around corners, and just generally getting joked.
The final straw came when I used this ability called FTL Jump (I don't know if you've played IW, but it's like a short burst of super speed where you can fly over/past someone) over someone's head, flicked back onto them instantly on max sensitivity, engaged and got three shots off before dying (with hitmarkers after I died, naturally). The killcam showed a guy from Russia with no quickdraw on his gun, PACKET LOSS ICONS and what looked like 3 sensitivity reacting late, turning and wildly hipfiring with thumb-spasm aim.
The best part? From his perspective, I turned like a snail. He saw me turning on what looked like 1 sensitivity. On my screen I went over his head at the speed of light and snapped back to start firing at him before he had even turned 90 degrees, while he was firing off to my right hitting me. I was facing him a good half a second before he turned to me, yet on his screen he was the one to turn 180 degrees first, on what was a quarter of my sensitivity at most. How many hundreds of milliseconds of lag would there have to be to see that much desync, with that large a discrepancy, while I'm on a 7ms ping to this server? I just don't know anymore...
  4. All I know is I lag and teleport on my own custom game host vs bots LMAO
  5. I guess hyper lane can't help my broken internet even if I've got low stable pings lmao https://youtu.be/a9RdMILcVKg Edit: why would anyone hit dislike but not say anything? All I've tried to do on this thread is help people. Weirdo
  6. Apologies for the delay in replying. My Wireshark capture was saved on my old laptop, and I simply forgot to fish it out and load it up 😂 I've been skimming through, and the majority of the 7246 packets from this capture were UDP (86%), with some TCP (13%); the rest was DNS, ICMP, DHCP and TLS. Most UDP involved 3074 on one end (40040 on the other), but some UDP traffic was 3074 to 3074 - this was between players. I've been looking up some of these IPs from the single game I captured and I've seen results from ISPs (in fact the two I found were American and Omani - NICE matchmaking). Then there was TCP 3074 (me) to 64755 (Demonware in Ireland - the backend servers). The rest involved TCP 443, and a few Google searches show IPs registered to Amazon AWS and Akamai, which I think has something to do with PSN. There were too many ports in communication with 443 to mention (often 60000-65535), and 443 appeared both on my end and the server end depending on what I was communicating with.
Interestingly, I see a lot of large packets here. TCP 3074 sent me a lot of 1494 byte packets, while TCP 443 from many different sources sent me 1506 byte packets (WTF?). The largest game packets (over UDP 3074), both to and from me, were 1322 bytes. I guess that counters the claim that game packets are always tiny lol
What's sad about checking all this is that I've been doing my own testing with custom single hyper lane rules, and I'd already been prioritising 3074 (UDP alone, then both) as well as 443, because I found out about the latter from a couple of people here on the forum. I had everything else set up optimally (70/70 QoS), small filter radius to guarantee games at either 7ms or 13ms, and games played like shit as usual. I had two random CoD games with randomly superb hit detection, and nothing was out of the ordinary in those games. My setup was the same, the server was the same... and the next game it would go back to awful again.
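(Editor's sketch.) The kind of per-protocol breakdown described above can be tallied from any exported capture in a few lines of Python. The packet records below are made up for illustration; only the shape (protocol, ports, sizes) mirrors what the post describes:

```python
from collections import Counter

# Toy records standing in for parsed capture rows:
# (protocol, src_port, dst_port, size_bytes). Illustrative values only.
packets = [
    ("UDP", 3074, 40040, 1322),
    ("UDP", 3074, 3074, 100),
    ("TCP", 3074, 64755, 1494),
    ("TCP", 61000, 443, 1506),
    ("DNS", 53124, 53, 80),
]

counts = Counter(proto for proto, *_ in packets)
total = len(packets)
for proto, n in counts.most_common():
    print(f"{proto}: {n} packets ({100 * n / total:.0f}%)")

# Largest packet over UDP 3074 on either end - the "game" traffic:
game_sizes = [size for proto, src, dst, size in packets
              if proto == "UDP" and 3074 in (src, dst)]
print("largest game packet:", max(game_sizes), "bytes")
```

On a real capture you would feed in rows exported from Wireshark (File > Export Packet Dissections > As CSV) instead of the hand-written list.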
I also used the network monitor to find the ports in use on the server end. When I connected to online services, many would pop up at once like this: ... yet when I spawned into a game, only 3074 and one other port (on the server end) would show up: So I tried adding just UDP 3074 both and UDP 37350 both to the hyper lane, but it made no difference LOL
Today I got sick of seeing disgusting lag on a flat 7ms to my local server (according to the R1), and I found out that Battlefield games have different latency detection methods: the scoreboard shows ping results, while the network graph shows UDP latency for your game traffic - in other words, your true game latency. I installed BF4, found a local empty server (based in Amsterdam) and pinged it on the geofilter. I got a stable 14ms that moved about 0.3ms in either direction. I spawned into the server on BF4, opened up the network graph and got nothing like what the R1 showed. While the R1 stayed at 14ms pretty much dead on, the in-game latency fluctuated between 6 and 41ms, and spikes in game were never matched by spikes on the geofilter. Must be why I'm seeing random hundreds of milliseconds of delay on hitmarkers in CoD ¯\_(ツ)_/¯ HaPpY dAyS
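(Editor's sketch.) The gap between the flat router-side ping and the jumpy in-game readout is easy to put numbers on. A minimal Python sketch, using invented samples shaped like the figures quoted above (~14ms geofilter ping vs 6-41ms in-game latency):

```python
import statistics

# Illustrative samples in ms, not real measurements: a flat router-side
# ping series vs a fluctuating in-game UDP latency series.
geofilter_ping = [14.0, 13.8, 14.2, 14.1, 13.9, 14.0]
in_game_latency = [6, 41, 18, 30, 9, 27]

def spread(samples):
    """Return (mean, peak-to-peak spread) for a series of latency samples."""
    return statistics.mean(samples), max(samples) - min(samples)

print("geofilter:", spread(geofilter_ping))   # tight peak-to-peak spread
print("in-game:  ", spread(in_game_latency))  # far wider spread
```

Two series can have near-identical means while one has thirty-plus milliseconds of jitter - which is exactly the situation the post describes.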
  7. Yeah I'm fairly sure there are one or two TCP ports involved in actual gameplay. It might be worth me looking back over my saved Wireshark captures; I only filtered out UDP for screenshots last time, but everything is in there. I'll see if I can find anything that looks relevant.
  8. I've noticed that a lot myself - I believe it has something to do with update rates if you're playing older games, as many of them have far higher send/upstream update rates (as high as 100Hz, i.e. 100 updates per second) than receive/downstream rates (as low as 10Hz). Whenever I've tested IW I see around three times as many prioritised upload packets as download. On newer games like BO4 (which has 60/62Hz update rates) there shouldn't be such a difference in the number of packets prioritised each way, so it makes sense you'd have a worse experience if traffic in one direction isn't prioritised as much. Maybe you could use the download and upload sliders to even them out somehow - while upload may be prioritised more, you could level it out with less of a reduction on the upload slider? Idk, this is all speculation at this point lol
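(Editor's sketch.) As a back-of-the-envelope illustration of how asymmetric update rates turn into asymmetric packet counts, here's a tiny Python calculation. It assumes one packet per update tick, ignoring batching, ACKs and retransmits, so treat it as a rough bound rather than a measurement:

```python
# The 100 Hz / 10 Hz figures are the ones quoted in the post above.
def expected_packets(update_rate_hz: float, seconds: float) -> int:
    """First approximation: one packet per update tick."""
    return round(update_rate_hz * seconds)

up = expected_packets(100, 60)   # client -> server at 100 Hz, over a minute
down = expected_packets(10, 60)  # server -> client at 10 Hz, over a minute
print(up, down, f"ratio {up / down:.0f}:1")
```

Even with real-world overheads shrinking the gap, a rate asymmetry like this would explain seeing several times more prioritised upload packets than download.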
  9. Yes, it already covers those. The main problem for those who want to fine-tune their setup is that it would cover 1024:65535 source and destination, and UDP only. This causes two potential problems: a less important port being given identical priority to the main port being used for a game, and the inability to prioritise TCP should you need to. In fact @A7Legit once did some testing and found certain games use TCP 3074 to communicate with other players and the Demonware server. Of course the preset is great for a "set and forget" approach, but it may not suit players in many different games.
Upload and download depends on the ports in use. If you're playing CoD and sending from 3074 to port 44000, for example, that's uploading. When you're receiving packets from 44000, that's downloading. Both are vital but, if anything, download is more important, since there's zero advantage to your downstream traffic being caught in a queue; you'll simply be behind the action, trying to play catch-up and dying before you see it in an FPS game, for example, which affects your actions (upstream traffic) since they'll be disregarded. At least if your upstream traffic doesn't have priority you get more time to react in games where you're moving lol
The worrying thing is I've been doing some testing trying to prioritise downstream only, and it doesn't really seem to work. Just yesterday I had a rule with destination UDP 3074, source UDP 30000-60000 only, and I was seeing upload packets being prioritised. In fact there were far more upload packets being prioritised than down, and sometimes it wouldn't kick in at all. Frankly that's impossible, since UDP 3074 is never present on the server side, and it's pretty frustrating not knowing exactly how traffic prio is doing what it does. The reason I want to test this is that classified games and the PSN preset do nothing for my experience in games. I've only had brief success trying custom rules.
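(Editor's sketch.) The upload/download distinction above boils down to which end of the flow holds the console's port. A hypothetical Python sketch, assuming (as the post does) that the console side of CoD traffic always sits on UDP 3074 - the function name and constant are mine, not DumaOS's:

```python
# Assumption from the post: the console's local game port is UDP 3074.
CONSOLE_PORT = 3074

def direction(proto: str, src_port: int, dst_port: int) -> str:
    """Classify a packet as upload, download, or unmatched for a CoD-style rule."""
    if proto != "UDP":
        return "unmatched"
    if src_port == CONSOLE_PORT:
        return "upload"      # console -> server
    if dst_port == CONSOLE_PORT:
        return "download"    # server -> console
    return "unmatched"

print(direction("UDP", 3074, 44000))  # upload
print(direction("UDP", 44000, 3074))  # download
print(direction("TCP", 3074, 443))    # unmatched (rule is UDP only)
```

Under this logic, a rule with destination UDP 3074 should only ever match downstream packets - which is why the behaviour described in the post looks wrong.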
  10. Thanks... lol There are different settings between the two that don't translate. With the HG I think you can use ipoe_bridged (or is it just ip_bridged?), which sets it all up fine with no other tweaks required. That's why factory resetting gets you straight online. None of those other settings (like 802.1p/q) are on the TP Link, other than VLAN, which is obviously 101. There's even a drop-down menu on the TP which asks for your ISP (there's an option for TalkTalk_VDSL) before you enter bridge mode. The VM Superhub definitely didn't require turning off DHCP to get it working in bridge mode with the R1 - you just enabled it and away you go 🤨
  11. I'm not using an HG612 lol. I have no issues with that. It's the TP Link I couldn't bridge before.
  12. Hey. I know they're different to an extent (IPoE vs PPPoE), but my confusion lies in what I had to do to get it working; the differences between the two ISPs should be irrelevant in this case. The VM Superhub in bridge mode was plug and play with the R1, and while I had to enter PPPoE info on the R1 with BT, I didn't have to disable DHCP. I mean, if it's in bridge mode it's not handling DHCP anyway, right? How very brain-bending 😂
  13. I've used a variety of setups with my R1 over the last few years. I've used ISP combos, which usually don't have bridge mode or any ability to run as a pure modem; I've used a TP Link model which can run as a modem only, combo or router only; and I've used the Huawei HG612 in bridge mode only, since it doesn't have WiFi and wouldn't be worth using in router mode. I've also been with Virgin Media, whose Superhub let me use bridge mode with no other configuration - it was as close to "plug and play" as you could get.
A few months ago I moved from BT to TalkTalk - an identical service except for the fact that BT uses PPPoE (1492 MTU, requires login info) and TalkTalk uses IPoE (1500 MTU and "plug and play"). The BT Hub didn't have bridge mode, so I could either use it as a combo behind the R1, or use a pure modem behind the R1 and enable PPPoE on the R1 to get online. When I first moved to TalkTalk I tried out all possible setups, and since it's not PPPoE you should just be able to connect the R1 to a pure modem and be on your way, just like I could on Virgin Media cable with the Superhub in bridge mode. This was the case with the HG612 (set to IPoE_bridged or whatever it's called), where the R1 would now be the only device handling NAT etc, with its own public WAN IP.
If I set up the TP Link in bridge mode, though, as I could easily do with the R1 before, I wouldn't get a connection on the R1 and I'd be stuck with either a local IP or none at all. TalkTalk doesn't use PPPoE, but I tried enabling that on old firmware and DumaOS, with the fields left empty - no bueno... Today I decided to see if I could figure it out, since the TP Link is my favourite "modem" and it runs better when in bridge mode.
I inadvertently found a fix when I came across a post on this forum from a guy trying to work out a different problem on TalkTalk, where he could already use bridge mode on the other combo TalkTalk provides (mine is the new WiFi Hub). In the post he mentions a guide saying he should "disable DHCP in the LAN", which seemed weird to me, because I'd have assumed DHCP would be disabled in bridge mode anyway. I was extra confused because I didn't need to do this on the TP Link when running it in bridge mode before. Anyway, I decided to try it myself after enabling bridge mode on the TP Link and, lo and behold, I now have it working just fine with TalkTalk and the R1. As expected, I don't need to do anything else on the R1, since TalkTalk is IPoE and not PPPoE. I just rebooted the R1 once and the old local WAN IP was replaced with a public one - something I could previously only achieve in bridge mode with the HG612.
The only downside is I can no longer access the TP Link GUI; on BT I could simply disable PPPoE on the R1 and I'd be able to access it through an R1 WiFi connection. Not that big a deal really lol
Bit of an essay, but I'm half thinking out loud, half wondering if any of you know why this only works if I disable DHCP on the TP Link when it's already in bridge mode. I wanted to write this as a kind of guide in case anyone else has issues with such a setup - someone browsing the forum or even searching for answers on Google might get some use out of it. Cheers and Happy New Year
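(Editor's sketch.) A quick way to verify that a bridge-mode setup actually worked is to check whether the WAN address the router picked up is public or private. A small Python sketch using the standard library's `ipaddress` module - the addresses below are just examples:

```python
import ipaddress

def is_public(addr: str) -> bool:
    """True if the address is globally routable (a real public WAN IP)."""
    return ipaddress.ip_address(addr).is_global

# A private/local address means the modem is still doing NAT in front
# of the router; a public one means bridge mode took effect.
print(is_public("192.168.88.1"))  # False: still a local IP
print(is_public("8.8.8.8"))       # True: a globally routable address
```

In the situation described above, the "old local WAN IP was replaced with a public one" is exactly the transition this check detects.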
  14. As far as I know, 9307 relates to party chat and 9308 is just what you need to connect to PSN in the first place - perhaps prioritising 9308 could help with the overall experience on any game. Let me know how it goes for you 👍
Tip: remember to use the filter bar in Wireshark to make it easier to sift through packets. I typically just type in UDP, since I'd be looking for stuff relevant to CoD, but I've also used it to check info on ICMP pings after using PingPlotter. When you reach the first screen of Wireshark, before you double click an adapter to start a capture, check the IP of that connection. If you're bridging it shouldn't be 192.168.88.x like most devices connected to an R1 might be (does the XR500 have the 192.168.88.x range too?), but it would still show up as a local IP. Mine here was If you identify that, and then the server you're connecting to, you can click either source or destination at the top and order your packets so that only one direction of traffic is visible at a time, just to keep things tidy and easy to flick through. You'll notice I also have the UDP filter applied (make sure to hit the arrow just to the left of "expression" at the end of the bar to apply).
Another tip: at the top left you'll see a blue fin or a red stop icon; one will be greyed out depending on whether the capture is running or has been stopped. When it's running you'll see a total number of packets captured at the bottom right of the screen, and what percentage of those are visible if you have a filter applied. If you stop the capture and delete the UDP filter to display everything again, the packet counter at the bottom right will now show you if you've had any packet loss at any point during the capture
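(Editor's sketch.) The bottom-right "captured vs displayed" counter described above can be mimicked in a few lines of Python - a toy version with invented protocol labels, applying a simple "UDP only" filter:

```python
# Illustrative protocol labels standing in for captured packets.
packets = ["UDP", "UDP", "TCP", "UDP", "DNS", "UDP"]

# A crude display filter: show UDP only, like typing "udp" in Wireshark.
displayed = [p for p in packets if p == "UDP"]
pct = 100 * len(displayed) / len(packets)
print(f"Captured: {len(packets)}  Displayed: {len(displayed)} ({pct:.1f}%)")
```

This is the same arithmetic Wireshark performs: total captured packets, the subset matching the display filter, and the percentage between them.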
  15. Huh, no need. After many, many hours of constantly trying, it finally prompted me for my username and password and let me on. Big fat question mark? 😂