
Bert

Members
  • Content Count: 657
  • Joined
  • Days Won: 9

Reputation Activity

  1. Like
    Bert got a reaction from Netduma Alex in xr300 super slow, settings or problem?   
    The XR500 is a bit better than the XR300, but it depends heavily on your usage. If you use a lot of WiFi, the XR500 is better; if you are all wired, you will most likely not see much benefit.
     
    The speed problem you have has to be a software or configuration issue, as both the WAN and LAN ports on that router are 1 Gbit, or an issue with other parts of your setup.
     
    For reference though, check that everything is right in your settings. Go to Settings > Monitoring > Statistics and look at your LAN connection. It should say 1000M/Full. If it says 100M/Full, then that's your issue; you might need to try another Ethernet cable or redo the connections.
     
    Some routers can be sensitive to the type of cable. I once had a cable with a bit of damage: on my ISP router it would drop to 10M/Full, on my XR500 it would drop to 100M/Full. After cutting off 10cm and redoing the connector, it went to 1000M/Full. These speeds get negotiated based on signal quality as far as I am aware, so that's why it can vary between devices.
     
    I just noticed while looking that up that my PS4 on the XR500 drops to 100M/Full in rest mode; as soon as it wakes up it goes back to 1000M/Full. So it might be a software glitch, but I would investigate the physical cable first. The cable needs to be Cat 5e or higher for 1 Gbps.
     
    The reason I suspect the cable is that when the link tops out around 100, it's usually a cable issue. 100M needs 2 twisted pairs and 1000M needs all 4 twisted pairs; if there is a problem with one of them, the link will drop to 100M.
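    If you have a PC on the same router, a quick way to read the negotiated link speed from software is something like the sketch below (a rough illustration; it assumes the psutil package is installed, and interface names will differ per machine):

```python
# Minimal sketch: print the negotiated speed/duplex of each active NIC using psutil (assumed installed).
import psutil

for name, stats in psutil.net_if_stats().items():
    if not stats.isup:
        continue
    # speed is reported in Mbps by the OS; 1000 = gigabit, 100 usually points at a cable/negotiation problem
    print(f"{name}: {stats.speed} Mbps, duplex={stats.duplex}")
```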
  2. Like
    Bert got a reaction from bagsta69 in Must Be Magic?!   
    Not even throttling; there are instances where the lagger gains the advantage simply due to in-game mechanics.
     
    Say I play at 150ms and you at 4ms, and assume the game has approximately 50ms of interpolation lag on 60Hz servers.
     
    We are playing on Frequency. You run through the middle; I camp one of the hallways on either side of the map. As soon as I come out of cover, aim and shoot at you, this needs to be put into an update by my console (0-16ms), sent to the server (75ms), processed into the master gamestate (0-16ms in theory, the time between updates), and then sent to your console (2ms). Then it gets processed by your client, with 50-83ms of interpolation lag (time between updates plus frame rounding). In the best case it takes 127ms for me to become visible on your screen, in the worst case 192ms, on average about 160ms. If I use a gun that kills in 300ms, I can have killed you before you have finished raising your gun.
     
    Also in the above example: in CoD damage is dealt directly, so there is no interpolation delay, but there is a delay for it going into the update to the server (0-16ms). So you would actually receive damage after just 0+75+0+2 = 77ms best case, or 16 (update rate) + 75 + 16 + 2 + 32 (frame rounding) = 141ms worst case. You can start receiving damage, or be instakilled, before I even fully appear on your screen. My shots lag, but they register at the server after 75ms best case or 107ms worst case.
     
    What a lot of people forget is that a high-latency connection gives the lagger a window to shoot at you without taking flinch, because you can't shoot back yet. Where it hurts him is that his hitmarkers come back at a delay.
     
    If the situation above were reversed:
    You would come out of cover and become visible to me after 2+0+75+50 = 127ms best case, or 16+2+16+75+83 = 192ms worst case. Same numbers. But when you fire, your shots hit the server within 2ms best case or 18ms worst case. So I could already be dead at the server before I even see you moving.
     
    Now take 2 players who both have 4ms to the server:
    Best case for coming out of cover: 0+2+0+2+50 = 54ms
    Worst case for coming out of cover: 16+2+16+2+83 = 119ms
    Average: about 87ms delay.
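    To make the bookkeeping explicit, here is a small sketch that reproduces the arithmetic above; the tick interval, interpolation window and one-way delay = ping/2 are the same assumptions already used in this post:

```python
# Rough sketch of the "time until an opponent appears on your screen" arithmetic above.
# Assumptions from the post: 60Hz server (16ms tick), 50-83ms client interpolation, one-way delay = ping / 2.

TICK_MS = 16                        # time between server updates
INTERP_MIN, INTERP_MAX = 50, 83     # client interpolation + frame rounding

def visibility_delay(shooter_ping_ms, victim_ping_ms):
    """Best/worst case in ms before the shooter becomes visible on the victim's screen."""
    up = shooter_ping_ms / 2        # shooter -> server
    down = victim_ping_ms / 2       # server -> victim
    best = 0 + up + 0 + down + INTERP_MIN
    worst = TICK_MS + up + TICK_MS + down + INTERP_MAX
    return best, worst

print(visibility_delay(150, 4))     # ~(127.0, 192.0): the 150ms vs 4ms example
print(visibility_delay(4, 4))       # ~(54.0, 119.0): both players at 4ms
```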
     
    So you see, it's not just your ping; the other guy's ping plays a big role in the gunfight too.
     
    And this assumes clean connections, which is usually not the case in an online environment, and a server that is not limited by CPU cycles.
    If you do the math for LAN with no latency, you will see that it's still not 100% fair.
     
    In reality, latency is usually not symmetrical, game updates are not received at exact 16ms intervals, the server can be at max capacity, and so on. The list goes on forever.
     
    A low ping just means that you have the shortest possible path to the server: fewer hops in between, so the least chance of something going wrong. You have the advantage when coming out of cover and in straight-up fights, but there are times when you are at a disadvantage.
  3. Like
    Bert got a reaction from CrossFitKila717 in Must Be Magic?!   
  4. Thanks
    Bert got a reaction from Netduma Alex in Must Be Magic?!   
  5. Like
    Bert got a reaction from AI_EXREYFOX in Nighthawk S8000 switch and XR700 DumaOS Geo filtering does not work.   
    It's an overpriced piece of gear in a fancy case if you ask me. Most people are better off buying a regular 1Gbit managed switch, as it will perform the same functions. I don't really buy the claim about lower-latency switching for gaming, and even if it were true, 0.5ms isn't going to matter. XR700 owners would be better off buying the SX10 and utilizing the 10G SFP connections, or just buying another switch and using the 10G connection. But for most users this is probably overkill.
     
    As for QoS, I feel that if you set the uplink port to critical priority, it won't do anything, as it will simply prioritize all your traffic coming in over the uplink port. The only thing you can realistically apply QoS to is the upload from individual devices. Actually, I think Netgear also states this in the documentation: QoS is primarily there for upload, and QoS on incoming traffic is best left to your provider, or in the case of a router like the R1/XR, simply left to your router.
     
    I would only recommend this device if you are stuck with an ISP router or a shared connection and want QoS somehow, but even that can probably be done with a regular managed switch. Also, this method is very rough, since WiFi falls outside your QoS if you attach the switch to a router that doesn't support QoS.
     
    It makes sense in the setup PharmDawgg is running, since he's connecting his NAS to it and using link aggregation to the XR700, which frees up a bunch of bandwidth. If you connect your non-gaming stuff directly to the XR700 and then access the NAS, that traffic will hog the bandwidth between the switch and the router and interfere with your gaming traffic. I would actually connect the NAS straight to the router instead in that case, same with the X1X, if that is possible placement-wise of course.
  6. Like
    Bert reacted to Netduma Alex in Can we please get (or crowd fund) a new Netduma branded router?   
    Well, make sure you keep an eye on the forums over the next few months, and also make sure you're signed up to the newsletter, because IF we had any amazing new products to announce, and I'm not saying we do, but IF we were to announce an amazing new product that was so sexy you'd have to buy it instantly, we'd certainly make sure everybody knew about it via the newsletter.
    https://netduma.us11.list-manage.com/subscribe/post?u=87a7aef5725cff4be341f1b26&id=5341b993d5
  7. Like
    Bert got a reaction from Netduma Fraser in Accessing NetDuma in cascaded router configuration   
    My solution is to leave WiFi on the R1 enabled and connect to it over WiFi.
     
    Then you can adjust the Geofilter settings etc. simply from your phone, iPad or another WiFi-equipped PC.
  8. Like
    Bert got a reaction from Netduma Alex in Call of Duty lagcompensation/hit detection   
    What you're actually doing with connection throttling is delaying your upload packets so much that it gives you more time to respond. You walk around a corner and see your enemy, but your system hasn't updated the server yet, so you get more time to shoot. It's simply cheating: you're creating congestion on the upload side.
     
    This is most likely an issue with your internet, not with the game. Most providers have traffic shaping settings where they prioritize traffic like live video streaming and so on, and this actually hurts online gaming: your UDP packets often get buffered at routers on the way to and from the server, so you end up with more download latency than upload latency.
     
    This is also why these TTV and YouTube streamers are often lagging on your screen while they seem to be destroying the lobby.
  9. Like
    Bert got a reaction from Netduma Fraser in QOS not working on Upload - PPPoE VLAN7 Telekom   
    I went back to the switch-based VLAN setup and tried again, same issue: the light flashes white as in "Internet enabled". It turned out I just needed to reboot the router and then it works. For some reason it won't take the change on the fly; I think it wasn't getting the proper DNS.
     
    So you need to set your switch up like this:
    Port to/from the modem/ONT: tagged with the ISP's VLAN (outgoing packets retain the VLAN tag)
    Port to/from the XR500: untagged with the ISP's VLAN (strips the VLAN tag before sending traffic to the router)
    Port VLAN ID (PVID) on the XR500 port: the ISP's VLAN (adds the VLAN tag to traffic coming into the switch from the router)
     
    This needs an 802.1Q-capable managed switch.
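    If the tagged/untagged terminology is confusing, a frame-level view can help. A small sketch with Scapy (assumed installed; VLAN 7 is just the example ID from this thread) shows what the tag actually is:

```python
# Illustration only: an 802.1Q "tagged" frame versus an untagged one (Scapy assumed installed).
from scapy.all import Ether, Dot1Q, IP

untagged = Ether() / IP(dst="192.0.2.1")                  # what the XR500's WAN port sends and expects
tagged   = Ether() / Dot1Q(vlan=7) / IP(dst="192.0.2.1")  # what the ISP expects on the modem/ONT side

untagged.show()
tagged.show()   # note the extra 802.1Q layer carrying vlan=7
```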
  10. Like
    Bert got a reaction from ArcaPulse in Bt Smarthub 2 to DrayTek 130   
    Maybe YT is actually using a lot more bandwidth than you think. 
     
    What I see from watching YT on my own systems is that it sends its data in bursts. It's not a continuous stream; every time it refills the buffer you get a data burst, so to speak. The network monitor might actually be showing you an average? There may be some graph smoothing going on?
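    As a rough illustration with made-up numbers: bursty traffic averaged over a window looks much smaller than its peaks, which is exactly what a smoothed graph would show:

```python
# Made-up example: a stream that bursts at 40 Mb/s for 2 seconds out of every 10 seconds.
burst_rate_mbps, burst_seconds, period_seconds = 40, 2, 10

average_mbps = burst_rate_mbps * burst_seconds / period_seconds
print(average_mbps)  # 8.0 Mb/s on a smoothed graph, even though the line sees 40 Mb/s spikes
```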
     
  11. Like
    Bert got a reaction from Netduma Alex in Try this   
    I keep my bandwidth settings higher than what the connection actually is. For example, my subscription says 300/300, but a speed test suggests about 340 up and down, so I leave the values at 350/350 and then dial that down to 300/300 with the anti-bufferbloat setting.
  12. Like
    Bert got a reaction from ArcaPulse in Bt Smarthub 2 to DrayTek 130   
    Yes, I play on PS4.
     
    I have DumaOS Classified Games off though, more as an experiment to see that it doesn't prioritize anything else by accident, and I play 99% CoD. I don't think it makes a difference; I just haven't changed it back yet.
  13. Thanks
    Bert got a reaction from ArcaPulse in Bt Smarthub 2 to DrayTek 130   
    I wonder if the rules are not the other way around?
     
    For CoD I have UDP source 3074, destination 1024-65535.
     
    My QoS seems to work fine, since it lights up when I play CoD and goes off when I shut the game down. Also, in the traffic overview it counts both incoming and outgoing packets under high priority.
  14. Like
    Bert got a reaction from ArcaPulse in Bt Smarthub 2 to DrayTek 130   
    This could be an issue with your ISP's network rather than your home system. It's unlikely that any sort of bufferbloat setting would fix it in that scenario.
     
    On that last PingPlotter pic, hops 4 and 7 seem to be causing issues: 4 shows packet loss and 7 high latency.
     
    I have something similar in the Netherlands. There are two server clouds in Amsterdam hosting BO4 games: one is run by Vultr and fed by the NTT backbone, and the other is Internap, fed by the Cogent backbone. One of the Cogent hops is giving mega ping spikes; this is visible in DumaOS and in PingPlotter. The Vultr cloud, however, is totally fine. 
  15. Like
    Bert got a reaction from AzeRuS94 in Can't download and update games ps4 xr500   
    Did you give it VLAN 10?
     
    The line at the top should read something like: 1_INTERNET_B_VID_10
  16. Like
    Bert got a reaction from AzeRuS94 in Can't download and update games ps4 xr500   
    The reason your bridge mode didn't work is that you bridged one extra port (previous page), and if it's done like that you need two IP addresses. The first connection you saw is programmed in by your provider; the bridge mode is extra. So if you had disabled the first connection it could have worked, because otherwise that connection snags the IP address meant for the XR500. The multicast VLAN should just have been set to 10, which seems to be the default for your connection judging by the name of the first connection. Actually, if you look at the page for your present WAN connection, just changing the setting from Routing to Bridge and setting the binding to LAN1 would already have done the trick. Your present setting binds to SS
     
    I have a Huawei router from my ISP that is programmed like that, and it only works for setups where you have 2 separate WANs or access to multiple public IPs.
     
    The reason you had issues downloading games on your PS4 is the DNS setting on the XR500. You have added a local IP as DNS server, the one of your Huawei router. If this goes out of whack it will prevent downloading and doing things like uploading videos. I would set that to automatic from the ISP so it just uses your ISP's DNS, or to something like Google (8.8.8.8) or Cloudflare, so you know it won't be an issue.
     
    Important to note: putting something in the DMZ just opens all ports to that device; it will never get rid of double NAT.
    If you use Huawei router - XR500 (DMZ) - Xbox, for example, the Huawei forwards all ports to the XR500, but you still need to either set up port forwarding on the XR500 or put the Xbox in the XR500's DMZ. Ideally it should all work via UPnP, but I have found that it doesn't always work right on the XR500. For CoD on PS4 I just forward UDP 3074 and that gives you an open NAT regardless of Geofilter settings. But I don't know how that works if you use an Xbox and a PS4 at the same time; ideally you need UPnP for that. You can't forward ports to, say, the PS4 and use the Xbox in the DMZ; in that case 3074 will be taken by the PS4 and the Xbox will still have NAT issues.
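    If you want to sanity-check that a forward of UDP 3074 actually reaches the intended device, a minimal listener like the sketch below (a hypothetical test with plain Python sockets, not a router feature) can be run on that device while you send a test datagram to your public IP from outside the network:

```python
# Hypothetical sanity check: run this on the LAN device that UDP 3074 is forwarded to,
# then send a datagram to your public IP on port 3074 from outside.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 3074))       # listen on the forwarded port
print("Waiting for a test packet on UDP 3074...")
data, addr = sock.recvfrom(2048)   # blocks until something arrives
print(f"Received {len(data)} bytes from {addr} - the forward works.")
```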
     
    You could try something else:
    Plug your WAN ethernet cable straight into the XR500.
    Enter the static IP data like you had in your Huawei router.
    Under advanced settings, go to VLAN / Bridge and enable the VLAN / Bridge settings. Take the bottom setting and add VLAN 10.
    This should make your XR500 the main router and simplify things.
  17. Like
    Bert got a reaction from Netduma Alex in Long Story Short.....   
    The thing is just that the ISP can't always help it.
     
    I am in the Netherlands at the moment playing on my 250/25 cable connection, and normally I connect to Vultr in Amsterdam. Today though I had an unknown host designated as a peer.
     
    It turns out this is Internap's cloud service in Amsterdam. I had some ping spikes and ran a PingPlotter log out of curiosity. As you can see in the picture, the actual bufferbloat occurs on the Internap Corporation network. There is also another carrier between my ISP and Internap, and that is Cogentco. When I connect to Choopa/Vultr that carrier is not there; instead it's ISP -> NTT -> Vultr.
     
    So: two different game servers in the same city with totally different routing.
     
    Some ISPs might specialize in low latency for gaming, but if the backbone they use has one bad hop, that can already screw you over.
     
     
     
     

  18. Like
    Bert got a reaction from CrossFitKila717 in Long Story Short.....   
  19. Like
    Bert got a reaction from CrossFitKila717 in Duma r1 vs xr500   
    It doesn't make a difference if you just use it as a gaming router.
     
    There are only a few situations where the XR500 has the upper hand:
    1. When you need PPPoE / VLAN tagging
    2. When you have a lot of WiFi devices and use QoS on them
    3. When you have a lot of bandwidth available, over 400-ish Mbps, and you use it so heavily that you need QoS
     
    The WiFi is much better on the XR500, but if budget is an issue, this is easily solved by simply buying a WiFi AP and using that with the R1. Before I had an XR500 I used the R1 with one of my old ISP routers configured as an AP, and that worked just fine. Not supporting 5GHz WiFi is one of the R1's biggest drawbacks IMHO.
     
    On my XR500 in Asia I generally have the WiFi turned off and only have a PC and PS4 connected wired, and both the XR500 and R1 perform identically. The only thing is that I need a PPPoE setup, and that's why my XR500 stays; otherwise I would have left the R1 in there and let my XR500 run my home WiFi network. If I wanted to use the R1 there, I could simply get an EdgeRouter and run the R1 behind that, so there are loads of options.
     
    Your games won't run better on an XR500, assuming you're wired, but it simply does everything else better.
  20. Like
    Bert got a reaction from Netduma Fraser in Long Story Short.....   
  21. Like
    Bert got a reaction from CrossFitKila717 in Long Story Short.....   
    If you keep QoS active, it will most likely not work for you, as the router just dumps the other traffic; it really depends on the setup. Also, when it does work, you need to adjust your playstyle: it only works for very aggressive players. You have to remember that if the other guy is around the corner first, it works to your disadvantage.
    Try turning QoS off and running a big upload.
     
    There are more advanced ways of doing it involving a PC with a shared connection, but I consider it cheating, so I'll leave it at that.
     
  22. Like
    Bert got a reaction from tallus in Black Ops 4 - Getting P2P Only? Post Here!   
    It will most likely use the same servers. The servers I usually play on, Japan for example, have the same IDs as they had in WW2.
  23. Like
    Bert got a reaction from purpleandgold33 in So Many Variables!!!   
    You will never get this totally right.
    A Netduma router, or any other router with QoS, can only control traffic going out of your own network, and that's where its control stops. Netduma/DumaOS also has the Geofilter option to limit connections, and that's where you get the major advantage over other brands.

    First of all, there are no dedicated servers in CoD in the true sense of the word. A true dedicated server is an actual physical server dedicated to running the hosting software, with its own internet connection. What we call dedicated servers in CoD is a program running in a cloud; this cloud has multiple access nodes which all have their own traffic load, and the cloud itself is running numerous other programs requiring internet bandwidth and prioritization. So the performance of the hosting/server program depends on the total usage of the cloud it's running in and the cloud's total bandwidth usage. If you have ever seen a complex like this: my place in Europe is close to a Microsoft datacenter, and the place is massive. You're easily talking over 10 football fields long and wide, with over 25 warehouse-sized buildings containing their server park. So you can imagine it doesn't get its internet through a single UTP cable; there are multiple fiber-optic backbones coming in. It has actually happened that I was connected to the same server complex twice in a row: DumaOS showed a 41ms ping, the game finished, the next game loaded, I was playing on another uplink of the same server complex and the ping read 33ms. So we can already see that the servers themselves are inconsistent.

    Before I continue, this is a good basic read:
    http://denkirson.proboards.com/thread/5972/networking-lag-compensation-hit-detection
    I think it has been posted here before, but it's a good foundation as to why there are issues and what this terrible "lag comp" people on the internet talk about actually is.

    Traffic coming from your router doesn't go straight to the server, but first has to cross your ISP's network. From there it goes onto one or more backbones owned by a third party. From there, it usually gets connected up to an internet exchange, a place where these backbones meet; a lot of high-end server parks have a very close connection to these places, or are directly plugged in. On every segment, the owner of the network sets his own traffic shaping and prioritization queues.

    One of the largest internet exchanges in Europe is AMS-IX. For reference, you can see how much traffic they process: https://stats.ams-ix.net/index.html What is funny here is that you can see exactly when peak time for internet usage is, and the peak load on the backbones and the network.

    Beginning at your ISP: your ISP is usually oversubscribed by a factor of 100 to 500. They expect their customers to use their internet in bursts, so they can sell the same bandwidth multiple times, if you want to call it that. Another way of putting it: I live in a town with about 50,000 households covered by my ISP, and I have 300/300 bandwidth. So if everybody had that and fired up a speed test website, the ISP would need 15 Tb/s of bandwidth to my town alone; that's simply not happening. As you can see in the link above, the whole of AMS-IX peaks at roughly 6 Tb/s of throughput. 
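    The arithmetic behind that figure, spelled out with the same numbers as above:

```python
# Worked version of the oversubscription example above (same figures as in the post).
households = 50_000
per_household_bps = 300e6            # 300 Mb/s per household

total_bps = households * per_household_bps
print(total_bps / 1e12, "Tb/s")      # 15.0 Tb/s if everyone maxed out their line at once
```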

    Since they don't have this amount of bandwidth, the ISP has to do bandwidth management, and they do this with something called traffic shaping. They have multiple traffic queues for different types of traffic, each with its own priority level. My ISP in Europe, for example, offers free VoIP between customers; you don't want this to be laggy, so it is prioritized highest. Furthermore, ISPs believe that a regular customer will simply spend most time downloading stuff, surfing the internet and watching streaming content, and their traffic shaping algorithms are optimized towards that. You can tell simply by the fact that download is much higher than upload on most residential connections. So while traffic is filling their queues, the shaping algorithm decides what to send out and what to buffer. When the buffer is full, traffic simply gets dropped (the idea behind bufferbloat as often discussed on these forums).

    Gaming traffic is UDP traffic, and generally it is one-way traffic. One reason an ISP likes to put UDP traffic in the very lowest priority queue is that when a UDP packet is dropped, it is simply forgotten; the sender just sends the next packet. TCP, on the other hand, sends a packet, awaits confirmation from the receiver and then sends the next. So if the original sender doesn't get this confirmation, it will simply resend the packet. This is why network operators choose to drop or delay UDP traffic: once it's gone, it's gone. TCP traffic will be resent over and over, and if there is any sort of congestion issue, dropping TCP packets will simply make it worse. 

    Also, since different traffic has different priorities, you have to look at how ping is calculated, in game or by a website. Is it physically sending an ICMP request, or does it do what sites like DSL Reports do: send an HTTP GET request and time how long it takes to get an answer? These are all different things. The point is that even if you have a great ping in game, that measurement is a different traffic class than UDP gaming traffic. So while your ping might look great, your gaming traffic can be lagging all over the place, since it is queued and buffered differently. 
    When your traffic arrives at the backbone, largely the same thing happens: your traffic is prioritized, and gaming traffic is once again in the lowest tier. Every hop to the server has its own traffic queues and traffic shaping schemes, all the way up to the server complex.
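    To illustrate that the "ping" number depends on how it is measured, here is a rough sketch (the host name is just an example) comparing an ICMP ping with a timed TCP connect; neither is the UDP traffic class the game itself uses:

```python
# Rough illustration: two different "pings" to the same host can disagree, and neither is
# the game's UDP traffic class. Host name is an example; "-c" works on Linux/macOS ("-n" on Windows).
import socket, subprocess, time

host = "example.com"

# 1) ICMP ping via the system ping tool (4 probes)
subprocess.run(["ping", "-c", "4", host])

# 2) Time a TCP connection setup to port 443 instead
start = time.perf_counter()
with socket.create_connection((host, 443), timeout=5):
    pass
print(f"TCP connect time: {(time.perf_counter() - start) * 1000:.1f} ms")
```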

    So this lag comp people speak of? Following the above: latency for UDP traffic can be very different from what ping suggests, and it can also be more unstable due to buffering. What is more annoying is that it can also be asymmetric. Since most ISPs provide asymmetrical connections, their traffic shaping across their network follows a similar model. What's happening when the famous "lag comp is against me" occurs? Your real-time UDP latency on download is higher than on your upload to the server. If you have read the article about lag compensation at the start, it will be obvious by now: when this occurs, your traffic reaches the server faster than you are getting the gamestate updates back from the server. And thus you appear on your enemy's screen before they appear on yours, and they seem to shoot you in a nanosecond. Some players try to influence this by clogging their upload on purpose, so they delay their upload to the server. 

    A fine example of exactly this happening to me a few days ago:
    You can see he was clearly not on the screen yet while I was already taking damage.

    This is mainly why you get a different feel for the connection every time you play, or even from game to game or halfway through a game. You can control your own network, but besides you and the host there are at least three other parties involved who influence the quality of your traffic. It makes me chuckle when some YouTuber connects up router A and afterwards router B and decides on feel that one or the other is better because he got slightly faster hit detection in one of the games.

    Also, when you set up your network and follow the directions for setting up anti-bufferbloat, you're simulating a fully loaded internet connection. Most of the time when playing, this is not the case: when I play CoD with my DumaOS screen open, I generally have a traffic load of 0.5 Mb/s on a 300/300 Mb/s connection. Bufferbloat is absolutely not going to be an issue no matter where you set the sliders. 

    When is all this important? QoS and anti-bufferbloat matter if you have a low-upload connection, or better said, low compared to your usage. On a connection with 10 or 20 Mb/s of upload, a phone simply syncing can already congest your network; this is where you need to have your bufferbloat and QoS set up right. If you are on a 1000/1000 Mb/s connection this is of very little interest and QoS will do very little for you. You read about people complaining that they have 1000/1000 Mb/s internet and still get a crappy game. That's simply because traffic coming from their network is still affected by traffic shaping between their ISP, the backbone and the server, and this is no different from any other connection.
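    As a rough feel for why a low upload hurts: the time packets sit in a full upload buffer scales inversely with the upload rate. The 256 KB buffer size below is an assumption, purely for illustration:

```python
# Illustrative only: queueing delay of a full upload buffer at different upload speeds.
# The 256 KB buffer size is an assumed figure, not a measured one.
buffer_bits = 256 * 1024 * 8

for upload_mbps in (10, 20, 1000):
    delay_ms = buffer_bits / (upload_mbps * 1e6) * 1000
    print(f"{upload_mbps:>5} Mb/s upload -> ~{delay_ms:.0f} ms added latency once the buffer fills")
```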

    The Geofilter is the prime functionality of the R1 and DumaOS routers for me. It simply makes sure that you are connected to the closest server. I measured ping across Europe: to Amsterdam I get 8ms, to Italy 28ms. If you had a game that truly ran at 28ms latency, it would be perfectly playable. It's just that from Amsterdam to Italy there are so many extra hops in the route that the chance of running into congestion grows exponentially. As geofiltering limits your matchmaking, you have to find a balance between connectivity and quality. Typically in Europe I have Amsterdam, Frankfurt and London in my filter, giving 8-15ms ping; Paris for some odd reason gives 25ms and always has lower game quality. 

    I have friends who use R1 routers and mess about with the sliders every day, and they can never get it right either. That's because it's impossible to get it right and it's all pissing in the wind. If you have a setting that gives you a clean line without bufferbloat, then you have done all you can do and the rest is simply out of your hands.

    Other settings? DNS, MTU? It's all BS. Set them to your provider's defaults and be done; it's all snake oil. I swapped my XR500 for my R1 once to do some testing, forgot to update the connection settings on the PS4, and it was pointing to a non-functional DNS address. The game ran fine. I only found out because I went to upload a video afterwards and it came up with DNS errors. So the conclusion here as well is that it doesn't matter which DNS you use.
  24. Like
    Bert got a reaction from Locosano in So Many Variables!!!   
  25. Like
    Bert got a reaction from Netduma Alex in So Many Variables!!!   
    Yes, exactly that. Unless anything changes at your ISP, all the other stuff is simply outside your control.