robcore

Members
  • Content Count

    15
  • Joined

  • Last visited

  1. Unfortunately, this is a sign of the reduced playerbase that happens with COD games, especially as the next release draws closer. It also happens much earlier on the less popular platforms (i.e. Xbox and PC). You see, Activision's subsidiary, Demonware, handles the matchmaking for its games, and they have no idea how to match lobbies in a way that could be considered geographically sensible. As the playerbase dwindles, this effect becomes far more pronounced. The Seattle server is a ms ping for me, so you'd think it would be optimal. This, however, isn't the case, due to lag compensation favouring matched players who teleport around from Alaska, Hawaii, Guam, Fiji, Mexico, you name it. I've long since given up.
  2. Can confirm, after doing some tcpdumping and comparing it against the actual communication with the game server, that the other UDP traffic is negligible as far as potential congestion goes. That said, BO4 actually deserves credit in this area because it allows disabling lobby chat in favour of party chat, so the level of traffic generated is far less than in past games like WWII.
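For anyone who wants to reproduce that comparison, a capture along these lines splits the game-port traffic from the rest of the UDP chatter. This is only a sketch: `eth0` is a placeholder interface name, and 3075 is the port discussed in these posts.

```shell
# Capture the game-port flow and everything else separately, then compare
# volumes. eth0 is a placeholder for your WAN/LAN interface; needs root.
sudo tcpdump -i eth0 -n 'udp port 3075' -w cod_3075.pcap &
sudo tcpdump -i eth0 -n 'udp and not port 3075' -w cod_other_udp.pcap
```

Stop both captures with Ctrl-C, then compare the two pcap files.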
  3. Good question! I believe that UDP between other players over 3075 is used primarily for voice chat. In my case, I'd rather strictly prioritize my connection with the server and let the p2p audio be categorized as bulk traffic. Sorry, I know I'm a pest...but my inner nerd used to be obsessed with iptables 😅 Edit: P.S. I imagine the team has a variety of methods available for this sort of thing, but if you're ever in need of a small, portable, low-footprint tool for listing current NAT'd connections, check out https://tweegy.nl/projects/netstat-nat/ It's netstat, but with full NAT support!
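On the netstat-nat note: where it isn't packaged, conntrack-tools can produce a similar listing of tracked/NAT'd connections. A sketch, assuming the `conntrack` CLI is installed and run as root:

```shell
# List currently tracked UDP connections on the game port, netstat-nat style.
sudo conntrack -L -p udp --dport 3075

# Many kernels also expose the same table as a flat file:
grep udp /proc/net/nf_conntrack
```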
  4. Ahh, my example was bad. Normally I would agree with you, but COD matchmaking is messy with ports (as I'm sure you guys know all too well 😅). Basically, I want to prioritize the actual connection to the host, but not the other players in the match. The host port is usually something in the 30k to 60k range connected to 3075 on our end, while the other players we connect to are 1:1 on 3075. So it would be more like src:3075 dst:10000-65535 and src:10000-60000 dst:3075. Am I making sense, or do I need stronger coffee this morning? Lol, thanks for the reply man.
  5. Umm...manual entries are an amazing idea. Thank you! This may echo other people's requests, but maybe allowing QoS priority ranges to "overlap" would be cool. That way I could specify, say, src:3075 dst:3075-65535 AND src:3075-65535 dst:3075 to cover my bases, given that the actual game server VM instances connect to our 3075 via a randomized port on their end (only peers connect 1:1 with 3075).
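Until overlapping ranges exist in the UI, the two rules above can be sketched as raw iptables DSCP marking. The console address 192.168.1.50 is a hypothetical placeholder; the port ranges are taken straight from the post.

```shell
# Mark outbound game traffic for priority (DSCP EF). Placeholder console
# address; needs root. Rules mirror the two overlapping ranges requested.
CONSOLE=192.168.1.50

# src:3075 dst:3075-65535 (server instances answer from a random high port)
iptables -t mangle -A POSTROUTING -p udp -s "$CONSOLE" \
  --sport 3075 --dport 3075:65535 -j DSCP --set-dscp-class EF

# src:3075-65535 dst:3075 (covers the 1:1 peer case on 3075)
iptables -t mangle -A POSTROUTING -p udp -s "$CONSOLE" \
  --sport 3075:65535 --dport 3075 -j DSCP --set-dscp-class EF
```

Whether the shaper honours the DSCP class is up to the router's queueing setup, so treat this as an illustration of the matching logic rather than a drop-in config.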
  6. For a future milestone, would you fellas be interested in allowing us to block/allow specific IPs within a cluster instead of the whole thing? I know it sounds ridiculous, but bear with me. Using Seattle as an example, BO4 on Xbox seems to have blocks running virtual instances on different pieces of hardware. Sometimes, when a specific server is on a machine that is under a heavy workload or is a lower-quality rental machine, certain instances play far worse than others. Still sounds ridiculous, I know, but I consistently get 3ms faster ping to 45.63.32.145 than to 45.63.32.156, which leads me to believe that the lower address block is running on a higher-end rig. Unrelated quick question about QoS: does device prioritization somehow bypass interface traffic shaping altogether? If that's the case, I am jealous of whoever coded that; even if it's just an NSS hook or something, it's genius.
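For what it's worth, the per-instance latency comparison is easy to script. A small sketch: `parse_avg` pulls the avg field out of ping's closing summary line, and the two addresses in the usage note are the ones from the post.

```shell
# Extract the average RTT from ping's summary line, e.g.
# "rtt min/avg/max/mdev = 2.104/3.007/4.211/0.512 ms" -> 3.007
parse_avg() {
  echo "$1" | awk -F'[=/]' '/min\/avg\/max/ {print $6}'
}

# Average over 10 probes for one host (needs network access):
avg_for() {
  parse_avg "$(ping -c 10 -q "$1" | tail -1)"
}

# Usage, with the addresses from the post:
# echo "45.63.32.145: $(avg_for 45.63.32.145) ms"
# echo "45.63.32.156: $(avg_for 45.63.32.156) ms"
```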
  7. I see where you're coming from, but it's still wasted computational power and milliseconds of calculation for something that isn't there. Offline devices shouldn't factor into QoS whatsoever. I have an R1 and an XR500, and they are the best on the market in this area, but that doesn't mean they can't be improved. I don't like to settle when it comes to programming, and I imagine the Duma team doesn't either.
  8. So far, 45.63.32.145 is in Seattle, not Chicago, but shows up as Chicago. I will pinpoint more as they come up. Edit: now it's showing fine. Maybe Activision is doing some wonky stuff or I've been drinking too much.
  9. I've reported one other issue with the Seattle cluster at 45.63.32.x, and will mention it again because it's still present. Some of the addresses in the above block are actually in Chicago. This is causing me a 50/50 chance of being on a server that's either 30 or 50 ms away when I limit to the Seattle area.
  10. The Seattle server cluster is still improperly classified. As seen in the attached photo, it is being recognized as a peer connection but is actually dedicated. Be warned: some of the 45.63.32.x block is Chicago, and some is Seattle. Here is the ipinfo.io output:
      ip: "45.63.32.186"
      hostname: "45.63.32.186.vultr.com"
      city: "Seattle"
      region: "Washington"
      country: "US"
      loc: "47.6145,-122.3480"
      postal: "98121"
      phone: "206"
      asn: { asn: "AS20473", name: "Choopa, LLC", domain: "choopa.net", route: "45.63.32.0/21", type: "hosting" }
      company: { name: "Vultr Holdings, LLC" }
  11. Ah sorry, my middle point is moot because I forgot to do a factory reset after upgrading. The reset handled it. The servers, on the other hand, don't respond when clicked on from Chrome or Firefox on my desktop; I only grabbed the screenshot from my cell because it's easier. The last point, I guess, makes sense now that you mention it. It's logical for the device with higher bandwidth to take priority over incoming devices. I guess I just assumed they'd all remain equal for some reason. Thanks for your insight!
  12. Hey Fraser, I keep trying to click on it to get its info, but nothing comes up. Lately there are two servers that I have to extend my geofilter range to include in order to access BO4 multiplayer online (Xbox), as shown in the attached image. Hey, while I have you: do you know if there is a way for me to exclude my R7800 in AP mode from QoS? I'm running it as a wireless access point, but would rather it not take up QoS space. Oh, also, on the XR500 the flower is broken. If I give a device more bandwidth than the others, leave the page, and come back, it will have less than the others (and vice versa). Also, if I change a single device's bandwidth allocation, new devices receive less than the rest of the equally shared devices. Sorry for the OT!
  13. If the cloud update includes a whitelist, make sure you include Demonware's primary server in Ireland. It's the 185.xxx.xxx.xxx address when you first start the game. With BO4, they are using a different method of selecting servers for matchmaking than WWII. The game checks whether you are able to connect to the aforementioned central server, then it tries to connect to all available game servers. So long as you have the geofilter set before starting the game (and that Demonware address is whitelisted), you will only connect to the servers specified in the filter. Unfortunately, if you make any changes after startup, the game will still attempt to place you in blocked servers, resulting in "unable to join lobby" messages. Netduma may want to consider a more automatic approach for handling this: if the filter is changed, take down the connection for the device and bring it back up, or use the old OpenWrt method of flushing conntrack for the device to close any existing connections, like the Demonware TCP session or any of the UDP port 3075 connections (Xbox is my example because that's what I use). If you guys use the conntrack package, just use that if your kernel doesn't include the OpenWrt patches from Attitude Adjustment. Food for thought. Everyone else: whitelist the server in Ireland that pops up whenever you connect to the game.
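The conntrack flush mentioned above looks roughly like this with the `conntrack` CLI. A sketch only: 192.168.1.50 stands in for the console's LAN address, and the commands need root.

```shell
CONSOLE=192.168.1.50  # placeholder for the console's LAN address

# Kill any tracked TCP sessions for the device (e.g. the Demonware session)
conntrack -D -p tcp --orig-src "$CONSOLE"

# ...and any in-flight UDP game/voice flows sourced from port 3075
conntrack -D -p udp --orig-src "$CONSOLE" --sport 3075
```

After the flush, the game is forced to re-handshake, so it only sees servers allowed by the current geofilter state.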
  14. Please look into the following. QoS:
      - Device-independent Type of Service prioritization, capable of re-ordering various common ports/traffic types as needed. Allowing a "custom port" option would go a long way as well.
      - More protocol types in the prioritization fields than just TCP and UDP; SCTP, for example, has become a common protocol for gaming. As well, "all" or "0" would be a useful option for full device prioritization.
      NAT/Port Forwarding:
      - Real full cone support (I can link the patches if it helps) or a dedicated console DMZ would be very helpful.
      - Again, more protocols supported for forwarding.
      Thanks and have a good one! Edit: For QoS, device prioritization as a separate option from the bandwidth slice would be good too, as the services whose packets need to go out first often don't require high bandwidth. Also, offline devices NEED to be removed from the QoS calculation AUTOMATICALLY. Frankly, I was surprised to see that I needed to remove an offline device from DHCP reservation and manually update the distribution table when a device leaves the network. You guys should base distribution solely on connection status, because I don't want to have to manually update router settings every time I turn off a computer.
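To illustrate the protocol point: raw iptables already accepts more than TCP/UDP in a DNAT rule, so something like the following is what the UI could express one day. The port 5000 and LAN address 192.168.1.50 are hypothetical examples, not anything from the posts.

```shell
# Forward inbound SCTP on a hypothetical port 5000 to a console at a
# placeholder LAN address; needs root.
iptables -t nat -A PREROUTING -p sctp --dport 5000 \
  -j DNAT --to-destination 192.168.1.50
```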
  15. Hey folks! The Seattle servers are displaying as Chicago (I think; my American geography sucks because I'm Canadian). I think the issue is that some of the 45.63.32.x addresses are actually in the Chicago cluster, but some are in Seattle. Info is here:
      - Location on map: Chicago (should be Seattle)
      - Server Type: Dedicated
      - ID: 812dcc97ca203a3f
      - Ping: 30ms
      - My Location: Central Alberta (spoofed here because I try in vain to avoid Alaskans)
      Screenshot for good measure: