User Details
- User Since: Nov 4 2014, 4:29 PM (527 w, 22 h)
- Availability: Available
- IRC Nick: bblack
- LDAP User: BBlack
- MediaWiki User: BBlack (WMF)
Wed, Dec 4
Also, probably the way to standardize this for sanity (avoiding ORIGIN mistakes on both ends) is to follow some simple rules that:
Seems like a net win to me. Reduces some error-prone process stuff and makes life simpler!
Oct 25 2024
Ah interesting! We should confirm that and perhaps avoid the set-cookie entirely on cookies that are (or at least are intended to be) ~ SameSite=Lax|Strict then, I guess?
I don't think it's necessarily always up to us to be able to know it's cross-origin, though, right? It would depend on the $random_other_site's CORS whether they tell us about a referrer at all?
Oct 24 2024
Do we have a specific example of a URL and which cookies triggered the rejections? In my own quick repro attempt, I only saw them failing on actually cross-domain traffic (in my case, an enwiki page was loading Math SVG content from https://wikimedia.org/api/..., and it was the cookies coming with that response that were rejected).
Seems like all of these Varnish-level cookies mentioned at the top should at least gain explicit SameSite= attributes, plus Partitioned where appropriate (only NetworkProbeLimit currently carries a SameSite attribute at all).
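To make the intent concrete, here's a small sketch (the cookie value and attribute choices are illustrative assumptions, not the production definitions) of what an explicitly-attributed Set-Cookie header would look like:

    # Illustrative only: the value and attribute choices are assumptions for the
    # sake of the example, not the production cookie definitions.
    def edge_set_cookie(name, value, same_site="Lax", partitioned=False):
        attrs = [f"{name}={value}", "Path=/", "Secure", f"SameSite={same_site}"]
        if partitioned:
            attrs.append("Partitioned")  # Partitioned requires Secure as well
        return "Set-Cookie: " + "; ".join(attrs)

    print(edge_set_cookie("NetworkProbeLimit", "0.001", partitioned=True))
    # -> Set-Cookie: NetworkProbeLimit=0.001; Path=/; Secure; SameSite=Lax; Partitioned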
Jul 23 2024
Note also that Digicert's annual renewal is coming soon in T368560. We should maybe check whether the OCSP URI is optional in the form for making the cert and, if so, turn it off (assuming they also have CRLs working fine). If they're not ready for this, I guess Digicert waits another year.
Firefox has historically been the reason we've been stapling OCSP for the past many years. If our certificate has an OCSP URI in its metadata, then Firefox will check OCSP in realtime (which is a privacy risk) unless our servers staple the OCSP to the TLS negotiation (which we do!). This applies to both our Digicert and LE unified certs (and I'm sure some other lesser cases as well!).
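As a side note, here's a minimal sketch (assuming the Python cryptography package; the file path is illustrative) for checking whether a given cert actually carries an OCSP URI in its AIA extension, i.e. whether Firefox would attempt realtime OCSP lookups absent stapling:

    # Sketch assuming the 'cryptography' package; "unified.pem" is an example path.
    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

    def has_ocsp_uri(pem_bytes: bytes) -> bool:
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            aia = cert.extensions.get_extension_for_oid(
                ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
        except x509.ExtensionNotFound:
            return False
        # True if any AIA entry is an OCSP access method (i.e. an OCSP URI is present)
        return any(d.access_method == AuthorityInformationAccessOID.OCSP for d in aia)

    with open("unified.pem", "rb") as f:
        print(has_ocsp_uri(f.read()))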
Jul 2 2024
^ While we can maintain the VCL-level hacks for now, it would be best to both dig into how this actually happened (most likely we ourselves emitted donate.m links from a wiki, probably donatewiki itself?) and come up with a permanent solution at the application layer (fix the wiki to support these links properly and directly). We don't want to keep accumulating hacks like these in our already-overly-complex VCL code if we can avoid it.
Jun 27 2024
Note: there was some phab/brain lag here; I wrote this before I saw joe's last response above, so they overlap a bunch.
Jun 3 2024
Re: "same logic" - they're different protocols, different hierarchies, and much different on the client behavior front as well. It doesn't make sense to share a strategy between the two.
May 31 2024
Yes, from a resiliency POV, in some senses keeping unicasts in the mix is an answer (and it's the answer we currently rely on). In a world with only very smart and capable resolvers, the simplest answer probably is the current setup. And indeed, not-advertising ns2 from the core DCs would be a very slight resiliency win over that.
Yeah, my general thinking was to get ns1-anycast going first, and then figure out any of the above about better resiliency before we consider withdrawing ns0-unicast completely.
Yeah, I've looked at this from the deep-ntp-details POV and it's all pretty sane. We're in alignment with the recommendations in https://www.rfc-editor.org/rfc/rfc8633.html#page-17 and it should result in good time sync stability.
May 30 2024
On that future discussion topic (sorry I'm getting nerdsniped!) - Yeah, I had thought about prepending (vs the hard A/B cutoff) as well, but I tend to think it doesn't offer as much resiliency as the clean split.
Re: anycast-ns1 and future plans, etc (I won't quote all the relevant bits from both msgs above):
May 25 2024
There are brand/identity dilution and confusion issues with using any of *.wiki in an official capacity, especially as canonical redirectors for Wikipedia itself, which is why we didn't start using these many years ago when they were first offered for free.
May 22 2024
I'm a little leery of dropping the TTL really short. I get the argument for the normal case, but we also have to consider the possibility that something out there on the Internet could cause traffic surges to some of these URLs, and we'd lose some of our caching defenses against that with a short TTL (esp. if we're also no longer pregenerating them, making such traffic more expensive on the inside). Re-routing sounds better? Or perhaps even better would be a full-on redirect to the new parsoid URL paths?
May 17 2024
What a fun deep-dive! :)
May 14 2024
Also similarly T214998
T215071 <- throwing this in here for semi-related context. Maybe we can align on a potential common future URI scheme anyways, while not actually yet tackling that one.
May 10 2024
Should be all set, may take up to ~30 minutes for changes to propagate.
May 8 2024
The patch should fix things up; let me know if there are still problems after ~half an hour, once the change has propagated through the systems.
May 3 2024
We could choose to use subdivision-level mapping in cases where it makes sense.
Jan 19 2024
We discussed this in Traffic earlier this week, and I ended up implementing what I think is a reasonable solution already, so now I've made this ticket for the paper trail and to cover the followup work to debianize and usefully-deploy it. The core code for it is published at https://github.com/blblack/tofurkey .
Dec 5 2023
The perf issues are definitely relevant for traffic's use of haproxy (in a couple of different roles). Your option (making a libssl1.1-dev for bookworm that tracks the sec fixes that are still done for the bullseye case, and packaging our haproxy to build against it) would be the easiest path from our POV, for these cases.
Nov 29 2023
Followup: did a 3-minute test of the same pair of parameter changes on cp3066 for a higher-traffic case. No write failures detected via strace in this case (we don't have the error log outputs to go by in 9.1 builds). mtail CPU usage at 10ms polling interval was significantly higher than it was in ulsfo, but still seems within reason overall and not saturating anything.
I went on a different tangent with this problem and tried to figure out why ATS is failing writes to the notpurge log pipe in the first place. After some hours of digging into it (I'll spare you the endless details of temporary test configs and strace outputs of various daemons' behavior, etc.), these are the basic issues I see:
Nov 7 2023
I suspect it doesn't serve any real purpose at present, unless it was meant to avoid some filtering that exists elsewhere to prevent cross-site sharing of /32 routes or something.
Oct 16 2023
One potential issue with relying solely on MSS reduction is that, obviously, it only affects TCP. For now this is fine, as long as we're only using LVS (or future liberica) for TCP traffic (I think that's currently the case for LVS anyways!), but we could add UDP-based things in the future (e.g. DNS and QUIC/HTTP3), at which point we'll have to solve these problems differently.
Could we take the opposite approach with the MTU fixup for the tunneling, and arrange the host/interface settings on both sides (the LBs and the target hosts) such that they only use a >1500 MTU on the specific unicast routes for the tunnels, but default to their current 1500 for all other traffic? If per-route MTU can usefully be set higher than base interface MTU, this seems trivial, but even if not, surely with some set of ip commands we could set the iface MTU to the higher value, while clamping it back down to 1500 for all cases except the tunnel.
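As a rough sketch of that "set of ip commands" idea (the interface name, gateway, and tunnel peer address are all invented for illustration; this isn't our actual config), wrapped in Python for clarity:

    # Rough sketch only -- names/addresses are made up; these are run-as-root ip(8) calls.
    import subprocess

    IFACE, GATEWAY, TUNNEL_PEER = "eno1", "10.0.0.1", "10.64.0.10"

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    # Raise the base interface MTU to leave headroom for the tunnel encapsulation...
    ip("link", "set", "dev", IFACE, "mtu", "1600")
    # ...but clamp everything back down to 1500 by default...
    ip("route", "replace", "default", "via", GATEWAY, "dev", IFACE, "mtu", "1500")
    # ...except the specific unicast route toward the tunnel endpoint.
    ip("route", "replace", f"{TUNNEL_PEER}/32", "via", GATEWAY, "dev", IFACE, "mtu", "1600")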
Oct 3 2023
Looks about right to me!
We could perhaps add some normalization function at the ferm or puppet-dns-lookup layer (lowercase the hex and handle the zeros in a consistent way)?
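A minimal sketch of that normalization (standalone Python for illustration, not actual ferm/puppet code; the addresses are documentation-prefix examples), leaning on the stdlib to get lowercase hex and consistent zero-compression:

    # Standalone illustration; addresses below are just examples.
    import ipaddress

    def normalize_ip(addr: str) -> str:
        # ipaddress canonicalizes IPv6 to lowercase, fully-compressed ("::") form
        return str(ipaddress.ip_address(addr))

    assert normalize_ip("2001:DB8:0:0:0:0:0:1") == "2001:db8::1"
    assert normalize_ip("2001:db8:0:0:0:0:0:1") == normalize_ip("2001:db8::1")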
Sep 25 2023
To clarify and expand on my position about this thread count parameter (which is really just a side-issue related to this ticket, which is fundamentally complete):
Sep 22 2023
Adding to the confusion: we already used the hostname cp1099 back in 2015 for a one-off host (T96873), so that name exists in both phab and git history.
Reading a little deeper on this, I think we still have a hostname issue if those other 8 hosts are indeed being brought over from ulsfo+eqsin. Those 8 hosts, I presume, would be 1091-8, and so these hosts should start at 1099, not 1098?
@VRiley-WMF - Sukhbir's out right now, but I've updated the racking plan on his behalf!
Sep 15 2023
There's a followup commit that was never merged, to re-enable pybal health monitoring on all the wikireplicas: https://gerrit.wikimedia.org/r/c/operations/puppet/+/924508/1/hieradata/common/service.yaml
Sep 14 2023
https://grafana.wikimedia.org/d/000000513/ping-offload might be a good starting point (might need some updates/tweaking to get the exact data you want, though)
some sort of rate-limiting configured on the switch-side for ICMP echo, which was IP-aware and didn't count packets from our own internal systems
Sep 8 2023
Reading into the code above and the history more, and self-correcting: the ratelimiter doesn't apply to PTB packets, just to some other informational packets. Apparently we bumped the ratelimiter first as a short-term mitigation (for all the sites), primarily to avoid what looks like ping loss to our monitoring and/or users, and then deployed the ping offloader in some places as a better way to deal with it (and at thousands of packets per second, the pps reduction probably is useful, although I don't know to what degree).
The current puppetized tuneables are at: https://gerrit.wikimedia.org/r/plugins/gitiles/operations/puppet/+/8ed59718c7a7603b61d7d42e05726fd11dae5eaa/modules/lvs/manifests/kernel_config.pp#49
to reduce load on LVS hosts
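For quick reference while poking at this, here's a small illustrative Python helper (not part of the linked puppetization) that dumps the kernel's ICMP rate-limit tunables on a host:

    # Illustrative helper only; reads the standard procfs sysctl paths if present.
    from pathlib import Path

    for name in ("icmp_ratelimit", "icmp_ratemask", "icmp_msgs_per_sec", "icmp_msgs_burst"):
        p = Path("/proc/sys/net/ipv4") / name
        if p.exists():
            print(f"net.ipv4.{name} = {p.read_text().strip()}")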
Sep 5 2023
This topic probably deserves an ~hour-long meeting w/ Traffic to hash out some of the potential solutions and tradeoffs, but I'm gonna try to bullet-point my way through a few things for now anyways to seed further discussion:
Jun 12 2023
The more I've thought about this issue, I think we should probably stick with the (very approximate) latency mapping we have, and not try to have a second setup to optimize for the codfw-primary case. I do think we should swap the core DCs at the front of the global default entry on switchover, though, and the patch above makes that spot a little more visible. There shouldn't be any hard dependencies between this and other steps, but it could be done around the start of the switchover process asynchronously.
May 31 2023
We've got a pair of patches to review now which configure this on the pybal and safe-service-restart sides. We could especially use serviceops input on the latter. None of it's particularly pretty, but at least it's fairly succinct and seems to do the job!
May 30 2023
Note: I restored+amended https://gerrit.wikimedia.org/r/c/operations/puppet/+/924342 and merged+deployed it on lvs1018+lvs1020. This seems to work and disable the problematic monitoring that impacts LVS itself.
Apr 27 2023
I like this direction (etcd). It's not super-trivial, but we've complained a lot even internally about the lack of etcd support for depooling whole sites at the public edge.
Apr 26 2023
Probably needs subtasks for two things:
- Fix "safe-service-restart.py" being unsafe (either it or its caller is failing to propogate an error upstream to stop the carnage, and is also leaving a node depooled when the error happens between the depool and repool operations. At least one of those needs fixing, if not both).
- The whole 'template the local appservers.svc IP into the "instrumentation_ips"' thing at the pybal level, plus whatever changes are needed to use it from the scap side of things (so that it only checks one local pybal, and it's the correct one by current pooling).
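As a rough sketch of the first bullet (hypothetical helper with a placeholder pooling call, not the real safe-service-restart.py or its conftool interface), the shape of the fix is: propagate failures to the caller, and never leave the node depooled when something breaks between the depool and the repool:

    # Rough sketch only -- the pooling call is a placeholder, not the real tooling.
    import subprocess
    import sys

    def set_pooled(state: str) -> None:
        # Placeholder: stands in for whatever actually (de)pools this host.
        print(f"setting pooled={state}")

    def restart_service(unit: str) -> None:
        set_pooled("no")
        try:
            subprocess.run(["systemctl", "restart", unit], check=True)
        finally:
            set_pooled("yes")  # repool even if the restart itself failed

    if __name__ == "__main__":
        try:
            restart_service(sys.argv[1])
        except subprocess.CalledProcessError as exc:
            sys.exit(exc.returncode or 1)  # make the failure visible upstream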
The patch has been rolled out everywhere for a little while at this point, so we should be able to confirm success.
We had a brief meeting on this, and I think the actual problem and immediate workaround is actually much simpler than we imagined. We're going to apply the same workaround we did for MediaWiki traffic in T238285 ( https://gerrit.wikimedia.org/r/c/operations/puppet/+/882663/ ) to the Restbase traffic for now. Patch incoming shortly!
Apr 25 2023
I think we need to rewind a step here. We do want mh, but we want it for the current public sh cases (basically: text and upload ports 80+443), and maybe the other three sh cases (kibana + thanos), although we can start with text+upload first and then talk about those others with the respective teams. The current ticket description and patches seem to be going after the opposite: switching the current wrr services to mh via hieradata and spicerack changes. I think this would be actively harmful. sh and mh choose the destination based on hashes of the source address, which is great for public-facing traffic, but for internal LVS'd traffic they would be hashing on our very limited set of internal cache exit IPs (or other internal service clusters), and so they wouldn't balance very well at all. One could potentially address that by including the source port in the hash, but it still seems like it would be more complicated and less optimal than just sticking with wrr for these cases.
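To make the balance concern concrete, here's a toy sketch (plain md5 bucketing in Python, not ipvs's actual sh/mh schemes; the IPs are made up) comparing hashing on a small set of internal source IPs versus hashing on IP+port:

    # Toy illustration of the imbalance when hashing only a handful of source IPs.
    import collections
    import hashlib
    import random

    def bucket(key: str, n_backends: int = 8) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % n_backends

    sources = [f"10.64.0.{i}" for i in range(1, 9)]  # e.g. a few internal exit IPs

    by_ip = collections.Counter(bucket(ip) for ip in sources)
    print("hash on source IP only:  ", dict(by_ip))   # several backends get nothing

    with_port = collections.Counter(
        bucket(f"{ip}:{random.randint(1024, 65535)}")
        for ip in sources
        for _ in range(1000)
    )
    print("hash on source IP + port:", dict(with_port))  # much more even spread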
Apr 17 2023
So, the solution quoted from my IRC chat above: that's about making the depool verification code actually track the currently-live "low-traffic" (applayer/internal) LVS routing, as opposed to what it's doing now (which I think checks the primary+secondary for the role as-configured in puppet, which doesn't account for any failure/depool/etc at the LVS layer).
Apr 14 2023
It's awesome to see this moving along! One minor point:
Mar 13 2023
Resilient hashing indeed sounds much better (it seems like that's their codeword for some internal "consistent hashing" implementation), but it doesn't look like our current router OS has it, at least not on cr1-eqiad when I looked.
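For context on the term, here's a toy consistent-hash ring (illustrative only, with made-up next-hop names; not the vendor's actual resilient-hashing implementation) demonstrating the property that matters: withdrawing one next-hop remaps only roughly its own share of flows, rather than nearly all of them as a plain modulo rehash would:

    # Toy consistent-hash ring; next-hop names and flow keys are invented.
    import bisect
    import hashlib

    def h(key: str) -> int:
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, nodes, vnodes=64):
            # Place each node at many pseudo-random points on the ring.
            self.points = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
            self.keys = [p for p, _ in self.points]

        def lookup(self, flow: str) -> str:
            # A flow maps to the next point clockwise on the ring.
            return self.points[bisect.bisect(self.keys, h(flow)) % len(self.points)][1]

    before = Ring(["hop-a", "hop-b", "hop-c", "hop-d"])
    after = Ring(["hop-a", "hop-b", "hop-d"])  # one next-hop withdrawn
    flows = [f"flow-{i}" for i in range(10000)]
    moved = sum(before.lookup(f) != after.lookup(f) for f in flows)
    print(f"{moved / len(flows):.1%} of flows remapped (vs ~75% with a modulo rehash)")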
Mar 6 2023
The redirects are neither good nor bad; rather, they're both necessary (although that necessity is waning) and insecure. We thought we had standardized on all canonical URIs being of the secure variant ~8 years ago, and this oversight has flown under the radar since then, only to be exposed recently when we intentionally (for unrelated operational reasons) partially degraded our port 80 services.