Things my team is working on: MediaWiki-Platform-Team
Side projects I am working on (or planning to, eventually): User-Tgr
You can find more info about me on my user page.
User Details
- User Since: Sep 19 2014, 4:55 PM (567 w, 5 d)
- Availability: Available
- IRC Nick: tgr
- LDAP User: Gergő Tisza
- MediaWiki User: Tgr (WMF) [ Global Accounts ]
Today
Which test do you mean? At a glance, I don't see anything destructor-related in SessionBackendTest.
In general, if the test checks some behavior that's still relevant after the refactoring, it should ideally be kept, otherwise not.
Seems to be working:
Could not send confirmation email: Sendmail exited with non-zero exit code 74
Thanks a lot @jhathaway and @Scott_French for fixing the error reporting issue!
Yesterday
No, the script doesn't try to copy client hints. It does try to copy IPs, it just (reportedly) doesn't always work.
If it works as intended, I think the only change is Logstash error messages (link) getting more informative.
Thanks! We are definitely interested. By monitoring do you just mean checking if this error becomes less frequent / the error message becomes more useful (can do) or testing email sending after the deployment (can do as well if you ping me)?
Objects with a destructor are garbage collected as soon as nothing references them anymore (not sure PHP actually guarantees this, but seems to hold in practice), and the most likely way a reference survives the end of a test is via the User -> Request -> Session reference chain. So resetting that during teardown seems like the simplest fix, although it only treats the symptom.
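A minimal sketch of that teardown reset, assuming a MediaWikiIntegrationTestCase subclass (the exact classes to use depend on the test base class):

    // Rough sketch: swap out the main request during teardown so the
    // User -> Request -> Session chain is broken and the session backend's
    // destructor can run at the end of each test.
    protected function tearDown(): void {
        RequestContext::getMain()->setRequest( new FauxRequest() );
        parent::tearDown();
    }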
So basically we need a ForeignAPIRepo subclass that overrides httpGet() with something along the lines of
    $version = MW_VERSION;
    $contact = Title::newMainPage()->getCanonicalUrl(); // or use $wgEmergencyContact?
    $options['userAgent'] = "InstantCommons MediaWiki/$version ($contact)";
    return parent::httpGet( $url, $timeout, $options, $mtime );
There is no way to declaratively add arbitrary headers, so if we really need a referer, that will be more complex.
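Put together, a subclass sketch might look like this (the class name is made up, and the httpGet() signature should be double-checked against the core version you target):

    class InstantCommonsApiRepo extends ForeignAPIRepo {
        // Assumed signature, matching ForeignAPIRepo::httpGet() in recent core;
        // verify before relying on it.
        public static function httpGet(
            $url, $timeout = 'default', $options = [], &$mtime = false
        ) {
            $version = MW_VERSION;
            $contact = Title::newMainPage()->getCanonicalUrl(); // or $wgEmergencyContact?
            // Identify the requesting wiki so the upstream site can
            // rate-limit or contact it.
            $options['userAgent'] = "InstantCommons MediaWiki/$version ($contact)";
            return parent::httpGet( $url, $timeout, $options, $mtime );
        }
    }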
Mon, Aug 4
I don't think MW-Vagrant meaningfully pins a Vagrant version; it's up to the host machine. I have Vagrant 2.4.3 (which is about six months old) and haven't encountered any problems related to the Vagrant version. Not sure what the relationship is between the vagrant gem version and the actual Vagrant version, but we can probably just bump it. (The last FLOSS-licensed version is 2.3.7, so it would make sense to standardize on that.)
I think we should fix this by the time we switch to PHP 8.3. It would be nice to move to library versions that were tested on that PHP version; at least for lcobucci/jwt (which is pinned to an old version because the old version of oauth-server requires it), that's not the case today. (We could fix that without unforking, by just merging in some upstream changes, but I'm not sure it would be less effort, and we'd just be kicking the can down the road.)
MediaWiki-Platform-Team will pick up the core part of this. Note that the soonest a change to the InstantCommons code could make a difference is after the next MediaWiki release (so in about 3 months). Many sites will only upgrade when the next LTS version is released (in about 15 months).
Sun, Aug 3
T399057: Introduce allowlists into the CDN (text) filtering has some discussion of planned rate limiting classes.
When the images are hotlinked (but the downstream wiki still needs to fetch metadata), adding a username would reveal IP / username combinations to the upstream wiki via timing correlations. It's hard to violate privacy much more than that.
Sat, Aug 2
Searched for the relevant libraries. It turns out firebase/php-jwt is used in ContentTranslation (for authenticating with cxserver) and CheckUser (for paging-related URL parameters, to prevent data leaks).
Fri, Aug 1
The dashboard for the session write logs is here.
That's fair. Let us know if we can help with something (e.g. an IP throttling exemption).
That sounds like an error in the job runner rather than the job? The job was scheduled, the status was set to In progress, but then the job runner crashed and never actually executed the job?
Well, more specifically, it would prevent storing recovery codes via one-way hashes. Encrypting them would still be a meaningful security improvement.
This would prevent the recovery-codes part of T145915: OATHAuth OTP shouldn't be stored in cleartext in the DB.
Probably blocked on T232336: Separate recovery codes into a separate 2FA module.
Base56 and base58 are common alphabets designed to avoid characters that are easy to mistake for each other. We could use an uppercase-only variant of one of those.
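For illustration, a hypothetical generator using such an alphabet (the helper name and the exact alphabet are made up, not existing OATHAuth code):

    // Uppercase-only alphabet with the easily confused characters
    // (0/O, 1/I/L) removed, so codes survive being read aloud or
    // copied by hand.
    function generateRecoveryCode( int $length = 16 ): string {
        $alphabet = 'ABCDEFGHJKMNPQRSTUVWXYZ23456789';
        $code = '';
        for ( $i = 0; $i < $length; $i++ ) {
            $code .= $alphabet[ random_int( 0, strlen( $alphabet ) - 1 ) ];
        }
        return $code;
    }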
Similar older task: T332385: Improve descriptions for our 2FA methods in 2FA management page
Replaced "FIDO" with WebAuthn - I think the intent was the same but FIDO is less well-specified. Let me know if I misunderstood.
Boldly closing and tagging those tasks instead.
The older task about this is T166622: Allow all users on all wikis to use OATHAuth. There it was suggested that the blockers for making 2FA available to everyone are T242031: Allow multiple different 2FA devices, T150601: Add option to generate new set of recovery codes (which requires T232336: Separate recovery codes into a separate 2FA module) and T180896: Allow functionaries to reset second factor on low-risk accounts.
Do you want to track enables / disables or just the number of people who have enabled it? The first would probably have to be done via an event stream, the second via a Prometheus exporter.
I didn't test this but looked through the code (while looking at {T268384}), and I don't think this is the case - the disable form eventually calls WebAuthn::verify() which doesn't privilege any key.
Do we want to fix the recovery code part of T145915: OATHAuth OTP shouldn't be stored in cleartext in the DB as part of this?
This is now happening as part of FY2025-26 WE4.6.2 Multiple Authenticators so we can probably close this task?
Let's close this given there's a new design research effort now.
Looks like this is fixed?
I think we can call this one fixed. On Wikimedia wikis there is no domain conflict anymore because of the SUL3 shared login domain (the special page links go to that central domain now), and third party wikis can use the $wgWebAuthnRelyingPartyID configuration variable added in this task to log in on all subdomains of a single domain, and related origins for supporting multiple top-level domains to some extent. I don't think anything more can be done about it.
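For a third-party wiki, that might look something like this in LocalSettings.php (example.org is a placeholder):

    // Use the registrable domain as the WebAuthn relying party ID so the
    // same credentials work on all subdomains of example.org.
    $wgWebAuthnRelyingPartyID = 'example.org';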
In theory this works (today, anyway; not sure about three years ago) - WebAuthn::verify() will just iterate through all keys. I tested using two different WebAuthn keys a few times in the past, and it seemed to work.
I suppose the issue with the specific user/devices is not reproducible after so much time, so let's close this and reopen if someone has exact reproduction steps.
Thanks!
Thu, Jul 31
And introduce the concept of display names, so they can be differentiated from each other. And have some sort of default display name (since the existing entries don't have one), maybe based on creation date.
For backup codes, I imagine we won't allow setting up multiple sets from the database point of view (from the user's POV they already come in batches of ten), so no information is needed beyond maybe the number of remaining codes. Even if we wanted a "give me 10 more codes" functionality, we'd probably just add those to the existing set of codes and still keep everything in a single DB row.
Currently the only information we show about WebAuthn keys is a user-provided nickname. For TOTP keys we have no information whatsoever (not a problem in the current UI where there can only be one TOTP key, but once we have multiple, this might become problematic).
Multiple TOTP keys is T230042: Allow multiple TOTP devices.
Being able to use TOTP keys and WebAuthn keys at the same time is T242031: Allow multiple different 2FA devices.
(Using multiple WebAuthn keys has always been possible.)
Currently you can use the TOTPAuthenticationRequest to submit either a real TOTP code or a backup code. Once we do separate backup codes from TOTP, the latter might stop working (unless we add some kind of B/C code). So we should clarify the expectations around that.
The way the 2FA integration in AuthManager works, in a nutshell, is that the OATHAuth extension registers a secondary authentication provider and implements its beginSecondaryAuthentication() and continueSecondaryAuthentication() methods. beginSecondaryAuthentication() is called once the primary authentication provider has established the user's identity (i.e. the user has submitted the username + password form), and returns one or more AuthenticationRequest objects (wrapped inside an AuthenticationResponse) that describe the next login form. When the user submits that form, continueSecondaryAuthentication() is called, and can either do the same again or signal success (allowing the login flow to proceed).
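A minimal sketch of such a provider, loosely modeled on this description (all class names, message keys, and the verification stub are made up, not OATHAuth's actual code):

    use MediaWiki\Auth\AbstractSecondaryAuthenticationProvider;
    use MediaWiki\Auth\AuthenticationRequest;
    use MediaWiki\Auth\AuthenticationResponse;

    // Hypothetical request describing a single "code" form field.
    class ExampleCodeRequest extends AuthenticationRequest {
        public $code;
        public function getFieldInfo() {
            return [ 'code' => [
                'type' => 'string',
                'label' => wfMessage( 'example-2fa-code-label' ),
                'help' => wfMessage( 'example-2fa-code-help' ),
            ] ];
        }
    }

    class ExampleSecondFactorProvider extends AbstractSecondaryAuthenticationProvider {
        public function getAuthenticationRequests( $action, array $options ) {
            return [];
        }

        public function beginSecondaryAuthentication( $user, array $reqs ) {
            // The primary provider has established the user's identity;
            // describe the 2FA form via requests wrapped in a UI response.
            return AuthenticationResponse::newUI(
                [ new ExampleCodeRequest() ],
                wfMessage( 'example-2fa-enter-code' )
            );
        }

        public function continueSecondaryAuthentication( $user, array $reqs ) {
            $req = AuthenticationRequest::getRequestByClass(
                $reqs, ExampleCodeRequest::class
            );
            if ( $req && $this->verifyCode( $user, $req->code ) ) {
                // Signal success and let the login flow proceed.
                return AuthenticationResponse::newPass();
            }
            // Otherwise show the same form again.
            return AuthenticationResponse::newUI(
                [ new ExampleCodeRequest() ],
                wfMessage( 'example-2fa-bad-code' ),
                'error'
            );
        }

        public function beginSecondaryAccountCreation( $user, $creator, array $reqs ) {
            // Nothing to do for newly created accounts.
            return AuthenticationResponse::newAbstain();
        }

        private function verifyCode( $user, string $code ): bool {
            // A real implementation would validate a TOTP or recovery code here.
            return false;
        }
    }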
Current 2FA logic in clients: Android app, iOS app, Commons app, CommonsFinder.
TOTP (and backup codes, once split) is straightforward - just entering text in a form field.
We'd need to make a TOTPManageForm, along the lines of WebAuthnManageForm.
Done in T242031: Allow multiple different 2FA devices I think (unless you want to use this task for creating the new design).
So this needs:
- a new 2FA module in OATHAuth
- making the backup code part of the TOTP UI optional (both for setup and for verification)
- some sort of workflow for ensuring that generating backup codes is still integrated with the TOTP setup flow (and presumably it would also integrate them with the WebAuthn setup flow)
- a feature flag for switching from generating as part of TOTP setup to generating via this new workflow
- a migration script that copies codes from existing TOTP records into separate DB rows, to be run once the feature flag has been switched
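A very rough sketch of what such a migration script could look like (all table, column, and data-structure names below are hypothetical stand-ins, not OATHAuth's actual schema):

    require_once getenv( 'MW_INSTALL_PATH' ) . '/maintenance/Maintenance.php';

    class MigrateRecoveryCodes extends Maintenance {
        public function execute() {
            $dbw = $this->getDB( DB_PRIMARY );
            // Hypothetical: read every TOTP record and pull out its codes.
            $res = $dbw->select(
                'totp_devices', [ 'device_id', 'device_user', 'device_data' ],
                [], __METHOD__
            );
            foreach ( $res as $row ) {
                $data = FormatJson::decode( $row->device_data, true );
                foreach ( $data['recovery_codes'] ?? [] as $code ) {
                    // One row per code in the new module's (hypothetical) table.
                    $dbw->insert( 'recovery_codes', [
                        'rc_user' => $row->device_user,
                        'rc_code' => $code,
                    ], __METHOD__ );
                }
            }
        }
    }

    $maintClass = MigrateRecoveryCodes::class;
    require_once RUN_MAINTENANCE_IF_MAIN;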