Sender | Message | Time |
---|---|---|
9 Apr 2021 | ||
sarahjamielewis | Cross posting here in case anyone has any experience with Flutter automated desktop builds: https://twitter.com/SarahJamieLewis/status/1380602899818930176 | 19:31:20 |
Dan Ballard | ha thanks, I should have thought about that. Also, if anyone just has more Windows experience than my 0, I'm trying to cobble together a Windows Docker container with the Android SDK + NDK, Visual Studio, and Flutter | 19:33:06 |
15 Apr 2021 | ||
mconley changed their display name from mconley to mconley|pto. | 21:19:53 | |
16 Apr 2021 | ||
plasmapower | There's a new 2-round version of MuSig that I somehow didn't hear about: https://eprint.iacr.org/2020/1261 | 22:49:44 |
plasmapower | and the first round can be precomputed too | 22:49:52 |
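The round-1 precomputation point can be sketched in code. This is a toy illustration only, not a real instantiation: the "group" is the order-11 subgroup of Z_23^*, `h()` is an invented stand-in for a hash to the scalar field, and all keys, nonces, and the message are made-up numbers. What it shows is the MuSig2 shape: round 1 (the two nonce commitments per signer) never touches the message, so it can be run ahead of time, while round 2 binds the precomputed nonces to the session via the binding factor `b`.

```rust
// Toy MuSig2 two-round flow. Illustrative parameters only.
const P: u64 = 23; // modulus of the toy group
const Q: u64 = 11; // prime order of the subgroup generated by G
const G: u64 = 2;  // generator: 2^11 = 2048 ≡ 1 (mod 23)

fn modpow(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    r
}

// Toy stand-in for a hash into Z_q (NOT a real hash).
fn h(data: &[u64]) -> u64 {
    data.iter().fold(7u64, |acc, &d| (acc * 31 + d) % Q)
}

pub fn musig2_demo() -> bool {
    // Two signers: secret keys x_i, public keys X_i = G^x_i.
    let xs = [3u64, 5];
    let pks: Vec<u64> = xs.iter().map(|&x| modpow(G, x, P)).collect();

    // Key aggregation: a_i = H(L, X_i), X = prod X_i^a_i.
    let a: Vec<u64> = pks.iter().map(|&pk| h(&[pks[0], pks[1], pk])).collect();
    let agg_pk = pks
        .iter()
        .zip(&a)
        .fold(1u64, |acc, (&pk, &ai)| acc * modpow(pk, ai, P) % P);

    // Round 1 (message-independent, so it can be precomputed):
    // each signer commits to two nonces.
    let nonces = [[2u64, 7], [4u64, 9]];
    let commits: Vec<[u64; 2]> = nonces
        .iter()
        .map(|r| [modpow(G, r[0], P), modpow(G, r[1], P)])
        .collect();

    // Round 2: the message is now known; the binding factor b ties the
    // precomputed nonces to this specific signing session.
    let msg = 13u64;
    let r1 = commits[0][0] * commits[1][0] % P;
    let r2 = commits[0][1] * commits[1][1] % P;
    let b = h(&[agg_pk, r1, r2, msg]);
    let r = r1 * modpow(r2, b, P) % P;
    let c = h(&[agg_pk, r, msg]);
    let s = (0..2usize).fold(0u64, |acc, i| {
        (acc + nonces[i][0] + b * nonces[i][1] + c * a[i] * xs[i]) % Q
    });

    // Schnorr verification of the aggregate: G^s == R * X^c.
    modpow(G, s, P) == r * modpow(agg_pk, c, P) % P
}

fn main() {
    assert!(musig2_demo());
    println!("round 1 was message-independent; aggregate signature verifies");
}
```

The verification identity G^s = R1 · R2^b · X^c holds for any nonce/key values, which is why round 1 can be banked before the message exists.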
19 Apr 2021 | ||
jeff | We proved the same result in https://eprint.iacr.org/2020/1245 | 13:08:39 |
jeff | I think the Blockstream guy's proof has a rather different flavor, and they prove a cool additional result: if you use four witnesses then you can do everything with the forking lemma and do not require the AGM. | 13:10:21 |
jeff | As an aside, the FROST paper does not have a security proof. It has an interactive argument that then appeals to an informal global idea of a random oracle, which reminds me of the approaches Neven et al. broke in https://eprint.iacr.org/2018/417. Also, the "fro" in FROST is meant to suggest saving pre-nonces/pre-witnesses to disk, which is incredibly dangerous unless you're really set up to do it correctly, e.g. because you're making a hardware wallet or something. It also does not really save a round, since you need agreement on who signs, again unless you're doing a hardware wallet. I think the FROST authors don't mind these concerns because they're aiming precisely at hardware wallets. I do wish they would not put the "fro" in the name, however, because people will misuse it. As you must agree upon signers anyway, the "fro" gains you nothing unless you have extremely limited bandwidth, like a QR code. | 13:27:56 |
jeff | Afaik, there is nobody who currently plans on doing an implementation for Ed25519, but someone could do so. App stores maybe, and CAs definitely, would benefit from threshold signing. It could be more important than any of the blockchain applications if it actually made it harder for nations to compromise CAs. I dunno if that's likely, but maybe. | 13:30:08 |
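The danger of persisting pre-nonces can be made concrete. A minimal sketch, with invented numbers over the same toy order-11 subgroup of Z_23^* (not a real group or hash): if a crash or restored backup replays a stored nonce for a second message, plain Schnorr leaks the whole secret key to anyone who sees both signatures.

```rust
// Toy demo: Schnorr nonce reuse => full key recovery. Illustrative only.
const P: u64 = 23;
const Q: u64 = 11;
const G: u64 = 2;

fn modpow(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    r
}

// Inverse mod the prime Q via Fermat's little theorem.
fn inv(a: u64) -> u64 {
    modpow(a, Q - 2, Q)
}

// Toy stand-in for a hash into Z_q (NOT a real hash).
fn h(data: &[u64]) -> u64 {
    data.iter().fold(7u64, |acc, &d| (acc * 31 + d) % Q)
}

pub fn nonce_reuse_demo() -> bool {
    let x = 4u64; // secret key
    let pk = modpow(G, x, P);
    let k = 6u64; // the nonce that was saved to disk and replayed
    let big_r = modpow(G, k, P);

    // Two Schnorr signatures on different messages with the SAME nonce.
    let sign = |m: u64| {
        let e = h(&[pk, big_r, m]);
        (e, (k + e * x) % Q)
    };
    let (e1, s1) = sign(8);
    let (e2, s2) = sign(9);

    // s1 - s2 = (e1 - e2) * x, so an observer solves for x directly.
    let recovered = (s1 + Q - s2) % Q * inv((e1 + Q - e2) % Q) % Q;
    recovered == x
}

fn main() {
    assert!(nonce_reuse_demo());
    println!("reused nonce recovered the secret key");
}
```

This is why "set up to do it correctly", e.g. in a hardware wallet with state that cannot be rolled back, matters so much: one replayed nonce is total key compromise.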
mconley changed their display name from mconley|pto to mconley. | 14:14:12 | |
21 Apr 2021 | ||
sanketh changed their display name from sanketh_ to sanketh. | 04:41:55 | |
plasmapower | Download crypto.rs | 18:00:27 |
plasmapower | Don't use this in production of course, but I did write a FROST impl | 18:00:41 |
plasmapower | It was originally going to be a MuSig2 impl, so it supports having 4 nonces if you want. And it is for Ed25519 | 18:01:45 |
plasmapower | I also haven't even reviewed the code myself, so there's probably security issues in it. It's a proof of concept only. | 18:02:13 |
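For readers new to FROST: it is threshold Schnorr, so the long-term secret key is Shamir-shared at keygen and any t of n participants can later sign. A minimal sketch of that sharing step over the toy field Z_11 (real implementations, like the impl above, work over the curve's scalar field; all concrete values here are invented):

```rust
// Toy Shamir 2-of-3 sharing and Lagrange reconstruction over Z_11.
const Q: u64 = 11;

fn modpow(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut r = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % m;
        }
        b = b * b % m;
        e >>= 1;
    }
    r
}

fn inv(a: u64) -> u64 {
    modpow(a, Q - 2, Q) // Fermat inverse; Q is prime
}

// Horner evaluation of a polynomial (lowest coefficient first) mod Q.
fn eval(poly: &[u64], x: u64) -> u64 {
    poly.iter().rev().fold(0, |acc, &c| (acc * x + c) % Q)
}

pub fn shamir_demo() -> bool {
    let secret = 7u64;
    // Degree t-1 = 1 polynomial, i.e. a 2-of-3 threshold.
    let poly = [secret, 4];
    let shares: Vec<(u64, u64)> = (1..=3).map(|i| (i, eval(&poly, i))).collect();

    // Any 2 shares recover f(0) by Lagrange interpolation at zero.
    let (x1, y1) = shares[0];
    let (x2, y2) = shares[2];
    let l1 = x2 * inv((x2 + Q - x1) % Q) % Q; // x2 / (x2 - x1)
    let l2 = x1 * inv((x1 + Q - x2) % Q) % Q; // x1 / (x1 - x2)
    (y1 * l1 + y2 * l2) % Q == secret
}

fn main() {
    assert!(shamir_demo());
    println!("2 of 3 shares reconstruct the secret");
}
```

In FROST itself the shares are never reassembled; each signer uses its share (weighted by the same Lagrange coefficients) to produce a partial signature.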
22 Apr 2021 | ||
István András Seres joined the room. | 10:50:59 | |
István András Seres | In reply to @sarahjamielewis:matrix.org: Isn't this attack only possible for the server? In order to set the tag to match your target false positive public keys, you would need to know their corresponding detection keys, which are only known by the server. Or what am I missing? | 11:07:02 |
István András Seres | Btw, thank you and congrats on the simulator, Sarah! Super amazing work! Hope to build upon it real soon! | 11:07:47 |
István András Seres set a profile picture. | 11:09:29 | |
István András Seres | I was trying to dig into the simulator code and understand more deeply what you've created there, specifically the deanonymisation part of the simulator: https://git.openprivacy.ca/openprivacy/fuzzytags-sim/src/branch/trunk/src/server.rs#L92 | 11:21:29 |
István András Seres | If my understanding is correct, what you do is observe the skew between the probability distributions of the expected number of messages and the actual number of "received messages". One straightforward extension to this would be to execute this function recursively: you remove from the graph the nodes which you deem to be deanonymized and then repeat the whole process on the remaining subgraph. Do you think this kind of trivial extension of the function would make sense or would improve your deanonymization attack? | 11:24:38 |
István András Seres | Sorry if these ideas are too trivial or were already discussed either here or on Twitter. Sadly I did not read everything on the topic. Anyways, I would be quite interested to hear any ideas/opinions on further deanonymizing fuzzy message detection :) If I'm not mistaken you only consider this skew as a way to deanonymise users... but is there any other heuristic to deanonymise users? | 11:27:52 |
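The recursive extension suggested here can be sketched in a few lines. Everything below is invented for illustration: the counts, the k-sigma cutoff, and the use of the empirical mean as the baseline (in the simulator the expectation would come from the configured false positive rate). The point it demonstrates: a large outlier inflates the first-pass statistics and masks smaller outliers, so removing flagged nodes and recomputing on the remaining subgraph catches parties a single pass misses.

```rust
// Toy recursive outlier flagging: flag, remove, recompute, repeat.
pub fn recursive_flag(counts: &[f64], k: f64) -> Vec<usize> {
    let mut active: Vec<usize> = (0..counts.len()).collect();
    let mut flagged = Vec::new();
    loop {
        let n = active.len() as f64;
        if n < 2.0 {
            break;
        }
        let mean = active.iter().map(|&i| counts[i]).sum::<f64>() / n;
        let var = active
            .iter()
            .map(|&i| (counts[i] - mean).powi(2))
            .sum::<f64>()
            / n;
        let cut = mean + k * var.sqrt();
        let outliers: Vec<usize> = active
            .iter()
            .copied()
            .filter(|&i| counts[i] > cut)
            .collect();
        if outliers.is_empty() {
            break;
        }
        active.retain(|i| !outliers.contains(i)); // shrink to the subgraph
        flagged.extend(outliers);
    }
    flagged
}

fn main() {
    // A uniform false positive rate would leave everyone near ~10 matches;
    // parties 2 and 4 also receive targeted messages.
    let counts = [10.0, 9.0, 60.0, 11.0, 25.0, 10.0];
    let flagged = recursive_flag(&counts, 1.5);
    // Party 2's huge count masks party 4 until the second pass.
    assert_eq!(flagged, vec![2, 4]);
    println!("flagged in order: {:?}", flagged);
}
```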
@tkennedy365:matrix.org joined the room. | 13:08:30 | |
sarahjamielewis | In reply to @seresistvanandras:matrix.org: Hi! It's been a few months since I was heads-deep in fuzzy message detection (I dumped a bunch of thoughts in https://docs.openprivacy.ca/fuzzytags-book/introduction.html and in the code back then, which might also prove helpful). Senders always have access to the full public key and so can always construct a valid tag that will pass a test by any derived detection key, i.e. tags generated with a public key will always match. The multiple-public-key matching is done by filtering the generated randomness to find values that work for all of the public keys you want to match (this gets harder as the number of public keys increases and/or as gamma increases). The server has access to the derived detection keys, which are ostensibly much shorter (and so can simply brute force tags to match any arbitrary(?) subset of keys), but that is less interesting, since the server can obtain valid tags in other ways too, and injecting notifications is inherent to the underlying problem and not something that is server-specific. | 16:39:00 |
sarahjamielewis | In reply to @seresistvanandras:matrix.org: I've played around with this attack a little, in the context of scenarios where no one downloads all messages. In those scenarios it is trivial to filter out the anonymity set over time (for a set of 1000 users it can take as few as 3 messages, given prior knowledge that those 3 messages are related). The one complication is in accommodating parties that do download everything. They are the effective anonymity floor of the system: they can't be filtered out naively, and they do impact the usefulness of any derived social graph. So given any set of parties and a set of messages, the cardinality of potential deanonymizations for each message will always be >= the cardinality of the set of parties who download everything. | 16:46:48 |
sarahjamielewis | In reply to @seresistvanandras:matrix.org: I documented a few general approaches here: https://docs.openprivacy.ca/fuzzytags-book/risk-model.html They basically split into 2 domains: intersection attacks (like the above), given prior knowledge of the structure of communication, and statistical attacks based on the expected false positive rate. Both are highly effective in simulations, but both are undermined in proportion to the size of the set of parties who download everything. | 16:49:07 |
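The intersection attack and the anonymity-floor bound can be shown in a few lines. All data below is invented: party IDs, which parties download everything, and which false positives each message picks up. Given prior knowledge that several messages share a recipient, intersecting the sets of parties who downloaded each one collapses the candidate set to the true recipient plus the parties who download everything, since the latter survive every intersection.

```rust
use std::collections::HashSet;

// Intersect the download sets of messages known to be related.
pub fn intersect(download_sets: &[HashSet<u32>]) -> HashSet<u32> {
    download_sets
        .iter()
        .skip(1)
        .fold(download_sets[0].clone(), |acc, s| &acc & s)
}

pub fn intersection_demo() -> bool {
    let target = 7u32;              // true recipient of all 3 messages
    let floor: [u32; 2] = [90, 91]; // parties who download everything

    // Each related message is fetched by the target, the download-everything
    // parties, and some incidental false positives.
    let msgs: Vec<HashSet<u32>> = vec![
        [7, 90, 91, 1, 2, 3].into_iter().collect(),
        [7, 90, 91, 2, 4, 5].into_iter().collect(),
        [7, 90, 91, 3, 5, 6].into_iter().collect(),
    ];

    let candidates = intersect(&msgs);
    // The result is exactly target + anonymity floor, so its cardinality
    // can never drop below the number of download-everything parties.
    let expected: HashSet<u32> = floor.iter().copied().chain([target]).collect();
    candidates == expected
}

fn main() {
    assert!(intersection_demo());
    println!("3 related messages shrink the candidate set to target + floor");
}
```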
sarahjamielewis | My aim is to revisit fuzzytags after getting the Cwtch beta out the door, as I do think there is at least something in the idea of moving false positive detection to the receiver side. Though after all the simulations and tests I did, I'm not convinced that fuzzy message detection in its current form is fundamentally sound from a privacy perspective, given its intended threat model. You might find it interesting to try out some attack ideas on the dataset simulations I outlined: https://docs.openprivacy.ca/fuzzytags-book/simulation-eu-core-email.html | 16:53:11 |