They/Them

Network Guardian Angel. Infosec.

Antispeciesist.

Anarchist.

Personal Website

You should hide scores on Lemmy. They are bad for you.

  • 8 Posts
  • 15 Comments
Joined 3 years ago
Cake day: January 11th, 2022

  • I don’t think that a robots.txt file is the appropriate tool here.

    First off, robots.txt files are just hints for respectful crawlers. Go proxies are not crawlers. They are just that: caching proxies for Go modules. If all Go developers were to use direct mode, I think SourceHut's traffic would be higher, not lower.

    Second, let's assume the Go devs were willing to implement something to honor robots.txt or Retry-After indications. Would attackers? Of course not.

    If legitimate, albeit quite aggressive, traffic is DDoSing SourceHut, that is primarily a SourceHut issue. A 503 does not even have to be respected by the client, because there is nothing to respect: the server simply chooses to say "I don't want to answer that request. Goodbye." That response is certainly not costly to generate. Now, if the server tries to honor every request and is poorly optimized, the fault lies with the server, not the client.

    To be truthful, I have not read the Go proxy implementation in detail. I don't know how it would react if SourceHut answered with a 503 status code every now and then, when the fetching strategy gets too aggressive. My guess is that the proxy would simply retry later and serve Go developers a stale version of the module, along the lines of the sketch below.
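
    To illustrate, here is a minimal sketch in Go of how such a client could behave. This is an assumption, not the actual proxy code: fetchModule and its in-memory cache are hypothetical.

    ```go
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"strconv"
    	"time"
    )

    // fetchModule is a hypothetical polite client: it honors 503 + Retry-After
    // and falls back to a stale cached copy when the origin keeps refusing.
    func fetchModule(url string, cache map[string][]byte) ([]byte, error) {
    	for attempt := 0; attempt < 3; attempt++ {
    		resp, err := http.Get(url)
    		if err != nil {
    			break
    		}
    		if resp.StatusCode == http.StatusOK {
    			body, err := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if err != nil {
    				break
    			}
    			cache[url] = body // refresh the cache with the fresh version
    			return body, nil
    		}
    		resp.Body.Close()
    		if resp.StatusCode != http.StatusServiceUnavailable {
    			break
    		}
    		// Honor the Retry-After header (in seconds) if the server sent one.
    		delay := 30 * time.Second
    		if s := resp.Header.Get("Retry-After"); s != "" {
    			if secs, err := strconv.Atoi(s); err == nil {
    				delay = time.Duration(secs) * time.Second
    			}
    		}
    		time.Sleep(delay)
    	}
    	// Degrade gracefully: serve a stale copy instead of hammering the origin.
    	if stale, ok := cache[url]; ok {
    		return stale, nil
    	}
    	return nil, fmt.Errorf("no fresh or cached copy of %s", url)
    }

    func main() {
    	cache := map[string][]byte{}
    	if _, err := fetchModule("https://example.org/module.zip", cache); err != nil {
    		fmt.Println(err)
    	}
    }
    ```

    The fetching strategy stays aggressive overall, but the origin gets a cheap way to push back.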


  • I don't get it. Public endpoints are public. Go proxies (there are alternatives to direct mode and to the Google proxy, such as Athens) are free to query these public endpoints, as aggressively as they want. That's not polite, but that's how the open Internet works and always has.

    I don't get why SourceHut does not have any form of DDoS protection or rate limiting. I mean, HTTP status 503 and the Retry-After header are standard HTTP. That Drew chose a public outcry over implementing basic application-level DDoS protection seems like a very questionable strategy (a sketch of what I mean follows this comment). What would happen to the SourceHut content if attackers launched a real DDoS attack tomorrow? Would Drew post another public outcry on their blog?

    SourceHut is still in alpha. This feels like a sign that it is not yet mature enough to be a prod service for anyone.
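
    To make that point concrete, here is a minimal sketch of per-client rate limiting as Go net/http middleware. The limiter values and the keying on RemoteAddr are assumptions for illustration, not a production design (RemoteAddr includes the client port; a real setup would key on the IP and evict old entries).

    ```go
    package main

    import (
    	"net/http"
    	"sync"

    	"golang.org/x/time/rate"
    )

    // rateLimit gives each client a token bucket; over budget, the client gets
    // a cheap 503 with a Retry-After hint instead of a costly real answer.
    func rateLimit(next http.Handler) http.Handler {
    	var mu sync.Mutex
    	limiters := map[string]*rate.Limiter{}
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		mu.Lock()
    		lim, ok := limiters[r.RemoteAddr]
    		if !ok {
    			lim = rate.NewLimiter(10, 20) // 10 req/s, burst of 20, per client
    			limiters[r.RemoteAddr] = lim
    		}
    		mu.Unlock()
    		if !lim.Allow() {
    			w.Header().Set("Retry-After", "30")
    			http.Error(w, "slow down, retry later", http.StatusServiceUnavailable)
    			return
    		}
    		next.ServeHTTP(w, r)
    	})
    }

    func main() {
    	http.ListenAndServe(":8080", rateLimit(http.FileServer(http.Dir("."))))
    }
    ```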


  • The OpenPGP format was designed in the '90s and has never really changed since. It was documented in RFC 4880 in 2007. Unfortunately, in the '90s, people had no good understanding of crypto yet, and the choices made back then were poor. The envelope design is poor. Some crypto algorithms are clearly outdated. Some default options are plain wrong.

    Have you ever noticed how many crypto attacks target OpenPGP and GnuPG? That's no surprise: it's a popular crypto solution and a relatively easy target compared to other mainstream crypto implementations. The Go language maintainers even deprecated the OpenPGP implementation in the golang.org/x/crypto library because they consider OpenPGP dangerous:

    "OpenPGP is incompatible with https://golang.org/design/cryptography-principles, it's complex, fragile, and unsafe, and using it exposes applications to a dangerous ecosystem."

    Basically, I would say the only thing OpenPGP has going for it is the deployed infrastructure. Or does it? The web of trust is mostly dead now that keyservers are out of service, and OpenPGP adoption was never really that high to begin with.

    SSH keys are much more widely deployed and used than OpenPGP keys. The format is dead simple, and the crypto implementation in OpenSSH is up to date.

    I am very happy that git made SSH signing possible; it means I can delete my OpenPGP keys for good (the setup is sketched below). I just hope Linux distros will soon make the switch to a more modern crypto approach: SSH signing or minisign.
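
    For reference, the switch is a handful of commands, assuming git 2.34 or later and an Ed25519 key at the usual path:

    ```sh
    # Sign commits and tags with an SSH key instead of OpenPGP
    git config --global gpg.format ssh
    git config --global user.signingkey ~/.ssh/id_ed25519.pub
    git commit -S -m "signed with an SSH key"

    # Verification needs an allowed-signers file mapping identities to keys
    echo "you@example.org $(cat ~/.ssh/id_ed25519.pub)" > ~/.ssh/allowed_signers
    git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers
    git log --show-signature -1
    ```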




  • Very good question. Thank you for asking.

    To sign documents, I would recommend using signify or minisign.

    To encrypt files, I guess one could use age.

    If you need a crypto library, I would recommend NaCl or libsodium. In Go, I use NaCl a lot (a sketch follows at the end of this comment). If you need to encrypt or sign very large files, I wrote a small library based on NaCl.

    Emails are the tricky part. It really depends on your workflow. When I was working for a government infosec agency, we learned never to use any integrated email crypto solution: save the blob, then decrypt it in a secure environment. This helps significantly against leaks and against handing the attacker a decryption oracle.

    For data containers, I would use dm-crypt and dm-verity plus a signed root. But that's just me, and I would probably not recommend this to other people :)

    OpenPGP is rarely used in messaging protocols, but if it were, I would probably advise leveraging a double-ratchet library instead.
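
    Since I mentioned NaCl in Go above, here is a minimal sketch of authenticated encryption with golang.org/x/crypto/nacl/secretbox. The hard-coded message and the freshly generated key are just for illustration; in a real application the key would come from a KDF or a key exchange.

    ```go
    package main

    import (
    	"crypto/rand"
    	"fmt"

    	"golang.org/x/crypto/nacl/secretbox"
    )

    func main() {
    	// 256-bit secret key; never reuse a nonce with the same key.
    	var key [32]byte
    	var nonce [24]byte
    	if _, err := rand.Read(key[:]); err != nil {
    		panic(err)
    	}
    	if _, err := rand.Read(nonce[:]); err != nil {
    		panic(err)
    	}

    	// Seal prepends the nonce (our choice of wire format) and appends
    	// the authenticated ciphertext.
    	box := secretbox.Seal(nonce[:], []byte("attack at dawn"), &nonce, &key)

    	// Open authenticates before decrypting; ok is false on any tampering.
    	var n [24]byte
    	copy(n[:], box[:24])
    	plaintext, ok := secretbox.Open(nil, box[24:], &n, &key)
    	fmt.Println(string(plaintext), ok)
    }
    ```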



  • (reply to a deleted Privacy@lemmy.ml post)

    Does anyone know if and how the private key is secured during cloud sync? Do they have access to it, or is it encrypted before syncing using the… user password?

    Also, how is it different from Duo Push? (edit: I am talking about the workflow here. I know about the FIDO part.)




  • Good article. Thank you. You make some excellent points.

    I agree that source access is not sufficient to get secure software and that the many-eyes argument is often wrong. However, I am convinced that transparency is a requirement for secure software. As a consequence, I disagree with some points, especially this one:

    It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit, it’s just not the main way they’re found nowadays.

    In my experience as a developer, the vast majority of vulnerabilities are caught by linters, source code static analysis, source-aware fuzzers (a sketch follows at the end of this comment), and peer reviews. What black-box (dynamic, static, and negative) testing and scanners catch are the remaining bugs and vulnerabilities that slipped through the development process. When using closed-source software, you have no idea whether the developers used these tools (software and internal validation), so yes: you may get excellent results with black-box testing, but that may just be a sign that they did not do their due diligence during the development phase.

    As an ex-pentester, I can assure you that black-box security tools returning no findings is not a sign that the software is secure at all. They may fail to spot flawed logic leading to a disaster, for instance.

    And yes, I agree that static analysis has its limits and that running the damn code is necessary, because unit tests, integration tests, and load tests can only get you so far. That's why big companies also do blue/green deployments, etc.

    But I believe this is not an argument for saying that closed-source software may be secure if tested that way. Dynamic analysis is just one tool in a defense-in-depth strategy: a required one, but certainly not a sufficient one.

    Again, great article, but I believe you may not be paranoid enough 😁 Which might be a good thing for you 😆 Working in security is bad for one's mental health 😂
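
    Since I mentioned fuzzers above, here is a minimal sketch of a native Go fuzz test (Go 1.18+). Parse and the invariant it checks are hypothetical; the point is the workflow, not the target.

    ```go
    package parser

    import "testing"

    // Parse is a hypothetical function under test.
    func Parse(input []byte) (string, error) {
    	return string(input), nil
    }

    // FuzzParse lets the engine mutate the seed corpus and flag any input
    // that panics or breaks the stated invariant.
    func FuzzParse(f *testing.F) {
    	f.Add([]byte("seed input")) // seed corpus
    	f.Fuzz(func(t *testing.T, data []byte) {
    		out, err := Parse(data)
    		if err == nil && len(out) > len(data) {
    			t.Errorf("output longer than input: %q", out)
    		}
    	})
    }
    ```

    Running go test -fuzz=FuzzParse then hammers Parse with mutated inputs for as long as you let it.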