• 1 Post
  • 6 Comments
Joined 1 year ago
Cake day: June 6th, 2023

  • o1i1wnkk@beehaw.org to Privacy@lemmy.ml · *Permanently Deleted*

    WhatsApp, imo, is better. They are both stealing your data, but at least with WA you avoid handing your messages to third parties in plain text.

    By the way, to the people answering Signal or Matrix: you really suck big time. Of course those are better, but if you ever had a friend outside the privacy niche (or at all; your momma doesn’t count), you’d learn they’re not a real option.


  • o1i1wnkk@beehaw.org (OP) to Privacy Guides@lemmy.one · Compile with AI

    Your insights as a software developer are truly valuable. Thank you for explaining.

    I agree with your points about the complexities of the build process and the pitfalls of taking control away from developers. However, the goal is not to replace developers but to provide additional transparency for those who lack the technical expertise. An AI could help clarify the process, and while trust is a wider issue, it could also assist in verifying package integrity. The idea is to automate and standardize some aspects of the build process, not to diminish developer control. As the technology advances, it’s an idea worth exploring.
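    To make “verifying package integrity” concrete: the simplest automated form is just comparing a downloaded or built artifact’s digest against one the developer published. A minimal sketch (the function name and workflow here are illustrative, not any real tool’s API):

    ```python
    import hashlib

    def verify_artifact(path: str, expected_sha256: str) -> bool:
        """Compare an artifact's SHA-256 digest to a published one.

        `expected_sha256` would come from the developer's (ideally
        signed) release notes. A mismatch means the file is not the
        one the developer published.
        """
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Hash in chunks so large binaries don't need to fit in memory.
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest() == expected_sha256
    ```

    An AI front end would only be a friendlier wrapper around checks like this; the actual guarantee still comes from the hash comparison.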


  • o1i1wnkk@beehaw.org (OP) to Privacy Guides@lemmy.one · Compile with AI

    You’re correct: I’m suggesting a user-friendly AI interface to assist with compilation, not that the AI produce machine code directly. The idea is to increase transparency and trust, especially for non-technical users. The Arch Linux scripts you mentioned are indeed similar to what I had in mind, but as you noted, third-party involvement may raise trust issues. Hence, AI might add an extra layer of verification and make the process more understandable. It’s a complex issue worth exploring as the technology evolves.
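    One AI-free version of that “extra layer of verification” already exists as reproducible builds: rebuild the binary from source yourself and check it bit-for-bit against the digest the developer published. A minimal sketch, with the real compile step stubbed out by a deterministic placeholder (all names here are hypothetical):

    ```python
    import hashlib

    def fake_build(source: bytes) -> bytes:
        """Stand-in for a real, deterministic compile step (e.g. a
        PKGBUILD-style script run with a pinned toolchain)."""
        return b"ELF" + hashlib.sha256(source).digest()

    def independently_verified(source: bytes, published_sha256: str) -> bool:
        """Rebuild from source and compare against the published digest.

        If the digests match, the shipped binary really was produced
        from this source; if not, something was injected or altered.
        """
        return hashlib.sha256(fake_build(source)).hexdigest() == published_sha256
    ```

    An AI interface could walk a non-technical user through exactly this loop, but the trust anchor remains the byte-for-byte comparison, not the AI.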


  • o1i1wnkk@beehaw.org (OP) to Privacy Guides@lemmy.one · Compile with AI

    I understand your concern about the black-box nature of AI and the potential for exploitation. It’s indeed a serious challenge, but I still believe it’s possible to work towards solutions.

    As AI continues to evolve, there’s ongoing research into improving the transparency and interpretability of AI algorithms. Ideally, this could lead to AI models that can better explain their actions and decisions. We may not have reached this point yet, but it is an active area of research and progress is being made.

    Furthermore, having open-source AI models could offer some degree of assurance. If an AI model is open source and has undergone rigorous audits, there’s a higher level of transparency and trustworthiness. The community could scrutinize and vet the code, which might help to mitigate some of the risks associated with hidden secrets and exploitation of the AI’s training methodology.

    As for your point that building, training, and vetting the AI ourselves would be harder than setting up a buildbot environment: I agree, but the idea here is not to replace humans compiling by hand entirely… for now. Instead, the goal could be a tool that helps ensure trustworthiness, especially for those of us without the technical background to compile code ourselves.


  • o1i1wnkk@beehaw.org (OP) to Privacy Guides@lemmy.one · Compile with AI

    I understand your point about the transfer of trust, and it is indeed a serious concern. However, I believe there are measures that could be taken. I’m not an expert myself and I won’t pretend to be one, but it occurs to me that eventually technology will evolve to the point where we could ask the AI to explain step by step how it arrived at the final result. We could also potentially perform audits by cherry-picking the final results from different software to assess their accuracy.

    If we were to use open-source AI projects (like GPT4All, for example), maybe eventually we could run these models 100% locally and privately. Naturally, I understand that we are far from this scenario, whether because of the resources required or the complexity involved. It’s just an idea.

    I would never think of bothering a developer by asking them to compile code step by step in front of me: first, because their time is valuable; second, because the level of my questions would be frustrating; and third, most importantly, because no one would accept such a whim.

    However, I am willing to go step by step with an AI in some key software applications, such as communication, for example. Journalists or people in jobs where they cannot afford to trust blindly but lack the technical background might find benefit in these possibilities.



  • A nice solution would be a domain whose only job is to redirect everything to the right (most popular) instance. Think lemmyweb.xyz, lemmyweb.org, lemmyweb.etc: many domains owned by different NGOs running the same “script”, so there is no single point of failure. The first time you log in through this domain you pick your username, password, and instance, and from then on it’s all automatic. I don’t know if what I’m saying makes any sense.
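    The core of that redirect idea is tiny: the gateway domain remembers your home instance once, then rewrites every incoming Lemmy path onto it. A minimal sketch under those assumptions (the function name is hypothetical; in practice `home_instance` would come from a cookie set at first login):

    ```python
    from urllib.parse import urlunsplit

    def rewrite_to_home(path: str, home_instance: str) -> str:
        """Rewrite a path hit on the gateway domain (e.g. lemmyweb.xyz)
        into a full URL on the user's saved home instance."""
        return urlunsplit(("https", home_instance, path, "", ""))
    ```

    For example, a visit to lemmyweb.xyz/c/privacy by a user whose saved instance is lemmy.ml would be sent to https://lemmy.ml/c/privacy. Because the mapping is stateless apart from that one saved value, any NGO can host an identical copy of the script under its own domain.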