• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 3rd, 2023

  • S410@lemmy.ml to Memes@lemmy.ml · Math
    9 months ago

    How does burning a car improve anything? By what logic does not burning a car equate to “fucking over the next 15 generations”?
    Misdirected rage, even if it’s initially for a good reason, doesn’t help anything. If there’s a house on fire, you pour water on that house, not one two streets over. You do the latter, you end up with two destroyed houses: one burned, the other flooded.


  • I’m not talking about this case and this data. I’m talking about the takes of the people above on how things should be handled.

    Derpgon said “russia can be fucked” regardless of whether it’s a military or civilian target.

    Maness300 pointed out that collective punishment can easily turn into discrimination. That is, there’s a big difference between “XYZ is bad, because it’s aiding the Russian military” and “XYZ is bad, because it’s Russian”.

    Phoenixz points out that Russia is committing genocide, as if that were somehow a counterargument to the previous statement.

    I point out that just because Russia is committing genocide, that doesn’t make it right to slip into “XYZ is bad, because it’s Russian” and use it as justification to do anything you want to the country and its people.

    Maness300 is right: targeting entities of any country for the actions of said country, regardless of whether the entities in question are responsible for, or even capable of influencing, those actions, is not a good idea. It will lead to more problems.

    I, for example, don’t support the US selling weapons to Israel. Should I go and set the closest 7-Eleven on fire? It’s an American company, so, clearly, it’s a valid target, right?



  • Again, the state being a piece of shit doesn’t mean everyone who lives and operates in that country automatically supports every single decision of the state.

    You know that Israel buys most of its weapons from the US, right? In other words, the US actively supplies Israel with weapons, knowing full well what Israel is using them for. Are you going to boycott all US companies too, now?

    If you want to hurt a state, you should get your own country to stop doing business with that state (natural-resource and weapons trading, in particular), not go after civilians and civilian businesses, which aren’t responsible for decisions already made, nor hold any power to overturn them (since countries known for aggressive behavior aren’t known for being particularly democratic).


  • S410@lemmy.ml to Open Source@lemmy.ml · *Permanently Deleted*
    9 months ago

    Just because the company is an Israeli one doesn’t necessarily mean they interact with the Israeli government in any way other than paying taxes.
    Assuming ill intentions based on the country someone or something is from/based in is, pretty much, racism.
    Now, assuming ill intentions based on the fact that it’s yet another god damn adware company is perfectly justified.



  • Did you add a repo for RHEL 8 to your Fedora install? Please, undo that.
    Please don’t blindly follow instructions you find online, particularly when it comes to installing something as important as drivers.
    Installing drivers from third-party sources should be done only as a last resort, and only if you know exactly what you’re doing.



  • What part of “if the target audience is known to not speak it” do you not understand?

    It’s one thing to have your little community in whatever language and post there.

    It’s another to show up in a very much international community and start posting in whatever random language you want. Or, worse, start replying to comments written in English using your language. Like this guy. Just… How do you even do that? Aren’t you supposed to kind of speak English to even understand the content of a comment you’re replying to? Why not respond using it, then?

    I’m not a native English speaker. My friends aren’t either. Yet we all use it for pretty much the same reason. And if you think you can just chime in, go “你的母亲是只仓鼠,你父亲满身接骨木的气味” (“your mother was a hamster and your father smelt of elderberries”), and be both perfectly understood and not downvoted, you’re either a troll or an idiot.


  • It makes sense to not use English if the target audience is known to not speak it, but that is often not the case.
    English is the most commonly spoken language in the world, after all. To not use it is to make the content less searchable and harder to understand for billions of people.
    Over a billion of those people have learned it as their second language simply to understand and be understood by each other. Is it really that weird that those who can’t be bothered to do the same get downvoted?



  • S410@lemmy.ml to Linux@lemmy.ml · I'm an idiot (arm)
    10 months ago

    Off-topic, but why on earth would anyone use .rar? It’s a proprietary format. The reason there’s basically no software to create or modify .rar archives is licensing, which makes it illegal to write software that can do it.

    Looking at RARLAB’s website, it appears that only the macOS version has an ARM build. For Linux, only x86 and x64 are listed.

    So either use macOS, use emulation to run the x86/x64 version, or break the law.
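    As a quick illustration of how thin free-software support for .rar is, you can ask Python’s standard library which archive formats it can unpack out of the box (a sketch, not an exhaustive survey of available tools):

    ```python
    import shutil

    # List the archive formats a stock Python install can unpack.
    # .rar is absent: there's no freely licensed code to handle it,
    # while zip and the tar family are covered by the stdlib.
    supported = sorted(name for name, _, _ in shutil.get_unpack_formats())
    print(supported)
    ```

    On a typical install this prints the zip and tar variants, with no rar entry in sight.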



  • Machine learning doesn’t retain an exact copy either. How on earth do you think a model trained on terabytes of data can be only a few gigabytes in size, yet contain “exact copies” of everything? If “AI” could function as a compression algorithm, it’d definitely be used as one. But it can’t, so it isn’t.

    Machine learning can definitely re-create certain things really closely, but to do it well, it generally requires a lot of repeats in the training set. Which, granted, is a big problem that exists right now, and which people are trying to solve. But even right now, if you want an “exact” re-creation of something, cherry picking is almost always necessary, since (unsurprisingly) ML systems have a tendency to create things that have not been seen before.
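    A back-of-the-envelope calculation makes the size mismatch concrete. The figures below are illustrative assumptions (a 7-billion-parameter model, 16-bit weights, ~10 TB of training data), not measurements of any particular model:

    ```python
    # Rough check of the "the model contains exact copies" claim.
    # All sizes are assumptions chosen for illustration.
    params = 7e9                 # 7-billion-parameter model
    bytes_per_param = 2          # 16-bit weights
    model_bytes = params * bytes_per_param   # ~14 GB of weights
    training_bytes = 10e12                   # ~10 TB of training data

    # Storing the data losslessly in the weights would require this ratio:
    ratio = training_bytes / model_bytes
    print(f"model: {model_bytes / 1e9:.0f} GB")
    print(f"data:  {training_bytes / 1e12:.0f} TB")
    print(f"lossless storage would need a ~{ratio:.0f}:1 compression ratio")
    ```

    A ~700:1 lossless ratio on general data is far beyond anything real compressors achieve, which is the point being made above.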

    Here’s an image from an article claiming that machine learning image generators plagiarize things.

    However, if you take a second to look at the image, you’ll see that the prompters literally ask for screencaps of specific movies with specific actors, etc. and even then the resulting images aren’t one-to-one copies. It doesn’t take long to spot differences, like different lighting, slightly different poses, different backgrounds, etc.

    If you got ahold of a human artist specializing in photoreal drawings and asked them to re-create a specific part of a movie they’ve seen a couple dozen or hundred times, they’d most likely produce something with a remarkably similar level of accuracy. Very similar to what machine learning image generators are capable of at the moment.


  • Except for all the cases when humans do exactly that.

    A lot of learning is, really, little more than memorization: spelling of words, mathematical formulas, physical constants, etc. But, of course, those are pretty small, so they don’t count?

    Then there’s things like sayings, which are entire phrases that only really work if they’re repeated verbatim. You sure can deliver the same idea using different words, but it’s not the same saying at that point.

    To make a cover of a song, for example, you have to memorize the lyrics and melody of the original, exactly, to be able to re-create it. If you want to make that cover in the style of some other artist, you, obviously, have to learn their style: that is, analyze and memorize what makes that style unique. (e.g. C418 - Haggstrom, but it’s composed by John Williams)

    Sometimes the artists don’t even realize they’re doing exactly that, so we end up with “subconscious plagiarism” cases, e.g. Bright Tunes Music v. Harrisongs Music.

    Some people, like Stephen Wiltshire, are very good at memorizing and replicating certain things; way better than you, I, or even current machine learning systems. And for that they’re praised.



  • It’s called “machine learning”, not “AI”, and it’s called that for a reason.

    “AI” models are, essentially, solvers for mathematical systems that we, humans, cannot describe and create solvers for ourselves, due to their complexity.

    For example, a calculator for pure numbers is a pretty simple device all the logic of which can be designed by a human directly. For the device to be useful, however, the creator will have to analyze mathematical works of other people (to figure out how math works to begin with) and to test their creation against them. That is, they’d run formulas derived and solved by other people to verify that the results are correct.

    With “AI”, instead of designing all the logic manually, we create a system which can end up in a finite, yet still near-infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for some incredibly complex system, such as languages or images.

    If we were training a regular calculator this way, we might feed it things like “2+2=4”, “3x3=9”, “10/5=2”, etc.

    If, after we’re done, the model can only solve those three expressions, we have failed. The model didn’t learn the mathematical system; it just memorized the examples. That’s called overfitting, and that’s what every single “AI” company in the world is trying to avoid. (And to do so, they need a lot of diverse data.)

    Of course, if instead of those expressions the training set consisted of Portrait of Dora Maar, Mona Lisa, and Girl with a Pearl Earring, the model would only generate those three paintings.

    However, if the training was successful, we can ask the model to solve 3×10/5+2, an expression it has never seen before, and it’d give us the correct result: 8. Or, in the case of paintings, if we ask for a “Portrait of Mona Lisa with a Pearl Earring”, it would give us a brand new image that contains elements and styles of the three paintings from the training set merged into a new one.
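    The memorization-versus-learning contrast above can be sketched with two hypothetical “models”: one that only stores the three training examples verbatim, and one that actually captured the underlying rules (with Python’s `eval` standing in for learned arithmetic):

    ```python
    # Toy contrast between overfitting (memorization) and generalization.
    training_set = {"2+2": 4, "3*3": 9, "10/5": 2}

    def overfit_model(expr):
        # Perfect recall of the training set, nothing else.
        return training_set.get(expr)

    def generalizing_model(expr):
        # Stands in for a model that learned the rules of arithmetic.
        return eval(expr)

    print(overfit_model("3*10/5+2"))       # unseen expression: no answer (None)
    print(generalizing_model("3*10/5+2"))  # unseen expression: 8.0
    ```

    Both “models” score 100% on the training set, but only the generalizing one is of any use on input it hasn’t seen, which is exactly the property training is supposed to produce.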

    Of course, the architecture of a machine learning model and the architecture of the human brain don’t match, but the things both can do are quite similar. Creating new works based on existing ones is not, by any means, a new invention. Here’s a picture that merges elements of “Fear and Loathing in Las Vegas” and “My Little Pony”, for example.

    The major difference is that the skills and knowledge of individual humans necessary to do things like that cannot be transferred or lent to other people. Machine learning models can be. This tech is probably the closest we’ll ever be to being able to share skills and knowledge “telepathically”, so to speak.


  • Why are you entitled to other people’s work?

    Do you really think you’ve never consumed data that was not intended for you? Never used copyrighted works or their elements in your own works?

    Re-purposing other people’s work is literally what humanity has been doing for far longer than the term “license” existed.

    If the original inventor of the fire drill didn’t want others to use it and barred them from creating a fire bow, arguing it’s “plagiarism” and “a tool that’s intended to replace me”, we wouldn’t have a civilization.

    If artists could bar other artists from creating music or art based on theirs, we wouldn’t have such a thing as “genres”. There are genres of music that are almost entirely based around sampling, and many, many popular samples were never explicitly allowed or licensed to anyone. Listen to the hundred most popular tracks of the last 50 years, and I guarantee you a dozen or more will contain the Amen break, for example.

    Whatever you do with data, whether you consume and use it yourself or train a machine learning model on it, you’re either disregarding a large number of copyright restrictions and using all of it, or existing in an informational vacuum.