Also known as @VeeSilverball

  • 3 Posts
  • 46 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • Some of my own thoughts, which rebut the article in parts:

    1. Godot does have “barbell performance” - you can make it go fast if you drop to C++ and do low-level engine things to add new nodes, resources, etc. You can also make it go fast when you use the premade nodes without a great deal of script in between (and the nodes are, FWIW, pretty flexible and composable). What it doesn’t do at present is the thing Unity users are used to, which is “fast scripting”. Fast scripting still means working around the garbage collector and the overheads of crossing between native code and a runtime. C# is a kind of flytrap for the needs of high-end games, and Unity has only seemingly surmounted the issues by doing a lot of custom engineering for their use case. That is, you don’t really code standard C# in Unity, you code Unity’s C#, which is nearly as bespoke as GDScript.
    2. Saying the engine is coded in a naive way is actually not as smart a criticism as it seems, because there’s a maintenance cost to always doing things in exactly the most optimal way. The target for what is fastest changes every time the platform changes. As an (until recently) relatively small project, it’s overall better that the engine stay relatively easy to build and straightforward to modify, which is what it’s done. The path it’s taken has helped it stay “lightweight”. The price of that is that sometimes it doesn’t even take low-hanging fruit that would be a win for 90% of users.
    3. The 3D in Godot 4 is capable of good test scenes, but everyone seems to agree that it’s not really ready for production for speed reasons. Any specific point on this just backs that up. And that’s disappointing in one sense, but pretty okay in others. If you need high-end graphics, Unreal will welcome you for the time being.
    4. On that note, developing for console always comes with fussy limitations, at minimum just meeting TRC/TCR/lot check; that’s why professional porting is a thing. Engine devs usually end up in the position of maintaining these multiple-API abstractions because it’s necessary for porting. It’s the same deal with the audio code, the persistent storage, the controllers, the system prompts, it just goes on and on like that. So, rewriting the rendering bindings to do things in the D3D way and not the Vulkan way is actually a bit of a whatever; it’s more rendering code. It changes some assumptions about what binds to what. But it accesses the same kind of hardware, running the same kind of shaders. A lot of ports in the not-so-distant past basically had to start over because the graphics hardware lacked such a common denominator.

    The author’s bio says that they have been doing this as a professional for about 5 years, which, taken at face value, means that they haven’t seen the kinds of transitions that have taken place in the past and how widely game scope can vary. The way Godot does things has some wisdom-of-age in it, and even in its years as a proprietary engine (you can learn something of this by looking at Juan’s Mobygames credits), the games it was shipping were aiming for the bottom of the market in scope and hardware spec: a PSP game, a Wii game, an Android game. The luxury of small scope is that you never end up in a place where optimization is some broad problem that needs to be solved globally; it’s always one specific thing that needs to be fast. Optimizing for something bigger needs production scenes to provide profiling data. It’s not something you want to approach by saying “I know what the best practice is” and immediately architecting for it on a shot in the dark. Being in a space where your engine just does the simple thing every time instead means it’s easy to make the changes needed to ship.
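    That GC point is usually handled with object pooling: pre-allocate your objects once and recycle them, so the hot loop never allocates and the collector has nothing new to trace. A toy Python sketch of the pattern (the names are illustrative, not any engine’s API):

```python
class Bullet:
    """A hypothetical per-frame game object."""
    __slots__ = ("x", "y", "alive")

    def __init__(self):
        self.x = self.y = 0.0
        self.alive = False


class Pool:
    """Pre-allocates objects up front so the hot path never allocates."""

    def __init__(self, factory, size):
        self.items = [factory() for _ in range(size)]  # all allocation happens here
        self.free = list(self.items)

    def acquire(self):
        obj = self.free.pop() if self.free else None  # None means pool exhausted
        if obj is not None:
            obj.alive = True
        return obj

    def release(self, obj):
        obj.alive = False
        self.free.append(obj)  # recycled, never handed to the GC


pool = Pool(Bullet, size=64)
b = pool.acquire()   # no allocation in the hot loop
pool.release(b)      # and no deallocation either; the collector stays quiet
```

    The same shape shows up in Unity C# and GDScript alike; the only engine-specific part is which objects are expensive enough to be worth pooling.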


  • The game has already consumed over 40 hours of my time, and I’ve got plenty more campaign to go. It does just about all the stuff I wanted JA2 to have to make it play faster - combat is faster, looting is faster, inventory is faster. It has a few things that look like X-COM, but it still mostly plays like JA. The early game is the roughest part but things definitely shaped up once I had a team with size, experience and gear.

    And the campaign is detailed, with a few surprises and plenty of side quests - it does some things to pull the rug out from under you, which is rude, but rewarding if you play along and accept a few losses (or carefully savescum and go out of your way to avoid triggering timed quests).



  • Arch is always “latest and greatest” for every package, including the kernel. It lets you tinker, and it’s always up to date. However, a rolling release introduces more ways to break your system - things start conflicting under the hood in ways that you weren’t aware of, configurations that worked don’t any longer, etc.

    This is in contrast to everything built on Debian, which Mint is one example of - Mint adds a bunch of conveniences on top, but the underlying “how it all fits together” is still Debian. What Debian does is set a target for stable releases and ship a complete set of known-stable packages. This makes it great for set-and-forget uses, servers that you want to just work, and such. And it was very important back in the ’90s, when it was hard to get Internet connectivity. But it also means that it stays behind the curve on application software releases, by periods of months to a year or more. And the original workaround to that is “just add this other package repository”, which, like Arch, can eventually break your system by accident.

    But neither disadvantage is as much of a problem now as it used to be. More of the software is relatively stable, and the stuff you need to have the absolute latest for, you can often find as a flatpak, snap, or appimage - formats that are more self-contained and don’t rely on the dependencies that you have installed, just “download and run.”

    Most popular distros now are Arch or Debian flavored - same system, different veneer. Debian itself has become a better option for desktop in recent years just because of improvements to the installer.

    I’ve been using Solus 4.4 lately, which has its own rolling-release package system. Less software, but the experience is tightly designed for desktop, and doesn’t push me to open terminals to do things like the more classical Unix designs that guide Arch and Debian. The problem both of those face as desktops is that they assume up-front that you may only have a terminal, so the “correct way” of doing everything tends to start and end with the terminal, and the desktop is kind of glued on and works for some things but not others.


  • My principle of “blockchain’s fundamental value” is simply this: A blockchain that secures valuable information is valuable.

    To break that down further:

    • “Valuable information” isn’t data - it’s something that you can interpret, that has meaning and power to affect your actions. So, price speculation taking place on a chain isn’t that valuable in a broad, utilitarian sense, but something like encyclopedic knowledge, historical records, and the like might be. The sense of “this is real” vs “this is Monopoly money” is related to the information quality.
    • “Secures” means that we have some idea of where the information came from, who can access it, and whether it’s been altered or tampered with. Most blockchains follow the Bitcoin model and are fully public ledgers, storing everything - and just within that model (leaving aside Monero etc.) there are positive applications, but how “automatically secure” it is depends entirely on what application you’re aiming for.
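    The tamper-evidence half of “secures” comes from the hash chain itself: each record’s hash folds in the hash of everything before it, so rewriting history changes every later hash. A toy Python sketch of that idea (not any real chain’s format; the “deed” records are made up):

```python
import hashlib


def link(prev_hash: str, record: str) -> str:
    """Hash of a record chained to everything before it."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()


def build_chain(records):
    h = "0" * 64  # genesis value
    hashes = []
    for r in records:
        h = link(h, r)
        hashes.append(h)
    return hashes


records = ["deed: parcel 12 -> Alice", "deed: parcel 12 -> Bob"]
original = build_chain(records)

# Altering an early record changes every subsequent hash, so a rewritten
# history can't silently match the chain head everyone else has published.
tampered = build_chain(["deed: parcel 12 -> Mallory", records[1]])
print(original[-1] != tampered[-1])  # True
```

    Everything else a chain adds (consensus, tokens, smart contracts) is about deciding *who* gets to append; the tamper evidence is just this.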

    You don’t need to include tokens, trading, finance, or the specific method of security to arrive at this idea of what a blockchain does, but having them involved addresses - though maybe without concretely solving - the question of paying upkeep costs, a problem that has always dogged open, distributed projects in the past. If the whole chain becomes more valuable because one person contributes something to it, then you have a positive feedback loop in which a culture of remixing and tipping is good. That loop tends to get undercut by “what if I made scam tokens and bribed an exchange to list them”, by the maximalist “we will rule the world” cultures of Bitcoin and Ethereum, or by the cynical VC-backed corporate blockchains. But the public alt chains a bit out of the spotlight, with longer histories - stuff like Tezos and NEM/Symbol - tend to have a more visible sense of purpose in this direction: they need to make a myth about themselves, and the myth turns into information by chance and persistence.

    What tends to break people’s brains - both the maxis, and people who are rabidly anti-crypto - is that securing on-chain value in this way also isn’t a case of “public” vs “private” goods. It’s more akin to “commons” vs “enclosed” spaces, which is an older notion that hasn’t been felt in our political lives in centuries, because the partnership of nation-states and capital has been so strong as a societal coordinating force - the state says where the capital should go, the people that follow that lead and build out an empire get rewarded. The commons is, in essence, the voice in the back of your mind asking, “Why are you in the rat race? Do you really need an empire?” And this technology is stating that, clearly and patiently: making a common space better is another way to live.

    And so there is a huge amount of spam around “ownership”, but ownership itself isn’t really a factor. That’s just another kind of information that the technology is geared towards storing. The social contract is more along the lines that if you are doing good for a chain and taking few risks, a modest, livable amount of credit is likely to flow to you in time. Everyone making “plays” and getting burned is trying to gamble with it, or to advance empire-building goals in a basically hostile environment that will patch you out of the flow of information.



  • To me, a big difference is in the lengthy prelude, which follows the model of TOS, just with an updated production. First come the synths layered with strings, which are very 80’s wonder-music (it could be right out of the score for Flight of the Navigator or The Goonies), and then the French horns enter playing a round, which adds a Wagnerian element.

    The percussive “march music” elements quoting TMP are subdued in TNG’s arrangement - it’s a less compressed, “punchy” sound, and I believe the mic has been set farther back or they’ve EQ’d out some higher frequencies. Those decisions, plus a few choices of instrumentation like the harp glissandos, tone down the bombastic energy and add a gliding, romantic quality. Again, more like TOS, but updated.


  • My favorite example of “weird camera” is Journey to the Planets. It’s an Atari 800 game with graphics that are more 2600-esque. It’s mostly side view, but the proportions are abstract, like a child’s drawing: the spaceship is about 1/3rd the size of the player sprite, but then as you lift off it shows zoomed out terrain and the sprite is the same size. The game is based around solving adventure game puzzles with objects that are mostly just glowing rectangles, but your way of interacting with the puzzles involves a lot of shooting. Even though there’s so little detail, every room feels “hand-crafted”.

    I’m pretty sure the game permanently altered my sense of aesthetics.


  • I used to experience FOMO over games in general. There was always some kind of technical advance to marvel at. But that ended in the past decade. Some of it is because of age, but also because my approach to games changed: it wasn’t that important to see more content, especially when the content was getting relatively less risky and more predictable in most cases - the same kind of “put 3D people in a scene and animate them kind of poorly with bad movie dialogue” stuff over and over.

    So I tend to pick up games after there’s a lot of DLC and get the bundle depending on what it adds. The microtransactions are an “almost never”, at most they’re another obstacle to gameplay and I’ll go find something else if it’s too much.

    The correct microtransaction for me is how pinball works: I play to see how much I can get out of one credit. If I meet the conditions to get a free credit I may play again, or consider that a win and walk away.


  • I’ve become comfortable with saying, “it’s not worth it being normal”, and thus am ok with the normie tag, even if I don’t want to turn it into a slur. Normal is a form of social construct that tends to be imposed on us: it’s when the teacher enters the classroom and the class sits down and shuts up, not because they explicitly agreed to, but because it was normalized. There isn’t anything in particular suggesting that normal = good.

    And that does have an elitist ring to it, which does upset the goal of equitable outcomes in some ways. There are certain philosophies in which a premise of elitism is assumed, as in, some people just won’t be able to access the necessary understanding to participate, so it’s better to have a gate than to let anyone wander through. This is the view of Leo Strauss and followers - for a really detailed explanation try Arthur Melzer’s book Philosophy Between the Lines.

    Even if you don’t read that book(it’s a good book), I would make the case in this way: it’s the difference between online gaming that uses automatic matchmaking, versus a martial arts dojo. Online gaming and toxicity correlate because there is no lower bar on who can play, so all sessions are pushed to the lowest common denominator of cheating, griefing, etc. But someone who participates in martial arts like a gamer gets kicked out of the gym, if the standards are high: an instructor who values students does not let them attempt to eyegouge each other or slam their head on the mat.

    Occasionally someone like that will sign up for a tournament, commit an illegal move in the first round, and get themselves disqualified. But they can leave their opponent seriously injured in the process, and maybe end a career. So the standards tend to condition a degree of gatekeeping, respect for others, etc. Not every gym does well at this, and some styles like boxing have a norm of “hard sparring” where full-contact is trained and damage is expected. But predominantly the focus is on getting the techniques and training without destroying yourself or others.

    And I do liken the idea of complete access to, essentially, allowing dirty street brawls to be the only kind of sparring, the only way in which you can interact online with strangers.

    Martial artists also sometimes use the normie term. They will say it outright: you have to be a weirdo to spend so much time getting beat up and choked out. We can have a gate and still be tough on ourselves to do better.


  • Eugen is not the person I would trust for good judgment on this because his agenda has always been user growth-centric, so a Fediverse that resembles Facebook would just be a “yay” moment for him - either way he can still end up with a career by leveraging his role with ActivityPub.

    That said, I don’t believe EEE works here, because AP evolved in an environment that already had to compete like-for-like with corporate options. You’ll still log in for the rest of the fediverse if it brings you better content than Threads…

    …and it has an edge on that, because these spaces are not designed around herding industrial quantities of users. They have a natural size at which moderators shrug and close the gates if a big instance is too troublesome, because it hurts the quality of the experience for their own users. This peeves instance admins who want the power fantasy of “owning” a lot of low-quality users, but it also basically guarantees defederation with corporate social, because corporate social has never been able to handle its own moderation problem other than in a “pass-the-buck” way.


  • When I looked for answers I spent some time exploring “personal organization Youtube” and found a few general principles for myself:

    • Make a real place for everything, not a pile. The difference between a simple tray or bin and a pile is that you can reposition a container without moving every single item in it. Repeating this principle at different scales from very small to very large will quickly organize things into a hierarchy, make it less exhausting to “declutter” and reduce the chance of errors like knocking over stuff and having a cascade.

    • Have more containers than you need. Then you don’t “end up” with a pile forming. But don’t have more containers than is appropriate for the space you’re working with. That’s just a sign of needing to throw out some things.

    • Also consider how you “divide” vs “bind” space together. The two are like yin and yang: we divide documents into pages, but then we bind them together again with various mechanisms like yarn, binder clips or glue. Sometimes the right answer for making a place is to assume the space is pre-divided, but make it easy to bind things together.

    • Taking this idea to its extreme, you can cram a huge number of small items into a three-ring binder with various accessories (pencil pouches, clear dividers, etc.). This is a wonderful way to deal with those products that ship with, e.g., an extra screw.

    • You don’t have to invest in expensive containers. Cardboard and leftover food plastic can work on a budget.

    • If throwing things out is triggering your hoarding senses, try making spaces of relatively greater and lower priority: your “favorites” vs the “other stuff”. This sorting helps you now by putting the stuff you like within reach, and it helps later by creating an opportunity to declutter by tossing the whole bin at once.

    What are the best containers to invest in for the long run? In my opinion, small zipper bags, desk organizers, pencil bins, and cafeteria trays. If you have a lot of round items, a lazy susan, or variations on that like a makeup organizer.


  • With big, complex synthesizers made for sound design (which the Peak counts as - generally that’s going to be the case if you have a modulation matrix), you have to approach the programming with a layer of philosophy behind it, because while you can get a result by turning knobs, that relies on your muscle memory, which tends to result in just remaking the same go-to bread-and-butter sounds over and over. Muscle memory is what you use for performance with a synth as a live instrument. Knowing which parts you intend to become muscle memory, and which you’re more likely to approach with the manual and note paper, is an important factor in being at ease with a sound design synth. It also explains why people are so happy with simpler synth architectures, since they avoid putting you in that position and let you use every knob performatively.

    To start a patch in a more logical way, you can go at it by brainstorming a few elements to focus on(pitch, rhythm, and other fundamentals) and then doing a Venn diagram of that with the synth features that could emphasize those elements. There are plenty of video tutorials around on synthesizer fundamentals, but the problem is in motivating yourself to actively explore them, which the diagram can help with. Remaking famous patches is also a common way to go about the learning process, and you can get a lot of intuition for what things to try by doing remakes, just following a tutorial for another synth and adapting it to yours.

    Even though you have plenty of architectural power to work with, you may run into limitations in your remake because every synth has a slightly different character, usually manifested in how fast the envelopes are or the style of the filter and oscillators. This can also tell you something about “what the synth does well”.


  • I have no plans to support p92 precisely because it’s going to “push” users together as a commodity. What Meta has jurisdiction over is not its communities but rows of data - in the same way that Reddit’s admins have conflicted with its mods, it is inherently not organized in such a way that it can properly represent any specific community or their actions.

    So the cost-benefit from the side of extant fedi is very poor: it won’t operate in a standard way, because it can’t, and the quality of each additional user won’t be particularly worth the pain - most of them will just be confused by being presented with a new space, and if the nature of it is hidden from them it will become an endless misunderstanding.

    If a community using a siloed platform wants to federate, that should be a self-determined thing and they should front the effort to remain on a similar footing to other federated communities. The idea that either side here inherently wants to connect and just “needs a helping hand” is just wrong.


  • I believe there is a healthy relationship between instances and magazines, actually: the way in which topical forums tend to be “hive-mindy” fits well with Fediverse instance culture. On Reddit, scaling leads toward “locking down” topical discussion into a bureaucratic game of dancing around every rule, because all users are homogeneous - just a name, a score, and a post history. Here, instead, you can say “this board is primarily about this” but then allow in a dose of chaos: afford some privilege to the instance users, who already have a set of norms and values in mind, and push federated comments out of view as needed, where you know the userbases are destined to get into unproductive fights.

    This also combats common influencer strategies of deploying bots and sockpuppets, because you’ve already built in the premise of an elite space.

    There’s work needed on the moderation technology of #threadiverse software to achieve this kind of vision, but it’s something that will definitely be learned as we go along.


  • The most universal mammal behavior I know of is not visual, it’s the “hi I’m here” bark or grunt. It’s something that was pointed out in a wildlife tour video where they visited mountain gorillas: if you don’t make any noise and their first indication is visual, you may have predatory intentions, but if you add a little “mm” noise, you’re just passing through and they can relax.

    It works for many kinds of creatures, humans included.


  • Recently I entered the world of dip pens and got a set from Deleter: Dip pen holder, G-nibs, and the Black 4 Ink. The G-nib is the most common nib used in manga drawing. It needs some pressure to do its work, but it’s flexible enough to do thin and thick strokes.

    They aren’t hugely expensive items - the nibs can be bought in large packs for a few dollars at stationery stores, and are made to last for a few months of heavy use each. The ink is a little more expensive. It’s the kind of thing where the results are better in that you can get some really sharp lines by using viscous ink that would clog anything else, but also, you’d only use it if you’re deep into working with ink and aren’t satisfied with felt fineliners. It’s just logistically harder to deal with keeping an ink pot secure on the desk, dipping the pen, cleaning the nib, putting everything away. Fountain pens are way more popular with collectors, but dip pens are workhorses and there’s almost nothing to troubleshoot, just “how do I keep ink from blobbing on it” (scrub off the protective factory coating with mild detergent or just use the ink itself) and “how do I clean it” (rinse with water).

    The other tool of that type is the kolinsky sable brush - sable hair is springier than synthetics. I am on the fence about actually getting one of those; my rubberized-felt brush pens do a decent job of giving me the qualities of brushes that I want, and cleaning brushes is more annoying.


  • Mastodon’s export portability mostly focuses on the local social-graph aspects (follows, blocks, etc.), and while it has an archive function, people frequently lament losing their old posts and that graph relationship when they move.

    Identity attestation is solvable in a legible fashion with any external mechanism that links back to report “yes, account at xyz.social is real”, and this is already being done by some Mastodon users - it could be through a corporate web site, a self-hosted server, or something going across a distributed system (IPFS, Tor, blockchains…). There are many ways to describe identity beyond that, though - one could, for example, provide a kind of landing-page service like Linktree to ease browsing different facets of identity, or describe “following” in more than local terms.
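    Mastodon’s existing verification already works in this back-link style: your profile points at a site, and the site links back with rel=“me”. A minimal Python sketch of checking such a backlink (the page markup and the account handle are made up for illustration; a real checker would fetch the URL over HTTP):

```python
from html.parser import HTMLParser


class RelMeParser(HTMLParser):
    """Collects href targets of <a>/<link> tags carrying rel="me"."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel may hold several space-separated values, e.g. rel="me nofollow"
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            if a.get("href"):
                self.links.append(a["href"])


def attests(page_html: str, account_url: str) -> bool:
    """True if the page links back to the account with rel="me"."""
    parser = RelMeParser()
    parser.feed(page_html)
    return account_url in parser.links


# Hypothetical external page claiming a fedi account:
page = '<html><body><a rel="me" href="https://xyz.social/@vee">my fedi</a></body></html>'
print(attests(page, "https://xyz.social/@vee"))  # True
```

    The mechanism is deliberately dumb: any site you control can attest any account, and the reader decides how much they trust that site.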

    I would consider these all high-effort problems to work on since a lot of it has to do with interfaces, UX and privacy tradeoffs. If we aim to archive everything then we have to make an omniscient distributed system, which besides presenting a scaling issue, conflicts with privacy and control over one’s data - so that is probably not the goal. But asking everyone to just make a lot of backups, republish stuff by hand, and set up their own identity service is not right either.



  • If you look at the links by each post, you’ll notice that some reference a URL that goes off of your local instance. In Lemmy these are icons; in kbin it appears from the “more” link. Sometimes it’s unclear who or where I’m interacting with, and examining the URL helps me get some idea of it. In federated social media, different instances often develop different subcultures, but since they can access each other, you get more dimensions of interaction and more ways to decide how to behave.