I make jungle/IDM/old school rave tracks using Ableton, released through openwheel.bandcamp.com

In my other life, I work in IT.

  • 2 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 11th, 2023








  • Personally, I’m looking at AI in music as the modern equivalent of the introduction of sampling in the 80s. Loads of people lost their minds back then - “Nobody has to be able to play instruments anymore”, “It’s cheating”, “Sampling isn’t real music”, etc etc - and look at how ubiquitous sampling is now, and how literally nobody with half a brain thinks that sampling is “cheating” or that anyone using a sampler isn’t capable of making “real” music. Sure, it’ll take a while for the whole AI thing to settle down and find its place in composition and production as artists learn how best to integrate it into their creative flow, and inevitably there will be people who use it to make up for a lack of talent and who might even get lucky and have a hit with fully AI-generated compositions, but most artists are going to learn how to use it as a tool to expand their repertoire or to generate ideas from their own ideas, rather than as a replacement for their own creative input. I can easily imagine bands like Radiohead already having spent weeks training models to their own specifications to work within their specific requirements rather than replace them, instead of using publicly available models that won’t give them the results they want.

    I think a few specific markets are going to be more impacted than others - music for low-budget films and YouTube content creators are the main areas which I think will benefit most from purely AI-generated music (and people currently working in these fields may have more to worry about than most). Rather than worrying about paying royalties for music playing in the background of an Italian cafe scene in your 1940s-set drama, you can generate a unique piece which fits the scene perfectly, with nobody needing to research copyrights or anything like that. Same for YouTube videos where you just want a catchy hook looping in the background over you talking about whatever - no need to worry about getting your video taken down for copyright issues when the music was created from a text prompt ten minutes before you uploaded it.

    Pop music, whatever your personal interpretation of that may be - from The Greatest Showman soundtrack, to Warp Records output, to vaporwave, to K-pop, to whatever’s on the current Radio Six playlist - won’t suffer from the introduction of AI, I believe, as the people creating music that people love to listen to will not allow their own creative input to be overshadowed by AI-generated content, but will probably be happy to see what using this new tool can add to their creative output. Some artists will embrace it more than others, and others will completely shun it, but to go full circle to my opening sentence, I believe those people are the equivalent of the people saying that samplers would be the death of “proper” music back in the day. And just as the sampler is virtually ubiquitous across all genres of music now, I believe that even the genres that shun AI currently will eventually see the value once the pioneers have figured out exactly how to use it effectively.

    (Edited a bit for clarity and adding a couple of other thoughts - oh, and I’ve got to be honest, I haven’t read the article!)





  • It’s a cool little box of tricks. At the moment I’ve got two presets that I jump between, one for new projects/mixing, the other for my performance project.

    For performance it’s set up like this:

    1st row - a row of group track selectors: (1) = Drums; (2) = Bass; (3) = Synths; (4) = FX and Vox. Once the group is selected, Banks 1 and 2 on the MIDI Fighter Twister (MFT) are scripted for selected-track control over the devices, sends, pan, volume etc. on the selected track.

    2nd row - button (1) controls the Touch Me Max device by Elizabeth Homeland - this is the only button in this preset that also uses the dial on the Phantasmal Force to send out MIDI CC data; button (2) toggles the floating windows of two Max device interfaces at the same time, Control Matrix by killhu and Inspector by AudioLord; button (3) selects a specific rack on a bass track in the Bass group which is controlled by Bank 3 on the MFT (filtering/distortion etc.) via the Control Matrix device; button (4) triggers the Collapse All function in the GETOUTOFMYWAY device by Elizabeth Homeland so the project immediately collapses back to just showing the group tracks rather than all the tracks within them, which is great if I’ve unfolded any of the groups and have got lost in a world of clips on my Launchpad.

    The bottom left 4 buttons (2x2 grid) control this MFT Bank Changer device - no prizes for guessing what that does 😊
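    In case it helps picture what the bank changer is doing, here’s a minimal sketch of the idea in Python with mido - not the actual Max device, just the concept of turning button presses into bank-change messages. The note numbers, CC numbers, channel and port names below are all placeholders/assumptions, so check the MIDI Fighter Utility for the real mappings.

```python
# Sketch: forward a 2x2 grid of button presses to MIDI Fighter Twister bank changes.
# Every number and port name here is a placeholder, not a real mapping.
import mido

BUTTON_NOTES = {36: 0, 37: 1, 38: 2, 39: 3}  # hypothetical notes from the four buttons
BANK_CHANGE_CCS = [0, 1, 2, 3]               # assumed CCs that select banks 1-4
MFT_CHANNEL = 3                              # assumed bank-change MIDI channel (0-indexed)

with mido.open_input('Phantasmal Force') as buttons, \
     mido.open_output('Midi Fighter Twister') as mft:
    for msg in buttons:                      # blocks, yielding each incoming message
        if msg.type == 'note_on' and msg.velocity > 0 and msg.note in BUTTON_NOTES:
            bank = BUTTON_NOTES[msg.note]
            # tell the Twister to jump to the requested bank
            mft.send(mido.Message('control_change',
                                  channel=MFT_CHANNEL,
                                  control=BANK_CHANGE_CCS[bank],
                                  value=127))
```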

    The bottom right 2x2 grid of buttons isn’t set in stone yet - the two right-most buttons on the third row aren’t programmed for anything yet, although I have an idea what they’ll be used for; the very bottom right corner button is another track selector for a Portatron channel - this is constantly playing tape loops (which aren’t synced to Live’s BPM) over the whole set, but the channel fader is usually all the way down. I can jump straight to this track with the bottom right button, then bring the fader up with the MFT to play a looped drone while I speed up the project BPM for the faster section of the set, without it affecting the output of the Portatron.

    The 3rd button on the bottom row is going to be hard to explain 🤣. I’ve basically taken the idea under the heading “A novel use for the crossfader” from this page on the pATCHES website but turbo-charged it. In the Drum group, I’ve set up several drum tracks playing loops and classic breaks, and use this button to switch between them, but instead of just using static fills on the B track like in the linked article, I’ve set up a multi-chained FX rack with different Permut8 banks on each chain. One encoder on the MFT controls the chain selector to jump between banks, and another encoder feeds MIDI notes into the Permut8 rack via a Potee device, so as I twist the encoder it changes the current program. Between one button and two dials I have full control over a couple of hundred glitch/pitch/stutter effects that I can drop in and out of at will.
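    And if the encoder-to-notes bit is hard to picture, here’s a rough sketch of that idea too - mapping a 0-127 CC value onto a spread of note numbers and only sending a note when the mapped value changes, which is roughly what the Potee device is doing for me. Again, the CC number, note range and port names are made-up placeholders, not my actual mapping.

```python
# Sketch: map an encoder's 0-127 CC value onto a range of MIDI notes and fire a
# note whenever the mapped value changes. CC/note numbers and ports are placeholders.
import mido

ENCODER_CC = 16                    # hypothetical CC sent by the MFT encoder
FIRST_NOTE, LAST_NOTE = 36, 67     # hypothetical note range the Permut8 rack responds to
last_note = None

with mido.open_input('Midi Fighter Twister') as mft, \
     mido.open_output('Permut8 Rack') as rack:
    for msg in mft:
        if msg.type == 'control_change' and msg.control == ENCODER_CC:
            note = FIRST_NOTE + (msg.value * (LAST_NOTE - FIRST_NOTE)) // 127
            if note != last_note:  # only retrigger when the mapped note actually changes
                rack.send(mido.Message('note_on', note=note, velocity=100))
                rack.send(mido.Message('note_off', note=note, velocity=0))
                last_note = note
```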

    Blimey, that was a long post - sorry… Hopefully it makes sense - I’ve never thought about having to explain my setup before! (I’ve just realised I’ve not even covered the preset for mixing either - I guess that’s for another time, if anyone even cares. I use the dial on the Phantasmal Force much more in that preset - this one mainly just uses the buttons, except for the Touch Me device button which uses the dial too. The MFT does 99% of the twisty knob stuff in my performance set)

    TLDR: buttons do stuff





  • I think I’ve got my head around the whole instances thing now - although I’ve accidentally posted elsewhere on a different user account to this one without realising it. I’ve accidentally set up three or four accounts, all with the same name, on different instances. Oops 😬. Going to try to stick to using this one now I’ve noticed!

    Anyway, the other bit of advice was basically just “playing live should be fun or why would anyone want to do it? Keep it simple, or you’ll just get stressed out and it’ll be obvious”. This was after he watched me jump around a set with about 90 channels for half an hour. What really clicked for me was when I worked out how to take his method of dub performance (effectively using a reel-to-reel as an instrument) and translate it to a Launchpad performance, but keep the flexibility of having each of the 90-odd channels independently tweakable without the whole thing being overwhelming.

    I’ll try to work out how to make a post on what I’ve done when I get time - hopefully it’s interesting enough to warrant one!


  • Related to this, a good friend of mine is the engineer for a very well-known dub producer, and last time I saw him I ran through a live set I’ve been working on for a few months (I make jungle/breakbeat/IDM stuff). He gave me two great tips - one of which I can’t be bothered to explain here as it’ll take too long to type out, but the other was basically saying the same thing as this - make some elements really jump out of the mix, make stuff BANG, and it’ll make the audience respond so much better than having everything sit nicely and politely throughout the whole set.

    (Just want to mention that this is the first time I’ve commented on Lemmy and I’m not sure how the whole crossposting thing across instances works. I’ve left the same comment on the article at music.productiom - hope that’s okay! Just want to try to spark discussion and realised this is posted to two different places where I have an account 😊)