Thoughts on adding 'spectral information' beyond just drums to the mix

More complex instrumentation, beyond simple drum hits, crowds the spectral space in the mix. Most songs with most kits sound pretty well balanced when limited to live guitars, bass and vocals aided by BB. For some months now, enabled by Phil, Phil Flood, Persist and others, I’ve tinkered with using enhanced MIDI files to add depth and character to songs the trio might otherwise pass on for lack of needed ‘signature’ instrumentation.

I also record everything that passes to the PA via the Headphone output for post-mortem analysis the next day. Everything runs through the mixer to the PA. The lead guitar amp is mic’d and the bass amp is DI’d, so they are heard in the room live as well as ‘sampled’ on the recording. We have no FOH person to adjust the mix, so I use each night’s recorded work to note needed adjustments. Often the live sound in the room (I am told) is excellent, although sometimes the backing BB MIDI channel is a bit loud. On the recordings it’s more obvious, since the amps are not also adding to the sound as they do live.
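For what it’s worth, the offline check I do can be as crude as a band-energy readout from the night’s WAV. This is only a rough Python sketch (the filename is made up, and it is not anything built into BB or BBM), but it shows roughly which bands a given night’s mix piles into:

    # Rough per-band energy readout from a night's recording (sketch only).
    import numpy as np
    from scipy.io import wavfile

    rate, data = wavfile.read("gig.wav")        # hypothetical filename
    if data.ndim > 1:                           # fold stereo to mono
        data = data.mean(axis=1)
    data = data / (np.abs(data).max() or 1)     # normalize

    spectrum = np.abs(np.fft.rfft(data)) ** 2
    freqs = np.fft.rfftfreq(len(data), 1.0 / rate)

    bands = [(0, 100), (100, 400), (400, 2000), (2000, 6000), (6000, 16000)]
    total = spectrum.sum()
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        print(f"{lo:>5}-{hi:<5} Hz : {100 * spectrum[mask].sum() / total:5.1f} %")

If the 400 Hz to 2 kHz slice balloons on the songs with organ and piano but not on the drums-only songs, that at least separates a crowding problem from a simple level problem.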

I would be interested in anyone’s thoughts on the following:

  • Drums alone or even simple additions such as piano on Phil Flood’s “Willin’” using NP Big Bose Jazz Trio Centered XRp seem to take up a narrow spectral window and fit within a ‘set once and forget’ approach to mixing.
  • Some songs using the same kit (I Know You Rider and Ohio) seem louder, competing with the vocals, although the mix has not changed. Simply using the volume pedal during ‘Rider’ to make some dynamic changes makes me think I can, at least on some songs, pull the volume down and all is well.
  • More complex added material such as Hammond and/or piano and synth seems to crowd the spectrum, calling for more refined EQ rather than just a simple volume adjustment. For instance: Black Magic Woman (NP P-Bass and Hammond 2 XRL - Phil Flood), Why Does Love Got to Be So Sad (Santana Piano & Organ - Phil Flood) and Comfortably Numb (NP P-Bass and Better Strings Centered XRp - Phil).

So before I start arbitrarily adjusting individual elements in drum kits solely for volume, I thought this might be something others would have thoughts on.


I would be interested to hear others’ thoughts as well, since that’s been my holy grail ever since I got the BeatBuddy: figuring out ways to use the different kits while maintaining consistent volume/mix levels.

As you suggested, recording and analyzing every session is the only way I have found to do this. Heck, after having my BB for almost 2 years I still sometimes find myself tinkering with the midi files or individual instruments in the kits.

As I keep adding songs, though, somehow I don’t think that will ever end, because the midi files themselves can be quite inconsistent, and getting good average velocity levels on every instrument in a kit can also be a daunting task. Maybe one of these days, but unfortunately right now I am not competent or patient enough to create from scratch all the songs I want to play. :rolleyes:

What I’ve been thinking about doing… if I ever have the time, is to create maybe 3 to 4 kits with the instruments I want and then only use those. I have found that you do start learning the nuances of the different kits after a while, which in turn makes it easier to know how to modify your midi files. However, since different kit authors use different instrument ranges and the posted songs are modified to work with a specific kit, it often means having to move different instruments to different ranges. That’s why it’s so nice when people post the original midi file instead of just the .sng file, since midi export is still not really fixed in BBM.

“…What I’ve been thinking about doing… if I ever have the time, is to create maybe 3 to 4 kits with the instruments I want and then only use those”
I’m relieved to find I’m not the only one diving into rabbit holes. Nature being what it is, I’m always asking ‘why?’ This comment of yours, Raymond, is in keeping with an exercise I did after posting: it certainly makes sense for the more prolific among us (Phil, Phil Flood, Persist and others) to adapt known good files by adding pieces, and when I mapped out which parts were kept and which were varied or added, that is of course just how it was done.

Since the .wav files create the color, finding a few with parts that work (mostly) and making small tweaks is useful. I still need to understand whether simply reducing volume in some cases will be enough to better balance the more complex kit to my ears. After all, subtractive EQ reduces volume for specific frequencies. Maybe what I’m looking for is the “less work approach” to modify outputs for more songs easily, as opposed to arbitrarily moving each organ note down a few dB, finding I need to do it again to get it right for a single song, and then doing it all over again for another.
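To make the “less work approach” concrete, here is a rough sketch using the Python mido library (nothing to do with BBM itself) that pulls an entire note range down by one factor in a song’s midi file, instead of touching every organ note by hand. The filename and the 60-96 range are placeholders, and velocity does not map to dB in any exact way, so the numbers are only a starting point:

    # Scale every note-on in a chosen range by one factor (sketch only).
    import mido

    SCALE = 0.84          # very roughly -1.5 dB, if velocity tracked gain linearly
    LOW, HIGH = 60, 96    # placeholder range for the keys in this kit

    mid = mido.MidiFile("song.mid")             # hypothetical filename
    for track in mid.tracks:
        for i, msg in enumerate(track):
            if msg.type == "note_on" and LOW <= msg.note <= HIGH and msg.velocity > 0:
                track[i] = msg.copy(velocity=max(1, int(msg.velocity * SCALE)))
    mid.save("song_keys_down.mid")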

^
^^
^^^
I like the way this discussion is heading. I spend far too much time trying to fit the song to the bounty of kits. I prefer using the NP StdPBass 63-91 and a couple of the piano kits; occasionally, a kit with strings or horns, but that’s about it.

One addendum - I realize that if I had a reasonably capable Mac, as opposed to my aging 4GB MacBook Air, even GarageBand would likely allow me to grab large numbers of channels and adjust the volume at once (for instance, only the keys), save the result, and try again if not happy with the outcome, as opposed to tedious picking in BB. However, I would still wind up with multiple variants of basically the same kit, as opposed to tinkering with EQ on one kit. Of course if this is a dead end and EQ alone won’t bring me the desired effects… back to step 1.

With regards to the volume, I went down this rabbit hole before. Dynamics is a hard beast to tame when it comes to playing live with the BeatBuddy. It feels like something is ‘missing’, when actually there’s too much going on.

When I first got my BeatBuddy, jamming solo was fun; the out of the box beats sounded great. Then I tried jamming with others and incorporating it into the duo I was in at the time. Something wasn’t quite right; the beat felt lost - there was no power or drive. I figured it must be the ‘accuracy’ of the beats and how closely they matched the original drums in the songs we played. So I started creating finely crafted beats, balancing the levels between kits as well as velocities on beats. It still didn’t sound right. It was either too quiet or too loud; it was very hard to find the right volume, especially from song to song, and it sounded quite monotonous. I’ve described it before as being too ‘busy’.

More recently I started watching how guitarists use percussion stomp boxes or how cajon players play in small setups. What I heard was a lot of space. They don’t play ‘busy’ beats. There’s something very powerful and pleasant about a simple percussion accompaniment. I made a post here where I show the versatility of simplified beats: http://forum.mybeatbuddy.com/index.php?threads/show-us-your-skills.8437/

On reflection, the problem I had was with hi-hats and cymbals. When you turn up the volume, everything goes up - including the wash of sound from the cymbals. The beat gets buried in this and dynamics are lost.
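If it helps anyone diagnose the same thing, a quick way to see whether the hats and cymbals are riding as hot as the kick and snare is to dump the average velocity per note from the beat’s midi file. This is just a sketch with the Python mido library and a placeholder filename; the note numbers it prints are whatever the file uses, so you map them to your own kit:

    # Average note-on velocity per drum note (diagnostic sketch only).
    from collections import defaultdict
    import mido

    stats = defaultdict(list)
    for track in mido.MidiFile("beat.mid").tracks:    # hypothetical filename
        for msg in track:
            if msg.type == "note_on" and msg.velocity > 0:
                stats[msg.note].append(msg.velocity)

    for note in sorted(stats):
        v = stats[note]
        print(f"note {note:3d}: {len(v):4d} hits, avg velocity {sum(v) / len(v):5.1f}")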


You make an excellent point, ruairiau, and one that usually only comes after having invested a long time, as we have done as well (more than 2 years of dedicated BB use in almost every song we do). Our trio rock cover needs are likely different than yours but the basic philosophy for all of us becomes the same. A partial set list is attached for reference; you can see that, aside from the 10-12 songs in question in this thread where we are starting to experiment with the more complex parts (keys, etc.), we use stock beats from predominantly 3 kits. Many examples can be accessed from the FB link in my signature from recent live recordings.

Some use intros and outros but I rarely ever invoke the transition to the ‘2nd verse’ part where cymbals come to prominence. A song does not benefit from crowding in layers of complexity to compete with the important vocal/guitar lines. Standard Pro gets used as an alternative to Standard 1.1 to bring the drums down, or, going the other way, Rock 1.1 kicks harder for a few songs, all without needing to adjust the mixer channel.

A drums-only part (or even an added bass part) occupies a narrow spectral band that doesn’t crowd the vocals, and we are still in business. A song like “Willin’” with an added piano and bass can be pretty clear as well but needs a volume adjustment. Even the kick drums, when monitored via a subwoofer that crosses over at 100 Hz, are hardly present at those low frequencies, so percussion in general doesn’t create spectral mud, at least not in the important regions.

Now that some folks have created MIDI files and the drum kits to support them, we can expand our set and pleasantly surprise an audience (and keep ourselves amused) with added parts and songs we probably would not be doing without the important or signature parts (the arpeggiated strings in “Comfortably Numb” for instance; thank you, Phil). So that frames the current question: how to open spectral space with appropriate volumes for all these parts?

If by spectral space you mean you want to prevent the sound going muddy as harmonics overlap at different frequencies, then this is how I’d do it…

  1. I would let the BeatBuddy just play drums (a rough extraction sketch follows just after this list).
  2. Connect the MIDI out of the BeatBuddy to a hardware synth to play your string parts (and other parts).
  3. Run the BeatBuddy and synth into separate channels in your mixer.
  4. Cut out some holes for each channel using a graphic EQ (similar to how you would do it when recording).
  5. Profit???
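If you have the original multi-channel midi file (not the BB-converted single-channel one), step 1 could look something like this: a rough mido sketch, with placeholder filenames, that keeps only the channel 10 drum part for the BeatBuddy and carries the timing of everything it drops:

    # Keep only the channel 10 (drums) messages from a multi-channel file (sketch only).
    import mido

    src = mido.MidiFile("original_song.mid")          # hypothetical filename
    drums = mido.MidiFile(ticks_per_beat=src.ticks_per_beat)

    for track in src.tracks:
        out = mido.MidiTrack()
        pending = 0
        for msg in track:
            pending += msg.time
            if msg.is_meta or getattr(msg, "channel", None) == 9:   # channel 10 is 9 here
                out.append(msg.copy(time=pending))
                pending = 0        # delta time of dropped messages is carried forward
        drums.tracks.append(out)

    drums.save("song_drums_only.mid")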

An interesting exercise, at least to satisfy the scientist in me. I have been on a mission to avoid adding more boxes and cables (and things to worry about) live, but this might shed light on achieving spectral space. I run a 31-band on the entire output of the mixer on its way to the PA (rolling off below 100 Hz and cutting/peaking a few vocal frequencies, but typically no more than 3 dB). With better understanding I might be able to tweak a few frequencies to open up the sound without getting a bigger truck for live work.

If, like me, you have an iPad with your songs, then you already have a synth. Sampletank 2 is a great multi instrument synth.
All you need is an audio interface; I recommend the irig pro duo - it’s the best. Very compact, does audio and midi. Opens up the glorious world of music apps on your iPad.

I wouldn’t get too invested in trying to ‘fix’ your sound with EQ, not in a live environment. Sure, fix problem frequencies, but there are a lot of variables, e.g. room size, people, position, reflection etc.

“I wouldn’t get too invested in trying to ‘fix’ your sound with EQ, not in a live environment…” I am fortunate to be able to turn to this forum and get outside my own short-sighted thinking. Our EQ was intended purely to establish a common useful output not even attempt to fix the unfixable rooms we play in. Thank you for the lead on Sampletank. We use iPads (UnRealbook for lyrics and even a definable button to trigger some short recorded bits).

Scrub this idea. I just tried this out; it doesn’t work. That’s really disappointing. What’s odd is that you can do this if the MIDI is played from an external source, e.g. your PC connected to the BeatBuddy and a synth. You simply set the BeatBuddy to only listen on channel 10 and do a MIDI Merge for the OUT. However, playing the same MIDI track on the BeatBuddy will not work, as it appears to ignore channels (both reading them and when sending them out to other devices via MIDI OUT).
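For anyone trying the external-source setup that does work, the computer side can be as minimal as playing the full multi-channel file out one port, with the BeatBuddy set to respond only on channel 10 and its MIDI Merge passing everything through to the synth downstream. A mido sketch; the port name is whatever your interface reports:

    # Play a multi-channel midi file out one port in real time (sketch only).
    import mido

    PORT = "USB MIDI Interface"                  # placeholder port name
    with mido.open_output(PORT) as out:
        for msg in mido.MidiFile("original_song.mid").play():
            out.send(msg)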

Good bit of work however. In the meantime, attempting to improve the usefulness of a given kit for a given song (in this case Comfortably Numb, where the synth strings seemed too loud for our setup relative to the other parts), I’ve ‘simply’ reduced the volume of each string channel (-1.5 dB) and the overall kit volume (-1.5 dB). A good candidate because the MIDI file is so good, and it did seem the kit was louder than the standard drum kits we use that require no mixer fader change. I still suspect there is more apparent ‘prominence’ of some instrument frequencies to be studied and dealt with here, as opposed to simple overall volume, even of kit parts.

I say ‘simply’ because that means resetting the levels of each individual note (notes 72-127), then bugging Persist (again) to be reminded of Phil Flood’s mini-tutorial on changing/saving drum kits. There is an obscure upper-left kit title you need to click and rename or it goes missing after all your work. We will run this at next rehearsal and see if there is sufficient improvement (likely), as opposed to further, more complex exercises. Doesn’t mean I won’t keep chasing this as a study however.
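For the record, the arithmetic on those two stacked -1.5 dB cuts is simple enough; gains multiply, so the dB just add:

    # Two stacked -1.5 dB cuts (string instruments, then overall kit level).
    string_gain = 10 ** (-1.5 / 20)              # ~0.84
    kit_gain = 10 ** (-1.5 / 20)                 # ~0.84
    print(round(string_gain * kit_gain, 3))      # ~0.708, i.e. about -3 dB overall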

A brief (I hope) explanation of what goes on with BB multi-instrument drum kits and why they do not work as a multi-instrument midi file. When I, or anyone else for that matter, creates a multi-instrument BB drum kit, in effect what we have done is applied the equivalent of a keyboard split configuration to the BB. Most midi keyboards support this type of arrangement, where one portion of the keyboard can be designated for one instrument, and another section for another instrument. Many guitar synths allow similar things, where tones can be split at a certain fret, for example. But these are, again, generally “performance” settings. If you recorded the midi out from one of these devices, it could be on one channel, with the notes spread out over the range of the instrument.

The BB is working that way. Thus, with one of my STAX kits, for example, you have bass at 0-31, drums at 35-59, a keyboard at 60-96, and brass, maybe, at 97-127. That is a split keyboard. It is NOT 4 midi channels. The file that triggers those tones is NOT a midi type 0 file. It is, actually, a one-channel midi type 1 file.

So, if you take a multi-part midi file and play it back on a sequencer or DAW, you could have the BB assigned to channel 10, and have it trigger drum notes in a properly configured drum kit. Likewise, you could route the bass, keyboards, and strings to one or more sound modules on their respective channels to trigger those sounds. But with the file converted for use within a BB drum kit, it is no longer a multi-part midi file. The BB does not split the ranges up and assign them to instruments. The drum kit creator did that. The BB is playing one channel.

And, while you could have an OMNI out from the BB, if you are playing a multi-part song created for BB, ALL of it is going to come out, spread across all 128 available midi slots, creating a glorious cacophony in whatever module you decide to feed it. Because, again, it is NOT a midi type 0 file where multiple instruments have been joined into one file. In those files you could, for example, have note C2, midi 48, play back on both a saxophone and a piano. In BB, midi 48 will only play one wav file at a time. The difference is that the midi type 0 file recognizes that more than one of the 16 available channels can be assigned to a note at the same time. A BB file is only playing one channel.
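To put the split-keyboard idea in concrete terms, here is a toy sketch (Python/mido, with made-up ranges and a crude transpose; it is not anyone’s actual conversion workflow) of what flattening a multi-channel file onto one channel with per-instrument note ranges amounts to:

    # Toy illustration of flattening channels into note ranges on one channel (sketch only).
    import mido

    RANGES = {0: (60, 96),    # e.g. a keys channel into the kit's keyboard slots
              1: (0, 31)}     # e.g. a bass channel into the kit's bass slots

    src = mido.MidiFile("original_song.mid")     # hypothetical filename
    flat = mido.MidiFile(ticks_per_beat=src.ticks_per_beat)
    track = mido.MidiTrack()
    flat.tracks.append(track)

    pending = 0
    for msg in mido.merge_tracks(src.tracks):
        pending += msg.time
        if msg.is_meta:
            track.append(msg.copy(time=pending)); pending = 0
        elif msg.type in ("note_on", "note_off") and msg.channel in RANGES:
            low, high = RANGES[msg.channel]
            note = min(max(msg.note - 36 + low, low), high)   # crude transpose/clamp
            track.append(msg.copy(note=note, channel=0, time=pending)); pending = 0
        # everything else (CCs, unlisted channels) is dropped; its time stays in `pending`

    flat.save("single_channel_split.mid")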


As always I appreciate the level of discussion/education that comes of these exchanges. My primary goal is to understand if it is at all possible to create sonic space for the individual sounds layered into the file that BB triggers (e.g., EQ applied before they are merged into the single file). It may not be possible after the fact if keyboards, vocals, and guitars occupy similar frequencies; cutting/boosting one affects the others, so no space is gained.

I’ve been thinking about this a bit this morning. It is possible to open up the sound even when instruments occupy the same fundamental tone space; you have to apply some multidimensional thinking. If the sound is a flat wall of tone, coming at you from the center, from C-2 up through G8, two notes occupying the same space ARE going to get crowded. There are a couple of ways to deal with this.

First, the simplest resolution is to use some panning. Yes, you can prepare BB kits that take advantage of panning. You need to use both BB outs, and you need to record stereo samples that are panned, as the BB does not let you ADJUST the panning, but it will still play back notes that are panned in their proper space. For most live sound, though, you want the same sound coming out of the left and right speakers, so the crowd on one side doesn’t miss some instruments in the mix.

So, option 2: varying levels of reverb. Reverb has the effect of placing instruments from front to back in the sound space. Again, kits could be prepared using this process and you’d have some separation.

Finally, processing the mix through an exciter, like the BBE Sonic Maximizer, can open up space. While you normally might run one of these on the final mix from the board, there is nothing preventing you from applying it to a bus that only has a couple of your instruments. Essentially, what these things do is apply micro delays to various parts of that flat wall of tone, such that some of it hits you quicker than other parts. The delays are not perceptible, but the results are.

To understand this further, the issue is that the BeatBuddy can’t send MIDI notes out on different MIDI channels at the same time? I’m thinking that it could work with one channel if you don’t have overlap between the drum notes that the BB is playing and the notes you want triggered in an external synth: perhaps by filtering the drum notes out with a MIDI interface in between the BB and the synth…
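A software equivalent of that note filter, just to show the idea, would be something like the sketch below (Python/mido; the port names and the 35-59 drum range are placeholders for whatever your kit actually uses):

    # Drop drum-range notes from the BB's MIDI OUT before they reach the synth (sketch only).
    import mido

    DRUM_LOW, DRUM_HIGH = 35, 59                 # placeholder drum note range

    with mido.open_input("BeatBuddy MIDI Out") as bb, \
         mido.open_output("Synth MIDI In") as synth:
        for msg in bb:
            if msg.type in ("note_on", "note_off") and DRUM_LOW <= msg.note <= DRUM_HIGH:
                continue                         # let the BB's own audio cover the drums
            synth.send(msg)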

I believe the problem is that even if midi commands are included in a midi file which the BB is playing as part of a song, the BB ignores those commands, and there is currently no way to make it recognize them. For example, it does not see volume changes, but it does respond to differences in velocity. Velocity is part of a midi note, along with the note value, note on, and note off. Volume is a control change. The BB simply does not respond to control changes (or program changes) contained within the midi files in a BB .sng file. The pedal itself can send and receive some control changes, and will respond to some program changes. It also allows program changes it receives to pass through to another device. But a BB .sng cannot send or receive these changes, and there is no “non-song” type of file, like a sysex, for example, that can be loaded with a BB song and then activated to send midi CC or PC messages.

I understand the part about the BB filtering out any MIDI PC or CC messages from the files it plays. It seems it would be pretty straightforward to fix the firmware to both send these out and to respond to these messages itself (such as a volume change).

As far as using BB as a sequencer for external gear, people have been requesting separate MIDI channels out for a couple years now. See this for example: http://forum.mybeatbuddy.com//index.php?threads/midi-notes-out.4558/

Those two fixes would make the BB an incredible master sequencer! [edit: plus an incredible preset/other setting controller for other MIDI equipment.]

And just to clarify your question: the BB can send notes as an OMNI out, but, as presently configured, it cannot send out a different set of note messages on a different channel. And I agree that it seems some firmware enhancements could be made to address some of the issues we encounter.