
tommasi

Members
  • Posts: 48
  • Joined
  • Last visited

Profile Information
  • Registered Products: 5


  1. Yes, it's true (though I wish you'd stop mentioning Windows or Unix... :) DSPs work like GPUs: you preload them with the code pipeline they need to run. When they don't run it, cycles are wasted, but you can't swap code in or out in real time. Changing a patch loads the code into the DSP; activating effects just switches a processing block on or off (see the bypass sketch after this list).
  2. ...nor can a thread span two cores on an SMP system, no matter what the thread is doing (be it audio processing, video processing, or defragging your hard drive). The architectures are mostly different because DSPs are not general purpose and have strict real-time requirements. That does not mean OS theory does not apply to them (it applies to all embedded systems, certainly not just DSPs) -- it is, however, a fact that an embedded system does not have all the resources available to a desktop OS (not that I was suggesting running Windows on a Helix!). Absolutely! That's what an OS does (resource allocation), and that is why it is all the more difficult with real-time (or near real-time) requirements like the Helix's. I realize I sounded as if I were criticizing the way the Helix is engineered: I didn't mean to at all -- I was just making the point that it is theoretically possible and desirable, but not necessarily feasible without overengineering, raising costs, prices and everything related. As a fairly steady returning Line6 customer, I don't think a bad engineering choice was made, apart from leaving the on/off button out of the HD line. :) This was indeed the right choice, because now people at least have the ability to allocate resources manually and take advantage of both DSPs in a relatively, if not absolutely, transparent way. Joining the two paths in software is what I was missing from the X3 line.
  3. This is true only in a situation (which is the present one) where you might use ALL THREE of them together. Bypassed processing blocks "eat" virtual resources only because they could be engaged at some point, and the modeler needs to know it can handle them just in case. But switching amp channels means only one of them is active at any given time, so the cost would still be 40% in any case (see the budget sketch after this list). The advantage over the present situation is that if you use the Helix in stompbox mode, you cannot currently switch patches to switch channels effectively (you'd lose all the engaged "stomps" to the new patch defaults), nor can you switch between channels easily, because, as you said, you need to have both of them in a patch, effectively throwing away about 40% of the processing power (unless you use both channels at once, in which case you have no other option anyway). Still, it's true that this is commonplace, but with a specifically targeted implementation there'd be no DSP overhead, and I bet sooner or later someone will do it, advertising it as "multichannel amp simulation".
  4. Actually, having one "super path" does not automatically imply restriction. It does imply, however, dynamic (at patch design time) resource allocation between the two DSPs to optimize the processing power consumed by the blocks added. From a software architecture standpoint, processing resources should ideally be virtualized -- you never tell your computer which processor (or core) to run a process or a thread on; the OS takes care of that for you. This ideal solution, however, means that the operating system is more complex, and virtualization of the resources adds a little overhead. There is little argument: a virtualized processing resource pool would be "better" from a patch programming standpoint. You'd never have to move blocks from one path to the other hunting for the optimal arrangement that accommodates them all within the available resources (see the allocation sketch after this list). It would, however, make the operating system cost more, be more complex (and therefore bug-prone), probably "eat" away a little processing power (though not necessarily), and certainly use more memory. I never really cared much for the "dual path" approach, as I mostly use "complex" single paths, but I always found the compromise acceptable considering that the alternative would have been an SMP OS. That would only make sense, I think, if these devices ran a full-fledged OS like (embedded) Windows, Android (not being real-time, it would be a challenge), or similar -- a significant change in how Line6 operates at the moment. I know a few keyboards that work that way. All in all, making things "right" at this stage would, I speculate, introduce much more head-scratching than leaving the static DSP-to-path association alone. I believe things will change eventually (not for the Helix), but for the time being it seems like a perfectly workable arrangement.
  5. The question is very interesting beyond the "mere" amp modeling scenario, and shades into an epistemological one. I believe in older days "amp sim" had a more holistic approach: I don't care WHY something sounds the way it sounds, I just want to recreate its sound. Hence, you'd record the amp as a whole and make sure the signal transformation from input to recorded output was well approximated (fitted) by your model. I like the holistic approach, maybe because I come from artificial intelligence, where we gave up trying to make something "intelligent" a long time ago, trading it for something that "behaves intelligently". The holistic approach has a few disadvantages -- namely, that parameters are hard to implement: it's easy to create a model of an amp at a specific setting, recorded by a specific mic in a specific position, but difficult to create a generic model that varies along those parameters accurately. On the other hand, this "holistic" approach is based on minimal (if not zero) knowledge of what we're modeling: we need to know absolutely nothing about the amp. This is still very much in use -- think of IRs, that's exactly how they work (see the IR sketch after this list): I cannot model a "space" or a cabinet, since the physical variables are impossible or very hard to master and take into account, so I just measure how it reacts to an impulse. Or think of the amp profiling the Kemper does: that's the main idea behind it. At any rate, it seems that amp modeling has moved more towards reverse engineering and modeling components -- alas, from my perspective, losing the snobbish ivory-tower approach of "I don't care how that works, that's for electricians". I believe the first time I heard Line6 mention something along those lines was for the HD line. It amazes me that this approach actually pays off -- given that an amp is a very complex system, trying to reproduce it as a whole, even approximately, would seem to have a better chance of capturing its essence than decomposing it into bits whose sum might diverge significantly from the actual thing. So, not only am I wildly off topic, I am also wrong.
  6. If you can leave both inputs connected, the above answer nails it. Otherwise, a small 4ch mixer will do!
  7. I have used it at gigs five or six times, and never had an issue; add about twice as many rehearsal sessions. I would say it's pretty stable -- though there have been reports of things going wrong, they seem to be fairly consistent with specific setups -- so if it doesn't give you problems at rehearsal, I'd feel pretty confident with it live.
  8. Great tutorial. I'm curious about the IR, which is something I haven't started to dig into yet. I presume (though I might be wrong) that the stock cab models are based on IRs themselves. Do 3rd-party IRs like the one you use give much better results, or is it just that they model your preferred cab, which isn't available among the stock models?
  9. Ah yeah, indeed. Dual cab was what I was looking for (I didn't have the Helix handy when the doubt struck, and THERE IS NO SOFTWARE EDITOR...). I "converted" my patches by splitting amp+cab into an amp block first and a dual cab much later, after the stereo fx, with each cab configured the same. I can't say the difference is very noticeable, but I had a couple of spare blocks in the bottom path, the dual cab was accommodated easily, and I now have more options!
  10. Very good point! Is there any way to apply a cabinet to a stereo signal without mixing down to mono? Stereo cabs will also suck up quite a bit of DSP power (see the stereo-cab sketch after this list).
  11. I'm almost embarrassed to ask this after being a Line6 user since the original POD. The question is simple: say you have a chorus (or even a spring reverb) fx. In a standard amp setup that goes after the preamp, but before the cabinet/mic. I've always put those after an amp+cab block, in any POD, but it just dawned on me that a more "canonical" approach would be amp (no cab) -> fx -> cab instead of amp+cab -> fx (see the routing sketch after this list). I will experiment with this as soon as I can.
  12. Absolutely. There should be a meter showing the signal level in and out of each block. I mean, when you select a block to change its parameters, an I/O signal meter would be extremely useful (see the meter sketch after this list).
  13. I added one, requesting that the global EQ page display the EQ curve and a frequency analysis: http://line6.ideascale.com/a/dtd/DIsplay-EQ-curve-and-frequency-analysis-in-global-EQ-page/799253-23508 (see the EQ-curve sketch after this list). EDIT: I noticed that there was another submission asking, among other things, for this as well -- it's in the list as "Expanded Display for EQ and Compression Blocks".
  14. Absolutely, though in fairness I was updating the unit, so in this case it wouldn't have made a difference... it was on a stool due to the USB cable not being long enough to reach the computer.
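
The bypass sketch, for post 1. A minimal sketch of the idea that a patch's whole pipeline is loaded at patch-change time and a footswitch only flips a flag on a block that is already resident. All names and the toy "processing" here are invented for illustration, not taken from any real firmware.

```python
# Hypothetical sketch only: a patch loads its full pipeline once; engaging or
# bypassing a block just flips a flag, it never loads or unloads code.
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    enabled: bool = True

    def process(self, sample: float) -> float:
        # Toy "processing": a bypassed block simply passes audio through.
        return sample * 0.9 if self.enabled else sample

@dataclass
class Patch:
    blocks: list = field(default_factory=list)

    def process(self, sample: float) -> float:
        for block in self.blocks:          # every block stays loaded, engaged or not
            sample = block.process(sample)
        return sample

patch = Patch([Block("drive"), Block("amp"), Block("delay", enabled=False)])
print(patch.process(1.0))                  # delay bypassed but still part of the pipeline
patch.blocks[2].enabled = True             # engaging it is a flag flip, not a code reload
print(patch.process(1.0))
```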
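The budget sketch, for post 3. The per-block cost is a made-up illustration (only the roughly-40%-per-amp figure comes from the discussion); it just shows why two amp blocks in one patch reserve twice the DSP that a hypothetical channel-switching amp block would.

```python
# Illustrative arithmetic only; the cost is an assumption, not a real Helix figure.
AMP_COST = 0.40                      # assume one amp model reserves ~40% of a DSP

# Today: switching "channels" inside one patch means placing two amp blocks,
# and both reserve DSP even while bypassed, because either could be engaged.
two_amps_in_patch = 2 * AMP_COST
print(f"two amps in one patch reserve: {two_amps_in_patch:.0%}")                 # 80%

# A hypothetical multichannel amp block would guarantee only one channel runs
# at a time, so it would only ever need to reserve one amp's worth of DSP.
channel_switching_block = 1 * AMP_COST
print(f"a channel-switching amp block reserves: {channel_switching_block:.0%}")  # 40%
```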
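The allocation sketch, for post 4. A toy first-fit-decreasing packer that assigns blocks to two fixed-capacity DSPs, which is roughly the job a "virtualized" resource pool would have to do automatically. Block names, costs, and the capacity are invented, and a real scheduler would also have to respect routing order.

```python
# Toy first-fit-decreasing packing of blocks onto two DSPs; all costs are invented.
def assign(blocks, dsp_capacity=1.0, n_dsps=2):
    loads = [0.0] * n_dsps
    placement = {}
    for name, cost in sorted(blocks.items(), key=lambda kv: -kv[1]):
        for dsp in range(n_dsps):
            if loads[dsp] + cost <= dsp_capacity:
                loads[dsp] += cost
                placement[name] = dsp
                break
        else:
            raise RuntimeError(f"{name} does not fit on any DSP")
    return placement, loads

blocks = {"amp": 0.40, "cab": 0.15, "reverb": 0.25, "delay": 0.20, "chorus": 0.10, "eq": 0.05}
placement, loads = assign(blocks)
print(placement)                          # which DSP each block landed on
print([f"{load:.0%}" for load in loads])  # how full each DSP ends up
```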
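The IR sketch, for post 5. It illustrates the "holistic" idea of measuring a black-box system's impulse response once and then reusing it by convolution, with no knowledge of the system's internals. The "cab" here is a stand-in FIR filter, not a model of anything real; numpy is assumed.

```python
import numpy as np

def black_box_cab(signal):
    # Stand-in for the unknown linear system: a short fixed FIR filter.
    return np.convolve(signal, [0.6, 0.3, 0.1])

impulse = np.zeros(8)
impulse[0] = 1.0
ir = black_box_cab(impulse)             # the measured impulse response

guitar = np.random.randn(1000)          # any dry signal
via_ir = np.convolve(guitar, ir)        # the "model": convolve with the captured IR
direct = black_box_cab(guitar)          # what the real system would have done

print(np.allclose(via_ir[:len(direct)], direct))   # True, since the system is linear
```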
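The stereo-cab sketch, for post 10. Keeping a stereo path through the cab simply means running the cab (here, a made-up IR) once per channel, which is why it roughly doubles that block's DSP cost.

```python
import numpy as np

cab_ir = np.array([0.5, 0.3, 0.15, 0.05])   # made-up cab IR, purely illustrative
left = np.random.randn(512)
right = np.random.randn(512)

wet_left = np.convolve(left, cab_ir)        # one convolution per channel,
wet_right = np.convolve(right, cab_ir)      # so roughly twice the work of a mono cab
print(wet_left.shape, wet_right.shape)
```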
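The routing sketch, for post 11. The two chains are just different orderings of the same three stages. All three "blocks" are trivial stand-ins (a tanh clipper, a decaying FIR for the cab, and a tremolo as the modulation fx), so the printed difference only shows that the two orderings are not identical, not how either would sound.

```python
import numpy as np

def amp(x):  return np.tanh(3.0 * x)                               # stand-in preamp
def cab(x):  return np.convolve(x, 0.8 ** np.arange(32))[:len(x)]  # stand-in cab IR
def trem(x):                                                       # stand-in modulation fx
    return x * (1.0 + 0.5 * np.sin(2 * np.pi * np.arange(len(x)) / 128.0))

x = np.sin(np.linspace(0, 20 * np.pi, 1024))                       # dry test signal

canonical = cab(trem(amp(x)))      # amp (no cab) -> fx -> cab
pod_style = trem(cab(amp(x)))      # amp+cab -> fx

print(np.max(np.abs(canonical - pod_style)))   # nonzero: the two orderings differ
```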
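The meter sketch, for post 12. A wrapper that reports peak and RMS level in dB at a block's input and output, which is roughly the per-block I/O meter the post asks for. Everything here is invented for illustration.

```python
import numpy as np

def levels_db(x):
    # Peak and RMS of a buffer, in dB relative to full scale (1.0).
    rms = np.sqrt(np.mean(np.square(x)))
    peak = np.max(np.abs(x))
    to_db = lambda v: 20.0 * np.log10(max(float(v), 1e-12))
    return to_db(rms), to_db(peak)

def metered(block, buffer):
    out = block(buffer)
    in_rms, in_peak = levels_db(buffer)
    out_rms, out_peak = levels_db(out)
    print(f"in  {in_rms:6.1f} dB RMS / {in_peak:6.1f} dB peak")
    print(f"out {out_rms:6.1f} dB RMS / {out_peak:6.1f} dB peak")
    return out

drive = lambda x: np.tanh(4.0 * x)                 # stand-in block
metered(drive, 0.25 * np.sin(np.linspace(0, 50, 4096)))
```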
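The EQ-curve sketch, for post 13. The curve such a page could display is just the magnitude response of the EQ's filters. Here a plain second-order Butterworth low cut stands in for one Global EQ band; the 100 Hz cutoff, the sample rate, and scipy itself (a reasonably recent version, for the fs= keyword) are all assumptions, not the Helix's actual filters.

```python
import numpy as np
from scipy import signal

fs = 48000                                                   # assumed sample rate
b, a = signal.butter(2, 100, btype="highpass", fs=fs)        # stand-in ~100 Hz low cut
freqs, h = signal.freqz(b, a, worN=512, fs=fs)               # the curve to display

for f in (50, 100, 200, 1000):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:5d} Hz: {20 * np.log10(np.abs(h[idx])):6.1f} dB")
```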