If I understood correctly, Latency has to be considered along with the MinScan parameter.
MegaDrum scans all inputs over and over; at this stage neither the Latency nor the MinScan parameter is involved yet. During this scanning MD may detect signals on some inputs, but as long as a signal stays below the defined threshold, nothing happens. When MD detects a signal above the Threshold/Dynamic Threshold, it keeps sampling it for the MinScan period of time before marking the signal as registered and ready to be sent over MIDI. Which means (at least I think so) that MD doesn't send the MIDI message right away but keeps it queued as ready to send. A second scanner runs once every Latency period, collects all registered MIDI messages and sends them to the PC in one shot. So the Latency scanner reduces the cost of communication by sending several messages at once. It's very important to know that the two scanners cannot work at the same time: while one runs, the other doesn't. Therefore lowering the Latency period may degrade level-detection precision, because in most cases there will be only one message to send, so the constant cost of communication is not reduced (and is rather multiplied). In other words, while MD spends time sending one message it cannot scan the inputs, and signals arriving in that window are lost.
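To make my guess concrete, here is a minimal simulation sketch of the scheme described above. This is NOT MegaDrum firmware; all names, thresholds, and timings are made up for illustration, and it handles only one input for simplicity (the real device scans many):

```python
# Hypothetical simulation of the two-scanner scheme guessed at above.
# THRESHOLD, MIN_SCAN and LATENCY are invented illustration values,
# not real MegaDrum settings.
THRESHOLD = 30   # level that triggers a capture
MIN_SCAN = 3     # samples spent tracking the peak before registering the hit
LATENCY = 5      # samples between batch MIDI sends

def simulate(samples):
    """Return a list of (send_time, batch) pairs, where each batch is the
    list of registered peak levels flushed in one communication shot."""
    ready = []        # hits registered and waiting to be sent
    sent = []         # (time, batch) pairs actually sent to the "PC"
    capturing = False
    peak = 0
    remaining = 0
    for t, level in enumerate(samples):
        if capturing:
            # MinScan phase: keep sampling to find the true peak level
            peak = max(peak, level)
            remaining -= 1
            if remaining == 0:
                ready.append(peak)   # hit registered, ready for MIDI
                capturing = False
        elif level > THRESHOLD:
            # signal crossed the threshold: start the MinScan capture
            capturing = True
            peak = level
            remaining = MIN_SCAN - 1
        # the "Latency scanner": once per Latency period, flush all
        # registered messages in one shot
        if t % LATENCY == LATENCY - 1 and ready:
            sent.append((t, ready))
            ready = []
    return sent

# One hit peaking at 50, a second peaking at 60; each is registered after
# MIN_SCAN samples and flushed at the next Latency boundary.
print(simulate([0, 40, 50, 45, 0, 0, 60, 0, 0, 0]))
# → [(4, [50]), (9, [60])]
```

If two hits registered within the same Latency window, they would travel in one batch, which is exactly the communication saving the Latency scanner seems designed for.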
That's only my opinion. If it's true, I hope I didn't betray the secret. Nevertheless, if it is true, I still wonder whether it's really so important to wait the Latency time to send several messages at once (usually 2, rarely 3, notes are played at the same time, so the cost doesn't seem very high). So I'm not sure I'm right.
If this is wrong, please say so, Dmitri (without going into details if you don't want to), because I don't want to mislead anybody.