Example packet size: 32,767 (0111 1111 1111 1111)
MSB = 32,767 / 256 = 127 (0111 1111, the high byte, i.e. the left half of the 16-bit value; integer division here is the same as 32,767 >> 8).
LSB = 32,767 & 0xff = 255 (1111 1111, the low byte, i.e. the right half of the 16-bit value). Note that this gets cast back to a signed byte, so the 255 comes out as -1.
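
For concreteness, here's that split in Java (my assumption about the language, given the signed-byte cast):

```java
int size = 32767;                // 0111 1111 1111 1111

byte msb = (byte) (size / 256);  // 127, same as size >> 8  (0111 1111)
byte lsb = (byte) (size & 0xff); // masked to 255, prints as -1 once cast to a signed byte
```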
Anyway, I suspect the server checks whether the first byte is >= 160 when reading the packet back, so to make sure the length is interpreted as a 2-byte integer server-side, they add 160 to the first byte. Since the MSB is never negative, MSB + 160 is always >= 160.
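
If that guess is right, the round-trip would look something like this. This is just a sketch under that assumption, the method names are mine, and note the scheme only works while MSB + 160 still fits in an unsigned byte (i.e. size < 24,576):

```java
// Sketch of the suspected length encoding: offset the high byte by 160 so the
// reader can distinguish a 2-byte length from a 1-byte one.
static byte[] encodeLength(int size) {
    return new byte[] {
        (byte) ((size >> 8) + 160), // MSB + 160 marker
        (byte) (size & 0xff)        // LSB
    };
}

static int decodeLength(byte first, byte second) {
    int b0 = first & 0xff;          // undo the signed-byte cast
    if (b0 >= 160) {                // marker present: 2-byte length
        return (b0 - 160) * 256 + (second & 0xff);
    }
    return b0;                      // otherwise a plain 1-byte length
}
```

For example, decodeLength((byte) 163, (byte) 232) gives (163 - 160) * 256 + 232 = 1000.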
Does that make sense?