AR wrote:
On the x86, char and short are scaled up to int anyway
That sounds extremely scary to me. This is not an x86 thing; it is a compiler (C language) thing.
On most x86 compilers, 'char' (just like 'int') is a _signed_ type. When you write
Code:
char byte=0xff;
the compiler should warn you that the constant 0xff (255) does not fit in a signed char.
Natively, the CPU can only compare words of the same width (chars with chars, shorts with shorts, ints with ints, longs with longs, and so on). Whenever you compare or assign values of different widths, the compiler _casts_ (converts) the narrower value to the wider type; for a comparison, both operands are first promoted to at least int (Solar, hit me back if I'm wrong here, will you?).
That means when you write
Code:
int negative=-1;
if (byte == negative)
what you actually tell the compiler to do is
Code:
int negative=-1;
if ((int)byte == negative) ...
Since the bit pattern "FF" means "-1" for a signed char, it will be sign-extended to
an int of value "-1" (i.e. the bit pattern "FFFFFFFF") before the comparison occurs.
So the code will eventually be compiled to something like
Code:
negative dd 0xffffffff        ; -1 as a 32-bit int
bite     db 0xff              ; -1 as a char ("bite" because "byte" is a reserved word)

mov   eax, [negative]
movsx ebx, byte [bite]        ; sign-extend: FF -> FFFFFFFF
cmp   eax, ebx
Hope that makes it clearer.