
C: Get Integer's High and Low Bytes

Posted: Sat Dec 15, 2007 10:24 am
by matias_beretta
Hello, thanks for reading my topic.

Is there any operator or anything to get the high and low bytes of an integer (int)???

Posted: Sat Dec 15, 2007 10:33 am
by Tyler
"&"

Code: Select all

unsigned int IamInt;
unsigned int IamLow;
unsigned int IamHigh;

IamInt = 0xAAAA5555;

IamLow  = IamInt & 0x0000FFFF;          /* low 16 bits: 0x5555 */
IamHigh = (IamInt >> 16) & 0x0000FFFF;  /* high 16 bits: 0xAAAA */

Not checked... 0x might not even be valid in your compiler.
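
Since the question asks about bytes rather than 16-bit halves, the same masking idea plus a shift isolates single bytes. A minimal, untested sketch (it assumes a 32-bit unsigned int; the names are just illustrative):

Code: Select all

#include <stdio.h>

int main(void)
{
    unsigned int value = 0xAAAA5555u;

    /* Mask off the lowest byte; shift down first for the higher ones. */
    unsigned int low_byte  = value & 0xFFu;          /* 0x55 */
    unsigned int high_byte = (value >> 24) & 0xFFu;  /* 0xAA, assuming a 32-bit int */

    printf("low byte:  0x%02X\n", low_byte);
    printf("high byte: 0x%02X\n", high_byte);
    return 0;
}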

thanks

Posted: Sat Dec 15, 2007 10:34 am
by matias_beretta
thanks ;)

Posted: Sat Dec 15, 2007 10:39 am
by Combuster
An int is 4 bytes... or 8... but rarely two.

Posted: Sat Dec 15, 2007 11:10 am
by XCHG
This one works independently of the size of the value that is passed to it:

Code: Select all

#define HIBYTE(Value) Value >> ((sizeof(Value) << 0x03) - (sizeof(char) << 0x03))
So you could pass 1-byte, 2-byte, 4-byte, 8-byte ... values to it and it will still find the high byte. It could have been written better, but I didn't have time.

question

Posted: Sat Dec 15, 2007 11:25 am
by matias_beretta
Integer = Word

Word = 2 Bytes

Is it ok?

Posted: Sat Dec 15, 2007 4:14 pm
by XCHG
On x86 IA-32, Integer is normally known as a 32-bit (signed) value. This table might help you better:

Nibble = 4 Bits
Byte = 2 Nibbles = 8 Bits.
Word = 2 Bytes = 4 nibbles = 16 Bits.
Integer (also known as DWORD on IA-32) = 2 Words = 4 Bytes = 8 Nibbles = 32 bits.
Quad Word (also known as INT64 or QWORD) = 2 DWORDs = 4 Words = 8 Bytes = 16 Nibbles = 64 bits.

Does that help?
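
If you want the compiler to check those sizes for you, one possible sketch is a set of typedefs mirroring the table plus static assertions (which C type has which width is an assumption about a typical 32-bit or LP64 compiler, and C11's _Static_assert is assumed):

Code: Select all

/* Typedefs mirroring the IA-32 names above; the underlying C types are
   an assumption about the compiler, which the assertions verify. */
typedef unsigned char      BYTE;   /* 8 bits  */
typedef unsigned short     WORD;   /* 16 bits */
typedef unsigned int       DWORD;  /* 32 bits */
typedef unsigned long long QWORD;  /* 64 bits */

/* C11 _Static_assert: compilation fails if an assumption does not hold. */
_Static_assert(sizeof(BYTE)  == 1, "BYTE should be 1 byte");
_Static_assert(sizeof(WORD)  == 2, "WORD should be 2 bytes");
_Static_assert(sizeof(DWORD) == 4, "DWORD should be 4 bytes");
_Static_assert(sizeof(QWORD) == 8, "QWORD should be 8 bytes");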

Posted: Sat Dec 15, 2007 5:55 pm
by Masterkiller
XCHG wrote:On x86 IA-32, Integer is normally known as a 32-bit (signed) value. This table might help you better:

Nibble = 4 Bits
Byte = 2 Nibbles = 8 Bits.
Word = 2 Bytes = 4 nibbles = 16 Bits.
Integer (also known as DWORD on IA-32) = 2 Words = 4 Bytes = 8 Nibbles = 32 bits.
Quad Word (also known as INT64 or QWORD) = 2 DWORDs = 4 Words = 8 Bytes = 16 Nibbles = 64 bits.

Does that help?
Integer is a type whose size depends on the OS (or mode).
On a 16-bit system, int is 16-bit.
On a 32-bit system, int is 32-bit.
(Not sure about 64-bit systems.)
So there are two solutions in C/C++:
1. DO NOT USE int; use long and short instead.
2. Use INT_PTR (MSVC).
Note that WORD == unsigned short and DWORD == unsigned long.

Posted: Sun Dec 16, 2007 5:13 am
by XCHG
Masterkiller wrote: Integer is a type whose size depends on the OS (or mode).
Allow me to rephrase that :) Integer and other data types depend on the compiler and the architecture rather than the OS. The OS, I believe, must adhere to the architecture's naming convention for data types. For example, in the IA-32 Intel manuals a Double Word or DWORD is always 32 bits. An operating system would only confuse its users if it defined DWORD as, say, a 64-bit or 16-bit integral value. :roll:

Posted: Mon Dec 17, 2007 9:28 am
by JamesM
You should find that most OSs that follow POSIX-like naming conventions almost never use "int" or "long". They use typedefs like "ptr_t", "pid_t", etc., and expect you to retrieve their values from the provided header files.

The answer is to use uint32_t et al.; then you will never be confused!
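
For example, a sketch of the high/low extraction from the start of the thread, using <stdint.h> fixed-width types so the widths are never in question (the variable names are only illustrative):

Code: Select all

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0xAAAA5555u;

    /* With fixed-width types there is no guessing how wide "int" is. */
    uint16_t low_word  = (uint16_t)(value & 0xFFFFu);
    uint16_t high_word = (uint16_t)(value >> 16);
    uint8_t  low_byte  = (uint8_t)(value & 0xFFu);
    uint8_t  high_byte = (uint8_t)(value >> 24);

    printf("words: 0x%04" PRIX16 " 0x%04" PRIX16 ", bytes: 0x%02" PRIX8 " 0x%02" PRIX8 "\n",
           high_word, low_word, high_byte, low_byte);
    return 0;
}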

Posted: Tue Dec 18, 2007 2:30 am
by os64dev
XCHG wrote:This one works independently of the size of the value that is passed to it:

Code: Select all

#define HIBYTE(Value) Value >> ((sizeof(Value) << 0x03) - (sizeof(char) << 0x03))
So you could pass 1-byte, 2-byte, 4-byte, 8-byte ... values to it and it will still find the high byte. It could have been written better, but I didn't have time.
OK, to shoot at the obvious, let's assume I pass 0xDEADFEED to this macro.

Code: Select all

int test = 0xDEADFEED;
if(HIBYTE(test) != 0xDE) {
    printf("this macro is faulty\n");
}
Guess what will be seen on the screen. Indeed, 'this macro is faulty' will be printed, because the macro returns 0xFFFFFFDE (the signed right shift sign-extends the value).

Posted: Tue Dec 18, 2007 2:43 am
by JamesM
Well, os64dev, that's hardly a valid retort: a simple "& 0xFF" at the end somewhere will fix that.

I would not expect people to post picture-perfect code on this forum; the point is that the gist is there and can be followed. Indeed, quite possibly having deliberately faulty code is better because it encourages debugging.
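
For instance, a sketch of the macro with the argument parenthesised and the "& 0xFF" mask added (this still assumes the common arithmetic right shift for negative signed values; converting to an unsigned type first would avoid even that assumption):

Code: Select all

#include <stdio.h>

/* Parenthesise the argument and mask the result so sign extension
   cannot leak into the upper bits of the returned value. */
#define HIBYTE(Value) \
    (((Value) >> ((sizeof(Value) - sizeof(char)) * 8)) & 0xFF)

int main(void)
{
    int test = 0xDEADFEED;

    if (HIBYTE(test) != 0xDE) {
        printf("this macro is faulty\n");
    } else {
        printf("HIBYTE(test) = 0x%X\n", (unsigned)HIBYTE(test));
    }
    return 0;
}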

Posted: Tue Dec 18, 2007 5:49 am
by AndrewAPrice
JamesM wrote:You should find that most OSs that follow POSIX-like naming conventions almost never use "int" or "long". They use typedefs like "ptr_t", "pid_t", etc., and expect you to retrieve their values from the provided header files.

The answer is to use uint32_t et al.; then you will never be confused!

Code: Select all

#include <iostream>

const int BitsPerByte = 8; // change this for your architecture
int main()
{
   std::cout << "A char is a " << sizeof(char) * BitsPerByte << " bit value.";
   std::cout << "A short int is a " << sizeof(short int) * BitsPerByte << " bit value.";
   std::cout << "A long int is a " << sizeof(long int)* BitsPerByte << " bit value.";
   std::cout << "A long long int is a " << sizeof(long long int) * BitsPerByte << " bit value.";
   std::cout << "This computer uses " << sizeof(void *) * BitsPerByte << " bit addressing.";
   return 0;
}

Posted: Tue Dec 18, 2007 5:55 am
by JamesM
MessiahAndrw wrote:
JamesM wrote:You should find that most OSs that follow POSIX-like naming conventions almost never use "int" or "long". They use typedefs like "ptr_t", "pid_t", etc., and expect you to retrieve their values from the provided header files.

The answer is to use uint32_t et al.; then you will never be confused!

Code: Select all

#include <iostream>

const int BitsPerByte = 8; // change this for your architecture
int main()
{
   std::cout << "A char is a " << sizeof(char) * BitsPerByte << " bit value.";
   std::cout << "A short int is a " << sizeof(short int) * BitsPerByte << " bit value.";
   std::cout << "A long int is a " << sizeof(long int)* BitsPerByte << " bit value.";
   std::cout << "A long long int is a " << sizeof(long long int) * BitsPerByte << " bit value.";
   std::cout << "This computer uses " << sizeof(void *) * BitsPerByte << " bit addressing.";
   return 0;
}
What exactly was that meant to signify?

Posted: Tue Dec 18, 2007 6:06 am
by Solar
I don't know, really, but sizeof( char ) is always 1 by definition (char <=> byte in C), and "BitsPerByte" already exists: it is called CHAR_BIT and is found in <limits.h>, even if your C environment is not C99-compliant. (If it is C99-compliant, JamesM is right on the mark: you should use uint32_t et al.)
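
For example, a minimal sketch using CHAR_BIT instead of a hand-rolled constant (C99's %zu is assumed for printing size_t):

Code: Select all

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_BIT is the number of bits in a byte, as defined by <limits.h>. */
    printf("char:      %zu bits\n", sizeof(char) * CHAR_BIT);
    printf("short:     %zu bits\n", sizeof(short) * CHAR_BIT);
    printf("int:       %zu bits\n", sizeof(int) * CHAR_BIT);
    printf("long:      %zu bits\n", sizeof(long) * CHAR_BIT);
    printf("long long: %zu bits\n", sizeof(long long) * CHAR_BIT);
    printf("pointer:   %zu bits\n", sizeof(void *) * CHAR_BIT);
    return 0;
}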