C: Get Integer's High and Low Bytes

Programming, for all ages and all languages.
matias_beretta
Member
Posts: 101
Joined: Mon Feb 26, 2007 3:39 pm

C: Get Integer's High and Low Bytes

Post by matias_beretta »

Hello, thanks for reading my topic.

Is there any operator or anything to get the high and low bytes of an integer (int)???
Matías Beretta
Tyler
Member
Posts: 514
Joined: Tue Nov 07, 2006 7:37 am
Location: York, England

Post by Tyler »

"&"

Code: Select all

int IamInt;
int IamLow;
int IamHigh;

IamInt = 0xAAAA5555;

IamLow = IamInt & (0x0000FFFF);  /* keeps the low 16 bits */
IamHigh = IamInt & (0xFFFF0000); /* keeps the high 16 bits, still in the upper half;
                                    shift right by 16 to bring them down */
etc...

Not checked... 0x might not even be valid in your compiler.
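If you literally want single bytes rather than the 16-bit halves, shift first and then mask so the result is already in range. A rough sketch (also untested, and it assumes a 32-bit int):

Code: Select all

#include <stdio.h>

int main(void)
{
    unsigned int value = 0xAAAA5555u;   /* unsigned avoids sign-extension surprises */

    unsigned char low_byte  = value & 0xFFu;          /* lowest byte  */
    unsigned char high_byte = (value >> 24) & 0xFFu;  /* highest byte (assumes 32-bit int) */

    printf("low byte = 0x%02X, high byte = 0x%02X\n",
           (unsigned)low_byte, (unsigned)high_byte);
    return 0;
}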
matias_beretta
Member
Posts: 101
Joined: Mon Feb 26, 2007 3:39 pm

thanks

Post by matias_beretta »

thanks ;)
Matías Beretta
Combuster
Member
Posts: 9301
Joined: Wed Oct 18, 2006 3:45 am
Libera.chat IRC: [com]buster
Location: On the balcony, where I can actually keep 1½m distance
Contact:

Post by Combuster »

An int is 4 bytes... or 8... but rarely two.
"Certainly avoid yourself. He is a newbie and might not realize it. You'll hate his code deeply a few years down the road." - Sortie
[ My OS ] [ VDisk/SFS ]
XCHG
Member
Posts: 416
Joined: Sat Nov 25, 2006 3:55 am
Location: Wisconsin
Contact:

Post by XCHG »

This one should work independently of the size of the value that is passed to it:

Code: Select all

#define HIBYTE(Value) Value >> ((sizeof(Value) << 0x03) - (sizeof(char) << 0x03))
So you could pass 1-byte, 2-byte, 4-byte, 8-byte ... values to it and it will still be able to find the high byte. It could have been written better, but I didn't have time.
On the field with sword and shield amidst the din of dying of men's wails. War is waged and the battle will rage until only the righteous prevails.
matias_beretta
Member
Posts: 101
Joined: Mon Feb 26, 2007 3:39 pm

question

Post by matias_beretta »

Integer = Word

Word = 2 Bytes

Is it ok?
Matías Beretta
XCHG
Member
Posts: 416
Joined: Sat Nov 25, 2006 3:55 am
Location: Wisconsin
Contact:

Post by XCHG »

On x86 IA-32, Integer is normally known as a 32-bit (signed) value. This table might help you better:

Nibble = 4 Bits
Byte = 2 Nibbles = 8 Bits.
Word = 2 Bytes = 4 nibbles = 16 Bits.
Integer (also known as DWORD on IA-32) = 2 Words = 4 Bytes = 8 Nibbles = 32 bits.
Quad Word (also known as INT64 or QWORD) = 2 DWORDs = 4 Words = 8 Bytes = 16 Nibbles = 64 bits.

Does that help?
On the field with sword and shield amidst the din of dying of men's wails. War is waged and the battle will rage until only the righteous prevails.
Masterkiller
Member
Posts: 153
Joined: Sat May 05, 2007 6:20 pm

Post by Masterkiller »

XCHG wrote:On x86 IA-32, Integer is normally known as a 32-bit (signed) value. This table might help you better:

Nibble = 4 Bits
Byte = 2 Nibbles = 8 Bits.
Word = 2 Bytes = 4 nibbles = 16 Bits.
Integer (also known as DWORD on IA-32) = 2 Words = 4 Bytes = 8 Nibbles = 32 bits.
Quad Word (also known as INT64 or QWORD) = 2 DWORDs = 4 Words = 8 Bytes = 16 Nibbles = 64 bits.

Does that help?
Integer is a type that depends on the OS (or mode).
On a 16-bit system int is 16-bit.
On a 32-bit system int is 32-bit.
(Not sure about 64-bit systems.)
So there are two solutions in C/C++:
1. DO NOT USE int, use long and short instead
2. Use INT_PTR (MSVC)
You should notice that WORD == unsigned short and DWORD == unsigned long.
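In other words, something along these lines (untested; it assumes a 32-bit target where short is 16 bits and long is 32 bits, and the split() helper is just made up for illustration):

Code: Select all

typedef unsigned short WORD;    /* 16 bits under the assumption above */
typedef unsigned long  DWORD;   /* 32 bits under the assumption above */

/* split a DWORD into its high and low WORDs */
void split(DWORD value, WORD *high, WORD *low)
{
    *low  = (WORD)(value & 0xFFFFu);
    *high = (WORD)(value >> 16);
}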
XCHG
Member
Posts: 416
Joined: Sat Nov 25, 2006 3:55 am
Location: Wisconsin
Contact:

Post by XCHG »

Masterkiller wrote: Integer is a type that depends on the OS (or mode).
Allow me to rephrase that :) Integer and other data types depend on the compiler and the architecture rather than the OS. The OS, I believe, must adhere to the architecture's naming conventions for data types. For example, in the IA-32 Intel manuals, a Double Word or DWORD is always 32 bits. An operating system will only confuse its users if it treats DWORD as, say, a 64-bit or 16-bit integral value. :roll:
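If an OS header defines such a type, it can even enforce the size at compile time with the old negative-array-size trick (just a sketch; the names here are made up):

Code: Select all

typedef unsigned long DWORD;

/* fails to compile if DWORD is not 4 bytes (32 bits, assuming 8-bit bytes) */
typedef char assert_dword_is_32_bits[(sizeof(DWORD) == 4) ? 1 : -1];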
On the field with sword and shield amidst the din of dying of men's wails. War is waged and the battle will rage until only the righteous prevails.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Post by JamesM »

You should find that most OSs that follow POSIX-like naming conventions almost* never use "int" or "long". They use typedefs like "ptr_t", "pid_t" etc., and expect you to retrieve their values from the provided header files.

The answer is to use uint32_t et al.; then you will never be confused!
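For example (untested, and assuming a C99 <stdint.h> is available):

Code: Select all

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0xAAAA5555u;

    uint16_t low  = (uint16_t)(value & 0xFFFFu);  /* low 16 bits  */
    uint16_t high = (uint16_t)(value >> 16);      /* high 16 bits */

    printf("low = 0x%04X, high = 0x%04X\n", (unsigned)low, (unsigned)high);
    return 0;
}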
os64dev
Member
Posts: 553
Joined: Sat Jan 27, 2007 3:21 pm
Location: Best, Netherlands

Post by os64dev »

XCHG wrote:This one should work independently of the size of the value that is passed to it:

Code: Select all

#define HIBYTE(Value) Value >> ((sizeof(Value) << 0x03) - (sizeof(char) << 0x03))
So you could pass 1-byte, 2-byte, 4-byte, 8-byte ... values to it and it will still be able to find the high byte. It could have been written better, but I didn't have time.
OK, to shoot at the obvious, let's assume I pass 0xDEADFEED to this macro.

Code: Select all

int test = 0xDEADFEED;
if(HIBYTE(test) != 0xDE) {
    printf("this macro is faulty\n");
}
Guess what will be seen on the screen. Indeed, 'this macro is faulty' will be printed, as the macro will return 0xFFFFFFDE.
Author of COBOS
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Post by JamesM »

Well, os64dev, that's hardly a valid retort: a simple "& 0xFF" at the end somewhere will fix that.

I would not expect people to post picture-perfect code on this forum; the point is that the gist is there and can be followed. Indeed, quite possibly having deliberately faulty code is better because it encourages debugging.
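Something like this, for example (untested, and still relying on the usual arithmetic right shift for signed values):

Code: Select all

/* shift the top byte down, then mask away anything sign extension dragged in */
#define HIBYTE(Value) (((Value) >> ((sizeof(Value) << 3) - (sizeof(char) << 3))) & 0xFF)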
AndrewAPrice
Member
Posts: 2299
Joined: Mon Jun 05, 2006 11:00 pm
Location: USA (and Australia)

Post by AndrewAPrice »

JamesM wrote:You should find that most OSs that follow POSIX-like naming conventions almost* never use "int" or "long". They use typedefs like "ptr_t", "pid_t" etc., and expect you to retrieve their values from the provided header files.

The answer is to use uint32_t et al.; then you will never be confused!

Code: Select all

#include <iostream>

const int BitsPerByte = 8; // change this for your architecture
int main()
{
   std::cout << "A char is a " << sizeof(char) * BitsPerByte << " bit value.";
   std::cout << "A short int is a " << sizeof(short int) * BitsPerByte << " bit value.";
   std::cout << "A long int is a " << sizeof(long int)* BitsPerByte << " bit value.";
   std::cout << "A long long int is a " << sizeof(long long int) * BitsPerByte << " bit value.";
   std::cout << "This computer uses " << sizeof(void *) * BitsPerByte << " bit addressing.";
   return 0;
}
My OS is Perception.
JamesM
Member
Posts: 2935
Joined: Tue Jul 10, 2007 5:27 am
Location: York, United Kingdom
Contact:

Post by JamesM »

MessiahAndrw wrote:
JamesM wrote:You should find that most OSs that follow POSIX-like naming conventions almost* never use "int" or "long". They use typedefs like "ptr_t", "pid_t" etc., and expect you to retrieve their values from the provided header files.

The answer is to use uint32_t et al.; then you will never be confused!

Code: Select all

#include <iostream>

const int BitsPerByte = 8; // change this for your architecture
int main()
{
   std::cout << "A char is a " << sizeof(char) * BitsPerByte << " bit value.";
   std::cout << "A short int is a " << sizeof(short int) * BitsPerByte << " bit value.";
   std::cout << "A long int is a " << sizeof(long int)* BitsPerByte << " bit value.";
   std::cout << "A long long int is a " << sizeof(long long int) * BitsPerByte << " bit value.";
   std::cout << "This computer uses " << sizeof(void *) * BitsPerByte << " bit addressing.";
   return 0;
}
What exactly was that meant to signify?
Solar
Member
Posts: 7615
Joined: Thu Nov 16, 2006 12:01 pm
Location: Germany
Contact:

Post by Solar »

I don't know, really, but sizeof( char ) is always (and by definition) 1 (as char <=> byte in C), and "BitsPerByte" is called CHAR_BIT and is found in <limits.h> even if your C environment is not C99-compliant. (If your environment is C99-compliant, JamesM is right on the mark: you should use uint32_t et al.)
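E.g. (untested):

Code: Select all

#include <limits.h>   /* CHAR_BIT */
#include <stdio.h>

int main(void)
{
    printf("char is %d bits, int is %u bits\n",
           CHAR_BIT, (unsigned)(sizeof(int) * CHAR_BIT));
    return 0;
}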
Every good solution is obvious once you've found it.