c question

Programming, for all ages and all languages.
pulsar
Member
Posts: 49
Joined: Wed Nov 22, 2006 1:01 am
Location: chennai

c question

Post by pulsar »

What is the output? Explain it, please.
#include <stdio.h>

int main()
{
    int a = 2;
    float b = 4.0;

    printf("%d %f", a / b, a / b);
    printf("%f %d", a / b, a / b);
}
everyone here are best programmers ( not yet not yet)
JackScott
Member
Posts: 1031
Joined: Thu Dec 21, 2006 3:03 am
Location: Hobart, Australia

Post by JackScott »

Why don't you just run it and see the output?

It should all be on one line, with two integers and two floats; the floats will be in the middle. The first number, being printed as an integer, will either round up or down (I don't have a compiler on me, but I'm guessing down), because 0.5 can't be output as an integer.

Does that help?
Candy
Member
Posts: 3882
Joined: Tue Oct 17, 2006 11:33 pm
Location: Eindhoven

Re: c question

Post by Candy »

pulsar wrote: What is the output? Explain it, please.

#include <stdio.h>

int main()
{
    int a = 2;
    float b = 4.0;

    printf("%d %f", a / b, a / b);
    printf("%f %d", a / b, a / b);
}
a/b is an int divided by a float, so the result is a float. That float is converted to a double because printf is variadic and doesn't specify exact argument types (the default argument promotions), and this happens to both arguments, so each call actually receives two doubles. Mismatching the format specifiers like this is strictly undefined behaviour, but on a typical 32-bit calling convention where the arguments are passed on the stack, the first call's %d picks up the first 32 bits of the first double and displays them (which is bull), and %f then picks up the next 64 bits, which straddle the two doubles, also bull. The second call prints %f first, consuming the first 64 bits, which is the actual result; its %d then prints the same bull as the %d on the line before.

Try printf("%d %d %f", a/b, a/b); and ignore the compiler warning (if there is one). On such a platform that displays two 32-bit chunks of bull (the two halves of the first double) followed by one real number.
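
If what you actually want is the value 0.5 (or an integer), a minimal sketch with matching format specifiers looks like the following; the exact garbage the original prints is compiler- and calling-convention-dependent, but this version has well-defined output everywhere.

#include <stdio.h>

int main(void)
{
    int a = 2;
    float b = 4.0f;

    /* a/b is a float, promoted to double when passed to printf */
    printf("%f\n", a / b);        /* prints 0.500000 */

    /* cast explicitly if an integer is wanted */
    printf("%d\n", (int)(a / b)); /* prints 0 */

    return 0;
}

Note that the cast truncates toward zero, which is why the integer version prints 0 rather than rounding 0.5 up to 1.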