C++ - Binary question (noob)

// migad.cpp : Defines the entry point for the console application.
//

#include "stdafx.h"

#include <iostream>
#include <conio.h>
using namespace std;

int stringsize(int max);
void showstring(int index);
void dispmultibin();

struct byte
{
    unsigned a :1;
    unsigned b :1;
    unsigned c :1;
    unsigned d :1;
    unsigned e :1;
    unsigned f :1;
    unsigned g :1;
    unsigned h :1;

    void ShowBin()
    {
        if (h) cout << "1";
        else   cout << "0";
        if (g) cout << "1";
        else   cout << "0";
        if (f) cout << "1";
        else   cout << "0";
        if (e) cout << "1";
        else   cout << "0";
        if (d) cout << "1";
        else   cout << "0";
        if (c) cout << "1";
        else   cout << "0";
        if (b) cout << "1";
        else   cout << "0";
        if (a) cout << "1";
        else   cout << "0";
        cout << " ";
    }
};

union number
{
    byte digital[3];
    long int k;
} gigi[2];

int main(int argc, char* argv[])
{
    long int size, j;
    size = sizeof(gigi[0].k);
    cout << "Long Int Size is " << size << " Bytes" << "\n";
    cout << "Struct Size is " << sizeof(gigi[0].digital[0]) << " Bytes" << "\n";
    cout << "\n";
    cout << "Give a:" << "\n";
    cin >> gigi[0].k;
    cout << "Give b:" << "\n";
    cin >> gigi[1].k;
    cout << "Give i:" << "\n";
    cin >> gigi[2].k;
    cout << "This is the number:" << gigi[0].k << "+" << gigi[1].k << "\n" << gigi[2].k << "\n";
    dispmultibin();
    cin >> j;
    return 0;
}

// displays binary code
void dispmultibin()
{
    register int i, j;

    cout << "The Binary code of the number you typed is: " << "\n";

    for (j = 0; j <= 2; j++)
    {
        for (i = 0; i <= 3; i++)
        {
            gigi[j].digital[i].ShowBin();
        }
    }
    cout << "\n";
}

Here is a rather simple program I've been writing in Visual Studio 6 (actually a practice exercise, as I am a noob in C++).
What I am trying to do here is to represent a complex number in binary.
To do this I am using a struct of bits (byte) and making an array of bytes (digital) a member of the number union. I use 4 bytes in the digital array because a long int consists of 4 bytes on my PC (I've checked with the sizeof() function). The rest of the code is quite obvious.
However, when I run the program and enter the values a=7, b=7, i=7, the output shows a = b = 00000111 00000000 00000000 00000111, but i = 00000111 00000000 10100000 00010000.
Why is the binary code of i different from the binary code of a and b, although i = 7?

Some more notes:
gigi[0].k = a
gigi[1].k = b
gigi[2].k = i
The variables a, b, i are not used inside the program!
This program is incomplete.
If you want to compile this program you may need to modify it depending on how many bytes a long int consists of on your PC.

ick ick ick ick ick

why not use masking?
[that’s one ugly struct]

also… are you aware of byte order?

writing my own int-to-binary converter would look something like this

string blah(int val) {
  string result = "";
  for (int i = 0; i < sizeof(int)*8; ++i) {   // one pass per bit
     if (val & (1 << i))                      // test bit i (note: emits LSB first)
        result += "1";
     else
        result += "0";
   }
   return result;
}

though even that could be better [assigning into a fixed-length string, for example]…
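something like this, say (just an untested sketch, same &lt;string&gt; header and using namespace std as above; I've also flipped it to put the most-significant bit first, unlike the loop above):

string blah2(unsigned val) {
  string result(sizeof(val) * 8, '0');        // fixed length, pre-filled with '0'
  for (size_t i = 0; i < result.size(); ++i)
     if ((val >> i) & 1)                      // test bit i
        result[result.size() - 1 - i] = '1';  // MSB ends up leftmost
  return result;
}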

OK, what I see first-off:
The “gigi” array is 2 elements long, but you try to pack 3 numbers into it.
The “digital” array in the union is 3 bytes long… most likely the “long int” type on your machine is 4 bytes long.
Using that union is quite ugly… you're better off using a loop and shifting/masking, like z3r0 mentioned.
Also, as z3r0 alluded to, there's a byte-order question… the bytes would be printed out in reverse order on a PC as opposed to a (PPC) Mac, for instance. There's even a bit-order question: the order in which a compiler packs the bits of a bit-field struct is implementation-defined, so you can't rely on that either.
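If you want to see the byte order on your own machine, a quick test (just a sketch) is to look at the raw bytes of a long directly. Assuming a 4-byte long, a little-endian PC prints 7 0 0 0, while a big-endian machine prints 0 0 0 7:

#include <iostream>
using namespace std;

int main() {
    long n = 7;
    const unsigned char* p = (const unsigned char*)&n;   // view the long byte by byte
    for (size_t i = 0; i < sizeof(n); ++i)
        cout << (int)p[i] << " ";                        // print each byte's value
    cout << "\n";
    return 0;
}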

Yes, it is gigi[0], gigi[1], gigi[2] and digital[0], digital[1], digital[2], digital[3]

That makes an array with 3 elements. The elements are digital[0], digital[1], and digital[2]. The “3” in the definition is the length of the array, not the index of the last element.
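So for the indices you're using to be valid, the declarations would have to look something like this (a sketch reusing your byte struct, and assuming the 4-byte long on your machine):

union number
{
    byte digital[4];   // 4 elements: digital[0] … digital[3], one per byte of k
    long int k;
} gigi[3];             // 3 elements: gigi[0], gigi[1], gigi[2]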