And we already know the resolution of Hubble's cameras: its glorious color camera is 16 megapixels, another is 32 megapixels, and there are others with 1 megapixel and so on.
Well, which of the four cameras would you use to take images of the Moon, and at what resolution? Let's see!
Imagine, for example, that you want to shoot the Copernicus crater. NASA points Hubble at it and takes this composite:
The curious thing about this image is that it is pure greyscale. The three values for the R, G and B components are exactly the same in every pixel. That only happens with greyscale. Shooting the Moon in color will not give you this: either they forced the camera to work in greyscale, or they converted from color to greyscale before handing the image to the taxpayer. They don't like to give Moon shots in color. Wonder why.
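You can check the R == G == B claim yourself. A minimal sketch with NumPy (on the real file you would first decode it, e.g. with Pillow's `Image.open` and `np.asarray`; here a synthetic array keeps the example self-contained):

```python
import numpy as np

def is_pure_greyscale(rgb):
    """True if every pixel has identical R, G and B values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return bool(np.array_equal(r, g) and np.array_equal(g, b))

# Synthetic demo: a grey ramp, and the same ramp with one tinted pixel
grey = np.dstack([np.arange(16, dtype=np.uint8).reshape(4, 4)] * 3)
tinted = grey.copy()
tinted[0, 0, 0] += 1  # nudge the red channel of a single pixel

print(is_pure_greyscale(grey))    # → True
print(is_pure_greyscale(tinted))  # → False
```

If the function returns True on the NASA file, the image really is greyscale stored in an RGB container, exactly as described above.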
(if you doubt that this is Hubble, just look at the page for that photo:
Well. You know what a composite of images is: the camera shoots one image, then another and another, and then you open GIMP and join them all. Looking at the composite we can tell whether the sensor was a rectangle (the 32-megapixel sensor) or a square (the 16-megapixel color sensor). Look at my analysis:
In the first image I put a red square where I believe the first exposure was taken. Then I move that red square to where the second exposure was taken, then the third and the fourth. The fourth exposure already touches the Copernicus crater, and the Artificial Intelligence software clearly detects that and immediately makes a 2x zoom. You can see that, if the 2x zoom had not been done, a fifth shot in the red square would have been taken; but that image was not taken. Instead the green one was taken, and then the AI software moved on, taking the rest with the new green square around the Copernicus crater.
It is obviously AI software, because if it were done manually it would not place the green square so far away from the subject, nor would it start shooting where the first shot was taken. It looks like an automated task.
Now, the sensor used is square, so my guess is they are using the 4k x 4k = 16-megapixel color camera. The first red square would be 4k pixels across, and the green one would also be 4k pixels across. To make the composite you need to resample the red-square shots 2x, so each becomes 8k pixels across and your 4k green square matches the scale of the composite.
Well, if you measure how many pixels the NASA image has, the red square side is 1626 pixels and the green square side is 813 (a pixel above or below; I am not being too precise here). That means they are not using the 1-megapixel camera but the 4k x 4k = 16-megapixel camera. But instead of giving us 4k pixels on the green square, they are giving us 0.8k pixels: about 5 times less resolution than the camera gives them.
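The 5x figure is just the measured tile size against the assumed 4096-pixel sensor side. A quick sketch of that arithmetic (the 4096 value is my assumption for the "4k x 4k" sensor; the tile measurements are the approximate ones above):

```python
# Measured in the published composite (approximate, a pixel either way)
red_side_px = 1626     # one red-square tile, already resampled 2x
green_side_px = 813    # the green (zoomed) tile

sensor_side_px = 4096  # assumed side of the 4k x 4k = 16-megapixel sensor

# Sanity check: the red tiles are exactly twice the green tile
print(red_side_px / green_side_px)  # → 2.0

# The green tile should map 1:1 onto the sensor, so the published image
# is this many times smaller than what the camera actually captured:
downscale = sensor_side_px / green_side_px
print(f"published at 1/{downscale:.2f} of the sensor resolution")
```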
And that demonstrates either that NASA doesn't like high-quality images, or that NASA fools you every time it posts an image.
For that image the page says “You are attempting to access an image with an extremely high resolution. While the file size may be small, the number of pixels this image contains requires at least 10.48 MB of free RAM that is not being used by any other application, including your operating system.”
So they warn you that a high-resolution image is going to occupy 10 megabytes of RAM and your computer is going to explode.
Give me a break.
I hope that some day, when the deception ends, they confirm they have the original at 5x the quality and post it, so I can see Copernicus as God or some rock created it.