Arash,
15-bit: yes, you lose half the data precision. But you would not notice that in a histogram. A histogram is binned, typically into 128 or 256 levels, so you wouldn't see the effect of losing one bit on a 16-bit image. On an 8-bit image, though, losing a bit would be noticeable on a 256-bin histogram.
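To make the binning point concrete, here is a little C sketch (just my illustration; any real editor's histogram code will differ): a 256-bin histogram only looks at the top 8 bits of each pixel value, so dropping the lowest bit of 16-bit data can't change which bin a pixel lands in.

/* 256-bin histogram index from a 16-bit pixel and from the same
   pixel reduced to 15 bits -- both land in the same bin. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t v16 = 51234;       /* a 16-bit pixel value            */
    uint16_t v15 = v16 >> 1;    /* same pixel at 15-bit precision  */

    int bin16 = v16 >> 8;       /* keep top 8 of 16 bits -> bin 200 */
    int bin15 = v15 >> 7;       /* keep top 8 of 15 bits -> bin 200 */

    printf("bin from 16-bit value: %d\n", bin16);
    printf("bin from 15-bit value: %d\n", bin15);
    return 0;
}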
The 15-bit mode uses a math trick (I saw a description by a mathematician) to speed things up. There is still a speed advantage to doing adds and subtracts versus multiplies and divides on every CPU, I believe. And whether or not the software is 64-bit, it can still use 8-bit and 16-bit integers and the associated integer math. 64-bit is only needed for addressing large images greater than 4 GBytes in size.
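I don't know the exact trick the software uses, but here is a C sketch of the general idea: an operation like a 50% blend can be done entirely with an integer add and a shift, no divide or floating point needed, and that works the same whether the program is 32-bit or 64-bit.

#include <stdio.h>
#include <stdint.h>

/* 50% blend of two 15-bit pixel values using only an add and a shift. */
static inline uint16_t blend_half(uint16_t a, uint16_t b) {
    return (uint16_t)((a + b) >> 1);   /* (a + b) / 2 without a divide */
}

int main(void) {
    uint16_t a = 20000, b = 30000;     /* two 15-bit pixel values */
    printf("integer blend: %u\n", blend_half(a, b));      /* 25000 */
    printf("float blend:   %.0f\n", 0.5 * a + 0.5 * b);   /* 25000 */
    return 0;
}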
And for photography, 15 bits is actually enough for any application. 15 bits is a linear dynamic range of 15 stops, and a gamma-encoded range which could be orders of magnitude larger. No camera has even close to the signal-to-noise ratio (S/N) of 15 bits, and none even have the linear dynamic range (although that will hopefully change soon). The S/N of a digital camera image is the square root of the number of photons captured. Currently, cameras with large pixels, like the Canon 5D (Mark I) and 1D Mark II, capture at most about 80,000 photons in a pixel, which gives an S/N of about 283. So 15 bits is fine enough precision, and since digital camera data is at most 14 bits with all the photon and read noise, you could never tell a 15 versus 16-bit difference.
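Here is that arithmetic as a small C check (the photon count is an approximate full-well value, not an exact spec):

#include <stdio.h>
#include <math.h>

int main(void) {
    double photons = 80000.0;             /* approx. large-pixel full well */
    double snr = sqrt(photons);           /* photon (shot) noise limit     */
    printf("S/N = sqrt(%.0f) = %.0f\n", photons, snr);   /* about 283 */
    printf("15-bit levels = %d\n", 1 << 15);             /* 32768     */
    return 0;
}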
See my digital sensor performance summary web page:
http://www.clarkvision.com/imagedeta...ary/index.html
HDR can push the linear dynamic range further, and I believe all HDR programs convert 15/16-bit image data to at least 32-bit numbers (some floating point).
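For example (just a sketch of the kind of conversion I mean, not any particular HDR program's code), promoting a 16-bit integer pixel to a 32-bit float looks like this:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t pixel16 = 51234;                    /* 16-bit pixel value  */
    float pixel32 = (float)pixel16 / 65535.0f;   /* scale to 0.0 .. 1.0 */
    printf("16-bit %u -> 32-bit float %f\n", pixel16, pixel32);
    return 0;
}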
Roger