Cameras have been around for well over a century, and camera technology has advanced to remarkable levels. Today we have cameras that can shoot high-definition video at up to 1455 frames per second.
To put this into perspective, normal video and film are shown at roughly 24 frames per second in a theater or 30 on a television screen. These cameras can therefore bring clarity and definition to extremely fast events. This is why we are now able to film a bullet in slow motion even though it is traveling at more than 1500 meters per second.
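A quick calculation shows why the frame rate matters so much here. Using the speeds mentioned above, this sketch computes how far a bullet travels between two consecutive frames at various capture rates (the frame rates chosen are illustrative):

```python
# How far does a 1500 m/s bullet move between consecutive frames?
bullet_speed = 1500.0  # m/s, figure from the text

for fps in (30, 1455, 100_000):
    displacement_mm = bullet_speed / fps * 1000  # mm traveled per frame
    print(f"{fps:>7} fps -> {displacement_mm:.2f} mm between frames")
```

At 30 fps the bullet crosses tens of meters between frames and is effectively invisible; only at very high frame rates does its motion resolve into millimeters per frame.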
It all begins with the camera sensor. Today we can find high-speed cameras that record at a range of speeds, and the higher the speed, the costlier the camera becomes. Some owners, especially in highly specialized sectors such as film, even commission custom cameras for particular productions. For high-speed cameras to work, mechanical shutters are eliminated and electronic shutters used in their stead. The camera sensor is therefore engineered with three crucial factors in mind: sensitivity, speed, and resolution.
These three factors are what make high-speed cameras perform so well. Since clients usually want more than just speed, resolution must also be factored in: high-speed cameras can record at qualities ranging from 720p to 8K. For the image to attain that amount of spatial resolution and detail, the sensor has to be incredibly sensitive to light, mainly because there is very little time to expose it while capturing each frame. This also means that the subject being shot must be very well illuminated for the best results. There is a tradeoff in all this, however: as the camera speed increases, resolution and sensitivity decrease. This is why the sensor has to be top notch to maintain both speed and picture quality.
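The speed/sensitivity tradeoff follows directly from arithmetic: the exposure for one frame can never exceed the frame interval, 1/fps. This sketch shows how quickly that ceiling drops (frame rates chosen for illustration; real exposures are shorter still once readout time is subtracted):

```python
# Upper bound on per-frame exposure time: one frame interval (1/fps).
for fps in (30, 25_600, 1_000_000):
    max_exposure_us = 1_000_000 / fps  # microseconds of light per frame, at most
    print(f"{fps:>9} fps -> at most {max_exposure_us:.2f} us of light per frame")
```

Going from 30 fps to a million fps shrinks the light-gathering window from about 33 milliseconds to a single microsecond, which is why sensor sensitivity and scene illumination become critical.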
The next step is transferring the data captured by the sensor into the camera's memory as a single frame. The sensor works by essentially counting the photons that strike each photosite. The camera sends a signal to the sensor to start counting photons and, later, to stop (read: when capture of the frame starts and stops).
At this point, an electrical charge proportional to the counted photons has built up on each photosite. During this period, called the digital exposure time, the charge on every photosite is read out and converted into digital form by an analog-to-digital converter (A/D). The result represents that pixel's value, which is stored in memory together with all the other pixels in the image. The sensor is then exposed for the next image.
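The charge-to-pixel step above can be modeled as a simple quantization. This is a toy sketch, not a real sensor pipeline: the photon counts, full-well capacity, and 12-bit depth are made-up illustrative numbers.

```python
# Toy model of digital exposure: each photosite accumulates charge in
# proportion to counted photons, then an A/D converter quantizes that
# charge into a digital pixel value.
def adc_convert(photon_count, full_well=10_000, bits=12):
    """Map an accumulated charge (photon count) to a digital pixel value."""
    levels = 2 ** bits - 1                  # 4095 output levels for 12 bits
    clipped = min(photon_count, full_well)  # a photosite saturates at full well
    return round(clipped / full_well * levels)

frame = [[120, 4500], [9000, 15000]]        # 2x2 "sensor", photons per site
pixels = [[adc_convert(p) for p in row] for row in frame]
print(pixels)  # the quantized image that gets stored to memory
```

Note how the 15,000-photon site clips to the maximum value 4095: once the photosite's full-well capacity is reached, extra light carries no additional information.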
Looking at a high-speed camera such as the Phantom v2512, one of the best cameras in the field, we can see how gigabytes of data are processed so quickly and efficiently. The Phantom can shoot up to 1,000,000 fps, but for the reasons discussed earlier, the resulting picture is a monochrome image at just 128×32 resolution. At a more usable resolution such as 1280×800, it can still record at 25,600 fps.
What this means is that the camera captures roughly 25 Gpixels per second. This data is not sent to an SSD, however, but to RAM; specifically, dynamic RAM, which is many times faster than a conventional SSD. Cameras therefore come with 72 GB, 144 GB, or 288 GB of RAM, and the higher the image quality, the more RAM is required. To put this in perspective, about 7.6 seconds of full-resolution footage fills the 288 GB camera's memory.
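We can sanity-check these figures with some arithmetic. The 1.5 bytes per pixel below assumes roughly 12-bit raw monochrome samples, which is a common raw format; the v2512's exact bit depth is an assumption here, not something stated in the text.

```python
# Back-of-the-envelope data rate for 1280x800 at 25,600 fps,
# assuming ~1.5 bytes (12 bits) per monochrome pixel.
width, height, fps = 1280, 800, 25_600
bytes_per_pixel = 1.5

pixels_per_sec = width * height * fps                  # pixels captured per second
data_rate_gb = pixels_per_sec * bytes_per_pixel / 1e9  # GB/s streaming into RAM
record_secs = 288 / data_rate_gb                       # seconds a 288 GB buffer holds
print(f"{pixels_per_sec/1e9:.1f} Gpx/s, {data_rate_gb:.1f} GB/s, {record_secs:.1f} s")
```

This lands close to the article's figures: about 26 Gpixels per second, a data rate near 39 GB/s, and a bit over 7 seconds of full-resolution recording in 288 GB of RAM.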
Some users, however, want slower shooting over longer periods. This requires a custom SSD developed for high-speed video. Some cameras can record at 1500 MB/s, which is about 1 Gpixel per second. Most high-speed cameras therefore have timers to prevent memory from being overrun.
This means that off-the-shelf drives and SSDs are no good here. Drives connected in a RAID configuration, however, achieve much higher speeds. Since the available bandwidth between the memory and the processor must be used efficiently, higher-resolution cameras require custom designs.
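A rough sketch shows why striping across multiple drives is needed. The 1500 MB/s per-drive figure and the ~25 Gpixel/s stream come from the text; the 1.5 bytes per pixel is an assumed 12-bit raw format, and the calculation ignores RAID overhead entirely.

```python
# How many 1500 MB/s drives must be striped (RAID 0 style) to absorb
# a full-rate high-speed video stream? Idealized: no RAID overhead.
import math

per_drive_mb_s = 1500                     # sustained write rate of one drive
stream_mb_s = 25_000 * 1.5                # ~25 Gpx/s at an assumed 1.5 B/px

drives_needed = math.ceil(stream_mb_s / per_drive_mb_s)
print(drives_needed)  # drives required in the ideal case
```

Even under these generous assumptions it takes a couple dozen fast drives in parallel, which is why off-the-shelf storage cannot keep up and RAM buffering comes first.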
As such, these cameras write directly to memory, with processing and manipulation done later. The processing cannot be done live: all image enhancements are applied offline rather than while shooting, because there is simply not enough time.