Still Have Questions?
Here are a few frequently asked questions and answers. Feel free to contact us if you don't find the answer you are looking for!
General Imaging
An ideal camera sensor would convert a known amount of light into an exactly predictable output voltage. Unfortunately, ideal sensors (like all other electronic devices) do not exist.
Due to temperature conditions, electronic interference, etc., sensors will not convert light 100% precisely. Sometimes, the output voltage will be a bit higher than expected and sometimes it will be a bit lower. The difference between the ideal signal that you expect and the real-world signal that you actually see is usually called noise. The relationship between signal and noise is called the signal-to-noise ratio (SNR).
Signal-to-noise ratio is commonly expressed as a factor such as 20 to 1, 30 to 1, etc. Signal-to-noise ratio is also commonly stated in decibels (dB). The formula for calculating a signal-to-noise ratio in dB is: SNR = 20 x log (Signal/Noise).
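As a minimal illustration of that formula in Python (the example values are hypothetical):

```python
import math

def snr_db(signal, noise):
    # SNR in dB = 20 x log10(signal / noise)
    return 20 * math.log10(signal / noise)

print(snr_db(20, 1))   # a 20-to-1 ratio is about 26 dB
print(snr_db(100, 1))  # a 100-to-1 ratio is exactly 40 dB
```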
Once noise has become part of a signal, it can’t be filtered or reduced. So it is a good idea to take precautions to reduce noise generation such as:
- Using good quality sensors and electronic devices in your camera
- Using a good electronic architecture when designing your camera
- Lowering the temperature of the sensor and the other analog devices in your camera
- Taking precautions to prevent noisy environmental conditions from influencing the signal (such as using shielded cable)
Many times, camera users will increase the gain setting on their cameras and think that they are improving signal-to-noise ratio. Actually, since increasing gain increases both the signal and the noise, the signal-to-noise ratio does not change significantly when gain is increased. Gain is not an effective tool for increasing the amount of information contained in your signal. Gain only changes the contrast of an existing image.
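A quick illustrative sketch of this point, using hypothetical numbers: applying gain after the noise has been introduced scales signal and noise equally, so the ratio does not change.

```python
signal, noise, gain = 200.0, 10.0, 4.0

snr_before = signal / noise                   # 20.0
snr_after = (gain * signal) / (gain * noise)  # still 20.0 -> no new information, only more contrast
```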
Binning is a technique used in advanced imaging where adjacent pixels on a sensor are combined into a single “super-pixel.”
This process increases the sensor’s sensitivity to light, improving the signal-to-noise ratio, which is especially useful in low-light conditions.
While binning enhances image brightness and reduces noise, it also lowers the overall spatial resolution because multiple pixels are merged into one.
Binning is commonly used in scientific imaging, astronomy, and hyperspectral imaging to optimize performance when capturing faint signals or when faster readout speeds are needed.
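As a minimal sketch of the idea (assuming the raw frame is held as a NumPy array with even dimensions), 2 x 2 binning can be written as:

```python
import numpy as np

def bin_2x2(frame):
    # Sum each 2 x 2 block of pixels into one "super-pixel"; frame dimensions must be even.
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

frame = np.random.poisson(lam=5, size=(512, 512)).astype(np.float64)
binned = bin_2x2(frame)   # shape (256, 256): ~4x the signal per pixel, half the spatial resolution
```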
A Region of Interest (ROI) is a specific area within an image or sensor frame that is selected for focused analysis or processing.
By defining an ROI, imaging systems can concentrate on capturing and analyzing only the most relevant portion of a scene, which can improve processing speed, reduce data storage requirements, and enhance measurement accuracy.
ROI is commonly used in applications like machine vision, scientific imaging, and hyperspectral analysis, where precise monitoring or inspection of specific areas is critical.
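For example, with a frame already in memory as a NumPy array (the frame size and ROI coordinates below are hypothetical), selecting an ROI is simply a crop, and the reduction in data volume is easy to see:

```python
import numpy as np

frame = np.zeros((1024, 1280), dtype=np.uint16)   # full sensor frame (rows x columns)

roi = frame[400:600, 500:800]                     # hypothetical 200 x 300 pixel region of interest

print(roi.size / frame.size)                      # ~0.046 -> only ~5% of the pixels to transfer and process
```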
RGB imaging is a technique that captures images using three primary colour channels: Red, Green, and Blue.
By combining these channels, an imaging system can reproduce full-color images that closely match what the human eye sees.
RGB imaging is widely used in applications such as digital photography, machine vision, microscopy, and scientific imaging, where accurate colour representation and analysis are important. When more detailed colour information is needed, hyperspectral or multispectral imaging in the visible spectrum can be used, as these techniques capture more than just three spectral bands.
If you have multiple network adapters in a single PC, keep the following guidelines in mind:
- Only one adapter in the PC can be set to use auto IP assignment. If more than one adapter is set to use auto assignment, auto assignment will not work correctly and the cameras will not be able to connect to the network. In the case of multiple network adapters, it is best to assign fixed IP addresses to the adapters and to the cameras. You can also set the cameras and the adapters for DHCP addressing and install a DHCP server on your network.
- Each adapter must be in a different subnet. The recommended range for fixed IP addresses is from 172.16.0.1 to 172.31.255.254 and from 192.168.0.1 to 192.168.255.254. These address ranges have been reserved for private use according to IP standards.
- If you are assigning fixed IP addresses to your cameras, keep in mind that for a camera to communicate properly with a network adapter, it must be in the same subnet as the adapter to which it is attached.
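As a quick way to verify the last point, a small sketch using Python's ipaddress module (the addresses and subnet mask below are hypothetical) checks whether a camera's fixed IP address falls inside its adapter's subnet:

```python
import ipaddress

adapter = ipaddress.ip_interface("192.168.1.10/16")   # adapter address with its subnet mask
camera = ipaddress.ip_address("192.168.1.20")         # fixed address assigned to the camera

print(camera in adapter.network)                      # True -> camera and adapter share a subnet
```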
Hyperspectral Imaging
Hyperspectral imaging captures hundreds of narrow, contiguous spectral bands, while multispectral imaging typically uses a smaller number of broader bands.
This higher spectral resolution allows hyperspectral systems to detect subtle material differences that multispectral systems may miss.
Both pushbroom and FTIR (Fourier Transform Infrared) sensors are used for spectral imaging, but differ in how they collect and process spectral information.
Pushbroom sensors, like those found in Specim's hyperspectral cameras, capture an entire line of spatial pixels at once, with each pixel containing spectral information across many bands. As the sensor or target moves forward, it builds a full data cube with two spatial dimensions and one spectral dimension.
- They are most common in VIS, NIR and SWIR hyperspectral imaging
- Provide higher spatial resolution
- Require motion to build the datacube
- Offer high throughput, high spectral detail and contain relatively simple optics
- They are not ideal for moving or changing targets
FTIR sensors feature moving mirrors to modulate incoming IR light, creating an interferogram. Using a Fourier transform, the interferogram is mathematically converted into a full spectrum for each measurement. They usually measure one spot or pixel at a time unless combined with imaging optics.
- They are more often used in MWIR and LWIR ranges
- Offer higher spectral accuracy and resolution
- Can capture entire spectrum simultaneously without requiring movement
- They offer high sensitivity, high resolution and broad spectral coverage
- They are typically slower, bulkier and more sensitive to vibration, and less suited for fast-moving imaging applications unless paired with focal-plane arrays.
A spectral frame is the image captured by the imaging spectrograph. The horizontal dimension, or row, is spatial. The field of view is defined by the focal length of the objective lens, the distance to the target and the width of the sensor. This field of view is then divided into the number of pixels of horizontal resolution. The vertical dimension is spectral. Each column of pixels on the sensor represents one portion of the thin slice of the target, with each pixel in the column recording the intensity of light reflected at a particular wavelength.
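As a rough worked example of that geometry (using a simple pinhole approximation and hypothetical numbers for the sensor, lens and working distance):

```python
sensor_width_mm = 14.2    # hypothetical sensor width
focal_length_mm = 35.0    # hypothetical objective focal length
distance_mm = 500.0       # hypothetical distance to the target
spatial_pixels = 1024     # horizontal resolution

fov_mm = sensor_width_mm * distance_mm / focal_length_mm   # ~203 mm of target across the field of view
pixel_footprint_mm = fov_mm / spatial_pixels                # ~0.2 mm of target per spatial pixel
```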
A waterfall image is simply a spatial image taken from our data cube. If we take a slice from the data cube in the frame x pixel width plane (the two spatial dimensions), we will get a recognizable image of the target at a particular wavelength.
A target is imaged by first determining the height of the scene that the spectral imaging system is exposed to. This is determined by the focal length of the lens, the imaging slit width and the distance to the target. If our height works out to be 0.5 mm, we must take a spectral frame, move either the imaging spectrometer or the target 0.5 mm, then take the next spectral frame, repeating this process until the entire scene has been imaged. If we move less than 0.5 mm, we will be oversampling the scene, repeating the data gathered from a single point. If we move further than 0.5 mm, we will be undersampling, missing data from the target we are imaging.
Specim has created a series of detailed tutorials on how to set up and begin capturing data with your FX camera and the 40x20 LabScanner. Check out our Tutorials section to access those resources!
This is a simple question with a complicated answer. The imaging speed is determined by:
- The sensitivity of the camera and the illumination of the target (lower sensitivity or lower light levels require longer integration times)
- The data transfer capabilities of the camera
- The pixel depth (an 8 bit pixel is ½ of the data of a 10 or 12 bit pixel)
- The transfer speed of the camera to computer interface (CameraLink is fast, USB is much slower)
- The computer’s ability to process the incoming data
A fast camera with lots of light can produce more than 100 full spectral frames per second. This means if the imaging height of the scene is 0.5 mm, you can image more than 50 mm/sec. Handling the data at the computer becomes a problem, as 100 frames per second results in roughly 50 megabytes of data per second that needs to be processed. A number of compromises can be made, including decreasing the bit depth or the spectral or spatial resolutions, depending on the application.
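A back-of-the-envelope sketch of that arithmetic (the spatial and spectral dimensions below are assumptions, chosen only to illustrate how the data rate scales):

```python
frames_per_sec = 100
spatial_pixels = 1024     # assumed spatial resolution per frame
spectral_bands = 512      # assumed number of spectral bands kept
bytes_per_pixel = 1       # 8-bit pixels; 10- or 12-bit data typically occupies 2 bytes

data_rate_mb_per_s = frames_per_sec * spatial_pixels * spectral_bands * bytes_per_pixel / 1e6
# ~52 MB/s that the computer has to handle

scene_height_mm = 0.5
scan_speed_mm_per_s = frames_per_sec * scene_height_mm   # 50 mm of target covered per second
```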
The spectral resolution of the imaging spectrograph is defined by the optics of the prism or grating mechanism and the entrance slit width of the device.
The light entering the system is diffracted into its components according to wavelength. For example, Specim’s ImSpector V10-e provides a spectral resolution of 2.8 nm with a 30 µm slit width (depending on the detector and optics).
Hardware: Check your data cable. Make sure the lens cap is removed. Is the power supply connected?
Software: Check your configuration software (such as MAX or Device Manager). Make sure you are using the correct camera drivers. Make sure you aren't missing DLLs.
Some CCD and CMOS detectors have a thin coating on the detector surface causing interference phenomena (like Newton rings) that are seen as horizontal waves. This is an aesthetic problem only and does not interfere with spectral imaging.
There are several possible reasons you could be experiencing this:
- The light source has an infrared cutoff filter or the fiber optics absorb the light
- The camera is equipped with an infrared cut-off filter (hot mirror)
- The detector has low response (low QE) above 700 nm
- The front objective coatings are not designed for above 700 nm
There are several possible reasons you could be experiencing this:
- The light source (usually halogen) does not produce much energy at the short wavelengths
- The camera detector has low response at the short wavelengths
- There is a lens coating on the front objective or a UV blocking filter present
Some possible reasons you could be experiencing focus issues with your hyperspectral camera:
- The back focal length of the lens is incorrect for the camera (may not be C-mount)
- The lens is not focused on your target - use a focus target to set the focus
- The objective lens is loose or incorrectly installed
- The objective lens is not suited for spectral imaging (low quality, wrong wavelength range, coatings)
If you are experiencing low signal due to your illumination, double check the following issues that could be causing this:
- Is the lens cap on the objective lens?
- Is the lens aperture open?
- Do you have adequate integration time?
- Do you have an incompatible light source (ie. is there an IR cutoff filter)?
- Does the target have high absorption/low reflectance?
This is often due to dust or debris on the entrance slit of the imaging spectrograph.
A data cube is simply a collection of sequential spectral frames placed back to back.
If we imaged a target with our 1024 pixel x 1024 pixel imaging spectrograph using an imaging height of 0.5 mm and collected 200 frames, the dimensions of our cube would be frames x pixel width x pixel height, or 200 x 1024 x 1024.
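A minimal sketch of how such a cube is sliced (assuming it is held as a NumPy array ordered frames x spatial pixels x spectral pixels, as in the example above):

```python
import numpy as np

cube = np.zeros((200, 1024, 1024), dtype=np.uint16)   # frames x spatial pixels x spectral pixels

spectral_frame = cube[0, :, :]     # one slice of the target: spatial x spectral
waterfall = cube[:, :, 500]        # spatial image of the whole target at one wavelength (band 500)
spectrum = cube[50, 300, :]        # full spectrum of a single point on the target
```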
An imaging spectrograph transforms a very thin slice of an image into its spectral components by using a prism, grating or both and projects the spectral information onto an imaging sensor.
When a spectral camera images a scene, the frame can be considered to be three dimensional.
What the user sees when viewing the image is the two dimensional spectral frame which is defined by the area of the detector. This frame typically has data for each pixel of the camera.
What must be taken into consideration is that this is the spectral image of an area defined by the optics of the spectral camera. For example, if the height of the scene being imaged is 0.5 mm, each pixel can be considered a 3D cube defined as pixel height x pixel width x scene height. If the scene height and the pixel width are not equal, a waterfall image, which is simply a slice taken through the data cube, will present a rectangular pixel defined as scene height x pixel width.
When this image is presented on a screen with square pixels, the image will appear to be “compressed”, even though the data is completely valid.
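A minimal sketch of correcting that apparent compression (the scene height and per-pixel footprint below are hypothetical; any image-resampling routine could be used instead of the simple row duplication shown):

```python
import numpy as np

scene_height_mm = 0.5     # distance on the target covered by each frame (scan direction)
pixel_width_mm = 0.25     # hypothetical footprint of one spatial pixel across the slice

waterfall = np.zeros((200, 1024))   # frames x spatial pixels; appears "compressed" on square screen pixels

# Stretch the scan axis so both axes represent the same distance on the target
stretch = int(round(scene_height_mm / pixel_width_mm))
corrected = np.repeat(waterfall, stretch, axis=0)   # simple nearest-neighbour resampling
```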
Below are some reasons you could be seeing this:
- Incorrect calibration – the spectral lines from a reference source have not been correctly identified. You can use a simple fluorescent table lamp to identify spectral lines
- The camera detector is too small, misaligned or not centered
- There are calculation errors.
Multispectral Imaging
Multispectral imaging captures data in a limited number of (typically) broad wavelength bands, whereas hyperspectral imaging captures hundreds of narrow, contiguous spectral bands.
Multispectral systems are generally faster and simpler, while hyperspectral systems provide more detailed spectral information. Multispectral systems are particularly advantageous when only a small number of bands is needed to extract the information required for your application. Hyperspectral cameras, in contrast, provide much more detailed spectral information about your target, offer added versatility across multiple applications, and enable deeper analysis.
Multispectral imaging typically requires a camera or sensor capable of capturing multiple spectral bands, along with compatible lenses, possibly filters, illumination, a movement platform if the camera is a line-scan multispectral camera, camera control and acquisition software or an SDK, and software for analyzing and visualizing the spectral data.
A spectral band is a specific range of wavelengths of light that the sensor captures. Each band provides unique information about the material or object being imaged based on how the target interacts with light.
Infrared & Thermal Imaging
Infrared (IR) imaging refers broadly to capturing light in the infrared spectrum, which lies just beyond visible light on the electromagnetic spectrum. An infrared camera is a camera that is optimized to detect electromagnetic radiation within the IR range. Different sensors are used for detecting radiation at different energy levels in the IR; for example, MCT and InSb sensors are typically used for detecting mid-wave infrared (MWIR) radiation, whereas microbolometers are used to detect long-wave infrared (LWIR) radiation.
Thermal imaging is infrared imaging with added specialized calibrations that give it the capability to quantify the surface temperature of objects based on the intensity of the infrared radiation that the sensor receives.
No. Thermal cameras detect surface temperature and emitted infrared radiation. They cannot image through walls or other solid objects, but they can detect differences in thermal radiation across solid surfaces that may provide valuable insights about the objects or what is occurring on the other side (for example, in building inspection).
Yes. Thermal imaging does not rely on visible light and can operate in complete darkness, smoke, or other obscured conditions.
Machine Vision
CCD sensors use devices called shift registers to transport charges out of the sensor cells and to the other electronic devices in the camera. The use of shift registers has several disadvantages:
- Shift registers must be located near the photosensitive cells. This increases the possibility of blooming and smearing.
- The serial nature of shift registers makes true area of interest image capture impossible. With shift registers, the readings from all of the sensor cells must be shifted out of the CCD sensor array. After all of the readings have been shifted out, the readings from the area of interest can be selected and the remaining readings are discarded.
- Due to the nature of the shift registers, large amounts of power are needed to obtain good transfer efficiency when data is moved out of the CCD sensor array at high speed.
CMOS sensors and CCD sensors have completely different characteristics. Instead of the silicon sensor cells and shift registers used in a CCD sensor, CMOS sensors use photodiodes with a matrix-oriented addressing scheme. These characteristics give CMOS the following advantages:
- The matrix addressing scheme means that each sensor cell can be accessed individually. This allows true area of interest processing to be done without the need to collect and then discard data.
- Since CMOS sensors don’t need shift registers, smear and blooming are eliminated and much less power is needed to operate the sensor (approximately 1/100th of the power needed for a CCD sensor).
- This low power input allows CMOS sensors to be operated at very high speeds with very low heat generation.
The quality of the signals generated by CMOS sensors is quite good and can be compared favorably with the signals generated by a CCD sensor. Also, CMOS integration technology is highly advanced; this creates the possibility that most of the components needed to produce a digital camera can be contained on one relatively small chip. Finally, CMOS sensors can be manufactured using well-understood, standardized fabrication technologies. Standard fabrication techniques result in lower cost devices.
The main difference lies in the CoaXPress standard each frame grabber supports, which directly affects data bandwidth and overall performance. CXP-6 frame grabbers support data rates up to 6.25 Gb/s per link, making them suitable for earlier-generation or lower-bandwidth cameras. CXP-12 frame grabbers support the newer CoaXPress 2.0 standard with speeds up to 12.5 Gb/s per link, enabling higher-resolution and higher-frame-rate imaging with fewer cables.
In practical terms, CXP-12 offers significantly more bandwidth, reduced cabling complexity, and better compatibility with next-generation machine vision cameras, while CXP-6 is a cost-effective choice for legacy systems or moderate data-rate requirements.
Check your network adapter settings.
Go to Start>Control Panel>Network Connections and right click on your network adapter. Select Properties from the drop down menu. When the properties window opens, click the Configure button. Select the Advanced tab and in the property box on the left, select the property called “Jumbo Frames”. Set the value as high as possible (for jumbo frames, it’s approximately 16KB).
Be aware that if your adapter doesn’t support jumbo frames, you might not be able to operate your camera at the full frame rate.
Check your network adapter settings.
Go to Start>Control Panel>Network Connections and right click on your network adapter. Select Properties from the drop down menu. When the properties window opens, click the Configure button.
Look for a tab with a name such as “Connection speed”. If you see a tab like this, select the tab and set the “Speed & Duplex” property to “Automatic identification” or “Auto”.
If you do not see a “Connection Speed” tab, select the “Advanced” tab and look for the “Speed & Duplex” property. Set the “Speed & Duplex” property to “Automatic identification” or “Auto”.
Color Filters for Single-Sensor Color Cameras
In general, single-sensor color cameras use a monochrome sensor with a color filter pattern. Another way to achieve a color image with only one sensor would be to use a revolving filter wheel in front of a monochrome sensor, but this method has its limitations.
With the color filter pattern method of color imaging, no object point is projected on more than one sensor pixel, that is, only one measurement (for a single color or sum of a set of colors) can be made for each object point.
There are several different filter methods for generating a color image from a monochrome sensor. In the following, some frequently used filter arrangements are detailed.
Bayer Color Filter (Primary Color Mosaic Filter)
The following table 1 shows the filter pattern for a sensor of size xs x ys (xs and ys being multiples of 2):
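The table itself is not reproduced here, but as a sketch the standard Bayer tiling can be generated as follows (assuming an RGGB phase with red in the top-left corner; the phase varies between sensors):

```python
import numpy as np

def bayer_pattern(rows, cols):
    # Repeat the 2 x 2 Bayer tile across the sensor; rows and cols must be multiples of 2.
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (rows // 2, cols // 2))

print(bayer_pattern(4, 4))   # 4 x 4 corner of the mosaic
```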
Complementary Color Mosaic Filter
The following table 2 shows the filter pattern for a sensor of size xs x ys (xs and ys being multiples of 2):
This is basically the same arrangement as the Bayer filter pattern, but instead of using primary colors (R, G, B) it works with complementary colors (magenta, cyan, yellow). The reason for this is that a primary color filter blocks 2/3 of the spectrum (i.e. green and blue for a red filter) while a complementary filter blocks only 1/3 of the spectrum (i.e. blue for a yellow filter). Thus, the sensor is 2 times more sensitive. The tradeoff is a somewhat more complicated computation of the R, G, B values, requiring the input of each complementary color, as sketched below.
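That computation can be sketched as follows, assuming ideal filters where cyan passes green + blue, magenta passes red + blue, and yellow passes red + green (real sensors need calibrated coefficients rather than this idealized algebra):

```python
def cmy_to_rgb(cy, mg, ye):
    # Ideal complementary readings: Cy = G + B, Mg = R + B, Ye = R + G
    r = (mg + ye - cy) / 2
    g = (cy + ye - mg) / 2
    b = (cy + mg - ye) / 2
    return r, g, b

print(cmy_to_rgb(cy=70, mg=90, ye=110))   # (65.0, 45.0, 25.0) for R=65, G=45, B=25
```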
Primary Color Vertical Stripe Filter
Table 3 shows the filter pattern for a sensor of size xs x ys (xs being a multiple of 4):
This arrangement is very simple and basically well suited to machine vision applications. The drawback is that the horizontal resolution is only 1/3 of the vertical resolution.
UV Imaging
UV imaging is a technique that uses ultraviolet wavelengths of light, which are shorter than visible light, to reveal surface features, materials, or defects that are undetectable to conventional cameras. UV imaging is commonly used for inspection, scientific analysis, and forensic applications.
Ultraviolet (UV) radiation is electromagnetic energy between approx. 100 nm and 400 nm. On the electromagnetic spectrum, the UV range is found between the visible spectrum and the X-ray range. The UV range is divided into three sections: UVA (315-400 nm), UVB (280-315 nm) and UVC (100-280 nm).
UV radiation is invisible to the human eye, which typically only detects electromagnetic radiation in the visible range (approx. 400 nm to 700 nm).
UV fluorescence imaging involves illuminating an object with UV light, causing certain materials to emit visible light in a specific waveband (colour). When the target absorbs the high-energy UV photons, electrons get excited by the UV energy, and then quickly release it as lower-energy photons (visible light) as they return to their regular state. The colour of visible light emitted depends on the chemical properties of the material.
This emitted light is then captured by a camera, enabling high-contrast identification of specific substances or defects. This is different from phosphorescence in that once the UV light source is removed, the fluorescence phenomenon also stops. It is commonly used in applications including security, forensics, art restoration and scientific analysis.
UV imaging requires a specialized camera or sensor sensitive to UV light, along with UV-compatible optics, filters, and sometimes controlled UV illumination sources.
While UV imaging itself is safe, prolonged exposure to strong UV light sources can be harmful to eyes and skin. Proper safety measures and protective equipment should always be used.