Monday, October 1, 2012

Before the Video Camera: First Steps



Just how does one get a still picture to move? And who figured out how to do it?

 If I could go back in time and ask these questions, we'd hear an international chorus of me, me, moi, moi, as Englishman Eadweard Muybridge, American Thomas Edison, and Frenchmen Louis Le Prince and Louis Lumière all raise their hands frantically. And they do all own a piece of the answer, for no single inventor could have done it without building on the ideas of the others.

In this and the next post we'll confine our discussion to Muybridge and Edison, for it is their inventions that are in direct line with what we call the video camera today. It is also amusing to note that, judging from the lives they lived, neither of these pioneers of moving images was able to sit still for long.

Before we start though, let’s keep in mind that a movie camera is nothing more than a camera that captures many images in sequence and records them on media. What sets the images in motion?

Your brain. We perceive frames as individual images only when they are presented at a rate of up to about 12 per second, and the brain holds each image for about 1/15 of a second. So if frames are presented at 15 per second or faster, we perceive them as continuous motion. Early movies, or moving pictures, as they were rightly called, ran at 14 to 24 frames per second.
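That threshold is simple arithmetic, and can be sketched in a few lines. The 1/15-second retention figure is the one quoted above; real persistence of vision varies from person to person.

```python
# Sketch of the persistence-of-vision arithmetic: motion looks
# continuous when each new frame arrives before the previous one fades.
RETENTION_SECONDS = 1 / 15  # approximate retention time from the post

def appears_continuous(fps: float) -> bool:
    """True if the interval between frames is within the retention time."""
    frame_interval = 1 / fps
    return frame_interval <= RETENTION_SECONDS

print(appears_continuous(12))  # False: frames read as separate images
print(appears_continuous(24))  # True: classic sound-film rate, smooth motion
```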

Keeping that basic concept in mind, let’s see what Muybridge did with it.

Eadweard Muybridge

British expatriate Eadweard Muybridge might never have been inspired to explore the moving image had not Leland Stanford, governor of California, businessman, and horse owner, been set on winning an argument.

Does a horse, at any time, while trotting, lift all four feet off the ground?

Stanford was on the yea side. In fact, he was so entrenched that he hired Muybridge to prove him right. Famous for his large photographs of Yosemite Valley, Muybridge was a brilliant but eccentric photographer, and true to his nature, his new project followed a circuitous route to completion. Initially hired in 1872, Muybridge at first failed for lack of a fast enough shutter on his camera. His second attempt was delayed by six years, a period in which he was acquitted of murdering his wife's lover.
After such a close brush with being confined to one space, he spent the next few years traveling through Mexico and South America. He supported himself with publicity photos taken for the Union Pacific Railroad, owned by none other than Leland Stanford.

Upon his return to California, Muybridge resumed his horse-in-action quest, working with a setup of anywhere from 12 to 24 cameras, each equipped with a special shutter he designed to give an exposure of 2/1000 of a second. When the photographs were lined up in sequence, there did appear to be frames that captured all four feet drawn up under the horse. Line drawings of his images soon circulated among the horsey set.



However, with the publicity came skepticism. Doubters pointed to the leg positions in several of the frames and claimed they were anatomically impossible. Never one to back down from proving his point, Muybridge invented a device called the zoopraxiscope and took to the road for a series of lectures.

The zoopraxiscope was a lantern-like device centered around a glass disc upon which he printed his photographs. When the disc was rotated, the images were projected onto the screen in rapid succession, giving viewers the illusion of motion. Some claim the zoopraxiscope was the precursor of modern cinema.

Muybridge next devoted his efforts to showing humans in motion. His studies resulted in over 100,000 images capturing progressive movements within fractions of a second.






A question occurred to him: What if I could add sound?



Upon hearing of the sound-producing phonograph Thomas Edison had invented, Muybridge and his zoopraxiscope traveled to New Jersey with just this proposition for Mr. Edison.

Our next post will delve into what Mr. Edison thought. Meanwhile, if you are looking to purchase an IP camera or IP camera system, or need information about one, please visit www.kintronics.com



Thursday, September 13, 2012

What Video Cameras Learned from TV




Just what is a video camera? Let's start with what it is not. A video camera is not a movie camera. A movie camera records images on film; a video camera creates electronic moving images. It has also been called a television camera, and that is not entirely untrue, since the modern video camera is a great-great-grandchild, once removed, of the old TV camera.


Tracing the video camera's family tree is not an easy task, since television was born not of any one person's invention but evolved from the efforts and ideas of many people working alone and together, over several decades and in several countries.
The family tree would stem from two main root systems: mechanical pioneers and electronic pioneers.

Mechanical Pioneers


Paul Nipkow


Paul Nipkow came up with the hypothesis that if one were to dissect an image into small portions, it would be possible to transmit it sequentially. In so doing he discovered television's scanning principle: that light intensities of small portions of an image can be successively analyzed and transmitted over a wire.

He came up with a rotating, spirally perforated disk, called the Nipkow disk, that would divide an image into a mosaic of points and lines. While rapidly rotating, the disk was placed between a scene and a light-sensitive selenium photocell. The light then passed through a synchronously rotating perforated disk and replicated the image on the projection screen.
Nipkow Disk

The result was an image with a rudimentary 18 lines of resolution. It was 1884 and the world had its first mechanical television system.
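Nipkow's scanning principle, reading an image out point by point into one sequential signal and reassembling it in the same order at the receiver, can be illustrated with a toy sketch. The 3x3 "image" here is purely illustrative.

```python
# Toy model of the scanning principle: a 2-D image becomes a 1-D stream
# of light intensities, then is rebuilt line by line at the receiver.
image = [
    [0, 255, 0],
    [255, 0, 255],
    [0, 255, 0],
]

# Transmit: scan row by row into one sequential signal, as the rotating
# disk's spiral of holes effectively does.
signal = [pixel for row in image for pixel in row]

# Receive: a synchronized scan reassembles the mosaic (3 pixels per line).
width = 3
received = [signal[i:i + width] for i in range(0, len(signal), width)]

print(received == image)  # True: the sequential stream carries the image
```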



In the 1920s, John Logie Baird, a Scottish engineer, used Nipkow's technology to come up with his patented invention using transparent glass rods to transmit images. In 1924, he was able to transmit simple face shapes using reflected light. His transmission across a few feet consisted of flickering images of silhouettes in barely adequate half-tones, but these 30-line images were the first demonstrations of television by reflected light rather than back-lit silhouettes.

Baird and his first transmitter

Even though he would transmit images in 1927 from London to Glasgow, Scotland, using 438 miles of telephone line, and in 1928 achieve the first transatlantic television transmission between London and New York, Baird's mechanical system was rapidly becoming obsolete as electronic systems were being developed.

Meanwhile, in the United States, the inventor Charles Jenkins came up with the idea of viewing distant scenes by radio. His first success was a wireless transmission of a photograph of President Harding, sent from Washington to Philadelphia. He called his invention Radiovision.
This rudimentary mechanical television consisted of a mechanical scanning drum and a multi-tube radio set with a special attachment for receiving pictures, resulting in a fuzzy 40- to 48-line image projected onto a six-inch-square mirror.
Jenkins with Radiovision receiver
On June 24, 1923, he succeeded in transmitting moving silhouettes, and by June 23, 1925, had progressed to transmitting moving pictures.
Meanwhile in other labs and workshops, inventors were taking an electronic path.

Electronic Pioneers

In 1897, Karl Braun invented the cathode ray tube, the foundation on which modern television would be built. He took a sealed glass tube from which most of the air had been removed. At one end was a negative terminal, or cathode, through which electrons entered the tube. In the vacuum, the electrons formed a moving ray, or beam, through the tube. Braun discovered that an image could be produced when the ray struck a phosphorescent surface.
Cathode Ray Tube
Using this accomplishment as a building block, Braun used a changing current to deflect the electron beam within the cathode ray tube. The trace remaining on the tube's surface corresponded to the amplitude and frequency of the alternating-current voltage. He then set up a rotating mirror to produce a visible pattern based on the graphical representation of this current. His invention, initially known as Braun's electrometer and later as the oscilloscope, not only became the basic component of early television receivers but remained an essential instrument in electronic research.
A couple of subsequent inventors tweaked the CRT concept a bit:
·         In 1927, Philo Farnsworth invented the image dissector tube and was the first person to transmit an image composed of sixty lines of resolution. His subject was a dollar sign on a glass slide, backlit by an arc lamp. By 1928, Farnsworth was using electric scanning in both the pickup and the display device, and when he demonstrated it to the press, he billed it as the first working all-electronic television system.
·         Vladimir Zworykin, a Russian immigrant working for Westinghouse, was impressed with Braun's invention and began to work on improving it. When Westinghouse told him to stop wasting time on this impractical pursuit, he worked on his own time and came up with the "kinescope," a more sophisticated cathode-ray picture tube, and later the "iconoscope," the first all-electronic camera tube.
For decades, the cathode ray tube would be the workhorse of televisions and other display devices until its eclipse by the liquid crystal display screen.
If you are looking to purchase an IP camera or would just like information, please visit www.kintronics.com



Thursday, August 9, 2012

The Iris Adjusts the Lens Aperture, but What Adjusts the Iris?


The eye has an iris. When we say we have blue eyes or brown eyes or green eyes, we are referring to the iris. But the iris is more than the colored circle surrounding the pupil. It acts as the eye's diaphragm, reacting to the intensity of light to widen or narrow the pupil, which admits and focuses light on the retina at the back of the eye. Like the image sensor of a camera, the retina reacts to the light and sends a record of it, via the optic nerve, to the brain, which makes sense of the image. The iris constricts the pupil in bright light and dilates it in darker conditions.
The camera also has an iris, which serves the same function. The camera iris controls the lens aperture, opening it to admit light to the photosites on the image sensor. Iris control for fixed surveillance cameras comes in three types:
·         Fixed - set at a specific circumference; cannot be adjusted.
·         Manual - allows you to adjust the aperture by hand.
·         Automatic - adjusts the iris according to the prevailing light.
If the camera is mounted indoors where the lighting remains the same, one can get by with either a fixed or manual iris, since there is no reason to adjust the iris. However, when using an outdoor camera, automatic iris control is needed.
The iris setting affects image sharpness and depth of field. Depth of field refers to the distance, both in front of and behind the focal point, within which objects share the same degree of sharpness. The deeper the field, the greater the portion of the scene that is in focus. This is especially important in surveillance when covering a long corridor, an escalator, or a parking lot.
However, in waning light, a narrow aperture may not admit sufficient light. The pixels that correspond to the darker portions of the image may not collect enough photons, and opening the iris wide to compensate results in a shallow depth of field.
A wide iris opening reduces depth of field, while a narrow one increases it. The term f-number is used to define the size of the lens opening: the higher the number, the smaller the opening.
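The f-number relationship can be sketched with the standard definition, N = focal length / aperture diameter. The 50 mm lens and the diameters below are illustrative numbers, not from any particular camera.

```python
# Sketch of the f-number definition: N = focal length / aperture
# diameter, so halving the opening doubles the f-number.
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    return focal_length_mm / aperture_diameter_mm

# Illustrative 50 mm lens at two iris openings.
print(f_number(50, 25))    # 2.0  (wide opening: shallow depth of field)
print(f_number(50, 6.25))  # 8.0  (narrow opening: deeper field)
```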


 
The following chart shows the effect the size of the aperture opening has on the depth of field. A higher f-number increases the depth, while a lower f-number decreases it.





A smaller opening will also improve image sharpness. This is because any lens produces some image aberration when its whole surface is used, so the smaller the opening, the less of the lens is used, and the better the error reduction.
However, and there always seems to be a however, too small an opening can actually blur an image due to what's called diffraction. Diffraction arises in bright outdoor conditions when a lens closes its iris too far and the light is diffracted, or spread, over too many photosites, resulting in loss of detail and a dull, light-washed image. Megapixel cameras compound the problem because not only do they have a large number of photosites but, in many cases, the photosites are small and close together.
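A rough way to see why small apertures hurt megapixel sensors is the standard Airy disk estimate, diameter ≈ 2.44 × wavelength × f-number: once the diffraction blur spans more than a photosite, detail is lost. The 4.5-micrometer pixel pitch below is an assumed, illustrative figure, not one from the post.

```python
# Sketch of the diffraction limit: the Airy disk grows with the
# f-number, and blur appears once it exceeds the photosite size.
WAVELENGTH_UM = 0.55  # green light, in micrometers

def airy_disk_um(f_number: float) -> float:
    """Approximate Airy disk diameter (micrometers) at a given f-number."""
    return 2.44 * WAVELENGTH_UM * f_number

PIXEL_PITCH_UM = 4.5  # assumed photosite size for illustration

for n in (2.8, 8, 16):
    blur = airy_disk_um(n)
    verdict = "diffraction-limited" if blur > PIXEL_PITCH_UM else "sharp"
    print(f"f/{n}: Airy disk {blur:.1f} um -> {verdict}")
```

With these assumed numbers, stopping down past roughly f/8 spreads each point of light over several photosites, which is the loss of detail described above.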
The following set of images illustrates diffraction at different iris settings on cameras within a range of megapixels.


More precise iris control would go a long way toward decreasing diffraction, thus increasing sharpness and depth of field. Unfortunately, the DC-iris lens, the automatic iris mentioned above, only controls the iris in response to light intensity. It does not allow for any finer adjustments that might result in more accurate photon collection. Axis Communications, jointly with Kowa, has developed a solution for minimizing diffraction. It is called P-Iris.
P-Iris provides automatic, precise control of the iris opening; the P in P-Iris stands for precise. Rather than merely regulating the flow of light to the image sensor, P-Iris sets the iris at the optimum f-number, at which the central and best part of the lens is used. Using this preferred setting as the default ensures better contrast, resolution, and depth of field.
But in some lighting conditions, P-Iris may not be enough. In those cases, electronic processing is called for: gain (amplification of the signal level), an alteration in exposure time, or both can optimize image quality while maintaining the best iris position for as long as possible.
In the rare instance when neither the preferred iris position nor the electronic processing can correct the exposure, a camera equipped with P-Iris will automatically move the iris to a new position. Axis holds forth that any network camera equipped with P-Iris will adjust itself to produce crisp, high-definition images with good depth of field, no matter what the lighting conditions may be, and will do it all automatically.
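The control priority described here can be sketched as a small decision function: hold the optimal iris position, compensate with gain or exposure time when the light shifts moderately, and move the iris only as a last resort. The thresholds and names below are purely illustrative assumptions, not Axis's actual firmware logic.

```python
# Hypothetical sketch of P-Iris-style control priority (illustrative
# thresholds; a normalized light level of 1.0 means "ideal exposure").
def adjust(light_level: float) -> str:
    """Pick which control a P-Iris-style camera would reach for first."""
    if 0.5 <= light_level <= 1.5:
        return "hold optimal iris"       # stay at the preferred f-number
    if 0.1 <= light_level <= 3.0:
        return "adjust gain/exposure, keep iris"  # electronic processing
    return "move iris"                   # last resort: change iris position

print(adjust(1.0))  # hold optimal iris
print(adjust(0.3))  # adjust gain/exposure, keep iris
print(adjust(5.0))  # move iris
```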
In addition to Axis, other camera manufacturers such as CBC Ganz and Vivotek are using the technology. If you have any questions about P-Iris cameras or any IP camera, visit Kintronics at http://www.kintronics.com/neteye/neteye.html or fill out a request for information form.

Friday, July 20, 2012

Are you sure you need all those megapixels? (Maybe you can get by with a little help from your lens)



When it comes to selecting an IP camera, the first question out of some people's mouths is: how many megapixels does it have?
But before asking this question, you should ask yourself: how many megapixels do I need?
And before asking that question, make sure you know what degree of resolution you need.

Do you want to see the flow of foot traffic in your department store?


Or do you want to see the facial features of the woman pocketing that watch?


Resolution
Resolution is defined as the number of picture elements comprising an image. When dealing with IP cameras, the pixel is the unit of measure. Resolution may be expressed either as horizontal x vertical (640x480) or as a total number (1.4 megapixels). The finer the detail you want to see, the greater the resolution (more pixels) you should seek. However, it's not quite that simple. Megapixels do equal greater resolution, but there are a few more considerations, some plus, some minus.
(+)     Megapixels add greater resolution to the equation and you get a higher quality image.
(-)      But now you need more bandwidth and have higher storage requirements.
(+)     Compression and frame rate adjustments can solve this.
(-)      But this could lower the quality of your image.
Bearing all this in mind, the most practical thing to do is decide how many megapixels you need without going overboard.
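The bandwidth side of that trade-off is easy to put numbers on. The sketch below uses raw, uncompressed figures just to show the scale; real cameras compress heavily, so treat these as illustrative upper bounds, not actual network loads.

```python
# Back-of-envelope sketch: raw (uncompressed) video bit rate in Mbit/s.
def raw_mbps(width: int, height: int, fps: int, bits_per_pixel: int = 24) -> float:
    """Uncompressed data rate for a given resolution and frame rate."""
    return width * height * fps * bits_per_pixel / 1_000_000

print(raw_mbps(640, 480, 15))    # 110.592  (VGA)
print(raw_mbps(1920, 1080, 15))  # 746.496  (~2 MP: several times the data)
```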

PPF
Camera developers have settled on a minimum of 40 pixels per foot (ppf) as the standard for facial recognition. PPF refers to the resolution of the final video frame and is based on the size of the area being recorded. However, recognition is a broad term, implying that the person is already known to the viewer. In most cases, recognition is based not only on facial characteristics but also on familiarity with the subject's body build and perhaps choice of clothing. If the person is looking away from the camera, wearing sunglasses, or a hat is obstructing their face, you may sort of, kind of think you recognize them, but you can't be sure. The ideal resolution would allow you to identify them. Notice I made a distinction between recognizing and identifying. For identification, the standard has been raised to 80 ppf across the person's face. So if you are watching a section of the store that is 20 feet wide, say the jewelry counter, you need a camera that gives a horizontal resolution of at least 1600 pixels (20 feet x 80 pixels per foot).
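The arithmetic is just scene width times the target ppf, and it can be sketched directly:

```python
# Sketch of the pixels-per-foot sizing rule described above.
RECOGNITION_PPF = 40      # minimum for facial recognition
IDENTIFICATION_PPF = 80   # minimum for facial identification

def required_pixels(scene_width_ft: float, ppf: float) -> float:
    """Horizontal resolution needed to hit a ppf target over a scene."""
    return scene_width_ft * ppf

print(required_pixels(20, IDENTIFICATION_PPF))  # 1600.0 for the jewelry counter
print(required_pixels(20, RECOGNITION_PPF))     # 800.0 if recognition is enough
```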

Comparison images: 40 ppf vs. 80 ppf

On Second Thought
Suppose you want to cover the 20-foot jewelry counter plus 10 feet on either side; your total area of surveillance is now doubled to 40 feet. Picture a triangle here: by moving back you have widened the base of the triangle, but the angle remains the same. You have widened the field of view. And with a wider field of view comes the need for higher resolution to maintain that optimum facial identification. Since you have doubled your area to 40 feet, the original 1600-pixel-wide camera would now give you half the final resolution, 40 ppf, adequate for facial recognition but not identification. So if you want to keep that requisite 80 ppf, you could consider purchasing a camera with a higher horizontal resolution of at least 3200 pixels (40 feet x 80 pixels per foot). But now you have those storage and bandwidth excesses we spoke of. There is another way, though.
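The halving effect falls straight out of the definition: the same pixels spread over more feet means fewer pixels per foot.

```python
# Sketch of why widening the scene with the same camera cuts the ppf:
# achieved ppf = horizontal pixels / scene width in feet.
def achieved_ppf(horizontal_pixels: float, scene_width_ft: float) -> float:
    return horizontal_pixels / scene_width_ft

print(achieved_ppf(1600, 20))  # 80.0: identification quality
print(achieved_ppf(1600, 40))  # 40.0: doubled width, only recognition quality
```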

Consider the Lens
If you remember from previous blog entries, the lens plays a major part in the quality of the image and the field of view. It determines how far away you can see something, and it also defines how wide an area you can cover.
If we are monitoring an area closer to the camera, we need a wider-angle lens with low magnification, and thus a shorter focal length (fewer mm). But if we are monitoring something farther away, our angle of view becomes narrower and we need a longer focal length (more mm), which gives us greater magnification. With a fixed-angle lens this is an either/or proposition, but in the expanded-area situation described above we need both. There is a way, however, to widen the angle of view, maintain focus, and still use the original camera.
You could use a variable (varifocal) lens. Variable lenses give you a choice of fields of view. If the field of view is changed, focus can be maintained within a range of focal lengths. Variable lenses are labeled according to zoom capability. For example, a lens advertised as having an 8x zoom refers to the ratio between its longest and shortest focal lengths, and might give you a range of 6 mm to 48 mm.
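The trade between focal length and angle of view can be sketched with the standard thin-lens relation, FOV = 2·atan(sensor width / 2·focal length). The 1/3-inch sensor width (about 4.8 mm) is an assumed, common surveillance format, not a figure from the post.

```python
import math

SENSOR_WIDTH_MM = 4.8  # assumed 1/3" sensor for illustration

def horizontal_fov_deg(focal_length_mm: float) -> float:
    """Horizontal angle of view, in degrees, for a given focal length."""
    return math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * focal_length_mm)))

# The 8x varifocal range quoted above: 6 mm (wide) to 48 mm (tele).
print(round(horizontal_fov_deg(6), 1))   # wide end: broad coverage
print(round(horizontal_fov_deg(48), 1))  # tele end: narrow, magnified view
print(48 / 6)                            # 8.0: the zoom ratio on the label
```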


This would be an efficient move. You still have the facial detail you need, but since you have not increased the resolution, you haven't increased the bandwidth. Furthermore, if in the future a decision is made to once again change the viewing area, this lens can adapt, giving you added flexibility.


Carpenters have an old saying about making sure they don't waste wood: measure twice, cut once. The same could be applied to choosing the right camera. Consider your measurements. Granted, choosing a camera without adequate resolution won't give you the surveillance you need, but at the other end of the scale, purchasing a camera on the sole basis of megapixels can very well give you more resolution than you need. That means you're not only wasting budget dollars but eating up bandwidth without reaping any additional benefits.
If you need advice in choosing a camera or putting together a system that will give you the most for your investment, call Kintronics at 800-431-1658 or go to our website and fill out a request form.