Find online courses made by experts from around the world.
Take your courses with you and learn anywhere, anytime.
Learn and practice real-world skills and achieve your goals.
Video Production Training 101 with Camera Operations Technique - Learn some of the tricks and secrets of video production from the pros at VideoSkills Academy.
Throughout this video production training course, you will learn the various aspects of camera operations for both studio and remote productions.
You will become familiar with modern camcorder functions and basic operation.
You will understand common production terms and the roles of personnel in the entire production process.
You will know what it takes to shoot properly composed and framed video.
You will learn proper lighting techniques and control the total exposure in this video production training course.
You will understand the difference between professional and amateur video.
You will take your great-looking video to the next step: editing… from there, the sky is the limit!
Take this Video Production Training course and learn exclusive camera operation techniques and a lot more.
Not for you? No problem.
30-day money-back guarantee
Learn on the go.
Desktop, iOS and Android
Certificate of completion
|Section 1: Camera Basics|
INTRO TO VIDEO PRODUCTION 101 - WELCOME! We’re glad you’re here. In this video, we give you a brief overview of what you can expect to learn in the first session. We cover the basics, but not to the extent that the more experienced videographer will be yawning; there’s information here that benefits just about every student level. In the remaining sessions we build on this basic foundation and concentrate on developing the skill and art of the craft through better shooting and editing techniques.
We are going to teach you camera operations for both studio and remote events.
Remote events are the core of television production. There are things that movies can do better, and things that radio can do better, but nothing can tell the story better than video. The remote equipment we use is highly mobile and capable of accessing many locations. Most remote productions are live events over which a director has little or no control; it becomes the responsibility of the videographer to capture the event as it happens. There are, however, remote productions, such as plays, that offer a certain element of control. Productions other than news events usually require multiple cameras.
Single camera remote productions are often referred to as electronic news gathering (ENG) productions or electronic field productions (EFP).
CAMERA BASICS -
The primary purpose of a video camera is to focus light through the lens onto a silicon sensor chip, or onto a prism that separates the light into three colors - red, green, and blue - and sends each color to its own sensor chip. Generally speaking, video cameras with three sensor chips produce higher-quality video. A three-chip camera will have greater color definition and higher clarity, but will be more expensive.
The sensor chips in video cameras are usually CCD chips. CCD stands for charge-coupled device. These sensor chips are covered with thousands of picture elements called pixels. On a single-chip camera, the pixels are clustered in groups, and each pixel within the cluster responds differently to incoming red, green, and blue light. Three-chip cameras work differently: each pixel on the red sensor chip responds to the amount of red light falling on the sensor, and likewise for the green and blue chips. The camera combines the light-intensity values from the red, green, and blue sensors to reproduce the full range of colors, including yellow, cyan, and magenta.
SENSOR CHIPS -
Sensor chips respond to light falling on them by producing electrical signals that are amplified and fed to processing circuits throughout the camera. The electrical stream is continuously fed to a camera buffer at a specific frame rate. Normally, the buffer transfers its signal once every 60th of a second. Every time the buffer fills and empties, the camera is said to have created one field of video... not a complete frame of video. Each field of video contains only half of the information necessary to produce one complete frame of video. Each field produces either the odd scan lines or the even scan lines. Since the buffer does not produce all the lines of a frame at once, but only half of the lines, the standard definition picture is said to be interlaced.
What is a "camcorder?" A camcorder combines a video camera and a videotape recorder (VTR) or a digital recorder (DTR) in a single unit. Abbreviate and combine those two names and you get the word “camcorder.”
Modern camcorders have moved away from recording images on tape-based formats such as VHS and Hi8. Today, most camcorders record on a solid-state memory card such as CompactFlash, SD, SDHC, SDXC, or P2 (Panasonic).
Camcorders are designed with all the necessary controls in the camera and are compact and flexible so they can be used in a remote setting or “on location.”
Camcorders come in a variety of types, and often you can mix and match a camera with a variety of different recording devices. One of the main differences between a camcorder and studio camera is that the camcorder is small and portable and suitable for both indoor and outdoor use, while the studio camera is not.
MODERN CAMCORDERS -
Modern video cameras on the market today are technological marvels. Today, you can buy a digital camcorder that largely outperforms yesterday's professional studio cameras.
The truth is that nearly all cameras today are so good that, at the student video level, there is very little difference between models and brands. They are all capable of shooting quality video when used properly. In those three words that conclude the previous sentence, you'll find the magic that calls forth good video: when used properly.
Cameras are tools in the same sense that a paintbrush is a tool. The paintbrush doesn't paint the wall or the picture, the person operating it is the artist. What makes the difference isn't the brand or even the features of the camcorder, it's the knowledge and skill of the camera operator.
Let's look at some of the basic skills that separate good camera work from the . . . well . . . not-so-good. The goal of good camera work is really pretty simple: we want to capture scenes that draw the audience into the story, and we want whatever is captured to reach the viewing audience clearly and without distraction.
If a technical aspect of the captured video is annoying or frustrating, that's bad. Problems that commonly diminish the quality of video fall into three general categories: exposure, framing, and camera movement.
|Section 2: Image Control|
EXPOSURE - This term refers to how much light the camera lens lets in, and it determines how bright or dim the recorded image looks.
Modern digital cameras perform auto exposure quite well, so this factor is less critical than it used to be, but auto exposure can still be fooled. For instance, if you're shooting video of a person standing in front of an exterior window, the light behind the person may cause the camera to underexpose the person in the shot. This is called backlighting. Similarly, trying to record in a dimly lit room or a dark space can leave you with grainy, underexposed images lacking color and life. Too little light causes underexposure; too much causes overexposure.
Always be mindful of the environment - and understand that in order to produce well exposed video, it may be necessary to change the shot angle, or move to a place where the lighting is better before you press the record button.
The IRIS within the lens adjusts the amount of light that exposes the image sensor. The f-number (focal ratio or f-stop) is the focal length divided by the "effective" aperture diameter.
The lower the ratio (f1.4) the larger the aperture. Lenses with lower f-stops have bigger glass elements and are considered “faster” lenses.
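The focal-ratio arithmetic above can be sketched in a few lines of Python. The lens values below are hypothetical, chosen only to illustrate the ratio:

```python
# f-number = focal length / effective aperture diameter.
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """Return the focal ratio (f-stop) of a lens."""
    return focal_length_mm / aperture_diameter_mm

# A 50 mm lens with a ~35.7 mm effective aperture is roughly f/1.4 (a "fast" lens):
print(round(f_number(50, 35.7), 1))  # -> 1.4

# The same lens stopped down to a ~8.9 mm aperture is roughly f/5.6 (a "slower" setting):
print(round(f_number(50, 8.9), 1))   # -> 5.6
```

Notice that a lower f-number means a larger aperture relative to the focal length, which is exactly why "fast" lenses need bigger glass.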
Exposure / F Stop Example
LENSES & FOCUSING -
Most camcorders have zoom lenses, meaning the lens can move from one focal length to another. In other words, it can zoom in for a close-up shot, zoom out for a wide shot, and cover anything in between.
Consumer-grade video cameras have much smaller image sensors and lenses than professional-grade cameras, so they struggle to produce a good image in low light.
Having more light than actually needed offers you the flexibility to experiment with focus and exposure settings.
Whenever you zoom in on a subject with a long telephoto lens (for example 20x magnification), the image tends to flatten: the foreground and background appear compressed together, so there is less depth perception. This isn't a bad thing, however, and can be quite flattering to your subject. Conversely, a wide lens can make a subject look wider and heavier than in real life. Beware, this may be unflattering depending on who your subject is.
Digital zoom is an electronic process that artificially enlarges the picture elements (pixels). Unfortunately, image quality begins to decline as soon as you switch from optical zoom to digital zoom. Discount any sales pitches that tout “up to 120x (or more) magnification.” It sounds good on paper, but don't be deceived; there's really no advantage.
Lenses with large apertures, f1.2 to f2.8, let in more light than an f5.6 lens.
Lens apertures can vary from f1.2 to f22.
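Because light transmission scales with the inverse square of the f-number, each full stop roughly halves the light. A quick sketch of that relationship (the comparisons are illustrative, not tied to any particular lens):

```python
def relative_light(f_stop_a: float, f_stop_b: float) -> float:
    """How many times more light an aperture of f_stop_a passes than f_stop_b.

    Light transmission is proportional to 1 / f-number squared.
    """
    return (f_stop_b / f_stop_a) ** 2

# f/1.4 passes about twice the light of f/2 (one full stop):
print(round(relative_light(1.4, 2.0), 1))  # -> 2.0

# ...and about 16 times the light of f/5.6 (four full stops):
print(round(relative_light(1.4, 5.6)))     # -> 16
```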
Have you ever noticed someone’s home movie where, every time they move the camera or zoom in or out, the focus goes all over the place? Most modern camcorders have an “auto focus” feature that takes the pain out of manual focusing. But it comes at an unfortunate price: it cannot make instantaneous adjustments, since it takes a few seconds to analyze the central area of the image arriving through the lens, calculate what can be sharpened, and make the appropriate changes. Much as it took a moment to read that last sentence, it takes a while for the auto-focus process to complete, and by the time it accomplishes the task at hand, you’ve moved the camera again and the whole process has to start over. One way to improve the accuracy and consistency of auto focus is to increase the stability of the camera. How do we do that? The best way is to use a tripod. Make all your camera movements slow and gentle, and give your camera a chance to stabilize before you hit record.
What can fool the auto focus? Sudden movements. It may not be you this time: a wind gust moving a tree limb or shrub in the background can make the auto-focus mechanism decide that the background is now your area of interest. Suddenly the focus blurs your subject and zeros in on the background, just long enough to ruin your shot. Another recipe for disaster is a sudden drop in light. You’re already shooting a dimly lit evening scene outdoors, and a dark cloud decides to interrupt that beautiful setting. The auto focus needs light to make its calculations, and when it loses that ability, it starts “hunting” back and forth. You might as well call it a day.
But, hark! There is a solution. Turn off that auto focus and, if your camcorder allows it, manually set the focus for a sharp, stable shot. Check your owner’s manual on how to set manual focus. Some camcorders have a “spot focus” feature on the touch-screen controls that lets you touch the screen in the area where you want to concentrate your focus.
No one will argue that our eyes are pretty important pieces of our anatomy, and likewise the lens is the eye of your camera, so it deserves the same care.
Here's a quick and easy tip for the next time you pack up your gear for a shoot: inspect your lens for dust, smudges, and fingerprints before you leave.
Simple dust particles: a little dust on the lens can usually be removed with a few puffs from an air blower.
For more aggravating dust, use a fine brush to gently knock off particles.
As you watch scripted TV, commercials, or feature-length films, notice that you'll never see lens gunk getting in the way of a shot (unless it's an intentional, stylistic choice). That's because professionals keep their lenses spotless.
TIP: To add a professional touch to your movies, try experimenting with manual focus settings. Using a tripod, zoom out (wide shot) to frame your scene with something in the near foreground just off center, like a leaf on a tree. Manually focus on the tree leaf. Notice your object in the far distance. It’s out of focus. Now, while you’re recording, slowly change the focus from the tree leaf to the subject in the distance. It will take a little practice, but you’ll notice a cool effect frequently used in the movies and TV shows. It’s called “rack focusing” or “pull focus.” See how it shifts the area of interest and how our eyes automatically follow?
Zoom Lens Example
DEPTH OF FIELD -
Depth of Field refers to the range of distances in front of the camera within which objects appear acceptably sharp.
Without going into all the fancy math and complicated drawings Depth of Field can be explained this way:
The sharpest image, where your focus is properly adjusted, provides the greatest image detail.
Aperture or your lens f-stop affects the range of focus.
A low f-stop (large aperture) gives a very narrow range of focus, while a high f-stop, like f16, gives a much greater range of focus, or an increased depth of field.
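One way to see the f-stop's effect on depth of field without the fancy drawings is the standard hyperfocal-distance formula, H = f²/(N·c) + f, where f is the focal length, N the f-number, and c the circle of confusion. The 0.03 mm value below is a commonly assumed circle of confusion for a full-frame sensor, and the lens values are hypothetical:

```python
def hyperfocal_mm(focal_length_mm: float, f_number: float,
                  circle_of_confusion_mm: float = 0.03) -> float:
    """Hyperfocal distance in mm: focusing here keeps everything from
    half this distance to infinity acceptably sharp."""
    f = focal_length_mm
    return f * f / (f_number * circle_of_confusion_mm) + f

# A 50 mm lens wide open at f/1.8: hyperfocal distance ~46 m (shallow depth of field).
print(round(hyperfocal_mm(50, 1.8) / 1000, 1))  # -> 46.3

# The same lens stopped down to f/16: ~5.3 m (much greater depth of field).
print(round(hyperfocal_mm(50, 16) / 1000, 1))   # -> 5.3
```

The stopped-down lens reaches "everything sharp" at a far closer focus distance, which is exactly the increased depth of field described above.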
Soft Focus, or Bokeh -
So here's a new word for you: bokeh, pronounced "BOH-kay," is a Japanese term for the out-of-focus area of a shot, usually blurred for artistic purposes; using it is a creative way to turn out-of-focus areas of a shot to your advantage.
WHITE BALANCE -
Professional cameras require a certain amount of technical knowledge to operate them correctly. One of the most basic procedures required when operating a professional video camera is a procedure called white balance.
Some people have heard the term white balance and may think all you need to do is find a white paper, put the paper in front of the camera and press the button marked "White Balance". Well, there's more to it than that.
What is white balance you ask? White balance is a process of adjusting the camera's color sensitivity to match the prevailing scene color temperature. On most cameras, white balance may be set to automatic or manual. On professional video cameras, white balance can be set to exactly match the light source.
Many video cameras offer fully automatic white balance, which leaves many amateur camera operators with the impression that they do not have to worry about the light source they are shooting under. Automatic white balance is convenient, but it is also prone to color shifts, especially when shooting with mixed light sources. For optimum results, the professional videographer will perform a manual white balance any time lighting conditions change.
Manual white balance is a camera setting that allows the camera operator to match the exact lighting conditions used while shooting a scene. Manual white balance is the professional way to shoot and it is very easy to accomplish. The camera operator shoots a white card and presses a white balance button on the camera. The white balance procedure adjusts the red, green and blue signals so that the white card appears white and exhibits no color tint. This procedure is performed every time the camera is powered up and every time different lighting conditions are encountered. All professional cameras allow you to set white balance manually.
It is vitally important to re-white balance when moving between indoor and outdoor scenes, and when moving between rooms lit by different types of light. During the early morning and late evening hours, the color temperature changes quickly and significantly. Even though your eyes don't always notice it, your camera will.
Each type of light has a numerical color temperature. Typical values: midday sunlight is around 5600K, a shaded area is higher, around 7000K, an incandescent bulb is around 3200K, and fluorescent tubes range from roughly 4000K to 5000K depending on the tube. Indoor incandescent lighting has a much lower color temperature (a warmer, more orange look) than outdoor daylight, which is why it is important to re-white balance when moving between indoors and outdoors.
TIP: Learning to set the white balance manually can save you some grief when the auto white balance on your camcorder gets fooled by changing or multiple light sources in your scene. Sunlight coming through a window, fluorescent lights and incandescent lights all have different color temperatures. If you point your camera in different areas around the room the auto feature will try to correct to the proper white balance and cause blatant color shifting during your shot. The results can be quite ugly.
Consistent white balance can be as easy as changing the setting on your camcorder to manual and follow these steps. Zoom in to a white piece of paper, someone’s shirt, or even an automobile (out of desperation) and press the White Balance (WB) button on your camcorder. You should see some sort of verification in the viewfinder or LCD screen the WB has been completed. Now, shoot the scene and set up for your next shot.
Manual White Balance Example
COLOR TEMPERATURE -
First of all, it’s not a bad idea to consider investing in some lighting equipment: either an inexpensive battery-operated light that fastens to the camcorder’s accessory shoe, or some stand-alone lights. It will really help in those low-light situations where dark, grainy video is the alternative. Generally, the more light you have to work with, the better.
Color temperature refers to the color that various light sources give off. The human eye and brain aren’t aware of the changes unless they’re dramatic, like going from the orange glow of a high-pressure sodium street light to a bright blue sunny sky. Modern camcorders have preset values to select for typical lighting conditions. The normal color temperature of light at noon on a sunny day is measured at 5,600 Kelvin (the standard of color temperature), although on very hot, sunny days when the sky is blue and the light intense, the color temperature can be much higher. With your camcorder set to daylight mode you’ll be able to record such a scene so it looks perfectly normal when played back. However, if you switched off the auto white balance function and recorded indoors under artificial incandescent light with that daylight setting, the footage would have a distinctly reddish-orange cast. Similarly, if the camera’s white balance is set for indoor shooting and you move outside, the playback will have a bluish hue.
FRAMING and BACKGROUNDS -
Always take into consideration the environment in which you are positioned in order to create a pleasing shot composition. In other words, always check your backgrounds before hitting the record button. Even when a shot is reasonably composed, it’s fairly common to see footage where no attention has been paid to the background. Sometimes, unbeknownst to the TV interviewer, there is something goofy going on behind them. Sure, it can make for some hilarious home-movie clips, but more importantly it reminds us to think about what’s going on behind the subject.
Not unlike still photography, you want to make sure your background is exactly what you want. Switch the camcorder on, look through the viewfinder or flip open the LCD view screen and consider everything that’s in the frame. Position the camera to verify the relationship between the subject and the background looks good. If it doesn’t, make the necessary adjustments before recording. This simple break in the action can avoid making unnecessary or embarrassing mistakes. Just imagine your friend at an ice cream parlor sitting down to catch that first delicious drip from the cone and you failed to notice the overflowing trash can (complete with flies) in the background. Yuk! Sometimes we get so used to familiar objects we don’t notice them until it’s too late.
TIP: Making adjustments can be as simple as changing the angle. Think three dimensionally. Change the camera position to up, down, left, right, in for tighter shot or back for a wide one. Use any combination to create the best and cleanest composed shot. For kids playing on the floor, get right down there with them. Low angle close up shots of their faces and hands create a very involving video.
Serious videographers take time to develop their skills in the basics of composition.
If you haven’t heard by now, it’s pretty important to properly frame your featured person or object. The basic rules of composition have changed over the years since the early days of film and photography, evolving into more artful and pleasing cinematic criteria. It’s always important to remember that your audience will see only what you allow them to see. Close-up shots of the kids building sand castles won’t have nearly the impact unless you can show your audience the beautiful sandy beach with palm trees and the rest of the family as part of your story. Good composition can be employed to tell the complete story in a single shot. It also works to hide elements that you don’t want your viewers to see. You can do this. Usually we all get in a big hurry to get the shot completed, but if we take a few moments to carefully compose it, we’ll see dramatic improvements in the overall look.
Remember the Rule of Thirds? It’s when you draw an imaginary tic-tac-toe grid of two horizontal and two vertical lines across your viewfinder or LCD monitor. Most of the time people will try to place the focal point of a scene in the middle of the frame regardless of what other points of interest there are. (some consider that old school) Try applying the Rule of Thirds to the composition and you will transform your scene. It really is so fundamental to good composition that it’s practically carved in stone. Whether we’re dealing with landscapes or people, find areas of keen interest, like the eyes or a lone tree on the horizon, and place them at the intersection of one of the horizontal and vertical points. Watch for balance in the overall frame and make sure there is nothing distracting from your main focus. Remember, you’re the artist with the goal of creating the scene with the most visual impact.
Vanishing points are areas of a scene that draw the viewer’s eye into the frame and toward a particular place. They work well in a wide shot, also known as an establishing shot. It’s never a good idea to make the center of the frame the vanishing point; pan your camera to the left or right, using the rule of thirds, to create a more interesting shot that is pleasing to the eye.
Anytime you employ a facial close-up shot, you’re invoking intimacy. A Very Close Up (VCU) shot conveys an extremely intimate feeling, as if you can now tell what that person is thinking. We’re talking about a tight shot from the chin to just above the forehead, at an angle that reveals the face and eyes looking off into the distance. Normally, your subject wouldn’t allow you to get that close: you’d be invading their space. But the camera can, and now your audience can sense that closeness as an emotional privilege.
Tip: When composing shots of people, resist cutting people off at certain anatomical joints such as elbows, knees, necks and hips. This tends to be unflattering and leaves the viewer uncomfortable with the shot. Make your cuts just above or below those areas for the most flattering picture.
Rule of Thirds
Composition Example - The video clip under the Extras tab illustrates how the medium shot is utilized along with the close up shot. The opening shot is considered a medium shot then a close up shot is taken to provide a more detailed view of the project. This video is part of the cake decorating training offered by Jodi at www.cakeartbyjodi.com
Field of View-
Get to know your camcorder's true field of view. The viewfinder on the camcorder can display a slightly different viewing frame than you might see when you play back your recorded video on a TV monitor. Images usually appear slightly smaller in the camcorder's viewfinder because some manufacturers compress the frame in the viewfinder so you can see all four edges of the frame; it's usually no more than 5%. On playback, the images on the monitor may appear slightly larger.
When checking the specifications on a camcorder's frame coverage, a 100% rating will eliminate any guesswork.
When shooting outdoors, always try to place your subject in the shade with the sun at your back. Use a reflector to highlight your subject's face with a soft glow. If you don't have a reflector, use an automobile sunshade that has a highly reflective surface; it's a very good, inexpensive substitute for a photographic reflector. Avoid harsh, bright sunlight and high-contrast shadows on your subject's face. Besides the "squinting" problem of looking into the sun, your subject will thank you when they can enjoy the shade and the less bothersome reflection from the car sunshade. Their appearance will be enhanced by a softer, more evenly lit face that stands out from the background.
On some occasions, the sunlight and the surrounding area may be too bright and overwhelming for the soft effect you're looking for. In that case, professionals sometimes use an ND filter. ND stands for Neutral Density. ND filters come in various strengths that reduce the intensity of light without affecting color rendition. Most professional camcorders have ND filters built in, easily selected to enhance the shot. They range in value from ND2, equivalent to a 1 f-stop reduction, to ND8192, which represents a 13 f-stop reduction. ND filters let you experiment with slow shutter speeds, for instance to create motion blur on a waterfall, which would otherwise be impossible in full sunlight while still getting proper exposure from your ISO and aperture settings. If you don't have ND filters built in, no need to panic: they are available as screw-on filters for the front of your lens, in a variety of sizes to fit DSLR and camcorder lenses. Tiffen even makes a variable ND filter. They are easy to carry and attach and make a nice addition to your gear.
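Those ND numbers map onto f-stops by a power of two: an NDx filter cuts the light by a factor of x, so the stop reduction is log2(x). A quick check of the values above:

```python
import math

def nd_to_stops(nd_factor: float) -> float:
    """Stops of light reduction for an ND filter of the given density factor."""
    return math.log2(nd_factor)

print(nd_to_stops(2))     # ND2    -> 1.0 stop
print(nd_to_stops(8))     # ND8    -> 3.0 stops
print(nd_to_stops(8192))  # ND8192 -> 13.0 stops
```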
For indoor lighting, if you don't have a fancy set of studio lights, don't worry. Turn on every available light in the room and even bring in more lights if you have them. A table top lamp can be used near your subject to light their face, or to add a soft light to the background. Besides adding lights to the room, remember that where you place your subject in relationship to the light source can make a world of difference. Correct any unflattering shadows by rearranging your subject and the lighting to get just the look you want.
Many entry level video editing programs like Movie Maker, iMovie, Magix Movie, Roxio, etc. have an impressive range of filters, titles, soft focus effects, banners, vignettes, and much much more. The big problem, of course, is that it is very tempting to go overboard with effects to the point where the production soon becomes saturated with it. The result is the overall quality of the edited movie suffers. If you use the effects sparingly, creatively, and artistically, they will have a considerable impact upon your movie. It's important to know where to draw the line.
Picture overlays you've probably seen look like a video clip floating in a box over a background, with the action inside the box continuing to play. It's a kind of multilayer approach commonly used on TV broadcasts like the Home and Garden or Travel channels, where a shot of home remodeling in progress is also described in a text banner along the bottom of the screen.
An over-the-shoulder shot is commonly used to insert a title, graphic, or another video (picture in picture) literally above and to the right or left of your talking head. All of these effects are added in the editing process so shooting it right is essential for ease of editing and that professional look.
More advanced editing programs have built-in picture-manipulation capabilities, like multiple video tracks for layering clips, titles, and graphics.
If you want to incorporate some still shots in your video, no problem. Importing images is easy if you have either a scanner (flat bed or slide) or a digital camera. Most video editing suites will accept images in today's common formats such as JPG, BMP, PNG and TIFF.
Apple Mac programs like iMovie, Final Cut Express and Pro will accept images in JPG, MOV and PICT formats as well as others. Photo editing software packages such as Photoshop, Lightroom, Paint Shop Pro, will provide you with the capability to manipulate your stills to make artistic backgrounds, lower third banners and just about any kind of overlay you can think of.
Tip: Make sure that your imported images conform to the resolution of your sequence settings. Consider whether you need a 16:9 format or 4:3, and whether your resolution is 720 x 480 or 1280 x 720. Importing an image in the wrong aspect ratio or resolution will cause it to be “squeezed” or “stretched” instead of appearing normal.
|Section 3: Standards|
STANDARD DEFINITION -
The National Television System Committee (NTSC) was established in 1940 by the United States Federal Communications Commission (FCC) to resolve the conflicts that arose between companies over the introduction of a nationwide analog television system in the United States.
In March 1941, the committee issued a technical standard for standard definition television that built upon a 1936 recommendation made by the Radio Manufacturers Association (RMA).
The standard recommended a Frame Rate of 30 frames (images) per second, consisting of two interlaced fields per frame at 262.5 lines per field and 60 fields per second. Other standards in the final recommendation were an aspect ratio of 4:3, and frequency modulation (FM) for the sound signal (which was quite new at the time).
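Those numbers fit together as simple arithmetic: two interlaced fields (one of odd lines, one of even) make one complete frame, so the line count and frame rate follow directly:

```python
lines_per_field = 262.5
fields_per_frame = 2        # odd scan lines + even scan lines, interlaced
fields_per_second = 60

lines_per_frame = lines_per_field * fields_per_frame      # total NTSC scan lines
frames_per_second = fields_per_second / fields_per_frame  # complete frames per second

print(lines_per_frame)    # -> 525.0
print(frames_per_second)  # -> 30.0
```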
DIGITAL VIDEO -
DV and HDV cameras today offer greatly improved performance over the older analog VHS and 8mm cameras. Not only will you get better screen resolution but you will also experience far better color fidelity and the ability to edit the digital video without significant loss of quality. Modern camcorders can record 720 lines progressive or 1080 lines interlaced.
The digital video revolution of the 1990s had a major impact on the affordability, availability, and usability of the medium. Not all aspects of that revolution are positive: affordable technology has also enabled work that lacks professionalism and skill.
There was a time when a professional videographer was considered to be a true craftsman, a special craftsman with access to the best storytelling tools available. Today, almost anyone can purchase a camcorder or DSLR camera, print up a few business cards, and call themselves a photographer or videographer.
Video equipment today has become so easy to obtain and use it is no longer an issue who owns the equipment to produce interesting stories. We all have those capabilities. The issue facing the professional videographer is not who owns the equipment but rather who owns the craft. It is the craft that is much tougher to come by.
To become a professional videographer it takes more than equipment. The professional utilizes the equipment to capture and tell a story.
Professional videographers are craftsmen who have evolved into dominant storytellers. They use their equipment to communicate their story as an artist would use paint and a paintbrush.
It is due to the digital video revolution that we are all shooters now, but, that does not make us all storytellers and visual communicators. There are many poorly produced videos that suffer from the lack of fundamental skills.
By taking this course through VideoSkills Academy, it is evident that you want to learn the craft.
ASPECT RATIO -
For many years it was very obvious that the screen of the home television was shaped much differently than the screen in a movie theater. Both screens are rectangular, but the home television screen was closer to a square, while the movie theater screen was considerably wider than it was tall. Technically speaking, the relationship of width to height is referred to as the aspect ratio. Traditionally, the home television screen was constructed with a 4:3 ratio, while the movie theater screen has a much higher aspect ratio. Television receivers today offer an aspect ratio closer to that of the movie theater screen: the aspect ratio of televisions sold today is 16:9, meaning that for every 16 units of width there are 9 units of height. The common term used today for the 16:9 aspect ratio is widescreen. For example, a television screen that is 32 inches wide would be 18 inches high.
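The widescreen arithmetic from the paragraph above, sketched in Python:

```python
def height_16x9(width: float) -> float:
    """Height of a 16:9 (widescreen) frame for a given width, same units."""
    return width * 9 / 16

def height_4x3(width: float) -> float:
    """Height of a traditional 4:3 frame for a given width, same units."""
    return width * 3 / 4

print(height_16x9(32))  # -> 18.0  (the 32-inch-wide widescreen example)
print(height_4x3(32))   # -> 24.0  (the same width in the old TV ratio)
```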
It is important to conform to standard aspect ratios so that images retain their correct proportions on any television screen. You may have noticed that your favorite TV show keeps the same proportions on a large or small screen.
Films released at the theater and widescreen video formats use aspect ratios that are very different from the original standard definition NTSC video format. The widescreen format of today allows videographers to compose wider landscape shots and action sequences.
These technological changes are all good, but they can cause difficulties if a project is shot in one aspect ratio and needs to be shown in another.
AGC: automatic gain control. Circuitry designed to boost a signal, whether video or audio, in order to bring it within acceptable parameters. Used in a camcorder's automatic video and audio level control circuits.
AIFF: audio interchange file format. The native audio format on Apple Macintosh computers.
Anamorphic: normally associated with widescreen recording and playback. Anamorphic lenses optically compress the incoming 16:9 image into a 4:3 frame, which is then stretched back out during editing or playback. This differs from the pseudo-widescreen effect offered on many DV cams, in which a standard 4:3 picture is cropped top and bottom to give a letterbox effect. Some recent camcorders use a "wide" CCD chip to achieve a true widescreen image.
Aspect ratio: in simple terms, a picture's aspect ratio is the ratio of the width of a picture to its height. The conventional video and TV aspect ratio is 4:3 (four units of width measurement to every three units of height). Widescreen movies have a standard aspect ratio of 16:9, which is now offered by camcorders with either a wide CCD chip or an anamorphic lens.
Audio sampling: sound entering the video camcorder via a mono or stereo microphone needs to be converted from analog to digital before it can be stored on tape or media cards. On entry, it is sampled at a frequency at least twice its highest pitch, most commonly at 48, 44.1, or 32 kHz.
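That twice-the-highest-pitch rule is the Nyquist criterion, and it is easy to check numerically. This is an illustrative sketch, not part of the course:

```python
def min_sample_rate(highest_pitch_hz):
    """Nyquist criterion: sampling must run at least twice the highest frequency."""
    return 2 * highest_pitch_hz

# Human hearing tops out near 20 kHz, so the minimum rate is 40 kHz --
# which is why 44.1 kHz and 48 kHz both comfortably clear the bar:
print(min_sample_rate(20_000))  # 40000 Hz
```

A 32 kHz rate, by the same logic, can only faithfully capture pitches up to 16 kHz, which is why it is reserved for lower-fidelity recording modes.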
Authoring: the process of combining all of the media assets into one file prior to burning a DVD or CD video project. The process results in the creation of a .VOB file containing compressed video, menu graphics, and chapter marking data.
Autofocus: a circuit provided in all consumer format camcorders and many professional ones in which the optical system focuses on the predominant object within the visible image. Autofocus works best in bright light, where objects are clearly defined, but can "hunt" for focus when the scene in front of the camera is dimly lit.
AVI: audio video interleaved. A Microsoft media container format for use within Windows, and the default file format for capturing video on Windows-based systems. Also used as an Internet streaming and non-streaming format.
Backlight: source of illumination that originates from the opposite side of the subject to the camera lens. Used to light the back of people's heads and shoulders in order to achieve depth. Also useful in landscape shots, such as shooting against the sun to emphasize depth.
Bidirectional: the field of sensitivity possessed by a microphone which gives it a figure eight sensitivity pattern. A bidirectional microphone is particularly useful in picking up a conversation between two people.
Bit: a contraction of binary digit, the smallest piece of information a computer can use. Each alphabetic character requires eight bits (called a byte) to store it.
Byte: a single unit of computer data made up of eight bits (zeros and ones) which is processed as one unit. Eight bits can be configured in only 256 different permutations, so a byte can represent any value between 0 and 255. Multiples of the byte are commonly measured in thousands (kilobytes), millions (megabytes), and billions (gigabytes). One trillion bytes is called a terabyte.
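The 256-permutation figure is easy to verify: with 8 bits, the number of distinct patterns is 2 raised to the 8th power. A quick check in Python, purely illustrative:

```python
# One byte = 8 bits, each bit either 0 or 1, so the number of
# distinct values is 2 raised to the number of bits.
bits_per_byte = 8
values = 2 ** bits_per_byte
print(values)        # 256 permutations in total
print(values - 1)    # 255, the largest value a single byte can hold (counting from 0)
```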
Capture: the process of transferring a data stream from tape or media card in a camcorder or recorder to the computer using either FireWire (IEEE 1394) or USB, where it can then be edited.
Capture card: a capture card occupies a spare PCI or PCIe slot in a computer and contains the input sockets required to capture sound and vision prior to editing. Video capture cards contain one or more IEEE 1394 FireWire sockets to enable the connection of a DV or media card camcorder.
CCD: charge-coupled device. The CCD has a photosensitive surface containing an array of semiconductors, each of which collects a piece of data known as a pixel (picture element). The CCD converts the light striking each pixel into a proportional electrical charge. As a general rule, the larger the CCD, the more pixels it can generate, which in turn leads to higher-quality images. Consumer digital video camcorders employ CCDs whose sizes typically range from 1/6 inch to 1/3 inch, generating perhaps 300,000 pixels for each color.
CD burner: a device designed to write and read compact discs in a variety of formats including data, audio and video.
Chromakey: the electronic substitution of an alternative image or sequence into an area of continuous color within a video picture. This is commonly identified with TV weather broadcasts where a presenter appears in front of a computer generated weather map while actually standing in front of a blue or green screen. The overlaid image is keyed to a specific color or chroma reference.
Chrominance: the color component of a video signal.
CMOS: complementary metal oxide semiconductor. A CMOS imaging chip is a type of active pixel sensor made using the CMOS semiconductor process. Extra circuitry next to each photo sensor converts the light energy to a voltage, and additional circuitry on the chip may convert the voltage to digital data. CCD vs CMOS: neither technology has a clear advantage in image quality. On one hand, CCD sensors are more susceptible to vertical smear from bright light sources when the sensor is overloaded (high-end frame transfer CCDs do not suffer from this problem). On the other hand, CMOS sensors are susceptible to undesired effects that result from rolling shutter.
Codec: an acronym for compression/decompression. A codec is essentially a set of mathematical algorithms which, when applied to an image or sound file, dispenses with redundant data in a way that still enables the original image or sound to be reconstructed.
Color temperature: See Kelvin scale.
Composite Video: an analog video signal in which the luminance and chrominance signals are combined into a composite signal that uses a single connection for transfer of data between devices.
Cutaway: the practice of cutting away from the main action or subject to a complementary action or subject during filming or editing.
Depth of focus: the sharpness of the image from foreground to background, determined by the lens aperture (f-stop), which in turn is set according to the amount of light hitting the subject. The larger the aperture (iris, exposure setting), the less depth of focus; the smaller the iris, the greater the depth of focus.
Depth of field: the term applied to the area of sharpness immediately before and behind the object in focus. A tightly framed flower blowing in the breeze will be in focus through only a limited range of distance from the lens; objects in the foreground and background beyond that range will be out of focus.
Digital zoom: a feature of some camcorders that achieves increased magnification by electronically magnifying the pixels that make up the digital image. Digital zoom can produce heavy blockiness at higher magnifications.
DVD: digital versatile disc. A high-capacity development of the compact disc, allowing storage of up to 4.7 GB of data, including MPEG-2 files for playback on domestic DVD players. Also available in dual-layer form, which nearly doubles the storage, and succeeded for high-definition content by Blu-ray.
DVD burner: a device capable of writing and reading files appropriate to DVD movie playback, in addition to other popular CD formats. A DVD drive is required to author and burn DVD recordable and rewritable discs for playback in domestic DVD players and computer based media players.
DVE: digital video effects. The manipulation of images relative to others that exist digitally on the timeline during editing and compositing. Commonly used to describe picture in picture and multilayered effects.
Editing: computer-based applications designed to facilitate the capture and combining of video and audio clips, together with associated images, titles, and audio effects, on a timeline in preparation for output to a range of destination media such as video tape, disk, web, and e-mail.
Fields: pictures on a conventional television comprise a sequence of frames, each made up of two interlaced fields. NTSC is displayed using 525 lines at 30 frames per second (actually 29.97 frames per second) at 60 Hz. Each frame consists of one pass of the odd scanning lines followed by one pass of the even scanning lines, which reduces the effect of flicker.
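The odd/even split described above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the course materials:

```python
def split_into_fields(frame_lines):
    """Split a frame's scan lines into two interlaced fields.
    Field 1 carries the odd-numbered lines (1st, 3rd, 5th, ...),
    field 2 the even-numbered lines (2nd, 4th, 6th, ...)."""
    field1 = frame_lines[0::2]  # odd lines, counting from 1
    field2 = frame_lines[1::2]  # even lines
    return field1, field2

lines = ["line1", "line2", "line3", "line4", "line5", "line6"]
field1, field2 = split_into_fields(lines)
print(field1)  # ['line1', 'line3', 'line5']
print(field2)  # ['line2', 'line4', 'line6']
```

Displaying field 1 and then field 2 in quick succession is what gives interlaced video its flicker-reducing 60-fields-per-second refresh.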
FireWire: also known on some camcorders and computers as i.LINK. A standard for high-speed digital data transfer over a single cable connection, developed and patented by Apple Computer Inc. Now the standard means for connecting digital camcorders to suitably equipped computers and other devices, and properly referred to as the IEEE 1394 standard.
f-stop: a measurement of the aperture, or opening, of a lens, expressed in f-numbers. Each f-stop represents a doubling of the amount of light entering the lens relative to the next higher number; for example, f/2 passes twice as much light as f/2.8.
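Because the light gathered varies with the square of the aperture ratio, the "one stop = double the light" rule can be checked numerically. This is a hypothetical helper, not from the course:

```python
def light_ratio(f_a, f_b):
    """How many times more light aperture f_a admits than f_b.
    Light gathered varies with aperture area, i.e. the square
    of the ratio of the two f-numbers."""
    return (f_b / f_a) ** 2

# f/2 vs f/2.8: one full stop apart, i.e. roughly twice the light.
print(round(light_ratio(2.0, 2.8), 2))  # 1.96, close to the ideal 2x
```

The standard f-stop series (1.4, 2, 2.8, 4, 5.6, 8, ...) steps by a factor of about the square root of 2 precisely so that each step halves or doubles the light.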
i.link: the Sony registered brand name for the digital connection which conforms to the IEEE1394 standard for high-speed data transmission. Commonly known as FireWire.
IEEE 1394: the technical specification for the standard of data transfer commonly known as FireWire, which was originally developed and patented by Apple Computer Inc.
Image stabilization: available as optical image stabilization and electronic stabilization. The former uses optical elements to compensate for camera shake; the latter processes incoming data digitally to similar effect. The benefits are minimal in low-cost digital camcorders.
JPEG: the abbreviation for the Joint Photographic Experts Group, an industry group which defines compression standards for photographic images. The abbreviation gives its name to a lossy compressed file format commonly used for transferring images, such as those that make up web pages, where smaller file sizes make downloads faster.
Kelvin scale: a scale for measuring the color temperature of light, used universally by film and video makers as well as still photographers. Average daylight is measured as 6500 Kelvin in Europe and 5000 Kelvin in North America. Higher color temperatures appear blue, whereas lower color temperatures appear red.
LCD: liquid crystal display. The type of color display commonly used in the flip-out screens on camcorders. Also used in some viewfinders.
Luminance: the technical name for the brightness component of a video signal.
Macro: a lens, or lens attachment, designed to provide pin sharp images of objects situated in very close proximity to the front lens element.
Memory Stick: flash memory storage card developed by Sony. Memory Stick Pro allows up to 1 GB of storage and Memory Stick Pro Duo up to 32 GB. Competing branded formats include CompactFlash and SD cards.
MOV: file extension applied to QuickTime media files; the default video capture file format used by Apple Mac-based video capture and editing applications.
MP3: MPEG-1 Audio Layer III, a codec used to compress audio files into a form small enough to be downloaded from websites and stored on portable devices such as the Apple iPod.
MPEG: Moving Picture Experts Group. The body responsible for the MPEG-1, MPEG-2, MP3 and MPEG-4 compression formats used in digital audiovisual media.
NLE: nonlinear editing. The use of computer systems to digitally capture and arrange video, audio, and associated media clips using appropriate editing software. Evolved from linear, tape-based editing, in which sequences had to be assembled in order, making rearrangement difficult.
NTSC: National Television Standards Committee. The TV standard employed in the United States of America, Canada, Japan, parts of South America, and many Pacific nations. NTSC uses 525 lines made up of two interlaced fields, scanning at 29.97 frames per second (59.94 fields per second).
Pixelation: an unwanted visual artifact where the image breaks down into recognizable blocks of color. This most frequently occurs as a result of compression, particularly in color boundaries or areas where there are high contrast edges.
Postproduction: the process that takes place once production footage has been acquired, encompassing everything from the first offline edit to computer graphics, edit mastering, compositing, and audio track laying and dubbing. In professional video it is common to talk of a production going into "post."
POV: point of view shot. A commonly employed cinematic device designed to represent the view of a character in a sequence. A driver, for instance, might be proceeding along a street when a truck crosses his or her path. The camera assumes the position of the driver as contact with the truck is made, enabling the viewer to see the collision from the driver's point of view.
Progressive scan: a process of capturing or displaying complete frames in sequence rather than pairs of interlaced fields. This reduces the effect of flicker and benefits the recording and playback of sequences where sharp individual frames are required. Several DV camcorders support progressive scan shooting.
QuickTime: media compression and streaming playback format developed initially for use on Apple Mac computers, and now also available for Windows PCs. QuickTime Pro offers additional paid-for functions, including simple editing and file conversion.
Render: the process of producing a composite file from a number of source files such as video clips, audio clips, titles, graphics, etc., on a desktop video program timeline. A render also takes place so that the result of applying a transition between two video clips may be viewed. On fast computers, rendering takes place in real time; on other systems, it might be undertaken as a background activity. A final render prepares the completed project in a format appropriate to a particular use, such as compressing to MPEG-2 for DVD playback.
Rolling shutter: a method of image acquisition in which each frame is recorded not from a snapshot of a single point in time, but rather by scanning across the frame either vertically or horizontally. In other words, not all parts of the image are recorded at exactly the same time, even though the whole frame is displayed at the same time during playback. This produces predictable distortions of fast-moving objects or when the sensor captures rapid flashes of light. This is in contrast with global shutter, in which the entire frame is exposed for the same time window.
Telephoto: term applied to a lens with a high level of magnification. A zoom lens's telephoto setting is its maximum magnification factor.
Timecode: a system of uniquely identifying a frame within a video recording, whether analog or digital. The timecode is made up of four pairs of digits representing hours, minutes, seconds, and frames. It is generated internally within the camcorder (or, in other recording systems, by a standalone timecode generator) and electronically embedded into the recording. Because each code is unique, timecodes can be used to identify the location of a shot within a recording. (DV camcorders have a habit of resetting the code to zero if a blank section of tape is detected.)
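As an illustration of how the four digit pairs relate to a running frame count, here is a small Python sketch. It is not from the course, and it assumes a simple 30 fps non-drop-frame count; broadcast NTSC actually runs at 29.97 fps and uses drop-frame timecode to stay on the clock:

```python
def frames_to_timecode(frame_count, fps=30):
    """Convert an absolute frame count to an HH:MM:SS:FF timecode string."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(0))      # 00:00:00:00
print(frames_to_timecode(1800))   # 00:01:00:00  (one minute at 30 fps)
print(frames_to_timecode(107892)) # 00:59:56:12
```

Running the conversion in reverse (timecode back to a frame count) is how editing software locates a shot on tape from its logged timecode.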
Timeline: the visualization of a video or film project in time. Desktop video applications employ a timeline as means of constructing the clips that make up a project, starting with the first clip at the zero point and proceeding to the end in a left to right direction. Some desktop video editing tools offer an alternative to that of viewing clips on a timeline which is commonly referred to as the storyboard viewing mode.
Transition: a visual transition is the means by which the viewer is transported from one part of the story to another using a wide range of visual tools. The most common transition is the dissolve (also known as a mix or crossfade). Other transitions include wipes (linear, rectangular, circular) and DVEs (complex digital effects in which shots or sequences are treated as objects and assigned paths to control their motion, zooming in, zooming out, rotating, and so on).
Turnkey: a system that has been designed and built to be switched on and operated without the user having to make additional modifications or settings. Many professional desktop video editing and compositing stations are designed as turnkey systems.
Upload: the process of transferring data from a personal or network computer to a remote computer, such as an Internet server.
USB: universal serial bus. A connection port on most modern cameras, camcorders, and computers for attaching peripheral devices to an Apple Mac or Windows PC; devices can be daisy-chained together or connected via an external hub. Mini USB is a smaller variant of the standard connector which enables users of digital video camcorders to transfer digital images, sounds, and MPEG-1/MPEG-4 compressed video clips to and from a computer using supplied software. USB has three versions: USB 1 transfers data at 12 Mb per second, USB 2 at 480 Mb per second, and USB 3 at 5 Gb per second.
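To put those three speeds in perspective, a rough back-of-the-envelope calculation (a hypothetical sketch that ignores protocol overhead, so real transfers run somewhat longer) shows how long a 1 GB video file would take on each version:

```python
def transfer_seconds(file_size_mb, link_mbps):
    """Rough transfer time: megabytes * 8 bits/byte / link speed in Mb/s.
    Ignores protocol overhead, so real-world times run longer."""
    return file_size_mb * 8 / link_mbps

gigabyte = 1024  # MB
print(round(transfer_seconds(gigabyte, 12)))    # USB 1: ~683 seconds
print(round(transfer_seconds(gigabyte, 480)))   # USB 2: ~17 seconds
print(round(transfer_seconds(gigabyte, 5000)))  # USB 3: ~2 seconds
```

The factor-of-8 conversion matters because link speeds are quoted in megabits per second while file sizes are quoted in bytes.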
Video capture card: a PCI or PCIe adapter used to connect video equipment to a computer for editing purposes. Typically the capture card will contain one or more FireWire ports enabling the transfer of data from the camcorder to the computer and back and may contain analog connections as well. As many computers now contain FireWire ports as standard, capture cards are less vital than they once were.
WAV: derived from "wave" or "waveform." The native audio file format used by Windows-based computer systems.
White balance: often referred to as WB, the process of referencing a camera or camcorder to either daylight or artificial light in order that white objects appear their correct color on camera.
Widescreen: a wide TV image, also referred to as 16:9 due to the relative horizontal and vertical proportions.
Wide chip: a CCD imaging chip employed by manufacturers like Sony to enable resolving a true widescreen image at the ratio of 16:9 without the need to crop a standard 4:3 image by the use of black bands at the top and bottom of the screen.
Wide-angle converter: an optical lens attachment designed to reduce the minimum focal length of the lens and therefore increase the field of view. It also has the effect of increasing the depth of focus of a shot. Typically, a wide-angle converter halves the lens's widest focal length to widen the angle of vision.
XLR: a standard 3 pin connection most commonly used with microphones and high-quality audio sources feeding a camcorder, recorder, or mixer with a balanced audio signal. Some camcorders have unbalanced minijack type connectors, for which a converter needs to be used when connecting XLR mic cables.
Y/C component: a video signal in which the two main components, the luminance (Y) and the chrominance (C), remain separate throughout the signal chain. If the Y and C components remain separate, the resulting image display will not suffer from the cross-luminance and cross-chrominance artifacts of composite video.
Zoom: a zoom lens is used to vary the focal length of a lens by altering the relationship of the optical elements within the lens. Zooming in and out can also be achieved by digital signal processing within the camcorder, called digital zoom, though high zoom ratios are to be avoided due to the resulting heavy pixelation.
|Section 4: Planning|
PLANNING YOUR PROJECT -
Any video project that happens outside of the studio is considered to be a remote production. Remote productions include many event types: awards programs, concerts, parades, dedications, speeches, weddings and celebrations.
Studio production and remote production are very different. Studio production provides maximum control. Studio lighting and audio can be tightly controlled, providing optimal levels for video production. Studio production provides an environmentally controlled location.
There are times when a camera crew has to be on location. Remote production often provides an exciting atmosphere, such as an audience or even a cheering crowd. Either, however, can easily disrupt or even cancel a remote production. It is also possible for a remote production to become stunning, with natural lighting and outdoor scenery.
Although there are many events that will be covered as a single camera production, many production situations require more than one camera. Multiple cameras provide different viewpoints of an event.
Examples of multi-camera event coverage include:
* events spread over large areas
* events where there is no opportunity to move a camera quickly
* events that are one time opportunities
* events where cameras must be in fixed or concealed places
* events where the action cannot be accurately anticipated
When preparing for a remote multi-camera production, it is important to prepare quite differently than for a studio production. Remote multi-camera productions usually require a larger crew with more equipment, and considerably more planning and preparation. More questions must be asked, like: how many cables are needed, what type of microphones will be used, is power available?
Preproduction meetings are essential during the planning phase of all productions. These meetings provide communication for all crew members. During these meetings ideas are shared in an attempt to ensure a successful production.
PRODUCTION PHASES -
The production process is commonly broken down into:
Preproduction, Production, and Postproduction
There is an old TV production saying: "The most important phase of production is preproduction." The importance of preproduction is usually fully appreciated only after things get pretty well messed up, when the production people look back and wish they had planned their production.
In preproduction the basic concepts and methods of production are developed. It is in this phase that the production can be set on a proper course or set on a misguided course (messed up) so badly that no amount of time, talent, or editing expertise can save it.
In order for any program to be successful, you must be constantly mindful, throughout each phase of production, of the needs, interests, and general background of the target audience (the audience your production is designed to reach).
In order for a program to have value and a lasting effect, it must in some way have an emotional effect on its target audience.
During the preproduction process, not only are the key talent and production members selected, but all the major elements are discussed and planned. In preparation for large productions, since elements such as scenic design, lighting, and audio are interrelated, they must be carefully coordinated in a series of production meetings.
The production phase is where it all comes together in a final performance. Basically … Show Time!
Production sessions may be produced live or recorded. With the exception of news programming, sports remotes, and some special-events, productions are usually recorded for later broadcast or distribution.
Recording the show or program segment provides an opportunity to fix problems, either by making changes during the editing phase or by stopping the recording and re-shooting the segment.
Tasks, such as taking down sets, dismantling and packing equipment, handling final financial obligations, and evaluating the effect of the program, are part of post production.
Editing is one of the main components of the postproduction process. As visual effects (VFX) have become more sophisticated, editing has gone far beyond the original concept of simply joining segments in a desired order; it is now a major focus of production creativity.
The editing process can enhance a production with razzle-dazzle. In fact, it's easy to become so overzealous with the special-effects capabilities of your equipment that the final production loses its original intent. As you edit for the final product, consider all this high-tech equipment merely a tool for a greater purpose: effective communication of ideas and information.
Tips: Things to remember for a remote location shoot: Extra Batteries, lights, media storage, monitor, laptop, power cords, strips, diffusers, reflectors, microphones, headphones, etc. Do you need to write a script or put together a Story Board? Does your client need Cue Cards or a teleprompter?
For a multi camera shoot, a clapboard is handy to not only tell you in the editing phase what scene or take you’re in, but it helps sync the audio between cameras. Also, consider if you need any dreaded paperwork, like modeling release forms, or permits to use private property. Doing your homework in advance can save you a lot of headaches later. Lastly, don’t forget to remind everyone on the set to turn off their cellphones and be alert for unexpected interruptions that could ruin the best take of the day.
Chances are you are a sole proprietor and will be wearing many hats. Now that you know the duties and responsibilities of the various crew members, understand that the single shooter has to fill all of those positions alone. This can be a blessing or a curse depending on how you approach your project. We cannot overemphasize pre-planning. Meet early with your clients and coordinate or rehearse your video shoot, taking plenty of notes. Make sure you have a clear understanding of what your client wants as a finished product. Every step of the way, keep the end product in mind.
My dad introduced me to photography when I was just a boy. He taught my brother and me the fundamentals of darkroom practice: chemicals, enlargers, film developing, and printing techniques. I worked in the photojournalism departments of both my junior and senior high schools, using everything from sheet film to 35mm. In college I worked for a professional portrait studio as a lab tech and photographer. When new technology gave many of us the ability to shoot video, I eagerly adopted 8mm film and soon the VHS and Beta tape platforms. It wasn't long before the much smaller 8mm videotape and Hi8 emerged on the scene. DV and MiniDV machines popped up to capture our motion pictures, and when hard drive and mini DVD camcorders were introduced, I thought we had finally arrived! Here I was, burning my movies right onto a playable DVD that I could pop into any player or editing suite. Of course, we've progressed well beyond that these days, with the advent of solid-state media storage and DSLRs that shoot amazing HD video. All this time I've been a freelance photographer, videographer, and avid hobbyist. In my most recent professional career I worked as a broadcast maintenance engineer at Fox2 in St. Louis. For a couple of decades I've been building and developing websites using HTML and a variety of template tools. I've been a photo and video contributor at IHS Productions, Art of Film, and LLS Media Solutions.
Keeping up with the advances in technology can be challenging and enriching. My desire is to share with you the experiences I've gained over the years that have given me unbelievable enjoyment and satisfaction.