videoformatsPincusAscher - 194 THE FILMMAKER'S HANDBOOK

...the line-pairs-per-millimeter measurement used to evaluate film stocks. If someone says, "DV has a resolution of 500 lines," that person is referring to TV lines per picture height. TV lines are a fairly inexact, simplified way to discuss resolution. Bear in mind that even if a format is theoretically capable of a certain resolution, a low-resolution camera CCD, soft lens or unsharp monitor may reduce the actual performance. The term resolution is also used to describe how much data is stored for each frame. See Resolution, p. 33.

VIDEO FORMATS AND FEATURES

Color Video Systems

For the basics of video color, see Chapter 5. For an introduction to composite and component systems, see p. 19.

The camera's CCD generates three distinct color (chrominance) signals: red, green and blue (R, G and B). From the R, G and B signals, the camera produces a luminance signal (represented with the letter Y) that corresponds to the brightness of the picture. How these four signals (R, G, B and Y) are processed and routed through the video system has a big impact on the quality of the image and on what types of equipment you can use together. The way a system handles color is sometimes called its color space. These are the four major methods currently in use:

1. Composite video. R, G, B and Y are merged (encoded) into one signal that can be sent in a single wire. Simplifies recording and broadcast but results in a lower-quality image.

2. Component video. Effectively separates each of the three colors and luminance. Uses three wires: Y and two "color difference signals" (R minus Y; B minus Y). You will find component written variously as "Y, R-Y, B-Y" or "YUV" or "YCrCb"; all mean the same thing. Used in Betacam, D1, DV and other formats. High-quality image with excellent color reproduction.

3. Y/C, also called S-video (Separate video).
Combines the colors into one chrominance signal (C) but keeps them separate from the luminance. Image is not as good as component, but better than composite. Used in S-VHS and Hi8. An S-video cable has wires for Y and C.

4. RGB. Keeps the three colors separate using different paths for R, G and B. Since the colors are isolated from each other, this is a form of component video. Used in computers and some video systems. Can offer a wider palette of colors than standard component.

To get the most out of any noncomposite system (component, Y/C or RGB), you need a VTR and a monitor that have outputs and inputs in that system. A Hi8 camera, for example, will only show its best picture quality if connected to a monitor using an S-video cable (Fig. 6-7). If the signal is routed through any composite equipment or connector, many of the benefits of these other systems are lost. However, sometimes you need to connect noncomposite equipment to composite systems. Most component or Y/C gear also has standard NTSC or PAL composite connectors (usually labeled "video in" and "video out"). These can be handy, but the quality of the image is lower.

[Fig. 7-4. Component, S-video and composite video systems vary in the paths they use to convey the video signal from one piece of equipment to another. For systems that use multiple paths, the signal is sometimes sent on multiple cables and sometimes the various paths are part of one cable. See text for explanation. (Robert Brun)]

Composite Systems

Until the advent of digital broadcasting, all television was broadcast in composite form. The vast majority of TV sets and other video equipment are designed for composite video. However, component and the other systems described above are increasingly common. Eventually, composite video may become obsolete.
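The relationship between the R, G, B signals and the Y, R-Y, B-Y component set can be sketched numerically. This is a minimal sketch, assuming the standard-definition (Rec. 601) luma weights, which this chapter does not itself specify:

```python
def rgb_to_component(r, g, b):
    """Derive luminance (Y) and the two color difference signals
    from R, G, B, each given in the range 0.0-1.0.

    The weights are the Rec. 601 luma coefficients (an assumption;
    the text only says Y corresponds to picture brightness)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y  # Y, R-Y, B-Y

# Pure white carries full luminance and zero color difference,
# which is why the color difference wires can be ignored for
# a black-and-white picture.
y, ry, by = rgb_to_component(1.0, 1.0, 1.0)
```

Note that the three outputs map directly onto the three wires of a component connection, while composite video would further encode all of them into one signal.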
Different parts of the world use different, incompatible composite standards, which requires different sets of cameras and editing equipment. Video can be converted from one standard to another, but with some loss in quality (see p. 527).

1. NTSC. (Pronounced "en-tee-ess-see.") Used in the United States, Canada, Japan, Central America and other places where electrical power is supplied at 60 Hz. NTSC uses 525 horizontal scan lines to record a complete frame (see p. 18). The original black-and-white television standard was exactly 30 frames per second. When color was introduced, this was dropped slightly to 29.97 fps. This difference is imperceptible to the eye, but complicates things like timecode, camera shutter speeds when filming a monitor (see Chapter 17), and synchronizing sound recorders and other equipment (see below). NTSC has a number of drawbacks. Since it uses fewer horizontal scan lines than PAL, the picture is not as sharp (and high-definition digital formats are much sharper than either of them). NTSC is prone to certain image defects, including ringing (edges of objects repeat or echo) and dot crawl (lines seem to vibrate or move a bit like a barber pole). Because film is generally shot at 24 fps, the 30 fps NTSC frame rate creates certain problems in transfers (see Chapter 17).

2. PAL. (Pronounced "pal.") Used in most of Western Europe, Great Britain, Australia, New Zealand and China. In PAL, the image is scanned with 625 lines at 25 fps. PAL provides a higher-quality video image than NTSC. Because the frame rate is exactly 25 fps, and the electrical current in PAL countries is at 50 Hz, many aspects of working with PAL are simplified.

3. SECAM. (Pronounced "see-cam" or "say-cam.") Used in France, parts of Eastern Europe, the former Soviet Union, Iran and Iraq. Like PAL, uses 625 scan lines at 25 fps, but the color encoding is different.
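The 29.97 fps figure quoted for NTSC color is a rounded value; the exact rate defined by the NTSC color standard is 30000/1001 fps (a fact from the standard itself, not stated in this excerpt). A quick check:

```python
# Exact NTSC color frame rate: the original 30 fps slowed by a factor of 1.001
ntsc_fps = 30 * 1000 / 1001
print(round(ntsc_fps, 5))  # 29.97003
```

That 0.1 percent slowdown is the source of the timecode complications discussed below.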
Component Systems

Component systems maintain separation between the color components in the video signal (see p. 19). In video production, the term component usually refers to the Y, R-Y, B-Y system described above. However, bear in mind that the RGB method is also a component system.

When people use the terms NTSC video or PAL video, they are usually referring to composite video. For example, someone might ask if a camera has an NTSC output, meaning encoded, composite video. Yet, among component standards there is an NTSC version as well as a PAL version. Since the NTSC component standard is 525 lines at 60 fields a second, it is sometimes referred to as "525/60 component." The PAL version is 625/50. The same terminology applies to various digital formats.

TIMECODE

To make full use of the tools available for film and video production, timecode is essential. The idea of timecode is simple: to assign a number to every frame of picture or sound so we can easily find those frames and work with them. Timecode is a running "clock" that counts hours, minutes, seconds and frames (see Fig. 1-18). Timecode comes in a few different flavors, which can be confusing.

Types of Timecode

In the United States and other parts of the world where NTSC video is standard, video is generally recorded at about 30 frames per second. Video timecode is a 24-hour clock that goes as high as 23:59:59:29 (twenty-three hours, fifty-nine minutes, fifty-nine seconds and twenty-nine frames). One frame later it returns to 00:00:00:00. Note that since there are thirty frames per second, the frame counter only goes up to :29. This timecode system is called SMPTE nondrop timecode. Many people just refer to it as "SMPTE" (pronounced "simpty") or "nondrop" (often written "NDF"). This is standard, basic timecode.

One of the quirks about NTSC color video is that it actually doesn't run exactly at 30 fps. It runs just slightly slower, at 29.97 fps.
You can't see the difference, but this 0.1 percent reduction in speed affects the way timecode keeps time. If you watch a tape until the nondrop timecode indicates one hour of video, and then quickly look at your watch, you will see that actually one hour and 3.6 seconds have gone by. This discrepancy is no big deal if the movie is not intended for broadcast. Nondrop timecode is often used for production.

Since broadcasters need to know program length very exactly, dropframe (DF) timecode was developed. This system drops two timecode numbers every minute so that the timecode reflects real time.² A program that finishes at one hour drop frame timecode is indeed exactly one hour long. With drop frame timecode, no frames of video are dropped and the frame rate doesn't change. The only thing that's affected is the way the frames are counted (numbered). This is a point that confuses many people. Switching a camera from ND to DF has no effect on the picture or on the number of frames that are recorded every second. The only thing that changes is the way the digits in the timecode counter advance over time.

Many television-bound programs are done with DF code. Many editing systems can work with either drop or nondrop, and shooting with nondrop doesn't prevent you from finishing with drop. Mixing drop and nondrop code in the same project can sometimes cause problems. DF timecode is sometimes indicated with semicolons instead of colons between the numbers (00;14;25;15).

In Europe and other parts of the world where PAL video is standard, video runs at exactly 25 fps. EBU timecode uses a similar 24-hour clock, except the frame counter runs up to :24 instead of :29. EBU code keeps real time, so there is no need to drop frames.

Consumer or prosumer equipment sometimes uses nonprofessional timecode systems such as RC timecode. These systems may limit your ability to interface with professional equipment. On some such systems, the timecode cannot be set.
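Both behaviors described above, the 3.6-second drift of nondrop code and the skipped numbers of drop frame code, can be sketched in a few lines. This is an illustrative sketch, not any vendor's implementation; the drop rule is the SMPTE one (frame numbers :00 and :01 are skipped at each minute change, except at minutes that are multiples of ten):

```python
# Nondrop drift: one hour of nondrop timecode counts 30 * 60 * 60 labels,
# but the tape actually plays at 29.97 fps, so real elapsed time is longer.
labels_per_hour = 30 * 60 * 60            # 108000 frame numbers
real_seconds = labels_per_hour / 29.97
print(round(real_seconds - 3600, 1))      # -> 3.6 extra seconds

def next_drop_frame(tc):
    """Advance a drop frame timecode ("HH;MM;SS;FF") by one frame.

    Frame numbers ;00 and ;01 are skipped whenever the minute changes,
    except at minutes that are multiples of ten (the SMPTE drop-frame
    counting rule). No picture frames are skipped, only the labels."""
    hh, mm, ss, ff = (int(x) for x in tc.replace(";", ":").split(":"))
    ff += 1
    if ff == 30:
        ff, ss = 0, ss + 1
        if ss == 60:
            ss, mm = 0, mm + 1
            if mm == 60:
                mm, hh = 0, (hh + 1) % 24
            if mm % 10 != 0:
                ff = 2  # skip the two dropped frame numbers
    return f"{hh:02d};{mm:02d};{ss:02d};{ff:02d}"

print(next_drop_frame("00;04;59;29"))  # -> 00;05;00;02 (two numbers dropped)
print(next_drop_frame("00;09;59;29"))  # -> 00;10;00;00 (tenth minute: none dropped)
```

The two printed examples match the footnoted behavior at ordinary minutes versus ten-minute marks.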
Recording Timecode

There are a few ways to record timecode on tape. Some tape formats have a longitudinal track just for timecode (LTC). Some formats allow you to use one of the audio tracks for code. Longitudinal and audio timecode can be added or changed even after the video has been recorded.

Another type of timecode recording is vertical interval timecode (VITC, pronounced "vit-see"). VITC is recorded as part of the video signal (during vertical blanking; see The Raster Revisited, p. 193). One advantage of VITC is that it can be read by the VTR even when the tape is not moving (very useful for editing). VITC does not use up any audio tracks. VITC must be recorded at the same time as the video and cannot be added later (except during dubbing to another tape).

There are other ways of recording timecode. The address track on a ¾" U-matic VTR is reserved for code but must be recorded with the video.

On analog tape formats, timecode degrades when copied from tape to tape. Use a timecode generator to regenerate the code while dubbing and be sure it is sync-locked to the video.

ADDING CODE TO A NONCODE TAPE. If you plan to do an online edit, it is virtually impossible without timecode. If the original footage was shot without code, you may be able to post-stripe it with code after shooting (this works best with tape formats that use longitudinal or audio timecode tracks). If this is not possible, one solution is to create a new set of master tapes by bumping up the original footage to a timecode tape format. For example, footage shot in Hi8 might be transferred to Betacam with timecode added during the transfer.

²The :00 and :01 frames are dropped every minute, unless the minute is a multiple of 10 (no frames are dropped at 10 min., 20 min., etc.). Thus, the number following 00:04:59:29 is 00:05:00:02. But the number following 00:09:59:29 is 00:10:00:00.
The Beta tapes would then become the masters, used to generate worktapes and to do the online edit. This can also be done if your editing facility can't handle small-format tapes.

Using Timecode in Production

Many camcorders are equipped to generate timecode and record it on the tape. Professional cameras have several different timecode modes. In record run mode, the timecode advances whenever the tape is running, providing a continuous, running count of tape as you shoot it. Most professional cameras allow you to select each tape's starting code number. If you are using tapes less than an hour long, you might start the first tape at one hour (1:00:00:00), then start the second tape at two hours (2:00:00:00) and so on. That way, the timecode on each tape is different. If any two tapes have the same code, there will be confusion during editing. Many cameras allow you to select user bits (U-bits), which are recorded with the timecode and can be used to identify camera roll numbers, the date or other information. Be sure to use U-bits if the timecode on any two tapes is the same.

Some people prefer to shoot with a time-of-day (TOD) clock recorded as timecode (this is sometimes called free run mode). TOD code is usually discontinuous on the tape, since every time you stop the camera there is a jump in the code. TOD code is useful if you need to identify when things were filmed, or if more than one camera is shooting at the same time. This may cause problems, however, if you shoot the same tape on successive days. Say you finish the first day at four in the afternoon (16:00:00:00 code). You start the next day at eleven in the morning (11:00:00:00 code). When you edit this tape, the edit controller will find the lower code number after the high number, causing problems. Using TOD code will likely result in several tapes having the same code numbers unless the date or other information is included in the U-bits.
Some camcorders have a real time mode that puts the time of day in the user bits, and allows you to select record-run code for the regular timecode.

Sometimes when you are using more than one video camera, you want to have them running the exact same timecode to facilitate editing. Some professional cameras allow you to jam sync one camera with the code from another camera or separate timecode source. Keep in mind that some cameras may drift slightly over time, so you may need to rejam the cameras every few hours to keep their timecode identical. For perfect sync, the two cameras should be genlocked together. This may be done with a master sync signal fed to both cameras. If you plan to do live switching between multiple cameras to record on one tape, genlocking will ensure the cameras are in phase to permit switching.

See Timecode Slates and In-Camera Timecode, p. 283, for using timecode with audio recorders and film cameras.

DIGITAL RECORDING

Basic Theory

Before digital recording existed, there was analog. In analog recording, changes in light or sound are represented by a changing electrical signal. If you record someone with a microphone and tape recorder, and the person starts speaking louder, the voltage of the signal sent from the mic to the recorder increases. The level of the electrical audio signal is analogous to the loudness of the sound. Analog recording systems can be very high quality, and continue to be used, but problems arise particularly when we try to make copies of analog recordings.

The idea of digital recording is to measure the level of the electrical signal from moment to moment, and record those measurements as discrete numbers. Later, we can re-create the original signal by referring back to that list of numbers. As an example, imagine you wanted to track the path of a turtle as he crawls across a piece of paper.
You could attach a pencil to his shell and let him make a tracing as he walks (a kind of analog recording). Or you could take a ruler and make a series of measurements: after one minute he's 6" to the left and 8" above where he started; at minute two he's 12" to the left and 10" above the starting point. This is a kind of digital recording. Later, you could take your ruler and your list of measurements and re-create the path he took on another piece of paper.

Let's say that for some strange and perverse reason, other people want to know exactly where this turtle went. If you had his tracing on paper, you could make copies of the paper to give to people. But copies often come out a different size than the original, and copies of copies can get fuzzy. But if you had taken measurements with the ruler, you could just hand people the list of numbers and they could make their own map. The list of numbers can be copied over and over, and the tenth copy should be just as accurate as the first.

Digital recording works by sampling the audio or video signal at regular intervals; each sample is a measurement of the voltage at that moment in time. That voltage measurement is then converted to a number that can be recorded on tape or on disk. In digital systems, the numbers are in binary code, which uses a series of ones and zeros. (The number 5 would be 101 in binary.) Each digit in a binary number is a bit (101 is thus a three-bit number). Eight bits together make a byte.

Converting the original voltage into a number is called quantizing. In some video systems, every sample is converted into an eight-bit number. In higher-quality systems, the samples may be quantized into a ten-bit number. The more bits you use per sample, the finer the gradations you can represent in color or brightness. One way to visualize this is to imagine using a ruler to measure the strength (voltage) of the video signal.
One end of the ruler is zero voltage, the other end is the maximum voltage the system can record. With a low number of bits, the ruler might only have markings in, say, one-inch increments. With many bits, we have markings in tiny fractions of an inch, so we can make more precise measurements.

The entire process of converting a video or audio signal to digital form is called digitizing and is done by an analog-to-digital (A/D) converter. The converter may be part of the camera, or it may be part of the audio or video recorder. To view or hear the signal, it can be reconstructed in its analog form using a digital-to-analog (D/A) converter.

Sampling Rate

How often we sample or measure the video or audio signal affects how accurately we can re-create it. To go back to the turtle example, since he is moving fairly slowly we could measure his position, say, once a minute and get a fairly accurate idea of where he went. Now imagine tracking an ant. In one minute he might go up, down and around. We have to measure his position much more than once a minute to get a good record of where he went. In digital terms, we need to use a higher sampling rate for the faster-moving ant.

In audio and video recording, the speed with which the signal changes is related to its frequency: the higher the frequency, the faster it is changing. Frequency is measured in hertz (Hz; see Frequency, p. 246). To make high-quality recordings, we need to be able to capture high frequencies. If a sound recording lacks high frequencies, it may sound muddy or dull. If a video recording lacks high frequencies, fine detail in the image may be lost, making it appear unsharp. A Swede named Harry Nyquist proved that the sampling rate has to be at least twice the maximum frequency we hope to capture. Since humans can hear sounds up to about 20,000 Hz (20 kHz), a digital audio recorder needs to sample at least 40,000 times a second to capture the full range of sound.
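The ruler analogy maps directly onto code: quantizing chops the voltage range into 2 to the power of "bits" steps, so eight bits give 256 gradations and ten bits give 1024, and more bits mean a smaller rounding error per sample. A minimal sketch of the idea (the specific voltage below is a made-up illustration value):

```python
def quantize(voltage, v_max, bits):
    """Convert an analog voltage (0..v_max) to the nearest of 2**bits
    discrete levels, as an A/D converter does."""
    levels = 2 ** bits
    return round(voltage / v_max * (levels - 1))

def reconstruct(code, v_max, bits):
    """Map a quantized code back to a voltage (the D/A step)."""
    return code / (2 ** bits - 1) * v_max

# Binary representation: the number 5 is 101, a three-bit number.
print(format(5, "b"))  # -> 101

v = 0.3517  # a hypothetical sample voltage on a 0..1 volt scale
for bits in (8, 10):
    code = quantize(v, 1.0, bits)
    err = abs(reconstruct(code, 1.0, bits) - v)
    print(bits, code, round(err, 6))  # more bits -> smaller rounding error

# Nyquist: the sampling rate must be at least twice the highest frequency,
# so capturing 20 kHz audio needs at least 40,000 samples per second.
print(2 * 20_000)  # -> 40000
```

The same error-versus-bits trade-off applies whether the samples describe sound loudness or pixel brightness.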
Many digital audio recorders use a sampling rate of 44.1 kHz or 48 kHz in order to do just that. For more on audio sampling rates and the specifics of digital audio recording, see Chapter 9.

Pixels 'n' Bits

When an image is digitized, it is divided into a grid of pixels (picture elements). Each pixel is essentially a sample of the brightness and/or color of the image at that spot. The smaller the pixels (and the closer together the lines of the grid), the sharper the image will look. Take a look at Fig. 7-6. The top image is divided into a lattice of fairly large pixels. The middle image has much smaller pixels (and thus...