ffmpeg is a movie-encoding command-line tool that is also useful for capturing and assembling time lapse movies, particularly from webcams. Below are some example commands to get it to do stuff. It is run from a terminal command line on Linux, OSX, and Windows. You will probably want to add the location of the ffmpeg files to your system PATH so that you can invoke ffmpeg from any directory.
The code shown here is specific to Windows 7 and later machines, so some commands will not work on OSX or Linux. Specifically, Windows uses the "dshow" option for capturing video. On Linux you'd substitute "v4l2" (short for "video4linux2"), and recent OSX versions should use "avfoundation". See this page for more info on the various permutations of video capture options.
Also note that the order of the options (denoted by the leading dash symbol) usually matters in ffmpeg commands, so if you get errors when trying to use an option, double check that you have the syntax correct AND that you’ve placed it in the right portion of the command. In general, there are options that apply to input files, and options that apply to output files, and you need to place them in the correct order.
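As a sketch of that structure (using the same webcam name that appears throughout this article), a capture command can be broken into its parts with a few shell variables. The variable names here are purely for illustration; in practice you'd type the command on one line.

```shell
# Sketch of ffmpeg's option ordering, assembled from parts.
# Options placed BEFORE an -i apply to that input; options placed
# AFTER the last -i apply to the output.
input_opts='-t 3600 -f dshow'              # input options: duration, capture interface
input='-i video="Logitech Webcam C210"'    # the input device itself
output_opts='-r 0.5 -f image2'             # output options: frame rate, image muxer
output='image%04d.jpg'                     # output filename pattern
echo "ffmpeg $input_opts $input $output_opts $output"
# prints: ffmpeg -t 3600 -f dshow -i video="Logitech Webcam C210" -r 0.5 -f image2 image%04d.jpg
```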
To show what cameras are currently available for use on your computer:
[code]ffmpeg -list_devices true -f dshow -i dummy [/code]
Which should return something like this:
[code gutter="false"]ffmpeg version N-64514-g14e2406 Copyright (c) 2000-2014 the FFmpeg developers
built on Jul 7 2014 22:09:37 with gcc 4.8.3 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth
  --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv
  --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype
  --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug
  --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb
  --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger
  --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame
  --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis
  --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265
  --enable-libxavs --enable-libxvid --enable-decklink --enable-zlib
libavutil      52. 91.100 / 52. 91.100
libavcodec     55. 68.102 / 55. 68.102
libavformat    55. 45.100 / 55. 45.100
libavdevice    55. 13.101 / 55. 13.101
libavfilter     4. 10.100 /  4. 10.100
libswscale      2.  6.100 /  2.  6.100
libswresample   0. 19.100 /  0. 19.100
libpostproc    52.  3.100 / 52.  3.100
[dshow @ 000000000032f680] DirectShow video devices
[dshow @ 000000000032f680]  "Logitech Webcam C210"
[dshow @ 000000000032f680] DirectShow audio devices
[dshow @ 000000000032f680]  "Microphone (Webcam C210)"
[dshow @ 000000000032f680]  "Microphone (Realtek High Defini"
[/code]
The last few entries list the names of the available cameras and audio sources. In this case, I have only one webcam available, called "Logitech Webcam C210", and two potential audio sources, one of which is the microphone in the webcam. Since we're discussing time lapse, I'm going to ignore the audio options from here on out.
Once you’ve picked a camera to use, you can have ffmpeg give you a list of potential image sizes and frame rates for that camera:
[code]ffmpeg -f dshow -list_options true -i video="Logitech Webcam C210"[/code]
which returns the following for my camera (output truncated):
[code gutter="false"][dshow @ 0000000002aff680] DirectShow video device options
[dshow @ 0000000002aff680]  Pin "Capture"
[dshow @ 0000000002aff680]   pixel_format=bgr24  min s=640x480 fps=5 max s=640x4
[dshow @ 0000000002aff680]   pixel_format=bgr24  min s=160x120 fps=5 max s=160x1
[dshow @ 0000000002aff680]   pixel_format=bgr24  min s=176x144 fps=5 max s=176x1
[dshow @ 0000000002aff680]   pixel_format=bgr24  min s=320x176 fps=5 max s=320x1
[dshow @ 0000000002aff680]   pixel_format=bgr24  min s=320x240 fps=5 max s=320x2
[/code]
I typically don't worry about the pixel_format info. I just look for the frame size options and pick the largest one (640x480 for this old camera).
Now that you know what the camera is named, and what image size it can produce, you can open up a display window and use that to aim the camera and check focus. This uses the “ffplay” function instead of “ffmpeg”.
[code]ffplay -f dshow -video_size 640x480 -i video="Logitech Webcam C210"[/code]
Keep in mind that nothing is being recorded in the window that pops up; it is only showing you a live view of what the camera sees. Unfortunately, while ffmpeg is recording your time lapse movie you will not have access to this live view, so you need to make sure your picture looks good now, before starting the time lapse capture.
To start capturing a series of time lapse images, use a command like the following:
[code]ffmpeg -t 3600 -f dshow -s 640x480 -i video="Logitech Webcam C210" -r 0.5 -f image2 image%04d.jpg[/code]
The options in that command line are as follows:
- -t 3600 = capture images for 3600 seconds (1 hour). This is NOT the number of images to be captured, so be careful to specify a long enough runtime here. You’ll recall that a full day is 86,400 seconds.
- -f dshow = use Windows direct show for capture, this will be different on Mac/Linux
- -s 640x480 = use an image size of 640x480 (horizontal x vertical pixels). There are also standard options like "hd720" for cameras that can produce those sizes.
- -i video="Logitech Webcam C210" = specifies the video source to use. We found this exact name using the commands above.
- -r 0.5 = specifies the capture rate in frames per second. A value of 30 would (try to) capture 30 images per second. A value of 0.5 would capture one frame every 2 seconds (1 sec / 2 = 0.5). This is how you set the time lapse interval. For a 10 second interval, you’d use 0.1. You can also input this value as a fraction, so -r 1/2 would also give 1 frame every 2 seconds, and -r 1/5 would give one frame every 5 seconds.
- -f image2 = specifies that images will be captured as bmp, jpg, or png, depending on how you specify the filename (shown next)
- image%04d.jpg = output file name pattern. In this case it will produce jpg files, but you could specify .png or .bmp.
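The relationship between -r and the capture interval is just rate = 1/interval, which you can sanity-check at the shell (this is plain awk arithmetic, nothing ffmpeg-specific):

```shell
# Convert a desired time lapse interval (seconds between frames)
# into the fps value to pass to ffmpeg's -r option: rate = 1/interval.
interval=2
awk -v i="$interval" 'BEGIN { printf "%.1f\n", 1/i }'   # 2 s interval -> 0.5
interval=10
awk -v i="$interval" 'BEGIN { printf "%.1f\n", 1/i }'   # 10 s interval -> 0.1
```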
The output filename pattern is always the final argument to the command. If you want to put images in a directory besides the current directory, you can specify a full path like d:\temp\timelapse\image%04d.jpg, but all of the directories in that file path need to exist before you start.
image%04d may require some explanation. ffmpeg is going to produce output file names based on this pattern, where the start of every filename will be image. You could change that part to whatever you want. The important part is the %04d, which tells ffmpeg to create a sequential number for each new file, starting at 1 and counting up. In this case it will pad that number with leading zeros so that it is 4 digits long. The first few output filenames would then look like image0001.jpg, image0002.jpg, image0003.jpg, and so on.
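The %04d notation is the same one used by printf-style formatting, so you can preview the sequence the pattern will generate right at the shell:

```shell
# Preview the filenames the image%04d.jpg pattern produces:
# a counter padded with leading zeros out to 4 digits.
for n in 1 2 3 10 100; do
    printf 'image%04d.jpg\n' "$n"
done
# prints image0001.jpg, image0002.jpg, image0003.jpg,
# image0010.jpg, image0100.jpg
```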
That format becomes extremely useful later on when you try to assemble these images into a time lapse movie, since ffmpeg uses that sequential numbering to decide how to order the frames of the movie. If you expect to capture a lot of images, you can change the number of digits in the filename. For example, if you expect to capture between 10,000 and 99,999 images, you would use %05d, and you could increase that to %06d or larger if you expect to capture more images. There are ways to do fancy things like insert the date and time of image capture into the name automatically, but if your goal is to assemble the images into a movie you really want the sequential numbering. For posterity, here's a command that would tag the filename with the date and time:
[code]ffmpeg -t 100000 -f dshow -s 640x480 -i video="Logitech Webcam C210" -r 0.5 -f image2 -strftime 1 "%Y%m%d_%H%M%S.jpg"[/code]
Sometimes your webcam needs to be placed in a weird orientation, or has to shoot using a mirror. There are video filter options that can automatically flip the captured images before saving them so that your time lapse images look sensible (this can also be done when assembling the movie later).
[code]ffmpeg -t 100000 -f dshow -s 640x480 -i video="Logitech Webcam C210" -r 1/2 -vf "hflip, vflip" -f image2 image%04d.png[/code]
In this case, I have added the option -vf "hflip, vflip", which invokes video filters (vf). This does a horizontal flip of the image, and a vertical flip of the image, before saving it to disk. Because this uses two different filters, they are separated with a comma.
Assembling the time lapse movie
Once you have a folder full of sequentially numbered images, ffmpeg can assemble them into a movie file for you (see the wiki as well). The simplest command to make a movie would be something like this:
[code]ffmpeg -framerate 30 -i image%04d.jpg -c:v libx264 -r 30 outputfile.mp4[/code]
Let’s break down those options:
- -framerate 30 = This is the input frame rate. It effectively says that you want each input image to appear on screen for 1/30 of a second. If you wanted each input image to appear on screen for a whole second, you'd use -framerate 1. The default is 25. If you select a slower input frame rate than the output frame rate, ffmpeg will simply duplicate the input image appropriately so that it stays on screen for the desired amount of time.
- -i image%04d.jpg = ffmpeg will look for input images that match this filename pattern. As with the discussion above on filename patterns, the %04d tells ffmpeg to look for a 4-digit number in the filename. If you used %05d to generate your captured images, you'd want to use %05d here as well.
- -c:v libx264 = This specifies the video encoding to be used. libx264 is a good default choice for high quality and low file size, and should be readable by all sorts of video players. It may or may not work in some versions of PowerPoint on some operating systems, so check that before you get too excited about adding video into your presentations. It is possible to output as a wmv file or other formats, so search the web if you need a different format.
- -r 30 = specifies the framerate of the final movie. 30 frames per second is fairly standard, 24 fps is also common, and you could do something like 48 or 60 if you have special needs.
- outputfile.mp4 = the filename and format. As before, you can specify a different directory path to put the file into, otherwise it will appear in the current working directory. The .mp4 format should work on most web and mobile devices, and uploads to Youtube just fine.
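Before assembling, it's worth doing the arithmetic on how long the finished movie will be: movie length = (number of captured images) / (output framerate). For example, the one-hour capture shown earlier, at one frame every 2 seconds, gives 1800 images, which plays back in 60 seconds at 30 fps:

```shell
# Movie length = captured images / playback framerate.
capture_seconds=3600    # the -t 3600 from the capture command
interval=2              # one frame every 2 seconds (-r 0.5)
fps_out=30              # the -r 30 from the assembly command
images=$((capture_seconds / interval))
echo "$images images"                       # prints 1800 images
echo "$((images / fps_out)) second movie"   # prints 60 second movie
```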
Starting from a different image number
Sometimes the first few frames of your timelapse will be unusable, maybe because they're really washed out, or you just decide that nothing interesting happened until much later in the capture process. You can specify a starting file number as part of the command to assemble the movie. If the first image file has a number of 4 or less, ffmpeg will find it automatically, but if you decide you want to start assembling the movie using an image number larger than 4, use the -start_number option:
[code]ffmpeg -framerate 30 -start_number 25 -i image%04d.jpg -c:v libx264 -r 30 output.mp4[/code]
In the above example, ffmpeg will look for an input image called image0025.jpg and start incrementing input file names from that point. The output movie file will be called output.mp4 in this case.
Changing the output resolution
If you captured a fairly high resolution series of images (1280x720, maybe 1920x1080) but want a smaller output file size, you can specify a different output image size. For example, let's say your time lapse images are 1920x1080 pixels, but you want a much smaller 640x360 movie. Use the -s:v option to set the output size:
[code]ffmpeg -framerate 30 -i image%04d.jpg -s:v 640x360 -c:v libx264 -r 30 outputfile.mp4[/code]
In this case, you’re responsible for picking a resolution that matches the aspect ratio of the input files, otherwise you’ll end up with a stretched or compressed image.
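If you want to check that arithmetic, the output height that preserves the aspect ratio is output width × (input height / input width):

```shell
# Matching output height for a scaled-down width:
# out_h = out_w * in_h / in_w (integer math is fine for common sizes).
in_w=1920; in_h=1080
out_w=640
out_h=$((out_w * in_h / in_w))
echo "${out_w}x${out_h}"    # prints 640x360
```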
Tweaking the movie quality (and file size)
If you want to make sure the quality of your final movie is as high as possible, and are willing to deal with longer encode times and potentially larger file sizes, you can invoke the -preset and -crf options. See the wiki for more info.
[code]ffmpeg -framerate 30 -i image%04d.jpg -s:v 1280x720 -c:v libx264 -preset slow -crf 18 -r 30 output.mp4[/code]
There are various -preset speeds like slow, veryslow, medium, etc. that affect the processing time. The -crf values run from 0 to 51, and 23 is the default. Going to a smaller crf value will increase the output file size, but also increase the quality of the final product. A value of 23 is probably acceptable for most of what you do, and there are diminishing returns in perceptible quality once you go below 18.
Capture a video directly
In case you simply want to capture video from the webcam, you can use a command like this (see the wiki as well):
[code]ffmpeg -f dshow -video_size 1280x720 -framerate 30 -i video="Microsoft LifeCam Cinema" -vcodec libx264 -b:v 15M output.mp4[/code]
Some of those options include:
- -video_size 1280x720 = This is one of the resolutions available for the Microsoft LifeCam Cinema.
- -framerate 30 = The maximum framerate the LifeCam Cinema can put out. You may not get a full 30 frames per second in some cases.
- -i video="Microsoft LifeCam Cinema" = the name of the video input device. As currently written, this command would not capture any audio. You could specify an audio input here as well, using the list of devices produced by the ffmpeg -list_devices true -f dshow -i dummy command up at the top of this article. In this case you would modify this argument to read something like -i video="Microsoft LifeCam Cinema":audio="Desktop Microphone (4- Cinema -"
- -vcodec libx264 = Encode the video with the h264 codec.
- -b:v 15M = Set the bitrate for the video output to 15 megabits per second (15,000 kbits/sec). This will produce a large, high quality file. You may wish to reduce this to something much smaller like 2M, or just eliminate this argument altogether and use the ffmpeg defaults.
- output.mp4 = As usual, the last item is the name of the output file to write the movie to.
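To get a feel for how fast a 15 Mbit/s capture fills a disk, a rough file size estimate is bitrate × duration / 8 (again just shell arithmetic, not an ffmpeg feature):

```shell
# Rough file size for constant-bitrate video:
# size in megabytes = bitrate (Mbit/s) * duration (s) / 8 bits per byte.
bitrate_mbit=15
duration_s=60
awk -v b="$bitrate_mbit" -v t="$duration_s" \
    'BEGIN { printf "%.1f MB per minute of video\n", b * t / 8 }'
# prints 112.5 MB per minute of video
```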
Re-encode as a Windows wmv file
If you're concerned about having your video play on a Windows machine, particularly in a PowerPoint presentation on someone else's computer, your safest bet is probably to use a wmv file type. You can use ffmpeg to convert your mp4 or avi or whatever to a wmv file.
[code]ffmpeg -i input.mp4 -qscale 2 -vcodec msmpeg4 -acodec wmav2 output.wmv[/code]
In that command, you find the following:
- -i input.mp4 = the name of the input video.
- -qscale 2 = this forces a high quality re-encoding, but you could use a larger value (up to 31) to make the output file smaller (and lower quality).
- -vcodec msmpeg4 = specifies that the output video codec should be Microsoft's msmpeg4 format.
- -acodec wmav2 = specifies that the output audio codec should be Microsoft's wma format.
- output.wmv = the name of the output file, with a wmv file extension.