Hacker Newsnew | past | comments | ask | show | jobs | submitlogin
How to trim video clips instantly without reencoding (bernd.dev)
277 points by verst on April 4, 2020 | hide | past | favorite | 111 comments


Even more instantly, if your file is not in a "fixed" container format like MP4 or AVI but one designed for streaming use like an MPEG TS or PS, you can simply cut a (suitably large) range of bytes from it using any generic file manipulation tool; on POSIX-like systems, head and tail would work.

After all, it is called a stream for a reason... while there's definitely a lower limit to how small you can cut out a piece and have it still decode, as far as I know, all common audio and video codecs are designed with sync codes and "markers" so that a decoder can easily find a valid data block to start decoding from. Something like a broadcasted signal from a TV channel has no start nor end, so decoders need to be able to just jump in at any point and start decoding.
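As a sketch of the byte-range approach (offsets here are made up; in practice you'd pick them generously around the region you want):

```shell
# Keep roughly 100 MB starting at byte 50,000,000 of a transport stream.
# tail -c +N outputs from byte N onward (1-indexed); head -c M keeps the
# first M bytes of that. The decoder resyncs at the next sync marker.
tail -c +50000001 recording.ts | head -c 100000000 > clip.ts
```

The edges are ragged (a few garbage frames until the next keyframe), but the clip plays.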


This is essentially how we broke apart long streams into manageable archive files at Justin.tv. You can get slightly better decoder behavior if you’re careful about where you make the cut: some frames are encoded as a difference from the previous frame and others are self-contained. If you cut before one of the self-contained frames, the decoder can start immediately instead of throwing away some data.


Yes, indeed. I worked on a small project of mine and needed a really efficient and cheap way to serve terabytes of data to a lot of users, fast. I was poor and young (15 years old).

I re-created (I haven't invented this, obviously) a way to split the video file into keyframe segments and mark down the start byte offsets of the keyframes, and then I could "virtually" split the file for streaming, so that a user wouldn't 1) buffer the whole file, 2) need to have the whole file to share to others (P2P in the browser), 3) need to restart the stream and sharing if the connection broke.

This could've obviously been done with HLS or DASH, but that required remuxing the files and keeping lots of them. I instead remuxed files into TS container, indexed all the files, made JSON manifests, and had a network of reverse proxy "CDN" servers on continents that would pull the files from a few central servers, cache the small virtual chunks, which were created by reading byte offsets from the file and serving it as a "file" with PHP.
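The byte-range slicing itself is trivial; here's a shell sketch of the same idea (the original setup used PHP, and the filenames and offsets here are made up):

```shell
# Byte offsets of keyframe i and keyframe i+1, as recorded in the index.
START=50000000
END=52000000
# dd with bs=1 copies an exact byte range (slow but portable; GNU dd can
# do the same faster with iflag=skip_bytes,count_bytes).
dd if=movie.ts of=chunk.ts bs=1 skip="$START" count=$((END - START)) 2>/dev/null
```

Each such chunk starts at a keyframe, so a player (or peer) can begin decoding from it immediately.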

In the end, the project collapsed due to non-technical issues, my losing interest, and my moving on to more legitimate and useful things. I then resold the technology a few times to people who wanted something similar but wouldn't have the issues I had; then I moved on and forgot it all.

It's history now; it made my career in some way. At least from my side, it was innovative for its time, and it made me nerdy-cool in school, both with the other kids and the school staff.

-

If someone wants to know a bit more, read this: https://www.theverge.com/2015/10/21/9585984/browser-popcorn-... (It's cringe, I was young and stupid, LOL)


Are you the author of popcorn time, or did you just operate a popcorn time web service?


Just the web service. Thankfully 5 years ago, so the statute of limitations in Serbia expired.


Very cool! I was in particular dealing with MP4s that also had subtitle tracks that I wanted to preserve.


Nice one. The key is the `-c copy -map 0` option which preserves the same codecs and stream metadata. This makes the edit operations nearly instant too.
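Spelled out, a trim with those options might look like this (timestamps are placeholders):

```shell
# Copy a 30-second clip starting at 1:30. -c copy keeps the original
# codecs (no re-encode); -map 0 keeps every stream from input 0
# (video, audio, subtitles) rather than just the defaults.
ffmpeg -ss 00:01:30 -i input.mp4 -t 00:00:30 -c copy -map 0 output.mp4
```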

While on the topic of ffmpeg uses, here is a script I posted that speeds up videos by a factor of 1.5: https://news.ycombinator.com/item?id=22584131 (use case: record an unscripted screencast or demo with lots of pauses and uhms, then speed it up to make it look like you are super well prepared and caffeinated --- hidden bonus: if demoing a software product, it will appear to be 33% faster!)
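The linked script aside, the core of a 1.5x speed-up in ffmpeg is roughly this (a sketch; note this one re-encodes, unlike the `-c copy` trim):

```shell
# setpts divides the video timestamps by 1.5 (shorter duration);
# atempo speeds the audio up 1.5x while preserving pitch.
ffmpeg -i screencast.mp4 -vf "setpts=PTS/1.5" -af "atempo=1.5" faster.mp4
```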


VLC can speed video or audio up/down on the fly, preserving audio pitch. I'm using it with audiobooks and old movies, and also occasionally (without pitch lock) to turn hard house into sweet groovy house or breakcore into playful idm, as suggested by RDJ himself—see https://youtube.com/watch?v=5yBvP3616Wc and https://youtube.com/watch?v=aWqf17mUyoQ

Btw, for some reason ffmpeg doesn't do well with changing the speed of music—the result sounds poor compared to VLC. I found that `sox` gives better quality.
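For the curious, the sox invocation is a one-liner (a sketch; the `tempo` effect time-stretches without shifting pitch, whereas `speed` changes both):

```shell
# Play back 1.5x faster, same pitch.
sox input.wav output.wav tempo 1.5
```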


Ah, and I neglected to note that VLC does the speed trick both on desktop and Android. On a phone, it's not quite perfect for audiobooks due to some hiccups in the controls, but the versatility is in its favor.


Mpv too


Plus `mpv` has awesome keyboard controls in case you need to skip back and return to normal speed, or loop a segment etc.


Lots of love to mpv in general because it's an awesome tool, but VLC also has hotkeys for those things, and I'm pretty sure VLC's feature set is much larger than mpv's while still being super lightweight.

Found this page, but I don't think that's even all the hotkeys available: https://wiki.videolan.org/QtHotkeys/


Same here but for what it does support, mpv is the beast.


Make gabber slower again


You might like this compilation: https://www.youtube.com/watch?v=WsYQcHrqRs4

Especially at 0.85 speed without pitch lock. YouTube can't do that, so you'll have to use an app, like NewPipe on Android, or download the thing.

As for Soulwax, don't miss their earlier mix of new-beat: https://youtube.com/watch?v=4XUipCxjmmw


I don't know what it's called, but they used to start radio shows some seconds (or minutes?) before broadcasting them. Some technology was used to skip the tiny parts where the host was silent (breathing or going uhhhmmm).



I haven't tried speeding up TV before, I think that would ruin some of what I like about TV (and I use Plex which doesn't have that option) but podcasts I speed up with Overcast and I've been watching a little more on Youtube (news, tutorials, etc) lately and I've really used the playback speed options a lot. Normally 1.5 is fine as long as I'm paying attention. Sometimes if I'm trying to get something down word-for-word I'll slow it down to .5 so I can keep up (I guess I'm not a very fast typer). I love being able to control it so easily.


Thanks! I'll check out your speed up trick. I also discovered the ffmpeg reddit today.

I've been playing around a bunch with FFmpeg these last few days because I thought it was ridiculous I had to manually perform certain tasks in ScreenFlow which appeared entirely programmable.

I have another more interesting post on subtitles / captions, but that is for a more niche audience.

Pretty interesting how H.265/HEVC Hardware encoding differs across platforms and how software encoding produces much smaller files at the same quality.


> Pretty interesting how H.265/HEVC Hardware encoding differs across platforms and how software encoding produces much smaller files at the same quality.

Consumer hardware encoders are more constrained than software in what they can do, and thus are usually designed to optimise what they are best at: encoding speed and power efficiency.


Am I the only one to have stability problems when playing h265 with VLC on Windows?


I have no problems. I'd check your GPU drivers.


My friend has been using ffmpeg for trimming game clips for a while. His workflow is: opening a video in a video player, making note of start and stop times, and plugging the times into his ffmpeg script.

Avidemux is a FOSS video editing tool I ran into that is built on ffmpeg. Since this workflow requires a human to manually evaluate points in time in a video, a single tool seems like a better choice.

https://en.wikipedia.org/wiki/Avidemux


I wrote a graphical tool with Lazarus that invokes ffmpeg. I used it to create clips to report cheaters in an online game, from videos created with OBS.

A word of warning: ffmpeg can't always cut where you want. Sometimes cutting at a certain point gives you a few blank (black screen) seconds. I guess it's because codecs work by storing what's different from previous frames.

If someone uses Lazarus (or maybe Delphi) it's as simple as:

    s := Format('"C:\Program Files\ffmpeg\bin\ffmpeg.exe" -i "%s" -ss %s -c copy -t %s "%s"',
      [InputFile, Start, Duration, OutputFile]);
    Memo1.Lines.Text := s;
    WinExec(PChar(s), SW_SHOWDEFAULT);

With the parameters taken from TEdit controls. The TMemo was used to visualize what I was sending to ffmpeg.


That’s where .net (if you use windows) is super useful. You can create a wpf app with a video player and some shortcuts to fast forward or mark timestamps in a few lines of code. I did that to solve a similar problem (and use ffmpeg to execute the editing too - extract segments and remerge them).


There's a great ffmpeg GUI for doing the same thing (and more) called LosslessCut [0].

https://github.com/mifi/lossless-cut


Not obvious from the front or releases page (without clicking around), but this is a multiplatform app: Linux/Windows/Mac.


Neat, but that's feature creep if I've ever seen it. I opened it expecting something clean and simple but that screenshot looked almost like Premiere.


yeah, the first impression is a bit cluttered/overwhelming, especially if you're just looking to do a very basic trim with a preview. but a simple tutorial with some annotated screenshots could fix that.


There is an interesting GUI tool that is supposed to allow you to do this with frame accuracy by re-encoding the section of the cut video before the first keyframe, and merging it with the losslessly copied section. However unfortunately in practice I have found it so unreliable as to be useless. I wonder if there is a similar better tool for this.

https://github.com/ozmartian/vidcutter


AviDemux [0] works reasonably well for this.

As usual it fails to calculate exact frame timings, which means the cuts may not end up exactly where the GUI is telling you (even if you take care to define the cut start at a keyframe).

[0] http://avidemux.sourceforge.net/


Avidemux is great for what it is but doesn't make any attempt to allow frame-exact cutting by partial reencoding afaik, which is what I found unique/interesting about VidCutter


I made a script to do this a while back for cutting intros off of videos.

https://gist.github.com/iwalton3/c034ec5a942466206fbee859184...

The notable function that does video cuts without a full re-encode is shortenVideo, which cuts a certain number of seconds off the beginning of a video. I've never had any issues with it, but I only used it on h264 mp4 files.


I tried installing VidCutter on Windows. When I enabled the re-encode mode, trying to render a video made the program stuck with no CPU activity, and did not encode a video at all.


Yep I had that recently. I tried it another time before that and then it decided to spawn dozens of ffmpeg instances at once basically forcing me to reset my PC. It's a good idea but just not implemented well it seems.


Speaking of FFMPEG, anyone have a command-line they can share for a good de-shake filter? (Like the kind that keeps the subject in the center by rotating/moving the frame if needed?)

I've Googled and tried this so many times but there are so many filters and knobs I half-understand (at best) that I haven't really managed to find one I actually like in practice, but I feel like it must be possible given that people post videos online that do this extremely well.


I used this to stabilize a bunch of vacation videos with great results. It works better than the deshake filter but does require two passes.

   function stabilize {
       tempfile=".temp$RANDOM$RANDOM.trf"
       ffmpeg -nostdin -loglevel error -y -i "$1" -vf vidstabdetect=shakiness=5:show=1:result="$tempfile" -f null -
       ffmpeg -nostdin -loglevel error -y -i "$1" -vf vidstabtransform=input="$tempfile",unsharp=5:5:0.8:3:3:0.4 -movflags +faststart "$2"
       rm "$tempfile"
   }
Usage: stabilize infile outfile.mp4


Thank you! I'll give it a try.

Update: Just tried it. It seems to do some kind of blurring to smooth the vibrations over several frames instead of trying to cancel the shaking by inverse-transforming the frame with respect to the shake. I think I've tried this in my experiments before -- and I guess it's the difference between "de-shake" and "stabilization"? Sadly, it doesn't really seem better to my eyes, but thanks for the help anyway.


It works by applying a low-pass filter on the camera's motion, so a sudden kick turns into a slow, low amplitude bob.

Perhaps you just need to increase the smoothing parameter. I found the default of 10 way too low, and needed around 50-100 for my very shaky 50 fps home videos.


It does have a mild blur filter which you can remove, but if it isn’t actually stabilizing the video from shakiness, something is wrong.

See here: https://ffmpeg.org/ffmpeg-filters.html#vidstabdetect-1

Maybe your ffmpeg build doesn’t have it? Also try removing the log level parameter.


It is stabilizing, just not in the way (or as much as) I imagined; that's the issue. I tried to describe it; I'm not really sure how else to. But it's not really the same kind of output I'm expecting from a deshaking filter. I'd expect black margins etc. to bleed in, for example, but I don't see that here.


> This filter generates a file with relative translation and rotation transform information about subsequent frames, which is then used by the vidstabtransform filter.

The first pass is meant to find the rotation, then I assume the next pass cancels it out.


Maybe my terminology is wrong, but what I'm talking about is basically what you see with the biker and pen here, or the washing machine at 3:44:

https://www.youtube.com/watch?v=I6E6InIQ76Q&t=9s

They stay stable in the center whereas the rest of the frame is transformed. This necessarily requires introducing black/white crop frames into the image with all kinds of shapes and sizes, but it turns out incredibly smooth and doesn't lose any of the frame. It also requires no noticeable blurring at all from what I can tell. But that doesn't seem to be quite what's happening with these commands though.


I believe what you're looking for is what Reddit's popular bot u/stabbot does.

I was able to find this comment thread where a user posted some ffmpeg scripts to replicate the behavior. Am on mobile currently so I can't verify they do what you're looking for but here's a snippet.

    // PART 1 [Defaults: shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=0]
    ffmpeg -i shaky-input.mp4 -vf vidstabdetect=shakiness=5:accuracy=15:stepsize=6:mincontrast=0.3:show=2 dummy_crop.mp4

    // PART 2
    ffmpeg -i shaky-input.mp4 -vf scale=trunc((iw*0.90)/2)*2:trunc(ow/a/2)*2 scaled_crop.mp4

    // PART 3 [-strict -2 ONLY IF OPUS AUDIO] - [Unsharp default: '5:5:1.0:5:5:0.0']
    ffmpeg -i scaled_crop.mp4 -vf vidstabtransform=smoothing=20:input="transforms.trf":interpol=no:zoom=-10:optzoom=2,unsharp=5:5:0.8:3:3:0.4 stabilized_crop-output.mp4

https://www.reddit.com/r/stabbot/comments/9f7ayj/comment/e5x...


Stabbot is open source. It's written in Python and calls out to ffmpeg with vidstabdetect/transform: https://gitlab.com/juergens/stabbot

I don't think vidstab can do exactly what's being asked for here, I've never seen it rotating the frame completely to keep a rotating pen stabilized, or move the frame inside a larger canvas to keep a moving subject centered. I think you have to use video editing and do that manually.


Yeah that kind of looks like it. Thank you! I'll try it out.


Despite the above example apparently applying an actual blur filter, you might always get a bit of blur on videos in general because of the motion blur inherent in the source, which becomes more obvious once the motion is removed. You can't really do much about that except refilm the video with a shorter shutter speed.


You can also reuse previous and following frames instead of using solid color borders


I usually do it with VirtualDub [0] + deshaker [1]

Not command line though.

[0] http://www.virtualdub.org/

[1] https://www.guthspot.se/video/deshaker.htm


I'm sure it is possible, but YouTube has a built in one you can select which may explain why lots of online videos have it even if they aren't using pro software.


Oh wow I see, thanks. That would explain a lot...


I think vidstab is the only option for ffmpeg and it doesn't seem that great. I recently compared it to Premiere and Premiere was quite a bit better. I would also love a top tier option in ffmpeg.


It seems that Blender can do this 2D motion tracking well [1]. FFmpeg has a filter for this as well [2].

I haven't tried either approach.

[1]: https://www.youtube.com/watch?v=nU8zqn091rM

[2]: http://blog.gregzaal.com/2014/05/30/camera-stabilisation-wit...


Blender will allow you to apply a transformation to stabilize the footage, though you'll need to crop the final output to remove the blank areas around the footage, since it's moving around.

What Blender won't help you with (I haven't found it at least) is removing the motion blur that happens because of the camera shake. Which is fine, Blender is not a post-production tool, but worth knowing that if you want to stabilize footage, you probably need Blender + insert-favorite-post-production-tool.


If you want scriptability, go for Avisynth. It's complicated, but very powerful.


I always use Avisynth and Virtualdub with Deshaker


The problem with this is that it can only cut on I-frames, which depends on how your file was encoded. A lot of cameras encode around 1 I-frame per second, so you cannot accurately cut your video from, say, 0.5 to 2.5; you must cut from 0 to 3. This may or may not be a big deal for your specific needs.


There's very little inaccuracy at the outpoint, so it would be 0 to 2.5.


What happens when the start time doesn't match up with a keyframe? Presumably some re-encoding has to happen there?


Don’t quote me on this, but I think ffmpeg lets you choose between starting at the nearest Iframe or re-encoding the first GOP.


I believe by default it snaps to the nearest iframe. At least that's been my experience.


Yeah, my understanding is that it's actually very hard to get ffmpeg not to do this, which is why I'd be interested in understanding exactly what this link is supposedly doing. I believe it's almost certainly I-frame snapping. If anyone knows of any way to enforce re-encoding up to the first I-frame in the copy, I'd love to hear it.


No, it is possible, but the -ss needs to come after the -i, not before:

https://ffmpeg.org/ffmpeg.html#Main-options

> -ss position (input/output)

> When used as an input option (before -i), seeks in this input file to position. Note that in most formats it is not possible to seek exactly, so ffmpeg will seek to the closest seek point before position. When transcoding and -accurate_seek is enabled (the default), this extra segment between the seek point and position will be decoded and discarded. When doing stream copy or when -noaccurate_seek is used, it will be preserved.

> When used as an output option (before an output url), decodes but discards input until the timestamps reach position.

It is much slower when used as an output option, and I don't really know what it does when you do stream copy, but I suspect it leaves the whole segment with some sort of offset instruction. I often get artefacts when I play it back in VLC, so I try to avoid that option (unless I am re-encoding).
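For reference, the two placements side by side (placeholder times; behavior as described in the docs quoted above):

```shell
# Input seeking: fast; with -c copy the cut snaps to the nearest keyframe.
ffmpeg -ss 30 -i in.mp4 -t 10 -c copy cut_fast.mp4

# Output seeking: ffmpeg processes everything up to 0:30 and discards it --
# much slower, but frame-accurate when re-encoding (no -c copy here).
ffmpeg -i in.mp4 -ss 30 -t 10 cut_exact.mp4
```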


It's slower when used as an output option because it only has an effect on ffmpeg's output; that is, ffmpeg literally has to decode the whole stream up to that point because it doesn't know it can seek.

I believe this has no effect on whether or not you can cut precisely - if transcoding is not enabled, even if ffmpeg is sticking the frames before the first keyframe in the output file, they're effectively garbage from the point of view of the decoder (which might explain why you see artifacts in some players).


This indeed does snap to keyframes when input seeking is used. On output seeking, it cuts the video at the exact time, but there isn't enough information to rebuild the image as it's still a truncated copy of the stream.

It's possible to reencode a (h264-encoded, at least) video up to the first keyframe; it's discussed in this StackExchange submission: https://video.stackexchange.com/a/23542

Coincidentally, I refined the above approach very recently and posted the result to my own blog: https://csrd.science/blag/2020/trimming-an-h264-video-and-pr...

The gist is to extract and reencode the segment containing the group of pictures to be cut (using the stream segment muxer), then rejoin the subsequent segments and remux the trimmed audio back in afterwards.
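The segmenting step mentioned there can be sketched with the stream segment muxer (filenames hypothetical):

```shell
# Split losslessly into ~10-second pieces, cutting at keyframes;
# -reset_timestamps makes each piece start at t=0 so they rejoin cleanly.
ffmpeg -i in.mp4 -c copy -f segment -segment_time 10 -reset_timestamps 1 seg_%03d.ts
```

Only the first segment (the one containing the cut point) then needs re-encoding; the rest are stream-copied back together.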


Nice work! Would definitely like to see the complete write up and code you talk about at the end, especially if you manage to test it on mkv or mp4 containers.


There is no provision for partial stream encoding. It's either complete streamcopy or encoding.


I have used a similar command, maybe the same thing, and had an issue where audio didn't sync to video in some cases.

It was an instant processing thing but I don't remember for sure if it was the same command.


It will just start at the closest keyframe I believe.

Certainly if the duration specified doesn't land you on a key frame it also ends at the closest keyframe.


As always, ffmpeg is terrific, but unfortunately the solution given here is not frame-exact editing, so to me it is just another quick-and-dirty solution when a re-encode is not desired, sort of like avidemux in that regard.


Absolutely. If you need something precise and quality, this isn't the tool. It's definitely for a very quick chop in the command-line, when you want to share some 30s clip in a 2 hour video with a friend or something.


A friend = lots of devs

Screencasts don't need frame precision fortunately. A set up that isn't completely polished also feels more authentic.


Agreed. The post was written for an audience that doesn't care about exact frame editing.

Would the following work?

    # Calculate the exact start time (in s.msec format) via:
    # START = start_frame_number / frames_per_second
    ffmpeg -ss $START -i video.mp4 -c copy -map 0 -frames 1400 output.mp4


Afaik ffmpeg requires a specific option to disable snapping to key frames. Don't remember which, but it can be easily found with a web search.


You may want to use -noaccurate_seek


Note that ffmpeg isn't super fast when you want to cut a file into several pieces, mostly because you'll have to start it over and over for each piece (unless you delve into some black magic with its filter graph, which is beyond my patience).

For splitting an audiobook into pieces, so that you can use the seek bar sanely, `mp3splt` can be used in one run pretty much at the disk's speed. Its format language for metadata is atrocious, but thankfully putting the result into a script frees me from revisiting the horror.

Here's the command I use to cut into 10-minute pieces without a 1-minute piece hanging at the end, and to name and title the pieces sequentially—so you can be spared the ordeal:

mp3splt -t '10.00>2.00' -o '@f @n2' -d . -g 'r%[@o,@N=1,@t=#t @N]' "$1"

—where $1 is the source file, and `.` is the current directory as the destination.

IIRC it also comes with oggsplt and flacsplt—but no such luck for aac or ac3.


With ffmpeg, if you use -ss in input mode (i.e. before the -i), it looks for the nearest keyframe before the position but doesn't scan the full video, so running it multiple times, once per segment, shouldn't take much more time than a one-pass tool (and it gives you the opportunity to do the segments in parallel if you can afford the I/O).


Alas! I turned to mp3splt after having rather, erhm, prolonged experience with ffmpeg, and found the difference to be night and day. IIRC ffmpeg also utilized the CPU considerably.


I've been using Avidemux for this purpose for years now, very nice open source tool. It can trim any video from either end without re-encoding. I've also been using it to remove sound from videos without re-encoding too.


  ffmpeg -ss $START -i $INFILE -c copy -map 0 -to $END $OUTFILE
This is functionally the same as

  ffmpeg -ss $START -i $INFILE -c copy -map 0 -t $END $OUTFILE
FFmpeg normalizes input timestamps unless told otherwise via -copyts. So, in the first command, output timestamps start from 0, effectively making -to a duration limiter.

Both -ss and -to should be on the same side of the input for the (close to) expected result.
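An untested sketch of the -copyts variant described above: with input timestamps preserved, -to refers to the source timeline instead of acting as a duration.

```shell
# Timestamps survive -ss, so -to 90 stops at 1:30 of the ORIGINAL file,
# yielding a ~30-second clip (60s..90s) rather than a 90-second one.
ffmpeg -ss 60 -i in.mp4 -copyts -to 90 -c copy out.mp4
```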


If you have a Mac, you can also do this with QuickTime's Trim option. It lets you adjust the trim points easily and you can then click Save (not export) to trim the file losslessly.


I’ll have to try this! I was under the impression that it was lossy in QuickTime X. I’ve been going back to QuickTime 7 for this option (which has apparently finally stopped working in Catalina, but the Apple downloadable version originally released for Snow Leopard still works fine under Mojave!). I’ve been using QuickTime 7 Pro to do all kinds of things FFMPEG can do: not only cropping losslessly but also removing, extracting and changing the sound stream. It’s simpler for me than having to remember the commands :)


Also concat videos one after the other!


They added this functionality, or at least something similar, to Quick Look on MacOS Mojave. It’s very useful.

https://support.apple.com/en-ie/guide/mac-help/mh14119/mac


> A huge time saving over workflows involving video editors.

VirtualDub and Avidemux are (simple) video editors that do it exactly the same way.


Yea, came to post that Avidemux is this with a simple UI.


Similarly, jpgs are encoded in blocks, so as long as you cut along block lines, you can crop a jpg without decoding or re-encoding.


Are there any tools that make this easy (i.e. not manually editing the binary)?


I remember reading about one...

Here’s a simple command-line tool: http://ben.com/jpeg/

This one has a UI, but no screenshots of it (scroll down to jpegcrop), and it also mentions lossless rotation, which I haven’t investigated: https://jpegclub.org/

This page explains a bit more about lossless operations and mentions that Irfanview has them—it’s a great image viewer for that I use, and I didn’t even know: https://www.impulseadventure.com/photo/lossless-rotation.htm...
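The lossless crop those pages describe is typically done with jpegtran from libjpeg; a sketch, where the crop offsets should land on MCU boundaries (usually multiples of 8 or 16 px):

```shell
# Losslessly crop a 640x480 region starting at (16,16); -copy all keeps
# the metadata. Add -perfect to fail rather than trim edge blocks lossily.
jpegtran -crop 640x480+16+16 -copy all in.jpg > out.jpg
```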


Is anyone aware of how popular tools do video trimming (and JPEG cropping)?

I have been looking for lossless (re-encode-less) trimming of videos since the end of the 90s and always just found huge video editing tools that never had these features. And then you were stuck with some CLI tools where you need to count the number of frames or milliseconds or something like that. As if a WYSIWYG tool isn't what most people would want to use.

How does the native iOS photo editing tool handle video trimming? And photo cropping? iMovie? And what about google android tools? Or popular Windows tools?

I used to crop my photos with XnView, which supports lossless cropping. And I'm always puzzled this hasn't really taken off in other popular tools.

Lossless crop of photos and lossless trim of videos should always be included as a feature.

Reencoding sucks.


Most video editing software (that I've ever used) supports making clips or setting in/out points in longer videos. This doesn't necessarily make new trimmed clips as files on disk because that's expensive (storage and computation) depending on the video's codecs. Editors also expect to export a wholly new output from source so they don't need to make those intermediate clips. It's like pass by reference instead of pass by value.

On iOS the trimming is done losslessly, when you trim a clip it basically does the same as what `ffmpeg` is doing here. It seeks to the new start time and copies all the GOPs (group of pictures) to the new end point. The camera records video with really short GOPs so the trimming can be pretty accurate. Only if you apply filters or crop the dimensions of the video will the trimmed clip be reencoded. Any iOS software using AVKit can do the same lossless trims, I imagine most editors on iOS do.

When doing rotation I know the Photos app just changes the JPEG rotation flag in the EXIF data. If you send the raw photo to something that ignores or doesn't understand EXIF rotation you'll see just the sensor's default orientation. I believe HEIF works the same way (the container has a rotation atom). When exporting an image (reencoding for sharing or after editing) it will bake in the rotation to the image data, actually performing the rotation to the image.


Great feedback. Thanks a lot.


When you use ffmpeg on macOS, you can save a lot of re-encoding time (an order of magnitude for H.264) by using videotoolbox, the Apple-supplied hardware acceleration backend.
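A sketch of what that looks like (the encoder name is real; the bitrate is a placeholder, since videotoolbox is bitrate-driven rather than CRF-driven):

```shell
# Hardware H.264 encode on macOS; -c:a copy leaves the audio untouched.
ffmpeg -i in.mov -c:v h264_videotoolbox -b:v 6000k -c:a copy out.mp4
```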


If you want a GUI on top of it, you can do the same thing in VirtualDubMod for many file formats using the "Direct Stream Copy" option:

http://www.digital-digest.com/articles/virtualdubmod_cutavi_...

Admittedly it's aging a bit and doesn't support a lot of newer formats, but I still find it useful. As much as I enjoy the command line, having the GUI is nice to mark in & out points and do the trimming all in one spot; otherwise you'd still have to open a video player to figure out where to start and end.


With FFmpeg builds being produced for browsers[1], it'd be great to see this working in a web app. I tried making one a few years back, but it was a little early. It only worked in Chrome, on a desktop, and with smallish MP4s at the time[2].

[1] https://github.com/Kagami/ffmpeg.js/ [2] https://zvakanaka.github.io/vidslicer/


I have been using this for some time now. I discovered it 6 months back, while trying to cut a 10-minute video to extract a 2-minute section. It was a real time saver. Quite a gem.


This is not exactly correct.

I recently had to cut (for reasons) an MP4 video somebody sent me over WhatsApp.

After fighting with ffmpeg for a day - the key word here is "keyframes" - because there were issues with the video (sound would start at a point but the video didn't, or the part before the first keyframe was re-encoded and the quality was crap), I finally resorted to Handbrake, which did the job I needed.

Just get any short video somebody sent you and try to use this script, and you'll see what I'm talking about.


The ffmpeg command used in the article is key frame seeking because the -ss argument comes before the -i argument. If the -ss argument is placed after the -i argument, an all frame seeking mode is used.

The difference is explained here: http://www.markbuckler.com/post/cutting-ffmpeg/


This is so good. Trying to do the same with Davinci Resolve for GoPro videos in 2.7k resolution took me hours.

p.s. I'm more than sure that there's an ffmpeg port to Windows, but I've just used it via Ubuntu on WSL (1.0) and it worked perfectly on a 3.72GB file of some 8 minutes length:

time=00:00:38.47 bitrate=67220.6kbits/s speed=91.8x


I found how to trim the beginning of a video to an arbitrary amount without reencoding. I think the `-itsoffset` flag in ffmpeg causes ffmpeg to add metadata to shift the video in time, without reencoding or actually trimming the underlying video.

ffmpeg -i video.mp4 -itsoffset 0.25 -i audio.ogg -c copy -y av.mkv


Did you intentionally use the meme in a w̶r̶o̶n̶g̶ non-standard way?


I honestly couldn't remember the original meme. Jackie Chan comes up when you search for "mind blown" but also "wtf"

Can you point me to the original?

If you have suggestions for a better meme I can change it :)


I think it's typically the Tim & Eric skit that gets used for mind blown reaction. You could maybe use a different meme format, like the Drake one... But honestly I don't mind :) Coincidentally, I was looking at the same problem this morning. I found that exact snippet on an SO answer. Amazing!

Snippet: https://giphy.com/gifs/whoa-hd-tim-and-eric-xT0xeJpnrWC4XWbl...

Full video: https://youtu.be/FYJ1dbyDcrI


Thanks for the background on the meme!

I was also happy that this FFmpeg approach properly maintained all my subtitle streams. That's something I was working with a few days ago if you look at the other post on my blog.


Can you also concatenate video files of exactly the same format instantly?


Yea just put -c copy
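More precisely, with the concat demuxer (a sketch; the files must share codecs and encoding parameters):

```shell
# Build the list file, then join without re-encoding.
printf "file 'a.mp4'\nfile 'b.mp4'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
```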


There is an online service for this over at veed.io

http://veed.io/


The caveat is that a clip can only start from a keyframe, so the precision of this clipping is pretty much so-so.


Maybe it's because I'm addicted to Node, but I usually use npm to install ffmpeg:

    npm install ffmpeg-static@latest
It works on Windows, Mac, and Linux, and gets installed locally (I don't have to muck up my system). I do have to look up the path, but I don't mind:

    node_modules/ffmpeg-static/bin/<os>/x64/ffmpeg


Compare

    $ sudo apt-get install ffmpeg
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following additional packages will be installed:
      i965-va-driver intel-media-va-driver libaacs0 libaom0 libass9 libavcodec58 libavdevice58 libavfilter7 libavformat58
      libavresample4 libavutil56 libbdplus0 libbluray2 libbs2b0 libchromaprint1 libcodec2-0.8.1 libcrystalhd3 libdc1394-22
      libflite1 libgme0 libgsm1 libigdgmm5 liblilv-0-0 libmysofa0 libnorm1 libopenal-data libopenal1 libopenjp2-7
      libopenmpt0 libpgm-5.2-0 libpostproc55 librubberband2 libsdl2-2.0-0 libserd-0-0 libshine3 libsnappy1v5 libsndio7.0
      libsord-0-0 libsratom-0-0 libssh-gcrypt-4 libswresample3 libswscale5 libva-drm2 libva-x11-2 libva2 libvdpau1
      libvidstab1.1 libx264-155 libx265-165 libxvidcore4 libzmq5 libzvbi-common libzvbi0 mesa-va-drivers
      mesa-vdpau-drivers va-driver-all vdpau-driver-all
    Suggested packages:
      ffmpeg-doc i965-va-driver-shaders libbluray-bdj firmware-crystalhd libportaudio2 serdi sndiod sordi libvdpau-va-gl1
      nvidia-vdpau-driver nvidia-legacy-340xx-vdpau-driver nvidia-legacy-304xx-vdpau-driver
    The following NEW packages will be installed:
      ffmpeg i965-va-driver intel-media-va-driver libaacs0 libaom0 libass9 libavcodec58 libavdevice58 libavfilter7
      libavformat58 libavresample4 libavutil56 libbdplus0 libbluray2 libbs2b0 libchromaprint1 libcodec2-0.8.1
      libcrystalhd3 libdc1394-22 libflite1 libgme0 libgsm1 libigdgmm5 liblilv-0-0 libmysofa0 libnorm1 libopenal-data
      libopenal1 libopenjp2-7 libopenmpt0 libpgm-5.2-0 libpostproc55 librubberband2 libsdl2-2.0-0 libserd-0-0 libshine3
      libsnappy1v5 libsndio7.0 libsord-0-0 libsratom-0-0 libssh-gcrypt-4 libswresample3 libswscale5 libva-drm2 libva-x11-2
      libva2 libvdpau1 libvidstab1.1 libx264-155 libx265-165 libxvidcore4 libzmq5 libzvbi-common libzvbi0 mesa-va-drivers
      mesa-vdpau-drivers va-driver-all vdpau-driver-all
    0 upgraded, 58 newly installed, 0 to remove and 1 not upgraded.
    Need to get 34.7 MB of archives.
    After this operation, 145 MB of additional disk space will be used.
    Do you want to continue? [Y/n] n
vs

    $ npm install ffmpeg-static
    
    > ffmpeg-static@4.1.0 install /home/gregg/temp/node_modules/ffmpeg-static
    > node install.js
    
    Downloading ffmpeg [||||||||||||||||||||] 100% 0.0s
    
    + ffmpeg-static@4.1.0
    added 19 packages from 52 contributors and audited 21 packages in 14.318s
    found 0 vulnerabilities
No admin need, no system libs to install, it just works.


Microsoft guy thinks ffmpeg is amazing? Mind blown!



