MacOS vs Windows - Graphics Cards and Render Times

O.k., before I get into this: I'm NOT trying to stir the hornet's nest over how fast or slow HitFilm is compared to other programs, Mac vs Windows, or to complain (far from it) about speed. Instead I'm trying to figure out how HitFilm uses resources when rendering, and why machines that are much more powerful than my own seem to be having speed issues.

So here we go (strap in, this is going to be long).

My computer is a mid 2012 MacBook Pro.

Here's the spec sheet:  https://everymac.com/systems/apple/macbook_pro/specs/macbook-pro-core-i5-2.5-13-mid-2012-unibody-usb3-specs.html

The basics are:

Dual-core i5-3210M

16GB of RAM (1600MHz)

Samsung 850 EVO SSD

Integrated Intel HD Graphics 4000

---

So it's an old computer with integrated graphics, and in my mind HitFilm should barely be usable - but that's not the case. HitFilm actually runs rather well, much better than expected in fact.

What I don't understand is why some users with seemingly MUCH more powerful computers - dedicated GPUs, 4 cores, tons of RAM, etc. - sometimes have render and speed issues.

I tried Shotcut and OpenShot before finding HitFilm and they were totally unusable - editing in their timelines would stutter, and stutter badly. Thinking I was going to have to buy a new computer (or find one to use), I tried HitFilm Express 2017 and to my amazement it's incredibly smooth, even with 4 to 6 video and audio clips being used/rendered at one time.

The footage I'm using is 1080p at 23.976 fps in AVCHD/h.264 format.

Renders take around 1.8x the footage length - so a 2:35 trailer takes around 4:30 to 5 minutes to render (without hardcore effects like denoise - that does take forever).
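
For anyone who wants to sanity-check that, here's the arithmetic as a quick Python sketch (purely illustrative; the 1.8x multiplier is just my rough observed average):

```python
# Rough render-time estimate from my observed ~1.8x multiplier (no heavy effects).
footage_seconds = 2 * 60 + 35                  # the 2:35 trailer
render_seconds = footage_seconds * 1.8         # observed render time vs. footage length
print(f"{render_seconds / 60:.1f} minutes")    # ~4.7 minutes, i.e. roughly 4:40
```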

So a 5-to-6-year-old laptop with only two cores and an integrated graphics card is totally fluid when editing, and the render time is what one would expect, yet I'm reading about these much more powerful machines having problems, and I'm wondering why.

Btw, when rendering, CPU use on my machine usually goes up to about 80% and RAM use is about 14GB.

Could this be a macOS vs Windows issue? Or a Windows version issue? An SSD vs HDD issue? A RAM issue? An expectations issue?

I'm delighted that HitFilm works so well when editing, and slow(ish) render times are acceptable for such an old machine.

I don't get it.

Anyone have any thoughts?

Comments

  • @djfrodo Glad to hear you are having a good experience with HF. In case I missed it, what file format are your video files? Are they constant frame rate or variable frame rate?

    Since you brought this up, how about shooting us a MediaInfo output for the video that is working for you?

    A lot of people have problems (Macs included) with variable frame rate video.  Transcoding to constant frame rate in a format that HF likes has solved a lot of problems.
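
    If it helps, here's one way to do that transcode - just a sketch, and it assumes ffmpeg (my suggestion, not something mentioned above); the frame rate and ProRes profile are placeholders you'd adjust to match your footage:

    ```python
    # Sketch: transcode a variable-frame-rate clip to constant 23.976 fps ProRes 422
    # using ffmpeg (assumes ffmpeg is installed and on the PATH).
    import subprocess

    def to_cfr_prores(src, dst, fps="24000/1001"):
        subprocess.run([
            "ffmpeg", "-i", src,
            "-vsync", "cfr",          # force a constant frame rate on the output
            "-r", fps,                # target frame rate (23.976 here)
            "-c:v", "prores_ks",      # ProRes encoder
            "-profile:v", "2",        # profile 2 = ProRes 422
            "-c:a", "pcm_s16le",      # uncompressed audio, plays nicely in editors
            dst,
        ], check=True)

    to_cfr_prores("phone_clip.mp4", "phone_clip_cfr.mov")
    ```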

    Plus, I would have to agree that running your video from an SSD will certainly speed things up a bit.

    Other things that tend to bog down the system are multiple programs running in the background. Antivirus software can also be a culprit.

    Being a Windows user I always run msconfig and turn off as many things in the Startup tab as I can safely boot the system without. I don't need Microsoft Office to start up automatically, so I uncheck that. I don't need Adobe Version Cue to run on startup, so I uncheck that, and so on.

    If you are serious about editing, don't load junk on your system and don't let the installers force every piece of software you have to fire up at startup.

  • edited January 26

    The footage I'm using is 1080p at 23.976 fps in AVCHD/h.264 format

    Both - most is constant bit rate, but a few shots were taken on a phone and they are VBR.

    I think the second is the important issue.

    I think CBR, an SSD, and lots of RAM (and macOS) are the keys here.

    This machine shouldn't be doing as well as it is.

    https://i.imgur.com/jsCPwwl.png

  • Triem23 Moderator

    Actually, yeah, it might. Check it out - according to the report you posted here you have your file encoded at 840Mbps, which is a very low compression ratio. By comparison, uncompressed 1920x1080/23.98 footage is only 1.2Gbps.

    Basically you're putting no stress on your CPU to decode. Usually when we have users with slow AVC/MP4 performance we're seeing DSLR rates of 35Mbps (35:1 compression) or screen captures down around 3Mbps (almost 400:1 compression). The CPU has to do that decompression. Yours has no work to do. All the stress is on your SSD.
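
    If anyone wants to check those numbers, the back-of-the-envelope math looks like this (a quick sketch; treating "uncompressed" as 8-bit RGB at 3 bytes per pixel is an assumption, but it lands on the same figures):

    ```python
    # Back-of-the-envelope bitrate math for 1920x1080 @ 23.976 fps,
    # assuming uncompressed 8-bit RGB frames (3 bytes per pixel).
    width, height, fps = 1920, 1080, 24000 / 1001
    uncompressed_bps = width * height * 3 * 8 * fps
    print(f"uncompressed: {uncompressed_bps / 1e9:.2f} Gbps")     # ~1.19 Gbps

    for encoded_mbps in (35, 3):                                  # DSLR vs. screen capture
        ratio = uncompressed_bps / (encoded_mbps * 1e6)
        print(f"{encoded_mbps} Mbps is roughly {ratio:.0f}:1 compression")
    # 35 Mbps -> ~34:1, 3 Mbps -> ~398:1, matching the ratios above
    ```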

    I've never said this to an AVC user... You would save significant disk space and get similar performance with ProRes, I bet.

  • edited January 28

    So...I think I kind of Forrest Gumped my way into this.

    I shot on a Canon C100 because I just wanted the larger sensor.

    If I load a clip into HitFilm and render to ProRes 422, it's actually faster than if I render to H.264 (so YouTube, etc.).

    I actually asked this forum and the answer was to render to ProRes 422 for intermediate editing, and that's what I went with (the MediaInfo stuff was directly from the camera).

    If this is all true, I can tell everyone to shoot with a good camera (not a DSLR), buy the expansion pack for extended formats (so AVCHD), and use an SSD, with lots of RAM.

    I agree AVCHD is not what most people would recommend - but I don't care, it's what came out of the camera, and it worked.

    The extra RAM, while not obvious, is probably important (basically for the integrated graphics, which eats 512MB of RAM).

    I have no idea how good/bad the compression/quality of the footage I have is...if it were audio I'd be all over it.

    Did I get good quality or...not so much?

    More importantly, can I not compress my footage at all to gain higher quality?

    I just went with whatever, on both the camera and in HitFilm.

    Please, please, please, advise.

    tl;dr: For anyone out there I would advise an SSD, lots of RAM, and not to shoot on a DSLR.

    Just my 2 cents.

  • We've noticed in the office that Macs consistently outperform newer Windows PCs, despite running on much older hardware (sometimes, very, very old hardware).

    I can only assume that it is a combination of higher quality components and more tightly integrated software.

  • Triem23 Moderator

    @DannyDev I'd argue software more than hardware. Apple basically uses off-the-shelf components for the majority of a machine, but there's a much smaller hardware pool to code for. The Mac hardware of this entire decade has a smaller CPU/GPU pool than PC users have in this generation. Makes it easier to really optimize drivers, I'd say. Same with misc. hardware like RAM and drive storage.

    Yeah, I have to believe it's macOS and the Core support for audio and video. When I switched to macOS for music production it was a revelation - Core Audio "just worked". There was no need for 3rd-party drivers, so no tweaking or dealing with drivers, and the integration with software was better.

    I know the Apple tax is painful and that one can get a much more powerful machine for less money using Windows, but the pain involved with getting an audio or video production system to work well is...off-putting. I did it for years with audio.

    One other thing I was amazed by when I looked it up is that HitFilm was Windows-only until it crowd-sourced the development for macOS sometime around 2014 (I can't remember exactly).

    It's worrying if drivers are the explanation, given that the hardware side should be equally optimized - Intel and AMD spend plenty on it, and PC motherboard manufacturers go all out to shine amongst all the others with super-hot gaming and editing rigs, etc.

    When something is 2 or more times faster on worse hardware, that's some super sucky driver writing right there, if it is that.  Which PC drivers can be blamed? GPU? Video decompression? What else would be PC specific that's that important and sucks so badly? Are Apple driver coders really orders of magnitude better than PC driver coders?

    Any PC compiler settings for speed optimisation that can be cranked up, or turned on? Or debug mode settings turned off?

  • Triem23 Moderator

    @Palacono it's having fewer hardware drivers to write, period. With the current generation there are basically two GPUs for Mac. Compare that to PC, where you have the whole 10xx line, the whole RX line, and several different Intels and AMDs. That's probably 20 vs two in one year alone. Hey! Order of magnitude. Apple holds things to very tight tolerances - the reason Apple accessories cost more is Apple certification. Apple devs can easily test every possible hardware component in relatively few machines. We didn't even get into sound cards! And everything else.

    @Triem23 Yes, but it's not like they're all having their PC drivers written by one overworked guy, is it? And motherboard manufacturers split up their product lines to work with a limited subset of each CPU, GPU, RAM type etc., and they're building on the frameworks of the previous set, so it's not like they don't know what to expect. Even the Ryzens just slip right in and work. Edge cases fail; they don't run at half speed. So assuming the same number of man-hours is put into each GPU type by each manufacturer (if that's the bottleneck? it's not been made clear): where else to look?

    I'm still getting updates for a 7 year old card from NVidia, so I'd guess they've probably got it pretty sorted by now and anything they learned from that they applied to their next generation and the next and the next. And shouted about it pretty loudly too. So, although it's always: "update your video card drivers", that's always about edge cases that just fall over entirely, never 2x performance improvements.

    Can you recommend one single PC setup that works well? You can cherry pick any components and manufacturer you like. It'll still suck.

    When anyone complains their PC and GPU are virtually idle at render time, the reasons given are always "something is waiting for something else". Well, isn't that the point of multi-threading? Make it work better. Use RAM and frame buffers or something. Buy better "something" code if it exists. That might be cheaper than spending time trying to work around the limitations of the current "whatever the problem is". Not using all the power was actually suggested as being a good thing, because the PC might overheat...

    With that logic the ideal PC is a single core CPU and a base level GPU because the bottleneck might just be in the HDD read speed.

    Yes, I know multiple clips, overlaid and blended, plus models etc. put a strain on that, but if I take a simple .MP4 clip right out of the camera and render it with nothing else on it in Vegas, it takes 1/3 to 1/4 of the time it takes to do the same thing in HitFilm. Unless it's a HitFilm composite shot, when the round trip is slowed to HitFilm speed + extra overhead.

    All 8 threads of the 4 cores are pegged at 100% the entire time in Vegas. In the middle of summer I have to take the side off the PC case and point a fan into it to keep the temperature down on long renders. Yes, I could go liquid cooled etc. but: side issue. It's just using all the resources available and the result is faster when it's not doing anything "fancy".

    Is HitFilm repeatedly checking for things that aren't happening?
    You've got Vegas and HitFilm, so you could also do your own tests.

    If the whole "read from HDD, decompress, send to GPU, do stuff to frame, read from GPU, compress, write to HDD" chain is bottlenecked at each stage because...reasons, then could that overhead be spread over multiple frames by processing several frames at a time, if there is the RAM to hold them? Fewer passes through (some of) the bottlenecks: faster result?

    Most frames are independent of those on either side of them, so they could potentially be done sequentially by layer without affecting any others. Motion blur and several effects break that "rule", so they would slow it down again - as they do now - but if it's slowing from an initially faster speed: still faster overall.
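
    To make the idea concrete, here's a minimal sketch of what I mean by overlapping the stages - a generic producer/consumer pipeline in Python with made-up stage names (decode_frame, process_frame), not anything from HitFilm's actual code:

    ```python
    # Sketch: overlap decoding and processing by running them in separate threads
    # connected by a small frame buffer, so neither stage sits idle waiting for the other.
    import queue, threading

    FRAME_BUFFER = 8              # frames held in RAM between the two stages

    def decode_frame(i):          # stand-in for "read from HDD + decompress"
        return f"frame {i}"

    def process_frame(frame):     # stand-in for "send to GPU, do stuff, compress"
        pass

    def decoder(frame_count, buf):
        for i in range(frame_count):
            buf.put(decode_frame(i))    # blocks only when the buffer is full
        buf.put(None)                   # sentinel: no more frames

    buf = queue.Queue(maxsize=FRAME_BUFFER)
    t = threading.Thread(target=decoder, args=(240, buf))
    t.start()
    while (frame := buf.get()) is not None:
        process_frame(frame)
    t.join()
    ```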

    Here's a weird thing: Tracking. What does it have to do? Read frame from HDD (slow), decompress frame (potentially slow?), compare tracked area with previous frame using Optical flow (non-trivial, slow?), and calculate feature position change (quick), draw new position of tracking box (quick) and a row of Xs behind it (would like to say "quick", but the way deleting keyframes works: I have my doubts) and display. The "display" bit should be pretty quick as there aren't even any effects applied.
    Like: Movie Player quick.  All the hard work has been done by then. But when you select another program on the Task bar so that Hitfilm loses focus, the tracking speeds along 3 or 4 times faster. It's the one place in Hitfilm where you can actually see the overhead of just updating the display, and it's surprisingly slow.

    So, if just updating the display is that slow, compared to all the other slow things that are still being done: then shouldn't it be possible to identify the culprit and do something about it?

  • @DannyDev

    I've seen quite the opposite. There are a number of people who've said that they're able to get realtime playback at 1/4 resolution on a new iMac "Pro" and I'm getting realtime playback at 1/4 resolution on an HP Spectre x360 with an eGPU. That's with 8K Redcode. 

    I originally got into HitFilm on a Mac... and eventually switched back because the Windows machines were trouncing the Macs, performance-wise, mainly by dint of having newer hardware in them. These days they're nearly identical in performance with the same specs, but that also means there's no way a Mac can keep up with a similarly priced Windows machine, because you get so much more. Of course, I'm only looking at machines that I'd actually trust to work, like HP, Dell, Boxx.

    @djfrodo

    You're using a professional camera, so of course you're getting solid performance. I was able to edit a bunch of wedding videos using a dual-core laptop with no dedicated GPU in Premiere, and that was with as many as five-camera multi-cam shoots... because Premiere, being optimized around that market (it's Premiere's largest user base by a huge margin), handles H.264 footage well.

    Feed Premiere on that box some HD ProRes or DNxHD and it plays fine... until you add a LUT or a Lumetri effect to it, then the performance flatlines. HitFilm on the other hand could still give me reasonably smooth playback with just one color effect added.

    HitFilm is pretty efficient. Its performance is better than you'd expect given how capable it is.

     

  • @Palacono "But when you select another program on the Task bar so that Hitfilm loses focus, the tracking speeds along 3 or 4 times faster. It's the one place in Hitfilm where you can actually see the overhead of just updating the display, and it's surprisingly slow."

    IMO, it's not the literal display update of that frame that is the performance issue here, but it's related: decoder context thrashing. This was previously discussed in an old thread here.

    https://hitfilm.com/forum/discussion/44903/tracking-performance-questions

  • @DannyDev

     

    re: You're using a professional camera...

    Yeah, I never knew the compression on DSLRs was that high, or the effect that would have on editing and rendering. I was about to rent a Canon 5D Mk III until I (luckily) did some research. The battery issues and rolling shutter alone made renting the C100 worth it, not to mention the bigger sensor.

    As for using LUTs in HitFilm - editing and rendering are really stuttery and slow, but I've found proxies can make up the difference.

    re: HitFilm is pretty efficient. Its performance is better than you'd expect given how capable it is.

    I totally agree. I can't believe that HitFilm actually exists and that it's as good as it is. It seems to me that free products like HitFilm Express and Reaper (for audio) have become so good that if I were Adobe, or something like Cubase or Pro Tools (Avid), I'd be very worried at the moment.

    @NormanPCN Ah yes, I remember that thread - couldn't find it because...this forum software - and you saying it's scanning several frames ahead, then having to seek backwards to display a past frame because...reasons, plus some other unnecessary stuff being done in loops.

    Interesting.... just checked in HF6 and DannyDev (or someone else) did what he threatened to do in that thread: removed the one performance enhancing "option" we had available.  So now if you switch focus when tracking: Hitfilm 6 stops dead. It's now always nice and slow. And time was spent on that instead of something from the wishlist? How kind. ;)

    Did anyone also do anything useful and remove any of those unnecessary checks  in HFP 6, can you tell? It still seems as slow as ever and I see we still have the wonky keyframes that appeared in HF4, so maybe it's back to scanning in HF2 (HF3 was lumpy and slow) to get both speed and accuracy. :D

  • DannyDev Staff
    edited January 29

    @Palacono

    "Interesting.... just checked in HF6 and DannyDev (or someone else) did what he threatened to do in that thread: removed the one performance enhancing "option" we had available. 

    This was a bug. We looked into it here specifically when you raised the issue.  It was 'fixed' (tracking stops) because media may be 'slept' (unlocked) while HF is in the background which could lead to undefined behavior, possibly a crash, if the tracking engine continued to access the video stream.

    That tracking continued while HF was in the background was never an intended feature or 'enhancing option'. It stops for the same reason that playback and RAM preview stop - we can no longer guarantee safe access to the media data when the application is in the background.

    "So now if you switch focus when tracking: Hitfilm 6 stops dead. It's now always nice and slow. And time was spent on that instead of something from the wishlist? How kind. ;)"

    Inflammatory statements do not encourage us to work on these issues any faster. Tracking is fast. Very fast. In fact when it was first implemented we actually discussed artificially slowing it down for usability reasons!

    The problem is actually media decoder thrashing, which has already been discussed: when tracking in HF, both the viewer and the tracking engine are competing to access frames. THAT'S why putting HF into the background, and thereby disabling viewer playback, caused tracking to complete faster; it removed the decoder contention.
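
    Purely to illustrate the general idea - this is a toy model with a made-up GOP size and made-up costs, not HitFilm's actual decoder code - interleaving two out-of-order request streams through one seek-based decoder forces repeated seeks back to a keyframe and re-decodes, so the same frames cost far more work:

    ```python
    # Toy model of decoder thrashing: one stateful decoder serving two readers.
    # Any backwards (or repeated) request forces a seek to the previous keyframe
    # and a re-decode forward. GOP size and costs are invented for illustration.
    GOP = 12

    def frames_decoded(requests):
        pos, decoded = -1, 0
        for target in requests:
            if target > pos:
                decoded += target - pos           # decode forward to the target
            else:
                keyframe = (target // GOP) * GOP  # jump back: restart at a keyframe
                decoded += target - keyframe + 1
            pos = target
        return decoded

    n = 240
    tracker = list(range(n))                       # tracking engine walks forward
    viewer = [max(0, f - 6) for f in tracker]      # viewer asks for a slightly older frame
    shared = [f for pair in zip(tracker, viewer) for f in pair]

    print(frames_decoded(tracker) + frames_decoded(viewer))  # two private decoders: ~2x the footage
    print(frames_decoded(shared))                            # one shared decoder: several times more
    ```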

    Decoder thrashing IS something we are actively working on, but it's not a quick fix.

    As for spending time on requested features in the wishlist, we (the developers) don't decide what features/enhancements get developer time.

    As a former developer yourself, I'm sure you understand that.

    "Did anyone also do anything useful and remove any of those unnecessary checks  in HFP 6, can you tell? It still seems as slow as ever and I see we still have the wonky keyframes that appeared in HF4, so maybe it's back to scanning in HF2 (HF3 was lumpy and slow) to get both speed and accuracy."

    What checks? What 'wonky keyframes'?

  • edited January 30

    @DannyDev "It stops for the same reason that playback and RAM preview stop - we can no longer guarantee safe access to the media data when the application is in the background."

    Hmmm. When HitFilm does an export, the export render does not pause/stop when I switch away from HitFilm. So given what you say, it should stop the export, because not stopping is a bug. I know the export is done via a helper 'background' process, so that is never the system active app anyway. Maybe/probably export keeps the files protected. Just being the system active app does nothing to ensure safe access to files. So if export can protect files when HitFilm is not the system active app, then why not other function(s) in HitFilm?

    It is easy to protect a file from being altered: just have the file open with DenyWrite sharing. I'm sure you know this, but others in the forum may not.
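
    For anyone following along on Windows, here's a minimal sketch of what a DenyWrite-style open looks like - this just calls the Win32 CreateFileW API through Python's ctypes; the open_deny_write helper name is mine, purely for illustration:

    ```python
    # Sketch: hold a file open so other processes can still read it but cannot
    # write to it (classic "DenyWrite" sharing) via Win32 CreateFileW.
    import ctypes
    from ctypes import wintypes

    GENERIC_READ          = 0x80000000
    FILE_SHARE_READ       = 0x00000001   # others may read; omitting FILE_SHARE_WRITE denies writers
    OPEN_EXISTING         = 3
    FILE_ATTRIBUTE_NORMAL = 0x00000080

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateFileW.argtypes = (wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD,
                                     wintypes.LPVOID, wintypes.DWORD, wintypes.DWORD,
                                     wintypes.HANDLE)
    kernel32.CreateFileW.restype = wintypes.HANDLE
    INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

    def open_deny_write(path):
        """Hold a read handle on `path` that blocks other processes from modifying it."""
        handle = kernel32.CreateFileW(path, GENERIC_READ, FILE_SHARE_READ,
                                      None, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, None)
        if handle == INVALID_HANDLE_VALUE:
            raise ctypes.WinError(ctypes.get_last_error())
        return handle   # keep it open for the duration; CloseHandle() when done

    # handle = open_deny_write(r"C:\media\clip.mov")
    # ...edit away; the clip cannot be altered underneath you...
    # kernel32.CloseHandle(handle)
    ```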

    Both HitFilm and Vegas have a global preference for "close all media files when not the active application". There is only one reason to have such an option in an app: files are being opened with DenyWrite. The DenyWrite protects the file from alteration. DenyWrite would stop someone from switching away from HitFilm, modifying some media file, and then switching back to HitFilm. Hence the existence of the option. If files are opened with DenyNone, then there is no reason to bother closing the file handle when the app is not the system-focused app.

    So if we, the users, do not have the close-all-files option checked, then files should remain protected after HitFilm loses focus.

    A RAM preview is the same as an export up to the point of the final frame result render, meaning all the input media is treated the same. The tracking engine of which you speak does not have its own media decoder and such - I would bet the farm on that. I believe it is being fed frames from HitFilm proper, with HitFilm proper getting frames from MC/Qt/VfW/native (CFHD, ProRes). So it's really not any different from an export with respect to keeping input files protected during HitFilm use.

    You say you use many third-party libs, and that is normal. I've noticed that with MC the file is being opened by MC; HitFilm is passing a FileSpec to MC. That's one way to do it. It's not the most flexible way, but I digress. You do lose control over file sharing unless the lib provides options. Whatever, it does not matter. If the APIs available do not allow you to pass a handle or the sharing mode used for the file open, then that is something of a red flag. If they (third-party libs) always use DenyNone, that is realistically okay if they don't have API option(s) for file sharing - the host can handle file-share protection as necessary. File sharing modes are fundamental to file access, and if a library is inappropriate about sharing use then one has to wonder what else it is inappropriate/obtuse about. Even if control over file opens is given up to third-party code, you can still protect files: a host can protect with a gratuitous file open that does nothing more than open the file with DenyWrite, protecting it from alteration during use.

    I've never had a need like HitFilm/Vegas to keep all files open for the duration, but I have very much had a need to detect file changes when someone switched away from the app and back. Thinking about something that happened to me once along those lines (an app switch) brings to mind the HitFilm UI responsiveness, but I digress.

    You mentioned RAM preview. What HitFilm does is really unacceptable: HitFilm holds the user/PC hostage. We cannot do anything useful with our time on the PC while waiting for a RAM preview to finish. RAM preview is a necessary evil - computers are just not fast enough. It's not acceptable to hold a machine hostage during an export, and HitFilm does not do that. So why is it acceptable during a RAM preview?

     
