Easiest Workflow: Blender to HitFilm Pro 2017

Hi All.

I've been working on a film with a small team for about two years, and we're at the stage where we want to produce a teaser/trailer. I have hired a Blender digital artist and he has created a character model. I'm fairly experienced in Blender and can produce some animation with his rig. The model has a sophisticated rig with materials and textures. It is very organic, not a gun or a ship or anything like that.

I bought HitFilm Pro 2017 last year when it came out. My aim is to produce some live-action sequences, composite the animated character into the live-action scenes, and add effects. I would like to know the best workflow between Blender and HitFilm so I can camera track and export "something" that I can composite into the live-action layer in HitFilm. From reading about many experiments, it sounds like the way to go is to export the animation sequence as either OpenEXR or PNG. The character must be "match moved" within the live action. I have some storyboards. I also understand that, at this time, exporting Alembic from Blender 2.78a (the latest) is the most problematic if I want all the rig points (there are hundreds), materials, and textures to come across. Same with OBJ, FBX, etc.
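
For reference, here is roughly what that render-with-alpha setup looks like from Blender's Python console. This is only a sketch, assuming Blender 2.78 with Cycles; the output path is a placeholder.

    # Sketch: render the character animation as a PNG sequence with alpha
    # (Blender 2.78 / Cycles assumed; swap to OPEN_EXR for an EXR sequence).
    import bpy

    scene = bpy.context.scene

    # Keep the world out of the frame so only the character is rendered.
    scene.cycles.film_transparent = True       # Cycles
    # scene.render.alpha_mode = 'TRANSPARENT'  # Blender Internal equivalent

    settings = scene.render.image_settings
    settings.file_format = 'PNG'               # or 'OPEN_EXR'
    settings.color_mode = 'RGBA'               # keep the alpha channel

    # Placeholder path; Blender appends the frame number automatically.
    scene.render.filepath = '//renders/character_'

    # Render the whole frame range as an image sequence.
    bpy.ops.render.render(animation=True)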

What I need is someone to suggest a successful workflow, step by step, for me. Since I have to track camera movements, do I try to solve in Mocha, export the tracking data somehow to Blender, animate the character in Blender (of course) for the 3D spatial relationships, then export the OpenEXR or PNG sequence (with alpha) into HitFilm and apply additional effects?

My 3D digital artist has been an experienced modeler/rigger/animator for 15 years, so he is trying to pull me into Blender to do most of the compositing, but I want to keep the workflow going into HitFilm, with HitFilm as the primary editor/compositor. Of course, lighting the model and synchronizing that with the environment, etc., is important.

I would be willing to pay someone to come up with a surefire workflow for me to follow so I'm not aimlessly experimenting.

My email is aknittel1@verizon.net

Comments

  • CNK

    Helping you out:

    @Triem23   @NormanPCN   @Aladdin4d

  • Triem23 Moderator

    @CNK I ain't a Blender guru. That's @spydurhank or maybe @Asshan

  •  Let's hope that Aladdin or Norman is then, or I'll look silly.

  • I don't know jack about Blender. 

  • @aknittel, I know a lot about Blender and have been producing stuff with it, but your requirements are more on the HitFilm 2017 side, which I have no experience with; I don't even have the software :)

  • @CNK Now you look silly. ;)

    @aknittel I'm not a Blender guru either, but there is a Blender->HitFilm script that gets Blender tracking info into HitFilm. I don't know if the script still works with the latest version of Blender, but everything was working in April 2016 with HitFilm 4 Pro. Here's the thread, complete with tutorials:

    http://hitfilm.com/forum/discussion/4602/updated-blender-hitfilm-script/p1
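
    I haven't dug into that script's internals, but conceptually a camera exporter like that just steps through the frame range and writes out the solved camera's transform per frame. A rough illustration of the idea (this is not the actual script, and the CSV layout here is made up):

        # Rough sketch only: dump the solved camera's per-frame transform.
        # This is NOT the linked Blender->HitFilm script, just the concept.
        import bpy
        from math import degrees

        scene = bpy.context.scene
        cam = scene.camera

        rows = ["frame,x,y,z,rx,ry,rz"]
        for frame in range(scene.frame_start, scene.frame_end + 1):
            scene.frame_set(frame)
            loc, rot_quat, _scale = cam.matrix_world.decompose()
            rot = rot_quat.to_euler()
            rows.append("%d,%.6f,%.6f,%.6f,%.6f,%.6f,%.6f" % (
                frame, loc.x, loc.y, loc.z,
                degrees(rot.x), degrees(rot.y), degrees(rot.z)))

        # Placeholder output file, written next to the .blend
        with open(bpy.path.abspath('//camera_track.csv'), 'w') as f:
            f.write("\n".join(rows) + "\n")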



  • That script still works for me in HFP2017.

  • Thanks all.

    Can someone definitively tell me: if I produce three passes (object, Z, shadow) in three separate OpenEXR (or PNG sequence) files, how does HitFilm identify, process, and assign those layers?
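
    On the Blender side, here is roughly how those three passes could be written out as separate sequences. This is a sketch assuming Blender 2.78's default render layer name and compositor nodes; the folder and slot names are placeholders.

        # Sketch: enable Z and shadow passes and route each pass to its own
        # EXR sequence via a File Output node (Blender 2.78 assumed).
        import bpy

        scene = bpy.context.scene
        rl = scene.render.layers['RenderLayer']   # default layer name
        rl.use_pass_z = True
        rl.use_pass_shadow = True

        scene.use_nodes = True
        tree = scene.node_tree
        tree.nodes.clear()

        render_node = tree.nodes.new('CompositorNodeRLayers')
        out_node = tree.nodes.new('CompositorNodeOutputFile')
        out_node.base_path = '//passes/'          # placeholder folder
        out_node.format.file_format = 'OPEN_EXR'

        # One file slot per pass -> three separate sequences on disk.
        out_node.file_slots[0].path = 'object_'
        out_node.file_slots.new('depth_')
        out_node.file_slots.new('shadow_')

        links = tree.links
        links.new(render_node.outputs['Image'], out_node.inputs[0])
        depth_out = render_node.outputs.get('Depth') or render_node.outputs.get('Z')
        links.new(depth_out, out_node.inputs[1])
        links.new(render_node.outputs['Shadow'], out_node.inputs[2])

        # Rendering the animation now writes all three sequences.
        bpy.ops.render.render(animation=True)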

  • Triem23 Moderator

    HitFilm simply treats each sequence as a media clip. For compositing, in general you would place the object render in the stack. If the shadow layer has alpha, put it on top of the object layer. If the shadows have a white background, then change the layer blend mode to Multiply.

    A Z-layer should probably go in an embedded composite shot. A Z-layer becomes a greyscale map which would be applied to a grade layer with effects to clamp the area, so z-layers often require post-processing to remap the values. As an example, let's say the Z-map is driving a Lens Blur, but the in-focus point is at about halfway back... so you would place Curves on the z-map layer (in its composite shot) and remap black to white, 50% grey to black, and white to about 50% grey* (see the quick numeric sketch at the end of this comment). Having the z-map in its own composite shot bakes in the curves.

    In your main comp you would create a Grade Layer with Lens Blur, then a Set Matte using the z-layer's embedded composite shot as the source layer, with matte type Luminance and blend mode Replace.

    This is how you'd adjust a z-map in other software, like Ae, as well.
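
    Quick numeric sketch of that curves remap, just to show the shape (normalized 0-1 greys; the function name and the straight-line ramps are only for illustration, and in HitFilm you would do this with the Curves effect, not code):

        # Black (near) -> white, mid grey (the in-focus distance) -> black,
        # white (far) -> about 50% grey. Illustration only.
        def remap_depth(grey, focus=0.5, far_blur=0.5):
            if grey <= focus:
                return (focus - grey) / focus                 # 0.0 -> 1.0, focus -> 0.0
            return (grey - focus) / (1.0 - focus) * far_blur  # 1.0 -> 0.5

        for grey in (0.0, 0.25, 0.5, 0.75, 1.0):
            print(grey, round(remap_depth(grey), 3))
        # 0.0 -> 1.0, 0.5 -> 0.0, 1.0 -> 0.5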

  • Triem23 Moderator

    * Remember, with real cameras and optics it's easier to blur foreground than background. The DoF of a shot is 1/3 in front of the pin-sharp plane and 2/3 behind it.
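
    To put rough numbers on that rule of thumb, here is the standard thin-lens DoF calculation in a few lines of Python (the example lens and distance values are arbitrary, not from this project):

        # 50mm lens at f/2.8, 0.03mm circle of confusion, focused at 10 m.
        f, N, c, s = 50.0, 2.8, 0.03, 10000.0   # all in millimetres

        H = f * f / (N * c) + f                 # hyperfocal distance
        near = s * (H - f) / (H + s - 2 * f)    # near limit of acceptable focus
        far = s * (H - f) / (H - s)             # far limit (valid while s < H)

        print("in front: %.2f m, behind: %.2f m" % ((s - near) / 1000, (far - s) / 1000))
        # ~2.5 m in front vs ~5.0 m behind here; the exact split shifts with
        # focus distance (closer to 50/50 up close, far limit -> infinity near H).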

  • Not sure about the scene size or complexity, but if there is only one object with 100 rig points and textures, then there are 100 rig points.

    If it is easier to handle inside of Blender, tweak it there, export a PNG sequence with a transparent background, and then tweak that inside of HitFilm.

    But the first thing on my mind?
    If you don't have enough skill yet, why go for such a hard thing right away? I'm not saying no, but most people stop doing what they like because they try the hardest possible thing first, it doesn't work, life sucks, the toys get sold, and it's back to boring life...

    Why don't you start with something like a 10-rig-point object, see how that works best, and then move on? Because the thing is, if you have 100 rig points, then you have 100 rig points.

    Back to that... do you really think you need all 100 rig points for this? I have seen many really awesome animations with fewer than 10 rig points.

    Can you separate those parts?

    An easy way to lighten the workflow is to split the model into smaller pieces, depending on what it is.
    Make four clones of the model (like LOD: https://en.wikipedia.org/wiki/Level_of_detail ),
    where the farther away it is, it only gets a couple of rig points, and the closer you get, the more it gets.

    So basically it would be easier to handle that way, I think...

    These are just my thoughts; I'm just trying to give some perspective for seeing things... not saying you can or can't... just trying to save you some pain... :)


  • Ahh, well the model/rig is already done and is Pixar quality. It needs to be, because the teaser/trailer is going to be seen by some major producers. I can't afford to cheap out with a model that can't capture subtle expressions when it's posed. I have a lot of experience with the Blender/model part, and I have subbed out the modeling/rigging/shading to one of Blender's best people. Believe me when I say it looks photo-realistic when lit properly.

    If the best compositing workflow is to animate in Blender and then bring it into the live action in HitFilm, I'll do that. I was just looking at all the other ways to handle a 3D model in HitFilm, and I believe this character is just too organic for that workflow.

    But to your point: I don't have to map every single rig-point possibility to HitFilm. Just consider a walk cycle, though. No one in their right mind would attempt to actually animate all the rig points it would take and do that in HitFilm.

    Triem, thanks for all the detailed compositing info.

  • Err, guys, didn't Josh recently do a video on Blender to HitFilm?

  • He did. I watched it. Raises more questions sometimes.

  • Triem23 Moderator

    @aknittel for an organic character on a live plate it's arguably better to track the footage and animate in Blender, then composite in HitFilm. Or, if Blender can import .ma tracking data, Autodesk MatchMover is free.

    Otherwise, you can export from Blender in Alembic or FBX, but for that I'd refer you to Josh and Tony Cee's respective tutorials on the HitFilm channel.

    Now, in HitFilm you'll only have Blinn-Phong and Cook-Torrance shaders. My gut tells me that HitFilm is great for mechanical models, but not as good for organics. If you want more advanced lighting and materials, like subsurface scattering, Blender has more advanced rendering options, and can export PNG or EXR sequences.
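
    For what it's worth, both of those exports can also be kicked off from Blender's Python console. A minimal sketch using the Blender 2.78 operators (paths are placeholders, and as noted earlier in the thread, materials/textures may not survive the trip):

        # Sketch: export the selected character as Alembic or FBX for HitFilm.
        import bpy

        scene = bpy.context.scene

        # Alembic cache of the animated geometry.
        bpy.ops.wm.alembic_export(
            filepath=bpy.path.abspath('//exports/character.abc'),
            start=scene.frame_start,
            end=scene.frame_end,
            selected=True)

        # FBX with baked animation on the selected objects.
        bpy.ops.export_scene.fbx(
            filepath=bpy.path.abspath('//exports/character.fbx'),
            use_selection=True,
            bake_anim=True)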
