How do we get a similar look in Hitfilm? Photos of Calder Wilson (High Dynamic Range)

Have a look at the still photos of this fellow, Calder Wilson:

http://www.calderwilson.com/

Scroll down the page -- the photos were taken at various venues over a period of many years, yet the color quality of the photos matches.

What sort of video LUT would produce this hyper-real coloring?

Am I right to assume he's first using a polarizing filter during the photo shoot to remove glare and make the colors pop?

Comments

  • Triem23 Moderator

    Those are all HDR images produced by combining stacked photos in specific software to edit in a 32-bit/channel color space before exporting an 8 or 16 bit/channel image.

    Difficult to tell if he used tone compression, Fusion or exposure blending as his processing model.

    It would be difficult, if not impossible, to recreate in Hitfilm. The color information just isn't there.

    HDR processing usually involves local tonal compression, Curves adjustments and saturation adjustments. You can try playing around with Hue/Saturation and stacking layers of heavy Unsharp Mask to approximate the look, but it's not something a LUT will give you. 
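    To see why heavy Unsharp Mask produces that hyper-real edge "pop", here's a minimal sketch. The function name and the box blur are my own simplifications -- a real tool uses a Gaussian blur -- so treat this as an illustration of the idea, not anyone's actual pipeline:

```python
import numpy as np

def unsharp_mask(img, amount=1.5, radius=1):
    """Naive unsharp mask: result = img + amount * (img - blurred).

    A k x k box blur stands in for the Gaussian a real tool would use.
    """
    k = 2 * radius + 1
    # Pad the edges, then average each k x k neighborhood (box blur).
    padded = np.pad(img, radius, mode="edge")
    blurred = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return img + amount * (img - blurred)
```

    Run on a hard edge, the result overshoots past the original range on both sides -- that overshoot is the halo. With float data you can keep or tame it; in 8-bit it just clips and looks ugly.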

  • Aladdin4d Moderator
    edited October 31

    @Davlon - No real narrative, just comments.

    You need bracketed shots -- not necessarily locked-down shots, although that helps. A simple monopod can do wonders here.

    Technically, stacking can be done in anything that supports layers, including Photoshop, but there are all kinds of plugins and other software, both free and paid, that specialize in the right "type" of stacking. There are also other reasons to stack images, like noise reduction, improving resolution, or focus stacking.

    Color bit depth - The bit depth determines the number of colors available: 8 bit = 16,777,216 colors; 16 bit = 281,474,976,710,656. Float vs. integer has to do with precision and the range that can be stored. 16 bit integer means full 16 bit precision: 65,536 values per channel. 16 bit float can store a wider range, but the precision drops to 10-11 bits per channel; 32 bit float increases the precision to 23-24 bits per channel. Both 16 and 32 bit float allow for negative values; 16 bit integer does not.

    Tone compression - Takes all the data merged together from the bracketed shots and creates a baseline exposure to serve as the basis for further edits. It can also help create a balanced histogram.

    Exposure Blending - Really multiple exposure blending. Take images with different exposures and hand-blend them using layers and masks, basically recreating the use of a graduated neutral density filter.
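    The graduated-ND idea can be sketched in a few lines (the function and variable names here are hypothetical, not from any particular tool):

```python
import numpy as np

def blend_exposures(dark_frame, bright_frame, mask):
    """Hand blending via a mask: 1.0 takes the bright frame, 0.0 the dark one."""
    return mask * bright_frame + (1.0 - mask) * dark_frame

# A vertical gradient mask plays the role of a graduated neutral density
# filter: the dark (sky-protecting) exposure wins at the top of the frame.
height, width = 4, 6
grad_mask = np.linspace(0.0, 1.0, height).reshape(height, 1) * np.ones((1, width))
```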

    Fusion - Making a final fused image from a stack of images. Here each pixel in each image is weighted, and depending on the weights, pixels are included in or excluded from the final image. The weighting can be based on a lot of different factors depending on the end goal; noise reduction will give different weights than weighting for HDR. This technique can also be used for changing the depth of field, as with the focus stacking mentioned above.
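    A toy version of that weighting, using "how close is this pixel to mid-grey" as the only factor (real fusion software combines several, such as contrast and saturation):

```python
import numpy as np

def fuse(stack, sigma=0.2):
    """Exposure fusion sketch: weight every pixel of every frame by its
    distance from mid-grey (0.5), then take the weighted average down the stack."""
    stack = np.asarray(stack, dtype=np.float64)
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= weights.sum(axis=0)   # normalize weights across the exposures
    return (weights * stack).sum(axis=0)
```

    A blown highlight (1.0) in one frame gets almost no weight, so the better-exposed frame dominates that pixel in the fused result.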

    HitFilm doesn't support Redcode RAW, so you'll lose some color information getting the footage into something HitFilm can use. Other than that, HitFilm can use pretty much all of the color information you can throw at it, but what it lacks are things like tone compression and the ability to weight pixels when blending layers based on user parameters, like real HDR software does.

  • Triem23 Moderator
    edited October 31

    @Aladdin4d pretty much covered it.

    Photoshop can do HDR. I usually use Photomatix Pro, although On1 PhotoRaw 2018 has HDR tools. PhotoRaw is OK. Photomatix is the best HDR software I've used, and I've tried at least six.

    Luminance HDR is free and Open Source. Haven't tried it myself. http://qtpfsgui.sourceforge.net/?page_id=2

    32-bit, the non-techie version (Aladdin nailed the techie version): More bits-per-pixel just gives you more values to adjust. For all practical intents and purposes, all display devices currently display 8-bit color -- 256 levels per channel. There are broadcast finishing monitors that can display 10/12 bits, but those cost thousands of dollars/pounds. With 8-bit/channel video you have about 16.8 million color values (256*256*256). With 32-bit/channel color you have, for all practical purposes, infinite latitude (billions of values per channel, cubed). You can do insane color/contrast adjustments with that data before you bring it back to a 16-bit/channel or 8-bit/channel format to display on an 8-bit/channel screen.
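    A toy round trip shows the point -- push exposure up 2 stops, then pull it back down. The 8-bit integer pipeline clips at white and can't recover; the float pipeline can. (The numbers here are purely illustrative.)

```python
# 8-bit integer: values clamp to 255, and the clipped detail is gone for good.
pixel_8bit = 200
pushed = min(pixel_8bit * 4, 255)     # +2 stops of gain, clipped at white
pulled_8bit = pushed / 4              # -2 stops: 63.75, not the original 200

# Float: values above 1.0 survive the intermediate step, so the
# same push-then-pull round trip is lossless.
pixel_float = 200 / 255
pulled_float = (pixel_float * 4.0) / 4.0   # exactly 200/255 again
```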

    "HDR" will be the next big thing in video--but "video HDR" is a misnomer. This just refers to the new 10-bit/channel TV's coming out and the 4K UHD Blu-Ray spec supporting 10-bit color. THIS IS NOT "HDR," THIS IS A BULL-PATTY MARKETING TERM! A 10-bit display--1024 levels/channel--will totally change the way video looks, but we're still 5-10 years away from that approaching mainstream use, and, it's still not actually HDR. Point being when adjusting color levels, 256 levels won't stand up to TOO much processing. with 32-bit color combined from multiple 14-bit RAW exposures, you have an insane amount of latitude.

    The hyper-real look of many HDR images is technically a processing artifact. One of the processing methods is basically applying a ton of sharpening. Since you have so much color latitude, the ugly haloing you get in 8-bit sharpening can be modulated into rather attractive haloing.

    Last note I can give is that HDR doesn't actually require a tripod, or even a monopod, if you have reasonably steady hands and a fast camera (my 5D MKIV shoots at 9fps, which is pretty good; several cameras can shoot close to 16fps). The HDR software itself will have specialized tools for de-ghosting since, even on a tripod, leaves, waves, clouds and people tend to move.

    But these three images are all mine, all HDR and all handheld.

    https://farm5.staticflickr.com/4464/38021848542_f16b6fb5a8_o.png

    https://farm5.staticflickr.com/4484/37999523126_a81d5c75ca_o.jpg

    https://farm1.staticflickr.com/599/32307598282_01e3174994_o.jpg

    With this third one, you can see the "center" exposure of my three-photo bracket, and how dark the face of Bryan (Blue Lantern) is. The "overexposure" photo is what let the HDR process pick out his face like that. It's also what let me bring out all that shadow detail on his chest and legs. The single exposure couldn't be pushed that far.

    Incidentally, the Lantern photo was finished in Hitfilm Pro 2017. Neon Path was my savior tool on that series.

    A tutorial on a pseudo-HDR look? Interesting. I'm leaving the country for six weeks in a couple of days, so I wouldn't even think about this until some time next year, but it would be interesting to try.

    Saturation/Vibrance and Sharpening would be the basic tools to mess with.

    Last-last note. That Rokinon 12mm (full frame) or 8mm (crop sensor) fisheye is one hell of a lens, and I love it dearly. Obviously that's the Golden Gate pylon and the Lanterns. The island pic was the Canon 24-70mm at 70mm.

    Oh, one more, just so it's not all "blue" pictures. ;-)

    https://farm5.staticflickr.com/4448/38000114606_987339905f_o.jpg

    This image became the background plate for the Caustics (making water effects) tutorial I did for the Hitfilm channel earlier this year.

  • Triem23 Moderator

     BTW, yes, he also probably used a polarizer to reduce glare. 

  • edited October 31

    My understanding is that HDR shots need a locked-down camera to build the stack. This fellow is shooting live concerts -- no chance of asking 10k people to stay in one pose for the duration of his photo session (!)

    Unless you're saying that the sky & fireworks are one shot, the stage another, the people another? Event photographers rarely have the luxury of locking anything down for fear of getting trampled or being in the way.

    > by combining stacked photos in specific software 

    Photoshop? Other?

    > 32-bit/channel color space before exporting an 8 or 16 bit/channel image.

    This I should know, but I don't -- how does working in 32 bit contribute to the uber-real quality of these images? Some of them are astounding.

    >  Difficult to tell if he used tone compression, Fusion or exposure blending as his processing model.

    Before I ask questions about the terms, I googled to see what I could find. I didn't find anything specific for "photographic tone compression". Can I ask you for an explainer?

    I found "exposure blending"

    >  It would be difficult, if not impossible to recreate in Hitfilm. The color information just isn't there.

    Hypothetical question: If I shot on a high end camera like Red or similar, what image information is present in that footage that Hitfilm does not use?  

    > HDR processing usually involves local tonal compression, Curves adjustments and saturation adjustments. You can try playing around with Hue/Saturation and stacking layers of heavy Unsharp Mask to approximate the look...

    I'm not there yet. Would you want to do a tutorial on this? Most HF tutorials instruct on creating-the-impossible, but more useful for everyday work is enhancing-the-ordinary to move it into the extraordinary.


  • ( @Davlon sorry, you fell foul of the automated spam filter there. I've marked you verified now, so none of your future posts should get caught up in it)

  • @Davlon

    Triem23, Aladdin4d and DanielGWood covered the details of HDR photography at a level that should answer your questions. However, the proper way of bracketing for HDR is to use manual exposure with the ISO and f-stop locked; the shutter speed is the variable that changes. If you are shooting a low-light subject (sunrise, sunset, urban twilight, etc.) you would need to lock the camera down, but not if your center exposure is fast enough. As an example, at +/- 2 stops with a 1/500 of a second center exposure, the -2 stop frame would be 1/2000 of a second and the +2 stop frame 1/125 of a second. Most DSLRs have an automatic bracketing feature, so after setting the exposure and bracketing parameters the camera will take all three (or five, depending on the camera) frames. If your camera has a quick enough frame rate (I use a Nikon which does 6 FPS in RAW) there should be minimal ghosting. Like Triem, I also use Photomatix, which has a feature to correct the alignment of handheld shots as well as ghost elimination -- I have seen images processed in Photomatix of a moving train with the ghost train(s) removed by the software. So, in a nutshell, the concert scene would not be a problem handheld. NIK also has an HDR plugin which is currently free.
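    That +/- 2 stop arithmetic can be sketched in a couple of lines (each stop halves or doubles the shutter time while ISO and f-stop stay locked; the function name is my own):

```python
def bracket_shutter_speeds(center_seconds, stops=2, frames=3):
    """Shutter times for an exposure bracket centered on `center_seconds`.

    Negative stops mean a faster shutter (darker frame), positive a slower one.
    """
    half = frames // 2
    return [center_seconds * 2.0 ** (stops * i) for i in range(-half, half + 1)]

print(bracket_shutter_speeds(1 / 500))   # [1/2000, 1/500, 1/125] seconds
```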

  • I know it takes time to write good responses --I thank you all for helping to educate the rest of us.

    @DanielGWood -- I kept trying to respond to Mike's first post and the system wouldn't take my post. Thank you for whitelisting me --


  • Triem23 Moderator

    @DanielGWood interesting observation here. The spam filter must have picked him up hours after his initial post, as Aladdin and I had both responded to the comment you "released"!

    I've been working in Photoshop for 25+ years -- since version 2.5 for Windows. (It came on 5 mini floppy disks back then.) I know the program well. I'm not able to get the candy-apple/day-glo colors I'm seeing in the Calder Wilson shots while still maintaining a sense of realism. If I could figure it out in Photoshop, I'd then have a chance of recreating it in HF.

    If I made layers and masks for every image element and then adjusted the curves & colors, maybe I could complete a portion of one of the concert shots. But this guy is shooting these things in rapid succession -- look at his Music page...
