755 megapixel 300 fps camera coming soon

Aladdin4d Moderator
edited April 2016 in Filmmaking

Not a misprint: it really is a 755 MP camera shooting at 300 fps, and since it's a light field camera, position and depth data are already included

Promo video  

https://vimeo.com/161949709

This one shows keying without actually having to key

https://player.vimeo.com/video/161533699

Red Shark article

Is Lytro's new 755MP, 300fps cinema camera the biggest leap in video tech ever?

And some of us thought news of a possible Vegas Pro 14 was exciting.......

If anybody was wondering, 755 MP works out to roughly 42,000 x 18,000 pixels, and at 300 fps you need in the neighborhood of 400 gigabytes of storage per second.
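
For anyone who wants to sanity-check that figure, here's the back-of-the-envelope math as a quick script. The 14 bits per pixel is my assumption; Lytro hasn't published the raw bit depth:

```python
# Rough data-rate estimate for a 755 MP light field sensor at 300 fps.
pixels_per_frame = 42_000 * 18_000      # ~756 million pixels
fps = 300
bits_per_pixel = 14                     # assumed raw bit depth (not official)

gigabytes_per_second = pixels_per_frame * fps * bits_per_pixel / 8 / 1e9
print(f"{gigabytes_per_second:.0f} GB/s")   # ~397 GB/s, i.e. "about 400 GB/s"
```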


Comments

  • I'll be the first one here to say that this thing is useless. I thought studios wanted to reduce cost when making movies.

  • Pretty awesome. I have read that you can only rent them. But still a high price.

  • Aladdin4d Moderator
    edited April 2016

    @KevinTheFilmmaker In a lot of ways this will reduce overall costs, because much of what is done in post now is potentially just a click or two away. No more green screen (or the sets to go with it) or fighting to get a clean key: just set a depth range and done (see the sketch at the end of this comment). Camera solve for matchmoving? Already done. HDR? Already done in one camera pass. Roto? Just like keying, set a depth range and done. Decide you want a rack focus shot later? No problem. The list could go on and on, but that's the real potential here.

    A full rig is supposed to rent for $125k per day, with rentals available Q3 of this year. That's a lot, but I'm sure it includes longer-term storage and media management too, which is another area of potential cost savings overall.
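
    To make the "set a depth range and done" idea concrete, here's a minimal sketch of what a depth-based matte could look like, assuming you already have a per-pixel depth map next to the RGB frame. The function name, the feathering and the depth units are all just illustrative, not Lytro's actual tools:

    ```python
    import numpy as np

    def depth_matte(depth, near, far, feather=0.05):
        """Alpha matte from a per-pixel depth map: opaque inside [near, far],
        ramping to transparent over `feather` depth units outside that range."""
        alpha = np.ones_like(depth, dtype=np.float32)
        alpha = np.where(depth < near, np.clip(1 - (near - depth) / feather, 0, 1), alpha)
        alpha = np.where(depth > far, np.clip(1 - (depth - far) / feather, 0, 1), alpha)
        return alpha

    # usage: keep everything between 2 m and 4 m from the camera
    # matte = depth_matte(depth_map, near=2.0, far=4.0)
    # comp = matte[..., None] * fg_rgb + (1 - matte[..., None]) * bg_rgb
    ```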

  • OH MY GOD, 125k? Studios are fighting with each other because they're looking for VFX artists that won't ask questions about their salary. Is 125k reasonable? Does anyone here have any experience with big Hollywood productions?

  • Pretty crazy technology for sure, and the implications are amazing.  That keying stuff is awesome.  Curious what sort of cleanup tools they'll have for that, and how good they'll get it before it really starts being used.  Lots of potential, but as with any keying solution, you're never guaranteed good results, so proper tools for correction will be needed.

    As Stephans said, it's rental-only for now from what we know, with packages starting at $125k. And that's not just the camera, since you need the server setup as well to process this kind of information and store it securely, and it comes mounted on a dolly with motion control. The thing is 6 ft long, after all, and has a very wide lens. We also don't know for sure if that rental rate is for a day, a week, a certain contract period, or what. Lots of unknowns still for this, including workflow and storage.

    With the capabilities, that price could be more reasonable than one would think right now. The potential savings for keying, matchmoving, 2D to 3D conversion, etc. could be huge and save post production a lot of time and money. And having an entire point cloud, with depth for every pixel of every shot, baked in before you even touch it would be awesome.

    Another info source:
    https://www.fxguide.com/fxpodcasts/fxpodcast-301-lytro-cinema/

  • Aladdin4d Moderator
    edited April 2016

    @KevinTheFilmmaker

    Studios are fighting with each other because they're looking for VFX artists that won't ask questions about their salary.

    Assuming this camera lives up to expectations, what's reasonable is going to be very relative. Compared to shooting with something from RED, the camera rental itself is insanely expensive, but what if shooting with RED means you're going to need four times the post production staff for three times as long to accomplish the same tasks? All of a sudden the Lytro rental is going to look a lot more reasonable.

    I don't know all the numbers, but I can guarantee you Lytro does and didn't come up with that 125k figure out of thin air. There's some combination of real-world total costs that makes 125k per day reasonable, otherwise it wouldn't be their target price and they wouldn't have invested so much knowing there was no way to ever be competitive.

     

  • CNK
    edited April 2016

    @Aladdin4d

    I guess you're right. Those were my first thoughts anyways.

    I still don't see how something like this could be useful for your mainstream Hollywood budget movie.

    They mostly create things that can't be done in camera. But if, say, they used this camera to record real water, flowing the way it otherwise would in CG, and used that real water instead of CG, then this camera would make a lot of sense, because water is still the hardest thing to create in CG.

    I would love to see more realistic CG. In Hollywood budget movies I still see bad and obvious CG every 10 minutes.

  • Triem23 Moderator

    Studios are fighting with each other because they're looking for VFX artists that won't ask questions about their salary.

    This is true, but in Film the VFX artists are more or less the only people involved who get screwed out of salary--when you add up the salaries of the principal Avengers and Joss Whedon for Long Weekend of Ultron it becomes clear that the paychecks for those eight people would have made two Deadpools.

    We won't even talk about the Teamsters, who are first-in, last-out, starting at $50/hour and going into double overtime ($100/hour) every single day. Teamsters drive the trucks. That's it. That's all they do. Most of the day they're sitting around the set making $50 or more an hour waiting for the end of the day. The starting salary for the truck guys is double that of a VFX artist who actually works the entire day.

    With Hollywood's fix-it-in-post mentality a lot of money is stupidly wasted on VFX. Here's an example: Mission to Mars has a shower scene with Carrie-Anne Moss. Ms. Moss refused to be topless on-set because Val Kilmer creeped her out. The sane way to do this would have been to determine this ahead of time and have the makeup department make rubber nipples (à la "Splash"), which would have cost a few hundred bucks, tops, to sculpt and cast. No additional on-set cost, because the makeup artist is already on-set. Since this came up on the day of shooting, moleskin was cut. This sequence was passed off to a (female) VFX artist who had to matchmove, model, texture and render the nipples, THEN passed to another (female) VFX artist for physics and fluid simulation. This took 6 weeks of work and cost nearly $100,000.

    So, yeah, a camera like this would totally be something that a big studio would use. Going back to Long Weekend of Ultron--Robert Downey Jr. was on set for five shoot days, plus another week of rehearsals and costume/makeup tests or ADR. Basically, Downey was making $3 million/day. If you have a camera that can reduce keying time, motion tracking time, etc., etc., then it's worth it to the studios, because it's going to reduce on-set time for greenscreen and tracking markers and a whole bunch of other things you don't want eating into your shoot day when your Tony Stark is costing you $3 million a day of your budget.

  • Wow, this thing looks amazing. The future of cameras just took a new leap if this is anything to go by. OK, a consumer version is still years and years away, mainly due to cost I suspect, but it still could happen.

    Second thought on this: the scene they are demoing here is indoors, which means there is only a limited, controlled amount of scene for the camera to record. I wonder how it will handle exteriors and all the changes that real light, wind and depth bring.

    Still good to see.

  • This seems like something which in 10-20 years' time will be very common. Right now it's way beyond the reach of most productions, but the benefits are so huge and so evident that they'll hopefully find ways to scale and bring costs down.

  • This article in Variety, following the demonstration of the camera at NAB, has a nice brief explanation of the potential benefits of this thing. Also, it turns out the prototype is like the size of a car, but they hope to get it smaller for production. http://variety.com/2016/artisans/news/lytro-cinema-camera-introduction-light-field-nab-show-1201757185/

  • edited April 2016

    More info on how the sensor works, in this article: http://m.dpreview.com/news/6720444400/lytro-cinema-impresses-large-crowd-with-prototype-755mp-light-field-video-camera-at-nab

    Apparently it's shooting something like 21 MP (edited for accuracy) video, but from 36 different angles of the scene, to generate the depth map. The photosites on the sensor have microlenses placed over 6x6 grids of pixels, resulting in a sensor that is a foot wide and gives enough variance in angle to triangulate the scene depth. It's explained better in the article.
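
    If the 6x6 figure is right, the arithmetic behind the 21 MP number is just the sensor resolution divided by the number of views under each microlens. A quick sketch (my assumptions, not official specs):

    ```python
    sensor_mp = 755        # total photosites on the sensor, in megapixels
    grid = 6               # assumed 6x6 block of pixels under each microlens
    views = grid * grid    # 36 sub-aperture views per frame

    per_view_mp = sensor_mp / views
    print(per_view_mp)     # ~21 MP per view, each from a slightly different angle
    ```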

  • Typo, you shifted the decimal place. 21MP.

  • You are correct. There wasn't supposed to be a decimal, but I'm clearly not skilled at typing on my phone.

  • Not sure if you people have seen it, but in the last Film Riot vlog Ryan had a question about this camera and gave his thoughts. He said he was excited but had some concerns, and that these are best described in the following article. http://prolost.com/blog/lytrocinema

  • Nice write-up from Stu - thanks for the link. Definitely an interesting perspective, and the Lytro definitely has all those risks built in; it also brings the possibility of maximum choice paralysis.

    There's already a million choices to make when making a movie, starting with the initial idea and running all the way through to final export. But currently those decisions are spread (unevenly, but still) throughout the entire production. You make a lot of choices at the script stage. You make a TON of choices on the set. And then you make another set of choices in the editing room. They all feed into each other, but at each stage of production you can tick some of the key decisions off and move on. It means your brain can shift gears. You can wipe the slate clean and be fresh.

    If you're filming, you'd hope you have a script you're happy with, give or take some tweaks. If you're editing, you'd hope that you had the footage you needed.

    The Lytro potentially postpones a huge number of those decisions into post. A lot of stuff that is traditionally taken care of on set suddenly gets shifted into the editing room. And that is an enormous amount of pressure on the editor - I'm not talking from a technical perspective, but from a creative perspective. Similarly, the director/cinematographer doesn't have the mental compartmentalisation of finishing a day knowing that they nailed that shot/scene. Instead, it's still kind of up in the air and vague as to what it's going to be.

    Curious stuff, eh?

  • Aladdin4d Moderator

     This camera is all about potential, good, bad and ugly all at once. There's no telling exactly how this technology is going to change things, but it's going to be interesting seeing what shakes out.

  • edited April 2016

    But he said in the interview that it only produces 4 MP photos, i.e. a frame size of about 2680x1512, which is about 40% wider than 1080p at 1920x1080 (~2 MP), or roughly 2.7K video size.

    So yes, you have some editing wiggle room around the edges to reframe up to a point, but it's virtually obsolete before it gets out of the door. You can't zoom in or reframe too far before you're stretching virtual pixels just to produce 1080p output. Next gen will be the one to watch, at 4x that sensor resolution, so close to 6K video frame size, giving you some wiggle room in a 4K frame.

  • The short film debuted at NAB used 4K footage from the Lytro, intercut with 4K footage from an Arri Alexa, so it's definitely capable of more than 4 MP. The article I linked from DPReview does the math to show that, at worst, you would get a 21 MP frame, which is wider than 6K.

    Also, that article from DPReview discusses the ability to shoot with an aperture of f/0.3 or even lower on the Lytro, since the aperture is defined after the capture. You have the ability to shoot with lenses that are impossible to build in the real world. And the low light performance is insane, since the sensor is so massive, and 36 separate samples of the scene allow for lots of averaging and noise removal (a quick illustration of that follows at the end of this comment). They had to add noise to the Lytro footage so it wouldn't stand out against the Arri footage it was intercut with.

    Very cool stuff, which probably won't have any impact on me for another 5 years at least.
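
    On the noise-removal point, the usual statistics apply: if the 36 views carry roughly independent noise, averaging them should cut the noise by about the square root of the number of views, i.e. a factor of 6. A toy demonstration with made-up numbers, not real Lytro data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    views, pixels = 36, 100_000
    true_value = 0.5
    samples = true_value + rng.normal(0, 0.05, size=(views, pixels))

    print(samples[0].std())            # noise of a single view, ~0.05
    print(samples.mean(axis=0).std())  # after averaging 36 views, ~0.05 / 6 ≈ 0.008
    ```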

  • At NAB, they were saying it was a lot more like hundreds of samples per pixel, which computes given the 755 megapixel sensor.

    Processing that footage is cloud-based, currently via Nuke plugins.

    The trick wasn't so much shooting at f/0.3, it was that you could adjust the depth of field to simulate an f/0.3 aperture. You could also adjust the focal plane, so you could get view-camera-like control over the plane of focus (there's a rough sketch of the shift-and-add idea at the end of this comment).

    The footage looked great, and they were able to pull a very clean key on three actors without a green screen behind them, and on top of that, with a grip carrying a ladder behind the actors.  They were even able to extract confetti from one take and insert it into another. Amazing stuff.

    And yes, the camera was huge. 

    BTW, the 4 megapixel thing comes from the earlier Lytro Illum.
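
    For anyone curious how the after-the-fact focus control works in principle, the textbook method is shift-and-add: each sub-aperture view is shifted in proportion to its position in the aperture, everything is averaged, and the shift scale picks which depth ends up sharp. A minimal sketch, assuming you already have the sub-aperture images and their (u, v) aperture positions; the names and the SciPy call are illustrative, not Lytro's actual pipeline:

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def refocus(views, uv_offsets, alpha):
        """Shift-and-add refocus over a list of HxW sub-aperture images.

        uv_offsets holds each view's (u, v) position in the aperture;
        alpha scales the per-view shift and so selects the focal plane."""
        acc = np.zeros_like(views[0], dtype=np.float64)
        for img, (u, v) in zip(views, uv_offsets):
            acc += nd_shift(img, shift=(alpha * v, alpha * u), order=1)
        return acc / len(views)

    # alpha = 0 keeps the as-shot focus; sweeping alpha moves the plane of
    # focus nearer or farther, which is how a "rack focus in post" works
    ```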

     

  • Ah, my mistake, that first link about NAB 2016 contained a video from 2014, where he mentioned the output size was 4 MP at 13:30 into the interview. I didn't pay attention to the date on the video. Doh!

    OK, so the lateral reframing ability is going to be dictated by the area the sensors are spread over, so the large prototype will allow more of it than a scaled-down, more manageably sized final product would. Or maybe the final product will remain huge, to allow the most leeway for doing that.

  • Aladdin4d Moderator

    @Palacono More accurately, it's dictated by the number of light receptors in the sensor package. They went with large-sensor equivalents for this camera; the official size of the sensor package is 0.5 meter. Large sensors give you higher quality, lower noise images, so it makes sense for them to have done it, but in theory they could have done it with cell phone camera sensor equivalents, making for a much smaller camera with the same abilities.

    There may be other physical limits involved - maybe they can't make a lens array capable of accurately targeting individual receptors on a sensor that small, or they can but the quality is very poor, or some other reason - but in theory it can be done on a much smaller scale without losing anything.

     

  • @Aladdin4D, how can it be made smaller without losing anything? That's akin to saying you can get stereo imaging from a single viewpoint.

    I know you're grabbing all the light from all the angles, but some part of the sensor has to actually be in line with the lens to capture proper parallax differences.  Hold your finger up in front of your nose and look through one eye then the other. Say those were the limits of the left and right side of the sensor. You can't generate an image as if seen by a third eye next to one of your ears. Or if you can, then that's truly some light-bending magical stuff.

  • Triem23 Moderator
    edited April 2016

    @Palacono I think @Aladdin4d  meant a similar system could be built on a smaller scale that would possess the core functions of DoF etc. 

    As is noted, the current camera shoots 755 MP with a massive sensor plane. Great, so we make one for 8K. And let's make it 60 frames per second. That's only about 33 MP, so less than 1/20th the spatial resolution and 1/5th the temporal resolution.

    Now we can shrink our lenses and imaging planes and storage arrays and still produce a light field image. Have we traded down resolution and framerate? Yes. But we can keep the abilities of the light field process. 

    I think that's where Aladdin was going. 

    Btw, you can totally get stereo from a single viewpoint. Do you have any idea how many 3D films are converted in post? Hitfilm alone has at least four ways to generate/create depth from a 2D image. 
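
    Since the 2D-to-3D point comes up a lot, here's the rough idea behind depth-image-based rendering: shift each pixel horizontally by a disparity derived from its depth to synthesize the second eye. A crude sketch under simple assumptions (normalized depth, naive hole filling); real conversion pipelines are far more involved than this:

    ```python
    import numpy as np

    def synth_right_eye(image, depth, max_disparity=20):
        """Synthesize a second eye from one image plus a per-pixel depth map.

        image : HxWx3 array; depth : HxW array in [0, 1] with 1 = nearest.
        Nearer pixels get a bigger horizontal shift (disparity)."""
        h, w = depth.shape
        disparity = np.rint(depth * max_disparity).astype(int)
        right = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in np.argsort(depth[y]):        # far pixels first, near last
                nx = min(max(x - disparity[y, x], 0), w - 1)
                right[y, nx] = image[y, x]        # near pixels overwrite far ones
                filled[y, nx] = True
            for x in range(1, w):                 # crude fill for occlusion holes
                if not filled[y, x]:
                    right[y, x] = right[y, x - 1]
        return right
    ```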

  • Aladdin4d Moderator

    @Triem23 That was a big part of it, yeah, and I think that's definitely one way to get there. The most important part is keeping the number of data points per frame up.

    @Palacono

    "how can it be made smaller without losing anything? That's akin to saying you can get stereo imaging from a single viewpoint".

    Leave optics, ISO and such out for the moment. You do it the same way a 20 MP sensor could be full frame, APS-C, M43, or fit in a phone. They all capture 20 MP worth of data. The area the data is captured over isn't important.

    Getting simplistic, the Lytro is capturing 36 4K planes of data per frame (there's a toy sketch of pulling those planes apart at the end of this comment). You could do the exact same thing with 36 individual cameras; in fact, that's how The Matrix was done: an array of cameras was used and the captured information was put together to create light field data. Lytro's sensor just happens to be one piece with the same number of light receptors as 36 individual sensors.

    The surface area of the sensor doesn't matter because there are 36 spatial data points per pixel, and that's what's important. A larger sensor could arguably give you more accurate data points, because the distance between light receptors has to be accounted for, but with 36 data points to extrapolate from, who cares? On top of that, if you use the existing sensor as a baseline, you can create a correction factor to compensate for any loss of accuracy from going to a smaller sensor.
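
    To make the "36 planes per frame" picture concrete, here's a toy sketch of pulling sub-aperture views out of a plenoptic raw frame, assuming an idealized 6x6 block of pixels under each microlens with no rotation, vignetting or calibration to worry about (a real decoder has to handle all of that):

    ```python
    import numpy as np

    def extract_subaperture_views(raw, grid=6):
        """Split a plenoptic raw frame into grid*grid sub-aperture views.

        raw : 2-D array whose height and width are multiples of `grid`,
        with each grid x grid block of pixels sitting under one microlens."""
        h, w = raw.shape
        views = np.empty((grid, grid, h // grid, w // grid), dtype=raw.dtype)
        for u in range(grid):
            for v in range(grid):
                # pixel (u, v) under every microlens sees the scene through the
                # same part of the main lens, so gathering them forms one view
                views[u, v] = raw[u::grid, v::grid]
        return views

    # e.g. a 42,000 x 18,000 raw would yield 36 views of 7,000 x 3,000 (~21 MP) each
    ```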

  • The area the data is captured over IS relevant... a phone capturing a 20 megapixel image is generally capturing more noise than data, which is why you hardly ever see any prints made with a cell phone camera image. 

    Also, with bigger sensors, you end up using higher magnification lenses to get the same field of view that you would on a smaller one, so you end up with more detail... because you've made the details bigger.

    What they captured for the Matrix wasn't a light field. It was a collection of images from multiple sides of the actors, which they could then morph between. Very different approach.

    The surface area of the sensor matters a great deal. That's part of why people are so obsessed with bigger sensors, for example, why a 6K Alexa uses a 65mm sensor, and why the 100 megapixel Sony sensors are 6x4.5cm, rather than 135 format. 

    Using 36 cameras won't get you anywhere near what Lytro is capturing. Their big new idea really is capturing information about ray direction as well as where it struck the sensor. That's where they're getting their depth information from. There aren't any other cameras currently in existence besides Lytro's that capture that information, so those other setups aren't able to derive depth information from the live capture.

  • edited April 2016

    Doh! Actually I've been particularly dumb, because I'm already using something a little bit like this technology, I just never joined the dots.

    I use Pix4D with aerial photos to create a point cloud that then creates a 3D mesh that can be viewed from any angle. The point cloud can be as well, but it's not really dense enough to be totally convincing if too 'side on' to the original image angle (although previous versions of Pix4D only had that option). The generated mesh works better, for a given amount of "better" considering the texture that's spread over the whole model is somewhat limited in size.

    Although the software needs a certain amount of separation to calculate the point cloud accurately in 3D, and I generally only overlap the photos by about 60%, I might try using a long run of images very close together to see if the point cloud is dense enough and accurate enough to  stand alone from a distance, where the pixels are close enough together to look right from viewing angles outside those in the original image.

    That would be a distant second cousin to what Lytro is doing, I guess. :)

  • Aladdin4d Moderator
    edited April 2016

    @WhiteCranePhoto Yes smaller sensors are noisy that's why I said "Leave optics, ISO and such out for the moment" and in an earlier post said "Large sensors give you higher quality, lower noise images" and "There may be other physical limits involved like they can't make a lens array capable of accurately targeting individual receptors on a sensor that small or they can but the quality is very poor or for some other reason but in theory it can be done on a much smaller scale without losing anything."

    For the purposes of what I was describing - which is how many data points are used to extrapolate the final data, and how you could get the same number of data points with a smaller sensor - the surface area of collection is completely irrelevant.

    EDIT: Sorry to @WhiteCranePhoto for sounding snippy here. I actually meant to change it a bit and add "Sorry for not being clear about that" but I got distracted by food and cold drink and forgot about it when I came back to finish the monstrosity below

    A Brief History of the Light Field: 1996 - Present Day

     

    Moving on to The Matrix. It wasn't just a bunch of cameras at different angles that they morphed between. That's a purely photographic method, and The Matrix method was much more than that. It was actually laser-targeted, synced cameras traveling along programmed paths and being triggered at precise intervals. The camera paths and intervals were determined beforehand by using ray trace visualizations. The individual frames were then scanned and the resulting data loaded into a light field engine, which was then used to interpolate the final output. Whenever you hear or read that it was "done with sophisticated software interpolation", the sophisticated software was a light field engine working with stacks of images. There was another film that predates The Matrix that did roughly the same thing, but for the life of me I can't remember the title.

    The approach evolved throughout The Matrix films into a machine-vision-guided array of HD cameras, allowing them to skip having to ray trace a scene first, so they could shoot and go straight into the light field engine.

    Virtually all of the concepts and techniques used came straight from work done at Stanford's Graphics Laboratory as part of their Immersive Television Project. What was used in the first film was derived from the work of Marc Levoy and Pat Hanrahan, which focused on creating new views from existing imagery using light field calculations without the need for complex geometry. The ray trace visualization was used to improve the output of the light field engine in The Matrix. First side note: synthetic focusing, a hallmark of Lytro, was first demonstrated by Stanford at the time as a product of this research. Side note for @Palacono: what you've been doing isn't just similar, it's exactly the same. It all starts with this paper.

    This is a picture of the camera array that led to the HD camera array used after the first Matrix film.

    It's called the Stanford Multi-Camera Array. It was a large and long-running project that focused on several different things that could be done with such a large array, not just light field work. The abilities we associate with Lytro - refocus, synthetic aperture, parallax, depth, etc. - were almost all done with arrays like this first. The majority of the work beyond this point was an effort to get to a single-sensor solution so you didn't need the array.

    The light field work from the first paper can also be used to construct multi-perspective panoramas. When Stanford did it, they called it the Stanford CityBlock Project. Google Street View and Microsoft Streetside both grew out of this research.

    Synthetic aperture techniques were developed using an array of 100 cameras, and then they figured out how to do it without an array of cameras, using a single camera, a projector and an array of mirrors to create 16 virtual cameras. The end result is seeing through partially occluded scenes. Lytro uses the virtual camera approach.

    The next thing they figured out was you could use a microlens array and a single sensor to capture light field data instead of an array of cameras. The student that did this, Ren Ng, is the founder of Lytro. All of the equations, computations and techniques that make what a Lytro camera does possible were figured out first primarily using an array of cameras and the core light field engine from the 1996 paper. I'm not saying that to criticize Ren Ng's work, it's a fantastic innovation. I'm saying it to point out that everything being done with the Lytro Cinema Camera can still be done with an array of cameras or sensors because that's exactly what was done on the road to Lytro.

    Second side note: cameras like the Kinect and Intel RealSense are fruits of the same research, as is using a camera for 3D scanning.

  • +like :)

     
