AMD Ryzen Discussion - R5 is coming soon!

CNK
edited March 16 in Everything Else

Ryzen 5 will be coming out on 11 April (at least in Sweden) with the following SKUs, each listed against its Intel counterpart:

AMD Ryzen 5 1600X (6C/12T) at $249  vs  Intel Core i5-7600K (4C/4T) at $242
AMD Ryzen 5 1600 (6C/12T) at $219  vs  Intel Core i5-7600 (4C/4T) at $213
AMD Ryzen 5 1500X (4C/8T) at $189  vs  Intel Core i5-7500 (4C/4T) at $192
AMD Ryzen 5 1400 (4C/8T) at $169  vs  Intel Core i3-7350K (2C/4T) at $168

From: http://www.sweclockers.com/nyhet/23520-amd-lanserar-ryzen-5-med-sex-och-fyra-karnor-den-11-april - I don't own or claim to own this table.

You can get the 3.8 GHz, 8-core/16-thread Ryzen 7 1700X for $399.00 on Amazon.com, and it has already become a best seller there, wow...

So, would HitFilm REALLY benefit from something like this? I've participated in older threads here on the forum, and from the looks of it HitFilm prefers 4 fast cores over 8 slower ones; however, these Ryzen CPUs are all unlocked, and they're not slow.

It will be interesting to hear everyone's opinions on these new CPUs from AMD.

 

 


Comments

  • Promising. Teamed with a solid GPU it ought to make for a great system for compositing, color grading, and VFX.

    And I'd be able to view scopes ;)

    I discovered that with scopes on, HitFilm grinds to a halt on my machine. I *definitely* need a GPU. Fortunately, my tax refund will help with that... 

     

  • Big question: does it beat or equal an Intel CPU clock for clock? Not since the days of the Athlon has AMD had something competitive with Intel in clock-for-clock performance.

    Once Intel abandoned the NetBurst architecture, it never had to look back at AMD with respect to performance.

    4K types would probably like the 8 core CPU.

  • Triem23 Moderator

    Personally I don't pre-order new gear. I like to wait several months after release to see how it functions in the real world. 

    Non computer case in point--on paper, MAN I want a Blackmagic Ursa Mini. Pre-release reviews were fantastic! Reality--Lens Pro has a note specifically recommending you NOT rent this camera from them... In the field it has a lot of failures. 

  • Clock for clock is irrelevant. It's like dpi in a digital image. 

    What matters is performance on applications that people use. 

    Per clock performance is even less relevant now that both Intel and AMD use dynamic clock speeds anyway. 

    Based on that, performance per watt would be a lot more useful than performance per clock cycle.

    If performance per clock cycle were all that mattered, no one would have any reason to look anywhere but at the GPU.

  • @Triem23 that's a good point also. At least AMD has a good track record for its products actually working, while Black Magic's blunder is still a pretty recent memory. 

  • The 'leaked' benchmarks look very promising, and AMD is claiming a 52% increase in IPC. The latest benchmark I've seen has the 1700X beating a Kaby Lake in single-threaded performance.

    Of course, this is just one benchmark and there are no real-world use cases out yet, so I'm going to wait and see what happens after people actually use them for a while. If the hype is true, then powerhouse systems just came within reach of a lot more people.

  • The Cinebench and video encoding benchmarks look good also.

  • CNK
    edited February 24

    Yeah, that Cinebench benchmark looks extremely good for AMD. I just wish they hadn't delayed their other CPU tiers, the R3 and R5. Also, you can really see what they're going for with their naming scheme.

    Question is, how many cores and threads do you really need for HitFilm? We know that people here use lower end systems, myself included.

    I remember Norman saying that HitFilm can't fully utilise a 4-core/8-thread Intel CPU; please correct me if that's wrong, as I don't want to mislead people.

    And AMD have said that the lower-tier CPUs in the Ryzen family will come a few months later, which (rumours, I believe) includes the 4-core/4-thread, 4-core/8-thread and 6-core/12-thread parts.

    It's crazy to think that after all this time, AMD dropped the bomb and released only their highest-tier CPUs, all at once, on March 2nd.

    I guess we could rename this thread to AMD Ryzen Discussion, if nobody minds.

  • It's probably for the marketing benefits of having the fastest CPU on the market, even if it's only for a while. The whole point of tiering processors is to justify higher price tags (and therefore margins) on higher-end units. Intel and AMD tend to make a lot of variants using the same process + design, so they can price them for maximum revenue. The high-end ones get the attention and the margins; people look at them and think, "Sweet, I want an AMD CPU," and then end up buying a less expensive model because that's what they can afford.

    I'm running HitFilm on a dual core Kaby Lake. An Ultrabook. No dedicated GPU... firing up scopes chokes the system up so much that I can't get any playback. But it's working, for now. It will get better when I add a RazerCore + GPU. Some day.

     

  • @CNK I never said HF cannot fully utilize a 4C/8T CPU. Given that HF only uses the CPU for certain things, and only a couple of those things are multi-threaded, you have to meet certain circumstances to use a lot of cores. The major multi-threaded CPU tasks in HF, AFAIK, are media file decode and MP4/AVC encode. The media file decode is quite variable by codec.

    The more simultaneous media streams you use in compositing, the more cores you need (rough sketch at the end of this comment). A transition on the NLE timeline is two streams even though it's a single track. 4K needs more cores than HD, all else being equal.

    If you ask how many cores you "need": show me a specific timeline and you'll get an answer. Show a different timeline and maybe get a different answer. Generally, I think 4C/8T CPUs are probably enough for most users' typical HD work.

    Contrast this with GPU use. HD is approx 2 million pixels, so if you had a GPU with 2 million stream processors they could all be used in parallel for most graphics/effects algorithms. Simplistically speaking, of course. These days, even huge GPUs only have approx 4 thousand stream processors.

    @WhiteCranePhoto "Clock for clock is irrelevant. It's like dpi in a digital image."

    I will have to disagree with that. Instruction throughput efficiency is all that matters. Extra cores and extra clock rate are benefits for their respective targets. Those targets being application and task specific.

    If you care about Hitfilm performance, then you are out of luck for a benchmark. Nobody that does reviews sets up HF benchmarks. All we can do is look for something "close". With regards to HF and how it is likely using the CPU, you want a benchmark that stresses the branch prediction logic of the CPU. How the CPU keeps its internal pipeline moving and retiring instructions. HF CPU use is likely nearly pure integer and heavy on logic with branches. Probably not heavily dependent on SIMD (SSE) instructions.
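    As a rough sketch of that streams-to-cores point (Python, standard library only; decode_frame is a made-up stand-in, not anything from HitFilm's actual code), one worker per simultaneous stream looks like this:

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def decode_frame(stream_id: int, frame_index: int) -> bytes:
        # Hypothetical stand-in for a per-stream codec decode call.
        # Real decoders (AVC, ProRes, DNxHD) vary a lot in CPU cost per frame.
        return bytes((stream_id + frame_index + i) % 256 for i in range(1920 * 1080))

    def decode_composite(stream_ids, frame_index):
        # One worker per simultaneous stream: a transition on the NLE timeline
        # is already two streams, so it can keep two cores busy; a four-stream
        # composite can keep four cores busy.
        with ProcessPoolExecutor(max_workers=len(stream_ids)) as pool:
            futures = [pool.submit(decode_frame, s, frame_index) for s in stream_ids]
            return [f.result() for f in futures]

    if __name__ == "__main__":
        frames = decode_composite(stream_ids=[0, 1, 2, 3], frame_index=42)
        print(len(frames), "frames of", len(frames[0]), "bytes each")
    ```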

  • @NormanPCN Agree or not, it still doesn't make any sense. It's a completely irrational metric, because clock speed is just a design parameter.

    The G4 had higher IPC than the Pentium 4, yet with a G4 you'd get 1/3 the performance for 3x the price of a Pentium 4 system, because of a whole slew of other factors. Did IPC make the G4 better? No, and by that logic it made it a dazzling failure, probably because it was designed for networking applications and not for personal computers.

    A lot of AMD fanbois latched onto "it's better because it has higher IPC than the Pentium 4", but of course when Intel started pushing the IPC envelope, suddenly it wasn't a big deal anymore. The fact is that IPC wasn't the reason the Athlon was a winner; it was a winner because it cost less and performed better than its competitor within the same thermal envelope, which is of course the most significant metric these days. It's the primary reason Kaby Lake is faster than Skylake... heat. Less heat means it can run in turbo for longer periods, leading to higher performance.

    I don't know that it's been confirmed yet, but Ryzen supposedly wins on that front... on top of actually leading in some non-trivial benchmarks (Cinebench, for example ).  If Ryzen DOES win on perf/watt, then it's going to make things even more interesting, since it has an edge in performance even without winning the perf/watt metric. For us computer users, that's a win/win.

    The IPC goofiness is like people claiming that an Alexa Mini is better than a Red Epic-W, or the other way around. The Big Boys (tm) who actually SHOOT big-budget film point out now and then that there are a LOT of feature films shot with Alexas where they bring in Reds for VFX shots... and almost as many feature films shot on Reds as there are shot on Alexa these days.

    So which one is better?

    There's one way to be sure which one will give you the better image: it's the one in the hands of the more talented cinematographer.

    Short version: Ryzen looks like it's going to rock. It's quite a bit less expensive than the contemporary Intel version, might actually be a bit faster, and consumes less power. Ergo, consumers win. Case closed. :)

     

     

  • @NormanPCN My mistake. Though I do remember we had a discussion about the fact that my CPU got to 90-100%, whereas yours wouldn't breach 50%, or at least nowhere near what mine did. So is that a sign that HitFilm works better with real cores than with hyper-threading? Even if it can't tell the difference between the two, does it somehow benefit anyway?

    3 days left, guys, and Intel are already offering sales on their current CPUs, though probably with a bit of hush-hush behind the scenes with their select "partners".

  • edited February 27

    @CNK "So, is that a sign that HitFilm works better with real cores rather than hyper threading,"

    On an HT CPU all the "cores" are "logical", meaning that two cores share an execution unit (the physical, "real" core). Those two logical cores, as I call them, are as real as anything; an app has no idea which is which. One can detect the physical cores and logical threads available, but to an app a core is a core.

    Think of it this way. You have a guy with a hammer (physical core, execution unit) and someone (logical core) is handing him the nails. The guy can only hammer a certain number of nails per minute. So now you have two guys handing nails to the guy with the hammer. He can still only hammer the same number of nails per minute no matter how many people hand him nails. So each guy handing nails is handing them slower than if there were just one guy handing nails; they have to share the guy with the hammer. Now what if one of the feeders has to open a new box of nails, drops a nail, or takes a sip of coffee? Then the other guy can hand nails to the hammer guy twice as fast, because for a period of time he is not sharing the hammer guy with the other feeder. So with two guys feeding nails you can normally keep the guy hammering at 100% of what he is physically able to do.

    This is why, on a 4C/8T machine, 50% CPU utilization as reported by Task Manager and others can effectively be 100% of what the CPU is able to do. Once the core execution units are saturated there is no more to be had, no matter what CPU utilization says (there's a small core-counting check at the end of this comment).

    In the real world most threads stall a lot, so another thread, if available, can help keep the guy (physical core) hammering away.

    Very commonly most threads cannot fully saturate a modern physical core with work. So a second thread feeding instructions can keep the physical guy hammering as fast as he is able.

    "Better" CPUs like i7 verses i3 might have an extra execution unit within the physical core which for single thread is not so useful but in hyper-thread it really helps throughput. Again, forget about CPU %. CPU % says nothing about actual throughput in instructions retired per cycle.

  • Triem23 Moderator

    @NormanPCN I think the hammer metaphor was just about the best simple explanation I've seen on this topic. 

  • edited February 27

    Thanks Mike.

    My metaphor could lead to a question from some, so I'll pre-answer. One could think that maybe on hyper-threaded CPUs the threads run at half speed. Not really; it all depends. The system thread scheduler has to be smart about which CPUs it assigns an available thread to run on.

    Consider 4-core/8-thread, so the logical CPUs pair up as (1, 2) (3, 4) (5, 6) (7, 8). If you have four threads ready to run (not waiting for work), the system thread scheduler does not want to run them on CPUs 1, 2, 3 and 4, because (1,2) and (3,4) each share a core and those threads would run at half speed. It's better to assign the threads to 1, 3, 5, 7 or similar. It gets more complicated with scenarios beyond that simple example; just understand that the operating system scheduler is aware of the CPU's internal organization and adjusts accordingly.
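    A crude way to see or force that placement yourself is to restrict a process to every other logical CPU. This is only a sketch: psutil is an assumed third-party dependency, it counts CPUs from 0 where my example counted from 1, and it assumes sibling hardware threads are numbered in adjacent pairs, which is the usual layout on Windows but not guaranteed:

    ```python
    import psutil  # third-party package, assumed installed

    proc = psutil.Process()        # the current process
    logical = psutil.cpu_count()   # e.g. 8 on a 4C/8T CPU

    # One logical CPU per physical core: pick every other logical CPU
    # (0, 2, 4, 6), mimicking what the scheduler already prefers to do.
    # cpu_affinity() works on Windows and Linux; macOS does not expose it.
    one_per_core = list(range(0, logical, 2))
    proc.cpu_affinity(one_per_core)
    print("Now restricted to logical CPUs:", proc.cpu_affinity())
    ```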

  • edited February 28

    Yeah, that's an interesting analogy, but what is Sony Movie Studio/Vegas/Magix doing when it pegs all 8 logical processors on 4 cores flat out at 100% in Task Manager - with GPU render assist turned on too - when it renders something out? I sure never have that 'problem' with HitFilm.

    Sometimes if I want to do something else in Sony I have to use Set Affinity to turn off one of the "Processors" so I can do some light browsing, as there is nothing left over for Windows and it gets pretty sluggish. Doesn't happen with Hitfilm; loads of CPU left over.
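    That Set Affinity trick can be scripted too. A rough sketch using psutil (third-party, assumed installed) with a hypothetical process name; it just unticks logical CPU 0 for the renderer, the same as one box in the Task Manager dialog:

    ```python
    import psutil  # third-party package, assumed installed

    TARGET = "vegas.exe"  # hypothetical name; use whatever the render process is actually called

    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == TARGET:
            all_cpus = list(range(psutil.cpu_count()))
            # Leave logical CPU 0 free for Windows and the browser,
            # same as unticking one box in Set Affinity.
            proc.cpu_affinity(all_cpus[1:])
            print(proc.pid, "now limited to", proc.cpu_affinity())
    ```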

  • edited February 27

    @Palacono Hitfilm certainly has resource utilization issues. If CPU % is less than 100, that means no thread was executing for some period of time on some percentage of cores, "some period" being the sampling period of the program showing the utilization (there's a small per-core sampling sketch at the end of this comment). It does not mean the app, or some portion of the app, is not multi-threaded or has not started enough threads. You can have threads waiting for work. Keeping the app/threads working with minimal overhead is the issue, and Hitfilm has a real issue here.

    It is fun to speculate, but there are too many things that come into the mix here to speculate reasonably. The only thing one can say is that Hitfilm is not efficient (and thus not fast), relatively speaking, on most fronts.

    Hitfilm's pipeline performance issues can affect the GPU as well. With most stuff I play with, Nvidia does not think Hitfilm is doing enough with the GPU to jump the clock off the idle/2D clock rate. At times I get a partial clock jump. Some of that can come from the real-time frame rate. I just force my GPU to full clock in the Nvidia control panel. For the record, I have a GTX 980. I don't see this in Vegas. With a simple single media file and color wheels and curves effects used as a test, Vegas jumps the GPU to full clock. Vegas uses OpenCL (like most (all?) others), so maybe there is something there; I fold OpenCL and CUDA into the same thing for that statement.
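    If anyone wants to watch this for themselves, here's a minimal sampling loop (psutil again, third-party and assumed installed) that prints per-core utilization over one-second windows while you scrub a timeline. Remember that a core showing 60% only means no thread was runnable on it for 40% of that window; it says nothing about why the threads were waiting:

    ```python
    import psutil  # third-party package, assumed installed

    # Print per-logical-CPU utilization for five one-second sampling windows.
    for _ in range(5):
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        print(" ".join(f"{p:5.1f}" for p in per_core))
    ```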

  • Ah yes that makes a lot of sense. Thank you for that very nice explanation.

  • BTW, it seems that Intel has implicitly admitted that Ryzen is a rather strong competitor by dropping the prices on i5 and i7 processors :)

    I really hope that Ryzen lives up to the hype, I've been an AMD fan for quite a while and would like to see AMD back on top for once. It's been quite a while since the Athlon/Opteron days.

     

  • @WhiteCranePhoto Yep! Judging by the reviewers' reactions, it's looking really good right now for AMD.

     

  • @CNK sure sounds like it!

  • CNK
    edited March 3

    Well, at 1 GHz less, Ryzen is basically on par in single-threaded performance and beats most Intel CPUs badly in multi-threaded performance and price/performance.

    I'm not citing sources, it's all over the internet at this point.

    Well, it looks like Intel are going to resort to extreme price cutting soon...

  • Honestly, I'm just glad that Ryzen is competing on performance instead of on price.

  • CNK
    edited March 8

    Intel might be able to pull ahead again soon, and when they do, I feel like we're going to see another FX situation: good multi-thread, weak single-thread.

    Developers have apparently wanted more threads to work with since the Xbox 360 days, but Intel have kept offering dual cores and the like - basically not ideal CPUs for game development. I hope Ryzen is going to change that.

  • Sooo, any of y'all tried out Ryzen with HitFilm yet? I'm really curious whether HitFilm will utilize the whole thing!

  • I wish... I won't be getting a new computer for a while; I need to generate revenue first to recover from the investment in this new camera I have. :)

     

  • edited March 10

    Ha, I hear ya. I've got this A8-6600K OC'ed to 4.7 GHz, but she just can't do any sort of transitions at 1080p DNxHD. Hard to judge how the transition looks when you only see 3 frames. lol

    I'm debating between getting a Ryzen, or picking up an old FX-8350 and giving Ryzen time to mature. Possibly even waiting for Zen 2.

    I built this rig for $500, so it's kinda hard to throw another $500 at it just for the CPU, but when I built it I didn't realize I'd be getting into video editing as much as I have.  My goal was to be able to run movie maker fluently, because that should be able to handle any editing needs I have... Ba ha ha ha.

    Off topic, but I thought it was pretty interesting: I'm currently running this A8-6600K, which is a 2-core/4-thread chip. For grins I disabled 2 of the threads, making it a 2C/2T chip. CS:GO went from 90 FPS to 27 FPS, but there was NO noticeable difference in HitFilm playback (DNxHD); it was still slightly choppy during transitions. Then I tried disabling a whole core, making it a 1C/2T chip, and playback was garbage!

    I never thought about that before, but it does make sense: when video decoding, all the threads are doing basically the same thing, so it would make sense that they're limited by their shared floating-point math modules and such, which I really don't know anything about.

    So my thought process is that an 8C/16T CPU should fix that!

  • edited March 10

    @Triem23  Oh, I think I'm starting to feel the upgrade fever.  I've been toying with the idea of building up another system...why?  just because, I guess.  Can one really have too many computers?   Good article and I even understood most of it.  Ryzen would make a good fit for me because I don't game...like at all...I'm strictly a productivity guy.

    Ryzen 5 will be coming out on 11 April (at least in Sweden) with the following SKUs, each listed against its Intel counterpart:

    AMD Ryzen 5 1600X (6C/12T) at $249  vs  Intel Core i5-7600K (4C/4T) at $242
    AMD Ryzen 5 1600 (6C/12T) at $219  vs  Intel Core i5-7600 (4C/4T) at $213
    AMD Ryzen 5 1500X (4C/8T) at $189  vs  Intel Core i5-7500 (4C/4T) at $192
    AMD Ryzen 5 1400 (4C/8T) at $169  vs  Intel Core i3-7350K (2C/4T) at $168

    From: http://www.sweclockers.com/nyhet/23520-amd-lanserar-ryzen-5-med-sex-och-fyra-karnor-den-11-april - I don't own or claim to own this table.

    So... AMD are once again going to rock the market because their IPC is by no means bad. 

    Thoughts?
