View Full Version : DDR4 memory?
TheGreatSatan
09-13-2011, 12:31 PM
Looks like it's coming soon:
http://techgage.com/news/samsung_develops_the_first_30nm_ddr4_dram/
xr4man
09-13-2011, 01:31 PM
Isn't a lower CAS latency number better? Or is a CAS latency of 13 not an issue since the speed of the RAM is so much higher than DDR3?
TheGreatSatan
09-13-2011, 03:40 PM
I'd rather have lower-voltage RAM, which is what DDR4 delivers.
xr4man
09-13-2011, 03:44 PM
Yes, agreed, but what about my question about CAS latency? That's something I've always had a hard time understanding.
Twigsoffury
09-13-2011, 04:36 PM
Yes, agreed, but what about my question about CAS latency? That's something I've always had a hard time understanding.
I think CAS latency is the delay between the memory getting a read request and the data actually being available to send back, i.e. how quickly it's "available" again.
But don't hold me to that.
TheGreatSatan
09-13-2011, 09:40 PM
Higher frequency matters more than latency. The lower the frequency, the better (lower) the latency.
xr4man
09-14-2011, 08:44 AM
OK, that's sort of what I was guessing, although it seems a bit backwards to me.
NightrainSrt4
09-14-2011, 10:45 AM
Higher frequency matters more than latency. The lower the frequency, the better (lower) the latency.
The first sentence isn't true in every case, and it's especially untrue if the frequency isn't scaling at a rate faster than the latency. For most products this isn't a big problem, because the frequency scales enough to overcome the additional latency, but if that isn't happening then you're losing performance. You might be able to cycle faster, but if you can't actually get at the data as often, you may as well have bought a slower set with a much lower latency.
This usually only shows up on cheaper overclocked sets of RAM, where the sticks only get a mild overclock but the latency has to be raised by a large amount to keep them stable. If you buy decent RAM, the higher frequency will be worth the small increase in latency. It's the cheap stuff, where the memory chips really couldn't handle the increased frequency, that gets its latency bumped so far you might as well have bought the lower-frequency/lower-latency sticks. That's where they get the average buyer with the "higher frequency is better" mantra. IIRC, you used to see this (haven't checked recently) with the lower-priced, bottom-end overclocked sticks from OCZ and the like.
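To put rough numbers on that tradeoff: CAS latency is counted in memory clock cycles, so the real delay depends on both the CL number and the clock. A quick sketch (the kits and timings below are only illustrative examples, not quotes from any datasheet):

```python
# Convert a CAS latency (in clock cycles) into real time, given the DDR data
# rate in MT/s. The memory clock is half the data rate (double data rate),
# so: latency_ns = CL / (data_rate / 2) * 1000.

def cas_latency_ns(cl_cycles, data_rate_mts):
    clock_mhz = data_rate_mts / 2          # DDR transfers twice per clock
    return cl_cycles / clock_mhz * 1000    # cycles / MHz -> nanoseconds

if __name__ == "__main__":
    # Illustrative kits only: a common DDR3 profile vs. the CL13 DDR4 figure above.
    for label, cl, rate in [("DDR3-1600 CL9 ", 9, 1600),
                            ("DDR4-2133 CL13", 13, 2133)]:
        print(f"{label}: {cas_latency_ns(cl, rate):.2f} ns")
```

So a bigger CL number on a faster stick can work out to roughly the same real delay, which is why the raw CL figure on its own is misleading.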
DDR4 should be nice. Haven't looked over the specs yet.
xr4man
09-14-2011, 12:46 PM
Thanks for the explanation; that clears up a lot.
Konrad
09-14-2011, 07:22 PM
The silicon is rated and binned before leaving the factory, then tested and binned again at another factory before being mounted onto the PCB modules. The parts are matched, and paired with comparably matched memory controllers, then marketed not a lot differently than tiers of CPU parts.
Bottom line is the bottom line: the market is saturated, so prices are competitive, and you basically get exactly what you pay for (barring the premium you might insist on paying for certain brand names).
I find that these days there's really no gain when attempting to overclock low- and mid-priced parts, regardless of their specs, because all the particular units which can sustain overclocked performance have already been culled out to sell at higher prices. What's worse is when the parts are factory overclocked beyond their engineering specs, they cannot be overclocked any further, unless you think maybe 1-5% gains are worth the effort.
The only way to get faster RAM is to pay extra to get lower timing and higher frequency values. If the specs aren't proudly stamped all over the stick then it's safe to say they're junky.
The sad part?
By the time I get a DDR4 stick of RAM, it'll be considered highly outdated.
Yeah. I'm a computer modder with no income whatsoever.
msmrx57
09-14-2011, 10:17 PM
Yeah. I'm a computer modder with no income whatsoever.
I'm in the same boat. My newest machine is an AMD dual core. :(
Konrad
09-14-2011, 10:27 PM
I take it back. There are other ways to get faster RAM. The problem is they basically all involve exotic cooling like liquid nitrogen.
DemonDragonJ
09-21-2011, 11:16 AM
I am definitely excited about this new development, but as the slowest component (i.e., the greatest bottleneck to overall performance) in a modern computer is a mechanical hard drive, how great of a performance increase shall be experienced from upgrading to DDR4 memory?
On the subject of memory, there is an issue about which I have been wondering for some time: how is it that memory with a higher operating frequency also often has a higher latency? If the operating frequency represents the number of calculations that the memory can perform in a specified duration (identical to the operating frequency of a CPU), and the latency represents the number of clock cycles (calculations) between the CPU requesting specific data from the memory and the memory providing that data, would it not be more logical for a greater number of calculations per second (a higher operating frequency) to also allow for a shorter delay between the CPU requesting data and the memory providing it (a lower latency)? Could someone here please explain that to me?
Apart from that issue, the aspect of DDR4 that most intrigues me is that, according to the Wikipedia article on DDR4 SDRAM (http://en.wikipedia.org/wiki/DDR4_SDRAM), the multi-channel architecture of previous generations of memory shall be discarded in favor of a "point-to-point topology, where each channel in the memory controller is connected to a single module." I hope that that transition leads to much faster performance overall.
Konrad
09-22-2011, 01:05 AM
The great mechanical hard drive performance bottleneck is not so much an issue with SSD drives. The new bottlenecks are Shenzhen garbage flash controller chips and, probably soon, their mobo-connected bus speeds.
How long before they decide on the most elegant solution: separate RAM banks for each processor core?
DemonDragonJ
09-22-2011, 10:15 AM
How long before they decide on the most elegant solution: separate RAM banks for each processor core?
That sounds very interesting to me, but that concept unfortunately is unfamiliar to me. Could you please either explain it in greater detail, or direct me to a web site that could provide me with further information on the subject?
Konrad
09-22-2011, 10:40 PM
It seems like a simple idea to me: design the architecture so the mobo is populated with multiple RAM banks and controllers, each dedicated directly to a single processor core (or single group of cores). This would obviously be a considerable paradigm shift, requiring a new generation of processors and matched (Northbridge/MCH) chipset components; i.e., wait until Intel or AMD introduces it before Gigabyte and Asus start engineering it onto PC mobos.
It seems obvious that multiple memory controllers can collectively do a better job of running multiple simultaneous memory tasks than a single controller could. As it stands, all 8 of your cores still have to share access to all external memory ... on-die L2/L3 cache is great, and fast, but surely more simultaneous bandwidth to external cache/memory can only be better? Conditions where one or more processor cores have to idle while waiting for other cores to finish their memory access are the sort of thing that happens fairly rarely but with enough statistical frequency to have a noticeable impact on overall performance. Think of it as separate "channels" if you prefer, except slaved directly to the processor cores instead of to the chipset bridge, one step closer to true parallel computing (in the sense of simultaneously running multiple computers on one mobo). More hardware always means more capabilities, and if it's all engineered intelligently it can vastly exceed the sum of its parts.
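To make the contention argument concrete, here's a deliberately crude back-of-the-envelope sketch (core count, request count, and service time are all made-up illustrative numbers, and real controllers pipeline requests, so treat this as best-case vs. worst-case bounds only):

```python
# Crude bounds (hypothetical workload, not a real memory simulator):
# `cores` cores each issue `reqs` memory requests, and a controller is
# busy for `service_ns` nanoseconds per request.

def one_shared_controller(cores, reqs, service_ns):
    # Worst case: every request queues behind every other request.
    return cores * reqs * service_ns

def one_controller_per_core(cores, reqs, service_ns):
    # Best case: each core's requests run on its own controller in parallel.
    return reqs * service_ns

if __name__ == "__main__":
    cores, reqs, service_ns = 8, 100_000, 50
    print("one shared controller  :", one_shared_controller(cores, reqs, service_ns) / 1e6, "ms")
    print("one controller per core:", one_controller_per_core(cores, reqs, service_ns) / 1e6, "ms")
```

The real gap is nowhere near 8x because controllers interleave and pipeline requests, but the direction of the effect is the point.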
It might even increase fault tolerance, allowing a locked processor or bad RAM chip to BSoD/crash/reboot only an isolated segment of the machinery while the remainder keeps its functions and data.
The idea was mine, although no doubt every competent silicon engineer on the planet has already considered or studied it, no doubt there are already technical (and even corporate) terms describing such an architecture, and perhaps even some (non-PC) multi-core platforms which already implement it.
PCs are great and all, don't get me wrong, I'm a nerd with passion. But as far as real computers go, consumer-class PCs are just overpriced cheap mass-produced junk, no matter how many fancy nVidia cards you stuff into them. They're compatible with common software and hardware, and with each other, and comparatively inexpensive and (these days) fairly idiot-proofed and easy to operate. Please don't flame me, lol.
DemonDragonJ
09-23-2011, 11:17 AM
It seems like a simple idea to me: design the architecture so the mobo is populated with multiple RAM banks and controllers, each dedicated directly to a single processor core (or single group of cores). ...
Wow; that is definitely very interesting. I certainly would be interested to see if this new memory architecture becomes more popular, as it seems as if it could help computers to operate much faster and more efficiently.
Additionally, your comment about parallel computing is also intriguing, but if it becomes more popular, will serial communication (i.e., USB, PCI-express, SATA) remain the primary communication type for computer interfaces, as serial communication is superior to parallel communication, or will parallel communication become popular again? Or are parallel communication and parallel computing unrelated to each other?
xr4man
09-23-2011, 11:41 AM
Or are parallel communication and parallel computing unrelated to each other?
You are correct to a certain extent.
The processor already runs parallel communications natively. That's where you get 16-bit, 32-bit, 64-bit, etc.: in a 64-bit processor you have 64 bits of data going to the processor at one time, in parallel. Same thing with an IDE port for a hard drive; the data flows in x bits per clock cycle. With SATA, that parallel data just gets changed into a single stream of serial bits.
But with parallel computing, you would have each core or processor doing its own thing at the same time as another core or processor, whether their communications are parallel or serial.
I think that's pretty much how they work now. However, that parallel processing has to go through a single memory controller chip, so the parallel data to and from the processor has to go through the controller chip "serially".
So what Konrad is saying is that if you give each core its own memory controller, then that parallel data can go through ITS OWN MEMORY CONTROLLER "PARALLELY" (I just made a new word).
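Rough throughput math for the parallel-vs-serial point (the bus width, clock, and lane rate below are just illustrative, and the 8b/10b efficiency figure is an assumption about the link encoding):

```python
# Throughput of a wide parallel bus vs. a single fast serial lane.
# Parallel: many wires, modest clock. Serial: one wire, much higher clock.

def parallel_bus_mbps(width_bits, clock_mhz):
    return width_bits * clock_mhz                    # megabits per second

def serial_lane_mbps(lane_rate_gbps, efficiency=0.8):
    # e.g. 8b/10b encoding keeps 8 of every 10 bits as payload
    return lane_rate_gbps * 1000 * efficiency

if __name__ == "__main__":
    print("64-bit bus @ 100 MHz :", parallel_bus_mbps(64, 100), "Mbit/s")
    print("3 Gbit/s serial lane :", serial_lane_mbps(3.0), "Mbit/s usable")
```

Same data either way; one spreads it across many wires per clock, the other pushes it down one wire much faster.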
Does that make any more sense?
Konrad
09-23-2011, 02:56 PM
I don't see USB, PCI-e, SATA, etc disappearing any time soon; they're standards designed for compatibility with off-motherboard hardware, they're reasonably well designed and they allow all the hardware makers who aren't working in mobo factories to contribute to PC technology.
But these days the chipset (most especially the Northbridge/MCH component) is basically half of the computer, soldered onto the mobo, designed in tandem with whatever family of processors plug into the mobo socket. The "standards" for these parts (along with the parts themselves) are made by Intel or AMD; how the CPU and mobo communicate isn't relevant as long as it works and all the other hardware (PCI-e, USB, RAM, etc) that plug onto the mobo still follow normal industry standards.
The consumer would basically just see a mobo with all the usual hardware connectors (and the usual "new" proprietary processor socket) plus perhaps a few more slots to populate with RAM sticks. Graphics cards with their own onboard processors and memory and memory controllers have already done this for years; the simple truth is that they can operate faster when computing and manipulating data locally instead of sharing memory with slowpoke main processor priorities ... these cards even bridge directly through each other rather than share data through the mobo chipset. Imagine a PC where each processor core has its own memory, where they can bridge data directly to each other as well, independent of whatever wait might be queued at the (one or two or even three) memory controller channels used today.
It seems like a sensible evolution to me, since RAM ain't all that costly these days and we've already seen the focus of PCs move away from "one big fast brain" to a distributed multi-core (and increasingly parallel) architecture.
RogueOpportunist
09-24-2011, 12:48 AM
DDR4? With these new APUs using system RAM I can see AMD pushing for DDR5 before 2012 is out... They need the edge on Intel wherever they can get it, and they aren't going to keep that edge by using DDR3 for graphics memory, even if it is a "mid-range" offering.
Think about it: you could theoretically take a GTX 460, put DDR1 memory on it, and make it worthless as a "modern" video card... Seems like a stupid statement... until you realize that this is almost exactly what AMD has done by forcing the GPU to pull from system memory... I mean it works OK for the time being, but reading through some of the more thorough reviews it's pretty plain to see that people have already noticed the "compromises" AMD had to make that have resulted in bottlenecked performance.
AMD has no choice but to push for DDR4+ as system RAM; they NEED it like they need silicon, and without it 2012 completely falls apart for them... The only real saving grace in all this is that, as far as "graphics" goes, AMD could release a steaming dog turd and it would still outperform any Intel IGP offering.
Konrad
09-24-2011, 01:17 AM
Why is this "need" for DDR4 limited to AMD? Intel and nVidia have the same pressing interest in faster memory, and they'll certainly offer their own APU architectures (although perhaps under different names) if the approach actually proves effective.
The APU approach gives higher integrated performance at the cost of lesser modularity; an AMD APU computer might have top-tier components at the outset but it won't compete with even better processing components being offered 1-2 years later. In a way, it's more all-or-nothing ... gotta upgrade the entire platform all at once rather than just individually upgrade the weakest components; this might not be an issue if a multicore APU is priced comparably to discrete multicore CPU and GPU components. Probably a terrible thermal nightmare as well, concentrating all those busy electrons in such densely integrated parts - although price, performance, and reliability factors still remain to be seen. I suspect this approach will see a lot of use in laptops and other mobile platforms, which is great, but nobody breaks records or builds über-gaming boxes from laptop components.
RogueOpportunist
09-24-2011, 01:45 AM
Easy answer to that: Intel does not "need" faster DDR since they do not have high-end graphics integrated into the CPU that demand it... As far as Nvidia goes... Well... They don't need what they already have.
By putting discrete-level graphics on-die, AMD has locked themselves into a future demand for faster desktop memory. Not because desktops "need" faster memory (let's face it, even the difference between 800MHz and 1600MHz RAM is barely noticeable to 80% of computer users); enthusiasts want faster DDR because it makes their wang longer, which is pretty much on par with Extreme Edition processors, $600 motherboards, and triple-channel memory.
Introducing faster DDR is like coming out with 64-bit processors: it's not "new" tech, it has been around for quite some time now, and the only reason we haven't seen faster RAM already is that there is no demand for it... I know the first response to that is going to be "then why do we have 1333, 1600, 1800MHz, etc.?" Well... that's easy... they're not making higher-level RAM chips; all those chips are made at the same time, it is the exact same product, and what you are paying for is a higher bin... The same as with CPUs... They aren't making faster product... They are just labeling it as faster product.
Watch, in 2012 you're going to see the exact same conversations people had about the "validity" of triple channel memory, except it will be in the context of DDR4. I can see AMD platforms going up in cost due to their graphics chips demanding faster memory, while if Intel is smart they will just flip it around and offer cheaper triple channel DDR3 alternatives and take what was AMD's argument against triple channel and flip it around on DDR4... That won't happen though, Intel will push anything they can on the market.
This is of course just my opinion and I have absolutely no clue what the future holds... All I see is what's presented and right now AMD "needs" faster memory while Intel doesn't.
Konrad
09-24-2011, 02:26 AM
As you say, DDR4 is just an evolutionary increment. The higher bin is the result of finer engineering tolerances; smaller scale and tighter integration ... so yes, there is higher yield of parts which can sustain faster performance specifications.
The RAM modules themselves incorporate a controller chip; this is where the voltages and SPD timings and other parameters are firmwired into place. This onboard controller is not especially complex but it needs to be the fastest component on the module, otherwise it slows down data throughput for the faster RAM chips.
I don't really see any great demand for APU architecture at this point, since we don't have a lot of high end graphics cards saturating all the bandwidth on the mobo's PCI-e 3.0 bus (under normal circumstances), especially since GPU cards bridge directly into each other. I sort of expect the first APU offerings won't be spectacular in real use, although they'll give AMD a head start on the race.
AMD could just as easily accomplish faster memory bandwidth by stacking more RAM slots and channels ... memory is cheap enough that soon it might be cheaper to implement it this way, DDR4 is still too new to be good bang for the buck. The biggest limiter is actually mobo real estate; everybody likes to miniaturize the form factor as much as possible and RAM slots take a lot of space. I imagine AMD's Northbridge chips will run a bit cooler because much of their heat will be transferred into the APU core (since many more signals will be isolated on-die instead of directed across mobo data busses). I hope they don't decide to cheap out by removing PCI-e lanes from this component of the chipset, I hate it when mobo makers splice extra lanes into their boards because it always invokes some kind of tradeoff.
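The "just add channels" arithmetic, roughly (64 bits is the standard DIMM width; the channel counts are hypothetical):

```python
# Theoretical peak bandwidth: data rate (MT/s) * 8 bytes per transfer
# (64-bit DIMM) * number of independent channels.

def peak_bandwidth_gbs(data_rate_mts, channels):
    return data_rate_mts * 8 * channels / 1000       # GB/s

if __name__ == "__main__":
    for channels in (1, 2, 3, 4):
        print(f"DDR3-1600 x {channels} channel(s): {peak_bandwidth_gbs(1600, channels):.1f} GB/s")
```

Every extra channel needs its own controller, traces, and DIMM slots, which is exactly the board real-estate problem above.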
Yeah, of course we'll see "hybrid" boards which can accept new DDR4 and current DDR3 (and perhaps slower DDR types as well), at least for a while, as usual. Every new generation (new socket) processor looks great at introduction but quickly ends up becoming the bottom offering within its family, and first-gen mobos usually end up having all sorts of minor flaws which get corrected in future revisions (not always possible to fix with BIOS firmware updates). I personally think it's worth waiting until the first wave has passed, oversights get properly fixed, and prices shuffle a few steps down the ladder.
RogueOpportunist
09-24-2011, 02:42 AM
I'm not trying to open a debate on the methods of use or the willingness of manufacturers to push product to market long before there is a demand, relying on marketing tactics to make sales figures... All I'm saying is AMD "needs" that RAM to be faster. When I say that, I'm considering that their APU offerings aren't going to remain static; while I doubt APU offerings will be as aggressively "revamped" as GPU offerings, I do see the very next offering as being something that will make today's product look pale in comparison... I mean it just makes sense, right? The $100 GPU of today was the $400 GPU of last year... or hell, 6 months ago.
AMD's current APU offerings have a memory bandwidth bottleneck; it is their very first APU and already the memory is an issue. Given the current trend with GPU development I wouldn't consider it far-fetched to say that the next APU offering isn't even going to function on DDR3 due to that bottleneck. It is a compounding problem: what might only translate into a 10 or 20% drop in performance today could translate into a 50-75% drop in performance on next-gen GPUs... It's like with my GTX 460 example... Put DDR1 on any modern video card and you render the thing useless by modern standards; you couldn't sell the thing for $5 if you wanted to, the memory bottleneck is just too severe and the GPU can't function properly... Well, my opinion is that the exact same thing is going to happen with DDR3. Eventually the evolution will come to a point where DDR3 is just no longer functional as video memory, at which point Intel won't care because they don't have a "high end" IGP that needs fast memory and Nvidia won't care because video cards use their own RAM... AMD on the other hand is going to have to do something about memory. What that something is I have no idea, but my point is AMD "needs" it... Intel shouldn't really care one way or the other.
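For a ballpark of how lopsided that comparison is (the card figures are rough and from memory, so treat them as order-of-magnitude only):

```python
# Very rough comparison of shared system memory vs. a discrete card's
# dedicated memory. All figures approximate/illustrative.

def bandwidth_gbs(data_rate_mts, bus_width_bits):
    return data_rate_mts * (bus_width_bits / 8) / 1000   # GB/s

if __name__ == "__main__":
    system = bandwidth_gbs(1600, 128)    # dual-channel DDR3-1600
    card   = bandwidth_gbs(3600, 256)    # a GDDR5 card in the GTX 460 class
    print(f"APU sharing system RAM : ~{system:.1f} GB/s")
    print(f"Discrete card w/ GDDR5 : ~{card:.1f} GB/s, roughly {card / system:.1f}x more")
```

And the APU's graphics half doesn't even get the whole system figure to itself, since the CPU cores are pulling from the same pool.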
DemonDragonJ
09-25-2011, 09:29 AM
you are correct to a certain extent.
the processor already runs parallel communications natively. that's where you get 16 bit, 32 bit, 64 bit, etc. in a 64 bit processor you have 64 bits of data going to the processor at one time in parallel. same thing with an ide port for a hard drive. the data is flowing in x amount of bits per clock cycle. with sata coms it just changes the parallel data into a single stream of serial bits.
but with parallel computing, you would have each core or processor doing it's own thing at the same time as another core or processor. whether their communications are in parallel or serial.
i think that's pretty much how they work now. however, that parallel processing has to go through a single memory controller chip. so that parallel data to and from the processor has to go through the controller chip "serially".
so what konrad is saying is if you give each core it's own memory controller, then that parallel data can go through IT'S OWN MEMORY CONTROLLER "PARALLELY" (i just made a new word).
does that make any more sense?
Yes, that explanation is very helpful, but I now wish to ask: why is parallel computing an apparently recent phenomenon, if multi-core processors have been in existence for several years now? Is parallel computing simply a complex and difficult process to achieve?
xr4man
09-25-2011, 10:33 AM
I don't think it is a recent phenomenon. I just think the cost has come down enough for it to become mainstream for consumer PCs.
Although I did read an article a while back about the real-world need for 6-core processors: until there is software optimized to use all six cores, it wouldn't be much of a jump in performance. It went on to say that writing code for multi-core processors is a bit more difficult.
I think I read that about a year or so ago, so obviously things have changed, but it's just an example.
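As a tiny illustration of why "optimized to use all six cores" matters: the work has to be split up explicitly before extra cores help. A minimal sketch with a toy workload (the prime-counting job, chunk sizes, and core count are arbitrary):

```python
# Toy example: the same CPU-bound work spread across a pool of worker
# processes. The extra cores only help because the work was split up.
from multiprocessing import Pool

def count_primes(limit):
    # deliberately naive, CPU-bound busywork
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

if __name__ == "__main__":
    chunks = [50_000] * 6                  # six equal chunks of work
    with Pool(processes=6) as pool:        # one worker per (hypothetical) core
        results = pool.map(count_primes, chunks)
    print("primes counted per chunk:", results)
```

Written as one straight-line loop instead, five of those six cores would just sit idle, which is the article's point.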
Kayin
09-25-2011, 01:26 PM
AMD could simply address that memory issue through a sideport. There's also been talk of integrating DRAM into the package like their Xbox 360 GPU. They have experience with both.
Besides, there's talk they'll abandon DDR altogether. Their newest cards (HD7000) are all running XDR (the newest incarnation of RAMBUS) with a quad data rate, not a dual. It's not too big a leap to put the desktop version into their setups. Yes, it's a respun processor, but DDR4 would be as well; the IMC will not handle it natively.
DDR4 isn't JEDEC certified, so it will be 2014 at the earliest before it comes out. I expect AMD to go with sideband addressing soon, and for Trinity possibly an integrated DRAM cache plus sidebanding to deal with the HD7000 series core in it.
Sturm und Drang, but no real DDR4 till 2014. Could AMD benefit? Dubious at best. First-gen DDR4 will most likely (like DDR2 and 3 were at launch) be slower than mature DDR3. Only over time will it ramp up.
Konrad
09-25-2011, 08:25 PM
The embedded microcode running in the processor hardware actually automates efficiency to some degree, but yeah, the software needs to be optimized to gain full use of the hardware. Modern compilers and operating systems automate this to some extent as well, but the reality is that even the best automation is dumber than smart design. Most programs do not lend themselves well to parallel computing anyhow; the result is cores either remain idle or are tasked with running other programs.
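The usual way to put a number on "doesn't lend itself well to parallel computing" is Amdahl's law; a quick sketch (the 90%-parallel figure is just an example):

```python
# Amdahl's law: if only a fraction p of a program can run in parallel,
# the best possible speedup on n cores is 1 / ((1 - p) + p / n).

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

if __name__ == "__main__":
    for cores in (2, 4, 8, 64):
        print(f"{cores:3d} cores, 90% parallel code: {amdahl_speedup(0.9, cores):.2f}x")
```

Even a mostly-parallel program tops out well short of the core count, which is why the spare cores usually end up idle or running other programs.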
Software will never fully utilize the latest and greatest hardware technology anyhow, it's always slightly behind the curve, but it'll never catch up if the hardware is not introduced.
In any event, while faster is always better, I still don't see high-end DDR3 being obsolete for a few years. The components which require critical memory speed (i.e., CPU and GPU, mostly) already integrate their own dedicated memory controllers, and it's not at all difficult to double your word length by addressing twice as many RAM chips through a glue-logic controller, doubling and redoubling your memory bandwidth right up to the architecture limits imposed within the processors themselves ... it just costs more.
RogueOpportunist
09-26-2011, 02:34 AM
why is parallel computing an apparently recent phenomenon, if multi-core processors have been in existence for several years now? Is parallel computing simply a complex and difficult process to achieve?
Some technologies are "discovered" long before they are needed in any kind of mainstream computing. For example, "64-bit" has been around since 1961, believe it or not, long before we ever even had personal computers, yet it took until 2003 or so before the 64-bit processor came to the PC. Granted, there was a big difference between the 64-bit of 1961 and a modern 64-bit processor, but even in 2003 64-bit wasn't "needed", hence the reason it took years after release for 64-bit to get proper support.
Also, R&D costs money and some technologies are more expensive than others to develop; plus, sometimes you need to build a series of lesser technologies first before you get to the point where you can even begin researching the better stuff... Or maybe they just knew they could sell multi-core processors for 10 years before they introduce the "next big thing"... It is still business, after all.
Plus, 64-bit processors weren't needed early last decade because most mobos maxed out at around 1GB of RAM. The reason we went from 16-bit to 32-bit was, simply, RAM.
It will be a LONG time before we see 128-bit procys.
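The word-size/RAM connection is really just address-space arithmetic:

```python
# How much memory a flat address space of a given width can reach.

def addressable_bytes(address_bits):
    return 2 ** address_bits

if __name__ == "__main__":
    print("16-bit:", addressable_bytes(16) // 1024,  "KiB")
    print("32-bit:", addressable_bytes(32) // 2**30, "GiB")
    print("64-bit:", addressable_bytes(64) // 2**60, "EiB")
```

A 32-bit machine tops out at 4 GiB, so 1GB boards weren't anywhere near the ceiling, and 64-bit addressing reaches 16 EiB, which is why 128-bit is nowhere on the horizon.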
Konrad
09-27-2011, 12:34 AM
Smaller nanoscales on the lithography mean ever-increasing memory density. And today's graphics cards are RAM pigs; more is always better, exponential progress which far outpaces the mainframes and 386s of three or four decades past. And just when you think you've finally gotten the graphics to match human perception of real-time photo-realism, somebody invents a substantially higher monitor resolution which increases the computational load yet again.
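The resolution point in rough numbers (4 bytes per pixel and 60 frames per second are just typical illustrative values):

```python
# Pixels per frame and raw framebuffer traffic at a few resolutions,
# assuming 4 bytes per pixel and 60 frames per second (illustrative only).

RESOLUTIONS = {"1280x1024": (1280, 1024),
               "1920x1080": (1920, 1080),
               "2560x1600": (2560, 1600)}

for name, (w, h) in RESOLUTIONS.items():
    pixels = w * h
    traffic_mb_s = pixels * 4 * 60 / 1e6     # bytes per frame * frames per second
    print(f"{name}: {pixels:>9,} pixels, ~{traffic_mb_s:,.0f} MB/s just to repaint the screen")
```

And that's before textures, geometry, and post-processing ever touch memory.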
R&D is basically interchangeable with marketing anyhow. These days, the imperative to improve technology is always driven by how successfully you can convince consumers they need to buy it. I think we may not need 128-bit architecture by the end of this decade, but it'll be around nonetheless.