What's the price for the board alone?
WH1T3 0U7
*******************************
Modified Thermaltake View 37
Intel 9900K, MSI Z390A, 128GB (32GB x4) GSkill Royal 3200MHz, RTX 3080 Vision, EVGA Nu Audio, 1TB Silicon Power SSD, EVGA 1300G2, ID cooling 360mm AiO, LG 3440 x 1440
As soon as I saw that board, my mouth just dropped. If only there were 4-way SLI on that board. What clock speed do the processors run at? Wow... 24 cores and 192GB of RAM. Imagine looking in Task Manager in Windows and seeing graphs for 24 cores. And the RAM usage would probably show as 0% for just Windows.
I think installing that many DDR2 sticks would get a bit tedious after a while.
Hopefully your client will have more than 192GB of hard drive space, or that much RAM is a bit of a waste XD.
Imagine setting up four of those as a cluster: 96 cores and 768GB of RAM. If you used 5 in a cluster, you would have just under a terabyte of RAM. Time for another jaw drop.
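Just to sanity-check the cluster math above, here's a quick sketch (the 24-core / 192GB per-node figures are taken from the thread, not from any spec sheet):

```python
# Rough cluster totals for a homogeneous cluster of identical nodes.
# Per-node figures (24 cores, 192 GB RAM) come from this thread.
def cluster_totals(nodes, cores_per_node=24, ram_gb_per_node=192):
    """Return (total_cores, total_ram_gb) for the given node count."""
    return nodes * cores_per_node, nodes * ram_gb_per_node

cores4, ram4 = cluster_totals(4)  # 96 cores, 768 GB
cores5, ram5 = cluster_totals(5)  # 120 cores, 960 GB -- just under 1 TB
```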
I wouldn't call this a server, I'd call it a super server. (Well, by industry definitions, I'd call it a server.) What OS are you thinking of? What about Ubuntu Server Edition, or is that not the sort of thing you are looking for?
Current Mod
AMD Athlon X2 7750 Black Edition 2.7GHz (Overclock Not Tested)
OCZ 2x2GB DDR2 800MHz GOLD XTC Memory
Samsung HD502HI 500GB Hard Drive SATAII - Green Drive
Zotac 9800GT Synergy Edition 512MB DDR3
Arctic Power 500W PSU
Gigabyte GA-M57 SLI-S4 Rev. 2.0
Personally, from what I've heard, Ubuntu Server Edition is not that great for industry use. I prefer Fedora/Red Hat. If the client needs a system that expensive, they can certainly afford Red Hat Enterprise Linux, not to mention the Red Hat license includes full support.
What kind of a server is this guy making that he needs all that?
If you've seen how long it takes to compile or render a Pixar movie, this is minor.
The server will have a small internal drive setup (~300 GB in RAID 6) for the OS and applications. No need for massive drive space, as most of the data used is on a NAS. The main reason for the large amount of memory is loading database indexes into RAM to speed up searches. The main reason for that many procs is to allow more instances of the customer's application to run, therefore allowing more clients to connect and search. From what I understand, once the search result is found, it is passed off to the node that actually has the fastest connection to the data.
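The "indexes in RAM" part is the key to the memory spec. A toy illustration of the idea (all names here are made up; the real application is custom-built and its internals aren't described in the thread):

```python
# Toy illustration of why the RAM matters: with an index held in memory,
# a search is a dict lookup instead of a scan of the on-disk data.
# Record fields and values are invented for this example.
records = [
    {"id": 1, "city": "Dallas"},
    {"id": 2, "city": "Boston"},
    {"id": 3, "city": "Dallas"},
]

# Build an in-memory index: city -> list of matching record ids.
index = {}
for rec in records:
    index.setdefault(rec["city"], []).append(rec["id"])

def search(city):
    """Answer a search from the in-memory index; no disk scan needed."""
    return index.get(city, [])
```

Scale that dict up to a real database index over hundreds of gigabytes of data and the 192GB of RAM stops looking excessive.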
We are talking hundreds of these nodes around the country (the data nodes, not the index nodes). Part of their software (custom-built) determines the location where the data is required and migrates that data to the node closest to the most recent requests. So the first access of that data may be slower than the next customer's access, depending on the frequency.
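That "migrate data toward the recent requests" behavior could be as simple as tracking which node served a data item most often lately. A hypothetical sketch of that heuristic (the real custom software's logic is unknown, so this is purely illustrative):

```python
# Hypothetical sketch of migrating a data item toward its recent requesters.
from collections import Counter

def pick_home_node(recent_request_nodes):
    """Choose the node that issued the most recent requests for an item.

    The item would then be migrated there, so later accesses at that node
    are local and fast -- matching the first-access-is-slower behavior
    described above. Returns None if there is no request history yet.
    """
    if not recent_request_nodes:
        return None
    counts = Counter(recent_request_nodes)
    return counts.most_common(1)[0][0]
```

For example, if the last few requests for an item came from nodes `["ny", "ny", "la"]`, the item would migrate to `"ny"`.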
I can follow most technical conversations even if they are way over my head. I learned long ago that sometimes you have to push the "I believe" button until that specific piece of info comes along that ties everything together, then "ping", it all makes sense. In this case I felt like a blathering idiot and my head felt like it was going to explode!
Oh, and due to the specific nature of the software, nothing except Red Hat can serve up the info as fast as the clients require. Trust me, they have tested this for many years. BTW: this company has been in business since the late 1800s...
"...Dumb all over, a little ugly on the side..." - Frank Zappa