Most of the current comments are along the lines of "You will only need this card if you are doing _____", e.g. "Gaming at 4K" or "Parallel Compute".
I think it's pretty clear this is a halo product, but I want to point out that having two non-crippled GPUs on a stick is an impressive technical achievement. Sure, they will be throttled when the heat constraints kick in, but I am excited to see this sort of technology trickle down into the next "Asus 760 Mars" product.
A halo product isn't there to be the real flagship that drives the revenue; instead it's the ultra-cool item that people lust after and associate with the brand before buying one of the lower-end models.
For example, the Ford Shelby GT500 http://www.ford.com/cars/mustang/trim/shelbygt500/ is probably not even turning a profit for Ford, but when someone picks up an automotive magazine with an article about the car, it makes them more likely to buy the entry level Mustang at 40% of the cost, or any other Ford product.
The Dodge Viper is probably a better example, as it's a totally custom chassis, body, etc.
The GT500 still shares a LOT with a production Mustang, and all the economies of scale that imparts. It's also produced in very high quantities, 5000+ yearly. They're making money or they wouldn't build that many.
I've heard it explained that halo cars help create brand loyalty with an instinctual response similar to "My dad can beat up your dad" fights on the schoolyard.
An even better example is the Bugatti Veyron, the fastest production car in the world. Some analysts say the company is losing over $6M on every car sold, despite the $2M price tag. Only 400 cars have been sold to date.
I believe orik is referring to the halo effect, a term used in marketing. It's the idea that when a customer has a positive experience with a company's products, they will likely choose that company's products in the future simply because of the past positive experience.
The idea is the customer thinks:
"Look at this amazing product Nvidia is producing! They must have great engineers working on bleeding edge technology!".
Even if they don't buy the $3000 GPU, they now associate Nvidia with high-quality GPUs, which may influence them to buy an Nvidia product in the future.
An iPhone is a good example. Real life example: my wife bought an iPhone, I was impressed and bought one, which led to me buying a MacBook to do dev work (and because "hey, they do make good machines"), which led to a whole house full of Macs and various Apple gear.
Because they made what I thought was a decent phone at the time, they sold some other gear as a result.
Initially the 8GB iPhone sold for $600 on contract, which worked out to an ASP of around $800 or higher and an average TCO over 2 years of about $3k.
Also, it was announced 6 months prior to release, was often sold out after its release, and was completely unavailable in most countries until months or years after the initial launch.
It was very much a halo product (at release, less so over time).
In fact, I would probably consider it one of the most successful halo products ever - it not only cast its halo over the entire Mac+iPod product line, but it eventually became the revenue driver of the entire company.
Assuming the phrase "halo product" derives from "halo effect", I'm confused about how my example isn't the quintessential definition. 1. Buy iPhone. 2. Enjoy iPhone. 3. "Hey, I'll bet Apple's other products are pretty good, too." 4. Profit for Apple.
Disagreement on the definition, or poorly explained personal example?
(EDIT: now that I think about it, our first iPods were the halo product that made me discard old Apple biases that I was carrying from the 90s.)
That's pretty impressive. According to the announcement this card will sport 8 Teraflops (or about 112 GIPS), which corresponds to the entire available computing power on this planet in 1990. Let's play some Counterstrike :D
Another fun one: at 8 teraflops, this card is roughly equivalent to the top supercomputer of 2000. So our desktops (in terms of raw computing capability) are only about 14 years behind supercomputers, and that gap is closing rapidly.
I definitely see a need for graphics cards like these with virtual reality around the corner.
I haven't bought a new graphics card in 5 years because even next gen games play well enough. But I can see VR changing that with the need to render the same scene twice (one for each eye).
I wonder how much money you'd have to drop today to have a rig that can push 4K to each eye? There isn't a headset that can support that yet, but I'd imagine it's at least in the next 5-10 years.
It looks like two GeForce Titan Blacks in SLI can do one 4K screen. So if you did two monitors, each with its own SLI pair, it would work out to about $4000 in GPUs as of right now.
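Rough back-of-the-envelope numbers in Python (the 4K-per-eye and 90 Hz figures are my assumptions, not any shipping headset spec):

    per_eye_pixels = 3840 * 2160        # assumed 4K panel per eye
    refresh_hz = 90                     # assumed VR refresh rate
    vr_pixels_per_sec = per_eye_pixels * 2 * refresh_hz   # ~1.49e9
    flat_1080p60 = 1920 * 1080 * 60                        # ~1.24e8
    print(vr_pixels_per_sec / flat_1080p60)                # ~12x the pixel throughput of 1080p/60

So even before you get to latency requirements, that's roughly an order of magnitude more pixel throughput than a single 1080p/60 display.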
This card is more expensive than all the hardware in my current PC, and I am running a pretty high-end gaming rig: a 900D, GTX 680, 3770K, Xonar, 16 GB of RAM, a roomy SSD, a VelociRaptor, a jonnyGURU 5-star-rated PSU, and custom-built water cooling.
I don't track gaming GPU requirements that much, but this strikes me as past high end. Is this even useful for gaming, or is it really aimed at compute work like cryptocurrency mining, simulations, etc.?
The thing to note about this card is that it's a dual-GPU card - two chips with some sort of intra-card PCIE bus between them. You could get equivalent gaming performance by buying two of the GTX 780 Ti for about $1500. That level of performance is roughly what you want if you want to game at 4k resolutions at 60 FPS, so there are gaming applications for this card, even if it falls into "insanely high-end". The niche for this sort of card is in mini-ITX and similarly small computers, where there isn't enough room for a pair of cards and the builder wants "throwing money away" levels of performance.
That said, the selling point of the Titan cards is that their GPUs don't have the same restrictions put on their general-purpose compute performance as the standard gamer cards. NVIDIA locks this performance on their GeForce cards in order to protect their lucrative GPGPU business, so this is really more of an entry-level card for scientific computing and other applications.
Cryptocurrency mining would be an obvious application, but a quirk of NVIDIA and AMD's differing architectures means that AMD cards are vastly more powerful at the specific functions needed to mine cryptocoins.
While a few years ago this card would have been great at mining cryptocoins, nowadays no GPU is going to be worth it for Bitcoin, as ASICs are 2-3 orders of magnitude faster and more efficient.
It's mainly used for the double-precision floating point that isn't artificially limited. The Titan supercomputer at ORNL uses an older version of these chips. So: heavy compute.
Dual-GPU... so not very good for VR gaming. The only reason you'd want such an expensive high-end GPU is to be future-proof for VR gaming at 4K/120fps, but dual-GPU cards aren't great for VR.
Nvidia needs another card between the normal Titan and this one: a single GPU, targeted at VR gaming, costing $1500 at most.
Not a whole lot. Nvidia cards are not as efficient at mining as AMD ones.
1. A single GTX Titan gets 300 MH/s [0].
2. There are two GTX Titan chips in this card, so let's say 600 MH/s.
3. A rack should fit around 60 of these. This gives us 36 GH/s (gigahashes per second).
4. According to this [1] calculator you will get a whopping 0.00361597 BTC per day, which would be worth around $2.12. Tomorrow the expected payout drops to $2.09. Electricity should cost you at least an order of magnitude more. (A rough sketch of the arithmetic is below.)
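A minimal sketch of that arithmetic in Python (the hash rates and payout figures are just the assumptions from the list above, not measurements):

    mhs_per_chip = 300                   # single GTX Titan, MH/s (assumption from step 1)
    chips_per_card = 2
    cards_per_rack = 60
    rack_ghs = mhs_per_chip * chips_per_card * cards_per_rack / 1000.0
    print(rack_ghs)                      # -> 36.0 GH/s

    btc_per_day = 0.00361597             # from the linked calculator
    usd_per_btc = 586                    # implied by the ~$2.12/day figure above
    print(round(btc_per_day * usd_per_btc, 2))   # -> ~2.12 USD per day, before electricity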
It mentions supercomputers, but also gaming. Supercomputers aren't known for being particularly good for gaming, to put it that way. Is this product just a new gaming GPU, or is it aimed at a different market, even altcoin miners?
Real-time shading/rendering is essentially a massively parallel operation -- the exact type of problem that GPUs (and supercomputers in general) are good at.
In fact, many of the top supercomputers today use GPUs.
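A toy illustration of why it maps so well: every pixel's shading is independent of every other pixel's, so a whole frame is one big data-parallel map (NumPy here just standing in for thousands of GPU threads):

    import numpy as np

    def shade(x, y):
        # toy per-pixel "shader": each output depends only on its own (x, y)
        return (np.sin(x * 0.1) * np.cos(y * 0.1) + 1) / 2

    ys, xs = np.mgrid[0:1080, 0:1920]    # one coordinate pair per pixel
    frame = shade(xs, ys)                # the entire frame computed as one parallel map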
Yep. And it would probably be good for altcoins indeed. It just can't be a good Bitcoin miner because the ASICs are already out there. If a coin can be mined with an ASIC (I believe some altcoins, like Litecoin, are designed not to be?), then it is nearly pointless to mine it with a GPU (unless in a pool).
Also, I would assume the reason supercomputers aren't good for gaming is that they are parallelized at levels beyond the chips on a GPU: entire machines are networked together via various interconnects. The software that distributes the processing across machines - or whatever other aspect makes them "super" - is probably what limits their ability to run video games :0
The best performance/watt altcoin miner right now is the Nvidia 760, although the R9 290X has a case for being the best well-rounded altcoin miner (once you factor in that you'd need 3 or 4 760s to keep up with a single R9 290X).
I'm on 660 Tis in SLI, which are not the newest horse in the stable and can run just about anything out there maxed (16x AA, etc.) @ 1080p. You won't need this card unless you're going multi-monitor and/or retina-level.
Keep in mind that even if you're maxing the in-game settings, you aren't maxing out the image quality fully. You can still go beyond that with supersampling (essentially rendering at a higher resolution, then downscaling to your monitor's resolution in order to prevent aliasing). Standard anti-aliasing tries to detect edges in order to find specific parts of the image to supersample, but it isn't perfect. If you want the best image quality possible, supersampling is the way to go.
The downside, though, is that it's extremely resource intensive. Even 16x MSAA is faster than 2x supersampling. With supersampling at 2x per axis on 1080p, you're rendering at 3840x2160 and then scaling down to 1080p - effectively the same pixel count as gaming at 4K.
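For the curious, the final downscale is just an averaging ("resolve") over each block of high-resolution samples. A minimal NumPy sketch of a box-filter version (real drivers use fancier filters, so this is only an illustration):

    import numpy as np

    def ssaa_downscale(hi_res, factor=2):
        # Average each factor x factor block of the oversized render
        # into one output pixel (a simple box-filter resolve).
        h, w, c = hi_res.shape
        return hi_res.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

    render_2x = np.random.rand(2160, 3840, 3)     # stand-in for a 2x render of a 1080p frame
    frame = ssaa_downscale(render_2x, factor=2)   # 1080x1920 anti-aliased output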
I have two rigs, one with dual R9 270s and another with SLI'd GTX 760s. Each can run Dota 2 at 2560x1600 with 2x supersampling, at around 40 and 30 fps respectively.
The image quality is beautiful, don't get me wrong, but even those cards in SLI aren't enough to push that many pixels.
With quad Titan Zs, you could probably do 4x supersampling - nearing the quality you'd get with Source Filmmaker, but in real time.
Personally I find that a higher frame rate in faster-paced games is far more important than having a diagonal line be perfectly smooth. In the middle of a game you'd never be concerned with how perfect the image looks.
But going from nearly perfectly smooth lines to perfectly smooth lines shouldn't be a profitable market. Nobody will notice the difference during any real gameplay. You can already do great anti-aliasing, so why would someone pay $1000+ more for a 10% (or smaller) gain?
I think the main purpose it serves for Nvidia is branding. This gets plastered all over the tech news and everyone knows Nvidia makes the fastest GPU. People then associate them with better technology (for better or worse).
Why would someone buy it? There are a lot of people who want the very best. It's the same reason people buy an Aston Martin over a Toyota.
From a more practical perspective though, there's also future proofing. This card will be able to deliver high frame rates for a few generations of games. People who don't want to regularly rebuild their rigs might prefer this card.
However, there is an extension of "multi monitor" that's probably going to become a big thing: VR gaming. VR gaming demands both more pixels and a consistent, high framerate, and could percolate more quickly down from "most hardcore of the hardcore" to "many people who enjoy gaming own one" than 4K displays or huge monitor stacks have.
This particular GPU is a multi-GPU single-card solution which is probably worse for VR, because alternate-frame rendering adds latency and can result in uneven frame timings especially when two sequential frames have dramatically different complexity.
Yes, and thus the cards are always racing one another.
If the cards display frames as they're available, they end up with uneven display times, which is a quite jarring visual effect. If they VSync they end up with the standard VSync issues where occasional slow frames are difficult to handle. Both are undesirable traits for VR. An output buffer naturally adds latency, which is also undesirable.
AMD cards in Crossfire used to be notorious for this effect, but they've since added an adaptive frame-rate limiter (forcing fast frames to delay, like VSync but not bound to the display's refresh) which helps.
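The limiter idea itself is simple. A toy sketch in Python (my own illustration of "hold fast frames back", not AMD's actual implementation):

    import time

    def pace_frames(render_frame, present, target_fps=60):
        # Hold back frames that finish early so they're presented at an
        # even cadence instead of "racing" out back-to-back.
        frame_time = 1.0 / target_fps
        deadline = time.monotonic()
        while True:
            frame = render_frame()                  # may be fast or slow
            deadline += frame_time
            slack = deadline - time.monotonic()
            if slack > 0:
                time.sleep(slack)                   # fast frame: delay it
            else:
                deadline = time.monotonic()         # slow frame: resync rather than spiral
            present(frame)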
Wow, I bet the cost/benefit ratio on this one is not so great for most applications. Just wait a year until a slightly pared-down version is available for 10% of the price.
IMHO the only reason you'd want this is to save on a PCIe slot when you absolutely need the double-precision floating point operations to run at full speed.
Pro video cameras record beyond 5K - e.g. the RED Dragon does 6K. Shortly (probably at NAB this year) we'll be seeing theater projectors and studio monitors at that resolution.
It will be interesting to see which human computing endeavor first crosses 10-20 exaflops - the estimated computing power of the brain. 500 exaflops would be enough to emulate the brain.