Vigile writes "When NVIDIA released the GTX Titan in February, it was the first consumer graphics card to use the GK110 GPU from NVIDIA that included 2,688 CUDA cores / shaders and an impressive 6GB of GDDR5 frame buffer. However, it also had a $1000 price tag that was the limiting specification for most gamers. With today's release of the GeForce GTX 780 they are hoping to utilize more of the GK110 silicon they are getting from TSMC while offering a lower cost version with performance within spitting range. The GTX 780 uses the same chip but disables a handful more compute units to bring the shader count down to 2,304 — still an impressive bump over the 1,536 of the GTX 680. The 384-bit memory bus remains though the frame buffer is cut in half to 3GB. Overall, the performance of the new card sits squarely between the GTX Titan ($1000) and AMD's Radeon HD 7970 GHz Edition ($439), just like its price. The question is, are PC gamers willing to shell out $220+ dollars MORE than the HD 7970 for somewhere in the range of 15-25% more performance?" As you might guess, there's similarly spec-laden coverage at lots of other sites, including Tom's, ExtremeTech, and TechReport. HotHardware, too.
MojoKid writes "AMD is announcing its Radeon HD 8970M. The mobile GPU is based on a design that has a few small feature changes that have led it to be unofficially labeled a Graphics Core Next (GCN) 1.1 part versus AMD's previous gen GCN 1.0 technology. AMD claims that the Radeon HD 8970M is significantly faster than NVIDIA's GeForce GTX 680M in a variety of tests, but high-end laptops that use AMD hardware are harder to find these days."
MojoKid writes "For the past decade, AMD and Intel have been racing each other to incorporate more components into the CPU die. Memory controllers, integrated GPUs, northbridges, and southbridges have all moved closer to a single package, known as SoCs (system-on-a-chip). Now, with Haswell, Intel is set to integrate another important piece of circuitry. When it launches next month, Haswell will be the first x86 CPU to include an on-die voltage regulator module, or VRM. Haswell incorporates a refined VRM on-die that allows for multiple voltage rails and controls voltage for the CPU, on-die GPU, system I/O, integrated memory controller, as well as several other functions. Intel refers to this as a FIVR (Fully Integrated Voltage Regulator), and it apparently eliminates voltage ripple and is significantly more efficient than your traditional motherboard VRM. Added bonus? It's 1/50th the size." Update: 05/14 01:22 GMT by U L : Reader AdamHaun comments: "They already have a test chip that they used to power a ~90W Xeon E7330 for four hours while it ran Linpack. ... Voltage ripple is less than 2mV. Peak efficiency per cell looks like ~76% at 8A. They claim hitting 82% would be easy..." and links to a presentation on the integrated VRM (PDF).
An anonymous reader writes "In a 15-way graphics card comparison on Linux of both the open and closed-source drivers, it was found that the open-source AMD Linux graphics driver is much faster than the open-source NVIDIA driver on Ubuntu 13.04. The open-source NVIDIA driver is developed entirely by the community via reverse-engineering, but for Linux desktop users, is this enough? The big issue for the open-source 'Nouveau' driver is that it doesn't yet fully support re-clocking the graphics processor so that the hardware can actually run at its rated speeds. With the closed-source AMD Radeon and NVIDIA GeForce results, the drivers were substantially faster than their respective open-source driver. Between NVIDIA and AMD on Linux, the NVIDIA closed-source driver was generally doing better than AMD Catalyst."
Vigile writes "One of the drawbacks to high end graphics has been the lack of low cost and massively-available displays with a resolution higher than 1920x1080. Yes, 25x16/25x14 panels are coming down in price, but it might be the influx of 4K monitors that makes a splash. PC Perspective purchased a 4K TV for under $1500 recently and set to benchmarking high end graphics cards from AMD and NVIDIA at 3840x2160. For under $500, the Radeon HD 7970 provided the best experience, though the GTX Titan was the most powerful single GPU option. At the $1000 price point the GeForce GTX 690 appears to be the card to beat with AMD's continuing problems on CrossFire scaling. PC Perspective has also included YouTube and downloadable 4K video files (~100 mbps) as well as screenshots, in addition to a full suite of benchmarks."
crookedvulture writes "AMD has revealed more details about the unified memory architecture of its next-generation Kaveri APU. The chip's CPU and GPU components will have a shared address space and will also share both physical and virtual memory. GPU compute applications should be able to share data between the processor's CPU cores and graphics ALUs, and the caches on those components will be fully coherent. This so-called heterogeneous uniform memory access, or hUMA, supports configurations with either DDR3 or GDDR5 memory. It's also based entirely in hardware and should work with any operating system. Kaveri is due later this year and will also have updated Steamroller CPU cores and a GPU based on the current Graphics Core Next architecture." bigwophh links to Hot Hardware's take on the story, and writes "AMD claims that programming for hUMA-enabled platforms should ease software development and potentially lower development costs as well. The technology is supported by mainstream programming languages like Python, C++, and Java, and should allow developers to more simply code for a particular compute resource with no need for special APIs."
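To make the zero-copy idea concrete, here's a minimal Python sketch of the difference between the two memory models. The gpu_kernel_scale helper and the explicit copy steps are stand-ins invented for illustration, not a real AMD or HSA API:

```python
# Hypothetical sketch of what hUMA's shared, coherent address space means
# for GPU compute code. The gpu_kernel_scale helper and the explicit copy
# steps are stand-ins invented for illustration -- not a real AMD/HSA API.

def gpu_kernel_scale(buf, factor):
    """Stand-in for a GPU kernel. Under hUMA it would walk the very same
    virtual addresses the CPU uses, with caches kept coherent in hardware."""
    for i in range(len(buf)):
        buf[i] *= factor

def run_discrete(data, factor):
    """Traditional discrete-GPU model: stage data into a separate device
    address space, run the kernel, then copy the results back."""
    device_copy = list(data)        # CPU -> GPU transfer over the bus
    gpu_kernel_scale(device_copy, factor)
    data[:] = device_copy           # GPU -> CPU transfer back
    return data

def run_huma(data, factor):
    """hUMA model: CPU and GPU share one coherent address space, so the
    kernel operates on the original buffer in place -- zero copies."""
    gpu_kernel_scale(data, factor)  # pass the CPU's buffer directly
    return data

buf = list(range(8))
print(run_discrete(list(buf), 2))   # [0, 2, 4, 6, 8, 10, 12, 14]
print(run_huma(list(buf), 2))       # same result, no staging copies
```

Eliminating those staging copies is the whole pitch: the GPU can chase pointers through CPU data structures directly, which is what lets ordinary languages target it without special memory-management APIs.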
Indiana University has replaced its supercomputer, Big Red, with a new system predictably named Big Red II. At the dedication, HPC scientist Paul Messina said: "It's important that this is a university-owned resource. ... Here you have the opportunity to have your own faculty, staff and students get access with very little difficulty to this wonderful resource." From the article: "Big Red II is a Cray-built machine, which uses both GPU-enabled and standard CPU compute nodes to deliver a petaflop -- or 1 quadrillion floating-point operations per second -- of max performance. Each of the 344 CPU nodes uses two 16-core AMD Abu Dhabi processors, while the 676 GPU nodes use one 16-core AMD Interlagos and one NVIDIA Kepler K20."
An anonymous reader writes "Today AMD has officially unveiled its long-awaited dual-GPU Tahiti-based card. Codenamed Malta, the $1,000 Radeon HD 7990 is positioned directly against Nvidia's dual-GPU GeForce GTX 690. Tom's Hardware posted the performance data. Because Fraps measures data at a stage in the pipeline before what is actually seen on-screen, they employed Nvidia's FCAT (Frame Capture Analysis Tools). ... The 690 is beating AMD's new flagship in six out of eight titles. ... AMD is bundling eight titles with every 7990, including: BioShock Infinite, Tomb Raider, Crysis 3, Far Cry 3, Far Cry 3: Blood Dragon, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution." OpenGL performance doesn't seem too off from the competing Nvidia card, but the 7990 dominates when using OpenCL. Power management looks decent: ~375W at full load, but a nice 20W at idle (it can turn the second chip off entirely when unneeded). PC Perspective claims there are issues with Crossfire and an un-synchronized rendering pipeline that leads to a slight decrease in the actual frame rate, but that should be fixed by an updated Catalyst this summer.
illiteratehack writes "10 years ago, AMD released its first Opteron processor, the first 64-bit x86 chip. The firm's 64-bit 'extensions' allowed the chip to run existing 32-bit x86 code, in a bid to avoid the problems faced by Intel's Itanium processor. However, AMD suffered from a lack of native 64-bit software support, with the delayed arrival of Microsoft's Windows XP 64-bit edition severely hampering its adoption in the workstation market." But it worked out in the end.
mikejuk writes "This is a strange story. AMD Vice President of Global Channel Sales Roy Taylor has said there will be no DirectX12 at any time in the future. In an interview with German magazine Heise.de, Taylor discussed the new trend for graphics card manufacturers to release top quality game bundles registered to the serial number of the card. One of the reasons for this, he said, is that the DirectX update cycle is no longer driving the market. 'There will be no DirectX 12. That's it.' (Google translation of German original.) Last January there was another hint that things weren't fine with DirectX when Microsoft sent an email to its MVPs saying, 'DirectX is no longer evolving as a technology.' That statement was quickly corrected, but without mentioning any prospect of DirectX 12. So, is this just another error or rumor? Can we dismiss something AMD is basing its future strategy on?"
An anonymous reader writes "Years of desire by AMD Linux users to have open source video playback support by their graphics driver is now over. AMD has released open-source UVD support for their Linux driver so users can have hardware-accelerated video playback of H.264, VC-1, and MPEG video formats. UVD support on years old graphics cards was delayed because AMD feared open-source support could kill their Digital Rights Management abilities for other platforms."
MojoKid writes "Every year, AMD and NVIDIA re-brand their GPU product lines, regardless of whether the underlying hardware has changed. This annual maneuver is a sop to OEMs, who like yearly refreshes and higher numbers. The big introduction NVIDIA is making this year is what it calls GPU Boost 2.0. When NVIDIA launched the GTX Titan in February, it discussed a new iteration of GPU Boost technology that measured GPU temperature rather than estimating TDP. This new approach gave NVIDIA finer-grained control over clock speeds and thermal thresholds, thereby allowing for better dynamic overclocking. That technology is coming to the GeForce 700M mobile family. In notebooks, GPU Boost 2.0 is a combination of thermal and application monitoring. GPU Boost 2.0 is designed to reflect an important fact of 3D gaming — no two applications use the same amount of power. The variance can be significant, even within the same game. It's therefore possible for the GPU to adjust clocks dynamically in order to maximize frame rates. Put the two together, and NVIDIA believes it can substantially improve FPS speeds without compromising thermals or electrical safe operating margins."
MojoKid writes "AMD made a number of interesting announcements today at the Game Developers Conference, currently taking place in San Francisco. AMD revealed their 'Radeon Sky' series of graphics products targeted at cloud gaming and virtualized computing applications. The company also showed off the dual-GPU powered AMD Radeon HD 7990, and extended the 'Never Settle: Reloaded' gaming bundle program to include BioShock Infinite. AMD revealed three Radeon Sky Series cards, two based on the Tahiti GPU and another based on Pitcairn. The top of the line Radeon Sky 900 is powered by two Tahiti GPUs linked to 6GB of memory (3GB per GPU). The Sky 700 is powered by a single Tahiti GPU and the Sky 500 is based on Pitcairn. All of the cards are passively cooled and are designed for cloud gaming / computing servers. The upcoming high-end, consumer targeted Radeon HD 7990 was also previewed, but few details were given. Devon Nekechuk, Product Manager of AMD Graphics, did say the triple-fan setup was whisper quiet. We think it's safe to assume the card features 6GB of memory and clocks are in-line with current Radeon HD 7970 GHz Edition cards."
Nerval's Lobster writes "One could argue that the University of Illinois' 'Blue Waters' supercomputer, scheduled to officially open for business March 28, is lucky to be alive. The 11.6-petaflop supercomputer, commissioned by the University and the National Science Foundation (NSF), will rank in the upper echelon of the world's fastest machines—its compute power would place it third on the current list, just above Japan's K Computer. However, the system will not be submitted to the TOP500 list because of concerns with the way the list is calculated, officials said. University officials and the NSF are lucky to have a machine at all. That's due in part to IBM, which reportedly backed out of the contract when the company determined that it couldn't make a profit. The university then turned to Cray, which replaced what was presumably to be a POWER- or Xeon-based installation with the current mix of AMD CPUs and Nvidia GPU coprocessors. Allen Blatecky, director of NSF's Division of Advanced Cyberinfrastructure, told Fox that pulling the plug was a 'real possibility.' And Cray itself had to work to find the parts necessary for the supercomputer to begin at least trial operations in the fall of 2012."
MojoKid writes "AMD has just announced a new family of Elite A-Series APUs for mobile applications, based on the architecture codenamed 'Richland.' These new APUs build upon last year's 'Trinity' architecture, by improving graphics and compute performance, enhancing power efficiency through the implementation of a new 'Hybrid Boost' mode which leverages on-die thermal sensors, and offering AMD-optimized applications meant to improve the user experience. AMD is unveiling a new visual identity as well, with updated logos and clearer language, in a bid to enhance the brand. At the top of the product stack now is the AMD A10-5750M, a 35 Watt, 3.5GHz quad-core processor with integrated Radeon HD 8650G graphics, 4MB of L2 cache and a DDR3-1866 capable memory interface. The low-end is comprised of dual-cores with Radeon HD 8400G series GPUs and a DDR3-1600 memory interface."
New submitter Dputiger writes "Nvidia's latest GTX Titan puts a renewed focus on multi-monitor gaming, but how does it compare against other cards at half the price? 'The games we tested fall into two general camps. Arkham City, DiRT 3, and Serious Sam: BFE are all absolutely playable on the GTX 680 or 7970 in a single-card configuration, even with detail settings turned all the way up. Shogun 2, Metro 2033, and Crysis 3 aren’t. In Shogun 2 and Metro 2033, however, the Titan maintains a playable frame rate at High Detail when the other two cards are stumbling and stuttering. Crysis 3 was the one exception — in that game, all three cards remained playable at High Detail, and dropped below that mark once we increased to Very High Detail and added 4x SMAA.' Field of view adjustments, the impact of bezels, and single-card performance at multiple detail levels are all covered, as is the price of multi-screen setups."
Vigile writes "A big shift in the way graphics cards and gaming performance are tested has been occurring over the last few months, with many review sites now using frame times rather than just average frame rates to compare products. Another unique testing methodology called Frame Rating has been started by PC Perspective that uses video capture equipment capable of recording uncompressed high resolution output direct from the graphics card, a colored bar overlay system and post-processing on that recorded video to evaluate performance as it is seen by the end user. The benefit is that there is literally no software interference between the data points and what the user sees, making it is as close to an 'experience metric' as any developed. Interestingly, multi-GPU solutions like SLI and CrossFire have very different results when viewed in this light, with AMD's offering clearly presenting a poorer, and more stuttery, animation."
Vigile writes "NVIDIA's new GeForce GTX TITAN graphics card is being announced today and is utilizing the GK110 GPU first announced in May of 2012 for HPC and supercomputing markets. The GPU touts computing horsepower at 4.5 TFLOPS provided by the 2,688 single precision cores, 896 double precision cores, a 384-bit memory bus and 6GB of on-board memory doubling the included frame buffer that AMD's Radeon HD 7970 uses. With a make up of 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology! The GTX TITAN introduces a new GPU Boost revision based on real-time temperature monitoring and support for monitor refresh rate overclocking that will entice gamers and with a $999 price tag, the card could be one of the best GPGPU options on the market." HotHardware says the card "will easily be the most powerful single-GPU powered graphics card available when it ships, with relatively quiet operation and lower power consumption than the previous generation GeForce GTX 690 dual-GPU card."
An anonymous reader writes "The Khronos Group has published the first products that are officially conformant to OpenGL ES 3.0. On that list is the Intel Ivy Bridge processors with integrated graphics, which support OpenGL ES 3.0 on open-source Linux Mesa. This is the best timing yet for Intel's open-source team to support a new OpenGL standard — the standard is just six months old whereas it took years for them to support OpenGL ES 2.0. There's also no OpenGL ES 3.0 Intel Windows driver yet that's conformant. Intel also had a faster turn-around time than NVIDIA and AMD with the only other hardware on the list being Qualcomm and PowerVR hardware. OpenGL ES 3.0 works with Intel Ivy Bridge when using the Linux 3.6 kernel and the soon-to-be-out Mesa 9.1." Phoronix ran a rundown of what OpenGL ES 3.0 brings back in August.
MojoKid writes "AMD has yet to make an official statement on this topic, but several unofficial remarks and leaks point in the same direction. Contrary to rumor, there won't be a new GCN 2.0 GPU out this spring to head up the Radeon HD 8000 family. This breaks with a pattern AMD has followed for nearly six years. AMD recently refreshed its mobile product lines with HD 8000M hardware, replacing some old 40nm parts with new 28nm GPUs based on GCN (Graphics Core Next). In desktop, it's a different story. AMD is already shipping 'Radeon HD 8000' cards to OEMs, but these cards are based on HD 7000 cores with new model numbers. RAM, TDP, core counts, and architectural features are all identical to the HD 7000 lineup. GPU rebadges are nothing new, but this is the first time in at least six years that AMD has rebadged the top end of a product line. Obviously any delay in a cutthroat market against Nvidia is a non-optimal situation, but consider the problem from AMD's point of view. We know AMD built the GPU inside Wii U. It's also widely rumored to have designed the CPU and GPU for the Xbox Durango and possibly both of those components for the PS4 as well. It's possible, if not likely, that the company has opted to focus on the technologies most vital to its survival over the next 12 months." Maybe the Free GNU/Linux drivers will be ready at launch after all.