GeForce 900 series
Nvidia GeForce GTX 980 Ti Founders Edition
Release date: September 18, 2014
Codename: GM20x
Architecture: Maxwell
Models: GeForce series
  • GeForce GT series
  • GeForce GTX series
Transistors: 2.94B (GM206)
  • 5.2B (GM204)
  • 8.0B (GM200)
Fabrication process: TSMC 28 nm
Cards
Mid-range
  • GeForce GTX 950
  • GeForce GTX 960
High-end
  • GeForce GTX 970
  • GeForce GTX 980
Enthusiast
  • GeForce GTX 980 Ti
  • GeForce GTX TITAN X
API support
DirectX: Direct3D 12 (feature level 12_1),[2][3][4][5] Shader Model 6.7
OpenCL: OpenCL 3.0[a]
OpenGL: OpenGL 4.6
Vulkan: Vulkan 1.3,[1] SPIR-V
History
Predecessor: GeForce 700 series
Successor: GeForce 10 series
Support status: Maxwell fully supported

The GeForce 900 series is a family of graphics processing units developed by Nvidia, succeeding the GeForce 700 series and serving as the high-end introduction to the Maxwell microarchitecture, named after James Clerk Maxwell. They are produced with TSMC's 28 nm process.

With Maxwell, the successor to Kepler, Nvidia expected three major outcomes: improved graphics capabilities, simplified programming, and better energy efficiency compared to the GeForce 700 series and GeForce 600 series.[6]

Maxwell was announced in September 2010,[7] with the first Maxwell-based GeForce consumer-class products released in early 2014.[8]

Architecture


First generation Maxwell (GM10x)


First generation Maxwell GM107/GM108 chips were released as the GeForce GTX 745, GTX 750/750 Ti and GTX 850M/860M (GM107) and GT 830M/840M (GM108). These new chips provide few additional consumer-facing features; Nvidia instead focused on power efficiency. Nvidia increased the amount of L2 cache from 256 KiB on GK107 to 2 MiB on GM107, reducing the memory bandwidth needed. Accordingly, Nvidia cut the memory bus from 192 bit on GK106 to 128 bit on GM107, further saving power.[9]

Nvidia also changed the streaming multiprocessor design from that of Kepler (SMX), naming the new design SMM. The structure of the warp scheduler is inherited from Kepler, which allows each scheduler to issue up to two instructions that are independent from each other and in order from the same warp. The SMM is partitioned so that each of its 4 warp schedulers controls 1 set of 32 FP32 CUDA cores, 1 set of 8 load/store units, and 1 set of 8 special function units. This is in contrast to Kepler, where each SMX has 4 schedulers that schedule to a shared pool of 6 sets of 32 FP32 CUDA cores, 2 sets of 16 load/store units, and 2 sets of 16 special function units.[10] In Kepler these units are connected by a crossbar that uses power to allow the resources to be shared;[10] this crossbar is removed in Maxwell.[10] Texture units and FP64 CUDA cores are still shared.[9]

SMM allows for a finer-grain allocation of resources than SMX, saving power when the workload is not optimal for shared resources. Nvidia claims a 128 CUDA core SMM has 86% of the performance of a 192 CUDA core SMX.[9] Also, each Graphics Processing Cluster (GPC) contains up to 4 SMX units in Kepler, and up to 5 SMM units in first generation Maxwell.[9]
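Read another way (an inference from the quoted 86% figure, not a calculation given in the cited sources), an SMM reaches that throughput with only two-thirds of the CUDA cores of an SMX, implying roughly 29% more throughput per core:

$$\frac{0.86 \times 192}{128} \approx 1.29$$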

GM107 supports CUDA Compute Capability 5.0 compared to 3.5 on GK110/GK208 GPUs and 3.0 on GK10x GPUs. Dynamic Parallelism and HyperQ, two features in GK110/GK208 GPUs, are also supported across the entire Maxwell product line.
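The compute capability of an installed GPU can be read back through the standard CUDA runtime API; the following is a minimal host-side sketch (illustrative, not taken from the cited sources). A GM107-based board such as the GTX 750 Ti reports 5.0, while GK110/GK208-based boards report 3.5.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Print the compute capability of every CUDA device in the system.
// Maxwell GM107/GM108 parts report 5.0; second-generation GM20x parts report 5.2.
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA device found\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```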

Maxwell provides native shared memory atomic operations for 32-bit integers and native shared memory 32-bit and 64-bit compare-and-swap (CAS), which can be used to implement other atomic functions.
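As an illustration of building other atomic operations out of the native compare-and-swap, the device-side sketch below emulates an atomic floating-point maximum on shared memory using 32-bit CAS (a hypothetical example following the usual CAS retry loop, not code from the cited sources):

```cuda
#include <cfloat>

// Emulate an atomic float max on shared memory using the native 32-bit
// compare-and-swap. The float is reinterpreted as an int so atomicCAS can
// operate on it; the loop retries until no other thread has changed the
// value between the read and the swap.
__device__ float atomicMaxFloatShared(float* addr, float value) {
    int* addr_as_int = reinterpret_cast<int*>(addr);
    int old = *addr_as_int;
    int assumed;
    do {
        assumed = old;
        if (__int_as_float(assumed) >= value)
            break;                                   // already large enough
        old = atomicCAS(addr_as_int, assumed,
                        __float_as_int(value));      // native shared-memory CAS
    } while (assumed != old);
    return __int_as_float(old);
}

// Per-block reduction that uses the emulated atomic on a shared variable.
__global__ void blockMax(const float* in, float* out, int n) {
    __shared__ float s_max;
    if (threadIdx.x == 0) s_max = -FLT_MAX;
    __syncthreads();
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        atomicMaxFloatShared(&s_max, in[i]);
    __syncthreads();
    if (threadIdx.x == 0) out[blockIdx.x] = s_max;
}
```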

While it was once thought that Maxwell used tile-based immediate mode rasterization,[11] Nvidia corrected this at GDC 2017, saying that Maxwell instead uses Tile Caching.[12]

NVENC


Maxwell-based GPUs also contain the NVENC SIP block introduced with Kepler. Nvidia's video encoder, NVENC, is 1.5 to 2 times faster than on Kepler-based GPUs, meaning it can encode video at 6 to 8 times playback speed.[9]

PureVideo


Nvidia also claims an 8 to 10 times performance increase in PureVideo Feature Set E video decoding, due to the video decoder cache paired with increases in memory efficiency. However, H.265 is not supported for full hardware decoding, relying instead on a mix of hardware and software decoding.[9] When decoding video, a new low-power state, "GC5", is used on Maxwell GPUs to conserve power.[9]

Second generation Maxwell (GM20x)


Second generation Maxwell introduced several new technologies: Dynamic Super Resolution,[13] Third Generation Delta Color Compression,[14] Multi-Pixel Programming Sampling,[15] Nvidia VXGI (Real-Time-Voxel-Global Illumination),[16] VR Direct,[17][18][19] Multi-Projection Acceleration,[14] and Multi-Frame Sampled Anti-Aliasing (MFAA)[20] (however support for Coverage-Sampling Anti-Aliasing (CSAA) was removed).[21] HDMI 2.0 support was also added.[22][23]

Second generation Maxwell also changed the ROP to memory controller ratio from 8:1 to 16:1.[24] However, some of the ROPs are generally idle in the GTX 970 because there are not enough enabled SMMs to give them work to do, which reduces its maximum fill rate.[25]

Second generation Maxwell also upgraded NVENC to support HEVC encoding and added support for H.264 encoding at 1440p/60 FPS and 4K/60 FPS, whereas NVENC on first generation Maxwell GM10x GPUs only supported H.264 encoding up to 1080p/60 FPS.[19]

The Maxwell GM206 GPU supports full fixed-function HEVC hardware decoding.[26][27]

Advertising controversy


GTX 970 hardware specifications


Issues with the GeForce GTX 970's specifications were first brought up by users when they found that the cards, while featuring 4 GB of memory, rarely accessed memory above the 3.5 GB boundary. Further testing and investigation eventually led Nvidia to issue a statement that the card's initially announced specifications had been altered without notice before the card was made commercially available, and that the card took a performance hit once memory above the 3.5 GB limit was put into use.[28][29][30]

The card's back-end hardware specifications, initially announced as being identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GeForce GTX 980) and the number of ROPs (56 versus 64 in the 980). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section plus a 0.5 GB one, with access to the latter being 7 times slower than to the former.[31] The company then promised a specific driver modification to alleviate the performance issues caused by these cutbacks,[32] but later clarified that the promise had been a miscommunication and that there would be no specific driver update for the GTX 970.[33] Nvidia said that it would assist customers who wanted refunds in obtaining them.[34] On February 26, 2015, Nvidia CEO Jen-Hsun Huang went on record in Nvidia's official blog to apologize for the incident.[35] In February 2015, a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California.[36][37]

Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers.[38] This comes at the cost of dividing the memory bus into a high speed and a low speed segment that cannot be accessed at the same time, unless one segment is reading while the other is writing, because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the two GDDR5 controllers and itself.[38] This arrangement is used in the GeForce GTX 970, which can therefore be described as having 3.5 GB in its high speed segment on a 224-bit bus and 0.5 GB in a low speed segment on a 32-bit bus.[38]
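Using the 7000 MT/s (7 GT/s) GDDR5 data rate listed in the chipset table below, the approximate bandwidth of each segment can be checked with a short calculation (an illustrative back-of-the-envelope figure, not one given in the sources; the table lists 196.3 GB/s and 28.0 GB/s):

$$\text{one 32-bit lane: } 7\ \text{GT/s} \times \frac{32\ \text{bit}}{8\ \text{bit/byte}} = 28\ \text{GB/s}$$
$$\text{3.5 GB segment (224-bit, 7 lanes): } 7 \times 28 = 196\ \text{GB/s}, \qquad \text{0.5 GB segment (32-bit, 1 lane): } 28\ \text{GB/s}$$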

On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class action lawsuit,[36] offering a $30 refund on GTX 970 purchases. The agreed-upon refund represents the portion of the card's cost corresponding to the storage and performance capabilities that consumers assumed they were obtaining when they purchased it.[39]

Async compute support


While the Maxwell series was marketed as fully DirectX 12 compliant,[3][40][41] Oxide Games, developer of Ashes of the Singularity, found that Maxwell-based cards do not perform well when async compute is utilized.[42][43][44][40]

It appears that while this core feature is in fact exposed by the driver,[45] Nvidia partially implemented it through a driver-based shim, at a high performance cost.[44] Unlike AMD's competing GCN-based graphics cards, which include a full implementation of hardware-based asynchronous compute,[46][47] Nvidia planned to rely on the driver to implement a software queue and a software distributor to forward asynchronous tasks to the hardware schedulers, which are capable of distributing the workload to the correct units.[48] Asynchronous compute on Maxwell therefore requires that both the game and the GPU driver be specifically coded for it in order to enable the capability.[49] The 3DMark Time Spy benchmark shows no noticeable performance difference between asynchronous compute being enabled or disabled.[49] Asynchronous compute is disabled by the driver for Maxwell.[49]

Oxide claims that this led to Nvidia pressuring them not to include the asynchronous compute feature in their benchmark at all, so that the 900 series would not be at a disadvantage against AMD's products which implement asynchronous compute in hardware.[43]

Maxwell requires that the GPU be statically partitioned for asynchronous compute to allow tasks to run concurrently.[50] Each partition is assigned to a hardware queue. If any of the queues assigned to a partition empties out or is unable to submit work for any reason (e.g. a task in the queue must be delayed until a hazard is resolved), the partition and all of the resources in that partition reserved for that queue will idle.[50] Asynchronous compute can therefore easily hurt performance on Maxwell if software is not coded to work with Maxwell's static scheduler.[50] Furthermore, graphics tasks saturate Nvidia GPUs much more easily than they do AMD's GCN-based GPUs, which are much more heavily weighted towards compute, so Nvidia GPUs have fewer scheduling holes that could be filled by asynchronous compute.[50] For these reasons, the driver forces a Maxwell GPU to place all tasks into one queue and execute each task serially, giving each task the undivided resources of the GPU whether or not the task can saturate it.[50]

Products


GeForce 900M (9xxM) series


Some implementations may use different specifications.

Model Launch Code name Fab (nm) Transistors (million) Die size (mm2) Bus interface Core config[b] Clock speeds Fillrate Memory API support (version) Processing power (GFLOPS) TDP (watts) SLI support[c]
Base core clock (MHz) Boost core clock (MHz) Memory (MT/s) Pixel (GP/s)[d] Texture (GT/s)[e] Size (MiB) Bandwidth (GB/s) Type Bus width (bit) DirectX OpenGL OpenCL Vulkan Single precision[f] Double precision[g]
GeForce 910M[53][54][55] Aug 18, 2015 GF117[h] 28 585 116 PCIe 3.0 x8 96:16:8 775 1550 1800 3.1 12.4 1024 14.4 GDDR3 64 12.0 (11_0)[2][5] 4.6 1.1 297.6 1/12 of SP 33 No
March 15, 2015 GK208 Unknown 87 384:16:8 575 575 5.13 9.2 2048 1.2 1.1 441.6 18.4
GeForce 920M[56][57][58] March 13, 2015 GF117[h] 585 116 96:16:8 775 1550 3.1 12.4 1024 1.1 297.6 1/12 of SP
GK208 Unknown 87 384:32:16 954 954 7.6 30.5 2048 1.2 1.1 732.7 22.9
GeForce 920MX[59][60] March 2016 GM108[i] 1870 148 256:24:8 1072 1176 8.58 25.7 2048 DDR3 GDDR5 549 1/32 of SP 16
GeForce 930M[61][62] March 13, 2015 384:24:8 928 941 7.4 22.3 2048 DDR3 712.7 22.3 33
GeForce 930MX[63][64] March 1, 2016 Unknown Unknown PCIe 3.0 x8 384:24:8 952 1020 2000 Unknown Unknown 2048 Unknown DDR3 GDDR5 Unknown Unknown Unknown Unknown Unknown Unknown
GeForce 940M[65][66][67] March 13, 2015 GM107 1870 148 PCIe 3.0 x16 640:40:16 1029 1100 2002 16.5 41.2 2048 16 - 80.2 GDDR5 DDR3 128 1.2 1.1 1317 41.1 75 No
GM108[i] Unknown Unknown PCIe 3.0 x8 384:24:8 8.2 24.7 64 790.3 24.7 33
GeForce 940MX[68][69] March 10, 2016 1870 148 384:24:8 1122 1242 8.98 26.93 2048
4096
16.02 (DDR3)
40.1 (GDDR5)
861.7 Unknown 23
GeForce 945M[70][71][72] 2015 GM107 ? 640:40:16 1029 1085 ? 16.46 41.2 ? ? DDR3 GDDR5 128 1,317.1 ? 75 ?
GM108[i] ? ? PCIe 3.0 x8 384:24:8 1122 1242 8.98 26.93 64 861.7 23
GeForce GT 945A[73][74] March 13, 2015 Unknown Unknown 384:24:8 1072 1176 1800 8.58 25.73 2048 14.4 DDR3 Unknown Unknown Unknown 33 Unknown
GeForce GTX 950M[75][76] March 13, 2015 GM107 1870 148 PCIe 3.0 x16 640:40:16 914 1085 5012 14.6 36.6 2048 (GDDR5)
4096 (DDR3)
80 (GDDR5)
32 (DDR3)
DDR3 GDDR5 128 1.2[77] 1.1 1170 36.56 75 No
GeForce GTX 960M[78][79] 640:40:16 1029 1085 16.5 41.2 2048
4096
80 GDDR5 1317 41.16 65
GeForce GTX 965M[80][81] January 5, 2015 GM204 5200 398 1024:64:32 924 950 5000 30.2 60.4 12.0 (12_1)[2][5] 1945 60.78 60 Yes
GeForce GTX 970M[82] October 7, 2014 1280:80:48 924 993 5012 37.0 73.9 3072
6144
120 192 2365 73.9 75
GeForce GTX 980M[83] 1536:96:64 1038 1127 49.8 99.6 4096
8192
160 256[84] 3189 99.6 100
  1. ^ In OpenCL 3.0, OpenCL 1.2 functionality has become a mandatory baseline, while all OpenCL 2.x and OpenCL 3.0 features were made optional.
  2. ^ Shader Processors: Texture mapping units: Render output units
  3. ^ A maximum of 2 dual-GPU cards can be connected in tandem for a 4-way SLI configuration as dual-GPU cards feature on-board 2-way SLI.
  4. ^ Pixel fillrate is calculated as the lowest of three numbers: the number of ROPs multiplied by the base core clock speed; the number of rasterizers multiplied by the number of fragments they can generate per rasterizer, multiplied by the base core clock speed; and the number of streaming multiprocessors multiplied by the number of fragments per clock they can output, multiplied by the base clock rate.[25]
  5. ^ Texture fillrate is calculated as the number of TMUs multiplied by the base core clock speed.
  6. ^ Single precision performance is calculated as 2 times the number of shaders multiplied by the base core clock speed.
  7. ^ Double precision performance of the Maxwell chips is 1/32 of the single-precision performance.[51][52]
  8. ^ a b Lacks hardware video encoder
  9. ^ a b c Lacks hardware video encoder and decoder

Chipset table


GeForce 900 (9xx) series

Model Launch Code name Process Transistors (billion) Die size (mm2) Core config[a] Bus interface L2 Cache
(MB)
Clock Speeds Memory Fillrate[b] Processing power (GFLOPS)[b][c] TDP (Watts) SLI support Release price (USD)
Base (MHz) Boost (MHz) Memory (MT/s) Size (GB) Bandwidth (GB/s) Bus type Bus width (bit) Pixel (GP/s)[d] Texture (GT/s)[e] Single precision Double precision MSRP
GeForce GT 945A[85][86][87] February 2016 GM108 TSMC
28HP
Unknown Unknown 512:24:8 (4) PCIe 3.0 x8 ? 1072 1176 1800 1 / 2 14.4 DDR3 / GDDR5 64 8.5
9.4
25.7
28.2
1,097.7
1,204.2
34.3
37.6
33 No OEM
GeForce GTX 950[88] August 20, 2015 GM206-250 2.94 227 768:48:32 (6) PCIe 3.0 x16 1 1024 1188 6600 2 105.7 GDDR5 128 32.7
38.0
49.1
57.0
1,572.8
1,824.7
49.1
57.0
90 (75[f]) 2-way SLI $159
GeForce GTX 950 (OEM)[90] Unknown GM206 1024:64:32 (8) 935 Unknown 5000 80.0 29.9
 
59.8
 
1,914.9
59.8
 
Unknown OEM
GeForce GTX 960[91] January 22, 2015 GM206-300 1127 1178 7000 2
4[g]
112.1 36.0
37.6
72.1
75.3
2,308.0
2,412.5
72.1
75.3
120 $199
GeForce GTX 960 (OEM)[93] Unknown GM204 5.2 398 1280:80:48 (10) 924 Unknown 5000 3 120.0 192 44.3
 
73.9
 
2,365.4
73.9
 
Unknown OEM
GeForce GTX 970[94] September 18, 2014 GM204-200 1664:104:56 (13) 1.75 1050 1178 7000 3.5 +
0.5[h]
196.3 +
28.0[h]
224 +
32[h]
58.8
65.9
109.2
122.5
3,494.4
3,920.3
109.2
122.5
145 4-way SLI $329
GeForce GTX 980[96] September 18, 2014 GM204-400 2048:128:64 (16) 2 1126 1216 4 224.3 256 72.0
77.8
144.1
155.6
4,612.0
4,980.7
144.1
155.6
165 $549
GeForce GTX 980 Ti[97] June 1, 2015 GM200-310 8 601 2816:176:96 (22) 3 1000 1075 6 336.5 384 96.0
103.2
176.0
189.2
5,632.0
6,054.4
176.0
189.2
250 $649
GeForce GTX TITAN X[98] March 17, 2015 GM200-400 3072:192:96 (24) 12 192.0
206.4
6,144.0
6,604.8
192.0
206.4
$999
  1. ^ Main shader processors: texture mapping units: render output units (streaming multiprocessors)
  2. ^ a b Base clock, Boost clock
  3. ^ To calculate the processing power see Maxwell (microarchitecture)#Performance; a worked example follows these notes.
  4. ^ Pixel fillrate is calculated as the number of ROPs multiplied by the respective core clock speed.
  5. ^ Texture fillrate is calculated as the number of TMUs multiplied by the respective core clock speed.
  6. ^ Some GTX 950 cards were released without a power connector, powered only by the PCIe slot. These had power consumption and TDP limited to 75 W.[89]
  7. ^ Some manufacturers produced 4 GB versions of the GTX 960. These were often criticized as a pointless move, since titles that could use that much VRAM and actually gain an advantage over the 2 GB versions would already run too slowly at those resolutions and settings, as the GTX 960 did not have enough compute power and memory bandwidth to handle them.[92]
  8. ^ a b c For accessing its memory, the GTX 970 stripes data across 7 of its 8 32-bit physical memory lanes, at 196 GB/s. The last 1/8 of its memory (0.5 GB on a 4 GB card) is accessed over a non-interleaved, solitary 32-bit connection at 28 GB/s, one seventh the speed of the rest of the memory space. Because this smaller memory pool uses the same connection as the 7th lane to the larger main pool, it contends with accesses to the larger block, reducing the effective memory bandwidth rather than adding to it as an independent connection could.[95]
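As a concrete illustration of the fillrate and processing-power formulas in the notes above (reproducing the table's GeForce GTX 980 figures up to rounding; not a calculation given in the cited sources), take the GTX 980 at its 1126 MHz base clock with 2048 shader processors, 128 TMUs and 64 ROPs:

$$\text{Pixel fillrate} = 64 \times 1.126\ \text{GHz} \approx 72.1\ \text{GP/s}$$
$$\text{Texture fillrate} = 128 \times 1.126\ \text{GHz} \approx 144.1\ \text{GT/s}$$
$$\text{FP32} = 2 \times 2048 \times 1.126\ \text{GHz} \approx 4{,}612\ \text{GFLOPS}, \qquad \text{FP64} = \tfrac{1}{32} \times 4{,}612 \approx 144.1\ \text{GFLOPS}$$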

Discontinued support


Driver 368.81 is the last driver to support Windows XP/Windows XP 64-bit.[citation needed]

32-bit drivers for 32-bit operating systems were discontinued after the release of driver 391.35 in March 2018.[99]

Notebook GPUs based on the Kepler architecture moved to legacy support in April 2019 and stopped receiving critical security updates after April 2020.[100][101] The Nvidia GeForce 910M and 920M from the 9xxM GPU family are affected by this change.

Nvidia announced that after the release of the 470 series drivers, it would transition driver support for the Windows 7 and Windows 8.1 operating systems to legacy status and continue to provide critical security updates for these operating systems through September 2024.[102]

See also


References

  1. ^ "Vulkan Driver Support". Nvidia. February 10, 2016. Retrieved April 25, 2018.
  2. ^ a b c Ryan Smith. "Maxwell 2's New Features: Direct3D 11.3 & VXGI - The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  3. ^ a b "Maxwell and DirectX 12 Delivered". The Official NVIDIA Blog.
  4. ^ "MSDN Blogs". msdn.com. Microsoft.
  5. ^ a b c Ryan Smith. "Microsoft Details Direct3D 11.3 & 12 New Rendering Features". anandtech.com.
  6. ^ "Nvidia: Next-Generation Maxwell Architecture Will Break New Grounds - X-bit labs". xbitlabs.com. Archived from the original on June 29, 2013.
  7. ^ Ryan Smith. "GTC 2010 Day 1: NVIDIA Announces Future GPU Families for 2011 And 2013". anandtech.com.
  8. ^ "GeForce GTX 750 Class GPUs: Serious Gaming, Incredible Value". geforce.com.
  9. ^ a b c d e f g Smith, Ryan; T S, Ganesh (February 18, 2014). "The NVIDIA GeForce GTX 750 Ti and GTX 750 Review: Maxwell Makes Its Move". AnandTech. Archived from the original on February 18, 2014. Retrieved February 18, 2014.
  10. ^ a b c Ryan Smith, Ganesh T S. "Maxwell: Designed For Energy Efficiency - The NVIDIA GeForce GTX 750 Ti and GTX 750 Review: Maxwell Makes Its Move". anandtech.com.
  11. ^ Kanter, David (August 1, 2016). "Tile-based Rasterization in Nvidia GPUs". Real World Technologies. Retrieved August 16, 2016.
  12. ^ Triolet, Damien (March 3, 2017). "GDC: Nvidia talks about Tile Caching by Maxwell and Pascal". Hardware.fr. Retrieved May 24, 2017.
  13. ^ "Dynamic Super Resolution Improves Your Games With 4K-Quality Graphics On HD Monitors". geforce.com.
  14. ^ a b "Whitepaper: NVIDIA GeForce GTX 980" (PDF). Archived from the original (PDF) on July 21, 2017. Retrieved September 20, 2014.
  15. ^ "NVIDIA - Maintenance". geforce.com.
  16. ^ "Maxwell's Voxel Global Illumination Technology Introduces Gamers To The Next Generation Of Graphics". geforce.com.
  17. ^ "NVIDIA Maxwell GPUs: The Best Graphics Cards For Virtual Reality Gaming". geforce.com.
  18. ^ "How Maxwell's VR Direct Brings Virtual Reality Gaming Closer to Reality". The Official NVIDIA Blog.
  19. ^ a b Ryan Smith. "Display Matters: HDMI 2.0, HEVC, & VR Direct - The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  20. ^ "Multi-Frame Sampled Anti-Aliasing Delivers Better Performance To Maxwell Gamers". geforce.com.
  21. ^ "New nVidia Maxwell chips do not support fast CSAA". realhardwarereviews.com. Archived from the original on May 7, 2019. Retrieved May 7, 2019.
  22. ^ "Introducing The Amazing New GeForce GTX 980 & 970". geforce.com.
  23. ^ Ryan Smith. "The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  24. ^ Ryan Smith. "Maxwell 2 Architecture: Introducing GM204 - The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". anandtech.com.
  25. ^ a b "Here's another reason the GeForce GTX 970 is slower than the GTX 980". techreport.com. October 2014.
  26. ^ Ryan Smith. "NVIDIA Launches GeForce GTX 960". anandtech.com.
  27. ^ Ryan Smith. "NVIDIA Launches GeForce GTX 950; GM206 The Lesser For $159". anandtech.com.
  28. ^ "NVIDIA Discloses Full Memory Structure and Limitations of GTX 970". PCPer. Archived from the original on February 25, 2015. Retrieved January 28, 2015.
  29. ^ "GeForce GTX 970 Memory Issue Fully Explained – Nvidia's Response". WCFTech. January 24, 2015.
  30. ^ "Why Nvidia's GTX 970 slows down when using more than 3.5GB VRAM". PCGamer. January 26, 2015.
  31. ^ "GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". AnandTech.
  32. ^ "NVIDIA Working on New Driver For GeForce GTX 970 To Tune Memory Allocation Problems and Improve Performance". WCFTech. January 28, 2015.
  33. ^ "NVIDIA clarifies no driver update for GTX 970 specifically". PC World. January 29, 2015.
  34. ^ "NVIDIA Plans Driver Update for GTX 970 Memory Issue, Help with Returns". pcper.com. January 28, 2015.
  35. ^ "Nvidia CEO addresses GTX 970 controversy". PCGamer. February 26, 2015.
  36. ^ a b Chalk, Andy (February 22, 2015). "Nvidia faces false advertising lawsuit over GTX 970 specs". PC Gamer. Retrieved March 27, 2015.
  37. ^ Niccolai, James (February 20, 2015). "Nvidia hit with false advertising suit over GTX 970 performance". PC World. Retrieved March 27, 2015.
  38. ^ a b c Ryan Smith. "Diving Deeper: The Maxwell 2 Memory Crossbar & ROP Partitions - GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation". anandtech.com.
  39. ^ "Nvidia settles class action lawsuit". Top Class Actions. July 27, 2016. Retrieved July 27, 2016.
  40. ^ a b Advanced API support nvidia.com
  41. ^ "GeForce GTX 980 - Specifications - GeForce". geforce.com.
  42. ^ "DX12 GPU and CPU Performance Tested: Ashes of the Singularity Benchmark". pcper.com. Archived from the original on April 15, 2016. Retrieved August 31, 2015.
  43. ^ a b Hilbert Hagedoorn (August 31, 2015). "Nvidia Wanted Oxide dev DX12 benchmark to disable certain DX12 Features ? (content updated)". Guru3D.com.
  44. ^ a b "The Birth of a new API". Oxide Games. August 16, 2015.
  45. ^ "[Various] Ashes of the Singularity DX12 Benchmarks". Overclock.net. August 17, 2015.
  46. ^ "Lack of Async Compute on Maxwell Makes AMD GCN Better Prepared for DirectX 12". TechPowerUp. August 31, 2015.
  47. ^ Hilbert Hagedoorn (June 24, 2015). "AMD Radeon R9 Fury X review". Guru3D.com.
  48. ^ "[Various] Ashes of the Singularity DX12 Benchmarks". Overclock.net. August 17, 2015.
  49. ^ a b c Shrout, Ryan (July 14, 2016). "3DMark Time Spy: Looking at DX12 Asynchronous Compute Performance". PC Perspective. Archived from the original on July 15, 2016. Retrieved July 14, 2016.
  50. ^ a b c d e Smith, Ryan (July 20, 2016). "The NVIDIA GeForce GTX 1080 & GTX 1070 Founders Editions Review: Kicking Off the FinFET Generation". AnandTech. p. 9. Retrieved July 21, 2016.
  51. ^ Smith, Ryan (September 18, 2014). "The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2". AnandTech. p. 1. Retrieved September 19, 2014.
  52. ^ Ryan Smith. "The NVIDIA GeForce GTX Titan X Review". anandtech.com.
  53. ^ "GeForce 910M - Specifications - GeForce". geforce.com.
  54. ^ "Archived copy". Archived from the original on June 24, 2017. Retrieved October 27, 2016.{{cite web}}: CS1 maint: archived copy as title (link)
  55. ^ "NVIDIA GeForce 910M Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  56. ^ "GeForce 920M - Specifications - GeForce". geforce.com.
  57. ^ "NVIDIA GeForce 920M". TechPowerUp. Archived from the original on March 4, 2016. Retrieved March 17, 2015.
  58. ^ "NVIDIA GeForce 920M". TechPowerUp.
  59. ^ "GeForce 920MX - Specifications - GeForce". geforce.com.
  60. ^ "NVIDIA GeForce 920MX Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  61. ^ "GeForce 930M - Specifications - GeForce". geforce.com.
  62. ^ "NVIDIA GeForce 930M". TechPowerUp.
  63. ^ "GeForce 930MX - Specifications - GeForce". geforce.com.
  64. ^ "NVIDIA GeForce 930MX Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  65. ^ "GeForce 940M - Specifications - GeForce". geforce.com.
  66. ^ "NVIDIA GeForce 940M". TechPowerUp.
  67. ^ "NVIDIA GeForce 940M". TechPowerUp.
  68. ^ "GeForce 940MX - Specifications - GeForce". geforce.com.
  69. ^ "NVIDIA GeForce 940MX". TechPowerUp GPU Database. Retrieved December 16, 2017.
  70. ^ "GeForce 945M - Specifications - GeForce". geforce.com.
  71. ^ "NVIDIA GeForce 945M Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  72. ^ "NVIDIA GeForce 945M Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  73. ^ NVIDIA™ GeForceGT 945A (1GB GDDR5) user-selectable by application via NVIDIA Control Panel http://store.hp.com/us/en/ContentView?catalogId=10051&langId=-1&storeId=10151&eSpotName=Sprout-Pro#!
  74. ^ "NVIDIA GeForce 945A Specs | TechPowerUp GPU Database". Techpowerup.com. August 22, 2022. Retrieved August 22, 2022.
  75. ^ "GeForce GTX 950M - Specifications - GeForce". geforce.com.
  76. ^ "NVIDIA GeForce GTX 950M". TechPowerUp.
  77. ^ "NVIDIA GeForce GTX 980". TechPowerUp.
  78. ^ "GeForce GTX 960M - Specifications - GeForce". geforce.com.
  79. ^ "NVIDIA GeForce GTX 960M". TechPowerUp.
  80. ^ "GeForce GTX 965M - Specifications - GeForce". geforce.com.
  81. ^ "NVIDIA GeForce GTX 965M". TechPowerUp.
  82. ^ "GeForce GTX 970M - Specifications - GeForce". geforce.com.
  83. ^ "GeForce GTX 980M - Specifications - GeForce". geforce.com.
  84. ^ Triolet, Damien (February 4, 2016). "GTX 970: 3.5 Go et 224-bit au lieu de 4 Go et 256-bit ?". Hardware.FR (in French). Retrieved May 27, 2016.
  85. ^ "Sprout Pro by HP". HP. Archived from the original on January 9, 2019. Retrieved January 9, 2019.
  86. ^ "Linux, Solaris, and FreeBSD driver 361.28 (long-lived branch release)". Nvidia. February 9, 2016. Archived from the original on February 16, 2016. Retrieved February 10, 2016.
  87. ^ "NVIDIA GeForce 945A Specs". Archived from the original on November 3, 2024. Retrieved August 6, 2018.
  88. ^ "GTX 950 | Specifications". GeForce. Archived from the original on December 12, 2015. Retrieved December 11, 2015.
  89. ^ Shilov, Anton. "GIGABYTE Adds 75W GeForce GTX 950 to Lineup". www.anandtech.com. Retrieved May 16, 2024.
  90. ^ "GeForce GTX 950 (OEM) | Specifications | GeForce". geforce.com. Archived from the original on September 23, 2018. Retrieved January 9, 2019.
  91. ^ "GTX 960 | Specifications". GeForce. Archived from the original on December 12, 2015. Retrieved December 11, 2015.
  92. ^ "Nvidia GeForce GTX 960 2GB vs 4GB review". Eurogamer. October 18, 2015.
  93. ^ "GeForce GTX 960 (OEM) | Specifications | GeForce". geforce.com. Archived from the original on September 23, 2018. Retrieved January 9, 2019.
  94. ^ "GTX 970 | Specifications". GeForce. Archived from the original on December 7, 2015. Retrieved December 11, 2015.
  95. ^ Wasson, Scott (January 26, 2015). "Nvidia: the GeForce GTX 970 works exactly as intended, A look inside the card's unusual memory config". The Tech Report. p. 1. Archived from the original on January 28, 2015. Retrieved January 26, 2015.
  96. ^ "GTX 980 | Specifications". GeForce. Archived from the original on December 8, 2015. Retrieved December 11, 2015.
  97. ^ "GTX 980 Ti | Specifications". GeForce. Archived from the original on December 11, 2015. Retrieved December 11, 2015.
  98. ^ "GTX TITAN X | Specifications". GeForce. Archived from the original on December 5, 2015. Retrieved December 11, 2015.
  99. ^ "Support Plan for 32-bit and 64-bit Operating Systems | NVIDIA".
  100. ^ Eric Hamilton (March 9, 2019). "Nvidia to end support for mobile Kepler GPUs starting April 2019". Techspot.
  101. ^ "List of Kepler series GeForce Notebook GPUs". Nvidia.
  102. ^ "Support Plan for Windows 7 and Windows 8/8.1 | NVIDIA".