Core frequency

The performance of a graphics card is determined by its stream processor units, video memory frequency, video memory bit width, and other factors, so when the display cores differ, a higher core frequency does not mean a more powerful card. For example, the core frequency of the GTS250 reaches 750MHz, higher than the 576MHz of the GTX260, yet in performance the GTX260 is clearly better than the GTS250. Among chips of the same level, the one with the higher core frequency performs better, and raising the core frequency is one method of overclocking a graphics card. There are only two mainstream display chip makers, ATI and NVIDIA, and both supply display cores to third-party manufacturers. Given the same display core, some manufacturers raise the core frequency of their products so that they work above the core's rated frequency and achieve higher performance.
Video memory
The main types of video memory used on graphics cards are SDR, DDR SDRAM, DDR SGRAM, DDR2, GDDR2, DDR3, GDDR3, GDDR4, and GDDR5.
DDR SDRAM is the abbreviation of Double Data Rate SDRAM (double data rate), which can provide a higher operating frequency and bring excellent data processing performance.
DDR SGRAM was improved from Synchronous Dynamic Random Access Memory (SDRAM) by graphics card manufacturers specifically for graphics work. To enhance graphics access and graphics control efficiency, SGRAM allows data in memory to be modified or accessed individually in blocks, and it can work synchronously with the central processing unit (CPU), which reduces the number of memory reads and increases the efficiency of the graphics controller. Although its stability and performance are both good, its overclocking capability is very poor.
The current mainstream types are GDDR3 and GDDR5 (GDDR4 failed in the market and never became popular).
XDR2 DRAM: the system architecture of XDR2 is derived from XDR; unlike the jump from RDRAM to XDR, the overall architectural difference between XDR2 and XDR systems is not large. The main differences lie in the speed of the relevant buses. First, XDR2 raises the system clock from XDR's 400MHz to 500MHz; second, on the RQ bus used to transmit addressing and control commands, the transmission frequency is raised from 800MHz to 2GHz, which is 4 times the XDR2 system clock; finally, the data transmission frequency is raised from XDR's 3.2GHz to 8GHz, 16 times the XDR2 system clock, whereas XDR's was 8 times. Rambus therefore calls XDR2's data transmission technology Hex Data Rate (HDR). According to Rambus, the standard bit width of an XDR2 memory chip is 16 bits (like XDR, it can adjust the bit width dynamically). At a transmission rate of 8GHz per data pin, i.e. 8Gbps, the data bandwidth of a single XDR2 chip reaches 16GB/s. By comparison, the fastest GDDR3-800 at the time had a 32-bit chip width and a data rate of 1.6Gbps, for a single-chip bandwidth of 6.4GB/s, only 40% of XDR2; the gap is very obvious.
Bit width and bandwidth
Video memory bit width is the number of bits of data the video memory can transmit in one clock cycle: the more bits, the more data that can be transmitted at the same frequency. In 2010, the video memory bit widths of graphics cards on the market were mainly 128-bit, 192-bit, and 256-bit. Video memory bandwidth = video memory frequency × video memory bit width / 8; it represents the data transmission speed of the video memory. At the same memory frequency, the bit width determines the bandwidth. For example, for 128-bit and 256-bit video memory both running at 500MHz, the bandwidths are: 128-bit = 500MHz × 128 / 8 = 8GB/s, and 256-bit = 500MHz × 256 / 8 = 16GB/s, twice that of 128-bit. A graphics card's video memory is made up of individual memory chips, and the total bit width is composed of the chip bit widths: video memory bit width = memory chip bit width × number of memory chips. Each memory chip carries a part number from its manufacturer; you can look up the number online to find its bit width, then multiply by the number of chips to get the card's bit width. Among cards of otherwise equal specifications, the larger the bit width, the better the performance.
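The bandwidth formula above can be checked with a short calculation (a sketch; the function name and the 1GB/s = 1000MB/s convention used in the text's arithmetic are assumptions for illustration):

```python
def memory_bandwidth_gb_s(freq_mhz, bus_width_bits):
    # bandwidth (GB/s) = frequency (MHz) x bit width / 8 / 1000
    # MHz x bits / 8 gives MB/s; dividing by 1000 gives GB/s as in the text
    return freq_mhz * bus_width_bits / 8 / 1000

print(memory_bandwidth_gb_s(500, 128))  # 8.0 GB/s
print(memory_bandwidth_gb_s(500, 256))  # 16.0 GB/s
```

Doubling the bit width at the same frequency doubles the bandwidth, which is the point of the 128-bit versus 256-bit comparison.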
Capacity
When other parameters are the same, the larger the capacity the better, but when comparing graphics cards you cannot look only at video memory (many dishonest dealers use large video memory on a low-performance core as a selling point).
For example, the 384MB 9600GT is far stronger than the 512MB 9600GSO because of the gap in core and memory bandwidth. Video memory capacity is only one reference when choosing a graphics card; factors such as the core and bandwidth are more important and determine the card's performance before memory capacity does. A certain amount of video memory is still necessary, however, because shortages can occur at high resolutions with high anti-aliasing. The memory capacity of graphics cards on the market currently ranges from 256MB to 4GB.
Package type
TSOP (Thin Small Out-Line Package) thin small size package
QFP (Quad Flat Package) small square flat package
MicroBGA (Micro Ball Grid Array) micro ball gate array package, also known as FBGA (Fine-pitch Ball Grid Array)
Mainstream graphics cards before 2004 were basically packaged in TSOP or MBGA, with TSOP the most common. With the arrival of NVIDIA's GeForce3/4 series, MBGA became mainstream; MBGA packaging can achieve faster video memory speeds, far exceeding the roughly 400MHz limit of TSOP.
Speed
Video memory speed is generally measured in ns (nanoseconds). Common speeds include 1.2ns, 1.0ns, and 0.8ns; the smaller the value, the faster the memory. The theoretical working frequency of video memory is: equivalent working frequency (MHz) = 1000 × n / (video memory speed in ns), where n depends on the type of video memory (n = 2 for GDDR3, n = 4 for GDDR5).
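The speed-to-frequency formula can be sketched as follows (the function name is an illustrative assumption):

```python
def equivalent_freq_mhz(speed_ns, n):
    # equivalent working frequency (MHz) = 1000 * n / speed (ns)
    # n = 2 for GDDR3 memory, n = 4 for GDDR5 memory
    return 1000 * n / speed_ns

print(equivalent_freq_mhz(1.0, 2))  # 2000.0 MHz for 1.0 ns GDDR3
print(equivalent_freq_mhz(0.8, 4))  # 5000.0 MHz for 0.8 ns GDDR5
```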
Frequency
Video memory frequency reflects the speed of the video memory to a certain extent, measured in MHz (megahertz). The frequency of video memory has a great relationship with the type of video memory:
SDRAM video memory generally works at a lower frequency, which can no longer meet the needs of graphics cards.
DDR SDRAM video memory can provide higher frequencies, so most graphics cards now use DDR SDRAM, and the frequencies available vary greatly. Video memory has now developed to GDDR5, whose default equivalent operating frequency reaches up to 4800MHz, with considerable room for further improvement.
The memory frequency is related to the memory clock cycle; the two are reciprocals, that is, memory frequency (MHz) = 1 / memory clock cycle (ns) × 1000. For SDRAM memory with a 6ns clock cycle, the memory frequency is 1/6ns ≈ 166MHz. For DDR SDRAM with a 6ns clock cycle, the actual frequency is likewise 1/6ns ≈ 166MHz, but this is not the frequency usually quoted for DDR. Because DDR transfers data on both the rising and falling edges of the clock, it transmits twice per cycle, equivalent to twice the frequency of SDRAM; the commonly quoted DDR frequency is this equivalent frequency, its actual operating frequency multiplied by 2. Thus the quoted frequency of 6ns DDR memory is 1/6ns × 2 ≈ 333MHz. Note also that the manufacturer sets the actual working frequency of the video memory when the card is made, and this is not necessarily equal to the memory's maximum frequency; such situations are fairly common. There are also cases where the memory cannot work stably at its nominal maximum frequency.
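The cycle-to-frequency relationship and the DDR doubling can be sketched like this (function names are illustrative; values are truncated to whole MHz, matching the 166/333 figures in the text):

```python
def actual_freq_mhz(cycle_ns):
    # actual frequency (MHz) = 1000 / clock cycle (ns)
    return 1000 / cycle_ns

def ddr_equivalent_freq_mhz(cycle_ns):
    # DDR transfers on both clock edges, so the quoted (equivalent)
    # frequency is twice the actual clock frequency
    return 2 * actual_freq_mhz(cycle_ns)

print(int(actual_freq_mhz(6)))          # 166 MHz for 6 ns SDRAM
print(int(ddr_equivalent_freq_mhz(6)))  # 333 MHz for 6 ns DDR
```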
Stream processor unit
Before DX10 graphics cards appeared, there was no such thing as a "stream processor". The GPU was internally composed of "pipelines", divided into pixel pipelines and vertex pipelines, in fixed numbers. Simply put, vertex pipelines are mainly responsible for 3D modeling and pixel pipelines for 3D rendering. Because their numbers were fixed, a problem arose: when a game scene required a great deal of 3D modeling but not much pixel processing, vertex pipeline resources became tight while pixel pipelines sat largely idle, and of course the completely opposite situation also occurred. The result was that some resources ran short while others were wasted sitting idle.
Under these circumstances, the "unified rendering architecture" was first proposed in the DX10 era. Graphics cards abandoned the traditional "pixel pipeline" and "vertex pipeline" and unified them into stream processor units that can perform both vertex and pixel operations, so that in different scenes the card can dynamically allocate stream processors between vertex and pixel work and make full use of its resources.
Now the number of stream processors has become a very important indicator of graphics card performance, and NVIDIA and AMD-ATI keep increasing it to deliver leaps in performance: for example, AMD-ATI's HD3870 has 320 stream processors, the HD4870 has 800, and the HD5870 reaches 1,600.
It is worth mentioning that N-card and A-card GPU architectures differ, as does how stream processors are counted, so the numbers on the two sides cannot be compared directly.
Each stream processor unit of an N card contains only 1 stream processor, while each stream processor unit of an A card is equivalent to 5 stream processors (divide the A-card stream processor count by 5). For example, although the HD4850 has 800 stream processors, that is actually only equivalent to 160 stream processor units. In addition, the A-card stream processor frequency equals the core frequency, which is why the 9800GTX+, with only 128 stream processors, performs on a par with the HD4850 (the N-card stream processor frequency is approximately 2.16 times the core frequency).
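The divide-by-5 conversion described above can be written out as a small sketch (the function name is an illustrative assumption):

```python
def a_card_equivalent_units(stream_processors):
    # each A-card stream processor unit bundles 5 stream processors,
    # so divide by 5 to compare against N-card stream processor counts
    return stream_processors // 5

print(a_card_equivalent_units(800))   # 160 units (HD4850)
print(a_card_equivalent_units(1600))  # 320 units (HD5870)
```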
3D API
API is the abbreviation of Application Programming Interface, which means application program interface, while 3D API refers to the direct interface between the graphics card and the application.
The 3D API lets programmers' 3D software simply call routines in the API, which automatically communicates with the hardware driver and activates the powerful 3D graphics processing functions of the 3D chip, greatly improving the efficiency of 3D program design. Without a 3D API, programmers would have to understand the characteristics of every graphics card in order to write programs that match it perfectly and bring out its full performance. With a 3D API as the direct interface between the graphics card and the software, programmers only need to write code that conforms to the interface to exploit the card's performance, without understanding the specific parameters of the hardware, which greatly simplifies program development. Likewise, display chip manufacturers design their hardware to the standard so that API calls map well onto hardware resources and achieve better performance. A 3D API thus provides maximum compatibility between hardware and software from different manufacturers. In games, which embody the 3D API best, designers need not consider the characteristics of any specific graphics card; they develop to the 3D API's interface standard, and at run time the game calls on the graphics card's hardware resources directly through the API.
The main 3D APIs used in personal computers are: DirectX and OpenGL.
RAMDAC frequency
RAMDAC is the abbreviation of Random Access Memory Digital/Analog Converter, a random access memory digital-to-analog converter.
The function of the RAMDAC is to convert digital signals in the video memory into analog signals that the monitor can display; its conversion rate is expressed in MHz. Processing data in a computer is a process of digitization: everything is reduced to the digits 0 and 1 and computed on, and graphics accelerator cards likewise use these 0s and 1s to process the color, depth, brightness, and so on of each pixel. The signals the graphics card generates are digital, but CRT monitors work in analog mode and cannot recognize digital signals, so a device is needed to convert digital signals into analog ones; the RAMDAC is that device on the graphics card. The RAMDAC's conversion rate, in MHz, determines the refresh frequency it can support (similar in meaning to a monitor's "bandwidth"): the higher its working speed, the wider the frequency band, and the better the picture quality at high resolution. This value determines the maximum resolution and refresh rate the card can support given sufficient video memory. To achieve an 85Hz refresh rate at a resolution of 1024×768, the RAMDAC rate must be at least 1024×768×85Hz×1.344 (a conversion factor) ≈ 90MHz. In 2009, mainstream graphics card RAMDACs reached 350MHz and 400MHz; most cards on the market are now 400MHz, enough to meet or exceed the resolution and refresh rate most monitors can provide.
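The 1024×768 at 85Hz example works out as follows (a sketch; the function name is an assumption, and the 1.344 blanking-overhead factor is taken from the text as given):

```python
def min_ramdac_mhz(width, height, refresh_hz, overhead=1.344):
    # required conversion rate = pixels per frame x refresh rate x overhead,
    # converted from Hz to MHz
    return width * height * refresh_hz * overhead / 1_000_000

print(round(min_ramdac_mhz(1024, 768, 85)))  # ~90 MHz
```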
Cooling device
A graphics card can draw as much power as a 150-watt light fixture. Because operating the integrated circuits requires considerable power, the heat generated by the internal current also rises, and if this heat is not removed in time the hardware can be damaged. The cooling system exists to keep these devices running stably; without heat dissipation, the GPU or memory will overheat, which can damage the computer, cause it to crash, or even render it completely unusable. Cooling devices are made of thermally conductive materials; some are passive components that dissipate heat silently, while others, such as fans, inevitably make noise.
A heat sink is usually regarded as a passive cooler, but whether it is installed on a heat-conducting area or elsewhere inside the case, it can take effect and help other devices shed heat. Heat sinks are usually installed on the GPU or memory together with a fan, and sometimes a small fan is mounted directly on the hottest part of the graphics card.
The larger the surface area of the heat sink, the greater the cooling effect (usually it must operate together with a fan), but sometimes space constraints prevent a large heat sink from being mounted on the device that needs cooling, or the device is too small for a large heat sink to attach to. In those cases a heat pipe must carry the heat from the device to a heat sink elsewhere for dissipation. Generally, the GPU casing is made of highly thermally conductive metal, and the heat pipe connects directly to it, so heat is easily conducted to the heat sink at the other end.
Many processor coolers on the market are equipped with heat pipes, and heat pipes have developed into components that can be used flexibly in graphics card cooling systems.
Most coolers consist simply of a heat sink and a fan, with the fan blowing the heat off the heat sink's surface. Since the GPU is the hottest part of the graphics card, the cooler is usually applied to the GPU. There are also many retail accessories on the market for consumers who want to replace or upgrade their coolers, the most common being VGA coolers.
Working principle
Once data leaves the CPU, it must go through 4 steps before finally reaching the display:
1. From the bus (bus) to the GPU (Graphics Processing Unit, graphics processor): the data sent from the CPU is sent to the North Bridge (main bridge) and then to the GPU (Graphics Processing Unit) for processing.
2. Enter the video RAM (video memory) from the video chipset (graphics card chipset): send the data processed by the chip to the video memory.
3. From the video memory to the Digital-to-Analog Converter (RAMDAC, random access memory digital-to-analog converter): the data is read from the video memory and sent to the RAMDAC for conversion from digital to analog signals. If the card uses a DVI interface, however, no digital-to-analog conversion is needed and the digital signal is output directly.
4. From the DAC to the monitor: send the converted analog signal to the display.
Display performance is part of system performance and is determined by the four steps above; it is different from the graphics card's own video performance. Strictly speaking, the card's video performance is determined by the middle two steps, because the data transfer in those steps happens within the graphics card. The first step is the CPU (the computer's core, composed of the arithmetic unit and controller, called the microprocessor or central processing unit) feeding data to the graphics card, and the last step is the graphics card sending data directly to the display.
Common faults of graphics cards
1. No display when booting
This type of fault is usually caused by poor contact between the graphics card and the motherboard, or by a problem with the motherboard slot. For motherboards with integrated graphics, if system memory is shared as video memory, pay attention to the position of the memory module: generally it should be inserted in the first memory slot. When a no-display fault at startup is caused by the graphics card, the machine usually gives one long beep and two short beeps after power-on (for AWARD BIOS).
2. Abnormal color display
This type of failure is generally caused by the following: 1. poor contact between the graphics card and the monitor signal cable; 2. a fault in the monitor itself; 3. abnormal color in certain software, common on old machines, for which there is a color-verification option in the BIOS that can simply be turned on; 4. a damaged graphics card; 5. a magnetized monitor, usually caused by a magnetic object being too close, which may also deflect the picture.
3. Crash
Such failures are generally caused by incompatibility or poor contact between the motherboard and the graphics card; incompatibility between the graphics card and other expansion cards can also cause crashes.
4. Garbled screen
A garbled screen with unclear text is generally caused by the monitor or graphics card not supporting the chosen high resolution. When the screen is garbled, you can restart in safe mode, enter the display settings under Windows 98, click the "Apply" and "OK" buttons while in the 16-color state, restart, delete the graphics card driver in normal Windows 98 mode, and restart the computer again. Alternatively, without entering safe mode, edit the SYSTEM.INI file in a pure DOS environment, change display.drv=pnpdrver to display.drv=vga.drv, save and exit, and then update the driver in Windows.
5. The graphics card driver is lost
After the graphics card driver is loaded, it is automatically lost again after running for a while. This type of failure is generally due to poor graphics card quality or incompatibility between the graphics card and the motherboard, which causes the card's temperature to rise too high and the system to run unstably or crash. In this case the only option is to replace the graphics card.
6. Abnormal noise or patterns appear on the screen
Such failures are generally caused by problems with the graphics card's video memory or by poor contact between the card and the motherboard; clean the card's gold fingers or replace the card. [1]
A brief history of development
CGA graphics cards
The origin of civilian graphics cards can be traced back to the 1980s. In 1981, when IBM launched the personal computer, it offered two graphics cards: the "monochrome display adapter" (MDA) and the "color graphics adapter" (CGA). As the names suggest, the MDA worked with a monochrome display and could show 80 columns × 25 rows of alphanumeric text, while the CGA could be used with an RGB display to draw graphics as well as text. At the time the main use of computers was text processing; the MDA's resolution of 752 dots wide by 504 dots high was not enough for larger display needs but was more than adequate for text, while the CGA, with color and graphics capability, could handle general graphics display, though its resolution of only 640x200 naturally could not compare with later color standards.
MGA/MCGA graphics cards
In 1982, IBM launched another graphics card, the MGA (Monochrome Graphics Adapter), also known as the Hercules Card. In addition to displaying graphics it retained the functions of the original MDA, and many games required this card for animation effects, making it popular at the time. Also on the market was the EGA (Enhanced Graphics Adapter) made by Genoa, an enhanced graphics card that could emulate the MDA and CGA and draw graphics dot by dot on a monochrome screen; the EGA produced 16-color graphics and text at a resolution of 640x350. These cards were all digital, however, until the emergence of the MCGA (Multi-Color Graphics Array), which was integrated into the imaging systems of the PS/2 Models 25 and 30. It used analog RGB image signals with resolutions up to 640x480. The difference between digital RGB and analog RGB is like the difference between an on-off switch and a dimmer: the display converts each signal's voltage into a matching range of color brightness. Only analog displays can be used with the MCGA, which provides a maximum of 256 colors; IBM also provided an analog monochrome display.
VGA interface graphics cards
VGA (Video Graphics Array) is the display graphics array IBM built into the imaging systems of its PS/2 Models 50, 60 and 80. Its text mode reaches 720x400, and its graphics modes reach 640x480 with 16 colors and 320x200 with 256 colors; this was the first time a graphics card could display up to 256 colors at once. The popularity of VGA graphics cards brought computers into the glorious era of 2D graphics. In the period that followed, many VGA graphics card design companies kept introducing new products, pursuing higher resolutions and color depths. At the same time, IBM launched the 8514/A monitor display specification, mainly to support 1024x768 resolution.
In the advance from the 2D era to the 3D era, one graphics card that cannot be ignored is the Trident 8900/9000. For the first time, the graphics card became an independent accessory in the computer rather than an integrated chip on the motherboard. The Trident 9685 launched later was representative of the first generation of 3D graphics cards. However, the card that truly opened the door to 3D was the GLINT 300SX; although its 3D functions were extremely simple, it was a milestone.
The era of 3D AGP interface graphics cards
1995 was definitely a milestone year for graphics cards: 3D graphics accelerator cards officially entered players' field of vision. Games had just entered the 3D era, and the flood of 3D games forced graphics cards to develop into true 3D accelerators. That year also made a company famous; everyone knows it was 3Dfx. In 1995, 3Dfx was still a small company, but as a veteran 3D technology firm it launched the industry's first true 3D graphics accelerator card: Voodoo. In Moto Hero, one of the most popular games of the time, Voodoo's speed and color drove game-loving users crazy, and many game fanatics spent more than a thousand yuan in computer malls on no-name Voodoo cards just to experience it. 3Dfx's patented Glide engine interface once dominated the entire 3D world, a situation that changed only with the emergence of D3D and OpenGL.
Voodoo came standard with 4MB of video memory and could provide fast 3D display and the most gorgeous pictures at a resolution of 640×480. Of course, Voodoo also had shortcomings: it was only a daughter card with 3D acceleration and had to be paired with a powerful 2D graphics card. Many veteran players will still remember the much-talked-about golden combination of S3 765 + Voodoo. Speaking of the S3 765, we must mention the former king of graphics cards, S3.
The S3 765 was the standard configuration of compatible machines at the time. It supported up to 2MB of EDO video memory and could achieve high-resolution display, then a feature of high-end cards. This chip truly popularized SVGA: it supported 1024×768 resolution and up to 32-bit true color at low resolutions, and it was very cost-effective, bringing S3 its first period of glory.
S3 then launched the S3 Virge in 1996, a graphics card with integrated 3D acceleration that supported DirectX and contained many advanced 3D features such as Z-buffering, double buffering, shading, atmospheric effects, and lighting, making it a pioneer among 3D graphics cards and bringing S3 its second period of glory. Unfortunately, under pursuit from 3Dfx, the Virge series could not sustain that glory and was eventually abandoned by the market.
After that, to fix Voodoo's lack of 2D display, 3Dfx launched the VoodooRush, which added Z-Buffer technology. Unfortunately, compared with Voodoo, VoodooRush's 3D performance was not improved at all; worse, it brought many compatibility problems, and its high price also restricted its adoption.
Of course, the 3D accelerator market at the time was not monopolized by 3Dfx; high prices left plenty of room for other manufacturers, such as the Trident 9750/9850, considered the cost-performance king of the day, the SIS6326 with its MPEG-II hardware decoding, and NVIDIA's Riva128/128ZX, all favored by many players, which promoted the development of graphics technology and the maturing of the market. 1997 was the year 3D graphics cards first emerged, and 1998 the year they sprang up everywhere amid fierce competition. The 3D game market boomed in 1998 as a wave of more refined 3D games launched together, leaving users and manufacturers alike looking forward to faster, more powerful graphics cards.
Riding the honor and dazzling halo brought by Voodoo, 3Dfx launched another epoch-making product: Voodoo2. Voodoo2 came with 8MB/12MB of EDO video memory and a PCI interface, and carried dual chips that could perform single-cycle multi-texturing. Of course, Voodoo2 also had shortcomings: the card was very long, the chips generated a great deal of heat, and it was still a 3D accelerator daughter card needing the support of a 2D graphics card. But it is undeniable that Voodoo2 brought 3D acceleration to a new milestone; its effects, graphics, and speed conquered many of the 3D games popular at the time, such as FIFA 98, NBA 98, and Quake 2. Perhaps many users still don't know that SLI, so popular in 2009, was also a Voodoo2 innovation: Voodoo2 was the first to support dual-card operation, letting two Voodoo2 cards work in parallel for double the performance.
Although 1998 was the year Voodoo2 shone, other manufacturers also produced classics. The Matrox MGA G200, in addition to inheriting Matrox's superb 2D quality, made revolutionary improvements in 3D: it provided processing speed and effects close to Voodoo2's, supported DVD hardware decoding and video output, and pioneered 128-bit independent dual-bus technology, which greatly improved performance. Together with the then-popular AGP bus, the G200 won the favor of many users.
Intel's i740 launched alongside Intel's 440BX chipset. It supported AGP 2X and came standard with 8MB of video memory. Unfortunately its performance was unimpressive: its 2D performance was only on a par with the S3 Virge, and its 3D only at the Riva128's level, but its clear price advantage let it gain a foothold in the low-end market.
Riva TNT was NVIDIA's product to block Voodoo2. It came standard with a then-large 16MB of video memory, fully supported AGP, supported 32-bit color rendering for the first time, and offered faster D3D performance at a lower price than Voodoo2, making it the new favorite of many players.
ATI, which had been rooted in the Apple world, also produced a graphics card called Rage Pro, slightly faster than Voodoo.