
According to TrendForce, Nvidia is working on the next-generation B100 and B200 GPUs based on the Blackwell architecture. The new GPUs are expected to hit the market in the second half of this year and are aimed at CSP cloud customers. For those who don’t know, “CSP cloud customers” are customers who run their workloads with cloud service providers (CSPs). The company will also add the B200A, a simplified version of the B200, for enterprise OEM customers who need edge AI.

It is reported that capacity for TSMC’s CoWoS-L packaging, used in the B200 series, remains limited, and that the B200A will therefore switch to the relatively simple CoWoS-S packaging technology. This would leave Nvidia’s constrained CoWoS-L capacity free to meet the requirements of cloud service providers.

B200A specifications:

Unfortunately, the specifications of the B200A are not yet fully known. We can only confirm that the HBM3E memory capacity has been reduced from 192 GB on the B200 to 144 GB. It is also reported that the number of memory stacks has been halved from eight to four, while the capacity of each stack has been increased from 24 GB to 36 GB.
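
Assuming the halving indeed applies to the number of HBM3E stacks (the original report’s wording is ambiguous), the figures check out:

B200: 8 stacks × 24 GB per stack = 192 GB
B200A: 4 stacks × 36 GB per stack = 144 GB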

The B200A will consume less power than the B200 and will not require liquid cooling; its air-cooled design also makes it easier to customize. The B200A is expected to ship to OEMs around the second quarter of next year.

Supply chain research indicates that Nvidia’s high-end GPU shipments in 2024 will still be dominated by the Hopper platform: the H100 and H200 for the North American market and the H20 for the Chinese market. Since the B200A will not become available until around the second quarter of 2025, it is not expected to interfere with the H200, which will be available in the third quarter of 2024 or later.
