From https://metaculus.com/questions/17094/gpu-tracking-before-2026/
Training large AI models like GPT-4 requires large numbers of high-performance accelerators, such as NVIDIA's V100 and A100 GPUs or Google's TPUs (tensor processing units).
One way for AI policy to impact training (and, to a lesser degree, serving/inference) of large AI models is to track how these chips are used. This could be done via firmware, the low-level software that runs on the chip itself and typically handles tasks like power management, memory allocation, performance monitoring, crash reporting, and, importantly, reporting of usage statistics. Alternatively, tracking could be done at the cluster or datacenter level.
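As a rough illustration of what cluster-level usage tracking could look like, the sketch below polls per-GPU utilization, memory, and power draw through NVIDIA's NVML management library (via the pynvml Python bindings). The polling loop and the idea of forwarding samples to an aggregator are assumptions for illustration; firmware-level reporting baked into the chip itself would work differently and is not shown here.

```python
import time
import pynvml


def collect_gpu_usage():
    """Poll basic usage statistics for every visible NVIDIA GPU via NVML."""
    pynvml.nvmlInit()
    try:
        samples = []
        for index in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percent busy
            memory = pynvml.nvmlDeviceGetMemoryInfo(handle)      # bytes
            power_mw = pynvml.nvmlDeviceGetPowerUsage(handle)    # milliwatts
            samples.append({
                "gpu_index": index,
                "gpu_util_pct": util.gpu,
                "mem_used_gib": memory.used / 2**30,
                "power_w": power_mw / 1000,
                "timestamp": time.time(),
            })
        return samples
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    # Hypothetical monitoring loop: in a real deployment these samples would be
    # forwarded to a cluster- or datacenter-level aggregator rather than printed.
    while True:
        for sample in collect_gpu_usage():
            print(sample)
        time.sleep(60)
```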
In October 2022, the US implemented export controls that, roughly speaking, ban the export to China of advanced semiconductors whose manufacturing chain involves US technology. These are the chips referred to here as "US-export-controlled". The US's willingness to restrict how certain chips are sold, and to whom, suggests a potential for regulations mandating the monitoring of certain chips.