Concluding remarks
Coding numbers in computers
Article REF: H1210 V1

Author : Daniel ETIEMBLE

Publication date: November 10, 2023


11. Concluding remarks

Encoding of 8-, 16-, 32- and 64-bit integers and 32- and 64-bit floats has been implemented in all general-purpose processors for decades. The fixed-point format is used mainly in signal processors.

Over the past ten years or so, new formats have been introduced, the result of two considerations:

  • deep neural networks, particularly for inference, can benefit from reduced formats, which shrink the silicon area of arithmetic operators and their power dissipation without significant loss of accuracy. Some of these formats, such as the 16-bit floats FP16 and BF16 (bfloat16), are implemented in the instruction sets of general-purpose processors. The TF32 format is implemented in the tensor cores of recent Nvidia GPUs;

  • many applications use specialized processors such as neural processors (Google TPU,...
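To make the reduced formats above concrete, here is an illustrative sketch (not from the article) of the sign/exponent/fraction bit widths of FP32, FP16, BF16 and TF32, together with a float32-to-bfloat16 conversion by simple truncation of the low-order bits, using only the Python standard library:

```python
import struct

# (sign, exponent, fraction) bit widths of the formats discussed above.
FORMATS = {
    "FP32": (1, 8, 23),  # IEEE 754 binary32
    "FP16": (1, 5, 10),  # IEEE 754 binary16
    "BF16": (1, 8, 7),   # bfloat16: same range as FP32, reduced precision
    "TF32": (1, 8, 10),  # Nvidia TensorFloat-32: 19 bits used per value
}

def float32_bits(x: float) -> int:
    """Return the 32-bit IEEE 754 pattern of x as an integer."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def to_bfloat16(x: float) -> float:
    """Round x toward zero to bfloat16 by keeping the upper 16 bits.

    Because BF16 keeps the full 8-bit FP32 exponent, truncation never
    overflows; only fraction bits are lost.
    """
    bits = float32_bits(x) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

if __name__ == "__main__":
    for name, (s, e, m) in FORMATS.items():
        print(f"{name}: {s + e + m:2d} bits = sign {s} + exponent {e} + fraction {m}")
    print(to_bfloat16(3.14159265))  # low-order fraction bits are lost
```

This illustrates why BF16 is attractive for deep learning: dropping fraction bits while keeping the FP32 exponent preserves dynamic range, and conversion from FP32 is a trivial truncation.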
