1. Introduction
Computer processors work exclusively with binary digits (0 and 1), called bits. To represent numbers, these must be encoded. The number of bits used for the encoding determines how many distinct values can be represented: with N bits, 2^N different numbers can be encoded. The most commonly used encodings are integer and floating-point numbers, found in all general-purpose processors, alongside a number of representations found in more specialized processors, such as digital signal processors.
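For example, 8 bits yield 2^8 = 256 distinct values. The short C program below, an illustrative sketch not taken from the article, prints the count of representable values for a few bit widths:

    #include <stdio.h>

    int main(void) {
        /* With N bits, 2^N distinct bit patterns (values) exist. */
        for (int n = 1; n <= 16; n *= 2) {
            printf("N = %2d bits -> %llu distinct values\n",
                   n, 1ULL << n);  /* 2^n computed as a left shift */
        }
        return 0;
    }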
32-bit general-purpose processors provide 32-bit integers, along with 16-bit and 8-bit subsets, as well as 32-bit and 64-bit floating-point numbers. 64-bit general-purpose processors add 64-bit integers to this list.
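As a concrete check of these widths, the C program below (an illustration assuming a hosted C99 compiler with <stdint.h>, not part of the original article) prints the size in bits of each fixed-width integer type and of the two standard floating-point types:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Fixed-width integer types guaranteed by <stdint.h>,
           and the IEEE-754 binary32/binary64 float types on
           typical platforms. */
        printf("int8_t  : %zu bits\n", 8 * sizeof(int8_t));
        printf("int16_t : %zu bits\n", 8 * sizeof(int16_t));
        printf("int32_t : %zu bits\n", 8 * sizeof(int32_t));
        printf("int64_t : %zu bits\n", 8 * sizeof(int64_t));
        printf("float   : %zu bits\n", 8 * sizeof(float));
        printf("double  : %zu bits\n", 8 * sizeof(double));
        return 0;
    }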
To cope with the constraints of performance, energy consumption and memory footprint, 16-bit and 8-bit floating-point formats have also been introduced.
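One widely used 16-bit format is bfloat16, which keeps only the upper half of an IEEE-754 binary32 value. The C sketch below is an illustration of this idea, not an excerpt from the article; it uses plain truncation, whereas production conversions usually round to nearest:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Truncating conversion binary32 -> bfloat16: keep the top
       16 bits (sign, 8 exponent bits, 7 mantissa bits). */
    static uint16_t float_to_bfloat16(float f) {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* safe type-punning */
        return (uint16_t)(bits >> 16);
    }

    /* Widening conversion back: pad the low 16 bits with zeros. */
    static float bfloat16_to_float(uint16_t h) {
        uint32_t bits = (uint32_t)h << 16;
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void) {
        float x = 3.14159265f;
        uint16_t b = float_to_bfloat16(x);
        printf("%f -> 0x%04x -> %f\n", x, (unsigned)b,
               bfloat16_to_float(b));
        return 0;
    }

The round trip shows the precision lost by dropping 16 mantissa bits, which is exactly the trade-off such reduced formats make in exchange for halving memory traffic.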