Introduction
Coding numbers in computers
Article REF: H1210 V1

Author: Daniel ETIEMBLE

Publication date: November 10, 2023


1. Introduction

Computer processors work exclusively with binary digits (0, 1), called bits, so numbers must be encoded before they can be processed. The number of bits used to encode a number determines how many distinct values can be represented: with N bits, 2^N different numbers are possible. The most commonly used encodings are integers and floating-point numbers, found in all general-purpose processors, along with a number of representations found in more specialized processors, such as signal processors.

32-bit general-purpose processors provide 32-bit integers, along with 16-bit and 8-bit subsets, and 32-bit and 64-bit floating-point numbers. 64-bit general-purpose processors add 64-bit integers to this list.

To cope with the constraints of performance, energy consumption and memory occupation, 16-bit and 8-bit...
