Fig 5. Contrast between the IEEE 754 single-precision 32-bit floating-point format (float32) and the Bfloat16 format.

Source publication (preprint, full text available)
Machine-learning architectures, such as Convolutional Neural Networks (CNNs), are vulnerable to adversarial attacks: inputs crafted carefully to force the system to output a wrong label. Since machine learning is being deployed in safety-critical and security-sensitive domains, such attacks may have catastrophic security and safety consequences. In...

Contexts in source publication

Context 1
... BFloat16 is a truncated version of the 32-bit IEEE 754 single-precision floating-point format (float32). The BFloat16 format is shown in Figure 5; it consists of 1 sign bit, an 8-bit exponent, and a 7-bit mantissa, preserving the dynamic range of a full 32-bit float while occupying only the 16 bits of a half-width number. ...
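As a minimal sketch (not code from the source publication; the function names are hypothetical), the truncation described above can be illustrated in Python by dropping the low 16 bits of the float32 bit pattern, which leaves the sign bit, the full 8-bit exponent, and the top 7 mantissa bits:

```python
import struct


def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to a bfloat16 bit pattern (illustrative sketch).

    Keeps the sign bit, the 8 exponent bits, and the top 7 mantissa bits
    by discarding the low 16 bits; no rounding is applied.
    """
    # Reinterpret the Python float as its 32-bit IEEE 754 pattern.
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    # Drop the low 16 mantissa bits.
    return bits32 >> 16


def bfloat16_bits_to_float32(bits16: int) -> float:
    """Expand a bfloat16 bit pattern back to float32 (low bits zeroed)."""
    return struct.unpack("<f", struct.pack("<I", bits16 << 16))[0]


if __name__ == "__main__":
    # Large and small magnitudes survive because the exponent width is
    # unchanged; only mantissa precision is lost.
    for value in (1.0, 3.14159265, 1e38, -2.5e-38):
        b = float32_to_bfloat16_bits(value)
        print(f"{value:>12.6g} -> 0x{b:04x} -> {bfloat16_bits_to_float32(b):.8g}")
```

The example shows why BFloat16 keeps the range of float32 while halving storage: because the exponent field is untouched, values near the float32 limits still convert, and only the mantissa loses precision.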
