🔗 JavaScript Doubles and Integers

🔗 Description

If you have written JavaScript before, you have probably used numbers. But have you ever wondered how a JavaScript number is actually represented inside the engine at runtime?

We will explore the limitations of JavaScript numbers and learn some techniques for manipulating them.

We will look at the actual memory representations of integers and floats later.

🔗 Exercise

🔗 Number Type

First, let's explore Number variables.

  • Define a variable named num and assign the value 41 to it. Then run typeof(num) to find out what the engine considers the variable's "primitive type".
js »
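A minimal sketch of what this might look like (your REPL's exact output formatting may differ):

JavaScript
let num = 41;
typeof(num)   // "number"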

What does number actually mean? If we check the ECMAScript spec, we can see the following definition:

Numbers: IEEE 754-2019 double-precision floating point values.

So the specification says that every Number value will be treated as a 64-bit double-precision float (the type double in C++). Even though we assigned the variable an integer value, the spec says to treat it as a double for most operations.

  • Next, divide your variable by a number that does not divide it evenly (i.e. if the number is odd, divide by 2). In C++, integer division would have truncated the result (41 / 2 -> 20). What does the same operation do in JavaScript?
js »
  Reveal Answer

The division is a floating point division, so 41 / 2 -> 20.5, rather than the truncated 20 you would get in C++.
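If you do want C++-style integer division in JavaScript, you have to truncate the result yourself. A couple of ways to do that (Math.trunc is standard; the bitwise trick relies on the integer conversion discussed next):

JavaScript
41 / 2              // 20.5 - floating point division
Math.trunc(41 / 2)  // 20   - explicit truncation toward zero
(41 / 2) | 0        // 20   - bitwise operations also truncate (see below)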

Most mathematical operations in JavaScript are floating point operations. However, there are exceptions. Bitwise operations only make sense for integral values, so the engine converts all operands to integers of a fixed number of bits before performing these operations:

  • & bitwise and
  • | bitwise or
  • ^ bitwise xor
  • ~ bitwise negate

The above operators are very similar to their counterparts in C++.
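A few quick examples you can try (small values, so the 32-bit conversion explored below does not come into play yet):

JavaScript
0b1100 & 0b1010   // 8  (0b1000)
0b1100 | 0b1010   // 14 (0b1110)
0b1100 ^ 0b1010   // 6  (0b0110)
~0                // -1 (every bit set, read back as a signed value)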

  • Determine the size (number of bits) of the integers which the bitwise operations act on. Are the integer values signed or unsigned? What happens if you use a number that does not fit in that many bits?
js »
  Reveal Answer

The bitwise operations act on 32-bit signed integers. If you use a number that needs more than 32 bits, it will be truncated to 32 bits. For example, copy the following case into the REPL above: 0x100000000 | 1

This acts just like casting an int64_t to an int32_t in C++. Keep in mind that if the 32nd (most significant) bit of the resulting value is a 1, the number will come out negative, since the result is interpreted as signed.
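A few concrete cases to paste into the REPL (these follow directly from the 32-bit signed conversion described above):

JavaScript
0x100000000 | 1   // 1           - 2^32 does not fit in 32 bits and truncates to 0
0xffffffff | 0    // -1          - all 32 bits set, read back as a signed value
0x80000000 | 0    // -2147483648 - only the 32nd (sign) bit is set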

🔗 Double Precision

64-bit double precision floats can represent a wide range of values. However, they are limited by the amount of information that can be stored in just 64 bits (they cannot represent every possible real number). How closely a given number can be represented is called precision. If you use a number that cannot be represented exactly, or you perform certain mathematical operations, you will experience a "loss of precision".

  • Try adding the number 1.1 to itself three times (1.1 + 1.1 + 1.1). You should get a strange-looking result.
js »
  Reveal Answer

Running 1.1 + 1.1 + 1.1 outputs 3.3000000000000003, which seems incorrect. This is because neither 1.1 nor 3.3 can be exactly represented by a 64-bit double precision float, so the rounding error becomes visible in the result.

Why is this the case? We need to examine how floating point numbers actually work. The computer represents and computes them using IEEE 754.

A floating-point number consists of three parts: a sign bit, an exponent, and a "significand" (also called the mantissa). The sign bit indicates whether the number is positive or negative. The exponent determines the magnitude or scale of the number. The significand holds the number's significant digits.

The value of a floating-point number is roughly calculated as:

(-1)^sign * 2^exponent * significand
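For example, 6.0 decomposes as sign 0, exponent 2, and significand 1.5, since (-1)^0 * 2^2 * 1.5 = 6.0 (the actual encoding stores a biased exponent and an implicit leading bit, but the idea is the same).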

You can read this nice blog post about floating-point numbers for more information on float mechanics.

Throughout this training, we will be using the website float.exposed to examine floating point numbers and their underlying bit pattern.

  • Put 3.3 into float.exposed. Notice how the value is not exact. Try incrementing the significand by setting the least significant bit to 1. Notice how it overshoots 3.3.

You will also notice that each floating point value has an equivalent 64-bit "bit-pattern", which is the actual set of bits stored in memory. For example, 1.1 is approximated by the double 1.10000000000000008882, which has the 64-bit bit-pattern 0x3ff199999999999a. (See https://float.exposed/0x3ff199999999999a)
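You can also recover this bit-pattern from within JavaScript. Here is a small sketch using the same typed-array trick that the rest of this exercise builds on (a Float64Array and a Uint32Array viewing one ArrayBuffer):

JavaScript
let buf = new ArrayBuffer(8);
let f64 = new Float64Array(buf);
let u32 = new Uint32Array(buf);

f64[0] = 1.1;
// Reassemble the high and low 32-bit words into the full 64-bit pattern
u32[1].toString(16) + u32[0].toString(16).padStart(8, "0")   // "3ff199999999999a"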

🔗 Max Integral Values

Doubles can represent a large range of integer values. However, since some of the 64 bits are used for the sign and exponent, only 52 bits (plus an implicit leading bit) are left for the significand, so you cannot represent the full range of 64-bit integers with 64-bit floats.

If you try to represent an integer outside the accepted range, it will lose precision and not be the expected value.

  • Try to find an integer which is equal to itself plus 1 in JavaScript (n == n + 1).
js »
  Reveal Answer

Any large enough integer will do this, but the smallest one is 9007199254740992 (2^53).

To find the exact bound, we can check the value of Number.MAX_SAFE_INTEGER, which tells us the largest integer we can represent before losing precision.
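You can check this directly in the REPL:

JavaScript
Number.MAX_SAFE_INTEGER                      // 9007199254740991 (2^53 - 1)
9007199254740992 === 9007199254740992 + 1    // true - precision is lost at 2^53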

🔗 Floats in Browser Exploitation

You may be asking why it's relevant to know so much about doubles for exploiting JavaScript engines. We will come to fully realize their importance later, but know that there will be several situations in browser exploitation where we can leak or control memory using 64-bit doubles.

When this happens, you must know how to reason about or control the underlying bit-pattern as described above. This means we will want to develop some helper functions for our exploits to easily convert between doubles and the actual 64-bit bit-pattern.

For the next part of the exercise, you will write a function that can perform this conversion. To pull this off we will make use of the array buffers and typed arrays mentioned earlier.

Array buffers have a nice feature where you can have multiple "typed views" into the same region of memory. We can set up an Int32Array and a Float64Array which both point to the same underlying buffer:

JavaScript
let memory = new ArrayBuffer(32); // Multiple of 8
let i32_arr = new Int32Array(memory);
let f64_arr = new Float64Array(memory);

Writing to one allows you to read the same bit-pattern from the other as a different type. You will use this to write your conversion functions.
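For example (a small sketch reusing the i32_arr and f64_arr views declared above), writing the double 1.0 and reading it back as two 32-bit integers exposes its bit-pattern 0x3ff0000000000000. Note that the low 32 bits sit at index 0, since x86 and ARM are little-endian:

JavaScript
f64_arr[0] = 1.0;           // bit-pattern 0x3ff0000000000000
i32_arr[0].toString(16)     // "0"        - low 32 bits
i32_arr[1].toString(16)     // "3ff00000" - high 32 bits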

  • Create a function to_float64(low_32_bits, high_32_bits) which takes the two 32-bit halves of a 64-bit integer and converts them into a 64-bit float. As a test, convert the number 0x4142434445464748 to a float.

Expected outcome:

JavaScript
> to_float64(0x45464748, 0x41424344) // Don't forget about endianness!!
2393736.541207228
js »
  Reveal Answer
JavaScript
function to_float64(low_32_bits, high_32_bits) {
  let memory = new ArrayBuffer(32); // Multiple of 8
  let i32_arr = new Int32Array(memory);
  let f64_arr = new Float64Array(memory);
  // Write the two 32-bit halves (low word first, since the machine is
  // little-endian), then read the same bytes back as a double.
  i32_arr[0] = low_32_bits;
  i32_arr[1] = high_32_bits;
  return f64_arr[0];
}
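You will frequently want the inverse as well: splitting a double back into its two 32-bit halves. A possible companion helper (the name from_float64 is just a suggestion, not part of the exercise) might look like this:

JavaScript
function from_float64(double_value) {
  let memory = new ArrayBuffer(32); // Multiple of 8
  let i32_arr = new Int32Array(memory);
  let f64_arr = new Float64Array(memory);
  f64_arr[0] = double_value;
  // Return [low 32 bits, high 32 bits], forced to unsigned with >>> 0
  return [i32_arr[0] >>> 0, i32_arr[1] >>> 0];
}

from_float64(2393736.541207228) // [ 1162233672, 1094861636 ] == [ 0x45464748, 0x41424344 ]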

🔗 Big Integers

Recently (circa 2020), JavaScript engines have added the BigInt feature, which allows arbitrary-precision integral values.

JavaScript
let bn = 0x4142434445464748n // BigInt literals must end in n
let foo = bn + 1n            // All operations must be with other BigInts
let normal = Number(foo)     // We can convert back to doubles using Number()

This is very useful for exploitation, as it lets us represent large 64-bit values easily. Before this feature existed, or on older targets, you will need to break 64-bit values into two 32-bit halves as we did above. You will see examples of this in some of the later exercises, but feel free to experiment with BigInts where applicable.
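As one illustration of why BigInts are convenient here, the conversion helper from earlier can be written to take a single 64-bit value directly. This is a sketch under the assumption that your target supports BigUint64Array (typically available in the same engines that support BigInt), and the function name is just a suggestion:

JavaScript
function bigint_to_float64(bits) {
  let memory = new ArrayBuffer(8);
  let u64_arr = new BigUint64Array(memory);
  let f64_arr = new Float64Array(memory);
  u64_arr[0] = bits;   // write the raw 64-bit pattern
  return f64_arr[0];   // read it back as a double
}

bigint_to_float64(0x4142434445464748n) // 2393736.541207228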