String#length method may fool you!

Careful!

This is the code I have:
package biz.tugay;
public class Main {
    public static void main(String[] args) {
        // U+1F428 (KOALA), encoded in UTF-16 as a surrogate pair.
        final String koala = "\uD83D\uDC28";
        System.out.println(koala);
        System.out.println("koala length: " + koala.length());
    }
}

And the output I see:

🐨
koala length: 2

So it is a nice koala emoji. Can you see it in your browser: 🐨 ?

I do not know which font IntelliJ uses to show this nice character. I was not able to display it in Windows Terminal.

It seems Monospaced is a logical font in Java, one that maps to the various physical fonts available on the system to print the characters. (source here)

Logical fonts are the five font families defined by the Java platform which must be supported by any Java runtime environment: Serif, SansSerif, Monospaced, Dialog, and DialogInput. These logical fonts are not actual font libraries. Instead, the logical font names are mapped to physical fonts by the Java runtime environment. The mapping is implementation and usually locale dependent, so the look and the metrics provided by them vary. Typically, each logical font name maps to several physical fonts in order to cover a large range of characters.
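If you are curious, you can ask this logical font at runtime whether it is able to render the character at all, using java.awt.Font#canDisplayUpTo. A minimal sketch (the class name FontCheck is mine, and the result will depend on the physical fonts installed on your machine):

import java.awt.Font;

public class FontCheck {
    public static void main(String[] args) {
        // "Monospaced" is a logical font name; the JRE maps it to physical fonts.
        final Font mono = new Font(Font.MONOSPACED, Font.PLAIN, 12);
        // canDisplayUpTo returns -1 if the font can display the whole string,
        // otherwise the index of the first character it cannot display.
        System.out.println(mono.canDisplayUpTo("\uD83D\uDC28"));
    }
}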

Anyway, what I want to show here is that Java will return 2 for the length of this single character. This is actually documented in the String#length docs:

Returns the length of this string. The length is equal to the number of Unicode code units in the string.
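So length() counts UTF-16 code units, not what a human would call characters. If you want the number of code points instead, String#codePointCount gives you that. A minimal sketch:

public class Main {
    public static void main(String[] args) {
        final String koala = "\uD83D\uDC28";
        // Two UTF-16 code units (a surrogate pair)...
        System.out.println(koala.length());                          // 2
        // ...but only a single Unicode code point.
        System.out.println(koala.codePointCount(0, koala.length())); // 1
    }
}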

They also provide us with this nice explanation in the Character class documentation:

The char data type (and therefore the value that a Character object encapsulates) are based on the original Unicode specification, which defined characters as fixed-width 16-bit entities. The Unicode Standard has since been changed to allow for characters whose representation requires more than 16 bits. The range of legal code points is now U+0000 to U+10FFFF, known as Unicode scalar value. (Refer to the definition of the U+n notation in the Unicode Standard.)

The set of characters from U+0000 to U+FFFF is sometimes referred to as the Basic Multilingual Plane (BMP). Characters whose code points are greater than U+FFFF are called supplementary characters. The Java platform uses the UTF-16 representation in char arrays and in the String and StringBuffer classes. In this representation, supplementary characters are represented as a pair of char values, the first from the high-surrogates range, (\uD800-\uDBFF), the second from the low-surrogates range (\uDC00-\uDFFF).

A char value, therefore, represents Basic Multilingual Plane (BMP) code points, including the surrogate code points, or code units of the UTF-16 encoding. An int value represents all Unicode code points, including supplementary code points. The lower (least significant) 21 bits of int are used to represent Unicode code points and the upper (most significant) 11 bits must be zero. Unless otherwise specified, the behavior with respect to supplementary characters and surrogate char values is as follows:

The methods that only accept a char value cannot support supplementary characters. They treat char values from the surrogate ranges as undefined characters. For example, Character.isLetter('\uD840') returns false, even though this specific value if followed by any low-surrogate value in a string would represent a letter.

The methods that accept an int value support all Unicode characters, including supplementary characters. For example, Character.isLetter(0x2F81A) returns true because the code point value represents a letter (a CJK ideograph).

In the Java SE API documentation, Unicode code point is used for character values in the range between U+0000 and U+10FFFF, and Unicode code unit is used for 16-bit char values that are code units of the UTF-16 encoding. For more information on Unicode terminology, refer to the Unicode Glossary.
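All of this is easy to verify yourself. Here is a small sketch showing the surrogate pair that makes up the koala, and the char versus int overloads of Character.isLetter from the quote above:

public class Main {
    public static void main(String[] args) {
        final String koala = "\uD83D\uDC28";
        // The two char values are a high and a low surrogate.
        System.out.println(Character.isHighSurrogate(koala.charAt(0))); // true
        System.out.println(Character.isLowSurrogate(koala.charAt(1)));  // true
        // Combined, they form the single code point U+1F428 (KOALA).
        final int codePoint = Character.toCodePoint(koala.charAt(0), koala.charAt(1));
        System.out.println(Integer.toHexString(codePoint)); // 1f428
        // The char overload cannot handle supplementary characters...
        System.out.println(Character.isLetter('\uD840'));   // false
        // ...but the int overload can.
        System.out.println(Character.isLetter(0x2F81A));    // true
    }
}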

But regardless, I find this behavior a bit weird. How about you?