Why does UTF-8 use the extra prefix bits?

I read the UTF-8 documentation and wonder why the data bits are laid out like this:
0x00000000 — 0x0000007F: 0xxxxxxx
0x00000080 — 0x000007FF: 110xxxxx 10xxxxxx
That part is clear: ASCII bytes start with 0, and for all other characters the number of leading 1 bits before the 0 in the first byte tells you how many bytes the character occupies.
The essence of the question is: why was it necessary to mark a 2-byte sequence with 110 when the value 10 would seemingly have been enough, and why does every following byte start with 10? If the bytes already sit next to each other in memory, why split them up with these prefixes at all?
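For illustration, a minimal sketch (assuming Python, and picking U+00E9 'é' as an example character from the second range) of how a code point is split across the 110xxxxx 10xxxxxx pattern:

```python
# Worked example (added for illustration): U+00E9 'é' lies in the
# 0x80..0x7FF range, so it is encoded as 110xxxxx 10xxxxxx.
cp = ord("é")                        # 0xE9 = 0b11101001
high, low = cp >> 6, cp & 0x3F       # split into 5 + 6 payload bits
first = 0b11000000 | high            # 110xxxxx -> 0xC3
second = 0b10000000 | low            # 10xxxxxx -> 0xA9
assert bytes([first, second]) == "é".encode("utf-8")
print(f"{first:08b} {second:08b}")   # 11000011 10101001
```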
June 10th 19 at 14:34
1 answer
June 10th 19 at 14:36
Solution
The prefixes let you recognize the role of each byte (sketched in code below):
0 - a single-byte (ASCII) character
10 - a non-first (continuation) byte of a multi-byte character
110 - the first byte of a two-byte character
1110 - the first byte of a three-byte character
11110 - the first byte of a four-byte character
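A minimal sketch of this table (assuming Python; the helper name byte_role is mine):

```python
def byte_role(b: int) -> str:
    """Classify a single byte of a UTF-8 stream by its leading bits."""
    if b < 0x80:      # 0xxxxxxx
        return "single-byte (ASCII) character"
    if b < 0xC0:      # 10xxxxxx
        return "continuation byte of a multi-byte character"
    if b < 0xE0:      # 110xxxxx
        return "first byte of a 2-byte character"
    if b < 0xF0:      # 1110xxxx
        return "first byte of a 3-byte character"
    if b < 0xF8:      # 11110xxx
        return "first byte of a 4-byte character"
    return "never valid in UTF-8"

for b in "é".encode("utf-8"):        # b'\xc3\xa9'
    print(f"{b:08b}: {byte_role(b)}")
```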
The original question (both parts together) boils down to: why do we need to mark the role of every byte, not just the first one? I believe it is needed so that the loss or corruption of a single byte does not destroy all of the text from that character to the end, but only the one character that contains that byte.
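A small sketch of that point (assuming Python's built-in UTF-8 decoder with errors="replace"): corrupt one byte and only the character containing it is lost, because the decoder resynchronizes at the next lead byte:

```python
data = bytearray("абв xyz".encode("utf-8"))    # Cyrillic letters take 2 bytes each
data[1] = 0x41                                 # clobber the continuation byte of 'а'
print(data.decode("utf-8", errors="replace"))  # '�Aбв xyz': only the first character is damaged
```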
Thanks, now I understand: it's really good protection in case the first byte is corrupted. - Rosamond.Walker commented on June 10th 19 at 14:39
