The BOM (byte order mark) was created to solve a UTF-16 problem (and also a UTF-32 one, although that format is rarely used for saving files).
Since each character in UTF-16 is made up of 2 bytes (or, in rarer cases, of a pair of 2-byte units), those bytes can be stored in two different orders: byte 1, byte 2; or byte 2, byte 1 (as for the order of the bits, at least, nobody argues...). Little-endian architectures therefore prefer UTF-16LE (LE = little endian), whose "byte 2, byte 1" order is the most natural for the processor, while big-endian architectures prefer UTF-16BE.
To tell the two kinds of UTF-16 apart, a BOM is placed at the beginning of the file: it is a character that cannot be confused with its byte-swapped "inverse", so reading it reveals the byte order of the rest of the file.
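To make this concrete, here is a minimal sketch in Python (the helper name `detect_utf16_order` is just for illustration, not any standard API): the BOM character U+FEFF comes out as the bytes FF FE in UTF-16LE and FE FF in UTF-16BE, so looking at the first two bytes is enough to tell them apart.

```python
def detect_utf16_order(data: bytes) -> str:
    """Guess the UTF-16 byte order from a leading BOM."""
    if data.startswith(b"\xff\xfe"):
        return "UTF-16LE"   # U+FEFF stored low byte first
    if data.startswith(b"\xfe\xff"):
        return "UTF-16BE"   # U+FEFF stored high byte first
    return "unknown (no BOM)"

text = "abc"
# The generic "utf-16" codec prepends a BOM in the platform's byte order;
# the explicit -le/-be codecs do not add one.
print(text.encode("utf-16"))                  # e.g. b'\xff\xfea\x00b\x00c\x00'
print(text.encode("utf-16-le"))               # b'a\x00b\x00c\x00'
print(detect_utf16_order(text.encode("utf-16")))
```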
UTF-8 was designed differently: its byte order does not depend on the computer's architecture. That is why many consider a BOM unnecessary in UTF-8 files.
The BOM, which takes 2 bytes in UTF-16, takes 3 bytes when encoded in UTF-8. So, despite the recommendation against using a BOM in UTF-8, some programs ended up adopting it anyway: when they open a file and find those 3 special bytes, they know it is probably a UTF-8 file (because it is very rare for a text to start with "ï»¿", which is how the BOM appears if it is read as cp1252).
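A hedged sketch of that heuristic, again in Python (the function name `sniff_utf8_bom` is hypothetical): the UTF-8 BOM is the byte sequence EF BB BF, and those same three bytes read as cp1252 display as "ï»¿".

```python
UTF8_BOM = b"\xef\xbb\xbf"   # U+FEFF encoded in UTF-8

def sniff_utf8_bom(path: str) -> bool:
    """Return True if the file starts with the 3-byte UTF-8 BOM."""
    with open(path, "rb") as f:
        return f.read(3) == UTF8_BOM

# The same three bytes interpreted as cp1252:
print(UTF8_BOM.decode("cp1252"))   # -> ï»¿
```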
Now, as to whether or not you should use a BOM in your files, the debate gets a bit philosophical, because there are pros and cons...
@DBX8 But I believe the answer to that question may also answer this one.
– Silvio Andorinha
ANSI is spelled with one I; ASCII with two. ;-)
– Sony Santos