The varchar data type stores non-Unicode characters; nvarchar, on the other hand, works with Unicode characters.
What you have to take into account is the amount of space each data type uses.
VARCHAR stores the declared length plus 2 bytes. For example, a VARCHAR(10) field will store a maximum of 10 bytes + 2 bytes. These two extra bytes are precisely on account of it being a variable-sized data type.
NVARCHAR takes up twice the space, plus the same 2 control bytes. So, in the same example, an NVARCHAR(10) field will take up a maximum of 20 bytes + 2 bytes.
This will make a big difference to your storage and should be taken into account.
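The sizes above can be sketched with a rough analogy in Python: a single-byte encoding (here Latin-1) standing in for a VARCHAR column with a Latin collation, and UTF-16 little-endian, which SQL Server uses for NVARCHAR. The 2-byte length prefix is added by hand, since Python strings do not carry one.

```python
# Hypothetical illustration of the storage math, not SQL Server itself.
text = "abcdefghij"  # 10 characters, as in the VARCHAR(10) example

# VARCHAR-like: 1 byte per character + 2 control bytes
varchar_bytes = len(text.encode("latin-1")) + 2

# NVARCHAR-like: 2 bytes per character + 2 control bytes
nvarchar_bytes = len(text.encode("utf-16-le")) + 2

print(varchar_bytes)   # 12
print(nvarchar_bytes)  # 22
```

For the same 10 characters, the Unicode version occupies almost twice as many bytes, which is exactly the storage difference described above.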
Source
Roughly speaking, in the CHAR and VARCHAR world, each character occupies 1 byte. A byte is a set of 8 bits, and considering all the positions of those bits (on and off) we can have 256 combinations (2^8). This means that one byte is capable of representing 256 different combinations. For the American alphabet this is more than enough, and for the Latin alphabet it is also more than enough.
The problem begins when we consider the Arabic, Asian, Greek alphabets, etc. In this case, if we consider all possible letters and characters, we exceed the 256 combinations that 1 byte can represent. NVARCHAR and NCHAR arose for these situations. In these data types, each character occupies 2 bytes. If one byte can express 256 combinations (2^8), two bytes can store 65,536 combinations (2^16). With this amount of combinations it is possible to represent any existing character; only the storage cost gets higher.
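The arithmetic and the 2-bytes-per-character point can be checked in a short Python sketch (an illustration only; UTF-16 can also use 4-byte surrogate pairs for characters beyond the 65,536 range):

```python
# One byte covers 2^8 code points; two bytes cover 2^16.
print(2 ** 8)   # 256
print(2 ** 16)  # 65536

# A Greek letter fits in 2 bytes in UTF-16, the encoding behind
# NCHAR/NVARCHAR, but has no slot among the 256 codes of a
# single-byte Latin encoding.
print(len("Ω".encode("utf-16-le")))  # 2
```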
If you use the CHAR and VARCHAR types and try to store certain characters, the available character universe is restricted to the collation you have chosen. If you try to store a character that is not covered by that collation, it will be converted to some approximate one. If you choose NCHAR and NVARCHAR, this limitation does not occur.
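The "converted to some approximate one" behavior can be mimicked in Python, using a single-byte Latin encoding as a stand-in for a restrictive collation (a hypothetical analogy; SQL Server's actual best-fit mapping depends on the collation):

```python
# A Greek omega does not exist in a single-byte Latin encoding.
try:
    "Ω".encode("latin-1")
except UnicodeEncodeError as e:
    print("not representable:", e.reason)

# With replacement enabled, the character is degraded to '?',
# analogous to losing data in a CHAR/VARCHAR column.
print("Ω".encode("latin-1", errors="replace"))  # b'?'
```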
Source
I suggest that this answer be updated and/or corrected. The answer in English (https://stackoverflow.com/questions/144283/what-is-the-difference-between-varchar-and-nvarchar) to this same question raises other important points that need to be taken into consideration, regardless of whether it is MySQL or SQL Server: 1) operating systems today already work with Unicode, so if I use varchar, there is conversion overhead to and from Unicode both when reading and when saving; 2) disk space is less costly than codepage/character problems; 3) etc. (see the answer, it's quite interesting).
– sdlins
I didn't see anything wrong or outdated in this answer, so I don't know what I could change. Some of these propositions are just opinions, so I stick with mine, though I respect different ones. The answer pointed out on the English Stack Overflow is simplistic and mistaken, despite all the votes, and above all it does not consider my context, which matters most to me. For other contexts, people should analyze what best fits them; using one standard solution for everyone is a mistake.
– Maniero