The algorithm @Bruno indicated in the comments (Levenshtein distance) is a good way to measure the similarity of two strings. There is a slightly more robust variant, called Damerau-Levenshtein distance, which also considers the transposition of two adjacent characters - that is, it takes into account common typing mistakes.
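To make the difference concrete, here is a minimal sketch of the optimal-string-alignment form of Damerau-Levenshtein in Python (the function name and the test strings are my own; the plain Levenshtein distance is the same code without the transposition branch):

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance counting insertions, deletions, substitutions,
    and transpositions of two adjacent characters."""
    # d[i][j] = distance between the prefixes a[:i] and b[:j]
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # the extra case that plain Levenshtein does not have:
            if (i > 1 and j > 1
                    and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("paris", "pairs"))    # 1: a single swap of "ri"
print(damerau_levenshtein("kitten", "sitting")) # 3: same as plain Levenshtein
```

Note that plain Levenshtein would score "paris" vs. "pairs" as 2 (two substitutions), while the transposition rule recognizes it as one swapped pair of letters.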
But I suggest rethinking the questionnaire design.
Fuzzy search and string-similarity calculations make for a poor user experience in this case. Let's say we used Levenshtein's algorithm and decided that the answer given by the user may differ from the answer in the database by at most 10 characters.
What if my answer differs by 11 characters? Is it necessarily wrong? Why is an answer with 10 different characters correct, while mine is not?
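This arbitrariness shows up in any threshold-based grader. A sketch using Python's standard `difflib` (the `accept` helper and the 0.8 cutoff are made up for illustration, not taken from any real grading system):

```python
from difflib import SequenceMatcher

def accept(answer: str, expected: str, threshold: float = 0.8) -> bool:
    """Accept an answer whose similarity ratio meets an arbitrary cutoff.

    ratio() is a similarity in [0, 1] rather than an edit count, but the
    problem is the same: the cutoff draws a hard line with no semantics.
    """
    ratio = SequenceMatcher(None, answer.lower(), expected.lower()).ratio()
    return ratio >= threshold

print(accept("colour", "color"))  # close spelling variant passes
print(accept("london", "paris"))  # unrelated answer fails
```

An answer sitting just below the cutoff is rejected even if a human would accept it, and nothing in the score explains why.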
Also, these algorithms only tell us how many characters differ - not which ones, or what the difference means. I can add 15 characters to an answer without changing its meaning - but I can also add a single comma and radically change it.
It is for these reasons that most computerized questionnaires are multiple choice - and questionnaires with open answers are usually analyzed manually by a human being.
You can use algorithms for fuzzy matching, for example Levenshtein Distance, etc. Something more robust: http://lucenenet.apache.org/
– bruno