How does Elasticsearch split documents into tokens?
Suppose I have web pages and I am storing them as documents in Elasticsearch. I want to understand: will Elasticsearch tokenize every field, the title as well as the content? Or do I have to designate a specific key in the document from which Elasticsearch will extract words and convert them into tokens? Also, if it is tokenizing every key, can you share a sample of the format in which the tokens are stored by Elasticsearch?
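
For context, here is a rough sketch of how I am indexing a page and how I imagine I could inspect the tokens with the `_analyze` API. The index name, field names, and the choice of the `standard` analyzer are just illustrative assumptions, not my real setup:

```python
# Minimal sketch: index one sample page, then ask Elasticsearch how it would
# tokenize the same text via the _analyze API.
# Index name, field names and the "standard" analyzer are illustrative assumptions.
import json
import requests

ES = "http://localhost:9200"

# Index a web page as a document with a title field and a content field.
doc = {
    "title": "Getting started with Elasticsearch",
    "content": "Elasticsearch stores documents and builds an inverted index from their text fields.",
}
requests.put(f"{ES}/webpages/_doc/1", json=doc).raise_for_status()

# Ask how a given piece of text would be tokenized by the standard analyzer.
resp = requests.post(
    f"{ES}/_analyze",
    json={"analyzer": "standard", "text": doc["title"]},
)
resp.raise_for_status()

# Each token comes back with its text, position and character offsets.
print(json.dumps(resp.json()["tokens"], indent=2))
```

Is the output of `_analyze` representative of what actually gets stored in the index for each field, or is the stored format different?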