Make tokenizer a property of the index, allowing different indexes to use different tokenizers #205 and #21 (usage sketch below).
Fix bug that prevented very large documents from being indexed #203, thanks Daniel Grießhaber.
Performance improvements when adding documents to the index #208, thanks Dougal Matthews.
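
A minimal usage sketch of the idea behind the tokenizer entry above, assuming a custom tokenizer is any function that maps a value to an array of token strings and that it can be assigned directly to the index's tokenizer property; `commaTokenizer` and the field names are made up for illustration:

```javascript
// Hypothetical tokenizer: splits comma-separated tag lists instead of
// splitting on whitespace.
var commaTokenizer = function (obj) {
  if (obj == null) return [];
  return obj.toString().split(',').map(function (token) {
    return token.trim().toLowerCase();
  });
};

var tagIndex = lunr(function () {
  this.ref('id');
  this.field('tags');
});

// Assumption: because the tokenizer is now a property of the index, each
// index instance can carry its own tokenizer without affecting others.
tagIndex.tokenizer = commaTokenizer;
```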
## 0.6.0
Ensure document ref property type is preserved when returning results #117, thanks Kyle Kirby.
Introduce lunr.generateStopWordFilter for generating a stop word filter from a provided list of stop words (see the usage sketch after this release's entries).
Replace array-like string access with ES3-compatible String.prototype.charAt, #186, thanks jkellerer.
Move empty string filtering from lunr.trimmer to lunr.Pipeline.prototype.run so that empty tokens do not enter the index, regardless of the trimmer being used, #178, #177 and #174.
Allow tokenization of arrays with null and non-string elements #172.
Parameterize the separator used by lunr.tokenizer, fixes #102.
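
A combined usage sketch for the lunr.generateStopWordFilter and tokenizer separator entries above. The word list, field names and regular expression are illustrative, and the name of the separator property is an assumption that may differ between lunr versions:

```javascript
// Build a stop word filter from a custom word list (the list is illustrative)
// and register it so a serialized index can refer to it by name.
var domainStopWords = ['foo', 'bar', 'baz'];
var domainStopWordFilter = lunr.generateStopWordFilter(domainStopWords);
lunr.Pipeline.registerFunction(domainStopWordFilter, 'domainStopWordFilter');

var idx = lunr(function () {
  this.ref('id');
  this.field('body');
  this.pipeline.add(domainStopWordFilter); // runs after the default pipeline steps
});

// Assumption: the tokenizer exposes its separator pattern as a property.
// Here tokens are split on whitespace, hyphens and underscores.
lunr.tokenizer.separator = /[\s\-_]+/;
```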
## 0.5.12
Implement lunr.stopWordFilter with an object instead of using lunr.SortedSet, #170, resulting in a performance boost for the text processing pipeline, thanks to Brian Vaughn (the lookup pattern is sketched below).
Ensure that lunr.trimmer does not introduce empty tokens into the index, #166, thanks to janeisklar.
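
The speed-up in the stop word filter entry comes from replacing a binary search over lunr.SortedSet with a constant-time object lookup. A generic sketch of that pattern, not lunr's actual implementation:

```javascript
// A plain object used as a lookup table: membership is a constant-time
// property access instead of a binary search over a sorted set.
var stopWordList = ['a', 'an', 'and', 'the'];
var stopWords = {};
stopWordList.forEach(function (word) { stopWords[word] = true; });

// Pipeline-style filter: returning the token keeps it, returning
// undefined drops it from the pipeline.
var stopWordFilter = function (token) {
  if (token && stopWords[token] !== true) return token;
};
```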
## 0.5.11
Fix bug when using the unminified build of lunr in some project builds, thanks to Alessio Michelini.
## 0.5.10
Fix bug in IDF calculation, thanks to weixsong for discovering the issue.
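
For context, IDF (inverse document frequency) down-weights terms that occur in many documents. A common smoothed form is shown below purely as an illustration of the quantity involved, not as lunr's exact code:

```javascript
// Smoothed inverse document frequency: N is the number of documents in the
// corpus, documentFrequency the number containing the term. The +1 avoids
// division by zero and the leading 1 keeps every weight positive.
var idf = function (documentFrequency, N) {
  return 1 + Math.log(N / (documentFrequency + 1));
};

idf(1, 1000);   // ≈ 7.2, a rare term gets a high weight
idf(900, 1000); // ≈ 1.1, a common term gets a low weight
```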