Maxim Leonovich
1 min read · Feb 21, 2020


Hi, Thomas!

Thank you for your response! Actually, we just use the plain multilingual Universal Sentence Encoder (USE) for our purposes. However, we also have a few BERT-based classifiers that we've fine-tuned on our dataset, and we're also considering (though we haven't done it yet) tuning the tokenizer to support several domain-specific terms.
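To illustrate why adding domain-specific terms to the tokenizer can matter: a WordPiece tokenizer splits out-of-vocabulary terms into several subword pieces, whereas a term added to the vocabulary becomes a single token. Below is a minimal pure-Python sketch of the greedy longest-match-first splitting idea (the vocabulary and the example word are made up for illustration). In practice, with Hugging Face transformers one would call `tokenizer.add_tokens([...])` followed by `model.resize_token_embeddings(len(tokenizer))` before fine-tuning.

```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first subword split, as WordPiece does.

    Continuation pieces are prefixed with '##'; if no piece matches,
    the whole word maps to '[UNK]'.
    """
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub  # mark word-internal pieces
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no subword covers this position
        tokens.append(piece)
        start = end
    return tokens


# Toy vocabulary: a domain term splits into three pieces...
vocab = {"electro", "##cardio", "##gram"}
print(wordpiece_tokenize("electrocardiogram", vocab))
# ...but becomes a single token once added to the vocabulary.
vocab.add("electrocardiogram")
print(wordpiece_tokenize("electrocardiogram", vocab))
```

The practical upshot: a frequent domain term that tokenizes into many pieces forces the model to reassemble its meaning from fragments, so giving it a dedicated token (with a freshly initialized embedding, trained during fine-tuning) can help.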
