The "Sacred Computer" studies and applies more tests of the INSAIT's finetuned Mistral-7B-instruct (BgGPT) on Google Colab, so everybody can experiment before the official release which is announced as 3.3.2024.
Is INSAIT's claim that the model is "comparable to ChatGPT" reliable, or is it just an advertising slogan, with the model actually closer to GPT-2? (Or are all similar LLMs like that, in which case it is not the fault of a single model.)
Donate cloud services if you wish to support me in conducting deeper and more thorough experiments. So far Colab has limitations: the 16 GB of the Tesla T4 (15 GB shown on the dashboard) are barely enough, and attempts to run summarization on "long" texts of about 500 characters failed with an out-of-memory error.
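Since a 7B model in fp16 already takes around 14 GB of weights, one common way to make it fit comfortably in the T4's ~15 GB and leave room for longer inputs is 4-bit quantization via bitsandbytes. Below is a minimal sketch, not taken from the original notebook; the Hugging Face repo id "INSAIT-Institute/BgGPT-7B-Instruct-v0.1" and the summarization prompt are assumptions.

```python
# Minimal sketch (assumed setup, not the original notebook): load BgGPT in 4-bit
# so the 7B model fits in a Colab T4 (~15 GB) with headroom for longer inputs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "INSAIT-Institute/BgGPT-7B-Instruct-v0.1"  # assumed HF repo id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # ~4-5 GB of weights instead of ~14 GB in fp16
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # the T4 has no bfloat16 support
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

# Summarization-style prompt; long inputs are the ones that hit OOM in fp16.
prompt = "Обобщи следния текст:\n" + "..."  # put the text to summarize here
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

With 4-bit weights the remaining memory goes to activations and the KV cache, so inputs of a few hundred characters should no longer trigger the out-of-memory error seen in the fp16 runs.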
More tests of BgGPT 7B in Google Colab: do the claims of comparability with ChatGPT ("in some tasks") hold up, or is it closer to GPT-2? (The Sacred Computer's GPT2-Medium dates from 2021.) What are its strengths and weaknesses? More follow-ups and further development of automated tests are coming, and possibly a thorough technical article.
https://github.com/Twenkid/GPT2-Bulgarian-Training-Tips-and-Tools/
https://github.com/Twenkid/BgGPT/