Microsoft Research has released UniLM, a set of pre-trained NLP models, on GitHub. UniLM handles unidirectional, bidirectional, and sequence-to-sequence prediction, and can be fine-tuned for both natural language understanding (NLU) and natural language generation (NLG) tasks. It outperforms BERT on NLP benchmarks such as the SQuAD 2.0 and CoQA question-answering tasks, and achieves state-of-the-art (SOTA) results on five NLG datasets.
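The key idea behind UniLM is that one shared Transformer can be trained for all three prediction styles simply by varying the self-attention mask: bidirectional (BERT-style), unidirectional (left-to-right), or sequence-to-sequence, where source tokens attend to each other freely while target tokens attend to the full source plus earlier target tokens. The sketch below builds these three mask variants; the function name and tensor layout are illustrative assumptions, not code from the microsoft/unilm repository.

```python
import torch

def unilm_attention_mask(src_len: int, tgt_len: int, mode: str) -> torch.Tensor:
    """Build an (L, L) self-attention mask, where L = src_len + tgt_len.

    mask[i, j] == 1 means position i may attend to position j.
    Illustrative sketch of UniLM's three masking modes only.
    """
    L = src_len + tgt_len
    if mode == "bidirectional":
        # BERT-style: every token attends to every token.
        mask = torch.ones(L, L)
    elif mode == "unidirectional":
        # GPT-style: each token attends to itself and earlier positions.
        mask = torch.tril(torch.ones(L, L))
    elif mode == "seq2seq":
        mask = torch.zeros(L, L)
        # All tokens (source and target) attend to the full source segment;
        # since target columns stay zero for source rows, the source never
        # peeks at the target.
        mask[:, :src_len] = 1
        # Target tokens attend causally within the target segment.
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))
    else:
        raise ValueError(f"unknown mode: {mode}")
    return mask

# Example: a 3-token source and 2-token target under the seq2seq mask.
print(unilm_attention_mask(3, 2, "seq2seq"))
```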
Also in this week's digest: Ubuntu 19.10 is finally released, focusing on Kubernetes edge features and AI-oriented development; Nvidia launches Aerial, a 5G signal-processing SDK, and expands its partnership with Red Hat; and the Sotabench site releases benchmarks that let users test SOTA-level models hosted on GitHub.